abstract argumentation is at the heart of many advanced argumentation systems and is concerned with finding jointly acceptable arguments by taking only their inter - relationships into account .efficient solvers for abstract argumentation are thus an important development , a fact that is also witnessed by a new competition which takes place in 2015 for the first time . to date , several approaches for implementing abstract argumentation exist , many of them following the so - called reduction - based ( see ) paradigm : hereby , existing efficient software which has originally been developed for other purposes is used .prominent examples for this approach are ( i ) the csp - based system conarg , ( ii ) sat - based approaches ( e.g. ) and ( iii ) systems which rely on answer - set programming ( asp ) ; see for a comprehensive survey .in fact , asp is particularly well - suited since asp systems by default enumerate all solutions of a given program , thus enabling the enumeration of extensions of an abstract argumentation framework in an easy manner .moreover , disjunctive asp is capable of expressing problems being even complete for the 2nd level of the polynomial hierarchy .in fact , several semantics for abstract argumentation like preferred , semi - stable , or stage are of this high complexity .one particular candidate for an asp reduction - based system is aspartix . here ,a fixed program for each semantics is provided and the argumentation framework under consideration is just added as an input - database . the program together with the input - database is then handed over to an asp system of choice in order to calculate the extensions .this makes the aspartix approach easy to adapt and an appealing rapid - prototyping method .the proposed encodings in aspartix for the high - complexity semantics mentioned above come , however , with a certain caveat .this stems from the fact that encodings for such complex programs have to follow a certain saturation pattern , where restricted use of cyclic negation has to be taken care of ( we refer to for a detailed discussion ) .the original encodings followed the definition of the semantics quite closely and thus resulted in quite complex and tricky loop - techniques which are a known feature for asp experts , but hard to follow for asp laymen .moreover , experiments in other domains indicated that such loops also potentially lead to performance bottlenecks . in this work, we thus aim for new and simpler encodings for the three semantics of preferred , semi - stable , and stage extensions . to this end , we provide some alternative characterizations for these semantics and design our new encodings along these characterizations in such a way that costly loops are avoided . 
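As a concrete illustration of the reduction-based workflow described above (a fixed encoding per semantics plus the AF as an input database, handed to an ASP solver), the following minimal Python sketch writes an AF as arg/1 and att/2 facts and calls clingo in enumeration mode. The encoding file name and the helper function are hypothetical, not part of ASPARTIX itself; the sketch only assumes a clingo binary on the PATH, and passing the trailing argument 0 asks the solver to enumerate all answer sets, i.e. all extensions.
....
import subprocess
import tempfile

def enumerate_extensions(arguments, attacks, encoding="preferred.lp"):
    """Hand a fixed encoding plus the AF-as-facts database to clingo
    and return its raw output (one answer set per extension)."""
    # "preferred.lp" is a placeholder name for an encoding file.
    facts = "".join("arg({}).\n".format(a) for a in arguments)
    facts += "".join("att({},{}).\n".format(a, b) for a, b in attacks)
    with tempfile.NamedTemporaryFile("w", suffix=".lp", delete=False) as f:
        f.write(facts)
        af_file = f.name
    # "0" = enumerate all answer sets instead of stopping at the first one.
    result = subprocess.run(["clingo", encoding, af_file, "0"],
                            capture_output=True, text=True)
    return result.stdout

# A small AF: a attacks b, b attacks c.
print(enumerate_extensions(["a", "b", "c"], [("a", "b"), ("b", "c")]))
....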
instead we make use of the asp language feature of conditional literals in disjunction .moreover , we perform exhaustive experimental evaluation against the original aspartix - encodings , the conarg system , and another asp - variant which makes use of the asp front - end _ metasp _ , where the required maximization is handled via meta - programming .our results show that the new asp encodings not only outperform the previous variants , but also makes aspartix more powerful than conarg .the novel encodings together with the benchmark instances are available under http://dbai.tuwien.ac.at / research / project / argumentation/% systempage/#conditional[http://dbai.tuwien.ac.at / research / project / argumentation/% systempage/#conditional ] .[ [ acknowledgements ] ] acknowledgements + + + + + + + + + + + + + + + + this work has been funded by the austrian science fund ( fwf ) through projects y698 and i1102 , by the german research foundation ( dfg ) through project ho 1294/11 - 1 , and by academy of finland through grants 251170 coin and 284591 .first , we recall the main formal ingredients for argumentation frameworks and survey relevant complexity results ( see also ) . [ def : af ]an _ argumentation framework ( af ) _ is a pair where is a set of arguments and is the attack relation .the pair means that attacks .an argument is _ defended _ by a set if , for each such that , there exists a such that .we define the _ range of _ ( w.r.t . ) as .semantics for argumentation frameworks are given via a function which assigns to each af a set of extensions .we shall consider here for the functions , , , , and which stand for stable , admissible , preferred , stage , and semi - stable semantics respectively .[ def : semantics ] let be an af .a set is _ conflict - free ( in ) _ , if there are no , such that . denotes the collection of conflict - free sets of . for a conflict - free set , it holds that * , if ; * , if each is defended by ; * , if and there is no with ; * , if and there is no with ; * , if there is no in , such that .[ example : af ] consider the af with and , , , , , , , , and the graph representation of : node[arg](a) + + ( 1,0 ) node[arg , inner sep=3](b) + + ( 1,.4 ) node[arg](c) + + ( 0,-0.8 ) node[arg , inner sep=2.8](d) + + ( 1,0.4 ) node[arg](e) + + ( 1,0 ) node[arg , inner sep=1.8](f) ; ; ( a ) edge ( b ) ( c ) edge ( b ) ( b ) edge ( d ) ( d ) edge ( e ) ( c ) edge ( e ) ( e ) edge ( f ) ; ( c ) edge ( d ) ( d ) edge ( c ) ; we have .the admissible sets of are , , , , , , , , and , .we recall that each af possesses at least one preferred , semi - stable , and stage extension , while might be empty .however , it is well known that implies as also seen in the above example .next , we provide some alternative characterisations for the semantics of our interest .they will serve as the basis of our encodings .the alternative characterisation for preferred extensions relies on the following idea .an admissible set is preferred , if each other admissible set ( which is not a subset of ) is in conflict with .[ prop:2 ] let be an af and be admissible in .then , if and only if , for each such that , .let and assume there exists an admissible ( in ) set , such that .it is well known ( see , e.g. , lemma 1 ) that if two sets defend themselves in an af , then also defends itself in .it follows that and by assumption .thus , . for the other direction , let but .hence , there exists an such that .clearly , but .we turn to semi - stable and stage semantics . 
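For reference, the semantics of Definition [def:semantics] can be stated compactly in the usual Dung-style notation. For an AF $F=(A,R)$ and $S \subseteq A$, with $S_R^{+}$ denoting the range of $S$:
\[
\begin{aligned}
S_R^{+} &= S \cup \{\, a \in A \mid \exists\, b \in S : (b,a) \in R \,\},\\
S \in \mathit{cf}(F) &\iff \text{there are no } a, b \in S \text{ with } (a,b) \in R,\\
S \in \mathit{stb}(F) &\iff S \in \mathit{cf}(F) \text{ and } S_R^{+} = A,\\
S \in \mathit{adm}(F) &\iff S \in \mathit{cf}(F) \text{ and each } a \in S \text{ is defended by } S,\\
S \in \mathit{prf}(F) &\iff S \in \mathit{adm}(F) \text{ and there is no } T \in \mathit{adm}(F) \text{ with } S \subset T,\\
S \in \mathit{stg}(F) &\iff S \in \mathit{cf}(F) \text{ and there is no } T \in \mathit{cf}(F) \text{ with } S_R^{+} \subset T_R^{+},\\
S \in \mathit{sem}(F) &\iff S \in \mathit{adm}(F) \text{ and there is no } T \in \mathit{adm}(F) \text{ with } S_R^{+} \subset T_R^{+}.
\end{aligned}
\]
In particular, whenever $\mathit{stb}(F) \neq \emptyset$ it holds that $\mathit{stb}(F) = \mathit{sem}(F) = \mathit{stg}(F)$, which is the well-known fact recalled above.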
in order to verifywhether a candidate extension is a stage ( resp .semi - stable ) extension of an af , we check whether for any set such that there is no conflict - free ( resp .admissible ) set such that .we also show that is sufficient to check this for minimal such sets .observe that the above check is trivially true if is already stable , mirroring the observation that whenever .let be an af and .a _ cover _ of in is any such that .the set of covers of in is denoted by .[ prop:1 ] let be an af and ( resp .the following propositions are equivalent : ( 1 ) is a stage ( resp . semi - stable ) extension of ; ( 2 ) for each , there is no such that ( resp . ; ( 3 ) for each with , there is no , such that ( resp . ) .we give the proof for stage extensions .the result for semi - stable proceeds analogously .( 1)(3 ) : suppose there is an with , such that some is conflict - free in . by definition ,hence , .( 2)(1 ) : suppose .thus there exists with .let .it follows that .( 3)(2 ) is clear . finally , we turn to the complexity of reasoning in afs for two major decision problems . for a given af and an argument , credulous reasoning under the problem of deciding whether there exists an s.t .skeptical acceptance under is the problem of deciding whether for all it holds that .credulous reasoning for preferred semantics is -complete , while credulous reasoning for semi - stable and stage semantics is -complete . for preferred ,semi - stable , and stage semantics skeptical reasoning is -complete .we give an overview of the syntax and semantics of disjunctive logic programs under the answer - sets semantics .we fix a countable set of _ ( domain ) elements _ , also called _ constants _ ; and suppose a total order over the domain elements .atom _ is an expression , where is a _ predicate _ of arity and each is either a variable or an element from .an atom is _ ground _ if it is free of variables . denotes the set of all ground atoms over . a _ ( disjunctive ) rule _ is of the form with , , where are literals , and `` '' stands for _default negation_. the _ head _ of is the set = and the _ body _ of is .furthermore , = and = .a rule is _ normal _ if and a _ constraint _ if .a rule is _ safe _ if each variable in occurs in .a rule is _ ground _ if no variable occurs in .a _ fact _ is a ground rule without disjunction and empty body. an _ ( input ) database _ is a set of facts .a program is a finite set of disjunctive rules . for a program and an input database , we often write instead of . if each rule in a program is normal ( resp .ground ) , we call the program normal ( resp . ground ) . for any program , let be the set of all constants appearing in . is the set of rules obtained by applying , to each rule , all possible substitutions from the variables in to elements of .an _ interpretation _ _ satisfies _ a ground rule iff whenever and . satisfies a ground program , if each is satisfied by . a non - ground rule ( resp ., a program ) is satisfied by an interpretation iff satisfies all groundings of ( resp . , ) . is an _ answer set _ of iff it is a subset - minimal set satisfying the _ gelfond - lifschitz reduct _ . for a program , we denote the set of its answer sets by .modern asp solvers offer additional language features . among themwe make use of the _ conditional literal _ . in the head of a disjunctive ruleliterals may have conditions , e.g. consider the head of rule `` '' . intuitively , this represents a head of disjunctions of atoms where also is true . 
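Before turning to the encodings, note that the definitions above are easy to implement naively. The following Python sketch (a brute-force reference usable only for very small AFs, and independent of the ASP encodings discussed in this paper) enumerates preferred, semi-stable, and stage extensions directly and can serve as a ground-truth oracle when testing encodings; on AFs with more than a couple of dozen arguments it is hopeless, which is precisely why the reduction to ASP matters.
....
from itertools import combinations

def subsets(elements):
    elements = list(elements)
    for r in range(len(elements) + 1):
        for combo in combinations(elements, r):
            yield set(combo)

def conflict_free(S, R):
    return not any((a, b) in R for a in S for b in S)

def defended(a, S, A, R):
    # every attacker of a is itself attacked by some member of S
    return all(any((c, b) in R for c in S) for b in A if (b, a) in R)

def admissible(S, A, R):
    return conflict_free(S, R) and all(defended(a, S, A, R) for a in S)

def rng(S, R):
    # range of S: S together with every argument attacked by S
    return S | {b for (a, b) in R if a in S}

def maximal_wrt(candidates, key):
    keyed = [(S, key(S)) for S in candidates]
    return [S for S, v in keyed if not any(v < w for _, w in keyed)]

def preferred(A, R):
    return maximal_wrt([S for S in subsets(A) if admissible(S, A, R)],
                       key=lambda S: S)

def semi_stable(A, R):
    return maximal_wrt([S for S in subsets(A) if admissible(S, A, R)],
                       key=lambda S: rng(S, R))

def stage(A, R):
    return maximal_wrt([S for S in subsets(A) if conflict_free(S, R)],
                       key=lambda S: rng(S, R))

A = {"a", "b", "c", "d"}
R = {("a", "b"), ("b", "c"), ("c", "d"), ("d", "c")}
print(preferred(A, R), semi_stable(A, R), stage(A, R))
....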
for our novel encodingswe utilize basic encodings for afs , conflict - free sets , and admissible sets from .an af is represented as a set of facts .let be an af .we define . in the following definition we first formalize the correspondence between an extension , as subset of arguments , and an answer set of an asp encoding ;then we extend it to the one between sets of extensions and answer sets respectively .[ def : correspondence ] let be a collection of sets of domain elements and let be a collection of sets of ground atoms .we say that and correspond to each other , in symbols , iff .we say that and correspond to each other , in symbols , iff ( i ) for each , there exists an , such that ; and ( ii ) for each , there exists an , such that .it will be convenient to use the following notation and result later in .let be sets of ground atoms .we say that and are equivalent , in symbols , iff .lmalmaequiv [ th : equiv ] let , and .if and , then .in we see the asp encoding for conflict - free sets , while shows defense of arguments .the encoding for admissible sets is given by .the following has been proven in ( * ? ? ?* proposition 3.2 ) .[ prop : partition ] for any af , and any , is a partition of . ....in(x ) : - arg(x ) , not out(x).\label{line : cf - r1} out(x ) : - arg(x ) , not in(x).\label{line : cf - r2} : - att(x , y ) , in(x ) , in(y).\label{line : cf-3} .... ....defeated(x ) : - in(y ) , att(y , x).\label{line : adm - r4} undefended(x ) : - att(y , x ) , not defeated(y).\label{line : adm - r5} : - in(x ) , undefended(x).\label{line : adm - r6} .... correctness of the encodings and was proven in .[ th : cf ] [ th : adm ] for any af , we have ( i ) , and ( ii ) . next , we characterize the encoding ( ) , which , given a module computing some extension ( via ) of an af , returns its range ( via ) and also collects the arguments not contained in the range .we indicate via that is not stable , i.e. . ....range(x ) : - in(x).\label{line : rng - r1} range(y ) : - in(x),att(x , y).\label{line : rng - r2} out_of_range(x ) : - not range(x),arg(x).\label{line : rng - r3} unstable : - out_of_range(x),arg(x).\label{line : rng - r4} .... lmaproprange [ prop:5 ] let be an af , and be a program not containing the predicates , and .let and s.t . . furthermore let and then , , if and only if . ....eq_upto(y ) : - inf(y ) , in(y ) , inn(y ) .eq_upto(y ) : - inf(y ) , out(y ) , outn(y ) .eq_upto(y ) : - succ(z , y ) , in(y ) , inn(y ) , eq_upto(z ) .eq_upto(y ) : - succ(z , y ) , out(y ) , outn(y ) , eq_upto(z ) .eq : - sup(y ) , eq_upto(y ) . .... the preferred , semi - stable and stage semantics utilize the so - called _saturation technique_. we sketch here the basic ideas .intuitively , in the saturation technique encoding for preferred semantics we make a first guess for a set of arguments in the framework , and then we verify if this set is admissible ( via module ) . to verifyif this set is also subset maximal admissible , a second guess is carried out via a disjunctive rule .if this second guess corresponds to an admissible set that is a proper superset of the first one , then the first one can not be a preferred extension . 
using the saturation techniquenow ensures that if all second guesses `` fail '' to be a strictly larger admissible set of the first guess , then there is one answer - set corresponding to this preferred extension .usage of default negation within the saturation technique for the second guess is restricted , and thus a loop - style encoding is employed that checks if the second guess is admissible and a proper superset of the first guess .roughly , a loop construct in asp checks a certain property for the least element in a set ( here we use the predicate ) , and then checks this property `` iteratively '' for each ( immediate ) successor ( via predicate ) . if the property holds for the greatest element ( ) , it holds for all elements . inwe illustrate loop encodings , where we see a partial asp encoding used for preferred semantics in that derives * eq * if the first and second guesses are equal , i.e. the predicates corresponding to the guesses via , resp . , and , resp . , are true for the same constants . another variant of asp encodings for preferred , semi - stable and stage semantics is developed by .there so - called meta - asp encodings are used , which allow for minimizing statements w.r.t .subset inclusion directly in the asp language .for instance , can then be augmented with a minimizing statement on the predicate * out * , to achieve an encoding of preferred semantics .here we present our new encodings for preferred , semi - stable , and stage semantics via the novel characterizations .the encoding for preferred semantics is given by , where is provided in .we first give the intuition of the program .a candidate for being preferred in an af is computed by the program via the predicate , and is already known admissible .if all arguments in are contained in we are done . ] .otherwise , the remainder of the program ( ) is used to check whether there exists a set such that and not in conflict with .we start to build by guessing some argument not contained in ( ) and then in we repeatedly add further arguments to unless the set defends itself ( otherwise we eventually derive ) .then , we check whether is conflict - free ( ) and is not in conflict with ( ) . if we are able to reach this point without deriving , then the candidate can not be an answer - set ( ) . this is in line with proposition [ prop:2 ] , which states that in this case is not preferred . by inspecting we also see important differences w.r.t . the encodings for preferred semantics of .in our new encodings , the `` second guess '' via predicate is constructed through conditional disjunction instead of simple disjunction .usage of the former allows to construct the witness set already with defense of arguments in mind .furthermore loops , such as the one shown in that checks if the second guess is equal to first one or a loop construct that checks if every argument is defended , can be avoided , since these checks are partially incorporated into of and into simpler further checks ..... nontrivial : - out(x).\label{line : pr - r01} witness(x):out(x ) : - nontrivial.\label{line : pr - r02} spoil | witness(z):att(z , y ) : - witness(x ) , att(y , x).\label{line : pr - r03} spoil : - witness(x ) , witness(y ) , att(x , y).\label{line : pr - r4} spoil : - in(x ) , witness(y ) , att(x , y).\label{line : pr - r5} witness(x ) : - spoil , arg(x).\label{line : pr - r6} : - not spoil , nontrivial.\label{line : pr - r7} .... 
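As a usage note for the listing above: the complete encoding for preferred semantics is obtained by joining the conflict-free/admissibility module with these rules and the AF facts, and running the solver in enumeration mode. The short Python wrapper below sketches this; the file names are hypothetical, and in executable ASP syntax the variables shown as x, y, z in the listing are written in upper case (lower-case identifiers denote constants).
....
import subprocess

# Hypothetical file layout: adm.lp holds the conflict-free and admissibility
# rules, pref_new.lp holds the rules of the listing above (variables in
# upper case), af.lp holds the arg/1 and att/2 facts of the framework.
out = subprocess.run(["clingo", "adm.lp", "pref_new.lp", "af.lp", "0"],
                     capture_output=True, text=True).stdout

# clingo prints each model on the line following an "Answer: k" header;
# the in/1 atoms of that line spell out one preferred extension.
lines = out.splitlines()
for i, line in enumerate(lines):
    if line.startswith("Answer:"):
        print([atom for atom in lines[i + 1].split() if atom.startswith("in(")])
....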
correctness of this new encoding is stated and proved in the following proposition .[ prop : pref ] for any af , we have . according to , we have to prove ( i ) and ( ii ) . with line numbers we refer here to the asp encoding shown in .we employ the _ splitting theorem _ in order to get a characterisation of , in which the sub - programs and are considered separately .the splitting set is , , , , , , and we obtain [ [ proof - i . ] ] proof ( i ) .+ + + + + + + + + + we prove that each preferred extension has a corresponding answer - set . fromwe know that if , for some .moreover implies , hence by there is s.t . . in the following we distinguish between two complementary cases . in case ,the set is the only preferred one , since it is trivially admissible and it can not be contained in another set of arguments .we show is a subset - minimal model of .the subset - minimality is evident .then , for any by , hence satisfies the rule at .since , satisfies the rules at lines [ line : pr - r02 ] , and [ line : pr - r7 ] .every other rule is satisfied because for any . in case we can build an interpretation and prove that is an answer - set by contraposition , i.e. if there is an which satisfies , then .we define .we have since .the set satisfies ( got from by just removing the rule at ) , as and contains all the heads of the rules in .notice that guarantees that the head of the rule at is non - empty .now we describe the necessary shape of , in order to prove the main assertion next . must contain because of the rule at .indeed for some , since with and ( since ) , which implies the existence of ( we can not have simultaneously , and ) , which implies by .we have , otherwise also would be in ( because of the rule at ) , making equal to , but they are different by assumption .now we show that , given , it is possible to find a set s.t . and , which implies by .we define , and we show all the required properties : , otherwise we would have two arguments attacking each other , meaning , which implies and for some rule in the grounding of the rule at , since . , otherwise it would be possible to find two atoms [ in this proof , the square brackets are used to point out an immediate implication of the statement preceding them .usually the statement is about the framework and the implication about an interpretation , or the other way around . ] and [ for which there is no [ s.t . [ , thus violating the rule at , since . . indeedif we assume , then for every we have ( by definition of ) , which corresponds to ( ) , implying ( by ) , making it impossible for to satisfy the rule at , since . .the sets and are conflict - free , so we have to show that there can not be attack relations between the two sets : an argument can not attack an argument , otherwise we would have , , , which implies and for some rule in the grounding of the rule at , since ; an argument can not attack an argument , otherwise an argument should attack by admissibility of , thus violating the previous point . [ [ proof - ii . ] ] proof ( ii ) . + + + + + + + + + + + we prove that each corresponds to an . fromwe see that only if for some .we have , because , and does not have any additional ground atom , since does not appear in the head of any rule of . bythere exists s.t . , hence by .we show that is also preferred in , by distinguishing between two complementary cases .: we have for any , otherwise the rule at would be violated . 
by proposition[prop : partition ] this implies for every , and the same is true for ( ) , which we know to be admissible .hence , and .: we prove that is preferred by contraposition , i.e. if then is not a subset - minimal model of .we have that must have a clear shape in order to satisfy .in particular . then because of the rule at hence , for each because of the rule at .summing up we have .finally we show that , since we are able to build an interpretation satisfying the reduct .we remind that means that there exists s.t .we use to build the interpretation .we have , because it does not contain and . in the followingwe show that is a model of the reduct , because it contains and it satisfies each rule in . satisfies the rule at , because there exists s.t . , for some ( the element exists because is a proper superset of ) .if , then , then ( ) , then ( ) , then ( by ) , then ( ) . summing up , if , then , and by definition . ]since is admissible , for each [ ] attacked by [ there exists [ ] attacking [ . hence satisfies the rule at , even though . does not contain the body of any rule in the grounding of the rule at , otherwise would not be conflict free . does not contain the body of any rule in the grounding of the rule at , otherwise would not be conflict free , since . does not contain the body of any rule in the grounding of the rule at , because it does not contain . [ [ semi - stable - semantics ] ] semi - stable semantics + + + + + + + + + + + + + + + + + + + + + the encoding for semi - stable semantics is given by , with shown in .we first give the intuition .a candidate for being semi - stable is computed by the program via the predicate and is known admissible .the module computes the range and derives iff the extension is not stable . if is stable , we are done .otherwise the remainder of the program is used to check whether an admissible cover of a superset of the range exists .starting from ( ) , a superset is achieved by adding at least one element out of it ( ) . then a cover is found ( ) , which is admissible ( ) . if we are able to reach this point without deriving ( that is always a possibility for satisfying the constraints ) , then the candidate can not be an answer - set ( ) .this is in line with , which states that in this case is not semi - stable .here we state the correctness of the encoding , a full proof is given in the online appendix ( appendix a ) . ....larger_range(x):out_of_range(x ) : - unstable.\label{line : sm - r1} larger_range(x ) : - range(x ) , unstable.\label{line : sm - r2} witness(x ) | witness(z):att(z , x ) : - larger_range(x ) , unstable.\label{line : sm - r3} spoil : - witness(x ) , witness(y ) , att(x , y ) , unstable.\label{line : sm - r4} spoil | witness(z):att(z , y ) : - witness(x ) , att(y , x ) , unstable.\label{line : sm - r5} witness(x ) : - spoil , arg(x ) , unstable.\label{line : sm - r6} larger_range(x ) : - spoil , arg(x ) , unstable.\label{line : sm - r7} : - not spoil , unstable.\label{line : sm - r8} .... proppropsemicorrectness [ prop : semi ] for any af , we have . 
[[ stage - semantics ] ] stage semantics + + + + + + + + + + + + + + + the encoding for stage semantics is given by , where is the rule at of .the only differences w.r.t .the encoding for semi - stable semantics are : ( i ) it employs instead of , thus the candidate sets are only conflict - free ; and ( ii ) it lacks the rule at , hence it considers all the conflict - free covers of the candidate set , which is still in line with .a proof sketch for the forthcoming correctness result is given in the online appendix ( appendix a ) .proppropstagecorrectness for any af , we have .we tested the novel encodings ( new ) extensively and compared them to the original ( original ) and metasp ( meta ) encodings as well as to the system conarg .for the novel and original encodings we used _clingo 4.4 _ and for the metasp encodings we used _gringo3.0.5/clasp3.1.1 _ all from the potassco group . as benchmarks, we considered a collection of frameworks which have been used by different colleagues for testing before consisting of structured and randomly generated afs , resulting in 4972 frameworks .in particular we used parts of the instances federico cerutti provided to us which have been generated towards an increasing number of sccs .further benchmarks were used to test the system _ dynpartix _ and we included the instances provided by the iccma 2015 organizers .the full set is available at http://dbai.tuwien.ac.at/research/project/argumentation/systempage/#conditional . for each frameworkthe task is to enumerate all solutions .the computation has been performed on an intel xeon e5 - 2670 running at 2.6ghz . from the 16 available cores we used only every fourth core to allow a better utilization of the cpu s cache .we applied a 10 minutes timeout , allowing to use at most 6.5 gb of main memory .it turns out that for each semantics the new encodings significantly outperform the original ones as well as the system conarg .furthermore , there is a clear improvement to the metasp encodings , as illustrated in fig .[ fig : runtimes ] which shows the cactus plots of the required runtime to solve frameworks ( x - axis ) with the respective timeout ( y - axis ) for the three discussed semantics .while for preferred and semi - stable semantics the novel encodings are able to solve more than 4700 instances ( out of 4972 ) , one can observe a different trend for stage semantics .there , the new encodings return the best result with 2501 solved instances .table [ tab : summary ] gives a summary of the test results , where _ usc _ denotes the unique solver contribution , i.e. the number of afs which could only be solved by the particular solver , _ solved _ gives the number of solved instances by the solver , and _ med _ is the median of the computation time of the solver ..summary of test results.[tab : summary ] [ cols="<,<,<,>,<,<,<,>,<,<,<,>",options="header " , ] interestingly , conarg is able to solve 60 ( resp .50 ) instances for preferred ( resp .semi - stable ) semantics which are not solvable by the other systems . however , the novel encodings are able to uniquely solve 101 ( resp .82 ) instances for preferred ( resp .stage ) semantics .the original encodings have no _ unique solver contribution _ for all of the considered semantics , thus it is save to replace them with the new encodings .the entries for the median also show that all the novel encodings perform much faster than the other systems , except for semi - stable where conarg has the lowest median . 
however , here conarg is able to solve about 1300 instances less than the novel encodings .another interesting observation is that the grounding size of all new encodings is significantly smaller than of both the original and the metasp encodings .in this work , we have developed novel asp encodings for computationally challenging problems arising in abstract argumentation .our new encodings for preferred , semi - stable , and stage semantics avoid complicated loop constructs present in previous encodings .in addition to being more succinct , our empirical evaluation showed that a significant performance boost was achieved compared to the earlier asp encodings , and that our encodings outperform the state - of - the - art system conarg . from an asp perspective ,our results indicate that loops in saturation encodings ( as used in the previous encodings in ) are a severe performance bottleneck which should be avoided . in future work , we plan to compare our results also with the systems cegartix and argsemsat .furthermore , we also aim for finding better asp encodings for the ideal and eager semantics .constraint - based computational framework for argumentation systems . in _ proceedings of the 23rdieee international conference on tools with artificial intelligence ( ictai 2011 ) _ , t. m. khoshgoftaar and x. h. zhu , eds .ieee computer society press , 605612 . ,giacomin , m. , and vallati , m. 2014 . solving argumentation problems using sat . in _ proceedings of the 5th international conference on computational models of argument ( comma 2014 ) _ ,s. parsons , n. oren , c. reed , and f. cerutti , eds .faia , vol . 266 .ios press , 455456 . , oren , n. , strass , h. , thimm , m. , and vallati , m. 2014 .a benchmark framework for a computational argumentation competition . in _ proceedings of the 5th international conference on computational models of argument ( comma 2014 ) _ , s. parsons , n. oren , c. reed , and f. cerutti , eds .faia , vol . 266 .ios press , 459460 . ,dvok , w. , linsbichler , t. , and woltran , s. 2014 .characteristics of multiple viewpoints in abstract argumentation . in _ proceedings of the 14th international conference on principles of knowledge representation and reasoning ( kr 2014 ) _ , c. baral , g. de giacomo , and t. eiter , eds .aaai press , 7281 . ,gaggl , s. a. , wallner , j. p. , and woltran , s. 2013 . making use of advances in answer - set programming for abstract argumentation systems . in _ proceedings of the 19th international conference on applications of declarative programming and knowledge management ( inap 2011 ) , revised selected papers _, h. tompits , s. abreu , j. oetsch , j. phrer , d. seipel , m. umeda , and a. wolf , eds .lnai , vol .springer , 114133 .\2011 . argumentation and answer set programming . in _ logic programming , knowledge representation , and nonmonotonicreasoning : essays in honor of michael gelfond _ ,m. balduccini and t. c. son , eds .lncs , vol .springer , 164180 . ,cerutti , f. , and giacomin , m. 2014 .argumentation frameworks features : an initial study . in _ proceedings of the 21st european conference on artificial intelligence ( ecai 2014 ) _, t. schaub , g. friedrich , and b. osullivan , eds .faia , vol .ios press , 11171118 .
The design of efficient solutions for abstract argumentation problems is a crucial step towards advanced argumentation systems. One of the most prominent approaches in the literature is to use answer-set programming (ASP) for this endeavor. In this paper, we present new encodings for three prominent argumentation semantics using the concept of conditional literals in disjunctions as provided by the ASP system clingo. Our new encodings are not only more succinct than previous versions, but also outperform them on standard benchmarks.

Keywords: answer-set programming, abstract argumentation, implementation, ASPARTIX
the detection of the cosmic microwave background ( cmb ) in 1965 stands as one of the most important scientific discoveries of the century , the strongest evidence we have of the hot big bang model .we know from the cobe satellite that it is an almost perfect blackbody with temperature k , with expected tiny spectral distortions only very recently discovered .once the cmb was discovered , the search was on for the inevitable angular fluctuations in the temperature , which theorists knew would encode invaluable information about the state of the universe at the epoch when big bang photons decoupled from the matter .this occurred as the universe cooled sufficiently for the ionized plasma to combine into hydrogen and helium atoms .this epoch was a few hundred thousand years after the big bang , at a redshift when the universe was a factor of a thousand smaller than it is today .theorists led the experimenters on a merry chase , originally predicting the fractional temperature fluctuation level would be , then in the seventies , then , where it has been since the early eighties , when the effects of the dark matter which dominates the mass of the universe were folded into the predictions .fortunately the experimenters were persistent , and upper limits on the anisotropy dropped throughout the eighties , leaving in their wake many failed ideas about how structure may have formed in the universe .a major puzzle of the hot big bang model was how regions that would not have been in causal contact at redshift could have the same temperature to such a high precision .this led to the theory of inflation , accelerated expansion driven by the energy density of a scalar field , dubbed the inflaton , in which all of the universe we can see was in contact a mere seconds after the big bang .it explained the remarkable isotropy of the cmb and had a natural byproduct : quantum oscillations in the scalar field could have generated the density fluctuations that grew via gravitational instability to create the large scale structure we see in the universe around us .this theory , plus the hypothesis that the dark matter was made up of elementary particle remnants of the big bang , led to firm predictions of the anisotropy amplitude . in the eighties ,competing theories arose , one of which still survives : that topologically stable configurations ( defects , such as cosmic strings ) of exotic particle fields arising in phase transitions could have formed in the early universe and acted as seeds for the density fluctuations in ordinary matter . 
immediately following the headline - generating detection of anisotropies by cobe [ ] in 1992 at the predicted level ,many ground and balloon experiment began seeing anisotropies over a broad range of angular scales .the emerging picture from this data has sharpened our theoretical focus to a small group of surviving theories , such as the inflation idea .the figures in this article tell the story of where we go from here .[ fig : allsky ] shows a realization of how the temperature fluctuations would look on the sky in an inflation - based model , at the resolution of the cobe satellite and what would be revealed at essentially full resolution .one sees not only the long wavelength ups and downs that cobe saw , but also the tremendous structure at smaller scales in the map .one measure of this is the _ power spectrum _ of the temperature fluctuations , denoted by , a function of angular wavenumber , or , more precisely , the multipole number in a spherical harmonic expansion . fig .[ fig : cltheory ] shows typical predictions of this for the inflation and defect theories , and contrasts it with the best estimate from all of the current data .the ups and downs in -space are associated with sound waves at the epoch of photon decoupling .the damping evident at high is a natural consequence of the viscosity of the gas as the cmb photons are released from it .the flat part at low is associated with ripples in the past light cone arising from gravitational potential fluctuations that accompany mass concentrations .all of these effects are sensitive to cosmological parameters , _e.g. _ , the densities of baryons and dark matter , the value of the cosmological constant , the average curvature of the universe , and parameters characterizing the inflation - generated fluctuations .if the spectrum can be measured accurately enough experimentally , such cosmological parameters can also be determined with high accuracy . for a review of cmb science see [ ] . once it became clear that there was something to measure , the race was on to design high - precision experiments that would cover large areas of the sky at the fine resolution needed to reveal all this structure and the wealth of information it encodes .these include ground - based interferometers and long duration balloon ( ldb ) experiments ( flying for 10 days vs. 10 hours for conventional balloon flights ) , as well as the use of large arrays of detectors .nasa will launch the microwave anisotropy probe ( map ) [ ] satellite in 2000 and esa will launch the planck surveyor [ ] around 2006 . they will each spend a year or two mapping the full sky .[ fig : clproj ] gives an idea of how well we think that the ldb and satellite experiments can do in determining if everything goes right .theorists have also estimated how well the cosmological parameters that define the functional dependence of in inflation models can in principle be determined with these experiments . in one exercise that allowed a mix of nine cosmological parameters to characterize the space of inflation - based theories , cobe was shown to determine one combination of them to better than 10% accuracy , ldbs and map could determine six , and planck seven .map would also get three combinations to 1% accuracy , and planck seven ! 
this is the promise of a high - precision cosmology as we move into the next millennium .cmb anisotropy experiments often involve a number of microwave and sub - millimeter detectors covering at least a few frequencies , located at the focal plane of a telescope .the raw data comes to us as noisy time - ordered recordings of the temperature for each frequency channel , which we shall refer to as timestreams , along with the pointing vector of each detector on the sky .the resolution of the experiment is usually fixed by the size of the telescope and the frequency of the radiation one looks at .we must learn from the data itself almost everything about the noise and the many signals expected , both wanted and unwanted , with only some guidance from other astrophysical observations .we shall see that to a large degree this appears to be a well - posed problem in bayesian statistical analysis .the major data products from the cobe anisotropy experiment were six maps , each with 6144 pixels , derived from six timestreams , one for each detector .the timestream noise was gaussian , which translated into correlated gaussian noise in the maps .much effort went into full statistical analyses of the underlying sky signals , most often under the hypothesis that the sky signal was a gaussian process as well .the amount of cobe data was at the edge of what could be done with 1992 workstations .the other experiments used in the estimate of the power spectrum in fig .[ fig : cltheory ] had less data , and full analysis was also feasible .we are now entering a new era : ldb experiments will have up to two orders of magnitude more data , map three and planck four .for the forecasts of impressively small errors to become reality , we must learn to deal with this huge volume of data . in this article, we discuss the computational challenges associated with current methods for going from the timestreams to multi - frequency sky maps , and for separating out from these maps of the different sky signals .finally , from the cmb map and its statistical properties , cosmological parameters can be derived . to illustrate the techniques , we use them to find estimates of .this represents an extreme form of data compression , but from which cosmological parameters and their errors can finally be derived .as we shall discuss at considerable length in this article , the analysis procedure we will describe is necessarily _ global _ ; that is , making the map requires operating on the entire time - ordered data , and estimating the power spectrum requires analyzing the entire map at once .this is due to the statistically - correlated nature of both the instrumental noise and the expected cmb sky signal which links up measurements made at one point with those made at all others .of the signals we know are present , there are of course the _ primary _ cmb fluctuations from the epoch of photon decoupling that we have already discussed , the primary goal of this huge worldwide effort .there are also _ secondary _ fluctuations of great interest to cosmologists arising from nonlinear processes at lower redshift : some come from the epoch of galaxy formation and some from scattering of cmb photons by hot gas in clusters of galaxies .extragalactic radio sources are another nontrivial signal . on top of this , there are various emissions from dust and gas in our milky way galaxy . 
while these are foreground nuisances to cosmologists , they are signals of passionate interest to interstellar medium astronomers .fortunately these signals have very different dependences on frequency ( fig . [fig : frequencyforegrounds ] ) , and , as we now know , rather statistically distinct sky patterns ( fig .[ fig : spatialforegrounds ] ) . we know how to calculate in exquisite detail the statistics of the primary signal for the various models of cosmic structure formation .the fluctuations are so small at the epoch of photon decoupling that linear perturbation theory is a superb approximation to the exact non - linear evolution equations .the simplest versions of the inflation theory predict that the fluctuations from the quantum noise form a gaussian random field .linearity implies that this translates into anisotropy patterns that are drawn from a gaussian random process and which can be characterized solely by their power spectrum .thus our emphasis is on confronting the theory with the data in the power spectrum space , as in fig .[ fig : cltheory ] .primary anisotropies in defect theories are more complicated to calculate , because non - gaussian patterns are created in the phase transitions which evolve in complex ways and for which large scale simulations are required , a computing challenge we shall not discuss in this article . in both theories , algorithmic advances have been very important for speeding - up the computations of .the _ secondary _ fluctuations involve nonlinear processes , and the full panoply of -body and gas - dynamical cosmological simulation techniques discussed in this volume are being brought to bear on the calculation .non - gaussian aspects of the predicted patterns are fundamental , and much beyond is required to specify them .further , some secondary signals , such as radiation from dusty star - burst regions in galaxies , are too difficult to calculate from first principles , and statistical models of their distribution must be guided by observations . at least for most cmb experiments ,they can be treated as point sources , much smaller than the observational resolution .the _ foreground _ signals from the interstellar medium are also non - gaussian and not calculable . they must be modeled from the observations and have the added complication of being extended sources . for each signal present , there is therefore a theoretical `` prior probability '' function specifying its statistical distribution , .a gaussian has the important property that it is completely specified by the two - point correlation function which is the expectation value of the product of the temperature in two directions and on the sky , . for non - gaussian processes an infinite number of higher order temperature correlation functionsare needed in principle .the inflation - generated or defect - generated temperature anisotropies are also usually statistically _ isotropic _ , that is , the -point correlation functions are invariant under a uniform rotation of the sky vectors .this implies is a function only of the angular separation .if the temperature field is expanded in spherical harmonics , then the two - point function of the coefficients is related to by a_m a_ m^ * = c___mm , t ( * * ) = _ m a_m y_m ( * * ) , [ eqn : sph - trn ] so the correlation function is related to the by where is a legendre polynominal . 
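Written out explicitly, the relations referred to in eq. [eqn:sph-trn] take the standard form
\[
T(\hat{\mathbf n}) = \sum_{\ell m} a_{\ell m}\, Y_{\ell m}(\hat{\mathbf n}),
\qquad
\langle a_{\ell m}\, a^{*}_{\ell' m'} \rangle = C_{\ell}\,\delta_{\ell\ell'}\,\delta_{m m'},
\]
so that the two-point correlation function depends only on the separation angle $\theta$ between the two directions,
\[
C(\theta) = \frac{1}{4\pi}\sum_{\ell}(2\ell+1)\, C_{\ell}\, P_{\ell}(\cos\theta),
\]
with $P_\ell$ a Legendre polynomial.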
just as a fourier wavenumber corresponds to a scale , the spherical - harmonic coefficients correspond to an angular scale .figure [ fig : cltheory ] shows for two different cosmologies given the same primordial theory ; we plot since at high it gives the power per logarithmic bin of .a nice way to think about gaussian fluctuations is that for a given power spectrum , they distribute this power with the smallest dispersion .temperature fluctuations are typically within and rarely exceed , where is _ rms _ amplitude .such is the map in fig .[ fig : allsky ] .since the term non - gaussian covers all other possibilities , it may seem impossible to characterize , but the way the greater dispersion often manifests itself is that the power is more concentrated , _e.g. _ in extended hot and/or cold spots for the galactic foregrounds , and point - like concentrations for the extragalactic sources , as is evident in fig .[ fig : spatialforegrounds ] .although we may marvel at how well the basic inflation prediction from the 1980 s is doing relative to the current data in fig .[ fig : cltheory ] , it will be astounding is if no anomalies are found in the passage from those large error bars to the much smaller ones of fig .[ fig : clproj ] and human musings about such exotic ultra - early universe processes are confirmed .the new cmb anisotropy data sets will come from a variety of platforms : large arrays of detectors on the ground or on balloons , long duration balloons ( ldbs ) , ground - based interferometers and satellites .most of these experiments measure the sky at anywhere between 3 to 10 photon frequencies , with several detectors at each frequency . with detector sampling rates of about 100 hz and durations of weeks to years , the raw data sets range in size from gigabytes to nearly terabytes .another measure of the size of a data set is the number of resolution elements , or beam - size pixels , in the maps that are derived from the raw data . over the next two years, ldbs and interferometers will measure between to resolution elements , which is an impressive improvement upon cobe / dmr s elements .nasa s map satellite will measure the whole sky with resolution in its highest frequency channel , resulting in cmb maps with resolution elements .the planck surveyor has resolution , that of the lower panel of fig .[ fig : allsky ] , and will create maps with resolution elements . in fig .[ fig : clproj ] , forecasts of power spectra and their errors for tophat and boomerang ( two ldb missions ) and map and planck are given .these results ignore foregrounds and assume maps have homogeneous noise , and thus are highly idealized .extracting the angular power spectrum from such large maps presents a formidable computing challenge . except for the complication of being on a sphere , the difficulties are those shared with the more usual problem of power spectrum estimation in flat spaces ; in general , it is an process , where is the number of pixels in the map .what makes the process is either matrix inversion or determinant evaluation , depending on the particular implementation .( in special cases , the fast fourier transform is a particularly elegant matrix factorization , reducing the operations count from to , but it is not generally applicable . 
)in addition to the operations count , storage is also a challenge , since the operations are manipulations of matrices .for example , the noise correlation matrix for a megapixel map requires 2000 gbytes for single precision ( four byte ) storage !conceptually , the process of extracting cosmological information from a cmb anisotropy experiment is straightforward .first , maps of microwave emission at the observed wavelengths are extracted from the lengthy time - ordered data ; these are the maximum - likelihood estimates of the sky signal given a noise model . then , the various physical components are separated : solar - system contamination , galactic and extragalactic foregrounds , and the cmb itself .finally , given the cmb map , we can find the maximum - likelihood power spectrum , , from which the underlying cosmological parameters can be computed .this entire data analysis pipeline can be unified in a bayesian likelihood formalism .of course , this pipeline is complicated by the correlated nature of the instrumental noise , by unavoidable systematic effects and by the non - gaussian nature of the various sky signals .experiments measure the microwave emission from the sky convolved with their _ beam_. measurements of different parts of the sky are often combined using complicated difference schemes , called _ chopping patterns_. for example , while the planck surveyor will measure the temperature of a single point on the sky at any given time , map and cobe measure the temperature difference between two points .the purpose of these chops is to reduce the noise contamination between samples , which can be large and may have long - term drifts and other complications .observations are repeated many times over the experiment s lifetime in different orientations on the sky and in many detectors sensitive to a range of photon wavelengths .schematically , we can write the observation as here , is the vector of observations at frequency and time , is the noise contribution , is the microwave emission at that frequency and position on the sky , smeared by the experimental beam and averaged over the pixel .the pointing matrix , , is an operator which describes the location of the beam as a function of time and its chopping pattern . 
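In matrix form, the observation model sketched above reads
\[
d_{\nu} = P\,\Delta_{\nu} + \eta_{\nu},
\qquad\text{i.e.}\qquad
d_{\nu}(t) = \sum_{p} P_{tp}\,\Delta_{\nu}(p) + \eta_{\nu}(t),
\]
with the sum running over sky pixels $p$.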
for a scanning experiment ,it is a sparse matrix with a 1 whenever position is observed at time ; for a chopping experiment it will have positive and negative weights describing the differences made at time .( note that we shall often drop the reference to the channel , , when referring to a single frequency ) .the first challenge is to separate the noise from the signal and create an estimate of the map , , and its noise properties .this alone is a daunting task : long - term correlations in the noise mean that the best estimate for the map is not simply a weighted sum of the observations at that pixel .rather , a full least - squares solution is required .this arises naturally as the maximum - likelihood estimate of the map if the noise is taken to be gaussian ( see eq .[ eqn : pofeta ] , below ) .this in turn requires complex matrix manipulations due to the long - term noise correlations .one of the most difficult forms of noise results from the random long term drifts in the instrument .these make it hard to measure the absolute value of temperature on a pixel , though temperature differences along the path of the beam can be measured quite well because the drifts are small on short time scales .however , by the time the instrument returns to scan a nearby area of the sky , the offset due to this drift can be quite large , resulting in an apparent _ striping _ of the sky along the directions of the scan pattern .the problem is even more complicated than a simple offset because the detector noise has a `` '' component at low frequencies accompanying the high frequency white noise .this striping can be reduced by using a better observing strategy .if the scan pattern is such that it often passes over one of a set of well sampled reference points , then the offset can be measured and removed from the timestreams .more complicated crossing patterns in which many pixels are quickly revisited along different scan directions provide a better sampling of the offset drift and allow it to be removed more effectively .the striping issue highlights the global nature of the problem of map - making .if the map did not need to be analyzed globally , then one could cut the map into pieces and speed up processing time by .however , including the reference points is essential and these can be far removed from the subset of pixels in which one is interested .more complicated crossing patterns which reduce these errors unfortunately increase the `` non - locality '' of the problem , making it difficult to use divide - and - conquer tactics successfully .solving for the map in the presence of this noise is , in general , an process , where is the number of elements in the time - ordered data .since may be anywhere from to upwards of , the general problem can not be solved in a reasonable time .fortunately , the problem becomes tractable if one can exploit the _stationarity _ , or time - translation invariance , of the noise .in addition to solving for the map , one also needs the statistical properties of the errors in the map .accurate calculation of the `` map noise matrix '' is critical , since the signal we are looking for is excess variance in the map , beyond that which is expected from the noise .it turns out that it is both easier to calculate and store the inverse of the map noise matrix , called the map weight matrix .the weight matrix is typically very sparse , whereas its inverse may be quite dense .it is therefore advantageous to have algorithms for power spectrum and parameter estimation 
which require the weight matrix , rather than its inverse .maps are made at a number of different wavelengths .each of these maps will be the sum of the cmb signal , , and contributions from astrophysical foregrounds : sources of microwave emission in the universe other than the cmb itself .this includes low - frequency galactic emission from the 20k dust that permeates the galaxy and from gas , emitting synchrotron and bremsstrahlung ( or free - free ) radiation .there are also extragalactic sources of emission : galaxies that emit in the infrared and the radio .these are treated as point sources , since their angular size is much smaller than the experimental resolution .in addition , clusters of galaxies and the filamentary structures connecting them will appear because their hot gas of electrons can compton scatter cmb photons to shorter wavelengths , a phenomenon known as the sunyaev - zeldovich ( sz ) effect .these clusters are typically a few arcminutes across , small enough to be resolved by planck but not map . in figure[ fig : spatialforegrounds ] , we schematically show the spatial patterns of some of these foregrounds , and in figure [ fig : frequencyforegrounds ] , we show their frequency spectra . the next challenge , then , is to separate these foregrounds from the cmb itself in the noisy maps .we write here , is the frequency - independent cmb temperature fluctuation , is the noise contribution whose statistics have been calculated in the map - making procedure , and is the contribution of the foreground or secondary anisotropy component .the shapes of the expected frequency dependences shown in figure [ fig : frequencyforegrounds ] show some uncertainty .there is none for some secondary anisotropy sources , _e.g. _ , the sunyaev - zeldovich effect , so can be considered a product of the given function of frequency times a spatial function . in the past ,an approximation like this involving a single spatial template and one function of frequency has been used for all of the foregrounds , but it is essential to consider fluctuations about this for the accuracy that will be needed in the data sets to come . a crude but reasonably effective method is to separate the signals using the multifrequency data on a pixel - by - pixel basis .however , it is clearly better to use our knowledge of the spatial patterns in the forms adopted for , _e.g. _ , the foreground power spectra shown in fig .[ fig : cltheory ] . even using a gaussian approximation for the foreground prior probabilities has been shown to be relatively effective at recovering the signals . in this case , the statistical distribution of the maps is again gaussian , with a mean given by the maximum likelihood , which turns out to involve _ wiener filtering _ of the data [ ] . 
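For concreteness, if the stacked component maps are collected in a vector $s$, the frequency scalings (and beams) in a mixing matrix $A$, and the component and noise fields are modeled as Gaussian with covariances $S$ and $N$ respectively, the posterior-mean reconstruction is the standard Wiener filter (the notation here is generic, chosen for illustration rather than taken from a specific reference):
\[
\hat{s} \;=\; \bigl(S^{-1} + A^{T} N^{-1} A\bigr)^{-1} A^{T} N^{-1} d
\;=\; S A^{T} \bigl(A S A^{T} + N\bigr)^{-1} d .
\]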
in simulations for planck performed by bouchet and gispert, the layers making up the `` cosmic sandwich '' in figure [ fig : spatialforegrounds ] have been convolved with the frequency - dependent beams , and realistic noise has been added .the recovered signals look remarkably like the input ones .there is some indication that the performance degrades if too large a patch of the sky is taken , possibly because the non - gaussian aspects become more important .of course , good estimates of the power spectra for each of the foregrounds are essential ingredients for , and these must be obtained from the cmb data in question by iterative techniques , or with other cmb data .radio astronomers have a long history of image construction using interferometry data .one of the most effective techniques is the `` maximum entropy method '' .although this is often a catch - all phrase for finding the maximum likelihood solution , the implementation of the method involves a specific assumption for the nature of , derived as a limit of a poisson distribution . for small fluctuationsit looks like a gaussian , but has higher probability in the tails than the gaussian does .the poisson aspect makes it well - suited to find and reconstruct point sources . to apply it to the cmb , which has both positive and negative excursions , and to include signal correlation function information, some development of the approach was needed .this has been recently carried out and applied to the cosmic sandwich exercise [ ] .it did at least as well at recovery as the wiener method did , and was superior for the concentrated sunyaev - zeldovich cluster sources and more generally for point sources , as might be expected .errors on the maximum entropy maps are estimated from the second derivative matrix of the likelihood function .we regard these exercises as highly encouraging , but since the accuracy with which cosmological parameters can be determined is very dependent upon the accuracy with which separation can be done , it is clear that much work is in order for improving the separation algorithms .armed with a cmb map and its noise properties , we can try to extract its cosmological information .if we assume the cosmological signal is the result of a statistically isotropic gaussian random process , then all of the information is contained in the power spectrum , . with gaussian noise as well , we can write down the exact form of its likelihood function . unfortunately , because of incomplete sky coverage , and the presence of correlated , anisotropic noise , maximizing this likelihood function ( either directly or by some sort of an iterative procedure ) requires manipulation of matrices , typically needing operations and storage .this becomes computationally prohibitive on typical workstations when exceeds about ; for the satellite missions even supercomputers may be inadequate to the task .for example , on a single 1000 mhz processor , even one calculation of operations necessary for a ten - million - pixel map would take _ 30,000 years _ ! 
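The arithmetic behind the 30,000-year figure quoted above: an $\mathcal{O}(N_{\mathrm{pix}}^{3})$ operation on a map with $N_{\mathrm{pix}} = 10^{7}$ pixels requires
\[
N_{\mathrm{pix}}^{3} = 10^{21} \ \text{operations},
\qquad
\frac{10^{21}\ \text{operations}}{10^{9}\ \text{operations}\ \mathrm{s}^{-1}} = 10^{12}\ \mathrm{s} \approx 3\times 10^{4}\ \mathrm{yr},
\]
assuming the 1000 MHz processor retires roughly one operation per clock cycle.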
there is , as of yet , no general solution to this problem .however , in some cases , such as for the map satellite , a solution has been proposed which relies upon the statistical isotropy of the signal and a simple form for the noise .unfortunately , most experiments will produce maps with more complicated noise properties .the power spectrum is a highly compressed form of the data in the map , but it is not the end of the story .the real goal remains to determine the underlying _ cosmological parameters _ , such as the density of the different components in the universe .for the simple inflationary models usually considered , there are still at least ten different parameters which affect the cmb power spectrum , so we must find the best fit in a ten ( or more ) dimensional parameter space .just as the frequency channel maps were derived from the timestreams , the cmb map from the frequency maps , and the power spectrum from the cmb map , the cosmological parameters can be estimated from the power spectrum .although in doing so , one must be careful about the non - gaussian distribution of the uncertainty in the [ ] .we now take a more in - depth look at the problems of map - making and parameter estimation .the most general algorithms for solving these problems operate globally on the data set and are prohibitively expensive : both require matrix operations , where is either the number of points in the time series ( for upcoming satellites ) or the number of pixels on the sky ( ) .special properties , such as the approximate _stationarity _ of the instrumental noise , must be exploited in order to make the analysis of large data sets possible .to date most work has concentrated on efficient algorithms for the exact global problem , but for the new data sets it will be essential to develop approximate methods as well .we wish to find the _ most likely _ maps and power spectra .we can write down likelihood functions for both these quantities if we assume that both the noise and signal are gaussian . while the maximum - likelihood map has a closed - form solution, there is no such solution for the most likely power spectrum .thus , the problem of the cost of evaluating the likelihood function is compounded by having to search a very high - dimensional space for the global maximum .even these complex problems are an oversimplification because we know that foregrounds and secondary anisotropies have non - gaussian distributions .thus , although we expect to get valuable results using simplified approximations for , in particular the gaussian one we use in the discussion below , monte carlo approaches in which many maps are made will undoubtedly be necessary to accurately determine the uncertainty in the derived cosmological parameter . as described in eq .[ eq : data ] , for each channel we model the timestream , , as due to signal , , and noise , , , where is the pointing matrix that describes the observing strategy as a function of time . in the ideal case ,the noise is gaussian - distributed , _i.e. _ , its probability distribution is [ eqn : pofeta ] p ( ) = ^-1/2 ( -^n^-1 /2 ) , where is the number of time - ordered data points and is the noise covariance matrix . herethe denotes transpose and the brackets indicate an ensemble average ( integration over ) . substituting for in this expressiongives the probability of the time - ordered data given a map , , which is also referred to as the likelihood of the map , .we are actually interested in the probability of a map given the data , . 
if we assign a uniform prior probability to the underlying map , _i.e. _ , is constant , then by bayes theorem is simply proportional to the likelihood function , . the map that maximizes this likelihood function is [ eqn : mapsoln ] | = c_n p^n^-1 d where is the noise covariance matrix of the map , c_n(|- ) ( |-)^= ( p^n^-1 p)^-1 .this map is known as a _ sufficient statistic _ , in that and contain all of the sky information in the original data set , provided the pixels are small enough . as discussed above ,it is preferable to work with , the map weight matrix , which is often sparse or nearly so .for many purposes , the variance - weighted map , [ eqn : solvefordelta ] c_n^-1| = p^n^-1d may be more useful than the map itself , so that we can avoid the computationally intensive step of inverting the weight matrix .this is true for optimally combining maps , since variance - weighted maps and their weight matrices simply sum , and for finding the minimum - variance map in a different basis , such as fourier modes or spherical harmonics .an algorithm for finding the most likely power spectrum exploits this , as we will see below .if we do need to find , we can solve eq .[ eqn : solvefordelta ] iteratively by techniques like the conjugate gradient method . in general ,such methods require iterations and are effectively still methods .fortunately , we expect to be sufficiently diagonal - dominant that many fewer than iterations are required .this is aided by the use of _ pre - conditioners _ , which will be discussed further in the context of finding the maximum - likelihood power spectrum .whether we are interested in or , we still must convolve the inverse of with the data vector .the direct inversion of by brute force is impractical since it is an matrix where is often about .however , this is greatly simplified if the noise is stationary , which means its statistical properties are time translation invariant , so that .stationarity means that is diagonal in fourier space with eigenvalues , the noise _power spectrum_. is then just the inverse fourier transform of .knowing , it is easy to calculate the map weight matrix , .the convolution of with appears to be an operation .since there is much more timestream data , this is potentially the slowest step in the calculation of the map .fortunately , the convolution is actually much faster because generally goes nearly to zero for .the absence of weight at long time scales can be due to the `` '' nature of the instrument noise at low temporal frequencies .atmospheric fluctuations also have more power on long time scales than on short time scales , as do many noise sources .since these characteristic times do not scale with the mission duration , the convolution is actually .similarly , the multiplication of the pointing matrix is also because of its sparseness .thus , we can reduce the timestream data to an estimate of the map and its weight matrix in only operations , a substantial savings compared to the operations required for a direct calculation .these algorithms , or similar ones , have been implemented in practice , _e.g. _ , [ ] .above , we made two simplifying assumptions : that the statistical properties of the noise in the timestream were known and that the noise sources were all stationary .here we try to deal with the more general case .we would like to estimate the statistical properties of the noise by using a model of the instrument , but in practice , these models are never sufficient. 
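before turning to the estimation of the noise from the data itself, the map-making step just described can be sketched as follows, under several simplifying assumptions: the pointing assigns exactly one sky pixel per time sample, the noise is stationary with an assumed 1/f-plus-white power spectrum taken as known, and the weight-matrix equation for the map is solved by preconditioned conjugate gradients with the white-noise hit map as preconditioner. the noise parameters, map size and iteration count are all illustrative; the fourier-space application of the inverse noise covariance is what keeps each step near linear in the timestream length.

```python
import numpy as np

rng = np.random.default_rng(1)

n_t, n_pix = 2 ** 16, 256                       # timestream samples and sky pixels (toy sizes)
pointing = rng.integers(0, n_pix, size=n_t)     # pointing "matrix" stored as an index array
sky_true = rng.normal(0.0, 1.0, size=n_pix)

# stationary noise with an assumed 1/f-plus-white power spectrum
freqs = np.fft.rfftfreq(n_t, d=1.0)
f_knee, sigma = 1e-3, 0.5
noise_power = sigma ** 2 * (1.0 + f_knee / np.maximum(freqs, freqs[1]))
noise = np.fft.irfft(np.fft.rfft(rng.normal(0.0, 1.0, n_t)) * np.sqrt(noise_power), n_t)

data = sky_true[pointing] + noise               # d = P s + n

def apply_Ninv(x):
    """apply the inverse noise covariance to a timestream; for stationary
    noise this is diagonal in fourier space and costs O(n_t log n_t)."""
    return np.fft.irfft(np.fft.rfft(x) / noise_power, len(x))

def apply_weight(m):
    """apply the map weight matrix P^T N^-1 P without ever forming it."""
    return np.bincount(pointing, weights=apply_Ninv(m[pointing]), minlength=n_pix)

# variance-weighted map P^T N^-1 d, then solve (P^T N^-1 P) m = P^T N^-1 d
# by preconditioned conjugate gradients
b = np.bincount(pointing, weights=apply_Ninv(data), minlength=n_pix)
hits = np.bincount(pointing, minlength=n_pix)
precond = sigma ** 2 / np.maximum(hits, 1)      # white-noise approximation to the inverse weights

m = np.zeros(n_pix)
r = b - apply_weight(m)
z = precond * r
p = z.copy()
for _ in range(50):
    Ap = apply_weight(p)
    alpha = (r @ z) / (p @ Ap)
    m += alpha * p
    r_new = r - alpha * Ap
    z_new = precond * r_new
    beta = (r_new @ z_new) / (r @ z)
    p = z_new + beta * p
    r, z = r_new, z_new

resid = m - sky_true
print("map rms error (offset removed):", np.std(resid - resid.mean()))
```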
one must always estimate the noise from the data set itself , and doing this from the timestream requires some assumptions .it is usually assumed that the noise is stationary over sufficiently long intervals of time and is gaussian. often the data set is dominated by noise and to a first approximation , is all noise .thus one has many pairs of points separated by to estimate .techniques are being developed [ ] to simultaneously determine the map and noise power spectrum and the covariance between the two .non - stationary noise can arise in a number of ways : possible sources include contamination by radiation from the ground , balloon or sun , some components of atmospheric fluctuations and cosmic ray hits .often they are synchronous with a periodic motion of the instrument .they can be taken into account by extending the model of the timestream given in eq .[ eq : data ] to include contaminants of amplitude with a known `` timestream shape '' , : the contaminant amplitudes are now on the same mathematical footing as the map pixels , , and both can be solved for simultaneously .a more conservative approach assigns infinite noise to modes of the time - ordered data which can be written as a linear combination of the .doing so removes all sensitivity of the map to the contaminant , irrespective of the assumption of gaussianity .operationally , we replace the timestream noise covariance matrix , with [ eq : matcon ] n_tt n_tt+_c ^2_c _ tc _ tc where the are taken to be very large , thereby setting the appropriate eigenvalues of to zero .this noise matrix has lost its time - translation invariance and so is no longer directly invertible by fourier transform methods . fortunately , there is a theorem called the _ woodbury formula _ [ ] which allows one to find the resulting correction to for additions to of the form in eq .[ eq : matcon ] while only having to invert matrices of dimension equal to the number of contaminants .we now turn to the determination of some set of cosmological parameters from the map .we will focus on the case where the parameters are the s because it is a model independent way of compressing the data .however , the discussion below can easily be generalized to any kind of parameterization , including the ten or more cosmological parameters that we would like to constrain .we wish to evaluate the likelihood of the parameters , which folds in the probability of the map given the data with all of the prior probability distributions , for the target signal and the foregrounds and secondary anisotropies , in a bayesian way : only in the gaussian or uniform prior cases is the integration over and analytically calculable .the usual procedure for `` maximum entropy '' priors is to estimate errors from the second derivative of the likelihood , _i.e. _ effectively use a gaussian approximation . exploring how to break away from the gaussian assumptionis an important research topic .assuming all signals and the noise are gaussian - distributed , the likelihood function is \over \left [ \left(2\pi\right)^{m_p } |c_n + c_s| \right]^{1/2 } } , \ ] ] where is the maximum - likelihood cmb map , with the foregrounds removed . 
is the noise matrix calculated above , modified to include variances determined for the foreground maps , and is the primary signal autocorrelation function which depends on ( as in eq .[ eqn : tt ] , but corrected for the effect of the beam pattern and finite pixel size ) .the likelihood function is a gaussian distribution _ in the data _ , but a complicated nonlinear function of the parameters , which enter into through the power spectrum . unlike the map - making problem ( eq .[ eqn : mapsoln ] ) , there is no closed - form solution for the most likely .thus we must use a search strategy and it should be a very efficient one , since brute force evaluation of the likelihood function requires determinant evaluation and matrix inversion which are both problems . compounding this , evaluating the likelihood is more difficult here because the signal and noise matrices have different symmetries , making it harder to find a basis in which has a simple form .a particularly efficient search technique for finding the maximum - likelihood parameters is a generalization of the _ newton - raphson _ method of root finding .the newton - raphson method finds the zero of a function of one parameter iteratively .one guesses a solution and corrects that guess based on the first derivative of the function at that point .if the function is linear , this correction is exact ; otherwise , more iterations are required until it converges . in maximizing the likelihood, we are searching for regions where the first derivative of the likelihood with respect to the parameters goes through zero , so it can be solved analogously to the newton - raphson method .we actually maximize , which simplifies the calculation and also speeds its convergence since the derivative of the logarithm is generally much more linear in than the derivative of the likelihood itself .solving for the roots of using the newton - raphson method requires that we calculate , which is known as the curvature of the likelihood function .operationally , we often replace the curvature with its expectation value , the _ fisher matrix _ , because it is easier to calculate and still results in convergence to the same parameters . the change in the parameter values at each iteration for this method is a quadratic form involving the map ; hence it is referred to as a _quadratic estimator_. 
using as our parameter , the new guess is modified by [ ] \ ] ] where the fisher matrix is given by we can recover the full shape of the likelihood for the s from this and one other set of numbers , calculated in approximately the same number of steps as the fisher matrix itself [ ] .the procedure is very similar to that of the levenberg - marquardt method [ ] for minimizing a with non - linear parameter dependence .there the curvature matrix ( second derivative of the ) is replaced by its expectation value and then scaled according to whether the is reduced or increased from the previous iteration .similar manipulations may possibly speed convergence of the likelihood maximization , although one would want to do this without direct evaluation of the likelihood function .this method has been used for the power spectrum estimates for cobe and other experiments , and for the compressed power spectrum bands estimated from current data shown in fig .[ fig : cltheory ] .this brute force approach is quite tractable for the current data and for idealized simulations of the satellite and ldb data , such as the power spectrum forecasts of fig .[ fig : clproj ] , in which the noise was assumed ( incorrectly ) to be homogeneous .we can calculate the time and memory required to do this quadratic estimation for a variety of realistic data sets and kinds of computing hardware . for this algorithm ,the operations must be performed for each parameter ( e.g. , each band of for ) .borrill [ ] has considered this issue under several different scenarios .for cobe , power spectrum calculation can easily be done on a modern workstation in less than one day . however , for the ldb data sets expected over the next several years ( with or so ) the required computing power becomes prohibitive , requiring 640 gb of memory and of order floating - point operations , which translates to _ 40 years _ of computer time at 400 mhz .this pushes the limits of available technology ; even spread over a cray t3e with 900 mhz processors , this would take a week or more .this data set is in hand _ now _ , so we can not even wait for computers to speed up .when the satellite data arrives , with , a brute - force calculation will clearly be impossible even with projected advances in computing technology over the next decade .the ten million pixel planck data set would require 1600 tb of storage and floating - point operations or 25,000 years of serial cpu time at 400 mhz .even a hundredfold increase in computing over the next decade , predicted by moore s law , still renders this infeasible . to solve these computing challenges ,shortcuts must be found .one area where there is great potential benefit is in deciding how the discretized map elements are to be distributed on the sky and stored . imposing enough symmetries at this early step can help greatly to speed up everything that follows .obviously it is important to keep the number of pixels as small as possible . for a given resolution , fixed for example by the beam size, the number of pixels is minimized by having them all roughly of the same area .if there are many pixels in a resolution element much smaller than the beam size , they will be highly correlated and little information is gained by treating them individually .the hierarchical nature of the pixelization used for the cobe maps was also a very useful property . 
in this pixelization , known asthe quadrilateralized spherical cube , the sky was broken into six base pixels corresponding to faces of a cube .higher resolution pixels were created hierarchically , by dividing each pixel into four smaller pixels of approximately equal area .one advantage of this hierarchical structure is that the data is effectively stored via a branching structure , so that pixels that are physically close to each other are stored close to each other . among other things ,this allows one to coarsen a map very quickly , by adding the ordered pixels in groups of four .finally , it is very beneficial to have a pixelization which is azimuthal , where many pixels share a common latitude .this is incredibly useful in making spherical harmonic transforms between the pixel space , where the data and inverse noise matrix are simply defined , and multipole space , where the theories are simple to describe .specifically , one wishes to make transforms of the type described by eq .[ eqn : sph - trn ] , as well as the inverse transformation . when discretized , these transforms naively take operations , because spherical harmonic functions need to be evaluated at separate points on the sky .however , as has been recently emphasized , if one uses a pixelization with azimuthal symmetry , then the spherical transforms can be greatly sped up [ ] .this utilizes the fact that the azimuthal dependence of the spherical harmonic functions can be simply factored out , .if one further requires that the pixels have discrete azimuthal symmetry , then the azimuthal sum can be performed quickly with a fast fourier transform .effectively , this means that the functions need only be evaluated at different latitudes , so that the whole process requires only operations .efforts have been made to speed this up even further , by attempting to use fft s in the direction as well , which in principle could perform the transform in operations .such implementations are still being developed , and do not tend to pay off until is very large .pixelizations have been developed which have all of these symmetries .healpix , devised by kris gorski and collaborators [ ] , has a rhombic dodecahedron as its fundamental base , which can be divided hierarchally while remaining azimuthal .it was used for the rapid construction of the map in fig .[ fig : allsky ] .another class of pixelizations is based on a naturally azimuthal igloo structure which has been specially designed to be hierarchical [ ] . in this scheme ,pixel edges lie along lines of constant latitude and longitude , so it is easy to integrate over each pixel exactly .this allows any suppression effects due to averaging over the varying pixel shapes to be simply and accurately included when making the transforms .since many of the signals are most simply described in multipole space , it is natural to try to exploit this basis when implementing the parameter estimation method described above .we should also try recasting the calculation to take advantage of the simple form the weight matrix has in the pixel basis . 
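the azimuthal factorization described above can be sketched for the synthesis direction (multipole coefficients to map) on an iso-latitude grid: the theta dependence is accumulated per ring using associated legendre functions, and the phi dependence is then filled in with a single fft per ring. the grid, band limit and normalization below are illustrative, and production codes such as healpix additionally use stable recurrences for the legendre part instead of direct evaluation; the analysis direction works the same way, with one fft per ring followed by a weighted sum over latitudes.

```python
import numpy as np
from scipy.special import lpmv, gammaln

def lambda_lm(l, m, cos_theta):
    """theta part of the spherical harmonic in the usual normalization:
    lambda_lm = sqrt((2l+1)/(4 pi) (l-m)!/(l+m)!) P_l^m(cos theta)."""
    norm = np.sqrt((2 * l + 1) / (4 * np.pi)) * np.exp(
        0.5 * (gammaln(l - m + 1) - gammaln(l + m + 1)))
    return norm * lpmv(m, l, cos_theta)

def synthesize(alm, lmax, thetas, nphi):
    """a_lm -> map on an iso-latitude grid: for each ring accumulate
    f_m(theta) = sum_l a_lm lambda_lm(theta), then restore the e^{i m phi}
    dependence with one fft per ring (real-map convention for m >= 0)."""
    sky = np.zeros((len(thetas), nphi))
    for i, theta in enumerate(thetas):
        fm = np.zeros(lmax + 1, dtype=complex)
        for m in range(lmax + 1):
            for l in range(m, lmax + 1):
                fm[m] += alm[l, m] * lambda_lm(l, m, np.cos(theta))
        # irfft supplies c_0 + 2 Re sum_{m>0} c_m e^{i m phi}, i.e. the ring values
        sky[i] = np.fft.irfft(fm, n=nphi) * nphi
    return sky

# toy example: band-limited sky with a single excited multipole
lmax, nrings, nphi = 16, 33, 64
thetas = np.pi * (np.arange(nrings) + 0.5) / nrings
alm = np.zeros((lmax + 1, lmax + 1), dtype=complex)
alm[4, 2] = 1.0 + 0.5j
sky = synthesize(alm, lmax, thetas, nphi)
print(sky.shape, sky.std())
```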
finally , with iterative methods we can exploit approximate symmetries of these matrices which can speed up the algorithms tremendously .oh , spergel and hinshaw [ ] , hereafter osh , have recently applied these techniques to simulations of the operation of parameter estimation for the map satellite to great effect .the newton - raphson method does not require the full inverse correlation matrix , but rather , which can be expressed in terms of and various factors .the equation can be solved using a simple conjugate gradient technique , which iteratively solves the linear system by generating an improved guess and a new search direction ( orthogonal to previous search directions ) at each step . in general ,conjugate gradient is no faster than ordinary methods , requiring of order iterations with operations per iteration required for the matrix - vector multiplications .however , this can be sped up in two ways .first , one can make the matrix well conditioned by finding an appropriate preconditioner which allows the series to converge much faster , in only a few iterations .second , one can exploit whatever symmetries exist to do the multiplications in fewer operations .a preconditioner is a matrix which approximately solves the linear system and is used to transform it to , making the series converge much faster .there are two requirements of a good preconditioner : it should be close enough to the original matrix to be useful and it should be quickly invertible .one can rewrite the linear system we need to solve as ( i + c_s^1/2c_n^-1c_s^1/2 ) c_s^1/2z = c_s^1/2c_n^-1 |t .osh use a preconditioner , where is an approximation to the inverse noise matrix in multipole space : is taken to be azimuthally symmetric , so that it is proportional to in multipole space , which makes it block diagonal and possible to invert quickly . for the case they looked at , which includes only uncorrelated pixel noise and an azimuthally symmetric sky cut , this turned out to be a very good approximation which allows for quick convergence .because the matrices are simple in the bases chosen , the vector - matrix multiplications are much faster than . in multipole space , the theory correlation matrix is simply diagonal , , where denotes the beam pattern in space .similarly , in pixel space , operations using the inverse noise matrix are much faster .( osh simplified to a case where the noise matrix was exactly diagonal in pixel space . )a time - consuming aspect is the transformation between pixel and multipole space , which is .the whole process is actually dominated by the calculation of the trace in eq .[ eq : quadraticcl ] , which is performed by monte carlo iterations of the above method , exploiting the fact that $ ] .the osh method requires effectively operations , a dramatic improvement over traditional methods .the methods highlighted here have focused on solving one well - posed problem under a number of important simplifying assumptions .it is not obvious whether any of these assumptions are correct or indeed if the problem itself is as simple as we have described .in addition , there remain other problems , as or more complex , which remain to be addressed . 
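to make the quadratic estimator of the preceding paragraphs concrete, the sketch below runs the newton-raphson band-power iteration with dense linear algebra on a tiny one-dimensional periodic "sky", where the explicit inverse and the traces can still be afforded; the osh approach replaces exactly these dense operations with preconditioned conjugate gradients and monte carlo trace estimates. the band definitions, true spectrum and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pix = 96
x = np.arange(n_pix)

# derivative templates T_b = dC/dq_b for three band powers on a 1-d periodic sky
bands = [(1, 8), (9, 20), (21, 40)]
def band_template(kmin, kmax):
    T = np.zeros((n_pix, n_pix))
    for k in range(kmin, kmax + 1):
        T += 2.0 * np.cos(2 * np.pi * k * (x[:, None] - x[None, :]) / n_pix) / n_pix
    return T
templates = [band_template(*b) for b in bands]

q_true = np.array([1.0, 0.4, 0.1])                # true band powers (illustrative)
N = 0.05 * np.eye(n_pix)                          # white noise covariance
d = rng.multivariate_normal(np.zeros(n_pix),
                            sum(q * T for q, T in zip(q_true, templates)) + N)

def quadratic_estimator(q, n_iter=5):
    """newton-raphson iteration for the band powers q_b:
    dq = F^-1 g with g_b = (d^T C^-1 T_b C^-1 d - tr(C^-1 T_b)) / 2
    and F_bb' = tr(C^-1 T_b C^-1 T_b') / 2."""
    for _ in range(n_iter):
        C = sum(qb * T for qb, T in zip(q, templates)) + N
        Cinv = np.linalg.inv(C)                   # the dense O(n^3) step the osh tricks avoid
        CinvT = [Cinv @ T for T in templates]
        Cinv_d = Cinv @ d
        g = 0.5 * np.array([Cinv_d @ T @ Cinv_d - np.trace(A)
                            for T, A in zip(templates, CinvT)])
        F = 0.5 * np.array([[np.trace(A @ B) for B in CinvT] for A in CinvT])
        q = np.maximum(q + np.linalg.solve(F, g), 1e-4)   # keep band powers positive
    return q, F

q_hat, F = quadratic_estimator(np.ones(3))
print("estimated band powers:", q_hat)
print("fisher error bars    :", np.sqrt(np.diag(np.linalg.inv(F))))
```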
here, we briefly touch on some of these issues .the improvements in speed discussed in the last section relied heavily on assuming the error matrix was close to being both diagonal and azimuthally symmetric .this may well be the case for the map satellite , because it measures the temperature difference between each point on the sky and very many other points at a fixed angular separation of at many different time scales .in doing so , the off - diagonal elements of the noise matrix are `` beaten down '' and may indeed be negligible . however , for almost all other cases ( and indeed possibly for map when the effects of foreground subtraction are taken into account , ) the estimation problem becomes much more complicated . in the presence of significant striping or inhomogeneous sky coverage , the block - diagonality of the noise matrix is no longer a good approximation . in this case , finding a basis where both the signal and noise matrices are simple may not be possible .people have found signal - to - noise eigenmodes of the matrix ( or as in sec .[ sec : osh ] ) to be useful for data compression and computation speedup , but finding them is another problem .one might try to solve this by splitting the data set up into smaller bits and analyzing them separately , recombining the results at the end .however , as emphasized above , this can be difficult to do because of the global nature of the the mapmaking process . ignoring correlations between different regions is often a poor approximation . due to the complicated noise correlation structure ,optimally splitting and recombining may itself require the operations we are trying to avoid .another feature of realistic experiments that has not been properly accounted for in the formalism we have outlined is that of asymmetric or time - varying beams .the model of the experimental procedure we have given here ( eq . [ eq : data ] ) assumes that all observations of a given pixel see the same temperature .this implicitly assumes an underlying model of the sky that has been both beam - smoothed and pixelized .( pixelization effects were touched on in sec .[ sec : pixel ] . )if the beam is not symmetric , or if it is time - varying , then different sweeps through the same pixel will see different sky temperatures .this is very difficult to account for exactly and may be crucial for some upcoming experiments which can have significantly asymmetric beams .in addition , large uncertainties in the nature of the foregrounds may make their removal quite tricky .not only are they non - gaussian , but unlike the cmb , their frequency dependence is not well understood .above , we have cast the problem of foreground separation as essentially a separate step in the process , between the making of maps at various frequencies and the estimation of the cosmological power spectrum .however , we may need to study foregrounds contaminants in as much detail as the cmb fluctuations themselves in order to fully understand their impact on parameter determination . 
throughout ,we have emphasized the assumption of gaussianity for both the instrumental noise and the cosmological model .if one or both of these assumptions are violated , the theoretical underpinning of the algorithms we have described becomes shaky .non - gaussianity issues arise even in intrinsically gaussian theories , due to foregrounds and non - linear effects .more worrisome are models with intrinsic non - gaussianity at larger angular scales .how do we even begin to characterize an arbitrary distribution of sky temperatures ? as it is sometimes put , describing non - gaussian distributions is like describing `` non - dog animals . ''however , techniques do exist for finding specific flavors of non - gaussianity ; for example , estimations have been made recently of the so - called connected -point functions for which vanish for a gaussian theory .other methods have tried to find structures using wavelets , which localize phenomena in both position on the sky and scale ( wavenumber ) .still others have attempted to find topological measures of non - gaussianity , focusing on fixed temperature contours , like the isotherms of a weather map .for all of these cases , however , both the theoretical predictions and data analysis are considerably more difficult than the algorithms presented here ; in particular , none of them have been considered in the presence of complicated correlated noise .the computational challenges we have highlighted are associated specifically with parameter estimation from cmb data , but the problems are generic to other statistical measures that might be of interest .for example , goodness - of - fit tests ( like a simple or more complicated examples like those explored in [ ] ) require calculation of a quadratic form involving inversion of matrices , as in the parameter estimation examples above .one might hope that these problems may also be solvable given similar assumptions to those considered above , but this has yet to be addressed .finally , we have not even touched on the problem of analyzing measurements of the polarization of the cmb , which results from thomson scattering at the surface of last scattering .although the essential aspects of the analysis are the same , polarization data will be considerably more difficult to handle for several reasons .first , because polarization is defined with respect to spatially fixed axes , we must combine measurements from different experimental channels in order to make an appropriate sky map .second , the signal is expected to be about one tenth the amplitude of the already very small temperature anisotropies .third , the polarization of foreground contaminants is even less well - understood than their temperatures . with these greater experimental challenges , the resulting maps , and their construction algorithms ,will be more complicated .upcoming cmb data sets will contain within them many of the answers to questions that have interested cosmologist for decades : how much matter is there in the universe ?what does it consist of ?what did the universe look like at very early times ?our task will be to extract the answers and assess the errors from these large data sets . 
especially challenging are the need for a _global_ analysis of the data and for separating the various signals. although some of the issues we face are specific to the cmb problem, many are of common concern to all astronomers facing the huge onslaught of data from the ground, balloons and space that the next millennium is bringing (see, _e.g._, the article on the sloan digital sky survey). we cannot rely on raw computing power alone. computer scientists and statisticians are now collaborating with cosmologists in the quest for algorithmic advances.

figure [fig:allsky] was provided by kris gorski; both computation and visualization were handled using the http://www.tac.dk/ healpix software package. figure [fig:spatialforegrounds] was provided by francois bouchet and richard gispert. we also thank julian borrill and david spergel for discussions of computer timings and algorithmic issues.

references:
1. bennett, m.s. turner & m. white, "the cosmic rosetta stone", physics today, november 1997.
2. bennett et al., "4-year cobe dmr cosmic microwave background observations: maps and basic results", astrophys. j. 464 (1996) l1-l4.
13. muciaccia, p. natoli & n. vittorio, "fast spherical harmonic analysis: a quick algorithm for generating and/or inverting full sky, high resolution cmb anisotropy maps", astrophys. j. 488, l63 (1998).
m. tegmark, "how to measure cmb power spectra without losing information", _phys. rev._ *d55*, 5895 (1997).
wright, "scanning and mapping strategies for cmb experiments", astro-ph/9612006 (1996).
the cosmic microwave background ( cmb ) encodes information on the origin and evolution of the universe , buried in a fractional anisotropy of one part in on angular scales from arcminutes to tens of degrees . we await the coming onslaught of data from experiments measuring the microwave sky from the ground , from balloons and from space . however , we are faced with the harsh reality that current algorithms for extracting cosmological information can not handle data sets of the size and complexity expected even in the next few years . here we review the challenges involved in understanding this data : making maps from time - ordered data , removing the foreground contaminants , and finally estimating the power spectrum and cosmological parameters from the cmb map . if handled naively , the global nature of the analysis problem renders these tasks effectively impossible given the volume of the data . we discuss possible techniques for overcoming these issues and outline the many other challenges that wait to be addressed . [ invited article for _ computing in science and engineering . _ ] 0.2 in
it has long been observed that particles with a finite size and mass have different dynamics from the ambient fluid . because of their inertiathe particles do not evolve as point - like tracers in a fluid .this leads to preferential concentration , clustering and separation of particles as observed in numerous studies .the inertial dynamics of solid particles can have important implications in natural phenomena , e.g. , the transport of pollutants and pathogenic spores in the atmosphere , formation of rain clouds by coalescence around dust particles and formation of plankton colonies in oceans .similarly , the inertial dynamics of reactant particles is important in the reaction kinetics and distribution of reactants in solution for coalescence type reactions . mixing sensitive reactions in the wake of bubbleshas been shown to be driven by buoyancy effects of reactants .recently , a principle of asymmetric bifurcation of laminar flows was applied to the separation of particles by size and demonstrated the separation of flexible biological particles and the fractional distillation of blood .innovative channel geometries have been empirically designed to focus randomly ordered inertial particles in microchannels .these phenomena and related applications rely on the non - trivial dynamics of inertial particles in a fluid . in this paper , we demonstrate a theoretical tool to achieve particle segregation , by studying the sensitivity of the dynamics of inertial particles in a fluid . we employ a simplified form of the maxey - riley equation as the governing equation for the motion of inertial particles in a fluid .the dynamics of a single particle occur in a four dimensional phase space. the sensitive dependence of the particle motion on initial conditions is quantified using the finite time lyapunov exponents ( ftle ) .it has been shown previously , that the ridges in the ftle field act as separatrices .these are in general time dependent and go by the name of lagrangian coherent structures ( lcs ) .we chose to do a simplified sensitivity analysis by perturbing the initial conditions in only two dimensions , in the initial relative velocity subspace .we obtain a sensitivity field akin to a ftle field but restricted to the relative velocity subspace and demonstrate numerically that the ridges in this field act as separatrices .the partitions in the relative velocity subspace created by these separatrices determine the eventual spatial distribution of particles in the fluid . using this partitioning schemewe show how the stokes number acts as a parameter in the separation of particles of different inertia or size . the paper is organized as follows . in 2 we review the the equation governing the inertial particle dynamics in a fluid and it s simplified form . in 3 we briefly review the background theory of phase space distributions of finite time lyapunov exponents , which we use to quantify the sensitivity of the physical location of inertial particles with respect to perturbations in the initial relative velocity .we also describe our computational scheme to obtain the sensitivity field in the relative velocity subspace . in 4 we present results for the sensitivity field of the inertial particles in a cellular flow . in 5 we demonstrate our procedure for the segregation of particles by their stokes number using the results from 4 . in 6 we give numerical justification for the robustness of the sensitivity field to perturbations in the velocity field of the fluid . 
in 7 we discuss the results and give conclusions .our starting point is maxey and riley s equation of motion of a rigid spherical particle in a fluid ( ) . where * v * is the velocity of the solid spherical particle , * u * the velocity field of the fluid , the density of the particle , , the density of the fluid , the kinematic of the viscosity of the fluid , , the radius of the particle and * g * the acceleration due to gravity .the term on the right hand side are the force exerted by the undisturbed flow on the particle , the force of buoyancy , the stokes drag , the added mass correction and the basset - boussinesq history force respectively .eq is valid under the following restrictions . + where and are the length scale and velocity gradient scale for the undisturbed fluid flow .the derivative is the acceleration of a fluid particle along the fluid trajectory whereas the derivative is the acceleration of a solid particle along the solid particle trajectory .. can be simplified by neglecting the faxen correction and the basset - boussinesq terms .we restrict our study to the case of neutrally buoyant particles , i.e . writing * w * = ( * v * - * u * ) , the relative velocity of the particle and the surrounding fluid, the evolution of * w * becomes and the change in the particle position is given by where is the gradient of the undisturbed velocity field of the fluid , , and is a constant for a particle with a given stokes number .eqs and can be rewritten as the vector field with .eq defines a dissipative system with constant divergence .it has been shown by haller that an exponentially attracting slow manifold exists for general unsteady inertial particle motion as long as the particle stokes number is small enough . for neutrally buoyant particles this attractor is ( the _ xy _ plane ) . despite the global attractivity of the slow manifold ,domains of instability exist in which particle trajectories diverge .the lyapunov characteristic exponent is widely used to quantify the sensitivity to initial conditions .a positive lyapunov exponent is a good indicator of chaotic behavior .we have used the finite time version of the lyapunov exponents , the ftle , a measure of the the maximum stretching for a pair of phase points .we review some important background regarding the ftle below , following .the solution to eq can be given by a flow map , , which maps an initial point at time to at time . the evolution over a time of the displacement between two initially close phase points , and , is given by neglecting the higher order terms , the magnitude of the perturbation is the matrix is the right cauchy green deformation tensor .maximum stretching occurs when the perturbation is along the eigenvector corresponding to the maximum eigenvalue of .the growth ratio is given by where is the maximal finite time lyapunov exponent .one can associate an entire spectrum of finite time lyapunov exponents with , ordering them as the entire spectrum of the lyapunov exponents can be computed from the state transition matrix using singular value decomposition . diagonal matrix gives all the lyapunov exponents .\ ] ] where and .an arbitrary perturbation in the fixed basis can be transformed using a time dependent transformation . 
such that in the new basis ( the primed frame ) , the variational equations become since is a constant diagonal matrix , we have the first coordinate in the new frame grows as .the time dependent transformation is given by , since the dynamics of the inertial particle is in a four - dimensional phase space , the separatrices , that is lcs defined by ridges in the field of the maximal ftle , are three dimensional surfaces ( see ) .however , because the system is dissipative and the global attractor is the subspace , we can obtain meaningful information by restricting the computations to a lower dimensional subdomain of the phase space .this we do by considering an initial perturbation only in the relative velocity subspace and study how this perturbation grows in the plane , the configuration space , i.e , ^{*}\ ] ] where are the perturbations in the relative velocity subspace . using the time dependent transformation the evolution of the perturbationis given by the growth of perturbation in the plane is given by the first two components of the above vector .the last two components of the above vector are the evolution of the perturbations in the relative velocity subspace .since the plane is a global attractor these tend to zero .one can choose a finite time , , such that the evolution of the initial perturbation comes arbitrarily close to the plane .in this way the sensitivity of the final spatial location of the particles to initial relative velocity can be computed . as equation(15 )shows the evolution of a perturbation is along the four basis vectors . for an arbitrarily oriented initial perturbation the growth may not be dominated in the direction of greatest expansion for short integration times .this can be overcome by sampling multiple perturbations in the different directions . a reference point and its neighborsare identified and after a finite time their positions in configuration space are computed .the state transition matrix can then be computed at each point in the plane , by using a central finite difference method . for initial perturbations restricted to subspace ,this gives the relative velocity sensitivity field , is given by , ridges on this sensitivity surface are one dimensional structures similar to lcs .the ridges in the maximal sensitivity field , partition the relative velocity subspace .we applied the above procedure to a cellular flow .+ _ a note on the terminology _ - the field measuring the sensitivity of the final location of particles in configuration space with respect to perturbations in initial relative velocity is analogous to the ftle field , but not identical . to obtain the true ftle field ,one would have to compute the state transition matrix , . using the notation of eq , the ftle fieldis then given by .ridges in this field are 3 dimensional structures and represent the true lcs .this flow is described by the stream function the velocity field is given by , there are heteroclinic connections from the stable and unstable manifolds of the fixed points , shown by the arrows in figure [ streamlines ] , which are also the boundaries of the cells .these coincide with lcs , the lcs for the fluid velocity field .the lcs is to be distinguished from the lcs of the inertial particle in the full four dimensional phase space . by choosing initial perturbations of the form given by eq . 
at different points along a streamline, we follow how these perturbations grow in the plane by integrating the particle trajectories numerically from which the sensitivity field is computed .figure [ variation of ftle ] shows the sensitivity field computed for initial perturbations in the relative velocity subspace , at different points on the streamline .the ridges in this field have high values of sensitivity .it can be seen that there is a continuous variation in the ridges of the sensitivity field with respect to the initial coordinates . in each casethe sensitivity field at a given point depends on the underlying lcs of the fluid flow .the ridges in the sensitivity field have meaningful information about the dynamics of inertial particles even when computed at points far from the saddle points of the fluid flow .this is shown in figure [ velocity_partition](a ) which is the sensitivity field computed at .the ridges in the sensitivity field partition the relative velocity subspace according to the final location of particles . in figure [ velocity_partition](b ) the ridges in the partial ftle field are used to identify regions in the relative velocity subspace , that produce qualitatively different trajectories .particles that start at the same physical location , but are in different regions of the relative velocity subspace , are neatly separated from particles that started in other regions , as shown in figure [ velocity_partition](c ) .thus the ridges in the sensitivity field , have the property of a separatrix .eq . can be diagonalised as where are the eigenvalues of the jacobian of the fluid velocity field .if , is very large , then both the components of would decay . for low values of , one component of would grow .therefore the dynamics of an inertial particles depend on the value of , that is on the stokes number .it is reasonable to expect that the computations of the sensitivity of the particles location to the initial relative velocity also would depend on the stokes number . that this is indeed the case is shown by the computations of the sensitivity field for a particle with stokes number for the time independent flow , as shown in figure [ particle_separation](a ) .the thick lines are the ridges in sensitivity field for particles with and the hatched lines are those of .it can be seen that though the structure of the sensitivity field field is similar , the ridges are present at different locations in the relative velocity subspace .this fact can be exploited to design a process to separate particles by their stokes number . in this sectionwe illustrate a simple procedure for doing this .the ridges of the sensitivity fields computed for the two different particles of stokes number 0.1 and 0.2 respectively are overlain in the same plot , as shown in figure [ particle_separation ] .the subdomain of the relative velocity subspace sandwiched between the ridges of the sensitivity fields of the two types of particles form a zone of segregation .one such sample zone is shown in grey in figure [ particle_separation ] .two particles with and with initial coordinates and the same initial relative velocity , belonging to this region , have trajectories that separate in the physical space . 
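a sketch of this computation is given below. it assumes that the simplified neutrally buoyant dynamics reduce to xdot = u + w and wdot = -(grad u) w - w/St, the standard reduction when the faxen and history terms are dropped, and it uses the cellular stream function psi = A sin(x) sin(y) as a stand-in for the flow used here, whose exact form and parameters may differ. the state is integrated with fixed-step rk4, and the sensitivity field is obtained by central differences of the final position with respect to the initial relative velocity, exactly as described above; the grid sizes, integration time and step size are illustrative.

```python
import numpy as np

A, St = 1.0, 0.1     # flow amplitude and stokes number (illustrative values)

def velocity(x, y):
    """cellular flow from the assumed stream function psi = A sin(x) sin(y)."""
    return A * np.sin(x) * np.cos(y), -A * np.cos(x) * np.sin(y)

def velocity_gradient(x, y):
    """jacobian J_ij = du_i/dx_j of the cellular flow (trace-free, incompressible)."""
    return np.array([[ A * np.cos(x) * np.cos(y), -A * np.sin(x) * np.sin(y)],
                     [ A * np.sin(x) * np.sin(y), -A * np.cos(x) * np.cos(y)]])

def rhs(state, St):
    """simplified maxey-riley dynamics for a neutrally buoyant particle:
    xdot = u + w,  wdot = -J w - w / St."""
    x, y, wx, wy = state
    u, v = velocity(x, y)
    w = np.array([wx, wy])
    wdot = -velocity_gradient(x, y) @ w - w / St
    return np.array([u + wx, v + wy, wdot[0], wdot[1]])

def integrate(state, St, T, dt=2e-3):
    """fixed-step rk4 integration of the particle state (x, y, wx, wy)."""
    for _ in range(int(T / dt)):
        k1 = rhs(state, St)
        k2 = rhs(state + 0.5 * dt * k1, St)
        k3 = rhs(state + 0.5 * dt * k2, St)
        k4 = rhs(state + dt * k3, St)
        state = state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return state

def sensitivity_field(x0, y0, St, T, w_grid, eps=1e-4):
    """restricted ftle-like field: sensitivity of the final (x, y) position
    to the initial relative velocity, by central differences over (wx, wy)."""
    sigma = np.zeros((len(w_grid), len(w_grid)))
    for i, wx in enumerate(w_grid):
        for j, wy in enumerate(w_grid):
            Phi = np.zeros((2, 2))
            for k, dw in enumerate([(eps, 0.0), (0.0, eps)]):
                plus = integrate(np.array([x0, y0, wx + dw[0], wy + dw[1]]), St, T)
                minus = integrate(np.array([x0, y0, wx - dw[0], wy - dw[1]]), St, T)
                Phi[:, k] = (plus[:2] - minus[:2]) / (2 * eps)
            smax = np.linalg.svd(Phi, compute_uv=False)[0]
            sigma[i, j] = np.log(max(smax, 1e-12)) / T
    return sigma

w_grid = np.linspace(-2.0, 2.0, 15)
field = sensitivity_field(x0=1.0, y0=0.5, St=St, T=2.0, w_grid=w_grid)
print(field.shape, field.max())
```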
to illustrate this , the trajectories of five hundred particles of each stokes number , starting at the same initial physical point and with initial relative velocities values belonging to the grey region were computed .the position of these particles is plotted as a function of time to show that the particles are completely segregated into two different cells .the above procedure can be applied to other regions sandwiched between the ridges of the partial ftle of the two different types of particles .the time - independent flow given the stream function in eq .is perturbed by making it weakly time dependent .the modified fluid flow is given by the stream function the velocity field is given by , for time dependent systems the location of the lcs depends on the choice of initial time .for the computation of the sensitivity field , the location of ridges in the relative velocity subspace depend on the initial spatial coordinates of the particle as well as the initial time .however our computations show that the dependence of the ridge structure on the initial time is weak .figure [ ridge_robust ] shows the ridges in the sensitivity field . as the initial timeis increased , it is seen that there is a ` squeezing ' of the sensitivity field in some regions of the relative velocity subspace . a comparison with figure [ particle_separation ] and figure [ velocity_partition ] shows that the ridge locations in the sensitivity field remain qualitatively the same , for the three cases figure [ ridge_robust ] where the initial time is small .this offers a numerical evidence that the sensitivity field is robust to small perturbations in the fluid velocity .the dynamics of inertial particles in a fluid flow can exhibit sensitivity to initial conditions .the finite time lyapunov exponent can be used to characterize this sensitivity .the lcs obtained from the ridges of the ftle field offers a systematic method to identify qualitatively different regions of the phase space .we demonstrated that even a reduced one dimensional ridge in the sensitivity field contains important information about the sensitivity of the spatial location of particles to initial relative velocity .the stokes number , and by implication the size of the particle , is an important parameter that governs the clustering behavior of particles for a given flow .this property can be exploited to make particles of different sizes cluster in different regions of the fluid and thus separate them . for the more general case of non neutrally buoyant particles , the density of the particles could play a similar governing role as the stokes number .one could therefore design flows that can fractionally separate particles for a range of inertial parameters .davis , j.a . ,inglis , d.w . ,morton , k.j . ,lawrence , d.a . , huang , l.r . ,chou , s.y . ,sturm , j.c ., and austin , r.h , [ 2006 ] , deterministic hydrodynamics : taking blood apart ._ proceedings of national academy of sciences usa _ , 103(40):14779 - 84 .shadden , s.c . ,lekien , f. and marsden , j. [ 2005 ] , definition and properties of lagrangian coherent structures from finite - time lyapunov exponents in two - dimensional aperiodic flows ._ physica d _, 212 , 271 - 304 .babiano , a. , cartwright , j.h.e ., piro , o. and provenzale , a. [ 2000 ] , dynamics of a small neutrally buoyant sphere in a fluid and targeting in hamiltonian systems ._ physical review letters _ , 84 , 5764 - 5767 .ide k. , small , d. , wiggins , s. 
[2002], distinguished hyperbolic trajectories in time dependent fluid flows: analytical and computational approach for velocity fields as data sets. _nonlinear processes in geophysics_ 9, 237-263.
[figure: (a) ridges in the sensitivity field for the stream function; the hatched lines and the thick lines are the ridges corresponding to st = 0.2 and st = 0.1 respectively. parameters a = 100, b = 0.25, integration time = 0.24. panels (b) and (c).]
it is a commonly observed phenomenon that spherical particles with inertia in an incompressible fluid do not behave as ideal tracers. due to the inertia of the particle, the dynamics are described in a four dimensional phase space and can thus differ considerably from the ideal tracer dynamics. using finite time lyapunov exponents, we compute the sensitivity of the final position of a particle with respect to its initial velocity relative to the fluid, and thus partition the relative velocity subspace at each point in configuration space. the computations are carried out at every point of the relative velocity subspace, giving a sensitivity field. the stokes number, being a measure of the independence of the particle from the underlying fluid flow, acts as a parameter in determining the variation in these partitions. we demonstrate how this partition framework can be used to segregate particles by stokes number in a fluid. the fluid model used for demonstration is a two dimensional cellular flow.

key words: particle separation, inertial particle, maxey-riley equation, separatrices
the extensive use of benchmarking is essential for successful algorithmic development , particularly , to validate the successful operation of a new algorithm , to quantitatively compare existing and newly developed approaches , and to systematically optimize algorithmic performance . in recent years , various benchmarks for bioimage analysis have been presented for tasks such as seed detection , segmentation or tracking . a general problem with manually created benchmark datasets , however , is caused by the inter- and intra - expert variability , which means that ambiguous image content may be rated differently by different investigators or even by the same investigator during multiple labeling iterations . an increasingly popular solution to tackle these problems and to additionally avoid time - consuming and tedious manual annotationsis the use of simulated benchmark datasets .it has been shown that biological phenomena such as fluorescently labeled cell populations can be realistically simulated if enough knowledge of the investigated probes was available . the charm of simulated data is the availability of a reliable ground truth and literally unrestricted possibilities to adjust parameters like noise levels , sampling rates or light attenuation , which can hardly be achieved by imaging dynamically changing organisms and thus prohibits robustness analyses as in .nevertheless , existing simulated benchmarks are often much simpler than the real application scenarios and mostly focus solely on a single processing step .challenges such as multiview acquisition and fusion , large file sizes and highly dynamic scenes with possibly thousands of objects , that are frequently observed in state - of - the - art experiments in embryomics using confocal or light - sheet microscopy , are not considered sufficiently yet . to evaluate the performance of an entire image analysis pipeline comprised of seed detection , segmentation , multiview fusion and tracking with a single benchmark, we present a new method that combines simulated fluorescent objects , realistic object movement based on real embryos and the ability to generate challenging large - scale microscopy data in a single framework including various acquisition deficiencies . in the remainder of this paper, we introduce the general concept that was used to generate new benchmark datasets and a proof - of - principle simulation that mimics the early development of a zebrafish embryo .the new benchmark required a realistic simulation of fluorescence properties of labeled nuclei and a customizable number of cells , nucleus size , division cycle duration and experimental duration .the simulation of nuclei should include realistic cell movement , cell divisions , neighborhood related movement dynamics as well as spatial restrictions .moreover , the generated simulation images should be artificially flawed by acquisition deficiencies such as an approximated point spread function ( psf ) , slice - dependent illumination variations , simulated multiview generation including light attenuation along the virtual axial direction as well as detector- and discretization - related deficiencies like dark current , photon shot noise and signal amplification noise . 
to achieve these requirements , we use object locations , displacement vectors and density information of real embryos and complemented the remaining components with synthetic data to a comprehensive benchmark generation framework offering the desired flexibility ( fig.1 ) .the first step of the benchmark generation is to specify the number of objects that should be simulated .we use a percentage value ] , a division cycle length ] and an object video i d $ ] , with being the number of single - cell object videos .the subsequent simulation of the cellular movement is determined by these randomized parameters and the valid parameter ranges can be estimated by manually investigating a few representative cells of the original images .one of the most challenging events that takes place during early embryonic development are cell divisions .there are various possibilities to add mitotic events to the simulation .the most straightforward approach is to simply specify minimum and maximum division cycle duration based on biological prior knowledge of the simulated specimen and to randomly assign a value between those two boundaries to each simulated cell . in each frame , the division cycle state is incremented and as soon as the maximum division cycle duration is reached , an object division is performed .however , this approach does not incorporate spatial information , _i.e. _ , all cells are dividing in a similar manner that does not necessarily correspond to real embryonic development .we found that the results get more realistic if the division events are directly coupled to the real number of objects . in each frame , the number of required cell divisions to reach the target number of cells is set to : in eq.(1 ) , is the percentage of real cells used for the simulation and and are the number of cells in the real and the simulated embryo at time point . to identify which of the cells should be divided , either the cells with the largest division cycle state are split or the divisions are performed density - based , by splitting the cells with with the largest relative density difference : the densities and are the number of neighboring cells of object at time point within a fixed radius around each cell calculated either on the real data or on the simulated data . although , the framework in principle allows using each of the described cell division approaches , the method based on the relative density difference yielded the most realistic results and was used for all presented results .the next step after the initialization and the selection of a cell division approach is the dynamic simulation .this step essentially comprises updating each object s spatial location as well as the simulated division cycle state . if an object s division cycle ended during the performed update step ( ) or if it was selected for division due to a large relative density difference , an object division is performed .each of the two new objects is again randomly initialized and positioned relative to its ancestor , with the division axis being set to the major axis of the mother cell . to obtain a dynamically changing scene ,the position of each object is updated at every simulation step by considering a set of simulated influences that are acting on it .the interactions are comprised of displacement vectors , and that originate from real object movements of the underlying embryo , repulsive behavior between nearby objects and an attraction that pulls simulated objects towards the embryo , respectively . 
the displacement vector of the directed cell movement that is defined by the movement direction of real cells that reside in the vicinity of the simulated object .this is the most important component to obtain realistically moving objects and it is defined as : where is the number of neighbors to use , are the indices of the nearest neighbors of and is the movement direction of neighbor . a repulsive component acting between two objects if their distance becomes smaller than the sum of their radii avoids intersections and is defined as : where in eq.(4 ) , is the centroid difference vector of two interacting objects , is the radius of the cell nucleus .as the simulation was only performed on cell nuclei , the parameters were set in relation to the nucleus radii of two interacting objects to for the nucleus radius parameter and for the membrane radii .the repulsive displacement can in some cases push nuclei apart from the locations of the real embryo . to compensate this behavior , we additionally introduce a displacement vector component that slightly pulls each of the simulated objects towards its nearest neighbor : the influence of the directed movement , the repulsive interaction and the nearest neighbor attraction can be controlled using the weights , and , respectively .the total displacement vector of a single object at a given time point can be summarized to : the nearest neighbor attraction weight in eq.(7 ) is clamped by the magnitude of the cell movement vector to avoid large jumps in cases where the nearest neighbor of a simulated object is erroneously missing in one or more frames of the underlying embryo data . generally , it is possible to add further displacement components to the simulation , _e.g. _ , to add a specific attractor . however , for the benchmark only movements that originated from the real cells ( brownian - like and directed movements ) and density variations caused by cell divisions were considered . to generate the actual benchmark images and the corresponding label images from the simulated object locations , the images were initialized as entirely black images .small single - cell 3d video sequences ( fig.2c ) were extracted from a simulated time - lapse dataset comprised of eight dividing cells over two division cycles ( data provided by d. svoboda ) and served as an object video database for the generation of the artificial benchmark images ( ) . by iterating over all simulated time points and all simulated objects , both the benchmark images and the ground truth imageswere successively filled with simulated fluorescent nuclei and the label masks , respectively .the specified object radii and the division cycle lengths of the simulated objects were used to scale the single - cell videos appropriately . 
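a sketch of one such per-frame position update is given below. the directed movement is averaged over the k nearest real-embryo cells, the repulsion acts between simulated nuclei that overlap, and the attraction pulls each simulated object towards its nearest real cell with its magnitude clamped by the movement magnitude, which is one reading of eqs. (3)-(7); the repulsion profile, the weights and all other constants are placeholders rather than the values used for the benchmark.

```python
import numpy as np
from scipy.spatial import cKDTree

def displacement_update(sim_pos, sim_radii, real_pos, real_disp,
                        k=5, w_move=1.0, w_rep=1.0, w_attr=0.5):
    """one simulation step combining directed movement, repulsion and
    clamped nearest-neighbour attraction (cf. eqs. 3-7)."""
    # directed movement: average displacement of the k nearest real cells
    real_tree = cKDTree(real_pos)
    _, idx = real_tree.query(sim_pos, k=k)
    d_move = real_disp[idx].mean(axis=1)

    # repulsion between simulated objects closer than the sum of their radii
    sim_tree = cKDTree(sim_pos)
    d_rep = np.zeros_like(sim_pos)
    for i, j in sim_tree.query_pairs(r=2.0 * sim_radii.max()):
        diff = sim_pos[i] - sim_pos[j]
        dist = np.linalg.norm(diff)
        overlap = sim_radii[i] + sim_radii[j] - dist
        if overlap > 0 and dist > 0:
            push = 0.5 * overlap * diff / dist     # placeholder repulsion profile
            d_rep[i] += push
            d_rep[j] -= push

    # attraction towards the nearest real cell, clamped by the movement magnitude
    _, idx_nn = real_tree.query(sim_pos, k=1)
    d_attr = real_pos[idx_nn] - sim_pos
    move_mag = np.linalg.norm(d_move, axis=1, keepdims=True)
    attr_mag = np.linalg.norm(d_attr, axis=1, keepdims=True)
    d_attr = d_attr * np.minimum(1.0, move_mag / np.maximum(attr_mag, 1e-9))

    return sim_pos + w_move * d_move + w_rep * d_rep + w_attr * d_attr

# toy usage with random positions and displacements
rng = np.random.default_rng(3)
real_pos = rng.uniform(0, 100, size=(500, 3))
real_disp = rng.normal(0, 0.5, size=(500, 3))
sim_pos = rng.uniform(0, 100, size=(200, 3))
sim_radii = rng.uniform(3, 5, size=200)
sim_pos = displacement_update(sim_pos, sim_radii, real_pos, real_disp)
print(sim_pos.shape)
```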
to simulate the acquisition process of a fluorescence microscope, all generated images were filtered in several steps to obtain a realistic benchmark dataset (fig. 1, right column). first, the intensities of the simulated images were attenuated along the virtual optical axis by multiplying the intensities of each slice by a factor that decreases linearly from the slice closest to the virtual detection objective to the slice farthest from the detection objective. subsequently, the entire image was convolved with a previously published point spread function (psf) that was measured by imaging fluorescent beads in a light-sheet microscope. to optionally simulate a multiview acquisition experiment with a single rotation, the multiplier used for signal attenuation was inverted and an analogously rotated point spread function was used to convolve the images. an empirically determined positive offset, derived from fluorescence microscopy images, was added to all intensity values in order to simulate the dark current signal of the detector. to simulate photon shot noise, an independent poisson process was applied to each voxel with the respective image intensity as its average. finally, zero-mean additive gaussian noise was used to model the readout noise caused by signal amplification. the steps for modeling the acquisition deficiencies of a benchmark image thus consist of depth-dependent attenuation, convolution with the point spread function, addition of the dark current image of the detector, application of poisson-based shot noise and addition of normally distributed readout noise with zero mean. the entire acquisition simulation was implemented in xpiwit. for the generation of an exemplary benchmark dataset, we used the spatio-temporal data of an early wild-type zebrafish embryo. the displacement vector weights and the number of neighbors used to estimate the object movements were fixed empirically, as were the radius ranges and the number of different single-cell videos. we used a density-based cell division model where cell divisions were directly coupled to the real number of cells (25%, 50% and 75% of the number of cells of the real embryo) and the locations of cell divisions were determined via the maximum density difference (eq. (2)). all parameters were empirically determined to obtain movement behaviors that nicely resembled the actual cellular dynamics observed during real embryonic development.
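a minimal sketch of this acquisition simulation chain is given below. the attenuation strength, offset and noise levels are placeholders (the concrete values used in the paper are not reproduced in this text), the helper assumes a z-first volume layout, and it is meant to illustrate the order of the degradation steps rather than the xpiwit implementation.

....
import numpy as np
from scipy.ndimage import convolve

def simulate_acquisition(volume, psf, offset=100.0, read_noise_sigma=3.0,
                         attenuation_min=0.5, seed=0):
    rng = np.random.default_rng(seed)

    # depth-dependent signal attenuation along the virtual optical axis (z)
    factors = np.linspace(1.0, attenuation_min, volume.shape[0])[:, None, None]
    img = volume.astype(float) * factors

    # blur with the measured light-sheet point spread function
    img = convolve(img, psf, mode='nearest')

    # constant positive offset mimicking the detector dark current
    img += offset

    # poisson shot noise with the local intensity as its mean
    img = rng.poisson(np.clip(img, 0.0, None)).astype(float)

    # zero-mean gaussian readout noise
    img += rng.normal(0.0, read_noise_sigma, img.shape)
    return img
....

the same chain, applied with an inverted attenuation profile and a rotated psf, would mimic the multiview acquisition option mentioned above.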
it should be noted, though, that the presented model does not necessarily represent an accurate physical simulation of the interacting objects. we successfully generated multiple time series of simulated embryos with varying numbers of cells that perfectly resemble the movement behavior and cell distributions of a real embryo (fig. 2a, b). on the basis of these simulated object locations, simulated fluorescent nuclei (fig. 2c) were used to generate time-resolved artificial benchmark images (fig. 2d). the generated data comprises ground truth label images, raw images including acquisition deficiencies and an object property database for each of the frames, and offers numerous possibilities to validate and analyze the robustness of image analysis and tracking operators. in this contribution we present a novel approach for generating realistic benchmark images that mimic the spatio-temporal cell dynamics of developing embryos. we successfully simulated the early development of a zebrafish embryo at various levels of detail that nicely imitated the movement behavior observed in a real embryo. depending on the desired number of cells, the simulation currently takes a few hours, and artificial images can be obtained in a matter of minutes for a single frame. the framework is currently implemented in matlab and we host source code, sample data and videos on https://bitbucket.org/jstegmaier/. we plan to extend the benchmark framework by an easy-to-use graphical user interface and to systematically speed up the benchmark generation in order to easily produce benchmark datasets for numerous application fields and additional model organisms. currently, the fluorescent objects used for the simulated images are based on artificial cells. a straightforward extension of the presented framework would be to replace the simulated video object library with manually annotated snippets of real microscopy images. to make the simulations even more realistic, cell distances could be learned from real data, and tissue-dependent light scattering as well as a more realistic light attenuation model could be added.
systematic validation is an essential part of algorithm development. the enormous dataset sizes and the complexity observed in many recent time-resolved 3d fluorescence microscopy imaging experiments, however, prohibit comprehensive manual ground truth generation. moreover, existing simulated benchmarks in this field are often too simple or too specialized to sufficiently validate the image analysis problems observed in practice. we present a new semi-synthetic approach to generate realistic 3d+t benchmarks that combines challenging cellular movement dynamics of real embryos with simulated fluorescent nuclei and artificial image distortions, including various parametrizable options such as cell numbers, acquisition deficiencies or multiview simulations. we successfully applied the approach to simulate the development of a zebrafish embryo with thousands of cells over 14 hours of its early existence. keywords: image analysis, tracking, validation benchmarks, developmental biology, embryomics
interpreted in modern terms (rather than in terms of the debate over completeness of quantum mechanics), the 1935 paper of einstein, podolsky, and rosen, through its demonstration that the orthodox rule for quantum state projection (lüders rule) implies superluminal transmission of influences, opened a furious debate in the foundations of physics that rages to this day. john bell made a seminal contribution to the debate by developing inequalities combining the results of correlations for event detections in scenarios with different combinations of measurement settings. these inequalities transformed what was previously essentially a philosophical debate into an empirical matter, decidable by experiment, and they have subsequently been used to attempt to distinguish between a fully local universe and one that allows for nonlocal interactions. these nonlocal interactions have come to be referred to as 'quantum nonlocality'. many modern experiments, including that of hensen et al., use a particular bell-type inequality, the chsh inequality. these experiments are widely believed to confirm the existence of quantum nonlocality. however, these experiments have been subject to 'loopholes', purported to allow local classical mechanisms to reproduce the inequality violation reported in the experiments. it has been a longstanding quest of experimental physicists to close all the loopholes and thereby decisively confirm the existence of quantum nonlocality. the experiment of hensen et al. is the most prominent and arguably most convincing of the recent experiments claimed to be loophole-free. if the experiment were to be accepted as fully valid and decisive, the foundations of physics would be rocked. despite the claim by hensen et al. that their experiment is loophole-free, at least one important loophole, the postselection loophole, remains open. the postselection loophole arises when, through a mechanism not specified, some fraction of the full data of the experiment is absent from or not considered in the data analysis, producing only an artifactual violation of the applicable bell-like inequality. experimental physicists conducting an experiment must prove that postselection is not present in the experiment, or that any postselection that is present is harmless. here i discuss the postselection loophole in the context of the hensen et al. experiment, and i argue that hensen et al. have not succeeded in discharging their responsibility to prove the absence of harmful postselection. throughout the paper, the term 'postselected' means postselected _out_, i.e., decimated.
in some other contexts, the term might be used to denote selective inclusion, but here the meaning is selective exclusion. the scheme of the paper is as follows. section 2 demonstrates that postselection occurs in the hensen et al. experiment. section 3 shows how similar postselection for a classical local model can produce results indistinguishable from the results of the experiment, including a significant violation of the chsh inequality. section 4 shows how an apparent violation of no-signaling in the experimental data further strengthens the arguments of this paper. section 5 discusses these results and concludes that the hensen et al. experiment does not succeed in rejecting local realism, and that a local realist may take it as confirming locality. to investigate the matter of postselection in the hensen et al. experiment, i instrumented the publicly available hensen et al. analysis code to test the fairness and uniformity of the random number generators (rngs) used in the experiment. specifically, i replaced the appropriate lines in the hensen et al. analysis code with the following lines:

....
random_number_a = data[:, 6].astype(bool, copy=False)
random_number_a_not = ~random_number_a
random_number_b = data[:, 7].astype(bool, copy=False)
random_number_b_not = ~random_number_b

random_00 = random_number_a_not & random_number_b_not
random_10 = random_number_a & random_number_b_not
random_01 = random_number_a_not & random_number_b
random_11 = random_number_a & random_number_b

print("number of random 0's at a = ", np.sum(random_number_a_not))
print("number of random 1's at a = ", np.sum(random_number_a))
print("number of random 0's at b = ", np.sum(random_number_b_not))
print("number of random 1's at b = ", np.sum(random_number_b))
print("number of random 00 events = ", np.sum(random_00))
print("number of random 01 events = ", np.sum(random_01))
print("number of random 10 events = ", np.sum(random_10))
print("number of random 11 events = ", np.sum(random_11))
....

this instrumentation allows one to assess the fairness of the list of random setting choices for each side individually and the distribution of the four joint setting combinations. when the instrumented analysis code is run on the data of the experiment, the published hensen et al. results are, of course, reproduced, and the additional code instrumentation shows a large excess of 1's at both a and b, and a large deficit of events for the {00} experiment.
let us now estimate the probability that the joint counts distribution we see in the experiment could be obtained by chance. we see that the {00} experiment has a low count at 1143 while the other three experiments all have counts that exceed the expected mean value of 1186. i wrote and executed a numerical simulation that estimates the probability that the {00} count is less than or equal to 1143. the resulting p-value is 0.07. while this result is not significant at an arbitrary 5% level, it shows that it is unlikely that the observed distribution could be produced by chance. the code for the numerical simulation is available on-line. my argument here does not depend on an exact p-value, but rather on a demonstration that it is unlikely that the observed distribution could be obtained by accident, together with consideration of a local model violating chsh using postselection and the presence of a no-signaling violation in the experimental data (see section 4). taken together, i argue that the hensen et al. data for the {00} experiment was postselected. the look-elsewhere effect is not included because i test the specific null hypothesis that the {00} experiment is postselected (discussed further in section 5). this hypothesis was chosen in advance because the {00} experiment is special for chsh. it is the 'converse' to the {11} experiment, which is the only term in chsh that is negatively weighted, and so if one wanted to choose a term to postselect so as to have a maximum effect on the chsh metric, and so as to minimize the amount of required postselection, one would choose the {00} experiment. the charge of 'harking' (hypothesizing after the results are known) can therefore be rejected. acting as devil's advocates (see section 5), let us suppose that the {00} postselection loses only mismatch events (events with opposite outcomes). the failure of this supposition does not weaken the conclusions of section 2. to investigate the effect of this postselection i created a local numerical simulation model saturating the chsh inequality at the classical limit of 2, together with variable postselection of {00} mismatch events (the number of lost events can be selected via user input when running the simulation). the number of events per run is set to 245 as for the hensen et al. experiment (there are no entanglement failure events in a simulation), and 100000 runs are performed to obtain estimates for the mean chsh metrics s and k, and the mean counts characterizing the distribution of settings. for example, i give below the result of running the simulation with 14 lost {00} mismatch events. it can be seen that the postselection artifactually increases the chsh metrics s and k beyond the classical limits. the closeness of the simulated s and k values to the values reported by hensen et al., as well as the similarity of the patterns of counts characterizing the distribution of settings, are stunning.

....
mean number of 0's for a = 109
mean number of 1's for a = 122
mean number of 0's for b = 109
mean number of 1's for b = 122
mean number of 00 events = 47
mean number of 01 events = 61
mean number of 10 events = 61
mean number of 11 events = 61
----------------------------------------------
mean s = 2.414843
mean k = 197.250380
....
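the essence of such a simulation can be sketched as follows. this is not the author's published model (which is only described verbally above); it is one possible local hidden-variable model that saturates chsh at exactly 2 while producing mismatch outcomes in the {00} setting, so that dropping a few of those events inflates the chsh value. all names and the mixing probability are hypothetical.

....
import numpy as np

def chsh_after_postselection(n_events=245, n_lost=14, p_mix=0.5, n_runs=1000, seed=0):
    rng = np.random.default_rng(seed)
    s_values = []
    for _ in range(n_runs):
        events = []
        for _ in range(n_events):
            sa, sb = rng.integers(0, 2), rng.integers(0, 2)
            if rng.random() < p_mix:
                # deterministic strategy a0=+1, a1=-1, b0=-1, b1=+1 (chsh value 2)
                a = 1 if sa == 0 else -1
                b = -1 if sb == 0 else 1
            else:
                # deterministic strategy with all outcomes +1 (chsh value 2)
                a, b = 1, 1
            events.append((sa, sb, a, b))

        # drop up to n_lost {00} events with opposite outcomes (postselection)
        kept, dropped = [], 0
        for sa, sb, a, b in events:
            if dropped < n_lost and sa == 0 and sb == 0 and a != b:
                dropped += 1
                continue
            kept.append((sa, sb, a, b))

        # chsh metric from the surviving events
        corr = {}
        for xa, xb in [(0, 0), (0, 1), (1, 0), (1, 1)]:
            vals = [a * b for sa, sb, a, b in kept if (sa, sb) == (xa, xb)]
            corr[(xa, xb)] = np.mean(vals) if vals else 0.0
        s_values.append(corr[(0, 0)] + corr[(0, 1)] + corr[(1, 0)] - corr[(1, 1)])
    return float(np.mean(s_values))
....

without postselection (n_lost=0) the mean chsh value of this model stays at the classical limit of 2; removing {00} mismatch events raises the {00} correlation and hence the mean chsh value above 2, which is exactly the mechanism described in the text.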
figure 1 shows the relationship between the number of lost events and the resulting value of s. loss of 14 or more events will reproduce or exceed the value reported by hensen et al. the code for the numerical simulation model is available on-line.

figure 1. effect of postselected {00} mismatch events on s for a local realist model operating at the classical limit.

bednorz and later adenier and khrennikov demonstrated statistically significant violation of the no-signaling criterion in the data of the hensen et al. experiment. postselection of {00} events produces an asymmetry that results in an artifactual violation of no-signaling. it is not difficult to see how this works. consider side a's counts of 1's for the {00} and {01} experiments. if side b's setting is not affecting the counts, as required by no-signaling, then these two side a counts should be close to each other. however, postselection of {00} events will selectively reduce the first side a count, thereby biasing the results and producing an artifactual violation of no-signaling. this is confirmed by numerical simulation. the finding of apparent no-signaling violation in the hensen et al. data therefore strengthens the analysis of this paper. note that neither bednorz nor adenier and khrennikov discussed postselection, and they did not discover the anomalous postselection in the hensen et al. data that i report here. in an interesting thread at _pubpeer_ discussing the hensen et al. experiment, 'peer 6' notices the missing events in the public release of data by hensen et al. with my own grammatical corrections and paraphrasing i give the main points made by 'peer 6':

1) the published data does not contain raw data. instead, it contains preprocessed data, because the raw data from stations a, b, and c have already been brought together into a single file, and the file contains only 4746 events, whereas the paper reports orders of magnitude more events.

2) in the supplementary information, hensen et al. say: "every few hundred milliseconds, the recorded events are transferred to the pc. during the experiment, about 2 megabytes of data is generated every second. to keep the size of the generated data-set manageable, blocks of about 100000 events are saved to the hard drive only if an entanglement heralding event is present in that block." therefore, what is published (4746 events, approximately 420 kilobytes) is only about 5% of a single block. while elimination of blocks without valid events is legitimate (although it reduces the precision of possible rng tests in the analysis), the deletion process also allows for deletion of {00} mismatch events. the processing to transform the individual raw lists to the published combined list is neither documented nor published, and the individual raw lists themselves are not available for inspection. this processing is exactly the critical step where postselection can occur. the withholding of the data and processing is troubling, given the importance of the result for the foundations of physics. hensen et al. address the distribution of randomness for setting choices in the experimental data, arguing that it is fair and uniform. however, the counts they provide are insufficient to expose the anomaly reported here. hensen et al. write: "we can get further insight by looking at all the setting choices recorded during the test.
around every potential heralding event about 5000 settings are recorded, for which we find a local p-value of 0.57 (table 1), consistent with a uniform setting distribution." unfortunately, the analysis code and data justifying this conclusion are not available, and the claim of uniformity appears to be false for the published data, as i demonstrated in section 2. the overall counts in table 1 apparently contain all the excluded blocks, but the chsh calculation is performed only over the published data. if the overall counts are indeed highly uniform as claimed while the published subset of the overall data is not uniform, the conclusion that the published data was postselected is further strengthened. one also wonders why great pains are taken by hensen et al. to show uniformity for the overall data, but nothing is said about the uniformity of the published subset of the data. understandably, one might be expected to speculate about the specific mechanism of the postselection demonstrated here. given the transcending importance of the result for the foundations of physics, including einstein's legacy therein, it is appropriate to address matters from two stances. the first stance is that of the devil's advocate, a stance that dispenses with decorum and 'speaks truth to power'. the second stance is that of the dispassionate observer, a stance that refrains from speculating on matters not in full evidence. the devil's advocate reasons as follows. consider the null hypothesis that hensen et al. did not manually (possibly inadvertently) postselect {00} mismatch events. is this hypothesis rejected by the evidence of this paper? given that the probability of obtaining the observed distribution of random joint setting choices in the experimental data by chance is 0.07 (section 2), that an apparent violation of no-signaling is observed in the data, and that a local model that postselects {00} mismatch events produces results indistinguishable from the experimental results, the null hypothesis can be rejected, i.e., hensen et al. manually (possibly inadvertently) postselected {00} mismatch events. the dispassionate observer reasons as follows: consider the null hypothesis that hensen et al. did not manually (possibly inadvertently) postselect {00} events. is this hypothesis rejected by the evidence of this paper? given that the probability of obtaining the observed distribution of random joint setting choices in the experimental data by chance is 0.07 (section 2), and that an apparent violation of no-signaling is observed in the data, the null hypothesis can be rejected, i.e., hensen et al. manually (possibly inadvertently) postselected {00} events. the difference between the two stances is subtle. the devil's advocate asserts in the null hypothesis that the postselected events are {00} mismatch events, while the dispassionate observer does not (however, both conclude that the {00} data is postselected).
while the reasoning of the devil's advocate already appears compelling, a direct determination between the stances could easily be made by inspecting the individual raw lists. however, these raw lists have not been released (hensen et al. have thus far declined to provide access to them). nevertheless, regardless of which stance we adopt, the experiment must be considered to be placed in doubt. it is theoretically possible, of course, that defective operation of the devices of the experiment, or improper design of the experiment, could account for the observed postselection. however, it is very difficult to conceive of such defects, and so this possibility can be considered to be implausible. nevertheless, if this were the case, the experiment would again clearly be placed in doubt. the design and implementation of the hensen et al. experiment is laudable, and experiments like it offer the prospect of deciding the debate over nonlocality. however, the experiments can be decisive only when the data and analyses are correct and transparent, and when undocumented steps are not present in the analysis and interpretation. one could consider analyzing the second run of the hensen et al. experiment to see if similar postselection appears. however, hensen et al. concede that an equipment failure occurred in the middle of the run, placing the previously recorded data in doubt. i choose therefore, for the purposes of this paper, to confine the discussion to the first, successfully executed run. my personal views on quantum nonlocality are by now hopefully well-known. i argue that the correct quantum prediction for epr must not use lüders rule, which means that nonlocal correlations are not predicted by quantum mechanics, and that the experiments, when properly designed, analyzed, and interpreted, confirm locality and disconfirm quantum nonlocality. einstein's legacy and lorentz invariance are safe, and physics remains consistent and coherent.

b. hensen, h. bernien, a. e. dréau, a. reiserer, n. kalb, m. s. blok, j. ruitenberg, r. f. l. vermeulen, r. n. schouten, c. abellán, w. amaya, v. pruneri, m. w. mitchell, m. markham, d. j. twitchen, d. elkouss, s. wehner, t. h. taminiau, and r. hanson, "loophole-free bell inequality violation using electron spins separated by 1.3 kilometres", _nature_ *526*, 682-686 (2015).

b. hensen, h. bernien, a. e. dréau, a. reiserer, n. kalb, m. s. blok, j. ruitenberg, r. f. l. vermeulen, r. n. schouten, c. abellán, w. amaya, v. pruneri, m. w. mitchell, m. markham, d. j. twitchen, d. elkouss, s. wehner, t. h. taminiau, and r. hanson, "loophole-free bell inequality violation using electron spins separated by 1.3 kilometres", _nature_ *526*, supplementary information 15759 (2015).

b. hensen, n. kalb, m. s. blok, a. e. dréau, a. reiserer, r. f. l. vermeulen, r. n. schouten, m. markham, d. j. twitchen, k. goodenough, d. elkouss, s. wehner, t. h. taminiau, and r. hanson, "loophole-free bell test using electron spins in diamond: second experiment and additional analysis", arxiv:1603.05705 [quant-ph] (2016).

d. a. graft, "on reconciling quantum mechanics and local realism", proceedings of spie conference "the nature of light: what are photons? 5", spie, bellingham (2013). also arxiv:quant-ph 1309.1153 (2013).
it is shown that the data of the hensen et al. bell test experiment exhibits anomalous postselection that can fully account for the apparent violation of the chsh inequality. a simulation of a local realist model implementing similar postselection is presented. the model produces an apparent violation of chsh indistinguishable from that of the experiment. the experimental data also appears to violate no-signaling, and it is shown how postselection can produce an artifactual violation of no-signaling. the hensen et al. experiment does not succeed in rejecting classical locality and therefore does not confirm quantum nonlocality.

*keywords*: bell/chsh inequality, quantum nonlocality, hensen et al. bell test, postselection.
by limiting the molecular treatment to regions where it is needed, a hybrid method allows the simulation of complex thermo-fluid phenomena which require modeling at the microscale without the prohibitive cost of a fully molecular calculation. in what follows we provide an overview of this rapidly expanding field and discuss recent developments. we also present archetypal hybrid methods for incompressible and compressible flows; the hybrid method for incompressible gas flow is based on the schwarz alternating coupling method with chapman-enskog boundary condition imposition, while the hybrid method for compressible flow is the recently developed flux-coupling-based, multispecies adaptive mesh and algorithm refinement scheme that extends adaptive mesh refinement by introducing the molecular description at the finest level of refinement. over the years a fair number of hybrid simulation frameworks have been proposed, leading to some confusion over the relative merits and applicability of each approach. original hybrid methods focused on dilute gases, which are arguably easier to deal with within a hybrid framework than dense fluids, mainly because boundary condition imposition is significantly easier in gases. the first hybrid methods for dense fluids appeared a few years later. these initial attempts have led to a better understanding of the challenges associated with hybrid methods. to a large extent, the two major issues in developing a hybrid method are the choice of a coupling method and the imposition of boundary conditions on the molecular simulation. generally speaking, these two can be viewed as decoupled, in the sense that the coupling technique can be developed on the basis of matching two compatible hydrodynamic descriptions that are equivalent over some region of space, and can thus be borrowed from the already existing and extensive continuum-based numerical methods literature. the choice of coupling technique is further discussed in section [coupling]. boundary condition imposition can again be considered in a decoupled sense and can be posed as the general problem of imposing "macroscopic" boundary conditions on a molecular simulation. in our opinion, this is a very challenging problem that has not, in general, been resolved completely satisfactorily to date. boundary condition imposition on the molecular sub-domain is discussed in section [bcsection]. boundary condition imposition on the continuum sub-domain is generally well understood, as is the process of extracting macroscopic fields from molecular simulations (typically achieved through averaging). in section [dsmcsec] we give a brief description of the direct simulation monte carlo (dsmc), the dilute gas simulation method used in this work. in section [schwarz] we demonstrate a hybrid scheme suitable for low speed, incompressible gaseous flows based on the schwarz alternating method. this scheme introduces chapman-enskog boundary condition imposition in incompressible hybrid formulations. subsequently, in section [compressible] we discuss a recently developed multispecies compressible formulation for gases which introduces the molecular simulation as the finest level of refinement within a fully adaptive mesh refinement scheme. we finish with some concluding remarks. coupling a continuum to a molecular description is meaningful in a region where both can be presumed valid.
in choosing a coupling method it is therefore convenient to draw upon the wealth of experience and large cadre of coupling methods that nearly 50 years of continuum computational fluid dynamics have brought us. coupling methods for the compressible and incompressible formulations generally differ, since the two correspond to two different physical and mathematical hydrodynamic limits. faithful to their mathematical formulations, the compressible formulation lends itself naturally to time-explicit flux-based (control-volume-type) coupling, while incompressible formulations are typically coupled using either state properties (dirichlet) or gradient information (neumann). given that the two formulations have different limits of applicability and physical regimes in which each is significantly more efficient than the other, care must be exercised when selecting the ingredients of the hybrid method. in other words, the choice of a coupling method and continuum sub-domain formulation needs to be based on the degree to which compressibility effects are important in the problem of interest, and not on a preset notion that a particular coupling method is more appropriate than all others. the latter approach was recently pursued in a variety of studies which enforce the use of a control-volume-type approach on steady and essentially incompressible problems to achieve coupling by time-explicit flux matching. this approach is not recommended. on the contrary, for an efficient simulation method, similarly to the case of continuum solution methods, it is important to allow the flow _physics_ to dictate the appropriate formulation, while the numerical implementation is chosen to cater to the particular requirements of the latter. below, we expand on some of the considerations which influence the choice of coupling method under the assumption that the hybrid method is applied to problems of _practical interest_ and therefore the continuum subdomain is appropriately large. our discussion focuses on timescale considerations, which are more complex than but equally important as the limitations resulting from lengthscale considerations, such as the size of the molecular region(s). it is well known that the timestep for explicit integration of the compressible formulation, Δt_c, scales with the physical timestep of the problem, Δt = Δx/u (where Δx is the numerical grid spacing and u is the characteristic velocity), according to Δt_c ≈ M Δt (1), where M is the mach number. as the mach number becomes small, we are faced with the well-known stiffness problem whereby the numerical efficiency of the solution method suffers because the compressible formulation resolves the acoustic modes when those are not important. for this reason, when the mach number is small, the incompressible formulation is used, which allows integration at the physical timestep.
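to make this stiffness concrete, the small sketch below counts the explicit timesteps needed to cover a given physical evolution time under the scaling of equation (1); the function name and the illustrative numbers are placeholders.

....
def step_counts(total_time, dx, u, mach):
    # physical (incompressible) timestep and acoustic-limited compressible timestep,
    # assuming the scaling of equation (1): dt_c ~ mach * dt_phys
    dt_phys = dx / u
    dt_c = mach * dt_phys
    return total_time / dt_c, total_time / dt_phys

# illustrative numbers: at mach 0.01 the explicit compressible integration needs
# about 100 times more timesteps than integration at the physical timestep
n_compressible, n_incompressible = step_counts(total_time=1.0, dx=1e-3, u=1.0, mach=0.01)
....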
in the hybrid case matters are complicated by the introduction of the molecular integration timestep, which is at most of the order of the compressible timestep (this occurs in some dilute gas cases, when the cell size is comparable to the molecular mean free path) and in most cases significantly smaller. one consequence of equation (1) is that as the global domain of interest grows, the total integration time grows, and transient calculations in which the molecular subdomain is explicitly integrated in time become more computationally expensive and eventually infeasible. the severity of this problem increases with decreasing mach number and makes unsteady incompressible problems very computationally expensive. new integrative frameworks which coarse-grain the time integration of the molecular subdomain are therefore required. fortunately, for low speed steady problems, implicit (iterative) methods exist which provide solutions without the need for explicit integration of the molecular domain to the global problem steady state. the particular method used here is known as the schwarz method and is discussed further in section [schwarz]. this method decouples the global evolution timescale from the molecular evolution timescale (and timestep) by achieving convergence to the global problem steady state through an iteration between steady state solutions of the continuum and molecular subdomains. because the molecular subdomain is small, explicit integration to its steady state is feasible. although the steady assumption may appear restrictive, it is interesting to note that the vast majority of both compressible and incompressible test problems solved to date, and all incompressible practical problems of interest solved by hybrid methods, have been steady. a variety of other iterative methods may be suitable as they provide for timescale decoupling. the choice of the schwarz coupling method, which uses state variables instead of fluxes to achieve matching, was motivated by the fact (as explained below) that state variables suffer from smaller statistical noise and are thus easier to prescribe on a continuum formulation. the above observations do not preclude the use of the compressible formulation in the continuum subdomain for low speed flows. in fact, preconditioning techniques which allow the use of the compressible formulation at very low mach numbers have been developed. such a formulation can, _in principle_, be used to solve the continuum sub-problem while this is being coupled to the molecular sub-problem via an implicit (e.g. schwarz) iteration. what should be avoided is a time-explicit control-volume-type coupling procedure for solving essentially incompressible steady state problems. the issues discussed above have not been very apparent to date because in typical test problems published so far, the continuum and atomistic subdomains are of the same size (and, of course, small). in this case the large cost of the molecular subdomain masks the cost of the continuum subdomain and also typical evolution timescales (or times to steady state) are small. it should not be forgotten, however, that hybrid methods make sense when the continuum subdomain is significantly larger than the molecular subdomain.
although the continuum subdomain stiffness in the low mach number limit (see equation (1)) may be remedied by implicit timestepping methods or preconditioning approaches, control-volume-based hybrid approaches suffer from adverse signal-to-noise ratios in connection with the averaging required for imposition of boundary conditions from the molecular sub-domain onto the continuum sub-domain. in the case of an ideal gas (where compressible formulations are typical), it has been shown that, for the same number of samples, flux (shear stress, heat flux) averaging exhibits a relative noise that is larger than the relative noise in the corresponding state variable (velocity, temperature) by a factor of the order of the inverse knudsen number; here the knudsen number is based on the characteristic lengthscale of the transport gradients and on the mean free path, which is expected to be much smaller than this lengthscale since, by assumption, a continuum sub-domain is present. it thus appears that flux coupling will be significantly disadvantaged in this case, since the number of samples required to reach a given noise level scales as the square of this factor times the number of samples required by state-variable averaging. on the other hand, schwarz-type iterative methods based on the incompressible physics of the flow require a fair number of iterations for convergence. these iterations require the re-evaluation of the molecular solution. this is an additional computational cost that is not shared by control-volume-type or explicit incompressible approaches. at this time, the choice between a time-explicit formulation and an iterative (schwarz-type) approach for incompressible unsteady problems is not clear and may be problem dependent. despite the fact that as the domain size grows the advantage seems to shift towards iterative methods, we should recall that, from equation (1), unless time coarse-graining techniques are developed, large, low-speed, unsteady problems are currently too expensive to be feasible by either approach. consider the molecular region on the boundary of which we wish to impose a set of hydrodynamic (macroscopic) boundary conditions. typical implementations require the use of particle reservoirs (see fig. [reservoirfig]) in which particle dynamics may be altered in such a way that the desired boundary conditions appear on this boundary; the hope is that the influence of the perturbed dynamics in the reservoir regions decays sufficiently fast and does not propagate into the region of interest, that is, the relaxation distance both for the velocity distribution function and for the fluid structure is small compared to the characteristic size of the molecular region. in a dilute gas, the non-equilibrium distribution function in the continuum limit has been characterized and is known as the chapman-enskog distribution. use of this distribution to impose boundary conditions on molecular simulations of dilute gases results in a robust, accurate and theoretically elegant approach. typical implementations require particle generation and initialization within the reservoir. particles that move into the molecular domain within the simulation timestep are added to the simulation, whereas particles remaining in the reservoir are discarded. see section [schwarz] for details.
unfortunately, for dense fluids, where not only the particle velocities but also the fluid structure is important and needs to be imposed, no theoretical results for the corresponding distributions exist. a related issue is that of domain termination; due to particle interactions, the molecular domain, or in the presence of a reservoir the reservoir region, needs to be terminated in a way that does not have a big effect on the fluid state inside the region of interest. as a result, researchers have experimented with possible methods to impose boundary conditions. it is now known that, similarly to a dilute gas, use of a maxwell-boltzmann distribution for the velocities leads to slip. a chapman-enskog distribution has also been used to impose boundary conditions in order to generate a dense-fluid shear flow. in this approach, particles crossing the boundary of the molecular domain are assigned velocities drawn from a chapman-enskog distribution parametrized by the local values of the required velocity and stress boundary condition. although this approach was only tested for a couette flow, it appears to give reasonable results (within molecular fluctuations). because in couette flow no flow normal to the boundary exists, the boundary can be used as a symmetry boundary separating two back-to-back shear flows; this sidesteps the issue of domain termination. in a different approach, flekkoy et al. use external forces to impose boundary conditions. more specifically, in the reservoir region they apply an external field of such magnitude that the total force on the fluid particles in the reservoir region is the one required by momentum conservation. they then terminate their reservoir region by using an ad-hoc weighting factor for the distribution of this force on particles within the reservoir that prevents particles from leaving the reservoir region. in particular, they chose a weighting factor that diverges as particles approach the edge of the reservoir, such that particles do not escape the reservoir region while particles introduced there move towards the molecular domain. particles introduced into the reservoir are given velocities drawn from a maxwell-boltzmann distribution, while a langevin thermostat keeps the temperature constant. the method appears to be successful, although the non-unique (ad-hoc) choice of force fields and maxwell-boltzmann distribution makes it not very theoretically pleasing. it is also not clear what the effect of these forces is on the local fluid state (it is well known that even in a dilute gas, gravity-driven flow exhibits significant deviations from navier-stokes behavior), but this effect is probably negligible since the force fields act only in the reservoir region. delgado-buscalioni and coveney refined the above approach by using the usher algorithm to insert particles in the energy landscape such that they have the desired specific energy, which is beneficial for imposing a desired energy current while eliminating the risk of particle overlap, at some computational cost. this approach, however, uses a maxwell-boltzmann distribution for the initial velocities of the inserted particles. temperature gradients are imposed by a small number of thermostats placed in the direction of the gradient. although no proof exists that the disturbance to the particle dynamics is small, it appears that this technique is successful at imposing boundary conditions with moderate error. boundary conditions on md simulations can also be imposed through the method of constraint dynamics. although the original approach did not allow hydrodynamic fluxes across the matching interface, this feature can be integrated into this approach with a
suitable domain termination. a method for terminating molecular dynamics simulations with a small effect on particle dynamics has also been suggested and used; it simply involves making the reservoir region fully periodic. in this manner, the boundary conditions on the molecular domain also impose a boundary value problem on the reservoir, where the inflow to one region is the outflow from the other. as the reservoir becomes bigger, the gradients in it become smaller, and thus the flowfield in the reservoir will have a small effect on the solution in the region of interest. the disadvantage of this method is the number of particles needed to fill the reservoir as it grows, especially in high dimensions. we believe that significant contributions can still be made by developing methods to impose boundary conditions in hydrodynamically consistent and, most importantly, rigorous ways. the dsmc method was proposed by bird in the 1960s and has been used extensively to model rarefied gas flows. a comprehensive discussion of dsmc can be found in the review article by alexander et al. the dsmc algorithm is based on the assumption that a small number of representative "computational particles" can accurately capture the hydrodynamics of a dilute gas as given by the boltzmann equation. air under standard conditions narrowly meets the dilute gas criterion. empirical results show that a small number of computational particles per cubic molecular mean free path is sufficient to capture the relevant physics. this is approximately 2 orders of magnitude smaller than the actual number of gas atoms/molecules contained in the same volume. this is one source of dsmc's significant computational advantage over a fully molecular simulation. dsmc solves the boltzmann equation using a splitting approach: the time evolution of the system is approximated by a sequence of discrete timesteps in which particles successively undergo collisionless advection and collisions. collisions are performed between randomly chosen particle pairs within small cells of prescribed linear size. the flow solution is determined by averaging the individual particle properties over space and time. this approach has been shown to produce correct solutions of the boltzmann equation in the limit of vanishing timestep and cell size. the splitting approach eliminates the computational cost associated with integrating the equations of motion of all particles, but most importantly allows the timestep to be significantly larger (see also below) than a typical timestep in a hard sphere molecular dynamics simulation. this is another reason why dsmc is significantly more computationally efficient than "brute force" molecular dynamics. recent studies have shown that for steady flows, or flows which evolve at timescales that are long compared to the molecular relaxation times, a finite timestep leads to a truncation error that manifests itself in the form of timestep-dependent transport coefficients; this error has been shown to be of the order of 5% when the timestep is of the order of a mean free time, and it vanishes as the timestep is reduced. a quadratic dependence of the transport coefficients on the collision cell size has also been demonstrated. although in some cases compressibility may be important, a large number of applications are typically characterized by flows where use of the incompressible formulation results in a significantly more efficient approach. as explained in the introduction section, our definition of incompressible formulation is based on the _flow physics_ and not on the numerical method used. although we have used here a finite element discretization based on the
incompressible formulation, we believe that a preconditioned compressible formulation could also be used to solve the continuum subdomain problem if it could be successfully matched to the molecular solution through a coupling method which takes into account the elliptic nature of the (low speed) problem to provide solution matching consistent with the flow physics. here, matching is achieved through an iterative procedure based on the schwarz alternating method for the treatment of steady-state problems. the schwarz method was originally proposed for molecular dynamics-continuum methods and later extended, but it is equally applicable to dsmc-continuum hybrid methods. this approach was chosen because of its ability to couple different descriptions through dirichlet boundary conditions (easier to impose on dense-system molecular simulations compared to flux conditions, because fluxes are non-local in dense systems), and its ability to reach the solution steady state in an implicit manner. the importance of the latter characteristic cannot be overemphasized; the implicit convergence in time guarantees the timescale decoupling that is necessary for the solution of macroscopic problems; the integration of molecular trajectories at the molecular timestep for total times corresponding to macroscopic evolution times is, and will for a long time be, infeasible. within the schwarz coupling framework, an overlap region facilitates information exchange between the continuum and atomistic subdomains in the form of dirichlet boundary conditions. a steady state continuum solution is first obtained using boundary conditions taken from the atomistic subdomain solution. for the first iteration this latter solution can be a guess. a steady state atomistic solution is then found using boundary conditions taken from the continuum subdomain. this exchange of boundary conditions corresponds to a single schwarz iteration. successive schwarz iterations are repeated until convergence, i.e. until the solutions in the two subdomains are identical in the overlap region. the schwarz method was recently applied to the simulation of flow through micromachined filters. these filters have passages that are sufficiently small to require a molecular description for the simulation of the flow through them. depending on the geometry and number of filter stages, the authors have reported computational savings ranging from 2 to 100. the approach used in that work employed a maxwellian velocity distribution and a "control mechanism" to impose the flowfield on the molecular simulation. this approach, although successful in quasi one-dimensional flows, is not very general; additionally, it is well known that using a maxwellian distribution to impose hydrodynamic boundary conditions will, in general, lead to slip (a discrepancy between the imposed and observed boundary conditions) if uncorrected. general boundary condition imposition on dilute-gas molecular simulations can be performed using the chapman-enskog velocity distribution. this approach eliminates the need for a feedback correction since supplying the correct local distribution function eliminates slip. a chapman-enskog procedure for the schwarz method is described below. extensions of the schwarz method to time-dependent problems are currently under investigation, although, as shown by equation (1), when the mach number is low, the disparity between the molecular and hydrodynamic timescales makes this a very stiff problem.
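the alternating structure just described can be summarized in a short sketch. the solver and boundary-extraction routines are placeholders passed in as callables, the boundary data is represented here as a flat sequence of values, and the convergence test on the overlap-region data is only one possible choice; this is a schematic of the iteration, not of any specific implementation.

....
def schwarz_iteration(continuum_solve, atomistic_solve,
                      bc_from_continuum, bc_from_atomistic,
                      initial_atomistic_bc, max_iter=20, tol=1e-3):
    atomistic_bc = initial_atomistic_bc
    for iteration in range(max_iter):
        # steady continuum solution with dirichlet data taken from the atomistic side
        continuum_solution = continuum_solve(atomistic_bc)
        continuum_bc = bc_from_continuum(continuum_solution)

        # steady atomistic solution with dirichlet data taken from the continuum side
        atomistic_solution = atomistic_solve(continuum_bc)
        new_bc = bc_from_atomistic(atomistic_solution)

        # stop when the boundary data exchanged in the overlap region no longer changes
        change = max(abs(a - b) for a, b in zip(new_bc, atomistic_bc))
        atomistic_bc = new_bc
        if change < tol:
            break
    return continuum_solution, atomistic_solution
....

in the dsmc-continuum case considered here, the atomistic solve would run the dsmc subdomain to its steady state using chapman-enskog reservoirs, as described in the next section.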
in this section we discuss the schwarz alternating method in the context of the solution of the driven cavity problem. we pay particular attention to the imposition of boundary conditions on the dsmc domain using a chapman-enskog distribution, which is arguably the most rigorous and general approach. for illustration and verification purposes we solve the steady driven cavity problem (see figure [fig:domain]), in which the continuum subdomain is described by the navier-stokes equations solved by a finite element discretization. the hybrid solution is expected to recover the fully continuum solution since the atomistic subdomain is far from solid boundaries and from regions of large velocity gradients. this test therefore provides a consistency check for the scheme. standard dirichlet velocity boundary conditions for a driven cavity problem were applied on the continuum subdomain; the tangential velocity component on the left, right and lower walls was held at zero while the upper wall was driven at a constant velocity, and the wall-normal velocity component on all boundaries was set to zero. despite the high driving velocity, the flow is essentially incompressible and isothermal. the pressure is scaled by setting the middle node on the lower boundary at atmospheric pressure. the imposition of boundary conditions on the atomistic subdomain is facilitated by a particle reservoir as shown in figure [fig:interpolations]. particles are created at locations within the reservoir with velocities drawn from a chapman-enskog velocity distribution. the chapman-enskog distribution is generated using the mean and gradient of the velocities from the continuum solution; likewise, the number and spatial distribution of particles in the reservoir are chosen according to the overlying continuum cell mean density and density gradients. after particles are created in the reservoir they move for a single dsmc timestep. particles that enter dsmc cells are incorporated into the standard convection/collision routines of the dsmc algorithm. particles that remain in the reservoir are discarded. particles that leave the dsmc domain are also deleted from the computation. the rapid convergence of the schwarz approach is demonstrated in figure [fig:u_convergence]. the continuum numerical solution is reached to within a small tolerance by the 3rd schwarz iteration and even more closely by the 10th schwarz iteration. our error estimate, which includes the effects of statistical noise and discretization error due to finite timestep and cell size, is approximately 2.5%. similar convergence is also observed for the other velocity component. the close agreement with the fully continuum results indicates that the chapman-enskog procedure is not only theoretically appropriate but also robust.
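the reservoir-based particle insertion described above can be sketched as follows. the geometry (a reservoir slab feeding a dsmc domain at x > 0), the uniform spatial sampling and the velocity-sampling callable are all placeholder assumptions; in particular, the chapman-enskog sampler itself is not shown and would be parametrized by the local mean velocity and velocity gradients taken from the continuum solution.

....
import numpy as np

def reservoir_insertion(n_new, reservoir_lo, reservoir_hi, dt, sample_velocity, seed=0):
    rng = np.random.default_rng(seed)

    # create particles at random locations inside the reservoir slab
    pos = rng.uniform(reservoir_lo, reservoir_hi, size=(n_new, 3))

    # velocities are drawn from a (chapman-enskog) distribution supplied by the caller
    vel = np.array([sample_velocity(p) for p in pos])

    # advect for a single dsmc timestep; only particles that cross into the
    # dsmc domain (x > 0 here) are kept, the rest are discarded
    pos = pos + vel * dt
    entered = pos[:, 0] > 0.0
    return pos[entered], vel[entered]
....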
despite the finite reynolds number of the flow, the schwarz method (originally only shown to converge for elliptic problems) converges with negligible error. this is in agreement with the findings of liu, who has recently shown that the schwarz method is expected to converge for sufficiently small reynolds numbers.

velocity component along the plane with successive schwarz iterations.

as discussed above, consideration of the compressible equations of motion leads to hybrid methods which differ significantly from their incompressible counterparts. the hyperbolic nature of compressible flows means that steady state formulations typically do not offer a significant computational advantage and, as a result, explicit time integration is the preferred solution method and flux matching is the preferred coupling method. given that the characteristic evolution time scales with the system size, the largest problem that can be captured by a hybrid method is limited by the separation of scales between the molecular integration time and this evolution time. local mesh refinement techniques minimize the regions of space that need to be integrated at small cfl timesteps (due to a fine mesh), such as the regions adjoining the molecular subdomain. implicit timestepping methods can also be used to speed up the time integration of the continuum subdomain. unfortunately, although both approaches enhance the computational efficiency of the continuum sub-problem, they do not alleviate the issues arising from the disparity between the molecular timestep and the total integration time. as explained in the introduction, overwhelming computational costs can be incurred when using a control-volume-type approach to capture steady phenomena where compressibility effects are negligible, as is the case in most dense fluids. in this case the integration timestep of the continuum subdomain also becomes of the order of the molecular timescale, while the continuum subdomain is, presumably, much larger than the molecular subdomain and evolves at a much longer timescale. for an example, see the reported case where a compressible formulation for liquid flow results in a cfl timestep of 0.17 times the lennard-jones timescale. this appears not to have been fully appreciated by various groups which have attempted to develop dense-fluid hybrid methods based on the compressible continuum formulation and control-volume-type matching procedures to solve steady and essentially incompressible problems. in hybrid continuum-dsmc methods, locally refining the continuum solution cells to the size of dsmc cells leads to a particularly seamless hybrid formulation in which dsmc cells differ from the neighboring continuum cells only by the fact that they are inherently fluctuating. (the dsmc timestep required for accurate solutions is very similar to the cfl timestep of a compressible formulation.) coupled to a mesh refinement method, this approach can tackle true multiscale phenomena, as shown below. another characteristic inherent to compressible formulations is the possibility of describing parts of the domain by the euler equations of motion. in that case, consistent coupling to the molecular formulation can be performed using a maxwell-boltzmann distribution.
in a recent paper, alexander et al. have shown that explicit time-dependent flux-based formulations preserve the fluctuating nature of the molecular description within the molecular regions, but the fluctuation amplitude decays rapidly within the continuum regions; correct fluctuation spectra can be obtained in the entire domain by solving a fluctuating hydrodynamics formulation in the continuum sub-domain. the compressible formulation of garcia et al., referred to as amar (adaptive mesh and algorithm refinement), pioneered the use of mesh refinement as a natural framework for the introduction of the molecular description in a hybrid formulation. in amar the typical continuum mesh refinement capabilities are supplemented by an algorithmic refinement (continuum to atomistic) based on continuum breakdown criteria. this seamless transition is both theoretically and practically very appealing. in what follows we briefly discuss a recently developed _fully adaptive_ amar method. in this method dsmc provides an atomistic description of the flow while the compressible two-fluid euler equations serve as the continuum scale model. the continuum and atomistic representations are coupled by matching fluxes at the continuum-atomistic interfaces and by proper averaging and interpolation of data between scales. this is performed in three steps: a) the continuum solution values are interpolated to create dsmc particles in the reservoir region, here called buffer cells, b) the conserved quantities in each continuum cell overlaying the dsmc region are replaced by averages over particles in the same region, and c) fluxes recorded when particles cross the dsmc interface are used to correct the continuum solution in cells adjacent to the dsmc region. this coupling procedure makes the dsmc region appear as any other level in an amr grid hierarchy. similarly to the overlap region described for the schwarz method above, the euler solution information is passed to the particles via buffer cells surrounding the dsmc region. at the beginning of each dsmc integration step, particles are created in the buffer cells using the continuum hydrodynamic values. the above algorithm allows grid and algorithm refinement based on any combination of flow variables and their gradients. density-gradient-based refinement has been found to be generally robust and reliable. concentration gradients or concentration values within some interval are also effective refinement criteria, especially for multi-species flows involving concentration interfaces. in this particular implementation, refinement is triggered by spatial gradients exceeding user-defined tolerances. this approach follows from the continuum breakdown parameter method proposed by bird. using the amr capabilities provided by the structured adaptive mesh refinement application infrastructure (samrai) developed at the lawrence livermore national laboratory, the above adaptive framework has been implemented in a fully three-dimensional, massively parallel form in which multiple molecular (dsmc) patches can be introduced or removed as needed.
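a schematic of the three coupling steps listed above is given below. the data layout and all helper routines (passed in as callables) are placeholders, and the flux correction shown is only the generic finite-volume update implied by the description; this is a sketch of the control flow, not of the samrai-based implementation.

....
def amar_coupling_step(euler_state, buffer_cells, overlaid_cells, interface_cells,
                       create_particles, particle_average,
                       particle_flux, continuum_flux, dt, face_area, cell_volume):
    # a) continuum -> particles: fill each buffer cell with dsmc particles whose
    #    number and velocities follow the interpolated continuum state
    for cell in buffer_cells:
        create_particles(cell, euler_state[cell])

    # b) particles -> continuum: conserved quantities of continuum cells that
    #    overlay the dsmc region are replaced by averages over their particles
    for cell in overlaid_cells:
        euler_state[cell] = particle_average(cell)

    # c) flux correction: continuum cells adjacent to the dsmc interface are
    #    corrected with the difference between particle-measured and continuum fluxes
    for cell in interface_cells:
        correction = (particle_flux(cell) - continuum_flux(cell)) * dt * face_area / cell_volume
        euler_state[cell] = euler_state[cell] + correction

    return euler_state
....

because step b) overwrites the overlaying continuum cells at every step, the dsmc patch behaves like the finest refinement level of the amr hierarchy, as stated above.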
figure [ fig : moving - shock ] shows the adaptive tracking of a shockwave of mach number 10 used as a validation test for this method .density gradient based mesh refinement ensures the dsmc region tracks the shock front accurately .furthermore , as shown in figure [ fig : shock - profile ] the density profile of the shock wave remains smooth and is devoid of oscillations that are known to plague traditional shock capturing schemes .is the mean collision time ., height=336 ]one of the most important messages of this paper is that boundary condition imposition on molecular domains is quite independent of the choice of solution coupling approach . as a example , consider the schwarz method which provides a recipe for making solutions in various subdomains globally consistent subject to exchange of dirichlet conditions .the imposition of these boundary conditions can be achieved through any method and no certain method is favored by the coupling approach .flexibility in adopting _appropriate elements _ from previous approaches , and the importance of chosing the coupling method according to the flow physics are key steps to the development of more sophisticated , next - generation hybrid methods . although hybrid methods provide significant savings by limiting molecular solutions only to the regions where they are needed , solution of time - evolving problems which span a large range of timescalesis still not possible if the molecular domain , however small , needs to be integrated for the total time of interest .new frameworks are therefore required which allow timescale decoupling or coarse grained time evolution of molecular simulations .significant computational savings can be obtained by using the incompressible formulation when appropriate for steady problems .neglect of these simplifications can lead to a problem that is simply intractable when the continuum subdomain is appropriately large .it is interesting to note that , when a hybrid method was used to solve a problem of practical interest while providing computational savings , the schwarz method was preferred because it provides a steady solution framework with timescale decoupling . for dilute gases the chapman - enskog distribution provides a robust and accurate method for imposing boundary conditions .further work is required for the development of similar frameworks for dense liquids .the authors wish to thank r. hornung and a. l. garcia for help with the computations and valuable comments and discussions , and a. t. patera and b. j. alder for helpful comments and discussions .this work was supported in part by the center for computational engineering , and the center for advanced scientific computing , lawrence livermore national laboratory , us department of energy , w-7405-eng-48 .the authors also acknowledge the financial support from the university of singapore through the singapore - mit alliance .alexander , f. j. , garcia , a. l. and alder , b. j. , `` cell size dependence of transport coefficients in stochastic particle algorithms '' , _ physics of fluids _ , * 10 * , 1540 , 1998 ; erratum , _ physics of fluids _ , * 12 * , 731 , 2000 .li , j. , liao , d. and yip s. , `` nearly exact solution for coupled continuum / md fluid simulation '' , _ j. comput - aided mater _ , * 6 * , 95102 , 1999 . lions , p. l. , `` on the schwarz alternating method .i. '' , first international symposium on domain decomposition methods for partial differential equations , eds .r. glowinski , g. golub , g. meurant and j. 
periaux , 142 , siam , philadelphia , 1988 . liu , s. h. , `` on schwarz alternating methods for the incompressible navier - stokes equations '' , _ siam journal of scientific computing _ , * 22 * , no . 6 , pp . 1974 - 1986 . wijesinghe , h. s. and hadjiconstantinou , n. g. , `` a hybrid continuum - atomistic scheme for viscous incompressible flow '' , proceedings of the 23rd international symposium on rarefied gas dynamics , whistler , british columbia , july 2002 , 907 - 914 . wijesinghe , h. s. , hornung , r. , garcia , a. l. , hadjiconstantinou , n. g. , `` 3-dimensional hybrid continuum atomistic simulations for multiscale hydrodynamics '' , to appear in the proceedings of the 2003 international mechanical engineering congress and exposition . wijesinghe , h. s. , hornung , r. , garcia , a. l. , hadjiconstantinou , n. g. , `` three - dimensional hybrid continuum atomistic simulations for multiscale hydrodynamics '' , submitted to the _ journal of fluids engineering _ .
we discuss hybrid atomistic - continuum methods for multiscale hydrodynamic applications . both dense fluid and dilute gas formulations are considered . the choice of coupling method and its relation to the fluid physics is discussed . the differences in hybrid methods resulting from underlying compressible and incompressible continuum formulations as well as the importance of timescale decoupling are highlighted . we also discuss recently developed compressible and incompressible hybrid methods for dilute gases . the incompressible framework is based on the schwarz alternating method whereas the compressible method is a multi - species , fully adaptive mesh and algorithm refinement approach which introduces the direct simulation monte carlo at the finest level of mesh refinement .
the _ compact muon solenoid _ ( cms )experiment is one of two general - purpose experiments at the _ large hadron collider _ ( lhc ) at cern . from the data collected during lhc run - i , which ended in early 2013 , the cms collaboration has published more than 250 papers describing searches for supersymmetry and exotic phenomena , measurements of qcd , electroweak , top , bottom , forward , and heavy - ion physics , as well as the discovery of the higgs boson .event displays are valuable tools that find many uses in collider experiments .these uses include validation of detector geometries , development of event reconstruction algorithms , visual inspection of reconstructed events , and also production of high - quality visual images for public presentations .one approach to develop an event display is to build `` from scratch '' by using a 3d graphic library , such as opengl , and a graphical user interface ( gui ) toolkit . in the cms experiment ,_ fireworks _ , _ frog _ , and _ ispy _ , all of which are actively used , were developed in this approach .here , we took an alternative approach ; we created 3d models of the cms detector and events in an already - existing 3d modelling application , widely used by architects , mechanical engineers , graphic designers , and other professionals : _ sketchup _this approach allows us to use many attractive features of sketchup : it has a highly intuitive user interface and precise dimensions ; it can export and import 2d images in various _ raster _ and _ vector _ formats ; it can exchange 3d models with other applications in several common formats , e.g. , 3ds and collada ; it can apply visual effects on models , such as shadows , fog , and different rendering styles .the cms geometry and event data are converted into sketchup by using the sketchup ruby api .our collection of ruby scripts are available on the github repository sketchupcms .the following sections describe the rendering of the cms detector geometry and events .r7 cm we obtain the detector geometry from the _ cms detector description _ .the cms detector description , written in xml , is the master source of the cms detector geometry used in the _ cms event reconstruction _ and the _ cms detector simulation_. it describes the cms detector as a _ directed tree_. each _ vertex _ of the tree corresponds to a _ component _ with a size , shape , material , and density .edge _ connects from a component to its subcomponent ; it specifies the position and angle of the subcomponent within the component .the ruby scripts take the following steps to render the geometry .first , the ruby scripts parse the xml files and recognize the directed tree of the detector geometry .second , they build each component as a _ solid _ with the given size and shape .figure [ solids ] shows the shapes and the names of implemented solids .then , for each edge from the _ leaves _ of the tree to the _ root _ , the scripts place the _ tail _ component in the _ head _ component as a subcomponent at the given position and angle . figure [ fig : cms - image ] is one of the cms detector cutaway images often used in public presentations . 
the 3d model in this figure was rendered in sketchup .this figure was used in the cms official website , posters created for cern open days , and the _ higgs boson discovery summary _ published in _ science _ .the level of detail in which to render the detector can be adjusted by choosing subcomponents to include in the model .for example , figure [ fig : eb - module ] shows a module of the _ barrel electromagnetic calorimeter _ ( eb ) , in which the geometry of each lead - tungstate ( ) crystal is drawn .figure [ fig : zooom ] includes detailed geometry of the innermost subsystems , the _ silicon strip _ and _ pixel trackers_. this figure is part of the exhibition_ `` zooom '' _ , displayed at _ point 5 _ , a cern site in cessy france , where the cms detector is installed . the cms detector will be continuously upgraded .the geometries for the detector upgrades , which are also described in the cms detector description , can be rendered in sketchup as well . in figure[ fig : pixel - image ] , one half of each model has the initial geometry of the cms pixel tracker and the other half has the geometry for its _ phase 1 upgrade_. these two images were used to illustrate the difference between before and after the upgrade in many public documents , including the _ cms technical design report for the pixel detector upgrade _ .the event input is the _ ig _ format developed for the ispy event display .the content includes information necessary for 2d and 3d rendering of event data such as kinematical properties of reconstructed particles and positions and energies of calorimetric deposits .the ig format is simply a zip archive containing event files in json ( javascript object notation ) format ; a javascript object instance can be easily converted to a ruby hash .the sketchup ruby api can therefore easily be used to render the events . in the larger project , a ruby module named _ig2rb _ parses an unzipped ig file and renders a subset of objects .tracks and electrons are rendered as _cubic bzier splines _ , energy deposits in the electromagnetic and hadronic calorimeters as scaled rectangles with six faces , muons as _ polylines _ , and hit muon chambers as rectangular boxes .an example image of a candidate higgs boson decaying into two photons can be seen in figure [ fig : higgs - skp ] .versions using other color sets and styles can be found at the cern document server .we have developed a set of ruby scripts to create 3d computer models of the cms detector and cms events in sketchup via its ruby api .these scripts are available at its github repository .the 3d models allow us to produce high quality images and exportable 3d models of the cms detector and events , which are used in a variety of public presentations .in addition , these models can be used to validate geometries for cms detector upgrades . 
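Because the ig format is just a zip archive of JSON event files, the gist of the event-reading step can be sketched with the Python standard library alone (the project's own ig2rb module is written in Ruby). The member-name handling and the collection keys below are hypothetical placeholders; the names actually used by iSpy may differ.

```python
import json
import zipfile

# Minimal sketch of reading an ig archive: unzip, parse each JSON event file,
# and pick out the object collections that would then be turned into geometry
# (splines for tracks, scaled boxes for calorimeter deposits, and so on).
# Collection keys and the file name are hypothetical placeholders.

def read_ig_events(path):
    """Yield (member_name, event_dict) for each JSON event file in the archive."""
    with zipfile.ZipFile(path) as archive:
        for member in archive.namelist():
            if not member.endswith("/"):          # skip directory entries
                with archive.open(member) as f:
                    yield member, json.load(f)

if __name__ == "__main__":
    for name, event in read_ig_events("run_event.ig"):        # hypothetical file
        collections = event.get("Collections", {})            # hypothetical key
        tracks = collections.get("Tracks_V1", [])
        ecal = collections.get("ECalRecHits_V1", [])
        print(name, len(tracks), "tracks,", len(ecal), "ecal deposits")
```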
furthermore , several art projects , exhibitions , outreach programs using these models are being planned .we plan to continue to develop the scripts to extend the range of supported reconstructed objects , detector subsystems , and input formats and to improve the user interfaces for both interactive and batch uses .we thank michael case and ianna osborne for explaining to us the technical implementation details of the cms detector descriptions .we have received valuable feedback and support from erik gottschalk , lucas taylor , michael hoch , david barney , achintya rao , harry cheung , teruki kamon , elizabeth sexton - kennedy , and other members of the cms collaboration .this work was partially supported by the u.s .department of energy and national science foundation .9 , `` the cms experiment at the cern lhc , '' 2008 _ jinst _ * 3 * s08004 , `` observation of a new boson at a mass of 125 gev with the cms experiment at the lhc , '' _ phys . lett . _* b716 * ( 2012 ) 30 `` fireworks : a physics event display for cms , '' 2010 _ j. phys . conf .ser . _ * 219 * 032014 , `` frog : the fast & realistic opengl displayer , '' arxiv:0901.2718 `` ispy : a powerful and lightweight event display , '' 2012 _ j. phys .ser . _ * 396 * 022002 , http://cern.ch/ispy , http://www.sketchup.com , http://www.sketchup.com/intl/en/developer , http://github.com/sketchupcms/sketchupcms `` cms detector description : new development , '' proceedings for chep 2004 , `` cms detector design , '' http://cms.web.cern.ch/news/cms-detector-design , http://opendays2013.web.cern.ch , `` a new boson with a mass of 125 gev observed with the cms experiment at the large hadron collider , '' 2012 _ science _ * 338 * 1569 `` zooom : cms visualized in 3d , '' http://cms.web.cern.ch/news/zooom-cms-visualised-3d `` cms geometry through 2020 , '' chep 2013 , `` cms technical design report for the pixel detector upgrade , '' cern - lhcc-2012 - 016 , cms - tdr-011
we have created 3d models of the cms detector and particle collision events in sketchup , a 3d modelling program . sketchup provides a ruby api which we use to interface with the cms detector description to create 3d models of the cms detector . with the ruby api , we also have created an interface to the json - based event format used for the ispy event display to create 3d models of cms events . these models have many applications related to 3d representation of the cms detector and events . figures produced based on these models were used in conference presentations , journal publications , technical design reports for the detector upgrades , art projects , outreach programs , and other presentations .
network - related research has recently seen rapid developments in the domain of dynamic or adaptive networks .this is partly motivated by strong empirical observations that in many instances dynamical processes on networks co - evolve with the dynamics of the networks themselves .this in turn leads to a wider spectrum of possible behaviours , such as bistability and oscillations , when compared to static network models .examples of dynamic / adaptive networks are abundant and it is a common feature of both technological and social networks . in this paper , we study dynamic networks in which the timescales of the two processes are comparable , and we do this in the context of a basic but fundamental model of disease transmission. early work in the area of dynamic networks , which then gave rise to many model improvements and extensions , concentrated on epidemic models with ` smart ' rewiring , where infection transmitting links are replaced by non - risky ones , with a link - conserving network dynamics .the most widely used approach to study such systems is the development of mean - field models , such as pairwise and effective - degree models , which usually manage to capture and characterise such processes to a good level of detail .for example , using pairwise models and simulations , gross et al . showed that rapid rewiring can stop disease transmission , while marceau et al . used the effective - degree model , i.e. an improved compartmental model formalism , to obtain better predictions and shed more light on the evolution of the network structure .furthermore , juher et al . fine - tuned the bifurcation analysis of the model originally proposed in to distinguish between two extinction scenarios and to provide an analytical condition for the occurrence of a bistability region .recognising the idealised nature of link - number - conserving smart rewiring , various relaxations to these models were introduced .for example , kiss et al . and szabo et al . proposed a number of dynamic network models including random link activation - deletion with and without constraints , as well a model that considered non - link preserving link - type - dependent activation deletion .taylor et al . used the effective degree formalism to analyse a similar random link activation deletion model and to highlight the potential power of link deletion in eradicating epidemics .recently rogers et al . introduced an sirs model where random link activation deletion is combined with ` smart ' rewiring .they studied the resulting stochastic model at the level of singles and pairs as well as its deterministic limit , i.e. pairwise or pair - based models .other node dynamics , such as the voter model , have also been extensively studied on adaptive networks ; see for a review .an overview of the above studies highlights some important modelling and analysis challenges . on the model development frontit is clear that many models are still very idealised and small steps toward increasing model realism can quickly lead to a disproportionate increase in model complexity .the agreement between mean - field and stochastic models is very much model and parameter dependent with potentially large parameter regions in which agreement can be either good or poor .this is especially the case when oscillations are encountered or expected . when oscillatory behaviour is expected , there are only few results in an epidemic context , see for example. 
studies of epidemic propagation on adaptive networks have not focused much on characterising oscillatory behaviour in simulations and there is no widely accepted analytical framework for this as of yet .another major drawback in the analysis of dynamic networks is that even when mean - field models perform well , these only provide limited insight about the structure and evolution of the network . in this paper , we set out to address some of these issues and make important steps towards a more satisfactory treatment and analysis of dynamic network models , along with a better use and integration of various mathematical methods and techniques that can be employed . to this endwe propose a dynamic network model using epidemic propagation with both random and preferential , e.g. , link - type based , link activation and deletion .we provide model analysis based on both ( a ) the mean - field and ( b ) the purely network - based stochastic simulation model .the latter is carried out using fourier analysis , as well as through analysing the exact stochastic model in terms of the master equations .further , we develop a compact pairwise approximation for the current dynamic network model and , for the case of random activation deletion , we provide a bifurcation analysis of the network structure itself , aiming for a more comprehensive analysis , whereby both system - level ( e.g. , the outcome of the epidemic ) and network - level ( e.g. , the achievable network characteristics of network types ) behaviours are concurrently analysed and characterised . the paper is structured as follows .the link - type - dependent activation - deletion model is formulated in section [ modelform ] .then the pairwise ode approximation augmented by terms accounting for preferential link cutting and creation is analysed in section [ sec : pw ] . guided by the bifurcation analysis of the mean - field model , in section [ sec : netsim ], we carry out a detailed study of the agreement between models and simulations .oscillations are observed both in the mean - field approximation and the stochastic network simulation . in section [ sec : explosc ] , the emergence of oscillations in the stochastic model is investigated based on the outcome of simulation and on the exact master equations . 
finally , in section[ sec : netbif ] , a detailed and common bifurcation map of the system and network behaviour is presented emphasising its benefits in achieving a fuller understanding of the model as a whole , especially for dynamic network models .in this paper ( susceptible - infectious / infected - susceptible ) epidemic propagation is considered on an adaptive network with link - type dependent link activation and deletion .specifically , the model incorporates the following independent poisson processes : * * infection : * infection is transmitted across each contact between an and an node , or link , at rate , * * recovery : * each node recovers at rate , and this is independent of the network , * * link activation : * a non - existing link between a node of type and another of type is activated at rate , with , * * link deletion : * an existing link between a node of type and another of type is terminated at rate , with .we note that once a link type is chosen , the activation or deletion of such a link is done at random .our model is significantly different from the widely used setup of ` smart ' rewiring , where the nodes have full knowledge of the states of all other nodes and choose to minimise their exposure to the epidemic by cutting links to neighbours and immediately rewiring to a randomly chosen node .this also conserves the number of links in the network and can make the analysis more tractable , by reducing the complexity of the model . here, we set out to explore and explain as fully as possible the complete spectrum of system behaviours , including classical bifurcation analysis at system level , e.g. , die - out , endemic equilibria or oscillations , as well as the evolution of the network structure and attainable network equilibria . in order to do this, we will employ a number of approaches including : ( a ) an exact master equation formulation for small networks , ( b ) full network - based monte carlo simulations and ( c ) two different types of pairwise or pair - based mean - field ode models . as regards the rewiring parameters we focus on two scenarios , namely : 1 . and , and and , and 2 . and .while the first is motivated by practical considerations , such as those used in the ` smart ' rewiring where nodes aim to minimise the risk of becoming infected while maintaining their connectivity to the network , the second scenario removes the dependency of link activation and deletion on pair type and leads to a simple , more tractable model .we start by formulating the pairwise model for the expected values of the node and pair numbers .as was shown in , this gives rise to }&=\tau [ si ] - \gamma [ i ] , \label{smf1 } \\\dot{[si]}&=\gamma ( [ ii ] - [ si])+ \tau ( [ ssi]-[isi]-[si])+\alpha_{si } ( [ s][i]-[si])-\omega_{si}[si ] , \label{smf2 } \\\dot{[ii]}&=-2\gamma [ ii ] + 2\tau ( [ isi]+[si])+\alpha_{ii } ( [ i]([i]-1)-[ii])-\omega_{ii}[ii ] , \label{smf3 } \\\dot{[ss]}&=2\gamma [ si ] - 2\tau [ ssi]+\alpha_{ss } ( [ s]([s]-1)-[ss])-\omega_{ss}[ss ] . \label{smf4}\end{aligned}\ ] ] from the model it follows that +[i]=n ] and ( b ) an endemic equilibrium which can not be explicitly given due to being the solution of a quartic equation .the linerisation around the disease - free steady state gives rise to a jacobian , the eigenvalues of which can be determined explicitly , see appendix a. 
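Before turning to the stability analysis, note that the pairwise system for [I], [SI], [II], [SS] written out above can be integrated directly once the triples are closed. The sketch below uses the usual pairwise closure [ASB] ~ ((n_s - 1)/n_s) [AS][SB]/[S], with n_s = ([SS]+[SI])/[S] the mean degree of susceptible nodes; this closure choice and all parameter values are assumptions made for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Direct numerical integration of the pairwise model stated above.  The triple
# closure and the parameter values are assumptions, not taken from the paper.

N = 200
tau, gamma = 0.5, 1.0                       # infection and recovery rates
a_SI, a_II, a_SS = 0.01, 0.01, 0.01         # link activation rates
w_SI, w_II, w_SS = 1.0, 0.1, 0.1            # link deletion rates

def rhs(t, y):
    I, SI, II, SS = y
    S = N - I
    if S <= 0.0:
        return [0.0, 0.0, 0.0, 0.0]
    n_s = (SS + SI) / S                     # mean degree of susceptible nodes
    corr = (n_s - 1.0) / n_s if n_s > 1.0 else 0.0
    SSI = corr * SS * SI / S                # closed triples [SSI] and [ISI]
    ISI = corr * SI * SI / S
    dI = tau * SI - gamma * I
    dSI = gamma * (II - SI) + tau * (SSI - ISI - SI) + a_SI * (S * I - SI) - w_SI * SI
    dII = -2.0 * gamma * II + 2.0 * tau * (ISI + SI) + a_II * (I * (I - 1.0) - II) - w_II * II
    dSS = 2.0 * gamma * SI - 2.0 * tau * SSI + a_SS * (S * (S - 1.0) - SS) - w_SS * SS
    return [dI, dSI, dII, dSS]

# initial condition: 10 infected nodes on a random graph with mean degree ~6
I0, k0 = 10.0, 6.0
p_edge = k0 / (N - 1)
S0 = N - I0
y0 = [I0, p_edge * S0 * I0, p_edge * I0 * (I0 - 1), p_edge * S0 * (S0 - 1)]

sol = solve_ivp(rhs, (0.0, 100.0), y0, rtol=1e-6, atol=1e-6)
I, SI, II, SS = sol.y[:, -1]
print(f"final prevalence [I]/N = {I / N:.3f}")
print(f"final mean degree      = {(SS + 2 * SI + II) / N:.3f}")
```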
as shown , two of the eigenvalues are always negative and the remaining two have negative real part if and only if which gives rise to a transcritical bifurcation where , see fig .[ smf_bif ] .thus the following proposition holds .the disease - free steady state is stable if and only if .[ prop1 ] as mentioned above the endemic steady state is the solution of a quartic equation , see appendix a for its detailed derivation .the analysis of this equation leads to the following proposition concerning the existence of the endemic steady state .if is a root of polynomial given in appendix a , then the system has an endemic steady state , the coordinates of which can be given as {ss } = x , \quad [ i]_{ss } = n - x , \quad [ si]_{ss } = \frac{\gamma}{\tau}(n - x),\ ] ] {ss } = x(x-1)-2\frac{\omega_{si}\gamma}{\alpha_{ss}\tau } ( n - x ) , \quad [ ii]_{ss } = \frac{\gamma ( n - x)^2}{\tau x } + \frac{(n - x)[ss]_{ss}}{[ss]_{ss}+[si]_{ss } } .\ ] ] [ propendemic ] an extensive numerical study has shown that below the line of transcritical bifurcation there is a unique endemic steady state , i.e. , the polynomial has a single root providing a biologically plausible steady state , where the values of all singles and pairs are positive . for the endemic steady state , the coefficients of the characteristic polynomialcan only be determined numerically but this does not prevent from determining where the hopf bifurcation arises ( see appendix a for details ) .the hopf bifurcation points form a set as depicted by the curve , i.e. , the perimeter of the island , in fig . [ smf_bif ] .the results of the numerical study show that within the hopf island the mean - field model exhibits stable oscillations .the region below the transcritical bifurcation line and outside the hopf island is where the endemic equilibrium is stable .it is important to note that the system - level analysis can be complemented by the observation that the expected average degree displays a behaviour similar to that of the expected number of infected , as illustrated by fig .[ smf_timeevol ] .a careful analysis of the plots based on the mean - field model allows to make the following important remarks .edge addition and creation acts on potentially very different scales or number of edges .the creation of links acts on a set of edges with cardinality of , while the infection process and the deletion of edges act on a set of edges whose number scales as ] , =0 ] and =\frac{n(n-1)\alpha}{\omega+\alpha} ] and ] and all other parameters are kept constant .the figure shows a sharp increase in the proportion of realisations in which epidemics die out such that two potential boundaries can be defined : one which corresponds to the parameter up to which none of the realisations die out , and one in which all realisations die out .these two boundaries delimit a strip that can be considered as the bifurcation boundary of the disease - free regime .the width of this strip is shown by the grey - shaded area in figure [ fig : stochbifdiag ] ._ boundary of the oscillating regime ._ for those realizations that did not die out ( over the entire duration of the run , i.e. 
, ) , evidence for oscillatory behaviour was assessed through a rigorous statistical analysis of the power spectrum of the time series of the number of infected nodes .the spectral estimation procedure was based on weighted periodogram estimates .the data was split into non overlapping segments , each containing points and the periodograms were calculated from application of discrete fourier transforms ( dft ) . as application of dft requires stationarity of the time series , transients in the time serieswere removed using the following procedure . after confirming pseudo - stationarity of the last points ( one segment ) of the time series , their mean and standard deviation was computed . a running average ( low - pass filter ) of the entire time serieswas then applied and the first time point after which all subsequent time points of the filtered time series stayed within of the standard deviation of the mean of the last segment was selected as the starting point of the transient - free time series . only those time series with at least time points were kept for analysis , allowing for a theoretical minimum of segments to be included in the spectral estimation . in practice ,provided transient - free data could be found , all parameter configuration yielded no less than segments with most configurations yielding segments .presence of a non - trivial oscillatory component in the signal was assessed by the presence of statistically significant power at a non - zero frequency peak in the power spectrum , with confidence intervals for the spectral estimates obtained as per the framework of .the non - zero frequency constraint is required because , as mentioned in , time series in which the spectrum is dominated by the zero mode can be difficult to distinguish from pure white noise . to identify the boundaries of the oscillatory regime , power and frequency of the main peak of the power spectrumwere recorded for all configuration pairs studied above .figure [ fig : oscboundary ] shows a representative example ( at but see figure [ fig : stochbifdiag ] as well for an overall picture of the peak frequency ) when is varied over the whole range ] and ] , ] . the significant reduction in the number of equations is achieved or made possible by not having to account for pairs like ] .in fact , the pairs ] are used .we extended this model with terms responsible for link creation and deletion as follows .the deletion of links connecting an infected to a susceptible node with degree at rate contributes positively to ] .the creation of links connecting a susceptible to another susceptible node with degree contributes negatively to ] because the total number of such possible links is ([s]-1) ] .the same process contributes positively to ] , the approximation in eq . 
is used to to compute the pairs ] for , and according to , the triples are closed by = \frac{[as][si]}{([ss]+[si])^2}\sum_k k(k-1)[s_k ] .\ ] ] the solution of this system is also compared to simulation , see fig .[ fig : comparison ] .one can observe that introducing degree heterogeneity in the model does not improve the agreement significantly .this may be explained by the fact that the main argument for closing triples in the pairwise and compact pairwise models is identical .namely , both models assume that the states of a susceptible node s neighbours are effectively chosen independently and at random from the pool of available nodes .nevertheless , the compact pairwise model may have relevance in studying the process because it enables us to investigate how the degree distribution varies in time , and this can become valuable especially in the oscillatory regime .the detailed study of the compact pairwise model is beyond the scope of the present paper , but it will certainly motivate further research .oscillations within the context of adaptive networks have proved to be difficult to map out , especially directly from simulations . for many oscillatory systems the basic ideas of processes leading to oscillations are relatively simple .usually , a combination of a positive and negative feedback with a suitable time delay leads to robust oscillations . taking inspiration from this idea ,it is possible to give a heuristic explanation for the appearance of oscillations in this adaptive network .the basic oscillating quantities in our explanation are the prevalence ] , * * b : * recovery of nodes with rate ] and * * d : * link deletion with rate ] .this is expected as the epidemic is just recovering from an excursion where the connectivity of the network was low and the number of susceptible nodes was large .however , as the epidemic grows the balance of processes * c * and * d * changes .namely , when the epidemic is still strong , the number of susceptible nodes decreases while the link cutting acts on an increasing number of ( ) edges .this reverts the balance of edge cutting and deletion meaning that now > \alpha_{ss}([s]([s]-1)-[ss) ] increasing , increasing , \2 .* a * * b * , * c * * d * , ] decreasing , decreasing , \4 . * a * * b * , * c * * d * , ] and use the pair closure \approx \frac{n}{n-1}[s][i] ] and substituting =n ] , ] and ] and then a quartic equation for ] , then from we get =\frac{\gamma}{\tau}[i] ] in terms of ] as =[s]([s]-1)-2\frac{\omega_{si}\gamma}{\alpha_{ss}\tau } [ i ] .\label{ss}\ ] ] from we can express ] and ] yields }{[s ] } - \frac{[ss]}{[ss]+[si ] } = \frac{\gamma + \omega_{si } } { \tau } .\ ] ] now substituting =\frac{\gamma}{\tau}(n-[s]) ] with where , and .once is solved for , then the steady state , [ si ] , [ ii ] , [ ss]) ] , =\frac{\gamma}{\tau}[i] ] , which is zero at the disease - free steady state .the stability of the disease - free steady state is determined by the jacobian matrix determined at , [ si ] , [ ii ] , [ ss])=(0,0,0,n(n-1))$ ] . 
in order to compute this matrixwe determine the partial derivatives of the triples at this steady state , as they are given by the closures .}{\partial [ i ] } = 0 , \quad \frac{\partial [ ssi]}{\partial [ si ] } = n-2 , \quad \frac{\partial [ ssi]}{\partial [ ii ] } = 0 ,\quad \frac{\partial [ ssi]}{\partial [ ss ] } = 0 , \ ] ] }{\partial [ i ] } = 0 , \quad \frac{\partial [ isi]}{\partial [ si ] } = 0 , \quad \frac{\partial [ isi]}{\partial [ ii ] } = 0 , \quad \frac{\partial [ isi]}{\partial [ ss ] } = 0 , \ ] ] using these partial derivatives the jacobian matrix at the disease - free steady state can be given as it can be easily seen that and are eigenvalues of this matrix . teremaining two eigenvalues are the eigenvalues of the matrix in the middle : the disease - free steady state is stable if and only if all the eigenvalues have negative real part .this has to be checked only for the above matrix . for a matrixthe eigenvalues have negative real part if and only if its determinant is positive and its trace is negative .the determinant is positive if .the trace is positive if .the first condition implies the second one , hence we proved proposition [ prop1 ] .the stability of the endemic steady state can be determined only numerically . for a given set of the parametersthe coordinates of the endemic steady state can be computed according to appendix a1 .the partial derivatives in the jacobian can be calculated analytically , then substituting the numerically obtained coordinates of the endemic steady state we get the entries of the jacobian numerically .this enables us to calculate the coefficients of the characteristic polynomial where , and can be given as the sum of some subdeterminants of the jacobian , the concrete form of which is not important at this moment .to find the parameter values where hopf bifurcation occurs we use the method introduced in . in the case of the necessary and sufficient condition for the existence of pure imaginary eigenvalues is thus the hopf - bifurcation set in the parameter plane can be obtained as follows . for a given value of compute the value of numerically as is varied .it turns out that for a range of values this expression changes sign twice as is varied .more precisely , for given values of the other parameters ( ) there exist values and , such that for we get and such that for ( ) we have , i.e. hopf bifurcation occurs at ( ) .if is not in the interval , then there is no hopf bifurcation , i.e. the relation can not hold . for and is a stable periodic orbit .if is outside the interval , then there is no periodic orbit and either the endemic or the disease - free steady state is stable .the final state of the system is shown in the bifurcation diagram in fig .[ smf_bif ] .in the case , i.e. for a graph with two nodes , the number of edges is at most .that is there are two graphs on two nodes , one is a single edge the other one consists of two disjoint nodes .we denote the states with when the graph consists of two disjoint nodes . herethe state means that node 1 is and node 2 is .the states are denoted by when the graph is a single edge .thus the full state space for contains the following 8 states : .consider now the transitions between these states .there are two types of transitions : epidemic transitions ( infection and recovery ) and network transitions ( creating and deleting edges ) .epidemic transitions may occur among states that belong to the same graph , that is within the subsets and . 
within the first subset only recovery may occur since these states belong to a graph that consists of two separate nodes .so the only possible transitions are , , and , these may happen with rate . withinthe subset infection may happen as well , hence the possible transitions are and with rate and , , and with rate .network transitions occur between states in which the corresponding nodes are of the same type .for example , the transition occurs at rate since an type edge is created during this transition .similarly , the transition occurs at rate since an type edge is deleted during this transition . in general , the transition happens at rate and the transition happens at rate , where .all the transitions are shown in figure [ fig_transn2 ] .if the states are ordered as , then the transition matrix of the corresponding markov chain takes the form where the matrices and contain the transition rates corresponding to the epidemic transitions .these rates belong to the transitions within the subsets and , respectively .the matrices and contain the network transition rates that correspond to transitions between these two subsets . the master equation can be written as , where the coordinates of the eight dimensional vector are the probabilities of the states at time .let us briefly consider the case .then , hence there are different possible graphs , one without any edges , three graphs with one edge , three line graphs with two edges and a triangle with three edges .each graph can be in 8 possible states , namely .hence there are states altogether .epidemic transitions may occur among states that belong to the same graph , for example , in the case of a triangle graph the transition happens at rate , while its rate is for a line graph where node 2 is connected to node 1 and node 3 .the transition rate may be zero , e.g. in the case when there is only one edge in the graph connecting nodes 1 and 2 , or in the case when there are no edges at all in the graph .network transitions occur between states in which the corresponding nodes are of the same type .for example , denoting by the state where all nodes are susceptible and the graph consist of three disjoint nodes , and by the state where all nodes are susceptible and the graph contains one edge that connects node 1 and 2 , the rate of transition is , while the rate of transition is .the master equations take again the form , where the coordinates of the 64-dimensional vector are the probabilities of the states at time .we note that the size of the state space can be reduced by lumping some states together , similarly to the case of static graphs .the lumping of the state space for dynamic network processes is beyond the scope of this paper , here we only mention a few simple cases where lumping can be carried out . in the case it is easy to see that the states and can be lumped together , which means that their sum can be introduced as a new variable , and their differential equations can be added up .similarly , the sum of and can be introduced as a new variable .( by adding their differential equations one can immediately see that the old variables will not appear in the remaining system of equations . )hence the eight - dimensional system can be reduced to a six - dimensional system by lumping . in the case we have even more chance for lumping if it is assumed that , , , . without explaining the details we claim that in this case the 64-dimensional system of master equations can be reduced to a 20-dimensional one by lumping . 
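For the two-node case just described, the full generator of the master equation is small enough to write down explicitly. The sketch below enumerates the eight states (x1, x2, e), with e indicating whether the single possible edge is present, assembles the 8x8 rate matrix from the infection, recovery, activation and deletion rates, and evolves an initial distribution; the numerical rate values are assumptions.

```python
import numpy as np
from scipy.linalg import expm

# Explicit construction of the N = 2 master equation described above.  States
# are (x1, x2, e) with xi in {S, I} and e in {0, 1} marking whether the single
# possible edge is absent or present.  The rate values below are assumptions.

tau, gamma = 1.0, 1.0
alpha = {"SS": 0.5, "SI": 0.0, "II": 0.5}     # link activation rates
omega = {"SS": 0.1, "SI": 2.0, "II": 0.1}     # link deletion rates
pair_type = {"SS": "SS", "SI": "SI", "IS": "SI", "II": "II"}

states = [(a, b, e) for e in (0, 1) for a in "SI" for b in "SI"]
idx = {s: i for i, s in enumerate(states)}
Q = np.zeros((len(states), len(states)))

def add_rate(src, dst, rate):
    i, j = idx[src], idx[dst]
    Q[j, i] += rate          # inflow into dst from src
    Q[i, i] -= rate          # outflow from src

for (a, b, e) in states:
    if a == "I":                                   # recovery of node 1
        add_rate((a, b, e), ("S", b, e), gamma)
    if b == "I":                                   # recovery of node 2
        add_rate((a, b, e), (a, "S", e), gamma)
    if e == 1 and {a, b} == {"S", "I"}:            # infection across the edge
        add_rate((a, b, 1), ("I", "I", 1), tau)
    t = pair_type[a + b]                           # SS, SI or II link type
    if e == 0:
        add_rate((a, b, 0), (a, b, 1), alpha[t])   # link activation
    else:
        add_rate((a, b, 1), (a, b, 0), omega[t])   # link deletion

# evolve p'(t) = Q p(t) from "both nodes infected, no edge" up to t = 5
p0 = np.zeros(len(states))
p0[idx[("I", "I", 0)]] = 1.0
p5 = expm(5.0 * Q) @ p0
print("P(at least one node infected at t = 5) =",
      sum(p5[idx[s]] for s in states if "I" in s[:2]))
```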
in the case the dimension of the state space can be reduced from 1024 to 89 .
danon , l. , ford , a. p. , house , t. , jewell , c. p. , keeling , m. j. , roberts , g. o. , ross , j. v. , vernon , m. c. , `` networks and the epidemiology of infectious disease '' , _ interdisciplinary perspectives on infectious diseases _ , * 2011:284909 * , special issue `` network perspectives on infectious disease dynamics '' , 2011 .
halliday , d. m. , rosenberg , j. r. , amjad , a. m. , breeze , p. , conway , b. a. , farmer , s. f. , `` a framework for the analysis of mixed time series / point process data theory and application to the study of physiological tremor , single motor unit discharges and electromyograms '' , _ progress in biophysics and molecular biology _ , * 64 * , 1995 , 237 - 278 .
kiss , i. z. , berthouze , l. , taylor , t. j. , simon , p. l. , `` modelling approaches for simple dynamic networks and applications to disease transmission models '' , _ proceedings of the royal society a _ , * 468 * ( 2141 ) , 2012 , 1332 - 1355 .
[ figure : time evolution of the number of infected nodes for three parameter settings ( continuous , dashed and dotted curves ) ; the inset in the right panel shows the time dependence of the average degree ; the values of the other parameters are fixed . ]
[ figure : sample time series ( last time points ) of the number of infected nodes in the endemic regime ( top panel ) and in the oscillatory regime ( bottom panel ) , randomly chosen from one of the realisations . ]
[ figure : percentage of realisations that die out as the parameter on the horizontal axis is varied ; the values below which no realisations die out and above which all realisations die out define two possible boundaries for the disease - free regime ( solid circles ) . ]
[ figure : four snapshots within one oscillation cycle : ( a ) the average degree close to its maximum , ( b ) prevalence close to its maximum with a decreasing average degree , ( c ) decreasing prevalence with the average degree close to its minimum , and ( d ) minimal prevalence but with a growing average degree ; all other activation and deletion rates are equal to zero . ]
[ figure : bifurcation map of the parameter space together with the theoretical bifurcation curves ; the horizontal line marks the boundary of the parameter domain where the graph is connected ( in the simulations , networks with on average at least 3 disjoint components were considered disconnected ) ; the other two curves are the transcritical bifurcation curves obtained from the mean - field approximation ( continuous diagonal line ) and from the pairwise approximation ( dashed curve ) ; markers distinguish connected / disconnected networks with and without an epidemic . ]
an adaptive network model using epidemic propagation with link - type dependent link activation and deletion is considered . bifurcation analysis of the pairwise ode approximation and the network - based stochastic simulation is carried out , showing that three typical behaviours may occur ; namely , oscillations can be observed besides disease - free or endemic steady states . the oscillatory behaviour in the stochastic simulations is studied using fourier analysis , as well as through analysing the exact master equations of the stochastic model . a compact pairwise approximation for the dynamic network case is also developed and , for the case of link - type independent rewiring , the outcome of epidemics and changes in network structure are concurrently presented in a single bifurcation diagram . by going beyond simply comparing simulation results to mean - field models , our approach yields deeper insights into the observed phenomena and help better understand and map out the limitations of mean - field models . institute of mathematics , etvs lornd university budapest , and + numerical analysis and large networks research group , hungarian academy of sciences , hungary + centre for computational neuroscience and robotics , university of sussex , falmer , brighton bn1 9qh , uk + school of mathematical and physical sciences , department of mathematics , university of sussex , falmer , brighton bn1 9qh , uk * keywords : * sis epidemic ; pairwise model ; dynamic network ; oscillation + corresponding author + email : i.z.kiss.ac.uk +
multi - target filtering / tracking involves the simultaneous estimation of the number of targets along with their states , based on a sequence of noisy measurements such as radar or sonar waveforms . to reduce complexity and facilitate tractability ,the sensor waveforms are typically processed into a sequence of detections .the key challenges in multi - target filtering / tracking thus include _ detection uncertainty _ , _clutter _ , and_ data association uncertainty_. to date , three major approaches to multi - target tracking / filtering have emerged as the main solution paradigms . these are , multiple hypotheses tracking ( mht ) , , joint probabilistic data association ( jpda ) , and random finite set ( rfs ) .the rfs or finite set statistics ( fisst ) approach pioneered by mahler provides principled recursive bayesian formulation of the multi - target filtering / tracking problem .the essence of the rfs approach is the modeling of the collection of target states and measurements , referred to as the multi - target state and multi - target measurement , as finite set valued random variables .the centerpiece of the rfs approach is the _ bayes multi - target filter _ , which recursively propagates the filtering density of the multi - target state forward in time . the phd , cphd and cardinality - balanced and labeled multi - bernoulli filters are tractable approximations to the bayes multi - target filter which are synonymous with the rfs framework .their tractability however largely hinges on the approximate form for the posterior which can not accommodate statistical dependencies between targets .the bayes multi - target filter is also a ( multi - target ) tracker when target identities or labels are incorporated into individual target states . in , the notion of _ labeled rfss _is introduced to address target trajectories and their uniqueness .the key results include conjugate priors that are closed under the chapman - kolmogorov equation , and an analytic solution to the bayes multi - target tracking filter known as the -generalized labeled multi - bernoulli ( -glmb ) filter . with detection based measurements , the computational complexity in the -glmb filter is mainly due to the presence of explicit data associations . for certain applications such as tracking with multiple sensors , partially observable measurements or decentralized estimation , the application of a -glmb filter may not be possible due to limited computational resources .thus cheaper approximations to the -glmb filter are of practical significance in multi - target tracking. in this paper we present a new approximation to the -glmb filter .our result is based on the approximation proposed in where it was shown that the glmb distribution can be used to construct a principled approximation to an arbitrary labeled rfs density that matches the phd and the cardinality distribution .we refer to the resultant filter as a marginalized -glmb ( m-glmb ) filter since it can be interpreted as a _ marginalization over the data associations_. the proposed filter is consequently computationally cheaper than the -glmb filter while still preserving key summary statistics of the multi - target posterior .importantly the m-glmb filter facilitates tractable multi - sensor multi - target tracking . 
unlike phd / cphd and multi - bernoulli based filters , the proposed approximation accommodates statistical dependence between targets .we also present an alternative derivation of the lmb filter proposed in based on the newly proposed m-glmb filter .simulations results verify the proposed approximation .this section briefly presents background material on multi - object filtering and labeled rfs , which form the basis for the formulation of our multi - target tracking problem .suppose that at time , there are object states , each taking values in a state space . in the random finite set ( rfs )framework , the _ multi - object state _ at time is represented by the finite set , and the multi - object state space is the space of all finite subsets of , denoted as .an rfs is simply a random variable that take values the space that does not inherit the usual euclidean notion of integration and density .mahler s finite set statistics ( fisst ) provides powerful yet practical mathematical tools for dealing with rfss based on a notion of integration / density that is consistent with point process theory .let denote the _ multi - target posterior density _ at time , and denote the _ multi - target prediction density _ to time k + 1 ( formally and should be written respectively as , and , but for simplicity the dependence on past measurements is omitted ) .then , the _ multi - target bayes recursion _ propagates in time , according to the following update and prediction where is the _ multi - object transition density _ to time , is the _ multi - object likelihood function _ at time , and the integral is a _ set integral _ defined for any function by an analytic solution to the multi - object bayes filter for labeled states and track estimation from the multi - object filtering density was given in . to perform tracking in the rfs framework we use the label rfs model that incorporates a unique label in the object s state vector to identify its trajectory .in this model , the single - object state space is a cartesian product , where is the feature / kinematic space and is the ( discrete ) label space .a finite subset set of has distinct labels if and only if and its labels have the same cardinality .an rfs on with distinct labels is called a _ labeled rfs _ . for the rest of the paper, we use the standard inner product notation , and multi - object exponential notation , where is a real - valued function , with by convention .we denote a generalization of the kroneker delta and the inclusion function that take arbitrary arguments such as sets , vectors , etc , by we also write in place of when .single - object states are represented by lowercase letters , e.g. , while multi - object states are represented by uppercase letters , e.g. , , symbols for labeled states and their distributions are bolded to distinguish them from unlabeled ones , e.g. , , , etc , spaces are represented by blackboard bold e.g. , , , etc .an important class of labeled rfs is the generalized labeled multi - bernoulli ( glmb ) family , which is the basis of an analytic solution to the bayes multi - object filter . under the standard multi - object measurement model ,the glmb is a conjugate prior that is also closed under the chapman - kolmogorov equation . if we start with a glmb initial prior , then the multi - object prediction and posterior densities at any time are also glmb densities .let be the projection , and denote the _ distinct label indicator_. 
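For reference, in the notation of this section the prediction and update steps of the multi-object Bayes recursion take the standard form from Mahler's FISST framework:

\[
\pi_{k+1|k}(X) = \int f_{k+1|k}(X \mid X')\, \pi_{k}(X')\, \delta X' ,
\qquad
\pi_{k+1}(X \mid Z_{k+1}) = \frac{g_{k+1}(Z_{k+1} \mid X)\, \pi_{k+1|k}(X)}{\int g_{k+1}(Z_{k+1} \mid X')\, \pi_{k+1|k}(X')\, \delta X'} ,
\]

where the set integral of a function \(F\) is

\[
\int F(X)\, \delta X = \sum_{i=0}^{\infty} \frac{1}{i!} \int F(\{x_1,\dots,x_i\})\, \mathrm{d}(x_1,\dots,x_i) .
\]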
a glmb is a labeled rfs on distributed according to ^{{\mathbf{x}}}\label{eq : glmb}\ ] ] where is a discrete index set , and satisfy : the glmb density ( ) can be interpreted as a mixture of multi - object exponentials .each term in ( ) consists of a weight that depends only on the labels of , and a multi - object exponential ^{{\mathbf{x}}} ] .two sensor sets are used to represent scenarios with different observability capabilities . in particular : i ) a single radar in the middle of the surveillance region is used as it guarantee observability ; ii ) a set of _ range - only _ ( time of arrival , toa ) , deployed as shown in fig .[ fig:3toa ] , are used as they do not guarantee observability individually , but information from different sensors need to be combined to achieve it .the scenario consists of targets as depicted in fig .[ fig:5trajectories ] . .the indicates a rendezvous point . ] .the indicates a rendezvous point . ] for the sake of comparison , the m-glmb is also compared with the -glmb ( -glmb ) and lmb ( lmb ) filters .the three tracking filters are implemented using gaussian mixtures to represent their predicted and updated densities . due to the non linearity of the sensors , the _ unscented kalman filter _ ( ukf ) is exploited to update means and covariances of the gaussian components .the kinematic object state is denoted by ^{\top} ] and the sampling interval is ] .the measurement functions of the toa of fig .[ fig:3toa ] are : where represents the known position of sensor ( indexed with ) .the standard deviation of the toa measurement noise is taken as ] .the reported metric is averaged over monte carlo trials for the same target trajectories but different , independently generated , clutter and measurement noise realizations .the duration of each simulation trial is fixed to ] , ) using 1 radar . ] ] , ) using 3 toa . ] $ ] , ) using 3 toa . ]this paper has proposed a novel approximation to the -glmb filter with standard point detection measurements .the result is based on a principled glmb approximation to the labeled rfs posterior that matches exactly the posterior phd and cardinality distribution .the proposed approximation can be interpreted as performing a marginalization with respect to the association histories arising from the -glmb filter .the key advantage of the new filter lies in the reduced growth rate of the number of new components generated at each filtering step .in particular , the approximation ( or marginalization ) step performed after each update is guaranteed to reduce the number of generated components which normally arise from multiple measurement - to - track association maps .typically , the proposed m-glmb filter requires much less computation and storage especially in multi - sensor scenarios compared to the -glmb filter .furthermore the proposed m-glmb filter inherits the same implementation strategies and parallelizability of the -glmb filter . a connection and alternative derivation of the lmb filter is also provided .future works will consider distributed estimation with the m-glmb filter .m. mallick , s. coraluppi , and c. carthel , multi - target tracking using multiple hypothesis tracking , in _ integrated tracking , classification , and sensor management : theory and applications _ , m. mallick , v. krishnamurthy , b .-n . vo ( eds . ) , wiley / ieee , pp . 
165201 , 2012 .vo , and b .- a random finite set conjugate prior and application to multi - target tracking , proc .intelligent sensors , sensor networks & information processing _ ( issnip2011 ) , adelaide , australia , dec . 2011 .l. a. mcgee , s. f. schmidt and g. l. smith , `` applications of statistical filter theory to the optimal estimation of position and velocity on board a circumlunar vehicle , '' nasa technical report r-135 , tech .rep . , 1962. s. j. julier and j. k. uhlmann , `` a non - divergent estimation algorithm in the presence of unknown correlations , '' _ proc . of the ieee american control conference ( acc 1997 )4 , pp . 23692373 , 1997 .
the multi - target bayes filter proposed by mahler is a principled solution to recursive bayesian tracking based on rfs or fisst . the -glmb filter is an exact closed form solution to the multi - target bayes recursion which yields joint state and label or trajectory estimates in the presence of clutter , missed detections and association uncertainty . due to the presence of explicit data associations in the -glmb filter , the number of components in the posterior grows without bound in time . in this work we propose an efficient approximation to the -glmb filter which preserves both the phd and the cardinality distribution of the labeled posterior . this approximation also facilitates efficient multi - sensor tracking with detection - based measurements . simulation results are presented to verify the proposed approach . rfs , fisst , -glmb filter , lmb filter , phd
consider a quantity distributed on .suppose that the distribution evolves along a controlled vector field according to the law of mass conservation ; the control parameter can be chosen at every time from a compact set .given a time moment and a target set , we aim at finding a control that maximizes the total mass within at the time moment . more formally the problem can be written as follows where is the density function of an initial distribution and \right\}\ ] ] is the set of admissible controls .first , let us give several motivating examples . [[ dynamical - system - with - uncertain - initial - state . ] ] dynamical system with uncertain initial state .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + let be an initial state of the following dynamical system assume that one wants to find a control that brings the state of the system to a target set at a given time .now let the precise initial state be unknown .instead , assume that a _ probability distribution _ of on the state space is given .in this case , one naturally looks for a control that maximizes the _ probability _ of finding the state of the system within the target set at time .this leads to problem .[ [ flock - control . ] ] flock control .+ + + + + + + + + + + + + + let characterize an initial distribution of sheep in a given area .assume that the herd drifts along a vector field .assume , in addition , that there is a dog located at .in this case , a sheep located at obtains an additional velocity when is positive this means that the sheep tries to escape the dog .if the interaction between the sheep are not relevant , the motion of the whole herd is described by the equation \rho\right)=0.\ ] ] typically , the dog wants to steer the herd to a target set at a given time . in this casean optimal strategy of the dog is determined by .[ [ beam - control . ] ] beam control .+ + + + + + + + + + + + + the motion of a charged particle in an electromagnetic field is described by the system the particle is characterised by its charge , rest mass , and the relativistic mass . above , are the particle s coordinates , are the velocities , and are the electric and magnetic fields , is the speed of light , and represents additional forces due to the interaction between the particle and the environment .assume that the electromagnetic field depends on a parameter , which can be chosen at every time moment , i.e. , then can be rewritten in the form with . producing a single particle ( for example , in a particle accelerator )is extremely difficult . instead , a _ beam of particles _ is produced . at the initial time momentevery beam is characterized by its density function defined on the state space of .usually , one wants to focus the beams to ensure that the particles traveling in the accelerator collide . clearly , one can formulate this problem as with the target set where is a desired radius of the beam .the connection between dynamical systems with uncertain initial states and continuity equations is well - known . in the context of control theoryit was mentioned in . a basic mathematical model for flocks controlled by a leaderis presented in .in contrast to these papers we neglect the interactions between flock s members .formally , a nonlocal term describing the internal dynamics of the flock is omitted . 
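a small particle ( monte carlo ) sketch of the flock - control example above : samples of the initial density are transported by the drift plus the repulsion away from the dog , and the objective is estimated as the fraction of mass inside a target set at the final time . the drift field , the interaction kernel , the dog path and the target ball below are all illustrative choices , not taken from the paper .

```python
import numpy as np

rng = np.random.default_rng(0)

def drift(t, x):                                   # assumed background vector field v(t, x)
    return np.stack([-0.2 * x[:, 1], 0.2 * x[:, 0]], axis=1)

def repulsion(x, u, k0=1.0, width=1.0):
    # a sheep at x gains velocity k(|x-u|)(x-u): it is pushed away from the dog at u
    d = x - u                                      # (N, 2)
    r = np.linalg.norm(d, axis=1, keepdims=True)
    return k0 * np.exp(-(r / width) ** 2) * d

def mass_in_target(x, center, radius):
    return np.mean(np.linalg.norm(x - center, axis=1) <= radius)

# particles ~ rho_0, transported by dx/dt = v(t,x) + k(|x-u(t)|)(x-u(t))
x = rng.normal([0.0, 0.0], 0.5, size=(5000, 2))
T, steps = 1.0, 200
dt = T / steps
dog_path = lambda t: np.array([-2.0 + 3.0 * t, -1.0])   # an admissible control u(t), chosen ad hoc
for i in range(steps):
    t = i * dt
    x = x + dt * (drift(t, x) + repulsion(x, dog_path(t)))

print("mass in target at final time:", mass_in_target(x, center=np.array([1.0, 1.0]), radius=0.8))
```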
for models of the beam controlwe refer to .existence results for optimal control problems governed by continuity equations seem to be missing in the current literature ( at least for nonlinear vector fields ) .necessary optimality conditions were derived by d.a .ovsyannikov in and by a.i .propo with his collaborators in .we remark that all the papers mentioned above consider the terminal cost functional with _ smooth _ and/or an integral cost functional .moreover , the initial density is always assumed to be smooth . finally , let us mention the paper of s.s .mazurenko , where the dynamical programming method was developed , and the papers of r. brockett , who discussed controllability and some connections with stochastic equations .in this paper we first study the existence of optimal controls .next , assuming that the initial density is smooth and the target set is sufficiently regular , we derive a necessary optimality condition . finally , we discuss the case of integrable and arbitrary .more precisely , we replace by a perturbed problem which satisfies all the assumptions of our necessary optimality condition .then we show that every control that is optimal for is `` nearly optimal '' for .we believe that our choice of the cost functional is not relevant in the sense that the methods developed here should work in other cases as well . at the same time , in many situations it seems natural to maximize the total mass within a target set at a given time moment .moreover , our choice of the cost functional allows to derive a rather simple necessary optimality condition ( see section [ sec : pmp ] ) . as we shall see later , dealing with the continuity equation for measuresis more natural than dealing with that of for functions .for this reason , the main subject of our study is the following optimal control problem now , is just a particular case of , where the initial probability measure is absolutely continuous with density .the paper is organized as follows . in section [ sec : prlim ] we introduce basic notations , discuss the continuity equation and generalized controls . in the next section we study the existence of optimal controls .then , in section [ sec : pmp ] , we prove a necessary optimality condition for a special case of , where the initial distribution is absolutely continuous with smooth density and the target set has certain regularity properties . in the last sectionwe show that the general problem can be replaced by a perturbed problem such that satisfies all the assumptions of our necessary optimality condition and every optimal control for is `` nearly optimal '' for . for the readers convenience , we place in appendix a brief introduction to young measures .in what follows , is the euclidean norm of and is the scalar product of . by and we mean the closed and the open -neighbourhoods of , i.e. , let denote the set of all probability measures on .we equip with the prohorov distance : the convergence in the resulting metric space is exactly the narrow convergence of measures ( see ) . 
given a borel map and a probability measure , define the _ pushforward ( or image ) measure _ by recall the change of variables formula which holds for all bounded borel functions .below , is the -dimensional lebesgue measure , is the -dimensional hausdorff measure , is a probability measure which is absolutely continuous with respect to and whose density is .consider a vector field \times{{\mathbb{r}}}^n\to{{\mathbb{r}}}^n and ,}\\ { { \left|v(t , x)-v(t , x')\right|}}\leq l|x - x'| \quad\mbox { and } \quad { { \left|v(t , x)\right|}}\leq c\left(1+|x|\right ) .\end{cases}\ ] ] then there exists a unique solution to the cauchy problem the map is called _ the flow of the vector field _recall that the function is a diffeomorphism and for any ] is a _ distributional solution _ to if for any bounded lipschitz continuous test function \times{{\mathbb{r}}}^n\to { { \mathbb{r}}} ] whose support is contained in \times u ] .it is evident that a young measure _ associated with _a usual control belongs to ;u\right) ] is compact in the space of young measures ;{{\mathbb{r}}}^m\right) ] is relatively narrowly compact by theorem [ thm : prokhorov ] .it remains to verify that it is closed . to this end , consider a converging sequence ;u\right) ] . for each , we have ,t[\times u^c\,\right)=0 ] is open , it follows from basic properties of the narrow convergence that ,t[\times u^c\,\right ) \geq \nu\left(\,]0,t [ \times u^c\,\right)=\nu\left([0,t]\times u^c\right).\ ] ] we say that is a _ trajectory corresponding to a generalized control _ if is a distributional solution to the cauchy problem where with } ] , , be vector fields satisfying * ( a0 ) * with common constants and , and let denote a solution to the cauchy problem if weakly in ;{{\mathbb{r}}}^n\right) ] . fora fixed consider the obvious identity {\mathinner{\mathrm{d}{s } } } \\ & + \int_0^t \left[v_j(s , x_0(s))-v_0(s , x_0(s))\right]{\mathinner{\mathrm{d}{s}}}.\end{aligned}\ ] ] observing that we obtain , by gronwall s inequality , the following estimate : where {\mathinner{\mathrm{d}{s}}}.\ ] ] to complete the proof it remains to show that .for this purpose , take a sequence such that and consider the identity {\mathinner{\mathrm{d}{s } } } \\ & + \sum_{i=1}^{n}\int_{\tau_{i-1}}^{\tau_i}\left[v_j(s , x_0(\tau_{i-1}))-v_0(s , x_0(\tau_{i-1}))\right]{\mathinner{\mathrm{d}{s } } } \\ & + \sum_{i=1}^{n}\int_{\tau_{i-1}}^{\tau_i}\left[v_0(s , x_0(\tau_{i-1}))-v_0(s , x_0(s))\right]{\mathinner{\mathrm{d}{s}}}.\end{aligned}\ ] ] by assumption * ( a0 ) * , we have for all , , and .moreover , by applying the standard ode s technique , we can easily verify that ,\ ] ] where depends only on , , and .therefore , {\mathinner{\mathrm{d}{s}}}\big|.\ ] ] passing to the limit as and then as , we get .[ lem : weakmu ] let and \to \mathcal{p}({{\mathbb{r}}}^n) ] for all , then .\ ] ] let denote the flow of .in view of lemma [ lem : weakf ] , the sequence converges pointwise to .since , the statement follows from lemma [ lem : f_j ] .[ lem : convex ] let and .if the support of is contained in a compact set , then where is the convex hull of .choose a sequence of probability measures of the form and such that narrowly .this can be done as in ( * ? ? ?* example 8.1.6 ) .clearly , at the same time , we have since is closed , the proof is complete .in this section we study the existence of optimal controls for . 
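before the existence analysis , the flow / pushforward machinery introduced above can be illustrated numerically : the solution of the continuity equation started from an initial measure is represented as the pushforward of that measure by the flow map , so that , by the change - of - variables formula , integrals of a test function against the terminal measure are monte carlo averages of the function composed with the flow over samples of the initial measure . the specific vector field and test function below are illustrative .

```python
import numpy as np

rng = np.random.default_rng(1)

def v(t, x):                    # illustrative vector field with (A0)-type Lipschitz/growth bounds
    return np.stack([np.sin(x[:, 1]), np.cos(x[:, 0])], axis=1)

def flow(x0, T=1.0, steps=500):
    """Euler approximation of the flow map Phi_T of v, applied to an array of points."""
    x, dt = x0.copy(), T / steps
    for i in range(steps):
        x = x + dt * v(i * dt, x)
    return x

# mu_0 = standard Gaussian; mu_T = Phi_T # mu_0 is represented by transported samples, and
# (change of variables) the integral of f against mu_T equals the integral of f(Phi_T(x)) against mu_0.
f = lambda x: np.exp(-np.sum(x ** 2, axis=1))          # bounded Borel test function
samples = rng.normal(size=(20000, 2))                  # samples of mu_0
estimate = np.mean(f(flow(samples)))                   # Monte Carlo value of the integral of f d(mu_T)
print("integral of f against mu_T ~", estimate)
```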
throughout the section , we make the following assumption \times { { \mathbb{r}}}^n\times u\to{{\mathbb{r}}}^n l c t\in [ 0,t] u\in u x , x'\in{{\mathbb{r}}}^n ] of generalized controls .let be a maximizing sequence of generalized controls .in view of proposition [ prop : compact ] , we may assume , without loss of generality , that converges to some . consider the averaged vector fields defined by it is easy to see that each satisfies assumption * ( a0 ) * with constants and as in assumption * ( a1)*. hence , for each , there exists a unique trajectory corresponding to . for a given ,we denote by a continuous extension of to \times { { \mathbb{r}}}^m ] such that for almost every ] , we have here with being the flow of the vector field , is the measure theoretic outer unit normal to at , is the -dimensional hausdorff measure , is the density of .\1 . if is an -dimensional surface , then automatically has the interior ball property .moreover , each is also an -dimensional surface .consequently , in this case is the usual outer unit normal to at , and is the usual -dimensional volume form .the necessary optimality condition has a visual geometrical meaning .let be optimal .shift the target set along the vector field backwards .denote the resulting image of at the time moment by .then minimizes the outflow through at almost every time moment .the proof of theorem [ thm : pmp ] is based on ideas of the pontryagin maximum principle ( see , e.g. , ) .in addition , it relies on notions of the _ interior ball property _ of a set and the _ directional derivative _ of a real - valued function on .we discuss these notions below , but first let us briefly outline the proof .[ [ sketch - of - the - proof ] ] sketch of the proof : + + + + + + + + + + + + + + + + + + + + let be an optimal control , be the corresponding trajectory , and be the flow of . fix some ] and , we define the perturbed control by the formula ,\\ \bar u(t ) & \mbox{otherwise } , \end{cases}\ ] ] and denote the corresponding trajectory by .let and be the flows of the vector fields and .notice that finally , let denote the flow of the vector field [ lem : needle ] under the assumptions of theorem [ thm : pmp ] , we have ,\ ] ] where is defined by .* in view of , we have to verify that .\ ] ] * 2 . * it follows from that the map is lipschitz continuous .now , by proposition [ prop : mainbound ] and by the lipschitz property of , we obtain for some positive .* 3 . * for each , we have {\mathinner{\mathrm{d}{s } } } \notag \\ \label{eq : first } & \;= -w\left(x\right ) { \varepsilon}- \int_{0}^{{\varepsilon}}\int_0 ^ 1 dw\left(\alpha w_s^0(x ) + ( 1-\alpha)x\right){\mathinner{\mathrm{d}{\alpha}}}\cdot \left(w_s^0(x)-x\right){\mathinner{\mathrm{d}{s}}}.\end{aligned}\ ] ] on the other hand , it follows from * ( a1 ) * that for all ] and , where * 5 .* set .now we combine , and then recall together with * ( a1 ) * in order to estimate the derivatives of the vector fields . in this way we find that for every , where as a consequence , we get notice that the map is integrable .hence , applying the lebesgue differentiation theorem and then looking at the definition of , we conclude that holds for almost every ] , where is the density of .since is an optimal trajectory , it follows that for every . therefore , where . 
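the needle - variation idea used in the proof can also be probed numerically : replace the reference control by a constant value on a short interval and compare the terminal mass in the target set ; under the necessary condition , no such perturbation should improve the objective to first order . the dynamics , control set and target ball in this sketch are illustrative stand - ins , not the paper 's data .

```python
import numpy as np

rng = np.random.default_rng(2)
U = np.linspace(-1.0, 1.0, 5)             # compact control set (illustrative)
T, steps = 1.0, 200
dt = T / steps

def v(t, x, u):                            # controlled vector field (illustrative)
    return np.stack([x[:, 1], u - 0.5 * x[:, 0]], axis=1)

def objective(control, x0, target_c=np.array([0.5, 0.0]), target_r=0.4):
    """Terminal mass in the target ball under a piecewise-constant control (one value per step)."""
    x = x0.copy()
    for i in range(steps):
        x = x + dt * v(i * dt, x, control[i])
    return np.mean(np.linalg.norm(x - target_c, axis=1) <= target_r)

x0 = rng.normal([0.0, 0.0], 0.3, size=(4000, 2))
ref = np.zeros(steps)                      # candidate control u_bar(t) = 0
base = objective(ref, x0)

# needle perturbation: constant value u on [tau, tau + eps]
tau_idx, eps_steps = 100, 5
for u in U:
    pert = ref.copy()
    pert[tau_idx:tau_idx + eps_steps] = u
    print(f"u = {u:+.2f}  objective change = {objective(pert, x0) - base:+.4f}")
```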
by the definition of , we obtain for all and almost every ] ) .extract from it a subsequence such that converges narrowly to some .then , by , we obtain now it follows from lemma [ lem : convergence] that thus , every converging subsequence of satisfies .this proves .let us prove the second part of the theorem .again , take any converging sequence .extract a subsequence such that converges narrowly to some as . in this case , by lemma [ lem : weakmu ] , we have since is absolutely continuous , we conclude that is absolutely continuous as well . now the identity imlies that is a _continuty set _ of .thus , it follows from that in other words , every converging subsequence of has as its limit .this proves and completes the proof of theorem [ thm : vs ] .below , is the lebesgue measure on , is a borel set with , and is the -algebra of all borel measurable subsets of .the _ narrow topology _ on is the weakest topology for which the maps are continuous , where runs through the set of all bounded charatodory integrands - measurable map which is continuous w.r.t . the second variable . ] on .\1 . in the above definition onemay replace charathodory integrands by continuous ones .this fact follows easily from the scorza - dragoni theorem . as a consequence, the narrow convergence of young measures enjoys all the properties of the narrow convergence of probability measures .\2 . in the definition onemay also take test functions of the form where is a borel subset of and is a bounded continuous function on .bearing that in mind , one may easily show that the narrow limit of a sequence of young measures is a young measure . indeed , taking , we get every young measure can be described by its _ disintegration _ , which is a family of probability measures on characterized by where .one may show that for any -integrable {\mathinner{\mathrm{d}{x}}}.\ ] ] consider , for instance , the young measure associated with a measurable function .its disintegration is .the notion of disintegration explains why we think of young measures as generalized controls .indeed , a map is a usual control : at every the value of the control parameter is prescribed and equal to .a young measure is a generalized control : at every the control parameter is taken randomly according to the probability distribution . therefore young measures are analogous to mixed strategies in game theory , where players choose their strategies randomly according to a probability distribution .[ prop : ymlimit ] let be a sequence of young measures .let be a bounded continuous map .if converges narrowly to a young measure , then the sequence of maps converges to the map weakly in . take a bounded measurable function .the map is a bounded carathodory integrand .therefore , we may equivalently write changing on a -negligible set does not affect both sides of the equality . therefore , it holds for any .
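a small numerical illustration of why young measures ( generalized controls ) appear : a rapidly switching sequence of ordinary controls taking the two values -1 and +1 converges , in the sense of the narrow topology above , to the generalized control that places mass 1/2 on each value at every time , and time integrals of a carathodory integrand converge to the corresponding averaged value . the integrand below is an arbitrary bounded choice .

```python
import numpy as np

T = 1.0
f = lambda t, u: np.cos(3 * t) * u ** 3 + u ** 2        # bounded Caratheodory integrand (illustrative)

def chattering_integral(j, n_grid=200000):
    """Integral over [0, T] of f(t, u_j(t)) for u_j switching between +1 and -1 on intervals of length T/(2j)."""
    t = (np.arange(n_grid) + 0.5) * T / n_grid
    u = np.where(np.floor(2 * j * t / T) % 2 == 0, 1.0, -1.0)
    return np.mean(f(t, u)) * T

# limit predicted by the Young measure nu_t = 0.5*delta_{+1} + 0.5*delta_{-1}
t = (np.arange(200000) + 0.5) * T / 200000
limit = np.mean(0.5 * f(t, 1.0) + 0.5 * f(t, -1.0)) * T

for j in (1, 4, 16, 64):
    print(j, chattering_integral(j), "->", limit)
```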
an optimal control problem for the continuity equation is considered . the aim of a `` controller '' is to maximize the total mass within a target set at a given time moment . the existence of optimal controls is established . for a particular case of the problem , where an initial distribution is absolutely continuous with smooth density and the target set has certain regularity properties , a necessary optimality condition is derived . it is shown that for the general problem one may construct a perturbed problem that satisfies all the assumptions of the necessary optimality condition , and any optimal control for the perturbed problem , is nearly optimal for the original one . _ 2000 mathematics subject classification : _ 49k20 , 49j15 _ keywords : _ continuity equation , liouville equation , optimal control , beam control , flock control , necessary optimality condition , variational stability
motion estimation from onboard sensors is currently a hot topic in robotics and computer vision communities , as it can enable emerging technologies such as autonomous cars , augmented and virtual reality , service robots and drone navigation . among different sensor modalities ,visual - inertial setups provide a cheap solution with great potential .on the one hand cameras provide rich information of the environment , which allows to build 3d models , localize the camera and recognize already visited places . on the other hand imu sensors provide self - motion information , allowing to recover metric scale for monocular vision , and to estimate gravity direction , rendering absolute pitch and roll of the sensor .visual - inertial fusion has been a very active research topic in the last years .the recent research is focus on tightly - coupled ( i.e. joint optimization of all sensor states ) visual - inertial odometry , using keyframe - based non - linear optimization or filtering .nevertheless these approaches are only able to compute incremental motion and lack the capability to close loops and reuse a map of an already mapped environment .this implies that estimated trajectory accumulates drift without bound , even if the sensor is always localizing in the same environment .this is due to the use of the marginalization of past states to maintain a constant computational cost , or the use of full smoothing , with an almost constant complexity in exploration but that can be as expensive as a batch method in the presence of loop closures . in this paperwe present visual - inertial orb - slam , to the best of our knowledge the first keyframe - based visual - inertial slam that is able to close loops and reuse its map .inspired by our tracking optimizes current frame assuming a fixed map , and our backend performs local bundle adjustment ( ba ) , optimizing a local window of keyframes , including an outer window of fixed keyframes that ensures global consistency .this approach allows for a constant time local ba , in contrast to full smoothing , and as not marginalizing past states we are able to reuse them .we detect large loops using place recognition and a lightweight pose - graph optimization , followed by full ba in a separate thread not to interfere with real - time operation .[ fig : view ] shows the reconstruction of our system in a sequence with continuous revisiting .both tracking and local ba work fixing states , which could potentially bias the solution , therefore we need a very good visual - inertial initialization that provides accurate state values before we start fixing states . to this endwe propose in section [ sec : ini ] a novel imu initialization method that estimates scale , gravity direction , velocity , and gyroscope and accelerometer biases , by processing the keyframes created by orb - slam from a few seconds of video .in contrast to where vision and imu are jointly estimated , we only need to estimate the imu variables , as the vision part is already solved by orb - slam .we divide the initialization into simpler subproblems .we first propose a method to estimate gyroscope biases which are ignored in .we then solve scale and gravity without considering accelerometer bias , in a similar way to ( which ignored gyroscope biases and did not solve scale as it uses stereo vision ) .we then introduce the knowledge of gravity magnitude and solve for accelerometer bias , ignored in , also refining scale and gravity direction . 
in a final stepwe compute the velocity of all keyframes .we validate the method in real sequences , concluding that it is an efficient , reliable and accurate method that solves imu biases , gravity , velocity and scale .moreover our method is general and could be applied to any monocular slam system .the input for our visual - inertial orb - slam is a stream of imu measurements and monocular camera frames .we consider a conventional pinhole - camera model with a projection function , which transforms 3d points in camera reference , into 2d points on the image plane : f_v\frac{y_\mathtt{c}}{z_\mathtt{c } } + c_v \end{bmatrix } , \quad \mathbf{x_\mathtt{c}}=\left[x_\mathtt{c}\,\,y_\mathtt{c}\,\,z_\mathtt{c}\right]^t\ ] ] where ^t ] the principal point .this projection function does not consider the distortion produced by the camera lens .when we extract keypoints on the image , we undistort their coordinates so that they can be matched to projected points using .the imu , whose reference we denote with , measures the acceleration and angular velocity of the sensor at regular intervals , typically at hundreds of herzs .both measurements are affected , in addition to sensor noise , by slowly varying biases and of the accelerometer and gyroscope respectively . moreoverthe accelerometer is subject to gravity and one needs to subtract its effect to compute the motion .the discrete evolution of the imu orientation , position and velocity , in the world reference , can be computed as follows : the motion between two consecutive keyframes can be defined in terms of the preintegration , and from all measurements in - between .we use the recent imu preintegration described in : where the jacobians and account for a first - order approximation of the effect of changing the biases without explicitly recomputing the preintegrations .both preintegrations and jacobians can be efficiently computed iteratively as imu measurements arrive .camera and imu are considered rigidly attached and the transformation ] means the first two columns of the matrix .stacking all relations between three consecutive keyframes we form a linear system of equations which can be solved via svd to get the scale factor , gravity direction correction and accelerometer bias . in this casewe have equations and 6 unknowns and we need again at least 4 keyframes to solve the system. we can compute the condition number ( i.e. the ratio between the maximum and minimum singular value ) to check if the problem is well - conditioned ( i.e. the sensor has performed a motion that makes all variables observable ) .we could relinearize and iterate the solution , but in practice we saw that a second iteration does not produce a noticeable improvement .we considered relations of three consecutive keyframes in equations and , so that the resulting linear systems do not have the additional unknowns corresponding to velocities .the velocities for all keyframes can now be computed using equation , as scale , gravity and bias are known . 
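a minimal sketch of the pinhole projection introduced above ; the formula is standard , but the intrinsics in the example are not the euroc calibration and lens distortion is ignored ( the text handles distortion by undistorting keypoint coordinates instead ) .

```python
import numpy as np

def project(X_c, fu, fv, cu, cv):
    """Pinhole projection of a 3D point in the camera frame onto the image plane (no distortion)."""
    x, y, z = X_c
    assert z > 0, "point must be in front of the camera"
    return np.array([fu * x / z + cu, fv * y / z + cv])

# illustrative intrinsics, not the EuRoC calibration
print(project(np.array([0.2, -0.1, 2.0]), fu=460.0, fv=460.0, cu=376.0, cv=240.0))
```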
to compute the velocity of the most recent keyframe, we use the velocity relation .when the system relocalizes after a long period of time , using place recognition , we reinitialize gyroscope biases by solving .the accelerometer bias is estimated by solving a simplified , where the only unknown is the bias , as scale and gravity are already known .we use 20 consecutive frames localized with only vision to estimate both biases .we evaluate the proposed imu initialization method , detailed in section [ sec : ini ] and our visual - inertial orb - slam in the euroc dataset .it contains 11 sequences recorded from a micro aerial vehicle ( mav ) , flying around two different rooms and an industrial environment .sequences are classified as _ easy _ , _ medium _ and _ difficult _ , depending on illumination , texture , fast / slow motions or motion blur .the dataset provides synchronized global shutter wvga stereo images at with imu measurements at and trajectory ground - truth .these characteristics make it a really useful dataset to test visual - inertial slam systems .the experiments were performed processing left images only , in an intel core i7 - 4700mq computer with 8 gb ram .+ + + + we evaluate the imu initialization in sequences _v1_01_easy _ and _ v2_01_easy_. we run the imu initialization from scratch every time a new keyframe is inserted by orb - slam .we run the sequences at a lower frame - rate so that the repetitive initialization does not interfere with the normal behavior of the system .the goal is to analyze the convergence of the variables as more keyframes , i.e. longer trajectories , are processed by the initialization algorithm .[ fig : ini ] shows the estimated scale and imu biases .it can be seen that between 10 and 15 seconds all variables have already converged to stable values and that the estimated scale factor is really close to the optimal .this optimal scale factor is computed aligning the estimated trajectory with the ground - truth by a similarity transformation .[ fig : ini ] also shows the evolution in the condition number , indicating that some time is required to get a well - conditioned problem .this confirms that the sensor has to perform a motion that makes all variables observable , especially the accelerometer bias . the last row in fig .[ fig : ini ] shows the total time spent by the initialization algorithm , which exhibits a linear growth .this complexity is the result of not including velocities in and , which would have resulted in a quadratic complexity when using svd to solve these systems . 
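the condition number monitored above is simply the ratio between the largest and smallest singular value of the stacked initialization system ; a one - line computation ( the matrix here is random and only the number of unknowns , 6 , is taken from the text ) .

```python
import numpy as np

def condition_number(A):
    """Ratio between the largest and smallest singular value of the stacked linear system."""
    s = np.linalg.svd(A, compute_uv=False)
    return s[0] / s[-1]

A = np.random.default_rng(3).normal(size=(3 * 20, 6))   # e.g. a few equations per keyframe triple, 6 unknowns
print(condition_number(A))
```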
subdividing the initialization in simpler subproblems , in contrast to , results in a very efficient method .the proposed initialization allows to start fusing imu information , as gravity , biases , scale and velocity are reliably estimated .for the euroc dataset , we observed that 15 seconds of mav exploration gives always an accurate initialization .as a future work we would like to investigate an automatic criterion to decide when we can consider an initialization successful , as we observed that an absolute threshold on the condition number is not reliable enough ..keyframe trajectory accuracy in euroc dataset [ cols="<,^,^,^,^ " , ] we evaluate the accuracy of our visual - inertial orb - slam in the 11 sequences of the euroc dataset .we start processing sequences when the mav starts exploring .the local window size for the local ba is set to 10 keyframes and the imu initialization is performed after 15 seconds from monocular orb - slam initialization .the system performs a full ba just after imu initialization .table [ t : acc ] shows the translation root mean square error ( rmse ) of the keyframe trajectory for each sequence , as proposed in .we use the raw vicon and leica ground - truth as the post - processed one already used imu .we observed a time offset between sensor and ground - truth of in the _ vicon _ _ room _ _ 2 _ sequences and in the _ machine _ _ hall _ , that we corrected when aligning both trajectories .we also measure the scale factor that would align best the estimated trajectory and ground - truth .this scale factor can be regarded as the residual scale error of the trajectory and reconstruction .our system is able to process all these sequences in real - time , except sequence _ v1_03_difficult _, where the movement is too extreme for the monocular system to initialize .the results show that our system is able to recover motion with metric scale , with a scale error typically below .the system achieves a typical precision of for room environments and of for industrial environments .note that our system is able to close loops and localize using the existing map when revisiting , which avoids drift accumulation .these results can be improved by applying a full ba afterwards , as seen in table [ t : acc ] .the reconstruction for sequence _v1_02_medium _ can be seen in fig .[ fig : view ] , and in the accompanying video . in order to test the capability of our system to reuse a previous map of an environment ,we run in a row all sequences of the same scene .we first process the first sequence and perform a full ba. then we run the rest of the sequences , where our system performs relocalization and continue doing slam .we then compare the accumulated keyframe trajectory with the ground - truth .bottom rows of table [ t : acc ] show accuracy results .as there exists a previous map , our system is now able to localize the camera in sequence _ v1_03_difficult_.these results show that there is no drift accumulation when revisiting the same scene , as the rmse for all sequences is not larger than for individual sequences .we have compared our system with the state - of - the - art direct visual - inertial odometry for stereo cameras , which also showed results in _ vicon room 1 _ sequences , allowing for a direct comparison .[ fig : comp ] shows the relative pose error ( rpe ) . to compute the rpe for our method, we need to recover the frame trajectory , as only keyframes are optimized by our backend . 
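the keyframe accuracy reported above ( translation rmse after aligning the estimated trajectory to ground truth , together with the residual scale factor ) can be computed with a closed - form similarity alignment ; the umeyama solution below is one common choice and is an assumption , since the paper does not state which alignment it uses .

```python
import numpy as np

def align_sim3(est, gt):
    """Closed-form similarity alignment (Umeyama): returns scale s, rotation R, translation t
    minimizing || gt - (s * R @ est + t) ||^2 over corresponding 3D positions (N x 3 arrays)."""
    mu_e, mu_g = est.mean(0), gt.mean(0)
    E, G = est - mu_e, gt - mu_g
    Sigma = G.T @ E / len(est)
    U, D, Vt = np.linalg.svd(Sigma)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / np.mean(np.sum(E ** 2, axis=1))
    t = mu_g - s * R @ mu_e
    return s, R, t

def ate_rmse(est, gt):
    s, R, t = align_sim3(est, gt)
    err = gt - (s * (R @ est.T).T + t)
    return s, np.sqrt(np.mean(np.sum(err ** 2, axis=1)))

# usage with synthetic data: gt = 1.02 * est + t plus noise, so the recovered s is the residual scale
rng = np.random.default_rng(4)
est = rng.normal(size=(100, 3))
gt = 1.02 * est + np.array([0.5, -0.3, 1.0]) + 0.01 * rng.normal(size=(100, 3))
print(ate_rmse(est, gt))   # ~ (1.02, ~0.01-0.02)
```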
to this end ,when tracking a frame we store a relative transformation to a reference keyframe , so that we can retrieve frame pose from the estimated keyframe pose at the end of the execution .we have not run a full ba at the end of the experiment .we can see the error for the visual - inertial odometry method grows with the traveled distance , while our visual - inertial slam system does not accumulate error due to map reuse .the stereo method is able to work in _ v1_03_difficult _ , while our monocular method fails .our monocular slam successfully recovers metric scale , and achieves comparable accuracy in short paths , where the advantage of slam is negligible compared to odometry .this is a remarkable result of our feature - based monocular method , compared to which is direct and stereo .+ + sequence : v1_01_easy + + sequence : v1_02_mediumwe have presented in this paper a novel tightly coupled visual - inertial slam system , that is able to close loops in real - time and localize the sensor reusing the map in already mapped areas .this allows to achieve a _ zero - drift _ localization , in contrast to visual odometry approaches where drift grows unbounded .the experiments show that our monocular slam recovers metric scale with high precision , and achieves better accuracy than the state - of - the - art in stereo visual - inertial odometry when continually localizing in the same environment .we consider this _ zero - drift _ localization of particular interest for virtual / augmented reality systems , where the predicted user viewpoint must not drift when the user operates in the same workspace .moreover we expect to achieve better accuracy and robustness by using stereo or rgb - d cameras , which would also simplify imu initialization as scale is not longer unknown .we thanks the authors of for releasing the euroc dataset and the authors of for providing their data to compare our results in fig . [fig : comp ] .
in recent years there have been excellent results in visual - inertial odometry techniques , which aim to compute the incremental motion of the sensor with high accuracy and robustness . however these approaches lack the capability to close loops , and trajectory estimation accumulates drift even if the sensor is continually revisiting the same place . in this work we present a novel tightly - coupled visual - inertial simultaneous localization and mapping system that is able to close loops and reuse its map to achieve zero - drift localization in already mapped areas . while our approach can be applied to any camera configuration , we address here the most general problem of a monocular camera , with its well - known scale ambiguity . we also propose a novel imu initialization method , which computes the scale , the gravity direction , the velocity , and gyroscope and accelerometer biases , in a few seconds with high accuracy . we test our system in the 11 sequences of a recent micro - aerial vehicle public dataset achieving a typical scale factor error of and centimeter precision . we compare to the state - of - the - art in visual - inertial odometry in sequences with revisiting , proving the better accuracy of our method due to map reuse and no drift accumulation . slam , sensor fusion , visual - based navigation
the celebrated paper by slepian and wolf has ignited a long lasting , intensive research activity on separate source coding and joint decoding of correlated sources , during the last four decades .besides its extensions in many directions , some of the more recent studies have been devoted to further refinements of performance analysis , such as exponential bounds on the decoding error probability . in particular , gallager derived a lower bound on the achievable random coding error exponent pertaining to random binning ( henceforth , random binning exponent ) , using a technique that is very similar to that of his derivation of the ordinary random coding error exponent ( * ? ? ?* sections 5.55.6 ) .this random binning exponent was later shown by csiszr , krner and marton , to be universally achievable .the work of csiszr and krner is about a universally achievable error exponent using linear codes as well as a non universal , expurgated exponent which is improved at high rates .more recently , csiszr and oohama and han have derived error exponents for the more general setting of coded side information . for large rates at one of the encoders , kelly and wagner improved upon these results , but they did not consider the general case . since slepian wolf decoding is essentially an instance of channel decoding , we find it natural to examine its performance also in the framework of generalized channel decoders , that is , decoders with an erasure / list option .accordingly , this paper is about the analysis of random binning exponents associated with generalized decoders .it should be pointed out that error exponents for list decoders of the slepian wolf encoders were already analyzed in , but in that work , it was assumed that the list size is fixed ( independent of the block length ) and deterministic . in this paper , on the hand , we analyze achievable trade - offs between random binning exponents associated with erasure / list decoders in the framework similar to that of forney .this means , among other things , that the erasure and list options are treated jointly , on the same footing , using an optimum decision rule of a common form , and that in the list option , the list size is a random variable whose typical value might be exponentially large in the block length .the erasure option allows the decoder not to decode when the confidence level is not satisfactory .it can be motivated , for example , by the possibility of generating a rate - less slepian wolf code ( see also ) , provided that there is at least some minimum amount of feedback .we analyze random binning error exponents associated with erasure / list slepian wolf decoding using two different methods and then compare the resulting bounds .the first method follows the well known techniques of gallager and forney , whereas the second method is based on a technique of distance enumeration , or more generally , on type class enumeration .this method has already been used in previous work ( see ( * ? ? ?* chapters 67 ) and references therein ) and proved useful in obtaining bounds on error exponents which are always at least as tight ( and in many cases , strictly tighter ) than those obtained in the traditional methods of the information theory literature .this technique is rooted in the statistical mechanics of certain models of disordered magnetic materials . while in the case of ordinary random coding , the parallel statistical mechanical model is the random energy model ( rem ) ( * ? ? ?* chapters 56 ) , ( * ? ? 
?* chapters 67 ) , here , since random binning is considered , the parallel statistical mechanical model is slightly different , but related .we will refer to this model as the _ random dilution model _ ( rdm ) for reasons that will become apparent in the sequel .as mentioned in the previous paragraph , the type class enumeration method is guaranteed to yield an exponent function which is at least as tight as that of the classical method .but it is also demonstrated that for certain combinations of coding rates and thresholds of the erasure / list decoder , the exponent of the type class enumeration method is strictly tighter than that of the ordinary method .in fact , the gap between them ( i.e. , their ratio ) can be arbitrarily large , and even strictly infinite .in other words , for a small enough threshold ( pertaining to list decoding ) , the former exponent can be infinite while the latter is finite . while the above described study is carried out for fixed rate slepian wolf encoding , we also demonstrate how variable rate encoding ( with a certain structure ) can strictly improve on the random binning exponents .this is shown in the context of the exponents derived using the forney / gallager method , but a similar generalization can be carried out using the other method .the outline of the paper is as follows . in section 2 ,we provide notation conventions and define the objectives of the paper more formally . in section 3 , we derive the random binning exponents using the forney / gallager method , and in section 4 , we extend this analysis to allow variable rate coding . finally , in section 5 , after a short background on the relevant statistical mechanical model ( subsection 5.1 ) , we use the type class enumeration technique , first in the binary case ( subsection 5.2 ) , then compare the resulting exponents to those of section 3 ( subsection 5.3 ) , and finally , generalize the analysis to a general pair of correlated finite alphabet memoryless sources ( subsection 5.4 ) .throughout the paper , random variables will be denoted by capital letters , specific values they may take will be denoted by the corresponding lower case letters , and their alphabets will be denoted by calligraphic letters .random vectors and their realizations will be denoted , respectively , by capital letters and the corresponding lower case letters , both in the bold face font .their alphabets will be superscripted by their dimensions .for example , the random vector , ( positive integer ) may take a specific vector value in , the order cartesian power of , which is the alphabet of each component of this vector .for a given vector , let denote the empirical distribution , that is , the vector , where is the relative frequency of the letter in the vector .let denote its type class of , namely , the set .the empirical entropy associated with , denoted , is the entropy associated with the empirical distribution .similarly , for a pair of vectors , the empirical joint distribution is the matrix of relative frequencies of symbol pairs .the conditional type class is the set .the empirical conditional entropy of given , denoted , is the conditional entropy of given , associated with the joint empirical distribution .the expectation operator will be denoted by .logarithms and exponents will be understood to be taken to the natural base unless specified otherwise .the indicator function will be denoted by .the notation function + x y x y x y x x x y x x x y x y x y x x x y x x x y x y x x x y x x ] .taking now 
the expectation w.r.t .the randomness of the binning , and assuming that , we get \right)^\rho\right\}\\ & \le&e^{nst}\sum_{{\mbox{\boldmath }},{\mbox{\boldmath }}}p^{1-s}({\mbox{\boldmath }},{\mbox{\boldmath }})\left(\sum_{{\mbox{\boldmath }}'\ne{\mbox{\boldmath } } } p^{s/\rho}({\mbox{\boldmath }}',{\mbox{\boldmath }}){\mbox{\boldmath }}\{{{\cal i}}[f({\mbox{\boldmath }}')=f({\mbox{\boldmath }})]\}\right)^\rho\\ & = & e^{nst}\sum_{{\mbox{\boldmath }},{\mbox{\boldmath }}}p^{1-s}({\mbox{\boldmath }},{\mbox{\boldmath }})\left(\sum_{{\mbox{\boldmath }}'\ne{\mbox{\boldmath } } } p^{s/\rho}({\mbox{\boldmath }}',{\mbox{\boldmath }})e^{-nr}\right)^\rho\\ & = & e^{-n(\rho r - st)}\sum_{{\mbox{\boldmath }},{\mbox{\boldmath }}}p^{1-s}({\mbox{\boldmath }},{\mbox{\boldmath }})\left(\sum_{{\mbox{\boldmath }}'\ne{\mbox{\boldmath } } } p^{s/\rho}({\mbox{\boldmath }}',{\mbox{\boldmath }})\right)^\rho\\ & = & e^{-n(\rho r - st)}\sum_{{\mbox{\boldmath }}}p({\mbox{\boldmath }})\sum_{{\mbox{\boldmath }}}p^{1-s}({\mbox{\boldmath }}|{\mbox{\boldmath }})\left(\sum_{{\mbox{\boldmath }}'\ne{\mbox{\boldmath } } } p^{s/\rho}({\mbox{\boldmath }}'|{\mbox{\boldmath }})\right)^\rho\\ & \le&e^{-n(\rho r - st)}\left[\sum_{y\in{{\cal y}}}p(y)\sum_{x\in{{\cal x}}}p^{1-s}(x|y)\left(\sum_{x'\in{{\cal x } } } p^{s/\rho}(x'|y)\right)^\rho\right]^n.\end{aligned}\ ] ] thus , after optimization over and , subject to the constraints , we obtain where \ ] ] with .\ ] ] a few elementary properties of the function are the following . 1 . is jointly convex in both arguments .this follows directly from the fact that it is given by the supremum over a family of affine functions in .clearly , is increasing in and decreasing in .2 . at ,the optimum is , similarly as in and .thus , as observed in , here too , the case is essentially equivalent ( in terms of error exponents ) to ordinary decoding , although operationally , there still might be erasures in this case .3 . for a given , the infimum of such that is which is a concave increasing function . at , 4 . for a given , the supremum of such that is which is a convex increasing function , the inverse of .additional properties can be found similarly as in , but we will not delve into them here .a possible extension of the above error exponent analysis allows variable rate coding . in this section ,we demonstrate how the flexibility of variable rate coding can improve the error exponents .consider an encoder that first sends a relatively short header that encodes the type class of ( using a logarithmic number of bits ) , and then a description of within its type class , using a random bin in the range - 1\} ] are not all positive , the optimum solution is given by +\ ] ] where is the ( unique ) solution to the equation +=r.\ ] ] for , the optimization over is less trivial , but it can still be carried out at least numerically .this subsection can be skipped without essential loss of continuity , however , we believe that before getting into the detailed technical derivation , it would be instructive to give a brief review of the statistical mechanical models that are at the basis of the type class enumeration method . in ordinary random coding ( as opposed to random binning ) , the derivations of bounds on the error probability ( especially in the methods of gallager and forney ) are frequently associated with expressions of the form , where is ( randomly selected ) codebook and is some parameter . as explained in ( * ? ? 
?6 ) , this can be viewed , from the statistical mechanical perspective , as a partition function where plays the role of inverse temperature and where the energy function ( hamiltonian ) is .since the codewords are selected independently at random , then for a given , the energies are i.i.d. random variables .this is , in principle , nothing but the _ random energy model _ ( rem ) , a well known model in statistical mechanics of disordered magnetic materials ( spin glasses ) , which exhibits a phase transition : below a certain critical temperature ( ), the system freezes in the sense that the partition function is exponentially dominated by a subexponential number of configurations at the ground state energy ( zero thermodynamical entropy ) .this phase is called the _ frozen phase _ or the _ glassy phase_. the other phase , , is called the _ paramagnetic phase _ ( see more details in ( * ? ? ?* chap.5 ) ) .accordingly , the resulting exponential error bounds associated with random coding ` inherit ' this phase transition ( see and references therein ) . in random binning the situation is somewhat different . as we have seen in section 3 , here the bound involves an expression like y y x x y e x x x y x x y e x x x y x x e x x x y x x x y x y e x x y e x x y ] . now , is the sum of i.i.d . binary random variables \} ] ( a list option ) .then , ^s\right)\right]-\right.\nonumber\\ & & \left.\ln\left[p^{1-s}\left(1+\left[\frac{1-p}{p}\right]^{1-s}\right)\right]\right\}\\ & = & \lim_{s\to\infty}\left\{r - st - s\ln(1-p)- \ln\left(1+\left[\frac{p}{1-p}\right]^s\right)-(1-s)\ln p-\right.\nonumber\\ & & \left.\ln\left(1+\left[\frac{p}{1-p}\right]^{s-1}\right)\right\}\\ & = & \lim_{s\to\infty}\left\{r - st - s\ln(1-p ) -(1-s)\ln p\right\}\\ & = & \ln\frac{1}{p}+r+\lim_{s\to\infty}s\left[\ln\frac{p}{1-p}-t\right]\\ & = & \infty.\end{aligned}\ ] ] on the other hand , in this case , \}\\ & = & r+|t| < \infty.\end{aligned}\ ] ] another situation , where it is relatively easy to calculate the exponents is the limit of very weak correlation between the bss s and ( in analogy to the notion of a very noisy channel ( * ? ? ?* , example 3 ) ) .let for . in this case , a second order taylor series expansion of the relevant functions ( see appendix b for the details ) yields ,for and , with being fixed : whereas \epsilon^2.\ ] ] now , observe that the upper bound on is affine in , whereas the lower bound on is quadratic in , thus the ratio can be made arbitrarily large for any sufficiently large . in both examples , we took advantage of the fact that the range of optimization of for includes all the positive reals , whereas for , it is limited to the interval ] , whereas in the second example , . in this subsection, we use the type class enumeration method for general finite alphabet sources and . 
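before turning to general alphabets , the fixed - rate bound of section 3 can be evaluated numerically for a doubly symmetric binary source like the one used in the examples above ; the single - letter expression inside the logarithm is taken from the derivation , while the admissible range of the two optimization parameters is elided in the extraction , so the grid below assumes 0 < s <= rho <= 1 as a plausible choice , and the crossover probability is illustrative .

```python
import numpy as np

def exponent_grid(p_joint, R, T, grid=60):
    """Grid-search lower bound on the Forney/Gallager-style binning exponent
    rho*R - s*T - ln sum_y p(y) sum_x p(x|y)^(1-s) (sum_x' p(x'|y)^(s/rho))^rho.
    The admissible (s, rho) region is elided in the source; 0 < s <= rho <= 1 is assumed here."""
    p_y = p_joint.sum(axis=0)
    p_x_given_y = p_joint / p_y
    best = -np.inf
    for rho in np.linspace(1e-3, 1.0, grid):
        for s in np.linspace(1e-3, rho, grid):
            inner = (p_x_given_y ** (s / rho)).sum(axis=0) ** rho
            val = rho * R - s * T - np.log((p_y * (p_x_given_y ** (1 - s) * inner[None, :]).sum(axis=0)).sum())
            best = max(best, val)
    return best

# doubly symmetric binary source with crossover p = 0.1 (illustrative)
p = 0.1
p_joint = np.array([[(1 - p) / 2, p / 2], [p / 2, (1 - p) / 2]])   # rows: x, cols: y
print(exponent_grid(p_joint, R=0.6, T=0.0))
```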
consider the expression \right]^s\right\}\ ] ] that appears upon taking the expectation over the last line of ( [ beginning ] ) .then , we have \right]^s\right\}\\ & = & p^s({\mbox{\boldmath }}){\mbox{\boldmath }}\left\{\left[\sum_{{\mbox{\boldmath }}'\ne{\mbox{\boldmath } } } p({\mbox{\boldmath }}'|{\mbox{\boldmath }}){{\cal i}}[f({\mbox{\boldmath }}')=f({\mbox{\boldmath }})]\right]^s\right\}\\ & \le&p^s({\mbox{\boldmath }})\sum_{{{\calt}}({\mbox{\boldmath }}'|{\mbox{\boldmath }})}p^s({\mbox{\boldmath }}'|{\mbox{\boldmath }}){\mbox{\boldmath }}\left\{\left[\sum_{\tilde{{\mbox{\boldmath }}}\in { { \cal t}}({\mbox{\boldmath }}'|{\mbox{\boldmath } } ) } { { \cal i}}[f(\tilde{{\mbox{\boldmath}}})=f({\mbox{\boldmath }})]\right]^s\right\}\\ & { \stackrel{\delta } { = } } & p^s({\mbox{\boldmath }})\sum_{{{\cal t}}({\mbox{\boldmath }}'|{\mbox{\boldmath }})}p^s({\mbox{\boldmath }}'|{\mbox{\boldmath }}){\mbox{\boldmath }}\left\{n^s({\mbox{\boldmath }}'|{\mbox{\boldmath }},{\mbox{\boldmath }})\right\}\end{aligned}\ ] ] where is the ( random ) number of in which belong to the same bin as .now , \ } & \hat{h}_{{\mbox{\boldmath }}'{\mbox{\boldmath }}}(x|y)>r\\ \exp\{n[\hat{h}_{{\mbox{\boldmath }}'{\mbox{\boldmath }}}(x|y)-r]\ } & \hat{h}_{{\mbox{\boldmath }}'{\mbox{\boldmath }}}(x|y)\le r\end{array}\right.\\ & = & \exp\{n(s[\hat{h}_{{\mbox{\boldmath }}'{\mbox{\boldmath }}}(x|y)-r]-(1-s)[r-\hat{h}_{{\mbox{\boldmath }}'{\mbox{\boldmath }}}(x|y)]_+)\},\end{aligned}\ ] ] thus , \right]^s\right\}\\ & { \stackrel{\cdot } { = } } & p^s({\mbox{\boldmath }})\sum_{{{\cal t}}({\mbox{\boldmath }}'|{\mbox{\boldmath }})}p^s({\mbox{\boldmath }}'|{\mbox{\boldmath }})\exp\{n(s[\hat{h}_{{\mbox{\boldmath }}'{\mbox{\boldmath }}}(x|y)- r]-(1-s)[r-\hat{h}_{{\mbox{\boldmath }}'{\mbox{\boldmath }}}(x|y)]_+)\}\\ & = & p^s({\mbox{\boldmath}})\sum_{{{\cal t}}({\mbox{\boldmath }}'|{\mbox{\boldmath }})}p^s({\mbox{\boldmath }}'|{\mbox{\boldmath }})\exp\{n(s[\hat{h}_{{\mbox{\boldmath }}'{\mbox{\boldmath }}}(x|y)- r]-(1-s)[r-\hat{h}_{{\mbox{\boldmath }}'{\mbox{\boldmath }}}(x|y)]_+)\}\\ & = & p^s({\mbox{\boldmath }})\sum_{{{\cal t}}({\mbox{\boldmath }}'|{\mbox{\boldmath }})}\exp\{-n(s[d(\hat{p}_{{\mbox{\boldmath }}'|{\mbox{\boldmath }}}\|p_{x|y}|\hat{p}_{{\mbox{\boldmath }}})+r]+ ( 1-s)[r-\hat{h}_{{\mbox{\boldmath }}'{\mbox{\boldmath }}}(x|y)]_+)\}\\ & { \stackrel{\cdot } { = } } & p^s({\mbox{\boldmath }})\exp\left\{-n\min_{p_{x'|y}}(s[d(p_{x'|y}\|p_{x|y}|\hat{p}_{{\mbox{\boldmath }}})+r]+ ( 1-s)[r - h(x'|y)]_+)\right\}\\ & { \stackrel{\delta } { = } } & p^s({\mbox{\boldmath }})e^{-nl(\hat{p}_{{\mbox{\boldmath }}},r , s)},\end{aligned}\ ] ] where is the empirical conditional distribution of a random variable given induced by , and is defined as consequently , where -st.\ ] ] finally , * calculation of . *let ] , where . in this case , the minimizer that achieves is given by here , for , the derivative of the objective function vanishes only at , where the term + ] is active . in the intermediate range , the derivative jumps from a negative value to a positive value at discontinuously , hence it is a minimum .thus , for , we have : & r < h(p)\\ sh^{-1}(r)\ln\frac{1-p}{p } & h(p)\le r < h(p_s)\\ sp_s\ln\frac{1-p}{p}+r - h(p_s ) & r\ge h(p_s)\end{array}\right.\ ] ] for , . and so .here , for , which means also , the derivative vanishes only at . on the other hand , for ,the derivative vanishes only at . 
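since the displayed estimate for the moment of the enumerator is garbled above , the following is a hedged reconstruction in the paper 's notation , inferred from the surrounding binomial argument and to be checked against the original : for fixed s >= 0 ,

\[
\mathbf{E}\left\{N^s(x'|x,y)\right\}\doteq
\exp\Big\{n\Big(s\big[\hat{H}_{x'y}(X|Y)-R\big]-(1-s)\big[R-\hat{H}_{x'y}(X|Y)\big]_+\Big)\Big\},
\]

that is , the moment is of the exponential order of \(e^{ns[\hat{H}_{x'y}(X|Y)-R]}\) when \(\hat{H}_{x'y}(X|Y)>R\) and of \(e^{-n[R-\hat{H}_{x'y}(X|Y)]}\) when \(\hat{H}_{x'y}(X|Y)\le R\) .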
in the intermediate range , ,the derivative vanishes both at and , so the minimum is the smaller between the two .namely , it is if +(1-s)[r - h(p_s)]_+\le sp\ln\frac{1-p}{p}+s[r - h(p)]+(1-s)[r - h(p)]_+\ ] ] or equivalently , ,\ ] ] and it is otherwise .the choice between the two depends on .let +sh(p_s)-h(p)}{s-1 } = -\frac{\ln[p^s+(1-p)^s]}{s-1}\ ] ] then , for , & r < r(s)\\ sp_s\ln\frac{1-p}{p}+r - h(p_s ) & r\ge r(s)\end{array}\right.\ ] ]* calculations of error exponents for very weakly correlated bss s . * for , we have , to the second order in , . consider the range of rates .a second order taylor series expansion of ] , and so , from here on , the rate is parametrized by . the maximization over , for a given , is readily found to give on substituting , we get \\ & = & \max_{0\le s\le 1}\left[s(4\epsilon^2-t)-s|\epsilon|\sqrt{2(\ln 2-r)}-2s^2\epsilon^2 - 2s|\epsilon|\sqrt{\frac{\ln 2-r}{2}}\right]\\ & = & \max_{0\le s\le 1}\{s[4\epsilon^2-t-2|\epsilon|\sqrt{2(\ln 2-r)}]-2s^2\epsilon^2\}\\ & = & \max_{0\le s\le 1}\{s[4\epsilon^2(1-\theta)-t ] -2s^2\epsilon^2\}\end{aligned}\ ] ] where the inequality is because when we maximized over , we have ignored the constraint .next , let for , then and so , on the other hand , \\ & = & \sup_{s\ge 1}[s(4\epsilon^2-t)-4s^2\epsilon^2]+r-\ln 2\\ & = & \sup_{s\ge 1}[s(4\epsilon^2-t)-4s^2\epsilon^2]-2\theta^2\epsilon^2\\ & = & \frac{(4\epsilon^2-t)^2}{16\epsilon^2}-2\theta^2\epsilon^2\\ & \ge&\frac{[(\tau+4)\epsilon^2]^2}{16\epsilon^2}-2\epsilon^2\\ & = & \left[\frac{\tau(\tau+8)}{16}-1\right]\epsilon^2.\end{aligned}\ ] ]
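two closed - form quantities from the binary calculation above can be reconstructed from the visible fragments : the tilted crossover ( its explicit definition is garbled , so the formula below is inferred from the structure of the minimization ) and the rate threshold r(s) ; a small sketch evaluating them .

```python
import numpy as np

def h(q):                                   # binary entropy in nats
    return -q * np.log(q) - (1 - q) * np.log(1 - q)

def p_tilted(p, s):
    """Tilted crossover p_s = p^s / (p^s + (1-p)^s); this closed form is inferred from the
    structure of the minimization (its explicit definition is garbled in the extraction)."""
    return p ** s / (p ** s + (1 - p) ** s)

def R_crit(p, s):
    """Rate threshold R(s) = -ln[p^s + (1-p)^s] / (s - 1) visible in the case analysis (s > 1)."""
    return -np.log(p ** s + (1 - p) ** s) / (s - 1)

p = 0.1
for s in (1.5, 2.0, 4.0):
    ps = p_tilted(p, s)
    print(f"s={s}:  p_s={ps:.4f}  h(p_s)={h(ps):.4f}  R(s)={R_crit(p, s):.4f}")
```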
we analyze random coding error exponents associated with erasure / list slepian wolf decoding using two different methods and then compare the resulting bounds . the first method follows the well known techniques of gallager and forney and the second method is based on a technique of distance enumeration , or more generally , type class enumeration , which is rooted in the statistical mechanics of a disordered system that is related to the random energy model ( rem ) . the second method is guaranteed to yield exponent functions which are at least as tight as those of the first method , and it is demonstrated that for certain combinations of coding rates and thresholds , the bounds of the second method are strictly tighter than those of the first method , by an arbitrarily large factor . in fact , the second method may even yield an infinite exponent at regions where the first method gives finite values . we also discuss the option of variable rate slepian wolf encoding and demonstrate how it can improve on the resulting exponents . + * index terms : * slepian wolf coding , error exponents , erasure / list decoding , phase transitions . department of electrical engineering + technion - israel institute of technology + technion city , haifa 32000 , israel + e mail : merhav.technion.ac.il +
in marketing literature it has been successively referred the importance of calculating the value of a customer .in fact , such indicative value enables firms to select those customers that can add profit and consequently constitutes an important information to segment the market and efficiently allocate marketing policy resources .the objective of this work is to establish and study a compartmental model , mathematically translated into a system of ordinary differential equations , for the evolution of the number of customers of some firm , assuming that the customers are divided in two subgroups corresponding to different profitabilities . until recently ,the value of a customer for a company was based on the present value of future profits generated by a customer over the full course of their dealings with a particular company , this is the customer life - time - value ( clv ) .however , other authors refer to the importance of including not only the present and future revenue from the customer purchases , but also the value of the potential to influence other customers under incentives on behalf of the company ( customer referral value ) or by own initiative ( customer influencer value ) .customer influencing behaviors consists of the intrinsic behaviors motivating the customer to persuade and influence other customers without there being any type of reward on behalf of the company and thus designated the customer influencer value ( civ ) . in turn , the patterns of customer recommendation are related to the acquisition of new customers due to company initiatives that reward recommendations made to other customers , and thereby establishing the customer referral value ( crv ) . according to kumar et al . , these components are mutually interwoven .thus , clv positively correlates with crv ( although only up to a certain point and in an inverted u - shaped relational curve , which means customers reporting average clv are those most interested in company referral programs ) and clv is positively related with civ ( with an inverted u - shaped relationship in effect between these two concepts ) .much of the literature has focused on the customer referral value through the influence customers might have on the formation of other customers attitudes ( ) in the purchasing making decision and in the reduction of other customers perceived risks , but little is known about how this processes occur .since customer referral value and influencer value might have a great impact for companies , these latter try to identify the most influential customers .a number of studies allow us to think that the customers of a firm can be classified into several groups according to their influential role over other potential buyers . 
in imperfect competitive markets informationis not purely transparent ; some persons are more able than others of influencing people to become a customer of that firm .it is also acceptable to assume that knowing the referrals among each firm s customers and quantifying their influence constitutes an important asset for the firm competitive advantage , although all customers are important , referrals would be more valuable .we mostly agree with marti and zenou when they state that physics / applied mathematics are capable of reproducing many real networks but never reach to explain why they emerge ; the economists are very precise to explain why they emerge but their approach does a poor job in matching real world networks .that is why some game theorists are now improving models which take networks as given entities and study the impact of their structure on individuals outcomes .based on the network theory some models have been tested to study the way influential customers can influence other consumers . for instance kiss and bichler tested real network models , simulated networks and diffusion models to predict influence between customers based on their position within the network. however , as the authors mention this analysis not always is possible if we do not know or do not have information regarding the customer social network .therefore , other models are needed to try to explain these processes . in this workwe propose a model suitable to describe the dynamics of the number of customers of a given firm .this model is given by a system of ordinary differential equations whose variables correspond to groups of customers and potential customers divided according to their profile and whose parameters reflect the structure of the underlying social network and the marketing policy of the firm .we intend to understand the flows between these groups and its consequences on the raise of customers of the firm .we also want to highlight the usefulness of these models in helping firms deciding their marketing policy .specifically , the main objectives of our study is threefold : we intend to obtain theoretical results concerning the long term behavior of the number of customers in various scenarios , we want to present some simulation aimed at illustrating the possibilities of application of our model and , finally , we want to discuss the benefits and limitations of this type of analysis .as referred , we will consider a compartmental model .as far as we are aware , this is the first time such type of mathematical model is considered in the context of marketing research .we believe that this type of model can be fruitfully explored in this context .this believe is based on the fact that compartmental models have proved to be an important tool not only in the natural sciences , particularly in mathematical epidemiology and in population biology , but also , with increasing notoriety in recent years , in the context of economy and other social sciences .we consider a continuous compartmental model with four compartments , represented by the graph in figure [ diagram : model ] and governed by an autonomous system of four ordinary differential equations .( 360,190)(-30,-20 ) ( 5,0)(60,45) ( 5,100)(60,45) ( 270,0)(60,45) ( 270,100)(60,45) ( 80,140)(1,0)180 ( 160,145) ( 260,120)(-1,0)180 ( 160,127) ( 260,110)(-1,0)180 ( 110,92) ( 80,40)(1,0)180 ( 160,45) ( 260,20)(-1,0)180 ( 160,27) ( 260,10)(-1,0)180 ( 130,-7) ( 25,50)(0,1)45 ( 0,70) ( 35,95)(0,-1)45 ( 40,70) ( 295,50)(0,1)45 ( 265,70) ( 
measuring time in years , we consider the following ( pairwise disjoint ) compartments : , the referral customers at time t ; , the regular customers at time t ; , the potential referral customers at time t ; and , the potential regular customers at time t . to model transitions between compartments we consider the following parameters : , the natural transition rate between and , given by the number of potential regular customers that become regular customers without external influence per year over the number of potential customers ( by `` without external influence '' we mean without being influenced by marketing campaigns or referral customers ) ; , the referral pull effect , given by the average number of customers that a single referral brings ( with no additional incentive ) per year over the number of potential customers ; , the natural to transition rate , corresponding to the number of potential referral customers that become referral customers without external influence per year over the number of potential referral customers ; , the undifferentiated marketing costs , corresponding to marketing costs associated with undifferentiated marketing campaigns per year ; , the pull effect due to undifferentiated marketing , corresponding to the quotient of the outcome of undifferentiated marketing campaigns per year by the number of potential customers ( by `` outcome of undifferentiated marketing campaigns '' it is meant the number of potential customers that become customers as a consequence of undifferentiated marketing campaigns per unitary marketing cost per year ) ; , the referral associated marketing costs , corresponding to marketing costs associated with referral directed marketing campaigns per year ; , the pull effect due to referral directed marketing , given by the referral directed marketing campaigns outcome over the number of potential customers ( by `` referral directed marketing campaigns outcome '' it is meant the average number of additional customers that a single referral can bring with incentives per unitary marketing cost per year ) ; , the non - central / central transition in the social network , equal to the number of individuals non - central in the social network that become central over the total number of individuals in the social network ; , the central / non - central transition in the social network , given by the number of individuals central in the social network that become non - central over the total number of individuals in the social network ; , the regular customer defection rate , equal to the number of regular customers that cease to be customers over the number of regular customers ; , the referral defection rate , given by the number of referrals that cease to be customers over the number of referrals ; , the customer & potential customer defection rate , corresponding to the number of individuals that leave the universe of customers and potential customers per year over the number of customers and potential customers ( by `` number of individuals that leave the universe of customers and potential customers per year '' it is meant the number of customers and potential customers that cease to be in the set of customers or potential customers per year due to emigration , death , etc .
) ; , the customer and potential customer recruitment rate , given by the number of individuals that enter the universe of customers and potential customers per year over the number of customers and potential customers ( by `` number of individuals that enter the universe of customers and potential customers per year '' it is meant the number of individuals that start to belong to the set of customers or potential customers per year due to immigration , etc . ) ; , the referral recruitment rate , equal to the number of referrals that enter the universe of customers and potential customers per year over the number of individuals that enter the universe of customers and potential customers per year . our model can be translated into the following system of differential equations , to be studied along this paper . notice that correspond to the average number of referrals that are brought with no incentive by the referrals per year , that is the average number of regular customers that are brought with no incentive by the referrals per year , that is the average number of additional customers brought due to incentives per year , that is the number of potential referrals that become referrals as a consequence of undifferentiated marketing campaigns per year and that is the number of potential regular customers that become regular customers as a consequence of undifferentiated marketing campaigns per year . this paper is organized in the following way : in section [ section : mr ] we state our main results concerning the asymptotic behavior of the number of regular customers and referral customers , in section [ section : s ] we present some simulations with the objective of illustrating our theoretical results , in section [ section : p ] we prove our results and , finally , in section [ section : conc ] we discuss the results obtained . one of the first natural issues to address when studying a compartmental model is the existence and stability of equilibrium solutions . in this section we obtain several results on the existence and stability of equilibrium solutions of model . we first derive an auxiliary result . given define the sets we have the following result , which shows that the total population in system converges to the ratio , independently of the nonnegative initial conditions considered . [ teo : general_system ] let and let be some solution of system with nonnegative initial conditions : , , , . then : a. [ teo : general_system-1 ] for all , we have ; b. [ teo : general_system-2 ] we have . in particular , given and any solution with nonnegative initial conditions there is such that for all , and any equilibrium solution is in the set . the case is not very interesting since it corresponds to the situation where there is no customer & potential customer defection , which is not a realistic assumption . nevertheless , it is easy to check that , if , then , given initial conditions , , and , we have in particular that , if , the total population remains constant . we now obtain a result on the existence of equilibrium solutions . under the assumption of positivity of the defection rate , the referral pull effect and the non - central / central transition in the social network , we conclude that there are one , two or three equilibrium solutions , depending on the number of real roots of some third degree polynomial . we need to define the constants and also [ teo : equilibriums ] let . then : a.
[ teo : equilibriums-1 ] system has up to three equilibrium solutions . the first component , , is always a nonnegative solution of the cubic equation where , , and ; b. [ teo : equilibriums-2 ] any equilibrium solution is obtained in the following way : is a nonnegative solution of and , and are nonnegative constants . in the next result we discuss the asymptotic behavior of solutions of under some assumptions on the parameters that roughly correspond to requiring that the referral pull effects are bounded by some functions that we can identify with the other `` forces '' in the model , such as the natural transition rates , the pull effects due to undifferentiated marketing and the defection rates ( see equation ) . in the following theorem we show that , under the mentioned assumptions , the asymptotic behavior of the solutions of can be obtained from the two dimensional autonomous system . [ teo : asymptotic_behavior ] let and assume that consider the system and set and . then the asymptotic behavior of , , and in system is the same as the asymptotic behavior of , , and . namely , if is a solution of with initial condition and is a solution of with initial condition then in the next two theorems we discuss two particular situations where we analyse the existence of equilibria and their stability . the vector fields plotted with the objective of illustrating the situations correspond to the reduced system , but we considered situations where theorem [ teo : asymptotic_behavior ] applies , so that the asymptotic behavior of referrals and regular clients is the same for both systems . first we discuss the situation where there is no transition between referral / potential referral and customer / potential customer , and thus we set . we consider two cases : the situation where and ( we named it `` static social network '' to reflect the fact that there is no transition between referrals and regular customers ) and the situation where , corresponding to the case where all potential customers and potential referrals that become customers do so as a consequence of referral influence ( we named it `` word of mouth '' to emphasise that all marketing efforts are related to referrals ) . we have the following result in the static social network case . [ teo : static_social_network ] the following holds for system with , , , and : there is a unique equilibrium solution that is locally asymptotically stable and is given by + and where define we now consider the word of mouth case . in figure [ fig1 ] we show the behavior in the plane in both of the regimes of theorem [ teo : mri ] . in figure [ fig1 ] we used , for the plot on the left , , and , and , for the plot on the right , , , and . [ teo : mri ] the following statements hold for system with , and : a. if then there is a unique locally stable equilibrium given by b. if then there are two equilibrium solutions : an unstable equilibrium given by and a locally stable equilibrium given by and and we next consider a scenario where there is no direct referral influence and thus we set . define $\kappa_1=\dfrac{{\lambda}_7[\,\cdots\,]}{up({\varepsilon}+\beta_2+{\lambda}_7)}\quad\text{and}\quad\kappa_2=\dfrac{{\lambda}_7[{\lambda}_5q+({\lambda}_3+m{\lambda}_4)p]}{vq({\varepsilon}+\beta_1+{\lambda}_5)}$ . [ teo : no_referral_influence ] for system with , and there is a unique equilibrium solution that is globally asymptotically stable and is given by in figure [ fig2 ] we illustrate the behavior of the reduced system in the plane for the setting in theorem [ teo : no_referral_influence ] .
in figure [ fig2 ] we used , , , , , , and , for the plot on the left , , and , for the plot on the right . note that , for the figure on the left , the number of referrals at the equilibrium point is nonzero , although it seems to be ( the number of referrals is very low since there are no referrals entering the population ) . ( figure [ fig2 ] : behavior of the reduced system in the plane for the setting of theorem [ teo : no_referral_influence ] . ) to obtain a better understanding of the behavior of our model , we assume that , for a given corporation , the parameters take the values presented in table [ tab : ref ] , with initial conditions , , and . we write . the values considered for and are based on usual assumptions concerning the defection rate . the assumptions and are made to ensure that the underlying social network does not have an initial tendency to `` benefit '' any of the four compartments . we also consider so that the total population converges to an equilibrium where the total population equals . we solved system ( named _ initial _ ) and system ( named _ reduced _ ) with matlab . in the figures we plot the solution for the initial and the reduced systems . we considered two sets of values for and , namely corresponding to a situation of undifferentiated marketing and corresponding to a situation where some effort is made to attract referrals . in both situations we maintain the same total effort , , in order to be able to compare both cases . in figures [ fig : undif - costumers ] and [ fig : undif - referrals ] we consider the evolution of customers and referrals in the case where marketing is used in an undifferentiated way . we can see that the number of customers and referrals decreases in this situation and stabilizes at some lower value for both compartments . ( figures [ fig : undif - costumers ] and [ fig : undif - referrals ] : evolution of customers and referrals under undifferentiated marketing , for the initial and reduced systems . ) in figures [ fig : mark - costumers ] and [ fig : mark - referrals ] we now consider the evolution of customers and referrals in the case where some marketing effort is used to attract referrals . we can see that there is an initial small decrease in the number of customers that is rapidly followed by an increase that asymptotically doubles its number . there is also an increase in the number of referrals due to the positive value of . ( figures [ fig : mark - costumers ] and [ fig : mark - referrals ] : evolution of customers and referrals when part of the marketing effort is directed at referrals , for the initial and reduced systems . ) in figure [ fig : asymptotic ] we present the evolution of customers and referrals in the case where is reduced to in order to satisfy the condition in theorem . we can see that , as stated in the theorem , the solutions asymptotically approach the same value . in the previous cases , although condition is not satisfied , there is computational evidence that the same happens . thus we conjecture that condition can be weakened . ( figure [ fig : asymptotic ] : evolution of customers and referrals for the initial and reduced systems in this case . )
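as an illustration of how simulations of this kind can be reproduced with standard scientific software , the following python sketch integrates a generic four - compartment system of the shape described above and checks numerically the two facts highlighted by the theory : the convergence of the total population to the ratio of recruitment to defection rates , and the local stability of an equilibrium through the eigenvalues of a finite - difference jacobian . the right - hand sides , parameter names and values below are illustrative assumptions chosen only to respect the structure of the compartments ; they are not the calibrated system of the paper nor the values of table [ tab : ref ] .

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# illustrative parameters (assumptions, not the paper's calibrated values)
eps, gamma, alpha = 0.05, 10.0, 0.2      # universe defection rate, recruitment, referral share
lam1, lam2, lam3 = 0.03, 0.005, 0.02     # natural transition rates and referral pull effect
m_lam4, i_lam5 = 0.01, 0.002             # marketing pull terms (effort times pull effect)
beta1, beta2 = 0.05, 0.05                # referral <-> regular customer transitions
mu1, mu2 = 0.10, 0.15                    # regular and referral defection rates

def rhs(t, y):
    """hypothetical flows between R (referrals), C (regular customers),
    Pr (potential referrals) and Pc (potential regular customers)."""
    R, C, Pr, Pc = y
    to_R = (lam3 + m_lam4) * Pr + (lam2 + i_lam5) * R * Pr   # Pr -> R
    to_C = (lam1 + m_lam4) * Pc + lam2 * R * Pc              # Pc -> C
    dR = to_R + beta1 * C - (beta2 + mu2 + eps) * R
    dC = to_C + beta2 * R - (beta1 + mu1 + eps) * C
    dPr = alpha * gamma + mu2 * R - to_R - eps * Pr
    dPc = (1.0 - alpha) * gamma + mu1 * C - to_C - eps * Pc
    return [dR, dC, dPr, dPc]

# integrate and check that the total population approaches gamma / eps
sol = solve_ivp(rhs, (0.0, 300.0), [5.0, 50.0, 20.0, 150.0], rtol=1e-8, atol=1e-8)
print("total population at t=300:", round(sol.y[:, -1].sum(), 2), " gamma/eps:", gamma / eps)

# locate an equilibrium and test local stability with a finite-difference jacobian
eq = fsolve(lambda y: rhs(0.0, y), sol.y[:, -1])

def jacobian(f, y, h=1e-6):
    y = np.asarray(y, dtype=float)
    f0 = np.asarray(f(0.0, y))
    J = np.zeros((y.size, y.size))
    for j in range(y.size):
        yp = y.copy()
        yp[j] += h
        J[:, j] = (np.asarray(f(0.0, yp)) - f0) / h
    return J

eigenvalues = np.linalg.eigvals(jacobian(rhs, eq))
print("equilibrium:", np.round(eq, 2))
print("real parts of the jacobian eigenvalues:", np.round(eigenvalues.real, 4))
```

for parameter choices where the printed real parts are all negative , the equilibrium found is locally asymptotically stable , mirroring the statements of the theorems above ; any other choice of flows can be plugged into the rhs function and analysed in the same way .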
by lemma [ teo : general_system ]every equilibrium solution must belong to .thus every equilibrium solution must satisfy since we have substituting in the first equation of we obtain equation .each solution of uniquely determines , and .thus , we have at most three equilibrium solutions and [ teo : equilibriums-1 ] ) follows . by for any equilibrium solution we have and . bywe obtain and [ teo : equilibriums-2 ] ) .assume that and choose .additionally , define by and let be a solution of with initial conditions , , and and be a solution of with initial conditions and .define and . by andwe have and by in the proof of theorem [ teo : general_system ] , there is such that and , for all . using this fact , by and we have for , where thus , using the fact that , we get for . by , there is such that since as , there is sufficiently large such that , for , we have . thus for .therefore , for , we conclude that since is arbitrary , the theorem follows . adding equations for and and for and in we get the system and thus c(t)+p_c(t)=\frac{(1-\alpha)\gamma}{{\varepsilon}}+c_2\operatorname{e}^{-{\varepsilon}t } \end{cases}.\ ] ] therefore ,if is an equilibrium solution , then using these expressions and the first two equations in system , we obtain and using the second equation in and dividing by ( that is nonzero by assumption ) , we get thus we have a unique nonnegative root given by . by the first equation in we obtain by weconclude that and it is easy to check that the expression above is nonnegative . again by we get and it is also immediate that this expression is nonnegative .to study the stability of the equilibrium , we consider the jacobian matrix at the equilibrium .namely ,\ ] ] where and .it is easy check that the eigenvalues of are , and .thus , all eigenvalues have negative real part and we conclude that the equilibrium solution is locally asymptotically stable .by the second equation in we can see that , if is an equilibrium solution of our system , then and thus or . using the remaining equations and condition it is straightforward to obtain the equilibrium points . in the equilibrium ,the jacobian matrix is given by .\ ] ] we can easily check that the constants , and are the eigenvalues of . therefore if the equilibrium is unstable and if it is locally asymptotically stable . in the equilibrium ,the jacobian matrix has the following form .\ ] ] we can easily check that the negative constants , and are the eigenvalues of . therefore the equilibrium is locally asymptotically stable . in our case , equation in theorem [ teo : equilibriums ] has a unique solution which is nonnegative and is given by thus , again by theorem [ teo : equilibriums ] , the possible equilibrium solutions are given by where is the constant above , and since , is immediate that and .we conclude that there is a unique equilibrium solution .the jacobian matrix at the equilibrium is given by .\ ] ] we can check that the eigenvalues are all negative and given by , and } \right).\ ] ] therefore the equilibrium is asymptotically stable . 
since the system in this case is linear , the equilibrium is globally asymptotically stable . we presented and studied in this paper a compartmental model with four compartments to describe the evolution of the number of customers and potential customers of some corporation based on the marketing policy of the corporation , determined by the effort used in undifferentiated marketing campaigns and in referral directed marketing . the results obtained appear reasonable in the sense that the qualitative behavior obtained does not go against common sense . the results show that the model works in theory in the several scenarios considered , with and without marketing incentives , thus relying on normal marketing policies or on incentives to referrals . the model shows that , in theory , it is possible to predict the influence referrals can have on their peers based on the incentives given to them by the company . the different scenarios also allow us to see , in a specific period of time , what happens to the number of current and potential customers based on the zero incentives policy , and therefore only on the natural attractiveness power of the referrals ( theorem [ teo : mri ] with ) ; what happens if the company invests in marketing but not on incentives to referrals ( theorem [ teo : no_referral_influence ] ) ; and also what happens if the company only invests on incentives to referrals ( theorem [ teo : mri ] with ) . based on these results , this model allows companies to adjust their marketing policies in order to maximize a specific parameter of the model . for instance , the model allows a company to estimate the amount of investment necessary to transform a given number of potential customers into customers . this study must be followed by work where the model is used in real world problems . in fact , comparing the results given by the model with real data would be of fundamental importance to confirm its usefulness . naturally this is a major task whose feasibility will certainly depend on the accuracy in the estimation of parameters . we believe that this work opens several possibilities for future studies . for instance , it would be interesting to consider versions of this model with time - varying parameters to model seasonal phenomena that may occur in some economic activities . it can also be of interest to consider age structured populations , to distinguish different consumption habits , or to subdivide the universe of referrals , to reflect different aspects that make those customers important to the corporation . artzrouni m. , tramontana f. , the debt trap : a two - compartment train wreck ... and how to avoid it , journal of policy modeling 36 ( 2014 ) 241 - 256 ; bansal h. s. , voyer p. a. , word of mouth processes within a services purchase decision context , journal of service research 3 ( 2 ) ( 2000 ) 166 - 177 ; brauer f. , van den driessche p. , wu j. , mathematical epidemiology , lecture notes in mathematics 1945 , 2008 ; bone p. f. , word of mouth effects on short - term and long - term product judgements , journal of business research 32 ( 3 ) ( 1995 ) 213 - 223 ; godes d. , mayzlin d. , using online conversation to study word - of - mouth communication , marketing science 23 ( 4 ) ( 2004 ) 545 - 560 ; kiss c. , bichler m. , identification of influencers measuring influence in customer networks ,
decision support systems 46 ( 1 ) ( 2008 ) 233 - 253 ; kumar venkat , clv : the databased approach , journal of relationship marketing 5 ( 2 - 3 ) ( 2006 ) 7 - 35 ; kumar vineet , customer relationship management , john wiley & sons , ltd , 2010 ; kumar v. et al . , undervalued or overvalued customers : capturing total customer engagement value , journal of service research 13 ( 3 ) ( 2010 ) 297 - 310 ; lee j. , lee j. , feick l. , incorporating word - of - mouth effects in estimating customer lifetime value , journal of database marketing & customer strategy management 14 ( 2006 ) 29 - 39 ; lin l. , stabilization analysis for economic compartmental switched systems based on quadratic lyapunov function , nonlinear analysis : hybrid systems 2 ( 2008 ) 1187 - 1197 ; de marti j. , zenou y. , social networks , in : jarvie i. , zamora bonilla j. ( eds . ) , handbook of philosophy of social science , london : sage publications , chap . 16 , 339 - 361 ; kumar viswanathan , petersen j. a. , leone r. p. , how valuable is word of mouth ? , harvard business review 85 ( 10 ) ( 2007 ) 139 ; greenwald b. , stiglitz j. e. , financial market imperfections and business cycles , quarterly journal of economics 108 ( 1993 ) 77 - 114 ; thieme h. r. , mathematics in population biology , princeton series in theoretical and computational biology , princeton university press , 2003 ; tramontana f. , economics as a compartmental system : a simple macroeconomic example , international review of economics 57 ( 2010 ) 347 - 360 ; zhao x .- q . , dynamical systems in population biology , springer - verlag , new york , 2003
we consider a compartmental model to study the evolution of the number of regular customers and referral customers of some corporation . transitions between compartments are modeled by parameters depending on the social network and the marketing policy of the corporation . we obtain some results on the asymptotic number of regular customers and referral customers in several particular scenarios . additionally , we present some simulations that illustrate the behavior of the model and discuss its applicability .
in this paper , we have defined some new measures of distance between samples of functions to solve the problem of homogeneity in the context of functional data analysis . combining these measures with the depth functions defined by fraiman - muniz , cuevas - fraiman - muniz and lópez - pintado - romo , we propose a hypothesis test based on the bootstrap methodology and apply it to a number of simulated and real functional data sets . our measures show their effectiveness in detecting differences of magnitude and shape in some samples generated by gaussian processes , and moreover are able to show heterogeneity for the ramsay data , the mitochondrial data and the second derivatives of the tecator data . it is significant that our methods show homogeneity in the tecator data without differentiation , a phenomenon widely dealt with in the literature . it is also noteworthy that our method improves on the rank test in some cases . once the concept of depth of a function with regard to a sample is defined , several generalizations appear to be possible . for example , the sample of tecator data discussed above shows that there is information about homogeneity hidden in the derivatives that can not be directly extracted from the original functions . hence , it would be interesting to define and describe a unified way to deal with all the depth measures and statistics used in our work when applied at the same time to all the functions and all their derivatives . it is likely that such a notion would be able to show patterns in the homogeneity of the samples that could not be deduced without differentiation . on the other hand , it would also be interesting to define some measures that allow us to test at the same time the homogeneity of several samples of functions . we plan to undertake this task in subsequent work .
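as a rough illustration of the ingredients involved , the sketch below implements a fraiman - muniz type depth on a common grid and a resampling approximation to a two - sample homogeneity test ; the particular statistic ( the shift in the mean depth of one sample when the reference sample is enlarged to the pooled sample ) is an assumption made for the example and is not one of the four statistics studied in this work , nor does the permutation scheme reproduce the exact bootstrap procedure used here .

```python
import numpy as np

rng = np.random.default_rng(0)

def fm_depth(curves, reference):
    """fraiman-muniz type depth of each curve in `curves` with respect to the
    sample `reference`; both are arrays of shape (n_curves, n_grid_points)."""
    F = (reference[None, :, :] <= curves[:, None, :]).mean(axis=1)  # pointwise ecdf
    pointwise_depth = 1.0 - np.abs(0.5 - F)                         # univariate depth
    return pointwise_depth.mean(axis=1)                             # integrate over grid

def statistic(X, Y):
    """illustrative distance between samples: change in the mean depth of X when
    the reference switches from X itself to the pooled sample."""
    pooled = np.vstack([X, Y])
    return abs(fm_depth(X, X).mean() - fm_depth(X, pooled).mean())

def homogeneity_pvalue(X, Y, n_resamples=500):
    obs = statistic(X, Y)
    pooled = np.vstack([X, Y])
    n = X.shape[0]
    exceed = 0
    for _ in range(n_resamples):
        idx = rng.permutation(pooled.shape[0])       # resample under homogeneity
        exceed += statistic(pooled[idx[:n]], pooled[idx[n:]]) >= obs
    return (exceed + 1) / (n_resamples + 1)

# toy example: two samples of noisy curves, the second with a magnitude shift
t = np.linspace(0.0, 1.0, 50)
X = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal((30, t.size))
Y = np.sin(2 * np.pi * t) + 0.5 + 0.3 * rng.standard_normal((30, t.size))
print("approximate p-value:", homogeneity_pvalue(X, Y))
```

small p - values indicate that the resampled statistics rarely exceed the observed one , that is , evidence against homogeneity of the two samples .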
in the context of functional data analysis , we propose new two - sample tests for homogeneity . based on some well - known depth measures , we construct four different statistics in order to measure the distance between the two samples . a simulation study is performed to check the efficiency of the tests when confronted with shape and magnitude perturbations . finally , we apply these tools to measure the homogeneity in some samples of real data , obtaining good results using this new method . * keywords : * functional depth , homogeneity , fda .
astronomical wide field ( hereafter wf ) imaging encompasses the use of images larger than pxls ( lipovetsky 1993 ) and is the only tool to tackle problems based on rare objects or on statistically significant samples of optically selected objects . therefore , wf imaging has been and still is of paramount relevance to almost all fields of astrophysics : from the structure and dynamics of our galaxy , to the environmental effects on galaxy formation and evolution , to the large scale structure of the universe . in the past , wf was the almost exclusive domain of schmidt telescopes equipped with large photographic plates and was the main source of targets for photometric and spectroscopic follow - ups at telescopes in the 4 meter class . nowadays , the exploitation of the new generation 8 meter class telescopes , which are designed to observe targets which are often too faint to be even detected on photographic material ( the poss - ii detection limit in b is ) , requires digitised surveys realized with large format ccd detectors mounted on dedicated telescopes . much effort has therefore been devoted worldwide to construct such facilities : the megacam project at the cfh , the eso wide field imager at the 2.2 meter telescope , the sloan - dss and the eso - oac vst ( mancini et al . 1999 ) being only a few among the ongoing or planned experiments . one aspect which can never be stressed enough is the humongous problem posed by the handling , processing and archiving of the data produced by these instruments : the vst alone , for instance , is expected to produce a flow of almost 30 gbyte of data per night or more than 10 tbyte per year of operation . the scientific exploitation of such a huge amount of data calls for new data reduction tools which must be reliable , must require a small amount of interaction with the operators and need to be as independent of a priori choices as possible . in processing a wf image , the final goal is usually the construction of a catalogue containing as many as possible astrometric , geometric , morphological and photometric parameters for each individual object present on the image . the first step in any catalogue construction is therefore the detection of the objects , a step which , as soon as the quality of the images increases ( both in depth and in resolution ) , becomes much less obvious than it may seem at first glance . the traditional definition of `` object '' as a set of connected pixels having brightness higher than a given threshold has in fact several well known pitfalls . for instance , low surface brightness galaxies very often escape recognition since i ) their central brightness is often comparable to or fainter than the detection threshold , and ii ) their shape is clumpy , which implies that even though there may be several nearby pixels above the threshold , they can often be not connected and thus escape the assumed definition .
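to make the pitfalls of this definition concrete , the following python sketch implements the traditional recipe just described ( a threshold in units of a robust background sigma followed by connected - component labelling and a minimum - area cut ) ; the threshold , connectivity and minimum area are illustrative choices , not the settings of any particular package .

```python
import numpy as np
from scipy import ndimage

def detect_objects(image, k_sigma=3.0, min_area=4):
    """traditional detection: pixels brighter than background + k*sigma grouped
    into 8-connected components, keeping only components above min_area pixels."""
    background = np.median(image)
    sigma = 1.4826 * np.median(np.abs(image - background))   # robust sigma via the MAD
    mask = image > background + k_sigma * sigma
    labels, n_obj = ndimage.label(mask, structure=np.ones((3, 3)))
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n_obj + 1))
    kept = np.flatnonzero(sizes >= min_area) + 1              # surviving labels
    return labels, kept

# toy frame: flat noisy background, one compact bright source and one clumpy faint one
rng = np.random.default_rng(1)
frame = rng.normal(100.0, 5.0, size=(64, 64))
frame[20:24, 20:24] += 80.0                    # bright object: easily detected
frame[45, 40] += 20.0                          # faint clumps: individually above the
frame[47, 43] += 20.0                          # threshold but disconnected, so each
labels, kept = detect_objects(frame)           # fails the minimum-area cut
print("objects kept:", kept.size)
```

with these settings the bright source is recovered while the two isolated clump pixels are rejected by the area cut , which is the failure mode described above for clumpy low surface brightness galaxies .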
a similar problem is also encountered in the catalogues extracted from the _ hubble deep field _ ( hdf ) where a plethora of small clumpy `` objects '' is detected but it is not clear whether each clump represents an individual object or rather is a fragment of a larger one . ferguson ( 1998 ) stresses some even stronger limitations of the traditional approach to object detection : i ) a comparison of catalogues obtained by different groups from the same raw material and using the same software shows that , near the detection limits , the results are strongly dependent on the assumed definition of `` object '' ; ii ) object detection performed by the different groups is worse than what even an untrained astronomer can attain by visually inspecting an image . the theory ---------- in the ai domain there are dozens of different nn s used and optimised to perform the most diverse tasks . in the astronomical literature , instead , only two types of nn s are used : the ann , called multi - layer perceptron ( mlp ) with back - propagation learning algorithm in the ai literature , and kohonen s self - organizing maps ( or their supervised generalization ) . we followed a rather complex approach which can be summarised as follows : principal component analysis ( pca ) nn s were used to reduce the dimensionality of the input space . supervised nn s need a large amount of labelled data to obtain a good classification while unsupervised nn s overcome this need , but do not provide good performances when classes are not well separated . hybrid and unsupervised hierarchical nn s are therefore very often introduced to simplify the expensive post - processing step of labelling the output neurons in classes ( such as objects / background ) in the object detection process . in the following subsections we illustrate the properties of several types of nn s which were used in one or another of the various tasks . all the discussed models were implemented , trained and tested and the results of the best performing ones are illustrated in detail in the next sections . a pattern can be represented as a point in a -dimensional parameter space . to simplify the computations , a more compact description is often needed , where each pattern is described by , with , parameters . each -dimensional vector can be written as a linear combination of orthonormal vectors or as a smaller number of orthonormal vectors plus a residual error . pca is used to select the orthonormal basis which minimizes the residual error . let be the -dimensional zero mean input data vectors and be the covariance matrix of the input vectors . the -th principal component of is defined as , where is the normalized eigenvector of corresponding to the -th largest eigenvalue . the subspace spanned by the principal eigenvectors is called the pca subspace ( with dimensionality ; oja 1982 ; oja et al . 1996 ) . in order to perform pca , in some cases and especially in the non linear one , it is convenient to use nn s which can be implemented in various ways ( baldi & hornik 1989 ; jutten & herault 1991 ; oja 1982 ; oja , ogawa & wangviwattana 1991 ; plumbley 1993 ; sanger 1989 ) . the pca nn used by us was a feedforward neural network with only one layer which is able to extract the principal components of the stream of input vectors . fig . 2 summarises the structure of the pca nn s . as can be seen , there is one input layer , and one forward layer of neurons which is totally connected to the inputs .
during the learning phase there are feedback links among neurons , the topology of which classifies the network structure as either hierarchical or symmetric depending on the feedback connections of the output layer neurons . typically , hebbian type learning rules are used , based on the one unit learning algorithm originally proposed in ( oja 1982 ) . the adaptation step of the learning algorithm - in this case the network is composed of only one output neuron - is then written as : where , and are , respectively , the value of the -th input , of the -th weight and of the network output at time , while is the learning rate . is the hebbian increment and eq . 1 satisfies the condition : many different versions and extensions of this basic algorithm have been proposed in recent years ( karhunen & joutsensalo 1994 = kj94 ; karhunen & joutsensalo 1995 = kj95 ; oja et al . 1996 ; sanger 1989 ) . the extension from one to more output neurons and to the hierarchical case gives the well known generalized hebbian algorithm ( gha ) ( sanger 1989 ; kj95 ) : while the extension to the symmetric case gives oja s subspace network ( oja 1982 ) : in both cases the weight vectors must be orthonormalized and the algorithm stops when : where is an arbitrarily chosen small value . after the learning phase , the network becomes purely feedforward . kj94 and kj95 proved that pca neural algorithms can be derived from optimization problems , such as variance maximization and representation error minimization . they generalized these problems to nonlinear problems , deriving nonlinear algorithms ( and the relative networks ) having the same structure as the linear ones : either hierarchical or symmetric . these learning algorithms can be further classified into robust pca algorithms and nonlinear pca algorithms . kj95 defined robust pca algorithms as those in which the objective function grows more slowly than a quadratic one . the non linear learning function appears at selected places only . in nonlinear pca algorithms all the outputs of the neurons are nonlinear functions of the responses . more precisely , in the robust generalization of variance maximization , the objective function is assumed to be a valid cost function such as or . this leads to the adaptation step of the learning algorithm : where : in the hierarchical case . in the symmetric case , the error vector becomes the same for all the neurons , and eq . [ eq2.5 ] can be compactly written as : where is the instantaneous vector of neuron responses at time . the learning function , derivative of , is applied separately to each component of the argument vector . the robust generalisation of the representation error problem ( kj95 ) with , leads to the stochastic gradient algorithm : this algorithm can again be considered in both the hierarchical and symmetric cases . in the symmetric case , the error vector is the same for all the weights . in the hierarchical case , eq . [ eq2.7 ] gives the robust counterparts of the principal eigenvectors .
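before turning to the simplifications of these rules discussed below , it may help to see the baseline hierarchical rule in code . the following numpy sketch implements the standard textbook form of sanger s gha ( a single linear layer whose weight rows converge to the leading eigenvectors of the input covariance ) ; the learning - rate schedule , the number of epochs and the final normalization are illustrative choices and the sketch is not the authors implementation .

```python
import numpy as np

def gha(X, n_components=2, lr=0.01, n_epochs=40, seed=0):
    """generalized hebbian algorithm (sanger 1989): after training, the rows of W
    approximate the leading principal eigenvectors of the zero-mean data X."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=0)                      # zero-mean input stream
    W = rng.normal(scale=0.1, size=(n_components, X.shape[1]))
    for _ in range(n_epochs):
        for x in X[rng.permutation(len(X))]:
            y = W @ x                           # neuron responses
            # hierarchical (lower-triangular) feedback term of the GHA update
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
        lr *= 0.95                              # slowly decreasing learning rate
    return W / np.linalg.norm(W, axis=1, keepdims=True)

# sanity check against the eigenvectors of the sample covariance matrix
rng = np.random.default_rng(2)
data = rng.normal(size=(2000, 5)) @ rng.normal(size=(5, 5))
W = gha(data)
eigvec = np.linalg.eigh(np.cov(data, rowvar=False))[1][:, -1]   # top eigenvector
print("|cosine| between first GHA row and top eigenvector:", abs(W[0] @ eigvec))
```

the symmetric ( oja subspace ) variant is obtained by replacing the lower - triangular matrix with the full outer product of the responses , and the robust and nonlinear variants discussed in the text amount to passing the responses or the errors through a nonlinearity at the places indicated above .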
in eq .[ eq2.7 ] the first update term is proportional to the same vector for all weights .furthermore , we can assume that the error vector is relatively small after the initial convergence .hence , we can neglect the first term in eq .[ eq2.7 ] and this leads to : let us consider now the nonlinear extensions of pca algorithms which can be obtained in a heuristic way by requiring all neuron outputs to be always nonlinear in eq .[ eq2.5 ] , then : where : in previous experiments ( tagliaferri et al . 1999 , tagliaferri et al . 1998 ) we found that the hierarchical robust nn of eq .[ eq2.5 ] with learning function achieves the best performance with respect to all the other mentioned pca nn s and linear pca .unsupervised nn s partition the input space into clusters and assign to each neuron a weight vector which univocally individuates the template characteristic of one cluster in the input feature space .after the learning phase , all the input patterns are classified .kohonen ( 1982 , 1988 ) self organizing maps ( som ) are composed by one neuron layer structured in a rectangular grid of neurons .when a pattern is presented to the nn , each neuron receives the input and computes the distance between its weight vector and .the neuron which has the minimum is the winner .the adaptation step consists in modifying the weights of the neurons in the following way : where is the learning rate ( ) decreasing in time , is the distance in the grid between the and the neurons and is a unimodal function with variance decreasing with .the neural - gas nn is composed by a linear layer of neurons and a modified learning algorithm ( martinetz berkovitch & shulten 1993 ) .it classifies the neurons in an ordered list accordingly to their distance from the input pattern .the weight adaptation depends on the position of the -th neuron in the list : and works better than the preceding one : in fact , it is quicker and reaches a lower average distortion value be the pattern probability distribution over the set and let be the weight vector of the neuron which classifies the pattern .the average distortion is defined as .the growing cell structure ( gcs ) ( fritzke 1994 ) is a nn which is capable to change its structure depending on the data set .aim of the net is to map the input pattern space into a two - dimensional discrete structure in such a way that similar patterns are represented by topological neighboring elements .the structure is a two - dimensional simplex where the vertices are the neurons and the edges attain the topological information .every modification of the net always maintains the simplex properties .the learning algorithm starts with a simple three node simplex and tries to obtain an optimal network by a controlled growing process : _ i d est _ , for each pattern of the training set , the winner and the neighbors weights are adapted as follows : connected to ; where and are constants which determine the adaptation strength for the winner and for the neighbors , respectively .the insertion of a new node is made after a fixed number of adaptation steps .the new neuron is inserted between the most frequent winner neuron and the more distant of its topological neighbors .the algorithm stops when the network reaches a pre - defined number of elements .the on - line k - means clustering algorithm ( lloyd 1982 ) is a simpler algorithm which applies the gradient descent ( = gd ) directly to the average distortion function as follows : the main limitation of this technique is that the error 
function presents many local minima which stop the learning before reaching the optimal configuration .finally , the maximum entropy nn ( rose , gurewitz & fox , 1990 ) applies the gd to the error function to obtain the adaptation step : where is the inverse temperature and takes value increasing in time and is the distance between the -th and the winner neurons .hybrid nn s are composed by a clustering algorithm which makes use of the information derived by one unsupervised single layer nn . after the learning phase of the nn , the clustering algorithm splits the output neurons in a number of subsets which is equal to the number of the desired output classes . since the aim is to put similar input patterns in the same class and dissimilar input patterns in different classes , a good strategy consists in applying a clustering algorithm directly to the weight vectors of the unsupervised nn . a non - neural agglomeration clustering algorithm that divides the pattern set ( in this case the weights of the neurons ) in clusters ( with )can be briefly summarized as follows : 1 .it initially divides in clusters such that ; 2 .then it computes the distance matrix with elements ; 3 .then it finds the smallest element and unifies the clusters and in a new one ; 4 .if the number of clusters is greater than then it goes to step 2 else , it finally stops .many algorithms quoted in literature ( everitt 1977 ) differ only in the way in which the distance function is computed .for example : ( nearest neighbor algorithm ) ; ( centroid method ) ; ( average between groups ) .the output of the clustering algorithm will be a labelling of the patterns ( in this case neurons ) in different classes .unsupervised hierarchical nn s add one or more unsupervised single layers nn to any unsupervised nn , instead of a clustering algorithm as it happens in hybrid nn s . in this way , the second layer nn learns from the weights of the first layer nn and clusters the neurons on the basis of a similarity measure or a distance .the iteration of this process to a few layers gives the unsupervised hierarchical nn s .the number of neurons at each layer decreases from the first to the output layer and , as a consequence , the nn takes the pyramidal aspect shown in fig .the nn takes as input a pattern and then the first layer finds the winner neuron .the second layer takes the first layer winner weight vector as input and finds the second layer winner neuron and so on up to the top layer .the activation value of the output layer neurons is 1 for the winner unit and 0 for all the others . in short : the learning steps of a layer hierarchical nn with training set are the following : * the first layer is trained on the patterns of with one of the learning algorithms for unsupervised nn s ; * the second layer is trained on the elements of the set which is composed by the weight vectors of the first layer winner units ; * the process is iterated to the layer nn ( ) on the training set which is composed by the weight vectors of the winner neurons of the layer when presenting to the first layer nn , to the second layer and so on . by varying the learning algorithms we obtain different nn s with different properties and abilities . for instance , by using only som s we have a multi - layer som ( ml - som ) ( koh j. 
, suk & bhandarkar 1995 ) where every layer is a two - dimensional grid .we can easily obtain ( tagliaferri , capuano & gargiulo 1999 ) _ ml - neuralgas _ , _ ml - maximum entropy _ or _ ml - k means _ organized on a hierarchy of linear layers .the ml - gcs has a more complex architecture and has at least 3 units for layer . by varying the learning algorithms in the different layers, we can take advantage from the properties of each model ( for instance , since we can not have a ml - gcs with 2 output units we can use another nn in the output layer ) .a hierarchical nn with a number of output layer neurons equal to the number of the output classes simplifies the expensive post - processing step of labelling the output neurons in classes , without reducing the generalization capacity of the nn .a _ multi - layer perceptron _ ( mlp ) is a layered nn composed by : * one input layer of neurons which transmit the input patterns to the first hidden layer ; * one or more hidden layers with units computing a nonlinear function of their inputs ; * and one output layer with elements calculating a linear or a nonlinear function of their inputs .aim of the network is to minimize an error function which generally is the sum of squares of the difference between the desired output ( target ) and the output of the nn .the learning algorithm is called back - propagation since the error is back - propagated in the previous layers of the nn in order to change the weights . in formulae , let be an input vector with corresponding target output .the error function is defined as follows : where is the output of the output neuron .the learning algorithm updates the weights by using the gradient descent ( gd ) of the error function with respect to the weights .if we define the input and the output of the neuron respectively as : and where is the connection weight from the neuron to the neuron , and is linear or sigmoidal for the output nodes and sigmoidal for the hidden nodes .it is well known in literature ( bishop 1995 ) that these facts lead to the following adaptation steps : and for the output and hidden units , respectively .the value of the learning rate is small and causes a slow convergence of the algorithm .a simple technique often used to improve it is to sum a momentum term to eq .[ eq2.15 ] which becomes : this technique generally leads to a significant improvement in the performances of gd algorithms but it introduces a new parameter which has to be empirically chosen and tuned . bishop ( 1995 ) and press et al .( 1993 ) summarize several methods to overcome the problems related to the local minima and to the slow time convergence of the above algorithm . in a preliminary step of our experiments , we tried all the algorithms discussed in chapter 7 of bishop ( 1995 ) finding that a hybrid algorithm based on the scaled conjugate gradient for the first steps and on the newton method for the next ones , gives the best results with respect to both computing time and relative number of errors . in this paperwe used it in the mlp s experiments .in this work we use a 2000x2000 arcsec area centered on the north galactic pole extracted from the slightly compressed poss - ii f plate n. 
443 , available via network at the canadian astronomy data center ( http://cadcwww.dao.nrc.ca ) .poss - ii data were linearised using the sensitometric spots recorded on the plate .the seeing fwhm of our data was 3 arcsec .the same area has been widely studied by others and , in particular , by infante & pritchet ( 1992 , = ip92 ) and infante , pritchet & hertling ( 1995 ) who used deep observations obtained at the 3.6 m cfht telescope in the photographic band under good seeing conditions ( fwhm arcsec ) to derive a catalogue of objects complete down to , i d est , much deeper than the completeness limit of our plate . their catalogue is therefore based on data of much better quality and accuracy than ours , and it was for the availability of such good template that we decided to use this region for our experiments .we also studied a second region in the coma cluster ( which happens to be in the same n. 443 plate ) but since none of the catalogues available in literature is much better than our data , we were forced to neglect it in most of the following discussion .the characteristics of the selected region , a relatively empty one , slightly penalise our nn detection algorithms which can easily recognise objects of quite different sizes . on the contrary of what happens to other algorithmsnext works well even on areas where both very large and very small objects are present such as , for instance , the centers of nearby clusters of galaxies as our preliminary test on a portion of the coma clusters clearly shows ( tagliaferri et al .1998 ) .the detection and classification of the objects are a multi - step task : \1 ) first of all , following a widely used ai approach , we mathematically transform the detection task into a classification one by compressing the redundant information contained in nearby pixels by means of a non linear pca nn s .principal vectors of the pca are computed by the nn on a portion of the whole image .the values of the pixels in the transformed dimensional eigen - space obtained via the principal vectors of the pca nn are then used as inputs to unsupervised nn s to classify pixels in few classes .we wish to stress that , in this step , we are still classifying pixels , and not objects .the adopted nn is unsupervised , i.e. we never feed into the detection algorithm any a priori definition of what an object is , and we leave it free to find its own object definition .it turns out that image pixels are split in few classes , one coincident with what astronomers call background and some others for the objects ( in the astronomical sense ) .afterwords , the class containing the background pixels is kept separated from the other classes which are instead merged together . 
therefore , as final output , the pixels in the image are divided in object " or background " .\2 ) since objects are seldom isolated in the sky , we need a method to recognise overlapping objects and deblend them .we adopt a generalisation of the method used by focas ( jarvis & tyson 1981 ) .\3 ) due to the noise , object edges are quite irregular .we therefore apply a contour regularisation to the edges of the objects in order to improve the following star / galaxy classification step .\4 ) we define and measure the features used , or suitable , for the star / galaxy classification , then we choose the best performing features for the classification step , through the sequential backward elimination strategy ( bishop 1995 ) .\5 ) we then use a subset of the ip92 catalog to learn , validate and test the classification performed by next on our images .the training set was used to train the nn , while the validation was used for model selection , i.e. to select the most performing parameters using an independent data set . as template classifier , we used sex , whose classifier is also based on nns . the detection and classification performances of our algorithmwere then compared with those of traditional algorithms , such as sex .we wish to stress that in both the detection and classification phases , we were not interested in knowing how well next can reproduce sex or the astronomer s eye performances , but rather to see whether the sex and next catalogs are or are not similar to the true " , represented in our case by the ip92 catalog . finally , we would like to stress that in statistical pattern recognition , one of the main problems in evaluating the system performances is the optimisation of all the compared systems in order not to give any unfair advantage to one of the systems with respect to the others ( just because it is better optimised than the others ) .for instance , since the background subtraction is crucial to the detection , all algorithms , including sex , were run on the same background subtracted image . from the astronomical point of view, segmentation allows to disentangle objects from noisy background . from a mathematical point of view , instead , the segmentation of an image consists in splitting it into disconnected homogeneous ( accordingly to a uniformity predicate ) regions , in such a way that their union is not homogeneous : where and when is adjacent to . the two regions are adjacent when they share a boundary , i.e. when they are neighbours .a segmentation problem can be easily transformed into a classification one if classes are defined on pixels and is written in such a way that if and only if all the pixels of belong to the same class .for instance , the segmentation of an astronomical image in background and objects leads to assign each pixel to one of the two classes . among the various methods discussed in the literature ,unsupervised nn s usually provide better performance than any other nn type on noisy data ( pal & pal 1993 ) and have the great advantage of not requiring a definition ( or exhaustive examples ) of object .the first step of the segmentation process consists in creating a numerical mask where different values discriminate the background from the object ( fig .5 ) . 
in well sampled images ,the attribution of a pixel to either the background or to the object classes depends on both the pixel value and on the properties of its neighbours : for instance , a `` bright '' isolated pixel in a `` dark '' environment is usually just noise .therefore , in order to classify a pixel , we need to take into account the properties of all the pixels in a window centered on it .this approach can be easily extended to the case of multiband images . , however , is a too high dimensionality to be effectively handled ( in terms of learning and computing time ) by any classification algorithm .therefore , in order to lower the dimensionality , we first use a pca to identify the ( with ) most significant features . in detail : \i ) we first run the window on a sub - image containing representative parts of the image .we used both a and a windows .\ii ) then we train the pca nn s on these patterns .the result is a projection matrix with dimensionality , which allows us to reduce the input feature number from to .we considered only the first three components since , accordingly to the pca , they contain almost of the information while the remaining is distributed over all the others .\iii ) the -dimensional projected vector is the input of a second nn which classifies the pixels in the various classes .\iv ) finally , we merge all classes except the background one in order to reduce the classification problem to the usual object / background dichotomy . much attention has also to be paid to the choice of the type of pca .after several experiments , we found that - for our specific task which is characterised by a large dynamical range in the luminosities of the objects ( or , which is the same , in the pixel values ) - pca s can be split into two gross groups : pca s with linear input - output mapping ( hereafter linear pca nn s ) and pca s with non linear input - output mapping ( non - linear pca nn s ) ( see section [ section3.1 ] ) .linear pca nn s turned out to misclassify faint objects as background .non - linear pca nn s based on a sigmoidal function allowed , instead , the detection of faint sources .this can be better understood from fig .6 and 7 which give the distributions of the training points in the simpler case of two dimensional inputs for the two types of pca nn s .linear pca nn s produce distributions with a very dense core ( background and faint objects ) and only a few points ( luminous objects ) spread over a wide area .such a behaviour results from the presence of very luminous objects in the training set which compress the faint ones to the bottom of the scale .this problem can be circumvented by avoiding very luminous objects in the training set , but this would make the whole procedure too much dependent on the choice of the training set .non - linear pca nn s , instead , produce better sampled distributions and a better contrast between background and faint objects .the sigmoidal function compresses the dynamical range squeezing the very luminous objects into a narrow region ( see fig .7 ) . among all ,the best performing nn ( tagliaferri et al .1998 ) turned out to be the hierarchical robust pca nn with learning function given in eq .[ eq2.5 ] .this nn was also the faster among the other non - linear pca nn s .the principal components matrices are detailed in the tables 1 - 3 and 4 - 6 for the and cases , respectively . 
in tables 13 ,numbers are rounded to the closest integere since they differ from an integer only at the 7-th decimal figure .not surprisingly , the first component turns out to be the mean in the case .the other two matrices can be seen as anti - symmetric filters with respect to the centre .the convolution of these filters ( see fig .8) with the input image gives images where the objects are the regions of high contrast .similar results are obtained for the case . at this stagewe have the principal vectors and , for each pixel , we can compute the values of the projection of each pixel in the eigenvector space .the second step of the segmentation process consists in using unsupervised nn s to classify the pixels into few classes , having as input the reduced input patterns which have been just computed .supervised nn would require a training set specifying , for each pixel , whether that pixel belongs to an object or to the background .we no longer consider such a possibility , due to the arbitrariness of such a choice at low fluxes , the lack of elegance of the method and the problems which are encountered in the labelling phase .unsupervised nn s are therefore necessary .we considered several types of nn s .as already mentioned several times , our final goal is to classify the image pixels in just two classes : objects and background , which should correspond to two output neurons .this simple model , however , seldom suffice to reproduce real data in the bidimensional case ( but similar results are obtained also for the 3-d or multi - d cases ) , since any unsupervised algorithm fails to produce spatially well separated clusters and more classes are needed .a trial and error procedure shows that a good choice of classes is : fewer classes produce poor classifications while more classes produce noisy ones . in all cases , only one class ( containing the lowest luminosity pixels )represents the background , while the other classes represent different regions in the objects images .we compared hierarchical , hybrid and unsupervised nn s with output neurons . from theoretical considerations and from preliminary work ( tagliaferri et .al 1998 ) we decided to consider only the best performing nn s , i d est neural gas , ml - neural gas , ml - som , and gcs+ml - neural gas . for a more quantitative and detailed discussionsee section [ section3.5 ] , where the performances of these nn s are evaluated . after this stageall pixels are classified in one of six classes .we merge together all classes , with the exception of the background one and reduce the classification to the usual astronomical dichotomy : object or background .finally , we create the masks , each one identifying one structure composed by one or more objects .this task is accomplished by a simple algorithm , which , while scanning the image row by row , when it finds one or more adjacent pixels belonging to the object class expands the structure including all equally labelled pixels adjacent to them .once objects have been identified we measure a first set of parameters .namely : the photometric barycenter of the objects computed as : where is the set of pixels assigned to the object in the mask , is the intensity of the pixel , and is the flux of the object integrated over the considered area . 
the semimajor axis of the object contour defined as : with position angle defined as: where is the most distant pixel from the barycenter belonging to the object .the semiminor axis of the faintest isophote is given by : \right| \cdot r(x , y)\ ] ] these parameters are needed in order to disentangle overlapping objects .our method recognises multiple objects by the presence of multiple peaks in the light distribution .search for double peaks is performed along directions at position angles with . at difference with focas , ( jarvis & tyson 1981 ), we sample several position angles because not always objects are aligned along the major axis of their light distribution , as focas implicitly assumes . in our experiments the maximum was set to .when a double peak is found , the object is split into two components by cutting it perpendicularly to the line joining the two peaks .spurious peaks can also be produced by noise fluctuations , a case which is very common in photographic plates near saturated objects .a good way to minimise such noise effects is , just for deblending purposes , to reduce the dynamical range of the pixels values , by rounding the intensity ( or pixel values ) in equi - espaced levels .multiple ( i.e. or more components ) objects pose a more complex problem . in the case shown in fig .9 , the segmentation mask includes three partially overlapping sources .the search for double peaks produces a first split of the mask into two components which separate the third and faintest component into two fragments .subsequent iterations would usually produce a set of four independent components therefore introducing a spurious detection . in order to solve the problem posed by multiple " non spurious objects erroneously split, a recomposition loop needs to be run .most celestial objects - does not matter whether resolved or unresolved - present a light distribution rapidly decreasing outwards from the centre .if an object has been erroneously split into several components , then the adjacent pixels on the corresponding sides of the two masks will have very different values .the implemented algorithm checks each component ( starting from the one with the highest average luminosity and proceeding to the fainter ones ) against the others .let us now consider two parts of an erroneously split object .when the edge pixels have luminosity higher than the average luminosity of the faintest component , the two parts are recomposed .this procedure also takes care of all spurious components produced by the haloes of bright objects ( an artifact which is a major shortcoming of many packages available in the astronomical community ) . the last operation before measuring the objects parameters consists in the regularization of the contours since due to noise , overlapping images , image defects , etc . segmentation produces patterns that are not similar to the original celestial objects that they must represent . for the contour regularisation ,we threshold the image at several sigma over the background and we then expand the ellipse describing the objects in order to include the whole area measured in the object detection . after the above described steps , it becomes possible to measure and compare the performances of the various nn models .we implemented and compared : neural gas ( ng3 ) , ml - neural gas ( mlng3 or mlng5 ) , ml - som ( k5 ) , gcs+ml - neural gas ( ngcs5 ) . 
the last digit in the nn name indicating the dimensions of the running window .attention was paid in choosing the training set , which needed to be at the same time small but significant . by trial and error, we found that for pca nn s and unsupervised nn s it was enough to choose sub - images , each one pixels wide and not containing very large objects .as all the experienced users know , the choice of the sex parameters ( minimum area , threshold in units of the background noise , and deblending parameter ) is not critical and the default values were choosen ( 4 pixel area , ) .table 7 shows the number of objects detected by the five nn s and sex .it has to be stressed that objects out of the 4819 available in the ip92 reference catalogue are beyond the detection limit of our plate material .sex detects a larger number of objects but many of them ( see table 7 ) are spurious .nn s detect a slightly smaller number of objects but most of them are real .in particular : mng5 looses , with respect to sex , only 79 real objects but detects 400 spurious objects less ; mng3 is a little less performing in detecting true objects but is even cleaner of spurious detections .the upper panel of fig .10 shows the number of " true objects ( i.e. objects in the ip92 catalogue ) .most of them are fainter than mag , i d est they are fainter than the plate limit .the lower panel shows instead the number of objects detected by the various nn s relative to sex .the curves largely coincide and , in particular , mlng5 and sex do not statistically differ in any magnitude bin while mlng3 slightly differs only in the faintest bin ( ) .the class of `` missed '' objects ( i d est objects which are listed in the reference catalogue but are not in the nn s or sex catalogues ) needs a detailed discussion .we focus first on brighter objects .they can be divided in : few `` true '' objects with a nearby companion which are blended in our image but are resolved in ip92 . parts of isolated single large objects incorrectly split by ip92 . a few cases . a few detections aligned in the e - w direction on the two sides of the images of a bright star .they are likely false objects ( diffraction spikes detected as individual objects in the ip92 catalog ) . objects in ip92 which correspond to empty regions in our images : they can be missing because variable , fast moving , or with an overestimated luminosity in the reference catalog ; they can also be missed because spurious in the template catalog .therefore , a fair fraction of the `` missed '' objects is truly non existent and the performances of our detection tools are lower bounded at mag .we wish to stress here that even though there is nothing like a perfect catalogue , the ip92 template is among the best ones ever produced to our knowledge .the upper panel of fig .11 is the same as in fig .the lower panel shows instead the fraction of " false objects , i d est of the objects detected by the algorithms but not present in the reference catalogue .ip92 were interested to faint objects and masked out the bright ones , therefore their catalogue may exclude a few `` true '' objects ( in particular at ) .we believe that all objects brighter than mag are really `` true '' since they are detected both by sex and nn s with high significance . 
for objects brighter than mag ,the nn s and sex have similar performances .they differ only at fainter magnitudes .the catalogue with the largest contamination by `` false '' objects is sex , followed by mlng5 , mlng3 and the other nn s beeing much less contaminated .mlng5 is quite efficient in detecting `` true '' objects and has a cleaner detection rate in the highly populous bin mag .mlng3 is less efficient than mlng5 in detecting `` true '' objects but it is even cleaner than mlng5 of false detections .let us now consider whether or not the detection efficiency depends on the degree of concentration of the light ( stars have steeper central gradients than galaxies ) . in ip92 objectsare classified in two major classes , star & galaxies , and a few minor ones ( merged , noise , spike , defects , etc . ) that we neglect .the efficiency of the detection is shown in fig .12 for three representative detection algorithms : mlng5 , k5 , and sex . at mag , the detection efficiency is large , close to 1 and independent on the central concentration of the light. please note that there are no objects in the image having mag and that in the first bin there are only 4 galaxies . at fainter magnitudes ( mag )detection efficiencies differ as a function of both the algorithm and of the light concentration .in fact , sex , mlng5 , and to less extent k5 , turn out to be more efficient in detecting galaxies rather than stars ( in other words : `` missed '' objects are preferentially stars ) . for sex , a possible explanation is that a minimal area above the background is required in order for the object to be detected and at mag and noise fluctuations can affect the isophotal area of unresolved objects bringing it below the assumed threshold ( 4 pixels ) .this bias is minimum for the k5 nn . however , this is more likely due to the fact that k5 misses more galaxies than the other algorithms , rather than to the fact that it detects more stars . in conclusion :mlng3 and mlng5 turn out to have high performances in detecting objects : they produce catalogs which are cleaner of false detections at the price of a slightly larger uncompleteness than the sex catalogues below the plate completness magnitude .we also want to stress that since the less performing nn s produce catalogs which are much cleaner of false detections , the selected objects are in large part true , and not just noise fluctuations .these nn s can therefore be very suitable to select candidates for possible follow up detailed studies at magnitudes where many of the objects detected by sex would be spurious .deeper catalogs having a large number of spurious source , such as those produced by sex or other packages are instead preferable if , for instance , they can be cleaned by subsequent processing ( for instance by matching the detected objects with other catalogs ) . a posteriori , one could argue that performances similar to those of each of the nn s could be achieved by running sex with appropriate settings .however , it would be unfair ( and methodologically wrong ) to make a fine tuning of any of the detection algorithms using an a - posteriori knowledge .it would also make cumbersome the authomatic processing of the images which is the final goal of our procedure . 
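The comparison against a deeper reference catalogue can be summarised with completeness and contamination curves as a function of magnitude, along the lines of the figures discussed above. The sketch below is generic: the matching radius, the magnitude bins, the catalogue layout and the random demo data are assumptions, not the actual values used for the IP92 comparison.

```python
import numpy as np
from scipy.spatial import cKDTree

def completeness_contamination(detected_xy, detected_mag,
                               reference_xy, reference_mag,
                               match_radius=2.0, bins=np.arange(14, 22, 1.0)):
    """Match detections against a deeper reference catalogue and return, per
    magnitude bin, the fraction of reference objects recovered ('true')
    and the fraction of detections with no counterpart ('false')."""
    ref_tree = cKDTree(reference_xy)
    dist, _ = ref_tree.query(detected_xy)
    matched = dist <= match_radius              # detections with a real counterpart

    det_tree = cKDTree(detected_xy)
    dist_ref, _ = det_tree.query(reference_xy)
    recovered = dist_ref <= match_radius        # reference objects that were found

    completeness, contamination = [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_ref = (reference_mag >= lo) & (reference_mag < hi)
        in_det = (detected_mag >= lo) & (detected_mag < hi)
        completeness.append(recovered[in_ref].mean() if in_ref.any() else np.nan)
        contamination.append((~matched[in_det]).mean() if in_det.any() else np.nan)
    return np.array(completeness), np.array(contamination)

# Tiny synthetic demo in place of the real catalogues.
rng = np.random.default_rng(4)
ref_xy = rng.uniform(0, 1000, size=(500, 2)); ref_mag = rng.uniform(14, 21, 500)
det_xy = ref_xy[:400] + rng.normal(scale=0.5, size=(400, 2)); det_mag = ref_mag[:400]
comp, cont = completeness_contamination(det_xy, det_mag, ref_xy, ref_mag)
```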
in this sectionwe discuss the feature extraction and selection of the features which are useful for the star / galaxy classification .features are chosen from the literature ( jarvis & tyson 1981 ; miller & coe 1996 ; odewahn et al .1992 ( = o92 ) , godwin & peach 1977 ) , and then selected by a sequential forward selection process ( bishop 1995 ) , in order to extract the most performing ones for classification purposes .the first five features are those defined in the previous section and describing the ellipses circumscribing the objects : the photometric barycenter coordinates ( ) , the semimajor axis ( ) , the semiminor axis ( ) . and the position angle ( ) . the sixth one is the object area , , i.e. the number of pixels forming the object .the next twelve features have been inspired to the pioneeristic work by o92 : the object diameter ( ) , the ellipticity ( ) , the average surface brightness ( ) , the central intensity ( ) , the filling factor ( ) , the area logarithm ( ) , the harmonic radius ( ) .the latter beeing defined as : and five gradients , , , and defined as : where is the average surface brightness within an ellipse , with position angle , semimajor axis , . and ellipticity .two more features are added following miller & coe ( 1996 ) : the ratios and .finally , five focas features ( jarvis & tyson 1981 ) have been included : the second ( ) and the fourth ( ) total moments defined as : where are the object central momenta computed as : the average ellipticity : the central intensity averaged in a and , finally , the kron radius defined as : for each object we therefore measure features , where the first are reported only to easy the graphical representation of the objects and have a low discriminating power .the complete set of the extracted features is given in table 8 .our list of features includes therefore most of those usually used in the astronomical literature for the star / galaxy classification .are all these features truly needed ? and , if this is not and a smaller subset contains all the needed information , what are the most useful ones ?we tried to answer these questions by evaluating the classification performance of each set of features through the a - priori knowledge of the true classification of each object , as it is listed in a much deeper and higher quality reference catalog .most of the defined features are not independent .the presence of redundant features decreases the classification performances since any algorithm would try to minimise the error with respect to features which are not particularly relevant for the task .furthermore , by introducing useless features the computational speed would be lowered .the feature selection phase was realised through the sequential backward elimination strategy ( bishop 1995 ) , which works as follows : let us suppose to have features in one set and to run the classification phase with this set .then , we build different sets with features in each one and then we run the classification phase for each set , keeping the set which attains the best classification .this procedure allows us to eliminate the less significant feature .then , we repeat times the procedure eliminating one feature at each step . 
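A minimal sketch of this sequential backward elimination loop is given below, with scikit-learn's MLPClassifier standing in for the paper's MLP and randomly generated data in place of the real feature catalogue; the number of features, the network size and the scoring on a single held-out set are illustrative assumptions rather than the paper's actual setup.

```python
import numpy as np
from sklearn.base import clone
from sklearn.neural_network import MLPClassifier

def backward_elimination(X_tr, y_tr, X_te, y_te, estimator):
    """Sequential backward elimination: at each step drop the feature whose
    removal gives the best score on the held-out set, and record the history."""
    remaining = list(range(X_tr.shape[1]))
    history = []
    while len(remaining) > 1:
        scores = []
        for f in remaining:
            subset = [g for g in remaining if g != f]
            model = clone(estimator).fit(X_tr[:, subset], y_tr)
            scores.append(model.score(X_te[:, subset], y_te))
        best = int(np.argmax(scores))            # feature whose removal hurts least
        history.append((remaining.pop(best), scores[best]))
    return remaining, history

# Hypothetical data standing in for the measured feature catalogue.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))
y = (X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=300) > 0).astype(int)
mlp = MLPClassifier(hidden_layer_sizes=(5,), max_iter=300, random_state=0)
kept, history = backward_elimination(X[:200], y[:200], X[200:], y[200:], mlp)
print("surviving feature:", kept, "elimination order:", [f for f, _ in history])
```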
in order to further reduce the computation time we do not use the validation set and the classification error is evaluated directly on the test set .it has to be stressed that this procedure is common in the statistical pattern recognition literature where , very often , for this task are also introduced simplified models .this however could be avoided in our case due to the speed and good performances of our nn s unsupervised nn s were not successful in this task , because the input data feature space is not separated into two not overlapping classes ( or , in simpler terms , the images and therefore the parameters of stars and galaxies fainter than the completeness limit of the image are quite similar ) , and they reach a performance much lower than supervised nn s . supervised learning nn s give far better results .we used a mlp with one hidden layer of neurons and only one output , assuming value for star and value for galaxy .after the training , we calculate the nn output as if it is greater than and otherwise for each pattern of the test set .the experiments produce a series of catalogues , one for each set of features .13 shows the classification performances as a function of the adopted features .after the first step , the classification error remains almost constant up to , i d est up to the point where features which are important for the classification are removed . a high performance can be reached using just 6 features . with a lower number of features the classificationworsen , whereas a larger number of features is unjustified , because it does not increase the performances of the system .the best performing set of features consists of features 11 , 12 , 14 , 19 , 21 , 25 of table 8 .they are two radii , two gradients , the second total moment and a ratio which involves measures of intensity and area .let us discuss now how the star / galaxy classification takes place .the first step is accomplished by teaching " the mlp nn using the selected best features . in this casewe divided the data set into three independent data sets : training , validation and test sets .the learning optimization is performed using the training set while the early stopping technique ( bishop 1995 ) is used on the validation set to stop the learning to avoid overfitting .finally , we run the mlp nn on the test set . as comparison classifier, we adopt sex , which is based on a mlp nn .as features useful for the classification , sex uses eight isophotal areas and the peak intensity plus a parameter , the fwhm of stars .since the sex nn training was already realised by bertin & arnouts ( 1996 ) on simulated images of stars and galaxies , we limit ourselves to tune sex in order to obtain the best performances on the validation set . both sex and our system use nn s for the classification , but they follow two different , alternative approaches : sex uses a very large training set of simulated stars and galaxies , our system uses noisy , real data . furthermore , while the features of sex are fixed by the authors , and the nn s output is a number , ; our system selects the best performing ones and its output is an integer : 0 or 1 ( i d est star or galaxy ) .therefore , we use the validation set for choosing the threshold which maximises the number of correct classifications by sex ( see fig .the experimental results are shown in fig .15 where the errors are plotted as a function of the magnitude . at all magnitudes next misclassify less objects than sex . 
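Before turning to the detailed error counts, here is a minimal sketch of the supervised step just described: a single hidden layer, one sigmoidal output (0 for star, 1 for galaxy), early stopping on a validation split, and a 0.5 threshold on the output. scikit-learn's MLPClassifier is only a stand-in for the paper's MLP, and the data are synthetic placeholders for the six selected features.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic placeholder for the six selected features and the 0/1 labels
# (0 = star, 1 = galaxy); the real training set comes from the reference catalogue.
rng = np.random.default_rng(2)
X = rng.normal(size=(1500, 6))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# One hidden layer with sigmoidal units and early stopping on a validation split.
clf = MLPClassifier(hidden_layer_sizes=(10,), activation="logistic",
                    early_stopping=True, validation_fraction=0.2,
                    max_iter=1000, random_state=0).fit(X_train, y_train)

# Continuous output thresholded at 0.5, as in the text.
galaxy_probability = clf.predict_proba(X_test)[:, 1]
predicted = (galaxy_probability > 0.5).astype(int)
print("misclassified objects:", int((predicted != y_test).sum()))
```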
out of 460 objects, sex makes 41 incorrect classifications , while next just 28 . in order to check thethat our feature selection is optimal , we also compared our classification with those obtained using our mlp nn s with others feature sets , selected as shown in table 9 .the total number of misclassified objects in the test set of 460 elements were : o - f , 43 errors ; o - l , 30 errors ; o - s , 35 errors ; gp1 , 48 errors ; gp2 , 49 errors .16 shows the classification performances of the considered feature sets as a function of the magnitude of the objects .results for stars are presented as solid line , while for galaxies we used dotted lines .the perfomances of next are presented in the top - left panel : galaxies are correctly classified as long as they are detected , whereas the correctness of the classification of stars drops to 0 at .fainter stars are pratically absent in the ip92 catalog , thus explaining why the stars point stop at brighter magnitudes than galaxies .o92 selected a 9 features set ( o - f ) for the star / galaxy classification .their set ( central left panel ) is slightly less performing for bright ( ) galaxies and for faint stars ( ) than the set of features selected by us ( upper left panel ) .they select also a smaller ( four ) set of features ( o - f ) quite useful to classify large objects .the classification performances of this set , when applied to our images , turn out to be better than the larger feature dataset : in fact , bright galaxies are not misclassified ( see the bottom left panel ) .even with respect to our dataset o - f performs well : their set is sligthly better in classifying bright galaxies , at the price of a achieving lower performances on faint stars .the further set of features by o92 ( o - s ) was aimed to the accurate detection of faint sources and performs similarly to their full set : it misclassifies bright galaxies and faint stars .the performances of the traditional classifiers , ( gp1 ) and ( gp2 ) , are presented in the central and low right panels . with just two features , all the faint objects are classified as galaxies , and due to the absence of stars in our reference catalog , the classification performances are . however , this is not a real classification . at bright magnitudes ,the classification of the traditional classified dataset are as large as , or sligthly lower , than the next dataset .in this paper we discuss a novel approach to the problem of detection and classification of objects on wf images . in section 2we shortly review the theory of some type of nn s which are not familiar to the astronomical community .based on these considerations , we implemented a neural network based procedure ( next ) capable to perform the following tasks : i ) to detect objects against a noisy background ; ii ) to deblend partially overlapping objects ; iii ) to separate stars from galaxies .this is achieved by a combination of three different nn s each performing a specific task .first we run a non linear pca nn to reduce the dimensionality of the input space via a mapping from pixels intensities to a subspace individuated through principal component analysis . for the second step we implemented a hierarchical unsupervised nn to segmentate the image and , finally after a deblending and reconstruction loop we implemented a supervised mlp to separate stars from galaxies . 
in order to identify the best performing nn s we implemented and tested in homogeneous conditions several different models .next offers several methodological and practical advantages with respect to other packages : i ) it requires only the simplest a priori definition of what an `` object '' is ; ii ) it uses unsupervised algorithms for all those tasks where both theory and extensive testing show that there is no loss in accuracy with respect to supervised methods. supervised methods are in fact used only to perform star / galaxy separation since , at magnitudes fainter than the completeness limit , stars are usually almost indistinguishable from galaxies and the parameters characterizing the two classes do not lay in disconnected subspaces .iii ) instead of using an arbitrarily defined and often specifically tailored set of features for the classification task next , after measuring a large set of geometric and photometric parameters , uses a sequential backward elimination strategy ( bishop 1995 ) to select only the most significant ones .the optimal selection of the features was checked against the performances of other classificators ( see sect .3.8 ) . in order to evaluate the performances of next , we tested it against the best performing package known to the authors ( i d est sex ) using a dposs field centered on the north galactic pole .we want also to stress here that - in order to have an objective test and at difference of what is currently done in literature - next was checked not against the performances of an arbitrarily choosen observer but rather against a much deeper catalogue of objects obtained from better quality material .the comparison of next performances against those of sex show that in the detection phase , next is at least as effective as sex in detecting true " objects but much cleaner of spurious detections . for what classificationis concerned , next nn performs better than the sex nn : 28 errors for next against 41 for sex on a total of 460 objects , most of the errors referring to objects fainter than the plate detection limit .other attempts , besides those described in the previous sections , to use nn for similar tasks have been discussed in the literature .balzell & peng ( 1998 ) , used the same north galactic pole field ( but extracted from poss - i plates ) used in this work .they tested their star / galaxy classification nn on objects which are both too few ( 60 galaxies and 27 stars ) and too bright ( a random check of their objects shows that most of the galaxies extend well over than 20 pixels ) to be of real interest .it needs also to be stressed that , due to their preprocessing strategy , their nn s are forced to perform cluster analysis on a huge multidimensional imput space with scarsely populated samples .naim ( 1997 ) follows instead a strategy which is similar to ours and makes use of a fairly large dataset extracted from poss - i material .he , however , trained the networks to achieve the same performances of an experienced human observer while , as already mentioned , next is checked against a catalogue of true " objects . 
even though his target is the classification of objects fainter and larger than those we are dealing with, he tested the algorithm in a much more crowded and difficult region of the sky near the galactic plane. o92 makes use of a traditional mlp and succeeded in demonstrating that ai methods can reproduce the star/galaxy classification obtained with traditional diagnostic diagrams by trained astronomers. their aim, however, was less ambitious than performing the ``correct'' star/galaxy classification, which is instead the final goal of next. this paper is a first step toward the application of artificial intelligence methods to astronomy. foreseen improvements of our approach are the use of ica (independent component analysis) nns instead of pca nns and the adoption of bayesian learning techniques to improve the classification performances of mlps. these developments and the application of next to other wide field astronomical data sets obtained with large format ccd detectors will be discussed in forthcoming papers. * acknowledgements * the authors wish to thank chris pritchet for providing them with a digital version of the ip92 catalogue. we also acknowledge the canadian astronomy data center for providing us with poss-ii material. this work was partly sponsored by the special grant murst cofin 1998, n. 9802914427.

astronomical wide field imaging performed with new large format ccd detectors poses data reduction problems of unprecedented scale which are difficult to deal with using traditional interactive tools. we present here next (neural extractor): a new neural network (nn) based package capable of detecting objects and performing both deblending and star/galaxy classification in an automatic way. traditionally, in astronomical images, objects are first discriminated from the noisy background by searching for sets of connected pixels having brightnesses above a given threshold, and then they are classified as stars or galaxies through diagnostic diagrams whose variables are chosen according to the astronomer's taste and experience. in the extraction step, assuming that images are well sampled, next requires only the simplest a priori definition of what an ``object'' is (id est, it keeps all structures composed of more than one pixel) and performs the detection via an unsupervised nn, approaching detection as a clustering problem which has been thoroughly studied in the artificial intelligence literature. the first part of the next procedure consists in an optimal compression of the redundant information contained in the pixels via a mapping from pixel intensities to a subspace individuated through principal component analysis. at magnitudes fainter than the completeness limit, stars are usually almost indistinguishable from galaxies, and therefore the parameters characterizing the two classes do not lie in disconnected subspaces, thus preventing the use of unsupervised methods. we therefore adopted a supervised nn (i.e. a nn which first learns the rules to classify objects from examples and then applies them to the whole dataset). in practice, each object is classified depending on its membership of the regions into which the training set maps the input feature space. in order to obtain an objective and reliable classification, instead of using an arbitrarily defined set of features, we use a nn to select the most significant features among the large number of measured ones, and then we use the selected features to perform the classification task. in order to optimise the performances of the system we implemented and tested several different models of nn. the comparison of the next performances with those of the best detection and classification package known to the authors (sextractor) shows that next is at least as effective as the best traditional packages. keywords: methods: data analysis; techniques: image processing; astronomical data bases: catalogues.
nongaussianity ( ng ) is a resource for the implementation of continuous variable quantum information in bosonic systems .several schemes to generate nongaussian states from gaussian ones have been proposed , either based on nonlinear interactions or on conditional measurements . in many casesthe effective nonlinearity is small , and so it is the resulting ng .it is thus of interest to investigate the ng of states in the neighbourhood of a gaussian state , i.e. the ng of slightly perturbed gaussian states . besides the fundamental interest this also provides a way to assess different degaussification mechanisms , as well as ng itself as a resource for quantum estimation .indeed , in an estimation problem where the variation of a parameter affects the gaussian character of the involved states one may expect the amount of ng to play a role in determining the estimation precision . quantum estimation deals with situations where one tries to infer the value of a parameter by measuring a different quantity , which is somehow related to .this often happens in quantum mechanics and quantum information where many quantities of interest , e.g. entanglement , do not correspond to a proper observable and should be estimated from the measurement of one or more observable quantities .given a set of quantum states parametrized by the value of the quantity of interest , an estimator for is a real function of the outcomes of the measurements performed on .the quantum cramer - rao theorem establishes a lower bound for the variance of any unbiased estimator , i.e. for the estimation precision , in terms of the number of measurements and the so - called quantum fisher information ( qfi ) , which captures the statistical distinguishability of the states within the set .indeed , the qfi distance itself is proportional to the bures distance ] being the fidelity , between states corresponding to infinitesimally close values of the parameter , i.e. , in terms of metrics , where , and we have used the eigenbasis gaussian states and a measure of non gaussianity ------------------------------------------------ let us consider a single - mode bosonic system described by the mode operator with commutation relations =1 ] where is the displacement operator .the canonical operators are given by and with commutation relations given by =i ] is the expectation value of the operator on the state .a quantum state is said to be gaussian if its characteristic function has a gaussian form .once the cm and the vectors of mean values are given , a gaussian state is fully determined .the amount of ng ] between and its reference gaussian state , which is a gaussian state with the same covariance matrix as . as for its classical counterpart , the kullback - leibler divergence, it can be demonstrated that when it is definite , _i.e. _ when .in particular iff . since is gaussian =0 ] where ] average thermal quanta , i.e. , in the fock number basis .an infinitesimal perturbation of the eigenvalues of a gaussian state , i.e. results in a perturbed state which , in general , is no longer gaussian . 
since the ng of a state is invariant under symplectic operations we have =\delta[\eta] ] up to the second order , = \sum_k \frac{dp_k^2}{2 p_k } - \frac{dn^2}{2 n_\nu(1+n_\nu)}\ , .\label{eq : nongbures}\end{aligned}\ ] ] nongaussianity of perturbed states is thus given by the sum of two contributions .the first term is the fisher information of the probability distribution , which coincides with the classical part of the bures distance in the hilbert space .the second term is a negative contribution expressed in terms of the infinitesimal change of the average number of quanta .when traveling on surfaces at constant energy the amount of ng coincides with a proper distance in the hilbert space and , in this case , it has a geometrical interpretation as the infinitesimal bures distance . at the same time , since bures distance is proportional to the qfi one , it expresses the statistical distinguishability of states , and we conclude that moving out from a gaussian state towards its nongaussian neighbours is a resource for estimation purposes .similar conclusions can be made when comparing families of perturbations corresponding to the same infinitesimal change of energy : in this case the different amounts of ng induced by the perturbations are quantified by the bures distance minus a constant term depending on and the initial thermal energy , i.e. the intial purity = \hbox{tr}[\tau^2]= ( 2 n_\nu + 1)^{-1} ] plus a term depending both on the infinitesimal variation of energy and on the initial purity + \frac{2\ ,\mu^2 dn^2}{1-\mu^2 } \:.\ ] ] in particular , for perturbations that leave the energy unperturbed the ng of the perturbed state coincides with the qfi distance , whereas , in general , it provides a lower bound . in order to explore specific directions in the neighbourhood of a gaussian state us write the perturbation to the eigenvalues as where is a given distribution . in this casethe ng of the perturbed state is given by = \epsilon^2\ , \left(\sum_k \frac{(p_k-\mu_k)^2}{2 p_k } - \frac{\delta n_\mu^2}{2n_t(1+n_t)}\right ) + o(\epsilon^3)\,,\label{eq : nongexp}\end{aligned}\ ] ] where let us now consider the families of states generated by the convex combination of the gaussian states with a the target state , which itself is obtained by changing the eigenvalues of the initial gaussian state to .again we exploit invariance of ] , where =-\sum_k q^{(\epsilon)}_k \log q^{(\epsilon)}_k ] is the average number of quanta of .notice that for a thermal state with quanta we have = h[n_t + 1/2] ] only depends on the difference between the entropy of the initial and target distributions .let us now consider perturbations towards some relevant distributions i.e. poissonian , thermal , and fock and evaluate the ng of states obtained as convex combination of a thermal state with quanta and a diagonal quantum state with a poissonian , thermal or fock distributions and quanta . [ cols="^,^ " , ] in fig .[ f:1n ] we plot the ng of the convex combination as a function of for different values of : if we consider the convex combination with a fock state , the ng simply increases monotonically with the energy of the added state . 
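For states diagonal in the Fock basis the reference Gaussian state is the thermal state with the same mean photon number, so the non-Gaussianity reduces to the entropy difference discussed above. The following numerical sketch evaluates this quantity for convex combinations of a thermal distribution with Fock, thermal and Poissonian targets; the truncation, the mean photon numbers and the natural-log units are illustrative assumptions and this is not the code used for the figures.

```python
import numpy as np
from scipy.stats import poisson

def thermal(n_mean, kmax):
    k = np.arange(kmax)
    return n_mean**k / (1.0 + n_mean)**(k + 1)

def fock(n, kmax):
    p = np.zeros(kmax); p[n] = 1.0
    return p

def nong(p):
    """delta[rho] = S(tau) - S(rho) for a Fock-diagonal state with eigenvalues p."""
    p = p / p.sum()                                   # renormalise after truncation
    n = np.sum(np.arange(p.size) * p)                 # mean number of quanta
    s_ref = (n + 1) * np.log(n + 1) - n * np.log(n)   # entropy of the thermal reference
    s_rho = -np.sum(p[p > 0] * np.log(p[p > 0]))      # entropy of the perturbed state
    return s_ref - s_rho

kmax, n_t, n_mu = 300, 1.0, 3.0
targets = {"fock": fock(int(n_mu), kmax),
           "thermal": thermal(n_mu, kmax),
           "poissonian": poisson.pmf(np.arange(kmax), n_mu)}
for name, q in targets.items():
    for eps in (0.1, 0.5, 0.9):
        p = (1 - eps) * thermal(n_t, kmax) + eps * q
        print(f"{name:10s}  eps = {eps:.1f}   delta = {nong(p):.4f}")
```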
for combinations with poissonian and thermal distributionswe have a maximum for , then a local minimum ( which for the thermal distribution corresponds trivially to =0 ] provides an upper bound to the qfi distance \:.\ ] ] * proof * : if and have the same cm then the ng of , = \s(\varrho_{\lambda+d\lambda}||\tau_\lambda)=\widetilde h(\lambda)d\lambda^2 $ ] , where the so - called kubo - mori - bogolubov information provides an upper bound for the quantum fisher information , thus proving the theorem . + the above theorem says that a larger ng of the perturbed state may correspond to a greater distinguishability from the original one , thus allowing a more precise estimation . of course, this is not ensured by the theorem , which only provides an upper bound to the qfi .one may wonder that when is itself a gaussian state the theorem requires , i.e. no reliable estimation is possible .indeed , this should be the case , since gaussian states are uniquely determined by the first two moments and thus the requirement that the perturbed and the original state are both gaussian and have the same covariance matrix implies that they are actually the same quantum state .in conclusion , we have addressed the ng of states obtained by weakly perturbing a gaussian states and have investigated the relationships with quantum estimation .we found that ng provides a lower bound to the qfi distance for classical perturbations , i.e. perturbations to eigenvalues leaving the eigenvectors unperturbed , and an upper bound for perturbations leaving the covariance matrix unperturbed . for situations wherethe cm is changed by the perturbation we have no general results . on the other hand, it has been already shown that non - gaussian states improve quantum estimation of both unitary perturbations as the displacement and the squeezing parameters and nonunitary ones as the loss parameter of a dissipative channel .overall , our results show that the geometry of nongaussian states in the neighbourhood of a gaussian state is definitely not trivial and can not be subsumed by a differential structure . despite this fact ,the analysis of perturbations to a gaussian state may help in revealing when , and to which extent , ng is a resource for quantum estimation .we have also analyzed the ng of specific families of perturbed gaussian states with the aim of finding the maximally non gaussian state obtainable from a given gaussian one .mgg acknowledge the uk epsrc for financial support .mgap thanks r. f. antoni for being a continuing inspiration .99 , phys . rev .a * 82 * , 052341 ( 2010 ) .s. olivares , m. g. a. paris , j. opt .b * 7 * , s616 ( 2005 ) .m. s. kim , j. phys .b , * 41 * , 133001 ( 2008 ) .a. i. lvovsky , h. hansen , t. aichele , o. benson , j. mlynek , and s. schiller , phys .lett . * 87 * , 050402 ( 2001 ) .j. wenger , r. tualle - brouri , p. grangier , phys .lett . * 92 * , 153601 ( 2004 ) .a. zavatta , s. viciani , m. bellini , phys .a. * 70 * , 053821 ( 2004 ) .a. ourjoumtsev , r. tualle - brouri , j. laurat , and p. grangier , science * 312 * , 83 ( 2006 ) .neergard - nielsen , b. m. nielsen , c. hettich , k. molmer and e. s. polzik , phys .lett . * 97 * , 083604 ( 2007 ) .a. ourjoumtsev , h. jeong , r. tualle - brouri , and p. grangier , nature * 448 * , 784 ( 2007 ) .a. zavatta , v. parigi , and m. bellini , phys . rev . a * 75 * , 052106 ( 2007 ) .v. parigi , a. zavatta , m.s .kim , and m. bellini , science , * 317 * , 1890 ( 2007 ) .a. zavatta , v. parigi , m. s. kim , h. jeong , and m. 
bellini , phys .lett . * 103 * , 140406 ( 2009 ) .a. ourjoumtsev , f. ferreyrol , r. tualle - brouri , and p. grangier , nature phys .* 5 * , 189 ( 2009 ) .a. ourjoumtsev , a. dantan , r. tualle - brouri , and p. grangier , phys .lett . * 98 * , 030502 ( 2007 ) .h. takahashi , j. s. neergaard - nielsen , m. takeuchi , m. takeoka , k. hayasaka , a. furusawa and m. sasaki , nature phot .* 4 * 178 ( 2010 ) .m. sasaki and s. suzuki , phys .a , * 73 * ( 2006 ) 043807 . v. dauria , c. de lisio , a. porzio , s. solimeno , j. anwar and m. g. a. paris , phys .a , * 81 * ( 2010 ) 033846 .a. chiummo , m. de laurentis , a. porzio , s. solimeno and m. g. a. paris , opt .* 13 * ( 2005 ) 948 . c. silberhorn , p. k. lam , o. wei , f. knig , n. korolkova , and g. leuchs , phys .lett . , * 86 * , 4267 ( 2001 ) .o. glckl , u. l. andersen and g. leuchs , phys .a * 73 * , 012306 ( 2006 ) .t. tyc and n. korolkova , new j. phys ., * 10 * , 023041 ( 2008 ) .m. genoni , f. a beduini , a. allevi , m. bondani , s. olivares , m. g. a. paris , phys .scr . * t140 * , 014007 ( 2010 ) .a. allevi , a. andreoni , m. bondani , m. g. genoni and s. olivares , phys . rev .a * 82 * , 013816 ( 2010 ) ., epl * 92 * , 20007 ( 2010 ) .m. barbieri , n. spagnolo , m. g. genoni , f. ferreyrol , r. blandino , m. g. a. paris , p. grangier , r. tualle - brouri , phys .a * 82 * , 063833 ( 2010 ) .m. allegra , p. giorda , m. g. a. paris , phys .. lett . * 105 * , 100503 ( 2010 ) .m. genoni , p. giorda , m. g. a. paris , phys .a * 78 * , 032303 ( 2008 ) .g. brida , i. degiovanni , a. florio , m. genovese , p. giorda , a. meda , m. g. paris , a. shurupov , phys .lett . * 104 * , 100501 ( 2010 ) .m. g a paris , int .* 7 * , 125 ( 2009 ) .s. l. braunstein , c. m. caves , phys .lett . * 72 * 3439 ( 1994 ) ; s. l. braunstein , c. m. caves , g. j. milburn , ann . phys . * 247 * , 135 ( 1996 ) .a. fujiwara , metr * 94 - 08 * ( 1994 ) .s. amari and h. nagaoka , _ methods of information geometry _ , trans .* 191 * , ams ( 2000 ) .d. c. brody , l. p. hughston , proc .a * 454 * , 2445 ( 1998 ) ; a * 455 * , 1683 ( 1999 ) . ,a * 78 * , 060303(r ) ( 2008 ) .b. schumacher , m. d. westmoreland , _ relative entropy in quantum information theory _ in ams cont ., * 305 * ( 2002 ) . v. vedral , rev .mod . phys . * 74 * , 197 ( 2002 ) .m. m. wolf , g. giedke , and j. i. cirac , phys .lett . * 96 * , 080502 ( 2006 ) .x. b. wang , t. hiroshima , a. tomita , m. hayashi , phys . rep . * 448 * , 1 ( 2007 ) .s. amari , h. nagaoka , _ methods of information geometry _( ams & oxford university press , 2000 ) . d. petz , lin . alg . appl . * 224 * , 81 ( 1996 ) .m. g. genoni , c. invernizzi and m. g. a. paris , phys .a * 80 * , 033842 ( 2009 ) .g. adesso , f. dellanno , s. de siena , f. illuminati , l. a. m. souza , phys .a * 79 * , 040305(r ) ( 2009 ) .
we address the nongaussianity ( ng ) of states obtained by weakly perturbing a gaussian state and investigate the relationships with quantum estimation . for classical perturbations , i.e. perturbations to eigenvalues , we found that ng of the perturbed state may be written as the quantum fisher information ( qfi ) distance minus a term depending on the infinitesimal energy change , i.e. it provides a lower bound to statistical distinguishability . upon moving on isoenergetic surfaces in a neighbourhood of a gaussian state , ng thus coincides with a proper distance in the hilbert space and exactly quantifies the statistical distinguishability of the perturbations . on the other hand , for perturbations leaving the covariance matrix unperturbed we show that ng provides an upper bound to the qfi . our results show that the geometry of nongaussian states in the neighbourhood of a gaussian state is definitely not trivial and can not be subsumed by a differential structure . nevertheless , the analysis of perturbations to a gaussian state reveals that ng may be a resource for quantum estimation . the ng of specific families of perturbed gaussian states is analyzed in some details with the aim of finding the maximally non gaussian state obtainable from a given gaussian one .
investigations of traffic flows on substrates of various topologies and discussions of their efficiency have been a topic of recent research interest .the optimization of network structure and traffic protocols to achieve maximum efficiency is also a problem of practical importance .congestion effects can occur in real networks like telephone networks , computer networks and the internet .congestion / decongestion transitions are seen in these systems .recent studies on the ` ping'-experiment shows fluctuation at the critical point . message transport on different network geometries has been studied earlier on a linear chain , on two - dimensional lattices and on cayley trees where messages are routed through shortest paths . here, we consider a ring lattice of ordinary nodes and hubs similar to that considered in .networks based on ring geometries have been studied in the context of atm networks and local area networks ( lan ) .a realistic ring topology such as fiber distributed data interface ( fddi ) sends messages clockwise or counter - clockwise through the shared link .similarly , messages are deposited on our model ring lattice at regular intervals .we show that the network reproduces the experimental findings of the internet traffic flow .here we discuss an one dimensional version of the communication network of nodes and hubs . the base network is a ring lattice of size with nearest neighbor connections .hubs are distributed randomly in the lattice where each hub has nearest neighbors .no two hubs are separated by a less than a minimum distance , . in our simulationwe have taken =4 and =1 , although fig.[fig_sim1](a ) illustrates only =2 connections .the distance between a source and target is defined by the manhattan distance .messages are routed along the shortest path between a source and a target in the clockwise direction taking advantage of all links in the direction of the target .thus , if a message is routed from a source to a target on this lattice through the baseline mechanism , it takes the path - 1 - 2-y-3 - 4 - 5- as in fig.[fig_sim1](a ) .in our simulation , a given number of source and target pairs start sending messages at every time step for a total run time of for a lattice size , and . the average load per node is given as where is the total number of messages flowing on the lattice .for smaller values of the posting rate , the value of is very small and the system is in the decongested phase . as the posting rate of messagesis increased , the system attains the congested regime .the autocorrelation function of the average load per node ( ) is defined as : the fourier transform of the autocorrelation function is known as the spectral density or power spectrum , and is defined as we plot against for posting rates of .the plot of against shows a power law : . in this case the spectral exponent thus indicating scaling irrespective of the posting rate ( fig.[fig_sim1](b ) ) .we also study the inter - arrival time of messages for the most congested hub .the most congested hub is identified by calculating the coefficient of betweenness centrality ( cbc ) , which is defined as the ratio of the number of messages which pass through a given hub to the total number of messages which run simultaneously i.e. .hubs with higher cbc value are more prone to congestion .we calculate the inter arrival time of messages for the hub with highest cbc .inter - arrival times were studied earlier in the context of dynamics of information access on the web and also for human dynamics . 
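As a sketch of the post-processing described above, the load time series can be turned into a power spectrum through the autocorrelation (Wiener-Khinchin) route and the spectral exponent estimated with a log-log fit. The random-walk series below is only a stand-in for the simulation output, and the circular autocorrelation is a simplification adequate for illustration.

```python
import numpy as np

def power_spectrum(load):
    """Power spectrum of the load time series via the autocorrelation
    (Wiener-Khinchin); the circular autocorrelation is enough for a sketch."""
    x = load - load.mean()
    acf = np.fft.ifft(np.abs(np.fft.fft(x)) ** 2).real / len(x)
    spectrum = np.abs(np.fft.fft(acf))
    freqs = np.fft.fftfreq(len(acf))
    keep = freqs > 0
    return freqs[keep], spectrum[keep]

def spectral_exponent(freqs, spectrum):
    """Slope of log S(f) against log f, returned as alpha for S(f) ~ 1/f^alpha."""
    slope, _ = np.polyfit(np.log(freqs), np.log(spectrum), 1)
    return -slope

# Correlated surrogate series standing in for the simulated load per node.
rng = np.random.default_rng(3)
load = np.cumsum(rng.normal(size=4096))
f, s = power_spectrum(load)
print("estimated spectral exponent:", round(spectral_exponent(f, s), 2))
```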
for the baseline mechanism ,the distribution of inter - arrival times is of the stretched exponential form , given by where for ( fig.[fig_sim3](b ) ) . if the hubs in the lattice are connected by random assortative connections with two connections per hubs as shown in fig.[fig_sim3](a ) , the inter arrival time of messages show power law behavior of the form where for ( fig.[fig_sim3](b ) ) . in the next section , we will discuss a double ring variation of the network , and discuss another statistical characteriser , the travel time distribution .the ring lattice can be easily modified to the double - ring lattice as shown in fig.[fig_sim2](a ) .double - ring network topologies have been used earlier to model the head - direction system in animals as well as for local area networks ( lan ) .our double - ring lattice consists of two concentric ring lattices ( fig.[fig_sim2](a ) ) of size and respectively , where is the size of the inner ring lattice and is the size of the outer ting lattice .the source - target pairs and the hubs are located in the outer lattice , with each hub having a connection to a node in the inner lattice . as before a messageis routed along the shortest path - 1-x-2 - 3 - 4 - 5-y-6- in the clockwise direction as shown in fig.[fig_sim2](a ) .we study the travel time distribution of messages which are flowing on the lattice .the travel time is defined to be the time required for a message to travel from source to target , including the time spent waiting at congested hubs .a given number of source and target pairs start sending messages continuously at every time steps for a total run time of . in our simulation the travel time is calculated for a source - target separation of on a and double ring lattice , and averaged over hub realizations .the distribution of travel times of messages shows bimodal behaviour .the peak at higher travel times shows gaussian behavior whereas the peak at lower travel time shows log - normal behavior . in the case of the ring ,crossover from gaussian to log - normal behavior was observed during the congestion - decongestion transition in the ring lattice .hence we conclude that the gaussian peak at higher travel times for the double ring corresponds to the initial congestion in the system , whereas the log - normal peak at lower travel times corresponds to the later decongested stage .to summarize , we have studied message transport on model communication network of ordinary nodes and hubs , embedded on a ring lattice .the properties of message traffic on such a lattice is largely consistent with the real world networks like the internet .the power spectral analysis of load time series data shows type fluctuations confirming long - ranged correlation in the network load time series , which is also seen in real life networks . for the baseline mechanism the inter arrival time distribution of messagesshow a stretched exponential behavior. the behavior changes to a power law if random assortative connections are introduced in the lattice .we also studied a variation of the ring lattice , namely the double ring lattice .the travel time distribution is bimodal , with one gaussian peak and one log - normal peak. it would be interesting to see if our results have relevance in real life communication networks like telephone networks , biological networks etc .we thank csir , india for support under their extra - mural scheme .the authors also thank a. prabhakar for helpful suggestions and comments .
we study message transport on a ring of nodes and randomly distributed hubs. messages are deposited on the network at a constant rate. when the rate at which messages are deposited on the lattice is very high, messages start accumulating after a critical time and the average load per node starts increasing. the power spectrum of the load time series shows power-law noise similar to that observed in internet traffic. the inter-arrival time distribution of messages for the ring network shows stretched exponential behavior, which crosses over to power-law behavior if assortative connections are added to the hubs. the distribution of travel times in a related double-ring geometry is shown to be bimodal, with one peak corresponding to initial congestion and another to later decongestion. communication network; message transport;
the introduction of the adjoint neutron transport was one of the key landmarks in the evolution of nuclear reactor physics .the solution of this equation was interpreted as neutron importance and opened the way to many applications in nuclear reactor physics and engineering .the basic concept of neutron importance was established by weinberg and wigner , although the idea was proposed in various forms also by other authors .a consistent derivation of the adjoint boltzmann equation for the critical reactor in steady state and its physical interpretation as the neutron importance conservation equation in integro - differential form is due to ussachoff .the integral form of the adjoint transport equation has also been introduced and in a recent contribution the connection between adjoints and green s functions is highlighted .the concept was also generalized to source - driven systems and to time - dependent situations .a general approach to the theory of neutron importance was proposed and discussed by lewins , as `` the physical basis of variational and perturbation theory in transport and diffusion problems '' . as this subtitle of lewinss book clearly states , the theory of the adjoint function lays the foundation for the applications of perturbation and variational methods in the field of nuclear reactor physics . over the years , these methods have provided powerful and effective tools for the analysis of nuclear reactors .the literature on the interpretation and on the applications of the adjoint function is huge ( it is impossible to give an exhaustive list of references here ; see the bibliography in lewins s book that covers at least the earliest history ) .a significant thrust forward in perturbation analysis is due to gandini .his generalizations led to a huge extension of the possibilities of the perturbative approach in various fields of applied sciences .as an example , in nuclear reactor physics , the technique could be effectively used in the fields of nuclide evolution and fuel cycle and it can be applied also to non - linear problems .a theory of the adjoint function can be developed also for source - driven problems , once a problem - tailored definition of the adjoint function is introduced .the methods developed for nuclear reactor kinetics rely heavily on the neutron importance concept .the standard kinetic equations for the point reactor were consistently derived by projection of the neutron balance equations on the adjoint function . the various quasi - static schemes for spatial and spectral kinetics nowadays used for time - dependent full - core simulationsare based on this idea ( see , for instance , ) . at lastit is worth to cite the use of the adjoint quantities in modern sensitivity analysis and uncertainty quantification , which have a crucial role in today s nuclear science .some efforts were made in the past to sample weighted quantities directly in the monte carlo ( mc ) process .the significant work by rief on direct perturbation evaluations by monte carlo should be acknowledged .also some more recent work must be acknowledged .the sampling of weighted quantities has been introduced for the evaluation of integral reactor parameters . 
the capability to evaluate the adjoint function is included in the standard deterministic neutronic codes used for reactor analysis .monte carlo statistical methods are gaining a prominent role in a wide set of nuclear applications and the information on neutron importance is being used to guide the sampling procedure and speed up its convergence .the importance sampling techniques can be included within the frame of the so - called contributon theory .several works have been performed with the objective of accelerating the statistical convergence of monte carlo ( see , for instance , and the bibliography therein ) .however , the possibility of using monte carlo for the solution of the adjoint equation is very attractive per se .two approaches are possible : either a backward neutron propagation technique or a proper forward procedure .many authors have tackled the problem , with various attempts to maintain the same sampling approach as the one used in the direct monte carlo simulation , although no physical interpretation of such procedures is usually given .the procedure leads to the introduction of a concept of pseudo - particles , named adjunctons , which , through an appropriate transport process , are distributed as the adjoint function . in this framework ,starting from the integral form of the transport equation , the work by irving is certainly standing . in the present work a consistent approach to the sampling procedure for a monte carlo simulation for the adjoint functionis illustrated and a physical interpretation is discussed .the procedure draws its inspiration from the work carried out by de matteis .further developments were presented in a later work .the concept of pseudo - particles named adjunctons and of adjoint cross sections was used in these works , as well as in the works by eriksson and by irving in the following the sampling procedure for the solution of the adjoint equation is defined for a fixed source problem , although it could be easily extended to eigenvalue simulations .the interest is mainly focused on the energy and direction variables , since the problem in space can be handled by simple extension of the procedure for the direct equation .afterwards , the validation of the sampling procedure is considered .a reliable benchmark can be employed for this purpose : analytical benchmarks are particularly useful for a sound validation , since they are not affected by any discretization or truncation error and they have been widely proposed for various physical problems in transport theory . to obtain an analytical benchmark the classical problem of neutron slowing down in an infinite mediumis considered .the direct problem leads to the classical placzek functions . on the other hand , in this work, the adjoint equation is solved analytically using the same approach as for the direct equation , showing an interesting and useful duality property , and the results are compared to the ones obtained by monte carlo . as a last outcome of the work here presented ,a fully analytical closed - form for both the direct and adjoint placzek functions is obtained .the neutron transport equation reads : where , when necessary , also the fission process is introduced in the source term as : the static version of the transport equation is now considered . in the presence of fission and of an external source a physically meaningful solution exists , i.e. 
non - negative over the whole phase space considered , only if the fundamental multiplication eigenvalue , defined by : is strictly smaller than unity ( ) . for the purpose of the present work the ( possible ) space dependence is not relevant and , therefore , the static version of eq .( [ eqn : xxxxyyy ] ) for an infinite homogeneous system with homogeneous isotropic source is considered : clearly , the angular flux must be space independent . furthermore, if an isotropic medium is considered , the transfer kernel depends only on and , hence , the angular flux is independent of .this is physically easily understandable , since the flux must be isotropic in an isotropic homogeneous infinite medium , since no source of anisotropy is present .it can be also proved mathematically , observing that , in such a case , the collision integral in the r.h.s . ofthe above equation , by integration over all , obviously turns out to be independent .the equation takes the following form : one can now define partial collision kernels through the following expression : where represents - in analogy with the usual fission term - the _ mean number of neutrons emitted by a collision of type which has been triggered by a neutron of energy _ ) it is possible to include also collision processes other than fission or scattering . ] .the _ total collision kernel _ is then expressed as a probability - weighted sum of partial collision kernels : in these last two relations it is implicit that the energy - angle distributions are normalized _ with respect to the outgoing neutron energy _ , so that a probabilistic intepretation , useful for sampling in a monte carlo procedure , is natural : whenever a neutron with ( incoming ) energy suffers a collision , which is of the kind with probability , then a mean number of neutrons exits from collision with energy and angular distribution given by .these relations are the conceptual basis for the monte carlo sampling process in neutron transport .as anticipated , the space dependence is omitted in the present discussion . by straighforward mathematical reasoning, the equation adjoint to ( [ eqn : directeq ] ) takes the following form : a few comments on the physical meaning of this equation are worth - while .although one refers to as the `` adjoint flux '' , physically it is not a flux .it is known as `` neutron importance '' , it is not a density and as such it is a dimensionless quantity , quite differently from the neutron flux .this fact leads also to an interpretation of the integral terms in the above equation ( [ eqn : adj ] ) that is quite different from the interpretation of the corresponding terms in equation ( [ eqn : directeq ] ) .for instance , to physically derive the scattering integral term in the balance established by eq .( [ eqn : directeq ] ) , one takes the total track length within the elementary volume , i.e. , and multiplies by the transfer function , in order to obtain the number of neutrons emitted per unit energy and per unit solid angle at and .the integration collects the contributions from all possible incoming energies and directions . 
on the other hand , for the balance of importance in eq .( [ eqn : adj ] ) , one must collect the contributions to importance of all neutrons generated by the scattering of a neutron characterized by energy and direction .therefore is the fraction of scattered neutrons within and , consequently , their contributions to the balance of importance is obtained multiplying by the importance of neutrons at the outgoing energy and direction .the integration now collects the contributions from all possible outgoing energies and directions .the simplest way to obtain a basis for the mc simulation of the adjoint flux is to manipulate eq .( [ eqn : adj ] ) in such a way as to obtain a set of relations formally identical to ( [ eqn : forwone ] , [ eqn : forwtwo ] ) ; we remark that the main difficulty in developing a sampling scheme for ( [ eqn : adj ] ) stems from the fact that in this case is the energy of particles outgoing from the collision .it is clear that this difficulty can be ( formally ) overcome by defining in such a way that ( [ eqn : adjf ] ) appears identical to ( [ eqn : forwone ] ) , _ provided one assumes - or better defines _- , which implies that the total rate of collision for the pseudo - particles here implicitly introduced into the game is the same as for the corresponding physical particles : this is the only physical constraint we assume to set up a simulation framework for the adjoint equation . in this way we obtain for the adjoint equation: we underline that the superscript does not imply here transposition and complex conjugation , but it simply hints to the fact that dagged quantities refer to the parameters defining the transport properties of pseudo - particles : through this identification a purely formal transposition acquires a true physical meaning .however this is not sufficient , because we must also require that the adjoint kernel takes the form of a sum of partial collision kernels for pseudo - particles , namely : so that we can interpret all the dagged quantities in the same fashion as the original macroscopic cross sections for neutrons , in particular the fact that the probability for the -reaction to happen is given by .then one can write : by equating ( [ 111 ] ) and ( [ 222 ] ) , the following relation is established : which is trivially fulfilled if for all reactions : this seemingly obvious solution requires however a non trivial assumption , that pseudo - particles are subject to the same set of reactions as neutrons .this is not at all mandatory and it is simply a convenient choice for the purpose of simulation . 
along this line of thoughtwe can assume that not only the total cross section for pseudo - particles is the same as for neutrons , but that the same happens for all partial reactions , that is for all : however analogies between forward and adjoint simulation shall not go beyond this point , essentially because the true difference between the two cases is that in taking the adjoint we loose the kernel normalization ( with respect to outgoing energies and directions ) .in fact , if we assume - as it is natural - that the partial adjoint kernels are normalized with respect to the outgoing pseudo - particle energy by integrating ( [ eqn : soladj ] ) over and , we have : which implies that , in general , the mean number of pseudo - particles outgoing from a collision is not the same as for neutrons .it is remarkable that with these choices neutron importance can again be interpreted as a flux ( density ) of pseudo - particles , so that for instance traditional collision or track - length estimators can be used throughout the simulation process : this fact is apparently in contradiction with the discussion above about the physical intepretation of the neutron importance - a dimensionless quantity - with respect to a flux - a dimensional quantity. however it should be clear that when building a transport monte carlo model for the solution of the importance equation we implicitly modify the meaning ( not the numerical value ) we attribute to the adjoint source that in this scheme really corresponds to some pseudo - particle density ; in other words , we build an effective transport model for pseudo - particles whose solution - that is a flux - numerically coincides with the solution for the neutron importance , which instead is dimensionless . as a last remark , it is worth observing that the importance function for the pseudo - particles herewith introduced obeys the direct transport equation , thus establishing a full duality for the two equations , with specular physical meanings .as an example , for the sake of simplicity , consider -wave neutron scattering , for which where ^ 2 ] and that the maximum value allowed for in this case is , we realize that the maximum value is , correctly , .the initial condition requires and finally : \ , .\nonumber\end{aligned}\ ] ] the solution of this equation can be given in closed form as ( for a proof see appendix [ sec : proof ] ) : \nonumber\\ \nonumber & \vdots & \\ g_n(e)&=&\frac{1}{(1-\alpha)^n}\left(\frac{e}{e_0}\right)^\frac{1}{1-\alpha}\times\label{eqn : fullform}\\ & & \times\left [ ( 1-\alpha)^{n-1}\left(1-\alpha^\frac{1}{1-\alpha}\right)-(1-\alpha)^{n-2}\alpha^\frac{1}{1-\alpha}\ln\frac{\alpha e}{e_0 } + \right.\nonumber\\ & & \left.\vphantom{\alpha^\frac{1}{1-\alpha } } \hskip-2.5truecm+\sum_{m=2}^{n-1}(-1)^{m}\alpha^\frac{m}{1-\alpha}\left ( \frac{(1-\alpha)^{n - m}}{(m-1)!}\ln^{m-1}\frac{\alpha^m e } { e_0}+ \frac{(1-\alpha)^{n - m-1}}{m!}\ln^m\frac{\alpha^m e}{e_0 } \right ) \right],\nonumber\end{aligned}\ ] ] where the general expression holds for .from this expression it is immediate to conclude that in fact the last term in the sum for is for and a factor is always present because the minimum power for logarithms is : for this is .the remaining terms coincide with the expression for .the adjoint flux is given , in the corresponding energy intervals by the following formula : and explictly : \nonumber\\ \nonumber & \vdots & \\ 
\phi^\dagger_n(e)&=&\frac{1}{e_0\sigma_s(e_0)(1-\alpha)^n}\left(\frac{e}{e_0}\right)^\frac{\alpha}{1-\alpha}\times\label{eqn : fullformadjflux}\\ & & \times\left [ ( 1-\alpha)^{n-1}\left(1-\alpha^\frac{1}{1-\alpha}\right)-(1-\alpha)^{n-2}\alpha^\frac{1}{1-\alpha}\ln\frac{\alpha e}{e_0 } + \right.\nonumber\\ & & \left.\vphantom{\alpha^\frac{1}{1-\alpha } } \hskip-2.5truecm+\sum_{m=2}^{n-1}(-1)^{m-2}\alpha^\frac{m}{1-\alpha}\left ( \frac{(1-\alpha)^{n - m}}{(m-1)!}\ln^{m-1}\frac{\alpha^m e } { e_0}+ \frac{(1-\alpha)^{n - m-1}}{m!}\ln^m\frac{\alpha^m e}{e_0 } \right ) \right].\nonumber\end{aligned}\ ] ] graphs of the adjoint placzek functions in the lethargy variable are shown in fig .[ fig : adjplfig ] .a comment is here useful : the reason why it is possible to obtain a compact , closed form for the adjoint flux is related to the fact that _ we do not make use from the beginning of the lethargy variable _ as it is traditionally done in literature .it is quite evident from ( [ eqn : fullformadjflux ] ) that it is possible to transform the above expressions in terms of lethargy variable , may be used .] , but the original forms in terms of energy are much simpler . for instance : anyway , in terms of the lethargy variable the discontinuities of the adjoint flux or of its derivatives occur for as it is shown in the figures . to emphasize oscillations on a wider lethargy range . ]in particular it can be shown that is discontinuous at .moreover it turns out that ,\end{aligned}\ ] ] which can be written in the more compact expression : that is naturally a divergent quantity because it is the adjoint flux that has a finite limit for .this requires to divide by , so yielding the adjoint flux values at the discontinuity points as : this is a useful expression for approximate numerical evaluations , because it is stable for moderate values of .however , the finiteness of the asymptotic limit for is guaranteed only by the factor , being the sequence defined by ( [ eqn : divergent ] ) divergent .the implementation and test of the proposed sampling procedure for the adjoint flux appears straigthforward from the previous considerations and we show some mc simulations in fig . [fig : adjsimul ] , together with the corresponding analytic result obtained from eq .( [ eqn : fullformadjflux ] ) .statistical error bar is shown ; the grey area is the confidence region built around the analytical result using the estimated statistical standard deviation : as expected the mean number of points outside this confidence region does not exceed 1% .the number of histories used in the simulation is . ]we have further verified that the simulation of the adjoint flux along the proposed scheme preserves all the properties that are expected by a mc sampling : in particular - as it is well known - the mc procedure provides a statistical sample of the desired quantity ; this entails that the simulated adjoint flux is by itself a stochastic quantity with an associated probability distribution , with its well - defined mean and variance . for a sufficiently large number of histories , the statistical error associated can be estimated as : in fig . [fig : sample ] we show that this is indeed the case : we produced a set of 100 mc estimates of the adjoint flux for and : in each of the graphs every single adjoint flux estimate - obtained using the conventional collision estimator - is shown as a single black dot as a function of the energy bin interval . ) to collect results for adjoint flux estimator . 
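a minimal sketch of the adjoint (adjuncton) random walk used to produce estimates like those of fig. [fig:adjsimul] could read as follows (python; the energy grid, the treatment of the source collision, the termination rule and the overall normalization are our own illustrative choices and would have to be checked against eq. ([eqn:fullformadjflux])). for constant σ_s and s-wave scattering the weight factor per collision is ν† = ln(1/α)/(1-α), the outgoing energy is drawn from the normalized density 1/(e' ln(1/α)) on [e, e/α], and a conventional collision estimator accumulates the weight in the bin where the collision occurs; since the energy can only increase, a history can safely be terminated once it leaves the tallied interval.

```python
import math
import random

def adjoint_slowing_down(a_mass, e0, e_max, n_bins, n_hist, w_split=None):
    """adjuncton random walk for s-wave elastic scattering in an infinite
    homogeneous medium with constant sigma_s; returns the bin edges and an
    unnormalized collision-estimator tally (divide by sigma_s, the bin width
    and n_hist to obtain a flux estimate)."""
    alpha = ((a_mass - 1.0) / (a_mass + 1.0)) ** 2
    nu_dag = math.log(1.0 / alpha) / (1.0 - alpha)   # weight factor per collision
    edges = [e0 + i * (e_max - e0) / n_bins for i in range(n_bins + 1)]
    tally = [0.0] * n_bins

    def walk(e, w):
        while e <= e_max:
            i = min(int((e - e0) / (e_max - e0) * n_bins), n_bins - 1)
            tally[i] += w                        # score the collision
            # outgoing energy in [e, e/alpha] with pdf 1/(e' ln(1/alpha))
            e = e * (1.0 / alpha) ** random.random()
            w *= nu_dag
            if w_split is not None and w > w_split:
                walk(e, 0.5 * w)                 # split the history ...
                w *= 0.5                         # ... and keep half the weight here

    for _ in range(n_hist):
        walk(e0, 1.0)
    return edges, tally
```

the optional splitting above a weight cutoff corresponds to the variance-reduction correction discussed below in connection with fig. [fig:adjsimul1]; replacing the equally spaced bins with logarithmically spaced ones would implement the other correction mentioned there.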
]( on the left ) and ( on the right ) : larger ( red ) points represent the mean of the sampled flux estimates and the lower and upper limits of respective confidence interval . in this case .,title="fig : " ] ( on the left ) and ( on the right ) : larger ( red ) points represent the mean of the sampled flux estimates and the lower and upper limits of respective confidence interval . in this case .,title="fig : " ] however from figure [ fig : adjsimul ] we realize that the proposed simulation scheme displays an unwanted and unpleasant feature : the statistical uncertainty on the result is manifestly increasing with energy , an effect that is more pronounced for low values of the mass number .this is a consequence of two concurrent effects : on one side we built simulations in the interval , with equally spaced energy bins , so that is not constant and this implies an increase of the statistical error .on the other side , and more and more relevant for low mass numbers , the assumed simulation scheme implies that the weight of the ( adjoint ) particle increases at every scattering event , again causing higher statistical errors at higher energies .while the first effect can be easily corrected using energy bins such that is constant , the second one requires to implement some proper variance reduction scheme , for instance splitting histories when weight becomes higher than some pre - defined cutoff .we show in figure [ fig : adjsimul1 ] the results from simulation when both these corrections are implemented : we used weight cutoff 2 , 2 , 2 , 2 , 3 and 5 for a = 25 , 20 , 15 , 10 , 8 and 5 , respectively .it is clear by comparison of the figures [ fig : adjsimul ] and [ fig : adjsimul1 ] that smoother behavior of the variance along energy variable can be obtained applying the proposed procedure .the simulation time is enhanced by a factor of 1.07 , 1.12 , 1.29 , 1.7 , 1.65 and 2.4 , respectively . , but with constant and weight splitting enabled . ]in this paper some basic aspects in the theory of the adjoint neutron transport equation are presented .some important works on the physical aspects of the problem and on the relevant applications in the nuclear reactor physics field are reviewed . a sampling method enabling a monte carlo approach for the solution of the adjoint equationis then presented and its physical meaning is discussed in terms of the transport of virtual ( adjoint ) particles .the statistical procedure proposed can be applied in a straightforward manner using the same monte carlo tool suitable to solve direct transport problems . in the second part of the papera paradigmatic adjoint transport problem amenable to a fully analytical solution is considered .the solution of this problem provides a reference solution that can serve as a benchmark for the statistical procedure proposed .the problem refers to the solution of the adjoint equation for the infinite - medium slowing down process .the corresponding direct problems can be analytically solved by the use of the classic placzek functions . 
in the present case ,the theory of these functions is constructed and their analytical determination is carried out .this work leads to disclose novel and interesting properties of the placzek functions , which can be established through their relationship with the adjoint ones , and to yield a full closed analytical formulation for all of them ( see appendix a ) .the direct comparisons between the analytical results and those obtained by a monte carlo simulation allow to validate the suitability of the statistical approach proposed for the solution of the adjoint transport problem .the favorable comparison allows to conclude that the sampling procedure can be successfully applied for the determination of the adjoint flux in neutron transport for reactor physics applications .it is useful and educationally worth - while to see how the previous technique applies also to the calculation of the `` original '' placzek functions .it is known that they obey the following equation : they are discontinuous or they have discontinuous derivatives at , so they are appropriately defined as over intervals .the first two of these functions , in terms of the lethargy variable are given by ( cfr ._ ibidem _ eqn .( 8 - 50 ) and ( 8 - 55 ) ) : }{1-\alpha}\\ f_2(u ) & = & s_0\left(\frac{1-\alpha^\frac{1}{1-\alpha}}{1-\alpha}\right)\exp\left[\frac{\alpha}{1-\alpha}u\right ] -\\ & & -s_0 \frac{\alpha^\frac{\alpha}{1-\alpha}}{(1-\alpha)^2}\left(u-\ln\frac{1}{\alpha}\right ) \exp\left[\frac{\alpha}{1-\alpha}u\right],\end{aligned}\ ] ] which can be translated into the energy variable as : ] which entails ] .\end{aligned}\ ] ] it is remarkable that in these two intervals ( and ) the following property holds : let us suppose now that a function exists such that the previous relation hold over all the allowed energy range , that is : to prove the consistency of this hypothesis we can start from the equations for and for and show that they imply the same equation for for the argument of . ] : this is a necessary and sufficient condition because we have shown by inspection that ( [ eqn : firsttwo ] ) holds in the first two respective intervals of energy maps different energy intervals on a single one for the function . 
] .the equation to be satisfied by for is and , multiplying by , on the other hand the equation satisfied by is for : \ ] ] and then \\ & = & \frac{1}{s_0}f_c(e)\left[1-\frac{1}{1-\alpha}\right]+\frac{1}{s_0}\frac{1}{(1-\alpha)}f_c\left(\frac{e}{\alpha}\right)\\ & = & \frac{1}{s_0}\frac{1}{(1-\alpha)}f_c\left(\frac{e}{\alpha}\right)-\frac{1}{s_0}\frac{\alpha}{1-\alpha}f_c\left(e\right),\end{aligned}\ ] ] or now if we let , we have : or on the other hand in ( [ eqn : a6 ] ) we can substitute ( and again we are constrained to ) , obtaining which is manifestly the same equation as ( [ eqn : a7 ] ) :here we give the proof of eqn .( [ eqn : fullform ] ) : \nonumber\\ g_n(e)&=&\frac{1}{(1-\alpha)^n}\left(\frac{e}{e_0}\right)^\frac{1}{1-\alpha}\times\label{eqn : fullforma}\\ & & \times\left [ ( 1-\alpha)^{n-1}\left(1-\alpha^\frac{1}{1-\alpha}\right)-(1-\alpha)^{n-2}\alpha^\frac{1}{1-\alpha}\ln\frac{\alpha e}{e_0 } + \right.\nonumber\\ & & \left.\vphantom{\alpha^\frac{1}{1-\alpha } } \hskip-2.5truecm+\sum_{m=2}^{n-1}(-1)^{m}\alpha^\frac{m}{1-\alpha}\left ( \frac{(1-\alpha)^{n - m}}{(m-1)!}\ln^{m-1}\frac{\alpha^m e } { e_0}+ \frac{(1-\alpha)^{n - m-1}}{m!}\ln^m\frac{\alpha^m e}{e_0 } \right ) \right].\nonumber\end{aligned}\ ] ] the term can be found carrying out the following steps : \\ & = & \frac{\left(\frac{e}{e_0}\right)^\frac{1}{1-\alpha}}{(1-\alpha)^2}\left[\left(\frac{\alpha}{e_0}\right)^\frac{1}{1-\alpha } \int_{e_0}^{e_0/\alpha}y^{\frac{1}{1-\alpha}-1}dy-\alpha^\frac{1}{1-\alpha}\int_{e_0}^{\alpha e}\frac{dy}{y } \right]\\ & = & \frac{\left(\frac{e}{e_0}\right)^\frac{1}{1-\alpha}}{(1-\alpha)^2}\left[\left(\frac{\alpha}{e_0}\right)^\frac{1}{1-\alpha } ( 1-\alpha)\left.y^\frac{1}{1-\alpha}\right\vert_{e_0}^{e_0/\alpha } -\alpha^\frac{1}{1-\alpha}\ln\frac{\alpha e}{e_0 } \right]\\ & = & \frac{\left(\frac{e}{e_0}\right)^\frac{1}{1-\alpha}}{(1-\alpha)^2}\left [ ( 1-\alpha)\left(1-\alpha^\frac{1}{1-\alpha}\right ) -\alpha^\frac{1}{1-\alpha}\ln\frac{\alpha e}{e_0 } \right].\end{aligned}\ ] ] then we recognize that ( [ eqn : fullforma ] ) implies for an alternative recurrence relation for the functions , namely : from which it is also immediate to conclude that form ( [ eqn : altforma ] ) , which is easily seen as perfectly equivalent to ( [ eqn : fullforma ] ) in the remaining terms , so reproducing the same form ( [ eqn : fullforma ] ) , with replaced by .] , is simpler to use to obtain a proof by mathematical induction .suppose in fact that the equation defining : holds for some value of .next insert ( [ eqn : altforma ] ) for ; we have we must verify that this expression is equal to the following one : that is to say that itself satisfies the same equation as the s ; however , by definition the following equality holds : therefore , to prove our thesis we must simply show that or which , being is trivially verified.qed a. gandini , `` generalized perturbation theory ( gpt ) methods . a heuristic approach , '' in _ advances in nuclear science and technology _( j. lewins and m. becker , eds . ) , ch .19 , new york : plenum publishing corporation , 1987 .s. dulla , f. cadinu , and p. ravetto , `` neutron importance in source - driven systems , '' in _ international topical meeting on mathematics and computation , supercomputing , reactor physics and biological applications _ , ( avignon ) , 2005 .m. aufiero , a. bidaud , m. hursin , j. leppnen , g. palmiotti , s. pelloni , p. 
rubiolo , a collision history - based approach to sensitivity / perturbation calculations in the continuous energy monte carlo code serpent , annals of nuclear energy , 85 , 245 - 258 , 2015 computing adjoint - weighted kinetics parameters in tripoli-4/e by the iterated fission probability method , annals of nuclear energy , 85 , 17 - 26 , 2015 g. truchet , p. leconte , a. santamarina , e. brun , f. damian , a. zoia j. densmore and e. w. larsen , `` variational variance reduction for particle transport eigenvalue calculations using monte carlo adjoint simulation , '' _ journal of computational physics _ , vol .192 , pp . 387405 , 2003 .s. a. h. feghhi , m. shahriari , and h. afarideh , `` calculation of neutron importance function in fissionable assemblies using monte carlo method , '' _ annals of nuclear energy _ , vol .34 , pp .514520 , 2007 . c. m. diop , o. petit , c. jouanne , and m. coste - delclaux , `` adjoint monte carlo neutron transport using cross section probability table representation , '' _ annals of nuclear energy _ , vol .37 , pp .11861196 , 2010 .
the adjoint equation was introduced in the early days of neutron transport and its solution, the neutron importance, has been used for several applications in neutronics. the work presents at first a critical review of the adjoint neutron transport equation. afterwards, the adjoint model is constructed for a reference physical situation for which an analytical approach is viable, i.e. an infinite homogeneous scattering medium. this problem leads to an equation that is the adjoint of the slowing-down equation that is well known in nuclear reactor physics. a general closed-form analytical solution to such an adjoint equation is obtained by a procedure that can also be used to derive the classical placzek functions. this solution constitutes a benchmark for any statistical or numerical approach to the adjoint equation. a sampling technique to evaluate the adjoint flux for the transport equation is then proposed and physically interpreted as a transport model for pseudo-particles. this can be done by introducing appropriate kernels describing the transfer of the pseudo-particles in phase space. this technique allows estimating the importance function by a standard monte carlo approach. the sampling scheme is validated by comparison with the analytical results previously obtained.
the study of quantum measurement has come a long way since the proposal of wavefunction collapse by heisenberg and von neumann , the philosophical debates by bohr and einstein , and the cat experiment hypothesized by schrdinger . with more and more experimental demonstrations of bizarre quantum effectsbeing realized in laboratories , many researchers have shifted their focus to the practical implications of quantum mechanics for precision measurements , such as gravitational - wave detection , optical interferometry , atomic clocks , and magnetometry .braginsky , thorne , caves , and others pioneered the application of quantum measurement theory to gravitational - wave detectors , while holevo , yuen , helstrom , and others have developed a beautiful theory of quantum detection and estimation based on the more abstract notions of quantum states , effects , and operations .although holevo _ et al . _s approach was able to produce rigorous proofs of quantum limits to various information processing tasks , so far it has been applied mainly to simple quantum systems with trivial dynamics measured destructively to extract static parameters . applying such an approach to gravitational - wave detection , or optomechanical force detection in general , proved to be far trickier ; the signal of interest there is time - varying ( commonly called a waveform in engineering literature ) , the detector is a dynamical system , and the measurements are nondestructive and continuous .quantum limits to such detectors had been a subject of debate , with no definitive proof that any limit exists . in more recent years, the rapid progress in experimental quantum technology suggests that quantum effects are becoming relevant to metrological applications and has given the study of quantum limits a renewed impetus . generalizing the quantum cramr - rao bound first proposed by helstrom , tsang , wiseman , and caves recently derived a quantum limit to waveform estimation , which represents the first step towards a rigorous treatment of quantum limits to a waveform sensor .that work assumes that one is interested in estimating an existing waveform accurately , so that the mean - square error is an appropriate error measure .the first goal of gravitational - wave detectors is not estimation , however , but to detect the existence of gravitational waves , in which case the miss and false - alarm probabilities are the more relevant error measures and the existence of quantum limits remains an open problem .here we settle this long - standing question by proving lower error bounds for the quantum waveform detection problem .to illustrate our results , we apply them to optomechanical force detection , demonstrating a fundamental trade - off between force detection performance and precision in detector position , and discuss how the limits can be approached in some cases of interest using a quantum - noise cancellation ( qnc ) technique and an appropriate optical receiver , such as the ones proposed by kennedy and dolinar . merging the continuous quantum measurement theory pioneered by braginsky _et al . 
_ and the quantum detection theory pioneered by holevo _, these results are envisaged to play an influential role in quantum metrological techniques of the future .let ] is its prior probability functional , and ] and ] at the the final time via the principle of deferred measurement : & = { \operatorname{tr}}{\left\{e[y]u_0(t_f , t_i){|\psi\rangle}{\langle\psi|}u_0^\dagger(t_f , t_i)\right\ } } , \\p[y|x,\mathcal h_1 ] & = { \operatorname{tr}}{\left\{e[y]u_1(t_f , t_i){|\psi\rangle}{\langle\psi|}u_1^\dagger(t_f , t_i)\right\}},\end{aligned}\ ] ] where only the unitaries and are assumed to differ and depends on .assume further that } , \\u_1(t_f , t_i ) & = \mathcal t \exp{\left[-\frac{i}{\hbar } \int_{t_i}^{t_f } dt h_1(x(t),t)\right ] } , \\h_1(x(t),t ) & = h_0(t ) + \delta h(x(t),t),\end{aligned}\ ] ] where denotes time - ordering and is the hamiltonian term responsible for the coupling of the waveform to the quantum detector .figure [ naimark ] shows the quantum - circuit diagrams that depict the problem . with unitary evolution ( or ) under each hypothesis ( or ) in a large enough hilbert space for a given classical waveform , which perturbs the evolution under .if is stochastic , the final quantum state under is mixed .measurements are modeled as a positive - operator - valued measure ( povm ) ] , where denotes the identity operator with respect to .the average error probability is thus lower - bounded by : which is valid for any purification .hence where is the quantum fidelity by uhlmann s theorem : as is pure , the fidelity is given by where we have defined classical and quantum averages by (\cdot ) , \\{ \langle\cdot\rangle } & \equiv { \langle\psi|}\cdot{|\psi\rangle}.\end{aligned}\ ] ] by similar arguments , a quantum bound on the miss probability for a given allowable false - alarm probability can be derived from the bound for the pure - state case : ^ 2 , & p_{10 } \le f ; \\ 0 , & p_{10 } \ge f. \end{array } \label{p01}\end{aligned}\ ] ] note that the latter bound is equally valid if we interchange and ; for example , fixing means .equations ( [ pe ] ) and ( [ p01 ] ) are valid for any povm and achievable if is known _ a priori _ , such that both and are pure . in terms of related prior work at this point ,ou and paris studied quantum limits to interferometry in the context of detection , while childs _ et al . _ , acn _ et al . _ , and dariano _ _ et al.__ also studied unitary or channel discrimination , but all of them did not consider time - dependent hamiltonians , which are the subject of interest here . a key step towards simplifying eq .( [ fx ] ) is to recognize that },\end{aligned}\ ] ] where is in the _ interaction picture _ . in general , eq .( [ fx ] ) can then be expanded in a dyson series and evaluated using perturbation theory . to derive analytic expressions , however, we shall be more specific about the hamiltonians and the initial quantum state .assume that is a force on a quantum object with position operator , so that and the conditional fidelity becomes }\right\rangle}\right|}^2 , \label{fx2}\end{aligned}\ ] ] with obeying equations of motion under the null hypothesis in the interaction picture .the expression in eq .( [ fx2 ] ) is a noncommutative version of the characteristic functional . 
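once the fidelity has been evaluated, the two bounds can be computed with a few lines of code (a sketch; the function names are ours, and the closed forms are the standard pure-state expressions to which eqs. ([pe]) and ([p01]) reduce):

```python
import math

def average_error_lower_bound(fidelity, p0=0.5, p1=0.5):
    """lower bound on the average error probability for discriminating two states
    with fidelity f and prior probabilities p0, p1; for equal priors it reduces
    to (1 - sqrt(1 - f)) / 2."""
    return 0.5 * (1.0 - math.sqrt(1.0 - 4.0 * p0 * p1 * fidelity))

def miss_prob_lower_bound(fidelity, p10):
    """lower bound on the miss probability p01 for an allowed false-alarm
    probability p10 (the pure-state neyman-pearson trade-off)."""
    if p10 >= fidelity:
        return 0.0
    return (math.sqrt(fidelity * (1.0 - p10))
            - math.sqrt(p10 * (1.0 - fidelity))) ** 2
```

at p10 = 0 the second bound equals the fidelity itself, which is the value quoted later for the kennedy receiver.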
to simplify it ,assume further that consists of terms at most quadratic with respect to canonical position or momentum operators , such that the equations of motion are linear and depends linearly on the initial - time canonical operators .let be a column vector of canonical position / momentum operators , including , that obey the equation of motion under hypothesis , where is a drift matrix and is a source vector , both consisting of real numbers . can then be written as where is a row vector and a function of .this gives with now given by }\right\rangle}\right|}^2,\end{aligned}\ ] ] the time - ordering operator becomes redundant : }{|\psi\rangle}\right|}^2.\end{aligned}\ ] ] this expression can be simplified using the wigner representation of , which has the following property : }{|\psi\rangle } & = \int dz w(z , t_i ) \exp(i\kappa^\top z),\end{aligned}\ ] ] where is a column vector of phase - space variables . assuming further that is gaussian with mean vector and covariance matrix , we obtain an analytic expression for : } , \label{fx_gauss } \\\sigma_q(t , t ' ) & \equiv v_q(t , t_i ) \sigma v_q^\top(t',t_i).\end{aligned}\ ] ] the covariance matrix is given by the weyl - ordered second moment : hence it is interesting to note that the expression given by in eq .( [ fx_gauss ] ) coincides with the one proposed in refs . as an upper quantum limit on the force - sensing signal - to - noise ratio , and is equal to the quantum fisher information in the quantum cramr - rao bound for waveform estimation .the relation of this expression to the fidelity and the detection error bounds is a novel result here , however . if the statistics of can be approximated as stationary ; viz ., ,\end{aligned}\ ] ] becomes } , \label{fx3 } \\x(\omega ) & \equiv \int_{t_i}^{t_f } dt x(t ) \exp(i\omega t).\end{aligned}\ ] ] for example , if is a sinusoid , } , & t&\equiv t_f - t_i.\end{aligned}\ ] ] these expressions for the fidelity suggest that , for a given , there is a fundamental trade - off between force detection performance and precision in detector position . , and the output field measured to infer whether a force has perturbed the motion of the mirror.,scaledwidth=45.0% ]suppose now that the mechanical object is a moving mirror of an optical cavity probed by a continuous - wave optical beam , the phase of which is modulated by the object position and the intensity of which exerts a measurement backaction via radiation pressure on the object , as depicted in fig .[ optomech ] .this setup provides a basic and often sufficient model for more complex optomechanical force detectors .let the output field operator under hypothesis be where denotes convolution , is an impulse - response function with in the frequency domain , is the input mean field , is the optical carrier frequency , is the cavity length , and is the optical cavity decay rate . is the position operator under each hypothesis , which can be written as ,\end{aligned}\ ] ] where is another impulse response function that transfers a force to the position , is the backaction noise , and the transient solutions are assumed to have decayed to zero . defining such that the position power spectral density is we obtain }. 
\label{fx4}\end{aligned}\ ] ] the backaction noise that appears in the output field , in addition to the shot noise in , can limit the detection performance at the so - called standard quantum limit .this does not seem to agree with the fundamental quantum limits in terms of eq .( [ fx4 ] ) , which suggest that increased fluctuations in due to can improve the detection .fortunately , it is now known that the backaction noise can be removed from the output field .one method , called quantum - noise cancellation ( qnc ) , involves passing the optical beam through another quantum system that has the effective dynamics of an optomechanical system with negative mass . withthe backaction noise removed , the output fields become if the phase quadrature of is measured by homodyne detection , the outputs can be written as .\end{aligned}\ ] ] the power spectral densities of and satisfy an uncertainty relation : the detection problem described by eqs .( [ y0 ] ) and ( [ y1 ] ) becomes a classical one with additive gaussian noise , a scenario that has been studied extensively in gravitational - wave detection .suppose that is known _ a priori_. it is then well known that the error probabilities for the detection problem described by eqs .( [ y0 ] ) and ( [ y1 ] ) using a likelihood - ratio test are where is the threshold in the likelihood - ratio test , which can be adjusted according to the desired criterion , and is a signal - to - noise ratio given by for a long observation time relative to the duration of plus the decay time of .to compare homodyne detection with the quantum limits , suppose that the duration of is long and increases at least linearly with , so that we can define an error exponent as the asymptotic decay rate of an error probability in the long - time limit . for simplicity, we consider here only the exponent of the higher error probability : although this asymptotic limit may not be relevant to gravitational - wave detectors in the near future , the error probabilities for which are anticipated to remain high , we focus on this limit to obtain simple analytic results , which allow us to gain useful insight into the fundamental physics. more precise calculations of error probabilities are more tedious but should be straightforward following the theory outlined here . for homodyne detection , the error exponent is the quantum limit , on the other hand , is which gives using the uncertainty relation between and in eq .( [ uncertain ] ) , it can be seen that that is , the homodyne error exponent is at most half the optimal value .this fact is well known in the context of coherent - state discrimination .the suboptimality of homodyne detection here should be contrasted with the conclusion of ref . , which states that homodyne detection together with qnc are sufficient to achieve the quantum limit for the task of waveform estimation . 
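the classical detection step described by eqs. ([y0]) and ([y1]) can be illustrated with a discretized sketch (python; white gaussian noise per sample is assumed purely to avoid the continuous-time spectral-density conventions, and the threshold parametrization is one common choice, not necessarily the one of the text):

```python
import numpy as np
from math import erfc, sqrt

def q_func(z):
    """gaussian tail probability q(z) = prob(n > z) for n ~ normal(0, 1)."""
    return 0.5 * erfc(z / sqrt(2.0))

def matched_filter_test(y, s, sigma, eta):
    """likelihood-ratio (matched-filter) test for a known sampled signal s in
    additive white gaussian noise of per-sample standard deviation sigma, with
    threshold eta applied to the correlator output; returns the decision, the
    deflection d and the theoretical false-alarm and miss probabilities."""
    y, s = np.asarray(y, float), np.asarray(s, float)
    stat = float(np.dot(y, s))              # matched-filter (correlator) output
    energy = float(np.dot(s, s))
    d = sqrt(energy) / sigma                # signal-to-noise ratio
    scale = sigma * sqrt(energy)            # std of stat under either hypothesis
    p_false_alarm = q_func(eta / scale)
    p_miss = q_func(d - eta / scale)
    return int(stat > eta), d, p_false_alarm, p_miss
```

with the symmetric choice eta = energy / 2 both error probabilities equal q(d/2), whose decay exponent scales as d²/8; for coherent states this is half of the optimal value -ln f, in line with the factor-of-two suboptimality of homodyne detection noted above.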
to see how one can get closer to the quantum limits, let us go back to eqs. ([aout0]) and ([aout1]). observe that, if the input field is in a coherent state, the output field is also in a coherent state (in the schrödinger picture) under each hypothesis. this means that existing results for coherent-state discrimination can be used to construct an optimal receiver. the kennedy receiver, for example, displaces the output field so that it becomes vacuum under and then detects the presence of any output photon. any detected photon means that must be true. deciding on if no photon is detected and otherwise, the false-alarm probability is zero, while the miss probability is the probability of detecting no photon given , or }.\end{aligned}\ ] ] for a long observation time with for a coherent state , } = f,\end{aligned}\ ] ] which makes the kennedy receiver optimal under the neyman-pearson criterion in the case of according to eq. ([p01]) and also lets it achieve the optimal error exponent : the kennedy receiver can be integrated with the qnc setup; an example is shown in fig. [qnc_kennedy]. the dolinar receiver, which updates the displacement field continuously according to the measurement record, can further improve the average error probability slightly to saturate the lower limit given by eq. ([pe]). other more recently proposed receivers may also be used here to beat the homodyne limit. from the optomechanical force detector in fig. [optomech] is displaced by and then passed through an optical setup that removes the measurement backaction noise. the dash arrows represent a red-detuned optical cavity mode that mimics a negative-mass oscillator and interacts with the optical probe field via a beam splitter (bs) and a two-mode optical parametric amplifier (opa). details of how this setup works can be found in refs. if the field is in a coherent state, the final output field should be in a vacuum state under the null hypothesis. any photon detected at the output indicates that must be true. consider now a stochastic , which should be relevant to the detection of stochastic backgrounds of gravitational waves. since is gaussian , \exp{\left[-\frac{1}{\hbar^2}\int dt dt ' x(t)\sigma_q(t , t')x(t')\right]}\end{aligned}\ ] ] can be computed analytically if the prior
ever since the inception of gravitational - wave detectors , limits imposed by quantum mechanics to the detection of time - varying signals have been a subject of intense research and debate . drawing insights from quantum information theory , quantum detection theory , and quantum measurement theory , here we prove lower error bounds for waveform detection via a quantum system , settling the long - standing problem . in the case of optomechanical force detection , we derive analytic expressions for the bounds in some cases of interest and discuss how the limits can be approached using quantum control techniques .
the network representation applies to large communication infrastructure ( internet , e - mail networks , the world - wide - web ) , transportation networks ( railroads , airline routes ) , biological systems ( gene and/or protein interaction networks ) and to a variety of social interaction structures . very interestingly, many real networks share a certain number of topological properties .for example , most networks are small - worlds : the average topological distance between nodes increases very slowly ( logarithmically or even slower ) with the number of nodes .additionally , `` hubs '' [ nodes with very large degree compared to the mean of the degree distribution are often encountered .more precisely , the degree distributions exhibit in many cases heavy - tails often well approximated for a significant range of values of degree by a power - law behavior ( ) from which the name scale - free networks originated .real networks are however not only specified by their topology , but also by the dynamical properties of processes taking place on them , such as the flow of information or the traffic among the constituent units of the system . in order to account for these features ,the edges are endowed with weights : for example , the air - transportation system can be represented by a weighted network , in which the vertices are commercial airports and the edges are non - stop passenger flights . in this context ,a natural definition of link weights arises , as the capacity ( in terms of number of passengers ) of the corresponding flight .data about real weighted networks ( communication and infrastructure networks , scientific collaboration networks , metabolic networks , etc . ) have been recently studied , giving particular attention to the relation between weight properties and topological quantities .these findings have also generated several studies concerning modeling approaches in which the mutual influence of weights and topology plays an explicit role in determining network s properties .one of the most striking effects of the complex topological features of networks concerns their vulnerability to attacks and random failures . compared to `` regular '' -dimensional lattices and random graphs with a bounded degree distribution, heavy - tailed networks can tolerate very high levels of random failure . on the other hand , malicious attacks on the hubscan swiftly break the entire network into small components , providing a clear identification of the elements which need the highest level of protection against such attacks . in this contextit is therefore important to study how the introduction of traffic and geographical properties may alter or confirm the above findings .in particular we are interested in two main questions : ( i ) which measures are best suited to assess the damage suffered by weighted networks and to characterize the most effective attack ( protection ) strategies ; ( ii ) how traffic and spatial constraints influence the system s robustness . in this article , our attention is therefore focused on weighted networks with geographical embedding and we analyze the structural vulnerability with respect to various centrality - driven attack strategies . 
in particular ,we propose a series of topological and weight - depending centrality measures that can be used to identify the most important vertices of a weighted network .the traffic integrity of the whole network depends on the protection of these central nodes and we apply these considerations to a typical case study , namely the world - wide airport network . we find that weighted networks are even more vulnerable than expected in that the traffic integrity is destroyed when the topological integrity of the network is still extremely high .in addition all attacks strategies , both local and non - local perform with almost the same efficacy .the present findings may help in providing a quantitative assessment of the most vulnerable elements of the network and the development of adaptive reactions aimed at contrasting targeted attacks .in the following we use the world - wide air - transportation network ( wan ) , built from the international air transportation association database ( www.iata.org ) . this database contains the direct flight schedules and available seats data from the vast majority of the world s airlines for the year 2002 .the network obtained from the iata database contains interconnected airports ( vertices ) and direct flight connections ( edges ) .this corresponds to an average degree of , while the maximal one is showing a strong heterogeneity of the degrees .this is confirmed by the fact that the degree distribution can be described by the functional form , where and is an exponential cut - off which finds its origin in physical constraints on the maximum number of connections that can be handled by a single airport .the wan is a small - world : the average shortest path length , measured as the average number of edges separating any two nodes in the network , is .the data contained in the iata database allow to go beyond the simple topological representation of the airports connections by obtaining a weighted graph that includes the traffic and actual length of each link , specyfying respectively the number of available seats in flights between cities and during the year 2002 and the euclidean distance specifying the route length between cities and .the weights are symmetric ( ) for the vast majority of edges so that we work with a symmetric undirected graph .in addition to the very large degree fluctuations , both the weights and the strength are broadly distributed adding another level of complexity in this network .a key issue in the characterization of networks is the identification of the most central nodes in the system .centrality is however a concept that can be quantified by various measures .the degree is a first intuitive and local quantity that gives an idea of the importance of a node .its natural generalization to a weighted graph is given by the strength of vertices defined for a node as where the sum runs over the set of neighbors of .in the case of the air transportation network it quantifies the traffic of passengers handled by any given airport , with both a broad distribution and strong correlations with the degree , of the form with ( a random attribution of weights would lead to and thus ) . 
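for concreteness, the local measures and the exponent β can be extracted directly from a weighted graph object (a sketch based on the networkx and numpy libraries; the edge attribute name 'seats' and the crude log-log fit are our own choices):

```python
import numpy as np

def degree_and_strength(g, weight="seats"):
    """degree k_i and strength s_i = sum_j w_ij for every node of an undirected
    weighted graph g (e.g. a networkx.Graph built from the flight data)."""
    k = dict(g.degree())
    s = dict(g.degree(weight=weight))
    return k, s

def strength_degree_exponent(k, s):
    """least-squares estimate of beta in s(k) ~ k^beta from a log-log fit;
    a crude estimate, proper logarithmic binning would be preferable."""
    nodes = [n for n in k if k[n] > 0 and s[n] > 0]
    x = np.log([k[n] for n in nodes])
    y = np.log([s[n] for n in nodes])
    beta, _ = np.polyfit(x, y, 1)
    return beta
```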
since space is also an important parameter in this network , other interesting quantities are the _ distance strength _ and _ outreach _ of where is the _euclidean _ distance between and .these quantities describe the cumulated distances of all the connections from the considered airport and the total distance traveled by passengers from this airport , respectively .they display both broad distributions and grow with the degree as with , and , with , showing the existence of important correlations between distances , topology and traffic .such local measures however do not take into account non - local effects , such as the existence of crucial nodes which may have small degree or strength but act as bridges between different part of the network . in this context , a widely used quantity to investigate node centrality is the so - called betweenness centrality ( bc ) , which counts the fraction of shortest paths between pairs of nodes that passes through a given node .more precisely , if is the total number of shortest paths from to and is the number of these shortest paths that pass through the vertex , the betweenness of the vertex is defined as , where the sum is over all the pairs with .key nodes are thus part of more shortest paths within the network than less important nodes . in weighted networks ,unequal link capacities make some specific paths more favorable than others in connecting two nodes of the network .it thus seems natural to generalize the notion of betweenness centrality through a _ weighted betweenness centrality _ in which shortest paths are replaced with their weighted versions . a straightforward way to generalize the hop distance ( number of traversed edges ) in a weighted graphconsists in assigning to each edge a length that is a function of the characteristics of the link .for example for the wan , should involve quantities such as the weight or the euclidean distance between airports and .it is quite natural to assume that the effective distance between two linked nodes is a decreasing function of the weight of the link : the larger the flow ( traffic ) on a path , the more frequent and the fastest will be the exchange of physical quantities ( e.g. information , people , goods , energy , etc . ) .in other words , we consider that the `` separation '' between nodes and decreases as increases . while a first possibility would be to define the length of an edge as the inverse of the weight , , we propose to also take into account the geographical embedding of the network , through the following definition : it is indeed reasonable to consider two nodes of the networks as further apart if their geographical distance is larger , however a large traffic allows to decrease the `` effective '' distance by providing more frequent travel possibilities . for any two nodes and , the weighted shortest path between and is the one for which the total sum of the lengths of the edges forming the path from to is minimum , independently from the number of traversed edges .we denote by the total number of weighted shortest paths from to and the number of them that pass through the vertex ; the weighted betweenness centrality ( wbc ) of the vertex is then defined as where the sum is over all the pairs with where is the number of edges . 
] .the weighted betweenness represents a trade - off between the finding of `` bridges '' that connect different parts of a network , and taking into account the fact that some links carry more traffic than others .we note that the definition ( [ wbc ] ) is very general and can be used with any definition of the effective length of an edge .+ the probability distributions of the various definitions of centrality are all characterized by heavy tailed distributions .in addition a significant level of correlation is observed : vertices that have a large degree have also typically large strength and betweenness .when a detailed analysis of the different rankings is done , however we observe that they do not coincide exactly .for example , in the case of the wan the most connected airports do not necessarily have the largest betweenness centrality .large fluctuations between centrality measures also appear when inspecting the list of the airports ranked by using different definitions of centrality including weighted ones : strikingly , each definition provides a different ranking .in addition , some airports which are very central according to a given definition , become peripheral according to another criteria .for example , anchorage has a large betweenness centrality but ranks only and in terms of degree and strength , respectively .similarly , phoenix or detroit have large strength but low ranks ( ) in terms of degree and betweenness .while previous analysis have focused on the quantitative correlations between the various centrality measures here we focus on ranking differences according to the various centrality measures .a quantitative analysis of the correlations between two rankings of objects can be done using rank correlations such as kendall s where is the number of pairs whose order does not change in the two different lists and is the number of pairs whose order was inverted .this quantity is normalized between and : corresponds to identical ranking while is the average for two uncorrelated rankings and is a perfect anticorrelation ..similarity between the various rankings as measured by kendall s . for random rankings of values ,the typical is of order . [cols="<,^,^,^,^,^,^,^",options="header " , ] table [ tab2 ] gives the values of for all the possible pairs of centrality rankings .for , two random rankings yield a typical value of so that even the smallest observed is the sign of a strong correlation ( all the values in this table were already attained for a sublist of only the first most central nodes , with ) .remarkably enough , even a highly non - local quantity such as the bc is strongly correlated with the simplest local , non weighted measure given by the degree .the weighted betweenness is the least correlated with the other measures ( except with the betweenness ) , because involves ratios of weights and distances .another important issue concerns how the centrality ranking relates to the geographical information available for infrastructure networks such as the wan .figure [ fig : geo ] displays the geographical distribution of the world s fifteen most central airports ranked according to different centrality measures .this figure highlights the properties and biases of the various measures : on one hand , topological measures miss the economical dimension of the world - wide airport while weighted measures reflect traffic and economical realities . betweenness based measures on the other hand pinpoint the most important nodes in each geographical zone. 
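both the weighted betweenness and the rank comparison of table [tab2] are straightforward to reproduce (a sketch using networkx and scipy; the edge attribute names 'km' and 'seats' are assumptions, and scipy's kendalltau is the tie-corrected variant of the τ defined above):

```python
import networkx as nx
from scipy.stats import kendalltau

def weighted_betweenness(g, weight="seats", dist="km"):
    """betweenness computed on weighted shortest paths, with the effective edge
    length l_ij = d_ij / w_ij used as the dijkstra distance."""
    for u, v, data in g.edges(data=True):
        data["eff_len"] = data[dist] / data[weight]
    return nx.betweenness_centrality(g, weight="eff_len")

def rank_similarity(c1, c2):
    """kendall's tau between two centrality dictionaries defined on the same nodes."""
    nodes = sorted(c1)
    tau, _ = kendalltau([c1[n] for n in nodes], [c2[n] for n in nodes])
    return tau
```

for instance, rank_similarity(dict(g.degree()), weighted_betweenness(g)) gives the analogue of one entry of table [tab2].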
in particular , the weighted betweenness appears as a balanced measure which combines traffic importance with topological centrality , leading to a more uniform geographical distribution of the most important nodes .the example of the wan enables us to raise several questions concerning the vulnerability of weighted networks .the analysis of complex networks robustness has indeed been largely investigated in the case of unweighted networks .in particular , the topological integrity of the network has been studied , where is the size of the largest component after a fraction of vertices has been removed and is the size of the original ( connected ) network . when , the entire network has been destroyed .damage is generally studied for increasingly larger fractions of removed nodes in the network , where the latter are chosen following different strategies .heterogeneous networks with a scale - free degree distribution are robust to situations in which the damage affects nodes randomly . on the other hand ,the targeted destruction of nodes following their degree rank is extremely effective , leading to the total fragmentation of the network at very low values of .moreover , the removal of the nodes with largest betweenness typically leads to an even faster destruction of the network . in the case of weighted networks ,the quantification of the damage should consider also the presence of weights . in this perspective , the largest traffic or strength still carried by a connected component of the network is likely an important indicator of the network s functionality .for this reason , we define new measures for the network s damage where , and are the total strength , outreach and distance strength in the undamaged network and , and correspond to the largest strength , outreach or distance strength carried by any connected component in the network , after the removal of a density of nodes .these quantities measure the _ integrity _ of the network with respect to either strength , outreach or distance strength , since they refer to the relative traffic or flow that is still handled in the largest operating component of the network .+ in order to evaluate the vulnerability of the air - transportation network wan , we study the behavior of damage measures in the presence of a progressive random damage and of different attack strategies . similarly to the simple topological case , weighted networks are inherently resilient to random damages . even at a large density of removed nodes , and all integrity measuresdecrease mildly and do not seem to have a sharp threshold above which the network is virtually destroyed .this is in agreement with the theoretical prediction for the absence of a percolation threshold in highly heterogeneous graphs . very differentis the scenario corresponding to the removal of the most central nodes in the network . 
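the integrity measures themselves require only the connected components of the damaged graph (a sketch; only the strength integrity is written out, the outreach and distance-strength versions are obtained by replacing the weight attribute):

```python
import networkx as nx

def integrity(g, removed, weight="seats"):
    """topological integrity (largest component size over n) and strength
    integrity (largest strength carried by any component over the total
    strength of the intact network) after deleting the nodes in 'removed'."""
    n0 = g.number_of_nodes()
    s0 = sum(dict(g.degree(weight=weight)).values())
    h = g.copy()
    h.remove_nodes_from(removed)
    if h.number_of_nodes() == 0:
        return 0.0, 0.0
    sizes, strengths = [], []
    for comp in nx.connected_components(h):
        sub = h.subgraph(comp)
        sizes.append(sub.number_of_nodes())
        strengths.append(sum(dict(sub.degree(weight=weight)).values()))
    return max(sizes) / n0, max(strengths) / s0
```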
in this case , however , we can follow various strategies based on the different definitions for the centrality ranking of the most crucial nodes : nodes can indeed be eliminated according to their rank in terms of degree , strength , outreach , distance strength , topological betweenness , and weighted betweenness .in addition , we consider attack strategies based on a recursive re - calculation of the centrality measures on the network after each damage .this has been shown to be the most effective strategy , as each node removal leads to a change in the centrality properties of the other nodes .such procedure is somehow akin to a cascading failure mechanism in which each failure triggers a redistribution on the network and changes the next most vulnerable node . in fig .[ fig : damage ] we report the behavior of and of the outreach integrity for all cases . as expected , all strategies lead to a rapid breakdown of the network with a very small fraction of removed nodes .more precisely , the robustness level of the network depends on the quantity under scrutiny .first , the size of the giant component decreases faster upon removal of nodes which are identified as central according to global ( i.e. betweenness ) properties , instead of local ones ( i.e. degree , strength ) , showing that , in order to preserve the structural integrity of a network , it is necessary to protect not only the hubs but also strategic points such as bridges and bottle - neck structures .indeed , the betweenness , which is recomputed after each node removal is the most effective quantity in order to pin - point such nodes .the weighted betweenness combines shortest paths and weights and leads to an intermediate result : some of the important topological bridges carry a small amount of traffic and are therefore part of more shortest paths than weighted shortest paths . these bridges have therefore a lower rank according to the weighted betweenness .the weighted betweenness is thus slightly less efficient for identifying bridges .finally , we note that all locally defined quantities yield a slower decrease of and that the removal of nodes with the largest distance strength is rather effective since it targets nodes which connect very distant parts of the network .interestingly , when the attention shifts on the behavior of the integrity measures , one finds a different picture in which all the strategies achieve the same level of damage ( the curves of and present shapes very close to the one of ) .most importantly , their decrease is even faster and more pronounced than for topological quantities : for still of the order of , the integrity measures are typically smaller than .this emphasizes how the purely topological measure of the size of the largest component does not convey all the information needed .in other words , the functionality of the network can be temporarily jeopardized in terms of traffic even if the physical structure is still globally well - connected .this implies that weighted networks appear more fragile than thought by considering only topological properties .all targeted strategies are very effective in dramatically damaging the network , reaching the complete destruction at a very small threshold value of the fraction of removed nodes . in this picture ,the maximum damage is achieved still by strategies based on non - local quantities such as the betweenness which lead to a very fast decrease of both topological and traffic related integrity measures . 
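a recalculated attack is then a short greedy loop around these ingredients (a sketch building on the integrity helper above; 'centrality' is any callable returning a node-to-score dictionary, e.g. nx.betweenness_centrality or a strength ranking):

```python
def recalculated_attack(g, centrality, n_remove, weight="seats"):
    """greedy attack in which the centrality is recomputed on the damaged graph
    at every step and the currently most central node is removed; returns the
    removal sequence and the integrity curves."""
    h = g.copy()
    removed, curves = [], []
    for _ in range(n_remove):
        scores = centrality(h)
        target = max(scores, key=scores.get)
        removed.append(target)
        h.remove_node(target)
        curves.append(integrity(g, removed, weight=weight))
    return removed, curves
```

the static strategy discussed next is obtained by computing the scores once on the intact graph and removing nodes in that fixed order.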
on the other hand ,the results for the integrity shows that the network may unfortunately be substantially harmed also by using strategies based on local quantities more accessible and easy to calculate .+ the previous strategies based on a recursive re - calculation of the centrality measures on the network are however computationally expensive and depend upon a global knowledge of the effect of each node removal .it is therefore interesting to quantify the effectiveness of such a strategy with respect to the more simple use of the ranking information obtained for the network in its integrity . in this casethe nodes are removed according to their initial ranking calculated for the undamaged network . as shown in fig .[ fig : no_recalc ] , successive removals of nodes according to their initial outreach or bc lead to a topological breakdown of the network which is maximized in the case of recalculated quantities .this effect is very clear in the case of global measures of centrality such as the betweenness that may be altered noticeably by local re - arranegements .when traffic integrity measures are studied , however , differences are negligible ( fig .[ fig : no_recalc ] , bottom curves ) : a very fast decrease of the integrity is observed for all strategies , based either on initial or recalculated quantities .the origin of the similarity between both strategies can be traced back by studying how much the centrality ranking of the network vertices is scrambled during the damage process . in order to quantify the reshuffling of the ranking of the nodesaccording to various properties , we study the previously used rank correlation as measured by kendall s , computed between the rankings of the nodes according to a given property before and after each removal . in all cases , remains very close to , showing that the reshuffling caused by any individual removal remains extremely limited .slightly smaller values are observed when we compare the rankings of the betweenness or of the weighted betweenness .this fact can be understood since such quantities are non - local and the betweennesses is more prone to vary when any node in the network is removed .this evidence brings both good and bad news concerning the protection of large scale infrastructures . on one hand, the planning of an effective targeted attack does need only to gather information on the initial state of the network . on the other hand ,the identification of crucial nodes to protect is an easier task that somehow is weakly dependent on the attack sequence . 
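the reshuffling caused by each removal can be quantified along the same lines (a sketch; as above, scipy's tie-corrected kendalltau is used in place of the τ defined earlier):

```python
from scipy.stats import kendalltau

def ranking_reshuffling(g, centrality, n_remove):
    """kendall's tau between the centrality rankings computed just before and
    just after each single node removal along a greedy attack."""
    h = g.copy()
    taus = []
    for _ in range(n_remove):
        before = centrality(h)
        target = max(before, key=before.get)
        h.remove_node(target)
        after = centrality(h)
        nodes = list(after)
        tau, _ = kendalltau([before[n] for n in nodes],
                            [after[n] for n in nodes])
        taus.append(tau)
    return taus
```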
+ and are comparable for both cases .inset : initial decrease of for very small values of .,scaledwidth=44.0% ] as shown in fig .[ fig : geo ] , various geographical zones contain different numbers of central airports .the immediate consequence is that the different strategies for node removal have different impacts in different geographical areas .figure [ fig : geo2 ] highlights this point by showing the decrease of two integrity measures representative of topological and traffic integrity , respectively .these quantities were measured on subnetworks corresponding to the six following regions : africa , asia , europe , latin and north america , and oceania .figure [ fig : geo2 ] displays the case of a removal of nodes according to their strength ( other removal strategies lead to similar data ) .while the curves of topological damage are rather intertwined , the decrease of the different integrity measures is much faster for north america , asia and europe than africa , oceania and latin america ; in particular the removal of the first nodes do not affect at all these three last zones .such plots demonstrate two crucial points .first , various removal strategies damage differently the various geographical zones .second , the amount of damage according to a given removal strategy strongly depends on the precise measure used to quantify the damage .more generally , these results lead to the idea that large weighted networks can be composed by different subgraphs with very different traffic structure and thus different responses to attacks . +in summary , we have identified a set of different but complementary centrality measures for weighted networks . the various definitions of centrality are correlated but lead to different rankings since different aspects ( weighted or topological , and local or global ) are taken into account .the study of the vulnerability of weighted networks to various targeted attack strategies shows that complex networks are more fragile than expected from the analysis of topological quantities when the traffic characteristics are taken into account . in particular , the network s integrity in terms of carried traffic is vanishing significantly before the network is topologically fragmented .moreover , we have compared attacks based on initial centrality ranking with those using quantities recalculated after each removal , since any modification of the network ( e.g. a node removal ) leads to a partial reshuffling of these rankings .strikingly , and in contrast to the case of purely topological damage , the integrity of the network is harmed in a very similar manner in both cases .all these results warn about the extreme vulnerability of the traffic properties of weighted networks and signals the need to pay a particular attention to weights and traffic in the design of protection strategies . + * acknowledgments * we thank iata for making the airline commercial flight database available .are partially supported by the eu within the 6th framework programme under contract 001907 `` dynamically evolving , large scale information systems '' ( delis ) .a. barrat , m. barthlemy , r. pastor - satorras , and a. vespignani , proc .natl . acad .usa * 101 * , 3747 ( 2004 ) .e. almaas , b. kovcs , t. viscek , z. n. oltvai and a .-barabsi , _ nature _ * 427 * , 839 ( 2004 ) .w. li and x. cai , phys .e * 69 * , 046106 ( 2004 ) .a. barrat , m. barthlemy , and a. vespignani , phys .lett . , * 92 * , 228701 ( 2004 ) .a. barrat , m. barthlemy , and a. 
vespignani , phys .e * 70 * , 066149 ( 2004 ) .
in real networks complex topological features are often associated with a diversity of interactions as measured by the weights of the links . moreover , spatial constraints may as well play an important role , resulting in a complex interplay between topology , weight , and geography . in order to study the vulnerability of such networks to intentional attacks , these attributes must be therefore considered along with the topological quantities . in order to tackle this issue , we consider the case of the world - wide airport network , which is a weighted heterogeneous network whose evolution and structure are influenced by traffic and geographical constraints . we first characterize relevant topological and weighted centrality measures and then use these quantities as selection criteria for the removal of vertices . we consider different attack strategies and different measures of the damage achieved in the network . the analysis of weighted properties shows that centrality driven attacks are capable to shatter the network s communication or transport properties even at very low level of damage in the connectivity pattern . the inclusion of weight and traffic therefore provides evidence for the extreme vulnerability of complex networks to any targeted strategy and need to be considered as key features in the finding and development of defensive strategies .
gene expression is the process by which information from a gene is used in the synthesis of a functional gene product . these products are often proteins , but there are also non - protein coding genes where the product is a functional rna . it has been predicted that more than 30,000 rna genes are associated with the human genome . non - protein coding genes and their products can vary considerably in length . the shortest products , micro rnas ( mirna ) , are on average only 22 bp , whereas long non - coding rnas ( lncrnas ) are defined as transcribed rna molecules longer than 200 nucleotides in length . there have been several publications indicating that lncrnas might play an important role in cancer development and a good review of their functional role in human carcinomas is given in . lncrnas are also thought to play a regulatory role in cancer - associated pathways governing mechanisms such as cell growth , invasion , and metastasis , and have been seen to be expressed differently in primary and metastatic cancer . lncrnas might thus provide insights into the mechanisms underlying tumor development . lncrnas originate everywhere in the genome , but especially in long stretches where no protein - coding genes have been identified . an example of such an area is 8q24 , where multiple single nucleotide polymorphisms ( snps ) have been associated with risk of developing prostate cancer . currently , there are at least 11 databases which record lncrnas . microarrays are frequently used to locate rna genes . a microarray contains multiple copies of the same dna oligonucleotides , known as probes , which are hybridized to a labeled rna sample , and the array is subsequently washed . theoretically this will result in the labeled sample only remaining where the sample hybridized to probes . the signal intensities at the corresponding locations on the microarray are used as a measure of the relative abundance of hybridization of each probe . typically a probe corresponds to a specific genomic region . sometimes the probes overlap , referred to as tiling , and such arrays are called tiled microarrays . tiled microarrays have been successful in assessing expression of non - coding rnas . the ability to accurately detect the true gene - expression signal in microarrays is affected by several sources of variation . further issues and different biases arise when using tiled microarrays , as opposed to other analyses of differential expression . it is therefore important to take technical variation into account when doing statistical analysis on microarray data . currently a variety of methods are available to analyse data from tiled microarrays , but as expression levels are generally lower for lncrnas than for protein coding genes , conventional methods for differential expression detection may have difficulties detecting them . a good overview of available methods is found in otto et al .
, where the tileshuffle method is introduced and shown to have higher precision than the commonly used tas and mat methods . the tileshuffle method identifies transcribed segments in terms of significant differences from the background distribution , using a permutation test statistic , called a window score . all probes within a sliding window have a window score assigned ( arithmetic mean trimmed by median or by max and min value ) . further , probes are subdivided into bins by gc content and processed independently . the significance of a window score is assessed by permuting probes across the array , but always within the same bin . empirical p - values are estimated by counting the number of permuted windows with a higher score . the aim of this study is to assess the robustness of the tileshuffle method on the level of expressed regions . it utilizes a special array design where every probe is repeated ten times on each tiled array . this enables monte - carlo simulations of expression signals which are used to estimate expression on pseudo - arrays , whose differences lie in a variability that is usually neglected in microarray experiments . further , a single biological sample was split into three and used on repeated arrays , providing estimates of another variability that is commonly neglected . the consistency in regions selected across the pseudo - arrays , which should `` in theory '' give identical results , will be used as a measure of the robustness of the tileshuffle method . the data discussed in this publication are rna expression data from custom designed nimblegen microarray experiments where the same prostate tissue sample was used on three arrays . the data have been deposited in ncbi s gene expression omnibus and are accessible through geo series accession number gse45934 . the arrays contained 50 nucleotide probes from chr8:127640000 - 129120000 at locus 8q24 , tiled at a 20 base interval . the whole region was tiled evenly , but probes with blat score greater than 5 or blast score greater than 40 were excluded . that left in total 54236 ( out of 74000 ) probes , each of which was replicated 10 times on the array . spatial artifacts in the expression signal were minimized by aggregating the wells of the microarray into ten non - overlapping logical virtual containers , allocating each of the ten replicates of a probe to a different container . these ten replicate spots for each probe , evenly spread across the array , permitted monte - carlo simulation of the expression signals . in that way , for each of the three microarrays , 1000 pseudo replicate arrays were produced , with only one repetition of each probe , selected at random . the pseudo - arrays were made in triplets , such that the same set of replicates was used to produce pseudo - arrays for all three microarrays within every simulation . the arrays were normalized by the quantile normalization method and subsequently analysed with the tileshuffle method under various conditions , detailed later , but always one at a time . the window size , the minimum length of selected areas , was set to 1000 bases , as the aim was to detect relatively long areas . three gc - content bins were used and the number of permutations was 1000 . all statistical analyses were performed in the r statistical package and graphics were generated with the ggplot2 library . the tileshuffle method assesses significance on minimal expected transcriptional units rather than on a single probe level .
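a schematic numpy illustration of the two computational ingredients just described is given below : a trimmed - mean window score with permutation p - values computed within gc - content bins ( a much simplified stand - in for the actual tileshuffle implementation , which works on genomic coordinates , uses 1000 - base windows and corrects for multiplicity ) , and the generation of pseudo - array triplets that reuse the same replicate index on all three arrays . the probe order , bin labels and window length used here are illustrative assumptions .

```python
import numpy as np

def window_scores(signal, win=50):
    """Trimmed-mean window score: average probe intensity in a sliding window
    after discarding the smallest and largest value (one of the trimming
    variants mentioned for tileshuffle)."""
    scores = np.empty(len(signal) - win + 1)
    for i in range(len(scores)):
        w = np.sort(signal[i:i + win])
        scores[i] = w[1:-1].mean()
    return scores

def empirical_pvalues(signal, gc_bin, win=50, n_perm=1000, seed=None):
    """Permutation p-values for the window scores, shuffling probes only
    within the same GC-content bin, as in tileshuffle."""
    rng = np.random.default_rng(seed)
    observed = window_scores(signal, win)
    exceed = np.zeros_like(observed)
    for _ in range(n_perm):
        perm = signal.copy()
        for b in np.unique(gc_bin):
            idx = np.where(gc_bin == b)[0]
            perm[idx] = rng.permutation(perm[idx])
        exceed += window_scores(perm, win) >= observed
    return (exceed + 1.0) / (n_perm + 1.0)

def pseudo_array_triplets(arrays, n_sim=1000, seed=None):
    """Yield triplets of pseudo-arrays from three (n_probes, 10) matrices of
    replicated probe intensities; the same randomly drawn replicate index is
    used on all three arrays within a simulation, as described above."""
    rng = np.random.default_rng(seed)
    n_probes, n_reps = arrays[0].shape
    rows = np.arange(n_probes)
    for _ in range(n_sim):
        picks = rng.integers(n_reps, size=n_probes)
        yield [a[rows, picks] for a in arrays]
```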
therefore the tiled region was split up into areas of length 100 bases .these areas will be underlying when addressing genomic locations that are expressed .an area will be deemed as expressed if all corresponding 100 bases were within an expressed region .as each probe was repeated ten times on every array , the consistency of the method could be estimated by monte - carlo simulations on the probe sets . in that way , 1000 pseudo - arrays were produced , each by randomly selecting one repetition of each probe .all experimental sources of variation of these pseudo - arrays are identical , except the physical location of the probesets within the microarray . in order to investigate the effect of this probe - to - probe variation on calls of expression , the tileshuffle method was run on each of the 1000 pseudo - arrays .figure [ fig1 ] shows on how many of the pseudo - arrays each genomic location was `` called '' `` expressed '' ( p .05 , adjusted for multiplicity ) . for clarity, the figure shows only the first 300.000 bases of the tiled area or about one - fifth of the tiled region .graphs for the remaining regions were similar .the figure shows that a great majority of the underlying tiled region is selected on at most 25% of the pseudo - arrays , whereas a few areas are selected consistently in near all cases . to compare across arrays, pseudo arrays were simulated in triplicates so that a single set containing probes from the same physical location on the microarray was generated for all three arrays at a time .thus , within each triplicate of pseudo - arrays , all probes have the same internal physical location of probes on the original microarrays , blocking the location effect of probes within a microarray .this emulates real situations where probes are not replicated .the performance of different methods for selecting subsets of the areas deemed expressed is compared in table [ tab:01 ] .the three columns show the areas selected on exactly one , two or all three pseudo - arrays within a triplicate , as a proportion of areas that are selected on * some * pseudo - array within the triplicate .ideally one would like to maximize the proportion of instances where a location is expressed on either none , or all of the three arrays .the results are shown by increasing proportion of areas that are selected on all three pseudo - arrays ..the average proportion of areas that are selected on exactly one , two or all three pseudo - arrays within a triplicate of all areas that are selected on some triplicate . 
from top - down : 1 ) only the 30 areas with the highest window score are selected in every simulation , 2 ) the whole probeset was used , with one replicate of each probe , 3 ) half of the probeset was used , with two replicates of each probe , 4 ) the median score over every 10 probes was calculated a priori and fed to the method , 5 ) only areas that are deemed expressed in at least 99% of the replications are selected , 6 ) all 10 replicates were fed to the method . [ tab:01 ] finally , supplementary fig . 1 shows the relationship between the average proportion of the total underlying genomic area that is chosen in each simulation and the number of replicates used in every simulation . the relationship is shown for the proportion that is selected on at least one array , at least two arrays and all three arrays , with the number of replicates running from 1 up to 10 replicates of each probe per array . this paper is based on an experimental setup using three tiled microarrays containing the same biological sample , each using ten repetitions of each probe . monte - carlo simulations from real data are used to investigate the robustness of the tileshuffle method when targeting areas on locus 8q24 that are expressed in prostate cancer . this study raises several concerns regarding the consistency of the areas selected . first of all , the method shows considerable variability depending on which of the 10 replicates of each probe the method is applied to . ideally , every area on figure [ fig1 ] should be expressed in either all or none of the monte - carlo simulations , resulting in the proportion being close to 0 or 1 . as shown in the figure , these proportions span the whole spectrum from zero to one . most probes which are `` called '' are only called in fewer than 25% of all simulations , indicating a serious lack of repeatability . a few areas are selected consistently on nearly all pseudo - arrays , but as probes are not repeated in the common situation , this plot is not available and one cannot identify locations that are consistently expressed across pseudo - arrays . table [ tab:01 ] shows poor between - array consistency in the choice of areas , which should ideally be identical . probes which show consistency across repetitions within an array ( selected in at least 99% of simulations ) do not show more consistency across arrays . the difference in results between applying the method to all ten replicates and first calculating the median of every ten probes and then applying the method is somewhat counter - intuitive . a few further points should also be noted : selecting a fixed number of areas with the highest window score is not a robust method . the difference between using repeated probes and using denser tiling is small , although in favour of repeated probes . finally , as shown in supplementary fig . 1 , the average proportion of the underlying genomic region that is selected increases rapidly as the number of replicates of each probe increases . with ten replicates of each probe , almost 60% of the underlying region is selected on at least one array out of three . this might suggest that the majority of the underlying region is `` expressed '' by definition and that the lack of consistency is caused by low statistical power . this paper shows poor consistency of the tileshuffle method both between selections based on different replicates of probes within a microarray and also between microarrays containing the same sample .
as the tileshuffle method has shown to have higher precision than the mas and tas software, one can conclude that methods giving unreliable results are in common use .this research project was funded in part by grant 5r01ca129991 - 02 from the nci and by an fs - grant from the icelandic centre for research ( rannis ) .benjamin m bolstad , rafael a irizarry , magnus strand , and terence p. speed .a comparison of normalization methods for high density oligonucleotide array data based on variance and bias ._ bioinformatics _ , 190 ( 2):0 185193 , 2003 .d. bu , k. yu , s. sun , c. xie , g. skogerb , r. miao , h. xiao , q. liao , h. luo , g. zhao , et al .noncode v3 .0 : integrative annotation of long noncoding rnas ._ nucleic acids research _ , 400 ( d1):0 d210d215 , 2012 .s. chung , h. nakagawa , m. uemura , l. piao , k. ashikawa , n. hosono , r. takata , s. akamatsu , t. kawaguchi , t. morizono , et al .association of a novel long non - coding rna in 8q24 with prostate cancer susceptibility ._ cancer science _ ,1020 ( 1):0 245252 , 2011 .gibb , e.a .vucic , k.s.s .enfield , g.l .stewart , k.m .lonergan , j.y .kennett , d.d .becker - santos , c.e .macaulay , s. lam , c.j .brown , et al . human cancer long non - coding rna transcriptomes ._ plos one _ , 60 ( 10):0 e25915 , 2011 . j.m .johnson , s. edwards , d. shoemaker , and e.e .dark matter in the genome : evidence of widespread transcription detected by microarray tiling experiments ._ trends in genetics_ , 210 ( 2):0 93102 , 2005 . w evan johnson , wei li , clifford a meyer , raphael gottardo , jason s carroll , myles brown , and x shirley liu .model - based analysis of tiling - arrays for chip - chip ._ proceedings of the national academy of sciences _ , 1030 ( 33):0 1245712462 , 2006 .dione kampa , jill cheng , philipp kapranov , mark yamanaka , shane brubaker , simon cawley , jorg drenkow , antonio piccolboni , stefan bekiranov , gregg helt , et al .novel rnas identified from an in - depth analysis of the transcriptome of human chromosomes 21 and 22 . _ genome research _ , 140 ( 3):0 331342 , 2004 .q. liao , h. xiao , d. bu , c. xie , r. miao , h. luo , g. zhao , k. yu , h. zhao , g. skogerb , et al .ncfans : a web server for functional annotation of long non - coding rnas . _ nucleic acids research _ , 390 ( suppl 2):0 w118w124 , 2011 .a. risueo , c. fontanillo , m.e .dinger , and j. de las rivas .gatexplorer : genomic and transcriptomic explorer ; mapping expression probes to gene loci , transcripts , exons and ncrnas ._ bmc bioinformatics _ , 110 ( 1):0 221 , 2010 .sacco , a. baldassarre , and a. masotti .bioinformatics tools and novel challenges in long non - coding rnas ( lncrnas ) functional analysis ._ international journal of molecular sciences _ , 130 ( 1):0 97114 , 2011 .shore , j.i .herschkowitz , and j.m .rosen . noncoding rnas involved in mammary gland development and tumorigenesis : theresa long way to go. _ journal of mammary gland biology and neoplasia _ ,pages 116 , 2012 .tahira , m.s .kubrusly , m.f .faria , b. dazzani , r.s .fonseca , v. maracaja - coutinho , s. verjovski - almeida , m.c.c .machado , and e.m .long noncoding intronic rnas are differentially expressed in primary and metastatic pancreatic cancer ._ molecular cancer _ , 100 ( 1):0 141 , 2011 .x. wang , x. song , c.k .glass , and m.g .the long arm of long noncoding rnas : roles as sensors regulating gene transcriptional programs ._ cold spring harbor perspectives in biology _ , 30 ( 1 ) , 2011
* motivation : * in this paper the tileshuffle method is evaluated as a search method for candidate lncrnas at 8q24.2 . the method is run on three microarrays . microarrays which all contained the same sample and repeated copies of tiled probes . this allows the coherence of the selection method within and between microarrays to be estimated by monte carlo simulations on the repeated probes . * results : * the results show poor consistency in areas selected between arrays containing identical samples . a crude application of the method can result in majority of the region to be selected , resulting in a need for further restrictions on the selection . restrictions based on ranking internal tileshuffle test statistics do not increase precision . as the tileshuffle method has been shown to have higher precision than the mas and tas software , one can conclude that methods giving unreliable results are in common use . * availability : * the data discussed in this publication have been deposited in ncbi s gene expression omnibus and are accessible through geo series accession number gse45934 . * contact : * sigrunhelga.com
spectral analysis of large random matrices plays an important role in multivariate statistical estimation and testing problems .for example , variances of the principal components are functions of covariance eigenvalues , and roy s largest root test statistic is the spectral distance between the sample covariance and its population counterpart .asymptotic behaviors of sample covariance eigenvalues have been extensively studied in the literature .when the dimension is small and the population eigenvalues are distinct , and proved the asymptotic normality for sample eigenvalues . under the gaussian assumption , established the edgeworth expansion and showed that the convergence rate is of order . and illustrated the effects of skewness and kurtosis on the limiting distribution .when is large , revealed for gaussian data that the largest sample eigenvalue , after proper standardization , follows the tracy - widom law asymptotically . further proved that the convergence rate to the tracy - widom law is of order , which is astonishingly fast . despite these elegant properties ,existing results rely heavily on some simple gaussian or sub - gaussian assumptions .their applications to hypothesis testing and constructing confidence intervals under more general settings are largely unknown . motivated by the covariance testing problem, the major focus of this paper is to study asymptotic behaviors of a particular type of spectral statistics related to the covariance matrix .here we are interested in the non - gaussian setting with the dimension allowed to grow with the sample size .specifically , let be independent realizations of a -dimensional random vector with mean and covariance matrix .denote the sample covariance matrix by .we shall derive the limiting distribution and establish bootstrap confidence intervals for the following spectral statistic where is a prespecified integer - valued parameter representing the degree of sparsity " .the statistic is of general and strong practical interest . by setting , it reduces to the conventional roy s test statistic , where denotes the spectral norm of .if , we obtain a generalized version of roy s test statistic , allowing us to deal with large covariance matrices : .we defer to section [ sec : main ] for more details . ] . to study the limiting behavior of in high dimensions , a major insight is to build the connection between the analysis of the maximum eigenvalue and recent developments in extreme value theory .in particular , by viewing the maximum eigenvalue as the extreme value of a specific infinite - state stochastic process , the gaussian comparison inequality recently developed in can be used .new empirical process bounds are established to ensure the validity of the inference procedure . in the end , bootstrap inference follows .two interesting observations are discovered .first , in the low - dimensional regime ( ) , the results in this paper solve a long standing question on bootstrap inference of eigenvalues when multiple roots exist .the -out - of- bootstrap is known to be rather sensitive to the choice of . 
in comparison , the multiplier - bootstrap - based inference procedure used in this paper does not involve any tuning parameter , and is fairly accurate in approximating the distribution of the test statistic .secondly , it is well - known that roy s largest root test is optimal against rank - one alternatives .previously it was unclear whether such a result could be extended to high dimensional settings .this paper demonstrates that such a generalization can be made . throughout the paper ,let and denote the sets of real numbers and integers .let be the indicator function .let and be a dimensional real vector and a real matrix . for sets , let be the subvector of with entries indexed by , and be the submatrix of with entries indexed by and .we define the vector and ( pseudo-)norms of to be and .we define the matrix spectral ( ) norm as . for every real symmetric matrix , we define and to be its largest and smallest eigenvalues . for any integer and real symmetric matrix , we define the -sparse smallest and largest eigenvalues of to be where is the set of all -sparse vectors on the -dimensional sphere .moreover , we write for any positive definite matrix . for any and positive definite real - valued matrix , we write for any random vectors , we write if and are identically distributed. we use to denote absolute positive constants , which may take different values at each occurrence .for any two real sequences and , we write , , or equivalently , if there exists an absolute constant such that for any large enough .we write if both and hold .we write if for any absolute constant , we have for any large enough .we write and if and hold stochastically . for arbitrary positive integer ,we write =\{a\in\z : 1\leq a\leq n\} ] , with slight abuse of notation , we write for simplicity . by lemma[ lem : discret ] in section [ sec : proof ] , for any , there exists an -net of equipped with the euclidean metric , with its cardinality satisfying .[ thm : limiting_dist ] let assumptions [ cdt1 ] and [ cdt2 ] be satisfied and put .then for any -net of with cardinality , there exists a -dimensional centered gaussian random vector satisfying for with , such that where is an absolute positive constant , , and .there are several interesting observations drawn from theorem [ thm : limiting_dist ] .first , as long as for a properly chosen , the distribution of can be well approximated by that of the maximum of a gaussian sequence .it is worth noting that no parametric assumption is imposed on the data generating scheme .secondly , the result in theorem [ thm : limiting_dist ] , though not reflecting the exact limiting distribution of , sheds light on its asymptotic behavior . 
following the standard extreme value theory , when and the covariance matrix is sparse , follows a gumbel distribution asymptotically as .thirdly , we note that when , the techniques used to prove theorem [ thm : limiting_dist ] can be adapted to derive the limiting distributions of extreme sample eigenvalues .see section [ sec : app1 ] for details .the detailed proof of theorem [ thm : limiting_dist ] is involved .hence , a heuristic sketch is useful .a major ingredient stems from a gaussian comparison inequality recently developed by .[ lem : coupling_ineq ] let be independent random vectors in with mean zero and finite absolute third moments , that is , and for all and .consider the statistic .let be independent random vectors in with , .then for every , there exists a random variable such that where we write ,~~d_2=\e\bigg ( \max_{1\leq j\leq d}\sum_{i=1}^n|x_{ij}|^3 \bigg ) , \\d_3 = \sum_{i=1}^n\e\bigg[\max_{1\leq j\leq d}|x_{ij}|^3\cdot \mathds{1}\bigg\ { \max_{1\leq j\leq d}|x_{ij}|>\frac{\delta}{\log(d n)}\bigg\ } \bigg].\end{aligned}\ ] ] in view of lemma [ lem : coupling_ineq ] and the fact that is the supremum of an infinite - state process , the proof can be divided into three steps . in the first step ,we prove that the difference between and a discretized version " of it is negligible asymptotically .this is implied by the following generalized -net argument for the rescaled spectral norm .it extends the standard -net argument .[ lem : obs ] for any with the same support , positive definite matrix , and any real symmetric matrix , we have in the second step , we show that this discretized version of converges in distribution to the maximum of a finite gaussian sequence .this can be achieved by exploiting lemma [ lem : coupling_ineq ] .lastly , anti - concentration bounds are established to bridge the gap between the distributions of and its discretized version . the complete proof is provided in section [ sec : proof ] .the asymptotic result in theorem [ thm : limiting_dist ] is difficult to use in practice . to estimate the limiting distribution of empirically ,bootstrap approximation is preferred .for any , define where are i.i.d .standard normal random variables that are independent of .we use the conditional distribution of given the data to approximate the distribution of .the next theorem characterizes the validity of bootstrap approximation .[ thm : multiplier_bootstrap ] let assumptions [ cdt1 ] and [ cdt2 ] be satisfied , and assume that as .then there exists a sufficiently large absolute constant such that in other words , we have the proof of theorem [ thm : multiplier_bootstrap ] heavily relies on characterizing the convergence rates of sub - gaussian fourth - order terms .we defer this result and the detailed proof of theorem [ thm : multiplier_bootstrap ] to section [ sec : proof ] .the rest of this section gives asymptotic results for the non - normalized version of . 
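for concreteness , a brute - force numerical sketch of the sparse spectral statistic and of the multiplier bootstrap just described is given below . it enumerates all size - s supports , so it is feasible only for small p and s , and it assumes a particular centering convention ( mean - zero observations , no diagonal rescaling ) ; it is meant as a simulation aid , not as the exact procedure analysed above .

```python
import numpy as np
from itertools import combinations

def sparse_spectral_stat(X, Sigma, s):
    """sqrt(n) * max over size-s supports of the spectral norm of the
    restricted submatrix of (Sigma_hat - Sigma); brute force, small p only."""
    n, p = X.shape
    Sigma_hat = X.T @ X / n                     # assumes mean-zero observations
    D = Sigma_hat - Sigma
    return np.sqrt(n) * max(
        np.abs(np.linalg.eigvalsh(D[np.ix_(S, S)])).max()
        for S in combinations(range(p), s))

def multiplier_bootstrap(X, n_boot=500, s=2, seed=None):
    """Multiplier-bootstrap draws: the centred per-observation terms
    X_i X_i^T - Sigma_hat are reweighted by i.i.d. standard normal g_i."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    Sigma_hat = X.T @ X / n
    terms = np.einsum("ij,ik->ijk", X, X) - Sigma_hat       # shape (n, p, p)
    supports = list(combinations(range(p), s))
    draws = np.empty(n_boot)
    for b in range(n_boot):
        g = rng.standard_normal(n)
        M = np.einsum("i,ijk->jk", g, terms) / np.sqrt(n)
        draws[b] = max(np.abs(np.linalg.eigvalsh(M[np.ix_(S, S)])).max()
                       for S in supports)
    return draws
```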
to this end , let and be the rank - one projection and -sparse largest singular value of , given respectively by technically speaking , is a simpler version of .we show that , under an additional eigenvalue assumption , converges weakly to the extreme of a gaussian sequence .in particular , the following condition assumes that the -sparse ( restricted ) largest eigenvalue of is upper bounded by an absolute constant .[ cdt4 ] there exists an absolute constant such that .we define , for any , where are i.i.d .standard normal random variables independent of .the following theorem gives the gaussian approximation result for .[ thm : limiting_dist2 ] let assumptions [ cdt1][cdt4 ] be satisfied and set .then , for any -net of with cardinality , there exists a -dimensional centered gaussian random vector satisfying for with , such that where is a constant depending only on , , and . in addition , if satisfies as , then there exists an absolute constant large enough such that in other words , we have by comparing theorems [ thm : limiting_dist ] and [ thm : limiting_dist2 ] , we immediately observe some difference between the properties of and . to ensure the validity of the multiplier bootstrap approximation for , we only require , and thus allow to grow quickly .in contrast , the bootstrap approximation consistency for relies on , a constant of the same order as .a direct application of theorem [ thm : limiting_dist ] is on inferring extreme sample eigenvalues of spherical distributions .a random vector is said to be spherically distributed if its covariance matrix is proportional to the identity .note that this definition is slightly different from its counterpart in robust statistics , where a more stringent rotation - invariant property is required .it is known that when multiple roots exist ( i.e. , the population eigenvalues are not distinct ) , the sample eigenvalues are not asymptotically normal even under the gaussian assumption . and showed that inference is even more challenging for non - gaussian data as the limiting distributions of the sample eigenvalues rely on the skewness and kurtosis of the underlying distribution .estimation of these parameters is statistically costly .bootstrap methods are hence recommended for conducting inference .however , when multiple roots occur , beran and srivastava pointed out that the nonparametric bootstrap for eigenvalue inference is inconsistent .the -out - of- bootstrap and its modification are hence proposed to correct this .but the implementation is complicated since tuning parameters are involved .based on theorem [ thm : multiplier_bootstrap ] , we show that a simple multiplier bootstrap method leads to asymptotically valid inference for extreme eigenvalues , as stated in the next theorem .[ thm : limiting_dist3 ] suppose that assumptions [ cdt1 ] and [ cdt2 ] hold .in addition , assume that with an absolute constant. then , as long as , \bigg|\!=\!o_p(1),\end{aligned}\ ] ] and \bigg|\!=\!o_p(1).\end{aligned}\ ] ] here forms an independent standard gaussian sequence independent of the data . 
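as an illustration of the application to spherical data with tied eigenvalues , the short sketch below reuses the hypothetical helpers from the previous block to compare the observed statistic with a multiplier - bootstrap critical value . taking s equal to p corresponds to the unrestricted spectral norm of the deviation , and equating the ( 1 - alpha ) bootstrap quantile with the critical value is the assumed usage ; the paper's exact normalisation may differ , so this is a one - sample toy variant for illustration only .

```python
import numpy as np

# sparse_spectral_stat and multiplier_bootstrap are the helper functions
# sketched in the previous block (illustrative, not the paper's code).
rng = np.random.default_rng(0)
n, p, sigma2, alpha = 500, 8, 1.0, 0.05
X = rng.standard_normal((n, p)) * np.sqrt(sigma2)   # spherical: Sigma = sigma2 * I

stat = sparse_spectral_stat(X, sigma2 * np.eye(p), s=p)          # full spectral norm
crit = np.quantile(multiplier_bootstrap(X, n_boot=500, s=p), 1 - alpha)
print("reject" if stat > crit else "do not reject", "the spherical null")
```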
[cols="^ " , ] [ tab:3 ]spectral analysis for large random matrices has a long history and maintains one of the most active research areas in statistics .recent advances include the discovery of the tracy - widom law , an important family of distributions that quantifies the fluctuation of sample eigenvalues .a vast literature follows .however , more questions are raised than answered .in particular , no result has been promised for extensions to non - gaussian distributions with a nontrivial covariance structure .this paper fills this long - standing gap from a new perspective grown in the literature of extreme value theory .the obtained results prove to work in many cases which for a long time are known to be challenging to deal with .very recently , studied asymptotic behaviors of sample covariance eigenvalues under a pervasive assumption , that is , the largest eigenvalue grows quickly with the dimension . under this assumption, they proved the asymptotic normality for the sample eigenvalues . in comparison ,our results are built on the normalized covariance matrix and are obtained in the settings where the signals are not too strong .a natural question arises that whether a phase transition phenomenon occurs when signals change from weak to strong .in particular , how do the asymptotic distributions of sample eigenvalues change with the growing magnitudes of extreme eigenvalues ?we conjecture that this problem may be related to the normal mean problem in extreme value theory , and leave that question for future research .this section contains the proofs of the results in this paper .we first give an outline of the proof , which consists of three main steps .( i ) in the first step , we approximate , the supremum over a continuous function space induced by , by the maximum over a discrete function space induced by , for as in theorem [ thm : limiting_dist ] .( ii ) in the second step , we show that the above discretized version of over converges weakly to the maximum of a gaussian sequence .lastly , we employ the anti - concentration inequality ( lemma [ lem : anticoncentration ] ) to complete the proof . * step i. * let be an arbitrary number .we first employ the following lemma to connect the supremum over a continuous function space induced by to the maximum over a discrete function space induced by .[ lem : discret ] there exists an -net of equipped with the euclidean metric satisfying that .further , for any -net of , we have where lemma[ lem : discret ] and the fact yield that , for any , below to bound .note that , by lemma [ lem : orlicz norm ] , we have for any that . taking maximum over on both sides yields [ lem : connecting ] for any , we have \geq1 - 4e^{-t},\end{aligned}\ ] ] where are absolute constants , and are as in theorem [ thm : limiting_dist ] . using lemma [ lem : connecting ], it follows from that for any and , \bigg)\geq1\!-\!4e^{-t}.\end{aligned}\ ] ] taking and , we deduce that \bigg)\notag \\\geq1-\frac{4}{n}.\end{aligned}\ ] ] for any , write for the -net constructed in * step i * , and recall that for and , it follows from that the following lemma gives a gaussian coupling inequality for .[ lem : coupling ] for every , we have where is as in , is the constant in lemma [ lem : maximal_ineq ] with and . in view of lemma [ lem : coupling ] , by taking we have where .putting , we have without loss of generality , assume that ( the case when can be similarly dealt with by replacing all below by ) . combining and , we have where is an absolute constant . 
taking , we deduce from that there exists an absolute positive constant such that by lemma [ lem : anticoncentration ] , we have for some absolute constant . note that , for every and , \bigg)+\p\bigg ( \bigg| \hat{q}_{\max}-\max_{\bv\in\mathbb{n}_{\epsilon_1}}|g_{\bv}| \bigg|>\eta\bigg).\end{aligned}\ ] ] taking in the last display , we deduce from and that this completes the proof . noting that we have by the triangle inequality , it follows that using lemma [ lem : sigma norm ] , we deduce that combining and gives as desired .based on the -net described in lemma [ lem : discret ] and the corresponding -dimensional gaussian random vector introduced in the proof of theorem [ thm : limiting_dist ] with , we aim to show that in view of theorem [ thm : limiting_dist ] , it suffices to prove that where . in particular, we note that via the proof of theorem [ thm : limiting_dist ] . by lemma [ lem : comparison ], we have , where satisfies and next we bound . for , we have by definition , we have , for , it follows that , for , for simplicity , we define in this notation , we have further , define the following lemma gives an upper bound for .[ lem : lt2 ] for any , there exists an absolute positive constant only depending on such that =o(p_1^{-m}),\end{aligned}\ ] ] where , and for and are defined in .by lemma [ lem : lt2 ] , there exists an absolute positive constant depending only on such that turning to , by lemma [ lem : connecting ] , there exists a constant depending only on such that \leq \frac{4}{n}.\end{aligned}\ ] ] combining and , we have with probability greater than , since is non - decreasing for , we have with probability greater than , putting , , and together , we conclude that this proves . finally , using theorem [ thm : limiting_dist ], we deduce that for any , there exists a constant depending only on and such that \lesssim p_1^{-m}.\end{aligned}\ ] ] this completes the proof .theorems [ thm : limiting_dist2 ] and [ thm : limiting_dist3 ] can be proved based on similar arguments used in the proofs of theorems [ thm : limiting_dist ] and theorem [ thm : multiplier_bootstrap ] .the details are hence omitted . to begin with , we introduce the following notations .define where .we divide the proof into three main steps .( i ) first , using the discretized version as a bridge , we show that converges weakly to the maximum of a gaussian sequence .( ii ) next we show that the difference between and the test statistic is negligible asymptotically .( iii ) finally , we show that the gaussian maximum can be approximated by its multiplier bootstrap counterpart .the technical details are stated as lemmas with their proofs deferred to section [ sec : main_lemmas ] .[ lem : testinglem1 ] let assumptions [ cdt5 ] and [ cdt6 ] be satisfied .under the null hypothesis , we have the following two assertions hold .\(i ) we have \geq1-\frac{4}{n}-\frac{4}{m},\end{aligned}\ ] ] where is an absolute constant , , and .\(ii ) let be an -net with and .then , there exists a -dimensional gaussian random vector satisfying with ( here , without loss of generality , we assume ) such that \lesssim l_2 ^ 2 \frac { \gamma_{m}^{1/8}(s , d)}{m^{1/8}}+l_2 ^ 2 \frac { \gamma^{9/2}_{m}(s , d)}{m^{1/2}},\end{aligned}\ ] ] where is an absolute constant and is the constant in lemma [ lem : maximal_ineq ] by taking .[ lem : testinglem2 ] let assumptions [ cdt5 ] and [ cdt6 ] be satisfied . 
under the null hypothesis , we have , as , \notag \\ \geq1-\frac{4}{n}-\frac{4}{m},\end{aligned}\ ] ] where is an absolute constant .[ lem : testinglem3 ] let assumptions [ cdt5 ] and [ cdt6 ] be satisfied . under the null hypothesis , we have , as , combining andwe deduce that there exists an absolute constant such that \lesssim l_2 ^ 2 \frac { \gamma^{1/8}_{m}(s , d ) } { m^{1/8}}+l_2 ^ 2 \frac { \gamma^{9/2}_{m}(s , d ) } { m^{1/2}}.\end{aligned}\ ] ] using arguments similar to those used in the proof of theorem [ thm : limiting_dist ] , we deduce that this , together with lemma [ lem : testinglem3 ] yields that which completes the proof .it is equivalent to proving that for sufficiently large , first we claim that .to see this , it suffices to show that it suffices to show since , by exactly the same argument as in the proof of lemma [ lem : testinglem2 ] , the difference between and is of the order .it then reduces to show since we have , for any , this is due to the fact that and .then we can further write \notag\\ \leq&~ \p\left\ { h_1\geq\frac{c_{41}}{4 } \sqrt{\frac{s\log ( ed / s)}{m}}\right\ } + \p\left\ { h_2\geq\frac{c_{41}}{4 } \sqrt{\frac{s\log ( ed / s)}{m}}\right\ } \notag\\ & ~+\p\left\ { h_3\geq\frac{c_{41}}{4 } \sqrt{\frac{s\log ( ed / s)}{m}}\right\ } + \p\left\ { h_4\geq\frac{c_{41}}{4 } \sqrt{\frac{s\log ( ed / s)}{m}}\right\},\end{aligned}\ ] ] where we bound , and respectively . without loss of generality , we only need to consider and . for , define for some sufficiently large . using the standard -net argument , it can be shown that ( using lemma 5.4 in ) and similar to lemma [ lem : lt2 ] , define and , by markov s inequality , we have for any , taking , it follows \bigg).\end{aligned}\ ] ] similar to , we get as long as . furthermore , using the fact we deduce that .putting together the pieces , we conclude that . secondly , we study . as in lemma[ lem : testinglem2 ] , we bound instead , where this is , again , because the difference between them is of the order .note that where equation then follows from the facts that .this completes the proof .define the class of rank one perturbations of the identity matrix as follows : then it suffices to prove the conclusion with replaced by all and .let be sufficiently small .for any two distributions and , we write to represent the product measure of and . in particular , we use to denote the product distribution of independent copies of . recall that the minimax risk is lower bounded by the bayesian risk .define to be the mixture alternative distribution with a prior distribution on with taking values uniformly in : where and denotes the uniform measure on with respect to the haar measure .define to be the probability measure of .in particular , let be the probability measure of . note that , for any measurable set , the measure satisfies also by the definition of the probability measure , we have due to the triangular inequality , we have putting , we deduce that where denotes the total variation distance between the two probability measures and .to finish the proof , we introduce another distance measurement over distributions .let the -divergence between two probability measures and be defined as in view of the proof of proposition 2 in , there exists a function with such that where tends to zero as . 
using the pinsker s inequality ( see , for example , lemma 2.5 in ) deduce from that this completes the proof .for any fixed and ] .then it is straightforward to see that using the binomial coefficient bound we get next we prove the second assertion . for every with support and its -net , we can find some satisfying that and . by lemma [ lem : obs ], we have therefore , we have taking maximum over $ ] with on both sides yields together , the last two displays imply .this completes the proof .we follow a standard procedure .first we show concentration of around its expectation .next we upper bound .to prove the concentration , we define for every that by lemma [ lem : tail_ineq ] , there exists an absolute constant such that for every , \notag\\ \geq 1 - 4e^{-t},\end{aligned}\ ] ] where .we first bound and , starting with . under assumption[ cdt1 ] , we have and hence . for , using a similar argument as in the proof of lemma [ lem : discret ] , we deduce that for every , by taking , we have where is an -net of with properties in lemma [ lem : discret ] . using lemma [ lem : maximal_ineq ] it follows that combining , , and gives \geq1 - 4e^{-t},\end{aligned}\ ] ] where we recall that .now we bound the expectation .here we use a result that involves the generic chaining complexity , , of a metric space . see definition 2.2.19 in .we refer the readers to for a systematic introduction .note that and for any .it follows from lemma [ lem : gc1 ] and lemma [ lem : gc2 ] that by lemma [ lem : gc3 ] , we have where .similar to the proof of lemma [ lem : discret ] , we have where . together , , , and imply that combining and , we deduce that \geq1 - 4e^{-t}.\end{aligned}\ ] ] this completes the proof .recall that and for and .moreover , define for and put for .let be a -dimensional gaussian random vector satisfying applying lemma [ lem : coupling_ineq ] to and , we have , for any , where we put ,\\ d_2&=\e\bigg ( \max_{1\leq j\leq 2p_{\epsilon}}\sum_{i=1}^n|r_{ij}|^3\bigg ) , \\d_3&=\sum_{i=1}^n \e\bigg[\max_{1\leq j\leq 2p_{\epsilon}}|r_{ij}|^3 \mathds{1}\bigg\ { \max_{1\leq j\leq 2p_{\epsilon}}|r_{ij}|>\frac{\delta n^{1/2}}{\log(2p_{\epsilon}\vee n)}\bigg\ } \bigg].\end{aligned}\ ] ] note that , for , ,\end{aligned}\ ] ] we have hence , we deduce from that where next we bound , , and , starting with . by lemma [ lem : cher1 ] , for , using lemma [ lem : orlicz norm ] , we deduce that this gives and hence to bound , by lemmas [ lem : orlicz norm ] and [ lem : maximal_ineq ] , we have which further implies combining , , and yields for , it follows from lemma [ lem : cher2 ] that by lemma [ lem : orlicz norm ] , we have and hence further , in view of lemma [ lem : maximal_ineq ] , we have together , , , and yield that for , using lemmas [ lem : orlicz norm ] and [ lem : maximal_ineq ] , we deduce that consequently , we have finally , putting , , , and together , we obtain as desired .define for some sufficiently large .then , for some constant , \\ & + \p\left\ { \max_{1\leq j , k\leq p_1}\bigg|\frac{1}{n}\sum_{i=1}^n(w_{ij}\bar w_{ik})^2-\e ( w_{ij}w_{ik})^2 \bigg|\!\geq\ ! 
c_{22}\sqrt{\frac{\log p_1}{n } } \right\ } .\end{aligned}\ ] ] using cauchy - schwarz inequality , we deduce that , for any , ^{1/2}\\ \leq & ( \e w_{ij}^4 ) ^{1/2 } ( n+p_1)^{- \tau^2\eta/4 } \cdot\big[\e\big\ { w_{ik}^4\exp ( \eta w_{ik}^2 / 2 ) \big\ } \big]^{1/2}.\end{aligned}\ ] ] by the elementary inequality , , we have , for any , under assumptions [ cdt1 ] and [ cdt2 ] , for any , there exists a constant such that hence , for all sufficiently large , , and , we have it follows that } _{ f_1}\\ & + \underbrace{\p\left\ { \max_{1\leq j , k\leqp_1}\bigg|\frac{1}{n}\sum_{i=1}^n(w_{ij}\bar w_{ik})^2\!-\!\e ( w_{ij}\bar w_{ik})^2 \bigg|\!\geq\ ! \frac{c_{22}}{2}\sqrt{\frac{\log p_1}{n}}\right\ } } _ { f_2}.\end{aligned}\ ] ] for , we have , for any , , and sufficiently large , to bound , it suffices to show that , for any , there exists an absolute constant depending only on such that define for and . by markov s inequality, we have for any , \\ & \leq \exp\big ( -c_{25 } \ , t\sqrt{n\log p_1 } \ , \big ) \prod_{i=1}^n \e \exp(t w_{ijk } ) .\end{aligned}\ ] ] using inequalities and for , we deduce that \\ \leq&\exp\bigg [ -c_{25 } \ , t\sqrt{n\log p_1}+\sum_{i=1}^n\e \ { t^2w_{ij k}^2\exp(t|w_{ijk}| ) \ } \bigg].\end{aligned}\ ] ] taking gives .\end{aligned}\ ] ] using cauchy - schwarz inequality , we have ^{1/2 } .\end{aligned}\ ] ] according to assumption [ cdt1 ] , for any and sufficiently large and satisfying that , there exists a constant depending on , , and such that consequently , there exists a positive constant depending on , , and such that combining and , we obtain that for and sufficiently large , , and , for any . therefore , for any , there exists a constant depending only on such that holds .similarly , it can be shown that by taking , we get , which completes the proof .similar to lemma [ lem : discret ] , we have , for any , of note , we have using lemma [ lem : connecting ] , we deduce that , for any , \bigg)\geq1 - 4e^{-t},\end{aligned}\ ] ] and \bigg)\geq1 - 4e^{-t},\end{aligned}\ ] ] where is an absolute constant , , , and .it follows that , for any , \bigg)\geq1 - 4e^{-t_1}-4e^{-t_2}.\end{aligned}\ ] ] taking and gives \geq1-\frac{4}{n}-\frac{4}{m},\end{aligned}\ ] ] which proves . combining and , and taking , we obtain \bigg)\notag\\ & \geq1-\frac{4}{n}-\frac{4}{m}.\end{aligned}\ ] ] recalling the definition of in , we have where is as in the proof of theorem [ thm : testing ] .moreover , there exists a -dimensional gaussian random vector satisfying such that for every , where .it follows that taking , it follows from and that \lesssim l_2 ^ 2 \frac { \gamma^{1/8}_{m}(s , d)}{m^{1/8}}+l_2 ^ 2 \frac { \gamma^{9/2}_{m}(s , d)}{m^{1/2}}.\end{aligned}\ ] ] this completes the proof .write . by definition, we have combining and , and taking and , we have \geq 1-\frac{4}{n}-\frac{4}{m},\end{aligned}\ ] ] where is an absolute constant , , and .it follows that , for all sufficiently large , ^{-1}\right ) \geq1-\frac{4}{n}-\frac{4}{m}. \notag\end{aligned}\ ] ] this , together with , , , and , proves .define , where and are independent standard gaussian random variables that are independent of and . as in lemma[ lem : testinglem1 ] , we have for , putting , we have , for , .\end{aligned}\ ] ] it follows that define similar to the proof of theorem [ thm : multiplier_bootstrap ] , it can be shown that with probability greater than , where as . 
by lemma [ lem : comparison ], we have then , using lemma [ lem : discret ] and lemma [ lem : testinglem2 ] , we deduce that as desired . in the sequel, we define and to be the sets of positive real values and integers .the following two lemmas are elementary , yet very useful , in the proofs of the above results .[ lem : sigma norm ] for any , we have by definition , it is straightforward that , and as desired .[ lem : orlicz norm ] for , define the function , .the orlicz norm for a random variable is given by also , define the ( ) norm of a random variable by .then , for every we have note that for every and , .then , we have for any , the conclusion thus follows immediately .the following lemma is from .[ lem : covering numbers ] let be a metric space . for every , a subset of called an -net of if for every , there is some such that .the minimal cardinality of an -net , if finite , is called the covering number of at scale , and is denoted by . the unit sphere equipped with the euclidean metric satisfies that for every , .the following anti - concentration lemma is theorem 3 in and is used in the proofs of theorems [ thm : limiting_dist ] and [ thm : testing ] .[ lem : anticoncentration ] let be a centered gaussian random vector in with for all .define , , and .+ ( i ) if , then for every , ( ii ) if , then for every , where is a constant depending only on and .the following lemma from is used in the proof of lemma [ lem : connecting ] .[ lem : tail_ineq ] let be independent random variables taking values in a measurable space , and let be a countable class of measurable functions .assume that for , for every and .define then , for every and , there exists a constant such that for all , and the following lemma is lemma 2.2.2 in and is used in the proofs of lemma [ lem : connecting ] and lemma [ lem : coupling ] .[ lem : maximal_ineq ] for any , there exists a constant depending only on such that the following lemma is theorem a in and is used in the proof of lemma [ lem : connecting ] .[ lem : gc1 ] let be a class of mean - zero functions on a probability space , and let be independent random variables in distributed according to .then , there exists an absolute constant such that the complexity parameter of is the functional with respect to the norm .see for its definition and properties .the following two lemmas are theorem 2.7.5 and theorem 2.4.1 in on generic chaining , and are used in the proof of lemma [ lem : connecting ] .[ lem : gc2 ] if is surjective and there exists a constant such that for any .then , we have where is an absolute constant depending only on . [lem : gc3 ] for any metric space and centered gaussian process , there exist universal constants such that the following two lemmas are lemma 1 and lemma 9 in and are used in the proof of lemma [ lem : coupling ] .[ lem : cher1 ] let be independent centered random vectors in with . then there exists a absolute constant such that \\ \leq & c\bigg [ \sqrt{\frac{\log d}{n } } \max_{1\leq j\leq d } \bigg\ { \frac{1}{n}\sum_{i=1}^n\e(x_{ij}^4)\bigg\}^{1/2}\!+\!\frac{\log d}{n } \bigg\ { \e \bigg ( \max_{1\leq i\leq n}\max_{1\leq j\leq d}x_{ij}^4 \bigg ) \bigg\}^{1/2}\bigg].\end{aligned}\ ] ] [ lem : cher2 ] let be independent random vectors in with such that for all and . 
then the following lemma is theorem 2 in and is used in the proofs of theorem [ thm : multiplier_bootstrap ] and lemma [ lem : testinglem3 ] .[ lem : comparison ] let and be centered gaussian random vectors in with covariance matrices and , respectively .suppose that and for all .define then where is an absolute constant depending only on and .in particular , we have and where is an absolute constant depending only on and .the authors sincerely thank the editor , associate editor , and an anonymous referee for their valuable comments and suggestions . a part of this workwas carried out when wen - xin zhou was a research fellow at the university of melbourne , and fang han was visiting department of biostatistics at johns hopkins university .chang , j. , zhou , w. , zhou , w .- x . and wang , l. ( 2015 ) .comparing large covariance matrices under weak conditions on the dependence structure and its application to gene clustering . .to appear .available at arxiv:1505.04493 .
recently , chernozhukov , chetverikov , and kato [ _ ann . statist . _ * 42 * ( 2014 ) 15641597 ] developed a new gaussian comparison inequality for approximating the suprema of empirical processes . this paper exploits this technique to devise sharp inference on spectra of large random matrices . in particular , we show that two long - standing problems in random matrix theory can be solved : ( i ) simple bootstrap inference on sample eigenvalues when true eigenvalues are tied ; ( ii ) conducting two - sample roy s covariance test in high dimensions . to establish the asymptotic results , a generalized -net argument regarding the matrix rescaled spectral norm and several new empirical process bounds are developed and of independent interest . * keywords : * gaussian comparison inequality ; extreme value theory ; spectral analysis ; random matrix theory ; roy s largest root test .
the question of how an astonishingly long dna chain consisting of basepairs or even more folds into a compact state within a small volume of cell nucleus is still intriguing and not completely resolved . indeed , not only is the packing compact which would have been easily achievable if dna was in a so - called equilibrium globule state , but it is also capable to function biologically in a meaningful way .this biological function enforces many very specific and clearly non - equilibrium features including the existence of distinct chromosome territories and topological domains ( tads ) within single chromosome , easy unentanglement of chromosomes and chromosome parts ( needed in preparation to mitosis , and during transcription ) , and ability of different parts of the genome to find each other in space strikingly fast in e.g. so - called promoter - enhancer interactions .the concrete mechanism which stabilizes these features is not yet completely understood .the main candidates for this stabilization is the so - called model of non - equilibrium fractal globule , as well as various models accentuating the formation of saturating bonds between the fragments of chromatin . on the experimental side , the high - resolution data concerning the spatial organization of the genomeis mostly due to the development of the genome - wide chromosome conformation capture ( so called hi - c ) method which allows to obtain the _ colocalization _ matrices of the genome packing , containing information about which particular genome fragments are _ closed to each other _ in space .it is from the statistical analysis of these matrices that the authors of deduced that the fractal globule state is a suitable candidate to describe chromosome packing .originally , the hi - c matrices represent the colocalization data averaged over many cells , however , recently a significant progress was reported in obtaining single - cell hi - c maps .note therefore , that the data available include not the information about full spatial organization of the genome , but just on the parts of the genome which are spatially close to each other .it is , therefore , of great interest to develop methods extracting as much information as possible from such a dataset . in particular , it is a challenging question whether one can recover the information about the exact structure of tads from colocalization data .it seems in principle reasonable to believe that such extraction is possible : indeed , parts of a chromosome belonging to a single tad are spatially compact and should therefore more often find themselves in a close proximity with each other .in this paper we suggest an algorithm , based on the methods of complex network theory which allows to reveal a hierarchical tad structure of a polymer conformation if it does exist , and check the applicability of this algorithm on several model polymer conformations . 
in what followswe discuss this algorithm for a particular case of a single - conformation colocalization matrix whose elements are 0 s and 1 s depending on whether the two corresponding monomers are spatially adjacent or not .the generalization for a more experimentally typical case of hi - c maps _ averaged over _ many conformations is absolutely straightforward .note , however , that the significance of resulting community structure is not ensured : indeed , as we argue elsewhere the experimental hi - c maps are in fact averaged over many substantially different folding conformations , and one expects therefore the community structure of the average to be significantly less rich than the community structure of the individual conformations .we hope , however , that ( _ i _ ) further advances in the single - cell hi - c mapping techniques will allow in - depth analysis of the community structure of individual genome conformations , and ( _ ii _ ) that comparison of the community structures corresponding to individual and averaged hi - c maps may shed light on which characteristics of genome folding are conserved from realization to realization and which ones are variable .it is feasible to construct a mapping between any given configuration of a polymer chain of monomers and a colocalization matrix with matrix elements equal to 1 if -th and -th monomers are close to each other in space ( i.e. , if the spatial distance is less than some cut - off value which here and below is presumed to be of order of the monomer - monomer bond length ) and 0 otherwise .our aim is to extract the information about topological domain structure of the chain conformation from this matrix , and to estimate if this domain structure is stable ( in some sense to be determined later ) , and if it possesses ultrametric properties ( i.e. , consists of sequentially nested domains of smaller and smaller size ) expected from the structure of fractal globule ( see , e.g. , for the discussion of ultrametricity in the context of fractal globules ) . in what follows wedevelop a method allowing us to do that and show the results obtained by this method on several test polymer configurations .application of this method to real experimental data goes beyond the scope of this paper and will be provided elsewhere .the analysis of matrix starts from the notion that it could be reinterpreted as an adjacency matrix of some complex graph ( network ) , which allows us to use the community detection methods developed in the complex network theory .indeed , it seems natural to assume that monomers belonging to the same tad will more often be in spatial proximity to each other in space than monomers belonging to different tads , and therefore a community of comparatively well - connected nodes in the network theory sense can be a good proxy to a real topological domain .it is known , however , that the problem of optimal division of a network into a set of communities ( comparatively well connected clusters ) is ill - posed and does not have a single definite answer ; there exist numerous techniques based on spectral properties of adjacency matrix , non - backtracking matrix , synchronization in networks . 
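a minimal sketch of the mapping just described , building a binary colocalization matrix from monomer coordinates with a spatial cut - off , is given below . the cut - off value , the helper name and the toy random - walk conformation are illustrative assumptions and are not taken from the text above .

```python
import numpy as np

def colocalization_matrix(coords, r_c=1.5):
    """binary colocalization (adjacency) matrix from an (N, 3) array of monomer
    coordinates: A[i, j] = 1 if monomers i and j are closer than the cut-off r_c
    (of the order of the bond length), and 0 otherwise."""
    coords = np.asarray(coords, dtype=float)
    diff = coords[:, None, :] - coords[None, :, :]     # pairwise displacement vectors
    dist = np.sqrt((diff ** 2).sum(axis=-1))           # pairwise distances
    A = (dist < r_c).astype(int)
    np.fill_diagonal(A, 0)                             # no self-contacts
    return A

# toy usage: a short random walk on the cubic lattice, cut-off slightly above one bond
steps = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]])
rng = np.random.default_rng(0)
conf = np.cumsum(steps[rng.integers(0, 6, size=200)], axis=0)
A = colocalization_matrix(conf, r_c=1.5)
print(A.shape, A.sum() // 2, "contacts")
```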
in what follows we employ the modified modularity - optimization method as developed in which we find most appropriate to our needs as it very naturally allows to look into the community structure of networks on different scales .consider a partitioning of an -node network into a set of clusters ( see fig.[figex1 ] below for a toy example of such partitioning ) .any such partitioning can be described by a matrix of size with if -th element of the network belongs to a -th cluster and 0 otherwise .naturally , one assumes for any ( i.e. , each node belongs to one and only one cluster ) .then , according to the modularity $ ] of such a partition is defined as =\frac{1}{2w } \sum_{i , j}{\left(}w_{ij}-\frac{w_i w_j}{2w}{\right)}\sum_{\alpha}{c_{i\alpha}c_{j\alpha } } , \label{e1}\ ] ] where are the elements of , is the total number of neighbors of -th node ( also called ` strength ' of the node ) , is the total number of links ( strength ) of the network ; the sum over in ( [ e1 ] ) equals 1 if -th and -th monomers belong to the same cluster , and 0 otherwise .the original modularity - based community detection algorithm demands to maximize the functional with respect to and , the corresponding maximizing separation is then considered to be optimal . thus defined method of modularity optimization is known to have two important drawbacks .first , it is known to have a so - called resolution limit , so that it is impossible to find any clusters of size less than correctly .second , the total number of possible partitions of a network into clusters is exponentially large in making the search of an optimal partition an np - complete problem ( i.e. , a problem whose shortest possible solution time grows exponentially with , see for the introduction to this concept ) .there exists , however , a possibility to circumvent the first problem .following introduce the so - called resistance parameter , i.e. , introduce a modified adjacency matrix whose matrix elements equal to for , and . in terms of the underlying networkthis corresponds to adding self - loops with weight ( strength ) , which is , generally speaking , non - integer and can be even negative .now , proceed with the optimization of the modularity functional for this new matrix ( note that the definition of modularity never relies on the elements of adjacency matrix being boolean variables , indeed , it was originally introduced for weighted networks ) .the larger the resistance parameter the more are nodes coupled to themselves as compared to other surrounding nodes . as a result of that , as shown in smaller and smaller clusters get determined .indeed , if is larger than some network - dependent critical value the optimal partition is one separating the network into clusters consisting of 1 monomer each . on the other hand side , if is less than some ( once again , network dependent ) which is usually negative , the optimal partition consist of a single cluster covering the whole network . as for the second drawback of usual modularity algorithm , i.e. the exponential increase of the possible number of separations with the size of the system, it also can be circumvented in this particular case . 
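a small sketch of the modularity functional written above , together with the resistance modification ( self - loops of weight , here called r , added to the diagonal ) , may be useful ; the function names are ours , and the optimization over partitions is deliberately left out at this point .

```python
import numpy as np

def modularity(W, labels):
    """modularity Q of a partition of a weighted network with adjacency matrix W;
    labels[i] is the cluster index of node i (each node in exactly one cluster)."""
    W = np.asarray(W, dtype=float)
    labels = np.asarray(labels)
    two_w = W.sum()                    # total strength 2w, summed over both index orders
    s = W.sum(axis=1)                  # node strengths w_i
    same = (labels[:, None] == labels[None, :]).astype(float)
    return ((W - np.outer(s, s) / two_w) * same).sum() / two_w

def add_resistance(A, r):
    """modified adjacency matrix with self-loops of weight r on the diagonal."""
    A_r = np.array(A, dtype=float)
    A_r[np.diag_indices_from(A_r)] += r
    return A_r
```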
indeed ,remind that we originally defined the adjacency matrix as a colocalization matrix of some _ polymer _ configuration .that is to say , the monomers of the network are naturally numbered along the chain , and monomers close along the chain are automatically close in space due to the connectivity of a polymer .this allows us to postulate by definition that we only consider clustering partitions which separate a chain in fragments which are adjacent along the chain , i.e. if -th and -th monomer belong to the same cluster , than any -th monomer with belong to the same cluster as well .note , that such definition of clusters is in accordance with how topological domains are usually understood in polymer and biophysical literature : parts of the chain that are close _ both _ along the chain and in real space .simultaneously , it is easy to see that such a restriction on possible partitioning reduces their overall number from exponential in to quadratic in , allowing us to produce a rather fast deterministic partitioning algorithm , which we realized in fortran 95 . for the purposes of our work ,it is instructive to consider how the resulting partitioning ( community structure ) _ evolves _ with the change of .indeed , if one considers a completely random erds renyi network , one would expect that at the network is separated into two clusters of roughly same size , then as the value of resistance reaches some it separates into three clusters of , once again , roughly same size , than at into four clusters , etc .the important thing is that in the absence of any underlining structure of the network one expects the cluster boundaries to be essentially uncorrelated : every time the number of clusters increases by one and _ all _ cluster boundaries rearrange . on the other hand ,if the networks has an underlying structure of hierarchically organized tads , one expects the increase of to cause not a complete rearrangement of cluster structure , but a decomposition of already existing clusters into smaller parts with boundaries of larger clusters remaining essentially stable . in order to quantify this qualitative notionwe introduce here the concept of a _ spectrum of borders_. let ( ) be a heaviside step function indicating whether the bond between -th and -th monomer is a border of a cluster at given ( that is to say if the said bond is a boundary , and otherwise ). then the total fraction of the range when a given bond is a border of a cluster is given by the higher is , the more stable is the cluster border at bond .the whole set of for all is what we call a spectrum of borders ( sob ) .inspecting such a spectrum one should be able to estimate how stable the cluster structure of the network is , and what exactly is the natural hierarchical domain structure of a network , if any .if there is no well - defined clusters in a network , the sob is a more or less uniform distribution of lines , while if there is a small number of significantly exceeding the rest , it signifies a network with stable and well - defined cluster structure . 
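the following python sketch illustrates one way to realize the two ingredients just described : a dynamic - programming search over partitions into chain - contiguous segments ( possible because the modularity is a sum of independent per - cluster terms ) , and the spectrum of borders accumulated over a grid of resistance values . the authors mention a fortran 95 implementation which is not reproduced here , so the concrete realization , the function names and the choice of the gamma grid are our assumptions .

```python
import numpy as np

def best_contiguous_partition(W):
    """maximum-modularity partition of a chain of N monomers into segments contiguous
    along the chain; returns the sorted list of interior cut positions b (a border
    between monomers b-1 and b)."""
    W = np.asarray(W, dtype=float)
    N = len(W)
    two_w = W.sum()
    s = W.sum(axis=1)
    cum_s = np.concatenate([[0.0], np.cumsum(s)])

    def seg_score(i, j):
        # contribution of segment [i, j) to 2w * Q; the naive slice sum is slow,
        # two-dimensional prefix sums over W would restore the quadratic scaling
        w_in = W[i:j, i:j].sum()
        s_in = cum_s[j] - cum_s[i]
        return w_in - s_in * s_in / two_w

    best = np.full(N + 1, -np.inf)
    best[0] = 0.0
    prev = np.zeros(N + 1, dtype=int)
    for j in range(1, N + 1):
        for i in range(j):
            val = best[i] + seg_score(i, j)
            if val > best[j]:
                best[j], prev[j] = val, i
    cuts, j = [], N
    while j > 0:                       # backtrack the optimal segmentation
        cuts.append(prev[j])
        j = prev[j]
    return sorted(c for c in cuts if c > 0)

def spectrum_of_borders(A, gammas):
    """p_i = fraction of the resistance range over which bond (i, i+1) is a cluster border."""
    A = np.asarray(A, dtype=float)
    N = len(A)
    counts = np.zeros(N - 1)
    for g in gammas:
        A_r = A + g * np.eye(N)        # self-loops of weight gamma
        for b in best_contiguous_partition(A_r):
            counts[b - 1] += 1.0
    return counts / len(gammas)
```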
to clarify this definition ,let us start with a toy example .consider a model network with 8 consequentially numbered nodes ( see fig.[figex1]a ) which we study in the interval between some fixed and .assume first that along the whole interval it is partitioned into exactly two clusters with the border at node 4 ( fig.[figex1]b ) .then this means that in the whole range of there is only one border between two clusters and this border does not shift to the left or right , thus the clusters always contain exactly 4 nodes each ( i.e. , if partitioning shown in fig.[figex1]b holds for any , than the sob is the one shown in fig.[figex2]a ) .if there appears a region within which these clusters vanish and optimal partitioning consists of just one cluster ( i.e. , fig.[figex1]b for some and fig.[figex1]a for some others ) , than the amplitude becomes less than 1 ( fig.[figex2]b ) .suppose now , that above certain value the first of the two clusters splits into 2 smaller ones of size 2 each ( see fig.[figex1]c ) .this will result in the emergence of new line of smaller amplitude in the sob at node 2 ( fig.[figex2]c ) : where a different possibility is that at the border at node 4 disappears and new set of three clusters consisting of , say , nodes , , and arise ( see fig.[figex1]d ) .then the set of amplitudes in the sob will read the corresponding spectra of borders are shown in fig.[figex2]c and fig.[figex2]d , respectively .thus , in accordance with what have been said above , if the borders of clusters dangle more or less randomly along the chain , the resulting sob consists of a number of low peaks of roughly equal height , and such a behavior is characteristic for networks with labile , fuzzy domain structure . on the contrary ,the changes when the existing large cluster splits into smaller ones without changing its outer borders give rise to a series of peaks which significantly differ in height .such behavior seems to be characteristic of networks with well - defined hierarchical domain structure .to check the aptitude of the described approach we applied it to series of adjacency matrices of several model polymer conformations .in particular , we studied , ( i ) a completely deterministic peano curve which is the simplest possible proxy of a fractal globule conformation with most prominent hierarchical self - similar cluster organization possible , ( ii ) an equilibrium conformation of a gaussian polymer globule , ( iii ) a random fractal globule conformation obtained by the conformation - dependent polymerization .the first two conformations are more or less standard , while the third , suggested originally in ( the algorithm is briefly outlined below , and we address the reader to the supplementary materials of that paper for full details about this algorithm ) is , in our opinion , one of the best existing candidates to represent the generic metastable fractal globule state .it shows significant stability in dynamic computer simulations and its statistical characteristics are very similar to what is obtained in other fractal - globule generating algorithms .therefore , the main aim of our test is to show that random fractal globule conformations could be robustly and reproducibly distinguished from equilibrium gaussian ones based on their sobs .the length of polymer under consideration was for peano curve and for random fractal and equilibrium conformations .all chain configurations were generated on cubic lattice .for generation of the fractal globule the following rules were used 
.the monomers are added sequentially to the chain , with a new monomer added to one of the 6 nodes of a lattice adjacent to the endchain monomer with probabilities to go in each direction equal to : where is a normalizing constant , and is the number of occupied sites within a unit sphere centered at site .the significant difference with is that the fractal globules were obtained in a free volume without periodic boundary conditions , which allowed to obtain conformations with very developed surface .equilibrium globule was obtained as random walk within a sphere of radius and reflecting surface . in case of trapped configurationsthe chain end can go back through randomly chosen , among already visited , site .the elements of adjacency matrix equal to 1 ( monomers are considered to be neighbors ) if they are separated by distance lattice units . in our analysiswe somewhat arbitrarily have chosen and . while roughly corresponds to the point where the first partitioning of a network into two clusters takes place for the networks under study ,the chosen value of is significantly smaller than the natural limit defined above .that is to say , even for largest under consideration , our networks are quite far from being separated into single node clusters .this choice was dictated mostly by the saving of cpu modeling time .the calculations presented below thus grasp the essential behavior of the sob for large clusters and the most stable domain walls , which are , in our opinion , of the most interest .indeed , small - scale clustering structure of our test configurations can be significantly plagued by the underlying discrete lattice .however , we perform a check of the robustness of suggested algorithm with respect to the choice of ( see below ) . the application of the proposed algorithm to peano curve reveals hierarchically nested clustering , as expected . at values there is the only community with 4096 nodes . at two clusters , consisting of 2048 nodes each ,with further increase of each of these clusters splits exactly into 2 half - size clusters , but the border at node remains at place , then each of the four clusters splits into halves , etc , all the way until therefore , is the highest peak in sob for peano curve , followed by and , etc .we get finally a sob consisting of an hierarchical set of equidistant peaks as shown in figure 2 .note that smaller clusters positioned deep inside the globule and on its outer surface behave somewhat differently , which explains why starting from the third generation the values of peaks of the same generation are not exactly equal .the fig.[fig2 ] shows six levels of hierarchical organization of peano curve , each larger cluster ( consisting of the nodes between two largest peaks ) having a fully deterministic internal structure .for example , the second level cluster , consisting of the nodes 2048 to 3072 , has two domains ( divided by line at node 2560 ) of 512 nodes each .the fourth level of hierarchy is organized by eight peaks on nodes ( ) with those of higher level , which make it a total of 16 clusters .such a picture of discrete lines with gradually lowering heights at each level is a qualitative representation of regularly fractal , self - similar polymer globule .consider now a random fractal globule .clearly , it does not have such a regular fully symmetric domain structure ( see fig.[fig3 ] ) , whatever one may notice that the peaks in the structure are once again very widely distributed : there are some relatively very high peaks ( e.g. 
, ones at nodes 4441 , 9849 , 2844 reach well above 0.8 ) and the general structure of the sob is very widely disperse .contrary to that , the sob for an equilibrium globule ( fig.[fig4 ] ) is essentially a dense forest of peaks , peaks of any height seem to appear with equal probability without any gaps between higher and lower peaks .such a picture tells us that almost each of the node has been the border of any cluster , or in other words , as it was described above , on magnifying the network the cluster structure rearranges completely with changing , confirming thus that an equilibrium globule does not have a well - defined cluster structure . in order to make the difference between figures fig.[fig3 ] and fig.[fig4 ] more clear ,it is instructive to reorder peaks of the sob in descending order , constructing thus an _ ordered sob _, where the highest peak is renamed , the second highest , and so on .fig.[fig5 ] shows the envelope lines of ordered sobs for 10 different random fractal realizations ( all of them cluster in the right bottom of the picture ) and 10 different equilibrium globule conformations ( which cluster in the top left ) .one sees immediately that spectra of fractal and equilibrium globules are clearly distinguishable : spectra of equilibrium conformations show approximately linear descent , confirming that in such conformation borders of all intensities occur with roughly the same probability , while fractal globule has a clear cluster structure with a small number of very strong borders dominating over the rest .moreover , in order to check if this distinction between two classes of conformations is robust with respect to change in , we have plotted a series of ordered sobs for a single random fractal and a single equilibrium conformation but different values of varying from 80 to 140 , see fig.[fig1 ] .it is clear from the figure that the changes of the curves , although clearly visible , do not change the general result : once again the curves for fractal and equilibrium conformations are clearly distinguishable .fig.[fig6 ] shows the average shape of ordered sobs , obtained from averaging the curves shown in fig.[fig5 ] over 10 different conformations of the same class .clearly , the spectra have very different shape , the curve corresponding to the equilibrium globule can be fitted as a straight line , whereas the case of fractal globules highly non - linear with a steep descent at the beginning .we are now working on a possibility to construct theoretical explanations for the shapes of the obtained curves , corresponding results will be presented elsewhere .we expect that the concept of the ordered spectrum of borders introduced above will provide an additional tool to differentiate between different possible conformation types of the polymer molecules , while the exact positions of the most strong borders will provide information about the position of tad borders in real chromosome conformations . 
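the reordering step itself is trivial ; a short sketch ( the input p is a spectrum of borders computed as above , and the names are ours ) :

```python
import numpy as np

def ordered_sob(p):
    """ordered spectrum of borders: peak heights p_i sorted in descending order, so that
    envelope curves of different conformations can be compared directly."""
    return np.sort(np.asarray(p, dtype=float))[::-1]

# a steep initial descent of ordered_sob(p) signals a few dominant, stable borders
# (fractal-globule-like), while a roughly linear decay signals a labile structure
# with borders of all heights (equilibrium-globule-like).
```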
in conclusion, let us discuss once again the possible applicability of the algorithm suggested in this paper to the experimental hi - c maps , which are usually averaged over millions of cell with , generally speaking , different folding structures .such maps are in fact symmetric matrices with non - negative non - integer elements , and they can be naturally interpreted as adjacency matrices of _ weighted _ complex networks .the algorithm we suggest does not at any point rely on the fact that the network under consideration is unweighted , and thus can be applied without change to this averaged hi - c maps .there is , however , a more subtle question .it is not completely clear how much of the original hierarchy of the chain packing is conserved from realization to realization .we expect , and plan to check elsewhere , that those parts of the tad structure which are repeted in all ( or most ) cells , should be accessible from analysing community structure of averaged hi - c maps . on the contrary, those parts which are different from cell to cell should be observable only from the single - cell hi - c maps ( provided the corresponding experimental techniques will progress ) but we expect them to be smeared over in the averaged maps .we expect that the comarison of the community structures of the averaged and single - cell hi - c maps can elucidate ( especially coupled with the progress of the single - cell hi - c mapping techniques ) the question of how stable individual tads are from cell to cell ( see , e.g. for further discussion of this subject ) .authors are grateful to s. k. nechaev , l. mirny , a. v. chertovich , and p. kos for fruitful discussions and to a. cherstvy for his useful comments on the text of the manuscript .the work is partially supported by the rfbr grant 14 - 03 - 00825 and by the higher school of economy program for basic research .all authors declare no conflicts of interest in this paper . p .- g . de gennes , scaling concepts in polymer physics , cornell university press , ny , 1979 .grosberg , a.r .khokhlov , statistical physics of macromolecules , aip press , ny , 1994 .m. rubinstein , r.h .colby , polymer physicsm oxford university press , oxford , 2003 .grosberg , s.k .nechaev and e.i .shakhnovich , j. de physique * 49 * , 2095 ( 1988 ) .grosberg , y. rabin , s. havlin , and a. neer , europhys . lett . * 23 * 373 ( 1993 ) .a. rosa , r. everaers , plos computational biology , * 4 * : e1000153 ( 2008 ) .e. lieberman - aiden _ et al _ , science * 326 * , 289 ( 2009 ) .mirny , cromosome res .* 19 * , 37 ( 2011 ) .halverson , j. smrek , k. kremer , and a.yu .grosberg , rep .phys . , * 77 * , 022601 ( 2014 ) .a.yu . grosberg , soft matter , * 10 * , 560 ( 2014 ) .a. rosa , r. everaers , phys .lett . , * 112 * , 118302 ( 2014 ) .nazarov , m.v .tamm , v.a .avetisov , s.k .nechaev , soft matter , * 11 * , 1019 ( 2015 ) .tamm , l.i .nazarov , a.a .gavrilov , a.v .chertovich , phys .lett . , * 114 * , 178102 ( 2015 ) .g. bunin , m. kardar , phys .lett . , * 115 * , 088303 ( 2015 ) .sachs , g. van der engh , b. trask , h. yokota , and j.e .hearst , proc ., * 92 * , 2710 ( 1995 ) . c. mnkel , and j. langowski , phys .e , * 57 * , 5888 ( 1998 ) .j. ostashevsky , mol .biol . of the cell ,* 9 * 3031 ( 1998 ) . j.mateos - langerak _ et al _ , proc ., * 106 * , 3812 ( 2009 )iyer , and g. arya , phys .e , * 86 * , 011911 ( 2012 ) .m. barbieri , m. chotalia , j. fraser , l .- m .lavitas , j. dostie , a. pombo , and m. nicodemi , proc ., * 109 * , 16173 ( 2012 ) . 
c.c .fritsch and j. langowski , j. chem .phys . , * 133 * , 025101 ( 2010 ) .fritsch and j. langowski , chromosome res . ,* 19 * , 63 ( 2011 ) .j. dekker , k. rippe , m. dekker , and n. kleckner , science * 295 * 1306 ( 2002 ) .n. naumova , m. imakaev , g. fudenberg , y. zhan , b.r .lajoie , l.a .mirny , and j. dekker , science , * 342 * 948 ( 2013 ) .t. nagano , y. lubling , t.j .stevens et al . ,nature , * 502 * , 7469 ( 2013 ) .newman , phys .e , * 70 * , 056131 ( 2004 ) .a. lanchinetti , s. fortunato , phys .e , * 84 * , 066122 ( 2011 ) . c. granell , s. gomez , a. arenas , int .. chaos , * 22 * , 1250171 ( 2012 ) .a. arenas , a. fernandez , s. gomez , new j. physics , * 10 * , 053039 ( 2008 ) .s. fortunato , phys ., * 486 * , 75 ( 2010 ) .newman , proc ., * 103 * 8577 ( 2006 ). f. krzakala , c. moore , e. mossel et al .nat . acad .sci . , * 110 * , 20935 ( 2013 ) . a. arenas , a. diaz - guilera , k. kurths et al ., * 469 * , 93 ( 2008 ) .sethna , statistical mechanics : entropy , order parameters and complexity , oxford university press , oxford , 2006 .m. mezard , a. montanari , information , physics , and computation , oxford university press , oxford , 2009 .
one of the most important tasks in understanding the complex spatial organization of the genome consists in extracting information about this spatial organization , and about the function and structure of chromatin topological domains , from existing experimental data , in particular from genome colocalization ( hi - c ) matrices . here we present an algorithm allowing one to reveal the underlying hierarchical domain structure of a polymer conformation by analyzing the modularity of colocalization matrices . we also test this algorithm on several model polymer structures : equilibrium globules , random fractal globules and regular fractal ( peano ) conformations . we define what we call a spectrum of cluster borders , and show that these spectra behave strikingly differently for equilibrium and fractal conformations , allowing us to suggest an additional criterion to identify fractal polymer conformations .
complex systems consisting of a large number of highly interconnected dynamic units , whose structure is usually irregular , have been the subject of intense research efforts in the past few years . the complexity of such systems is reflected not only in their structure but also in their dynamics . the usual representation of a wide range of systems of this kind in nature and society uses networks as the concept appropriate for the study of both the topology and the dynamics of complex systems . the usual approach to studying networks is via graph theory , which was well developed for regular and random graphs , both of which have been found to be exceptional cases of limited use in real - world realizations and applications . recently , along with the discovery of new types of network structures such as small - world and scale - free networks , the tools of statistical mechanics have been successfully implemented , offering explanations and insights into the newly recognized properties of these systems . in spite of many advances based on statistical - mechanics approaches to various issues involving networks , from biology to social sciences , it is our opinion that there is a need for a more versatile approach which would rely on new topological methods , either separately or in combination with the techniques of statistical mechanics . in particular , the program is to encode the network into a simplicial complex , which may be considered as a combinatorial version of a topological space whose properties may then be studied from combinatorial , topological or algebraic aspects . the motivation stems from the q - analysis introduced by r. atkin , who advocated its use in various areas of physics and social - systems analysis in the 70s . the methods of q - analysis were extended further into a combinatorial homotopy theory , called a - theory . consequently , the invariants of simplicial complexes may be defined from three different points of view ( combinatorial , topological or algebraic ) , and each of them provides completely different measures of the complex and , by extension , of the graph ( network ) from which the complex was constructed . in , for several standard types of networks we constructed vector - valued quantities representing topological and algebraic invariants and showed , among other issues , that their statistical properties perfectly match the corresponding degree distributions . such an approach provided a link between topological properties of simplicial complexes and the statistical mechanics of the networks from which the simplicial complexes were constructed . in the present exposition we focus on simplicial complexes ( obtained from random networks , scale - free networks and networks with exponential connectivity distributions ) and their homological properties . in the most general terms , algebraic topology offers two methods for gauging the global properties of a particular topological space by associating with it a collection of algebraic objects . the first set of invariants are the _ homotopy groups _ , the first one ( i.e.
for ) , known as the fundamental homotopy group being well known .homotopy groups contain information on the number and kind of ways one can map a -dimensional sphere into , with two spheres in considered equivalent if they are homotopic ( belonging to a same path equivalence class ) relative to some fixed basepoint .computational demands of such an approach are in general extremely high and for that reason the second set of invariants , the _ homology groups , _ is of more practical interest .homology groups of dimension , , provide information about properties of chains formed from simple oriented units known as simplices .the elements of homology groups are cycles ( chains with vanishing boundary ) and two -cycles are considered homologous if their difference is the boundary of -chain . in more general terms determines the number of -dimensional subspaces of which have no boundary in and themselves are not boundary of any -dimensional subspace .in contrast to homotopy groups , homology groups can be computed using the methods of linear algebra and the ease of these methods are counterbalanced by obtained topological resolution .it should be remarked that these computations can be quite time consuming in spite of recent advances in computational techniques mishaikow .although homology groups are computable and provide insight into topological spaces and maps between them , our interest is in discerning which topological features are essential and which can be safely ignored , similar to signal processing procedure when signal is removed from noise .one of the important informations about the topological space is the number and type of holes it contains and going beyond standard homological approaches one could be interested in finding out which holes are essential and which are unimportant .this is the subject of persistence and persistent homology , as introduced by edelsbrunner , letscher and zomorodian , whose aim is to extract long - lived topological features ( topological signal ) which persist over a certain parameter range and which are contrasted with short - lived features ( topological noise ) . with networks encoded into simplicial complexes we are interested in topological features which persist over a sequence of simplicial complexes of different sizes .this sequence reflects the formation of the network or the change of the existing network when new node or nodes are introduced or removed .here we focus on recognizing persistent and non persistent features of random , modular and non modular scale - free networks and networks with exponential connectivity distribution . in the following expositionour main topic will be homology and although it is self contained an elementary knowledge of homology would be helpfull , as may be found for example in chapter 2 of . our main motivation is to show that each of these different types of networks have different persistent homological properties although here we do not attempt to present these features as generic .moreover , long - lived topological attributes reveal new and important information related to connectivity of the network which could not be inferred using any other conventional methods .the outline of the exposition is as follows : in section 2 we review concepts from algebra and simplicial homology while in section 3 we present the methods of constructing simplicial complexes from graphs . 
in section 4we introduce the concept of persistent homology and discuss computational aspects .section 5 contains description of graphical representation of persistent homology groups . in section 6we present the results of persistent homology calculations for random networks while in section 7 and 8 persistent homologies are determined for networks with exponential degree distribution and three types of scale - free networks respectively .concluding remarks are given in section 9 .any subset of determines an - denoted by the elements of are the vertices of the simplex denoted by and is the dimension of the simplex .any set of simplices with vertices in is called a simplicial family and its dimension is the largest dimension of its simplices .a -simplex is a -face of an -simplex , denoted by , if every vertex of is also a vertex of a simplicial complex represents a collection of simplices .more formally , a simplicial complex on a finite set of vertices is a nonempty subset of the power set of , so that the simplicial complex is closed under the formation of subsets .hence , if and . then . two simplices and are if there is a sequence of simplices such that any two consecutive ones share a -face,.implying that they have at least vertices in common . such a chain is called a -chain .the complex is -connected if any two simplices in of dimensionality greater or equal to are -connected .the dimension of a simplex is equal to the number of vertices defining it minus one .the dimension of the simplicial complex is the maximum of the dimensions of the simplices comprising . in fig .1 we show an example of a simplicial complex and its matrix representation .in this example } , and the simplicial complex consists of the subsets and .its dimension is , as there is a -dimensional simplex , in addition to two -dimensional ones , two -dimensional and one -dimensional simplex .a convenient way to represent a simplicial complex is via a so called incidence matrix , whose columns are labeled by its vertices and whose rows are labeled by its simplices , as shown also in fig .1 . the multifaceted property ( algebraic , topological and combinatorial ) of simplicial complexes makes them particularly convenient for modelling complex structures and connectedness between different substructures .chains and cycles are simplicial analogs of paths and loops in the continuous domain .the set of all -chains together with the operation of addition forms a group .a collection of -dimensional faces of a -simplex itself a -chain , is the boundary of the boundary of -chain is the sum of the boundaries of the simplices in the chain .the boundary operator is a homomorphism and s for connect the chain groups into a chain complex, with for all .the kernel of is the set of -chains with empty boundary while a -cycle , denoted by , is a -chain in the kernel of the image of is the set of -chains which are boundaries of -chains with a -boundary , denoted by , being a -chain in the image of collection of s and s together with addition form subgroups of while the property shows that i.e. these groups are nested as illustrated in figure 2 . 
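the linear - algebra route mentioned earlier can be made concrete with a small sketch : boundary matrices are assembled in the simplex bases , and the ranks of the cycle and boundary groups follow from their matrix ranks . the computation below works over the rationals via floating - point rank , which is adequate for small complexes but is an assumption rather than the exact field arithmetic one would use in production ; the data format is also our choice .

```python
import numpy as np

def boundary_matrix(simplices_k, simplices_km1):
    """matrix of the boundary operator from k-chains to (k-1)-chains; simplices are
    tuples of vertices in increasing order (vertices are 1-tuples)."""
    index = {s: i for i, s in enumerate(simplices_km1)}
    D = np.zeros((len(simplices_km1), len(simplices_k)))
    for j, s in enumerate(simplices_k):
        for i in range(len(s)):                      # faces obtained by dropping one vertex
            face = s[:i] + s[i + 1:]
            D[index[face], j] = (-1) ** i            # alternating signs give d(d(c)) = 0
    return D

def betti_numbers(complex_by_dim):
    """betti numbers over the rationals: beta_k = dim C_k - rank d_k - rank d_{k+1};
    complex_by_dim[k] is the list of k-simplices."""
    max_dim = max(complex_by_dim)
    ranks = {}
    for k in range(1, max_dim + 1):
        if complex_by_dim.get(k):
            ranks[k] = np.linalg.matrix_rank(
                boundary_matrix(complex_by_dim[k], complex_by_dim[k - 1]))
    return [len(complex_by_dim.get(k, [])) - ranks.get(k, 0) - ranks.get(k + 1, 0)
            for k in range(max_dim + 1)]

# a hollow triangle: one connected component and one 1-dimensional cycle
print(betti_numbers({0: [(0,), (1,), (2,)], 1: [(0, 1), (0, 2), (1, 2)]}))   # [1, 1]
```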
the -th homology group is if , then the difference between and is the boundary and and are homologous .the -th betti number of a simplicial complex is the rank of the -th homology group , or from expression ( [ h ] ) , to an alexander duality property , there is an intuitive depiction of the first three betti numbers nicely explained in .since a non - bounding -cycle represents the set of components of complex , there is one basis element per component so that consequently represents the number of components of .hence , for connected complex so that the notion of connectivity is reflected in .a non - bounding -cycle represents a collection of non - contractible closed curves in , or based on duality property , a set of tunnels formed by .each tunnel can be represented as a sum of tunnels from the basis so that represents the dimension of the basis for the tunnels .these tunnels may be perceived as forming graph with cycles . a -cycle which itself is not a boundary represents the set of non - contractable closed surfaces in , or based on duality principle , a set of voids which exist in the complement of the simplicial complex , i.e. the dimension of the basis for voids , equal to the number of voidsis represented by complexes may be constructed from undirected or directed graphs ( digraphs ) in several different ways .here we only consider two of them : the neighborhood complex and the clique complex .the neighborhood complex _ _ n__ is constructed from the graph , with vertices in such a way that for each vertex of there is a simplex containing the vertex , along with all vertices corresponding to directed edges the neighborhood complex is obtained by including all faces of those simplices and in terms of matrix representation , the incidence matrix is obtained from the adjacency matrix of by increasing all diagonal entries by .an example of the construction of a neighborhood complex is represented in fig . .the clique complex has the complete subgraphs as simplices and the vertices of as its vertices so that it is essentially the complete subgraph complex .the maximal simplices are given by the collection of vertices that make up the cliques of .in literature , a clique complex is also referred to as flag complex .an example of a clique complex is presented in fig .4 . these two methods are not the only ones that may be used for constructing simplicial complexes from graphs .actually , any property of the graph that is preserved under deletion of vertices or edges may be used for construction purposes .a detailed account of the methods for obtaining simplicial complexes from graphs , among many other issues related to the relationship between graphs and simplicial complexes , may be found in johnss .the basic aim of persistent homology is to measure life - time of certain topological properties of a simplicial complex when simplices are added to the complex or removed from it .usually the evolution of the complex considers its creation starting from the empty set , hence the assumption is that simplices are added to the complex ( corresponding to the growing network ) .the sequence of subcomplexes constructed in the process is known as filtration . 
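the two constructions just described are easy to sketch with networkx , used here only as a convenient source of maximal cliques and neighbourhoods ; the truncation dimension max_dim and the output format are our own choices .

```python
import networkx as nx
from itertools import combinations

def clique_complex(G, max_dim=3):
    """clique (flag) complex of a graph G: the maximal simplices are the cliques of G,
    and all faces are added so the result is closed under taking subsets; returns a
    dict mapping dimension k to the sorted list of k-simplices."""
    simplices = {k: set() for k in range(max_dim + 1)}
    for clique in nx.find_cliques(G):                  # maximal cliques of G
        clique = sorted(clique)
        for k in range(min(len(clique), max_dim + 1)):
            simplices[k].update(combinations(clique, k + 1))
    return {k: sorted(v) for k, v in simplices.items()}

def neighborhood_complex(G, max_dim=3):
    """neighborhood complex: for each vertex v there is a simplex consisting of v and
    all of its neighbours (adjacency matrix plus identity), together with all faces."""
    simplices = {k: set() for k in range(max_dim + 1)}
    for v in G:
        star = sorted(set(G[v]) | {v})
        for k in range(min(len(star), max_dim + 1)):
            simplices[k].update(combinations(star, k + 1))
    return {k: sorted(v) for k, v in simplices.items()}
```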
in more formal termsthe filtration of the simplicial complex is a sequence of complexes , such that: simplices in are indexed by their rank in a filtration sequence and each prefix of the sequence is a subcomplex .two filtration constructions are usually considered when the history of the complex is studied .the first one is formed when at each stage of the filtration only one simplex is added ( i.e. consists of one simplex for each ) . in the second casea simplex is added to the sequence , say to subcomplex , when all its faces are already parts of some hence , the second case does not require only one simplex to be added at each stage of filtration .these two filtrations contain complete orderings of its simplices and figure 5 illustrates the two progressive sequences .naturally , other filtrations may also be applied in practice including `` irregular '' ones when simplices are removed or disappear in the sequence . for these filtrations the main aspect of change is not only growth but decrease as well . following the expositions in the pioneering paper on persistent homology and in reference we give here some basic notions and concepts .persistence is defined in conjunction with cycle and boundary groups of complexes in filtration i.e. with respect to homology groups and associated betti numbers . since homology captures equivalent classes of cycles by factoring out the boundary cycles , the focus is on the count of non - bounding cycles whose life - span lasts beyond a chosen threshold ( say represented by number of next complexes in the filtration sequence ) and which determine persistent or long lasting topological properties of the complex .these cycles persist through phases of the sequence , hence they are important . in a complementary mannerour interest also lies in cycles with short life - spans which convert to boundaries during filtration .algebraically , it is relatively simple to perform the count of persistent non - bounding cycles .let and represent the -th cycle group and the -th boundary group , respectively , of the -th complex in filtration sequence .in order to obtain the long - lasting non - bounding cycles , the -th cycle group is factored by the -th boundary group of the complex , complexes later in the filtration sequence .formally , the -persistent -th homology group of is is a group itself being an intersection of two subgroups of the -persistent -th betti number , of the -th complex in filtration is the rank of , counts homological classes in the complex which were created during filtration in the complex or earlier .there is a betti number for each dimension and for every pair of indices to get a more intuitive illustration of persistence concept let us consider a non - bounding -cycle created at time ( step ) as a consequence of the appearance of the simplex in the complex so that the homology class of is an element of , i.e. \in h_{k}^{i}. ] into a boundary , so that this causes the decrease of the rank of the homology group since the class ] .the persistence of and its homology class ] . 
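the book - keeping behind these definitions can be made concrete with the standard column - reduction algorithm over gf(2) ; this is a textbook - style sketch , not the software actually used for the computations reported below , and the input format and function name are our assumptions .

```python
def persistence_barcode(filtration):
    """standard persistence algorithm over gf(2); `filtration` is a list of
    (filtration_value, simplex) pairs, a simplex being a tuple of vertices, with every
    face entering no later than its cofaces. returns (dimension, birth, death) triples,
    death = None for classes that persist to the end of the filtration."""
    filtration = sorted(filtration, key=lambda t: (t[0], len(t[1])))
    index = {tuple(sorted(s)): i for i, (_, s) in enumerate(filtration)}
    columns, low_to_col, pairs, creators = [], {}, [], set()
    for j, (f_j, simplex) in enumerate(filtration):
        s = tuple(sorted(simplex))
        col = {index[s[:i] + s[i + 1:]] for i in range(len(s))} if len(s) > 1 else set()
        while col and max(col) in low_to_col:          # add earlier columns with the same pivot
            col ^= columns[low_to_col[max(col)]]
        columns.append(col)
        if not col:
            creators.add(j)                            # j creates a new cycle
        else:
            i = max(col)                               # the cycle created at i dies at j
            low_to_col[i] = j
            creators.discard(i)
            pairs.append((len(filtration[i][1]) - 1, filtration[i][0], f_j))
    pairs += [(len(filtration[j][1]) - 1, filtration[j][0], None) for j in sorted(creators)]
    return pairs

# tiny usage: a hollow triangle appears at step 1 and is filled at step 2, so a
# 1-dimensional class is born at 1 and dies at 2
toy = [(0, (0,)), (0, (1,)), (0, (2,)),
       (1, (0, 1)), (1, (0, 2)), (1, (1, 2)),
       (2, (0, 1, 2))]
print(persistence_barcode(toy))
```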
here and may denote filtration times and or filtration complexes and .clearly , barcodes do not provide information on delicate structure of the homology however the information about persistent parametrized rank ( since a barcode reflects the persistent properties of betti numbers ) enables clear distinction between topological noise and topological `` signal '' .for the purpose of illustrating persistent homology we first consider random ( erds - rnyi ) networks for which the number of nodes , is fixed and with each link inserted with the same probability as is well known , a random network has a characteristic scale in its node connectivity reflected by the peak of the distribution which corresponds to the number of nodes with the average number of links .we have constructed the clique complex of a random network so that the obtained complex is a random simplicial complex the filtration of the complex is the -th complex in the filtration is given by is the -th skeleton of the clique complex ( the set of simplices of dimension less or equal to ) .the random network considered consists of nodes with the probability of two nodes having a link equal to . the corresponding barcode is presented in fig . 7 . due to sparsity of the networkthe filtration steps are limited to complexes of dimension .it is evident that persistent has betti number corresponding to one line that persists through all stages of filtration .since the zero dimensional homology measures the connectivity of the underlying graph the graph is always connected and this property remains for arbitrary choice of or , as one would expect .in addition while the maximal rank of persistent homology of this random network is .however , due to the short lifetime of through only two filtrations , it may be inferred that the content of topological noise dominates the network for this choice of parameters and .the same results , from the aspect of persistence , are obtained for the neighborhood complex increasing the probability or the number of nodes leads to occurrence of higher dimensional homology groups which though appear only as noise as illustrated in fig . 8 for the case of and .there is an interval outside which homology vanishes , and inside which only lowest ranked homologies persist , i.e. and .this conlcusion is in agreement with recent theoretical studies on clique and neighborhood complexes of random graphs , .in order to analyze the emergence of self - similar properties in a complex network , an e - mail network was studied in .each e - mail address in this network represents a node and links between nodes indicate e - mail communication between them . after removal of bulk e - mails ,the connectivity distribution of this network is exponential , for and with the number of nodes ( e - mail users ) is 1700 .calculations were performed using both the clique and the neighborhood complex and both showed consistent persistency property . the corresponding persistency barcode is presented in fig . 9in which the rank of the homology group equals the number of intervals in the barcode intersecting the dashed line which corresponds to the filtration stage .the first three homology groups , i.e. 
, and have long - lived generators , while higher - dimensional homology groups appear only as topological noise . although the random networks analyzed earlier and the e - mail network have a comparable number of nodes , the number of higher - dimensional homology groups is considerably larger in the latter case . this is the consequence of the internal organization of an e - mail network into a number of communities , which is an essential prerequisite for the emergence of higher - dimensional complete graphs . clearly , no such organizational principle exists in random networks ( random simplicial complexes ) , and -cycles dominate the complex . the fact that homology groups of dimension higher than have short lifetimes indicates that communications among certain groups of e - mail users may not exist for a certain time during the growth of the network ; however , these communication channels are reestablished at later stages of the network evolution . among scale - free networks we consider scale - free models with modular structure developed recently . the model , including preferential attachment and preferential rewiring during the graph growth , is generalized so that new modules are allowed to start growing with finite probability . the structural properties of modular networks are controlled by three parameters : the average connectivity , the probability of the emergence of a new module , and the attractiveness of the node . by varying these parameters the internal structure of the modules and the network connecting the various modules are kept under control . detailed explanations of the role of each of these parameters in the control process are given in . here we consider the persistent homology of three scale - free networks developed using three different sets of parameters chosen as paradigmatic for the types of network considered . complexes of both the clique and the neighborhood type were constructed , and since the results do not differ for the two cases , the presented ones are obtained from the clique complex filtrations [ f1 ] and [ f2 ] . all networks were generated with 1000 nodes . the average connectivity ( number of links per node ) is . the network has nodes and modules , so that the attractiveness of the node is , which enables a stronger clustering effect , hence the label `` clustered modular network '' . the corresponding barcodes are presented in fig . 10 . there are unique persistent generators for and , while for there are persistent generators .
also has a persistent generator which starts at stage of filtration .it is interesting that once the homology is generated at later stages of filtration it remains persistent for all as indicated by arrows .one aspect of existence of persistent homology groups is robustness of the complex ( network ) with respect to addition or reduction of simplices ( nodes ) .the fact that four homology groups show persistence is a clear sign of robustness .moreover , practically there is no topological noise in this case .the parameters for this type of network are ( no modules ) and ( strong clustering ) .the persistence barcodes for this network are presented in fig .the most striking feature of these topological persistency representations is the existence of another striking feature is that does not exist for this particular value of clustering parameter showing that higher ranked persistency generators may not be distributed continuously across dimensions .there are four generators for however they persist through five stages of filtration and there are several more generators with shorter lifetime some of which may be considered as topological noise , such as the ones whose lifetime is one or two filtration phases . the fact that generators do not exist shows that for this choice of parameters there are no -dimensional non - bounding cycles in the complex .the average connectivity is .modular probability is and clustering coefficient so that there is only one link between each of the modules and effectively there is non clustering .the corresponding barcodes are shown in fig .there is only one generator for for there is a unique generator persistent from the beginning of filtration however there are several generators which persist while occurring with the slight delay in filtration sequence .the maximal persistent homology rank is and has relatively long lived generators with a slight noise .of the three cases considered this one has the smallest number of persistent homology groups , namely three ( , and ) , and also the smallest number of generators for the homology group .since both clustered modular and clustered non - modular networks have higher ranked persistent homology ( and respectively ) then the non - clustered modular network ( ) , it is clear that clustered networks are more robust with respect to addition ( removal ) of nodes ( simplices ) .moreover , clustering property is more important for robustness then modularity as may be also inferred by comparison with the e - mail network discussed in sec .7 which also shows modular structure .the fact that only -dimensional and -dimensional cycles ( voids ) are persistently missing in non - clustered simplices with respect to additional lack of and -dimensional cycles in modular simplices may convey important information depending upon the context of the analysis and types of networks under study .in general the persistence of -th homology generators ( -th betti numbers ) means that somewhere in the complex -th dimensional subcomplex is missing through all stages of complex growth or reduction . in other wordsan -dimensional object formed by simplices of dimension at most is absent from the complex .this property may be translated to the `` network language '' in terms of connectivity relations which depend on the context . 
in simplified terms , for example , for the network lacks dyadic ( binary ) relations ; for there are no triadic ( ternary ) relations , and so on , where -adic relations should be regarded not only as the set of node - to - node relations but in their relational entirety . as an example , a face of a triangle represents a relational entirety ( essentially a relationship of higher order ) of a three - node relation . the construction of simplicial complexes from graphs ( networks ) creates a topological setting which offers flexible tools for gauging various topological attributes . here our interest lies in the detection of long - lived homology groups of a simplicial complex ( network ) during the course of its history , which includes both addition and removal of simplices ( nodes ) . the method relies on a visual approach of recognizing persistent features in the form of a barcode , which may be regarded as the persistence analogue of a betti number . the results show distinct persistency attributes for random networks , networks with exponential degree distributions , and scale - free networks . persistency includes the two lowest - dimensional homology groups and for random networks . for the case of networks with exponential degree distribution , persistency includes and , while for scale - free networks the persistent homology groups are , and even . an obvious consequence of persistency is that it gives important information about the robustness of the network : scale - free networks , especially the ones with clustering properties , exhibit the highest topological resilience to change in the form of addition or removal of nodes . however , persistence of certain topological attributes also implies a long - lived deficiency in certain topological forms in the simplicial complexes , corresponding to a deficiency of certain relations in the networks . in order to reveal more about the balance between these two properties we will use more subtle topological methods in our future work . boccaletti s , latora v , moreno y , chavez m and hwang d - u 2006 _ phys . rep . _ * 424 * 175
long - lived topological features are distinguished from short - lived ones ( considered as topological noise ) in simplicial complexes constructed from complex networks . a new topological invariant , persistent homology , is determined and presented as a parametrized version of a betti number . complex networks with distinct degree distributions exhibit distinct persistent topological features . persistent topological attributes , shown to be related to the robustness of networks , also reflect deficiencies in certain connectivity properties of networks . random networks , networks with exponential connectivity distribution and scale - free networks were considered for the homological persistency analysis .
the word _ cybernetics _ ( _ `` the art of steersmanship '' _ ) was coined by norbert wiener in 1948 to define a cross - disciplinary research field aimed at studying regulatory phenomena in a broad range of contexts , from engineering to biology , from finance to cognitive and social sciences ._ `` the art of steersmanship [ ... ] stands to the real machine electronic , mechanical , neural , or economic much as geometry stands to real object in our terrestrial space ; offers a method for the scientific treatment of the system in which complexity is outstanding and too important to be ignored.''_ this statement by w. ross ashby highlights the trans - disciplinary vocation of cybernetics , a meta - theory for describing common features of complex systems. one may wonder what exactly we mean by `` complex '' .this is a term which enjoys many possible interpretations .we refer the reader to a reference textbook and an extended analysis of the state - of - the - art in quantifying complexity . to establish a solid ground for the discussion ,let us state that a complex system is an aggregate of many parts which interact nontrivially with each other . by nontrivial interactionwe refer to correlations which allow the global system to behave in a qualitatively different way with respect to the parts considered separately . in aristotles poetic words , complexity emerges when _ `` the whole is other than the sum of its parts''_. the actual partition of the system is usually determined by the problem particulars , for example the spatial separation between the components of the system , or the role they play in a specific information processing protocol .cybernetics focuses mostly with the latter case .the general problem we are investigating can be formalized as depicted in fig.[fig1 ] .a system is initially prepared in a state .an observer , or the system itself , wants to regulate the dynamics of the system in order to reach their expected outcome , or goal , .this entails to balance or counteract the typically detrimental action of an external disturbance ( e.g. the environment or an adversarial agent ) by applying a control , or regulation strategy , ( either by accessing an ancillary system , or by internal mechanisms ) .the aim is to ( self-)drive the system into a desirable final state , e.g. to send a living system into a state in which it is still alive and healthy . in the context of cybernetics , the specific nature of the information content and the physical properties of the information carriersdo not have any relevance .the roles of system , regulator and disturbance are dictated by the experimentalist or by constraints inherent to the problem of interest .( red ball ) into a desirable target configuration ( yellow ) out of all possible outcomes ( light gray ) .the environmental noise is modelled by means of a second system ( green ) which disturbs .a third system ( blue ) is available in the laboratory to correct the evolution of in order to drive to the target state .[ fig1 ] ] one may notice that fig.[fig1 ] is nothing but a protocol where information is processed in order to drive the system into a target state , i.e. towards an objective . such a problem has been widely discussed in the control theory , which has been proven highly successful in the last decades , both in the classical and in the quantum regimes . 
in particular , several feedback control strategies , where information is obtained by a measurement or a coherent interaction and then employed by the controller to implemement the appropriate driving dynamics to the system ,have been proposed and succesfully applied .however , they definitely departed from the cybernetics approach , which tackles the problem from an information theory viewpoint .the question is to set general prescriptions to determine the minimal requirements for a successful regulation and to explore the limits imposed by the fundamental law of physics , in particular the second law of thermodynamics .in fact , an information - theoretic analysis of classical state regulation has been provided , but a full treatment in the quantum mechanical scenario is missing . in a parlancewhich is familiar to information theorists , we may then ask what is the _ resource _ for quantum regulation .the peculiar ability of quantum systems to store and process information harnessing nonclassical features such as coherence and quantum correlations ( not limited to entanglement ) , suggests that a quantum controller may be intrinsically more efficient than a classical one in the regulation of open , quantum or classical complex systems . in particular , we are interested in cooperative effects between the regulator and the disturbance , i.e. when their interaction makes the difference between a hostile and a helpful environment .we note that the protocol in fig .[ fig1 ] resembles information processing tasks as quantum teleportation , remote state preparation , and quantum state merging .all these protocols can be rethought as state driving problems under different constraints , and it is well known that quantum correlations play a decisive role in their optimal realisation .therefore it seems sound and interesting to us to investigate if a more general statement about the role of quantum correlations in quantum control is possible .the controllability of the system may be benchmarked by how much correlations need to be created , i.e. what is the optimal trade off between creation of correlations when information goes from the system to the controller and correlation consumption during the feedback step .also , recent results in quantum information theory led to refine the law of thermodynamics for individual quantum systems , thus calling for shaping the limits to controllability in such a scenario . to do that , a quantum cybernetics , i.e. an information - theoretic study of quantum state driving ,is required .the paper is organized as follows .we will discuss the classical limitations to successful regulation in section [ classical ] , which are summarized by the surprisingly simple law of requisite variety , originally introduced by ashby and then rediscovered and extended in more recent years . in section [ quantum ] , we comment on potential quantization strategies for the regulation protocol of fig .it is then legit to ask if nature provides us with examples of optimal ( quantum ) regulation , e.g. in the _ par excellence _complex systems , i.e. 
the biological ones .we discuss the exploitability of an information theory of classical and quantum control to self - regulating and biological systems in section [ bio ] .we draw our conclusions in section [ fine ] .in general , we consider a tripartite composite system , consisting of the principal system , an environment , whose action into the system provides the disturbance , and a regulator , whose interaction with the system ( and possibly with the environment ) provides the regulation , see fig . [ fig2 ] . as we are only interested in the system and the interaction with the other two components , the setting is complex in the sense that the relevant dynamics is largely determined by the correlations between the subsystems . in particular the combined action of both disturbance and regulation can drive the system to goals which neither of the respective bipartite interactions are able to achieve on their own , as has been shown , e.g. in .however , a complexity measure based on the number of reachable states ( with the focus on a possible increase due to the tripartite interaction ) misses out a crucial point of control setting : not the bare number of reachable states are relevant , but whether the desirable states are reachable .it then turns out that fig .[ fig1 ] is slightly misleading , as it suggests a time - ordering and separating of disturbance and regulation as well as implying basically an error correction mechanism . in the most general case , in fact , for realistic open quantum systems, it is reasonable to assume instead that disturbance and regulation interact in parallel with the system . or even , when e.g. decoherence - free subspaces are employed for control tasks , that the regulation acts before ( i.e. quicker than ) the disturbance .( quantum ) cybernetics is also not restricted to control settings , where the task for the regulator consists solely of inverting the disturbance . nevertheless , for the sake of clarity , we stick in the figures to this exemplary setting . , an environment and regulator .the dynamics are governed by the interaction between the subsystems .[ fig2 ] ] the state of a ( classical ) system can be described by a set of values of its relevant variables .given a set , the logarithm ( in base ) of the number of distinguishable elements in the set defines the _ variety _ of the set .for instance , the set has variety . having in mind the prototype of fig . [ fig1 ] , if we have a system in a certain initial state , a disturbance induces a set of possible undesired actions .the regulatory mechanism , in turn , is able to produce a set of responses .the final state of the system is therefore determined by a payoff matrix of possible outcomes , corresponding to each pair ( this was introduced in for studying games and economic strategies ) .for example , if we are driving our car , all kind of disturbances can happen . if is : `` a person suddenly crosses in front of us '' , then a response : `` do nothing '' leads to : `` the person is ran over '' , while a better regulation : `` brake '' leads to : `` safety '' . similarly ,if is `` it starts raining '' the best regulatory action is to switch on the wipers , which leads to `` safety '' once more ; and so on . eventually , the matrix of possible outcomes can be quite big if one considers all possibilities for and .in typical complex phenomena , the desired , or expected outcome is only a small subset of all that can happen . 
in our carwe just want to drive safely and reach our destination .if we define ``safety'' , then we will be able to achieve our goal provided that , for every , there exists at least one action which leads to safety .therefore , the role of the regulator is to reduce the achievable variety in the outcomes .it is intuitive to see that , in order to do so , the regulator itself has to have a sufficiently high variety .that is , we need enough controls in our car to counteract the various mishaps which might occur .the law of requisite variety formalizes this quantitatively in a simple inequality : thus , the entropy of the regulator must be bigger than a function of the entropies of the disturbing system and the potential final states of the controlled system . a useless controller , which always responds with the same regulatory action ( ) will result in an outcome with at least as much variety as the one of the disturbance . on the other hand ,a perfect controller is able to release a counteraction for every disturbance ( ) so that , ideally , the possible outcomes are reduced to the expected goal , , with minimal variety .as it stands , the law of requisite variety is formulated at a very general , abstract level .what is the connection with information theory ?it comes naturally once we make a further assumption , that the process under scrutiny can be repeated times , .this has been the traditional setting for communication theory . in this context, we can think of , , and as three random variables .the variety of a statistical variable , which can assume outcomes with probabilities , can then be interpreted as its _ entropy _ as adopted by shannon , which laid the mathematical foundation for information theory in a probabilistic framework .in such asymptotic scenario , if a message to be transmitted consists of independent and identically distributed ( i.i.d . ) random variables , then such a message can be noiselessly encoded in a string of bits of length at least . the bit , an entity which can take two values ( or ) , physically represented e.g. by a coin or a light switch , is the fundamental unit of information . on this hand, ashby s law appears as a generalization of the shannon equivocation theorem ( th .10 of ) : in a typical communication setup , a sender wants to transmit a message to a receiver , but the message is sent down a noisy channel .a regulator would then be another channel used as a corrective tool to filter out the undesired randomness from the received message .using entropy to revisit the law in eq .( [ req ] ) for three random variables , we recall as previously noticed that , if the regulator has a fixed realization , then all variety ( entropy ) of is retrieved at . one can write , using conditional entropies , . exploiting the properties of the entropy onecan easily show that which is a restatement of the law of requisite variety ( upon observing that if the regulator has a deterministic action in response to a disturbance ) .this has been independently rederived in the contexts of control theory and computational mechanics . in eq .the term is of course the mutual information between regulation and disturbance , i.e. the information the regulator can make use of , in order to drive the system to the desired goal . 
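to make the entropic form of the law concrete, the following python sketch builds a toy ``driving'' scenario along the lines of the example above and checks the bound numerically. the disturbance alphabet, the regulator error probability and the outcome table are illustrative assumptions and not quantities taken from the text; the only structural ingredient retained is that, for a fixed regulatory action, different disturbances lead to different outcomes, which is the hypothesis used above to obtain the bound.

....
# toy check of the entropic law of requisite variety, H(E) >= H(D) - I(D;R).
# the disturbance model, the error probability q and the outcome table are
# illustrative assumptions.
import numpy as np
from itertools import product

def shannon(p):
    # shannon entropy in bits of a probability array
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

n = 4            # disturbances d = 0..3 (rain, crossing pedestrian, ...)
q = 0.8          # probability that the regulator picks the right response

# joint distribution p(d, r): d uniform, r = d with probability q
p_dr = np.zeros((n, n))
for d, r in product(range(n), range(n)):
    p_dr[d, r] = (q if r == d else (1 - q) / (n - 1)) / n

# outcome: 0 = "safety", d + 1 = a distinct accident for each unregulated d;
# for fixed r the map d -> e is injective, as assumed in the derivation above
def outcome(d, r):
    return 0 if r == d else d + 1

p_e = np.zeros(n + 1)
for d, r in product(range(n), range(n)):
    p_e[outcome(d, r)] += p_dr[d, r]

H_D = shannon(p_dr.sum(axis=1))
H_R = shannon(p_dr.sum(axis=0))
I_DR = H_D + H_R - shannon(p_dr.ravel())     # mutual information I(D;R)
H_E = shannon(p_e)

print(f"H(D) = {H_D:.3f}, I(D;R) = {I_DR:.3f}, H(E) = {H_E:.3f}")
print(f"requisite variety bound H(D) - I(D;R) = {H_D - I_DR:.3f}")
assert H_E >= H_D - I_DR - 1e-12
....

setting q equal to 1/n makes the regulator statistically independent of the disturbance, the mutual information vanishes, and the printed bound degenerates to the statement that the outcome carries at least as much variety as the disturbance itself.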
in fact , as shown in , ashby s law of requisite variety is essentially a formulation of the second law of thermodynamics .nevertheless , it focuses on an important aspect , as it tells us how much information the regulator must be able to store for a successful regulation . in terms of resources , this is also a question of complexity , or more specifically of correlations between the three parties of the protocol . for a classical system , as stated , the conditional entropy is in the best case zero .this , however , does not hold in the quantum case .it is then natural to wonder : can quantum correlations or other signatures of quantumness be exploited to improve the performance of the regulator ?we now discuss possible extensions to the quantum case of the regulation protocol .these would set general limitations on the controllability of a quantum system .inspired by the successes of quantum control and quantum information processing , one would expect to find that quantum regulators are more efficient than classical ones .the main issue to tackle is how to define variety in the quantum domain .the most obvious way to define variety is by replacing the shannon entropy with the von neumann entropy and the random variables replaced by the quantum states of the systems involved in the regulatory process.although apparently innocent , such a step already features nontrivial subtleties .first of all , the notion of conditional entropy has to be carefully defined .we have two possibilities for in eq .( [ req2 ] ) .one is to use formally the same expression as in the classical case , .however , this quantity can be negative .this can happen in particular when the quantized systems and are in an entangled state .the negativity of the conditional quantum entropy has been operationally interpreted in the context of quantum state merging , an important primitive of quantum shannon theory , and in quantum thermodynamics .the other possibility is to imagine that , in order to learn about the occurred disturbance , performs a measurement on .the role of the observer in the regulation , precisely the disturbance induced by applying a general quantum map to a given state , can be neglected in a classical scenario , but it plays a decisive role in the quantum case . in this picture , the conditional entropy , optimized over all possible measurements ( to single out the least disturbing one ) , can be written as the latter quantity is always nonnegative .it turns out that the two quantities and coincide if and only if is effectively classical ; when this does not happen , the regulator and the disturbance display quantum correlations as revealed by the so - called _ quantum discord _ . quantum discord and in general quantum correlations between and are one strong element which can mark a departure from the classical paradigm studied by ashby . in particular , if and share quantum correlations we can expect by looking at eq .( [ req2])rewritten with von neumann entropies that such correlations constitute a further , genuinely quantum resource to lessen the variety in the outcome , in addition to the purity of the regulator .this is especially true when and are entangled , as can be negative as observed before .shannon and von neumann entropies are meaningful figures of merit for asymptotic information theory , i.e. , for ergodic systems . 
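the sign difference between the two definitions is easy to exhibit numerically. the python sketch below, assuming qubit systems for simplicity, computes the conditional von neumann entropy s(d|r) = s(dr) - s(r) for a maximally entangled state of disturbance and regulator, where it is negative, and for a merely classically correlated state, where it vanishes.

....
# conditional von neumann entropy s(d|r) for an entangled versus a
# classically correlated two - qubit state (illustrative sketch)
import numpy as np

def vn_entropy(rho):
    # von neumann entropy in bits
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log2(lam)))

def trace_out_first(rho):
    # partial trace over the first qubit of a two - qubit density matrix
    return np.trace(rho.reshape(2, 2, 2, 2), axis1=0, axis2=2)

# maximally entangled bell state (|00> + |11>)/sqrt(2)
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_bell = np.outer(psi, psi)

# classically correlated mixture of |00><00| and |11><11|
rho_cc = np.zeros((4, 4))
rho_cc[0, 0] = 0.5
rho_cc[3, 3] = 0.5

for name, rho in (("entangled", rho_bell), ("classical", rho_cc)):
    s_dr = vn_entropy(rho)
    s_r = vn_entropy(trace_out_first(rho))
    print(f"{name:>9}: S(DR) = {s_dr:.3f}, S(R) = {s_r:.3f}, S(D|R) = {s_dr - s_r:.3f}")
# entangled: S(D|R) = -1 bit; classical: S(D|R) = 0
....

the measurement - based conditional entropy discussed above is instead always nonnegative, so the gap between the two quantities is, roughly speaking, where discord - type quantum correlations enter the picture.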
however , ashby himself recognized that ergodicity is too often not a realistic feature of cooperative living or social phenomena .much more recently , theorists in classical and quantum information theory have also ( independently ) recognized that the paradigm of i.i.d .messages is not necessarily respondent to common practice .it is rather more natural to consider _ one - shot _ scenarios : the sender encodes a message in a single system , transmits it down a single channel , possibly resorts to another single additional regulative channel , and the receiver receives and decodes the message in a single run .if more trials of a process are repeated , it is equally unrealistic to assume that they are completely independent . in data transmission , as well as in biological processes ,the different runs are typically correlated .in such a more general situation , the association between variety and the shannon / von neumann entropies is not correct anymore , and we need to resort to the primitive formulation of ashby s law as given by eq .( [ req ] ) . fortunately , a more general framework to define and quantify information in the non - i.i.d . setting , in both classical and quantum scenarios , is available and makes use of so - called smooth renyi entropies .therefore another , perhaps more informative , avenue towards quantum cybernetics suitable for quantum and nanoscale systems goes through an alternative formulation of variety in terms of such entropies , in particular the smooth min- and max - entropies , which have a nice operational intepretation in one - shot information theory and cryptography .let us just recall that renyi entropies are a family of additive entropies defined as $ ] .min - entropy corresponds to and max - entropy to .very recently , adopting the formalism of smooth renyi entropies , it has been shown that there exists a set of many second laws in quantum thermodynamics .we can then expect to recover equivalently many laws of requisite variety in the quantum domain .let us note that measures of quantum correlations can be defined in the case of smooth renyi entropies as well , which gives us in principle all the necessary tools to quantify quantum advantages and the role of nonclassical effects for enhanced regulation .finally , a remark is in order .the law of requisite variety is essentially an inequality between probability distributions .one may wonder if this minimal description is sufficient to describe efficiently any regulation protocol . in general ,even if the goal is to achieve a pure state of the system ( and this is not always true ) , the focus on entropy reduction both in the ergodic and non - ergodic case does not distinguish between different pure states .therefore , while focusing on entropy reduction seems the standard approach to employ information theory to control problems , this is arguably not universally sufficient for control even in the classical case .optimal regulation should then be benchmarked by the minimisation of an appropriate , experimentally appealing cost function .for instance , for practical purposes , a fidelity measure appropriate for the regulative task under investigation appears to be a reasonable resort .this will be the subject of further investigation .the regulation of biological systems and their interplay with the environment appear as the ideal testbeds for the law of requisite variety .plenty of examples can easily be concocted which fit into this paradigm . 
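before moving on to the biological examples, a short python illustration of the renyi family invoked above may be useful. base-2 logarithms and the particular probability vector are assumptions made for concreteness, and conventions for which value of the order parameter is called the max - entropy differ between references, so only the generic family is shown.

....
# renyi entropies H_a(p) = log2(sum_i p_i ** a) / (1 - a); the shannon entropy
# is recovered as a -> 1 and the min - entropy as a -> infinity (illustrative)
import numpy as np

def renyi(p, a):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isinf(a):                  # min - entropy
        return float(-np.log2(p.max()))
    if np.isclose(a, 1.0):           # shannon limit
        return float(-np.sum(p * np.log2(p)))
    return float(np.log2(np.sum(p ** a)) / (1 - a))

p = [0.5, 0.25, 0.125, 0.125]
for a in (0.0, 0.5, 1.0, 2.0, np.inf):
    print(f"order {a:>4}: H = {renyi(p, a):.3f}")
# the family is non - increasing in the order parameter,
# so the min - entropy never exceeds the shannon entropy
....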
in any living system ,the desired outcome is to stay alive and healthy , the disturbance can be caused by toxins such as bacteria and viruses , and suitable anti - toxins act as the regulators .the question we raise is if the law of requisite variety , or one of its declinations , may serve as a general design principle for biological complexes .here we consider two specific classes of problems which may be interpreted as examples of regulation protocols .the first case study is related to chemotaxis , i.e. the dynamics of microscopic bacteria based on information about gradients of the concentration of specific chemical elements in the environment ( e.g. searching for food ) .search algorithm inspired by chemotaxis for macroscopic devices ( robots ) working with incomplete information about the environment ( infotaxis ) have been developed . at the same time , an information theoretic analysis of chemotaxis as a self - regulation protocol has been recently proposed .it should be then of great interest to determine if bacteria perform at the limits imposed by ashby s law , as well as to assess the performance of bio - inspired devices against the information - theoretic limits to controllability .if an optimal quantum controller can overcome such limits , then quantum mechanics may find a surprising new functional role in robotics .the other intriguing question is if biological systems exploit non - trivial quantum effects for their optimal regulation and adaptation .three main biological processes are currently under investigation by quantum physicists : the energy transport mechanisms regulating photosynthesis , the magneto - reception system of birds and the olfactory sense . focusing in particular on the first two examples , recent experimental evidence and theoretical modelling suggest that coherence ( in the case of light - harvesting organisms such as the fenna - matthew - olson complex ) and entanglement ( in the case of the radical pair model for the avian compass e.g. in the european robins ) are exploited by living systems to optimize their biological processes in the presence of a decohering environment .these case studies have triggered in the last decade the dawn of quantum biology as a multidisciplinary research field .seen in the light of what presented in the previous section , these are clear examples of regulatory phenomena : organisms pursuing a physiological function , subject to external disturbance , and responding with a ( self)-regulatory action so that their outcome is kept at sufficiently low variety , thus ensuring that the expected goal ( e.g. transporting a photon from the photoreceptor to the reaction centre in the case of photosynthetic complexes , or maintaining an accurate navigation route for migratory purposes in the case of birds ) is achieved with the highest possible chance .what is remarkable , is that we are dealing with undoubtedly complex macroscopic systems which would traditionally be ascribed to the classical domain , and are certainly in contact with classical environments . yet , in a multibillion - year stint of evolution , they appear to have developed effective quantum strategies for their optimal regulation . a current challenge is to understand the key principle(s ) underpinning such possible quantum effects in biology . in particular , it is pivotal to identify the resource which enables biological systems to control and exploit the interplay with the environment. 
it is known that biological processes are optimized by intermediate levels of coherence , i.e. too much coherence can be detrimental .thus , we should search for a more - elusive - than - coherence resource. the answer may be in the structure , i.e. the degree of organisation of the system itself , which allows the complex to self - regulate its dynamics .control theory has a long and glorious history of successes , yet we have still incomplete knowledge about a general design principle of optimal controllability in open quantum systems .combining the quantum control rationale and the latest results in quantum information may lead to the establish the ultimate , general , quantitative limits to the controllability of quantum systems .an important finding of classical cybernetics is the law of requisite variety : the purity of the controller and the degree of correlations it can establish with the system determines the controllability of the latter , independently of its peculiar chemico - physical properties . quantum cybernetics ( cf . ) will provide the framework for a fundamental study of the role that quantum effects and quantum correlations play in the regulation of open quantum and classical systems .furthermore , it will enable a rigorous treatment of self - regulating quantum systems , where regulator and environment are effectively the same physical object , whose interaction with the principal system has different effects for different timescales .as the framework in fig.[fig1 ] is independent of the particulars of the considered problem , it is applicable to a number of apparently unrelated phenomena as bacteria infotaxis and photosynthesis . the current technological advances in the manipulation of single quantum systems demand for investigations on the ultimate limits to the controllability of physical systems imposed by quantum mechanics .when reading the original works of ashby and wiener from half a century ago , one can be delightfully surprised by the modern flavour of their insights : their line of thinking resonates very closely with the state - of - the - art research and challenges in contemporary quantum information theory and technology . with this article , we hope to have stimulated the interest of the reader in the topic and to have proved it worthwhile of attention .once quantum cybernetics will be fully developed , experimental proof - of - concepts of the ensuing limitations on regulative processes may be in reach of current technological possibilities . also, firm answers to at least some of the following questions will be provided : how much i have to correlate the controller to the system under investigation to obtain a certain degree of controllability ?is the quantum treatment of the problem significatively different from the classical one ? does quantum discord , a recently discovered and very debated quantum feature , help controlling a quantum system ? do complex systems in nature exploit this supposed quantum advantage ?controllability is a task regulated by the law of thermodynamics . how `` one shot '' quantum control works ?discovering the ultimate strategies to manipulate single quanta may translate into the ability to control other kinds of systems which are far removed from the nanoscale regime , i.e. certain social systems , as an audacious mind precognized almost a century ago .we acknowledge fruitful discussion with f. carusela , l. a. correa , s. de martino , s. lee , p. liuzzo scorpo , and s. 
lloyd .this work was supported by the foundational questions institute ( grant no .fqxi - rfp3 - 1317 ) , the erc stg gqcop ( grant no .637352 ) , the uk engineering and physical sciences research council ( grant no .ep / l01405x/1 ) and the wolfson college , university of oxford and the university of nottingham staff travel prize .h. ollivier and w. h. zurek , phys .lett . * 88 * , 017901 ( 2001 ) ; l. henderson and v. vedral , j. phys .a * 34 * , 6899 ( 2001 ) ; k. modi , a. brodutch , h. cable , t. paterek , and v. vedral , rev .phys . * 84 * , 1655 ( 2012 ) .v. p. belavkin , preprint instytut fizyki * 411 * , 3 ( 1979 ) , quant - ph/0208108 ; ibid , proc . of 9th ifip conf . on optimizatnotes in control and inform .* 1 * ( springer - verlag , warsaw , 1979 ) ; ibid , theory of the control of observable quantum systems , automatica and remote control * 44 * , 178 ( 1983 ) . g. s. engel _et al . _ ,nature * 446 * , 782 ( 2007 ) ; e. collini _ et al ._ , nature * 463 * , 644 ( 2010 ) ; m. b. plenio and s. f. huelga , new j. phys .* 10 * , 113019 ( 2008 ) ; m. mohseni , p. robentrost , s. lloyd , and a. aspuru - guzik , j. chem . phys . * 129 * , 176106 ( 2008 ) ; a. ishizaki and g. r. fleming , pnas * 106 * , 17255 ( 2009 ) ; g. d. scholes , g. r. fleming , a. olaya - castro , and r. van grondelle , nature chem . * 3 * , 763 ( 2011 ) ; m. sarovar , a. ishizaki , g. r. fleming , and k. b. whaley , nature phys . *6 * , 462 ( 2010 ) ; focus issue on `` quantum effects and noise in biomolecules '' , new j. phys .( 201011 ) .e. gauger , e. rieper , j. j. l. morton , s. benjamin , and v. vedral , phys .lett . * 106 * , 040503 ( 2006 ) ; k. maeda __ , nature * 453 * , 387 ( 2008 ) ; j. cai , g. g. guerreschi , and h. briegel , phys . rev . lett .* 104 * , 220502 ( 2010 ) ; j. n. bandyopadhyay , t. paterek , and d. kaszlikowski , phys . rev. lett . * 109 * , 110502 ( 2012 ) ; j. cai , f. caruso , and m. b. plenio , phys . rev .a * 85 * , 040304(r ) ( 2012 ) .
cybernetics is a successful meta - theory to model the regulation of complex systems from an abstract information - theoretic viewpoint , regardless of the properties of the system under scrutiny . fundamental limits to the controllability of an open system can be formalized in terms of the law of requisite variety , which is derived from the second law of thermodynamics and suggests that establishing correlations between the system under scrutiny and a controller is beneficial . these concepts are briefly reviewed , and the chances , challenges and potential gains arising from the generalisation of such a framework to the quantum domain are discussed . in particular , recent findings in quantum information theory unveiled a new kind of quantum correlations called quantum discord . we conjecture a quantitative link between quantum correlations and controllability , i.e. quantum discord may be employed as a resource for controlling a physical system .
underwater source detection and localization is an important but challenging problem .classical range - based or energy - based source localization algorithms usually require energy - decay models and the knowledge of the environment .however , critical environment parameters may not be available in many underwater applications , in which case , classical model - dependent methods may break down , even when the measurement snr is high .there have been some studies on source localization using nonparametric machine learning techniques , such as kernel regressions and support vector machines .however , these methods either require a large amount of sensor data , or some implicit information of the environment , such as the choice of kernel functions .for example , determining the best kernel parameters ( such as bandwidth ) is very difficult given a small amount of data .this paper focuses on source detection and localization problems when only some structural properties of the energy field generated by the sources are available .specifically , instead of requiring the knowledge of how energy decays with distance to the source , the paper aims at exploiting only the assumption that the closer to the source the higher energy received , and moreover , the energy field of the source is spatially invariant and decomposable .in fact , such a structural property is generic in many underwater applications .the prior work studied the single source case , where an observation matrix is formed from a few energy measurements of the field in the target area , and the missing entries of the observation matrix are filled using matrix completion methods . knowing that the matrix would be rank-1 under full and noise - free sampling of the whole area , svd is applied to extract the dominant singular vectors , and the source location is inferred from analyzing the peaks of the singular vectors .herein , we propose to improve upon two shortcomings in : we make rigorous an estimation / localization bound ( versus focusing on the reduction of the search region ) and we provide a method for localizing two sources . in the two source case, we need to tackle an additional difficulty that the svd of the observation matrix does not correspond to the signature vectors of the sources . to resolve this issue , a method of rotated eigenstructure analysisis proposed , where the observation matrix is formed by rotating the coordinate system such that the sources are aligned in a row or in a column of the matrix .we develop algorithms to first localize the central axis of the two sources , and then separate the sources on the central axis . 
to summarize ,we derive algorithms to simultaneously localize up to two sources based on only a few power measurements in the target area without knowing any specific energy - decay model .the contributions of this paper are as follows : * we derive the location estimators with analytical results to show that the squared error decreases at a rate for a gaussian field with a single source , where scales proportionally to the number of samples .* we develop a localization algorithm for the double source case based on a novel rotated eigenstructure analysis .we show that the two sources can be separated even when their aggregate power field has a single peak .the rest of the paper is organized as follows .section [ sec : system - model ] gives the system model and assumptions .section [ sec : single - source ] develops location estimator with performance analysis for single source case .section [ sec : double - source ] proposes rotated eigenstructure analysis for double source case .numerical results are given in section [ sec : numerical ] and section [ sec : conclusion ] concludes this work .consider that there are ( ) sources with unknown locations located in a bounded area .suppose that the sensors can only measure the aggregate power transmitted by the sources , and is given by for measurement location , where is the power density from source , where .the explicit form of the density function is unknown to the system , except that the _ characteristic _ function is known to have the following properties 1 .positive semi - definite , i.e. , for all 2 .symmetric , i.e. , 3 .unimodal , i.e. , for , 4 .smooth , i.e. , for some , and 5 .normalized , i.e. , .note that can be considered as the marginal power density function .consider that power measurements are taken over distinct locations , , uniformly at random in the target area .the measurements are assigned to a observation matrix as follows . first , partition the target area into disjoint cells , and , where and are to be determined .second , assign the power measurements to the corresponding entry of as if ,where measures the area of ., the value of that entry is the average of the sample values . ]denote as the set of observed entries of , i.e. , if there exists such that * * is assigned to . for easy discussion ,assume that \times[-\frac{l}{2},\frac{l}{2}] ] if , and {ij}=0 ] and , where becomes a rank-1 matrix when the sources are aligned with one of the axes .the maximization problem ( [ eq : rho - function ] ) is in general non - convex .an exhaustive search for the solution is computationally expensive , since for each , svd should be performed to obtain the singular value profile of .therefore , we need to study the properties of the alignment metric in order to develop efficient algorithms for the source detection .we also show that the function also has the unimodal property defined as follows .[ unimodality ] a function is called unimodal in a bounded region , if there exists ] , i.e. , , and compute using ( [ eq : h - theta ] ) and ( [ eq : rho - function ] ) .compute in the similar way .3 . if , then ; otherwise , .repeat form step [ enu : alg - loop - start ] ) until small enough . then is found .note that condition ( [ eq : correlation - condition ] ) can be satisfied by a variety of energy fields .for example , for laplacian field , we have , and ; for gaussian field , we have , and . in both cases ,condition ( [ eq : correlation - condition ] ) is satisfied . 
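a minimal python sketch of this alignment search is given below. for readability it evaluates a fully sampled, noise - free observation matrix on a regular grid, whereas the scheme above works with sparse random measurements and a partially observed matrix; the gaussian marginal density, the grid size and the source positions are illustrative assumptions. the alignment metric is taken, as in the discussion above, as the ratio of the dominant singular value to the nuclear norm, and a simple grid scan over the rotation angle replaces the bisection - type search.

....
# rotated eigenstructure sketch: scan the rotation angle and pick the one
# maximizing sigma_1 / nuclear norm of the (fully sampled) observation matrix
import numpy as np

sigma = 0.15                                    # width of the marginal density u
src = np.array([[-0.20, -0.10], [0.25, 0.20]])  # assumed source locations
true_axis = np.arctan2(src[1, 1] - src[0, 1], src[1, 0] - src[0, 0])

grid = np.linspace(-0.5, 0.5, 40)               # cell centres of the matrix

def observation_matrix(theta):
    # aggregate power field sampled on a grid in the frame rotated by theta
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, s], [-s, c]])            # coordinates transform by R(-theta)
    H = np.zeros((grid.size, grid.size))
    for sx, sy in src @ rot.T:                   # sources in the rotated frame
        H += np.outer(np.exp(-(grid - sy) ** 2 / (2 * sigma ** 2)),
                      np.exp(-(grid - sx) ** 2 / (2 * sigma ** 2)))
    return H

def alignment_metric(theta):
    s = np.linalg.svd(observation_matrix(theta), compute_uv=False)
    return s[0] / s.sum()

# alignment is only defined modulo 90 degrees (row or column), so scan [0, 90)
thetas = np.linspace(0.0, np.pi / 2, 90, endpoint=False)
rho = np.array([alignment_metric(t) for t in thetas])
theta_hat = thetas[np.argmax(rho)]
print(f"true axis angle : {np.degrees(true_axis) % 90:5.1f} deg")
print(f"estimated angle : {np.degrees(theta_hat) % 90:5.1f} deg (rho = {rho.max():.3f})")
....

the metric equals one exactly when the rotation aligns the two sources with a row or a column of the matrix, since the matrix then collapses to a single outer product, and it stays below one at other angles in this configuration.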
in the coordinate system under optimal rotation ( assuming alignment on the -axis ) ,the left and right singular vectors of can be modeled as and , respectively .correspondingly , the -coordinates of the sources can be the found using estimator ( [ eq : location - estimator ] ) based on reflected correlation to find the -coordinates , note that the function is symmetric at .therefore , the center of the two sources can be found by in addition , after estimating , the marginal power density function can be obtained as , where is a regression function from ( for example , by linear interpolation among ) . as a results , the -coordinates of the two sourcescan be found using similar techniques as spread spectrum early gate synchronization , and obtained as and , where and it is straight - forward to show that is maximized at . as a benchmark , consider a naive scheme that estimates and by analyzing the peaks of .however , such naive strategy can not work for small source separation , because if is too small , the aggregate power density function would be unimodal and there is only one peak in .as a comparison , the proposed procedure estimator from procedure ( [ eq : location - estimator - two - source - y])([eq : location - estimator - two - source - d ] ) does not such a limitation .in this section , we evaluate the performance of the proposed location estimator in both single source and double source cases .two sources are placed in the area \times[-0.5,0.5] ] uniformly at random .the parameter of the proposed observation matrix is chosen as the largest integer satisfying , for . as a benchmark ,the proposed location estimation is compared with the naive scheme , which determines the source location directly form the position of the measurement sample that observes the highest power . in the two source case , the naive algorithm aims at detecting either one of the sources , and the corresponding localization error is computed as . as a comparison ,the localization error of the proposed algorithm is computed as .[ ] [ ] [ 0.7]x axis [ ] [ ] [ 0.7]y axis [ ] [ ] [ 0.7]0 [ ] [ ] [ 0.7]1 [ ] [ ] [ 0.7]-1 [ ] [ ] [ 0.7]0.5 [ ] [ ] [ 0.7]-0.5 localizing two sources using samples , where red crosses denote the true source locations , and black circles denote the estimates .the color map represents the aggregate power field generated by the two sources.,title="fig : " ] fig .[ fig : mse ] depicts the mse of the source location versus the number of samples . in the single source case, the coefficient of the worst case upper bound ( [ eq : sqaured - error - bound - gaussian ] ) is chosen as to demonstrate the asymptotic decay rate of the worst case squared error bound .the decay rate of the analytic worst case error bound is roughly the same as the mse obtained from the numerical experiment .it is expected that as increases , the two curves merge in an asymptotic way . as a benchmark ,the proposed scheme requires less than half of the samples to achieve similar performance to that of the naive baseline even for small ( around ) .more importantly , it demonstrates a higher mse decay rate , where for medium ( around ) , the proposed scheme reduces the number of samples to . in the double source case , there is an error floor for the naive scheme , because the location that observes the highest power may not be either one of the source locations . 
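the separation step along the aligned axis can be sketched in python as well. the sketch replaces the estimators above with two explicit, assumed choices: the centre is taken as the centroid of the marginal profile, which coincides with the midpoint for two equal - power sources, and the separation is found by maximising a normalised correlation with a two - bump template, one plausible reading of the early - gate style estimator. the gaussian marginal and the source coordinates are illustrative.

....
# separating two sources on the aligned axis although their aggregate
# marginal profile has a single peak (illustrative assumptions throughout)
import numpy as np

sigma = 0.15
x1, x2 = -0.10, 0.10                 # true source coordinates on the aligned axis
x = np.linspace(-0.6, 0.6, 601)

def u(t):
    # marginal power density of a single source
    return np.exp(-t ** 2 / (2 * sigma ** 2))

p = u(x - x1) + u(x - x2)            # aggregate profile (unimodal for this spacing)

# naive baseline: only one local maximum, so peak picking cannot separate them
n_peaks = int(np.sum((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:])))
print("local maxima in the aggregate profile:", n_peaks)

# step 1: centre of the two sources (centroid of the profile)
c = float(np.sum(x * p) / np.sum(p))

# step 2: separation maximising the normalised two - bump template correlation
def score(d):
    t = u(x - c - d / 2) + u(x - c + d / 2)
    return np.dot(p, t) / np.linalg.norm(t)

ds = np.linspace(0.0, 0.5, 501)
d_hat = ds[np.argmax([score(d) for d in ds])]
print(f"estimated centre {c:+.3f} and separation {d_hat:.3f} "
      f"(truth: {0.5 * (x1 + x2):+.3f} and {x2 - x1:.3f})")
....

normalising by the template energy is what makes the peak of the score sit at the true separation; for this profile the unnormalised correlation would instead peak at zero separation, collapsing the two bumps onto the centre.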
as a comparison, there is no error floor in for proposed scheme as increases .[ fig : map ] shows an example on simultaneously localizing two sources ( red crosses ) .although the aggregate power field has only one peak , the algorithm ( black circles ) is able to separate the two sources .this paper developed source localization algorithms from a few power measurement samples , while no specific energy - decay model is assumed .instead , the proposed method only exploited the structural property of the power field generated by the sources .analytical results were developed to demonstrate that the proposed algorithm decreases the localization error at a higher rate than the baseline algorithm when the number of samples increases .in addition , a rotated eigenstructure analysis technique was derived for simultaneously localizing two sources .numerical results demonstrate the performance advantage in localizing single or double sources .this research was supported , in part , by national science foundation under grant nsf cns-1213128 , ccf-1410009 , cps-1446901 , grant onr n00014 - 15 - 1 - 2550 , and grant afosr fa9550 - 12 - 1 - 0215 .where ( [ eq : app - lem - tau - eq1 ] ) is due to the change of variable and , ( [ eq : app - lem - tau - eq2 ] ) is to change the variable , ( [ eq : app - lem - tau - eq3 ] ) exploits the fact that , and the last inequality is due to and for all .let .then for all , and for due to the zero mean and independent assumption on . similarly , for all . as a result , we have which is maximized at . on the other hand , it is easy to verify that and has the same distribution , since the elements of are iid . therefore , , which confirms that the estimator is unbiased .[ matrix completion with noise ][prop : matrix - completion - noise ] consider that in ( [ eq : matrix - completion ] ) is chosen such that .then , with high probability , where . by exploiting lemma [ lem : singular - vector - perturbation ] for our case, we have where denotes the absolute value operator , and we drop the second order term , since is small as we focus on large . we also note that .let .note that by construction , is an linear interpolation of the error vector at .let be an -dimensional vector that takes value . from the iid assumption of , the two vectors are identical in distribution , i.e. , .therefore , based on assumptions a1 and a2 , we can make the following approximation moreover , we have since we focus on not too small , and the elements of ( and ) are zero mean and independent .recall that maximizes and maximizes .we have where is due to the fact that we keep omitting the higher order terms . finally , we obtain and hence , [ singular vectors in two source case][lem : eigenvectors ] let and be the vectors defined following ( [ eq : uk ] ) and ( [ eq : vk ] ) in the rotated coordinate system . 
the svd of is given by where and are the singular values , and are the corresponding singular vectors .consider an arbitrary coordinate system .wlog ( due to assumption 1 ) , assume that the first source is located at the origin , and , and the second source is away from the first source with distance and angle to the -axis , and .in addition , defining we have ^{\text{t}}\\ \mathbf{v}_{1 } & = \sqrt{\delta}\big[u(y_{1}),u(y_{2}),\dots , u(y_{m})\big]^{\text{t}}\\ \mathbf{u}_{2 } & = \sqrt{\delta}\big[u_{\text{c}}(x_{1},\theta),u_{\text{c}}(x_{2},\theta),\dots , u_{\text{c}}(x_{n},\theta)\big]^{\text{t}}\\ \mathbf{v}_{2 } & = \sqrt{\delta}\big[u_{\text{s}}(y_{1},\theta),u_{\text{s}}(y_{2},\theta),\dots , u_{\text{s}}(y_{m},\theta)\big]^{\text{t}}.\end{aligned}\ ] ] as an equivalent statement to theorem [ thm : unique - local - maximum ] , we need to show that is a strictly increasing function in .equivalently , we should prove that the function is strictly increasing in , where the approximated integrals are obtained from ( [ eq : approximation - integral-1 ] ) . to simplify the notation ,define the integration operator as for a function . by definition ,the integration operator is linear and satisfies the additive property , i.e. , and , for a constant and a function . as a result , , andthe function can be written as with some algebra , the derivative of can be obtained as \\ & = \eta\big[-t\cdot\tau^{'}(s)\big(1-\tau(t)^{2}\big)+s\cdot\tau^{'}(t)\big(1-\tau(s)^{2}\big)\big]\end{aligned}\ ] ] where , , and . note that for . applying condition ( [ eq : correlation - condition ] ) , we have \\ & = \eta\cdot t\cdot\tau^{'}(s)\big(\tau(t)^{2}-\tau(s)^{2}\big)\\ & > 0\end{aligned}\ ] ] since and for . x. sheng and y .- h .hu , `` maximum likelihood multiple - source localization using acoustic energy measurements with wireless sensor networks , '' _ ieee trans . signal process ._ , vol .53 , no . 1 , pp . 4453 , 2005 .i. ziskind and m. wax , `` maximum likelihood localization of multiple sources by alternating projection , '' _ proc .acoustics , speech , and signal processing _36 , no . 10 , pp . 15531560 , 1988 .y. jin , w .- s .soh , and w .- c .wong , `` indoor localization with channel impulse response based fingerprint and nonparametric regression , '' _ ieee trans .wireless commun ._ , vol . 9 , no . 3 , pp .11201127 , 2010 .w. kim , j. park , j. yoo , h. j. kim , and c. g. park , `` target localization using ensemble support vector regression in wireless sensor networks , '' _ ieee trans . on cybernetics _ ,43 , no . 4 ,11891198 , 2013 .
herein , the problem of simultaneous localization of two sources given a modest number of samples is examined . in particular , the strategy does not require knowledge of the target signatures of the sources _ a priori _ , nor does it exploit classical methods based on a particular decay rate of the energy emitted from the sources as a function of range . general structural properties of the signatures such as unimodality are exploited . the algorithm localizes targets based on the rotated eigenstructure of a reconstructed observation matrix . in particular , the optimal rotation can be found by maximizing the ratio of the dominant singular value of the observation matrix over the nuclear norm of the optimally rotated observation matrix . it is shown that this ratio has a unique local maximum leading to computationally efficient search algorithms . moreover , analytical results are developed to show that the squared localization error decreases at a rate for a gaussian field with a single source , where scales proportionally to the number of samples .
monotonicity is one of the simplest property a signal may have .it offers a powerful qualitative description ( `` it goes up , '' `` it goes down '' ) . given data coming in from either sensors or from a numerical simulation ,monotonicity is independent of the sampling frequency and is robust with respect to missing data .many geometrical objects such as curves are typically defined in a parametrization - independent way which makes monotonicity appealing . in this paper, we are concerned with discretely sampled curves ( which we call chains ) such as the trajectory of a particle in some vector space .this problem has applications in motion capture and tracking .we expect a `` smooth '' scalar - valued signal not to change too quickly : it should be locally constant .therefore , classical low pass filters such as the moving average ( ma ) are often sufficient to help smooth signals . unfortunately ,`` smooth '' chains are not locally constant : consider a loosely sampled circle ( see fig . [ filteredcircle ] ) . moreover, a chain may lie on a sphere or other higher dimensional surface and we may need to preserve this embedding . in fig .[ filteredcircle ] , a chain on a circle is filtered using a moving average : we see that the filtered chain can , at best , follow a circle of a smaller radius . a filter is sphere - preserving ( resp .circle - preserving ) if , when the input data points are on a sphere ( resp . circle ) , the filtered data points also lie on the same sphere ( resp . circle ) .it is readily shown that no linear filter except the identity can be sphere - preserving ( sp ) or circle - preserving ( cp ) . in general ,an sp filter is cp .we offer a simple sp filter in section 5 .one of the main contribution of this paper is to provide a generalization of the concept of monotonicity which applies to vector - valued signals and to curves .this definition is shown to be robust with respect to removal of data points and to be efficiently computed . over curves , we show that monotone curves have many of the same properties as monotone functions as far as continuity and differentiability are concerned .we also propose a sp filter which we show to never decrease the degree of monotonicity .experimentally , we show that the degree of monotonicity is inversely correlated with noise and we compare the sp filter with simple ma filters , proving the nonlinear sp filter is a good choice when noise levels are low . applications of this work include chain reconstruction from unordered data points and optical character recognition .a motion signal is comprised of two components : orientation and translation . the orientation vector indicates where the object is facing , whereas the translation component determines the object s location .recent work has focused on smoothing the orientation vectors , whereas the results of the present paper apply equally well to orientation vectors ( points on the surface of a unit sphere ) as to arbitrary translation signals . in , the authors chose to define monotonicity for curves or chains with an arbitrary direction vector : a curve is monotone if its projection on a line is does not backtrack .while this is a sensible choice given the lack of definition elsewhere , we argue that not all applications support an arbitrary direction that can be used to define monotonicity . 
the definition of monotonicity has been extended to real - valued functions ( ) by using contour lines ( or surfaces ) but the idea does not immediately generalize to curves and chains .one approach to chain smoothing is to use b - splines and bezier curves with the norm .correspondingly , we could measure the `` smoothness '' of a chain by measuring how closely one can fit it to a smooth curve .our approach differs in that we do not use polygonal approximations or curve fitting : we consider chains to be first - class citizens .recall that a function is said to be monotone increasing if whenever and monotone decreasing if whenever .a monotone increasing or monotone decreasing function is said to be monotone . recall that is called a ( closed ) ball of radius centered around : in the multidimensional case , the ball is a generalization of the ( closed ) interval . is monotone if and only if is connected for all balls .an arc - length parametrized curve is -monotone for if the inverse image of any closed ball of radius at most , under , is connected .straight lines are -monotone for all . as motivation the discrete case ,we want to compare monotone curves with monotone functions .monotone functions are differentiable almost everywhere , and they do not have to be continuous .-monotone also do not have to be continuous : the curve where a.e .is -monotone for all . moreover , they are also differentiable a.e . as the next proposition shows .continuous -monotone curves are differentiable a.e .take any point in the ( open ) domain of the curve .choose another point so that the arc - length over is smaller than .consider any point on between and , then must be contained in all balls of radius containing both and .it follows that must be differentiable from the left at .similarly , is differentiable from the right at .if the two derivative from the left and from the right do not match , then it is possible to find and close to from the left and the right such that there is a ball of radius containing both and but not , a contradiction . just like monotone functions , continuous -monotone curvesdo not have to be twice differentiable , consider the arc - length parametrized version of for .differentiable functions are not necessarily monotone .likewise differentiable curves are not necessarily -monotone as the next proposition shows .there is a differentiable continuous finite curves with no cross - over ( that is , is one - to - one ) which is not -monotone for any . consider a curve following a inward spiral around a fixed point such as for ] is a set of consecutive integers ] .equivalently , the values of the signal never go down ( ) or never go up ( ) .another equivalent definition is given by the next proposition .a scalar - valued signal is monotone if and only if , for any 3 consecutive samples , , the index set of the values contained in any closed interval ] .equivalently , the index set is a convex set under an appropriate definition of convexity .it is easy to extend this definition of monotonicity to the case of vector - valued signals .unfortunately , a straightforward generalization , based on considering the set of indices of the values contained in any closed ball , would lead us to conclude that the only monotone vector - valued signals are on straight lines and never backtrack .it is not hard to realize no sensible filter could turn any vector - valued signal into a monotone signal . 
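for the scalar case, the ball - based characterisation is easy to check numerically: a finite signal is monotone exactly when, for every closed interval, the indices of the samples falling inside it form a block of consecutive integers, and it suffices to test intervals whose endpoints are sample values, since any violating interval can be shrunk to one of that form. a small python sketch:

....
# scalar monotonicity versus the interval (inverse - image) criterion
from itertools import combinations_with_replacement

def is_monotone(x):
    up = all(a <= b for a, b in zip(x, x[1:]))
    down = all(a >= b for a, b in zip(x, x[1:]))
    return up or down

def interval_criterion(x):
    for lo, hi in combinations_with_replacement(sorted(set(x)), 2):
        idx = [i for i, v in enumerate(x) if lo <= v <= hi]
        if idx != list(range(idx[0], idx[-1] + 1)):
            return False
    return True

for sig in ([1, 2, 2, 5], [5, 3, 3, 1], [1, 3, 2, 4], [0, 1, 1, 0]):
    assert is_monotone(sig) == interval_criterion(sig)
    print(sig, "monotone" if is_monotone(sig) else "not monotone")
....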
in order to obtain nontrivial results , we need to restrict the class of balls considered , as in the following definition . a vector - valued signal has a degree of monotonicity if is the largest value such that , considering only 3 consecutive samples , , the index set of the values contained in any closed ball of radius at most is a set of consecutive integers in .if the signal values are on a straight line with no backtracking , then the degree of monotonicity is , and the degree of monotonicity is always larger than for finite signals .[ monotonicityfailure ] gives an intuitive view of the degree of monotonicity .this measure of monotonicity is robust in the following sense .if one point is omitted from a vector - valued signal , the degree of monotonicity can not decrease . while this discrete definition is similar to the definition given for -monotone curves , to allow efficient computation, we consider only sets of 3 consecutive samples , thus replacing a global problem by a local problem .if we lift the requirement that only 3 samples are considered , then a signal is -monotone if and only if all subchains of length are -monotone .this suggests that the cost of checking global -monotonicity grows in a cubic fashion with respect to the length of the signal which is unacceptable for most applications . in practical applications , maximizing the degree of monotonicity leads to useful chains .for example , noise tends to reduce by creating sharp turns and local backtracking and a highly monotone curve ( large ) is more likely to be noise - free . on the other hand ,when reconstructing chains from unordered sets of points , as happens in computer vision , we often want to minimize sharp turns and backtracking .therefore , solving for the chain maximizing while passing through all available data points is a sensible `` curve reconstruction '' strategy . as a prerequisite to computing the degree of monotonicity, we need a computationally effective way to compute the radius of the circle going through 3 points .given , we can compute the radius of the circle passing through them ( denoted ) by first computing , , , , and then we have the classical heron s formula for the radius of the circle : whenever .the next theorem gives us a way to compute the ( local ) degree of monotonicity for any 3 points , to compute the degree of monotonicity of an entire signal simply requires , by definition , to take the * minimum * of the result for all consecutive 3 points .the theorem essentially says that if , the degree of monotonicity is then half the distance between and , and otherwise , it is ( see fig . [ computingr ] ) . to see that this local form of monotonicity is distinct from the global form suggested earlier , consider a chain in the form of a figure `` * * 8**. '' [ formulathm ]the degree of monotonicity for the sequence is [ prop : formula ] where , , .consider the disk containing and , centered at and having radius .the point is outside the disk if and only if is positive .thus , is outside the disk if and only if . clearly .next , suppose that is in the disk .we have that any ball containing and but not must be larger than since is the smallest ball containing both and . 
now, suppose there is a ( closed ) ball of minimal radius containing and , but not .this implies a non - zero distance , , between and .we have that the center of the ball has to be away from the line formed by : if not then it must be a ball containing .this means we can move the center of the ball slightly closer to and while reducing the radius just enough so that remains outside the ball . by repeating this process , we show that , a contradiction . hence , there is no ( closed ) ball of minimal radius containing and , but not . hence have a degree of monotonicity .in this section , we propose a sp filter which never decreases the degree of monotonicity of the signal . given a signal , we consider recursive ( iir ) filters of the form to ease the notation , we write , , , , so that the equation becomes let be the degree of monotonicity of computed as the following proposition gives us a condition of to increase the monotonicity of a vector - valued signal . given , if is such that the degree of monotonicity then the recursive filter never decreases the degree of monotonicity of a signal .it seems that should be chosen so that is as large as possible .to maximizes with , should be either or .in other words , we improve monotonicity best when we make the sample `` virtually disappear . '' is minimized when or and these choices are unique unless in which case any point on the arc of the circle between and inclusively qualifies .fortunately , we can easily define a more interesting sp filter .given an arc of a circle , denoted , and a point , we can project on by solving for the point closest in .the projection onto a circle can be determined easily using only linear algebra . in the plane ,start with equation and substitute 3 values of , getting 3 equations . by pairwise subtraction, we can remove the unknown , and be left with linear system having 2 equations and 2 unknowns ( the center of the circle ) .we apply this by first projecting on the circle and if the projected point does not belong to the given arc we move it to the closest point on the arc ( an endpoint of the arc ) .let us define to be the projection of on the arc of the circle , and define to be the projection of on the arc of the circle .intuitively , either point or would make a good choice for . to ensure that the degree of monotonicity is never decreased , we set this function can be computed quickly and is sphere - preserving .we generate a chain in the plane by regularly sampling a unit circle 3 times for a total of 30 samples . a ma filter with window width averages each data points .we add white noise to every point in the chain and we filter it using simple ma filters with window widths of 3 and 5 samples as well as with the sp filter of the previous section .each test is repeated 10 times and we keep only the averages . 
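a python sketch of this experiment, together with the underlying degree - of - monotonicity computation, is given below. following the theorem above, the local degree of a triple is taken to be half the distance between the outer points when the middle point lies outside their diametral disk, and the circumradius obtained from heron's formula otherwise; the noise level, the random seed and the handling of the chain end points in the moving average are illustrative assumptions.

....
# degree of monotonicity of a chain and the circle - plus - noise experiment
import numpy as np

def local_degree(a, b, c):
    ab = np.linalg.norm(b - a)
    bc = np.linalg.norm(c - b)
    ca = np.linalg.norm(a - c)
    if np.linalg.norm(b - (a + c) / 2) > ca / 2:   # b outside the diametral disk
        return ca / 2
    s = (ab + bc + ca) / 2
    area2 = max(s * (s - ab) * (s - bc) * (s - ca), 0.0)   # heron
    if area2 == 0.0:                               # collinear with b between a and c
        return np.inf
    return ab * bc * ca / (4 * np.sqrt(area2))     # circumradius

def degree_of_monotonicity(chain):
    return min(local_degree(chain[i], chain[i + 1], chain[i + 2])
               for i in range(len(chain) - 2))

def moving_average(chain, width=3):
    half = width // 2
    out = chain.copy()
    for i in range(half, len(chain) - half):       # end points left untouched
        out[i] = chain[i - half:i + half + 1].mean(axis=0)
    return out

rng = np.random.default_rng(0)
t = np.linspace(0.0, 3 * 2 * np.pi, 30, endpoint=False)   # unit circle sampled 3 times
chain = np.column_stack([np.cos(t), np.sin(t)])
noisy = chain + rng.normal(scale=0.005, size=chain.shape)  # illustrative noise level

for name, c in (("clean", chain), ("noisy", noisy), ("noisy + ma(3)", moving_average(noisy))):
    print(f"{name:>13}: degree of monotonicity = {degree_of_monotonicity(c):.3f}")
....

for the clean chain the degree equals the circle radius, and comparing the three printed values at different noise scales reproduces the qualitative behaviour discussed next: noise lowers the degree, and at very low noise the moving average can lower it further by pulling the samples onto a smaller circle.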
fig .[ monversusnoiseversuswidth ] shows the degree of monotonicity versus the noise level ( mean square error ) with the three smoothing filters and the unfiltered chain .the noise level ranges from none to over 0.05 ( mse ) which corresponds roughly to a 5% noise - to - signal ratio .an example of filtering is given in fig .[ visualcompared ] .in the unfiltered chain , the degree of monotonicity is inversely correlated with the noise level : the pearson correlation is ( 90% ) .the degree of monotonicity seems a good indicator of noise , which in particular suggests that a method for increasing the degree of monotonicity would also function as a good noise reduction technique .as required , the sp filter always increases the degree of monotonicity with respect to the unfiltered data .simple ma filters * decrease the degree of monotonicity * when noise levels are low , and more aggressive filtering ( window width of 5 versus 3 ) even more so .the result of aggressive lowpass filtering on the curvature of a chain is explained by fig .[ filteredcircle ] .the relative performance of filters over chains can vary depending on the level of noise and the distance between the points : as noise levels increase , the sp filter is less competitive .the design of sphere - preserving filters optimally increasing the degree of monotonicity is an open problem .k. agarwal , s. har - peled , n. h. mustafa , and y. wang , near - linear time approximation algorithms for curve simplification , in _ proceedings of the tenth annual european symposium on algorithms _ , springer - verlag , london , uk , 2002 , 2941 .h. carr , j. snoeyink , and u. axen , computing contour trees in all dimensions , in _ proceedings of the eleventh annual acm - siam symposium on discrete algorithms _, acm press , new york , ny , usa , 2000 , 918926 .m. van kreveld , r. van oostrum , c. bajaj , v. pascucci , and d. schikore , contour trees and small seed sets for isosurface traversal , in _ proceedings of the thirteenth annual symposium on computational geometry _ , acm press , new york , ny , usa , 1997 , pages 212220 .
chains are vector - valued signals sampling a curve . they are important to motion signal processing and to many scientific applications including location sensors . we propose a novel measure of smoothness for chains and curves by generalizing the scalar - valued concept of monotonicity . monotonicity can be defined by the connectedness of the inverse image of balls . this definition is coordinate - invariant and can be computed efficiently over chains . monotone curves may be discontinuous , but continuous monotone curves are differentiable a.e . over chains , a simple sphere - preserving filter is shown to never decrease the degree of monotonicity . it outperforms moving average filters over a synthetic data set . applications include time series segmentation , chain reconstruction from unordered data points , optical character recognition , and pattern matching .
more than 95% of all catalogued blazars have been found in either shallow radio or shallow x - ray surveys ( e.g. see padovani , these proceedings ) . because of the range of blazar spectral energy distributions ( sed ) the two selection methods yield different types , the `` red '' objects ( with the peak of the synchrotron emission at ir - optical wavelengths , lbl ) in radio samples , and the `` blue '' ( whose synchrotron emission peaks at uv - x - ray wavelengths , hbl ) in x - ray samples. the differences in the seds do reflect different physical states but only as the extrema of an underlying continuous population .the relative space densities of the different types , not to mention their absolute space densities or their evolution in cosmic time still remain indeterminate .different scenarios predict a difference of two orders of magnitude ( ! ) in the ratio of the `` red '' and `` blue '' types , nevertheless the presently available samples are unable to distinguish between them .the blazar demographics are this uncertain essentially because the flux limits of current complete samples are high , so only the tip of the population is sampled .the interpretation of observed phenomenology depends on the complicated sensitivity of diverse surveys to a range of spectral types .ultimately , this means we do not know which kind of jets nature preferentially makes : those with and high and ( `` blue '' blazars ) or low and ( `` red '' blazars ) .we also do not know whether they evolve differently and/or if `` red '' blazars dominate at high redshift and evolve into `` blue '' blazars at low redshift , and what is the relationship between the `` non - thermal '' and `` thermal '' power / components .the implications for understanding jet formation are obvious . herewe present a concise account of the preliminary results of numerical simulations of a set of unification models , including an actual fit of the model parameters to reproduce the general characteristics of a few reference samples ( 2 ) .we also introduce a `` concept '' experiment , devised to address the role of selection effects ( 3 ) , and discuss a couple of issues that are connected to this problem . in 4 we comment on future developments .we compared the existing surveys with a set of three alternative unified schemes , following the discussion developed in recent years after padovani & giommi ( 1995 ) , and fossati et al .( 1997 , 1998 ) .they are : i ) the `` radio leading '' , where the primary luminosity is the radio one and n .ii ) the `` x ray - leading '' , where the primary band are the x ray , and n .iii ) the `` bolometric '' , where the sed properties ( and in turn the distribution of l/l , i.e. the balance between lbl and hbl ) are determined by the total power of the source , with hbls being the less powerful objects . in fossati1997 ) the input parameters of each model were pre - set to values based on those of the observed samples .the most interesting results was the success of the new model , the bolometric one .= 0.32=0.32=0.32 in this work our approach is different . first we normalize / optimize each unifying scheme by performing an _ actual fit _ to three reference samples ( emss , slew , 1 jy ) .we leave free to vary 78 variables , such as the normalization and slope of the primary luminosity function , and the distribution of the l/l ratio / l : for the bolometric scenario we allow for a spread in the relationship between peak frequency and luminosity . 
for the radio and x ray leading scenarios we use the combination of two gaussians , for which we fit the mean , sigma and area . ] . for those parameterfor which there is a measured value ( e.g. the luminosity function ) we allowed their values to move within their 2 interval .the observational quantities to reproduce were the number , and average radio and x ray luminosities of hbls and lbls .the technique used for the fit is _ `` simulated annealing '' _ ( e.g. kirkpatrick et al . 1983 ) , which is based on statistical mechanics , and implemented via montecarlo .it is a very robust technique , very well suited for many parameter fits .moreover the `` global '' nature of the technique is very effective for cases where there might be multiple secondary local minima in the parameter space . in fig . 1 we show examples of the evolution of the fit .we here just point out interesting results concerning two of the `` core '' issues : i ) the best fit of the bolometric model requires a finite width for the l relationship ( see fig .the best fit value is , i.e. at any given l the synchrotron peak frequency will be distributed as a gaussian of width centered at the value determined by the relationship .ii ) in both the radio and x ray ( not shown ) leading cases the best fit l/l distribution is basically a single , broad , gaussian ( see fig .for the radio leading case the lbl gaussian comprises 98% of the total area , and it is centered at with . = 0.34=0.34=0.34 the next step is to use the results of the fits to predict the properties of samples that have not been used to optimize the parameters of the models .we present here only the integral log(n)log(s ) curves , and we only sub - divide the samples in hbl / lbl ( according to the values of f/f ) . it is worth noting that the absolute normalizations may not be completely reliable , because of uncertainties on the sky coverage . the uncertainty on the ( details of ) sky coverage is indeed probably the main one involved in the simulations .the relative fraction of hbl and lbl may be a more robust parameter , and it is the one more easily amenable to a quick comparison . [ [ section ] ] the dxrbs sample ( perlman et al .1998 ) is still in progress , but an `` off record '' comparison of the predicted log(n)log(s ) ( shown if fig .2a , b ) with the observed one seems to show that the models are ( still ) in good agreement with the data .the predictions of the bolometric and radio leading models become radically different below about 100 mjy , a domain now reachable . the lbl / hbl density ratio at a few radio flux limitsare the following : .... flux bolometric x - ray leading radio - leading mjy 9.7 6.9 6.6 mjy 6.2 5.7 6.4 mjy 3.2 5.2 6.1 .... [ [ section-1 ] ] the `` sedentary '' sample ( giommi , menna & padovani 1999 ) comprises only hbls because of the built in cut in . the radio log(n)log(s )is shown in fig .2c , where the grey square represent a density mjy between the actual `` sedentary '' and the emss , showing that there is a quite good agreement . 
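as a rough illustration of the fitting machinery , the sketch below implements a generic simulated annealing loop of the kirkpatrick et al . ( 1983 ) type , driven by a placeholder objective ; in the actual fit the objective would be the misfit between the simulated and observed numbers and average radio / x - ray luminosities of hbls and lbls , computed by the montecarlo survey simulation , and the target values , step sizes and cooling schedule used here are arbitrary assumptions .
....
import numpy as np

rng = np.random.default_rng(1)

def objective(params):
    """Placeholder misfit between simulated and observed sample statistics.
    A real fit would run the Monte Carlo survey simulation here; this
    stand-in just penalizes distance from an arbitrary target vector."""
    target = np.array([1.0, -0.5, 2.0, 0.3])
    return float(np.sum((params - target) ** 2))

def simulated_annealing(x0, n_steps=20_000, t0=1.0, cooling=0.999, step=0.1):
    """Generic simulated annealing: accept uphill moves with probability
    exp(-delta/T), so the search can escape secondary local minima."""
    x, fx, temp = x0.copy(), objective(x0), t0
    best_x, best_f = x.copy(), fx
    for _ in range(n_steps):
        cand = x + step * rng.standard_normal(x.size)
        fc = objective(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        temp *= cooling
    return best_x, best_f

print(simulated_annealing(np.zeros(4)))
....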
here , as for the dxrbs , we do not plot the predictions of the x ray leading model because they are not satisfactory .in fact this scenario does not seem to be able to explain the properties of these recent samples , at least with its parameters set at the best fit values .= 0.43=0.43 in fig .3a , b we show the predictions of the 3 scenarios for the number densities of hbls and lbls in radio surveys with a secondary cut in optical magnitude at m=20 , and m=22 .we see that the x ray leading model is giving a substantially different answer from the two other competing models , which seem to agree over most of the accessible radio flux range .the bolometric and radio leading models actually start to give different predictions only at very faint radio fluxes , as seen in fig .3b . in the radio leading model the radio counts of hbls and lbls keep a fixed ratio by definition , while in the bolometric picture hbls are deemed to eventually outnumber the lbls , but this seems to happen at radio fluxes lower that expected .however , we are not far from the range of radio fluxes that will be the most sensitive to discriminate among the different pictures .actually there are already a few samples going deep enough . to try to assess the problem of selection effects we introduced the `` cube '' .( fossati & urry , in preparation ) , a toy model stripped down of every a priori assumption as to the presumed intrinsic properties of the sed .we assume that the radio / optical / x ray luminosities are completely un - correlated , and we take simple power law luminosity functions .we then simulate samples of sources that would be selected by a generic flux limited radio or x ray survey ( including the flux dependent sky coverage ) , with a possible additional cut in another spectral band .an example of the results of this exercise is shown in fig 4a .it seems to be relatively `` natural '' to obtain patterns in a color color diagram which looks like those that are actually observed , and promptly interpreted as tracing intrinsic properties of the sources .of course the `` cube '' is not able to reproduce the large variety of patterns and correlations observed in luminosity luminosity , color color diagrams , nevertheless we regard it as a very instructive example of how careful we need to be when dealing with selection effects .= 0.40=0.40 figure 4b shows the log(n)log(s ) of _ observed _ extreme hbl ( lower dashed line ) and of _ intrinsic _ extreme hbl ( upper solid line ) , defined as such according to the observed or intrinsic x ray / radio ratio . because of the k correction and their sed shape , blazars systematically shift towards the lbl side when seen at higher redshift , when `` classified '' on the basis of the _ observed _ x / radio ratio .the effect can be sensible when comparing relative populations of hbl and lbl .on the basis of the analysis presented here , we think that there might be already enough information available to proceed to constrain meaningfully the main features of unified scenarios .the comparison of observed samples with simulations performed in a systematic fashion ( e.g. by means of simultaneous fit ) may provide an extremely powerful and effective tool to address the problem of the intrinsic properties of blazars .in fact , although there is not a single sample comprising all the desirable characteristics to provide the least possible biased picture of the intrinsic properties of blazars , the f/f plane is now well covered ( see fig . 
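the `` cube '' experiment can be mimicked with a few lines of monte carlo : draw mutually uncorrelated radio , optical and x - ray luminosities from simple power - law luminosity functions , assign random distances , and keep only the sources passing a primary x - ray flux limit plus a secondary optical cut . the slopes , flux limits and units below are arbitrary placeholders , but the selected subsample typically shows correlations in the resulting flux - ratio ( color color ) plane even though none were put in , which is the point of the exercise .
....
import numpy as np

rng = np.random.default_rng(2)

def powerlaw_sample(n, lmin, lmax, slope):
    """Draw luminosities from dN/dL ~ L^(-slope) via inverse-transform sampling."""
    u = rng.random(n)
    a = 1.0 - slope
    return (lmin**a + u * (lmax**a - lmin**a)) ** (1.0 / a)

n = 200_000
# mutually uncorrelated luminosities in three bands (arbitrary units and slopes)
l_radio = powerlaw_sample(n, 1e0, 1e4, 2.0)
l_opt   = powerlaw_sample(n, 1e0, 1e4, 2.0)
l_x     = powerlaw_sample(n, 1e0, 1e4, 2.0)
# uniform-in-volume distances, no cosmology
d = rng.random(n) ** (1.0 / 3.0) * 100.0
f_radio, f_opt, f_x = (l / d**2 for l in (l_radio, l_opt, l_x))

# a flux-limited "survey" with a secondary cut in another band
survey = (f_x > 1e-3) & (f_opt > 1e-4)
color_xr = np.log10(f_x[survey] / f_radio[survey])    # crude x-ray/radio "color"
color_or = np.log10(f_opt[survey] / f_radio[survey])  # crude optical/radio "color"
print(survey.sum(), np.corrcoef(color_xr, color_or)[0, 1])
....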
1 , 2 in padovani s contribution ) .moreover , the quality of the most recent samples will allow to compare the predictions and the data directly by using the distribution of the , an important step forward and past some confusion created by selection effects combined with the `` two bins '' approach ( e.g. 2.5 ) .if the selection biases of each of surveys can be regarded as being under control ( and therefore reliably implemented in the simulations ) we may soon be able not only to test a given unified scheme , but even to derive directly from the data what should be the general properties of a successful unified scheme . finally , we think that more than ever it is necessary to shift the focus away from the bl lacs sub - class , because this could still be the source of significant confusion .the best progress could be made by considering the bl lacs fsrqs relationship as a whole , also from the observational point of view .the bolometric scenario was meant from the beginning to unify bl lacs and fsrqs , and it tries to connect some basic physical ideas to the observed phenomenology .on the other hand we need to figure out how to explain the hbl / lbl ratios assumed by the radio and x ray leading scenarios , and in turn how to extend these models to include smoothly the fsrqs .there should be a way to tell from `` first principles '' on which side of the 1/1010/1 range the real value of n/n ratio is more likely to belong .i d like to thank the organizers for a great workshop , and for bearing with my request of delaying my talk by one day , and ilaria cagnoni for very kindly accepting to swap our talks in the schedule .i also thank _ pippol _ for the neverending support .fossati , g. et al .1997 , , 289 , 136 fossati , g. et al .1998 , , 299 , 433 giommi , p. , menna , m.t . , & padovani , p. 1999, , 310 , 465 kirkpatrick , s. et al .1983 , science , 220 , 671 padovani , p. , & giommi , p. 1995, , 444 , 567 perlman , e. , et al .1998 , , 115 , 1253
we discuss the preliminary results of an extensive effort to address the fundamental , and yet un - answered , question that can be trivialized as : `` are there more blue or red blazars ? '' . this problem is tightly connected with the much debated issue of the unified picture(s ) of radio loud agns , which in turn revolves around the existence and the properties of relativistic jets . we address this question by comparing simultaneously the properties of the collection of heterogeneously selected samples that are available now with the predictions of a set of plausible unification scenarios . we show that it is already possible to make significant progress even by using only the present samples . the important role of selection effects is discussed . for instance we show that the multiple flux selections typical of available surveys could induce some of the correlations found in color color diagrams . these latter results should apply to any study of flux limited samples .
with the continuous growth of wireless networks and emerging new technologies such as wimax and lte , wireless networks security has received extensive attention .current popular security schemes , e.g. public key cryptography , are based on computationally secure trapdoor one - way functions .these schemes depend on the assumption that it is _ hard _ for an attacker to decipher the message without knowing the trapdoor ( i.e. the secret key ) .however , these schemes do not prevent a computationally unlimited attacker from decrypting the message without knowing the trapdoor as it is not proven yet that one - way functions can not be inverted efficiently .therefore , these schemes are not _ provably secure_. information theoretic secrecy , on the other hand , introduces the possibility of having perfectly secure communication independently from the computational capabilities of the attacker .in particular , shannon proved that , using a shared secret key , the achievability of _ perfect secrecy _ requires that the entropy of be at least equal to the entropy of the message ( i.e. , ) .wyner showed that it is possible to send perfectly secure messages at a non - zero rate , _ without _ relying on secret keys or any limiting assumptions on the computational power of the wiretapper , under the condition that the source - wiretapper channel is a degraded version of the source - destination channel .this was later extended to the non - degraded scenario in . in ,the effect of fading on the secrecy capacity was studied and it was shown that distributing the message across different fading realizations actually increases the secrecy capacity . although information theoretic security schemes provide provable security , they have been considered * not practical * due to the simplifying assumptions they have to prove their security .recently , we have introduced a number of practical and provably - secure protocols for * _ two - node _ * communication based on information theoretic concepts .our work in exploits the multi - path nature of the wireless medium to provide * practical * information - theoretic security in channels with feedback .the basic idea is to distribute the secret key among multiple arq frames .this concept has been used to enhance the security of practical wi - fi and rfid protocols at the expense of slight loss in throughput .direct extensions of these two - node schemes to the * multi - node * case , by applying the protocol to each pair of communicating nodes , lead to a considerable waste of throughput .this is due to optimizing each pair independently , extending the two - node overhead to the multi - node case . in this paper, we present a practical and provably - secure scheme at the presence of a passive eavesdropper that is designed for the multi - node case from the beginning .our scheme is based on a novel two - phase approach : in the first phase , i.e. the selection phase , a node is selected as the transmitter using information theoretic techniques that hide the identity of the selected node . in the second phase , i.e. 
the data transmission phase , data frames are transmitted without the source / destination i d in the packet header .this leads to ambiguity at the eavesdropper .the length of the data transmission phase can be tuned to trade - off secrecy and efficiency .nodes not selected at the selection phase can sleep to the next cycle , further reducing their energy consumption .we present different variations of the basic scheme , all having the same overhead , that can achieve different secrecy - fairness trade - offs .we evaluate our proposed schemes both analytically and through implementation over crossbow telosb motes , equipped with cc2420 radio chips .our evaluation shows that the scheme can achieve both significant secrecy gain and decrease in overhead as compared to direct extensions to the two - node schemes .the rest of the paper is organized as follows : we define the system model in section [ sec : model ] .section [ sec : basic ] presents the basic scheme . in section [ sec : extended ] , we present four different extensions to the basic scheme that can achieve different secrecy - fairness trade - offs .we analyze the proposed schemes in section [ sec : analysis ] along with the system implementation .we finally conclude the paper in section [ sec : conclude ] .we consider a network with legitimate nodes in the presence of a passive eavesdropper ( eve ) .we assume a star topology , where all the traffic between nodes has to go through a central node , i.e. a coordinator .this is common in wlans , cellular , and sensor networks .this coordinator ( e.g. access point , base station , or gateway ) is responsible for controlling the transmission in the network and assigning turns .all nodes are equipped with half - duplex antennas .we further assume a time - slotted communication system , where all nodes are synchronized ( figure [ fig : network ] ) . for space constraints, we also assume that all nodes have equal load and eve can not differentiate between nodes based on power .we leave the general case to a future paper . to further remove the need of acknowledgment ,each message is erasure - coded into frames such that the reception of any frames at the receiver can be used to reconstitute with high probability .note that using erasure coding does not give any advantage to eve as ( a ) she can not determine the identity of the transmitter and ( b ) there is no message level error detection ( only crc at the frame level ) .all system parameters are assumed to be known to the eavesdropper along with the details of the technique , but not the instantaneous random values .each node needs to send and receive frames , each of bits .table [ symbol_table ] summarizes the different symbols we use in the paper ..symbols used in the paper . [ cols="^,<,<",options="header " , ] [ tab : schemes ]to allow for two - way communication , we need to specify which slots within a session in the data transmission phase will be from / to the coordinator . in order to do that , we add a third short direction determination phase between the selection and data transmission phases in which the co sends bits , using dialog codes again , where each bit corresponds to a slot in the data transmission session . a bit set to 1 ( 0 )corresponds to a from ( to ) co slot .note that the node i d is not sent in this phase .therefore , eve can not know the identity of the selected node . in the data transmission phase , a node will follow the schedule received during the direction selection phase . 
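the role of the erasure coding can be illustrated with a small monte carlo estimate of the probability that enough frames of a message survive a lossy channel to reconstitute it ; the frame counts and loss rate below are placeholders , not values from the paper .
....
import numpy as np

rng = np.random.default_rng(3)

def p_message_recovered(n_coded, k_needed, p_loss, trials=100_000):
    """Monte Carlo estimate of the probability that at least k_needed of the
    n_coded erasure-coded frames arrive, so the message can be rebuilt."""
    received = rng.random((trials, n_coded)) > p_loss
    return float(np.mean(received.sum(axis=1) >= k_needed))

# e.g. a message split into 8 frames, coded into 12, over a 10% loss channel
print(p_message_recovered(n_coded=12, k_needed=8, p_loss=0.10))
....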
in the rest of this section , we present four different schemes for assigning the schedule between the different nodes and from / to the coordinator .the different schemes can achieve different fairness - security goals as we quantify in section [ sec : analysis ] .fairness refers to balancing the access opportunity within nodes and between the from / to coordinator traffic .therefore , we have four combinations of fairness : node fairness ( short and long term ) and direction fairness ( short and long term ) . in all schemes ,all nodes have to finish one message of transmission before any node can start a new message for fairness purposes .table [ tab : schemes ] compares the different schemes .we start by some notations followed by the details of the four schemes .the following notations are illustrated in figure [ fig : extensions ] . *a * session * is a group of slots that represent a single selection phase followed by a data transmission phase .the data phase of each session contains frames . * a * round *is defined as a group of sessions , in which each one of the nodes is assigned one session . *a * supersession * is a group of sessions ( ) in which _ all nodes _ finish the transmission of one message ( i.e. a transmission of frames in each direction from / to the coordinator ) . *a * node supersession * is a group of sessions that belong to _ one _ node in which this node finishes the transmission of one message .this scheme combines short term node fairness and short term direction fairness ( figure [ fig : fn_fd ] ) .in particular , all nodes must take a turn within the round ( in a random fashion ) before a node can be assigned another turn by the coordinator .the number of from / to slots within each data transmission session are equal .therefore , both node and direction distributions have short term fairness .this scheme , however , reduces the ambiguity at the eavesdropper and hence decreases security as we quantify in the next section . in this scheme ( figure [ fig : rn_fd ] ), the number of from / to frames within each data session has to be equal ( achieving direction fairness ) .however , the sessions assigned to a specific node can be anywhere within the supersession ( random node selection ) , i.e. there are no rounds . in this scheme ( figure [ fig : fn_rd ] ) , each node has to be selected at least once before another node gets a second chance . in other words ,each node will take a turn within the round .the direction of traffic from / to the coordinator need nt be balanced within a session , but is balanced on the long term in the supersession .the send / receive queue at the coordinator though may not be balanced due to the random direction assignment .the constraints of the long term fairness over the direction of traffic increases the amount of state that needs to be kept at the coordinator . in this last scheme, the coordinator divides the sessions among the nodes and the from / to traffic randomly within the supersession .therefore , both node and direction distributions do not have short term fairness ( figure [ fig : rn_rd ] ) .this scheme has the advantage of increasing the ambiguity at the eavesdropper and hence increasing security .however , it lacks short term fairness and the coordinator has to keep track of more state for the long term fairness .in this section , we analyze the different schemes through analysis and simulation in terms of security , overhead , and fairness . for security , we have two modes , depending on the eavesdropper goal . 
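the sketch below generates toy schedules in the spirit of the four schemes : a flag for short - term node fairness ( every node appears once per round ) and a flag for short - term direction fairness ( equal from / to frames within each session ) . it is a simplified sketch that ignores the long - term balancing constraints kept by the coordinator in the rn and rd variants .
....
import numpy as np

rng = np.random.default_rng(4)

def schedule(n_nodes, sessions_per_node, frames_per_session,
             node_fair=True, dir_fair=True):
    """Assign each session a node and a from/to-coordinator direction per frame.
    node_fair : every node appears once per round (short-term node fairness);
    dir_fair  : each session carries an equal number of from/to frames."""
    if node_fair:
        rounds = [rng.permutation(n_nodes) for _ in range(sessions_per_node)]
        nodes = np.concatenate(rounds)
    else:
        nodes = rng.permutation(np.repeat(np.arange(n_nodes), sessions_per_node))
    dirs = []
    for _ in nodes:
        if dir_fair:
            d = np.array([0, 1] * (frames_per_session // 2))
            rng.shuffle(d)
        else:
            d = rng.integers(0, 2, frames_per_session)
        dirs.append(d)
    return nodes, np.array(dirs)

# the RN-RD variant: random node order, random directions
nodes, dirs = schedule(n_nodes=4, sessions_per_node=3, frames_per_session=6,
                       node_fair=False, dir_fair=False)
print(nodes, dirs.sum())
....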
in the first mode , ( single node )the eavesdropper is only interested in the messages of a specific node .the second mode assumes that the eavesdropper is interested in the entire network traffic .let the selected node for the attack be .we analyze the security of the four different schemes .note that the total number of slots for all nodes to transmit one message in each direction each is .the corresponding total number of sessions therefore is and the number of sessions allocated to a single node is .[ [ rn - fd - scheme ] ] * rn - fd scheme * + + + + + + + + + + + + + + in order for eve to guess the message of , it needs to guess the sessions assigned to .this occurs with probability in addition , eve has to guess the direction of the frames to avoid mixing the packets from / to the coordinator .this occurs with probability : therefore , the outage probability for this scheme , for an -node network ( ) is : [ [ fn - rd - scheme ] ] * fn - rd scheme * + + + + + + + + + + + + + + in this case , eve needs to decide in each round which session belongs to .therefore the probability of correctly guessing eve s sessions in the entire supersession is : once the sessions of node are determined , eve has to guess which of the total of frames are to the coordinator and which are from it .this occurs with probability : therefore , the outage probability is this case is [ [ rn - rd - scheme ] ] * rn - rd scheme * + + + + + + + + + + + + + + similar to equations [ r_s ] and [ r_t ] , the outage probability in this case is : [ [ fn - fd - scheme ] ] * fn - fd scheme * + + + + + + + + + + + + + + similarly , the outage probability here can be obtained by combining equations [ f_t ] and [ 3r_s ] as : in this attack , eve is interested in obtaining the entire network traffic .once eve guesses the frames of one node , the problem size decreases to that of an node network .therefore , the outage probability in this case ( ) is : the four different schemes have the same overhead which is due to the selection and direction determination phases .therefore , the overhead for all four schemes is : which is a function of the number of nodes ( ) , dialog codes preamble length ( ) , number of slots in a session ( ) , and the data frame length ( ) . figure [ fig : secrecy ] shows the effect of changing the system parameters , i.e. , , and , on the outage probability for the four schemes .the figure shows that all schemes have the advantage of enhancing the secrecy with the increase of the number of nodes in the network .increasing increases the space of guessing at eve , and hence enhances secrecy . increasing leads to increasing the length of the data transmission phase and hence reducing the frequency of the selection phasethis reduces secrecy .the figure also shows that the rn - rd scheme has the highest secrecy .this is due to the increased ambiguity at eve due to the randomization of both node selection and direction . on the other extreme, the fn - fd scheme has the least secrecy .the other two schemes have a secrecy outage probability in between : as the data phase length increases , direction randomization leads to more secrecy than node randomization . 
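since the closed - form expressions are not reproduced here , the following sketch evaluates illustrative guessing probabilities for the eavesdropper by direct counting : the chance of picking exactly the target node 's sessions and of separating the from / to frames correctly , under round - based versus fully random assignment . the combinatorial expressions and the numbers plugged in are assumptions for demonstration and may differ from the paper 's formulas .
....
from math import comb

def p_guess_sessions(total_sessions, node_sessions, round_based):
    """Probability Eve picks exactly the sessions of the target node.
    round_based : one session per node in every round, so Eve guesses one of
    n_nodes sessions per round; otherwise any subset of the supersession."""
    if round_based:
        n_nodes = total_sessions // node_sessions
        return (1.0 / n_nodes) ** node_sessions
    return 1.0 / comb(total_sessions, node_sessions)

def p_guess_directions(total_frames, from_frames, per_session_balanced,
                       frames_per_session=None):
    """Probability Eve also separates from/to-coordinator frames correctly."""
    if per_session_balanced:
        n_sessions = total_frames // frames_per_session
        return (1.0 / comb(frames_per_session, frames_per_session // 2)) ** n_sessions
    return 1.0 / comb(total_frames, from_frames)

# illustrative numbers: 4 nodes, 5 sessions per node, 6 frames per session
p_rn_rd = p_guess_sessions(20, 5, False) * p_guess_directions(30, 15, False)
p_fn_fd = p_guess_sessions(20, 5, True) * p_guess_directions(30, 15, True, 6)
print(p_rn_rd, p_fn_fd)   # RN-RD leaves Eve far less likely to guess correctly
....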
figure [ fig : overhead ] shows the effect of changing the system parameters on the system overhead .the figure shows that the overhead increases with the increase of the number of nodes in the network and the decrease of the data transmission phase length .therefore , a trade - off exists between overhead and secrecy .the operation point can be selected based on the specific application need .bits , ) .note that a longer frame length ( ) leads to lower overhead . ] for the fairness in node selection , we use the variance of the _ difference between two consecutive sessions _ indices as our metric .the more consistent this difference , the lower the variance , and the higher the fairness . more formally , if the session indices assigned to a node are , then and the unfairness index equals .figure [ fig : fairness_node ] shows the effect of the different parameters on node fairness .the figure confirms that the round - based schemes are fairer than the random schemes .as the number of nodes ( ) increases , the unfairness increases . on the other hand , for a fixed ,increasing the number of sessions , by either increasing or reducing , the unfairness increases .however , this is limited to within a round in the short term node fairness ( fn ) schemes and is more variable in the long term node fairness schemes .the saturation in both cases is due to the limitation imposed by the supersession size . for direction fairness ,our metric is the absolute difference between the sum of the send and receive indices within a specific node supersession , averaged over all nodes .the smaller this number , the higher the fairness .note that since the fairness metric is node based , it is independent from the number of nodes . figure [ fig : fairness_dir ] shows the effect of the different parameters on direction fairness .the figure confirms that the short term direction fairness ( fd ) schemes are fairer than the long term direction fairness ( rd ) schemes .as the number of frames required to construct a mesage ( ) increases , the unfairness increases in the long term direction fairness scheme as the overall number of slots in the node supersession will increase . has no effect on the short term direction fairness schemes as all direction selections are based on a round , which is independent of .this is the opposite case as we fix and change the number of frames within a session ( ) . in this case , the performance of the completely random case is independent of the number of frames within a session , as all sessions are concatenated in one supersession . increasing increases the unfairness of the short term direction fairness schemes .however , their worst case performance is bounded by the performance of the long term direction fairness scheme , where on session becomes a node supersession .figure [ fig : compare ] compares the proposed schemes to the practical provably secure two - node scheme proposed in under typical parameters for all schemes .the scheme in is based on randomization between two nodes . 
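the two fairness metrics described above translate directly into a few lines of code : the node metric is the variance of gaps between a node 's consecutive session indices , and the direction metric is the absolute difference between the summed send and receive indices within a node supersession , averaged over nodes . the toy schedules in the example are made up .
....
import numpy as np

def node_unfairness(session_indices_per_node):
    """Variance of the gaps between consecutive session indices of each node,
    averaged over nodes; lower means a more regular (fairer) schedule."""
    variances = [np.var(np.diff(idx))
                 for idx in session_indices_per_node if len(idx) > 1]
    return float(np.mean(variances))

def direction_unfairness(send_idx_per_node, recv_idx_per_node):
    """Mean absolute difference between the summed send and receive slot
    indices within each node's supersession; lower means better balance."""
    diffs = [abs(np.sum(s) - np.sum(r))
             for s, r in zip(send_idx_per_node, recv_idx_per_node)]
    return float(np.mean(diffs))

# toy schedule: node 0 holds sessions [0, 4, 8], node 1 holds [1, 2, 9]
print(node_unfairness([[0, 4, 8], [1, 2, 9]]))
print(direction_unfairness([[0, 2, 4]], [[1, 3, 5]]))
....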
a direct extension for this case to the multi - node caseis to apply it pairwise to each transmitter receiver .the figure shows that this reduces secrecy significantly , with several orders of magnitude and this loss in secrecy increases with the increase in the number of nodes .since does not leverage the multi - node in its design , both its security and overhead is independent of .our proposed schemes also have much better overhead under typical network sizes .we have also implemented the proposed scheme on telosb motes equipped with cc2420 radio chips which come with half - duplex antennas .the motes run the tinyos operating system .the network consists of three types of nodes nodes : 1 .an observer node , which plays a double role in our setup .first , it plays the role of the passive eavesdropper which sniffs all sent frames .second , it is responsible for synchronizing all nodes in the network by sending a pulse ( synchronization frame in our case ) at constant intervals to initiate the start of a slot and hence transmission of frames .the remaining nodes react to these synchronization frames. 2 . a normal node ( representing one of the legitimate nodes ) .3 . a coordinator node , which selects which node to transmit and the direction of traffic .the implementation results confirm the analysis results in the previous sections .more details about the implementation can be found in .we presented a novel practical and provably secure solution to the multi - node wireless communication problem .our solution is based on hiding the identity of the communicating nodes from the eavesdropper .we presented four different variations of the basic scheme that can achieve different fairness - security tradeoffs .we evaluated the proposed techniques using analysis and implementation .our results show that our scheme outperforms direct extensions of the two - node communication schemes in terms of both overhead and secrecy , highlighting its suitability for highly secure applications .x. tang , r. liu , and p. spasojevic , `` on the achievable secrecy throughput of block fading channels with no channel state information at transmitter , '' in _ciss_.1em plus 0.5em minus 0.4emieee , 2007 , pp . 917922 .a. elmorsy , m. yasser , m. elsabagh , and m. youssef , `` practical provably secure communication for half - duplex radios , '' in _ communications ( icc ) , 2011 ieee international conference on_.1em plus 0.5em minus 0.4emieee , 2011 , pp .15 .o. hassan , m. fouad , and m. youssef , `` demonstrating practical provably secure multi - node communication , '' in _ proceedings of the the seventh acm international workshop on wireless network testbeds , experimental evaluation , and characterization ( wintech12 ) , in conjunction with mobicom 2012 _ , 2012 .
we present a practical and provably - secure multi - node communication scheme in the presence of a passive eavesdropper . the scheme is based on a random scheduling approach that hides the identity of the transmitter from the eavesdropper . this random scheduling leads to ambiguity at the eavesdropper with regard to the origin of the transmitted frame . we present the details of the technique and analyze it to quantify the secrecy - fairness - overhead trade - off . implementation of the scheme over crossbow telosb motes , equipped with cc2420 radio chips , shows that the scheme can achieve significant secrecy gain with vanishing outage probability . in addition , it has a significant overhead advantage over direct extensions of two - node schemes . the technique also has the advantage of allowing inactive nodes to leverage sleep mode to further save energy .
max - margin learning has been effective on learning discriminative models , with many examples such as univariate - output support vector machines ( svms ) and multivariate - output max - margin markov networks ( or structured svms ) .however , the ever - increasing size of complex data makes it hard to construct such a fully discriminative model , which has only single layer of adjustable weights , due to the facts that : ( 1 ) the manually constructed features may not well capture the underlying high - order statistics ; and ( 2 ) a fully discriminative approach can not reconstruct the input data when noise or missing values are present . to address the first challenge ,previous work has considered incorporating latent variables into a max - margin model , including partially observed maximum entropy discrimination markov networks , structured latent svms and max - margin min - entropy models .all this work has primarily focused on a shallow structure of latent variables . to improve the flexibility , learningsvms with a deep latent structure has been presented in .however , these methods do not address the second challenge , which requires a generative model to describe the inputs .the recent work on learning max - margin generative models includes max - margin harmoniums , max - margin topic models , and nonparametric bayesian latent svms which can infer the dimension of latent features from data .however , these methods only consider the shallow structure of latent variables , which may not be flexible enough to describe complex data .much work has been done on learning generative models with a deep structure of nonlinear hidden variables , including deep belief networks , autoregressive models , and stochastic variations of neural networks . for such models , inference is a challenging problem , but fortunately there exists much recent progress on stochastic variational inference algorithms .however , the primary focus of deep generative models ( dgms ) has been on unsupervised learning , with the goals of learning latent representations and generating input samples .though the latent representations can be used with a downstream classifier to make predictions , it is often beneficial to learn a joint model that considers both input and response variables .one recent attempt is the conditional generative models , which treat labels as conditions of a dgm to describe input data .this conditional dgm is learned in a semi - supervised setting , which is not exclusive to ours . in this paper, we revisit the max - margin principle and present a max - margin deep generative model ( mmdgm ) , which learns multi - layer representations that are good for both classification and input inference .our mmdgm conjoins the flexibility of dgms on describing input data and the strong discriminative ability of max - margin learning on making accurate predictions .we formulate mmdgm as solving a variational inference problem of a dgm regularized by a set of max - margin posterior constraints , which bias the model to learn representations that are good for prediction .we define the max - margin posterior constraints as a linear functional of the target variational distribution of the latent presentations .then , we develop a doubly stochastic subgradient descent algorithm , which generalizes the pagesos algorithm to consider nontrivial latent variables . 
for the variational distribution ,we build a recognition model to capture the nonlinearity , similar as in .we consider two types of networks used as our recognition and generative models : multiple layer perceptrons ( mlps ) as in and convolutional neural networks ( cnns ) .though cnns have shown promising results in various domains , especially for image classification , little work has been done to take advantage of cnn to generate images . the recent work presents a type of cnn to map manual features including class labels to rbg chair images by applying unpooling , convolution and rectification sequentially ; but it is a deterministic mapping and there is no random generation .generative adversarial nets employs a single such layer together with mlps in a minimax two - player game framework with primary goal of generating images .we propose to stack this structure to form a highly non - trivial deep generative network to generate images from latent variables learned automatically by a recognition model using standard cnn .we present the detailed network structures in experiments part .empirical results on mnist and svhn datasets demonstrate that mmdgm can significantly improve the prediction performance , which is competitive to the state - of - the - art methods , while retaining the capability of generating input samples and completing their missing values .: the recent work presents a convolutional neural networks conditioned on class labels with the primary goal of generating 3d images instead of predicting class labels .regularized bayesian inference ( regbayes ) presents a generic framework for constrained bayesian inference , from which our work draws inspirations ; but no existing work has attempted to learn a multi - layer representations with highly nonlinear transformations .inspired by the power of dgms , it s natural to use these representations , or features , to do some supervised tasks , such as classification , regression and so on .recently , dgms are used in semi - supervised learning .two kinds of models were proposed : the latent - feature discriminative model and the conditional generative model .latent - feature discriminative model trains the generative model and prediction model independently .conditional generative model treats labels as latent class variables that can be missing .however , we want to deal with supervised learning problems with full labelled data and learn prediction - oriented deep features through dgms .we propose to apply the regbayes framework to dgms to do maximum margin learning with features learned by dgms .often , regbayes introduces linear operator on the posterior distribution as the regularization term to capture the structures of data .regbayes has successful applications in several domains , such as topic models , i.e. medlda .medlda is a concrete example to learn predictive features via a generative model .medlda employs maximum entropy discrimination ( med ) principle to combine max - margin prediction models with hierarchical bayesian models .we marry the large margin idea , which leads to maximum margin supervised learning with dgms .our model will be presented in section 2 .we design global coordinate decent inference algorithm to learn for both latent structures and global parameters . to deal with expectations in the objective function , we use the stochastic variational methods raised recently . 
to handle large scale dataset , we uses pagesos , a stochastic subgradient svm solver instead of batch svm to obtain a complete stochastic inference algorithm .the details are shown in section 3 .our experiments show two benefits of our model : firstly it can explain the data well and secondly it leads to more accurate prediction in supervised tasks .the rest of the paper is organized as follows .2 reviews the basics of deep generative models .3 presents the max - margin deep generative models , with a doubly stochastic subgradient algorithm .sec . 4 presents experimental results .finally , sec .5 concludes .we start from a general setting , where we have i.i.d .data . a deep generative model ( dgm )assumes that each is generated from a vector of latent variables , which itself follows some distribution .the joint probability of a dgm is as follows : where is the prior of the latent variables and is the likelihood model for generating observations . for notation simplicity ,we define . depending on the structure of ,various dgms have been developed , such as the deep belief networks , deep sigmoid networks , deep latent gaussian models , and deep autoregressive models . in this paper , we focus on the directed dgms , which can be easily sampled from via an ancestral sampler. however , in most cases learning dgms is challenging due to the intractability of posterior inference .the state - of - the - art methods resort to stochastic variational methods under the maximum likelihood estimation ( mle ) framework , .specifically , let be the variational distribution that approximates the true posterior .firstly , we estimate the partial marginal likelihood of the model , i.e. . since the posterior distribution is intractable in general , we can introduce a variational distribution to approximate the true posterior distribution . a variational upper bound of the per data point negative log - likelihood ( nll ) is : , \end{aligned}\ ] ] which is the same as auto - encoding variational bayes .then , a lower bound of the whole log - likelihood , , can be obtained as : a variational upper bound of the per sample negative log - likelihood ( nll ) is : , \end{aligned}\ ] ] where is the kullback - leibler ( kl ) divergence between distributions and . then , upper bounds the full negative log - likelihood .it is important to notice that if we do not make restricting assumption on the variational distribution , the lower bound is tight by simply setting .that is , the mle is equivalent to solving the variational problem : .however , since the true posterior is intractable except a handful of special cases , we must resort to approximation methods .one common assumption is that the variational distribution is of some parametric form , , and then we optimize the variational bound w.r.t the variational parameters . for dgms, another challenge arises that the variational bound is often intractable to compute analytically . 
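for concreteness , a minimal numpy sketch of the per - sample variational bound is given below , assuming a diagonal gaussian recognition model , a standard normal prior and a bernoulli likelihood ; the closed - form gaussian kl term and the monte carlo reconstruction term together give the upper bound on the negative log - likelihood . the linear decoder in the toy check is an assumption purely for illustration .
....
import numpy as np

rng = np.random.default_rng(5)

def gaussian_kl(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ) in closed form."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def bernoulli_log_lik(x, logits):
    """log p(x | z) for a Bernoulli decoder parameterized by logits."""
    return np.sum(x * logits - np.logaddexp(0.0, logits))

def neg_elbo(x, mu, log_var, decode, n_samples=5):
    """Monte Carlo estimate of the variational upper bound on -log p(x):
    E_q[-log p(x|z)] + KL(q(z|x) || p(z))."""
    rec = 0.0
    for _ in range(n_samples):
        z = mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)
        rec += bernoulli_log_lik(x, decode(z))
    return -rec / n_samples + gaussian_kl(mu, log_var)

# toy check with a linear decoder and a random binary "image"
W, b = rng.standard_normal((784, 20)), np.zeros(784)
x = (rng.random(784) > 0.5).astype(float)
print(neg_elbo(x, np.zeros(20), np.zeros(20), lambda z: W @ z + b))
....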
to address this challenge, the early work further bounds the intractable parts with tractable ones by introducing more variational parameters .however , this technique increases the gap between the bound being optimized and the log - likelihood , potentially resulting in poorer estimates .much recent progress has been made on hybrid monte carlo and variational methods , which approximates the intractable expectations and their gradients over the parameters via some unbiased monte carlo estimates .furthermore , to handle large - scale datasets , stochastic optimization of the variational objective can be used with a suitable learning rate annealing scheme .it is important to notice that variance reduction is a key part of these methods in order to have fast and stable convergence .it is important to notice that if we do not make restricting assumption on the variational distribution , the lower bound is tight simply by setting .that is , the maximum likelihood estimation problem is equivalent to solving the variational problem however , since the true posterior is intractable , we must resort to approximation methods . for instance , we can assume a parametric form of the variational distribution , and then maximize the variational bound under this assumption .most work on directed dgms has been focusing on the generative capability on inferring the observations , such as filling in missing values , while little work has been done on investigating the predictive power , except the semi - supervised dgms which builds a dgm conditioned on the class labels and learns the parameters via mle .below , we present max - margin deep generative models , which explore the discriminative max - margin principle to improve the predictive ability of the latent representations , while retaining the generative capability .we consider supervised learning , where the training data is a pair with input features and the ground truth label . without loss of generality , we consider the multi - class classification , where .a max - margin deep generative model ( mmdgm ) consists of two components : ( 1 ) a deep generative model to describe input features ; and ( 2 ) a max - margin classifier to consider supervision . for the generative model , we can in theory adopt any dgm that defines a joint distribution over as in eq .( [ eq : dgm - joint - dist ] ) . for the max - margin classifier , instead of fitting the input features into a conventional svm, we define the linear classifier on the latent representations , whose learning will be regularized by the supervision signal as we shall see .specifically , if the latent representation is given , we define the latent discriminant function where is an -dimensional vector that concatenates subvectors , with the being and all others being zero , and is the corresponding weight vector .we consider the case that is a random vector , following some prior distribution .then our goal is to infer the posterior distribution , which is typically approximated by a variational distribution for computational tractability .notice that this posterior is different from the one in the vanilla dgm .we expect that the supervision information will bias the learned representations to be more powerful on predicting the labels at testing . to account for the uncertainty of , we take the expectation and define the discriminant function , ] , and the margin constraints are from the classifier ( [ eq : mm - classifier ] ) .if we ignore the constraints ( e.g. 
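the reparameterization idea itself fits in a few lines : for a diagonal gaussian variational distribution the differentiable map is g ( eps , x ) = mu + sigma * eps with eps drawn from a standard normal , and the expectation is replaced by a sample average . the sketch below is a generic illustration , not tied to a particular network .
....
import numpy as np

rng = np.random.default_rng(6)

def reparameterized_expectation(f, mu, sigma, n_samples=100):
    """Estimate E_{q(z|x)}[f(z)] for q = N(mu, sigma^2) by sampling the
    auxiliary noise eps ~ N(0, I) and pushing it through g(eps, x) = mu + sigma*eps,
    so the estimate stays differentiable in (mu, sigma)."""
    eps = rng.standard_normal((n_samples,) + np.shape(mu))
    z = mu + sigma * eps
    return np.mean([f(zi) for zi in z], axis=0)

mu, sigma = np.array([0.5, -1.0]), np.array([0.3, 0.2])
print(reparameterized_expectation(lambda z: z**2, mu, sigma))  # ~ mu^2 + sigma^2
....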
, setting at 0 ) , the solution of will be exactly the bayesian posterior , and the problem is equivalent to do mle for . by applying the maximum entropy discrimination ( med ) principle , we combine the deep generative models and max - margin classification together as an optimization problem : \ge \delta l_n(y ) -\xi_n\\ \xi_n \ge 0 , \end{array } \right.\ ] ] by absorbing the slack variables , we can rewrite the problem in an unconstrained form : where the hinge loss is : ) . ] moreover , we can solve for the optimal solution of in some analytical form .in fact , by the calculus of variations , we can show that given the other parts the solution is \big ) , ] therefore , even though we did not make a parametric form assumption of , the above results show that the optimal posterior distribution of is gaussian .since we only use the expectation in the optimization problem and in prediction , we can directly solve for the mean parameter instead of .further , in this case we can verify that and then the equivalent objective function in terms of can be written as : where is the total hinge loss , and the per - sample hinge - loss is ) ] is the loss - augmented prediction . if the prior is normal , , we have the normal posterior : \ ] ] therefore, we know that the optimal posterior distribution of is gaussian . since we only use the expectation in optimization of all parameters and in prediction , we can optimize instead of actually .further , in this case we can verify that and then the equivalent objective function in terms of can be written as : where and ) ] and ) ] . instead of using traditional mean filed method , an alternative way is to take advantage of the stochastic variational methods raised recently to do the inference . under ceratin mild conditions mentioned in these papers ,the value of the objective function as well as the gradient can be easily computed by sampling from simple distribution .specifically , introduce an auxiliary random variable and a differentiable function such that : then the expectation can be approximated by : = \mathbb{e}_{p({\boldsymbol{\epsilon}})}[f(g_{{\boldsymbol{\phi}}}({\boldsymbol{\epsilon } } , { \mathbf{x } } ) ) ] \\ \simeq & \frac{1}{l } \sum_{l = 1}^l f(g_{{\boldsymbol{\phi}}}({\boldsymbol{\epsilon } } , { \mathbf{x } } ) ) \textrm { where } { \boldsymbol{\epsilon}}\sim p ( { \boldsymbol{\epsilon } } ) \end{split}\ ] ] consequently , the gradient of the variational bound part and the subgradient of the empirical loss part can be estimated efficiently by sampling . given , we solve another regbayes problem to inference as following : ) \end{split}\ ] ] the close form solution is : where and can be solved alternatively . firstly fix and compute ||^2}\ ] ] where ) ] and ) ] can be estimated by sampling method mentioned in section 3.2.2 .our model is flexible because of the untied weight and any successful discriminative model structures could be used as our encoders .unconvnets in is used to map hand - designed features to rbg images .similar structures is used to generate images in gan , begio in a different framework while it concentrates on generative performance .understanding cnn used this structure to visualize features , no training procedure .we now present experimental results on the widely adopted mnist and svhn datasets . though mmdgms are applicable to any dgms that define a joint distribution of and , we concentrate on the variational auto - encoder ( va ) , which is unsupervised .we denote our mmdgm with va by mmva . 
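a sketch of the per - sample multiclass hinge loss on the expected latent representation , together with its subgradient in the classifier weights , is given below ; here z_mean stands for the mean of q ( z | x ) , which in the full algorithm is itself estimated with the reparameterized sampler , making the overall subgradient doubly stochastic ( over mini - batches and over monte carlo samples ) . the stacked - feature construction follows the description above , but the code is a simplified illustration rather than the exact implementation .
....
import numpy as np

def multiclass_hinge(eta, z_mean, y, n_classes, delta=1.0):
    """Hinge loss max_y' [ delta*l(y,y') + eta'f(y',z) - eta'f(y,z) ], where the
    stacked feature vector f(y, z) places z in the y-th block of eta."""
    d = z_mean.size
    scores = eta.reshape(n_classes, d) @ z_mean
    margins = delta * (np.arange(n_classes) != y) + scores - scores[y]
    y_star = int(np.argmax(margins))          # loss-augmented prediction
    return max(float(margins[y_star]), 0.0), y_star

def eta_subgradient(eta, z_mean, y, n_classes):
    """Subgradient of the per-sample hinge loss w.r.t. the stacked weights eta."""
    loss, y_star = multiclass_hinge(eta, z_mean, y, n_classes)
    g = np.zeros_like(eta).reshape(n_classes, -1)
    if loss > 0.0 and y_star != y:
        g[y_star] += z_mean
        g[y] -= z_mean
    return g.ravel()

eta = np.zeros(3 * 4)
z = np.array([0.2, -0.1, 0.5, 1.0])
print(multiclass_hinge(eta, z, y=1, n_classes=3))
....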
in our experiments , we consider two types of recognition models : multiple layer perceptrons ( mlps ) and convolutional neural networks ( cnns ) .we implement all experiments based on theano .in the mlp case , we follow the settings in to compare both generative and discriminative capacity of va and mmva . in the cnn case, we use standard convolutional nets with convolution and max - pooling operation as the recognition model to obtain more competitive classification results . for the generative model ,we use unconvnets with a `` symmetric '' structure as the recognition model , to reconstruct the input images approximately .more specifically , the top - down generative model has the same structure as the bottom - up recognition model but replacing max - pooling with unpooling operation and applies unpooling , convolution and rectification in order .the total number of parameters in the convolutional network is comparable with previous work . for simplicity, we do not involve mlpconv layers and contrast normalization layers in our recognition model , but they are not exclusive to our model .we illustrate details of the network architectures in appendix a. in both settings , the mean and variance of the latent are transformed from the last layer of the recognition model through a linear operation . it should be noticed that we could use not only the expectation of but also the activation of any layer in the recognition model as features .the only theoretical difference is from where we add a hinge loss regularization to the gradient and back - propagate it to previous layers . in all of the experiments , the mean of the same nonlinearity but typically much lower dimension than the activation of the last layer in the recognition model , and hence often leads to a worse performance . in the mlp case ,we concatenate the activations of 2 layers as the features used in the supervised tasks . in the cnn case, we use the activations of the last layer as the features .we use adam to optimize parameters in all of the models .although it is an adaptive gradient - based optimization method , we decay the global learning rate by factor three periodically after sufficient number of epochs to ensure a stable convergence .we denote our mmdgm with mlps by * mmva*. to perform classification using va , we first learn the feature representations by va , and then build a linear svm classifier on these features using the pegasos stochastic subgradient algorithm .this baseline will be denoted by * va+pegasos*. the corresponding models with cnns are denoted by * cmmva * and * cva+pegasos * respectively .we now present our experimental results on the widely adopted mnist and its variant datasets for handwritten digits recognition . since the goal of our model is to learn latent features that both explain the data well and are more discriminative in supervised tasks, we examine the results with two types of criteria , i.e. , the variational lower bound and prediction error rates . though mmdgms are applicable to any deep generative models that define a joint distribution of and , we concentrate on the variational auto - encoder ( va ) , which is unsupervised .we denote our mmdgm with va by * mmva*. to perform classification using va , we first learn the feature representations by va , and then build a linear svm classifier on these features using the pegasos stochastic subgradient algorithm .this baseline will be denoted by * va+pegasos*. 
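the va+pegasos baseline relies on the pegasos stochastic subgradient svm solver ; a basic single - sample , binary version is sketched below for reference ( the experiments use a multiclass variant with mini - batches ) , and the synthetic data in the usage example is purely illustrative .
....
import numpy as np

rng = np.random.default_rng(7)

def pegasos(features, labels, lam=1e-4, n_iters=200_000):
    """Pegasos stochastic subgradient solver for a binary linear SVM
    (labels in {-1, +1}); used here as the baseline classifier trained on
    features extracted by the unsupervised VA."""
    w = np.zeros(features.shape[1])
    for t in range(1, n_iters + 1):
        i = rng.integers(len(labels))
        lr = 1.0 / (lam * t)
        if labels[i] * (features[i] @ w) < 1.0:
            w = (1.0 - lr * lam) * w + lr * labels[i] * features[i]
        else:
            w = (1.0 - lr * lam) * w
        norm = np.linalg.norm(w)              # optional projection step
        if norm > 1.0 / np.sqrt(lam):
            w *= 1.0 / (np.sqrt(lam) * norm)
    return w

X = rng.standard_normal((500, 2))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])
w = pegasos(X, y, n_iters=20_000)
print(np.mean(np.sign(X @ w) == y))
....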
to have a fair competition , all of these models are under same corresponding parameters .there are three sets of parameters in our model : parameters in generative model and recognition model , parameters in prediction model and parameters that control the relative weight of generative part and discriminative part .typically , we choose the first two sets of parameters following related works .if we fine tune the parameters according to the validation result , we will state it explicitly . for the recognition model in defining the variational distribution ( [ eq : recognition - model ] ) , we follow the settings in and employ a two - layer mlp in both va and mmva .the mean and variance of the latent are transformed from the last layer of the recognition model through a linear operation .it should be noticed that we could use not only the expectation of but also the activation of any layer in the recognition model as features .the only theoretical difference is from where we add a hinge loss regularization to the gradient and back - propagate it to previous layers . in the experiments of va and mmva ,the mean of has the same nonlinearity but typically much lower dimension than the activation of the last layer in the recognition model , and hence often leads to a worse performance . to obtain the best empirical results , we add label information explicitly in both layers of the recognition model , i.e. , we concatenate the activations of 2 layers as the features used in the supervised tasks .we present both the prediction performance and the results on generating samples of mmva and va+pegasos with both kinds of recognition models on the mnist dataset , which consists of images of 10 different classes ( 0 to 9 ) of size with 50,000 training samples , 10,000 validating samples and 10,000 testing samples . to take advantages of bayesian approach , instead of using the zero - mean standard gaussian prior, we introduce an informative gaussian prior in the training procedure to refine the posterior distribution . in thissetting , the mean vectors , which could be pre - extracted from the training data by any other discriminative models , are different for individual data points to explain different digits and writing styles .notice that we do not use the informative prior of the testing data when we do classification since it is irrelevant to the computation of the posterior distribution of testing data . in our experiments , we use features learned by a well - tuned deep convolutional network from an unpublished work as the mean of the prior for both va and mmva .the network has 5 convolutional layers and 2 max - pooling layers , and the features could achieve test error rate 0.3% on the standard mnist data .we still use the identity matrix as the covariance to remain the generative capability .in the zero mean case , we set the dimension of the latent variables to 50 following to compare with va and the conditional generative va . 
in the case with an informative prior, we change the dimension of latent variables to 96 both for va and mmva to match the dimension of the prior .all of the mlps have 500 hidden units at each layer .we set the weight decay term of pegasos to in all of the models and train pegasos with 200,000 mini - batches of size 100 for all va models to ensure the convergence of the prediction weights .our model is not too sensitive to the choice of , which controls the relative weight of generative part and discriminative part .we set for all mmva models on standard mnist dataset .typically , our model has better performance with an unsupervised pre - training procedure because the hinge loss part makes sense only when we have reasonable features .the number of total epochs and corresponding global learning rate are the same for va and mmva .
deep generative models ( dgms ) are effective at learning multilayered representations of complex data and at performing inference over input data by exploiting their generative ability . however , little work has been done on examining or empowering the discriminative ability of dgms in making accurate predictions . this paper presents max - margin deep generative models ( mmdgms ) , which explore the discriminative max - margin principle to improve the predictive power of dgms , while retaining the generative capability . we develop an efficient doubly stochastic subgradient algorithm for the piecewise linear objective . empirical results on the mnist and svhn datasets demonstrate that ( 1 ) max - margin learning can significantly improve the prediction performance of dgms while retaining the generative ability ; and ( 2 ) mmdgms are competitive with the state - of - the - art fully discriminative networks when deep convolutional neural networks ( cnns ) are employed as both recognition and generative models .
the chicago board options exchange volatility index ( vix ) provides investors with a mechanism to gain direct exposure to the volatility of the s&p500 index without the need for purchasing index options .consequently , the trading of vix derivatives has become popular amongst investors . in 2004futures on the vix began trading and were subsequently followed by options on the vix in 2006 .furthermore , since the inception of the vix , volatility indices have been created to provide the same service on other indices , in particular , the vdax and the vstoxx , which are based on the dax and the euro stoxx 50 indices respectively . sincederivative products are traded on both the underlying index and the volatility index , it is desirable to employ a model that can simultaneously reproduce the observed characteristics of products on both indices. models that are capable of capturing these joint characteristics are known as consistent models .a growing body of literature has been devoted to the joint modelling of equity and vix derivatives .the literature can generally be classed in terms of two approaches . in the first approach ,once the instantaneous dynamics of the underlying index are specified under a chosen pricing measure , the discounted price of a derivative can be expressed as a local martingale .this is the approach adopted in , , and . derived an analytic formula for vix futures under the assumption that the s&p500 is modelled by a heston diffusion process .a more general result was obtained in . through a characteristic function approach these authors provided exact solutions ( dependent upon a fourier inversion ) for the price of vix derivatives when the s&p500 is modelled by a heston diffusion process with simultaneous jumps in the underlying index and the volatility process . a square - root stochastic variance model with variance jumps and time - dependent parameterswas considered for the evolution of the s&p500 index in .the author provided formulae for the pricing and hedging of a variety of volatility derivatives .alternatively there is the market - model " approach , where variance swaps are modelled directly , as is done in and .the latter authors proposed a flexible market model that is capable of efficiently pricing realized - variance derivatives , index options , and vix derivatives .realized - variance derivatives were priced using fourier transforms , index derivatives were priced using a mixing formula , which averages black - scholes model prices , and vix derivatives were priced , subject to an approximation , using fourier - transform methods .models considered under the first approach generally yield ( quasi-)closed - form solutions for derivative prices , which by definition are tractable .the challenge lies in ensuring that empirically observed facts from the market data , i.e. characteristic features of the joint dynamics of equity and vix derivatives , are captured . on the other hand ,the market - model approach ensures by construction that models accurately reflect observed empirical characteristics .the challenge remaining is to obtain an acceptable level of tractability when pricing derivative products . 
in this paperwe follow the first approach and consider the joint modelling of equity and vix derivatives when the underlying index follows a 3/2 process with jumps in the index only ( henceforth called the 3/2 plus jumps model ) .the model presented here is more parsimonious than competing models from its class ; it is able to accurately capture the joint dynamics of equity and vix derivatives , while retaining the advantage over market models of analytic tractability .we point out that this model was used in the context of pricing target volatility fund derivatives in .the selection of a model for the underlying index is motivated by several observations in recent literature .there is both empirical and theoretical evidence suggesting that the model is a suitable candidate for modelling instantaneous variance . conducted an empirical study on the time - series properties of instantaneous variance by using s&p100 implied volatilities as a proxy .the authors found that a linear - drift was rejected in favour of a non - linear drift and estimated that a variance exponent of approximately 1.3 was required to fit the data . in a separate study , proposed a new framework for pricing variance swaps and were able to support the findings of using a purely theoretical argument .furthermore , the excellent results obtained by , who employed the 3/2 model to price realized - variance derivatives , naturally encourage the application of the 3/2 framework to vix derivatives . despite having a qualitative advantage over other stochastic volatility models , the model , or any augmented version of this model , has yet to be applied to the consistent pricing of equity and vix derivatives .the final motivating factor is the claim that jumps must be included in the dynamics of the underlying index to capture the upward - sloping implied volatility skew of vix options . in related literature the only mention of the model in the context of vix derivatives is in , where the problem is approached from the perspective of directly modelling the vix .closed - form solutions are found for vix derivatives under the assumption that the vix follows a process . in this paper a markedly different approach is adopted . 
rather than specifying dynamics for the untradable vix , without providing a connection to the underlying index , we follow the approach from , where the dynamics of the underlying index are specified and an expression for the vix is later derived . our approach is superior to that of ; issues of consistency are addressed directly and the model lends itself to a more intuitive interpretation . the main contribution of this paper is the derivation of quasi - closed - form solutions for the pricing of vix derivatives under the assumption that the underlying follows the 3/2 model . the newly - found solutions retain the analytic tractability enjoyed by those found in the context of realized - variance products . the formulae derived in this paper allow for a numerical analysis to be performed to assess the appropriateness of the 3/2 framework for consistent modelling . upon performing the analysis we find that the pure - diffusion 3/2 model is capable of producing the commonly observed upward - sloping skew for vix options . this contradicts the previously made claims that pure - diffusion stochastic volatility models can not consistently model vix and equity derivatives . this desirable property distinguishes the 3/2 model from competing pure - diffusion stochastic volatility models . we compare the 3/2 model to the heston model and find that the latter produces downward - sloping implied volatilities for vix options , whereas the former produces upward - sloping implied volatilities for vix options . pure - diffusion volatility models , however , fail to capture features of implied volatility in equity options for short maturities . to remedy this shortcoming jumps are introduced in the underlying index . the resulting plus jumps model is consequently studied in detail : first , by following the approach used for the pure - diffusion 3/2 model , we derive the conditions that ensure that the discounted stock price is a martingale under the pricing measure . the novelty of this result is that we discuss whether a stochastic volatility model that allows for jumps is a martingale . so far in the literature these results have been provided for pure - diffusion processes only , as they are based on feller explosion tests . next , we produce the joint fourier - laplace transform of the logarithm of the index and the realized variance , which allows for the pricing of equity and realized - variance derivatives . though the model is not affine , we find that the joint fourier - laplace transform is exponentially affine in the logarithm of the stock price . this allows for the simultaneous pricing of equity options across many strikes via the use of the fourier - cosine expansion method of . such a finding is expected to significantly speed up the calibration procedure . the approach used in this paper is not restricted to the 3/2 plus jumps model and can be extended to a more general setting . in fact , we use this approach to obtain a closed - form solution for vix options in the stochastic volatility plus jumps ( svj ) model , see , resulting in a small extension of the stochastic - volatility pricing formula presented in . the paper is structured as follows : in section 2 we introduce the pure - diffusion model and present the empirical result that illustrates that this model is able to capture the joint characteristics of equity and index options . we compare the pure - diffusion 3/2 model with the heston model to highlight the difference in shape of the vix implied volatilities .
the rest of the paper is concerned with the 3/2 plus jumps model . section 3 introduces the 3/2 plus jumps model and establishes the conditions that ensure that the discounted stock price is a martingale under the assumed pricing measure . next , characteristic functions for the logarithm of the index and the realized variance are derived . finally , a quasi - analytic formula for call and put options on the vix is derived . conclusions are stated in section [ secconc ] . in this section we introduce the pure - diffusion model and present numerical results to illustrate that this model is able to produce upward - sloping implied volatility skews in vix options . on a probability space , we introduce the risk - neutral dynamics for the stock price and the variance processes starting at and respectively , where is a two - dimensional brownian motion under the risk - neutral measure . all stochastic processes are adapted to a filtration . the distribution of is assumed to be normal with mean and variance . the parameters , , and satisfy the following relationship . all other stochastic processes and parameters have been introduced in section [ seccomp ] . integrating equation yields where and we use to denote the logarithm of the relative jump size of the jump . since the model - is not affine , equation gives us an important starting point for our analysis . in particular , one can now determine if the discounted stock price is a martingale under our assumed pricing measure . [ propmartingality ] let and be given by equations and respectively . then the discounted stock price is a martingale under , if and only if . we compute . equation is clearly independent of the jump component of . hence is a martingale under if and only if the corresponding discounted pure - diffusion model , , is a martingale under . since this question was answered in , see his equation ( 4 ) , the desired result follows . starting with , there has been a growing body of literature dealing with the question of whether the discounted stock price in a particular stochastic volatility model is a martingale or a strict local martingale under the pricing measure , e.g. , , , and . the specification of the model , in particular equation , allows for the application of the above results , which were all formulated for pure diffusion processes . we remark that condition is the same as the one presented in . besides analyzing the martingale property of the model - we also compute functionals , which are required for the pricing of equity and vix derivatives . in this section we derive formulae for the pricing of equity and realized - variance derivatives under the plus jumps model . we demonstrate that by adding jumps to the 3/2 model a better fit to the short - term smile can be obtained without incurring a loss in analytic tractability . consider and define the realized variance as the quadratic variation of , i.e. where denotes realized variance and denotes the maturity of interest . we have the following result , which is the analogue of proposition 2.2 in . [ propjointfltransform ] let and . in the jumps model , the joint fourier - laplace transform of and is given by where denotes the confluent hypergeometric function . the proof is completed by noting that the first conditional expectation was computed in and and is given by . furthermore , it can be seen that and for . equity and realized - variance derivatives can now be priced using proposition [ propjointfltransform ] .
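as a sanity check on any implementation of the transform above , the dynamics can also be simulated directly . the sketch below is an illustration only : it assumes the usual 3/2 - plus - jumps form ( variance drift kappa*v*(theta - v) , volatility of variance eps*v^{3/2} , lognormal jumps in the index only , jump - compensated drift ) and purely hypothetical parameter values rather than the exact notation of the elided equations ; it estimates a european call and the fair variance - swap rate implied by the quadratic - variation definition of realized variance used above .

```python
import numpy as np

# illustrative (hypothetical) parameters -- not calibrated values from the paper
r, S0, V0 = 0.02, 100.0, 0.06
kappa, theta, eps, rho = 2.0, 0.05, 1.5, -0.7     # 3/2 variance dynamics
lam, muJ, sigJ = 0.2, -0.1, 0.15                  # jump intensity and log-jump-size law
T, n_steps, n_paths = 0.5, 250, 50_000
dt = T / n_steps
mbar = np.exp(muJ + 0.5 * sigJ**2) - 1.0          # E[e^xi - 1], the jump compensator

rng = np.random.default_rng(0)
logS = np.full(n_paths, np.log(S0))
V = np.full(n_paths, V0)
RV = np.zeros(n_paths)                            # running quadratic variation of log S

for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
    # diffusion part of the log-price with jump-compensated drift
    logS += (r - lam * mbar - 0.5 * V) * dt + np.sqrt(V * dt) * z1
    RV += V * dt
    # jumps in the index only (multiple jumps per step are rare for small dt and are lumped)
    nj = rng.poisson(lam * dt, n_paths)
    xi = muJ * nj + sigJ * np.sqrt(nj) * rng.standard_normal(n_paths)
    logS += xi
    RV += xi**2
    # euler step for the 3/2 variance, floored to keep it positive
    V = np.maximum(V + kappa * V * (theta - V) * dt + eps * V**1.5 * np.sqrt(dt) * z2, 1e-8)

K = 100.0
call = np.exp(-r * T) * np.maximum(np.exp(logS) - K, 0.0).mean()
print(f"monte carlo call price        ~ {call:.3f}")
print(f"monte carlo variance-swap rate ~ {RV.mean() / T:.4f}")
```

such a simulation is far slower than the transform - based formulae , but it provides an independent check of both the martingale condition ( the discounted index should have zero drift ) and any fourier or laplace inversion built on the proposition .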
for equity derivatives , pricing requires the performance of a numerical fourier inversion , such as those presented in and . furthermore , since the characteristic function of is exponentially affine in , we can apply the fourier - cosine expansion method as described in section 3.3 in . this allows for the simultaneous pricing of equity options across many strikes , which is expected to significantly speed up the calibration procedure . for realized - variance derivatives one can employ a numerical laplace inversion , see , or the more robust control - variate method developed in . we comment that implied volatility approximations for small log - forward moneyness and time to maturity for the plus jumps model can be obtained from , as their proposition 3 covers the plus jumps model . ( figure caption , [ fig3over2plusjumpsfit ] : the plus jumps model fitted to short - maturity s&p500 implied volatilities on march , 2012 , with the obtained model parameters . ) this section is concluded with a calibration of the plus jumps model to short - maturity s&p500 option data . the inclusion of jumps improves the fit significantly as illustrated when comparing figures [ fig3over2plusjumpsfit ] and [ fig3over2fit ] . the values for and decrease when we allow for jumps . also , the parameters for the jump component are roughly in line with those obtained for svj models ( see for example ) . in this section we provide a general pricing formula for ( european ) call and put options on the vix by extending the results of . the newly - found formula is then used for the pricing of vix derivatives when the index follows a plus jumps process . of course , the results shown in section [ seccomp ] are obtained by setting the jump intensity equal to . we recall the definition of the vix , where . the following result , which is an extension of proposition 1 in , allows for the derivation of a pricing formula for vix options . [ propvixformula ] let , , and be defined by equations , , and . then where . lemma [ propvixformula ] is useful as it shows that the distribution of can be obtained via the distribution of , for . consequently , the problem of pricing vix derivatives is reduced to the problem of finding the transition density function for the variance process . in the following proposition , we present the zhang - zhu formula for the futures price and a formula for call options . [ propderivvix ] let , , and be given by equations , and . we obtain the following zhang - zhu formula for futures on the vix and the following formula for a call option where denotes the transition density of started from at time being at at time . an expression for vix put options can be obtained via the put - call parity relation for vix options , namely see equation ( 25 ) in .
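the proposition above reduces vix pricing to integrating the payoff against the transition density of the variance process . the sketch below illustrates this route for a square - root ( heston / svj - type ) variance process , whose transition density is the well - known non - central chi - squared law recalled in the next passage . the affine link vix_t^2 = a * v_t + b is an assumption of the sketch , with a and b left as hypothetical inputs ( in practice they follow from the 30 - day vix definition and the model parameters ) ; for the 3/2 model the same idea applies after changing variables to the reciprocal square - root process .

```python
import numpy as np
from scipy.stats import ncx2
from scipy.integrate import quad

def cir_transition(v, v0, T, kappa, theta, sigma):
    """density of V_T given V_0 = v0 for dV = kappa(theta - V)dt + sigma sqrt(V) dW."""
    c = sigma**2 * (1.0 - np.exp(-kappa * T)) / (4.0 * kappa)
    df = 4.0 * kappa * theta / sigma**2
    nc = 4.0 * kappa * np.exp(-kappa * T) * v0 / (sigma**2 * (1.0 - np.exp(-kappa * T)))
    return ncx2.pdf(v / c, df, nc) / c

def vix_derivative(payoff, v0, T, kappa, theta, sigma, a, b):
    """E[payoff(VIX_T)] under the (assumed) affine link VIX_T^2 = a*V_T + b."""
    integrand = lambda v: payoff(np.sqrt(a * v + b)) * cir_transition(v, v0, T, kappa, theta, sigma)
    value, _ = quad(integrand, 0.0, np.inf, limit=200)
    return value

# hypothetical inputs, for illustration only
kappa, theta, sigma, v0, T, r = 3.0, 0.04, 0.4, 0.05, 0.25, 0.02
a, b = 0.9, 0.004                     # placeholder constants of the VIX^2 = a*V + b link

fut = vix_derivative(lambda vix: vix, v0, T, kappa, theta, sigma, a, b)
K = fut                               # at-the-money strike
call = np.exp(-r * T) * vix_derivative(lambda vix: max(vix - K, 0.0), v0, T, kappa, theta, sigma, a, b)
print(f"vix future ~ {100 * fut:.2f} , atm vix call ~ {100 * call:.3f}  (in vix points)")
```

because the future is a concave ( square - root ) functional of the terminal variance , the computed futures price lies below the square root of the expected variance , which is exactly the convexity effect the zhang - zhu formula captures .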
for the model instead of using lemma [ propvixformula ]we could alternatively use theorem 4 in .however , our approach is not restricted to the model and applies to all stochastic volatility models for which the laplace transform of the realized variance is known .furthermore , in the case of the model it is well known that is the inverse of a square - root processs .[ lemtransdensvt ] let be defined as in equation , then the transition density of is given by where , , and denotes the probability density function of a non - central chi - squared random variable with degrees of freedom , and non - centrality parameter .as indicated above we introduce the process via , whose dynamics are given by given , we note from that where denotes a non - central chi - squared random variable with degrees of freedom and non - centrality parameter .since we have an expression for the transition density of , proposition [ propderivvix ] can be used to price derivatives on the vix as a discounted expectation . to further demonstrate that the methodology presented in this section is not restricted to the plus jumps model , consider the stochastic volatility plus jumps model given by where and are as defined for the model , and , and . using lemma [ propvixformula ] we have where mentioned previously , it is well known that the transition density of a square - root process is non - central chi - squared .therefore , proposition [ propderivvix ] can be used to price options on the vix in the setting - .this result is a small extension of proposition 3 in , as our result allows for jumps in the index .we derive general formulae for the pricing of equity and vix derivatives .the newly - found formulae allow for an empirical analysis to be performed to assess the appropriateness of the 3/2 framework for the consistent pricing of equity and vix derivatives .empirically the pure - diffusion 3/2 model performs well ; it is able to reproduce upward - sloping implied volatilities in vix options , while a competing model of the same complexity and analytical tractability can not .furthermore , the 3/2 plus jumps model is able to produce a better short - term fit to the implied volatility of index options than its pure - diffusion counterpart , without a loss in tractability .these observations make the 3/2 plus jumps model a suitable candidate for the consistent modelling of equity and vix derivatives .10 andersen , l. b. g. , and piterbarg , v. , moment explosions in stochastic volatility models , finance and stochastics , 11 , 2950 , 2007 .bakshi , g. and ju , n. and yang , h. , estimation of continuous time models with an application to equity volatility , journal of financial economics , 82 , 227249 , 2006 .baldeaux , j. , exact simulation of the model , international journal of theoretical and applied finance , to appear .bates , d. , jumps and stochastic volatility : exchange rate processes implicit in deutsche mark options , review of financial studies , 9 , 69107 , 1996 .bayraktar , e. , kardaras , c. , and xing , h. , valuation equations for stochastic volatility models , siam journal of financial mathematics , to appear .bergomi , l. , smile dynamics iii , risk , 18 , 6773 , 2005 .buehler , h. , consistent variance curve models , finance and stochastics , 10 , 178203 , 2006 .carr , p. , geman , h. , madan , d. , and yor , m. , pricing options on realized variance , finance and stochastics , 9 , 453475 , 2005 .carr , p. , and madan , d. 
, option pricing using the fast fourier transform , journal of computational finance , 4 , 6173 , 1999 .carr , p. , and sun , j. , a new approach for option pricing under stochastic volatility , review of derivatives research , 10 , 87150 , 2007 .cont , r. , and kokholm , t. , a consistent pricing model for index options and volatility derivatives , mathematical finance , to appear .chan , l .- l . , and platen , e. , exact pricing and hedging formulas of long dated variance swaps under a volatility model , working paper .drimus , g. g. , options on realized variance by transform methods : a non - affine stochastic volatility model , quantitative finance , to appear .duffie , d. , pan , j. , and singleton , k. , transform analysis and asset pricing for affine jump - diffusion , econometrica , 68 , 1343-1376 , 2000 .fang , f. , and osterlee , k. , a novel pricing method for european options based on fourier - cosine series expansions , siam journal of scientific computing , 31 , 826848 , 2008 .gatheral , j. , the volatility surface , wiley finance , 2006 .goard , j. , and mazur , m. , stochastic volatility models and the pricing of vix options , mathematical finance , to appear .heston , s. l. , a closed - form solution for options with stochastic volatility with applications to bond and currency options , review of financial studies , 6 , 327343 , 1993 .heston , s. l. , a simple new formula for options with stochastic volatility , washington university of st .louis ( working paper ) .itkin , a. , and carr , p. , pricing swaps and options on quadratic variation under stochastic time change models - discrete observations case , review of derivatives research , 13 , 141176 , 2010 .jeanblanc , m. , yor , m. , and chesney , m. , mathematical methods for financial markets , springer finance , springer , 2009 .karatzas , i. , and shreve s. , brownian motion and stochastic calculus , springer - verlag , new york , 1991 .keller - ressel , m. , moment explosions and long - term behavior of affine stochastic volatility models , mathematical finance , 21 , 7398 , 2011 .lennox , k. , lie symmetry methods for multidimensional linear parabolic pdes and diffusions , phd thesis , university of technology , sydney , 2012 .lewis , a. , l. , option valuation under stochastic volatility , finance press , newport beach , 2000 .lian , g .- h . , and zhu , s .-, pricing vix options with stochastic volatility and random jumps , decisions in economics and finance , to appear .medvedev , a. , and scaillet , o. , approximation and calibration of short - term implied volatilities under jump - diffusion stochastic volatility , review of financial studies , 20 , 427459 , 2007 .meyer - dautrich , s. , and vierthauer , r. , pricing target volatility fund derivatives , quantitative methods in finance conference presentation , 2011 .mijatovi , a. , and urusov , m. , on the martingale property of certain local martingales , probability theory and related fields , to appear .sepp , a. , pricing options on realized variance in the heston model with jumps in returns and volatility , journal of computational finance , 11 , 33 - 70 , 2008 .sepp , a. , vix option pricing in a jump - diffusion model , risk , 8489 , april 2008 .sepp , a. , parametric and non - parametric local volatility models : achieving consistent modeling of vix and equity derivatives , quant congress europe , london conference presentation , november 2011 .sin , c. a. 
, complications with stochastic volatility models , advances in applied probability , 30 , 256 - 268 , 1998 . zhang , j. e. , and zhu , y. , vix futures , journal of futures markets , 26 , 521 - 531 , 2006 . zhu , s .- p . , and lian , g .- h . , an analytical formula for vix futures and its applications , journal of futures markets , to appear .
the paper demonstrates that a pure - diffusion model is able to capture the observed upward - sloping implied volatility skew in vix options . this observation contradicts a common perception in the literature that jumps are required for the consistent modelling of equity and vix derivatives . the pure - diffusion model , however , struggles to reproduce the smile in the implied volatilities of short - term index options . one remedy to this problem is to augment the model by introducing jumps in the index . the resulting plus jumps model turns out to be as tractable as its pure - diffusion counterpart when it comes to pricing equity , realized - variance and vix derivatives , while accurately capturing the smile in implied volatilities of short - term index options . stochastic volatility plus jumps model , model , vix derivatives
we thank u. biets and j. bailey for providing photographs used here . a.b . thanks a. maffini for discussions during the early stages of the project . j.l.s . thanks m. bierbaum , j. sethna , and i. cohen for useful discussions . thanks to a. gdin and j. svensson for software development . a.b . acknowledges funding from the centre for interdisciplinary mathematics ( cim ) . j.l.s . was independently funded . by an amount equal to the characteristic speed squared . consequently , at steady - state . [ [ the - correlation - matrix . ] ] the correlation matrix . + + + + + + + + + + + + + + + + + + + + + + + as the starting point of our analysis , we use the simulated trajectories to compute the displacement covariance matrix . we treat separately the and components of the position vector , and the full matrix is constructed by combining the and components . in crystal theory the participation ratio of a mode describes how many particles in the system move in a given mode , and runs from 0 ( fully localized ) to 1 ( fully extended ) . if we think about modes as collective dynamics , another useful characterization of their collective nature and spatial coherence is given by the mean polarization and the correlation function of the fluctuations around it . from this computation , we define the correlation length such that . in order to characterize the structure around spps in soft spots , we use the two particle radial structure factor for different fractions of spps .
collective motion of large human crowds often depends on their density . in extreme cases like heavy metal concerts and black friday sales events , motion is dominated by physical interactions instead of conventional social norms . here , we study an active matter model inspired by situations when large groups of people gather at a point of common interest . our analysis takes an approach developed for jammed granular media and identifies goldstone modes , soft spots , and stochastic resonance as structurally - driven mechanisms for potentially dangerous emergent collective motion . studies of collective motion cover a broad range of systems including humans , fish , birds , locusts , cells , vibrated rice , colloids , actin - myosin networks , and even robots . often , theoretical models of these active matter systems take a newtonian approach by calculating individual trajectories generated _ in silico _ from the sum of forces acting on each of particles . for the work focusing on humans , social interactions such as collision avoidance , tendencies to stay near social in - group members , directional alignment , and preference for personal space have been examined to understand their role in emergent behavior . generally , these studies show order - disorder transitions are driven by the competition between social interactions and randomizing forces . moreover , these models have been incorporated into predictive tools used to enhance crowd management strategies at major organized gatherings . in extreme social situations such as riots , protests , and escape panic , however , the validity of this approach is diminished . conventional social interactions no longer apply to individual people , and the actual collective behavior can be quite different from model predictions . situations involving large groups of people packed at high - densities provide a unique view of the emergent collective behavior in extreme circumstances . for example , attendees at heavy metal concerts often try to get as close as possible to the stage , but are unable to do so due to the shear number of people trying to attain the same goal [ fig . [ fig:1](a ) ] . consequently , the audience in this region of the concert venue becomes a densely packed shoulder - to - shoulder group with little room for individuals to freely move . often , the stresses involved become dangerously high and security professionals standing behind physical barriers are required to pull audience members from the crowd for medical attention . at black friday sales events , we find a similar situation when individuals seeking low - cost consumer goods congregate at the entrance of a store before it opens [ fig . [ fig:1](b ) ] . as documented in many news reports and online videos , these events can have tragic outcomes in the critical moments after the doors open and the crowd surges forward leading to increased risk of stampedes and trampling . in extreme situations involving large high - density crowds , physical interaction between contacting bodies and the simultaneous collective desire of each individual to get to a stage , through a door , or to a particular location become the dominant considerations . to generically capture these scenarios , we use a conventional force - based active matter model for human collective motion , but remove terms that account for social interaction . with this simplification , we have an _ asocial _ model for human collective behavior describing people aggregating around a common point of interest . 
here , we place at the side of a 2d simulation box [ fig . [ fig:1](c ) ] . in this framework , each person is modeled as a disk with radius positioned at a point subject to pairwise soft - body repulsive collision forces , a self - propulsion force , random force fluctuations from environmental stimuli , and a rigid - wall collision force . for each of the self - propelled particles ( spps ) in our model we have , which takes non - zero values only when the distance between two particles ; , where is a constant preferred speed , is the current speed of the spp , and is a unit vector pointing from each particle s center to the common point of interest ; is a random force vector whose components are drawn from a gaussian distribution with zero mean and standard deviation defined by the correlation function , which ensures noise is spatially and temporally decorrelated . the simulation box s boundaries are rigid so that collisions with spps give rise to a force similar to the repulsion force , , which is non - zero when the distance of the particle from the wall , and is directed along the wall s outward normal direction . in terms of the simulation unit length and unit time , we set the particle radius , the simulation box size , the preferred speed , the random force standard deviation and the force scale coefficients , . results presented here are for a collection of spps , though varying population size has little effect ( supplemental materials ) . simulations were initialized with random initial positions for each particle . trajectories were evolved with newton - stormer - verlet integration according to for a total of units of time [ fig . [ fig:1](c ) ] , where each consists of 10 integration time steps . because data for the initial was dominated by transient motion , we discarded the first from our analysis to avoid far - from - equilibrium effects [ fig . [ fig:1](c ) , linear path segments ] . by the spps aggregated near and settled into a steady - state configuration with each particle making small random motions about their average positions [ fig . [ fig:1](d ) ] . for the model parameters studied here , collisions and random force fluctuations contribute roughly equally to these motions , which can be seen by estimating the relevant time scales . at average crowd density , the collision time scale is , the noise time scale is ( supplemental materials ) , so that at steady - state . thus , while acts as an external field confining spps , collision and noise forces are responsible for position fluctuations and the aggregate s disordered structure [ fig . [ fig:1](d ) ] . to better understand the role of local structure in global collective motion , we note a striking resemblance between these simulations of high - density crowds and previous studies of disordered packings . in the context of jammed granular materials , a significant amount of effort has gone into developing theoretical tools that connect local structure to dynamical response . a key analysis method involves the displacement correlation matrix whose components are defined by $ c_{ij } = \langle \left [ \vec{r}_i(t ) - \langle \vec{r}_i \rangle \right ] \cdot \left [ \vec{r}_j(t ) - \langle \vec{r}_j \rangle \right ] \rangle $ . here , is the instantaneous position at time step , is the mean position of the spp , and all averages were calculated by sampling position data every for a total of 270 measurements . this sampling was chosen to reduce effects of auto - correlated motion while still accumulating sufficient statistically independent measurements in a finite time .
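a minimal numpy sketch of the asocial model is given below . the functional forms ( a linear - spring overlap repulsion , a relaxation - type self - propulsion toward the point of interest , per - step gaussian kicks , unit mass ) and every numerical value are assumptions chosen only for illustration , since the paper s equations and parameter values are not reproduced here ; the loop mimics the verlet - style integration and transient - discarding protocol described above , and its output feeds the correlation analysis sketched after the soft - spot discussion below .

```python
import numpy as np

# all numerical values are placeholders, not the paper's settings
N, R = 200, 1.0                     # number of SPPs, particle radius
L_box = 60.0                        # square box side
v0, tau = 1.0, 1.0                  # preferred speed and propulsion relaxation time
k_rep, k_wall = 25.0, 25.0          # spring constants for particle and wall repulsion
eta = 0.5                           # std of the random force components
dt, n_steps = 0.05, 20_000
poi = np.array([L_box, L_box / 2])  # point of interest on the right-hand wall

rng = np.random.default_rng(1)
pos = rng.uniform(R, L_box - R, size=(N, 2))
vel = np.zeros((N, 2))

def forces(pos, vel):
    f = np.zeros_like(pos)
    # self-propulsion toward the common point of interest
    to_poi = poi - pos
    e_hat = to_poi / np.linalg.norm(to_poi, axis=1, keepdims=True)
    f += (v0 * e_hat - vel) / tau
    # pairwise soft-body repulsion (linear spring on overlap)
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)
    overlap = np.clip(2 * R - dist, 0.0, None)
    f += np.sum(k_rep * overlap[..., None] * diff / dist[..., None], axis=1)
    # rigid walls: push back along the inward normal when a particle gets too close
    f[:, 0] += k_wall * np.clip(R - pos[:, 0], 0, None) - k_wall * np.clip(pos[:, 0] - (L_box - R), 0, None)
    f[:, 1] += k_wall * np.clip(R - pos[:, 1], 0, None) - k_wall * np.clip(pos[:, 1] - (L_box - R), 0, None)
    # spatially and temporally uncorrelated gaussian kicks (simplified treatment)
    f += rng.normal(0.0, eta, size=pos.shape)
    return f

samples = []
f_old = forces(pos, vel)
for step in range(n_steps):
    pos += vel * dt + 0.5 * f_old * dt**2          # velocity-Verlet position update
    f_new = forces(pos, vel)
    vel += 0.5 * (f_old + f_new) * dt
    f_old = f_new
    if step > n_steps // 2 and step % 200 == 0:    # discard transient, then subsample
        samples.append(pos.copy())

samples = np.array(samples)   # shape (n_samples, N, 2): input to the correlation analysis
```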
in this computation , we exclude underconstrained spps that do not contribute to the overall collective motion . in the jamming literature these particles are called `` rattlers , '' and they are distinguished by abnormally large position fluctuations . in our analysis , we used a position fluctuation threshold of 4 standard deviations to identify rattlers . however , our results were self - consistent when we varied this parameter from 2 to 5 indicating the methodology is robust to a range of threshold values ( supplemental materials ) . of the displacement correlation matrix exhibits scaling properties between and ( black dashed lines ) . low eigenmodes in both ( blue ) and ( orange ) directions are larger than a random matrix model ( ) , and thus describe correlated motion . ( b ) snapshot of instantaneous displacements and example vector fields for various eigenmodes . lower eigenmodes are more spatially correlated than higher . ( c ) a heatmap of the polarization correlation function for the first 10 eigenmodes as a function of distance between spps . black line is where the correlation function decays to 0 demonstrating a long - range highly correlated mode for . ] to extract quantitative information from the configuration of spps , we computed the eigenmodes and eigenvalues of the displacement correlation matrix . in the harmonic theory of crystals , these normal modes fully characterize the linear response of the system to perturbations . for disordered materials , these modes convey information about structural stability as well as coherent and localized motion . plotting the eigenvalue spectrum as a function of mode number averaged over 10 runs with random initial conditions revealed an approximate power - law decay [ fig . [ fig:2](a ) , blue and orange data ] . while the debye model for 2d crystals obeys [ fig . [ fig:2](a ) , upper dashed line ] , the simulation data has an exponent between -1 and -2 . using a random matrix model of uncorrelated gaussian variables as a control for decoherent motion [ fig . [ fig:2](a ) , black dotted line ] ( supplemental materials ) , we see the lowest six eigenmodes contain information about correlated motion . plotting displacement vector fields for a few eigenmodes , we indeed find a higher degree of spatial correlation for lower that rapidly diminishes with increasing mode number [ fig . [ fig:2](b ) ] . to quantify this observation , we measured the polarization of the each mode s vector field and calculated the correlation function for this order parameter ( supplemental materials ) . remarkably , we find the first eigenmode carries a system - spanning displacement modulation [ fig . [ fig:2](c ) , ] , whereas the correlation for higher modes rapidly decays over a few particle diameters [ fig . [ fig:2](c ) , . to understand the origins of this long wavelength mode , we note the point of interest breaks translational symmetry , and therefore the goldstone theorem implies the existence of low - frequency long - wavelength deformations . this goldstone mode is expected to arise at low since eigenvalues are related to vibrational frequencies by , and the largest eigenvalue in the spectrum occurs at the lowest mode number [ fig . [ fig:2](a ) ] . thus , the system - spanning eigenmode is the system s goldstone boson ; when excited , it drives the spps to move collectively as one . an example of a disaster resulting from this type of coherent long - range motion is known as `` crowd crush '' . 
in these situations , a large number of people are suddenly displaced toward a wall , fence , or other architectural element resulting in dangerously high pressures . as a consequence , injuries and death are known to occur . determining if goldstone modes are responsible for crowd crush would require careful image analysis of crowd structure and motion in the moments before such an event . nevertheless , we expect any large dense gathering of people to exhibit this type of long - range collective behavior since its origins can be traced to the general principle of symmetry breaking . ( figure [ fig:3 ] caption , panel ( c ) : the structure factor measures the pair - wise spp distribution and reveals structural features distinguishing spps in soft spots that suggest why they are subject to large displacements . ) another type of disaster found at high - density social gatherings is when sudden unexpected movements of the crowd cause individuals to trip and fall . because the majority of people are unaware this accident has happened , the rest of the crowd continues to move largely uninterrupted , resulting in injury or death due to trampling or compressive asphyxia . this type of localized event is more general than the excitation of a pure goldstone mode , and is better characterized by a superposition of modes . thus , we focus on the particles that displace significantly more than average in a given mode [ fig . [ fig:3](a ) , displacement threshold is 2.5 standard deviations more than average ] ( supplemental materials ) . studies of jammed granular media show these particles , which tend to cluster in regions called `` soft spots , '' correlate with structural rearrangements when the system is perturbed . superimposing data from the first 10 modes of a single simulation run reveals a soft spot near the core of the aggregate [ fig . [ fig:3](a ) ] . regions along the perimeter also featured large displacements , but they are essentially underconstrained edge effects and therefore not relevant for our analysis . identifying spps undergoing the largest displacements in each mode up to in all simulation runs showed the region near the core of the crowd is the most likely area to find soft spots [ fig . [ fig:3](b ) , peak centered on ] . cross - correlating soft spot spps with their real - space dynamics confirmed these particles typically displace the greatest amount despite being confined within a disordered aggregate [ fig . [ fig:3](a ) ] . we further studied the relation between structural disorder and large displacements in soft spots by measuring the pairwise distribution as a function of distance between particles ( supplemental materials ) and found that soft spot spps have an intrinsically different structure compared to the average population [ fig . [ fig:3](c ) ] . the plateaued region in around [ fig . [ fig:3](c ) full line ] indicates soft spot spps are more highly squeezed by some of their neighbors , while the shifted peak centered on indicates they are also further away than average from other neighbors [ fig . [ fig:3](c ) , dashed line peak at ] . these data suggest soft spot spps are being compressed tightly in one direction , and as a consequence displace greater amounts in the orthogonal direction . as such , structural disorder is fundamental for large displacements and rearrangements [ fig . [ fig:3](a ) ] . thus , we hypothesize soft spots in human crowds pose the greatest risk for tripping and subsequent trampling .
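the eigenmode and soft - spot analysis described above can be sketched in a few lines , continuing from the `samples` array produced by the simulation sketch earlier . for brevity this uses the scalar n - by - n form of the correlation matrix ; recovering the two - dimensional displacement vector fields and the polarization correlation shown in the figures requires the larger block form mentioned in the supplemental note . the 10 - mode window and the 2.5 - standard - deviation threshold follow the values quoted in the text ; the pair - distance histogram is only a crude stand - in for the radial structure factor , and with few stored samples the matrix is rank - deficient , so only the leading modes are meaningful .

```python
import numpy as np

disp = samples - samples.mean(axis=0, keepdims=True)    # displacements about mean positions
n_samp, N, _ = disp.shape

# displacement correlation matrix  c_ij = < [r_i(t)-<r_i>] . [r_j(t)-<r_j>] >
C = np.einsum('tia,tja->ij', disp, disp) / n_samp
evals, evecs = np.linalg.eigh(C)
order = np.argsort(evals)[::-1]                         # mode 1 = largest eigenvalue
evals, evecs = evals[order], evecs[:, order]

# random-matrix baseline: uncorrelated gaussian displacements with the same overall variance
rng = np.random.default_rng(2)
rand = rng.normal(0.0, disp.std(), size=disp.shape)
evals_rand = np.sort(np.linalg.eigvalsh(np.einsum('tia,tja->ij', rand, rand) / n_samp))[::-1]

# participation ratio of each mode (1/N = localized on one particle, 1 = fully extended)
pr = 1.0 / (N * np.sum(evecs**4, axis=0))
print("leading eigenvalues  :", np.round(evals[:6], 3))
print("random baseline      :", np.round(evals_rand[:6], 3))
print("participation ratios :", np.round(pr[:6], 3))

# soft spots: particles whose weight in any of the first 10 modes is unusually large
n_modes, n_sigma = 10, 2.5
soft = np.zeros(N, dtype=bool)
for k in range(n_modes):
    amp = np.abs(evecs[:, k])
    soft |= amp > amp.mean() + n_sigma * amp.std()
print(f"{soft.sum()} soft-spot candidates out of {N} particles")

# crude pair-distance histogram, a stand-in for the radial structure factor g(r)
mean_pos = samples.mean(axis=0)
def pair_hist(points, others, r_max=10.0, n_bins=50):
    d = np.linalg.norm(points[:, None, :] - others[None, :, :], axis=-1).ravel()
    d = d[(d > 1e-9) & (d < r_max)]
    hist, edges = np.histogram(d, bins=n_bins, range=(0.0, r_max), density=True)
    return hist, 0.5 * (edges[1:] + edges[:-1])

if soft.any():
    g_soft, r_mid = pair_hist(mean_pos[soft], mean_pos)
    g_all, _ = pair_hist(mean_pos, mean_pos)
```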
if found true , real - time image analysis identifying soft spots in densely - packed human crowds may provide useful predictive power for preventing injuries . our results thus far have focused on structural origins of collective motion with all model parameters kept constant . in real life situations , not all people behave the same : some agitate more easily , others less so . accordingly , we modify the asocial model to study how mechanisms for coherent collective motion are affected by active perturbations from within . specifically , we introduce a second population of spps so that a fraction exhibits a more agitated behavior , while the remaining fraction of the population are the same as before . we model these agitated spps with a larger distribution of force fluctuations in by increasing their standard deviation to , and analyzing the two parameter phase space made of and . we first consider the case and vary from 0 to 1 . calculating the spectrum of eigenvalues shows the qualitative trends are independent of , though numerical values of tend to increase with more agitators ( supplemental materials ) . to understand how long - range collective motion is affected by agitated spps , we measured the polarization correlation function for the first 10 modes by varying and [ fig . [ fig:4 ] ] . surprisingly , the correlation functions for at various values of show a qualitative transition unanticipated from the eigenvalue spectrum . for , a long - range correlated goldstone mode is observed as before . however , multiple long - range correlated modes are observed for , and no long - range correlated modes are observed for . examining other values of shows a similar transition with increasing from a single well - defined long - range mode , to multiple long - range modes , to no long - range modes whatsoever [ fig . [ fig:4 ] , rows left - to - right ] . of agitated spps with variance in to the total population probes structural origins of collective motion . each heat map is the polarization correlation function for the first 10 eigenmodes as a function of distance ( same as fig . [ fig:2](c ) ) . low fluctuations ( white background ) preserve the long - range highly - correlated goldstone mode near . high fluctuations ( dark gray background ) destroy long - range correlated modes . intermediate fluctuations ( light gray background ) add new modes with long - range correlations , indicating stochastic resonance . ] the low - agitation and high - agitation limits are intuitive . for low agitation [ fig . [ fig:4 ] , white region ] , additional force fluctuations through increasing with low or increasing with low induce small perturbations to the overall structure . as such , the existence of a goldstone mode at low is anticipated based on the homogeneous population results [ fig . [ fig:2](c ) ] . for high agitation where the combined effect of and is large [ fig . [ fig:4 ] , dark gray shaded region ] , we expect local structure of the aggregated spps to break down and correlated motion to be marginalized . consistent with this reasoning , we find no long - range modes in the high - agitation limit ( supplemental materials ) . between the high and low agitation limit , we find a boundary in the phase diagram characterized by multiple long - range modes [ fig . [ fig:4 ] , light gray shaded region ] . this result is striking because it shows moderate levels of noise induces new coherent modes . 
noting that correlated motion allows mechanical information to be transferred across the aggregate , an appearance of multiple long - range modes implies greater information bandwidth . in certain settings , signal enhancement mediated by noise is called _ stochastic resonance _ . stochastic resonance can be found in systems where nonlinear effects dampen signal propagation , but by introducing random noise , the effects of nonlinear terms are reduced leading to a restoration of signal propagation . in our case , nonlinear effects come from structural packing disorder that suppresses conventional phonon modes found in ordered 2d systems . random noise from agitators increases an internal pressure within the aggregate that helps break - up this heterogeneous structure . consequently , additional phonon modes are able to reassert their presence . in the context of our model , this finding means that modest random fluctuations can enhance overall collective motion , which increases the potential for injurious outcomes in high - density crowds . our analysis of collective motion in dense crowd simulations relies on trajectory data in order to identify and understand the emergence of goldstone modes , soft spots , and stochastic resonance . with an eye to crowd safety , the dependence on readily measurable quantities combined with computer vision techniques provides significant potential for applications in real - time crowd management . in the long - run this may help protect attendees at large gatherings by reducing emergent risks . more theoretically , the observation of goldstone modes hints that a collective motion analogous to the higgs particle may also be found in future studies of crowd speed modulations . indeed , developing an effective field theory with quasi - particle - like excitations could present new opportunities to understand emergent collective motions , their interactions , and potential hazards
inverse formulations are solved on a daily basis in many disciplines such as image and signal processing , astrophysics , acoustics , quantum mechanics , geophysics and electromagnetic scattering .the inverse formulation , as an interdisciplinary field , involves people from different fields within natural science . to find out the contents of a given black box without opening it , would be a good analogy to describe the general inverse problem .experiments will be carried on to guess and realize the inner properties of the box .it is common to call the contents of the box `` the model '' and the result of the experiment `` the data '' .the experiment itself is called `` the forward modeling '' . as sufficient informationcan not be provided by an experiment , a process of regularization will be needed .the reason to this issue is that there can be more than one model ( different black boxes ) that would produce the same data . on the other hand , improperly posed numerical computations will occur in the calculation procedure .thus , a process of regularization constitutes a major step to solve the inverse problem .regularization is used at the moment when selection of the most reasonable model is on focus .computational methods and techniques ought to be as flexible as possible from case to case .a computational technique utilized for small problems may fail totally when it is used to large numerical domains within the inverse formulation .hence , new methodologies and algorithms would be created for new problems though existing methods are insufficient .this is the major character of the existing inverse formulation in problems with huge numerical domains .there are both old and new computational tools and techniques for solving linear and nonlinear inverse problems .linear algebra has been extensively used within linear and nonlinear inverse theory to estimate noise and efficient inverting of large and full matrices . asexisting numerical algorithms may fail , new algorithms must be developed to carry out nonlinear inverse problems .+ electromagnetic inverse,- and direct scattering problems are , like other related areas , of equal interest .the electromagnetic scattering theory is about the effect an inhomogeneous medium has on an incident wave where the total electromagnetic field is consisted of the incident,- and the scattered field .the direct problem in such context is to determine the scattered field from the knowledge of the incident field and also from the governing wave equation deduced from the maxwell s equations .as the direct scattering problem has been thoroughly investigated , the inverse scattering problem has not yet a rigorous mathematical / numerical basis . because the nonlinearity nature of the inverse scattering problem , one will face improperly posed numerical computation .this means that , in particular applications , small perturbations in the measured data cause large errors in the reconstruction of the scatterer .some regularization methods must be used to remedy the ill - conditioning due to the resulting matrix equations . 
concerning the existence of a solution to the inverse electromagnetic scattering problem , one has to think about finding approximate solutions after stabilizing the inverse problem . a number of methods has been given to solve the inverse electromagnetic scattering problem in which the nonlinear and ill - posed nature of the problem is acknowledged . earlier attempts to stabilize the inverse problem were based on reducing the problem to a linear integral equation of the first kind . however , general techniques were introduced to treat the inverse problems without applying any integral equation formulation of the problem . scattering theory has had a major role in twentieth century mathematical physics . in computational electromagnetics , the direct scattering problem is to determine a scattered field from knowledge of an incident field and the governing wave equation . the incident field is emitted from a source , an antenna for instance , against an inhomogeneous medium . the total field is assumed to be the sum of the incident field and the scattered field . the governing differential equation in such cases is the coupled differential form of maxwell s equations , which will be converted to the wave equation . + in order to guarantee operability of advanced electronic devices and systems , electromagnetic measurements should be compared to results from computational methods . the experimental techniques are expensive and time consuming but are still widely used . hence , the advantage of obtaining data from tests must be weighed against the large amount of time and expense required to perform such tests . analytic solution of maxwell s equations offers many advantages over experimental methods , but the applicability of analytical electromagnetic modeling is often limited to simple geometries and boundary conditions . as the analytical solutions of maxwell s equations by the method of _ separation of variables _ and _ series expansions _ have a limited scope , they are not applicable in the general case or in real - world applications . the availability of high performance computers during the last decades has been one of the reasons to use numerical techniques within computational modeling to solve maxwell s equations also for complicated geometries and boundaries . + the main objective of this article is to investigate mathematical modeling and algorithms to solve the direct and inverse electromagnetic scattering problems due to biological tissues for a model based illustration technique within the microwave range . such algorithms make it possible to process in parallel the heavy and large numerical calculations that arise from the inverse formulation of the problem . the parallel calculations can then be performed on gpus , cpus , and fpgas . with the aid of a deeper mathematical analysis , and thereby faster numerical algorithms , an improvement of the existing numerical algorithms will be obtained . the algorithms may be formulated in the time domain , the frequency domain , or a combination of both domains . in constructing the electrostatic model , the electric field intensity vector and the electric flux density vector , , are respectively defined . the fundamental governing differential equations are where is the volume charge density .
by introducing as the the _ electric permittivity _ where is _ relative permittivity _ , and as the _ permittivity of free space _ for a linear and isotropic media , and are related by relation the fundamental governing equations for magnetostatic model are where and are defined as the magnetic flux density vector and the magnetic field intensity vector , respectively . and are related as where is defined as magnetic permeability of the medium which is measured in ; is called _ permeability of free space _ and is a ( material - dependent ) number .the medium in question is assumed to be linear and isotropic .( [ eq : ma1 ] ) and ( [ eq : ma2 ] ) are known as maxwell s equations and form the foundation of electromagnetic theory . as it is seen in the above relations , and in the electrostatic modelare not related to and in the magnetostatic model .the coexistence of static electric fields and magnetic electric fields in a conducting medium causes an electromagnetostatic field and a time - varying magnetic field gives rise to an electric field .these are verified by numerous experiments .static models are not suitable for explaining time - varying electromagnetic phenomenon . under time - varying conditionsit is necessary to construct an electromagnetic model in which the electric field vectors and are related to the magnetic field vectors and . in such situations ,the equivalent equations are constructed as where is current density .as it is seen , the maxwell s equations above are in differential form .to explain electromagnetic phenomena in a physical environment , it is more convenient to convert the differential forms into their integral - form equivalents .there are several techniques to convert differential equations into integral equations but in the above cases , one may apply stokes s theorem to obtain integral form of maxwell s equations after taking the surface integral of both sides of the equations over an open surface with contour the result will be constructed as in the following table . _maxwell s equations _ + _ differential form _ _ integral form _ , in the above table , is the electric charge density in .when a physical system is subject to some external disturbance , a non - homogeneity arises in the mathematical formulation of the problem , either in the differential equation or in the auxiliary conditions or both .when the differential equation is nonhomogeneous , a particular solution of the equation can be found by applying either the method of undetermined coefficients or the variation of parameter technique . in general , however , such techniques lead to a particular solution that has no special physical significance .green s functions are specific functions that develop general solution formulas for solving nonhomogeneous differential equations .importantly , this type of formulation gives an increased physical knowledge since every green s function has a physical significance .this function measures the response of a system due to a point source somewhere on the fundamental domain , and all other solutions due to different source terms are found to be superpositions for .if each number of the functions is a solution to the partial differential equation with as a linear operator and with some prescribed boundary conditions , then the linear combination also satisfies here , is a known excitation or source .this fundamental concept is verified in different mathematical literature .] 
of the green s function .there are , however , cases where green s functions fail to exist , depending on boundaries .although green s first interest was in electrostatics , green s mathematics is nearly all devised to solve general physical problems .the inverse - square law had recently been established experimentally , and george green wanted to calculate how this determined the distribution of charge on the surfaces of conductors .he made great use of the electrical potential and gave it that name .actually , one of the theorems that he proved in this context became famous and is nowadays known as green s theorem .it relates the properties of mathematical functions at the surfaces of a closed volume to other properties inside .the powerful method of green s functions involves what are now called green s functions , .applying green s function method , solution of the differential equation , by as a linear differential operator , can be written as to see this , consider the equation which can be solved by the standard integrating factor technique to give so that .this technique may be applied to other more complicated systems . in an electrical circuitthe green s function is the current due to an applied voltage pulse . in electrostatics ,the green s function is the potential due to a change applied at a particular point in space . in general the green s functionis , as mentioned earlier , the response of a system to a stimulus applied at a particular point in space or time .this concept has been readily adapted to quantum physics where the applied stimulus is the injection of a quantum of energy . within electromagnetic computation, it is common practice to use two methods for determining the green s function in the cases where there is some kind of symmetry in the geometry of the electromagnetic problem .these are the eigenvalue formulation and the method of images .these two methods are described in the following sections , but in order to its importance , the method of the eigenfunction expansion method is first presented .if the eigenvalue problem associated with the operator can be solved , then one may find the associated green s function .it is known that the eigenvalue problem by prescribed boundary conditions , has infinite many eigenvalues and corresponding orthonormal eigenfunctions as and respectively , where moreover , the eigenfunctions form a basis for the square integrable functions on the interval .therefore it is assumed that the solution is given in terms of eigenfunctions as where the coefficients are to be determined .further , the given function forms the source term in the nonhomogeneous differential equation where is the inverse operator to the operator .now , the given function can be written in terms of the eigenfunctions as with combining ( [ eq : ux ] ) , ( [ eq : luf ] ) , and ( [ eq : fx4 ] ) gives by the linear property associated with superposition principle , it can be shown that but which finally yields by comparing the above equations , it will be obtained that further now , it is supposed that an interchange of summation and integral is allowed . 
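the expansion being assembled in this derivation can be made concrete in the simplest setting . the sketch below is an illustration only , not part of the source derivation : it builds the green s function of the operator l = -d^2/dx^2 on ( 0 , 1 ) with homogeneous dirichlet conditions from its sine eigenfunctions , compares it with the classical exact form x_<(1 - x_>) for this particular operator , and uses it to solve a sample source problem by superposition , anticipating the series representation obtained in the next passage .

```python
import numpy as np

def greens_eigen(x, xp, n_terms=500):
    """Green's function of L = -d^2/dx^2 on (0,1), Dirichlet ends, by eigenfunction expansion."""
    n = np.arange(1, n_terms + 1)
    lam = (n * np.pi) ** 2                            # eigenvalues
    phi_x = np.sqrt(2.0) * np.sin(n * np.pi * x)      # orthonormal eigenfunctions
    phi_xp = np.sqrt(2.0) * np.sin(n * np.pi * xp)
    return np.sum(phi_x * phi_xp / lam)

def greens_exact(x, xp):
    return min(x, xp) * (1.0 - max(x, xp))            # known exact result for this operator

print(greens_eigen(0.3, 0.7), greens_exact(0.3, 0.7))  # the two values agree closely

# the source problem L u = f is then solved by superposition: u(x) = int G(x, x') f(x') dx'
f = lambda s: np.sin(np.pi * s)                        # example source; exact u = sin(pi x)/pi^2
xs = np.linspace(0.0, 1.0, 1001)
for x_eval in (0.25, 0.5):
    G_row = np.array([greens_eigen(x_eval, s) for s in xs])
    u_val = np.trapz(G_row * f(xs), xs)
    print(x_eval, u_val, np.sin(np.pi * x_eval) / np.pi**2)
```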
in this case( [ eq : gf1 ] ) can be written as on the other hand , by the definition of green s function , one may write by comparing the last two equations , can be expressed in terms of green s functions as is the green s function associated with the eigenvalue problem ( [ eq : lulu ] ) with the differential operator .solution of electromagnetic fields is greatly supported and facilitated by mathematical theorems in vector analysis .maxwell s equations are based on helmholtz s theorem where it is verified that a vector is uniquely specified by giving its divergence and curl , within a simply connected region and its normal component over the boundary .this can be proved as a mathematical theorem in a general manner .solving partial differential equations ( pde ) like maxwell s equation desires different methods , depending on , for instance , which boundary condition the pde has and in which physical field it is studied .the green s function modeling is an applicable method to solve maxwell s equations for some frequently used cases by different boundary conditions .the issue in this type of formulation is , in the first hand , determining and solving the appropriate green s function by its boundary condition .once the green s function is determined , one may receive a clue to the physical interpretation of the whole problem and hence a better understanding of it .this forms the general manner of applying green s function formulation in different fields of science . in some cases within electromagnetic modeling , where the physical source is in the vicinity of a perfect electric conducting ( pec ) surface and where there is some kind of symmetry in the geometry of the problem , the method of images will be a logical and facilitating method to determine the appropriate green s function .the method of images is , in its turn , based on the uniqueness theorem verifying that a solution of an electrostatic problem satisfying the boundary condition is the only possible solution .electric- , and magnetic field of an infinitesimal dipole in the vicinity of an infinite pec surface is one of the subjects that can be studied and facilitated by applying the method of images . in the following section , the method of images is applied to derive the electromagnetic modeling for different electrical sources above a pec surface .it is assumed that an electric point charge is located at a vertical distance above an appropriate large conducting plane that is grounded .it will be difficult to apply the ordinary field solution in this case but by the image methods , where an equivalent system is presented , it will be considerably easier to solve the original problem .an equivalent problem can be to place an image point charge on the opposite side of the pec plane , i.e. . in the equivalent problem ,the boundary condition is not changed and a solution to the equivalent problem will be the only correct solution .the potential at the arbitrary point is which is a contribution from both charges and as and respectively . 
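the image construction just described is easy to verify numerically . the sketch below evaluates the potential of a point charge at height h above the grounded plane together with its image , checks that the potential vanishes on the conductor , and integrates the standard induced surface - charge density to recover the total image charge -q ; the numerical values of q and h are arbitrary illustrative choices .

```python
import numpy as np
from scipy.constants import epsilon_0
from scipy.integrate import quad

q, h = 1e-9, 0.5        # 1 nC charge, 0.5 m above the grounded plane (illustrative values)

def phi(x, y, z):
    """potential of the charge plus its image, valid for z >= 0 (zero below the PEC plane)."""
    d_real = np.sqrt(x**2 + y**2 + (z - h)**2)
    d_image = np.sqrt(x**2 + y**2 + (z + h)**2)
    return q / (4 * np.pi * epsilon_0) * (1.0 / d_real - 1.0 / d_image)

# boundary condition: the potential vanishes everywhere on the conductor surface z = 0
print(phi(0.3, -1.2, 0.0), phi(5.0, 2.0, 0.0))        # both are 0 to machine precision

# the induced surface charge density sigma(rho) = -q h / (2 pi (rho^2 + h^2)^{3/2})
# integrates over the whole plane to -q, the image charge
sigma = lambda rho: -q * h / (2 * np.pi * (rho**2 + h**2) ** 1.5)
total, _ = quad(lambda rho: sigma(rho) * 2 * np.pi * rho, 0.0, np.inf)
print(total / q)                                       # ~ -1.0
```

checking the boundary condition in this way is exactly the content of the uniqueness argument : once the image system reproduces the prescribed potential on the conductor , its field is the only valid solution in the source region .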
according to the image methods , eqn .( [ eq : point_potential ] ) gives the potential due to an electric point source above the pec plane on the region .the field located at will be zero ; it is indeed the region where the image charge is located .now it is assumed that a long line charge of constant charge per unit length is located at distance from the surface of the grounded conductor , occupying half of the entire space .it is also assumed that the line charge is parallel to both the grounded plane and to the -axis in the rectangular coordinate system .further , the surface of the conducting grounded plane is coincided with -plane and -axis passes through the line charge so that the boundary condition for this system is where is defined as the electric potential . to find the potential everywhere for this system applying the method of images , one may start by converting this system to an equivalent system where the boundary condition of the original problem will be preserved . to solve this problem by the method of images , the original system will first be converted to another system where the conducting grounded plane vanishes , i.e. a system where the line charge is in the free - space . by using the polar coordinate system ,the potential at an arbitrary point , is .\end{aligned}\ ] ] + ( a ) + and at distance from each other and observed as ( a ) : perpendicular to the paper plane , ( b ) : coincided by the paper plane.,title="fig:",width=226 ] + ( b ) + an equivalent problem may consist of a system of two parallel long lines with opposite charges in the free - space at distance from each other ; the charge densities of the two lines are assumed to be and , respectively .according to the method of images , the total potential will be determined by contribution from these two line charges , which respectively are above it.,title="fig:",width=226 ] + \end{aligned}\ ] ] and .\end{aligned}\ ] ] the total potential is resulted from both of these two line charges as according to the uniqueness theorem and the method of images , eqn .( [ eq : line_potential_4 ] ) gives the solution for a long line charge at distance above the pec plane .the potential below the pec surface will be zero .this is illustrated in fig . [fig : phi_x ] .the overall radiation properties of a radiating system can significantly alter in the vicinity of an obstacle .the ground as a lossy medium , i.e. , is expected to act as a very good conductor above a certain frequency .hence , by applying the method of images the ground should be assumed as a perfect electric conductor , flat , and infinite in extent for facilitating the analysis .it will also be assumed that any energy from the radiating element towards the ground undergoes reflection and the ultimate energy amount is a summation of the reflected and directed ( incident ) components where the reflected component can be accounted for by the introduction of the image sources . in all of the following cases ,the far - field observation is considered . to find the electric field , radiated by a current element along the infinitesimal length , it will be convenient to use the magnetic vector potential as where and represent the observation point coordinates and the coordinates of the constant electric current source , respectively . is the distance from any point on the source to the observation point ; the integral path is the length of the source , and where and are permeability and permittivity of the medium . 
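To close the electrostatic image examples before turning to radiating sources, the line-charge result obtained above is evaluated below. The coordinate convention (grounded plane y = 0, line charge parallel to the z-axis at height y = d) and the numerical values are assumptions made for this example; the potential is the difference of the two logarithmic terms and vanishes on the plane because the distances to the line and to its image coincide there.

```python
import numpy as np

EPS0 = 8.8541878128e-12
lam, d = 1e-9, 0.05     # illustrative: 1 nC/m line charge, 5 cm above the plane

def potential(x, y):
    """Potential of an infinite line charge (parallel to z, at height y = d)
    above a grounded plane y = 0, from the line +lam and its image -lam at y = -d."""
    r1 = np.hypot(x, y - d)      # distance to the physical line charge
    r2 = np.hypot(x, y + d)      # distance to the image line charge
    return lam / (2.0 * np.pi * EPS0) * np.log(r2 / r1)

x = np.linspace(-0.2, 0.2, 5)
print("Phi on the grounded plane (y = 0) :", potential(x, 0.0 * x))   # ~ 0
print("Phi above the line (x = 0, y = 2d):", potential(0.0, 2 * d), "V")
```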
by the assumption that an infinitesimal dipole is placed along the -axis of a rectangular coordinate system plus that it is placed in the origin, one may write for constant electric current , and . hence , the distance will be by knowing that , and by setting , eqn .( [ eq : line_vec_pot ] ) may be written as the most appropriate coordinate system for studying such cases is the spherical coordinate system , so the vector potential in eqn .( [ eq : line_vec_pot2 ] ) should be converted into the spherical components as in the last three equations , by the assumption that the infinitesimal dipole is placed along the -axis . for determining the electric field radiation of the dipole, one should operate the magnetic vector potential by a curl operation to obtain the magnetic field intensity as in spherical coordinate system , eqn .( [ eq : magnetic_curl1 ] ) is expressed as + \frac{\hat{\theta}}{r}\left[\frac{1}{\sin \theta}\frac{\partial a_{r}}{\partial\phi}-\frac{\partial}{\partial r}(ra_{\phi})\right]+\frac{\hat{\phi}}{r}\left[\frac{\partial}{\partial r}(ra_{\theta})-\frac{\partial a_{r}}{\partial\theta}\right]\right).\nonumber\end{aligned}\ ] ] but according to eqn .( [ eq : spher_vec_pot3 ] ) and due to spherical symmetry of the problem , where there are no -variations along the -axis , the last equation simplifies to ,\end{aligned}\ ] ] which together with eqn .( [ eq : spher_vec_pot1 ] ) and ( [ eq : spher_vec_pot22 ] ) gives further , by equating maxwell s equations , it will be obtained that by setting in eqn .( [ eq : magnetic3 ] ) , it will be obtained that eqn .( [ eq : electric4 ] ) , together with eqns .( [ eq : spher_vec_pot1])-([eq : spher_vec_pot3 ] ) yields ^{-j\beta r},\end{aligned}\ ] ] ^{-j\beta r},\end{aligned}\ ] ] where is called the intrinsic impedance ( ohms for the free - space ) .stipulating for the far - field region , i.e. the region where , the electric field components and in eqns .( [ eq : electric11])-([eq : electric33 ] ) can be approximated by which is the electric far - field solution for an infinitesimal dipole along the -axis and in the spherical coordinate system .the same procedure may be used to solve the electric field for an infinitesimal dipole along the -axis where the magnetic vector potential is defined as in the spherical coordinate system , the above equation is expressed as it should be mentioned that due to the placement of the infinitesimal dipole along the -axis . by far - field approximation , and based on eqns .( [ eq : vector_potential_r])-([eq : vector_potential_phi ] ) , the electric field can be written as the electric field , as a whole , will be contributions from both and which is expressed as the overall radiation properties of a radiating system can significantly alter in the vicinity of an obstacle . the ground as a medium is expected to act as a very good conductor above a certain frequency . applying the method of images and for simplifying the analysis , the ground is assumed to be a perfect electric conductor , flat , and infinite in extent .it is also assumed that energy from the radiating element undergoes reflection and the ultimate energy amount is a summation of the reflected and the direct components respectively where the reflected component can be accounted for by the image sources . 
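The far-zone expressions obtained above for the infinitesimal dipole can be evaluated directly. The sketch below uses the standard far-field form of a z-directed Hertzian dipole, E_θ ≈ jηβ I₀ dl e^{-jβr} sinθ/(4πr); frequency, current, element length, and observation distance are illustrative values, not taken from the text.

```python
import numpy as np

ETA0 = 376.730313668          # intrinsic impedance of free space [ohm]
C0 = 299792458.0

f = 300e6                     # illustrative: 300 MHz
beta = 2 * np.pi * f / C0     # wavenumber
I0, dl = 1.0, 0.01            # 1 A on a 1 cm element (electrically small)
r = 1000.0                    # far-field observation distance [m]

theta = np.linspace(0.0, np.pi, 181)

# far-zone electric field of a z-directed Hertzian dipole (standard result)
E_theta = 1j * ETA0 * beta * I0 * dl * np.exp(-1j * beta * r) \
          * np.sin(theta) / (4.0 * np.pi * r)

print("|E_theta| at theta = 90 deg:", np.abs(E_theta[90]), "V/m")
print("null along the dipole axis :", np.abs(E_theta[0]))
```

The sinθ dependence gives the familiar doughnut pattern with a null along the dipole axis; the passage that follows adds the image contribution for a dipole elevated above a PEC plane.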
a vertical dipole of infinitesimal length and constant current ,is now assumed to be placed along -axis at distance above the pec surface by an infinite extent .the far - zone directed- , and reflected components in a far - field point are respectively given by and where and are the distances between the observation point and the two other points , the source- and the image- locations ; and are the related angles between these lines and the -axis .it is intended to express all the quantities only by the elevation plane angle and the radial distance between the observation point and the origin of the spherical coordinate system .for this purpose , one may utilize the law of cosines and also a pair of simplifications regarding the far - field approximation .the law of cosines gives by binomial expansion and regarding phase variations , one may write by utilizing the far - zone approximation where , and all of the above simplifications , it is obtained that finally , after some algebraic manipulations , one may find for .\end{aligned}\ ] ] according to the image theory , the field will be zero for .determining of green s functions for stratified media has , during the last decades , been an important and fundamental stage to design of high - frequency circuits . in the case of a layered medium , a so - called _ mixed - potential integral equation ( mpie ) _ ,is applied to the associated geometry .mpie can be solved in both spectral- , and spatial domain and the both solutions require appropriate green s functions .the green s functions for multi - layered planar media are represented by the sommerfeld s integral whose integrand is consisted of the hankel function , and the closed - form spectral - domain green s functions .a two - dimensional inverse fourier transformation is needed to determine the spectral - domain green s functions analytically via the following integral which is along the sommerfeld s integration path ( sip ) and the -plane as where is the hankel function of the second kind ; and are the green s functions in the spatial- and spectral- domain .one of the topics in this context is that there is no general analytic solution to the hankel transform of the closed - form spectral - domain green s function .numerical solution of the above transformation integral is very time - consuming , partly due to the slow - decaying green s function in the spectral domain , partly due to the oscillatory nature of the hankel function . dealing with such problemconstitutes one of the major topics within the computational electromagnetics for multi - layered media . in many applications , the _ discrete complex image methods _ ( dcim ) is used to handle this numerically time - consuming process .the strategy in this process is to obtain green s functions in a closed - form as where with will be complex - valued . the constants and are to be determined by numerical processes such as the prony s method . in dyadic form and by assuming an time dependence , the electric field at an observation point , defined by the vector , produced by a surface current of a surface can be expressed as \frac{\mu e^{-j\beta r}}{4\pi r}\mathbf{j}(\mathbf{r},\mathbf{r}')d s ' \nonumber\\ & = & \int_{s'}\mathbf{g}(\mathbf{r},\mathbf{r}')\mathbf{j}(\mathbf{r},\mathbf{r})'d s'\end{aligned}\ ] ] where by and as the electromagnetic characteristics for the layered medium ; is the distance from the source point to the field point . 
is the unit dyad and is defined as the dyadic green s function .there are different methods to construct the auxiliary green s function in the case of boundary value problems , which are as a consequence of using mathematics to study problems arising in the real world .the numerical solution of an integral equation has the general property that the coefficient matrix in the ultimate linear equation will consist of a dense coefficient matrix and a relatively fewer number of elements in the unknown vector .numerical solution of a general integral equation involves challenges due to the ill - conditioned coefficient matrix , as a rule and not as an exception ; the integration operator to solve a differential equation is a smoothing operator and the differential operator to solve an integral equation will be a non - smooth operator .this is the main reason of the ill - conditioning .generally , and depending on the kind of problem , there are several numerical methods to handle the ill - conditioning and in the case of solution of maxwell s equations in the integral form , ill - conditioning will be a problem to handle .generally , the exact mathematical solution of the field problem is the most satisfactory solution , but in modern applications one can not use such analytical solution in majority of cases .although the analytical solution of the field problem has its limitations , the numerical methods can not be applied without checking and realizing the limitations in classical analytical methods .indeed , every numerical method involves an analytical simplification to the point where it is easy to apply a certain numerical method .the most commonly used analytical solutions in computational electromagnetics are * laplace , and fourier transforms , * perturbation methods , * separation of variables ( eigenfunction expansion method ) , * conformal mapping , * series expansion .the method of separation of variables ( eigenfunction expansion method ) is described in the next subsection .the method of eigenfunction expansion can be applied to derive the green s function for partial differential equations by known homogeneous solution .the partial differential equation with features a problem with homogeneous boundary conditions .the green s function , in this case , can be represented in terms of a series of orthonormal functions that satisfy the prescribed boundary conditions . in this process, it is assumed that the solution of the partial differential equation may be written in the form where are eigenfunctions belonging to the associated eigenvalue problem , satisfies the prescribed homogeneous boundary conditions , since each eigenfunction does . ] by prescribed boundary condition ( b.c . ) and initial conditions ( i.c . ) . are time - dependent coefficients to be determined .it is also assumed that termwise differentiation is permitted has a continuous derivative on ] and if the series converges uniformly to on ] equivalently .introduction to mathematical analysis page 206-william parzynski , philip w. zipse . ] . 
in this case and which together with ( [ eq:32 ] ) gives this is a result of applying the superposition principle which can be deduced as from ( [ eq:32 ] ) .next , by rewriting the partial differential equation above as and inserting the expressions ( [ eq:33 ] ) and ( [ eq:34 ] ) into the right - hand side of ( [ eq:35 ] ) , it can be obtained that \psi _ { n}(x).\ ] ] the right - hand side of the equation above is interpreted as a generalized fourier series for where the set of functions is orthogonal on the specified interval by a given weighting function that is for all of the function for a fixed value of thus , the fourier coefficients are defined as where is defined as the norm of with the relation ^{2}dx\textnormal { , for } n=1,2, ... \ ] ] eqn .( [ eq:37 ] ) as a first - order linear differential equation , has the general solution for by the assumption that for all it has to be added that are arbitrary constants . in the equation above, is defined as now , by substituting ( [ eq:40 ] ) into ( [ eq:31 ] ) , it will be obtained that for determining the arbitrary coefficients , , one shall force eqn .( [ eq:41 ] ) to satisfy the prescribed initial condition . by using the above process and applying the method of moments ( mom ) , described in the previous sections ,the scattering problem of a dielectric half - cylinder which is illuminated by a transmission wave can be obtained by the matrix equation [e]=[e^i]\end{aligned}\ ] ] where and with for by as the number of cells the cylinder is divided into . is the average dielectric constant of cell and is the radius of the equivalent circular cell by the same cross section as cell . is the field inside the dielectric half - cylinder and is the bessel function ; and are hankel functions of the first and second kinds .almost any problem involving derivatives , integrals , or non - linearities can not be solved in a finite number of steps and thus must be solved by a theoretically infinite number of iterations for converging to an ultimate solution ; this is not possible for practical purposes where problems will be solved by a finite number of iterations until the answer is approximately correct .indeed , the major aspect is , by this approach , finding rapidly convergent iterative algorithms in which the error and accuracy of the solution will also be computed . in computational electromagnetics , a difficult problem like a partial differential equation or an integral equationwill be replaced by , for instance , a much simpler linear equation system .replacing complicated functions with simple ones , non - linear problems with linear problems , high - order systems by low - order systems and infinite - dimensional spaces with finite - dimensional spaces are applied as other alternatives to solve easier problems that have the same solution to a difficult mathematical model .numerical modeling of electromagnetic ( em ) properties are used in , for example , the electronic industry to : 1 ._ ensure functionality of electric systems_. system performance can be degraded due to unwanted em interference coupling into sensitive parts .ensure compliance with electromagnetic compatibility ( emc ) regulations and directives_. 
to prevent re-designs of products and ensure compliance with directives post-production. the techniques for solving field problems, i.e. maxwell's equations, can be classified as experimental, analytical (exact), or numerical (approximate). the experimental techniques are expensive and time-consuming but are still used. the analytical solution of maxwell's equations involves, among others, _separation of variables_ and _series expansion_, but is not applicable in the general case. the numerical solution of field problems became possible with the availability of high-performance computers. the most popular numerical techniques are (1) _finite difference methods (fdm)_, (2) _finite element methods (fem)_, (3) _moment methods (mom)_, and (4) the _partial element equivalent circuit (peec) method_. the differences between the numerical techniques have their origin in the basic mathematical approach and therefore make one technique more suitable for a specific _class of problems_ than the others. typical classes of problems in the area of em modeling are:
* printed circuit board (pcb) simulations (mixed circuit and em problems).
* electromagnetic field strength and pattern characterization.
* antenna design.
further, the problems presented above require different kinds of analysis in terms of:
* requested solution domain (time and/or frequency).
* requested solution variables (currents and/or voltages, or electric and/or magnetic fields).
the categorization of em problems into classes and requested solutions, in combination with the complexity of maxwell's equations, emphasizes the importance of using the right numerical technique for the right problem to obtain a solution with acceptable accuracy and computational effort. in the following sections, four different types of em computational techniques are briefly presented. the first three, fem, mom, and fdm, are the most common techniques used today for simulating em problems. the fourth technique, the peec method, is widely used within signal integrity analysis. the finite element method (fem) is a powerful numerical technique for handling problems involving complex geometries and heterogeneous media. the method is more complicated than fdm but applicable to a wider range of problems. fem is based on the differential formulation of maxwell's equations, in which the complete field space is discretized. the method is applicable in both the time and frequency domains.
in this method ,partial differential equations ( pdes ) are solved by a transformation to matrix equations .this is done by minimizing the energy using the mathematical concept of a functional , where the energy can be obtained by integrating the ( unknown ) fields over the structure volume .the procedure is commonly explained by considering the pde described by the function with corresponding excitation function as : where is a pde operator .for example , laplace equation is given by , , and .the next step is to discretize the solution region into finite elements for which the functional can be written .the functional for each fem element , , is then calculated by expanding the unknown fields as a sum of known basis functions , , with unknown coefficients , .the total functional is solely dependent on the unknown coefficients and can be written as where is the number of finite elements in the discretized structure and where depends on what kind of finite elements are used in the discretization .the last step is to minimize the functional for the entire region and solve for the unknown coefficients , , to be zero , i.e. the method offers great flexibility to model complicated geometries with the use of nonuniform elements . as for the fdm , the fem delivers the result in field variables , and , for general em problems at all locations in the discretized domain and at every time or frequency point . to obtain structured currents and voltages post - processingis needed for the conversion . in this section a finite difference time domain ( fdtd ) method is described .the method is widely used within em modeling mainly due to its simplicity .the fdtd method can be used to model arbitrarily heterogeneous structures like pcbs and the human body . in the fdtd methodfinite difference equations are used to solve maxwell s equations for a restricted computational domain .the method requires the whole computational domain to be divided , or discretized , into volume elements ( cells ) for which maxwell s equations have to be solved .the volume element sizes are determined by considering two main factors : 1 . _frequency_. the cell size should not exceed , where is the wavelength corresponding to the highest frequency in the excitation ._ structure_. the cell sizes must allow the discretization of thin structures .the volume elements are not restricted to cubical cells , parallelepiped cells can also be used with a side to side ratio not exceeding , mainly to avoid numerical problems . in many cases ,the resulted fdtd method is based according to the well - known yee formulation .however , there are other fdtd methods which are not based in the yee cell and thus have another definition of the field components . to be able to apply maxwell s equations in differential form to the yee cell, the time and spatial derivatives using finite difference expressions will result in the fdtd equations .the equations are then solved by : 1 . calculating the electric field components for the complete structure .2 . advancing time by .3 . calculating the magnetic field components for the complete structure based on the electric field components calculated in .4 . advancing time by and continuing to .the fdtd method delivers the result in field variables , and , at all locations in the discretized domain and at every time point . 
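A minimal one-dimensional leapfrog update illustrates the E/H time stepping just described. The sketch below uses normalized units with a Courant number of one and PEC terminations at both ends of the grid; grid size, source position, and pulse shape are illustrative choices rather than values from the text.

```python
import numpy as np

# Minimal 1D FDTD (Yee leapfrog) sketch: Ez and Hy live on a staggered grid,
# in normalized units chosen so that the Courant number c*dt/dx equals 1.
nx, nt = 400, 900
Ez = np.zeros(nx)
Hy = np.zeros(nx - 1)

src = nx // 4                                   # soft-source location (illustrative)
for n in range(nt):
    # update H from the spatial difference of E (half time step)
    Hy += Ez[1:] - Ez[:-1]
    # update E from the spatial difference of H; Ez[0] and Ez[-1] stay 0 -> PEC walls
    Ez[1:-1] += Hy[1:] - Hy[:-1]
    # Gaussian-pulse soft source
    Ez[src] += np.exp(-0.5 * ((n - 60) / 15.0) ** 2)

print("peak |Ez| after", nt, "time steps:", np.abs(Ez).max())
```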
to obtain structured currents and voltages post - processingis needed for the conversion .method of moments ( mom ) is based on the integral formulation of the maxwell s equations .the basic feature makes it possible to exclude the air around the objects in the discretization .the method is usually employed in the frequency domain but can also be applied to the time domain problems . in the mom , integral - based equations , describing the current distribution on a wire or a surface , are transformed into matrix equations easily solved using matrix inversion . when using the mom for surfaces , a wire - grid approximation of the surface can be utilized as described in .the wire formulation of the problem simplifies the calculations and is often used for field calculations .the starting point for theoretical derivation is to apply a linear ( integral ) operator , , involving the appropriate green s function , applied to an unknown function , , by an equation as where is the known excitation function for the above system . as an example the above equation can be the pocklington s integral equation , describing the current distribution on a cylindrical antenna , written as then the un - known function , ,can be expanded into a series of known functions , , with un - known amplitudes , , resulting in where , are called basis ( or expansion ) functions . to solve the unknown amplitudes , ,equations are derived from the combination of eqn .( [ eq : mom1 ] ) and eqn .( [ eq : mom3 ] ) and by the multiplication of weighting ( or test ) functions , integrating over the wire length ( the cylindrical antenna ) and the formulation of a proper inner product .this results in the transformation of the problem into a set of linear equations which can be written in matrix form as [i ] = [ v]\end{aligned}\ ] ] where the matrices , , , and are referred to as generalized impedance , current , and voltage matrices and the desired solution for the current , , is obtained by matrix inversion .thus , the unknown solution is expressed as a sum of known basis functions whose weighting coefficients corresponding to the basis functions will be determined for the best fit .the same process applied to differential equations is known as the `` weighted residual '' method .the mom delivers the result in system current densities and/or voltages at all locations in the discretized structure and at every frequency point ( depending on the integral in eqn .( [ eq : mom2 ] ) ) . to obtain the results in terms of field variables ,post - processing is needed for the conversion . the well - known computer program _numerical electromagnetics code _ , often referred to as nec , utilizes the mom for calculation of the electromagnetic response for antennas and other metal structures .the basis of the partial element equivalent circuit ( peec ) method originates from inductance calculations performed by dr .albert e. ruehli at ibm t.j .watson research center , during the first part of 1970s .ruehli was working with electrical interconnect problems and understood the benefits of breaking a complicated problem into basic partitions , for which inductances could be calculated to model the inductive behavior of the complete structure . 
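Returning briefly to the moment-method machinery described above: the Pocklington formulation needs the full-wave thin-wire kernel, so as a compact stand-in the same expand-and-test steps (pulse basis functions, point matching, a dense matrix, a matrix solve) are sketched here for the classical electrostatic problem of a straight thin wire held at 1 V. The wire dimensions and the approximate self-term are common textbook choices and are not taken from the text.

```python
import numpy as np

EPS0 = 8.8541878128e-12
L, a, N = 1.0, 1e-3, 50          # illustrative wire: 1 m long, 1 mm radius, 50 segments
dz = L / N
z = (np.arange(N) + 0.5) * dz    # segment midpoints (pulse basis, point matching)

# fill the moment-method matrix for the electrostatic problem V = 1 on the wire
Z = np.empty((N, N))
for m in range(N):
    for n in range(N):
        if m == n:
            Z[m, n] = 2.0 * np.log(dz / a) / (4.0 * np.pi * EPS0)   # approximate self term
        else:
            Z[m, n] = dz / (4.0 * np.pi * EPS0 * abs(z[m] - z[n]))

V = np.ones(N)                   # wire held at 1 volt
q = np.linalg.solve(Z, V)        # line charge density on each segment [C/m]

Q = np.sum(q) * dz
print("total charge:", Q, "C  ->  capacitance ~", Q * 1e12, "pF")
```

Summing the solved charge densities gives the wire capacitance, on the order of 10 pF for this geometry, which can be compared against the usual logarithmic estimate for an isolated thin wire.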
by doing so ,return current paths need not to be known _ a priori _ as required for regular ( loop ) inductance calculations .the concept of partial calculations was first introduced by rosa in 1908 , further developed by grover in 1946 and hoer and love in 1965 .however , dr .ruehli included the theory of partial coefficients of potential and introduced the partial element equivalent circuit ( peec ) theory in 1972 .significant contributions of the peec method includes : * the inclusion of dielectrics , * the equivalent circuit representation with coefficients of potential , * the retarded partial element equivalent circuit representation , * peec models to include incident fields , scattering formulation , * nonorthogonal peecs .the interest and research effort of the peec method have increased during the last decade .the reasons can be an increased need for combined circuit and em simulations and the increased performance of computers enabling large em system simulations .this development reflects on the areas of the current peec research , for example , model order reduction ( mor ) , model complexity reduction , and general speed up .the peec method is a 3d , full wave modeling method suitable for combined electromagnetic and circuit analysis . in the peec method ,the integral equation is interpreted as the kirchhoff s voltage law applied to a basic peec cell which results in a complete circuit solution for 3d geometries .the equivalent circuit formulation allows for additional spice - type circuit elements to easily be included .further , the models and the analysis apply to both the time and the frequency domain .the circuit equations resulting from the peec model are easily constructed using a condensed modified loop analysis ( mla ) or modified nodal analysis ( mna ) formulation . in the mna formulation ,the volume cell currents and the node potentials are solved simultaneously for the discretized structure . to obtain field variables , post - processing of circuit variables are necessary .this section gives an outline of the nonorthogonal peec method as fully detailed in . in this formulation ,the objects , conductors and dielectrics , can be both orthogonal and non - orthogonal quadrilateral ( surface ) and hexahedral ( volume ) elements .the formulation utilizes a global and a local coordinate system where the global coordinate system uses orthogonal coordinates where the global vector is of the form .a vector in the global coordinates are marked as .the local coordinates are used to separately represent each specific possibly non - orthogonal object and the unit vectors are , , and , see further . the starting point for the theoretical derivationis the total electric field on the conductor expressed as where is the incident electric field , is the current density in a conductor , is the magnetic vector potential , is the scalar electric potential , and is the electrical conductivity .the dielectric areas are taken into account as an excess current with the scalar potential using the volumetric equivalence theorem . 
by using the definitions of the vector potential and the scalar potential we can formulate the integral equation for the electric field at a point which is to be located either inside a conductor or inside a dielectric region according to eqn .( [ eq : fullie ] ) is the time domain formulation which can easily be converted to the frequency domain by using the laplace transform operator and where the time retardation will transform to .+ the peec integral equation solution of the maxwell s equations is based on the total electric field , e.g. ( [ eq : efield ] ) .an integral or inner product is used to reformulate each term of ( [ eq : fullie ] ) into the circuit equations .this inner product integration converts each term into the fundamental form where is the voltage or potential difference across the circuit element .it can be shown how this transforms the sum of the electric fields in ( [ eq : efield ] ) into the kirchhoff s voltage law ( kvl ) over a basic peec cell .[ fig : peecmo_o ] details the ( ,,)peec model for the metal patch in fig .[ fig : basic_c ] when discretized using four edge nodes ( solid dark circles ) .the model in fig .[ fig : peecmo_o ] consists of : * partial inductances ( ) which are calculated from the volume cell discretization using a double volume integral .* coefficients of potentials ( ) which are calculated from the surface cell discretization using a double surface integral . * retarded controlled current sources , to account for the electric field couplings , given by where is the free space travel time ( delay time ) between surface cells and , * retarded current controlled voltage sources , to account for the magnetic field couplings , given by where is the free space travel time ( delay time ) between volume cells and . + by using the mna method , the peec model circuit elements can be placed in the mna system matrix during evaluation by the use of correct matrix stamps .the mna system , when used to solve frequency domain peec models , can be schematically described as where : _ * p * _ is the coefficient of potential matrix , _ * a * _ is a sparse matrix containing the connectivity information , _ * * l** _ is a dense matrix containing the partial inductances , elements of the type , _ * r * _ is a matrix containing the volume cell resistances , _ v _ is a vector containing the node potentials ( solution ) , elements of the type , _ i _ is a vector containing the branch currents ( solution ) , elements of the type , _i _ is a vector containing the current source excitation , and _v _ is a vector containing the voltage source excitation .the first row in the equation system in ( [ eq : mna_a ] ) is the kirchhoff s current law for each node while the second row satisfy the kirchhoff s voltage law for each basic peec cell ( loop ) .the use of the mna method when solving peec models is the preferred approach since additional active and passive circuit elements can be added by the use of the corresponding mna stamp . 
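For thin straight cells, the partial inductances entering the basic PEEC cell described above are available in closed form. The sketch below evaluates a common thin-wire approximation of the partial self inductance and the classical filament formula for the partial mutual inductance of two equal, parallel segments; the segment dimensions are illustrative.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def partial_self_inductance(l, r):
    """Common thin-wire approximation of the partial self inductance of a
    straight round wire of length l and radius r (valid for l >> r)."""
    return MU0 * l / (2.0 * np.pi) * (np.log(2.0 * l / r) - 0.75)

def partial_mutual_inductance(l, d):
    """Classical filament formula for the partial mutual inductance of two
    parallel filaments of equal length l, side by side at separation d."""
    u = l / d
    return MU0 * l / (2.0 * np.pi) * (np.log(u + np.sqrt(1.0 + u * u))
                                      - np.sqrt(1.0 + 1.0 / (u * u)) + 1.0 / u)

# illustrative PEEC-style volume cells: two 10 mm parallel segments, 1 mm apart
l, r, d = 10e-3, 0.1e-3, 1e-3
print("Lp (self)  :", partial_self_inductance(l, r) * 1e9, "nH")
print("Lp (mutual):", partial_mutual_inductance(l, d) * 1e9, "nH")
```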
for a complete derivation of the quasi - static and full - wave peec circuit equations using the mna method , see for example .the main objective of this section is to investigate biological imaging algorithms by solving the direct , and inverse electromagnetic scattering problem due to a model based illustration technique within the microwave range .a well - suited algorithm will make it possible to fast parallel processing of the heavy and large numerical calculation of the inverse formulation of the problem .the parallelism of the calculations can then be performed and implemented on gpu : s , cpu : s , and fpga : s . by the aid of mathematical / analytical methods and thereby faster numerical algorithms , an improvement of the existing algorithmsis also expected to be developed .these algorithms may be in time domain , frequency domain and a combination of both .there is a potential in the microwave tomographic imaging for providing information about both physiological state and anatomical structure of the human body . by several strong reasonsthe microwave tomographic imaging is assumed to be tractable in medical diagnostics : the energy in the microwave region is small enough to avoid ionization effects in comparison to x - ray tomography .furthermore , tissue characteristics such as blood content , blood oxygenation , and blood temperature can not be differentiated by the density - based x - ray tomography .the microwave tomography can be used instead of determining tissue properties by means of complex dielectric values of tissues .it is shown that the microwave tissue dielectric properties are strongly dependent on physiological condition of the tissue .the dependence of the tissue dielectric properties plays a major roll to open opportunities for microwave imaging technology within medical diagnostics . as in tomography by x - ray densities of tissuesare investigated , the electromagnetic scattering technique is based on determining the permittivity of tissues . in such context , the interesting thing to think about is , always , how the old electromagnetic scattering computations can be improved by smarter faster mathematical / numerical algorithms .in addition , there are promising methods providing a good compromise between rapidity and cost why there is a potential interest of microwave imaging in biomedical applications .the area of the research is rather new so that new approaches and new methods are expected to be developed in tomographic imaging .the inverse electromagnetic scattering should be solved in order to produce a tomographic image of a biological object . in this process ,the dielectric properties of the object under test is deduced from the measured scattered field due to the object and a known incident electric field .nonlinearity relations arise between the scattered field and multiple paths through the object .approximations are used to linearize the resulting nonlinear inverse scattering problem . asthis problem is ill - posed , the existence and uniqueness of the solution and also its stability should be established .scattering theory has had a major roll in the twentieth century mathematical physics .the theory is concerned with the effect an inhomogeneous medium has on an incident particle or wave . 
the direct scattering problem is to determine a scattered field from a knowledge of an incident field and the differential equation governing the wave equation .the incident field is emitted from a source , an antenna for example , against an inhomogeneous medium .the total field is assumed to be the sum of the incident field and the scattered field .the governing differential equation in such cases is maxwell s equations which will be converted to the wave equation .generally , the direct scattering problems depend heavily on the frequency of the wave in question .in particular , the phenomenon of diffraction is expected to occur if the wavelength is very small compared to the smallest observed distance ; is the wave number .thus , due to the scattering obstacle , an observable shadow with sharp edges is produced .obstacles which are small compared with the wavelength disrupt the incident wave without any identifiable shadow .two different frequency regions are therefore defined based on the wave number and a typical dimension of the scattering objects .the set of values such that is called the _ high frequency region _ and the set of values where is called the _ resonance region_. the distinction between these two frequency regions is due to the fact that the applied mathematical methods in the resonance region differ greatly from the methods used in the high frequency region .one of the first issues to think about when studying the direct scattering problem is the _ uniqueness _ of the solution .then , by having established uniqueness , the existence of the solution and a numerical approximation of the problem must be analyzed and handled .the uniqueness of the solution will be discussed in the next section . within the electromagnetic field theorythere are two fundamental governing differential equations for electrostatics in any medium .these are : where and , are the electric flux density and electric field intensity , as defined earlier ; is the volume charge density . because is rotation - free ,a scalar electric potential can be defined such that combining ( [ eq : div.d ] ) and ( [ eq : scalar v ] ) yields where is the permittivity due to linear isotropic medium in which .the above equations will finally result in eqn .( [ eq : poisson ] ) is called the _poisson s equation_. in this equation is _laplacian_. if there is no charge in the simple medium , i.e. , then eqn .( [ eq : poisson ] ) will be converted into which is called the _laplace s equation_. the concept of uniqueness has arisen when solving the laplaces,- or poisson s equation by different methods . depending on the complexity and the geometry of the problem , one may use analytical , numerical , or experimental methods .the question is whether all of these methods will give the same solution .this may be reformulated as : is the present particular solution of the laplaces,- or poisson s equation , satisfying the boundary conditions , the only solution ?the answer will be yes by relying on the uniqueness theorem .irrespective of the method , a solution of the problem satisfying the boundary conditions is the only possible solution . 
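The uniqueness statement can be made tangible numerically: once the boundary values are fixed, a convergent iteration reaches the same interior solution regardless of where it starts. The sketch below applies Jacobi relaxation to Laplace's equation on a square grid with one edge held at 100 V (an illustrative boundary condition) and shows that two different initial guesses give the same centre potential.

```python
import numpy as np

# Jacobi relaxation for Laplace's equation on a square grid.  The boundary
# values (top edge at 100 V, the other edges grounded) are illustrative; once
# they are fixed, the interior solution is unique, whatever the starting guess.
n = 51
for guess in (0.0, 50.0):                  # two different initial guesses
    U = np.full((n, n), guess)
    U[0, :] = 100.0                        # top edge held at 100 V
    U[-1, :] = U[:, 0] = U[:, -1] = 0.0    # remaining edges grounded
    for _ in range(20000):
        U[1:-1, 1:-1] = 0.25 * (U[:-2, 1:-1] + U[2:, 1:-1]
                                + U[1:-1, :-2] + U[1:-1, 2:])
    print("initial guess =", guess, " -> centre potential =", U[n // 2, n // 2])
# both runs converge to the same interior field: the unique solution
```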
in connection with the concept of the uniqueness ,two theorems are extensively discussed within the computational electromagnetics .these are : [ theorem : unique1 ] a vector is uniquely specified by giving its divergence and its curl within a simply connected region and its normal component over the boundary .[ theorem : unique2 ] a vector with both source and circulation densities vanishing at infinity may be written as the sum of two parts , one of which is irrotational , the other solenoidal .a proof of the uniqueness theorem due to the laplace s equation is given in .the theorem ( [ theorem : unique2 ] ) is called the helmholtz s theorem . the theorems ( [ theorem : unique1 ] ) and ( [ theorem : unique2 ] ) can together be interpreted as : `` _ _ a solution of the poisson s equation ( [ eq : poisson ] ) and eqn .( [ eq : laplace ] ) ( as a special case ) , which satisfies a given boundary condition , is a unique solution _ _ '' . in , there is another interpretation of the uniqueness theorem : `` _ _ a field in a lossy region is uniquely specified by the sources within the region plus the tangential components of the electric field over the boundary , or the tangential components of the magnetic field over the boundary , or the former over part of the boundary and the latter over the rest of the boundary _ _ '' . hence , according to the uniqueness theorem , the field at a point in space will be sufficiently determined by having information about the tangential electric field and the tangential magnetic field on the boundary .this means that to determine the field uniquely , one of the following alternatives must be specified : * everywhere on , * everywhere on , * on a part of and on the rest of , with as the boundary of the domain .directly related to the electromagnetic obstacle scattering two other theorems can be found in ; these are : [ theorem : scatterobstacle1 ] assume that and are two perfect conductors such that for one fixed wave number the electric far - field patterns for both scatterers coincide for all incident directions and all polarizations. then . [theorem : scatterobstacle2 ] assume that and are two perfect conductors such that for one fixed incident direction and polarization the electric far field patterns of both scatterers coincide for all wave numbers contained in some interval . then .as depicted in the above theorems , the scattered wave depends analytically on the wave number .the simplest problem in the direct scattering problem is scattering by an impenetrable obstacle d. then , the total field can be determined by in which , and is the refractive index due to the square of the sound speeds . by the assumption that the medium is absorbing and also assuming that has _ _ compact support __ , will be complex - valued . for the homogeneous host medium , , and for the inhomogeneous medium , .depending on obstacle properties , different boundary conditions will be assumed .( [ eq : boundary1 ] ) is called _ sommerfeld radiation condition_. acoustic wave equations possessing such kind of boundary condition guarantee that the scattered wave is outgoing . 
within the computational electromagnetics for the scattering problem , the incident field by the time - harmonic electromagnetic plane wave can be expressed as where is the wave number , the radial frequency , the electric permittivity in vacuum , the magnetic permeability in vacuum , the direction of propagation and the polarization .assuming variable permittivity but constant permeability , the electromagnetic scattering problem is now to determine both the electric , and magnetic field according to where is the refractive index by the ratio of the permittivity in the inhomogeneous medium and and the permittivity in the homogeneous host medium ; will have a complex value if the medium is conducting .it is assumed that has compact support .the total electromagnetic field is determined by so that where eqn .( [ eq : silver - m ] ) is called the _ silver - mller radiation condition_. the electromagnetic scattering by a perfect obstacle is now to find an electromagnetic field such that where is the unit outward normal to .( [ eq : harmmaxw ] ) are called the _ time harmonic maxwell s equations_. the above formulation is called the _ direct electromagnetic scattering problem_. the method of integral equations is a common method to investigate the existence of a numerical approximation of the direct problem . the integral equation associated with the electromagnetic scattering problem due to eqns.([eq : em - scatterng_1])-([eq : em - scattering_4 ] ) is given by where and ; if is the solution of eqn .( [ eq : em - scattering_6 ] ) , one can define letting tend to the boundary of and introducing as a tangential density to be determined , one can verify that will be a solution for in the following boundary integral equation : in this formulation , the boundary integral equation in eqns .( [ eq : em - scattering_8 ] ) will be used to solve eqns .( [ eq : em - scatterng_1])-([eq : em - scattering_4 ] ) .the fact is that the integral equation is not uniquely solvable if is a neumann eigenvalue of the negative laplacian in .the numerical solution of boundary integral equations in scattering theory is generally a much challenging area and a deeper understanding of this topic requires knowledge in different areas of functional analysis , stochastic processes , and scientific computing .in fact , the electromagnetic inverse medium problem is not entirely investigated and numerical analysis and experiments have yet to be done for the three dimensional electromagnetic inverse medium .the inverse scattering problem is , in many areas , of equal interest as the direct scattering problem .inverse formulation is applied to a daily basis in many disciplines such as image and signal processing , astrophysics , acoustics , geophysics and electromagnetic scattering .the inverse formulation , as an interdisciplinary field , involves people from different fields within natural science . to find out the contents of a given black box without opening it , would be a good analogy to describe the general inverse problem .experiments will be carried on to guess and realize the inner properties of the box .it is common to call the contents of the box `` the model '' and the result of the experiment `` the data '' .the experiment itself is called `` the forward modeling . '' as sufficient information can not be provided by an experiment , a process of regularization will be needed .the reason to this issue is that there can be more than one model ( different black boxes ) that would produce the same data . 
on the other hand , improperly posed numerical computations will arise in the calculation procedure .a regularization process in this context plays a major roll to solve the inverse problem . as in the direct formulation ,the permittivity has a constant value , in inverse scattering formulation has to be assumed as room - dependent .assuming outside a sphere with radius , and inside , the following equation can be deduced by starting from maxwell s equations and some vector algebra where is the room variable and the scatterer material with volume is assumed to be non - magnetic , i.e. ; no other current sources except induced current generated by the incident field are assumed to exist either . by introducing a dimensionless quantity , known as the _ electric susceptibility _, a new equation will be introduced as where is defined as _ electric displacement _ , see previous sections . by eqn .( [ eq : chi ] ) , it is easy to see that a dielectric medium is , by definition , linear if is independent of and homogeneous if is independent of space coordinates .in fact , the electric susceptibility gives the dielectric deviation between the free - space and other dielectric media in the case of inverse scattering problem . it is equal to zero in the free - space on the outside of the sphere with radius and distinct from zero inside .the sphere contains in fact the scatterer with the volume .in addition , it is assumed that the medium contained in the volume is not _ dispersive _ , i.e. inside the volume is not dependent on the frequency . in the case of the inverse electromagnetic scattering problem ,the goal is to determine the function by experimentally obtained incident electric field and scattered electric field and the total field .this process is started by re - writing the eqn .( [ eq : scattinverse1 ] ) as where in which is the wave number associated with vacuum as the surrounding medium . due to the incident field ,a current will be induced in with the associated current density , which can be expressed as by the aid of this induced current density , the scattered electric field can be expressed as \cdot\int_{v_{s}}\frac{e^{jk|\mathbf{r}-\mathbf{r}'|}}{4\pi|\mathbf{r}-\mathbf{r'}|}\chi_{e}(\mathbf{r}')\mathbf{e}(\mathbf{r}')dv ' , \hspace{4 mm } \mathbf{r } \not \in v_{s}.\ ] ] as it is seen in eqn .( [ eq : inccurrentdensity ] ) , the integral deals with the inside of the scatterer which is unobservable by experimentally measuring the electric field . both the scattered,- and the incident electric field can be measured at the outside of the scatterer and the unknown electric field inside the integral should be determined in different situations . in the caseswhere , there are different methods to approximate the integral in eqn .( [ eq : inccurrentdensity ] ) . in the _born _ approximation , the dielectrical properties of the scatterer can be determined by a three - dimensional inverse fourier transforming of the far - field in certain directions and for any frequency .this means that for the experimentally given incident plane wave with propagation vector and for a fixed point , a three - dimensional fourier transform of the function can be calculated in a point , that is where the far - field scattering amplitude ( measured data in the far - field ) is as depicted in eqn .( [ eq : incid_fourier ] ) , in the born approximation the problem is linearized with substitution of the unknown field in the integral by the given incident filed . 
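A small sketch of the Born linearization follows: the scattered far field in a direction ŝ is obtained by summing χ_e(r') exp(-jk(ŝ - d̂)·r') over a discretized scatterer, i.e. by sampling the Fourier transform of the susceptibility at the scattering vector. The scatterer (a homogeneous disk), the operating frequency, and the constant in front of the sum are illustrative and kept only schematically.

```python
import numpy as np

# Born-approximation sketch (2D, scalar): the far field in direction s_hat is
# proportional to the Fourier transform of the susceptibility, evaluated at
# the scattering vector k*(s_hat - d_hat).  The integral is evaluated as a
# direct sum over a discretized illustrative scatterer; the overall constant
# is kept only schematically.
C0 = 299792458.0
freq = 2.45e9
k = 2 * np.pi * freq / C0

# illustrative scatterer: a homogeneous disk of radius 2 cm with chi_e = 0.5
n = 80
x = np.linspace(-0.03, 0.03, n)
X, Y = np.meshgrid(x, x)
chi = np.where(X**2 + Y**2 <= 0.02**2, 0.5, 0.0)
dA = (x[1] - x[0]) ** 2

d_hat = np.array([1.0, 0.0])                     # incident propagation direction
for ang in np.deg2rad(np.arange(0, 181, 30)):    # observation directions
    s_hat = np.array([np.cos(ang), np.sin(ang)])
    q = k * (s_hat - d_hat)                      # scattering vector
    phase = np.exp(-1j * (q[0] * X + q[1] * Y))
    f_born = (k**2 / (4.0 * np.pi)) * np.sum(chi * phase) * dA
    print(f"theta = {np.degrees(ang):5.1f} deg   |f| = {abs(f_born):.4e}")
```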
in the _ rytov _ approximation , the polarization field is assumed to be almost unchanged and the phase of the field is interpreted as all the scattering , that is where is the field phase as in which is the deviations from , i.e. , the phase associated with the incident field . by application of some vector algebra and by the aid of an approximation , ( [ eq : scattinverse2 ] ) can be written as that yields by which the electric susceptibility can be determined by the following process . + byintroducing new cartesian coordinates and it will be possible to have the directions of lying in , for example , the -plane so that the -plane is perpendicular to the -plane , that is where is the rotation angle between the two coordinate systems of and .finally , the phase can , by the rytov approximation , be expressed as there are two methods to obtain from eqn .( [ eq : rytov6 ] ) : the method of _ projection _ and the method of _ integral equation_. following , the method of projection is briefly explained .the general inverse formulation of determining dielectric properties of the scatterer is in the form of the following integral where is a two - dimensional regional vector ; eqn .( [ eq : projection1 ] ) is , by inspection , according to the definition of the dirac s delta function .the coordinates and are associated with the directions and according to according to this formulation of inverse electromagnetic scattering , the data is actually the fourier transform of the dielectric properties of the scatterer in question .this means which together with ( [ eq : projection1 ] ) gives by using the dirac s delta function properties , ( [ eq : projection2 ] ) can be written as the unknown dielectric properties can now be determined by inverse fourier transforming of ( [ eq : projection3 ] ) , that is where expressed in the cartesian coordinates , the vector can be written as as the direct scattering problem has been thoroughly investigated , the inverse scattering problem has not yet a rigorous mathematical / numerical basis . because the nonlinearity nature of the inverse scattering problem , one will face improperly posed numerical computation in the inverse calculation process .this means that , in many applications , small perturbations in the measured data cause large errors in the reconstruction of the scatterer .some regularization methods must be used to remedy the ill - conditioning due to the resulting matrix equations . 
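The projection formulation above says that the measured data are, in effect, Fourier samples of the dielectric contrast, so an inverse transform recovers the object. The toy sketch below simulates idealized full Fourier data with an FFT on a known susceptibility map, perturbs it with noise, and reconstructs by the inverse FFT; the object, grid, and noise level are illustrative assumptions, and in practice only a limited set of such samples is available, which is where the ill-posedness discussed here enters.

```python
import numpy as np

# Sketch of the projection idea: if the data supply (samples of) the Fourier
# transform of the susceptibility, the object is recovered by an inverse
# transform.  The "measurement" is simply simulated with an FFT on an
# illustrative object and slightly corrupted by noise.
rng = np.random.default_rng(0)

n = 128
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
chi_true = np.where(X**2 + Y**2 <= 0.3**2, 0.5, 0.0)       # illustrative scatterer

data = np.fft.fft2(chi_true)                                # idealized full Fourier data
data_noisy = data + rng.normal(scale=1.0, size=data.shape)  # measurement noise

chi_rec = np.real(np.fft.ifft2(data_noisy))                 # inverse-transform reconstruction
err = np.linalg.norm(chi_rec - chi_true) / np.linalg.norm(chi_true)
print("relative reconstruction error with noisy Fourier data:", err)
```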
concerning the existence of a solution to the inverse electromagnetic scattering one has to think about finding approximate solutions after making the inverse problem stabilized .a number of methods is given to solve the inverse electromagnetic scattering problem in which the nonlinear and ill - posed nature of the problem are acknowledged .earlier attempts to stabilize the inverse problem was via reducing the problem into a linear integral equation of the first kind .however , general techniques were introduced to treat the inverse problems without applying an integral equation .the process of regularization is used at the moment when selection of the most reasonable model is on focus .computational methods and techniques ought to be as flexible as possible from case to case .a computational technique utilized for small problems may fail totally when it is used for large numerical domains within the inverse formulation .new methodologies and algorithms would be created for new problems since existing methods are insufficient .this is the major character of the existing inverse formulation in problems with huge numerical domains .there are both old and new computational tools and techniques for solving linear and nonlinear inverse problems .linear algebra has been extensively used within linear and nonlinear inverse theory to estimate noise and efficient inverting of large and full matrices . asdifferent methods may fail , new algorithms must be developed to carry out nonlinear inverse problems .sometimes , a regularization procedure may be developed for differentiating between correlated errors and non - correlated errors .the former errors come from linearization and the latter from the measurement . to deal with the nonlinearity, a local regularization will be developed as the global regularization will deal with the measurement errors .there are researchers who have been using integral equations to reformulate the inverse obstacle problem as a nonlinear optimization problem . in some approaches ,a priori is assumed such that enough information is known about the unknown scattering obstacle .then , a surface is placed inside such that is not a dirichlet eigenvalue of the negative laplacian for the interior of . then , assuming a fixed wave number and a fixed incident direction , and also by representing the scattered field as a single layer potential where is to be determined ; is the space of all _ _ square integrable functions _ _ on the boundary .the far field pattern is then represented as where is the unit sphere , and . by the aid of the given ( measured ) far field pattern , one can find the density by solving the ill - posed integral equation of the first kind in eqn .( [ eq : inverse2 ] ) .this method is described thoroughly in . in another methodit is assumed that the given ( measured ) far field for all , and is given .the problem is now to determine a function such that where is an integer and as fixed ; is a spherical harmonic of order .it can be shown that solving the ill - posed integral equation ( [ eq : inverse3 ] ) leads , in special conditions , to the nonlinear equation in which is to be determined , and where ; is the spherical hankel function of the first kind of order . 
in ,this method is developed and applied to the case of the electromagnetic inverse obstacle problem .a linear inverse problem can be given in form of finding such that , where , , and are vectors , and is a matrix ; is the noise which has to be minimized by different so called _ regularization _ methods . within the field of image processing ,a forward model is defined as an unobservable input which returns as an observable output . here , the forward problem is modeled by a forward model and the inverse problem will be an approximation of by .the forward process is , in other words , a mapping from the image to error - free data , , and the actual corrupted data , ; the noise is the difference .the corruption in such context is due to small round off error by a computer representation and also by inherent errors in the measurement process .the collection of values that are to be reconstructed is referred to as the _image_. denoting as the image , the forward problem is the mapping from the image to the quantities that can be measured . by the forward mapping denoted by , the actual data can be denoted by in which may be either a linear,- or a nonlinear mapping .accordingly , the inverse problem can now be interpreted as finding the original image given the data , and the information from the forward problem .as the image and data are infinite - dimensional ( continuous ) or finite - dimensional ( discrete ) , there will be several classifications .image and data can be both continuous ; they can also be both discrete , or the former continuous , the latter discrete , and vice versa. however , each of the cases is approximated by a discrete - discrete alternative as computer implementation is in a discrete way .the other mentioned alternatives are always an idealization of the problem . according to hadamard ,the inverse problem to solve is a _well - posed _problem if * a solution exists for any data , * there is a unique solution in the image space , * the inverse mapping from to is continuous .in addition , an _ ill - posed _problem is where an inverse does not exist because the data is outside the range of .other interpretations of the above three conditions is _ an ill - posed problem is a problem in which small changes in data will cause large changes in the image_. to stabilize the solution of ill - conditioned and rank - deficient problems , the concept of _ singular value decomposition ( svd ) _ is widely used .the reason is that relatively small singular values can be dropped which makes the process of computation less sensitive to perturbations in data .another important application of the svd is the calculation of the condition number of a matrix which is directly related to ill - posed problems . in connection with rank - deficient and ill - posed problems ,it is convenient to describe singular value expansion of a kernel due to an integral equation .this calculation is by means of the singular value decomposition ( svd ) .all the difficulties due to ill - conditioning of a matrix will be revealed by applying svd .assuming be a rectangular or square matrix and letting , the svd of is a decomposition in form of where the orthonormal matrices and are such that .the diagonal matrix has decreasing nonnegative elements such that where the vectors and are the _ left and right singular vectors _ of , respectively ; are called the _ singular values _ of which are , in fact , the nonnegative square roots of the eigenvalues of . 
columns of and are orthonormal eigenvectors of and respectively .the rank of a matrix is equal to the number of nonzero singular values , and a singular value of zero indicates that the matrix in question is rank - deficient .one of the most significant applications of matrix decomposition by svd is within parallel matrix computations .the svd has other important applications within the area of scientific computing .some of them are as follows : * solving linear least squares of ill - conditioned and rank - deficient problems , * calculation of orthonormal bases for range and null spaces , * calculation of condition number of a matrix , * calculation of the euclidean norm . as an example , the euclidean norm of a matrix can be calculated by svd as the first element in ( [ eq : svd22 ] ) , i.e. .this value is indeed the first ( and the largest ) singular value , positioned on the diagonal matrix , that is : with respect to the euclidean norm in ( [ eq:2norm ] ) , and also the smallest singular value , both calculated by the svd procedure , one can determine the condition number of the matrix by with as the smallest element on the diagonal matrix in ( [ eq : svd1 ] ) . with an origin in the _ fredholm integral equation _ of the first kind as with and known and unknown , most inverse problems describe the continuous world .the _ kernel _ represents the response functions of an instrument ( determined by known signals ) , and represents measured data ; represents the underlying signal to be determined .integral equations can also result from _ the method of green s functions _ and the _ boundary element methods _ for solving differential equations .the _ existence _ and _ uniqueness _ of solutions to integral equations is more complicated in comparison to algebraic equations . in addition , the solution may be highly sensitive to perturbations in the input data .the reason to sensitivity lies in the nature of the problem that has to do with determining the integrand from the integral ; this is just the opposite integration operator which is a smoothing process .such an integral operator with a smooth kernel , i.e. a kernel that does not possess singularities , has zero as an eigenvalue .this means that there are nonzero functions that will be annihilated under the integral operator .solving for in ( [ eq : fredholm1 ] ) tends to introduce high - frequency oscillation as the integrand contains as an arbitrary function and the smooth kernel .the sensitivity in the process of solving integral equations of type ( [ eq : fredholm1 ] ) is inherent in the problem and it has not to do with the method of solving . for an integral operator with a smooth kernel by having zero as an eigenvalue , additional information may be required .the reason to this is that using a more accurate quadrature rule leads to a resulting ill - conditioned linear equation system , which thereby results into a more erratic solution . to handle the ill - conditioning in such context, several numerical methods have been used . in _ truncated singular value decomposition _the solution of the ultimate linear equation system is computed by using the singular value decomposition of . in this process, small singular values of are omitted from the solution ; the small singular values of reflects and generates in fact ill - conditioning when solving the ultimate linear equation system .the method of _ regularization _ solves a minimization problem to obtain a physically meaningful solution . 
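the sensitivity described above can be made concrete with a small sketch : a first - kind fredholm equation with a smooth kernel ( a gaussian , chosen purely for illustration ) is discretized by a midpoint rule , the resulting matrix turns out to be severely ill - conditioned , and a truncated svd recovers a usable solution where the naive solve does not . the kernel , the true model and the noise level are assumptions made for the illustration only .

```python
import numpy as np

# discretize a first-kind fredholm equation  g(x) = int_0^1 k(x,t) f(t) dt
# with a smooth illustrative kernel -- not the physical kernel of the text.
n = 60
t = (np.arange(n) + 0.5) / n                      # midpoint quadrature nodes
x = t.copy()
K = np.exp(-(x[:, None] - t[None, :]) ** 2) / n   # kernel values times weights

f_true = np.sin(2 * np.pi * t)                    # "image" to be recovered
g = K @ f_true                                    # noise-free data
g_noisy = g + 1e-6 * np.random.default_rng(0).standard_normal(n)

u, s, vt = np.linalg.svd(K)
print("condition number sigma_1 / sigma_n = %.2e" % (s[0] / s[-1]))

# naive solve divides by the tiny singular values and amplifies the noise
f_naive = vt.T @ ((u.T @ g_noisy) / s)

# truncated svd: drop singular values below a (relative) tolerance
k = int(np.sum(s > 1e-5 * s[0]))
f_tsvd = vt[:k].T @ ((u[:, :k].T @ g_noisy) / s[:k])

print("retained components       :", k)
print("error of naive solution   :", np.linalg.norm(f_naive - f_true))
print("error of truncated-svd fit:", np.linalg.norm(f_tsvd - f_true))
```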
starting from the fredholm integral equation in ( [ eq : fredholm1 ] ) and introducing as the model and letting ^t$ ] be the vector of the measured data , a connection between and will be where is still the smooth kernel , and is the measurement noise ; is the domain of the integration .the goal is now to find the model assuming that the noisy data is given .the problem ( [ eq : fredholm2 ] ) becomes a well - posed least - squares system if it will be discretized with a number of parameters which is smaller than . as a disadvantage, this discretization makes the solution lie in a small subspace which does not always fit the problem .however , by choosing a discretization with a number of parameters bigger than , the discrete system will possess some of the characteristics of the continuous system .two different methods have been used to discretize eqn .( [ eq : fredholm2 ] ) .the first method uses a quadrature rule to approximate the integral in eqn .( [ eq : fredholm2 ] ) , that is this discretization results into a rectangular system like in which and which is a vector in .the second method uses discretization by the _ galerkin _ methods in which the model is described by where for is an orthonormal set of basis functions , see appendix .the integral in eqn .( [ eq : discrfred21 ] ) can now be written as which is in the same form as in eqn .( [ eq : discrfred22 ] ) , that is , in which is a vector of coefficients and the `` trade - off '' is of importance to think about when selecting discretization methods in computational work ; as quadrature methods are easier to implement , the galerkin method gives more accurate results and requires fewer unknowns to obtain the same accuracy .however , the major issue to think about in this stage is that the matrix is , as a rule , ill - conditioned and to get rid of ill - conditioning , regularization is needed for the solution of the problem . in the following section ,two different methods for regularization are presented .they are the _ tikhonov _ regularization and regularization by the _ subspace _ methods . according to tikhonov, the problem of finding as a solution to can be substituted by a minimization problem as where is called the _ global objective function_. in this formulation is the _ data misfit _ and is called the _ model objective function _ ; is a _ penalty parameter _ as a parameter that determines how well the solution is fitted with data . by adjusting , the solution will fit the data in an optimal way . by differentiating the problem in ( [ eq : discrfred2galerk2 ] ) with respect to and setting the differentiation to zero ,a solution will be achieved , that is it is shown that the penalty parameter is found by solving where is the identity matrix .inversion or decomposition of the term is costly in this equation and this constitutes a major challenge in finding the solution . 
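a minimal sketch of the tikhonov step just described , written so that the costly explicit inversion of the normal - equation matrix is avoided by solving an equivalent augmented least - squares system instead . the test matrix and the penalty values are illustrative assumptions , not taken from the text .

```python
import numpy as np

def tikhonov_solve(G, d, beta):
    """solve  min_m ||G m - d||^2 + beta ||m||^2  via an augmented
    least-squares system, avoiding explicit inversion of G^T G + beta I."""
    n = G.shape[1]
    A = np.vstack([G, np.sqrt(beta) * np.eye(n)])
    b = np.concatenate([d, np.zeros(n)])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

# quick check on a small ill-conditioned example (illustrative values only)
rng = np.random.default_rng(1)
G = np.vander(np.linspace(0.0, 1.0, 40), 12, increasing=True)  # ill-conditioned
m_true = rng.standard_normal(12)
d = G @ m_true + 1e-4 * rng.standard_normal(40)

for beta in (1e-12, 1e-8, 1e-4):
    m = tikhonov_solve(G, d, beta)
    print("beta=%g   data misfit=%.3e   ||m||=%.3e"
          % (beta, np.linalg.norm(G @ m - d), np.linalg.norm(m)))
```

larger penalties trade a worse data misfit for a smaller ( more stable ) model , which is exactly the balance the global objective function above is meant to control .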
in the context of inverse problems , the _ tikhonov _regularization is used to damp the singular vectors , which are associated with small singular values in the problem , formulated as a singular value decomposition .referred to eqn .( [ eq : penaltybeta ] ) and with the matrix decomposed by singular value decomposition as one can find out that by multiplying both sides in in ( [ eq : svd3 ] ) and by other simplifications , can be found as by having ( [ eq : svd_x1 ] ) in vector form , it can be written as by introducing a function as which is called the _ tikhonov filter function _ , eqn .( [ eq : svd_x2 ] ) will be rewritten as in fact , the tikhonov filter function in ( [ eq : filterfunction ] ) , `` filters '' the singular vectors which are associated with small singular values .these vectors are in their turn associated with which are much smaller than as the penalty parameter .the tikhonov regularization is a fundamental process in inverse problems . for more efficiency, the tikhonov regularization can be extended by the _subspace _ regularization method .in fact , the tikhonov regularization solutions require a long time and considerable memory .any shortcut like discretizing the problem with fewer parameters , leads to an overdetermined system for a solution to . as a consequence, a coarse discretization will not fit the problem as the solution is forced into a small subspace .the challenge in such context will be to transform the problem into a small appropriate one by choosing a new subspace in the minimization problem of where .subspace regularization is involved with definition of the subspace for such that .hence , the original problem of ( [ eq : minimization2 ] ) is now converted into an equivalent minimization problem of the least - square system of in fact , a more realistic formulation in this context is to solve a minimization problem of ( [ eq : minimization2 ] ) by defining a subspace with that leads to a well - posed overdetermined system by choosing a small enough and a good choice of .there are different methods in which the subspace is chosen such that it is spanned by singular values .a two - dimensional prototype microwave tomographic imaging system composed of antennas ( a circular antenna array ) with the operating frequency in mhz is considered in .the antennas are located on the perimeter of a cylindrical microwave chamber with an internal diameter of mm which can be filled with various solutions , including deionized water . by separating the antennas into emitters and receivers , the influence of the emitter signalis assumed to be avoided .the sequential radiation emitting of emitters , and receiving antennas , is measured .the antennas are used with a narrow radiation pattern in the vertical direction for creating a two - dimensional slice of the three - dimensional object under test ( out ) .special waveguides are also used to get a wider horizontal projection . 
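returning briefly to the tikhonov filter function introduced above , before the description of the measurement set - up : the damping of the small singular values can be written directly through the svd . the convention assumed here is that the penalty parameter multiplies the squared norm of the model , so the filter factors read sigma_i^2 / ( sigma_i^2 + beta ) ; the matrix and data below are purely illustrative .

```python
import numpy as np

def tikhonov_svd(G, d, beta):
    """tikhonov solution written through the svd of G:
    m = sum_i f_i (u_i . d) / sigma_i * v_i,  with  f_i = sigma_i^2 / (sigma_i^2 + beta)."""
    u, s, vt = np.linalg.svd(G, full_matrices=False)
    f = s ** 2 / (s ** 2 + beta)          # filter factors damping small sigma_i
    coeffs = f * (u.T @ d) / s
    return vt.T @ coeffs, f

# illustrative operator with singular values spread over many orders of magnitude
rng = np.random.default_rng(2)
G = rng.standard_normal((50, 20)) @ np.diag(np.logspace(0, -8, 20)) @ rng.standard_normal((20, 20))
m_true = np.ones(20)
d = G @ m_true + 1e-6 * rng.standard_normal(50)

m_beta, f = tikhonov_svd(G, d, beta=1e-10)
print("filter factors (largest to smallest singular value):")
print(np.round(f, 3))
```

factors close to one leave the well - determined directions untouched , while factors close to zero suppress the directions associated with singular values much smaller than the penalty scale .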
the amplitude and the phase of the scattered field due to the outis also measured .for the two - dimensional mathematical formulation it is assumed that the out with the complex dielectric permittivity is not dependent on the coordinate in the media .the out is located in the media with a constant complex dielectric permittivity of .in addition , the magnetic permeability is assumed to be constant everywhere .the dielectric properties of the out which is assumed to be an infinite cylindrically symmetric object with volume is investigated .the situation is finally modeled by the following integral equation where in which + + eqns .( [ eq : out1 ] ) describe the out with unknown dielectric characteristics which is illuminated from the circular antenna array ; the scattered field is received by the receiving antennas on the same antenna array . as the ill - posed problem for the inverse system of determining in eqns .( [ eq : out1 ] ) , approximation methods should be chosen . in modified rytov s approximation is used .born approximation is also used for the above inverse problem concerning the objects with high contrast of . in this casethe rytov s approximation gives better results .the algorithm in gives an accurate solution of the inverse problem in two - dimensional cases including image reconstruction of a phantom consisted of a semisoft gel cylinder .the gel phantom is immersed into the working chamber after being cooled in a refrigerator .it is shown that the dielectric situation inside the working chamber are affected by the temperature gradients .in addition , the dielectric properties of the phantom are also affected by non - isothermic conditions in the working chamber .assuming that the frequency range from to ghz gives the most suitable results for microwave imaging , there are technical difficulties in building a tomographic system for the whole body concerning the frequency range .one of the reasons is that the acquisition time would be unrealistically long .however , at the lower frequency of about ghz suitable spatial resolution is achieved . in summary ,the multifrequency range from to ghz is optimal for microwave tomographic imaging . in , a suitable method for quasi real - time microwave tomography for biomedical applicationsis presented . by simulating a focusing system characterized by small field depth and a variable focal length ,a tomographic process is achieved in this work .the organ under test , which constitutes the scatterer , transforms the divergent wavefront from the focusing system into a convergent wavefront .an image , corresponding to a thin organ slice , from the divergent wavefront can be derived . by changing the focal length ,different slices can be obtained resulting into a cross - section of the organ . from the measured field distribution ,the slice images are deduced .letting and be the length of the organ and the distance between the observation line and the slice , respectively , the length of the observation domain will be .the equivalent currents , responsible for the scattered field is \mathbf{e}_{t}(x , y)\ ] ] where and are the total field and the wavenumber inside the organ , respectively ; is the wavenumber of the homogeneous surrounding medium . for cylindrical objects , illuminated by a plane wave ,the scattered field is determined by in which is the hankel function of order zero and of the second kind .for both two - dimensional and the three - dimensional cases , such algorithms can be used to reconstruct from the scattered field . 
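a rough sketch of the forward step implied by the equivalent - current formulation : the field scattered by a set of small cells carrying given equivalent currents is obtained by superposing the two - dimensional free - space green s function . the time convention and the kind of hankel function ( first kind with an exp(-i omega t ) dependence is assumed here , whereas some of the cited works use the second kind ) , the omitted physical prefactors , and all numerical values are assumptions of the sketch , not the authors formulation .

```python
import numpy as np
from scipy.special import hankel1

def greens_2d(k, r_obs, r_src):
    """2-d free-space green's function (i/4) H0^(1)(k |r - r'|), assuming an
    exp(-i omega t) time convention (other conventions use the second kind)."""
    R = np.linalg.norm(r_obs[:, None, :] - r_src[None, :, :], axis=-1)
    return 0.25j * hankel1(0, k * R)

def scattered_field(k, r_obs, r_cells, j_eq, cell_area):
    """superpose the fields radiated by equivalent currents j_eq sitting on small
    cells (each cell integral is crudely replaced by its value at the centre)."""
    G = greens_2d(k, r_obs, r_cells)
    return G @ (j_eq * cell_area)

# illustrative configuration: a small square of cells and a circle of receivers
k = 2.0 * np.pi                                  # wavenumber, arbitrary units
h = 0.05
xs = np.arange(-0.2, 0.2, h) + h / 2.0
cells = np.array([(cx, cy) for cx in xs for cy in xs])
j_eq = np.ones(len(cells), dtype=complex)        # toy equivalent-current values
angles = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)
receivers = 2.0 * np.column_stack([np.cos(angles), np.sin(angles)])

e_s = scattered_field(k, receivers, cells, j_eq, h * h)
print(np.round(np.abs(e_s[:6]), 4))
```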
here , the reconstructed current is the image which appears as the convolution between the point - spread function of the focusing system and the induced current distribution in the organ .the _ method of angular spectrum _ may be used for reconstruction of the current distribution from the scattered field . for the direct electromagnetic formulation ,a classical approach considering a 2d version of the problem may be used as an alternative .a 3d version of the problem would otherwise be to describe the field properties using the maxwell s equations which leads to a heavy 3d vectorial problem . in the 2d formulation , the biological object under testis considered to be nonmagnetic with constant dielectric properties along its vertical axis .the whole strategy in this approach is to convert the electromagnetic scattering problem into a radiating problem in the free space and a , so called , 2d scalar electrical field integral equation ( efie ) .the implicit time dependence of , with as the radial frequency is also introduced .the homogeneous,- and inhomogeneous wave equations in this context are and respectively . here , , the incident field , is the propagation of a tm - polarized , single - frequency , time - harmonic electromagnetic wave and is the total electric field ; the constant wavenumber inside the homogeneous media , and the wavenumber are respectively as and where is the complex permittivity inside the homogeneous media , and the complex permittivity of the inhomogeneous region .the total field , , as a superposition of the incident field and the scattered field can be written as introducing a new constant as together with the above equations will result into the following wave equation associated with the scattered field in eqn .( [ eq : newwe ] ) , an equivalent current can be defined as in fact , this equivalent current produces the scattered field and the wave equation above can now be written as a green s function formulation for the inhomogeneous wave equation in ( [ eq : ultimwe ] ) can be deduced to solve , that is where is the dirac delta function ; the associated green s function is where is , as previously mentioned , the zero - order hankel function of the first kind . by the aid of the green s function formulation above , andthe principle of superposition , the scattering field can be solved by considering ( [ eq : totfield ] ) and ( [ eq : greenscatteringeq ] ) , the total field is finally expressed as the following integral formulation : as the complex permittivity is known and the incident field is given , the scattered field will be computed as the direct formulation of the electromagnetic scattering problem . in such context ,( [ eq : greenscatteringeq ] ) and ( [ eq : greensctot ] ) can be solved , for example , by moment methods ( mom ) , see previous chapters . by this numerical method , two different two - dimensional configurations , by planar ,- or cylindrical situated dipoles , are solved in . by assuming constant fields and dielectric properties in a rectangular cell as the out, the incident,- and the scattered field will be discretized as (\vec{r_{j } } ) , \hspace{3 mm } n=1,2, ... ,n\ ] ] and (\vec{r_{j } } ) , \hspace{3 mm } m=1,2, ... ,m\ ] ] where the region , i.e. 
the out , is discretized into cells and also receiving points for the observed scattered field ; the green s function can be computed analytically as depicted in .numerical solution of this direct scattering problem will be used for creating image reconstruction algorithms for the inverse problem by which the unknown permittivity contrast distribution of the out will be found . concerning biological image reconstruction by microwave methods ,there are different approaches which are generally based on either _ radar techniques _ or _ tomographic formulation _ .99 f. monsefi `` mathematical modeling of electromagnetic disturbances in railway system '' , licentiate thesis , lule university of technology , 2006 .m. n. o. sadiku , _ numerical techniques in electromagnetics_. crc press , inc .george b. arfken , hans j. weber , _ mathematical methods for physicists _ , academic press , 2001 .d. k. cheng , _ field and wave electromagnetics_. addison - wesley publishing co. , reading , mass . , 1989 .d. k. cheng , _ fundamentals of engineering electromagnetics_. addison - wesley series in electrical engineering , nov . 1993 .c. a. balanis , _ antenna theory : analysis and design ._ john wiley & sons , inc . ,1982 . c. a. balanis , _ advanced engineering electromagnetics ._ john wiley & sons , inc . , 1989 .juan r. mosig , `` arbitrary shaped microstrip structures and their analysis with a mixed potential integral equation '' , _ ieee trans .on microwaves theory tech .mtt-36 , pp .314 - 323 , feb . 1988 .w. c. chew , _ waves and fields in inhomogeneous media ._ new york : ieee press series on electromagnetic waves , 1995 .j. j. yang , y. l. chow , d. g. fang , `` discrete complex images of a three - dimonsional dipole above and within a lossy ground '' , _ ieee proceedings - h _ , vol .f. b. hildebrand , _ introduction to numerical analysis _ , second edition , dover publications , inc ., new york , 1987 .gunnar sparr , _ kontinuerliga system _ , lund institute of technology , department of mathematics , sweden .lund 1984 .j. jin , _ the finite element method in electromagnetics ._ , second edition , john wiley & sons . , new york , usa , 1993. w. cheney and d. kinacid , _ numerical analysis _ , second edition , isbn 0 - 534 - 33892 - 5 , 1996 .c. johnson , _ numerical solution of partial differential equations by the finite element method . _ studentlitteratur , isbn 91 - 44 - 25241 - 2 , 1987 . b. archambeault , c. brench , and o. rahami _emi / emc computational modeling handbook ._ , kluwer academic publishers 1998. j. carlsson , _ computation of emc properties of slots and printed circuit boards _ ,phd dissertation , chalmers university of technology and sp swedish national testing and research institute , sweden 1998 .k. s. yee , `` numerical solution of initial value problems involving maxwell s equations in isotropic media '' , _ ieee trans .antennas and propagation _ , vol .302 - 307 , 1966 .d. m. sullivan , _ electromagnetic simulation using the fdtd method_. john wiley & sons , inc . , 2000 .r. f. harrington , _ field computation by moment methods_. robert e. kreiger , malabar , fl , 1987 .a. f. peterson , s. l. ray , and r. mittra , _ computational methods for electromagnetics_. ieee press , new york , usa , 1998 . numerical electromagnetics code nec2 unofficial home page .[ online ] .available : http:/www.nec2.org/ a. e. ruehli `` inductance calculations in a complex integrated circuit environment '' , _ ibm journnal development _ , vol .470 - 481 , sep . 1972 .p. a. brennan and a. e. 
ruehli , `` efficient capacitance calculations for thr three - dimensional multiconductor systems '' , _ ieee trans .microwave theory tech .76 - 82 , feb . 1973 .a. e. ruehli , `` equivalent circuit models for three - dimensional multiconductor systems '' , _ ieee trans .microwave theory tech .216 - 221 , mar .p. a. brennan , n. raver , and a. e. ruehli , `` three - dimensional inductance computation with partial element equivalent circuits '' , _ ibm journal of research and development _ , vol .23 , no . 6 , pp .661 - 668 , nov .1979 . `` the self and mutual inductance of linear conductors '' , _ bulletin of the national bureau of standards _ , 4(2):301 - 344 , 1908 .f. grover , _ inductance calculations : working formulas and tables_. van nostrand , 1946 .c. hoer and love , `` exact inductance equations for rectangular conductors with applications to more complicated geometries '' , _ journal of research of the national bureau of standards - c. engineering and instrumentation _ , 69c(2):127 - 137 , 1965 .a. e. ruehli _ an integral equation equivalent circuit solution to a large class of interconnect system _ , phd .dissertation , the university of vermont , usa , 1972 .a. e. ruehli `` circuit models for three - dimensional geometries including dielectrics''__ieee trans .microwave theory tech ._ _ , vol.40 , no.7 , pp . 1507 - 1516 , jul . 1992. h. heeb and a. e. ruehli , `` approximate time - domain models of three - dimensional interconnects '' , in _ proc . of the ieee int .conference on computer - aided design _ , santa clara , ca , usa , 1990 , pp . 201205 . h. heeb and a. e. ruehli , `` three - dimensional interconnect analysis using partial element equivalent circuits '' , _ ieee trans . circuits and systems _ , vol .974 - 982 , nov . 1992 .h. heeb and a. e. ruehli , `` three - dimensional interconnect analysis using partial element equivalent circuits '' , _ ieee trans .circuits and systems _ , vol .974 - 982 , nov . 1992 .a. e. ruehli _et al . _ , `` nonorthogonal peec formulation for time- and frequency- domain em and circuit modeling '' , _ ieee trans . on emc167 - 176 , may 2003 . c. ho , a. ruehli and p. brennan , the modified nodal approach to network analysis `` , _ ieee trans . on circuits and systems _ , pages 504509 , june 1975 .j. e. garrett , ' ' advancements of the partial element equivalent circuit formulation `` , phd dissertation , the university of kentucky , 1997 .serguei y. semenov , * robert h. svenson , alexander e. boulyschev , alexander e. souvorov , vladimir y. borisov , yuri sizov , andrey n. starostin , kathy r. dezern , george p. tatsis , and vladimir y. baranov : microwave tomographi : two - dimensional system for biological imaging ._ ieee transactions on biomedical engineering _ , vol .9 , september 1996 .tommy henriksson , ' ' contribution to quantitaive microwave imaging techniques for biological applications `` , phd dissertation , mlardalen university , sweden , 2009 . matthew n. o. sadiku _ elements of electromagnetics _ , fourth edition , oxford university press , new york , 2007 .l. tsang , j. a. kong , k - h .ding _ scattering of electromagnetic fields _ , john wiley & sons , new york , 2000 .d. colton , and r. kress , _ inverse acoustic and electromagnetic scattering theory_. 2nd edn .springer - verlog berlin heidelberg new york , 1998 .k. atkinson , w. han , _ theoretical numerical analysis _ , a functional analysis framework .springer - verlog , new york , inc . 
, 2001 .gerhard kristensson , _ spridningsteori med tillmpningar _ ,studentlitteratur , lund , sweden , 1999 .kirsch , a. , and kress , r. : on an integral equation of the first kind in inverse acoustic scattering . in : _inverse problems _ ( canon and hornung , eds ) .isnm 77 , 93 - 102 ( 1986 ) .kirsch , a. , and kress , r. : a numerical method for an inverse scattering problem . in : _inverse problems _( engl and groetsch , eds ) . academic press , orlando , 270 - 290 ( 1987 ) .kirsch , a. , and kress , r. : an optimization method in inverse acoustic scattering . in : _boundary elements ix , vol 3 . fluid flow and potentialapplications_. springer - verlog , berlin heidelberg new york , 3 - 18 ( 1987 ) .kress , r. , and zinn , a. : three dimonsional representation in inverse obstacle scattering . in : _mathematical methods in tomography _ ( hermans et al , eds ) .springer - verlog lecture notes in mathematics * 1497 * , berlin heidelberg new york , 125 - 138 ( 1991 ) .kress , r. , and zinn , a. : three dimensional reconstructions from near - field data in obstacle scattering . in : _inverse problems in engineering sciences _ ( yamaguti et al , eds ) .icm-90 satellite conference proceedings , springer - verlog , tokyo berlin heidelberg , 43 - 51 ( 1991 ) . kress , r. , and zinn , a. : on the numerical solution of the three dimensional inverse obstacle scattering problem .appl . math . *42 * , 49 - 61 ( 1992 ) .blbaum , j. : optimization methods for an inverse problem with time - harmonic electromagnetic waves : an inverse problem in electromagnetic scattering .inverse problems * 5 * , 463 - 482 ( 1989 ) . _ inverse problems and inverse scattering of plane waves_. d. n. ghosh roy & l. s. couchman , academic press , orlando , florida , usa , 2001 .heath , michael t. _ scientific computing : an introductory survey_. mcgraw - hill international editions , compter science series , singapore 1997 . j. f. roach ._ green s functions .cambridge university press .new york , 1982 .p. k. kythe ._ boundary element methods . _ crc press , boca raton , fl , 1995 . ' ' total variation regularization for linear ill - posed inverse problems : extensions and applications `` , phd dissertation , arizona state university , december , 2008 . ' ' frequency optimization for microwave imaging of biological tissues `` _ in proc .ieee , _ vol .374 - 375 , 1985 .bolomey , a. izadnegahdar , l. jofre , ch .pichot , g. peronnet , and m. solaimani : microwave diffraction tomography for biomedical applications , _ ieee transactions on microwave theory and techniques _ , vol .11 , november 1982 .pichot , l. jofre , g. peronnet , a. izadnegahdar , and j. ch . bolomey , ' ' an angular spectrum method for inhomogeneous bodies reconstruction in microwave .`` ' ' focusing of two - dimensional waves , `` _ j. opt .1 , pp . 15 - 31 , 1981 .devaney , a. j. 1989 .the limited - view problem in diffraction tomography . _ inverse problems ._ 5 , 4 , 510 - 510 .x. li , s. k. davis , s. c. hagness , d. w. van der weide , and b. d. veen , ' ' microwave imaging via spacetime beamforming : experimental investigation of tumor detection in multilayer breast phantoms , `` _ ieee trans . microw .theory tech .1856 - 1856 , aug . 2004 .j. bond , x. li , and s. c. hagness , ' ' microwave imaging via space - time beamforming for early detection of breast cancer , `` _ ieee trans .antennas propag .1690 - 1705 , aug .2003 . j. bond , x. li , and s. c. 
hagness , ' ' numerical and experimental investigation of an ultrawideband ridged pyramidal horn antenna with curved launching plane for pulse radiation , " vol .259 - 262 , 2003 .definition 1 : : a sequence of elements in a normed vector space is called a cauchy ( fundamental ) sequence if there exists such that for every . definition 2 : : a sequence of elements in a normed vector space converges to an element if and there exists such that for every . definition 3 : : a normed space is complete ( also called a banach space ) if every cauchy ( fundamental ) sequence in converges to an element in .definition 4 : : let .then has compact support if for every .that is for some compact set .definition 5 : : if is a linear space with a scalar product with a corresponding norm , then is said to be a hilbert space if is complete , i.e. , if every cauchy sequence with respect to is convergent .if , then the space of square integrable functions on is defined as by defining a scalar product as and a corresponding norm ( norm ) as ^{1/2}\ ] ] the scalar product is such that which means that the above integral exists if and .
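the inline formulas of the appendix did not survive extraction ; the standard definitions they refer to can be restated as follows ( a reconstruction of textbook formulas , not of the original markup ) .

```latex
L^2(\Omega) = \Big\{\, f : \Omega \to \mathbb{C} \;\Big|\; \int_\Omega |f(x)|^2 \, dx < \infty \,\Big\},
\qquad
(f,g) = \int_\Omega f(x)\,\overline{g(x)}\, dx ,
\qquad
\|f\|_{L^2} = \Big( \int_\Omega |f(x)|^2 \, dx \Big)^{1/2},
\qquad
|(f,g)| \;\le\; \|f\|_{L^2}\,\|g\|_{L^2}.
```

the last inequality ( cauchy - schwarz ) is what guarantees that the integral defining the scalar product exists whenever both functions belong to the space .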
scattering theory has had a major role in twentieth century mathematical physics . mathematical modeling and algorithms for the direct and inverse electromagnetic scattering formulations due to biological tissues are investigated . the algorithms are used for a model based illustration technique within the microwave range . a number of methods are given to solve the inverse electromagnetic scattering problem in which the nonlinear and ill - posed nature of the problem is acknowledged . + * key words * : electromagnetic fields , computational electromagnetics , electromagnetic scattering , direct problem , inverse problem , ill - posed problems , biological tissues , maxwell s equations , integral equations , boundary conditions , green s functions , uniqueness , numerical methods , optimization , regularization . * direct and inverse computational methods for electromagnetic scattering in biological diagnostics * + farid monsefi + school of education , culture and communication ( ukk ) , + department of innovation , design , and technique ( idt ) , + mälardalen university , sweden + magnus otterskog + department of innovation , design , and technique ( idt ) , + mälardalen university , sweden + sergei silvestrov + school of education , culture and communication ( ukk ) , + mälardalen university , sweden + november 2013
several gateway technologies exist today to relay data aggregated from an ad - hoc sensor network cluster .such technologies include bluetooth , wi - fi , and gsm / gprs . while gprs has the added advantage of relaying the data directly to the internet , bluetooth and wi - fi can be used to relay data over short to medium range respectively .one deterrent to the wide - spread use of such technologies in the rural context comes from the fact that most villages in india have very little access to grid power .often power cuts last for 12 - 16 hours a day .gprs technology requires sufficiently high energy with peak currents of about during data transmissions .even large battery backups are insufficient to guarantee its continuous operation .is there a solution to this problem ?can we generate power just sufficient for gprs transmission ?our work positions itself to tackle the issue of powering the gprs gateway from harvested energies .fig.[fig : bigpicture ] shows the block diagram of e - dtn multi - interface gateway . alongsideare shown sensor network gathering data in the field and other embedded devices such as camera phones and data modems . in the agriculture context , the purpose of a sensor network deployment is to collect data to provide information to small and marginal farmers about the standing crop by evaluating its stress in adverse situations such as drought and pest attacks that impact the yield .the requirement for data gateway is to relay data for the purpose of analysis and decision science .our goal is to demonstrate the capabilities of a grid independent hybrid data relay communication system comprising of bluetooth , and wi - fi .gprs technology is used as the internet gateway .we use a bicycle dynamo to generate this energy . in this energy generating system ,data downloads are possible over wi - fi or bluetooth , and upload to internet uses gprs technologies .since energy is generated on the fly , it now becomes necessary to negotiate this quantity . in our work, we employ the delay / disruption tolerant network ( dtn ) stack and exploit its features from the view of _ energy availability _ rather than connectivity and we therefore call our system `` e - dtn '' . in authors discuss an energy driven system to improve packet delivery in a sparse sensor deployment . in this work , we adapt packet buffering and propose an algorithm towards an energy based data transfer , where data bundles are exchanged between e - dtn end points to match the minimum energy available between the node pairs .thus our scheme is comprehensive compared to . 
using e - dtn , energy availability in terms of `` energy bundles '' is negotiated . the input parameters considered for negotiation include : ( a ) energy availability , ( b ) data rate , ( c ) transmit power and finally ( d ) channel state based on signal strength . the outcome determines whether the data transfer takes place over bluetooth or wi - fi . we show that the energy stored in a super capacitor is sufficient for our purpose . the initial latency for the energy bundle transfer is seconds from the time the data mule ( dm ) and the field aggregation node ( fan ) estimate their energies . we implemented the data mule using gumstix s system on module ( som ) overo fire as the controller , and siemens tc65 as the gprs module . table [ energywifi ] shows the split time and energy break - up for a single bundle transfer . the data mule consumes around and for a bundle transfer using dtn over wi - fi to download the data from the fan and gprs for uploading the bundle to the server . fig.[fig : bicycle_dtn ] shows these system components including the super capacitor banks for storing the energy . table [ energy ] shows the latency of a single bundle transfer between the e - dtn end points over bluetooth and wi - fi . we experimentally evaluated the optimal size of the gprs buffer . the packet size was fixed at bytes . experiments were conducted by varying the buffer size and programming the gprs module . once the buffer is full , the gprs radio is switched to the `` on '' state . fig . [ fig : confidence ] shows the 95% confidence interval of the energy per packet to transmit . as we increase the buffer size on the module , the transfer energy for a packet decreases until the buffer size is . beyond this point , the energy increases , although very slowly . by taking the buffer , our results show that in order to complete a gprs transfer for a single packet , the minimum amount of energy consumed is . the packet delivery latency is and the energy consumed is for . ( table : single bundle transfer over wi - fi . ) we used a super capacitor across all our measurements . since our energy requirement is limited to retrieving the data bundles from the fan and transferring the same over a gprs link , we do not require an infinite buffer . based on the energy measurements we conducted , a capacitor is sufficient to transfer one data bundle of packets over gprs . this optimal value ensures that the cyclist does not have to pedal for longer periods to kick off packet transmissions . we found that of cycling at about is required to generate energy sufficient to transfer a packet buffer . we measured watts as the power generated from the dynamo . the model we have proposed is sustainable and general enough for application in several scenarios . it is sustainable in the field due to the fact that there are no replaceable components such as batteries and associated charging electronics . an ideal super capacitor has infinite charge - discharge cycles and does not require complex charging circuitry . thus , dtn from an energy perspective combined with reliability is a novelty in our proposed scheme . the solution is general enough for application in future home networks as well , where zero downtime is required . the only way to ensure this in today s world is to make users generate their own power . the field aggregation node ( fan ) and the data mule ( dm ) estimate their respective energies . a decision is made to determine the number of bundles `` n '' to be sent from the fan based on the minimum energy , i.e.
, the smaller of the two estimated energies . the outcome of the energy negotiation determines the number of bundles and the technology ( wi - fi or bluetooth ) to be used for the data transfer . the fan transmits the bundles to the dm and awaits an ack . the dm , after receiving a bundle , starts the gprs buffer transmission . on successful delivery at the remote server , the dm sends an ack back to the fan . the fan deletes all successfully acknowledged bundles . if the dm has energy left , it puts up a fresh request and the preceding steps are repeated .
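a compact sketch of the negotiation loop just described . all names , the link - selection hook and the supercapacitor parameters are hypothetical placeholders introduced for illustration , since the paper does not publish its implementation ; only the overall flow ( estimate , negotiate on the minimum budget , transfer , acknowledge , repeat ) follows the text .

```python
def capacitor_energy(capacitance, v_now, v_min):
    """usable energy left in a supercapacitor between its present voltage and
    the minimum voltage the electronics tolerate: E = 1/2 C (v_now^2 - v_min^2)."""
    return 0.5 * capacitance * (v_now ** 2 - v_min ** 2)

def negotiate_bundles(e_fan, e_dm, e_per_bundle_fan, e_per_bundle_dm):
    """number of bundles both sides can afford, based on the smaller energy budget."""
    affordable_fan = int(e_fan // e_per_bundle_fan)
    affordable_dm = int(e_dm // e_per_bundle_dm)
    return min(affordable_fan, affordable_dm)

def transfer_session(fan, dm):
    """one e-dtn session: estimate energies, negotiate, move bundles, acknowledge,
    and repeat while energy and data remain (fan/dm objects are hypothetical)."""
    while True:
        e_fan = fan.estimate_energy()            # e.g. from the fan capacitor voltage
        e_dm = dm.estimate_energy()
        n = negotiate_bundles(e_fan, e_dm, fan.energy_per_bundle, dm.energy_per_bundle)
        if n == 0 or not fan.has_bundles():
            break
        link = dm.pick_link(e_dm)                # wi-fi or bluetooth, per the negotiation
        for bundle in fan.next_bundles(n):
            dm.receive(bundle, link)
            if dm.upload_over_gprs(bundle):      # buffer-and-burst gprs transmission
                fan.acknowledge(bundle)          # fan deletes acknowledged bundles
```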
to overcome the problem of unavailability of grid power in rural india , we explore the possibility of powering wsn gateways using a bicycle dynamo . the `` data mule '' bicycle generates its own power to ensure a self - sustainable data transfer for information dissemination to small and marginal farmers . our multi - interface wsn gateway is equipped with bluetooth , wi - fi and gprs technologies . to achieve our goal , we exploit the dtn stack in the _ energy sense _ and introduce the necessary modifications to its configuration . + * key words * : icts , agriculture , bicycle dynamo , energy harvesting , dtn , wsns , wi - fi , bluetooth .
the metric in general relativity can be recovered from the causal or conformal structure , that is , the lightcones , up to a scaling factor . by knowing a measure defined on the spacetime, one can find the volume element , which provides the needed information to completely recover the metric , provided that the spacetime is distinguishing . recall that a spacetime is _ distinguishing _ if its events can be distinguished by their chronological relations with the other events alone .for example , if the spacetime contains closed timelike curves , then it is not distinguishing .if spacetime is distinguishing , then the _ horismos relation _( for two events , we say `` horismos '' , and we write , if lies on the future lightcone of ) is enough to recover the causal structure .we can even start with a reflexive relation which represents the horismos , and by imposing simple conditions , we can recover the relativistic spacetime .can we then say that _ causal structure measure = lorentzian spacetime _ ? while it is true that the metric of any distinguishing lorentzian spacetime can be recovered from its causal structure and a measure , not any causal structure and measure lead to a lorentzian metric . in some casesthe determined metric has singularities .but this is a good thing , since we already know from the singularity theorems of penrose and hawking that , under very general conditions , singularities are unavoidable .the problem is that at singularities the metric itself becomes singular , so it seems that it is not the appropriate tool to describe singularities , and something else may be needed .i will argue in this article that the causal structure may be a better tool , and provide a better insight into singularities , than the metric .on the other hand , the metric still can be used to describe singularities , at least in some cases . to do this , the standard mathematical framework used in general relativity , which is _ semiriemannian geometry _, has to be replaced with the more general _ singular semiriemannian geometry _ , which deals with both nondegenerate and degenerate metrics .this allowed us to rewrite einstein s equation in a way which remains finite at some singularities where otherwise would have infinities , but which outside the singularities remains equivalent to the original equations .it also allowed us to provide finite descriptions of the big - bang singularities .we call these singularities , characterized by the fact that the metric becomes degenerate , bur remains smooth , _ benign singularities_. for the black hole singularities , which usually are thought to have singularities where components of the metric tensor tend to infinity ( called _ malign singularities _ ) , it was shown that there are atlases where the metric is degenerate and smooth .this approach to singularities turned out to have an unexpected positive side effect : they are accompanied by dimensional reduction effects which were postulated in various approaches to make quantum gravity perturbatively renormalizable .all these reasons justify the research in _ singular general relativity _ , and suggest that if we remove the constraints that the metric has to be _ nondegenerate _( _ i.e. _ with nonvanishing determinant ) everywhere , singularities turn out not to be a problem . 
but the metric and other geometric objects like covariant derivative and curvature seem to be different at singularities , despite the fact that the new equations treat on equal footing the singularities and the events outside them. it would be desirable to have a more homogeneous description of the spacetime , which treats even more uniformly the events at and outside the singularities .this homogeneous description is provided by the causal structure and the measure giving the volume element .hence , the fact that the causal structure and the measure can lead to degenerate metrics provides an extra justification of the methods of singular semiriemannian geometry , explaining its success in the case of the benign singularities . in the same time, this suggests once more that the causal structure is more fundamental than the metric .as long as the metric is nondegenerate , the intervals determined by the metric between events are in correspondence with the relations determined by the causal structure .but when we start with a causal structure , the things are different , as we show in this section .we recall first the standard definitions ( see for example , then we show that they are not appropriate for singularities , and then we replace them with the appropriate ones .[ def_old_interval ] let be a vector space of dimension , endowed with a bilinear form ( called _ metric _ ) of signature .a vector is said to be * _ lightlike _ or _ null _ if , * _ timelike _ if , * _ spacelike _ if , * _ causal _ if and . to the vector space associate an affine space which we will also denote by when there is no danger of confusion .the elements of the affine space are named _events_. let be two distinct events from , joined by a vector , hence .the events and are said to be separated by a _lightlike _ , _ timelike _ or _ spacelike _ _ interval _ , if the vector is respectively lightlike , timelike or spacelike .the null vectors form the _ lightcone_.the interior of the lightcone is made of the timelike vectors , while the exterior , on the spacelike vectors .the causal vectors form two connected components , and the choice of one of these connected components is a _ time direction_. a causal vector from the chosen connected component is said to be _ future - directed _ , while one from the other one is called _ past - directed _ ( see fig .[ causal-structure-minkowski.pdf ] ) . with the interval between two events and the time directionone defines the following relations : [ def_old_relations_minkowski ] two events joined by the vector are said to be in a : * _ horismos relation _ , if is a lightlike future - directed vector , * _ chronological relation _ , if is a timelike future - directed vector , * _ causal relation _ , if is a causal future - directed vector , * _ non - causal relation _ , if is a spacelike vector .these relations can be generalized to a __ ( a differentiable manifold of dimension , endowed with a metric of signature ) .first , note that the tangent space at each event has the structure of a minkowski spacetime of dimension , given by the metric at that point .[ def_old_curves ] let be a real interval , and a curve which is differentiable everywhere . then , the curve is said to be lightlike / timelike / causal / spacelike if the tangent vectors at each of its points are lightlike / timelike / causal / spacelike . 
if the curve is causal , it is said to be future / past - directed if the tangent vectors at each of its points are future / past - directed ( with respect to its parametrization ) . definition [ def_old_relations_minkowski ] extends to any lorentzian spacetime : [ def_old_relations_lorentzian ] two events are said to be in a : * _ horismos relation _ ( _ horismos _ ) , if they can be joined by a lightlike future curve , * _ chronological relation _ ( _ chronologically precedes _ ) , if and they can be joined by a timelike future curve , * _ causal relation _ ( _ causally precedes _ ) , if they can be joined by a causal future curve , * _ non - causal relation _ , if and they can be joined by a spacelike curve .but if we start with a causal structure on a topological manifold , the standard correspondence between the intervals and causal relations no longer applies .consider for example the causal structure of the four - dimensional minkowski spacetime , and endow it with the metric , where and is a smooth scalar function .as long as , the causal structure determined by the metric is the same as that determined by the metric ( see fig .[ causal-structure-minkowski.pdf ] ) .[ ex_minkowski_spacelike_singularity ] consider now that .then , the metric is degenerate on the hyperplane .the length of a smooth curve contained in the hyperplane is always vanishing , therefore even though two events and for which are not causally correlated , they ca nt be joined by a spacelike curve .they can be joined instead by a curve so that for any event , a vector tangent to at satisfies instead of .[ ex_minkowski_timelike_singularity ] similarly , if we choose , then the metric is degenerate on the curve , .for any vector tangent to at , instead of , despite the fact that any two events are in a chronological relation .examples [ ex_minkowski_spacelike_singularity ] and [ ex_minkowski_timelike_singularity ] show that the chronological , horismos and non - causal relations are characterized by the sign of only as long as the metric is nondegenerate .if the metric is degenerate , it is possible that the relation between two events is or , and yet they are not joined by a timelike or spacelike curve in the sense of the definition [ def_old_curves ] .this justifies that we change the definition of lightlike , timelike and spacelike curves to depend on the causal structure only , and not on the metric .the examples from the previous section revealed that , if we want to deal with degenerate metrics , and not only with the nondegenerate one , we have to change the definitions of the intervals between events , and consequently of the lightlike / timelike / causal / spacelike curves , and of the relations .[ def_old_undefined]forget definitions [ def_old_interval ] of intervals , [ def_old_curves ] of causal curves , [ def_old_relations_minkowski ] and [ def_old_relations_lorentzian ] of relations on minkowski and lorentzian spacetimes . 
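the point of the two examples can be checked mechanically . the sketch below classifies a few vectors by the sign of their squared norm , first for the flat metric and then for a conformally rescaled metric whose factor vanishes on a hypersurface ; the signature convention ( + , - , - , - ) , under which timelike vectors have positive squared norm , is an assumption , since the text s convention was lost in extraction .

```python
import numpy as np

# minkowski metric with signature (+, -, -, -): an assumed convention,
# under which timelike vectors satisfy g(v, v) > 0.
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def classify(v, g):
    q = v @ g @ v
    if np.isclose(q, 0.0):
        return "lightlike"
    return "timelike" if q > 0 else "spacelike"

v_time = np.array([2.0, 1.0, 0.0, 0.0])
v_null = np.array([1.0, 1.0, 0.0, 0.0])
v_space = np.array([1.0, 2.0, 0.0, 0.0])
for v in (v_time, v_null, v_space):
    print(v, "->", classify(v, eta))

# a conformally rescaled metric g = omega(t)^2 * eta with omega(0) = 0 is
# degenerate on the hypersurface t = 0: there every vector gets squared norm
# zero, so the sign test no longer reproduces the causal relations of the cone.
def omega(t):
    return t                       # illustrative choice vanishing at t = 0

g_degenerate = omega(0.0) ** 2 * eta
for v in (v_time, v_null, v_space):
    print(v, "-> with the degenerate metric:", classify(v, g_degenerate))
```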
we will start instead from the causal structure , and consider the relations as given .for we define , , , , , and .the tuple is called the _ causal structure _ of the lorentzian spacetime .actually , the same information is contained in any of the triples , and .if the spacetime is _ distinguishable _ ( that is , for any two events so that follows that ) , then the causal structure can be recovered from alone .[ def_new_interval ] two events in the minkowski spacetime are said to be separated by a : * _ lightlike interval _ , if or , * _ timelike interval _ , if or , * _ causal interval _ , if or , * _ spacelike interval _ , if .in the minkowski spacetime , a vector joining two events is _ lightlike / timelike / causal / spacelike _ , according to how the interval between those events is .this definition allows one to call the intervals and vectors from examples [ ex_minkowski_spacelike_singularity ] and [ ex_minkowski_timelike_singularity ] spacelike , respectively timelike , despite the fact that they satisfy .now we can review definition [ def_old_curves ] of lightlike / timelike / causal / spacelike curves .there is no need to change it , just to plug in it the new definition of intervals [ def_new_interval ] , instead of the old one [ def_old_interval ] .in fact , we can even skip altogether the differentiability of the curve ( needed to discuss about tangent vectors ) , and characterize the curves in terms of the relations only , as we did in .[ def_causal_curve ] let be a relation on the events of a spacetime ( usually one of the relations , , and , where denotes `` or '' ) .an _ open curve with respect to the relation _ defined on a horismotic set is a set of events so that the following two conditions hold 1 .the relation is _ total _ on , that is , for any , , either or , 2 . for any pair , ,if there is an event so that and , the restriction of the relation to the set is not total .we denote by the set of curves with respect to the relation . a curve from called _causal curve_. a curve from is called _ chronological curve_. a curve from is called _we will take first a look at the causal structure of the simplest big - bang cosmological model , that of flrw ( friedmann - lematre - robertson - walker ) . in this model , at any moment of time , where is an interval , space is a three - dimensional riemannian space , scaled by a factor .the total metric is obtained by taking the _ warped product _ between the riemannian spaces and , with the _ warping function _ , the typical space can be any riemannian manifold , but usually is taken to be one that is homogeneous and isotropic , to satisfy the _cosmological principle_. this is satisfied by , , and , whose metric is where for the -sphere , for the euclidean space , and for the hyperbolic space .the flrw solution corresponds to a fluid with mass density and pressure density represented by the scalar functions and . as , both and tend to infinite .but the correct densities are not the scalars and , but the _ densities _ and , which are the components of the differential -forms and .the latter are shown to remain finite in , because as and tend to infinite , the _ volume form _ tends to zero precisely to compensate them . also , all the terms in the _ densitized einstein equation _ introduced in , are finite and smooth at the singularity .this equation is equivalent to einstein s outside the singularity , where . 
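for reference , the warped - product line element referred to above has the standard form below ; this is a reconstruction of the textbook formulas , and the article s own notation may differ .

```latex
ds^2 \;=\; -\,dt^2 \;+\; a^2(t)\, d\Sigma^2 ,
\qquad
d\Sigma^2 \;=\; \frac{dr^2}{1 - k r^2} \;+\; r^2\!\left( d\theta^2 + \sin^2\!\theta \, d\varphi^2 \right),
```

with k = +1 for the 3 - sphere , k = 0 for euclidean space and k = -1 for hyperbolic space , matching the three cases listed in the text .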
by the results presented in , we know that the flrw singularity is well behaved , despite the fact that the usual methods of semiriemannian geometry fail when the metric becomes degenerate , because instead we use the tools of singular semiriemannian geometry . as shown in , the solution extends naturally beyond the singularity .now we will show that the causal structure remains intact at the flrw singularity . to find the null geodesics , we solve , assuming that when . in coordinates the tangent of the angle made by the null geodesics and the spacelike hypersurfaces grows as grows , and is zero when .hence , the null geodesics start tangential to the hypersurface , and as the time coordinate increases , their angle grows too ( see fig .[ causal-structure-flrw.pdf ] ) .the lightcones in the tangent space become degenerate at the singularity . however , the topology of a lightcone in the manifold at singularity is the same as that of one which is outside the singularity .the fact that the lightcones originating in the singularity are degenerate is a differential structure property , not a topological one . to see this, we make the change of coordinate to get .this puts equation in the form the causal structure becomes now identical to that of the metric , which is nondegenerate .this is to be expected , because the flrw spacetime is conformally flat . if , the causal structure becomes that of a minkowski spacetime ( fig .[ causal-structure-minkowski.pdf ] ) .the topology of the causal structure is the same everywhere , making the causal structure universal .by contrast , the metric tensor is very different at the singularity , because it becomes degenerate . for more generality, we can drop the conditions of homogeneity and isotropy . to do this, we allow the metric on to depend not only on time , via , but also on the position .so , in equation , we allow to depend on time , but in such a way that it never becomes degenerate .the metric becomes it is degenerate when .we make the same change of the time coordinate , and we get that the causal structure is identical to that of the metric , which is nondegenerate .the schwarzschild solution represents a spacetime containing a spherically symmetric , non - rotating and uncharged black hole of mass .the metric , expressed in the schwarzschild coordinates , is where natural units and are used .the metric is that of the unit sphere ( see _ e.g. _ p. 149 ) .the _ event horizon _ is at .here apparently there is a singularity , since .this singularity is due to the coordinates , and is not genuine , as one can see by using the eddington - finkelstein coordinates .but the singularity at is genuine and malign , and ca nt be removed because the scalar .however , this singularity also has a component due to the coordinates , and when we choose better coordinates , the metric becomes finite , analytic at the singularity too , even thought it still remains singular , because it becomes degenerate .moreover , this degenerate singularity is of a benign , nice kind , named _ semiregular _ .the new coordinates are given by and the metric becomes which is analytic and degenerate at .let s find the causal structure of this extension of the schwarzschild solution at the singularity .we will consider in the following only the coordinates , and the corresponding components of the metric .the full metric is obtained by taking the warped product with the metric from equation , with warping function . 
in coordinates the metric is analytic near the singularity and has the form to find the null tangent vectors , we have to solve for the equation , that is which is quadratic in the solutions are hence , the null geodesics satisfy the differential equation they are plotted in fig .[ causal-structure-schw.pdf ] .we can see that the situation is very similar to that of the flrw singularity : in coordinates , the null geodesics are oblique everywhere , except at , where they become tangent to the hypersurface . since in coordinates determinant of the metric is one may think that the metric , where , is nondegenerate , because its determinant is not vanishing .however , it becomes a malign singularity , since the component becomes infinite .however , the -dimensional lightcones originating in the singularity have the same topology as any other -dimensional lightcone . if the spherical non - rotating black hole of mass has an electric charge , the solution is given by the reissner - nordstrm metric , where is that from equation , and the units are natural . the real zeros of give the event horizons .the event horizons are apparent singularities , removable by eddington - finkelstein coordinates , just like for the schwarzschild black hole .the singularity at ca nt be removed , but it can be made analytic and degenerate . to do this , we change the coordinates to where .the metric becomes the metric is analytic if to find the null geodesics , we proceed as in the case of the schwarzschild black hole . in coordinates , the metric is to find the null directions , we solve , which becomes therefore hence , the null geodesics satisfy the differential equation to ensure that the coordinate remains spacelike , it has to satisfy , which is ensured in a neighborhood of by the condition .we see that in coordinates the null geodesics are tangent to the axis , and outside the singularity they are oblique .the lightcones are stretched as approaching , until they become degenerate ( fig .[ causal-structure-rn.pdf ] ) .we have seen that the lightcones at two distinct events have the same topology around their origins .that is , for any two events and there are two open sets and , and a _ homeomorphism _ ( continuous bijective function ) , so that . butthe function ca nt be always chosen to be a _ diffeomorphism _( differentiable bijective function whose inverse is differentiable ) .so , lightcones are not always diffeomorphic around their origins with the other lightcones .figure [ lightcones ] represents various cases of lightcones .[ lightcones ] * a * represents a nondegenerate lightcone , associated to a nondegenerate metric , or to a metric that is degenerate in an isotropic manner ( obtained by rescaling a nondegenerate metric ) . fig .[ lightcones ] * b * and * c * represent degenerate lightcones associated to metrics degenerate in spacelike ( sections [ s_causal_structure_big_bang ] , [ s_causal_structure_schw ] ) , respectively timelike directions ( section [ s_causal_structure_rn ] ) .the fact that lightcones are at least topologically the same around their origins allows the causal structure to be recovered from the metric not only when the metric is nondegenerate . 
in the casesjust described , when the metric is degenerate only along a subset so that is dense in , the causal structure is determined at the points where the metric is nondegenerate , and extends by continuity to the entire spacetime .the examples analyzed in the previous sections suggest that the causal structure can be seen as more fundamental , at least when the metric is allowed to become degenerate .the importance that the causal structure is maintained even at singularities can be seen from , where it has been shown that big - bang and black hole singularities are compatible with global hyperbolicity , which allows the time evolution of the fields in spacetime .these results explain the success of singular semiriemannian geometry and singular general relativity to the problem of singularities , by the fact that the causal structure is not broken at singularities , and suggests to reconstruct general relativity starting from the causal structure .o. c. stoica .http://www.degruyter.com/view/j/auom.2012.20.issue-2/v10309-012-0050-3/v10309-012-0050-3.xml[spacetimes with singularities ] . ,20(2):213238 , july 2012 .http://arxiv.org/abs/1108.5099[arxiv:gr-qc/1108.5099 ] .
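both the schwarzschild and the reissner - nordström analyses above reduce to the same elementary step : at each point of a two - dimensional ( time , radial ) section one solves a quadratic equation for the slope of the null directions . the sketch below implements that step ; since the degenerate coordinate expressions are not reproduced here , it is exercised on the familiar exterior schwarzschild block , which is only an illustrative stand - in ( at a locus where the block itself degenerates the quadratic collapses as well ) .

```python
import numpy as np

def null_slopes(g_tt, g_tr, g_rr):
    """slopes s = dr/dt of the null directions of the 2-d metric block
    [[g_tt, g_tr], [g_tr, g_rr]], from  g_tt + 2 g_tr s + g_rr s^2 = 0."""
    disc = g_tr ** 2 - g_rr * g_tt
    if disc < 0:
        raise ValueError("no real null directions: the block is not lorentzian")
    root = np.sqrt(disc)
    return ((-g_tr - root) / g_rr, (-g_tr + root) / g_rr)

# illustrative example: the (t, r) block of the exterior schwarzschild metric,
# ds^2 = -(1 - 2m/r) dt^2 + dr^2 / (1 - 2m/r), in units with G = c = 1.
m = 1.0
for r in (10.0, 4.0, 2.5):
    f = 1.0 - 2.0 * m / r
    slopes = null_slopes(-f, 0.0, 1.0 / f)
    print("r = %4.1f   null slopes dr/dt = %s" % (r, np.round(slopes, 3)))
```

as expected , the two slopes close in on each other as the horizon is approached , which is the same qualitative behaviour read off from the figures above for the degenerate coordinates .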
in general relativity the metric can be recovered from the structure of the lightcones together with a measure giving the volume element. since the causal structure seems to be simpler than the lorentzian manifold structure, this suggests that it is the more fundamental of the two. but there are cases when a seemingly healthy causal structure and measure determine a singular metric. here it is shown that this is not a bug but a feature, because big-bang and black hole singularities are instances of this situation. while the metric is singular at such singularities, the causal structure and the measure are not special there in any explicit way. therefore, considering the causal structure more fundamental than the metric provides a more natural framework for dealing with spacetime singularities.
quantum key distribution is rapidly emerging as an elegant application of quantum information theory with immense practical value . the advent of quantum computing compromises classical encryption schemes which are dependent on computational difficulty for security .fortunately , quantum information theory solves the exact problem it creates .if a transmitter , alice , wants to exchange a message with a receiver , bob , then the fundamental principles of quantum mechanics allow them to generate a key that can not be obtained by an eavesdropper , eve [ 1 - 3 ] . in the theoretic framework of bb84, alice sends a sequence of single photon pulses to bob .these photons are prepared in randomly chosen orthogonal bases . in the receiving lab , bob has two bases in which to measure the photon and randomly alternates between them .if eve tries measuring alice s photon and then sending the result of her measurement to bob , the eavesdropper will introduce errors into the key , since she does not know in which basis the photon is being sent nor does she know in which basis bob will measure .alice and bob can then use these errors to detect the eavesdropper s presence and determine the security of the key [ 4 ] .however , in many experimental settings , alice does not have a true single photon source , so she sends weak laser pulses ( wlp ) instead .this coherent light photon number probability follows a poisson distribution .the probability of a pulse containing photons is where is the mean photon number which will be taken to be a positive number less than one to avoid pulses with more than one photon .however , multiple photon pulses will still occur with probability .this exposes the scheme to the photon number splitting ( pns ) attack . to perform the pns attack, eve replaces the high loss channel that alice and bob are using with a lossless channel .eve then performs a quantum non - demolition ( qnd ) measurement on each pulse to obtain number information without perturbing the bases in which the information is encoded .when she determines a pulse with a single photon is in the line , eve simulates the loss of the original line by blocking a fraction of these pulses . when eve observes a pulse that has multiple photons , she splits the pulse and stores a photon in a quantum memory .eve then sends the rest of the pulse to bob .after alice and bob perform public discussion and announce the bases used for each pulse , eve can retrieve the photons from her quantum memory and obtain a significant fraction of the key without being detected by alice and bob [ 5 - 9 ] .in general , all losses must be attributed to eavesdropping and privacy amplification methods are used to distill a smaller secret key from the raw key generated via the bb84 protocol . in single photon bb84 , the distilled secure key rate has approximately linear dependence on the transmittivity .however , for wlp bb84 , the pns attack reduces the secure key rate to approximately quadratic dependence on the channel s transmittivity [ 10 ] . in a typical high loss situation, this presents a major problem for the key rate .one solution is to use coherent decoy states , a technique which has met with multiple experimental successes [ 11 - 18 ] .another alternative is to use entanglement to effectively trump eve s use of the pns attack .this is the impetus for the development of our entanglement enhanced scheme for bb84 . 
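as a quick numerical illustration of the multiphoton problem described above, the poisson statistics of a weak laser pulse can be tabulated in a few lines; this is only a sketch, and the mean photon number used below is an assumed illustrative value, not one quoted in the text.

```python
import math

def poisson(n, mu):
    """probability that a weak laser pulse with mean photon number mu contains n photons."""
    return math.exp(-mu) * mu**n / math.factorial(n)

mu = 0.1  # assumed mean photon number, kept below one as in the text
p0 = poisson(0, mu)        # empty pulse
p1 = poisson(1, mu)        # single-photon pulse
p_multi = 1.0 - p0 - p1    # two or more photons: exposed to the pns attack

print(f"p(n=0)  = {p0:.4f}")
print(f"p(n=1)  = {p1:.4f}")
print(f"p(n>=2) = {p_multi:.4f}")
# fraction of *nonempty* pulses that carry more than one photon
print(f"multiphoton fraction of nonempty pulses = {p_multi / (1 - p0):.4f}")
```

even with a mean photon number well below one, a few percent of the nonempty pulses carry more than one photon, and it is exactly these pulses that the pns attack exploits.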
for convenience and clarity, we will refer to this entanglement enhanced wlp bb84 as ee bb84 .most entanglement based quantum key distribution schemes rely on violations of bell s inequalities to ensure security [ 19 ]. however , this is not the strategy that our ee bb84 employs here .instead , we detect eve by introducing an entangled quantum state into the system that is not used to transmit key bits but only to detect eve s qnd measurements . in figure 1we schematically illustrate how such an entanglement ancilla may be generated .this allows for a recovery of an approximately linear dependence on transmittivity for the key rate .ee bb84 shares this advantage with coherent decoy state protocols as well as schemes that utilize strong phase reference pulses to eliminate eve s ability to send bob vacuum signals [ 10 ] .in our ee bb84 , alice and bob randomly alternate between implementing wlp bb84 and an entangled decoy state ancilla .the entangled states are not primarily used to distribute key bits . instead , alice and bob use the entangled states to detect the presence of an eavesdropper .alice sends the entangled pulses randomly mixed with the weak laser pulses to guard against the use of a qnd measurement device .when eve measures photon number in the pns attack on unaugmented wlp bb84 , she avoids detection .the qnd measurement collapses the coherent state into a number state , which bob can not distinguish from the coherent state .this is related to the fact that the number operator commutes with the prepared bases .however , phase and number do not commute , as they are conjugate variables .therefore , alice and bob can use the phase information provided by phase entangled decoy states to detect eve whenever she chooses an attack scheme that involves measuring number . in the entangled state mode , we generate two time - entangled photons using spontaneous parametric down conversion ( spdc ) .alice measures one photon in the pair to obtain an accurate time of emission for the other photon .this combination of pump laser , spdc , and detection of one of the pair of photons gives us a heralded single photon source .as in bb84 , the heralded photon is randomly assigned either a horizontal , vertical , diagonal , or anti - diagonal polarization .then , the heralded photon is sent to a beam splitter which leads to the state .half of the state travels down the longer arm , while the other half travels down the shorter arm .the halves recombine at the second beam splitter where there is a probability for the state to leave the quantum channel ( see figure 1 ) .a detector will distinguish these possibilities and allows them to be ignored .however , when the pulse does exit into the quantum channel , it is an entangled pulse , where half is delayed in time due to extra path length of the long arm .when bob receives the test pulse from alice in his lab , he detects the pulse by sending it through a beam splitter which puts the pulse through long and short arms identical to the setup in alice s lab .the pulse then encounters the final beam splitter . 
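what happens at this final beam splitter can be captured by a toy two-path calculation (a sketch under simplifying assumptions: ideal 50/50 beam splitters, no channel dephasing, and the modelling choice that eve's number measurement fully destroys the coherence between the two path components; none of these details are taken verbatim from the text).

```python
import numpy as np

# basis for the middle time bin: |short-long>, |long-short>
psi = np.array([1.0, 1.0]) / np.sqrt(2)        # coherent superposition leaving alice's interferometer
rho_coherent = np.outer(psi, psi)              # no eavesdropper, ideal case
rho_dephased = np.diag(np.diag(rho_coherent))  # coherence destroyed by a number (which-path) measurement

# bob's final 50/50 beam splitter maps the two paths onto a bright and a dark port
bright = np.array([1.0, 1.0]) / np.sqrt(2)
dark = np.array([1.0, -1.0]) / np.sqrt(2)

def port_prob(rho, port):
    """probability of detecting the photon in the given output port."""
    return float(np.real(port @ rho @ port))

for label, rho in [("no eve (coherent)", rho_coherent), ("eve's qnd (dephased)", rho_dephased)]:
    print(f"{label}: bright = {port_prob(rho, bright):.2f}, dark = {port_prob(rho, dark):.2f}")
```

the coherent case sends the middle time bin entirely to the bright port, while the dephased case splits it evenly; this contrast is exactly what the hypothesis test below exploits.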
in this process, there are three possibilities for the pulse .the strong time information from the photon initially detected by alice allows for the differentiation between these three outcomes .one possibility is that the photon takes the short path both times , labeled ss in figure [ fig : chernoff_experimentpaths ] .another outcome is that the photon takes the long path both times , labeled as ll .these two possibilities do not yield strong information about eve s activities .however , the other possibility is that the photon travels down one long path and one short path , labeled ls or sl .this possibility can detect the use of a quantum non - demolition measurement device .the photon s self - interference will result in a bright port and a dark port in bob s detection apparatus . yet, if eve is measuring number for the pns attack , then bob s dark port will not be completely dark .obviously , it will not be completely dark even without an eavesdropper , since a practical system will have imperfections and not identically match the ideal case .nevertheless , eve s actions will still introduce additional error , which can be used to detect her presence .in our setup , bob s detection scheme for the entangled pulses is different from his detection scheme for the signal states .this is less than ideal , because if the mode that alice and bob are operating in at any given time is not random , then the security of the entire protocol is compromised .if eve can predict whether a signal state or a decoy state is being sent , then she can adjust her attack plan accordingly and render the entangled states useless .therefore , it is critical that eve can not distinguish between the entangled states and the signal states . additionally , alice and bob must randomly alternate between the signal and decoy modes .felicitously , the decoy mode does not need to be run with very high frequency in order to detect the use of a quantum non - demolition attack .nevertheless , since alice and bob must each run separate modes for the signal states and the decoy states , a fraction of the pulses they exchange will be worthless .alice and bob runs wlp bb84 protocol with frequencies and respectively .they implement the entangled state decoy ancilla with frequencies and .alice and bob exchange key information with frequency , and the entangled decoy pulses yield information about the presence of a quantum non - demolition measurement device with frequency . with frequency ,alice and bob are operating in incompatible modes , and these exchanges will provide no valuable information , because bob does not obtain polarization information when measuring phase . since and are much larger than and , this inefficiency is undesirable , but ultimately does not significantly diminish the practicality of the scheme .nevertheless , it is also indicative of the trade - off in quantum cryptography between speed and security .we use chernoff distance and symmetric hypothesis testing to calculate the confidence in which eve is known to be listening or not listening . for eebb84 the null hypothesis is that eve is not measuring number using a qnd measurement device , and the alternative hypothesis is that eve is using such a device to measure number . 
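before quantifying that hypothesis test, the mode bookkeeping described above can be tabulated in a short sketch; the signal-mode frequencies below are assumed illustrative values, not numbers taken from the text.

```python
# assumed mode frequencies (illustrative values only)
f_alice_signal = 0.9   # alice sends wlp bb84 signal states with this frequency
f_bob_signal = 0.9     # bob measures in the polarization (signal) mode with this frequency

p_key = f_alice_signal * f_bob_signal                 # both in signal mode: key bits exchanged
p_decoy = (1 - f_alice_signal) * (1 - f_bob_signal)   # both in decoy mode: qnd detection
p_wasted = 1 - p_key - p_decoy                        # incompatible modes: no useful information

print(f"key-generating fraction   : {p_key:.2f}")
print(f"eavesdropper-test fraction: {p_decoy:.2f}")
print(f"wasted fraction           : {p_wasted:.2f}")
```

only the mismatched fraction is wasted, and it stays modest as long as both parties spend most of their time in the signal mode.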
for the null hypothesis ,the probability that the photon will enter the bright port is , and there is probability for the photon to enter the dark port .when eve is acting on the system in the alternative hypothesis , there is a probability for the photon to enter bob s light port and a probability for it to enter the dark port .furthermore , the maximum probability of a false positive or of choosing the wrong hypothesis after trials is : where is the chernoff distance given by the equation : where and .we use equations [ eqn : error_probability ] and [ eqn : chernoffdistance ] to calculate the number of trials needed for a given maximum uncertainty : this analysis determines the number of trials necessary for a given confidence of detecting an eavesdropper for ee bb84 and coherent decoy states .in an ideal scenario , with no dephasing from the environment , we can easily construct the probabilities of the two hypotheses .for the null hypothesis , the probability that the photon will enter the bright port is , and there is probability for the photon to enter the dark port .when eve is acting on the system in the alternative hypothesis , there is an equal probability , , for the photon to enter either of bob s detectors .this results in a chernoff distance of .69 .therefore , if we define a trial to be a photon sent from alice and detected by bob , the number of trials to detect eve at the 99% confidence level ( ) requires an exchange of a maximum of just 6 photons between alice and bob .we are only investigating the photons that reach bob with the proper time information .thus , unlike the coherent decoy states , loss is not the most significant quantity to investigate quantitatively . instead , dephasing is our primary concern .the environment can affect the entangled decoy state by changing the phase information in it .since the two states are sent down the line close together , it might be assumed that any environmental factor that would affect one half of the state , would affect the other and therefore the total phase information in the state would remain unchanged .however , since in our framework , dephasing is what would affect the scheme the most , we still want to investigate its effect on the chernoff distance .when dephasing is included , the problem turns into that of determining whether a coin is fair .the question becomes : how many trials does it take to be confident that eve is there or not ? when dephasing is present , the probability for a photon to be detected in the dark port increases .it becomes more difficult to tell eve apart from the environment . with complete dephasingthe probability to find a photon in either the bright port or the dark port becomes 50 - 50 .figure [ fig : number_vs_loss ] shows how many trials are needed to have a 99% confidence of determining if eve is listening or not versus the probability of finding a photon in the dark port ( dephasing ) regardless of eve .the alternative to ee bb84 is the popular coherent decoy state solution . in the pns attack, eve assumes alice s photon source has a constant mean photon number .however , if alice randomly alters the mean photon number of her source in a way that is known to her , but not perceivable to eve , then she can detect the pns attack .this is the idea that motivates coherent decoy states .pulses from the source with a higher mean photon number will contain a greater fraction of multi - photon pulses , which eve will not block . 
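the chernoff analysis above can be reproduced numerically before moving on to the decoy-state comparison. the stripped equations are assumed here to be the standard symmetric hypothesis testing bound, with error probability at most one half of exp(-n times the chernoff distance) after n trials; under that assumption, the ideal dephasing-free numbers quoted above (a chernoff distance of about .69 and roughly 6 photons for 99% confidence) come out directly.

```python
import math

def chernoff_distance(p0, p1, steps=2000):
    """chernoff distance between two discrete distributions:
    xi = -log min_{0<s<1} sum_k p0[k]**s * p1[k]**(1-s)."""
    best = float("inf")
    for i in range(1, steps):
        s = i / steps  # s sampled strictly inside (0,1); the infimum over [0,1] is approached as the grid refines
        total = sum(a**s * b**(1 - s) for a, b in zip(p0, p1) if a > 0 and b > 0)
        best = min(best, total)
    return -math.log(best)

def trials_needed(xi, error=0.01):
    """smallest n with (1/2)*exp(-n*xi) <= error."""
    return math.ceil(math.log(1 / (2 * error)) / xi)

# ideal, dephasing-free case described in the text
p_no_eve = [1.0, 0.0]   # (bright port, dark port) probabilities without an eavesdropper
p_eve = [0.5, 0.5]      # with eve's qnd number measurement

xi = chernoff_distance(p_no_eve, p_eve)
print(f"chernoff distance = {xi:.2f}")                      # about 0.69
print(f"trials for 99% confidence = {trials_needed(xi)}")   # about 6
```

replacing the ideal null-hypothesis distribution with a dephased one (more weight in the dark port) shrinks the chernoff distance and drives up the required number of trials, which is the behaviour plotted in fig. [fig:number_vs_loss].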
therefore , when alice and bob discuss the protocol , alice can compare the loss in the line for when different mean photon numbers were used .if there is a marked difference between the loss for the decoy states and the loss for the signal states , then alice can conclude that eve is using the photon number splitting attack [ 22 - 26 ] .we treat coherent decoy states in a similar manner to ee bb84 , but instead of dephasing being the key quantity of interest , loss is , because eve hides in the loss of the system .the coherent decoy state solution uses two ( or more ) attenuated coherent sources with different average photon numbers and .alice determines the percentage of each of these states that is sent down the channel .if alice sends bob a total of 100 pulses , of which 70 ( 70% ) have an average photon number of and 30 ( 30% ) have and we assume a loss of 50% , then bob should receive 35 ( 70% ) pulses with an average photon number of and 15 ( 30% ) with . in this scenario , we define loss as losing the whole pulse . loss affects the total number of photons received , but not the percentage of and .eve performs a pns attack by replacing all or part of the lossy transmission line with a lossless line and altering the percentage of and sent through to bob . in this examplewe assume eve has replaced the entire transmission line with a lossless one .eve sits on the line and measures number until she finds a pulse containing more than one photon and then she takes one of these photons and lets the other pass .she blocks enough of the single photon pulses such that the initial loss is preserved .if , the pulse will have more photons on average than the other and therefore will be allowed to pass through to eve more than the other .so , in the presence of eve , if alice sends 100 pulses , of which 70 ( 70% ) have an average photon number of and 30 ( 30% ) have and we assume a loss of 50% which eve will take over , then bob would still receive a total of 50 pulses , but the percentages of pulses will be less than and the percentage of pulses will be greater than , which is not identical to what alice sent . here , we are looking at the very worst possible case of eavesdropping . 
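the decoy-state bookkeeping of the example above can be sketched in a few lines of arithmetic. the 70/30 split and the 50% loss are taken from the text; the two mean photon numbers are assumed illustrative values, and eve is modelled crudely as forwarding every multiphoton pulse while throttling the remaining pulses uniformly so that the overall loss still looks like 50%.

```python
import math

def p_multi(mu):
    """probability that a poissonian pulse with mean mu carries 2 or more photons."""
    return 1.0 - math.exp(-mu) * (1.0 + mu)

# values from the worked example in the text
n_total, frac_mu1, loss = 100, 0.70, 0.50
# assumed mean photon numbers (illustrative only)
mu1, mu2 = 0.5, 0.1

sent = {"mu1": n_total * frac_mu1, "mu2": n_total * (1 - frac_mu1)}
print("no eve :", {k: v * (1 - loss) for k, v in sent.items()})  # 35 / 15, same 70/30 split

# toy pns model: eve forwards every multiphoton pulse and forwards the remaining
# pulses with a common probability t chosen to reproduce the original 50% loss
multi = {k: sent[k] * p_multi(mu) for k, mu in [("mu1", mu1), ("mu2", mu2)]}
rest = {k: sent[k] - multi[k] for k in sent}
t = (n_total * (1 - loss) - sum(multi.values())) / sum(rest.values())

received = {k: multi[k] + t * rest[k] for k in sent}
total = sum(received.values())
print("with eve:", {k: round(v, 2) for k, v in received.items()},
      "-> mu1 share", round(received["mu1"] / total, 3))  # skewed away from 0.70
```

the received split drifts away from 70/30 because the higher-mean source contributes proportionally more multiphoton pulses, which is exactly the signature alice looks for.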
we are assuming that eve has replaced all of the noise with a noiseless channel .alice looks at the percentage of and received by bob and compares it to the percentages she sent .if she can tell the difference between them with an acceptable confidence , then eve is detected .this is treated in the same way we treated ee bb84 above .the chernoff distance will give us a metric to determine the presence of eve and the number of pulses needed to be 99% confident of the presence of an eavesdropper is given in figure [ fig : d_number_vs_loss ] .the efficiency of coherent decoy states improves as loss rises because it gives eve more space to sift the photons , but as the loss becomes too high , then obviously transmission becomes difficult for any scheme .the crux of the coherent decoy state solution is that eve manipulates photon number statistics in a way that alice can detect .however , if eve can gain information , which allows her to not alter the statistics in a detectable manner , then the coherent decoy state technique will not be a successful solution .this situation would obviously justify the implementation of ee b84 , yet ee bb84 is advantageous in some other scenarios as well .the parameters and performance of ee bb84 and coherent decoy states can vary greatly depending on environment and choice of variables .for the examples in figure [ fig : d_number_vs_loss ] , the coherent decoy state parameters were chosen such that the percentage of pulses is and the percentage of pulses is , and the dephasing for the ee bb84 scheme was set to and for the two lines respectively . it can be seen that for loss of less than and dephasing less than the ee bb84 scheme outperforms the coherent decoy state scheme by requiring fewer pulses . at 50% lossthe ee bb84 scheme would need to send about a third the number of pulses as the coherent decoy state to detect an eavesdropper with 99% confidence .coherent decoy states are a popular solution to the photon number splitting attack for a reason .they achieve linear scaling with transmittivity .additionally , coherent decoy states can be used to distill a secret key without bob alternating detection modes . however , in ee bb84 , bob must alternate between a polarization detection mode and a phase detection .this gives coherent decoy states an advantage over the present version of ee bb84 . at the moment, ee bb84 does not possess general superiority to coherent decoy states .therefore , the appeal of ee bb84 is that it has some situational advantages and approaches the problem of the photon number splitting attack in a manner strategically different from that of coherent decoy states .the general strategy of coherent decoy states is to improve the secret key transmission rate by focusing on limiting the amount of information that eve can possibly obtain while still avoiding detection .meanwhile , the strategy behind ee bb84 is direct detection of an eavesdropper that might be performing quantum non - demolition measurements .the strategy of ee bb84 is not superior to that of coherent decoy states .it is simply different , and this difference helps generate situations where the ee bb84 scheme has specific advantages , like the case when the operation time for the key transmission is not long enough for decoy states to be a robust defense . in cases such as this , ee bb84 has an advantage because of its ability to determine the use of quantum non - demolition measurement with a rather meager number of pulses .
we develop an improvement to the weak laser pulse bb84 scheme for quantum key distribution which uses entanglement to strengthen the security of the scheme and enhance its resilience to the photon-number-splitting attack. the protocol relies on the non-commutation of phase and number to detect an eavesdropper performing quantum non-demolition measurements of photon number. the potential advantages and disadvantages of this scheme are compared with those of the coherent decoy state protocol.
letting be either the real or complex field , the _ synthesis operator _ of a sequence of vectors in an -dimensional hilbert space over is , . viewing as , is the matrix whose columns are the s .note that here and throughout , we make no notational distinction between the vectors themselves and the synthesis operator they induce .the vectors are said to be a _ frame _ for if there exists _ frame bounds _ such that for all . in this finite - dimensionalsetting , the optimal frame bounds and of an arbitrary are the least and greatest eigenvalues of the _ frame operator _ : respectively . here , is the linear functional , . in particular , we have that is a frame if and only if the s span , which necessitates .frames provide numerically stable methods for finding overcomplete decompositions of vectors , and as such are useful tools in various signal processing applications .indeed , if is a frame , then any can be decomposed as where is a _ dual frame _ of , meaning it satisfies .the most often - used dual frame is the _canonical _ dual , namely the pseudoinverse .note that computing a canonical dual involves the inversion of the frame operator .as such , when designing a frame for a given application , it is important to retain control over the spectrum of . here and throughout , such spectra are arranged in nonincreasing order , with the optimal frame bounds and being and , respectively . of particular interestare _ tight frames _ , namely frames for which .note this occurs precisely when for all , meaning . in this case , the canonical dual is given by , and becomes an overcomplete generalization of an orthonormal basis decomposition .tight frames are not hard to construct : we simply need the rows of to be orthogonal and have constant squared norm .however , this problem becomes significantly more difficult if we further require the the columns of have prescribed lengths .in particular , much attention has been paid to the problem of constructing _ unit norm tight frames _ ( untfs ) : tight frames for which for all . here , since , we see that is necessarily .untfs are known to be optimally robust with respect to additive noise and erasures .moreover , all unit norm sequences satisfy the zeroth - order _ welch bound _ , which is achieved precisely when is a untf ; a physics - inspired interpretation of this fact leading to an optimization - based proof of existence of untfs is given in .we further know that such frames are commonplace : when , the manifold of all real untfs , modulo rotations , is known to have dimension .essentially , when , this manifold is zero - dimensional since the only untfs are regular simplices ; each additional unit norm vector injects additional degrees of freedom into this manifold , in accordance with the dimension of the unit sphere in .local parametrizations of this manifold are given in .the _ paulsen problem _ involves projecting a given frame onto this manifold , and differential calculus - based methods for doing so are given in . in light of these facts , it is surprising to note how few explicit constructions of untfs are known .indeed , a constructive characterization of all untfs is only known for . for arbitrary and , there are only two known general construction techniques : truncations of discrete fourier transform matrices known as _ harmonic frames _ and a sparse construction method dubbed _ spectral tetris _ . 
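as a concrete illustration of the harmonic frames just mentioned, the following sketch (with m = 3 and n = 5 chosen by us) keeps m rows of the n-point discrete fourier transform matrix and rescales the columns to unit norm; the result is a unit norm tight frame, i.e. its frame operator is n/m times the identity.

```python
import numpy as np

def harmonic_frame(m, n):
    """m x n synthesis matrix: m rows of the n-point dft matrix, columns scaled to unit norm."""
    j = np.arange(m).reshape(-1, 1)   # which rows of the dft matrix we keep
    k = np.arange(n).reshape(1, -1)
    return np.exp(2j * np.pi * j * k / n) / np.sqrt(m)

m, n = 3, 5
F = harmonic_frame(m, n)

lengths = np.linalg.norm(F, axis=0)   # should all equal 1
S = F @ F.conj().T                    # frame operator
print("unit norms            :", np.allclose(lengths, 1.0))
print("tight with bound n/m  :", np.allclose(S, (n / m) * np.eye(m)))
```

the same check applies to any other untf construction: unit column norms together with a frame operator proportional to the identity.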
to emphasize this point, we note that there are only a small finite number of known constructions of untfs , despite the fact that an infinite number of such frames exist even modulo rotations , their manifold being of dimension .the reason for this is that in order to construct a untf , one must solve a large system of quadratic equations in many variables : the columns of must have unit norm , and the rows of must be orthogonal with constant norm . in this paper , we show how to explicitly construct all untfs , and moreover , how to explicitly construct every frame whose frame operator has a given arbitrary spectrum and whose vectors are of given arbitrary lengths .to do so , we build on the existing theory of majorization and the schur - horn theorem .to be precise , given two nonnegative nonincreasing sequences and , we say that _ majorizes _ , denoted , if viewed as discrete functions over the axis , having majorize means that the total area under both curves is equal , and that the area under is distributed more to the left than that of .a classical result of schur states that the spectrum of a self - adjoint positive semidefinite matrix necessarily majorizes its diagonal entries .a few decades later , horn gave a nonconstructive proof of a converse result , showing that if , then there exists a self - adjoint matrix that has as its spectrum and as its diagonal .these two results are collectively known as the schur - horn theorem : there exists a positive semidefinite self - adjoint matrix with spectrum and diagonal entries and only if . over the years , several methods for explicitly constructing horn s matrices have been found ; see for a nice overview .many current methods rely on givens rotations , while others involve optimization . with regards to frame theory ,the significance of the schur - horn theorem is that it completely characterizes whether or not there exists a frame whose frame operator has a given spectrum and whose vectors have given lengths ; this follows from applying it to the _ gram matrix _ , whose diagonal entries are the values and whose spectrum is a zero - padded version of the spectrum of the frame operator .indeed , majorization inequalities arose during the search for tight frames with given lengths , and the explicit connection between frames and the schur - horn theorem is noted in .this connection was then exploited to solve various frame theory problems , such as frame completion . in this paper , we follow the approach of in which majorization is viewed as the end result of the repeated application of a more basic idea : eigenvalue interlacing . to be precise , a nonnegative nonincreasing sequence _ interlaces _ on another such sequence , denoted , provided under the convention , we have that if and only if for all .interlacing arises in the context of frame theory by considering partial sums of the frame operator .to be precise , given any sequence of vectors in , then for every , we consider the partial sequence of vectors .note that and the frame operator of is let denote the spectrum of . 
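as a numerical aside before returning to the interlacing argument, the majorization statement above is easy to verify for any concrete frame (the random frame below is our own illustration): the spectrum of the frame operator, padded with zeros to length n, majorizes the squared vector lengths, which form the diagonal of the gram matrix.

```python
import numpy as np

def majorizes(lam, mu, tol=1e-10):
    """check that the nonincreasing rearrangement of lam majorizes that of mu:
    equal total sums, and every partial sum of lam dominates that of mu."""
    lam, mu = np.sort(lam)[::-1], np.sort(mu)[::-1]
    if not np.isclose(lam.sum(), mu.sum()):
        return False
    return bool(np.all(np.cumsum(lam) >= np.cumsum(mu) - tol))

rng = np.random.default_rng(0)
m, n = 3, 5
F = rng.standard_normal((m, n))                 # arbitrary real frame: 5 vectors in R^3

spectrum = np.linalg.eigvalsh(F @ F.T)          # eigenvalues of the frame operator
padded = np.concatenate([spectrum, np.zeros(n - m)])
lengths_sq = np.diag(F.T @ F)                   # squared lengths = diagonal of the gram matrix

print("padded spectrum majorizes squared lengths:", majorizes(padded, lengths_sq))
```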
for any ,gives that and so a classical result involving the addition of rank - one positive operators gives that .moreover , if for all , then for any such , note that as increases , the gram matrix grows in dimension but the frame operator does not since but .we call a sequence of interlacing spectra that satisfy a sequence of _ eigensteps _ : [ definition.eigensteps ] given nonnegative nonincreasing sequences and , a sequence of _ eigensteps _ is a doubly - indexed sequence of sequences for which : 1 .the initial sequence is trivial : 2 .the final sequence is : 3 .the sequences interlace : 4 .the trace condition is satisfied : as we have just discussed , every sequence of vectors whose frame operator has the spectrum and whose vectors have squared lengths generates a sequence of eigensteps . in the next section, we adapt a proof technique of to show the converse is true .specifically , theorem [ theorem.necessity and sufficiency of eigensteps ] characterizes and proves the existence of sequences of vectors that generate a given sequence of eigensteps . in section 3, we then use this characterization to provide an algorithm for explicitly constructing all such sequences of vectors ; see theorem [ theorem.explicit frame construction ] . though nontrivial , this algorithm is nevertheless straightforward enough to be implemented by hand in small - dimensional examples , involving only arithmetic , square roots and matrix multiplication .we will see that once the eigensteps have been chosen , the algorithm gives little freedom in picking the frame vectors themselves .that is , modulo rotations , the eigensteps are the free parameters when designing a frame whose frame operator has a given spectrum and whose vectors have given lengths .the significance of these methods is that they explicitly construct every possible finite frame of a given spectrum and set of lengths .computing the gram matrices of such frames produces every possible matrix that satisfies the schur - horn theorem ; previous methods have only constructed a subset of such matrices .moreover , in the special case where the spectrums and lengths are constant , these methods construct every equal norm tight frame .this helps narrow the search for frames we want for applications : tight gabor , wavelet , equiangular and grassmannian frames .the purpose of this section is to prove the following result : [ theorem.necessity and sufficiency of eigensteps ] for any nonnegative nonincreasing sequences and , every sequence of vectors in whose frame operator has spectrum and which satisfies for all can be constructed by the following process : 1 .pick eigensteps as in definition [ definition.eigensteps ] .2 . 
for each , consider the polynomial : take any such that .for each , choose any such that for all , where denotes the orthogonal projection operator onto the eigenspace of the frame operator of .the limit in exists and is nonpositive .conversely , any constructed by this process has as the spectrum of and for all .moreover , for any constructed in this manner , the spectrum of is for all .we note that as it stands , theorem [ theorem.necessity and sufficiency of eigensteps ] is not an easily - implementable algorithm , as step a requires one to select a valid sequence of eigensteps not an obvious feat while step b requires one to compute orthonormal eigenbases for each .these concerns will be addressed in the following section .we further note that theorem [ theorem.necessity and sufficiency of eigensteps ] only claims to construct all possible such , sidestepping the issue of whether such an actually exists for a given and .this issue is completely resolved by the schur - horn theorem .indeed , in the case where , shows that there exists a sequence of vectors in whose frame operator has spectrum and which satisfies for all if and only if . in the case where , a similar argument shows that such a sequence of vectors exists if and only if and for all . as step b of theorem [ theorem.necessity and sufficiency of eigensteps ]can always be completed for any valid sequence of eigensteps , these majorization conditions in fact characterize those values and for which step a can successfully be performed ; we leave a deeper exploration of this fact for future work . in order to prove theorem [ theorem.necessity and sufficiency of eigensteps ] ,we first obtain some supporting results .the following lemma gives a first taste of the connection between eigensteps and our frame construction problem : [ lemma.eigensteps yield desired properties ] let and be nonnegative and nonincreasing , and let be any corresponding sequence of eigensteps as in definition [ definition.eigensteps ] .if a sequence of vectors has the property that the spectrum of the frame operator of is for all , then the spectrum of is and for all .definition [ definition.eigensteps](ii ) immediately gives that the spectrum of is indeed , as claimed .moreover , for any , definition [ definition.eigensteps](iv ) gives letting in gives , while for , considering at both and gives the next result gives conditions that a vector must satisfy in order for it to perturb the spectrum of a given frame operator in a desired way , and was inspired by the proof of theorem 4.3.10 in .[ theorem.necessary lengths of projections ] let be an arbitrary sequence of vectors in and let denote the eigenvalues of the corresponding frame operator . for any choice of in , let .then for any , the norm of the projection of onto the eigenspace is given by where and denote the characteristic polynomials of and , respectively .for the sake of notational simplicity , let , , , , , , and let for all .we will also use to denote the identity matrix , and its dimension will be apparent from context . to obtain the result ,we will express the characteristic polynomial of the gram matrix in terms of the characteristic polynomial of the gram matrix .written in terms of their standard matrix representations , we have , and so to compute the determinant of , it is helpful to compute the singular value decomposition , and note that for any not in the diagonal of , the following matrix has unimodular determinant : subtracting from and conjugating by yields since then . 
as such , substituting into and again noting gives since has unimodular determinant , implies }\\ = \mathrm{det}(x{\mathrm{i}}-\sigma^*\sigma)(x-\|f\|^2-f^*fv(x{\mathrm{i}}-\sigma^*\sigma)^{-1}v^*f^*f).\ ] ] to simplify , note that since is unitary , } = \det(x{\mathrm{i}}-\sigma^*\sigma).\ ] ] moreover , letting denote the diagonal entry of yields substituting and into gives to continue simplifying , let denote the standard basis element. then implies that for any , where are the singular values of .since for any , implies making the change of variables in and substituting the result into gives here , the restriction that follows from the previously stated assumption that is not equal to any diagonal entry of ; the set of these entries is if and is if .now recall that and are the degree characteristic polynomials of and , respectively , while is the degree characteristic polynomial of and is the degree characteristic polynomial of .we now consider these facts along with in two distinct cases : and . in the case where , we have that and .moreover , in this case the eigenvalues of are given by for all and for all , implying becomes in the remaining case where , we have , and for all , implying becomes we now note that and are equivalent .that is , regardless of the relationship between and , we have writing and then grouping the eigenvalues according to multiplicity gives as such , for any , } = -{\|{p_{\lambda}f}\|}^2\ ] ] yielding our claim . though technical , the proofs of the next two lemmas are nonetheless elementary , depending only on basic algebra and calculus .as such , these proofs are given in the appendix .[ lemma.interlacing and nonpositive limits ] if and are real and nonincreasing , then if and only if where and .[ lemma.equal limits ] if , , and are real and nonincreasing and where , and , then . with theorem [ theorem.necessary lengths of projections ] and lemmas [ lemma.eigensteps yield desired properties ] , [ lemma.interlacing and nonpositive limits ] and [ lemma.equal limits ] in hand , we are ready to prove the main result of this section .( ) let and be arbitrary nonnegative nonincreasing sequences , and let be any sequence of vectors such that the spectrum of is and for all .we claim that this particular can be constructed by following steps a and b. in particular , consider the sequence of sequences defined by letting be the spectrum of the frame operator of the sequence for all and letting for all .we claim that satisfies definition [ definition.eigensteps ] and therefore is a valid sequence of eigensteps .note conditions ( i ) and ( ii ) of definition [ definition.eigensteps ] are immediately satisfied . to see that satisfies ( iii ) , consider the polynomials defined by for all . in the special casewhere , the desired property ( iii ) that follows from the fact that the spectrum of the scaled rank - one projection is the value along with repetitions of , the eigenspaces being the span of and its orthogonal complement , respectively .meanwhile if , theorem [ theorem.necessary lengths of projections ] gives that implying by lemma [ lemma.interlacing and nonpositive limits ] that as claimed .finally , ( iv ) holds since for any we have having shown that these particular values of can indeed be chosen in step a , we next show that our particular can be constructed according to step b. 
as the method of step b is iterative , we use induction to prove that it can yield .indeed , the only restriction that step b places on is that , something our particular satisfies by assumption .now assume that for any we have already correctly produced by following the method of step b ; we show that we can produce the correct by continuing to follow step b. to be clear , each iteration of step b does not produce a unique vector , but rather presents a family of s to choose from , and we show that our particular choice of lies in this family . specifically , our choice of must satisfy for any choice of ; the fact that it indeed does so follows immediately from theorem [ theorem.necessary lengths of projections ] . to summarize , we have shown that by making appropriate choices , we can indeed produce our particular by following steps a and b , concluding this direction of the proof .( ) now assume that a sequence of vectors has been produced according to steps a and b. to be precise , letting be the sequence of eigensteps chosen in step a , we claim that any constructed according to step b has the property that the spectrum of the frame operator of is for all .note that by lemma [ lemma.eigensteps yield desired properties ] , proving this claim will yield our stated result that the spectrum of is and that for all . as the method of step b is iterative , we prove this claim by induction .step b begins by taking any such that . as noted above in the proof of the other direction ,the spectrum of is the value along with repetitions of .as claimed , these values match those of ; to see this , note that definition [ definition.eigensteps](i ) and ( iii ) give and so for all , at which point definition [ definition.eigensteps](iv ) implies .now assume that for any , the step b process has already produced such that the spectrum of is .we show that by following step b , we produce an such that has the property that is the spectrum of . to do this , consider the polynomials and defined by and pick any that satisfies , namely letting denote the spectrum of , our goal is to show that .equivalently , our goal is to show that where is the polynomial since and are the characteristic polynomials of and , respectively , theorem [ theorem.necessary lengths of projections ] gives : comparing and gives : implying by lemma [ lemma.equal limits ] that , as desired .as discussed in the previous section , theorem [ theorem.necessity and sufficiency of eigensteps ] provides a two - step process for constructing any and all sequences of vectors in whose frame operator possesses a given spectrum and whose vectors have given lengths . in stepa , we choose a sequence of eigensteps . in the end, the sequence will become the spectrum of the partial frame operator , where . due to the complexity of definition [ definition.eigensteps ] , it is not obvious how to sequentially pick such eigensteps .looking at simple examples of this problem , such as the one discussed in example [ example.5 in 3 ] below , it appears as though the proof techniques needed to address these questions are completely different from those used throughout this paper . as such, we leave the problem of parametrizing the eigensteps themselves for future work . in this section ,we thus focus on refining step b. to be precise , the purpose of step b is to explicitly construct any and all sequences of vectors whose partial - frame - operator spectra match the eigensteps chosen in step a. 
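since step a is precisely the part left unparametrized, it may help to see eigensteps read off from a concrete frame; the sketch below (with a random frame of our own choosing) computes the spectrum of each partial frame operator and checks the interlacing and trace conditions that define a sequence of eigensteps.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 5
F = rng.standard_normal((m, n))   # 5 vectors in R^3, our own example

def spectrum(cols):
    """nonincreasing spectrum of the partial frame operator built from the first `cols` vectors."""
    Fn = F[:, :cols]
    return np.sort(np.linalg.eigvalsh(Fn @ Fn.T))[::-1]

def interlaces(prev, curr, tol=1e-10):
    """rank-one-update interlacing: curr[0] >= prev[0] >= curr[1] >= prev[1] >= ..."""
    return all(curr[i] >= prev[i] - tol and (i + 1 >= len(curr) or prev[i] >= curr[i + 1] - tol)
               for i in range(len(prev)))

prev = np.zeros(m)   # trivial spectrum before any vector is added
for k in range(1, n + 1):
    curr = spectrum(k)
    trace_ok = np.isclose(curr.sum(), np.sum(np.linalg.norm(F[:, :k], axis=0) ** 2))
    print(f"n={k}: spectrum = {np.round(curr, 3)}, interlaces = {interlaces(prev, curr)}, trace ok = {trace_ok}")
    prev = curr
```

running the construction of this section in reverse, one would instead start from such a table of spectra and produce the vectors.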
the problem with step b of theorem [ theorem.necessity and sufficiency of eigensteps ] is that it is not very explicit . indeed for every , in order to construct we must first compute an orthonormal eigenbasis for .this problem is readily doable since the eigenvalues of are already known .it is nevertheless a tedious and inelegant process to do by hand , requiring us to , for example , compute qr - factorizations of for each .this section is devoted to the following result , which is a version of theorem [ theorem.necessity and sufficiency of eigensteps ] equipped with a more explicit step b ; though technical , this new and improved step b is still simple enough to be performed by hand , a fact which will hopefully permit its future application to both theoretical and numerical problems .[ theorem.explicit frame construction ] for any nonnegative nonincreasing sequences and , every sequence of vectors in whose frame operator has spectrum and which satisfies for all can be constructed by the following algorithm : 1 .pick eigensteps as in definition [ definition.eigensteps ] .2 . let be any unitary matrix , , and let .for each : 1 . let be an block - diagonal unitary matrix whose blocks correspond to the distinct values of with the size of each block being the multiplicity of the corresponding eigenvalue .2 . identify those terms which are common to both and . specifically: * let consist of those indices such that for all and such that the multiplicity of as a value in exceeds its multiplicity as a value in .* let consist of those indices such that for all and such that the multiplicity of as a value in exceeds its multiplicity as a value in .+ the sets and have equal cardinality , which we denote .next : * let be the unique permutation on that is increasing on both and and such that for all .let be the associated permutation matrix .* let be the unique permutation on that is increasing on both and and such that for all .let be the associated permutation matrix .3 . let , be the vectors whose entries are 4 . , where the vector is padded with zeros . where is the matrix whose entries are : conversely , any constructed by this process has as the spectrum of and for all .moreover , for any constructed in this manner and any , the spectrum of the frame operator arising from the partial sequence is , and the columns of form a corresponding orthonormal eigenbasis for . before proving theorem [ theorem.explicit frame construction ] , we give an example of its implementation , with the hope of conveying the simplicity of the underlying idea , and better explaining the heavy notation used in the statement of the result .[ example.5 in 3 ] we now use theorem [ theorem.explicit frame construction ] to construct untfs consisting of vectors in . here , and .by step a , our first task is to pick a sequence of eigensteps consistent with definition [ definition.eigensteps ] , that is , pick , , and that satisfy the interlacing conditions : as well as the trace conditions : writing these desired spectra in a table : the trace condition means that the sum of the values in the column is , while the interlacing condition means that any value is at least the neighbor to the upper right and no more than its neighbor to the right . in particular , for , we necessarily have and implying that . 
similarly , for , interlacing requires that and implying that .that is , we necessarily have : applying this same idea again for and gives and , and so we also necessarily have that , and : moreover , the trace condition at gives and so .similarly , the trace condition at gives and so : the remaining entries are not fixed .in particular , we let be some variable and note that by the trace condition , and so . similarly letting : we take care to note that and in are not arbitrary , but instead must be chosen so that the interlacing relations are satisfied .in particular , we have : by plotting each of the inequalities of as a half - plane ( figure [ figure.5 in 3](a ) ) , we obtain a -sided convex set ( figure [ figure.5 in 3](b ) ) of all such that is a valid sequence of eigensteps . pairs of parameters that generate a valid sequence of eigensteps when substituted into . to be precise , in order to satisfy the interlacing requirements of definition [ definition.eigensteps ] , and must be chosen so as to satisfy the pairwise inequalities summarized in .each of these inequalities corresponds to a half - plane ( a ) , and the set of that satisfy all of them is given by their intersection ( b ) . by theorem [ theorem.explicit frame construction ] , any corresponding sequence of eigensteps generates a untf and conversely , every untf is generated in this way .as such , and may be viewed as the two essential parameters in the set of all such frames . in particular , for that do not lie on the boundary of the set in ( b ) , applying the algorithm of theorem [ theorem.explicit frame construction ] to and choosing yields the untf whose elements are given in table [ table.5 in 3 parametrization ] . ] specifically , this set is the convex hull of , , , and .we note that though this analysis is straightforward in this case , it does not easily generalize to other cases in which and are large .to complete step a of theorem [ theorem.explicit frame construction ] , we pick any particular from the set depicted in figure [ figure.5 in 3](b ) .for example , if we pick then becomes : we now perform step b of theorem [ theorem.explicit frame construction ] for this particular choice of eigensteps .first , we must choose a unitary matrix .considering the equation for along with the fact that the columns of will form an eigenbasis for , we see that our choice for merely rotates this eigenbasis , and hence the entire frame , to our liking .we choose for the sake of simplicity .thus , we now iterate , performing steps b.1 through b.5 for to find and , then performing steps b.1 through b.5 for to find and , and so on . throughout this process, the only remaining choices to be made appear in step b.1 . in particular , for step b.1 asks us to pick a block - diagonal unitary matrix whose blocks are sized according to the multiplicities of the eigenvalues .that is , consists of a unitary block a unimodular scalar and a unitary block .there are an infinite number of such s , each leading to a distinct frame . 
for the sake of simplicity, we choose .having completed step b.1 for , we turn to step b.2 , which requires us to consider the columns of that correspond to and : in particular , we compute a set of indices that contains the indices of for which ( i ) the multiplicity of as a value of exceeds its multiplicity as a value of and ( ii ) corresponds to the first occurrence of as a value of ; by these criteria , we find .similarly if and only if indicates the first occurrence of a value whose multiplicity as a value of exceeds its multiplicity as a value of , and so .equivalently , and can be obtained by canceling common terms from , working top to bottom ; an explicit algorithm for doing so is given in table [ table.index set algorithm ] .continuing with step b.2 for , we now find the unique permutation that is increasing on both and its complement and takes to the first elements of . in this particular instance, happens to be the identity permutation , and so .since , we similarly have that and are the identity permutation and matrix , respectively . for the remaining steps ,it is useful to isolate the terms in that correspond to and : in particular , in step b.3 , we find the vector by computing quotients of products of differences of the values in : ^ 2 & = -\frac{(\beta_1-\gamma_1)(\beta_1-\gamma_2)}{(\beta_1-\beta_2 ) } = -\frac{(1-\frac53)(1-\frac13)}{(1 - 0 ) } = \tfrac49,\\ \label{equation.5 in 3 example 9 } [ v_1(2)]^2 & = -\frac{(\beta_2-\gamma_1)(\beta_2-\gamma_2)}{(\beta_2-\beta_1 ) } = -\frac{(0-\frac53)(0-\frac13)}{(0 - 1 ) } = \tfrac59,\end{aligned}\ ] ] yielding .similarly , we compute according to the formulas : ^ 2 & = \frac{(\gamma_1-\beta_1)(\gamma_1-\beta_2)}{(\gamma_1-\gamma_2 ) } = \frac{(\frac53 - 1)(\frac53 - 0)}{(\frac53-\frac13 ) } = \tfrac56,\\ \label{equation.5 in 3 example 11 } [ w_1(2)]^2 & = \frac{(\gamma_2-\beta_1)(\gamma_2-\beta_2)}{(\gamma_2-\gamma_1 ) } = \frac{(\frac13 - 1)(\frac13 - 0)}{(\frac13-\frac53 ) } = \tfrac16.\end{aligned}\ ] ] next , in step b.4 , we form our second frame element : as justified in the proof of theorem [ theorem.explicit frame construction ] , the resulting partial sequence of vectors has a frame operator whose spectrum is .moreover , a corresponding orthonormal eigenbasis for is computed in step b.5 ; here the first step is to compute the matrix by computing a pointwise product of a certain matrix with the outer product of with : note that is a real orthogonal matrix whose diagonal and subdiagonal entries are strictly positive and whose superdiagonal entries are strictly negative ; one can easily verify that every has this form .more significantly , the proof of theorem [ theorem.explicit frame construction ] guarantees that the columns of form an orthonormal eigenbasis of .this completes the iteration of step b ; we now repeat this process for . for , in step b.1 we arbitrarily pick some diagonal unitary matrix .note that if we wish our frame to be real , there are only such choices of .for the sake of simplicity , we choose in this example .continuing , step b.2 involves canceling the common terms in to find , and so in step b.3 , we find that . steps b.4 and b.5 then give that and are the columns of form an orthonormal eigenbasis for the partial frame operator with corresponding eigenvalues . 
for the iteration, we pick and cancel the common terms in to obtain and, implying. in step b.3, we then compute the vectors and in a manner analogous to, , and. note that in step b.4, the role of the permutation matrix is that it maps the entries of onto the indices, meaning that lies in the span of the corresponding eigenvectors. in a similar fashion, the purpose of the permutation matrices in step b.5 is to embed the entries of the matrix into the rows and columns of a matrix. for the last iteration, we again choose in step b.1. for step b.2, note that since we have and, implying. working through steps b.3, b.4 and b.5 yields the untf. we emphasize that the untf given in was based on the particular choice of eigensteps given in, which arose by choosing in. choosing other pairs from the parameter set depicted in figure [figure.5 in 3](b) yields other untfs. indeed, since the eigensteps of a given are equal to those of for any unitary operator, we have in fact that each distinct yields a untf which is not unitarily equivalent to any of the others. for example, by following the algorithm of theorem [theorem.explicit frame construction] and choosing and in each iteration, we obtain the following four additional untfs, each corresponding to a distinct corner point of the parameter set: [four 3 x 5 untf synthesis matrices, one per corner point of the parameter set; their numerical entries did not survive extraction.] notice that, of the four untfs above, the second and fourth are actually the same up to a permutation of the frame elements. this is an artifact of our method of construction, namely, that our choices for the eigensteps, , and determine the _sequence_ of frame elements. as such, we can recover all permutations of a given frame by modifying these choices. we emphasize that these four untfs, along with that of, are but five examples from the continuum of all such frames. indeed, keeping and as variables in and applying the algorithm of theorem [theorem.explicit frame construction] again choosing and in each iteration for the sake of simplicity yields the frame elements given in table [table.5 in 3 parametrization].
here, we restrict so as to not lie on the boundary of the parameter set of figure [ figure.5 in 3](b ) .this restriction simplifies the analysis , as it prevents all unnecessary repetitions of values in neighboring columns in .table [ table.5 in 3 parametrization ] gives an explicit parametrization for a two - dimensional manifold that lies within the set of all untfs consisting of five elements in three - dimensional space . by theorem [ theorem.explicit frame construction ] , this can be generalized so as to yield all such frames , provided we both ( i ) further consider that lie on each of the five line segments that constitute the boundary of the parameter set and ( ii ) throughout generalize to an arbitrary block - diagonal unitary matrix , where the sizes of the blocks are chosen in accordance with step b.1 .\\ f_4&=\left[\begin{array}{ccccccc}-\frac{\sqrt{(4 - 3x)(3y-1)(2-x - y)(4 - 3x-3y)}}{12\sqrt{(2 - 3x)(1-y)}}&-&\frac{\sqrt{(4 - 3x)(5 - 3y)(y - x)(2 + 3x-3y)}}{12\sqrt{(2 - 3x)(1-y)}}&-&\frac{\sqrt{x(3y-1)(y - x)(2 + 3x-3y)}}{4\sqrt{3(2 - 3x)(1-y)}}&+&\frac{\sqrt{x(5 - 3y)(2-x - y)(4 - 3x-3y)}}{4\sqrt{3(2 - 3x)(1-y)}}\smallskip\\-\frac{\sqrt{(4 - 3x)y(3y-1)(2-x - y)(4 - 3x-3y)}}{12\sqrt{(2 - 3x)(1-y)(2-y)}}&+&\frac{\sqrt{(4 - 3x)(2-y)(5 - 3y)(y - x)(2 + 3x-3y)}}{12\sqrt{(2 - 3x)y(1-y)}}&-&\frac{\sqrt{xy(3y-1)(y - x)(2 + 3x-3y)}}{4\sqrt{3(2 - 3x)(1-y)(2-y)}}&-&\frac{\sqrt{x(2-y)(5 - 3y)(2-x - y)(4 - 3x-3y)}}{4\sqrt{3(2 - 3x)y(1-y)}}\smallskip\\\frac{\sqrt{5x(2 + 3x-3y)(4 - 3x-3y)}}{6\sqrt{(2 - 3x)y(2-y)}}&+&\frac{\sqrt{5(4 - 3x)(y - x)(2-x - y)}}{2\sqrt{3(2 - 3x)y(2-y)}}\end{array}\right]\\ f_5&=\left[\begin{array}{ccccccc}\hspace{4.25pt}\frac{\sqrt{(4 - 3x)(3y-1)(2-x - y)(4 - 3x-3y)}}{12\sqrt{(2 - 3x)(1-y)}}\hspace{4.25pt}&+&\frac{\sqrt{(4 - 3x)(5 - 3y)(y - x)(2 + 3x-3y)}}{12\sqrt{(2 - 3x)(1-y)}}&-&\frac{\sqrt{x(3y-1)(y - x)(2 + 3x-3y)}}{4\sqrt{3(2 - 3x)(1-y)}}&+&\frac{\sqrt{x(5 - 3y)(2-x - y)(4 - 3x-3y)}}{4\sqrt{3(2 - 3x)(1-y)}}\smallskip\\\frac{\sqrt{(4 - 3x)y(3y-1)(2-x - y)(4 - 3x-3y)}}{12\sqrt{(2 - 3x)(1-y)(2-y)}}&-&\frac{\sqrt{(4 - 3x)(2-y)(5 - 3y)(y - x)(2 + 3x-3y)}}{12\sqrt{(2 - 3x)y(1-y)}}&-&\frac{\sqrt{xy(3y-1)(y - x)(2 + 3x-3y)}}{4\sqrt{3(2 - 3x)(1-y)(2-y)}}&-&\frac{\sqrt{x(2-y)(5 - 3y)(2-x - y)(4 - 3x-3y)}}{4\sqrt{3(2 - 3x)y(1-y)}}\smallskip\\-\frac{\sqrt{5x(2 + 3x-3y)(4 - 3x-3y)}}{6\sqrt{(2 - 3x)y(2-y)}}&+&\frac{\sqrt{5(4 - 3x)(y - x)(2-x - y)}}{2\sqrt{3(2 - 3x)y(2-y)}}\end{array}\right]\end{aligned}\ ] ] having discussed the utility of theorem [ theorem.explicit frame construction ] , we turn to its proof .( ) let and be arbitrary nonnegative nonincreasing sequences and take an arbitrary sequence of eigensteps in accordance with definition [ definition.eigensteps ] .note here we do not assume that such a sequence of eigensteps actually exists for this particular choice of and ; if one does not , then this direction of the result is vacuously true .we claim that any constructed according to step b has the property that for all , the spectrum of the frame operator of is , and that the columns of form an orthonormal eigenbasis for .note that by lemma [ lemma.eigensteps yield desired properties ] , proving this claim will yield our stated result that the spectrum of is and that for all . since step b is an iterative algorithm, we prove this claim by induction on .to be precise , step b begins by letting and .the columns of form an orthonormal eigenbasis for since is unitary by assumption and for all . as such, the spectrum of consists of and repetitions of . 
to see that this spectrum matches the values of , note that by definition [ definition.eigensteps ] , we know interlaces on the trivial sequence in the sense of , implying for all ; this in hand , note this definition further gives that .thus , our claim indeed holds for .we now proceed by induction , assuming that for any given the process of step b has produced such that the spectrum of is and that the columns of form an orthonormal eigenbasis for . in particular, we have where is the diagonal matrix whose diagonal entries are . defining analogously from ,we show that constructing and according to step b implies where is unitary ; doing such proves our claim . to do so , pick any unitary matrix according to step b.1 . to be precise , let denote the number of distinct values in , and for any , let denote the multiplicity of the valuewe write the index as an increasing function of and , that is , we write as where if or if and .we let be an block - diagonal unitary matrix consisting of diagonal blocks , where for any , the block is an unitary matrix . in the extreme case where all the values of are distinct , we have that is a diagonal unitary matrix , meaning it is a diagonal matrix whose diagonal entries are unimodular . even in this case , there is some freedom in how to choose ; this is the only freedom that the step b process provides when determining . in any case ,the crucial fact about is that its blocks match those corresponding to distinct multiples of the identity that appear along the diagonal of , implying .having chosen , we proceed to step b.2 . here, we produce subsets and of that are the remnants of the indices of and , respectively , obtained by canceling the values that are common to both sequences , working backwards from index to index . an explicit algorithm for doingso is given in table [ table.index set algorithm ] ..[table.index set algorithm]an explicit algorithm for computing the index sets and in step b.2 of theorem [ theorem.explicit frame construction ] [ cols= " < , < " , ] note that for each ( line 03 ) , we either remove a single element from both and ( lines 0406 ) or remove nothing from both ( lines 0709 ) , meaning that and have the same cardinality , which we denote .moreover , since interlaces on , then for any real scalar whose multiplicity as a value of is , we have that its multiplicity as a value of is either , or .when these two multiplicities are equal , this algorithm completely removes the corresponding indices from both and . on the other hand , if the new multiplicity is or , then the least such index in or is left behind , respectively , leading to the definitions of or given in step b.2 . having these sets , it is trivial to find the corresponding permutations and on and to construct the associated projection matrices and .we now proceed to step b.3 . for the sake of notational simplicity , let and denote the values of and , respectively .that is , let for all and for all .note that due to the way in which and were defined , we have that the values of and are all distinct , both within each sequence and across the two sequences . moreover , since and are nonincreasing while and are increasing on and respectively , then the values and are strictly decreasing .we further claim that interlaces on . to see this , consider the four polynomials : since and were obtained by canceling the common terms from and , we have that for all . 
writing any as for some , we have that since , applying the only if " direction of lemma [ lemma.interlacing and nonpositive limits ] with " and " being and gives since holds for all , applying if " direction of lemma [ lemma.interlacing and nonpositive limits ] with " and " being and gives that indeed interlaces on . taken together , the facts that and are distinct , strictly decreasing and interlacing sequences implies that the vectors and are well - defined .to be precise , step b.3 may be rewritten as finding for all such that ^ 2=-\,\frac{\displaystyle\prod_{r''=1}^{r_n } ( \beta_r-\gamma_{r''})}{\displaystyle\prod_{\substack{r''=1\\r''\neq r}}^{r}(\beta_r-\beta_{r '' } ) } , \qquad [ w_n(r')]^2=\frac{\displaystyle\prod_{r''=1}^{r_n } ( \gamma_{r'}-\beta_{r''})}{\displaystyle\prod_{\substack{r''=1\\r''\neq r'}}^{r}(\gamma_{r'}-\gamma_{r''})}.\ ] ] note the fact that the s and s are distinct implies that the denominators in are nonzero , and moreover that the quotients themselves are nonzero .in fact , since is strictly decreasing , then for any fixed , the values can be decomposed into negative values and positive values .moreover , since , then for any such , the values can be broken into negative values and positive values . with the inclusion of an additional negative sign, we see that the quantity defining ^ 2 ] has exactly negative values in both the numerator and denominator , namely and , respectively .having shown that the and of step b.3 are well - defined , we now take and as defined in steps b.4 and b.5 . recall that what remains to be shown in this direction of the proof is that is a unitary matrix and that satisfies . to do so , consider the definition of and recall that is unitary by the inductive hypothesis , is unitary by construction , and that the permutation matrices and are orthogonal , that is , unitary and real . as such , to show that is unitary , it suffices to show that the real matrix is orthogonal . to do this , recall that eigenvectors corresponding to distinct eigenvalues of self - adjoint operators are necessarily orthogonal . as such , to show that is orthogonal , it suffices to show that the columns of are eigenvectors of a real symmetric operator . 
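the product formulas above translate directly into code . the sketch below assumes two strictly decreasing sequences with all values distinct , the second lying above the first in the interlaced sense ; the toy input is made up for illustration and is not data from the construction . a quick consistency check is that the squared entries of the first vector sum to the difference of the traces of the two spectra , as expected for a rank - one update .

```python
import numpy as np

def v_and_w(beta, gamma):
    """Given strictly decreasing sequences beta and gamma of equal length, with all
    values distinct and gamma interlacing beta from above, return the entrywise
    nonnegative vectors v, w whose squares are given by the product formulas."""
    beta = np.asarray(beta, dtype=float)
    gamma = np.asarray(gamma, dtype=float)
    R = len(beta)
    v2, w2 = np.empty(R), np.empty(R)
    for r in range(R):
        v2[r] = -np.prod(beta[r] - gamma) / np.prod(np.delete(beta[r] - beta, r))
        w2[r] = np.prod(gamma[r] - beta) / np.prod(np.delete(gamma[r] - gamma, r))
    return np.sqrt(v2), np.sqrt(w2)

beta = [2.0, 0.0]          # "old" spectrum, strictly decreasing
gamma = [3.0, 1.0]         # "new" spectrum, interlacing from above: 3 >= 2 >= 1 >= 0
v, w = v_and_w(beta, gamma)
print(v**2, w**2)                          # entrywise positive, as argued above
print(v @ v, sum(gamma) - sum(beta))       # both equal 2
```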
to this end, we claim where and are the diagonal matrices whose diagonal entries are given by and , respectively .to prove , note that for any , (r , r ' ) = ( d_{n;{\mathcal{i}}_n}w_n)(r , r')+(v_n^{}v_n^{\mathrm{t}}w_n)(r , r ' ) = \beta_r w_n(r , r')+v_n(r)\sum_{r''=1}^{r_n}v_n(r'')w_n(r'',r').\ ] ] rewriting the definition of from step b.5 in terms of and gives substituting into gives (r , r ' ) & = \beta_r\frac{v_n(r)w_n(r')}{\gamma_{r'}-\beta_{r}}+v_n(r)\sum_{r''=1}^{r_n}v_n(r'')\frac{v_n(r'')w_n(r')}{\gamma_{r'}-\beta_{r''}}\\ \label{equation.proof of explicit frame construction 7 } & = v_n(r)w_n(r')\,{\biggl({\frac{\beta_r}{\gamma_{r'}-\beta_{r}}+\sum_{r''=1}^{r_n}\frac{[v_n(r'')]^2}{\gamma_{r'}-\beta_{r''}}}\biggr)}\,.\end{aligned}\ ] ] simplifying requires a polynomial identity .note that the difference of two monic polynomials is itself a polynomial of degree at most , and as such it can be written as the lagrange interpolating polynomial determined by the distinct points : recalling the expression for ^ 2 ] , implying by the intermediate value theorem that at least one of the roots of lies in .moreover , since is monic , we have ; coupled with the fact that , this implies that at least one root of lies in .thus , each of the disjoint subintervals of contains at least one of the roots of .this is only possible if each of these subintervals contains exactly one of these roots .moreover , since is nonincreasing , this implies and for all , meaning that indeed interlaces on .we are thus left to consider the remaining case where and share at least one common member .fix such that for at least one pair .let and .let and be -degree polynomials such that and . here , our assumption implies since and satisfy and have degree , our inductive hypothesis gives that the roots interlace on the roots of .we claim that is necessarily either or , that is , .we first show that , a fact which trivially holds for . for ,the fact that implies that the value of the member of is at least that of the member of .that is , the member of is at least , meaning and so , as claimed .we similarly prove that , a fact which trivially holds for . for ,interlacing implies that the member of is at least the member of .that is , the member of is at most and so , as claimed .now , in the case that , the fact that implies that since in this case , the terms and can be inserted into : and so . in the remaining casewhere , having means that since in this case , the terms and can be inserted into : and so in this case as well .fix any , and let be the multiplicity of as a root of . since where each of these two limits is assumed to exist , then the multiplicities of as a roots of and are both at least . as such, evaluating derivatives at gives for all .meanwhile , for , lhpital s rule gives deriving a similar expression for and substituting both it and into yields .as such , for all . as this argument holds at every distinct , we see that has roots , counting multiplicity .but since and are both monic , has degree at most and so , as claimed .p. g. casazza , m. fickus , j. kovaevi , m.t .leon , j. c. tremain , a physical interpretation of tight frames , in : harmonic analysis and applications : in honor of john j. benedetto , c. heil ed . ,birkhuser , boston , pp .5176 ( 2006 ) .i. s. dhillon , r. w. heath , m. a. sustik , j. a. tropp , generalized finite algorithms for constructing hermitian matrices with prescribed diagonal and spectrum , siam j. matrix anal .appl . 27 ( 2005 ) 6171 .
when constructing finite frames for a given application , the most important consideration is the spectrum of the frame operator . indeed , the minimum and maximum eigenvalues of the frame operator are the optimal frame bounds , and the frame is tight precisely when this spectrum is constant . often , the second - most important design consideration is the lengths of frame vectors : gabor , wavelet , equiangular and grassmannian frames are all special cases of equal norm frames , and unit norm tight frame - based encoding is known to be optimally robust against additive noise and erasures . we consider the problem of constructing frames whose frame operator has a given spectrum and whose vectors have prescribed lengths . for a given spectrum and set of lengths , the existence of such frames is characterized by the schur - horn theorem they exist if and only if the spectrum majorizes the squared lengths the classical proof of which is nonconstructive . certain construction methods , such as harmonic frames and spectral tetris , are known in the special case of unit norm tight frames , but even these provide but a few examples from the manifold of all such frames , the dimension of which is known and nontrivial . in this paper , we provide a new method for explicitly constructing any and all frames whose frame operator has a prescribed spectrum and whose vectors have prescribed lengths . the method itself has two parts . in the first part , one chooses eigensteps a sequence of interlacing spectra that transform the trivial spectrum into the desired one . the second part is to explicitly compute the frame vectors in terms of these eigensteps ; though nontrivial , this process is nevertheless straightforward enough to be implemented by hand , involving only arithmetic , square roots and matrix multiplication . frame , construction , tight , unit norm , equal norm , interlacing , majorization , schur - horn 42c15
that evolved brains are highly sensitive organs is an everyday observation . viewed as a dynamical system, a brain may be said to be _ unusually _ susceptible to perturbations and initial conditions .this leads one to ask whether brains may be operating near some form of instability , or criticality , a hypothesis related to the notions of computation at the edge of chaos ( langton 1990 ) and self - organized criticality ( bak et al .1987 ) . in this paperwe propose that while most regulation mechanisms at work in the brain act according to a classical homeostasis schema , i.e. , have a stabilizing effect , an opposite effect could result from the regulation of synaptic weights by a specific form of hebbian covariance plasticity .such a regulation may bring the system near criticality .we suggest that regulated criticality may be the mechanism whereby sensitivity is _ maintained _ throughout life in the face of ongoing changes in brain connectivity .hebbian synaptic plasticity ( hebb 1949 ) plays an important role in the development of the nervous system , and is also believed to underlie many instances of learning in the adult .a _ covariance rule _ of hebbian plasticity roughly states that the change in the efficacy of a given synapse varies in proportion to the covariance between the presynaptic and postsynaptic activities . as noted by many authors ( e.g. sejnowski 1977a , 1977b ; bienenstock et al .1982 ; linsker 1986 ; sejnowski et al . 1988 ) , a covariance - type rule is preferable to a rule that uses the mere product of pre- and post - synaptic activities because the covariance rule predicts not only weight increases but also activity - related weight decreases , and as a consequence allows convergence to non - trivial connectivity states .some forms of covariance plasticity have been shown to be optimal for information storage ( willshaw and dayan 1990 ; dayan and willshaw 1991 ; dayan and sejnowski 1993 ) .also , evidence for hebbian plasticity of the covariance type has been reported in several preparations ( frgnac et al .1988 , 1992 ; stanton and sejnowski 1989 ; artola et al .1990 ; dudek and bear 1992 ) .we shall investigate , in a simple network including excitatory and inhibitory neurons , the effect of covariance plasticity acting as a mechanism of _ regulation , _ rather than supervised learning .synaptic modification results in changes quantitative or qualitative in the activity that reverberates in the network ; these changes in turn cause further modification of the weights , thereby creating a feedback loop between activity and connectivity . studying this loop as such , i.e. , independently from any input and output , we demonstrate that , under fairly general conditions , it causes the network to converge to a critical surface in parameter space , the locus of an abrupt transition between different activity modes . in metzger and lehmann ( 1990 , 1994 ) a similar hebbian rulehas been studied in the context of supervised learning of temporal sequences .schematically , the convergence to a critical state can be explained as follows .one mode of behavior of a network including excitatory and inhibitory neurons is oscillation ; such behavior takes place if the synaptic weights linking excitatory neurons to each other we will refer to these as e - to - e weights are high enough but not too high .oscillation entails high covariance values , hence , according to the covariance rule , results in further increase of the e - to - e weights . 
if however these weights are higher than a certain critical value which depends on other parameters of the system oscillatory behavior is impossible , hence covariance is low or zero , hence , in accord with the covariance rule used , the e - to - e weights _ decrease ._ as a result , the e - to - e weights stabilize around the critical surface that separates the region of oscillation from the region(s ) of steady firing .our study is conducted in the simplest type of network that will support oscillatory activity : all synaptic weights of a given type are given identical values , and so are all firing thresholds of a given type .this results in a system with just six parameters four synaptic weights and two thresholds and a limited range of behaviors .essentially , all neurons fire uniformly , either at a constant rate ( the number of possible rates of firing is one or two , depending on parameters ) or periodically in time . in the _ thermodynamic , _i.e. , large - size , limit , the dynamics of the network is adequately described by a system of differential equations obtained through a classical mean - field approximation .we first perform a simple bifurcation analysis of this differential system ( guckenheimer and holmes 1983 ) .we then show that the effect of covariance regulation is to stabilize the parameter state at a surface of transition , where the dynamics exhibits an instability .such a critical parameter state for a dynamical system may be characterized as _ degenerate . _ a generic , i.e. , non - exceptional, state is one where one expects to find the system in the absence of further assumptions .mathematically , a generic state of a dynamical system is in the _ interior _ of a parameter region corresponding to a given behavior , and the system in such a parameter state is said to be _ structurally stable ; _ the set of non - generic parameter states has measure zero .we shall show that a state of higher degeneracy , characterized as a point of intersection of _ several _ critical surfaces , can be achieved by the simultaneous regulation of _ several _ parameters . in the vicinity of thathighly degenerate state , the system displays a range of behaviors , including chaos .the plan of the paper is as follows . 
in the next sectionwe study the dynamical properties of our simple network in the differential - equation formulation with _ fixed _ parameters ( synaptic weights and firing thresholds ) .we characterize the bifurcations which take place at the boundaries between domains corresponding to different modes of behavior .this study is conducted for a _ reduced _ system , where the thresholds are eliminated in such a way as to render the dynamics symmetric about the origin .section [ regulation ] describes the regulation equations .section [ reduced ] describes the behavior of the regulated reduced system , and section [ full ] that of the regulated full system .this section describes the dynamics of the model with fixed parameters .we first briefly describe a network consisting of a large number ( ) of binary - valued neurons operating under a stochastic dynamics .however , rather than using this network for our study of plasticity , we make a number of simplifications and approximations , leading to a deterministic two - variable differential system with just six parameters .the two variables are the excitatory and inhibitory population averages of cell activity in the -dimensional model ; the six parameters include the four average weights of the synapses within and between these two populations , as well as the average firing thresholds for the two populations .we then study the asymptotic behavior of this differential system for various parameter values .different types of asymptotic behavior , in different regions of the parameter space , correspond to different _ phases _ of the stochastic system , and we pay particular attention to the _ bifurcations _ of the solutions , where the bifurcation parameters are the synaptic weights see schuster and wagner ( 1990 ) and borisyuk and kirillov ( 1992 ) for a related bifurcation analysis .bifurcations correspond to _ phase transitions _ in the statistical - physics formulation ( the original -dimensional model ) .we consider a fully - connected network of excitatory and inhibitory linear - sigmoidal -valued neurons , operating under a stochastic dynamics .we denote the activity of the -th excitatory , resp .inhibitory , neuron by , resp . , with , , and we denote the synaptic weights by , , where is postsynaptic and presynaptic , and the superscripts indicate , for each of the two neurons , whether it is excitatory or inhibitory .thus , for all and , and are positive or zero , whereas and are negative or zero . the _ local field _ on excitatory neuron , i.e. , the difference between its membrane potential and its firing threshold , is .similarly , the local field on inhibitory neuron is , where is the threshold of inhibitory neuron .the network dynamics is defined by : ( i ) selecting at random , with uniform probability , one of the neurons ; ( ii ) computing its local field , of the form or ; and ( iii ) defining the state of the network at time to be equal to the state at time except , possibly , for the selected neuron , whose state becomes or stays1 with probability .parameter is a fixed non - negative number , an _inverse temperature_. 
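a minimal simulation sketch of this asynchronous dynamics is given below . since the acceptance probability is not reproduced in the excerpt above , the sketch assumes the usual glauber choice — a logistic function of the local field scaled by the inverse temperature — and stores all weights as nonnegative magnitudes , subtracting the inhibitory contributions explicitly ; all parameter values in the driver are placeholders , not the ones used in the paper .

```python
import numpy as np

rng = np.random.default_rng(0)

def glauber_step(s, sigma, W_ee, W_ei, W_ie, W_ii, h_e, h_i, beta):
    """One asynchronous update: pick one neuron uniformly at random, compute its
    local field, and set its 0/1 state to 1 with probability given by a logistic
    function of the field (an assumed, standard Glauber-type choice)."""
    n_e = s.size
    k = rng.integers(n_e + sigma.size)
    if k < n_e:                                   # excitatory neuron k
        field = W_ee[k] @ s - W_ei[k] @ sigma - h_e[k]
        s[k] = rng.random() < 1.0 / (1.0 + np.exp(-beta * field))
    else:                                         # inhibitory neuron k - n_e
        j = k - n_e
        field = W_ie[j] @ s - W_ii[j] @ sigma - h_i[j]
        sigma[j] = rng.random() < 1.0 / (1.0 + np.exp(-beta * field))
    return s, sigma

# illustrative driver with uniform weights scaled by population size
n_e, n_i, beta = 400, 100, 2.0
s, sigma = rng.integers(0, 2, n_e), rng.integers(0, 2, n_i)
W_ee = np.full((n_e, n_e), 8.0 / n_e); W_ei = np.full((n_e, n_i), 6.0 / n_i)
W_ie = np.full((n_i, n_e), 8.0 / n_e); W_ii = np.full((n_i, n_i), 2.0 / n_i)
h_e, h_i = np.full(n_e, 1.0), np.full(n_i, 1.0)
for _ in range(100_000):
    s, sigma = glauber_step(s, sigma, W_ee, W_ei, W_ie, W_ii, h_e, h_i, beta)
```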
the temperature measures the amount of noise in the system : the higher the temperature , the noisier the dynamics .the update interval is , so that each neuron is updated on average once every time unit .this _ asynchronous _ dynamics , of the glauber type ( glauber 1963 ) , is widely used in statistical - mechanics models ; it lends itself to a convenient mean - field approximation ( see below ) .a system such as the one just described will exhibit a highly diverse range of behaviors , depending on the values of the synaptic weights and firing thresholds .but we now make the much simplifying assumption that synaptic weights and firing thresholds are _ uniform _ across each class . specifically , for all , we assume that , , , , , and , where , , , , and are fixed parameters , and , , and are non - negative .the dynamics is thus parameterized by six constants , four synaptic weights and two thresholds ; is a mere multiplicative factor common to all six parameters , yet it is convenient to use it as a seventh parameter . unless otherwise mentioned , will be 1 . due to this uniformity assumption, all neurons in any of the two populations experience the same field at any given time .this system exhibits a limited number of fairly simple behaviors , of which figure 1 is an example .this figure shows the time variation of and , the _ average _ activation levels across the excitatory and inhibitory populations . in this example , parameters are : , , , , , , .one unit on the time axis corresponds to updates , so that each neuron is updated , on average , once every time unit . for these parameter values , the system _ oscillates ._ note that the oscillation is not perfectly regular , a finite - size effect .note also that the inhibitory activity lags somewhat behind the excitatory activity : the excitatory neurons first trigger the inhibitory ones , which in turn extinguish , for a while , the excitatory population ..4 cm _ ( insert figure 1 around here ) _ .4 cm the presence of oscillations and the amplitude and shape of the waveform depend on the various parameters .however , rather than pursuing this study of the stochastic system , we shall consider the approximation that obtains in the _ thermodynamic limit _, that is , when .the update interval then goes to 0 and so does each individual synaptic weight .straightforward approximations ( rubin 1988 ; schuster and wagner 1990 ) then lead to a continuous - time differential system for the population averages of the excitatory and inhibitory activation levels , which we denote , respectively , by and : \\ \dot{\sigma}(t)=.5- \sigma(t ) + .5\tanh[\beta(w{^{\mbox{\scriptsize ie}}}s(t)-w{^{\mbox{\scriptsize ii}}}\sigma(t)-h{^{\mbox{\scriptsize i } } } ) ] .\end{array } \right .\label{sys_full}\ ] ] note that the variables and remain at all within the interval [ 0,1 ] . 
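the mean - field system [ sys_full ] is straightforward to integrate numerically . the sketch below uses forward euler ; the excitatory equation is written by symmetry with the displayed inhibitory one , and the parameter values are placeholders ( the values behind figure 1 are not reproduced in the excerpt ) , so the trajectory may converge to a fixed point or to an oscillation depending on the choice .

```python
import numpy as np

def integrate_sys_full(w_ee, w_ei, w_ie, w_ii, h_e, h_i, beta,
                       s0=0.1, sigma0=0.1, dt=0.01, n_steps=20_000):
    """Forward-Euler integration of the mean-field equations
         ds/dt     = 0.5 - s     + 0.5*tanh(beta*(w_ee*s - w_ei*sigma - h_e))
         dsigma/dt = 0.5 - sigma + 0.5*tanh(beta*(w_ie*s - w_ii*sigma - h_i)),
    the first line being inferred by symmetry with the displayed second one."""
    s, sigma = s0, sigma0
    out = np.empty((n_steps, 2))
    for k in range(n_steps):
        ds = 0.5 - s + 0.5 * np.tanh(beta * (w_ee * s - w_ei * sigma - h_e))
        dsig = 0.5 - sigma + 0.5 * np.tanh(beta * (w_ie * s - w_ii * sigma - h_i))
        s, sigma = s + dt * ds, sigma + dt * dsig
        out[k] = (s, sigma)                      # both stay within [0, 1]
    return out

# placeholder parameters, for illustration only
traj = integrate_sys_full(w_ee=10.0, w_ei=8.0, w_ie=10.0, w_ii=2.0,
                          h_e=2.0, h_i=3.0, beta=1.5)
```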
when system [ sys_full ] has a unique attractor , .indeed , in the high - temperature limit , all neurons act independently of each other and fire with probability .5 at each time .we shall now make a last simplification , whose purpose it is to render a fixed point though not necessarily stable at _ all _ temperatures and for all values of the synaptic weights .this is easily achieved by letting the thresholds and be determined by the synaptic weights as follows : it is then convenient to adopt the change of variables : , , and system [ sys_full ] becomes : \\ \dot{\sigma}(t ) = -\sigma(t ) + .5\tanh[\beta(w{^{\mbox{\scriptsize ie}}}s(t)-w{^{\mbox{\scriptsize ii}}}\sigma(t ) ) ] .\end{array } \right .\label{sys_red}\ ] ] in [ sys_red ] , the variables and are in the interval ] , except the unstable equilibrium .motion is counterclockwise , for , as mentioned above , lags behind .in addition to these four orbits , figure 2a shows two curves , the - and -_nullclines _ for system [ sys_red ] .these are the loci of the points such that , resp . , vanish .the equations for the - and -nullclines are easily seen to be , respectively : the -nullcline is an increasing sigmoid - shaped curve , whereas the -nullcline generally has the shape of an ` s ' lying on its side .of particular interest are the intersection points of the two nullclines ; these are the _ fixed points _ of the dynamics . in the case illustrated in figure 2a , the only intersection is , an unstable equilibrium .trajectories intersect the - , resp .- , nullcline in a direction parallel to the - , resp .- , axis .the study of the nullclines is of interest because it is often possible to predict how a parameter change will affect the dynamics of the system by reasoning about how the nullcline diagram will change ; the bifurcation we shall be mostly interested in is associated with a conspicuous change in this diagram .note that the -nullcline is affected by parameters and , whereas the -nullcline is affected by parameters and ..4 cm _ ( insert figure 2 around here ) _.4 cm let us consider first the changes brought about by letting parameter grow , starting from the point for which the system oscillates ; other parameters are unchanged .when grows , the slope of the central , quasi - linear , part of the -nullcline increases ( see equation [ snull ] ) ; that part of the curve rotates about the symmetry center ( 0,0 ) . 
as a result ,the peak of the -nullcline to the right approaches the upper part of the sigmoid - shaped -nullcline , and the minimum of the -nullcline to the left approaches the lower part of the -nullcline .eventually , at a certain critical value ( subscript ` sn ' stands for ` saddlenode'see below ) , the two curves become tangent to each other .this happens in two points at once , near the upper right - hand corner and near the lower left - hand corner , due to the symmetry of the system .this situation is depicted in figure 2b : is exactly equal to the critical value ( with parameters as above , ) , and the nullclines are just tangent to each other .when grows a little further , each point of contact splits into two intersection points , of which one is an attractor .figure 2c shows this situation , with , somewhat above the critical value .four trajectories are shown , in addition to the two nullclines .the system has five fixed points , three unstable ones and two stable ones ( attractors ) .only the stable fixed points are of interest to us ; they are very near the upper right - hand and lower left - hand corners of the square , corresponding to high , respectively low , excitatory and inhibitory activities .the bifurcation occurring at is of the _ saddlenode _ type .it results in a drastic change of behavior of the system : the periodic attractor disappears and is ` siphoned ' into the two new point attractors .these two points attract the entire square , except a set of measure 0 which includes the three unstable fixed points .thus , altough this bifurcation is caused by a mere _ local _ change , namely the intersection of the nullclines , it results in a reorganization of the dynamics that is both abrupt and _ global . _ into system [ sys_red ] . ]having described the breakdown of oscillations when parameter is increased , we now consider the opposite change , that is , we let decrease .this results in a decrease of the slope of the central , increasing , portion of the -nullcline ( equation [ snull ] ) .eventually , the curve becomes monotonically decreasing ; this does not alter the number of intersections of the nullclines , point remaining the sole equilibrium .however , the amplitude of the limit cycle decreases along with .the cycle eventually collapses to a point ; the equilibrium has then become stable .this can be seen in a linear stability analysis of system [ sys_red ] around point .it is easily shown that , in case there are two complex conjugate eigenvalues , . ]the real part of these eigenvalues is negative if and only if .thus , is a critical value for parameter .we define ( with the current parameter setting , ) .the change of behavior occurring at is a _normal _ , the bifurcation is subcritical see footnote 5 . ]hopf bifurcation .so far , we studied the behavior of system [ sys_red ] for different values of parameter , all other parameters being fixed .in other words , we described the system s behavior on a particular 1-dimensional subspace of the 4-dimensional parameter space .we now extend this study to a 2-dimensional subspace , the plane .figure 3a is the bifurcation diagram of system [ sys_red ] in that plane , with other parameters as before ( , ) .this diagram shows three distinct regions , corresponding to three different attractor configurations ; unstable fixed points and unstable limit cycles are ignored in this diagram . 
in the middle region which we call region , for_ periodic_the system oscillates .the boundary of this region to the right is the saddlenode bifurcation curve , which we denote ; as discussed above , the rightmost region has _two _ point attractors , and we call it region . the leftmost region , which we call , has only one point attractor , the center of symmetry ; it is separated from region by the hopf bifurcation curve , a vertical line of equation .the curve in the lower left of the diagram , separating region from region , is the locus of a _ pitchfork _ bifurcation .this bifurcation diagram , obtained for one particular set of values of the parameters and , is representative of the general case . to region is of the saddlenode type only for large enough values of ; this range of values corresponds roughly to the straight portion of curve ( figure 3a ) . to see why this is so , consider again figure 2b , the nullcline diagram at the bifurcation , with .note that the points of contact between the nullclines appear near the corners of the square , far from the origin ; this is due to the fact that is relatively large , hence the slope of the -nullcline at the origin is larger than the slope of the -nullcline .the bifurcation is then of the saddlenode type , as described .if however is small , hence so is the slope of the -nullcline at the origin , the transition from to as is increased takes place differently .a pair of intersection points between the nullclines first split off _ from the origin ; _ these are unstable equilibria .as increases , these two equilibria move away from the origin , while remaining inside the large stable limit cycle . at a certain critical value for become stable a ( double ) subcritical hopf bifurcation and almost immediately thereafter the large limit cycle disappears .thus , the transition from region to region really takes place in two steps , giving rise to a _ three - attractor _ behavior : the system has one large limit - cycle attractor _ as well as _ two point attractors , the latter being inside the cycle .the region of the plane where this behavior takes place is a strip extending along the lower , curved , part of the boundary ; it is too narrow to be seen in figure 3a .( with parameters and as above and , the three - attractor behavior occurs for between and . for some other values of and behavior does not occur at all , and the transition from to is always of the saddlenode type . ) for the purpose of this paper ( see footnotes 8 and 11 ) it is important to note that the point attractors appear either _ exactly _ or _ almost _ at the same time as the periodic attractor disappears .the second approximation in the bifurcation diagram , mentioned only for the sake of completeness , concerns the -to- transition .this is generally a smooth , supercritical , hopf bifurcation .however , as mentioned in footnote 4 , this hopf bifurcation becomes subcritical for very large values of .there is thus a narrow region to the left of the bifurcation line where the limit - cycle attractor coexists with the point attractor ( 0,0 ) ; for instance , at , the width of this region is . 
].4 cm _ ( insert figure 3 around here ) _ .4 cm in sum , the bifurcation diagram for system [ sys_red ] is characterized by a central periodic - attractor region , a large vertical patch extending to in the direction ( phase ) , flanked by point - attractor regions on each side ( phases and ) .the transition from to is abrupt ( line ) , while the transition from to is smooth .as mentioned in the introduction , system [ sys_full]the full system is not amenable to such a thorough analysis ; however , we shall see in section [ full ] that the two systems behave in much the same way under the plasticity rules that we shall now introduce .whereas in the previous section the synaptic weights and were fixed parameters , they will now be made to evolve . their evolution will obey a hebbian covariance rule , hence be a function of second - order temporal averages of the dynamic variables and .synaptic plasticity creates a _ regulation loop : _ changing the parameters affects the dynamics of the system , which in turn alters the second - order moments of and .formally , the regulation is implemented by introducing additional differential equations , coupled to system [ sys_red ] ( or to system [ sys_full]see section [ full ] ) .the rate of change of and will typically be several orders of magnitude slower than that of and .let us first define , for any function of time , a moving time average : parameter is a positive constant , physically an inverse time ; the larger , the narrower the averaging kernel .equivalently , may be defined by a differential equation , more convenient for simulation purposes : consider now , with reference to the original stochastic model ( section [ model ] ) , the _ instantaneous covariance _ between two excitatory neurons and , defined as : . if we take the _ population average _ of this instantaneous covariance , we obtain , in the thermodynamic limit , the instantaneous variance of : it is this quantity that we use to regulate the excitatory - to - excitatory synaptic weight . the regulation equation is linear in : parameters and are positive .note that the quantity is always non - negative ; the term is therefore necessary to allow for decreases of .we shall also consider a regulation for , the synaptic weight from excitatory to inhibitory neurons , although this regulation will play a less important role than that of .the modification rule for has the same form as equation [ regee ] , yet it uses the excitatory - to - inhibitory instantaneous covariance , defined as : the regulation equation for then reads : in equation [ regie ] , is a positive constant , as in equation [ regee ]. 
however , the modification rate constant is negative .the main reason for this will be given in the next section ; for now , note that this choice is consistent with the spirit of hebb s principle , for , when considered _ postsynaptically _ to the target neuron , the effect of synapse reinforcement if that target neuron is inhibitory is the opposite of the effect obtained if the target neuron is excitatory .this section describes the behavior of the regulated reduced system .we demonstrate that each of the two regulation loops introduced in section [ regulation ] , when acting separately , brings the system to the critical surface , the locus of an abrupt phase transition ( saddlenode bifurcation ) .we then examine the behavior of the system with the two regulation loops active simultaneously ; we show that under some conditions the state converges to a point on with a remarkable nullcline configuration .before we consider the regulation proper , let us examine how the covariances change across the plane .figure 3b shows the values of , the time average of the instantaneous variance of , ; the latter becomes in the thermodynamic limit . in the regulation equation , we use the _ instantaneous _ covariance rather than its time average ( see discussion ) .the time - averaged variance is used here for illustration purposes only . in order to obtain an essentially constant value for rather than an oscillating function of time, different values of are used for the two averaging operations : the kernel used to average into is ten times broader than the kernel used to compute from . ] along several horizontal lines in the plane . as expected, is positive only in region , where the dynamics is periodic ; although not shown , the same is true of , the time average of the e - to - i covariance . note that as crosses the -to- boundary ( hopf bifurcation ) from left to right , increases _ smoothly _ from 0 to positive values : as discussed above , the amplitude of the limit cycle at this bifurcation is infinitesimal . in contrast , the change in and in at ( saddlenode bifurcation ) is a sharp one , as the system undergoes there a transition from a _limit - cycle regime to a fixed - point attractor .we now start our study of covariance plasticity by regulating parameter in system [ sys_red ] while all other parameters , including , remain fixed .the system under study then consists of coupled equations [ sys_red ] , [ covee ] , [ regee ] .equation [ regee ] prescribes an increase of when , and a decrease when . referring to figure 3b, we see that to the left of , where is high , the first of the two conditions applies ; in this region increases . to the right of covariance vanishes , and decreases .therefore , is attracted to the transition line . should be smaller than the value of immediately to the left of . the portion of the boundary line where the bifurcation is a subcritical hopf rather than a saddlenode ( footnote 5 ) yields similar behavior , since the disruption of the large - amplitude limit cycle occurs very near the emergence of point attractors ( see also footnote 11 ) . 
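the regulated loop just described can be sketched compactly . because the exact expressions of the moving average , of the instantaneous variance and of the update rule are not reproduced in the excerpt , the code below assumes the simplest reading consistent with the text : a first - order low - pass filter for the time average , the squared deviation of s from that average as the instantaneous variance , and a linear update of the e - to - e weight with a constant decay term . all rate constants are illustrative , and the regulation is taken to be several orders of magnitude slower than the fast dynamics .

```python
import numpy as np

def regulate_w_ee(w_ee0, w_ei, w_ie, w_ii, beta,
                  alpha=0.5, eta=0.05, k=0.01, dt=0.01, n_steps=400_000):
    """Reduced system [sys_red] with slow covariance-driven regulation of w_ee:
         s_bar' = alpha * (s - s_bar)        (moving time average, assumed form)
         c_ee   = (s - s_bar)**2             (instantaneous variance of s, assumed form)
         w_ee'  = eta * (c_ee - k)           (covariance rule with decay, assumed form)"""
    s, sigma, s_bar, w_ee = 0.3, -0.2, 0.0, w_ee0
    history = np.empty((n_steps, 2))
    for t in range(n_steps):
        ds = -s + 0.5 * np.tanh(beta * (w_ee * s - w_ei * sigma))
        dsig = -sigma + 0.5 * np.tanh(beta * (w_ie * s - w_ii * sigma))
        c_ee = (s - s_bar) ** 2
        s_bar += dt * alpha * (s - s_bar)
        s, sigma = s + dt * ds, sigma + dt * dsig
        w_ee += dt * eta * (c_ee - k)        # slow compared with s and sigma
        history[t] = (w_ee, s)
    return history

# under the mechanism described in the text, w_ee drifts toward the critical
# boundary between the oscillatory and two-point-attractor regions; the
# particular constants used here are placeholders
hist = regulate_w_ee(w_ee0=6.0, w_ei=8.0, w_ie=10.0, w_ii=2.0, beta=1.5)
```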
] _(insert figure 4 around here)_ the behavior of this regulation loop is illustrated in figure 4a for the following setting of parameters : , , , , . this figure focuses on a small region of the plane , and shows the projection of the trajectory of . several trajectories are shown , all horizontal since is a constant . these trajectories terminate on the critical line , and the behavior of the and components on them is as follows . on the trajectories coming from the left , in the region , moves along a cyclic orbit , whose amplitude grows as increases and approaches the bifurcation line . on the trajectories coming from the right , in the region , stays in one of the two point attractors while decreases until it reaches the bifurcation curve . when is reached , either from the left or from the right , motion does not really stop . rather , sets in a periodic oscillation of small amplitude synchronized with a large - amplitude periodic motion of ; the frequency of this oscillation is several orders of magnitude slower than in , hence covariance is small : it matches , on average , the control parameter . when in this regime , the system spends a long time in one of the two almost - attracting corners of the ^ 2 ] .

figure 6 : regulation of two parameters in system [ sys_full ] . ( a ) bifurcation diagram in plane . ( b ) regulation of and causes convergence to point , the intersection of critical lines and . ( c ) nullcline diagram at .
figure 7 : behavior of full system under simultaneous regulation of four parameters . diagram shows projection on plane , illustrating the similarity of behavior with reduced system ( compare with figure 4c , but note difference of scales ) . limits of the attraction basin to the left are roughly indicated by the starting points of the trajectories shown ; attraction basin is unbounded in all other directions .
figure 8 : various behaviors of regulated full system after it has reached critical surface ( figure 7 ) . diagrams show for three slightly different parameter settings ( see text ) ; in all cases , the projection of the motion on the plane remains of small amplitude . ( a ) simple periodic attractor , point of figure 7 ; similar periodic attractors are reached for most parameter settings . ( b ) complex quasi - periodic attractor . ( c e ) chaotic attractor ; for a given parameter setting , three diagrams corresponding to different instants of time and different lengths of time ; characteristic are the irregular transitions between the high - activity , low - activity , and oscillatory phases .
we propose that a regulation mechanism based on hebbian covariance plasticity may cause the brain to operate near criticality . we analyze the effect of such a regulation on the dynamics of a network with excitatory and inhibitory neurons and uniform connectivity within and across the two populations . we show that , under broad conditions , the system converges to a critical state lying at the common boundary of three regions in parameter space ; these correspond to three modes of behavior : high activity , low activity , oscillation .

covariance plasticity and regulated criticality

* elie bienenstock * , division of applied mathematics , brown university , providence ri 02912 , usa , and cnrs , paris , france ( elie.brown.edu )

* daniel lehmann * , department of computer science , hebrew university , jerusalem , israel ( lehmann.huji.ac.il )

january 1995
over the last few decades powerlaw distributions have attracted particular attention for their mathematical properties and appearances in a wide variety of scientific contexts , from physical and biological sciences to social and man - made phenomena . differently from normally distributed data ,empirical quantities that follow a powerlaw distribution do not cluster around an average value , and thus can not be characterized through the mean and standard deviation .nevertheless , the fact that some scientific observations can not be characterized as simply as other measurements is often a sign of complex underlying processes that deserve further study . a complete introduction to powerlaw distributions along with a statistical framework for discerning and quantifying powerlaw behavior in empirical data can be found in , whereas extensive discussions can be found in , and references therein .recent advances related to powerlaw fitting and statistical hypothesis testing can be found in .formally , a quantity follows a powerlaw distribution if its probability distribution is defined as where and is called the _ scaling parameter _ of the distribution .fitting these kind of heavy - tailed distributions requires care , since only few empirical phenomena show such a probability distribution for all values of .indeed , more often only values greater than some minimum value , i.e. the so called _ lower bound _ , follow a powerlaw distribution .the traditional lower bound estimation method introduced in is based on the computation of the kolmogorov - smirnov distances between the empirical and the theoretical cumulative distribution functions defined for values when is discrete ( when is continuous ) .once the kolmogorov - smirnov distances have been computed for all the eligible values of , the associated with the smallest distance is chosen as lower bound of the distribution . however , if applied to very large collections of data e.g. the distribution of the number of views received by youtube videos such a method can be computationally demanding , and bootstrap techniques to address the uncertainty in the estimates and average over multiple estimations become unfeasible . in this paper ,we propose two alternative methods with the aim to reduce the time required by the traditional estimation procedure .in particular , the first proposed method starts to compute the traditional kolmogorov - smirnov distance from a guess on the true value of the lower bound , and stops the procedure once a minimum is reached .the second proposed method is thought for the discrete case , where the computation of theoretical cumulative distribution functions involves the calculation of hurwitz zeta functions , which could be computationally binding .such a method uses the above - mentioned conditions to reduce the number of computations , and substitutes the cumulative distribution functions of the traditional kolmogorov - smirnov distance with the corresponding probability mass functions , i.e. it is based on the comparison between empirical and theoretical probabilities for each .this manuscript is organized as follows . in section 2we provide some basic definitions about continuous and discrete powerlaw distributions . in section 3we first discuss the traditional estimation method , and then we introduce two alternative methods which can speed up the estimation procedure . 
in section 4we apply the three methods to large collections of data ( ) with varying values of the true lower bound , showing that both our proposed methods yield a significantly better performance and accuracy than the traditional method .section 5 is left for some concluding remarks .let represents a quantity whose distribution we are interested in .the probability distribution when is continuous is defined as whereas in the discrete case , when can assume only positive integers , the probability distribution is defined as where is the lower bound , is the scaling parameter , and is the hurwitz zeta function .furthermore , the complementary cumulative distribution function in the continuous case is defined as whereas in the discrete case is defined as the complementary cumulative distribution function is often preferred to the cumulative distribution function since it allows to show powerlaw distributions in doubly logarithmic axes , and thus emphasize their upper tail behavior .the traditional method to estimate the lower bound of a powerlaw distribution has been introduced in .such a method is based on the kolmogorov - smirnov distance , which is defined as where is the empirical cumulative distribution function , and is the theoretical cumulative distribution function of the fitted powerlaw distribution for values when is discrete ( when is continuous , and hereafter we refer only to the discrete case for the sake of brevity ) . once the kolmogorov - smirnov distance has been computed for all the possible values , the associated with the smallest value of is chosen as the lower bound of the powerlaw distribution .in it has been proved that the estimation method based on the kolmogorov - smirnov distance outperforms alternative methods based on the bic ( bayesian information criterion ) and the anderson - darling statistics . nevertheless, when dealing with big data such a method can be computationally demanding for two main reasons : 1 .the algorithm needs to compute the kolmogorov - smirnov distance for each possible ; 2 . in the discrete case ,the computation of the theoretical cumulative distribution function of the fitted powerlaw distributions involves the calculation of hurwitz zeta functions , which can be computationally binding when dealing with large data collections . in this paper, we aim at introducing two lower bound estimation methods in order to tackle the above - mentioned drawbacks of the traditional method .we start from two simple observations .first of all , often there is no need to compute the kolmogorov - smirnov distance for all the eligible values of , since a graphical exploratory analysis is usually sufficient to rule out a substantial range of values . on the left panel of figure [ fig : example ] ,we show the complementary cumulative distribution function of a random generated distribution with powerlaw tail .notice that a quick look at the plot is sufficient to rule out some eligible values for the lower bound . when dealing with big data and large maximal values , we could rule out hundreds of possible values , thus reducing the required time to estimate the lower bound .moreover , we know by definition that the kolmogorov - smirnov statistics computed for all the possible values has a global minimum in correspondence to the true lower bound . on the right panel of figure [ fig : example ] , we show the values of the kolmogorov - smirnov distance in correspondence to subsequent values of . 
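both observations are easy to put into code . the sketch below implements the traditional full scan over candidate lower bounds and an early - stopping variant that starts from a scaled - down guess and stops once the kolmogorov - smirnov distance has clearly passed a minimum . for brevity it treats the continuous case , with the scaling parameter estimated by maximum likelihood for each candidate ; the discrete case of the paper would instead build the theoretical cdf from hurwitz zeta functions . function names are illustrative and do not correspond to the ` powerlaw ` or ` statools ` interfaces .

```python
import numpy as np

def ks_distance(data, xmin):
    """KS distance between the empirical tail (x >= xmin) and the fitted
    continuous power law with MLE scaling parameter."""
    data = np.asarray(data, dtype=float)
    tail = np.sort(data[data >= xmin])
    n = tail.size
    alpha = 1.0 + n / np.sum(np.log(tail / xmin))       # continuous-case MLE
    cdf_theory = 1.0 - (tail / xmin) ** (1.0 - alpha)
    cdf_empirical = np.arange(1, n + 1) / n
    return np.max(np.abs(cdf_empirical - cdf_theory))

def estimate_xmin_full_scan(data):
    """Traditional method: compute the KS distance for every candidate lower
    bound and return the candidate attaining the global minimum."""
    candidates = np.unique(data)[:-1]                   # keep at least 2 tail points
    distances = [ks_distance(data, xm) for xm in candidates]
    return candidates[int(np.argmin(distances))]

def estimate_xmin_early_stop(data, guess, confidence=0.9, window=5):
    """Early-stopping variant: start from confidence * guess and stop as soon as
    the last `window` successive differences of the KS distance are all positive,
    i.e. as soon as a minimum has clearly been passed."""
    candidates = np.unique(data)[:-1]
    start = np.searchsorted(candidates, confidence * guess)
    distances = []
    for xm in candidates[start:]:
        distances.append(ks_distance(data, xm))
        diffs = np.diff(distances[-(window + 1):])
        if len(diffs) == window and np.all(diffs > 0):
            break
    return candidates[start + int(np.argmin(distances))]
```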
taking into account these observations ,we propose a first estimation method which starts computing the kolmogorov - smirnov distances from the eligible value of that is closest to where is a guess on the true value of the lower bound , and $ ] is the confidence in such a guess .the computation of the kolmogorov - smirnov distances stops when all the differences between the last distances are positive i.e. when a minimum is reached .the key ideas are two : 1 .since a quick graphical exploratory analysis is often sufficient to rule out a large amount of eligible values , we can start computing the kolmogorov - smirnov distances from the value we think it is the true lower bound ; 2 .if our guess is close to the true lower bound , the first minimum of the kolmogorov - smirnov statistics we meet is the global minimum associated with the true lower bound , and hence we can stop the computation once a minimum is reached . moreover , since in the discrete case the computation of the theoretical cumulative distributions involves the calculation of hurwitz zeta functions , we propose a second estimation method that further modify the traditional method by substituting the empirical and theoretical cumulative distribution functions of the kolmogorov - smirnov distance with the corresponding probability mass functions , which are generally faster to compute .more formally , in the second proposed method the distance to be computed is defined as where the subscripts indicate that and are , respectively , the empirical probability mass functions , and the theoretical probability mass function of the fitted powerlaw distribution for values .in this section , we illustrate and discuss the results of a simulation comparing the two proposed methods with the traditional estimation method .we refer to the traditional method as ` estimate_xmin ` the name of the lower bound estimation function provided by the r package ` powerlaw ` and to our proposed methods as 1 .` getxmin ` : the first proposed method still based on the classical traditional kolmogorov - smirnov distance ; 2 . `getxmin2 ` : the second proposed method based on distances between empirical and theoretical probability mass functions .both ` getxmin ` and ` getxmin2 ` are implemented for discrete powerlaw distributions on the r package ` statools ` , which is currently available on cran . in order to test the three different methods , we generate synthetic data and examine both the accuracy and the performance in the estimation of the true lower bound .we use data drawn from a distribution with the form where and is a normalization constant .we apply the three estimation methods to large ( ) collections of data drawn from eq .[ eq : distr ] with true values of varying in . in our simulation, we set equal to the true lower bound with a confidence , e.g. when the true lower bound is , our proposed methods start to compute the corresponding statistics from the possible value of that is closest to , which is a feasible practice , thus assisting the user in making a good guess on the true value of the lower bound . ] and thus a reasonable assumption . moreover , both ` getxmin ` and ` getxmin2 ` stop to compute the corresponding statistics once a first minimum is reached , i.e. 
when all the differences between the last computed distances are positive .figure [ fig : simulation ] shows the estimated value of as a function of the true lower bound , indicating that both ` getxmin ` and ` getxmin2 ` outperform the traditional estimation method .table [ tb : accuracy ] summarizes the accuracy of the three methods through mean squared errors ( mses ) , root mean squared errors ( rmses ) , and mean absolute errors ( maes ) , confirming that both the proposed methods yield a better accuracy than the traditional method . .*estimation accuracy . * mean squared errors , root mean squared errors , and mean absolute errors summarizing the accuracy of the lower bound estimates obtained by means of three different methods .both the proposed methods yield a better accuracy than the traditional method . [cols=">,^,^,^",options="header " , ] figure [ fig : performance ] illustrates the time demanded by the different estimation methods , indicating that our proposed methods yield a better performance than the traditional estimation method .the traditional lower bound estimation method for powerlaw distributions proved to outperform competing methods based on bic and anderson - darling statistics .however , if applied to very large collections of data , such a method can be computationally demanding , and bootstrap techniques to address the uncertainty in the estimates and average over multiple estimations become unfeasible . in this paper , we propose two alternative methods with the aim to reduce the time required by the estimation procedure . in particular , the first proposed method starts to compute the traditional kolmogorov - smirnov distances from a guess on the true value of the lower bound , and stops the procedure once a minimum is reached .the second proposed method uses the above - mentioned conditions to reduce the number of computations , and substitutes the cumulative distribution functions of the traditional kolmogorov - smirnov statistics with the corresponding probability mass functions .we apply the three methods to large collections of data ( ) with varying values of the true lower bound .both the proposed methods yield a significantly better performance and accuracy than the traditional method .we would like to thank colin s. gillespie for our helpful discussion .
the traditional lower bound estimation method for powerlaw distributions, based on the kolmogorov - smirnov distance, has been shown to perform better than competing methods. however, when applied to very large collections of data, it can be computationally demanding. in this paper, we propose two alternative methods with the aim of reducing the time required by the estimation procedure. we apply the traditional method and the two proposed methods to large collections of data ( ) with varying values of the true lower bound. both proposed methods yield significantly better performance and accuracy than the traditional method.
since the five last decades , game theory has become a major aspect in economic sciences modelling and in a great number of domains where strategical aspects has to be involved .game theory is usually defined as a mathematical tool allowing to analyse strategical interactions between individuals .+ initially funded by mathematical researchers , j. von neumann , e. borel or e. zermelo in 1920s , game theory increased in importance in the 1940s with a major work by j. von neumann and o. morgenstern and then with the works of john nash in the 1950s .john nash has proposed an original equilibrium ruled by an adaptive criterium . in game theory ,the nash equilibrium is a kind of optimal strategy for games involving two or more players , whereby the players reach an outcome to mutual advantage .if there is a set of strategies for a game with the property that no player can benefit by changing his strategy while the other players keep their strategies unchanged , then this set of strategies and the corresponding payoffs constitute a nash equilibrium .+ we can understand easily that the modelization of a player behavior needs some adaptive properties .the computable model corresponding to genetic automata are in this way a good tool to modelize such adaptive strategy .+ the plan of this paper is the following . in the next section , we present some efficient algebraic structures , the automata with multiplicities , which allow to implement powerful operators .we present in section 3 , some topological considerations about the definition of distances between automata which induces a theorem of convergence on the automata behaviors .genetic operators are proposed for these automata in section 4 . for that purpose , we show that the relevant `` calculus '' is done by matrix representions unravelling then the powerful capabilities of such algebraic structures . in section 5 ,we focus our attention on the `` iterated prisonner dilemma '' and we buid an original evolutive probabilistic automaton for strategy modeling , showing that genetic automata are well - adapted to model adaptive strategies .section 6 shows how we can use the genetic automata developed previously to represent agent evolving in complex systems description .an agent behavior semi - distance is then defined and allows to propose an automatic computation of emergent systems as a kind of self - organization detection .automata are initially considered as theoretical tools .they are created in the 1950 s following the works of a. turing who previously deals with the definition of an abstract `` machine '' .the aim of the turing machines is to define the boundaries for what a computing machine could do and what it could not do .+ the first class of automata , called finite state automata corresponds to simple kinds of machines .they are studied by a great number of researchers as abstract concepts for computable building . in this aspect, we can recall the works of some linguist researchers , for example n. chomsky who defined the study of formal grammars .+ in many works , finite automata are associated to a recognizing operator which allows to describe a language . in such works ,the condition of a transition is simply a symbol taken from an alphabet . 
from a specific state , the reading of a symbol allows to make the transitions which are labeled by and ( in case of a deterministic automaton - a dfa - there is only one transition - see below ) .a whole automaton is , in this way , associated to a language , the recognized language , which is a set of words .these recognized words are composed of the sequences of letters of the alphabet which allows to go from a specific state called initial state , to another specific state , called final state .+ a first classification is based on the geometric aspect : dfa ( deterministic finite automata ) and nfa ( nondeterministic finite automata ) . * in deterministic finite automata , for each state there is at most one transition for each possible input and only one initial state . * in nondeterministic finite automata , there can be none or more than one transition from a given state for a given possible input . besides the classical aspect of automata as machines allowing to recognize languages , another approach consists in associating to the automata a functional goal .in addition of accepted letter from an alphabet as the condition of a transition , we add for each transition an information which can be considered as an output data of the transition , the read letter is now called input data .we define in such a way an _ automaton with outputs _ or _ weightedautomaton_. + such automata with outputs give a new classification of machines ._ transducers _ are such a kind of machines , they generate outputs based on a given input and/or a state using actions .they are currently used for control applications . _moore machines _ are also such machines where output depends only on a state , i.e. the automaton uses only entry actions .the advantage of the moore model is a simplification of the behaviour .+ finally , we focus our attention on a special kind of automata with outputs which are efficient in an operational way .this automata with output are called _ automata with multiplicities_. an automaton with multiplicities is based on the fact that the output data of the automata with output belong to a specific algebraic structure , a semiring . in that way , we will be able to build effective operations on such automata , using the power of the algebraic structures of the output data and we are also able to describe this automaton by means of a matrix representation with all the power of the new ( i.e. with semirings ) linear algebra . 
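as an illustration of this matrix point of view, the toy python sketch below encodes a two - state automaton with real - valued multiplicities by its triplet (lambda, mu, gamma) and evaluates the coefficient of a word in its behaviour as a product of matrices; the particular numbers are arbitrary and only meant to show the mechanics.

```python
import numpy as np

# A toy two-state automaton with multiplicities over the alphabet {a, b},
# given by its matrix representation: lambda (entry costs, a row vector),
# mu (one transition matrix per letter) and gamma (final costs, a column vector).
lam = np.array([[1.0, 0.0]])
gam = np.array([[0.0], [1.0]])
mu = {
    "a": np.array([[0.5, 0.5],
                   [0.0, 1.0]]),
    "b": np.array([[1.0, 0.0],
                   [0.3, 0.7]]),
}


def weight(word):
    # Coefficient of `word` in the behaviour: lambda * mu(w1) * ... * mu(wn) * gamma.
    m = np.eye(2)
    for letter in word:
        m = m @ mu[letter]
    return float((lam @ m @ gam)[0, 0])


print(weight("ab"), weight("aab"))
```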
+ * ( automaton with multiplicities ) * + an automaton with multiplicities over an alphabet and a semiring is the 5-uple where * is the finite set of state ; * is a function over the set of states , which associates to each initial state a value of k , called entry cost , and to non- initial state a zero value ; * is a function over the set states , which associates to each final state a value of k , called final cost , and to non - final state a zero value ; * is the transition function , that is which to a state , a letter and a state associates a value of ( the cost of the transition ) if it exist a transition labelled with from the state to the state and and zero otherwise .+ automata with multiplicities are a generalisation of finite automata .in fact , finite automata can be considered as automata with multiplicities in the semiring , the boolan set ( endowed with the logical `` or / and '' ) .to each transition we affect 1 if it exists and 0 if not .+ we have not yet , on purpose , defined what a semiring is .roughly it is the least structure which allows the matrix `` calculus '' with unit ( one can think of a ring without the `` minus '' operation ) .the previous automata with multiplicities can be , equivalently , expressed by a matrix representation which is a triplet * which is a row - vector which coefficients are , * is a column - vector which coefficients are , * is a morphism of monoids ( indeed is endowed with the product of matrices ) such that the coefficient on the row and column of is is a field , one sees that the space of automata of dimension ( with multiplicities in ) is a -vector space of dimension ( is here the number of letters ) .so , in case the ground field is the field of real or complex numbers , one can take any vector norm ( usually one takes one of the hlder norms where stands for the vector of all coefficients of arranged in some order one has then the result of theorem [ th1 ] . assuming that is the field of real or complex numbers , we endow the space of series / behaviours with the topology of pointwise convergence ( topology of f. treves ) .[ th1 ] let be a sequence of automata with limit ( is an automaton ) , then one has where the limit is computed in the topology of treves .we define the chromosome for each automata with multiplicities as the sequence of all the matrices associated to each letter from the ( linearly ordered ) alphabet .the chromosomes are composed with alleles which are here the lines of the matrix .+ in the following , genetic algorithms are going to generate new automata containing possibly new transitions from the ones included in the initial automata .+ the genetic algorithm over the population of automata with multiplicities follows a reproduction iteration broken up in three steps : * _ duplication _ : where each automaton generates a clone of itself ; * _ crossing - over _ : concerns a couple of automata . over this couple, we consider a sequence of lines of each matrix for all .for each of these matrices , a permutation on the lines of the chosen sequence is made between the analogue matrices of this couple of automata ; * _ mutation _ : where a line of each matrix is randomly chosen and a sequence of new values is given for this line .finally the whole genetic algorithm scheduling for a full process of reproduction over all the population of automata is the evolutionary algorithm : 1 . 
for all couple of automata ,two children are created by duplication , crossover and mutation mechanisms ; 2 .the fitness for each automaton is computed ; 3 . for all 4-uple composed of parents and children , the performless automata , in term of fitness computed in previous step , are suppressed .the two automata , still living , result from the evolution of the two initial parents .the fitness is not defined at this level of abstract formulation , but it is defined corresponding to the context for which the automaton is a model , as we will do in the next section .we develop in this section how we can modelize competition - cooperation processes in a same automata - based representation .the genetic computation allows to make automatic transition from competition to cooperation or from coopeartion to competition .the basic problem used for this purpose is the well - known prisoner dilemma .the prisoner dilemma is a two - players game where each player has two possible actions : cooperate ( ) with its adversary or betray him ( ) .so , four outputs are possible for the global actions of the two players .a relative payoff is defined relatively to these possible outputs , as described in the following table where the rows correspond to one player behaviour and the columns to the other player one .+ .prisoner dilemma payoff [ cols="<,^,^",options="header " , ] in the iterative version of the prisoner s dilemma , successive steps can be defined .each player do not know the action of its adversary during the current step but he knows it for the preceding step .so , different strategies can be defined for a player behaviour , the goal of each one is to obtain maximal payoff for himself .+ in figures [ titfortat ] and [ vindictive ] , we describe two strategies with transducers. each transition is labeled by the input corresponding to the player perception which is the precedent adversary action and the output corresponding to the present player action .the only inital state is the state 1 , recognizable by the incoming arrow labeled only by the output .the final states are the states 1 and 2 , recognizable with the double circles .+ in the strategy of figure [ titfortat ] , the player has systematically the same behaviour as its adversary at the previous step . in the strategy of figure [ vindictive ], the player chooses definitively to betray as soon as his adversary does it .the previous automaton represents static strategies and so they are not well adapted for the modelization of evolutive strategies . for this purpose ,we propose a model based on a probabilistic automaton described by figure [ probadilemma ] .+ [ htp ] [ htp ] [ htp ] this automaton represents all the two - states strategies for cooperation and competitive behaviour of one agent against another in prisoner s dilemma .+ the transitions are labeled in output by the probabilities of their realization .the first state is the state reached after cooperation action and the second state is reached after betrayal .+ for this automaton , the associated matrix representation , as described previously , is : with the matrix representation of the automata , we can compute genetic automata as described in previous sections . 
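building on the matrix representation, the reproduction operators described above can be sketched directly on the chromosome, i.e. the ordered list of matrices associated to the letters, with rows playing the role of alleles. this is an illustrative python sketch only: the choice of swapped rows, the renormalisation that keeps the rows of a probabilistic automaton stochastic, and the random number generator are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)


def crossover(chrom1, chrom2, rows):
    # Swap the chosen rows between the analogous matrices of the two chromosomes.
    child1 = [m.copy() for m in chrom1]
    child2 = [m.copy() for m in chrom2]
    for m1, m2 in zip(child1, child2):
        m1[rows, :], m2[rows, :] = m2[rows, :].copy(), m1[rows, :].copy()
    return child1, child2


def mutate(chrom):
    # Give a randomly chosen row of each matrix a fresh sequence of values,
    # renormalised so that the automaton stays probabilistic.
    child = [m.copy() for m in chrom]
    for m in child:
        i = rng.integers(m.shape[0])
        row = rng.random(m.shape[1])
        m[i, :] = row / row.sum()
    return child
```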
herethe chromosomes are the sequences of all the matrices associated to each letter .we have to define the fitness in the context of the use of these automata .the fitness here is the value of the payoff .a population of automata is initially generated .these automata are playing against a predefined strategy , named .+ each automaton makes a set of plays . at each play, we run the probabilistic automaton which gives one of the two outputs : ( ) or ( ) . with this output and the s output , we compute the payoff of the automaton , according with the payoff table .+ at the end of the set of plays , the automaton payoff is the sum of all the payoffs of each play .this sum is the fitness of the automaton . at the end of this set of plays, each automaton has its own fitness and so the selection process can select the best automata . at the end of these selection process , we obtain a new generation of automata . +this new generation of automata is the basis of a new computation of the 3 genetics operators . +this processus allows to make evolve the player s behavior which is modelized by the probabilistic multi - strategies two - states automaton from cooperation to competition or from competition to cooperation .the evolution of the strategy is the expression of an adaptive computation .this leads us to use this formalism to implement some self - organisation processes which occurs in complex systems .in this section , we study how evolutive automata - based modeling can be used to compute automatic emergent systems .the emergent systems have to be understood in the meaning of complex system paradigm that we recall in the next section .we have previously defined some way to compute the distance between automata and we use these principles to define distance between agents behaviours that are modeled with automata .finally , we defined a specific fitness that allows to use genetic algorithms as a kind of reinforcement method which leads to emergent system computation .[ ht ] according to general system theory , a complex system is composed of entities in mutual interaction and interacting with the outside environment .a system has some characteristic properties which confer its structural aspects , as schematically described in part ( a ) of figure [ sys2beh ] : * the set elements or entities are in interactive dependance .the alteration of only one entity or one interaction reverberates on the whole system . *a global organization emerges from interacting constitutive elements .this organization can be identified and carries its own autonomous behavior while it is in relation and dependance with its environment .the emergent organization possesses new properties that its own constitutive entities do nt have .`` the whole is more than the sum of its parts '' . * the global organization retro - acts over its constitutive components .`` the whole is less than the sum of its parts '' after e. morin .+ the interacting entities network as described in part ( b ) of figure [ sys2beh ] leads each entity to perceive informations or actions from other entities or from the whole system and to act itself . + a well - adapted modeling consists of using an agent - based representation which is composed of the entity called agent as an entity which perceives and acts on an environment , using an autonomous behaviour as described in part ( c ) of figure [ sys2beh ] .+ to compute a simulation composed of such entities , we need to describe the behaviour of each agent . 
this one can be schematically described using internal states and transition processes between these states , as described in part ( d ) of figure [ sys2beh ] .+ there are several definitions of `` agents '' or `` intelligent agents '' according to their behaviour specificities .their autonomy means that the agents try to satisfy a goal and execute actions , optimizing a satisfaction function to reach it .+ for agents with high level autonomy , specific actions are realized even when no perception are detected from the environment . to represent the process of this deliberation , different formalisms can be used and a behaviour decomposed in internal states is an effective approach .finally , when many agents operate , the social aspects must also be taken into account .these aspects are expressed as communications through agent organisation with message passing processes .sending a message is an agent action and receiving a message is an agent perception . the previous description based on the couple : perception and action ,is well adapted to this .we describe in this section the bases of the genetic algorithm used on the probabilistic automata allowing to manage emergent self - organizations in the multi - agent simulation . + for each agent , we define an evaluation function of its own behaviour returning the matrix of values such that is the output series from all possible successive perceptions when starting from the initial state and ending at the final state , without cycle .it will clearly be if either is not an initial state or is not a final one and the matrix is indeed a matrix of evaluations of subseries of notice that the coefficients of this matrix , as defined , are computed whatever the value of the perception in the alphabet on each transition on the successful path .that means that the contribution of the agent behaviour for collective organization formation is only based , here , on probabilities to reach a final state from an initial one .this allows to preserve individual characteristics in each agent behaviour even if the agent belongs to an organization .+ let and two agents and and their respective evaluations as described above .we define a semi - distance ( or pseudometrics , see ch ix ) between the two agents and as , a matrix norm of the difference of their evaluations .let a neighbourhood of the agent , relatively to a specific criterium , for example a spatial distance or linkage network .we define the agent fitness of the agent as : in the previous computation , we defined a semi - distance between two agents .this semi - distance is computed using the matrix representation of the automaton with multiplicities associated to the agent behaviour .this semi - distance is based on successful paths computation which needs to define initial and final states on the behaviour automata . 
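a small python sketch of these two ingredients is given below. the frobenius norm and the way the neighbourhood distances are aggregated into a fitness (here, the inverse of one plus their sum, so that agents whose behaviours are close to their neighbours' score higher) are assumptions made for illustration, since the exact expressions are not reproduced above.

```python
import numpy as np


def semi_distance(eval_i, eval_j):
    # Semi-distance between two agents: a matrix norm of the difference of
    # their behaviour evaluation matrices (Frobenius norm used here).
    return np.linalg.norm(eval_i - eval_j)


def fitness(agent, neighbourhood, evaluations):
    # Assumed aggregation: the closer the agent's behaviour is to those of the
    # agents in its neighbourhood, the higher its fitness.
    total = sum(semi_distance(evaluations[agent], evaluations[j])
                for j in neighbourhood)
    return 1.0 / (1.0 + total)
```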
for specific purposes , we can choose to define in some specific way , the initial and final states .this means that we try to compute some specific action sequences which are chararacterized by the way of going from some specific states ( defined here as initial ones ) to some specific states ( defined here as final ones ) .+ based on this specific purpose which leads to define some initial and final states , we compute a behaviour semi - distance and then the fitness function defined previously .this fitness function is an indicator which returns high value when the evaluated agent is near , in the sense of the behaviour semi - distance defined previously , to all the other agents belonging to a predefined neighbouring .+ genetic algorithms will compute in such a way to make evolve an agent population in a selective process .so during the computation , the genetic algorithm will make evolve the population towards a newer one with agents more and more adapted to the fitness. the new population will contain agents with better fitness , so the agents of a population will become nearer each others in order to improve their fitness . in that way , the genetic algorithm reinforces the creation of a system which aggregates agents with similar behaviors , in the specific way of the definition of initial and final states defined on the automata . + the genetic algorithm proposed here can be considered as a modelization of the feed - back of emergent systems which leads to gather agents of similar behaviour , but these formations are dynamical and we can not predict what will be the set of these aggregations which depends of the reaction of agents during the simulation . moreover the genetic process has the effect of generating a feed- back of the emergent systems on their own contitutive elements in the way that the fitness improvement lead to bring closer the agents which are picked up inside the emergent aggregations . + for specific problem solving , we can consider that the previous fitness function can be composed with another specific one which is able to measure the capability of the agent to solve one problem .this composition of fitness functions leads to create emergent systems only for the ones of interest , that is , these systems are able to be developed only if the aggregated agents are able to satisfy some problem solving evaluation .the aim of this study is to develop a powerful algebraic structure to represent behaviors concerning cooperation - competition processes and on which we can add genetic operators .we have explained how we can use these structures for modeling adaptive behaviors needed in game theory .more than for this application , we have described how we can use such adaptive computations to automatically detect emergent systems inside interacting networks of entities represented by agents in a simulation . 1 r. axelrod ( 1997 ) _ the complexity of cooperation _ ,princeton university press j. berstel and g. reutenauer ( 1988 ) _ rational series and their language _ , eatcs n. bourbaki ( 1998 ) _ elements of mathematics : general topology _ , chapters 5 - 10 , springer - verlag telos l. von bertalanffy ( 1968 ) _ general system theory _ , georges braziller ed . c. bertelle , m. flouret , v. jay , d. olivier , and j .- l .ponty ( 2002 ) `` adaptive behaviour for prisoner dilemma strategies based on automata with multiplicities . '' in _ ess 2002 conf . , dresden _ , germany c. bertelle , m. flouret , v. jay , d. 
olivier , and j .- l .ponty ( 2001 ) `` genetic algorithms on automata with multiplicities for adaptive agent behaviour in emergent organizations '' in _ sci2001 _ , orlando ,florida , usa g.h.e .duchamp , h. hadj - kacem and e. laugerotte ( 2005 ) `` algebraic elimination of -transitions '' , _ dmtcs _ , 7(1):51 - 70 g. duchamp and j - m champarnaud ( 2004 ) _ derivatives of rational expressions and related theorems _ , theoretical computer science * 313 * n. eber ( 2004 ) _ thorie des jeux _ , dunod s. eilenberg ( 1976 ) _ automata , languages and machines _ ,a and b , academic press j. ferber ( 1999 ) _ multi - agent system _ , addison - wesley l.j .fogel , a.j .owens , m.j .welsh ( 1966 ) _ artificial intelligence through simulated evolution _ , john wiley j.s .golan ( 1999 ) _ power algebras over semirings _, kluwer academic publishers d.e .goldberg ( 1989 ) _ genetic algorithms _ , addison - wesley j. h. holland ( 1995 ) _hidden order - how adaptation builds complexity _ , persus books ed .hopcroft , r. motwani , j.d .ullman ( 2001 ) _ introduction to automata theory , languages and computation _ , addison - wesley j. koza ( 1997 ) _ genetic programming _ , encyclopedia of computer sciences and technology m. mitchell ( 1996 ) _ an introduction to genetic algorithms _ , the mit press j .-le moigne ( 1999 ) _ la modlisation des systmes complexes _ ,dunod i. rechenberg ( 1973 ) _ evolution strategies _ , fromman - holzboog m.p .schutzenberger ( 1961 ) `` on the definition of a family of automata '' , information and control , 4:245 - 270 r.p .stanley ( 1999 ) _ enumerative combinatorics _ ,cambridge university press f. treves ( 1967 ) _ topological vector spaces , distributions and kernels _press g. weiss , ed .( 1999 ) _ multiagent systems _ , mit press
in this paper, we deal with specific domains of application of game theory, one of the major classes of models in new approaches to modelling in the economic domain. for that purpose, we use genetic automata, which make it possible to build adaptive strategies for the players. we explain how the proposed automata - based formalism - the matrix representation of automata with multiplicities - allows us to define a semi - distance between strategy behaviors. with these tools, we are able to generate an automatic process to compute emergent systems of entities whose behaviors are represented by these genetic automata.
gaussian process ( gp ) models provide a flexible , probabilistic approach to regression and are widely used .however , application of gp models to large data sets is challenging as the memory and computational requirements scale as and respectively , where is the number of training data points .various sparse gp approximations have been proposed to overcome this limitation .a unifying framework of existing sparse methods is given in .we consider the stationary sparse spectrum gp regression model introduced by , where the spectrum of the covariance function is sparsified instead of the usual spatial domain .the ssgp algorithm developed by for fitting this model uses conjugate gradients to optimize the marginal likelihoood with respect to the hyperparameters and spectral points .comparisons with other state - of - the - art sparse gp approximations such as the fully independent training conditional model ( first introduced as sparse pseudo - input gp in * ? ? ?* ) and the sparse multiscale gp , showed that ssgp yielded significant improvements .however , optimization with respect to spectral frequencies increases the tendency to underestimate predictive uncertainty and poses a risk of overfitting in the ssgp algorithm . in this paper, we develop a fast variational approximation scheme for the sparse spectrum gp regression model , which enables uncertainty in covariance function hyperparameters to be treated .in addition , we propose an adaptive local neighbourhood approach for dealing with nonstationary data .although accounting for hyperparameter uncertainty may be of little importance when fitting globally to a large data set , local fitting within neighbourhoods results in fitting to small data sets even if the full data set is large , and here it is important to account for hyperparameter uncertainty to avoid overfitting .our examples show that our methodology is particularly beneficial when combined with the local fitting approach for this reason .our approach also allows hierarchical models involving covariance function parameters to be constructed .this idea is implemented in the context of functional longitudinal models by mensah _( 2014 ) so that smoothness properties of trajectories can be related to individual specific covariates .gps have diverse applications and various methods have been developed to overcome their computational limitations for handling large data sets .a good summary of approximations used in modelling large spatial data sets is given in .computational costs can also be reduced through local gp regression as a much smaller number of training data is utilized in each partition .this approach has been considered in machine learning ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) and in spatial statistics ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ? propose fitting gp models in local neighbourhoods which are defined online for each test point .however , covariance hyperparameters are estimated only for a subset of all possible local neighbourhoods .different local experts are then combined using a mixture model capable of handling multimodality .our idea of using adaptive nearest neighbours in gp regression is inspired by techniques in classification designed to mitigate the curse of dimensionality .for each test point , we fit two models . 
in the first instance, the neighbourhood is determined using the euclidean metric .lengthscales estimated from the first fitting are then used to redefine the distance measure determining the neighbourhood for fitting the second model .experiments suggest that this approach improves prediction significantly in data with nonstationarities , as hyperparameters are allowed to vary across neighbourhoods adapted to each query point . weighting dimensions according to lengthscales downweights variables of little relevance and also leads to automatic variable selection .our approach differs from methods where local neighbourhoods are built sequentially to optimize the choice of the neighbourhood .examples include and , where the gaussian likelihood is approximated by the use of an ordering and conditioning on a subset of past observations . in , an empirical bayesmean - square prediction error criterion is optimized .while greedy searches usually rely on fast updating formulae available only in the gaussian case , our approach works in non - gaussian settings as well . making neighbourhoods non - local to improve learning of covariance parameters , but local neighbourhoods may work better when the motivation is to handle nonstationarity . make a connection between discrete spatial markov random fields and continuous gaussian random fields with covariance functions in the matrn class . for fitting the sparse spectrum gp regression model, we derive a variational bayes ( vb , * ? ? ?* ) algorithm that uses nonconjugate variational message passing to derive fast and efficient updates .vb methods approximate the intractable posterior in bayesian inference by a factorized distribution .this product density assumption is often unrealistic and can lead to underestimation of posterior variance .however , optimization of a factorized variational posterior can be decomposed into local computations that only involve neighbouring nodes in the factor graph and this often gives rise to fast computational algorithms .vb has also been shown to be able to give reasonably good estimates of the marginal posterior distributions and excellent predictive inferences ( e.g. * ? ? ?* ; * ? ? ?variational message passing is a general - purpose algorithm that allows vb to be applied to conjugate - exponential models .nonconjugate variational message passing extends variational message passing to nonconjugate models by assuming that the factors in vb are members of the exponential family .we use nonconjugate variational message passing to derive efficient updates for the variational posteriors of the lengthscales , which are assumed to be gaussian . use vb for spatial modelling via gp , where they also treat uncertainty in the covariance function hyperparameters .however , they propose using importance sampling within each vb iteration to handle the intractable expectations associated with the covariance function hyperparameters .variational inference has also been considered in machine learning for sparse gps that select the inducing inputs and hyperparameters by maximizing a lower bound to the exact marginal likelihood , and heteroscedastic gp regression models where the noise is input dependent .vb is known to suffer from slow convergence when there is strong dependence between variables in the factors . to speed up convergence , propose parameter expanded vb to reduce coupling in updates , while considered partially noncentered parametrizations . 
here, we introduce an adaptive strategy to accelerate convergence in nonconjugate variational message passing , which is inspired by adaptive overrelaxed bound optimization methods .previously , showed that nonconjugate variational message passing is a natural gradient ascent algorithm with step size one and step sizes smaller than one correspond to damping .here , we propose using step sizes larger than one which can help to accelerate convergence in fixed point iterations algorithms ( see * ? ? ? * ) . instead of searching for the optimal step size , we use an adaptive strategy which ensures that the lower bound increases after each cycle of updates .empirical results indicate significant speedups . combining parameter - wise updates to form a diagonal direction for a line search .a general iterative algorithm for computing vb estimators ( defined as means of variational posteriors ) has also been proposed by and its convergence properties investigated for normal mixture models .section [ ssgpmodel ] describes the sparse spectrum gp regression model and section [ variational inference ] develops the nonconjugate variational message passing algorithm for fitting it .section [ adaptive strategy ] presents an adaptive strategy for accelerating convergence in nonconjugate variational message passing .section [ pred distn ] discusses how the predictive distribution can be estimated and the measures used for performance evaluation .section [ neigh ] describes the adaptive neighbourhood approach for local regression .section [ eg ] considers examples including real and simulated data and section [ conclusion ] concludes .given a data set , we assume each output is generated by an unknown latent function evaluated at the input , , and independently corrupted by additive gaussian noise such that a gp prior is assumed over for .for any set of inputs , ^t ] and for . introduced a novel perspective on gp approximation by sparsifying the spectrum of the covariance function .they considered the linear regression model , where , are independent and identically distributed as and is a -dimensional vector of spectral frequencies .the power spectral density of a stationary covariance function is and is proportional to a probability density such that .when are drawn randomly from , showed that can be viewed as a sparse gp that approximates the full stationary gp by replacing the spectrum with a discrete set of spectral points . from, the probability density associated with the squared exponential covariance function in is .if is generated randomly from , then is a random sample from . from, a sparse gp approximation to is , \label{spec2}\end{aligned}\ ] ] where , ^t ] and ^t ] , ^t ] and ^t ] where , and are all matrices . in algorithm 1 , we define , for , , . the lower bound defined in is commonly used for monitoring convergence .it can be evaluated in closed form ( see appendix b ) and is given by the above expression applies only after the updates in steps 5 and 6 of algorithm [ alg1 ] have been made .in the sparse spectrum gp regression model , and are intimately linked .each time the lengthscales are changed by a small amount , the amplitudes ( ) will have to respond to this change in order to match the observed . 
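the trigonometric basis underlying this sparse spectrum approximation can be sketched as follows in python. the scaling by sigma_f / sqrt(m) and the way spectral points are drawn from the gaussian spectral density of a squared - exponential kernel are standard choices and should be read as an illustrative sketch rather than the exact parametrisation used here.

```python
import numpy as np

rng = np.random.default_rng(0)


def spectral_points(m, lengthscales):
    # Draw m spectral frequencies from the (Gaussian) spectral density of a
    # squared-exponential kernel with the given lengthscales.
    d = len(lengthscales)
    return rng.normal(size=(m, d)) / np.asarray(lengthscales)


def features(X, S, sigma_f=1.0):
    # Sparse spectrum basis: one cosine and one sine feature per spectral point,
    # so that features(X, S) @ features(X2, S).T approximates the stationary kernel.
    proj = X @ S.T                                  # (n, m)
    phi = np.hstack([np.cos(proj), np.sin(proj)])   # (n, 2m)
    return sigma_f / np.sqrt(S.shape[0]) * phi
```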
in, we have assumed that the variational posteriors of and are independent so that expectations with respect to are tractable and closed form updates can be derived for a fast algorithm .however , strong dependence between and implies that only small steps can be taken in each cycle of updates and a large number of iterations will likely be required for algorithm [ alg1 ] to converge . to accelerate convergence , we propose modifying the updates in steps 1 and 2 .let be the natural parameter of and be the update of in nonconjugate variational message passing . showed that nonconjugate variational message passing is a natural gradient ascent method with step size one . at iteration , we consider where . when , reduces to the update in nonconjugate variational message passing .taking may be helpful when updates in nonconjugate variational message passing fail to increase . from our experiments ,instability in algorithm 1 usually occur within the first few iterations . beyond that , the algorithm is usually quite stable and taking larger steps with can result in significant speed - ups. indicates conventional path in fixed point iterations while the dot dash line indicates path to convergence with a step size greater than 1.,scaledwidth=49.0% ] recall that nonconjugate variational message passing is a fixed point iterations algorithm .figure [ fixedpointplot ] illustrates in a single variable case ( where we are solving ) how taking steps larger than one can accelerate convergence . instead of taking ,consider , where and .the solid line starting from indicates the conventional path in fixed point iterations while the dot dash line indicates the path with a step size greater than 1 .the dot dash line moves towards the point of convergence faster than the solid line .however , it may overshoot if is too large . in algorithm [ alg2 ] , we borrow ideas from to construct an adaptive algorithm where is allowed to increase by a factor after each cycle of updates whilst is on an increasing trend and we revert to when decreases . the adaptive nonconjugate variational message passing algorithmis given in algorithm [ alg2 ] . in appendix c , we show that reduces to the updates : ^{-1 } \\ \text{and } \;\ ;\mu_\lambda^q \leftarrow \mu_\lambda^q + a_t\ , \sigma_\lambda^q \sum_{a \in n(\lambda ) } \frac{\partial s_a}{\partial \mu_\lambda^q}. \end{gathered}\ ] ] step 3(b ) has been added as a safeguard as the updated may not be symmetric positive definite due to rounding errors or when is large . in this case , we propose reducing the step size by a factor until all eigenvalues of are positive .it is useful to insert step 3(b ) in algorithm [ alg1 ] after has been updated as well as it can serve as damping . for both algorithms [ alg1 ] and [ alg2 ] , we initialize as ^t ] , as , as , and and are initialized using the updates in steps 34 of algorithm [ alg1 ] .we set the maximum number of iterations as 500 and the algorithms are deemed to have converged if the relative increase in is less than . recommend taking the factor to be close to but more than 1 .using this as a guide , we have experimented with taking values 1.1 , 1.5 and 2 . 
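the adaptive step - size idea can be summarised by the following python sketch of an over - relaxed fixed - point loop; here `update` stands for one cycle of nonconjugate variational message passing on the natural parameters and `lower_bound` for the variational lower bound, both left abstract, and the growth factor and tolerance are illustrative defaults rather than the exact settings of algorithm 2.

```python
def adaptive_fixed_point(update, lower_bound, eta, factor=1.5,
                         max_iter=500, rel_tol=1e-6):
    # Over-relaxed fixed-point iteration: move eta towards the plain update with
    # step size a, growing a after every improving cycle and resetting it to 1
    # (with a plain update) whenever the lower bound fails to increase.
    a, prev = 1.0, lower_bound(eta)
    for _ in range(max_iter):
        proposal = eta + a * (update(eta) - eta)
        cur = lower_bound(proposal)
        if cur >= prev:
            eta, a = proposal, a * factor
        else:
            a = 1.0
            eta = update(eta)
            cur = lower_bound(eta)
        if abs(cur - prev) <= rel_tol * abs(prev):
            return eta
        prev = cur
    return eta
```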
while all these values lead to improvement in efficiency , we find to be more favourable , as the step sizes increase rather slowly when and too fast when , leading to many failed attempts to improve .while algorithm [ alg2 ] does not necessarily converge to the same local mode as algorithm [ alg1 ] , results from the two algorithms are usually very close .algorithm [ alg2 ] sometimes demonstrates the ability to avoid local modes with the larger steps that it takes .we compare and quantify the performance of the two algorithms in section [ pendulum eg ] .note that in algorithm 2 , each failed attempt to improve is also counted as an additional iteration in step 5(b ) even though step 1 does not have to be reevaluated .we note that algorithms 1 and 2 are not guaranteed to converge due to the fixed point updates in nonconjugate variational message passing .however , convergence issues can usually be mitigated by rescaling variables and varying the initialization values .as the fixed point updates may not result in an increase in , it is possible to compute after performing the updates and reduce if necessary .however , this requires computing a lower bound of a more complex form than at each iteration .our experiments indicate that a decline in is often due to not being symmetric positive definite , and hence installing step 3(b ) suffices in most cases .we also find that checking the simplified form of in at the end of each cycle and simply reverting to 1 if necessary is more economical .if premature stopping occurs in algorithms 1 or 2 due to a decrease in the lower bound at some iteration , this can be detected by examination of the lower bound values and remedied if needed by damping where values are considered .let and be the training and testing data sets respectively .let be the set of spectral frequencies randomly generated from .bayesian predictive inference is based on the predictive distribution , assuming is conditionally independent of given , and . we replace with our variational approximation so that from ( [ postpred ] ) , the posterior predictive mean of is where ^t\end{gathered}\ ] ] and can be computed using results in appendix a. the posterior predictive variance is in the examples , we follow and evaluate performance using two quantitative measures : normalized mean square error ( nmse ) and mean negative log probability ( mnlp ) .these are defined as the mnlp is implicitly based on a normal predictive distribution for with mean and variance , .we propose a new technique of obtaining predictive inference by fitting models locally using adaptive neighbourhoods .our proposed approach consists of two stages : for each test point , , 1 .we first find the nearest neighbours of in ( that are closest to in terms of euclidean distance ) and denote the index set of these neighbours by .we use algorithm 2 to fit a sparse spectrum gp regression model , , to .2 . next , we use the variational posterior mean of the lengthscales , , from to define a new distance measure : where the dimensions are weighted according to .this will effectively downweight or remove variables of little or no relevance . using this new distance measure, we find the nearest neighbours of in and denote the index set of these neighbours by .we use algorithm 2 to fit a sparse spectrum gp regression model , , to and use the variational posterior from for predictive inference . in summary ,the first fitting is used to find out which variables are more relevant in determining the output . 
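for reference, the two performance measures can be written as the short python sketch below; the normalisation used for the nmse (the mean squared error obtained when predicting by the training mean) is a common convention and is assumed rather than copied from the text.

```python
import numpy as np


def nmse(y_test, y_pred, y_train):
    # Normalized mean square error: test MSE divided by the MSE obtained when
    # predicting every test output by the training mean (assumed convention).
    return np.mean((y_test - y_pred) ** 2) / np.mean((y_test - np.mean(y_train)) ** 2)


def mnlp(y_test, pred_mean, pred_var):
    # Mean negative log probability under a Gaussian predictive distribution
    # with the given pointwise means and variances.
    return np.mean(0.5 * ((y_test - pred_mean) ** 2 / pred_var
                          + np.log(2.0 * np.pi * pred_var)))
```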
from, a large value of indicates that the covariance drops rapidly along the dimension of and hence the neighbourhood should be shrunk along the dimension .using from the first fit as an estimate of the lengthscales , the neighbourhood is then adapted before performing a second fitting to improve prediction .we do not recommend iterating the fitting process further since this may result in cyclical behaviour with the neighbourhood successively expanding and contracting along a certain dimension as the iterations proceed .in the examples , when the ssgp algorithm is implemented using this adaptive neighbourhood approach , we replace the variational posterior mean value ( which does not exist for the ssgp method since it does not estimate a variational posterior distribution for ) by the point estimates of the lengthscales obtained by the ssgp approach .the adaptive neighbourhood approach is well - placed to handle data with nonstationarities as stationarity is only assumed locally and local fitting can adapt the noise and the degree of smoothing to the nonstationarities .adapting the neighbourhood can also be very helpful in improving prediction when there are many irrelevant variables due to automatic relevance determination implemented via the lengthscales .a major advantage of the variational approach is that it allows uncertainty in the covariance hyperparameters to be modelled within a fast computational scheme .this is especially important when fitting using local neighbourhoods as plug - in approaches to estimating hyperparameters will tend to underestimate predictive uncertainty when the data set is small .this approach is advantageous for dealing with large data sets as well . as we only consider fitting models to a small subset of data points at each test point , a smaller number of basis functions might suffice . while the computational requirements grow linearly with the number of prediction locations , this approach is trivially parallelizable to get a linear speed - up with the number of processorswe compare the performance of the variational approach with the ssgp algorithm using three real data sets : the pendulum data set , the rainfall - runoff data set and the auto - mpg data set .the implementation of ssgp in matlab is obtained from http://www.tsc.uc3m.es/~miguel/simpletutorialssgp.php .there are two versions of the ssgp algorithm : ssgp ( fixed ) uses fixed spectral points while ssgp ( optimized ) optimizes the marginal likelihood with respect to the spectral points. we will only consider ssgp ( fixed ) .we observe some sensitivity in predictive performance to the basis functions and adopt the following strategy for better results : for each implementation of algorithm [ alg1 ] ( or [ alg2 ] ) , we randomly generate ten sets of spectral points from , perform 2 iterations of the algorithm , and select the set with the highest attained lower bound to continue to full convergence .a similar strategy was used by to initialize the ssgp algorithm . due to the zero mean assumption , we center all target vectors , by subtracting the mean from . 
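the second - stage neighbourhood selection described above can be sketched as a weighted nearest - neighbour search in python; the particular form of the weighting (each squared coordinate difference scaled by the corresponding posterior mean lengthscale) is an assumption consistent with the description, since the exact metric is not reproduced here.

```python
import numpy as np


def adapted_neighbourhood(x_star, X_train, lengthscale_means, n_neighbours):
    # Weight each dimension by the estimated lengthscale from the first fit, so
    # that dimensions along which the covariance drops quickly contribute more
    # to the distance and the neighbourhood shrinks along them.
    w = np.asarray(lengthscale_means)
    d2 = ((X_train - x_star) ** 2 * w).sum(axis=1)
    return np.argsort(d2)[:n_neighbours]
```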
in the examples, `` va '' refers to the variational approximation approach implemented via algorithm [ alg2 ] , `` global '' refers to using the entire training set for fitting while `` local '' refers to the adaptive neighbourhood approach described in section [ neigh ] .the pendulum data set ( available at http://www.tsc.uc3m.es/~miguel/simpletutorialssgp.php ) has covariates and contains 315 training points and 315 test points .the target variable is the change in angular velocity of a simulated mechanical pendulum over 50 ms and the covariates consist of different parameters of the system . used this example to show that ssgp ( optimized ) can sometimes fail due to overfitting .we rescale the input variables in the training set to lie in ] , we simulate each of the ten additional covariates randomly from the uniform distribution on the interval ] implies & = e\{\cos(t_1^t\lambda)\cos(t_2^t\lambda ) \\ & \quad + \sin(t_1^t\lambda)\sin(t_2^t\lambda)\ } \\ & = \exp\{-\tfrac{1}{2}(t_1-t_2)^t \sigma ( t_1-t_2)\ } \\ & \quad \cdot \cos\{\mu^t ( t_1-t_2)\ } \end{aligned}\ ] ] and & = e\{\sin(t_1^t\lambda)\cos(t_2^t\lambda ) \\ & \quad -\cos(t_1^t\lambda)\sin(t_2^t\lambda)\ } \\ & = \exp\{-\tfrac{1}{2}(t_1-t_2)^t \sigma ( t_1-t_2)\ } \\ & \quad \cdot \sin\{\mu^t ( t_1-t_2)\}. \end{aligned}\ ] ] replacing by , we get & = e\{\cos(t_1^t\lambda)\cos(t_2^t\lambda ) \\ & \quad -\sin(t_1^t\lambda)\sin(t_2^t\lambda)\ } \\ & = \exp\{-\tfrac{1}{2}(t_1+t_2)^t \sigma ( t_1+t_2)\ } \\ & \quad \cdot \cos\{\mu^t ( t_1+t_2)\ } \end{aligned}\ ] ] and & = e\{\sin(t_1^t\lambda)\cos(t_2^t\lambda ) \\ & \quad + \cos(t_1^t\lambda)\sin(t_2^t\lambda)\ } \\ & = \exp\{-\tfrac{1}{2}(t_1+t_2)^t \sigma ( t_1+t_2)\ } \\ & \quad \cdot \sin\{\mu^t ( t_1+t_2)\}. \end{aligned}\ ] ] ( [ e1])+([e3 ] ) gives the first equation of the lemma , ( [ e1])-([e3 ] ) gives the second and ( [ e2])+([e4 ] ) gives the third . using lemma 1 , we have ^t,\ ] ] where \end{aligned}\ ] ] and for , .we also have where $ ] , where , , are all matrices and , for , .from ( [ lb ] ) , the lower bound is given by where the terms in the lower bound can be evaluated as follows : \\ \cdot { \mathcal{h}(n , c_\gamma^q , a_\gamma^2)}/{\mathcal{h}(n-2,c_\gamma^q , a_\gamma^2)}\end{gathered}\ ] ] putting these terms together and making use of the updates in steps 5 and 6 of algorithm [ alg1 ] gives the lower bound in .it can be shown ( see * ? ? ? * ; * ? ? ?* ) that the natural parameter of is where is a unique matrix that transforms into for any symmetric square matrix , that is , . we use to denote the vector obtained from by eliminating all supradiagonal elements of . is a good reference for the matrix differential calculus involved in the derivation below . from and7 ) , we have where and are evaluated at let the first line of simplifies to the second line of gives
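the trigonometric expectations used in the appendix can be checked numerically. the short python sketch below compares a monte carlo estimate of e[cos(t1'lambda)cos(t2'lambda) + sin(t1'lambda)sin(t2'lambda)] under lambda ~ n(mu, sigma) with the closed form exp(-(t1 - t2)'sigma(t1 - t2)/2) cos(mu'(t1 - t2)); the dimension and sample size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

d = 3
mu = rng.normal(size=d)
A = rng.normal(size=(d, d))
Sigma = A @ A.T                                   # a random covariance matrix
t1, t2 = rng.normal(size=d), rng.normal(size=d)

lam = rng.multivariate_normal(mu, Sigma, size=200_000)
mc = np.mean(np.cos(lam @ t1 - lam @ t2))         # E[cos cos + sin sin] = E[cos(a - b)]
dt = t1 - t2
closed = np.exp(-0.5 * dt @ Sigma @ dt) * np.cos(mu @ dt)
print(mc, closed)                                 # should agree up to Monte Carlo error
```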
we develop a fast variational approximation scheme for gaussian process (gp) regression, where the spectrum of the covariance function is subjected to a sparse approximation. our approach enables uncertainty in the covariance function hyperparameters to be treated without using monte carlo methods and is robust to overfitting. our article makes three contributions. first, we present a variational bayes algorithm for fitting sparse spectrum gp regression models that uses nonconjugate variational message passing to derive fast and efficient updates. second, we propose a novel adaptive neighbourhood technique for obtaining predictive inference that is effective in dealing with nonstationarity. regression is performed locally at each point to be predicted, and the neighbourhood is determined using a distance measure based on lengthscales estimated from an initial fit. by weighting dimensions according to lengthscales, this downweights variables of little relevance, leading to automatic variable selection and improved prediction. third, we introduce a technique for accelerating convergence in nonconjugate variational message passing by adapting step sizes in the direction of the natural gradient of the lower bound. our adaptive strategy can be easily implemented, and empirical results indicate significant speedups.
social dilemmas describe conflict situations existing between a rational individual maximizing its own benefit and a social group pursuing collective wellbeing . for example , as a sheepherder enjoys herding in the public greenland , the joint effort to achieve environmental sustainability suffers a heavy blow .the gas emission from a factory promoting its prosperity makes the greenhouse problem become more serious , which in turn does harm to the further development of the factory .as everyone faces the temptation to exploit the public goods and make no contribution to society , the immense benefit that can only be got through mutual cooperation becomes unattainable .this poses a challenging problem about how the individual selfishness can lead to the occurrence and maintenance of cooperation commonly found in reality .to answer this question , various mechanisms have been introduced by scientists since darwin .most studies on the evolution of cooperation among selfish individuals are based on game models , the spatial ultimatum game , the prisoner s dilemma game(pdg) , the snowdrift game ( sg) , and the public goods game ( pgg) , to name just a few . the iterated pdg models the interactions between two agents , in which one s contribution favors the other but not itself .although the total income would be the highest if they both cooperate , each agent tends to defect to maximize its own profit .therefore , the nash equilibrium is to defect in all rounds .the pgg is an extension of the pdg to an arbitrary number of agents . in the original pgg , a group of individualshave the choice whether to make an investment into a common pool or not .an equal division of returns irrespective of one s contribution results in the situation where the defectors have an advantage over the cooperators and defection becomes a dominant strategy . to refrain from getting stuck in the deadlocks of mutual defection ,a third strategy , termed a loner s strategy , has been introduced into the original public goods game . in the public goods game with loners ,also known as the optional public goods game ( opgg) , the agents have an option whether to participate in the public goods game or not .those who join the public goods game get a cooperator s payoff or a defector s payoff , and those who do not join the public goods game get a loner s payoff .because a loner only gets a small but fixed payoff , it can win over a group of defectors but will be defeated by a group of cooperators .therefore , an endless rock - scissors - paper cycle occurs : loners will invade a population of defectors with fewer cooperators .cooperators will thrive in a population of loners .a population of cooperators will be intruded by defectors . over the past decade , advances in statistical physics have fueled great interests in constructing a theory of complexity , among which researches on the three - state systems have attained great achievements .the voluntary prisoner s dilemma , the rock - scissors - paper game , the cyclic predator - prey model , the three - state potts model and the three - state cyclic voter model are commonly used models . depending upon pair approximations and mean - field theories ,szabo et al .have theoretically analyzed the cyclic dominance in evolutionary dynamics .it has been found that the symmetric solution of mean - field approximation is stable , but it is not asymptotically stable . 
as to the stationary solutions of the pair approximation , it has been found that they are unstable for small perturbations , which can be eliminated by using the four- and nine - site approximations . in the optional public goods game , although the existence of loners keeps the cooperators from being doomed , the cyclic oscillation of the three strategies indicates that a higher and stable level of cooperation is difficult to be reached in such a system .but in real world , cooperation is often the dominant strategy in animal and human activities and an environmental change will lead to the occurrence of different levels of cooperation .the environmental conditions include the structured space , the population density , and the mobility of the individuals .to find out the mechanisms determining the occurrence of different levels of cooperation in real world , it requires generalizing the model presented by hauert et al . andincorporating the environmental conditions into the original opgg . in natural and human society , the linkage between the agents may dynamically evolve , the structured space is not fully occupied and random and purposive movements often occur . such as the migration of birds , the floating of a boat and the motion of a train .similar dynamic processes have been found in diffusion systems .x.chen et al . have studied the role of risk - driven migration in the evolution of cooperation . z.wanget al . have investigated the impact of population density on the evolution of cooperation based on different game models .z.h.liu has studied the influence of population density and individual mobility on epidemic spreading .c.p.roca et al . andj.y.wakano et al . have studied the roles of mobility in the improvement of cooperation in the pgg and in the ecological pgg respectively .related studies have shown that , in the structured space , the percolation threshold plays a quite important role in the widespread of cooperation and the outbreak of diseases .motivated by the work done in partially occupied lattices , in this paper , we incorporate population density and individual mobility into the original opgg introduced in ref. and play the game in a square lattice with moore neighborhood .the main findings of the study are as follows : \(1 ) with a predefined free - floating probability , the cooperator frequency is determined by population density .there exists a transition point , below which increases with the rise of while above which decreases with the rise of . with a predefined , is determined by .increasing leads to a monotonic decrease of .\(2 ) considering the size distribution of individual components , we find that the power - law relation between and is related to the occurrence of a giant component . 
before the giant component occurs , as increases , changes from an exponential to a power - law distribution and the slop of as a function of decreases with the rise of .after the giant component occurs , as increases , changes little and the slope of as a function of also changes little with the rise of .( 3)as we keep an eye on the decrease of cooperation in the present model , the effect of increasing with a fixed is similar to the effect of increasing with a fixed .a theoretical analysis shows that the change of the frequencies of different strategies in the present model should be determined by possible collisions between the agents with different strategies .both the increase in and the increase in would result in more collisions .the more the collisions between the agents , the lower the levels of cooperation .the paper will proceed as follows . in section 2, we introduce the optional public goods game with purposive and random movements in a spatial setting . in section 3 , simulation results about the evolution of competitive strategies and the local agglomeration of individuals are presented and the relationship between them is discussed . in section 4 ,the extinction thresholds are analyzed with mean field theory , and the possible relations between the levels of cooperation and the individual collisions are described theoretically .section 5 summarizes the paper and gives an outlook for future studies .a population of agents is distributed over a square lattice with side length and the moore neighborhood ( i.e. , degree ) , each agent on each site . for ,the population density is defined as . for ,an agent can move around and occupy the firstly found empty site . in the present model ,there exist two coevolutionary processes : the change of personal strategies and the movement of the agents .once the initial position and the adopted strategy for each agent are set , the system will evolve as follows . in the evolution of personal strategies .initially , there exist three kinds of agents : cooperators ( c ) , defectors ( d ) and loners ( l ) . at each monte carlostep ( mcs ) , firstly , each agent interacts with its nearest neighbors and gets a payoff ( for cooperators ) , ( for defectors ) or ( for loners ) .owing to the neighboring restriction , the interaction group in the present model should be .assuming , in which , , represent the numbers of cooperators , defectors and loners in the interaction group respectively , the payoffs for the agents with different strategies are in which ( ) is the multiplication rate and generally satisfies .therefore , a loner s payoff should be lower than the payoff of the agents in a group of cooperators and higher than the payoff of the agents in a group of defectors . if , the cooperator or defector will get a loner s payoff .after all the agents have attained their payoffs , they will make decisions on whether they should update their strategies or not . in the updating process, an agent i compares its payoff with a randomly chosen neighbor j s and adopts j s strategy with probability in which and represent the cost of strategy change and the environmental noise respectively .the above updating mechanism shows that , in an interaction group , the evolution of the mixed strategy profile is determined by the scores of different strategies .even if the group size is constant , the number of competitive agents , i.e. 
cooperators and defectors , will vary with time .such that the cyclic oscillations will occur in a fully - occupied network setting .in relation to the mobility of individuals . for a loner , because it only relies on a small but fixed payoff and gets nothing from a benefit - seeking competition , it is not so attractive to stay at its current site and at each time step it will move randomly with probability . for cooperators and defectors , the probabilities of leaving the current sites are determined by whether they are satisfied with the present situation and the possible free - floating , which are described by the purposive - moving probability and the free - floating probability respectively .the value of is predefined and the value of is determined by the wealth of an agent , which is a cumulative payoff in the latest t mcs .for an agent i , its wealth is for a cooperator or a defector , if its wealth is less than 0 , with probability in which , it leaves its current site and finds an empty site to stay on .in the present model , to refrain from the random disappearance of loners , at every mcs , the competitive agents will occasionally become loners with a small random - flipping probability .a monte carlo step can be summarized as follows .( 1 ) each agent interacts with its nearest neighbor(s ) and gets its wealth .( 2 ) for a cooperator or a defector , if its wealth is less than 0 , it leaves its current site with probability . if its wealth is greater than or equal to 0, it leaves its current site with probability . for a loner, it leaves its current site with probability no matter how rich it is .the moving agents walk randomly and stay on the firstly found empty sites .for all the agents , they move sequentially .( 3 ) the wealth of a moving agent is reset to 0 and it will not move in the next t-1 mcs .( 4 ) each agent interacts with its nearest neighbor(s ) and gets its payoff .( 5 ) each agent i compares its payoff with a randomly chosen neighbor j s and adopts neighbor j s strategy with probability .for all the agents , they update their strategies synchronously .( 6 ) cooperators and defectors become loners with probability . therefore , in the present model , there are two mechanisms which determine the evolution of cooperation .the payoff determines the replicator dynamics while the wealth determines the mobility dynamics . in relation to the replicator dynamics .different from many other pgg models where the agents collect income simultaneously from several public goods games , in the present model , the agents collect income from a single public goods game .our concentration is how the evolutionary dynamics in a three - strategy game is affected by the population density and the free - floating probability . in relation to the local agglomeration . in the present model, increasing population density has a great influence on the size distribution of individual components , which is similar to that in the percolation problem . in the physical world, percolation theory is commonly used to explain connectivity and transport problems .for example , occupy the sites on a square lattice with probability p. for a small , only small isolated clusters , which means a set of neighboring sites occupied , are observed .increasing p leads to the growth and merging of clusters . 
for , one dominant cluster ( infinitely large cluster ) occurs and at p=1 all the sites are occupied .the critical point , at which a dominant cluster suddenly occurs , is known as the percolation threshold .the exact value of the threshold and the system property close to are both fundamental problems in percolation theory , which have been widely studied by physicists . in the present model , the cluster of sites occupied by agglomerated individuals , which is called individual component throughout the paper ,should be affected by population density .both the size of the largest component and the distribution of the component sizes can effectively reflect the percolation properties .the relationship between the levels of cooperation and the occurrence of a dominant component is a favorite of ours .in this section , we will focus on the roles of population density and free - floating probability in the change of cooperation . following the work done in ref. , in monte carlo simulationswe choose the loner s payoff , the cost of strategy change and the environmental noise . throughout the paper ,the following parameters are also predefined : the size of the square lattice , the random - moving probability of loners and the random - flipping ( or ) probability .frequencies of cooperators ( circles ) , defectors ( squares ) , and loners ( triangles ) as a function of multiplication rate in a square lattice with size and moore neighborhood . other parametersare , , , , and ( a) , ( b) , ( c) . the results are obtained by averaging over 10 runs and 1000 mcs after 10000 relaxation mcs in each run.,width=453 ] figure 1 shows the frequencies of cooperators , defectors and loners as a function of in a square lattice with different population densities , 0.5 and 0.8 .as what has been found in a fully - occupied regular network , there exist two extinction thresholds and . for ,loners perform better than cooperators and defectors so that all the agents become loners in the final steady state . for ,the three strategies coexist .the rise of is found to be beneficial for defectors and loners , but not for cooperators . for , the survival of cooperatorsis favored and the loners go into extinction . in all the three cases with different , the values of the extinction threshold are the same , , which is also the same as that in a fully - occupied regular network .such a result comes from the fact that , with a small multiplication rate , even if all the competitive agents are cooperators , the payoff of a cooperator , , is less than a loner s payoff . therefore , the rock - scissors - paper cycle does not occur and the system is stuck in the state where all the agents are loners .however , as we consider the extinction threshold , we find it increases with the rise of . for , . for , .for , .frequencies of cooperators ( circles ) , defectors ( squares ) , and loners ( triangles ) as a function of multiplication rate in a square lattice with and ( a) , ( b) , ( c) .all the other parameters are the same as those in fig.1.,width=453 ] the rise of population density can effectively facilitate the interactions between two agents , which is more possible in the system where the agents can move freely . in fig.2 , we give the frequencies of different strategies as a function of with predefined population density and different . 
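Before turning to that comparison, it may help to make the payoff and imitation rules of the model concrete. The sketch below is not the authors' code: the payoff form is the standard optional public goods game (each cooperator contributes a unit cost, the common pool is multiplied by r and shared equally among the participating competitors, and loners receive a fixed payoff sigma), and the imitation rule is a Fermi-type function of the payoff difference with switching cost tau and noise kappa. The numerical values of sigma, tau and kappa used here are placeholders, since the exact values are not legible in this version of the text.

import math

def group_payoffs(n_c, n_d, r, sigma=1.0, cost=1.0):
    """Payoffs of a cooperator, a defector and a loner in a single
    interaction group with n_c cooperators and n_d defectors."""
    s = n_c + n_d                 # number of participating (competitive) agents
    if s <= 1:                    # no real game: competitors earn the loner payoff
        return sigma, sigma, sigma
    share = r * cost * n_c / s    # each participant's share of the multiplied pool
    return share - cost, share, sigma   # cooperator, defector, loner

def adopt_probability(p_i, p_j, tau=0.1, kappa=0.1):
    """Probability that agent i adopts the strategy of a randomly chosen
    neighbour j, given payoffs p_i and p_j (Fermi rule with switching
    cost tau and environmental noise kappa)."""
    return 1.0 / (1.0 + math.exp((p_i - p_j + tau) / kappa))

# Example: a group of 3 cooperators and 2 defectors with r = 3.
print(group_payoffs(3, 2, r=3.0))      # (0.8, 1.8, 1.0)
print(adopt_probability(0.8, 1.8))     # close to 1: the poorer agent imitates

In a full simulation these two functions would be evaluated inside each Monte Carlo step, together with the movement rules summarised above.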
comparing the results in fig.2 with those in fig.1 , we find that , as to the change of the frequencies of different strategies , the effect of increasing for a fixed is similar to the effect of increasing for a fixed . as increases , the cooperator frequency decreases while the defector frequency and the loner frequency increase within a large range of .as increases from 0.01 through 0.05 to 0.1 , the extinction threshold increases from 4 through 5 to 5.5 accordingly .the results in fig.1 and fig.2 indicate that the increase in the meeting probability will lead to the decrease of cooperation , regardless of its coming from the rise of population density or the rise of free - floating probability .frequencies of ( a ) cooperators , ( b ) defectors , and ( c ) loners as a function of population density for and different (circles ) , 0.05(squares ) , 0.1(triangles ) .all the other parameters are the same as those in fig.1.,width=226 ] it is instructive to ask : can we draw a conclusion that the fewer the chances to meet , the higher the levels of cooperation in the present model ? to examine whether an optimal level of population density exists , in fig .3 we give the frequencies of the three strategies as a function of for different . with a small and fixed , there exists a transition point . for , defectors do not exist in the population .the number of cooperators increases while the number of loners decreases with the rise of . for , the number of cooperators decreases while the number of loners increases with the rise of .the number of defectors firstly increases and then decreases with the rise of . increasing the free - floating probability leads to the decrease of the levels of cooperation but not the disappearance of the transition point . as ranges from 0.01 to 0.1 , the transition point changes from 0.28 to 0.1 and the cooperator frequency at the transition point decreases from 1 to 0.86 accordingly .the existence of the transition point indicates that , there exists an optimal level of population density , with which the system will reach the highest level of cooperation .time - dependent frequencies of cooperators with r=2.5 , , and ( a ) , (black ) , 0.10(red ) , 0.25(blue ) ; ( b) , (black ) , 0.35(red ) , 0.60(blue ) ; ( c ) , (black ) , 0.05(red ) , 0.10(blue).,width=453 ] to have a close eye on the evolution of cooperation below and above the transition point , in fig .4(a ) and ( b ) we plot the time - dependent frequencies of cooperators for different .for comparison , we also plot the time - dependent cooperator frequencies for different in fig .4(c ) . for , as the time passes becomes stable .the change of only leads to the change of the average value of cooperator frequencies but not the fluctuations of . for ,the change of not only results in the change of but also the stability of cooperation .increasing leads to large fluctuations of the levels of cooperation .figure 4(c ) shows increasing also leads to large fluctuations of the levels of cooperation .such results indicate that , whether the decrease of cooperator frequency results from the increase in or the increase in , the large fluctuations of the strategies are detrimental to cooperation . 
[Figure 5: the size distribution of individual components in the evolved system with r=2.5, and different population densities (circles), 0.4 (squares), 0.5 (triangles).]

To find the relationship between the frequencies of strategies and the population patterns, in fig.5 we plot the size distribution of individual components for different . As increases from 0.3 through 0.4 to 0.5, the size distribution changes from an exponential distribution, through a power-law distribution, to a giant component accompanied by a power-law distribution.

[Figure 6: the size of the largest component as a function of population density in the evolved system with r=2.5, and .]

In fig.6 we display the size of the largest component as a function of . As ranges from 0.1 to 0.3, the size of the largest component changes little with the rise of . As ranges from 0.3 to 0.5, the size of the largest component increases sharply with the rise of . For , nearly all the agents are in the same component and the largest component changes little with the rise of . Comparing the results in fig.3 with those in fig.5 and fig.6, we find that, for low population density, the agents are scattered and the levels of cooperation are relatively high, whereas for high population density nearly all the agents are in the same component and the levels of cooperation become lower. Therefore, the results in fig.3 can be understood as follows. For quite low population density, the departure of dissatisfied agents makes it impossible for the defectors to exploit the cooperators, and groups of defectors are finally doomed by the randomly moving loners. For the cooperators who leave a competing group, because of the low population density it is not easy to find another cooperator to collaborate with, and they finally become solitary agents whose payoffs equal the payoff of a loner. The occasional ( or ) random flipping protects the loners from extinction. Therefore, under such an environment, only cooperators and loners remain in the final steady state. But for high population density or a high free-floating probability, it is easy for defectors to intrude into cooperator clusters, and the average level of cooperation accordingly decreases with the rise of or the rise of .

[Figure 7: log-log plot of the cooperator frequency vs with , 0.4, 0.5, 0.6, 0.7. All the other parameters are r=2.5, and . The fitted lines satisfy the equation .]

To gain a deeper understanding of the role of free-floating in the increase or decrease of cooperation, in fig.7 we plot vs for different . From fig.7 we find that, for low population density, the frequency of cooperators decreases sharply with the rise of ; the rise of makes this tendency less pronounced. Fitting curves to the data points in fig.7, it is found that and satisfy the equation , in which 0.29, 0.37, 0.39, 0.38, 0.34 and 0.24, 0.13, 0.09, 0.07, 0.07 respectively. Comparing the results in fig.7 with the results in fig.6, we find that the slope of the fitted line in fig.7 is closely related to the occurrence of the giant component.
as the size of the largest component is quite small , the slope of the fitted line is steep , which indicates that the change of can greatly affect as the agents are scattered .as has a sharp increase , the slope of the fitted line obviously becomes gentle .as , the change of has little effect on the change of , and accordingly the slope of the fitted line no longer changes with the rise of .such results indicate that , in the present model , the value of b in equation contains the information of evolutionary patterns .we may effectively figure out the size distribution of the components from the slope of the fitted line .the above simulation results suggest that , as to the decrease of cooperation , the role of increasing is similar to the effect of increasing .the change of cooperation in the present model should come from the change of the collisions between the agents . both increasing and increasing effectively increase the collisions between the agents with different strategies , which makes it easy for the defectors to exploit the cooperators and accordingly leads to the decrease of cooperation .in the present model , due to the purposive movement and free - floating of the agents , the interactive partners in the competing group vary with time , which will lead to similar results in well - mixed populations , where all the agents are possible to be chosen as interactive partners in the competing process . in the following ,we make a mean field analysis of the replicator dynamics and give an approximation of the extinction threshold in the present model . according to the payoff function , in a randomly chosen group, a defector always gets a higher payoff than a cooperator .but in mean field analysis , the system does not evolve according to such payoffs but the averaged payoffs for cooperators or defectors which are obtained by averaging over all groups . in the present model ,not all the interaction groups have the same size . in theoretical analysis, we take the average size of the interaction groups as the interaction group size , which satisfies .suppose in the well - mixed population , the frequencies of cooperators , defectors and loners are , and respectively , which satisfy the condition .as that in ref. , the average payoffs of defectors , cooperators and loners are , \ ] ] as the multiplication rate increases to the extinction threshold , the loners become extinct and we get and . in such a case , the payoffs of defectors and cooperators become on the condition that , where cooperators and defectors can coexist , we obtain the extinction threshold according to the above equation , the extinction threshold in the present model is only related to the average size of interaction groups .the rise of population density will lead to the rise of the average group size and thereafter the increase in the extinction threshold .for example , as population density increases from to , we can estimate that should increase from 2.7 to 7.2 . compared with the simulation results in fig.1, it is found that , only for an intermediate , the theoretical value of is in accordance with the simulation result . for a small ,the theoretical value of is smaller than the simulation result . 
for a large ,the theoretical value of is greater than the simulation result .such a difference between the mean field analysis and the simulation data may come from the dynamic connectivity between the agents .the above theoretical analysis is only a rough approximation for .it has been found that the effect of individual movements can not be handled within the mean - field analysis .that is the reason why the effect of the moving - probability is omitted in our analysis .how to give an accurate approximation for is still an open question for future studies .the simulation results show that the free - floating of the agents has great impact on the change of cooperation . in the following , by theoretical analysis, we will give a picture of what may be the possible reasons for the occurrence of such an impact .in the opgg , the frequencies of cooperators , defectors and loners are determined by the payoffs to different strategies . in the well - mixed case ,the payoff of each agent is determined by the group size and the status of the agents in the same group .the larger the group size , the more possible the immediate interactions between the agents . just like that in the well - mixed case , in a mobile environment , although all the agents are on the sites of a network , they are possible to meet each other within a period of time .the faster the free - floating of the agents , the more possibly the agents meet each other within the period of time .therefore , from the view point of the probability of meeting between the agents , the effect of increasing the speed of free - floating is the same as the effect of increasing the group size . in the following, we will firstly give a functional relation between the group size and the free - floating probability , and then give a theoretical analysis of the relationship between the free - floating probability and the levels of cooperation .in the present model , because all the agents are arranged on the sites of a square lattice with moore neighborhood , the number of agents in the same group should be less than or equal to 9 .considering the effects of increasing the group size and the speed of free - floating on the increase of meeting probability , we define the boundary conditions of and as : for , ; for , .therefore , the following functional relation between and is adopted , .\ ] ] just like that in a well - mixed case , for predefined and , the difference in the payoff between a defector and a cooperator satisfies the equation in the steady state , the payoffs of all the agents should be the same , .therefore , the above equation becomes for , the solution of the above equation is the same as the solution of the following equation suppose for the three cases of , and , we obtain from the equations of ( 18 ) , ( 19 ) and ( 20 ) we find that , within the range of and , as changes from 0 to 1 , the value of changes from to .as we check the sign of , we find that it changes at most once within the range of .such results indicate that , within the range of , there exists a single solution for equation ( 16 ) . for ,the payoff of cooperators is less than the payoff of defectors . 
for ,the payoff of cooperators is greater than the payoff of defectors .the above theoretical analysis indicates that , in the present model , increasing will lead to the increase in and accordingly the decrease of , which is in accordance with the simulation results found in fig .it should be noted that , the above mean field analysis can not accurately predict the cyclic behavior in the dynamic network and the lattice with moving agents .therefore , the present theoretical analysis is only a rough approximation and the corresponding equations are borrowed from those in a static network .the strong relevance of local structures needs a generalized mean - field approximation .a qualitatively correct prediction of the cyclic behavior in the dynamic network and the lattice with moving agents is still an open question for future studies .when facing the choice to win everything or get nothing in a public goods game , it is reasonable for the individuals to drop out of the game and enjoy a small but fixed gain , which yields the rock - scissors - paper cycles of different strategies in the optional public goods game .the oscillatory cooperation in the optional public goods game displays the sustainability of cooperation , but it does not tell us on which conditions the system will evolve to slow oscillations and different levels of stable cooperation can be reached . by incorporating population density and individual mobility into the original opgg, it is found that both the stability and the improvement of cooperation are connected to the degree of crowdedness and the speed of free - floating of competitive agents . for low population density and slow free - floating of competitive agents ,the departure of dissatisfied agents from competing groups makes it easy for scattered cooperators to agglomerate into cooperator clusters , which results in the expansion of cooperation .the agents who stay on the original sites are more possible to form defector clusters , which will finally be doomed by the random - moving loners . for low population density and fast free - floating of competitive agents , the exchange of neighbors often takes place and it is easy for defectors to invade the cooperative clusters .numerous small components where cooperators and defectors coexist occur , and the levels of cooperation decrease . for high population density , because of the percolation effect , nearly all the agents are merged into the same component .the defectors uniformly expand into cooperator territory and the evolutionary dynamics in the original opgg is recovered .the relationship between the levels of cooperation and the free - floating probability is found .the simulation results in the present model are quite similar to those in ref. , where the effects of different population densities on the evolution of cooperation are studied depending upon the prisoner s dilemma , the snowdrift , the stag - hunt and the public goods game . in a static network with no moving agents , the optimal population density , with which an optimal level of cooperation can be reached ,has been found to be related to the percolation threshold .the present model indicates that the existence of different levels of cooperation in real world should be related to the environmental conditions , including population density and individual mobility . 
In future work, similar environmental conditions should be considered in game models with continuous strategy spaces; generalizing these conditions to such models is a direction we plan to pursue. This work is supported by the Humanities and Social Sciences Fund sponsored by the Ministry of Education of China (grant no. 10YJAZH137), the Natural Science Foundation of Zhejiang Province (grant no. Y6110687), the Social Science Foundation of Zhejiang Province (grant no. 10CGGL14YB), and the National Natural Science Foundation of China (grant nos. 10805025, 11175079, 70871019, 71171036, 71072140).
In a static environment, optional participation and the local agglomeration of cooperators have been found to be beneficial for the emergence and maintenance of cooperation. In the optional public goods game, however, the rock-scissors-paper cycling of the three strategies yields oscillatory rather than stable cooperation. In this paper, by incorporating population density and individual mobility into the spatial optional public goods game, we study the coevolutionary dynamics of strategy updating and benefit-seeking migration. With low population density and slow movement, an optimal level of cooperation is readily reached, whereas an increase in population density or a speed-up of the free-floating of competitive agents suppresses cooperation. A log-log (power-law) relation between the level of cooperation and the free-floating probability is found. Theoretical analysis indicates that the decrease of the cooperator frequency in the present model results from increased interactions between agents, which may originate from increased cluster sizes or faster random movement.

Keywords: mobility, cooperation, population density, public goods games
radio astronomy is poised to move ahead with a suite of new instruments such as the square kilometre array ( ska ) .these instruments will improve sensitivity , resolution , bandwidth , and many other instrument and observational parameters by more than an order of magnitude . at present , many radio astronomy observations are corrupted to at least some extent by radio frequency interference ( rfi ) .this interference comes from ground - based communication transmitters , satellites , and the observatory equipment itself . with the increase in sensitivity and frequency coverage of radio astronomy instruments , and with telecommunication signals occupying more of the spectrum ,it is essential to develop ways of removing or suppressing this rfi . in radio astronomy, real - time adaptive filters can be used to modify an auxiliary voltage time series ( the reference signal ) so that it cancels rfi from an astronomical voltage time series . for each voltage samplethe filters are allowed to slightly vary their internal coefficients in order to adapt to any changes taking place in the rfi .if one is interested in the power spectrum of the astronomy signal , and the filter coefficients stay fairly constant over the interval in which the power spectrum is estimated , cancellation in the post - correlation domain can give better results , since a second reference antenna can be used to give complete suppression of the rfi ( only zero - mean random receiver noise is added to the complex correlations and it will average away , see ) . however , many applications ( particularly in the communications field , but also some astronomy applications ) , require the recovery of the actual symbol stream ( i.e. , a transmitted sequence of symbols such as bits or words ) from the noisy rf environment , which is not retained in post - correlation . following a suggestion of ,we have devised a modified approach that can give improved rfi attenuation in the voltage domain . in an attempt to minimise any rfi in an astronomical voltage series the standard approach is to minimise the canceller s output power , which it will be shown means that some residual rfi always remains .the modified approach that we give here forces the rfi in the output power to zero .it results in residual power that is always greater than that of the standard approach ( output power is no longer minimised ) , but which does not contain rfi .that is , superior rfi cancellation is obtained , but at the expense of somewhat increased thermal noise . 
in the following sectionsthe standard adaptive canceller is discussed , and the new approach is introduced .this is followed by an overview of how cancellation can be applied after correlations are formed .residual rfi power and added receiver noise are investigated and an example from the australia telescope compact array is given .in an attempt to remain general , we assume a system of one or more radio antennae pointing towards a direction on the celestial sphere .delays are inserted into the signal paths so that a wavefront from the chosen direction arrives at the output of each antenna simultaneously .the celestial location is known as the phase tracking centre .given that we will attempt to deal with the interference in subsequent parts of the system , separate reference antennae are incorporated into the network of receivers to observe the rf environment , which typically enters the astronomy signal through the side - lobes of the antennae .signals from astronomy antennae will be referred to as main signals , and those from reference antennae as reference signals . at each antennaa waveform containing an additive mixture of all the signals present in the environment is received and downconverted to an if voltage series .this is sampled and quantised into a number of digital bits ( which we assume is sufficient to keep the voltage statistics linear so that quantisation effects such as intermodulation are negligible , and to ensure that the receiver noise and astronomy fluctuations are measured even in the presence of strong interference ) .each main if voltage series contains three components : a noise voltage from the receiving system , ; a noise voltage from the sky , ; and interference , .the sky voltage contains the information about the astronomical sources , which are the signals of interest .if the interference can not be removed completely , it is desirable to reduce it to less than the final rms noise level ( with a negligible or at least predictable effect on the astronomy , ) .since an interfering signal is usually incident from a direction other than the phase tracking centre , its wavefront will not be synchronous at the output of the different antennae .the geometric delay , , of antenna due to the physical separation of the receivers represents the difference in arrival time of an interfering wavefront at antenna and an arbitrary reference point ( after accounting for the delay needed to track the selected field on the celestial sphere ) .as a signal passes through a receiving and processing system it encounters various convolutions and deconvolutions , so working in the frequency domain can offer a more intuitive basis for discussion . in the frequency domain the system can be represented by complex multiplications and divisions . a frequency - dependent coupling term , ,is used to describe the combined complex - valued gain of each receiver system and antenna to the interference , including any filtering ( time is included to account for the slow variations imposed as the rfi passes through antenna side - lobes ) . 
using upper case characters to denote frequency domain quantities , and keeping in mind that this spectral representation comes from fourier transforming each consecutive 1000 or so samples of the voltage series ,the signal in a quasi - monochromatic channel at frequency is for main antennae and for reference antennae .note that the phase term due to the geometric delay of the interfering signal , , has been kept separate from the -terms ( time is also included here to account for changes as either the phase tracking centre or rfi transmitter direction change ) . to remain generalthe -terms are kept as complex quantities , to allow for any effects on the phase that are not due to the geometric delay .it is assumed that there is only one interfering signal in a frequency channel ( see for a discussion of multiple interferers ) , and that there is negligible reference antenna gain in the direction of , i.e. , reference antennae do not measure signal from astronomical sources .these are important assumptions , but often quite reasonable ( the latter assumption is strengthened because the weak astronomy signal enters the reference antennae through side - lobes ) .however , statements made later in relation to the lack of effect of adaptive cancellers on the astronomy signal rely on the validity of the second assumption .if any astronomy signals leak into the reference series the achievable sensitivity , dynamic range , astronomy purity , etc ., will all be affected .for example , signal leakage could lead to the power of a strong self - calibration source changing as the canceller weights change .adaptive cancellers are usually applied to broadband if voltage samples in the time domain .( `` broadband '' here simply refers to the whole passband , rather than the quasi - monochromatic frequency channels . )while the adaptive canceller examples given throughout section [ an atca data example ] have been processed in the time domain , the following analysis is carried out in the frequency domain ( see ; and for descriptions of time domain implementation ) .the aim of adaptive cancellers in interference mitigation is to find the set of filter weights , * _ w _ * , which scale and phase shift each reference antenna frequency channel so that they best approximate the rfi in a main astronomy spectrum ( in the time domain delays are inserted into the signal paths so that positive and negative delays can be considered .so strictly speaking the cancellers are not truly real - time , the output is lagging in time by the length of the inserted delay . )so is a vector with a complex element for each frequency channel .figure [ mk1 adaptive cancellers ] shows schematically how such a canceller can be implemented , hereinafter referred to as a mark one ( mk1a ) canceller ( the `` a '' is added since we will be modifying the filter later ) . throughout this paper it is assumed that the filter weights are varying slowly enough that they are approximately constant on the time scales used to calculate them ; typically less than a second .it is also assumed that the various signals are independent ( i.e. , , , and are uncorrelated , and the receiver noise is independent for the different antennae , i.e. , and are uncorrelated .the noise terms , , will not be independent for receivers used in low frequency instruments such as lofar , where they will be dominated by partially correlated sky noise . herewe will only consider frequencies for which is dominated by uncorrelated noise , internal to the receivers . 
) when these assumptions hold , the power in a single frequency channel at the output of the canceller is where * denotes a complex conjugate and the expectation value .the explicit frequency dependency of the terms has been removed ( it is assumed that the channels are independent so that ( [ adaptive filter power ] ) can be applied to each frequency channel separately ) .if we set , , , and assume that the complex gains , delays , and weights are constant over the time average , so they can be taken outside the average , then we can write where represents the phase difference , .equation ( [ mk1 weight dependence ] ) highlights a critical point .minimising the output power reduces the _ combined _ power of the residual rfi and the inserted reference receiver noise .if there is reference receiver noise the rfi will never be completely cancelled .this is because both the interference and reference receiver noise are being weighted .the receiver noise of the main antennae and the astronomy signals , however , are not affected by the choice of weights and will pass through the canceller freely ( under assumptions of signal independence and zero reference gain towards astronomy sources ) . to perform the minimisation of , one can differentiate it with respect to and find the weights that set the derivative equal to zero .the surface of is a multidimensional ( positive ) quadratic surface that has a single minimum , so the weights that set must give the unique global minimum . alternatively, wiener theory tells us that the optimal weights ( known as the wiener - hopf solution ) , , which minimise the output power are also the weights that set the cross - correlation between the canceller output and the reference signal to zero ( , as indicated in figure [ mk1 adaptive cancellers ] ) : so the weighting process takes the cross - correlation of the reference and main signals , which determines the correlated power and relative delay of the interference , and scales that by the auto - correlation of the reference signal .one can calculate these weights directly by calculating the correlations from short integrations , or they can be found adaptively by iteratively seeking and then tracking the weights that satisfy ( [ mk1 pre corr weights a ] ) .ideally the output of the canceller would consist of the astronomy signal and receiver noise .however , as mentioned above , since there is always some receiver noise in the reference signal , setting the correlation in ( [ mk1 pre corr weights a ] ) to zero can never remove all of the rfi .the rfi is played off against the reference receiver noise .if is the interference - to - noise power ratio of the reference signal , , the mean amount of residual output power ( power in addition to the main receiver noise and the astronomy , ) , is given by it is clear from ( [ mk1 residual power ] ) that as approaches infinity ( no reference receiver noise ) the residual power goes to zero . 
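In practice these weights can be estimated directly from short integrations, as noted above. The fragment below is a minimal numpy illustration, not the implementation used for the data in this paper: it forms one complex weight per frequency channel as the main-reference cross-power divided by the reference auto-power and subtracts the weighted reference channel from the main channel. The toy signal model (noise levels, RFI strength, coupling factors) is invented purely for illustration.

import numpy as np

def mk1a_cancel(x_main, x_ref):
    """x_main, x_ref: complex spectra of shape (n_blocks, n_channels),
    one short FFT per row.  The weights are assumed to be constant over
    the blocks used to estimate them."""
    cross = np.mean(x_main * np.conj(x_ref), axis=0)   # <M R*> per channel
    auto = np.mean(np.abs(x_ref) ** 2, axis=0)         # <|R|^2> per channel
    w = cross / auto
    return x_main - w[None, :] * x_ref, w

# Toy demonstration: a strong narrow-band interferer in channel 20.
rng = np.random.default_rng(1)
n_blk, n_ch, ch_rfi = 4096, 64, 20

def complex_noise(shape, scale=1.0):
    return scale * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

rfi = np.zeros((n_blk, n_ch), complex)
rfi[:, ch_rfi] = complex_noise((n_blk,), scale=30.0)   # the interfering signal
main = complex_noise((n_blk, n_ch)) + 0.5 * rfi        # astronomy antenna (side-lobe RFI)
ref = complex_noise((n_blk, n_ch)) + 2.0 * rfi         # reference antenna (high INR)

cleaned, w = mk1a_cancel(main, ref)
print(np.mean(np.abs(main[:, ch_rfi]) ** 2), "->",
      np.mean(np.abs(cleaned[:, ch_rfi]) ** 2))

With a large reference interference-to-noise ratio the power in the contaminated channel falls towards, but never exactly reaches, the main receiver noise level, in line with the residual-power behaviour described above.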
if there is no rfi , will be zero and there will also be zero residual power ( the filter turns off )when is finite and non - zero the reference receiver noise term in the denominator of ( [ mk1 pre corr weights b ] ) biases the weights and some residual power will remain .what might not be so clear from an inspection of ( [ mk1 residual power ] ) is the statement made earlier that this residual power is a combination of reference receiver thermal noise added during cancelling and residual rfi that was not excised .another way to see this bias is to consider figure [ mk1 adaptive cancellers ] .thermal noise from the reference receiver is present in both inputs to the weight generation process ( in the term from the reference antenna and the term from the filter output , c.f .equation [ mk1 pre corr weights a ] ) .this will lead to a second non - zero - mean correlation product ( the first being the rfi ) . minimisingthe output power must be a trade - off between minimising the contributed reference receiver noise and the rfi , and as a result there will always be residual rfi .the total residual power given in ( [ mk1 residual power ] ) can be divided into the inserted reference receiver noise residual , , and the rfi residual , , such that which are shown by to be thus . to shed some light on the meaning of the relations in ( [ mk1 residuals a ] ) and ( [ mk1 residuals b ] ), we again interpret the process of output power minimisation as determining the weights that set the cross - correlation of the canceller output and the reference signal to zero .as increases , the rfi becomes the dominant signal in the cross - correlation , and the filter must concentrate on reducing the rfi power . as a resultthe proportion of the rfi that remains after cancelling decreases faster than the injected noise power . when noise starts to dominate andthe filter will concentrate on reducing the contributed noise power .when goes to zero , the correlation is completely reference receiver noise , and the canceller turns itself off . when rfi dominates , most of the residual power is reference receiver noise , and when reference receiver noise dominates , most of the residual power is rfi .also , since the reference signal is being scaled in an attempt to match its own rfi to the rfi in the main signal , the larger is relative to ( for example using a reference antenna that is pointing directly at the interfering source ) , the smaller the scaling factor ( weighting amplitude ) and thus the amount of injected receiver noise .when and both of the residual terms drop off and extremely good results are achieved . to remove the biasing effect caused by the reference receiver noise ,a second reference receiver can be used , as was suggested in .the rfi in the main spectrum is still estimated using a weighted version of the spectral channels from the first reference , but now the cross - correlation of the second reference signal with the canceller output is set to zero in order to find the weights .this mk2a canceller is shown in figure [ mk2 adaptive cancellers ] .it was mentioned in the previous section that the reference receiver noise is a component of both signals in ( [ mk1 pre corr weights a ] ) .zeroing this correlation results in minimum output power , but a non - zero rfi residual . correlating the canceller output against a second reference with uncorrelated receiver noiseremoves the bias . since the rfi is the only correlated signal , zeroing the cross - correlation results in zeroing the rfi. 
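A minimal sketch of this dual-reference weight estimate is given below. It reuses the data layout of the earlier single-reference sketch and should be read as an assumption consistent with the verbal description above; the formal expressions are derived in the following paragraphs. The main channel is still corrected with a weighted copy of reference 1, but the weight is built from correlations against reference 2, whose receiver noise is independent.

import numpy as np

def mk2a_cancel(x_main, x_ref1, x_ref2):
    """All inputs: complex spectra of shape (n_blocks, n_channels).
    Reference 1 supplies the correction; reference 2 only sets the weights."""
    num = np.mean(x_main * np.conj(x_ref2), axis=0)   # <M R2*>: only the RFI correlates
    den = np.mean(x_ref1 * np.conj(x_ref2), axis=0)   # <R1 R2*>: no receiver-noise bias
    w = num / den                                     # noisy when a channel is RFI-free
    return x_main - w[None, :] * x_ref1, w

In this form the weight is constrained only by the correlated RFI, so on average the RFI term is removed completely.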
however , there is no built - in mechanism to guard against the amount of reference receiver noise contributed to the output .the amount added during cancelling must always be greater than the noise added by the mk1a canceller , since power is no longer minimised .the two references will be denoted and , so that and we can again determine the weights that set the output - reference cross - correlation to zero : as with the mk1a canceller the weights need to scale and phase shift reference signal so that its rfi component matches that of signal . here , however , the independent reference signal is used to give a true view of the scaling factor needed to match rfi levels . is only constrained by the rfi , since the receiver noise terms in are uncorrelated , and the rfi signal is entirely replaced with a weighted version of thermal noise from reference signal ( assuming that the various signals are uncorrelated and that the receivers are ideal and remain linear ) .while the noise in the weights will also increase the output power of the canceller , it is much weaker noise since the weights are averaged over many samples , and is not considered further here . when from ( [ mk2 pre corr weights ] ) is substituted into the equation for output power , ,the mean residual power is as shown in .infinite attenuation of the rfi component has been achieved , but potentially a significant amount of system noise has been added during cancelling .compare ( [ mk1 residual power ] ) and ( [ mk2 residual power ] ) : as in ( [ mk1 residual power ] ) , when , the attenuation of the rfi signal in is very large . in this case however , any frequency channels with will end up with more unwanted power than they started with .the mk2a canceller does not turn itself off .the reference signal is boosted until its rfi matches the rfi in the main signal , regardless of the amount of receiver noise being added .note that does not affect the output power , provided there is enough rfi power in to keep the weights stable ( see section [ instabilities in the dual reference algorithms ] for a discussion of weight stability ) .if one is interested in the ( interference - free ) power spectrum of , the single reference antenna mk1a canceller output will contain less residual power .there are , however , advantages to the mk2a canceller .if one is concerned with retrieving a structured signal from a voltage series , random noise in the signal may not pose too much of a problem , but a structured rfi residual may detract from signal recovery .more relevant in radio astronomy is the case where one is looking for a structure in the power spectrum .the rfi remaining after mk1a cancelling will have features in the power spectrum , but if the ( amplified and filtered ) reference rfi , , is proportional to , equation ( [ mk2 residual power ] ) says that the noise contributed by mk2a cancelling will have the same spectrum as the input reference receiver noise ( apart from a constant scaling factor ) .it is also conceivable to remove the unwanted reference receiver noise from the auto - correlation of either canceller s output .this is discussed in the next section and can result in a mk2 canceller superior to the mk1 canceller for some applications .we now describe a double canceller setup that can be used to suppress the added reference receiver noise in the output astronomy power spectrum .if the main signal is duplicated before cancelling so there is an identical copy , the copy can be passed through a second canceller that uses 
different reference signals so that the noise added will be uncorrelated with the noise added to the first . when the two filtered copies are cross - correlated ,the main antenna receiver noise and astronomy will correlate as if the original signal has simply been auto - correlated , while the added reference noise power will average away with the radiometric factor ( ) .an important point to note about the mk2a canceller is that all of the noise added during cancelling is from the first reference receiver ( see equation [ mk2 residual power ] ) .the second reference is only used in setting the complex weights .so the second canceller can be made by interchanging the references .this setup will be called the mk2b canceller and is shown in figure [ indeprx mk2 ] . as discussed in ,if the inr of the references are the same , the mean residual output power from the mk2b canceller is which averages towards zero as the integration length is increased .similarly , if two references are used to create two independent mk1a cancellers for the two main signal copies , this gives the mk1b canceller shown in figure [ indeprx mk1 ] , and the mean residual output power becomes equations ( [ mk2 indeprx ] ) and ( [ mk1 indeprx ] ) show that while should keep integrating towards zero , has a definite limit due to the rfi signal that remains after cancelling .however , one must be aware that in situations where the reference inr is very small the mk2 canceller does not turn itself off , and there are practical implementation issues that need to be addressed ( essentially , one may need to force the canceller to turn off ) .this is discussed in section [ instabilities in the dual reference algorithms ] .in this section the residual power equations given throughout section [ adaptive cancellers ] are demonstrated . notethat while the plots for the dual - reference cancellers show the residual power as the reference interference - to - noise power ratio approaches arbitrarily close to zero , the algorithms in practice become unstable and need to be turned off .this point will be reiterated where appropriate in the discussion below .for the theory we have set , so that all of the output power displayed in this section is a combination of residual rfi and any reference receiver noise added during cancelling .figure [ residual power v btau 1 ] displays the proportion of residual power in the output signal after adaptive cancelling with mk1b and mk2b cancellers .the plot shows that the added reference receiver noise averages away as the integration length is increased .if single canceller systems were being considered , then since any noise added during cancelling is sent to an auto - correlator , the output power would remain constant ( i.e. , remain at the levels shown on the left hand side of figure [ residual power v btau 1 ] ) .it is clear that the mk1b canceller hits a limit when it reaches the residual rfi , but that the mk2b does not .values as a function of the number of samples used to average the noise down in the canceller output ( ) .the three sets of solid and dashed lines are for values of 0.1 ( top ) , 10 ( middle ) , and 1000 ( bottom ) .solid lines indicate mk1b cancelling , dashed lines mk2b cancelling .the lines flatten when the residual rfi power level is reached.,width=321 ] as decreases the normalised output power of the mk1b canceller levels off at 1 , so there is no cancelling taking place . 
on the other hand the mk2 canceller continues to insert more and more reference receiver noise in an attempt to match the reference rfi to the main signal rfi . even though the mk2b canceller always has the larger total residual power ,it is entirely zero - mean noise and averages out with the radiometric factor .again the reader should note that for low values the mk2 canceller can become unstable and requires an additional mechanism to turn off .figure [ residual power v inr ] shows contours of constant ( normalised ) output power as a function of and the number of samples , .figures [ residual power v inr]a and [ residual power v inr]b represent mk1a and mk2a cancelling respectively , and figures [ residual power v inr]c and [ residual power v inr]d represent mk1b and mk2b cancelling respectively .the amount of residual power in db is indicated by the grey scale and runs from -80 to 20 db .it is clear from figure [ residual power v inr]c that a constant rfi residual remains for all values after mk1b cancelling .the dashed line indicates the approximate line where the added reference receiver noise power has averaged down to expose the non - zero residual rfi power level .it is clear from figures [ residual power v inr]a and [ residual power v inr]c that an output power plateau is reached as becomes small for the mk1 cancellers .the residual power of the plateau is 0 db , and indicates that the canceller has turned off .in contrast , the mk2a canceller ( [ residual power v inr]b ) does not turn off and results in more output power than input rfi power for low levels . however , since the residual is entirely noise it averages down in a mk2b canceller , where two independent filters are used ( [ residual power v inr]d ) .this is highlighted further in the next section .we now demonstrate adaptive cancellation of real rfi impinging on the australia telescope compact array ( atca ) .the rfi is a point - to - point microwave ( mw ) television link transmitted from a tv tower on a nearby mountain at 1503 mhz .the reference antennae were two orthogonal linearly polarised receivers on a small reference horn pointed in the direction of the mw transmitter , as described in . a linearly polarised receiver on a regular atca antenna , pointing at the sky and receiving the microwave link interference through the antenna side - lobes ,was used to collect the main signal .the rfi is polarised , and as long as all three receivers are at least partially polarised in the same sense as the rfi the cancellers will work correctly ( assuming that the polarisation cross - talk between the reference receivers is negligible so the receiver noise is independent ) .the received voltages were filtered in a 4mhz band centred at 1503 mhz , downconverted , and sampled with 4-bit precision .each of the cancellation techniques discussed has been applied to the mw data using matlab ( see the ) .all of the spectra shown were generated using 1024-point ffts .figure [ high inr mk1a spectrum ] shows the unfiltered and mk1a filtered power spectra of the main astronomy voltage series when large reference inrs were available .two synthetic astronomy signals have been added to the astronomy voltage series at 1503.0 mhz and 1503.2 mhz .what we want is to remove the rfi peak ( the 1 - 2 mhz wide peak centred at 1503 mhz ) from the main signal spectrum , leaving the broad , main antenna bandpass , and the astronomy .this is indeed what is seen , and similar results are achieved for all of the techniques discussed in this paper . 
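The averaged spectra shown in this and the following figures can be formed in the usual way from the sampled voltages. The sketch below assumes non-overlapping 1024-sample blocks and a plain block average, with no window function; the actual processing may differ in such details.

import numpy as np

def averaged_power_spectrum(voltages, nfft=1024):
    """Block-averaged power spectrum of a sampled voltage series."""
    v = np.asarray(voltages)
    n_blocks = v.size // nfft
    blocks = v[:n_blocks * nfft].reshape(n_blocks, nfft)
    spectra = np.fft.fft(blocks, axis=1)            # one 1024-point FFT per block
    return np.mean(np.abs(spectra) ** 2, axis=0)    # average over blocks

# e.g. p_before = averaged_power_spectrum(main_voltages)
#      p_after  = averaged_power_spectrum(cancelled_voltages)

Plotting the spectrum after cancelling over the spectrum before cancelling gives before-and-after comparisons of the kind shown in the figures.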
since all of the techniques behave excellently when the reference inr is large , and any contributed reference receiver noise or residual rfi is much smaller than the main signals receiver noise level , it is difficult to compare them . before and after mk1 cancelling with .two simulated cosmic signals were added at 1503.0 and 1503.2 mhz.,width=321 ] figure [ reference spectra ] displays the power spectra of two reference signals that have had gaussian random noise injected into their sampled voltage series , to set at the centre of the mw band and zero at the edges .the reason for adding the fake receiver noise was to lower the inr and accentuate any residual rfi , as detailed in ( [ mk1 residuals a ] ) , ( [ mk1 residuals b ] ) , and ( [ mk2 residual power ] ) .plots of the power spectra of the main signal ( the one containing the astronomy ) , before and after the different cancellation techniques , are displayed in figures [ mk1 spectra ] through [ post - corr spectra ] .figure [ mk1 spectra ] shows unfiltered and mk1 filtered power spectra of the main astronomy voltage series .the two synthetic astronomy peaks have not been affected to any measurable level .the amount of residual power after mk1a cancelling is as expected from ( [ mk1 residual power ] ) .about twice as much power has been cancelled at the centre of the rfi peak by the mk1b canceller ( the added receiver noise in this case has a zero - mean cross - correlation ) , which is expected since .away from the centre of the peak , becomes smaller and the proportion of residual rfi increases ( which is why the residual rfi peak is flatter than the initial peak ) .no matter how long the integration is run this residual power will not decrease any further .that is , the canceller is operating in the horizontal region shown in figure [ residual power v btau 1 ] .before and after spectra for the mk2 cancellers are shown in figure [ mk2 spectra ] .as for the mk1 cancellers , the two synthetic astronomy signals have been left unaffected .it is clear from figure [ mk2 spectra]a that when the mk2a output residual power is greater than the input rfi power ( except when goes to zero , where the canceller was turned off , see section [ instabilities in the dual reference algorithms ] ) .this is clearly unsuitable and makes the situation worse .in contrast , the mk2b canceller ( figure [ mk2 spectra]b ) has removed the rfi peak down to the primary antenna s receiver noise level , indicating that the mk2a residual power was completely added noise and not residual rfi .the removal of the rfi was extremely successful , with only a slight increase in zero - mean noise at the primary receiver noise floor , as seen in figure [ mk2 spectra noise ] .b , with a mk2b spectrum overlaid .very little noise is added when , demonstrating the increase for the case.,width=321 ] each spectral channel in the examples in this section was averaged for 15872 samples .this means that for the mk1b and mk2b cancellers the contributed reference receiver noise power will have averaged down by about a factor of , or about 20 db . 
since the mk2b canceller output has essentially the same noise spectrum as the mk2a canceller output , but averaged down , there has been about a 20 db reduction in unwanted power after mk2b cancelling at the centre of the rfi peak ( see figure [ mk2 spectra ] )this zero - mean noise would have continued to average down had we continued to integrate , and more attenuation would have been achieved .the mk1b canceller has only achieved around 6 db of attenuation , and this will not increase with a longer integration .one should keep in mind , however , that we have inserted a substantial amount of noise into the reference voltages , which has also greatly lowered the achieved attenuation .an important final point to note is that , with the exception of the edges of the rfi peak ( where there are features associated with turning the filter off ) , figure [ mk2 spectra]a shows that the rfi is spread with essentially constant power over the rfi contaminated channels .this is because , which we recall is given by , is approximately proportional to , so that the spectral shape of the rfi cancels out of the residual power given in ( [ mk2 residual power ] ) .from an implementation point of view it is important to realise that in astronomy we are not usually seeking fast time - variable information such as modulation . in most astronomical applicationsthe aim is to measure signal statistics since they are related to quantities such as cosmic flux density and visibilities as measured by arrays .these applications are generally either finding the auto - correlation of signals from a single antenna ( to measure the power spectrum of the astronomy signal ) , or the cross - correlation of signals from more than one antennae ( to measure the spatial coherence of the astronomy signal ) .see and various chapters of for an overview . in this paperwe have concentrated on the power spectra given by auto - correlations , however all of the techniques discussed can be generalised to work on cross - correlations .the upshot of only requiring signal statistics is that if the canceller weights are not changing appreciably over the 100 millisecond or so time interval that the statistics are measured over , then the algorithms can be applied to the statistics rather than each voltage series .this means that they are applied at a rate of hz to khz rather that mhz to ghz .it also means that if the new astronomical cross - correlators that are coming online for new and existing facilities have a few extra inputs for reference antennae , then no new filters need to be added to the signal paths , and the cancellation can be performed after the observation as part of post - processing . 
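As a sketch of how this looks in practice, the fragment below applies a two-reference correction to accumulated correlations. The closure-style expression used here is an assumption consistent with the description in the next section, where the exact relations are given; the only requirement is that the RFI coupling stays essentially constant over the accumulation interval.

import numpy as np

def accumulate(x, y):
    """Accumulated cross-correlation <X Y*> per channel over one interval."""
    return np.mean(x * np.conj(y), axis=0)

def post_correlation_clean(x_main, x_ref1, x_ref2):
    """Inputs: complex spectra of shape (n_blocks, n_channels).
    Returns the main-antenna power spectrum with the correlated RFI term removed."""
    p_main = accumulate(x_main, x_main).real    # <M M*>
    c_m_r1 = accumulate(x_main, x_ref1)         # <M R1*>
    c_r2_m = accumulate(x_ref2, x_main)         # <R2 M*>
    c_r2_r1 = accumulate(x_ref2, x_ref1)        # <R2 R1*>
    correction = (c_m_r1 * c_r2_m / c_r2_r1).real
    return p_main - correction

Because only the accumulated correlations are needed, the correction runs at the correlator dump rate rather than the sample rate, which is the practical attraction noted above. Channels without correlated RFI leave the denominator noise-dominated, so a working implementation also needs the stabilisation discussed later.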
for a comparison of voltage cancellation and statisticscancellation for rfi signals that require filters with weights that are changing appreciably during an integration , see .the standard statistics canceller used in radio astronomy is a post - correlation version of the mk2b canceller discussed in section [ dual reference signal adaptive cancellers ] .if we use to denote the correlation between signals from antennae and , the quantity that is subtracted from the auto - correlation of antenna is determined from amplitude and phase closure relations : since the reference signal does not contain any information about the astronomy signal , it will not be present in any of the quantities in ( [ correction spectrum mk2 ] ) .the denominator removes the reference signal rfi s phase and amplitude information from the numerator , leaving information about the rfi in the signal from antenna , and zero - mean noise , since the expectation operators are not infinite in extent .when is subtracted from main signal power , the mean residual power is ( see ) as in ( [ mk2 indeprx ] ) . is shown in figure [ post - corr spectra ] .as long as the gain and geometric delay stay essentially constant over the time average , the `` pre '' and `` post '' correlation techniques are very similar ; . before and after post - correlation cancelling .two simulated cosmic signals were added at 1503.0 and 1503.2 mhz .there are slight artifacts at the edge of the peak where and the canceller is turning off , as described in section [ instabilities in the dual reference algorithms].,width=321 ] while in the last few sections an extreme case has be demonstrated ( that of a low reference inr ) , it does highlight major differences between the different cancellers . the main differences being that infinite rfi removal is theoretically possible for cancellers that use two reference signals , but not for single reference cancellers .it is also clear that when the weights are slowly varying , the post - correlation canceller is equivalent to the dual - reference mk2b canceller .another way in which the mk2b and post - correlation canceller techniques can differ is in their instabilities at low inr levels .equations ( [ mk2 pre corr weights ] ) and ( [ correction spectrum mk2 ] ) have denominators that become zero - mean noise when there is no correlated rfi signal , which can lead to numerical errors .the coefficients of the single - reference techniques go to zero when the rfi becomes weak and the cancellers automatically turn off .these issues are discussed next .equations ( [ mk2 pre corr weights ] ) and ( [ correction spectrum mk2 ] ) show that the dual - reference cancellers can have stability problems when . 
in frequency channels where the correlated interference is zero or very small ,( [ mk2 pre corr weights ] ) and ( [ correction spectrum mk2 ] ) are noise dominated and can result in a division by zero ( or very close to zero ) .this can not occur in the mk1 cancellers since ( [ mk1 pre corr weights b ] ) goes to zero as and they turn off .a modified post - correlation canceller has been suggested by in which an extra term is added to the denominator of ( [ correction spectrum mk2 ] ) : where , which is an estimate of the noise power in the reference cross - correlation , and the prime indicates that has been approximated .the extra term stops the zero - mean fluctuations in from going too close to zero , while for large equation ( [ modified spectrum mk2 ] ) reduces to ( [ correction spectrum mk2 ] ) .since introduces a small bias in a similar way to the mk1 cancellers , a relation equivalent to ( [ mk1 residual power ] ) , but with a much smaller bias , can be derived : this will have both a noise and a rfi component , as in ( [ mk1 residual power ] ) , however , since the canceller is now biased like a mk1a canceller the added receiver noise term will not average away . reduces as multiplied by the number of samples in the time average .there is a similar problem for the pre - correlation mk2 adaptive cancellers . in the lag domain ,the division in ( [ mk2 pre corr weights ] ) becomes a multiplication by the inverse of a matrix with columns containing offset copies of the - cross - correlation function .divisions by zero in the frequency domain due to interference - free frequency channels in are manifest in the reference lag matrix as singular values .one method of dealing with this is to use singular value decomposition to decompose the matrix into two orthonormal triangular matrices and one diagonal matrix ( see 2.9 of ) .singular ( or near - singular ) values can be selected when the relevant diagonal matrix elements are less than a chosen threshold , such as .the singular parts of the matrix contain no information about the correlated signal and are removed from the decomposition matrices .the inverse matrix can then be reconstructed from the remaining parts of the three decomposition matrices , and it will not function in the rfi - free parts of the spectrum .interference cancelling using a single reference antenna can give excellent results when the reference signal interference - to - noise ratio is large , and there is more gain towards the interfering signal for the reference antenna than for the astronomy antennae .however , receiver noise in the reference signal means that a fraction of the interference will always remain after cancelling . a second reference signal can be used to remove the noise bias and give infinite interference attenuation , but a larger amount of reference receiver noise is added during cancelling . for pre - correlation systems , a dual canceller setup can be used to average the ( zero - mean ) receiver noise away , a process that comes automatically with post - correlation cancellers . 
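the singular - value thresholding step described above can be sketched as follows ; the synthetic matrix stands in for the reference lag matrix and the threshold is an assumed fraction of the largest singular value .

```python
import numpy as np

# synthetic stand-in for a nearly singular reference lag matrix; only the
# thresholding logic is the point here.
rng = np.random.default_rng(2)
n = 32
A = rng.standard_normal((n, 4)) @ rng.standard_normal((4, n))  # rank-4 "rfi" part
A += 1e-8 * rng.standard_normal((n, n))                        # tiny noise floor

U, s, Vt = np.linalg.svd(A)
threshold = 1e-6 * s[0]            # assumed fraction of the largest singular value
keep = s > threshold

# rebuild the inverse from the non-singular part only; the discarded
# directions carry no information about the correlated signal, so the
# resulting weights do nothing in rfi-free parts of the spectrum.
A_pinv = (Vt[keep].T / s[keep]) @ U[:, keep].T

print(f"kept {keep.sum()} of {n} singular values")
```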
a breakdown of the main properties for the different mitigation techniques is given in table [ summary table ] .it is important to note that even though the single - reference cancellers leave residual rfi , the residual may be extremely small and well below the rms noise .this occurs when the interference - to - noise ratios of the reference signals are very large , and the use of two references ( in pre - correlation systems ) might just add complexity to the system with little or no benefit .however , if maximum sensitivity is required , one should be aware that they will eventually reach a non - zero residual signal .using two reference signals to remove the reference receiver noise bias removes the inherent stability of the algorithms in situations where some or all of the frequency channels are interference - free .although there are applications in which the passband will always be entirely filled with rfi ( such as observations in the gps l1 and l2 bands ) , many interfering signals will only take up a part of the band . in these casesthe algorithms need a mechanism to turn themselves off in the vacant frequency channels .we are grateful to dr .mike j. kesteven and professor lawrence e. cram for discussions and comments on this paper .the australia telescope compact array is part of the australia telescope which is funded by the commonwealth of australia for operation as a national facility managed by csiro .barnbaum , c. , & bradley r. f. 1998 , aj , 116 , 2598 bell , j. f. , et al .2001 , pasa , 18 , 105 bower , g. c. 2001 , ata memo 31 .briggs , f. h. , bell , j. f. , & kesteven m. j. 2000 , aj , 120 , 3351 mitchell , d. a. 2004 , ph.d .thesis , _ interference mitigation in radio astronomy , _ the university of sydney , ` http://setis.library.usyd.edu.au ` .mitchell , d. a. , & bower , g. c. 2001 , ata memo 36 mitchell , d. a. , & robertson , j. g. 2005 , rs , special section on interference mitigation , in press press , w. h. , flannery , b. p. , teukolsky , s. a. , & vetterling , w. t. 1986 , numerical recipes .the art of scientific computing , ( cambridge university press ) the mathworks , inc .1998 , matlab user s guide , ( the mathworks , inc . )taylor , g. b. , perley , r. a. , & carilli , c. l. , ed . 1999 ,pasp , 180 thompson , a. r. , moran , j. m. , & swenson , g. w. 1986 , interferometry and synthesis in radio astronomy , ( new york : wiley - interscience ) widrow , b. , & stearns , s. d. 1985 , adaptive signal processing , englewood cliffs , nj : prentice hall )
in radio astronomy , reference signals from auxiliary antennae , receiving only the radio frequency interference ( rfi ) , can be modified to model the rfi environment at the astronomy receivers . the rfi can then be cancelled from the astronomy signal paths . however , astronomers typically only require signal statistics . if the rfi statistics are changing slowly , the cancellation can be applied to the signal correlations at a much lower rate than required for standard adaptive filters . in this paper we describe five canceller setups : pre- and post - correlation cancellers that use one or two reference signals in different ways . the theoretical residual rfi and added noise levels are examined and demonstrated using microwave television rfi at the australia telescope compact array . the rfi is attenuated to below the system noise , a reduction of at least 20 db . while dual - reference cancellers add more reference noise than single - reference cancellers , this noise is zero - mean and only adds to the system noise , decreasing the sensitivity . the residual rfi that remains in the output of single - reference cancellers ( but not dual - reference cancellers ) sets a non - zero noise floor that does not act like random system noise and may limit the achievable sensitivity . thus dual - reference cancellers often result in superior cancellation . dual - reference pre - correlation cancellers require a double - canceller setup to be useful and to give results equivalent to those of dual - reference post - correlation cancellers .
recently there have been renewed interests in large - scale interaction in several research disciplines , with its uses in wireless networks , big data , cyber - physical systems , financial markets , intelligent transportation systems , smart grid , crowd safety , social cloud networks and smarter cities . in mathematical physics ,most of models are analyzed in the asymptotic regime when the size of the system grows without bounds .as an example , the mckean - vlasov model for interacting particles is analyzed when the number of particles tends to infinity .such an approach is referred to as mean field approach .the seminal works of sznitman in the 1980s and the more recent work of kotolenez & kurtz show that the asymptotic system provides a good approximation of the finite system in the following sense : for any tolerance level there exists a population size such that for any the error gap between the solution of the infinite system and the system with size is at most moreover , the work in shows that the number is in order of for a class of smooth functions , where denotes the dimension of the space .thus , for this current theory does not give an approximation that is meaningful . in queueing theory ,the number of customers is usually assumed to be large or follows a certain distribution with unbounded support ( e.g. , exponential , poisson etc ) and the buffer size ( queue ) can be infinite . however , many applications of interests such as airport boarding queues , supermarket queues , restaurant queue , iphone / ipad waiting queue involve a finite number of customers / travelers .approximation by a continuum of decision - makers may not reflect the reality .for example the number of clients in the supermarket queue can not exceed the size of available capacity of markets and there is a certain distance between the clients to be respected .in other words , human behaviors are not necessarily like standard fluid dynamics . in game theory , the rapidly emerging field of mean - field games is addressing behavioral and algorithmic issues for mathematical models with continuum of players .we refer the reader to for a survey on ( asymptotic ) mean field games .the classical works mentioned above provide rich mathematical foundations and equilibrium concepts in the asymptotic regime , but relatively little in the way of computational and representational insights that would allow for few number of players .most of the mean - field game models consider a continuum of players , which seems not realistic in terms of most applications of interests .below we give some limitations of the _ asymptotic _ mean - field approaches in engineering and in economics : * in wireless networks , the number of interacting nodes at the same slot in the same range is finite and currently the capacity / bandwidth of the system is limited .therefore , a mean - field model for infinite capacity and infinite number of nodes is not plausible .the result of infinite system may not capture the real system with only few number of nodes . * in most of the current markets ,the number of traders is _ finite_. 
in that context it is well known that the bayesian - cournot game may not have an ex - post equilibrium whenever the number of traders is finite .however , the infinite game with continuum of traders has a pure ( static mean - field ) equilibrium .if our prediction is the `` mean - field equilibrium '' , in what sense the ( static ) mean - field ex - post equilibrium captures the finite system ?our primarily goal in this article is to provide a simple and easy to check condition such that mean - field theory can be used for finite - scale which we call _ non - asymptotic mean - field _ approach .we investigate the nonasymptotic mean - field under two basic conditions .the first condition is indistinguishability ( or interchangeability ) of the payoff functions .the indistinguishability property is easy to verify .the indistinguishability assumption is implicitly used in the classical ( static ) mean - field analysis including the seminal works of aumann 1964 , selten 1970 , schmeidler 1973 .this assumption is also implicitly used in the dynamic version of mean - field games by jovanovic & rosenthal 1988 , benamou & brenier 2000 and lasry & lions 2007 .the second condition is the ( regularity ) smoothness of the payoff functions .the regularity property is relatively easy to check .based on these two conditions , we present a simple approximation framework for finite horizon mean - field systems .the framework can be easily extended to infinite horizon case .the non - asymptotic mean field approach is based on a simple observation that the many effects of different actions cancel out when the payoff is indistinguishable .nevertheless , it can lead to a significant simplification of mathematical mean - field models in finite regime .the approach presented here is non - asymptotic and is unrelated to the mean - field convergence that originates from law of large numbers ( and its generalization to de finetti - hewitt - savage functional mean - field convergence ) in large populations .the non - asymptotic mean field approach holds even when there are only few players in a game , or few nodes in a network .the idea presented here is inspired from the works in on the so - called averaging principle .these previous works are limited to static and one - shot games .here we use that idea not only for static games but also for dynamic mean - field games .one of the motivations of the asymptotic mean field game approach is that it may reduce the complexity analysis of large systems .the present work goes beyond that .we believe that if the complexity of the infinite system can be reduced easily then , the finite system can also be studied using a non - asymptotic mean - field approach . in order to apply the mean - field approach to a system with arbitrary number of players, we shall exploit more the structure of objective function and the main assumption of the model which is the indistinguishability property , i.e. , the performance index is unchanged if one permutes the label of the players .this is what we will do in this work .the aggregative structure of the problem and the indistinguishability property of the players are used to derive an error bound for any number of players .interestingly , our result holds not only for large number of players but also for few number of players . for example , for players , there is no systematic way to apply the theory developed in the previous works but the non - asymptotic mean - field result presented here could be applied . 
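a tiny numerical illustration of this cancellation effect is given below : for a permutation - invariant payoff , perturbing the actions symmetrically around their common mean changes the payoff only at second order in the asymmetry . the geometric - mean payoff and the mean action used here are illustrative choices .

```python
import numpy as np

# the payoff used here -- the geometric mean of the two actions -- is just a
# convenient indistinguishable (permutation-invariant) payoff; the common
# mean action m is an arbitrary illustrative value.
m = 0.5
for eps in (0.1, 0.01, 0.001):
    a1, a2 = m + eps, m - eps              # asymmetric profile with mean m
    gap = abs(np.sqrt(a1 * a2) - m)        # payoff gap w.r.t. the symmetric profile
    print(f"eps={eps:7.3f}   gap={gap:.2e}   gap/eps^2={gap / eps ** 2:.3f}")
# the ratio gap/eps^2 stays bounded (about 1/(2m) here): the first-order
# terms cancel and the error is second order in the asymmetry, as the
# indistinguishability argument predicts.
```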
the non - asymptotic mean - field result does not impose additional assumptions on the payoff function .we show that the indistinguishability property provides an accurate error bound for any system size .we show that the total equilibrium payoff with heterogeneous parameters can be approximated by the symmetric payoff where the symmetry is the respect to the mean of those parameters .these parameters can be a real number , vector , matrix or a infinite functional .the proof of the approximation error is essentially based on a taylor expansion which cancels out the first order terms due to indistinguishability property .we provide various examples where non - asymptotic mean - field interaction is required and the indistinguishability property could be exploited more efficiently .we present of queueing system with only few servers where closed - form expression of the waiting time is not available and the use of the present framework gives appropriate bounds .as second main example focuses on dynamic auctions with asymmetric bidders that can be self - interested , malicious or spiteful . in models of first - price auctions , when bidders are ex ante heterogeneous , deriving explicit equilibrium bid functions is an open issue . due to the boundary - value problem nature of the equilibrium ,numerical methods remain challenging issue .recent theoretical research concerning asymmetric auctions have determined some qualitative properties these bid functions must satisfy when certain conditions are met . herewe propose an accurate approximation based on non - asymptotic mean field game approach and examine the relative expected payoffs of bidders and the seller revenue ( which is indistinguishable ) to decide whether the approximate solutions are consistent with theory .the remainder of the paper is structured as follows . in section [ staticscale ]we present a mean field system with arbitrary number of interacting entities and propose a nonasymptotic static mean field framework . in section [ secdynamicscale ]we extend our basic results in a dynamic setup . in section [ basicapplication ]we present applications of nonasymptotic mean - field approach to collaborative effort game , approximation of queueing delay performance and computation of error bound of equilibrium bids in dynamic auction with asymmetric bidders .we summarize some of the notations in table [ tablenotationjournal ] . ll symbol & meaning + & set of potential minor players + & cardinality of + & action space + & action of player + & global payoff function of the major player + & indicator function .+ & + & strategy of player + & long - term payoff of player with horizon ] thus , the global error is bounded by since the above inequality holds for a generic symmetric vector which is the average action , it is in particular true when evaluated at a symmetric nash equilibrium actions ( if it exists ) of the minor players . in that case , we recursively use the value iteration relation for each minor player . ] and iterating times gives players . each player can choose an action in the closed interval . ] in order to preserve the differentiability at the origin we consider the payoff as then , the following statements hold : *we observe that the payoff functions are indistinguishable .the payoff functions satisfy which remains the same by interchanging the indexes . * the pure strategies and are equilibria .moreover , is a strong - equilibrium ( resilience to any deviation of any size ) . 
indeed , + if all the players do the maximum effort , i.e. , then every player receives the maximum payoff and no player has incentive to deviate .this is clearly a pure nash equilibrium .suppose now that a subset of players ( a coalition ) deviates and jointly chooses an action that is different than , then the payoff of all the players is lower than in particular , the members of the coalition gets a lower payoff than .since this analysis holds for any coalition of any size , the action profile is a strong nash equilibrium .* we define the analogue of the price of anarchy ( poa ) for payoff - maximization problem as the ratio between the worse equilibrium payoff and the social optimum .if one of the players does ( no effort ) then the payoff of every player will be zero and no player can improve its payoff by unilateral deviation .this means that is a pure nash equilibrium .note that the equilibrium payoff at this equilibrium is the lowest possible payoff that a player can receive , i.e. , is the worse equilibrium in terms of payoffs . hence , and the ratio between the global optimum and the equilibrium payoff is which is infinite .+ clearly , the price of stability ( the ratio between the best equilibrium payoff and the social optimum ) is .* we say that a pure symmetric strategy is an evolutionarily stable strategy if it is resilient by small perturbation as follows : for every there exists an such that + + + for all + we now show that the pure strategy ( i.e. , the action profile ) is not an evolutionarily stable strategy .indeed , if the left hand side of the above inequality is which is not strictly greater than the non - asymptotic mean - field approach allows us to link the geometric mean with the arithmetic mean action .we remark that the geometric mean as a payoff , satisfies the indistinguishability property and it is smooth in the positive orthant . here is the identity function because when the all the actions are identical , the geometric mean coincides with the arithmetic mean . in order to illustrate the error bound in the non - asymptotic mean field let consider two decision - makers .we expand the payoff for an asymmetric input level of size let here , thus , in particular , if and then and one has , now , if is near zero , i.e. , and with then and thus , the above calculus illustrates that if the system is indistinguishable we can work directly with the mean of the mean - field with error where captures the asymmetry level of the system .next we illustrate the usefulness of our approximation of waiting time in a queueing system with multiple servers .consider servers and a system with arrival and service rate of for server assume that the customers are indistinguishable in terms of performance index .each customer will be assigned to one of the non - busy servers with a certain probability , ( if any ) . if not , the customer joins a queue and will be in waiting list .our goal is to investigate the delay , i.e. 
, the propagation delay and the expected waiting time ( wt ) in the queue .let the expected propagation delay to be using , we determine the waiting time in the case of similar service rates .let the transition rate ( continuous time ) is given by ,\\ r_{k , k+1}=\lambda , \\k\geq 1 , \r_{k , k-1}=(k-1 ) \bar{m},\\ \mbox{otherwise } \ r_{kj}=0 .\end{array } \right.\ ] ] the steady states are easily determined by setting the probability that all servers are busy is hence , the waiting response time for the symmetric setup the computation of in the asymmetric setting is highly complex and is still not well understood .the question is to know if non - asymptotic mean field approach can provide a useful approximation of it .to do so , we check the main assumptions and clearly wt is regular ( in for small ) and satisfies the indistinguishability property .then , using nonasymptotic mean - field approach , + in figure [ figmm1simulation ] , we observe the following : the theory of auctions as games of incomplete information originated in 1961 in the work of vickrey .a seller has an object to sell .she adopted a first - price auction rule .consider a first - price auction with asymmetric bidders .there are bidders for the object .each bidder independently submit a single bid without seeing the others bids .if there is only one bidder with the highest bid , the object is sold to the bidder with biggest bid .the winner pays her bid , that is , the price is the highest ( or first price bid ) .if there is more than one bidder , the object goes to each of these bidders with equal probability .the bidder has a valuation of the object .the random variable has a distribution function with support ] using result [ thmmain ] , one gets * good approximate of the asymmetric equilibrium strategies , * equilibrium payoff with deviation order of note that the optional second price auction is currently used in doubleclick ad exchange .examples of ad exchanges are rightmedia , adbrite , openx , and doubleclick .the idea is described as follows : * user visits the webpage of publisher that has , say , a single slot for ads . *publisher contacts the exchange e with where is the minimum price p(w ) is willing to take for the slot in and is the information about user that shares with * the exchange e contacts ad networks with , where is information about provided by , and is the information about provided by may be potentially different from . *each ad network returns on behalf of its customers which are the advertisers ; is its bid , that is , the maximum it is willing to pay for the slot in page and is the ad it wishes to be shown .each ad network may have multiple advertisers .the ad networks may also choose not to return a bid .* exchange determines a winner for the ad slot among all and its price satisfying via an auction ( first or second price ) . *exchange returns winning ad to publisher p(w ) and price to ad network * the publisher serves webpage with ad to user ( the impression of ad ) .note that from the click of the user to the impression of the ad there are many intermediary interactive processes .auction is one them and an important one because it determines the winner ad network .while it is reasonable to consider large population of users over internet , the number of concurrent ad networks remains finite and there a room for non - asymptotic mean - field analysis for the revenue . 
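as a concrete point of reference for the first - price auction setting above , the sketch below numerically verifies the classical symmetric benchmark : with selfish bidders and values uniform on [ 0 , 1 ] , the linear bid ( n-1)/n times the value is a best response to itself . the number of bidders , the sample size and the test value are illustrative assumptions .

```python
import numpy as np

# numbers of bidders, sample sizes and the value v are illustrative choices.
rng = np.random.default_rng(3)
n = 4                                                 # bidders
others = rng.uniform(0.0, 1.0, size=(100_000, n - 1))
best_other_bid = ((n - 1) / n * others).max(axis=1)   # rivals use b(u) = (n-1)/n * u

def expected_payoff(v, b):
    """expected first-price payoff of a bidder with value v bidding b."""
    win = best_other_bid < b
    return np.mean(np.where(win, v - b, 0.0))

v = 0.8
grid = np.linspace(0.0, v, 201)
b_star = grid[int(np.argmax([expected_payoff(v, b) for b in grid]))]
print(f"numerical best response: {b_star:.3f}   "
      f"theory (n-1)/n * v = {(n - 1) / n * v:.3f}")
```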
as a user may click several times over webpages, the dynamic auction framework seems more realistic .we examine the dynamic auction in subsection [ subsecdynamic ] .a player might be losing the auction of a long - term project .yet she continues to participate in the auction because she wants to minimize the negative payoff on losing by making her competitor , who would win the auction , pay a high price for the win .this negative dependence of payoff on others surplus is referred to as spiteful behavior .below we show how our nonasymptotic mean - field framework can be applied to that scenario . a spiteful player maximizes the weighted difference of her own payoff and his competitors payoffs for all the payoff of a spiteful player is obviously , setting to zero yields a selfishness ( whose payoff equals his exact profit ) whereas defines a completely malicious player ( jammer ) whose only goal is to minimize the profit of other players .note that for altruistic player we would be considering a payoff in the form the payoff of a spiteful player is \\ & & + \alpha_j \mathbb{e}\left[\max_{j'\neq j}b_{j'}| \\max_{j'\neq j}b_{j ' } >b_{j}(v_j ) \right]\end{aligned}\ ] ] as mentioned above the main difficulty is that the private values distribution are asymmetric .denote that by the equilibrium bid strategy .even when bidders are selfish ( ) , the above analysis shows that the explicit expression of is not a trivial task .however , for symmetric type distribution , and symmetric coefficient the payoff function reduces to using the fact that the derivative of with the respect to is given by the first order optimality condition yields to in particular for uniform distribution over ] with points inside , we arrive at a nonlinear least - squares algorithm for selecting and by solving which yields the points on a grid will be chosen uniformly spaced , i.e. standard newton - gauss - seidel methods provide a very fast convergence rate to a solution if the initial guess if appropriately chosen .however the choice of initial data and guess need to be conducted .we propose a numerical scheme for optimal bidding strategies .first we solve the initial - value problem that starts at near but not equal to so that the denominator do not vanish .ode starts at and moves forward .we fix the starting function to where is the limit in of the derivative , , are positive constant . and , ] the game is played as follows . at opportunity every player , * realizes his current value ] denotes his bidding strategy at auction ; * updates his information set based on the results obtained in auction specifically , he forms a set of beliefs about the distribution of the bidding profile of the players distributed according to let be the value function of the bidder , i.e. , it is the supremum , over all possible bidding strategies , of the expectation of the payoff starting from an initial budget when the other bidder strategy profile is based on the classical bellman optimality criterion , we immediately get the following result . 
the proof is therefore omitted .[ refdynamic ] given the optimal strategy of a player satisfies ,\end{aligned}\ ] ] where is the highest bid of the other players than let } ] let then the optimal bidding strategy is the value iteration is given by \end{aligned}\ ] ] for the symmetric setup we drop the index given the optimal strategy of a generic player satisfies ,\end{aligned}\ ] ] where is the highest bid of the other players than let } \left[v - x_1+v_{t}(s_{t}-c(s_{t},x_1))\right ] .$ ] this implies that the optimal bidding strategy is the value iteration is given by \end{aligned}\ ] ] the proof follows as a corollary of result [ refdynamic ] . to complete the value iteration system we choose a terminal payoff [ reyu ] let where and }\ | f_{j , t}(v)-\bar{m}_t(v ) |,\ ] ] then , the long - term revenue of the seller is in order of for any the proof of the first statement is a direct extension of result [ thmmain ] to the time space as a consequence of the result [ reyu ] , if depends on and satisfies for some then , the error gap between the finite regime and the infinite regime in equilibrium is in order of which can be very small even for small but large if then the error gap at time reduces to and hence the global error in is at most note that is a significant improvement of the mean - field approximation since the use of mean - field convergence la _ de finetti _ gives a convergence order of the use of the indistinguishability property of the payoff function helps us to provide a more precise error . using a scaling factor to the starting state ( budget ) and horizon, the value has a certain limit when goes to let let then , the value is solution of the following differential game dt\ ] ] subject to and let be the instantaneous payoff from the above formulation . introduce the hamiltonian ,\ ] ] the value satisfies the hamilton - jacobi - bellman equation we consider a particular state dynamics given by drift , independent individual brownian motion and a common brownian motion .the instantaneous cost is define the hamiltonian as and let be an equilibrium cost value .following , the value satisfies where is the divergence operator . considering minor players where the evolution of the measure is now replaced by the evolution of beliefs in a bayesian mean field game , the long - term cost function of a player is which in order of where is the average measure ( belief ) .thus , the non - asymptotic mean field game approach allows us to understand the behavior of the equilibrium cost when the players have different beliefs ( incomplete information game ) which are near the average belief measure . in this sectionwe assume that the payoff functions are strictly positive and satisfy a near - indistinguishability property defined as follows : the game with payoffs is near - indistinguishable if it is for a certain small i.e. there exists an indistinguishable function such that the game with payoffs is scalable near - indistinguishable if it is indistinguishable for a certain small i.e. 
there exists an indistinguishable function such that the following result follows from the definition of near - indistinguishability .result [ thmmain2 ] extends to near - indistinguishable case with an approximation given by for the first type ( non - scalable ) and where comes from the scalable near - indistinguishability error ( second type , scalable notion ) and is the heterogeneity gap between action profile and one of the main motivations to study mean field games is the possibility to reduce the high complexity of interactive dynamical systems into a low - complexity and easier to solve ones .however , the infinite mean field game system suggests a continuum of players may not be realistic in many cases of interests .then , the question addressed in this paper is to know whether the mean field game ideas can be used in the finite regime .we show that the answer is positive for important classes of payoff functions .an important statement is that if the asymptotic mean - field system can reduce the computational complexity then , the same analysis can be conducted in the finite regime .moreover , the non - asymptotic mean field game model developed here does not require additional assumptions than the classical ones ( namely a0 and a1 ) used in asymptotic mean field game theory .if the indistinguishability assumption fails but still the asymptotic mean field system is easily solvable then , one can classify the finite system too by class / and type and hence reduce into a game with less number of classes ( than players ) and in each class the indistinguishability property holds .we refer to such games as indistinguishability per class games .interestingly our approximation results extends to near - indistinguishable games as well as to indistinguishable per class games .we have presented a mean field framework where the indistinguishability property can be exploited to cover not only the asymptotic regime but also the non - asymptotic regime . in other words ,our approximation is suitable not only for large systems but also for a small system with few players .the framework can be used to approximate unknown functions in heterogeneous systems , in optimization theory as well as in game theory .this work suggests several paths for future research .first , the approach introduced here can be used in several applications , starting from other queueing and auctions formats , in particular to private information models where strategies are functions of types .second , more progress needs to be done by considering a less restrictive action and belief spaces that are far from the mean of the mean field . the smoothness condition on the objective function may not be satisfied in practicefinally , we would like to understand how large the deviation of the non - asymptotic result is compared to a symmetric vector ( non - alignment level ) .99 h. tembine , distributed strategic learning for wireless engineers , crc press/ taylor & francis , 496 pages , may 2012 , isbn : 9781439876442 .jovanovic , boyan and rosenthal , robert w. anonymous sequential games , journal of mathematical economics , elsevier , vol .17(1 ) , pp .77 - 87 , february 1988 .j. d. benamou , y. brenier , a computational fluid mechanics solution to the monge - kantorovich mass transfer problem .375 - 393 , 2000 .lasry and p.l . lions .mean field games .japan . j. math ., 2:229 - 260 , 2007 .lasry , p .-lions , and o. gueant .mean field games and applications .paris - princeton lectures on mathematical finance , 2010 l. 
kleinrock .queueing systems , volume i and ii . john wiley and sons , 1975 .e. s. maskin and j. g. riley .asymmetric auctions .econom . stud .67 , 413 - 438 .lebrun b : first - price auctions in the asymmetric n bidder case .international economic review 40:125 - 142 , ( 1999 ) lebrun b : uniqueness of the equilibrium in first - price auctions .games and economic behavior 55:131 - 151 , 2006 .y. mansour , s. muthukrishnan and n. nisan .doubleclick ad exchange auction , 2012 .s. muthukrishnan .ad exchanges : research issues . in proc . wine .lncs , new york , 1 - 12 m. kac .foundations of kinetic theory .third berkeley symp . on math .statist . and prob ., 3:171 - 197 , 1956 .d. a. dawson .critical dynamics and fluctuations for a mean - field model of cooperative behavior .journal of statistical physics , 31:29 - 85 , 1983 .aumann r. : markets with a continuum of traders .econometrica , 32 , 1964 .villani c. : optimal transport : old and new , springer , berlin , 2009 .selten , r. preispolitik der mehrprodktenunternehmung in der statischen theorie , springer - verlag .1970 p. kotolenez and t. kurtz .macroscopic limits for stochastic partial differential equations of mckean - vlasov type .probability theory and related fields , 146(1):189 - 222 , 2010 a. s. sznitman. topics in propagation of chaos . in p.l .hennequin , editor , springer verlag lecture notes in mathematics 1464 , ecole dete de probabilites de saint - flour xi ( 1989 ) , pages 165 - 251 , 1991 .g. fibich , a. gavious , and e. solan , averaging principle for second - order approximation of heterogeneous models with homogeneous models , proceedings of the national academy of sciences of the united states of america , doi : 10.1073/pnas.1206867109 , pnas , 2012 .g. fibich and n. gavish .asymmetric first - price auctions : a dynamical systems approach , mathematics of research operations , vol .37 no . 2 219 - 243 , may 2012 .h. tembine , j .- y .le boudec , r. el azouzi , e. altman : mean field asymptotics of markov decision evolutionary games and teams , international conference game theory for networks , istanbul , turkey , 2009 .marshall r. c. , m. j. meurer , j .- f .richard , w. stromquist . numerical analysis of asymmetric first price auctions .games econom . behav .7(2 ) 193 - 220 . 1994 .fibich g. , n. gavish .numerical simulations of asymmetric first - price auctions .games econom . behav .72(2 ) 479 - 495 . 2011 .r. gummadi p. key , a. proutiere , repeated auctions under budget constraints : optimal bidding strategies and equilibria , may 2012 vickrey , w. : counterspeculation , auctions , and competitive sealed tenders .j. finance 16 , 8 - 37.1961 .shapley l. s. : stochastic games .pnas 39 ( 10 ) : 1095 - 1100 , 1953 . j. maynard smith and g. r. price , the logic of animal conflict , nature 246 ( 5427 ) , 15 - 18 , 1973 .* hamidou tembine * ( s06-m10-sm13 ) received his m.s .degree from ecole polytechnique and his ph.d .degree from university of avignon .his current research interests include evolutionary games , mean field stochastic games , distributed strategic learning and applications . in 2014tembine received the outstanding young researcher award from ieee comsoc .he was the recipient of 5 best paper awards and has co - authored two books .more details can be found at tembine.com
mean - field games have been studied under the assumption of a very large number of players . for such large systems , the basic idea consists in approximating the game by a stylized model with a continuum of players . the approach has been shown to be useful in some applications . however , a stylized game model with a continuum of decision - makers is rarely observed in practice , and the approximation proposed in the asymptotic regime is meaningless for networks with only a few entities . in this paper we propose a mean - field framework that is suitable not only for large systems but also for a small world with only a few entities . the applicability of the proposed framework is illustrated through various examples , including dynamic auctions with asymmetric valuation distributions and spiteful bidders . keywords : nonasymptotic , approximation , games with few decision - makers .
the general systems of radiation protection on earth are not appropriate for the study of radiation exposure of aviation crews , since high energy charged particles contribute significantly to the total dose in the human body .these are due to the interaction of the galactic cosmic rays ( gcr ) and solar energetic particles ( sep ) with the atmospheric layers and the cascades of the secondary particles that are produced .as the altitude in - creases , the atmospheric protective layer gets thinner and less dense , resulting in higher cosmic radiation ( cr ) than the radiation on the ground .radiation in the different altitudes of the atmosphere is ionizing ( gcr , sep and trapped radiation inside earth s magnetic field ) and non - ionizing ( uv - radiation ) , with many biological and technological effects .as far as human health and exposure is concerned the effects can be acute ( nausea / vomiting , fatigue , central nervous system disease ) and chronic ( cancer / solid tumors / leukemia , cataract / vision impairment , degenerative cardiac disease ) .ionizing radiation is by far more dangerous and the acute effects after exposure are related to the high intensity spe , while the chronic effects are due to long term exposure to gcr .for example , the mean equivalent dose during a 7-hrs flight is 0.05 msv for a quiet period , while for an extreme spe ( 105 particles / cm2 st sec ) it raises up to 40 msv .it is noted that the average equivalent dose is 1 msv / year for public exposure according to the european directive 2013/59/ euratom .therefore , one of the primary concerns for aircraft flights is the elevated level of radiation that aviators and passengers are exposed .several models are developed in order to study the atmospheric showers such as the crii model , planeto - cosmics and dyastima . at the same time a number of applications are used for the determination of biological and technological effects and the calculation of the radiation exposure such as spenvis - creme , sievert , nairas and avidos . 
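a back - of - the - envelope illustration of the figures just quoted : at the quiet - time rate of 0.05 msv per 7-hour flight , an aircrew member 's annual exposure quickly exceeds the 1 msv / year public reference level . the annual flight time assumed below is an illustrative figure , not a value taken from the text .

```python
# the 700 flight hours per year assumed below is an illustrative figure,
# not a value from the text; the per-flight dose and the public reference
# level are the numbers quoted above.
dose_per_flight_msv = 0.05      # quiet-period dose for a 7-hour flight
flight_hours = 7.0
annual_flight_hours = 700.0     # assumed annual flight time of an aircrew member

annual_dose = dose_per_flight_msv * (annual_flight_hours / flight_hours)
print(f"estimated quiet-time annual dose: {annual_dose:.1f} mSv "
      f"(public reference level: 1 mSv/year)")
```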
a new application , named dyastima - r , which calculates the equivalent dose in different altitudes during quiet and disturbed solar activity periods is being developed and is presented in this work .in order to implement a simulation of the cosmic ray propagation through the atmosphere , there are some physical quantities and processes that must be taken into consideration , such as the spectrum of the primary cr that reach the top of the atmosphere , the structure of the atmosphere , the earth s magnetic field and the physical interactions that take place between the cr particles and the molecules of the atmosphere .these quantities are affected by various parameters , such as the space weather conditions , the current physical characteristics of the earth s atmosphere , the time and the location for which the simulation is performed .dyastima is a standalone application for the simulation of the showers that are produced in the atmosphere of a planet due to the cr .the application makes use of the well known geant4 simulation toolkit .the simulation scenario is described by using a graphical user interface ( gui ) and requires as input all the parameters mentioned above .the output of dyastima provides all the available information about the cascade and tracking , such as number , energy , direction , arrival time and energy deposit of the secondary particles at different atmospheric layers .dyastima is also used for cascades simulation in the atmosphere of other planets .the new software application dyastima - r , which constitutes an extension of dyastima uses the output provided by dyastima , in order to calculate the energy that is deposited on the phantom and moreover the equivalent dose .monte carlo simulations are made in order to describe the particle interactions and the transport of the primary and secondary radiation through matter , especially through simulated media , such as the human body ( phantom ) and the aircraft shielding ( optional ) , ( fig .[ fig : dyastimar ] ) .the absorbed dose d is calculated by using the mean energy de deposited in a volume of mass dm at each step along a particle s trajectory ( eq .[ eq1 ] ) while the equivalent dose h is calculated by using the absorbed dose d averaged over the phantom , multiplied by a quality factor related to the biological effectiveness of the radiation , ( eq . [ eq2]) , .thus factor is defined as a function of the unrestricted linear energy transfer ( let ) in water , which is the energy lost by a charged particle divided by the path length .values of for different particles are given in table [ tab1 ] .[ cols="^,^",options="header " , ] since the radiation exposure field consists of different particles and energies , the total absorbed dose and total equivalent dose are calculated as the sum of the individual absorbed doses and equivalent doses respectively . 
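the dose bookkeeping described above amounts to the following sketch : the absorbed dose per particle species is the summed energy deposit in the water phantom divided by the phantom mass , and the equivalent dose weights each species by its quality factor . the energy deposits and weighting factors in the sketch are assumed placeholder values , not those tabulated for dyastima - r .

```python
import math

# the energy deposits per species and the weighting factors below are
# illustrative placeholders (assumptions), not values produced or tabulated
# by dyastima-r; only the bookkeeping is the point.
phantom_height, phantom_radius = 1.75, 0.25                 # m, as in the text
density_water = 1000.0                                      # kg / m^3
mass = density_water * math.pi * phantom_radius ** 2 * phantom_height  # kg

energy_deposit_j = {             # assumed total energy deposit per species (joule)
    "proton": 3.2e-7,
    "neutron": 1.1e-7,
    "electron": 0.6e-7,
    "photon": 0.4e-7,
}
w_factor = {"proton": 2.0, "neutron": 10.0, "electron": 1.0, "photon": 1.0}  # assumed

absorbed = {p: e / mass for p, e in energy_deposit_j.items()}      # gray = J / kg
equivalent = sum(w_factor[p] * d for p, d in absorbed.items())     # sievert

print(f"total absorbed dose  : {sum(absorbed.values()):.2e} Gy")
print(f"total equivalent dose: {equivalent:.2e} Sv")
```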
the gcr primary spectrum was extracted from creme2009 for solar quiet conditions at solar maximum and minimum , which represent ambient conditions in the absence of solar energetic particle events .the magnetic threshold rigidity is of order of 0 gv .a cylindrical phantom ( 1.75 m height , 0.25 m radius ) consisted of water is used .an optional airplane shell will be available soon , in order to study various shielding materials .the application is under development and some preliminary results are presented .the study of the radiation exposure of aviators and passengers due to the contribution of gcr and spes is of great importance .for this reason , the scientific community was led to develop space weather forecasting and monitoring centers . the athens neutron monitor station ( a.ne.mo.s . ) , participates as an expert group to european space agency ( esa ssa space radiation center ) providing timely and accurate warning for gles ( gle alert ) . in this work ,the dose and equivalent dose are studied during the maximum and descending phase of solar cycle 23 and the ascending phase of solar cycle 24 ( fig .[ fig : dose ] ) .furthermore , the contribution of different radiation particles in total dose ( fig .[ fig : dose_types ] ) and total equivalent dose ( fig .[ fig : eq_dose_types ] ) is also studied .the main results of this study can be summarized as follows : -the dose levels are directly related to the gcr particle intensities with the 11-year sunspot cycle and the 22-year solar magnetic cycle ( fig .[ fig : dose ] ) .since the gcr intensity is anti - correlated with the solar activity , the gcr exposure peaks at solar minimum and is lowest at solar maximum conditions .-the main contribution in the total dose is due to protons ( fig .[ fig : dose_types ] ) , while the main contribution in total equivalent dose is due to neutrons ( fig .[ fig : eq_dose_types ] ) . for further studies ,dyastima - r will be applied during intense solar activity periods , such as spe and ground level enhancements ( gle ) increase the energy deposit and the equivalent dose .as dyastima - r calculates the equivalent dose for various types of particles in different atmospheric altitudes and takes into account the phases of solar activity , as well as the geometry and shielding materials of the aircrafts , allowing the study of various flight scenarios .therefore , it can be of great interest for : * air - craft crews ( pilots , flight attendants ) * passengers ( frequent travelers , pregnant women , children ) * airlines and tour operators * air - craft manufacturers * legislators and civil aviation dyastima - r will be combined with the gle alert system operated in a.ne.mo.s and esa space radiation center and soon will be provided as a tool for an extensive study of the radiation exposure during aircraft flights and manned space missions .special thanks to the esa space situational awareness program p2-swe-1 space weather exert centers : definition and developement .we acknowledge the nmdb database ( www.nmdb.eu ) , founded under the european union s fp7 program ( contract no .213007 ) for providing cosmic ray data .a.ne.mo.s is supported by the special research account of athens university ( 70/4/5803 ) .
the primary components of radiation in interplanetary space are galactic cosmic rays ( gcr ) and solar cosmic radiation ( scr ) . gcr originates from outside our solar system , while scr consists of the low energy solar wind particles that flow constantly from the sun and the highly energetic solar particle events ( spes ) that originate from magnetically disturbed regions of the sun , which sporadically emit bursts of energetic charged particles . exposure to space radiation may place astronauts and aviation crews at significant risk of numerous biological effects resulting from exposure to radiation from a major spe or from combined spe and gcr . doses absorbed by tissues vary for different spes , and model systems have been developed to calculate the radiation doses that could have been received by astronauts during previous spes . for this reason a new application , dyastima - r , which constitutes a successor of the dynamic atmospheric shower tracking interactive model application ( dyastima ) , is being developed . this new simulation tool will be used for the calculation of the equivalent dose during flight scenarios in the lower or higher atmosphere , characterized by different altitudes , different geographic latitudes and different solar and galactic cosmic ray intensities . results for the calculated energy deposition and equivalent dose during quiet and disturbed periods of solar cycles 23 and 24 are presented .
object detection is the basis of intelligent video analysis . generally , object recognition , action and behavior recognition , and tracking rely on the detected objects . in a sequence of images ,there are both moving and static objects . in this paper , the focus is on detecting moving objects in a video .moving object detection is related to but also different from class - specific object detection and general salient object detection .pedestrian detection , face detection , and hand detection are instances of class - specific object detection . the task of moving object detection is to detect semantically meaningful moving objects .predefined classes of moving objects should be detected by a moving object detection algorithm .moreover , other semantically meaningful objects should also be detected even though their classes are not pre - defined .examples of meaningless moving objects include water ripples , waving trees ( leafs ) , shadows , noisy data , and the one caused by variations of illumination .however , the moving object detection algorithm relying merely on motion information is prone to incorrectly classify such meaningless moving objects as meaningful ones .the corresponding error is called false alarms . but a salient object detection algorithm tends to correctly discard the meaningless objects . hence , in this paper , we propose to incorporate the output ( i.e. , saliency map ) of a salient object algorithm into a subspace analysis based objective function so that the problem of false alarms can be alleviated .it is noted that our method is also capable of alleviating the problem of missed alarms .existing moving object detection algorithms tend to classify flat regions ( i.e. , textureless regions ) inside an object and moving regions with similar appearance ( texture ) to background as static background and thus such regions may be missed .state - of - the - art salient object detection algorithm can output large value of saliency map at such regions . utilizing the saliency map, our method has ability to classify such regions as foreground . in summary , we present an objective function that unifies subspace analysis of background and saliency map .the objective function consists of four terms : saliency map , sparsity , connectivity , and low - rank .an alternative minimization algorithm is proposed to find the optimal solution .the significant advantage compared to previous subspace based approaches is that saliency map is used to guide the result to have less false and missed alarms .the proposed method is named modsm .it is natural that ideal saliency map ( e.g. , the bottom of fig . [ fig.1](a ) and fig . [ fig.1](b ) ) is desirable for the proposed method .however , even relatively unsatisfying saliency map ( e.g. , the bottom of fig . [ fig.1](c ) and fig .[ fig.1](d ) ) can also play a positive role in the proposed modsam method .of course , completely bad saliency map has a negative influence on moving object detection .fortunately , great progress of salient object detection has been achieved and their fruits can be borrowed for moving object detection .several methods were developed to employ a salient object detection algorithm for improving the performance of moving object detection , , . 
despite the initial success, their performance can not arrive at the level of state - of - the - art low - rank based and subspace based methods , , , , , , .the rest of the paper is organized as follows .we review related work in section ii .the proposed method is given in section iii .experimental results are provided in section iv .we then conclude in section v.moving object detection can be implemented by different manners : detecting followed by tracking , subtracting frames , modeling background by density function , modeling background by subspace , modeling background by low - rank matrix .the last two manners dominate the state - of - the - art methods and are closely related to our work .note that moving object detection methods can also be divided into incremental methods and batch methods .our method belongs to incremental one . *subtracting frames * this kind of methods detects moving objects based on the differences between adjacent frames , .but these methods were proved not robust against illumination variations , changing background , camera motion , and noise . * modeling background by density function * this strategy assumes that the background is stationary and can be modeled by gaussian , mixture of gaussians , or dirichlet process mixture models , , . the foreground ( moving objects )can then be obtained by subtracting the current frame with the background model .* modeling background by subspace * instead of using a density function , subspace based method models the background as a linear combination the bases of a subspace , , , , .because the subspace can be updated in an incremental ( online ) manner , its efficiency is very high .this kind of subspace based algorithms needs to impose constraints on the foreground in order to obtain valid solutions .foreground sparsity is one of the widely used constraints which implies that the area of moving objects is small relative to the background .principal component prusuit ( pcp ) is an important pioneer work which adopts norm for measuring the foreground sparsity .it is the constraint of foreground sparsity that makes pcp suitable for foreground - background separation . without this constraint ,traditional robust subspace methods can only deal with noise and outliers , , , .the method improves pcp by taking into account the foreground connectivity ( i.e. , foreground structure ) .rfdsa takes into account smoothness and arbitrariness constraints .but pcp , rfdsa , and the method are batch algorithms .its detection speed can not arrive at real - time level .therefore , incremental ( online ) subspace methods are crucial for real - time detection . proposed an online subspace tracking algorithm called grasta ( grassmannian robust adaptive subspace tracking algorithm ) .similar to pcp , grasta also explores norm for imposing sparsity on foreground .but the grasta algorithm does not utilize any connectivity ( a.k.a . ,smoothness ) property of foreground .the gosus ( grassmannian online subspace updates with structured - sparsity ) algorithm imposes a connectivity constraint on the objective function by grouping the pixels with a superpixel method and encouraging sparsity of the groups . because of the large computational cost of the superpixel algorithm , gosus is not as efficient as grasta . * modeling background by low - rank matrix low - rank modeling * is effective in video representation . 
a sequence of vectorized images is represented as a matrix and the matrix is approximated by the sum of matrices of vectorized foreground , background , and noise .it is reasonable to assume that the background matrix is low - rank .decolor ( detecting contiguous outliers in the low - rank representation ) is considered as one of the most successful low - rank based algorithms . in decolor ,both foreground sparsity and contiguity ( connectivity ) are taken into account .it can be interpreted as -penalty regularized rpca .but the matrix computation can be started only if all of the predefined number of successive images is available .obviously , such a batch method is not suitable for real - time video analysis due to its low efficiency .isc and corola are incremental versions of decolor .isc and corola transforms low - rank method to subspace one .the low - rank methods and subspace methods impose sparsity and connectivity ( a.k.a . ,smoothness ) on foreground and impose low - rank or principal components on background .in addition to such properties , in this paper we propose to impose saliency map on background and foreground meanwhile .the proposed method belongs to incremental subspace based moving object detection method .our main contribution lies in employing a saliency map to form a new objective function , resulting in fewer false and missed alarms .the input of the algorithm is a sequence of frames ( images ) .denote the current image and denote the -th pixel of .there are pixels in an image .the goal is to find the locations of the moving objects ( i.e. , foreground ) in the current image .the foreground locations are represented by a foreground - indicator vector .the -element of equals to either zero or one : the foreground - indicator vector is obtained by binarizing background vector with a threshold : where is the -element of .the possibility of pixel being background increases with the value of and the possibility of pixel being foreground decreases with the increasing value of . as stated above ( i.e. , eq . ( [ eq.1 ] ) and eq .( [ eq.2 ] ) , the foreground - indicator vector can be obtained by binarizing background vector .the problem is how to compute once a frame is given . in this paperwe formulate the problem of computing as the following minimization problem : + \lambda \left\| { { \bf{db } } } \right\|_1 , \end{split } \label{eq.3}\ ] ] where stands for the -th row of . in eq .( [ eq.3]), ] :assign a small number to .update and by running the following formulas several loops : :assign a small number to .update by running the following formulas several loops : end iteration compute foreground - indicator vector is obtained by binarizing background vector: algorithm 1 summarizes the above steps .we describe intermediate results followed by comparison with state - of - the - art methods . in our experiments ,the saliency maps are obtained by the method developed in .we give intermediate results to show the role of the saliency map term and the connectivity term . for notation simplicity , in table [ t.1 ]we list the objective functions of three methods : baseline , add connectivity , and add saliency map .the objective function of the proposed method is + \lambda \left\| { { \bf{db } } } \right\|_1 .\end{split } \label{eq.25}\ ] ] the baseline is the method whose objective function ( eq . ( [ eq.26 ] ) ) consists of the first two terms of ( eq . ( [ eq.25 ] ) ) : }. 
\end{split } \label{eq.26}\ ] ] in addition to the reconstruction term , the baseline method merely makes use of the sparsity term .compared to ( eq . ( [ eq.26 ] ) ) , the objective function ( eq . ( [ eq.27 ] ) ) of add connectivity has additional connectivity term : } \\+ \lambda \left\| { { \bf{db } } } \right\|_1 .\end{split } \label{eq.27}\ ] ] the objective function of add saliency map is the same as ( eq . ( [ eq.25 ] ) ) .that is , add saliency map is the final form of our method where sparsity , low - rank , connectivity , and saliency map are taken into account ..method used for intermediate results .[ cols="<,<",options="header " , ] [ t.2 ] the -score , the harmonic mean of precision and recall , is used for objective evaluation : _ ws _ & _ cur _ & _ fou _ & _ hal _ & _ sm _ & _ lob _ & _ esc _ & _ bs _ & _ cam _ & _ mean _ + gmm & .7948 & .7580 & .6854 & .3335 & .5363 & .6519 & .1388 & .3838 & .0757 & .4842 + sobs & .8247 & .8178 & .6554 & .5943 & .6677 & .6489 & .5770 & .6019 & .6960 & .6760 + dp - gmm & .9090 & .8203 & .7049 & .5484 & .6522 & .5794 & .5055 & .6024 & .7567 & .6754 + pcp & .4137 & .6193 & .5679 & .5917 & .7234 & .6989 & .6728 & .6582 & .3406 & .5874 + decolor & .8866 & .8255 & .*8598 * & .6424 & .6525 & .6149 & .6994 & .5869 & .*8096 * & .7308 + grastra & .7310 & .6591 & .3786 & .5817 & .7142 & .5550 & .4697 & .6146 & .2504 & .5505 + rfdsa & .8796 & .8976 & .7544 & .6673 & .*7407 * & .*8029 * & .6353 & .6841 & .6779 & .7489 + * modsm * & * .9404 * & .*9098 * & .8205 & .*6859 * & .7362 & .5762 & .*7553 * & .*7280 * & .7876 & .*7711 * + [ t.3 ] the results of the different methods are given in table [ t.3 ] . among the nine videos , the proposed modsm , rfdsa , and decolor get the best performance on five ( i.e.,ws , cur , hal , esc , and bs ) , two ( i.e. , sm and lob ) , and two ( fou and cam ) different videos , respectively .the average -score of the proposed modsm is the largest .but our method does not work well for the lobby ( i.e. , lob ) video .the main reason is that the performance of the method of creating saliency map on the lobby video degraded significantly .if the lobby video is excluded , the average -score of modsm grows from 0.7711 to 0.7955 whereas that of rfdsa decreases from 0.7489 to 0.7421 .it is expected that the performance of modsm increases with the performance of saliency map .table [ t.3 ] also shows that if proper prior information ( i.e. , connectivity , saliency map , sparsity ) is employed then the incremental algorithm modsm can outperform the batch algorithms decolor and rfdsa .the roc curves of the modsm and rfdsa on the water surface , escalator , and fountain , and campus videos are shown in fig .[ fig.8 ] where the superiority of the modsm can be observed .take the fountain video as an example .the true positive rates ( i.e. 
, recall ) of modsm and rfdsa are respectively 0.99 and 0.935 when the false positive rate is 0.05 .note that the docolor method can not generate the roc curves because of their binary values of the estimated foreground and background .+ several specific results of modsm , rfdsa , and decolor are visualized in fig .[ fig.9 ] , fig .[ fig.10 ] , and fig .[ fig.11 ] where ( a ) , ( b ) , ( c ) , ( d ) , and ( e ) are the current input frame , ground truth of the moving objects , the detected results of modsm , rfdsa , and decolor , respectively .[ fig.9 ] ( a ) is a frame of the curtain video .[ fig.9 ] ( d ) shows that rfdsa incorrectly regards the variation caused by motion of the curtain as moving objects and rfdsa results in incomplete neck of the person .[ fig.9 ] ( e ) shows that decolor gives rise to even more false alarms .investigating figs .[ fig.9 ] ( c ) and ( b ) , one can find the result of modsm is very close to the ground truth .[ fig.10 ] ( a ) is a frame of the campus video .[ fig.10 ] ( d ) shows that rfdsa incorrectly classifies many waving leafs as meaningful moving objects .[ fig.10 ] ( e ) tells that decolor can not detect the left small person and the head of the right large person is also mistakenly classified as background .[ fig.10 ] ( c ) shows that the proposed method is powerful for classifying the waving leafs as background and detecting both of the persons .[ fig.11 ] ( a ) is a frame of the escalator video .[ fig.11 ] ( d ) shows that rfdsa classifies moving escalator as semantically meaningful moving objects .because of using the information of saliency map , the proposed modsm ( fig . [ fig.11 ] ( c ) ) avoids the errors of rfdsa .[ fig.11 ] ( e ) shows that decolor has almost not missed alarms but has many false alarms .the result ( fig. [ fig.11 ] ( c ) ) of modsm is the best among the three methods[ fig.12 ] ( a ) is a frame of the shopping mall video .it can be seen that modsm is comparable and even slightly better than rfdsa and decolor . as can be seen from table [ t.3 ] , the proposed method modsm results unsatisfying results on the lobby video . fig .[ fig.13 ] attempts to explain the reason . on the one hand , switching from light on ( fig .[ fig.13 ] ( a ) ) to light off ( fig .[ fig.13 ] ( b ) ) gives rise to large variation which is difficult for the basis vectors to capture . on the other hand, the saliency map is not satisfying on the regions of the moving object ( person ) . in this case , introducing the bad saliency map ( fig .[ fig.13 ] ( c ) ) has a negative influence on the task of moving object detection .the research progress of salient object detection is helpful for improving the performance of the propose method .in this paper , we have presented a moving object detection method .the method makes use of saliency map by incorporating it into a unified objective function where the properties of sparsity , low - rank , connectivity , and saliency are integrated .the manner of using saliency map yields smaller number of false alarms and missed alarms .our future work will apply the idea of using saliency map to other state - of - the - art incremental and batch methods of moving object detection .moreover , we will investigate other state - of - the - art methods of generating saliency map .1 r. achanta , a. shaji , k. smith , a. lucchi , p. fua , and s. susstrunk , `` slic superpixels compared to state - of - the - art superpixel methods , '' _ ieee trans .pattern analysis and machine intelligence _ , vol .2274 2282 , 2012 .l. 
balzano , r. nowak , and b. recht , `` online identification and tracking of subspaces from highly incomplete information , '' _ proc .allerton conference on communication _e. candes , x. li , y. ma , and j. wright `` robust principal component analysis ? ''_ journal of the acm _ , vol .3 , pp . 1 - 37 , 2011 .s. boyd , n. parikh , e. chu , b. peleato , and j. eckstein , `` distributed optimization and statistical learning via the alternating direction method of multipliers , '' _ foundations and trends in machine learning _ ,3 , no , 1 , pp . 1 - 22 , 2011 .p. dollar , r. appel , s. belongie , and p. perona , `` fast feature pyramids for object detection , '' _ ieee trans .pattern analysis and machine intelligence _ , vol .36 , no . 8 , pp . 1532 - 1545 , 2014 .p. favaro , r. vidal , and a. ravichandran , `` a closed form solution to robust subspace estimation and clustering , '' _ cvpr _ , 2011 .x. guo , x. wang , l. yang , x. cao , and yi ma , `` robust foreground detection using smoothness and arbitrariness constraints , '' _ proc .european conference on computer vision _ , 2014 .j. he , l. balzano , and a. szlam , `` incremental gradient on the grassmannian for online foreground and background separation in subsampled video , '' _ proc .ieee international conference on computer vision and pattern recognition _ , 2012 .l. maddalena and a. petrosino , `` a self - organizing approach to background subtraction for visual surveillance applications , '' _ ieee trans .image processing _17 , no . 7 , pp . 1168 - 1177 , 2008. s. mittal and p. meer , `` conjugate gradient on grassmann manifolds for robust subspace estimation , '' _ image and vision computing _ , vol .6 - 7 , pp .417 - 427 , 2012 .a. neri , s. colonnese , g. russo , and p. talone , `` automatic moving object and background separation , '' _ signal processing _ ,219 - 232 , apr . 1998 .y. pang , k. zhang , y. yuan , and k. wang , `` distributed object detection with linear svms , '' _ ieee trans .cybernetics _ , vol .2122 - 2133 , 2014 .y. pang , s. wang , and y. yuan , `` learning regularized lda by clustering , '' _ ieee trans . neural networks andlearning systems _ , vol .12 , pp . 2191 - 2201 , 2014 .y. pang , x. li , j. pan , and x. li , `` incrementally detecting moving objects in video with sparsity and connectivity , '' _ cognitive computation _ , 2015 .c. qiu and n. vaswani . reprocs , " missing link between recursive robust pca and recursive sparse recovery in large but correlated noise , _ arxiv _ , 1106.3286 , 2011 . c. stauffer and w. grimson , `` adaptive background mixture models for real - time tracking , '' _ proc .ieee international conference on computer vision and pattern recognition _ , 1999 .f. torre and m. black .`` a framework for robust subspace learning , '' _ ijcv _ , 54(1):117 - 142 , 2003 .r. vidal , y. ma , and s. sastry , `` generalized principal component analysis ( gpca ) , '' _ ieee trans .pattern analysis and machine intelligence _ , neurocomputing , vol .12 , pp . 1945 - 1959 , 2005 .w. wang , j. shen , x. li , and f. porikli , `` robust video object cosegmentation , '' _ ieee trans .image processing _ , vol .3137 - 3148 , 2015 . j. xu , v. k. ithapu , l. mukherjee , j. m. rehg , and v. singh , `` gosus : grassmannian online subspace updates with structured - sparsity , '' _ proc .ieee international conference on computer vision _ , 2013 .b. xin , y. tian , y. wang , and w. 
gao , `` background subtraction via generalized fused lasso foreground modeling , '' _ proc . ieee international conference on computer vision and pattern recognition _ , 2015 . x. zhou , c. yang , and w. yu , `` moving object detection by detecting contiguous outliers in the low - rank representation , '' _ ieee trans . pattern analysis and machine intelligence _ , vol . 35 , no . 3 , pp . 597 - 610 , 2013 .
moving object detection is key to intelligent video analysis . on the one hand , what moves includes not only the objects of interest but also noise and cluttered background . on the other hand , moving objects without rich texture are prone to being missed . many moving object detection algorithms therefore suffer from undesirable false alarms and missed alarms . to reduce both , in this paper we propose to incorporate a saliency map into an incremental subspace analysis framework , where the saliency map ensures that the estimated background is less likely than the foreground ( i.e. , the moving objects ) to contain salient objects . the proposed objective function systematically takes into account the properties of sparsity , low rank , connectivity , and saliency . an alternating minimization algorithm is proposed to seek the optimal solution . experimental results on the perception test image sequences demonstrate that the proposed method is effective in reducing false alarms and missed alarms .
the antarctic plateau has revealed to be particularly attractive for astronomy since already several years fossat ( 2005 ) , storey et al .it is extremely cold and dry and this does of this site an interesting candidate for astronomy in the long wavelength ranges ( infrared , sub - millimeter and millimeter ) thanks to the low sky brightness and high atmospheric transmission caused by a low temperature and concentration of the water vapour in the atmosphere ( valenziano & dalloglio 1999 , lawrence 2004 , walden et al . 2005 ) .the antarctic plateau is placed at high altitudes ( the whole continent has an average height of m ) , it is characterized by a quite peculiar atmospheric circulation and a quite stable atmosphere so that the level of the optical turbulence ( profiles ) in the free atmosphere is , for most of the time , lower than above whatever other mid - latitude sites ( marks et al .1996 , marks et al . 1999 , aristidi et al .2003 , lawrence et al .gillingham ( 1991 ) , suggested for the first time , such a low level of the optical turbulence above the antarctic plateau .atmospheric conditions , in general , degrade in proximity of the coasts . a low level of optical turbulence in the free atmosphere is , in general , associated to large isoplanatic angles ( ) . the coherence wavefront time ( ) is claimed to be particularly large above the antarctic plateau due to the combination of a weak and a low wind speed all along the whole troposphere . under these conditions ,an adaptive optics system can reach better levels of correction ( minor residual wavefront perturbations ) than those obtained by an equivalent ao system above mid - latitude sites .wavefront correction at high zernike orders can be more easily reached over a large field of view , the wavefront - corrector can run at reasonably low frequencies and observations with long exposure time can be done in closed loop .this could reveal particularly advantageous for some scientific programs such as searches for extra - solar planets .of course , also the interferometry would benefit from a weak and . in the last decade several site testing campaigns took place , first above south pole ( marks et al ., 1996 , loewenstein et al . 1998 , marks et al . 1999 , travouillon et al . 2003a , travouillon 2003b ) and , more recently , above dome c ( aristidi et al .2003 , aristidi et al . 2005a , lawrence et al .dome c seems to have some advantages with respect to the south pole : * ( a ) * the sky emission and atmospheric transparency is some order of magnitude better than above south pole ( lawrence 2004 ) at some wavelengths .the sensitivity ( depending on the decreasing of sky emission and increasing of transparency ) above dome c is around 2 times better than above south pole in near to mid - infrared regions and around 10 times better in mid to far - infrared regions . *( b ) * the surface turbulent layer , principally originated by the katabatic winds , is much more thinner above dome c ( tens of meters - aristidi et al .2005a , lawrence et al . 2004 ) than above south pole ( hundreds of meters - marks et al . 
1999 ) .the thickness and strength of the surface turbulent layer is indeed tightly correlated to the katabatic winds , a particular wind developed near the ground characterizing the boundary layer circulation above the whole antarctic continent .katabatic winds are produced by the radiative cooling of the iced surface that , by conduction , cools the air in its proximity .the cooled air , in proximity of the surface , becomes heavier than the air in the up layers and , for a simple gravity effect , it moves down following the ground slope with a speed increasing with the slope .dome c is located on the top of an altiplano in the interior region of antarctica and , for this reason , the katabatic winds are much weaker above dome c than above other sites in this continent such as south pole placed on a more accentuated sloping region . at presentnot much is known about the typical values of meteorological parameters above dome c during the winter ( april - september ) time i.e. the most interesting period for astronomers .the goals of our study are the following .we intend to provide a complete analysis of the vertical distribution of the main meteorological parameters ( wind speed and direction , absolute temperature , pressure ) in different months of the year using european center for medium weather forecasts ( ecmwf ) data .a particular attention is addressed to the wind speed , key element for the estimate of the wavefront coherence time .the ecmwf data - set is produced by the ecmwf general circulation model ( gcm ) and is therefore reliable at synoptic scale i.e. at large spatial scale .this means that our analysis can be extended to the whole troposphere and even stratosphere up to - km .the accuracy of such a kind of data is not particularly high in the first meters above the ground due to the fact that the orographic effects produced by the friction of the atmospheric flow above the ground are not necessarily well reconstructed by the gcms .we remind to the reader that a detailed analysis of the wind speed near the ground above dome c extended over a time scale of years was recently presented by aristidi et al .( 2005a ) . in that paper , measurements of wind speed taken with an automatic weather station ( aws ) ( )are used to characterize the typical climatological trend of this parameter . 
in the same paperit is underlined that estimates of the temperature near the ground are provided by schwerdtfeger ( 1984 ) .the interested reader can find information on the value of this meteorologic parameter above dome c and near the surface in these references .our analysis can therefore complete the picture providing typical values ( seasonal trend and median values ) of the meteorological parameters in the high part of the surface layer , the boundary layer and the free atmosphere .thanks to the large and homogeneous temporal coverage of ecmwf data we will be able to put in evidence typical features of the meteorological parameters in the summer and winter time and the variability of the meteorological parameters in different years .the winter time is particularly attractive for astronomical applications due to the persistence of the _night time _ for several months .this period is also the one in which it is more difficult to carry out measurements of meteorological parameters due to logistic problems .for this reason ecmwf data offer a useful alternative to measurements for monitoring the atmosphere above dome c over long time scales in the future .we intend to study the conditions of stability / instability of the atmosphere that can be measured by the richardson number that depends on both the gradient of the potential temperature and the wind speed : (, ) .the trigger of optical turbulence in the atmosphere depends on both the gradient of the potential temperature ( ) and the wind speed ( ) i.e. from the .this parameter can therefore provide useful information on the probability to find turbulence at different altitudes in the troposphere and stratosphere in different period of the year .why this is interesting ?at present we have indications that , above dome c , the optical turbulence is concentrated in a thin surface layer . above this layerthe is exceptionally large indicating an extremely low level of turbulence .the astronomic community collected so far several elements certifying the excellent quality of the dome c site and different solutions might be envisaged to overcome the strong surface layer such as rising up a telescope above m or compensating for the surface layer with ao techniques .the challenging question is now to establish more precisely how much the dome c is better than a mid - latitude site . in other words , which are the _ typical _ , and that we can expect from this site ?we mean here as _typical _ , values that repeat with a statistical relevance such as a mean or a median value .for example , the gain in terms of impact on instrumentation performances and astrophysical feedback can strongly change depending on how weak the is above the first m. in spite of the fact that , or are all small quantities , they can have a different impact on the final value of , and .only a precise estimate of this parameter will provide to the astronomic community useful elements to better plan future facilities ( telescopes or interferometers ) above the antarctic plateau and to correctly evaluate the real advantage in terms of turbulence obtained choosing the antarctic plateau as astronomical site . with the support of the richardson number , the wind speed profile and a simple analytical model we will try to predict a and a without the contribution of the first m of atmosphere . 
data provided by ecmwf can be used as inputs for atmospheric meso - scale models usually employed to simulate the optical turbulence ( ) and the integrated astroclimatic parameters ( masciadri et al . 2004 , masciadri & egner 2004 , masciadri & egner 2005 ) .measurements of wind speed done during the summer time have been recently published ( fig . 1 - aristidi et al .we intend to estimate the quality and reliability of the ecmwf data comparing these values with measurements from aristidi et al .so to have an indication of the quality of the initialization data for meso - scale models .we planned applications of a meso - scale model ( meso - nh ) to the dome c in the near - future .as a further output this model will be able to reconstruct , in a more accurate way than the ecmwf data - set , the meteorologic parameters near the ground .the paper is organized in the following way . in section [ meteo ]we present the median values of the main meteorological parameters and their seasonal trend .we also present a study of the richardson number tracing a complete map of the instability / stability regions in the whole km on a monthly statistical base . in section [ rel ]we study the reliability of our estimate comparing ecmwf analysis with measurements . in section [ disc ]we try to retrieve the typical value of and above dome c. finally , in section [ conc ] we present our conclusions .the characterization of the meteorological parameters is done in this paper with _analyses _ extracted by the catalog mars ( meteorological archival and retrieval system ) of the ecmwf .an _ analysis _ provided by the ecmwf general circulation ( gcm ) model is the output of a calculation based on a set of spatio - temporal interpolations of measurements provided by meteorological stations distributed on the surface of the whole world and by satellite as well as instruments carried aboard aircrafts .these measurements are continuously up - dated and the model is fed by new measurements at regular intervals of few hours .the outputs are formed by a set of fields ( scalar and/or vectors ) of classical meteorological parameters sampled on the whole world with a horizontal resolution of correspondent to roughly km .this horizontal resolution is quite better than that of the ncep / ncar reanalyses having an horizontal resolution of so we can expect more accurate estimate of the meteorological parameters in the atmosphere .the vertical profiles are sampled over levels extended up to km .the vertical resolution is higher near the ground ( m above dome c ) and weaker in the high part of the atmosphere . in order to give an idea of the vertical sampling , fig .[ mars ] shows the output of one data - set ( wind speed and direction , absolute and potential temperature ) of the mars catalog ( extended in the first km ) with the correspondent levels at which estimates are provided .we extracted from the ecmwf archive a vertical profile of all the most important meteorological parameters ( wind speed and direction , pressure , absolute and potential temperature ) in the coordinates ( 75 s , 123 e ) at : u.t . for each day of the 2003 and 2004 years .we verified that the vertical profiles of the meteorologic parameters extracted from the nearest 4 grid points around the dome c ( 75 s , 123 e ) show negligible differences .this is probably due to the fact that the orography of the antarctic continent is quite smoothed and flat in proximity of dome c. 
above this site we can appreciate on an orographic map a difference in altitude of the order of a few meters over a surface of kilometers ( masciadri 2000 ) , roughly the distance between 2 contiguous grid points of the gcm .the orographic effects on the atmospheric flow are visibly weak at such a large spatial scale on the whole km .we can therefore consider that these profiles of meteorologic parameters at macroscopic scale well represent the atmospheric characteristics above dome c starting from the first ten of meters as previously explained .the wind speed is one among the most critical parameters defining the quality of an astronomical site .it plays a fundamental role in triggering optical turbulence ( ) and it is a fundamental parameter in the definition of the wavefront coherence time : ^{-3/5 } \label{eq1}\ ] ] where is the wavelength , v the wind speed and the optical turbulence strength .figure [ year_wind ] shows the median vertical profile of the wind speed obtained from the ecmwf analyses during the 2003 ( a ) and 2004 ( b ) years .dotted - lines indicate the first and third quartiles i.e. the typical dispersion at all heights .figure [ year_wind ] ( c ) shows the variability of the median profiles obtained during the two years .we can observe that from a qualitative ( shape ) as well as quantitative point of view ( values ) the results are quite similar in different years .they can therefore be considered as typical of the site . due to the particular synoptic circulation of the atmosphere above antarctica ( the so called _ polar vortex _ )the vertical distribution of the wind speed in the summer and winter time is strongly different .the wind speed has important seasonal fluctuations above km .figure [ win_sum_wind ] shows the median vertical profiles of the wind speed in summer ( left ) and winter ( right ) time in 2003 ( top ) and 2004 ( bottom ) .we can observe that the wind speed is quite weak in the first km from the sea - level during the whole year with a peak at around km from the sea level ( km from the ground ) .* at this height the median value is m / sec and the wind speed is rarely larger than m / sec .* above km from the sea level , the wind speed is extremely weak during the summer time but during the winter time , it monotonically increases with the height reaching values of the order of m / sec ( median ) at km .the typical seasonal wind speed fluctuations at and km are shown in fig.[wind_free ] .this trend is quite peculiar and different from that observed above mid - latitude sites . in order to give an idea to the reader of such differences, we show in fig .[ spm_domec ] the median vertical profiles of the wind speed estimated above dome c in summer ( dashed line ) and winter time ( full bold line ) and above the san pedro mrtir observatory ( mexico ) in summer ( dotted line ) and winter time ( full thin line ) ( masciadri & egner 2004 , masciadri & egner 2005 ) .san pedro mrtir is located in baja california ( n , w ) and it is taken here as representative of a mid - latitude site . above mid - latitude sites ( san pedro mrtir - fig.[spm_domec ] )we can observe that the typical peak of the wind speed at the jet - stream height ( roughly 12 - 13 km from the sea - level ) have a strong seasonal fluctuation .the wind speed is higher during the winter time ( thin line ) than during the summer time ( dotted line ) in the north hemisphere and the opposite happens in the south hemisphere . 
at this height, the wind speed can reach seasonal variations of the order of m / sec . near the ground and above the wind speed strongly decreases to low values ( rarely larger than m / sec ) . during the winter time, the wind speed above dome c can reach at - km values comparable to the highest wind speed values obtained above mid - latitude sites at the jet - stream height ( i.e. m / sec ) . on the other side, one can observe that , * in the first km from the sea - level , the wind speed above dome c during the winter time is weaker than the wind above mid - latitude site in whatever period of the year .* figure [ seas_wind ] shows , month by month , the median vertical profile of the wind speed during 2003 ( green line ) and during 2004 ( red line ) .the different features of the vertical distribution of the wind speed that we have just described and attributed to the winter and summer time are more precisely distributed in the year in the following way . during december , january ,february and march the median wind speed above km is not larger than m / sec . during the other months , starting from km , the median wind speed increases monotonically with different rates .september and october show the steepest wind speed growing rates .it is worth to underline the same wind speed vertical distribution appears in different years in the same month . only during the august month it is possible to appreciate substantial differences of the median profile in the 2003 and 2004 years .this result is extremely interesting permitting us to predict , in a quite precise way , the typical features of the vertical distribution of wind speed in different months .figure [ cum_wind_8_9 km ] shows the cumulative distribution of the wind speed at - km from the sea level during each month .we can observe that , in only of cases , the wind speed reaches values of the order of m / sec during the winter time .this height ( - km ) corresponds to the interface troposphere - tropopause above dome c. as it will be better explained later , this is , in general , one of the place in which the optical turbulence can be more easily triggered due to the strong gradient and value of the wind speed .we remark that , similarly to what happens above mid - latitude sites , in correspondence of this interface , we find a local peak of the wind speed . in spite of thisthis value is much smaller above dome c than above mid - latitude sites .we can therefore expect a less efficient production of turbulence at dome c than above mid - latitude sites at this height .figure [ seas_wind_dir ] shows , for each month , the median vertical profile of the wind direction during 2003 ( green line ) and during 2004 ( red line ) .we can observe that , during the all months , in the low part of the atmosphere the wind blows principally from the south ( ) . in the troposphere ( - km ) the wind changes , in a monotonic way , its direction from south to west , north / west ( ) . in the slab characterized by the tropopause and stratosphere ( above km )the wind maintains its direction to roughly .above km , during the summer time ( more precisely during december , january and february ) the wind changes its direction again to south .this trend is an excellent agreement with that measured by aristidi et al .( 2005a)-fig.6 .the pressure is a quite stable parameter showing small variations during the summer and winter time above antarctica .figure [ press ] shows the pressure during the summer and winter time . 
in this picture, we indicate the values of the pressure associated to the typical interface troposphere - tropopause above mid - latitude sites ( mbar correspondent to km from the sea - level ) and above dome c ( - mbar km from the sea - level ) .as explained before , the interface between troposphere and tropopause corresponds to a favourable place in which the optical turbulence can be triggered .the absolute and potential temperature are fundamental elements defining the stability of the atmosphere .figure [ seas_abs_temp ] shows , for each month , the median vertical profile of the absolute temperature during 2003 ( green line ) and during 2004 ( red line ) .figure [ seas_pot_temp ] shows , for each month , the median vertical profile of the potential temperature during 2003 ( green line ) and during 2004 ( red line ) as measured by aristidi et al.(2005 ) ) ] .the value of indicates the level of the atmospheric thermal stability that is strictly correlated to the turbulence production .when is positive , the atmosphere has high probabilities to be stratified and stable. we can observe ( fig.[seas_pot_temp ] ) that this is observed in the ecmwf data - set and it is particularly evident during the winter time . another way to studythe stability near the ground is to analyse the , i.e. the gradient of the absolute temperature .when is positive , the atmosphere is hardly affected by advection because the coldest region of the atmosphere ( the heaviest ones ) are already in proximity of the ground .this is a typical condition for antarctica due to the presence of ice on the surface but it is expected to be much more evident during the winter time due to the extremely low temperature of the ice. we can observe ( fig.[seas_abs_temp ] ) that during the winter time , is definitely positive near the ground indicating a strongly stratified and stable conditions .all this indicates that some large wind speed gradient on a small vertical scale have to take place to trigger turbulence in the surface layer in winter time .we discuss these results with those obtained from measurements in section [ rich ] .a further important feature for the vertical distribution of the absolute temperature is the inversion of the vertical gradient ( from negative to positive ) in the free atmosphere indicating the interface troposphere - tropopause in general associated to an instable region due to the fact that .we can observe that , above dome c , this inversion is located at around km from the sea level during all the months . in the summer time , the median vertical profile of the absolute temperature is quite similar to the one measured by aristidi et al .( 2005a)-fig.9 .however the temperature during the winter time , above the minimum reached at km , does not increase in a monotonic way with the height but it shows a much more complex and not unambiguous trend from one month to the other with successive local minima and a final inversion from negative to positive gradients at km ( may , june , july and august ) and km ( september and october ) . 
considering that the regions of the atmosphere in which favour the instability of the atmosphere ( see section [ rich ] ), the analysis of the absolute temperature in the - km range tells us that , at least from the thermal point of view , it is much more complex and difficult to define the stability of the atmosphere during the winter time than during the summer time .the richardson number maps ( section [ rich ] ) will be able to provide us some further and more precise insights on this topic .we finally observe that , during all the months , the vertical distribution of the absolute temperature is reproduced identically each year .the stability / instability of the atmosphere at different heights can be estimated by the deterministic richardson number : where is the gravity acceleration m , is potential temperature and is the wind speed .the stability / instability of the atmosphere is tightly correlated to the production of the optical turbulence and it can therefore be an indicator of the turbulence characteristics above a site .the atmosphere is defined as _ stable _ when and it is _when .typical conditions of instability can be set up when , in the same region , and or . under these conditions the turbulence is triggered in strongly stratified shears .these kind of fluctuations in the atmosphere have a typical small spatial scale and can be detected by radiosoundings .when one treats meteorological parameters described at lower spatial resolution , as in our case , it is not appropriate to deal about a deterministic richardson number . following a statistical approach ( van zandt et al .1978 ) , we can replace the deterministic with a probability density function , describing the stability and instability factors in the atmosphere provided by meteorological data at larger spatial scales .this analysis has already been done in the past by masciadri & garfias ( 2001 ) .figures [ seas_pot_temp_grad ] and [ seas_wind_grad ] show , for each month , the gradient of the potential temperature and the square of the gradient of the wind speed .finally , fig.[seas_rich_inv ] shows , for each month , the inverse of the richardson number ( ) over km .we show instead of because the first can be displayed with a better dynamic range than the second one . from a visual point of view , permits , therefore , to better put in evidence stability differences in different months . as explained before , with our data characterized by a low spatial resolution , we can analyze the atmospheric stability in relative terms ( in space and time ) , i.e. to identify regions that are less or more stable then others .this is quite useful if we want to compare features of the same region of the atmosphere in different period of the year .the probability that the turbulence is developed is larger in regions characterized by a large . if , for example , we look at the distribution in the month of january ( middle of the summer time ) we can observe that , a maximum is visible at around ] km region and a monotonically decreasing above km is observed . during the months of may - november, shows more complex features . at ] km . the analysis of the ( or / ) does not give us the value of the at a precise height but it can give us a quite clear picture of _where _ and _ when _ the turbulence has a high probability to be developed over the whole year above dome c. 
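as an illustration of how such stability indicators can be evaluated on a discretized profile , the python sketch below computes the deterministic richardson number in its standard form ri = ( g / \theta ) ( d\theta / dz ) / ( dv / dz )^2 , deriving the potential temperature from the absolute temperature and pressure through \theta = t ( p_0 / p )^{r / c_p} . the four - level profile at the bottom is a hypothetical placeholder , not an ecmwf analysis , and , as in the maps above , the inverse 1 / ri can be displayed instead of ri to obtain a wider dynamic range .

```python
import numpy as np

G = 9.81          # gravity acceleration [m s^-2]
P0 = 1000.0       # reference pressure [hPa]
KAPPA = 0.286     # R / c_p for dry air

def richardson_profile(z, T, P, V):
    # z: height [m], T: absolute temperature [K], P: pressure [hPa], V: wind speed [m/s]
    theta = T * (P0 / P) ** KAPPA           # potential temperature
    dtheta_dz = np.gradient(theta, z)
    dV_dz = np.gradient(V, z)
    shear2 = np.maximum(dV_dz ** 2, 1e-12)  # avoid division by zero in calm layers
    return (G / theta) * dtheta_dz / shear2

# hypothetical 4-level profile, just to exercise the function
z = np.array([3300.0, 4000.0, 8500.0, 17000.0])
T = np.array([240.0, 235.0, 220.0, 200.0])
P = np.array([650.0, 600.0, 330.0, 90.0])
V = np.array([3.0, 5.0, 12.0, 8.0])
ri = richardson_profile(z, T, P, V)
print(1.0 / ri)   # the inverse is what the maps display, for a wider dynamic range
```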
summarizing we can state that , during the whole year , we have conditions of instability in the ] km above dome c is , indeed , clearly weaker than the wind speed at the same height above mid - latitude sites . in the high part of the atmosphere ( km ) , during the summer time the atmosphere is , in general , quite stable and we should expect low level of turbulence .during the winter time the atmosphere is more instable and one should expect a higher level of turbulence than during the summer time .the optical turbulence above km would be monitored carefully in the future during the months of september and october to be sure that is competitive with respect to mid - latitude sites in winter time .indeed , even a weak joint to the large wind speed at these altitudes might induce important decreasing of with respect to the found above mid - latitude sites .indeed , as can be seen in fig.[seas_wind ] , the wind speed at this height can be quite strong .on the other side , we underline that this period does not coincide with the central part of the winter time ( june , july and august ) that is the most interesting for astronomic observations .we would like to stress again this concept : in this paper we are not providing absolute value of the turbulence but we are comparing levels of instabilities in different regions of the atmosphere and in different periods of the year .this status of stability / instability are estimated starting from meteorological parameters retrieved from ecmwf data - set . considering that , as we proved once more , the meteorologic parameters are quite well described by ecmwf the relative status of stability / instability of the atmosphere represented by the richardson number maps provided in our paper is a constrain against which measurements of the optical turbulence need to be compared .we expect that measurements agree with the stability / instability properties indicated by the richardson maps . which is the typical seeing above the first m ?we should expect that the strength of the turbulence in the free atmosphere is larger in winter time than during the summer time .are the measurements done so far in agreement with the richardson maps describing the stability / instability of the atmosphere in different seasons and at different heights ? some sitestesting campaigns were organized above dome c ( aristidi et al .2005a , aristidi et al . 2005b , lawrence et al .2004 ) so far employing different instruments running in different periods of the year .we need measurements provided by a vertical profiler to analyze seeing values above m. balloons measuring vertical profiles have been launched during the winter time ( agabi et al .preliminary results indicate a seeing of above the first m. unfortunately , no measurements of the vertical distribution during the summer time is available so far .luckily , we can retrieve information on the level of activity of the turbulence in the high part of the atmosphere analysing the isoplanatic angle .this parameter is indeed particularly sensitive to the turbulence developed in the high part of the atmosphere . we know , at present , that the median measured with a gsm is during the summer time and during the winter time measured by a gsm ( ) and balloons ( ) in the same period ( aristidi et al .this should be analyzed more in detail in the future .however , in the context of our discussion , we are interested on a relative estimate i.e. 
on a parameter variation between summer and winter time .we consider , therefore , values measured by the same instrument ( gsm ) in summer and winter time . ] .this means that , during the winter time , the level of the turbulence in the free atmosphere is higher than in summer time .this matches perfectly with the estimates obtained in our analysis . on the other side, a dimm placed at m from the ground measured a median value of seeing in summer time ( aristidi et al .2005b ) and in winter time ( agabi et al .this instrument measures the integral of the turbulence over the whole troposphere and stratosphere .the large difference of the seeing between the winter and summer time is certainly due to a general increasing of the turbulence strength near the ground in the summer - winter passage in some period of the day in summer time as shown by aristidi et al .( 2005b ) . ] .indeed , measurements of the seeing above m obtained with balloons and done during the winter time ( agabi et al . 2006 ) give a typical value of .using the law:^{3/5}\ ] ] we can calculate that during the winter time the median seeing in the first m is equal to . in spite of the fact that we have no measurements of the seeing above m in summer time , we know , from the richardson analysis shown in this paper , that the seeing in this region of the atmosphere should be weaker in summer time than in winter time .this means that the seeing above m in summer time should be smaller than . knowing that the total seeing in summer time is equal to , one can retrieve that the seeing in the first m should be smaller than .this means that .this means that the turbulence strength on the surface layer is larger during the winter time than during the summer time . in section [ abs_temp ]we said that during the winter time and near the ground , the thermal stability is larger than during the summer time .this is what the physics says and what the ecmwf data - set show but it is in contradiction with seeing measurements .the only way to explain such a strong turbulent layer near the ground during the winter time is to assume that the wind speed gradient in the first m is larger during the winter time than during the summer time .this is difficult to accept if the wind speed is weaker during the winter time than during the summer time as stated by aristidi et al .as shown in masciadri ( 2003 ) , the weaker is the wind speed near the surface , the weaker is the gradient of the wind speed .we suggest therefore a more detailed analysis of this parameter near the surface extended over the whole year .this should be done preferably with anemometers mounted on masts or kites .this will permit to calculate also the richardson number in the first m during the whole year and observe differences between summer and winter time .this can be certainly a useful calculation to validate the turbulence measurements .the ecmwf data - set have no the necessary reliability in the surface layer to prove or disprove these measurements .as previously explained , measurements obtained recently above dome c with radiosoundings ( aristidi et al .2005a ) can be useful to quantify the level of reliability of our estimates . in aristidiet al . 
( 2005a ) is shown ( fig.4 ) the median vertical profile of the wind speed measured during several nights belonging to the summer time .figure 1 in aristidi et al.(2005a ) gives the histogram of the time distribution of measurements as a function of month .most of measurements have been done during the december and january months .figure [ mean_wind_dec_jan ] ( our paper ) shows the vertical profile of the wind speed obtained with ecmwf data related to the december and january months in and ( bold line ) and the measurements obtained during the same months above dome c ( thin full line ) .e. aristidi , member of the luan team , kindly selected for us only the measurements related to these two months from their sample .we note that , the ecmwf are all calculated at : u.t . while the balloons were not launched at the same hour each day .moreover , the measurements are related to 2000 - 2003 period while the analyses are related to the 2003 - 2004 period . in spite of this difference ,the two mean vertical profiles show an excellent correlation .the absolute difference remains below m / sec with a mean difference of m / sec basically everywhere . in the high part of the atmosphere ( fig.[mean_wind_dec_jan ] ) ,the discrepancy measurements / ecmwf analyses is of the order of .this is a quite small absolute discrepancy but , considering the typical wind speed value of at this height , it gives a relative discrepancy of the order of .we calculated that , assuming measurements of the seeing so far measured above dome c and profiles as shown in section [ disc ] ( table 2 ) , this might induce discrepancies on the estimates of the order of - . to produce a more detailed study on the accuracy of the ecmwf analyses and measurements one should know the intrinsic error of measurements and the scale of spatial fluctuations of the wind speed at this height .no further analysis is possible for us above the dome c to improve the homogeneity of the samples ( measurements and analyses ) and better quantify the correlation between them because we do not access the raw data of measurements .we decided , therefore , to compare measurements with ecmwf analyses above south pole in summer as well as in winter time to provide to the reader further elements on the level of reliability of ecmwf analyses above a remote site such as antarctica .figure [ jan_med ] ( january - summer time - nights ) and fig.[jj_med ] ( june and july - winter time - nights ) show the median vertical profiles of wind speed , wind direction and absolute temperature provided by measurements ] and ecmwf analyses .we underline that , in order to test the reliability of ecmwf analyses , we considered all ( and only ) nights for which measurements are available on the whole for the three parameters : wind speed , wind direction and absolute temperature . it was observed that , during the winter time , the number of radiosounding ( balloons ) providing a complete set of measurements decreases . in this seasonit is frequent to obtain measurements only in the first - . above this heightthe balloons blow up . to increase the statistic of the set of measurements extended over the whole we decided to take into account nights related to two months ( june and july ) in winter time. 
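the comparison itself reduces to a few profile statistics ; a minimal python sketch of the quantities used here ( median and quartile profiles from a stack of daily soundings , and the absolute and relative discrepancy between two median profiles ) is given below . the arrays at the bottom are synthetic placeholders , not the ecmwf or radiosounding data .

```python
import numpy as np

def median_and_quartiles(profiles):
    # profiles: array of shape (n_days, n_levels) on a common height grid
    q1, med, q3 = np.percentile(profiles, [25, 50, 75], axis=0)
    return q1, med, q3

def discrepancy(median_a, median_b):
    abs_diff = np.abs(median_a - median_b)                    # same units as the profiles
    rel_diff = abs_diff / np.maximum(np.abs(median_b), 1e-6)  # relative to the second profile
    return abs_diff, rel_diff

# synthetic placeholder data: 30 days, 10 vertical levels
rng = np.random.default_rng(1)
analyses = 10.0 + rng.normal(scale=2.0, size=(30, 10))
sondes = 10.0 + rng.normal(scale=2.5, size=(30, 10))
_, med_a, _ = median_and_quartiles(analyses)
_, med_s, _ = median_and_quartiles(sondes)
abs_diff, rel_diff = discrepancy(med_a, med_s)
print(abs_diff.max(), rel_diff.max())
```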
we can observe ( fig.[jan_med ] , fig.[jj_med ] ) that the correlation ecmwf analyses / measurements is quite good in winter as well in summer time for all the three meteorologic parameters .we expressly did not smoothed the fluctuations characterized by high frequencies of measurements .the discrepancy measurements / ecmwf analyses is smaller than on the whole troposphere .it is also visible that the natural typical fluctuations at small scales of the measured wind speed is .we conclude , therefore , that a correlation measurements / ecmwf analyses within m / sec error is a quite good correlation and these data - set can provide reliable initialization data for meso - scale models.as a further output of this study we observe that , during the winter time , the wind speed above south pole is weaker than above dome c , particularly above km from the ground .this fact certainly affects the value of the placing the south pole in a more favourable position with respect to dome c. on the other side , we know that the turbulent surface layer is much more stronger and thicker above south pole than dome c. this elements also affects the placing dome c in a more favourable position with respect to south pole .further measurements are necessary to identify which of these two elements ( a larger wind speed at high altitudes above dome c or a stronger turbulence surface layer above south pole ) more affects the . indeed , if typical values of ( msec ) in winter time ( june , july and august ) above south pole are already available ( marks et al .1999 ) , we have not yet measurements of above dome c related to the same period .of course , if above dome c will reveal to be larger than msec , this would mean that the stronger turbulence layer in the surface above south pole affects more than the larger wind speed at high altitudes above dome c. this study is fundamental to define the potentialities of these sites for applications to the interferometry and adaptive optics .we intend here to calculate the value of , in the slab of atmosphere in the range [ h , h using , as inputs , simple analytical models of the optical turbulence and the median vertical profiles of the wind speed shown in fig.[win_sum_wind ] .the superior limit ( h ) is defined by the maximum altitude at which balloons provide measurements before exploding and falling down .the inferior limit ( h ) corresponds to the expected surface layer above dome c. we define h m and h m the dome c ground altitude .we consider independent models with h km and h km .our analysis intend to estimate typical values of some critical astroclimatic parameters ( , ) without the contribution of the first m above the iced surface .the wavefront coherence time is defined as eq.([eq1 ] ) and the isoplanatic angle as : ^{-3/5}\ ] ] table [ tab1 ] and table [ tab2 ] summarize the inputs and outputs of these estimates . :the simplest ( and less realistic ) assumption is to consider the constant over the [ h , h range . to calculate the we use three values of references : , and .we do the assumption that the is uniformly distributed in the where h - h - h .we then calculate the as : the median vertical profiles of wind speed during the summer time in the 2003 and 2004 years ( see fig.[win_sum_wind ] ) are used for the calculation of . 
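a minimal sketch of this uniform - distribution estimate is given below . it assumes the standard zenith expressions commonly used for these quantities , r_0 = [ 0.423 k^2 \int c_n^2(h ) dh ]^{-3/5 } , seeing = 0.98 \lambda / r_0 , \theta_0 = [ 2.914 k^2 \int c_n^2(h ) h^{5/3 } dh ]^{-3/5 } and \tau_0 = [ 2.914 k^2 \int c_n^2(h ) v(h)^{5/3 } dh ]^{-3/5 } with k = 2 \pi / \lambda , and treats c_n^2 as constant between h_dw and h_up . the seeing value , height range and flat wind profile at the bottom are placeholders rather than the inputs of the tables ; the thin - layer model described next follows the same pattern , with the turbulent energy concentrated in a thin slab .

```python
import numpy as np

LAM = 0.5e-6                    # wavelength [m]
K2 = (2.0 * np.pi / LAM) ** 2   # k^2

def _trapz(y, x):
    # simple trapezoidal integration, kept explicit to avoid version-specific numpy helpers
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def cn2_integral_from_seeing(seeing_arcsec):
    # total integral of Cn^2 over height [m^(1/3)] reproducing the given seeing
    seeing_rad = seeing_arcsec * np.pi / (180.0 * 3600.0)
    r0 = 0.98 * LAM / seeing_rad
    return r0 ** (-5.0 / 3.0) / (0.423 * K2)

def tau0_theta0_uniform(seeing_arcsec, h_dw, h_up, h, wind):
    # Cn^2 constant between h_dw and h_up (heights above the telescope), zero elsewhere
    J = cn2_integral_from_seeing(seeing_arcsec)
    cn2 = np.where((h >= h_dw) & (h <= h_up), J / (h_up - h_dw), 0.0)
    theta0 = (2.914 * K2 * _trapz(cn2 * h ** (5.0 / 3.0), h)) ** (-3.0 / 5.0)
    tau0 = (2.914 * K2 * _trapz(cn2 * wind ** (5.0 / 3.0), h)) ** (-3.0 / 5.0)
    return theta0 * 180.0 * 3600.0 / np.pi, tau0     # [arcsec], [s]

# placeholder inputs: a seeing value, a 30 m to 25 km slab and a flat 10 m/s wind profile
h = np.linspace(30.0, 25000.0, 500)
v = np.full_like(h, 10.0)
print(tau0_theta0_uniform(0.27, 30.0, 25000.0, h, v))
```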
:as discussed previously the turbulence above dome c would preferably trigger at around $ ] km from the ground during the summer time .a more realistic but still simple model consists therefore in taking a thin layer of m thickness at km from the ground and the rest of the turbulent energy uniformly distributed in the complementary - .this model is particularly adapted to describe the in summer time in which there is a well localized region of the atmosphere in which the turbulence can more easily trigger ( see section [ rich ] ) . considering the more complex morphology of the richardson number during the winter time , we think that these simple models * ( a)-(n ) * should not well describe the turbulence vertical distribution in this season . in other worlds , we have not enough elements to assume a realistic model for the winter season and we will therefore limit our analysis to the summer season . to calculate the best values of and that can be reached above dome c we consider the realistic minimum values of c m ( marks et al .( 1999 ) ) given by the _atmospheric noise _ smaller than we enter in the regime of the electronic noise - see azouit & vernin ( 2005 ) , masciadri & jabouille ( 2001 ) . ] and we calculate the value of the c in the thin layer at km using eq.([cn2 ] ) and the following relation : aristidi et al .( 2005c ) measured an isoplanatic angle in the summer time .looking at table[tab1 ] - ( model a - f ) , we deduce that such a uniform distribution could match with these value ( ) only in association with an exceptional seeing of . in this case, we should expect a of the order of - msec .alternatively , under the assumption of a peaked at km from the sea - level ( table[tab1 ] - ( model g - n ) ) , a seeing of would better match with the . in this casewe should expect a of the order of - msec .summarizing we can expect the following data sets : [ , , - msec ] or [ , , - msec ] .the second one is much more realistic .it is interesting to note that the can be quite different if one assume a seeing slightly different ( - ) under the hypothesis of a distribution of the as described in this paper .we deduce from this analysis ( joint with the discussion done in section [ rich ] ) that the seeing above m during the summer time is probably of the order of or even smaller .this means that , in the free atmosphere , the seeing should be weaker during the summer time than during the winter time ( average - agabi et al .this result well matches with our richardson number maps .however , it would be interesting to measure the seeing in the free atmosphere during the summer time in order to better constrain the values of .this is not evident due to the fact that radiosoundings used to measure the so far can not be used to measure this parameter during the summer time .measurements are not reliable due to fictitious temperature fluctuations experienced by the captors in this season ( aristidi , private communication ) . 
from this simple analysiswe deduce reasonable values of and a msec during the summer time under the best atmospheric conditions and the most realistic distribution of in the atmosphere .we remind to the reader that some measurements of have already been published ( lawrence et al .such measurements have been done just in the interface summer - winter time ( april - may ) .our simple model is not adapted to compare estimates of and similar to those done in this section with those measured by lawrence et al .a more detailed information on the measurements in winter time will permit in the future to verify measurements done by lawrence et al .in this paper we present a complete study of the vertical distribution of all the main meteorological parameters ( wind speed and direction , pressure , absolute and potential temperature ) characterizing the atmosphere above dome c from a few meters from the ground up to km .this study employs the ecmwf _ analyses _ obtained by general circulation models ( gcm ) ; it is extended over two years 2003 and 2004 and it provides a statistical analysis of all the meteorological parameters and the richardson number in each month of a year .this parameter provides us useful insights on the probability that optical turbulence can be triggered in different regions of the atmosphere and in different periods of the year .the richardson number monitors , indeed , the conditions of stability / instability of the atmosphere from a dynamic as well as thermal point of view .the main results obtained in our study are : * the wind speed vertical distribution shows two different trends in summer and winter time due to the _polar vortex _ circulation . in the first km above the groundthe wind speed is extremely weak during the whole year . the median value at km , correspondent to the peak of the profile placed at the interface troposphere / tropopause , is m / sec .at this height the 3rd quartile of the wind speed is never larger than m / sec . above km the wind speed remains extremely weak ( the median value is smaller than m / sec ) during the summer time . during the winter timethe wind speed increases monotonically with the height and with an important rate reaching , at km , median values of the order of m / sec .a fluctuation of the order of m / sec is estimated at km between the summer and winter time . *the atmosphere above dome c shows a quite different regime of stability / instability in summer and winter time . during the summer time the richardson number indicates a general regime of stability in the whole atmosphere .the turbulence can be triggered preferably at [ - ] km from the ground . during the winter time the atmosphere shows a more important turbulent activity . in spite of the fact that the analysis of the richardson number in different months of the year is qualitative profiles but the relative probability to trigger turbulence in the atmosphere .] our predictions are consistent with preliminary measurements obtained above the site in particular period of the year .considering the good reliability of the meteorological parameters retrieved from the ecmwf analyses the richardson maps shown here should be considered as a reference to check the consistency of further measurements of the optical turbulence in the future . * with the support of a simple model for the distribution , the richardson number maps and the wind speed vertical profile we calculated a best and msec above dome c during the summer time . 
*the vertical distribution of all the meteorological parameters show a good agreement with measurements .this result is quite promising for the employing of the ecmwf analyses as initialization data for meso - scale models .besides , it opens perspectives to employ ecmwf data for a characterization of meteorologic parameters extended over long timescale .data - sets from mars catalog ( ecmwf ) were used in this paper .this study was supported by the special project ( spdesee ) - ecmwf-_. we thanks the team of luan ( nice - france ) : jean vernin , max azouit , eric aristidi , karim agabi and eric fossat for kindly providing us the wind speed vertical profile published in aristidi et al .( 2005a ) .we thanks andrea pellegrini ( pnra - italy ) for his kindly support to this study .this work was supported , in part , by the community s sixth framework programme and the marie curie excellence grant ( forot ) .aristidi , e. , agabi , k. , azouit , m. , fossat , e. , martin , f. , sadibekova , t. , vernin , j. , ziad , a. , travouillon , t. 2005b , proceedings of conference on `` wide field survey telescope on dome c / a '' , june 3 - 4 , beijing , as a supplement of `` acta astronomica sinica '' ccccccc & & & & & sum-2003 & sum-2004 + models & h & & & & & + & ( km ) & ( arcsec ) & m & ( arcsec ) & ( msec ) & ( msec ) + model a & 25 & 0.27 & 3.53 & 1.95 & 14.00 & 15.38 + model b & 25 & 0.2 & 2.14 & 2.63 & 18.91 & 20.77 + model c & 25 & 0.1 & 6.74 & 5.26 & 37.83 & 41.54 + model d & 20 & 0.27 & 4.58 & 2.53 & 13.52 & 14.87 + model e & 20 & 0.2 & 2.78 & 3.41 & 18.25 & 9.89 + model f & 20 & 0.1 & 8.76 & 6.82 & 36.49 & 40.11 + ccccccc & & & & & sum-2003 & sum-2004 + models & h & & & & & + & ( km ) & ( arcsec ) & m & ( arcsec ) & ( msec ) & ( msec ) + model g & 25 & 0.27 & 7.46 & 4.60 & 10.17 & 9.07 + model h & 25 & 0.2 & 4.40 & 6.03 & 13.87 & 12.14 + modeli & 25 & 0.1 & 1.25 & 10.18 & 28.34 & 33.00 + model l & 20 & 0.27 & 7.51 & 4.73 & 10.15 & 11.89 + model m & 20 & 0.2 & 4.49 & 6.32 & 13.75 & 16.08 + model n & 20 & 0.1 & 1.30 & 11.65 & 28.02 & 32.67 +
in this paper we present the characterization of all the principal meteorological parameters ( wind speed and direction , pressure , absolute and potential temperature ) extended up to km from the ground and over two years ( 2003 and 2004 ) above the antarctic site of dome c. the data set is composed of _ analyses _ provided by the general circulation model ( gcm ) of the european centre for medium - range weather forecasts ( ecmwf ) ; they are part of the mars catalog . a monthly and seasonal ( summer and winter time ) statistical analysis of the results is presented . the richardson number is calculated for each month of the year over km to study the stability / instability of the atmosphere . this permits us to trace a map indicating where and when the optical turbulence has the highest probability to be triggered in the whole troposphere , tropopause and stratosphere . we finally try to predict the best expected isoplanatic angle and wavefront coherence time ( and ) employing the richardson number maps , the wind speed profiles and simple analytical models of the vertical profiles .
modern track detectors based on semiconductor technologies contain larger amounts of material than gaseous detector types , partially due to the detector elements themselves and partially due to additional material required for on - sensor electronics , power , cooling , and mechanical support . a precise modelling of material effects in track reconstruction is therefore necessary to obtain the best estimates of the track parameters . such material effects are particularly relevant for the reconstruction of electrons which , in addition to ionization energy loss and multiple coulomb scattering , suffer from large energy losses due to bremsstrahlung . a well - known model of the bremsstrahlung energy loss is due to bethe and heitler . in this model , the probability density function ( pdf ) of the energy loss of an electron is f(z ) = \frac{[-\ln z]^{c-1}}{\gamma ( c ) } , where c = t / \ln 2 , t is the thickness of material traversed by the electron ( in units of radiation length ) , and z is the fraction of energy remaining after the material layer is traversed . the probability of a given fractional energy loss is assumed to be independent of the energy of the incoming particle . this pdf is shown in fig . [ fig : bhpdf ] for different thickness values . [ fig : bhpdf ] the baseline for track reconstruction in the cms tracker is the kalman filter . throughout the filter , tracks are described by a five - dimensional state vector , containing the information about the momentum , the direction and the position at some reference surface . the material effects are currently assumed to be concentrated in the active elements of the detector layers . in this context the optimal treatment of radiative energy loss is to correct the momentum with the mean value of the energy loss and to increase the variance of the momentum by adding the variance of the energy loss distribution . this procedure should ensure unbiased estimates of the track parameters and of the associated uncertainties . the kalman filter is a linear least - squares estimator , and is proved to be optimal only when all probability densities encountered during the track reconstruction procedure are gaussian . the implicit assumption of approximating the bethe - heitler distribution with a single gaussian is quite crude . it is therefore plausible that a non - linear estimator which takes the actual shape of the distribution into account can do better . a non - linear generalization of the kalman filter ( kf ) , the _ gaussian - sum filter ( gsf ) _ , has therefore been implemented in the reconstruction software of the cms tracker . in the gsf the distributions of all state vectors are gaussian mixtures , i.e.
weighted sums of gaussians instead of single gaussians . the algorithm is therefore appropriate if the probability densities involved in track reconstruction can be adequately described by gaussian mixtures . the basic idea of the present work is to approximate the bethe - heitler distribution as a gaussian mixture rather than a single gaussian , in which the different components of the mixture model different degrees of hardness of the bremsstrahlung in the layer under consideration . the resulting estimator resembles a set of kalman filters running in parallel , where each kalman filter corresponds to one of the components of the mixture describing the distribution of the state vector . an important issue with the gsf reconstruction of electrons is to obtain a good gaussian - mixture approximation of the bethe - heitler distribution ( a minimal numerical sketch of such an approximation is given after the references below ) . the parameters to be obtained are the weights , the mean values and the variances of each of the components in the approximating mixture . the parameters are determined by minimizing one of the following two distances : a distance between the cumulative distribution functions ( cdf ) of the model distribution and of the mixture , and the kullback - leibler distance \int \ln [ f(z)/g(z ) ] \ , f(z ) \ , dz , where f and g are the pdfs of the model distribution and of the gaussian mixture , respectively . hereafter , the mixtures obtained by minimizing the former are called cdf - mixtures , whereas the mixtures obtained by minimizing the latter are called kl - mixtures . the minimizations have been done independently on a set of discrete values of the thickness , ranging from 0.02 to 0.20 . figures [ fig : kldist ] and [ fig : cdfdist ] show the resulting distances as a function of thickness for a varying number of components in the approximating mixture . [ fig : kldist ] [ fig : cdfdist ] in order to obtain mixtures for arbitrary values of the thickness , fifth - degree polynomials have been fitted to the parameters as a function of the thickness . due to the fast access to the parameters from the polynomials , the calculation of the mixture is done on the fly during reconstruction , using the effective thickness of a detector layer obtained from the knowledge of the incident angle of inclination . the approximation of the energy loss by a gaussian mixture amounts to a convolution of this mixture with the current state , which in general is also composed of several gaussian components . the strict application of the gsf algorithm therefore quickly leads to a prohibitively large number of components due to the combinatorics involved each time a layer of material is traversed . in a realistic implementation of the gsf the number of components must repeatedly be reduced to a predefined maximum . as little information as possible should be lost in this procedure . two strategies have been tested : 1 . only the _ n _ components with the largest weights are kept ; 2 . components are merged into clusters , according to a given metric . the first option has the advantage of being computationally light , but it turns out to be inferior . even the first two moments of the estimated parameters are not described correctly .
in the second approach ,the component with the largest weight is merged with the one closest to it , and this procedure is repeated until the required number of components is reached .the results below have been obtained by using the kullback - leibler distance defined in equation ( [ equation : kldist ] ) as a measure of distance .first , results from the reconstruction of data originating from a simplified simulation are shown . in this simulation multiple scattering and ionization energy lossare turned off , all the material is concentrated on the detector units , and the exact amount of material used in the simulation is known by the reconstruction program .single electron tracks with gev/ have been simulated for absolute values of less than 1.0 .reconstructed hits have been collected using the knowledge of the associated simulated hits , so no pattern recognition has been involved .the following results all refer to the quantity ( charge over absolute value of the momentum ) recorded at the point of closest approach to the vertex in the transverse plane the transverse impact point ( tip ) after a fit going from the outside towards the inside of the tracker .figure [ fig : qpsingletrack ] shows an example of the estimated for one single track , both for the kf and for the gsf . [ fig : qpsingletrack ] figures [ fig : pullprobsmixture ] and [ fig : pullprobs ] show probability distributions for the estimated of the kf and the gsf with a varying maximum number of components kept during the reconstruction . given the estimated pdf ( a single gaussian for the kf , a gaussian mixture for the gsf ) , each entry in the histogram amounts to the integral from to the true value of .if the estimated pdf is a correct description of the real distribution of the parameter , the corresponding histogram should be flat .for the kf ( solid ) and the gsf with a maximum of six ( dashed - dotted ) , twelve ( dashed ) , ( solid ) and ( dotted ) components kept during reconstruction . in this casethe same six - component cdf - mixture has been used both in the simulation of the disturbance of the momentum in a detector unit and in reconstruction . 
keeping 36 components yields estimates quite close to the correct distribution of the parameter . [ fig : pullprobsmixture ] for the kf ( solid ) and the gsf with a maximum of six ( dashed - dotted ) , twelve ( dashed ) , ( solid ) and ( dotted ) components kept during reconstruction . the same six - component mixture as the one described in the caption of fig . [ fig : pullprobsmixture ] has been used in reconstruction , but the simulation of the disturbance of the momentum in a detector unit has been done by sampling from the bethe - heitler distribution . the distributions for the gsf are seen to be less flat than those shown in fig . [ fig : pullprobsmixture ] . [ fig : pullprobs ] the deviation from flatness can be quantified by the of the difference between the probability distributions of and the flat distribution . this per bin is shown in fig . [ fig : probchi2 ] for a set of different mixtures as a function of the maximum number of components kept . the cdf - mixtures are superior to the kl - mixtures concerning the quality of the estimated . the main trend seems to be related to the maximum number of components kept rather than to the number of components in the mixture describing the energy loss , even though the mixtures with five and six components are best in the limit of keeping a large number of components . [ fig : probchi2 ] figure [ fig : qppredtipbh ] shows the residuals of the estimated of the gsf and the kf with respect to the true value of the parameter . the estimated for the gsf is the mean value of the state vector mixture , and the mixture used for this specific plot is a cdf - mixture with six components . in order to quantify the difference between the gsf and the kf residuals , the full - width at half - maximum ( fwhm ) and the half - widths of intervals covering 50% and 90% of the distribution have been considered . the covering intervals have been chosen to be symmetric about zero . the fwhm and the half - widths of the covering intervals are shown in figs . [ fig : resfwhm ] , [ fig : resq5 ] and [ fig : resq9 ] . the different flavours of the gsf in these figures are the same as those described in the caption of fig . [ fig : probchi2 ] . [ fig : qppredtipbh ] [ fig : resfwhm ] [ fig : resq5 ] [ fig : resq9 ] the gsf and the kf have also been run on tracks from a full simulation using the official cms simulation program . the and the range are the same as in the simplified simulation , but the amount and spatial distribution of the material are different . probability distributions of the estimated for the gsf and the kf are shown in fig . [ fig : pullprobsfull ] . [ fig : pullprobsfull ] the probability distribution of the gsf exhibits no large deviation from flatness , indicating that the estimated pdf of describes reasonably well the actual pdf of . this observation is all the more remarkable since , with the full simulation , the energy loss is not generated by the simple bethe - heitler model , and neither the exact amount nor the exact location of the material is known to the gsf . the corresponding residuals of the estimated with respect to the true value are shown in figs . [ fig : qppredtipfull ] and [ fig : qpupdtipfull ] . the residuals shown in fig . [ fig : qpupdtipfull ] have been obtained by including a vertex constraint in the fit . such a constraint allows the momentum to be measured in the innermost part of the track and thus gives a handle on possible radiation in the first two layers . the result of including this constraint is a less skew
distribution with the mode being moved closer towards zero , and the amount of tracks in the tails is also reduced . even though the results from the full simulation qualitatively seem to confirm those from the simplified simulation , more studies are needed to understand the differences in detail . [ fig : qppredtipfull ] [ fig : qpupdtipfull ] the gaussian - sum filter has been implemented in the cms reconstruction program . it has been validated with electron tracks from a simplified simulation in which the energy loss distribution ( bethe - heitler model ) , the exact amount of material and its exact location are known to the reconstruction program . it has been shown that the quality of the momentum estimate depends mainly on the number of mixture components kept during reconstruction , and to some extent also on the number of components in the mixture approximation to the energy loss distribution . a comparison with the best linear unbiased estimator , the kalman filter , shows a clear improvement of the momentum resolution . remarkably , a similar improvement can be seen with electron tracks from the full simulation , although in this case neither the exact energy loss distribution nor the precise amount and location of material are known to the reconstruction program . more systematic studies with electrons from the full simulation are clearly needed , but it seems safe to conclude that in electron reconstruction the gaussian - sum filter yields a substantial gain in precision as compared to the kalman filter . h. bethe and w. heitler , proc . r. soc . london * a 146 * ( 1934 ) 83 . r. frühwirth , nucl . instrum . and methods * a 262 * ( 1987 ) 444 . d. stampfer , m. regler and r. frühwirth , comput . phys . commun . * 79 * ( 1994 ) 157 . r. frühwirth , comput . phys . commun . * 100 * ( 1997 ) 1 . r. frühwirth and s. frühwirth - schnatter , comput . phys . commun . * 110 * ( 1998 ) 80 .
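to make the preceding discussion concrete , the following python sketch evaluates the bethe - heitler pdf , measures how well a gaussian mixture approximates it with the kullback - leibler distance , and shows a moment - preserving merge of two components of the kind used when the number of components has to be reduced . it is only a sketch : the mixture parameters below are illustrative placeholders rather than the fitted cdf - or kl - mixtures , the grid - based integration is a crude stand - in for the actual minimization , and the merge rule is a standard moment - preserving combination that may differ in detail from the clustering metric used in the paper .

import numpy as np
from math import gamma

def bethe_heitler_pdf(z, t):
    # f(z) = [-ln z]^(c-1) / gamma(c), with c = t / ln 2 and z the fraction of energy kept
    c = t / np.log(2.0)
    return (-np.log(z)) ** (c - 1.0) / gamma(c)

def gaussian_mixture_pdf(z, weights, means, sigmas):
    z = np.asarray(z)[..., None]
    comp = np.exp(-0.5 * ((z - means) / sigmas) ** 2) / (sigmas * np.sqrt(2.0 * np.pi))
    return np.sum(weights * comp, axis=-1)

def kl_distance(t, weights, means, sigmas, n=4000, eps=1e-6):
    # d_kl = integral of ln(f/g) * f dz, approximated on a grid over (0, 1)
    z = np.linspace(eps, 1.0 - eps, n)
    f = bethe_heitler_pdf(z, t)
    g = np.maximum(gaussian_mixture_pdf(z, weights, means, sigmas), 1e-300)
    return np.trapz(np.log(f / g) * f, z)

def merge_components(w1, m1, v1, w2, m2, v2):
    # moment-preserving merge of two weighted gaussian components (weight, mean, variance)
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    v = (w1 * (v1 + m1 ** 2) + w2 * (v2 + m2 ** 2)) / w - m ** 2
    return w, m, v

# illustrative (not fitted) two-component mixture for t = 0.1 radiation lengths
t = 0.1
weights = np.array([0.8, 0.2])
means = np.array([0.95, 0.70])
sigmas = np.array([0.03, 0.15])
print("kl distance:", kl_distance(t, weights, means, sigmas))
print("merged component:", merge_components(0.8, 0.95, 0.03 ** 2, 0.2, 0.70, 0.15 ** 2))

in an actual reduction step the two closest components according to the chosen metric would be merged in this moment - preserving way , and the operation repeated until the predefined maximum number of components is reached .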
the bremsstrahlung energy loss distribution of electrons propagating in matter is highly non gaussian . because the kalman filter relies solely on gaussian probability density functions , it might not be an optimal reconstruction algorithm for electron tracks . a gaussian - sum filter ( gsf ) algorithm for electron track reconstruction in the cms tracker has therefore been developed . the basic idea is to model the bremsstrahlung energy loss distribution by a gaussian mixture rather than a single gaussian . it is shown that the gsf is able to improve the momentum resolution of electrons compared to the standard kalman filter . the momentum resolution and the quality of the estimated error are studied with various types of mixture models of the energy loss distribution .
in ( to be referred to as part i hereafter ) , we have presented a framework of multi - scale turbulence modeling with the correlations up to the fourth order , based on the navier - stokes equations , reynolds average , the constraints of inequality from the physical considerations and the cauchy - schwarz inequality and so on , the maximum information principle and the alternative objective function such as turbulent energy contained in the flow .the model is an optimal control problem with the fourth order correlations as the control variables .we have adopted the notion of the information and the maximum information principle , unlike that of edwards and mccomb who resorted to the entropy method to fix certain response functions of an isotropic homogeneous model through the maximization of entropy .the interpretation of the information as a thermodynamic entropy raises an interesting issue ; if we view the navier - stokes equations as a consequence of the second law of thermodynamics in that or , the question arises on how to justify as another entropy of thermodynamic nature , in addition to the one leading to . as an alternative , one may view the information as the mixing entropy as done in ( of macro - scales ) .the next important question is how to make the evaluation of computationally feasible under certain constraints such as the equations of evolution for the correlations and the positive semi - definiteness of the reynolds stress listed in part i. from the point of view of modeling , the maximization of the information under the constraints reflects the uncertainty in our inference based on the data and information available and specified , a ground for our adoption of the notion . to understand the mathematical challenges faced by the formulation of part i , we apply it to two - dimensional homogeneous shear turbulence in this work .we need to modify the formulation slightly , especially the alternative objective function , in order to cope with the infinite domain of motion ; the turbulent energy density is used as the objective to be maximized . on the basis of the supposed homogeneity , fourier transforms are applied to the correlations , and two primary integro - differential equations are obtained in the fourier wave number space , one for the second order correlations and the other for the third order correlations . 
without imposing the objective maximization and the constraints of inequality , these two equations can be solved formally by the method of characteristics and by the separation of variables , respectively : ( i ) the solutions of the former hold for rather general initial conditions and describe the corresponding evolution of the motion ( the transient state solutions ) .( ii ) the solutions from the latter hold for some special initial conditions and have an exponential dependence on time with spatial supports ( the asymptotic state solutions ) .( iii ) under certain conditions yet to be studied , a transient solution evolves , at great time , into a corresponding asymptotic state solution , and this evolution process involves the turbulent energy transfer among different wave numbers or different spatial scales .the asymptotic state solutions are characterized by the dimensionless exponential time rate of growth , , compatible with the studies of three - dimensional homogeneous shear turbulence ( , , , , , , , , , ) , and the rate of growth is bounded from above by , as argued mathematically with the help of certain constraints of inequality ; the existence of such an upper bound in the associated three - dimensional shear turbulence will be explored in a report forthcoming .the asymptotic solutions of the fourth order model are to be obtained from convex programming , with mathematical proofs to argue for the convexity of the quadratic constraints on the basis of linearization . for the asymptotic state solutions of the reduced model with the correlations up to the third order , the objective and all the constraints are linear , and the optimization reduces to a linear programming problem , with the possibility of either the primary component of the third order correlations or an associated integral quantity as the optimal control variable . for the sake of exploring the multi - scale structure of the turbulent motion ,we relax mathematically the restriction of to , which is justified and allowed by the two additional arguments for the existence of in the reduced model . at a specific ,the asymptotic solutions of the correlations are effectively nontrivial only inside certain bounded domains of the wave number spaces ; and the sizes of the domains shrink as increases from to . in the case that is the optimal control variable , there exist feasible solutions for any , implying that the reduced model may be inadequate to simulate the transient states which do not decay .the homogeneous turbulence modeling problem concerned may also be viewed as a stability problem due to the averaged flow field is held constant .it raises the possibility that the framework of optimization developed here may have relevance to flow stability analysis .further works need to be done to assess the adequacy and the feasibility of the idea .this paper is organized as follows . in section[ sec : homogeneousturbulence ] , we develop the differential equations , the constraints of inequality and the objective function in physical and fourier wave number spaces . in section [ sec : formalsolutionswithoutenforcingconstraints ] , without enforcing the maximization of the objective function and the constraints of inequality , we present the formal solutions , both transient and asymptotic , to the primary integro - differential equations . 
also , we discuss the effects of bounded solutions at finite time on the distributions of the correlations in the wave number spaces , the intrinsic equalities of zero sum balance for some integral quantities , and the evolution of a transient state solution to an asymptotic state solution under certain conditions .we also address the relevance of the formulation of turbulence modeling as optimal control to flow stability analysis . in section [ sec : asymptoticstatesolution ] , we analyze in detail the asymptotic state solutions , especially for the case of the reduced model .the convexity of the quadratic constraints is demonstrated ; various restrictions on the exponential growth rate are discussed ; the possible structures of the correlations in the wave number space are explored and two possible implementations of numerical approximations are presented .to examine how challenging the formulation proposed in part i is mathematically and whether it can produce adequate results , we consider the homogeneous shear turbulence in with an average velocity field of where is a nontrivial constant . since the average flow field of and is not affected by the correlations , we need to consider only the fluctuation fields of and governed by and due to the symmetry of the flows associated with and , we will restrict to in this work . under this restrictionwe can introduce the dimensionless quantities through and non - dimensionalize the above equations of motion to obtain the forms of and here , we have removed the accent for the sake of brevity . considering that the probability density function will not be present explicitly in the optimization problem , we can incorporate the supposed homogeneity in the first place in order to simplify the mathematical treatment . 
to this end , we construct , on the basis of through , the following equations for the evolution of the multi - point correlations up to the fourth order , & \frac{\partial}{\partial x_k}\overline{\wk(\bx ) \q(\by)}=0,\quad \frac{\partial}{\partial x_k}\overline{\wk(\bx ) \wl(\by ) \q(\bz)}=0 \label{hst_divergencefreeinphysicalspace_2p_3p}\end{aligned}\ ] ] & + \delta_{j1 } \overline{\wi(\bx ) w_2(\by ) } + \frac{\partial}{\partial x_k}\overline{\wi(\bx )\wk(\bx ) \wj(\by ) } + \frac{\partial}{\partial y_k}\overline{\wi(\bx ) \wk(\by ) \wj(\by ) } \notag\\[4pt ] = & -\frac{\partial } { \partial x_i}\overline{\q(\bx ) \wj(\by ) } -\frac{\partial } { \partial y_j}\overline{\wi(\bx ) \q(\by ) } + \bigg(\frac{\partial^2}{\partial x_k \partial x_k}+\frac{\partial^2}{\partial y_k\partial y_k}\bigg)\overline{\wi(\bx ) \wj(\by ) } \label{hst_clminphysicalspace_2p}\end{aligned}\ ] ] & + z_2 \frac{\partial } { \partial z_1}\overline{\wi(\bx)\wj(\by)\wk(\bz ) } + \delta_{i1 } \overline{w_2(\bx)\wj(\by)\wk(\bz ) } + \delta_{j1 } \overline{w_2(\by)\wi(\bx)\wk(\bz ) } \notag\\[4pt ] & + \delta_{k1 } \overline{\wi(\bx)\wj(\by)w_2(\bz ) } + \frac{\partial } { \partial x_l}\overline{\wi(\bx ) \wl(\bx)\wj(\by)\wk(\bz ) } + \frac{\partial } { \partial y_l}\overline{\wj(\by ) \wl(\by)\wi(\bx)\wk(\bz ) } \notag\\[4pt ] & + \frac{\partial } { \partial z_l}\overline{\wk(\bz ) \wl(\bz)\wi(\bx)\wj(\by ) } = -\frac{\partial } { \partial x_i}\overline{\q(\bx ) \wj(\by)\wk(\bz ) } -\frac{\partial } { \partial y_j}\overline{\q(\by ) \wi(\bx)\wk(\bz ) } \notag\\[4pt ] & -\frac{\partial } { \partial z_k}\overline{\q(\bz ) \wi(\bx)\wj(\by ) } + \bigg ( \frac{\partial^2 } { \partial x_l\partial x_l } + \frac{\partial^2 } { \partial y_l\partial y_l } + \frac{\partial^2 } { \partial z_l\partial z_l}\bigg)\overline{\wi(\bx)\wj(\by)\wk(\bz ) } \label{hst_clminphysicalspace_3p}\end{aligned}\ ] ] and here and below the dependence of the fluctuations and correlations on is suppressed for the sake of brevity .we now apply the homogeneity to the multi - point correlations involved in through , & \overline{\wi(\bx ) \wj(\by ) \wk(\bz ) \wl(\bz')}=\overline{\wi(\mathbf{0 } ) \wj(\mathbf{\br } ) \wk(\bs ) \wl(\bs')}=:\w_{ijkl}(\br,\bs,\bs ' ) , \notag\\[4pt ] & \overline{\q(\bx ) \,\q(\by)}=\overline{\q(\mathbf{0})\ , \q(\by-\bx)}=:\q(\br ) , \quad \overline{\q(\bx ) \wj(\by)}=\overline{\q(\mathbf{0 } ) \wj(\by-\bx)}=:\qj(\br ) , \notag\\[4pt ] & \overline{\q(\bx)\wj(\by)\wk(\bz)}=\overline{\q(\mathbf{0})\wj(\by-\bx)\wk(\bz-\bx)}=:\q_{jk}(\br,\bs ) \label{homogeneity}\end{aligned}\ ] ] where , and .obviously , there are symmetric relations from the definitions above such as & \w_{ijkl}(\br,\bs,\bs')=\w_{ijlk}(\br,\bs',\bs)=\w_{ilkj}(\bs',\bs,\br)=\w_{ikjl}(\bs,\br,\bs')=\w_{jikl}(-\br,\bs-\br,\bs'-\br ) \notag\\[4pt ] & = \w_{kijl}(-\bs,\br-\bs,\bs'-\bs)=\w_{lijk}(-\bs',\br-\bs',\bs-\bs ' ) , \quad \q(\br)=\q(-\br ) , \quad \q_{jk}(\br,\bs)=\q_{kj}(\bs,\br ) \label{homogeneity_symmetry}\end{aligned}\ ] ] the domain of motion and the averaged flow field are symmetric under the coordinate transformation of .further , it can be verified directly that , if , is a solution of through , , is also a solution , that is , the solution satisfies the symmetry of inversion , provided that the initial condition is adequate , such as holding at .it is interesting to notice that the adoption of implies that .that is , is a peculiar point at which the velocity fluctuation remains zero under the symmetry of the exact solutions for the corresponding 
initial conditions ; this result has the non - physical consequence of .it follows that the above symmetry does not hold for all the realizable individual solutions since the initial conditions do not possess such a symmetry .we will still adopt , however , the symmetry in a statistical sense as formulated in , which may be justified from the aspect of the coordinate transformation for the flow due to its geometric and kinematic symmetries .for instance , if we rotate the cartesian coordinate system under , we have and we expect that the statistical correlations transform accordingly as specified in below .we now impose the statistical symmetry of inversion , & \overline{w_i(\bx ) w_j(\by ) w_k(\bz ) w_l(\bz')}=\overline{(-w_i(-\bx))\ , ( -w_j(-\by))\ , ( -w_k(-\bz))\ , ( -w_l(-\bz ' ) ) } , \notag\\[4pt ] & \overline{\q(\bx\ , ) \q(\by)}=\overline{\q(-\bx ) \q(-\by)},\quad \overline{\q(\bx ) \wj(\by)}=\overline{\q(-\bx ) ( -\wj(-\by ) ) } , \notag\\[4pt ] & \overline{\q(\bx ) \wj(\by ) \wk(\bz)}=\overline{\q(-\bx ) ( -\wj(-\by ) ) ( -\wk(-\bz ) ) } \label{statisticalsymmetryofinversion}\end{aligned}\ ] ] or & \q(\br)=\q(-\br),\quad \q_j(\br)=-\q_j(-\br),\quad \q_{jk}(\br,\bs)=\q_{jk}(-\br,-\bs ) \label{homogeneity_inversion}\end{aligned}\ ] ] we can substitute into through to get & \bigg(\frac{\partial}{\partial r_k}+\frac{\partial}{\partial s_k}\bigg)\w_{kjl}(\br,\bs)=0,\quad \frac{\partial}{\partial r_j}\w_{kjl}(\br,\bs)=0,\quad \frac{\partial}{\partial s_l}\w_{kjl}(\br,\bs)=0 , \notag\\[4pt ] & \bigg(\frac{\partial}{\partial r_i}+\frac{\partial}{\partial s_i}+\frac{\partial}{\partial s'_i}\bigg)\w_{ijkl}(\br,\bs,\bs')=0,\quad \frac{\partial}{\partial r_j}\w_{ijkl}(\br,\bs,\bs')=0,\quad \frac{\partial}{\partial s_k}\w_{ijkl}(\br,\bs,\bs')=0 , \notag\\[4pt ] & \frac{\partial}{\partial s'_l}\w_{ijkl}(\br,\bs,\bs')=0,\quad \frac{\partial}{\partial r_k}\q_{k}(\br)=0,\quad \frac{\partial}{\partial r_k}\q_{kl}(\br,\bs)=0,\quad \frac{\partial}{\partial s_l}\q_{kl}(\br,\bs)=0 \label{hst_divergencefreeinphysicalspace_2p_3p_rs}\end{aligned}\ ] ] = & \ , \frac{\partial } { \partial r_i}\q_j(\br ) -\frac{\partial } { \partial r_j}\q_i(-\br ) + 2 \frac{\partial^2}{\partial r_k\partial r_k}\w_{ij}(\br ) \label{hst_clminphysicalspace_2p_r}\end{aligned}\ ] ] & + \delta_{k1 } \w_{ij2}(\br,\bs ) -\bigg(\frac{\partial } { \partial r_l}+\frac{\partial } { \partial s_l}\bigg ) \w_{iljk}(\mathbf{0},\br,\bs ) + \frac{\partial } { \partial r_l } \w_{jlik}(\mathbf{0},-\br,\bs-\br ) \notag\\[4pt ] & + \frac{\partial } { \partial s_l } \w_{klij}(\mathbf{0},-\bs,\br-\bs ) = \frac{\partial } { \partial r_i } \q_{jk}(\br,\bs ) + \frac{\partial } { \partial s_i } \q_{jk}(\br,\bs ) -\frac{\partial } { \partial r_j } \q_{ik}(-\br,\bs-\br ) \notag\\[4pt ] & -\frac{\partial } { \partial s_k } \q_{ij}(-\bs,\br-\bs ) + 2 \bigg ( \frac{\partial^2 } { \partial r_l\partial r_l } + \frac{\partial^2 } { \partial s_l\partial s_l } + \frac{\partial^2}{\partial r_l \partial s_l } \bigg ) \w_{ijk}(\br,\bs ) \label{hst_clminphysicalspace_3p_rs}\end{aligned}\ ] ] & -\bigg(\frac{\partial}{\partial r_m}+\frac{\partial}{\partial s_m}\bigg)\bigg(\frac{\partial}{\partial r_l}+\frac{\partial}{\partial s_l}\bigg ) \w_{lmjk}(\mathbf{0},\br,\bs ) \label{hst_pressureinphysicalspace_3p_rs}\end{aligned}\ ] ] and it is convenient to formulate the mathematical problem with the help of fourier transforms in , , , . 
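as a purely numerical aside on the role of the homogeneity assumed above , the following python sketch illustrates the standard fact that for a statistically homogeneous field the two - point correlation depends only on the separation and can be obtained from the fourier modes of the field . it is a generic illustration under simplifying assumptions ( a one - dimensional periodic synthetic field generated from a placeholder spectrum ) , not an implementation of the correlation equations of this section .

import numpy as np

rng = np.random.default_rng(0)
n, length = 1024, 2.0 * np.pi
k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)   # angular wave numbers

# assumed model spectrum (placeholder power law), set to zero at k = 0
spectrum = np.zeros(n)
spectrum[k != 0] = np.abs(k[k != 0]) ** -2.0

# synthesize one realization of a homogeneous periodic field with random phases
phases = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n))
w = np.real(np.fft.ifft(np.sqrt(spectrum) * phases)) * n

# two-point correlation from the fourier modes (wiener-khinchin relation):
# for a homogeneous field it depends only on the separation r
corr = np.real(np.fft.ifft(np.abs(np.fft.fft(w)) ** 2)) / n
print("correlation at separations of 0, 1, 2 grid steps:", corr[:3])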
with this adoption of an infinite domain of flow ,we need to modify our treatment presented in part i accordingly , as to be mentioned in the appropriate places below .we adopt the fourier transforms of \ , d\bk\ , d\bl , \notag\\[4pt ] & \w_{ijkl}(\br,\bs,\bs')=\int_{\mathbb{r}^2\times\mathbb{r}^2\times\mathbb{r}^2 } \tw_{ijkl}(\bk,\bl,\bm)\ , \exp\s\l[\imaginary\ , ( \bk\s\cdot\s\br+\bl\s\cdot\s\bs+\bm\s\cdot\s\bs')\r]\ , d\bk\ , d\bl\ , d\bm , \notag\\[4pt ] & \q(\br)=\int_{\mathbb{r}^2 } \tq(\bk)\ , \exp(\imaginary\ , \bk\s\cdot\s\br)\ , d\bk , \quad \qj(\br)=\int_{\mathbb{r}^2 } \tqj(\bk)\ , \exp(\imaginary\ , \bk\s\cdot\s\br)\ , d\bk , \notag\\[4pt ] & \q_{jk}(\br,\bs)=\int_{\mathbb{r}^2\times\mathbb{r}^2 } \tq_{jk}(\bk,\bl)\ , \exp\s\l[\imaginary\ , ( \bk\s\cdot\s\br+\bl\s\cdot\s\bs)\r]\ , d\bk \,d\bl \label{fouriertransform}\end{aligned}\ ] ] that the one - point and multi - point correlations in the physical space are real requires that & \tq^*(\bk)=\tq(-\bk ) , \quad \tq^*_j(\bk)=\tq_j(-\bk ) , \quad \tq^*_{jk}(\bk,\bl)=\tq_{jk}(-\bk,-\bl ) \label{realcorrelations_inps}\end{aligned}\ ] ] where the superscript denotes the complex conjugate operation . combining , , and , we get & \tw_{ijk}(\bk,\bl)=\tw_{ikj}(\bl,\bk)=\tw_{jik}(-\bk-\bl,\bl)=\tw_{kij}(-\bk-\bl,\bk)=-\tw_{ijk}(-\bk,-\bl)=-\tw^*_{ijk}(\bk,\bl ) , \notag\\[6pt ] & \tw_{ijkl}(\bk,\bl,\bm)=\tw_{ijlk}(\bk,\bm,\bl)=\tw_{ilkj}(\bm,\bl,\bk)=\tw_{ikjl}(\bl,\bk,\bm ) = \tw_{jikl}(-\bk-\bl-\bm,\bl,\bm ) \notag\\[3pt ] & = \tw_{kijl}(-\bk-\bl-\bm,\bk,\bm ) = \tw_{lijk}(-\bk-\bl-\bm,\bk,\bl ) = \tw_{ijkl}(-\bk,-\bl,-\bm)=\tw^*_{ijkl}(\bk,\bl,\bm ) , \notag\\[6pt ] & \tq(\bk)=\tq(-\bk)=\tq^*(\bk ) , \quad \tq_j(\bk)=-\tq_j(-\bk)=-\tq^*_j(\bk ) , \notag\\[6pt ] & \tq_{jk}(\bk,\bl)=\tq_{jk}(-\bk,-\bl)=\tq^*_{jk}(\bk,\bl)=\tq_{kj}(\bl,\bk ) \label{homogeneity_symmetry_inversion_fs}\end{aligned}\ ] ] it then follows that , , and are real and and are purely imaginary , i.e. 
, & \tq_j(\bk)=\imaginary\,\,\tq^{(i)}_j(\bk),\quad \tq^{(i)}_j(-\bk)=-\tq^{(i)}_j(\bk ) \label{wiwjwk_purelyimaginary_fs}\end{aligned}\ ] ] we now transform through in the physical space to their corresponding relations in the wave number space of and so on , & ( k_i+l_i+m_i)\,\tw_{ijkl}(\bk,\bl,\bm)=0,\quad k_j\,\tw_{ijkl}(\bk,\bl,\bm)=0,\quad l_k\,\tw_{ijkl}(\bk,\bl,\bm)=0 , \notag\\[6pt ] & m_l\,\tw_{ijkl}(\bk,\bl,\bm)=0,\quad k_k\,\tq^{(i)}_{k}(\bk)=0,\quad k_k\,\tq_{kl}(\bk,\bl)=0,\quad l_l\,\tq_{kl}(\bk,\bl)=0 \label{hst_divergencefreeinphysicalspace_qandw_fs}\end{aligned}\ ] ] = & \ , -k_i\,\tqj^{(i)}(\bk ) + k_j\,\tqi^{(i)}(-\bk ) -k_k \int_{\mathbb{r}^2 } \big(\twimag_{ijk}(\bk,\bl)-\twimag_{jik}(-\bk,\bl)\big)\ , d\bl \label{hst_clminphysicalspace_ww_fs}\end{aligned}\ ] ] and & + \delta_{i1 } \twimag_{2jk}(\bk,\bl ) + \delta_{j1 } \twimag_{i2k}(\bk,\bl ) + \delta_{k1}\twimag_{ij2}(\bk,\bl ) -(k_l+l_l)\int_{\mathbb{r}^2}\tw_{iljk}(\bm,\bk,\bl)\,d\bm \notag\\ & + k_l\int_{\mathbb{r}^2}\tw_{jlik}(\bm,-\bk-\bl,\bl)\,d\bm + l_l\int_{\mathbb{r}^2}\tw_{klij}(\bm,-\bk-\bl,\bk)\,d\bm \notag\\[4pt ] = & \ , k_i\,\tq_{jk}(\bk,\bl ) + l_i\,\tq_{jk}(\bk,\bl ) -k_j\,\tq_{ik}(-\bk-\bl,\bl ) -l_k\,\tq_{ij}(-\bk-\bl,\bk ) \notag\\[4pt ] & -2\,\big(|\bk|^2+|\bl|^2+\bk\s\cdot\s\bl\big)\,\twimag_{ijk}(\bk,\bl ) \label{hst_clminphysicalspace_www_fs}\end{aligned}\ ] ] equation can be easily solved to obtain & \tw^{(i)}_{ijk}(\bk,\bl)=\bigg(\s-\frac{k_1+l_1}{k_2+l_2}\bigg)^{\s i-1 } \bigg(\s-\frac{k_1}{k_2}\bigg)^{\s j-1 } \bigg(\s-\frac{l_1}{l_2}\bigg)^{\s k-1}\,\toc(\bk,\bl ) \label{hst_divergencefreeinphysicalspac_wiwjwk_fs}\end{aligned}\ ] ] and & = \foc(-\bk-\bl-\bm,\bl,\bm ) , \notag\\[8pt ] & \tw_{ijkl}(\bk,\bl,\bm)=\bigg(\s-\frac{k_1+l_1+m_1}{k_2+l_2+m_2}\bigg)^{\s i-1 } \bigg(\s-\frac{k_1}{k_2}\bigg)^{\s j-1 } \bigg(\s-\frac{l_1}{l_2}\bigg)^{\s k-1 } \bigg(\s-\frac{m_1}{m_2}\bigg)^{\s l-1}\,\foc(\bk,\bl,\bm ) \label{hst_divergencefreeinphysicalspace_wiwjwkwl_fs}\end{aligned}\ ] ] that is , , and are , respectively , the primary components for the second , the third and the four order correlations .next , the consistency between and requires the existence of single equation of evolution for and the consistency between and also demands single equation of evolution for .both can be checked directly by the respective substitutions of into and into and so on ; straightforward but lengthy operations give \,\soc(\bk)\bigg\ } \notag\\[4pt ] = & \ , 2\,|\bk|^2 \,k_2\,\exp\s\big[2 h(0,\bk)\big]\s \int_{\mathbb{r}^2}\s \bigg [ \frac{l_1}{l_2 } + \frac{k_1+l_1}{k_2+l_2}\,\frac{k_1}{k_2}\,\frac{l_1}{l_2 } -\frac{k_1}{k_2 } - \frac{k_1+l_1}{k_2+l_2}\,\bigg(\frac{k_1}{k_2}\bigg)^2 \bigg]\ , \toc(\bk,\bl ) \, d\bl \label{hst_clminphysicalspace_w1w1_fs}\end{aligned}\ ] ] and \ , \toc(\bk,\bl)\bigg\ } \notag\\[4pt ] = & \ , \frac{|\bk|^2\,|\bl|^2\,|\bk+\bl|^2}{k_2\,l_2\,(k_2+l_2 ) } \exp\s\big[h(0,\bk)\s+\s h(0,\bl)\s+\s h(0,\bk+\bl)\big]\notag\\ & \hskip 3 mm \times\s\bigg [ ( k_1+l_1)\,\frac{(k_2+l_2)^2}{|\bk+\bl|^2}\,\foci(\bk,\bl ) + ( k_2+l_2)\,\bigg(1 -\frac{2(k_1+l_1)^2}{|\bk+\bl|^2 } \bigg)\,\focii(\bk,\bl ) \notag\\[4pt ] & \hskip 10 mm -k_1\,\frac{(k_2)^2}{|\bk|^2}\,\foci(-\bk-\bl,\bl ) -k_2\,\bigg(1-\frac{2(k_1)^2}{|\bk|^2}\bigg)\,\focii(-\bk-\bl,\bl ) \notag\\[4pt ] & \hskip 10 mm -l_1\,\frac{(l_2)^2}{|\bl|^2 } \,\foci(-\bk-\bl,\bk ) -l_2\,\bigg(1-\frac{2(l_1)^2}{|\bl|^2}\bigg ) \ , \focii(-\bk-\bl,\bk ) \bigg ] \label{hst_clminphysicalspace_w1w1w1_fs}\end{aligned}\ ] ] here , & \focii(\bk,\bl ) : 
= -\int_{\mathbb{r}^2}\frac{m_1}{m_2 } \ , \foc(\bm,\bk,\bl)\,d\bm \label{hst_divergencefreeinphysicalspace_1s2s_fs_asymp}\end{aligned}\ ] ] equations and are the two primary equations for and , respectively ; is to be determined .the equations above have the linear structures involving , and ; the non - linearity comes into play through the nonlinear constraints of inequality to be discussed below .it is straightforward to check that the symmetries of the second and third order correlations listed in and are guaranteed by the structures of , and the symmetries of in that are to be implemented .there are constraints of inequality for the second , third and fourth order correlations from various considerations .firstly , there are constraints of inequality for as discussed in part i which will , in turn , result in a set of inequality constraints for and , ( the summations are replaced with the corresponding integrations here due to the infinite domain of flow ) . 1 .the two - point correlations in the physical space are supposed to be finite at any finite instant , and the finiteness supposedly holds also for the corresponding correlations in the wave number space .that is , 2 .we take as non - negative , it guarantees the non - negativity of the energy spectrum distribution whose consequence or necessity will be demonstrated below .the constraint may also be justified if one starts from the fourier transform of and then applies the homogeneity to the resultant correlation of .we should mention that is the only constraint formulated directly in the wave number space , we will not enforce similar inequalities for and derived from the application of the cauchy - schwarz inequality to and , since the involvement of the dirac delta complicates the formulation .the above adoption of the homogeneity before the fourier transforms intends to avoid such complications .. the constraints of inequality from the positive semi - definiteness of the single - point correlations , and are satisfied automatically under and .for instance , in the case of we have , with the help of the cauchy - schwarz inequality , & = \int_{\mathbb{r}^2 } \big(|\bk|\,\sqrt{\soc(\bk)}\big)\,|\bk|\,\big|\frac{k_1}{k_2}\big|\,\sqrt{\soc(\bk)}\ , d\bk \leq \sqrt{\int_{\mathbb{r}^2 } |\bk|^2\,\soc(\bk)\ , d\bk\,\ , \int_{\mathbb{r}^2 } |\bk|^2\,\big|\frac{k_1}{k_2}\big|^2\,\soc(\bk)\ , d\bk } \notag\\[4pt ] & = \sqrt{\int_{\mathbb{r}^2 } |\bk|^2\,\tw_{11}(\bk)\ , d\bk}\,\ , \sqrt{\int_{\mathbb{r}^2 } |\bk|^2\,\tw_{22}(\bk)\ , d\bk}\end{aligned}\ ] ] 4 .we apply the cauchy - schwarz inequality to the two - point correlations of to obtain a set of constraints of inequality for , and these constraints are also satisfied automatically under . for example , in the case of we have & = \int_{\mathbb{r}^2}\sqrt{\soc(\bk)}\,\,\,\big|\frac{k_1}{k_2}\big|\,\sqrt{\soc(\bk ) } \ , d\bk \leq \sqrt{\int_{\mathbb{r}^2 } \soc(\bk ) \ , d\bk\ , \int_{\mathbb{r}^2}\big|\frac{k_1}{k_2}\big|^2\,\soc(\bk ) \, d\bk } \notag\\[4pt ] & = \sqrt{\int_{\mathbb{r}^2 } \tw_{11}(\bk ) \ ,d\bk}\ , \sqrt{\int_{\mathbb{r}^2}\tw_{22}(\bk ) \ , d\bk}\end{aligned}\ ] ] with the help of , , , and the cauchy - schwarz inequality to the functions in the wave number space .next , we consider the multi - point correlations in the physical space involving the higher orders . 1 .it is expected that which indicates the finiteness of the corresponding correlations at any finite time in the wave number space .2 . 
the expected and for all and and , and requires that hereafter , the summation rule is suspended for underlined subscripts , following the convention .more such inequalities can be formulated for different combinations of partial derivatives of various orders .3 . we can obtain constraints of inequality among , and by applying the cauchy - schwarz inequality to the correlations of as well as their spatial derivativesconsider .the cauchy - schwarz inequality requires that & \quad \overline{\w_{\underlinej}(\by ) \w_{\underlinej}(\by)}\,\,\overline{\w_{\underlinei}(\bx ) \w_{\underlinei}(\bx ) \w_{\underlinek}(\bz ) \w_{\underlinek}(\bz ) } , \notag\\[4pt ] & \quad \overline{\w_{\underlinek}(\bz ) \w_{\underlinek}(\bz)}\,\,\overline{\w_{\underlinei}(\bx ) \w_{\underlinei}(\bx ) \w_{\underlinej}(\by ) \w_{\underlinej}(\by ) } \big)\end{aligned}\ ] ] that is , & \quad \w_{\underlinek\underlinek}(\mathbf{0})\,\,\w_{\underlinei\underlinei\underlinej\underlinej}(\mathbf{0},\br,\br ) \big ) , \quad i\leq j\leq k \label{cs_inequality_ps_01}\end{aligned}\ ] ] 2 .the application to results in & \quad \w_{\underlinei\underlinei\underlinek\underlinek}(\mathbf{0},\bs,\bs)\,\,\w_{\underlinej\underlinej\underlinel\underlinel}(\mathbf{0},\bs'-\br,\bs'-\br ) , \notag\\[4pt ] & \quad \w_{\underlinei\underlinei\underlinel\underlinel}(\mathbf{0},\bs',\bs')\,\,\w_{\underlinej\underlinej\underlinek\underlinek}(\mathbf{0},\bs-\br,\bs-\br ) \big ) , \quad i\leq j\leq k\leq l \label{cs_inequality_wwww_01}\end{aligned}\ ] ] 3 . leads to 4 . and give , respectively , 4 .it is interesting to evaluate the average deviation of from through or this inequality has certain similarity to the quasi - normal approximation , and it has a significant implication to the asymptotic state solutions to be discussed .similar inequalities involving and can also be formulated , such as & \overline{w_{\underlinei},_{\underlinek}(\bx)w_{\underlinei},_{\underlinek}(\bx ) w_{\underlinej},_{\underlinel}(\by ) w_{\underlinej},_{\underlinel}(\by ) } -\big(\overline{w_{i},_k(\bx ) w_{j},_l(\by)}\big)^2\geq 0 \label{wiwj , k_deviation}\end{aligned}\ ] ] and so on . in parti , we have restricted our treatment to the case of bounded flow domains so as to avoid the complication of a functional formulation of probability density .therefore , we need to modify the objective function for the homogeneous shear turbulence in the unbounded flow domain of . 
we have established in part i the proportional relationship between and the total fluctuation kinetic energy possessed in a turbulent flow , and consequently , we will redefine here the objective as the fluctuation energy per unit area or equivalently it is preferable to employ as the alternative objective to be maximized which has a mathematically simple linear structure and a physically clear meaning , compared with the other invariants of the covariance matrix .we need to examine how the alternative affects the uniqueness of solutions and other issues .it is clear that the mathematical problem of through , together with through , is an optimal control problem of an infinite dimensional system governed by two integro - partial differential equations with and as the state variables and as the control variable ( , ) .this link implies that we should solve the problem with the help of the relevant tools from optimal control theory and develop further analysis if required .equations and are of first order and linear forms , which can be solved formally with the help of the method of characteristics and the separation of variables under appropriate initial conditions .we explore the properties of the equations , without enforcing the maximization of objective and the constraints of inequality listed above . under rather general initial conditions , we can find the formal solutions of and with the aid of the method of characteristics , which are presented below . \,\soc_0(\bk^{\prime\prime } ) \notag\\ & + \frac{2(k_2)^2}{|\bk|^4 } \int_0^t dt ' \,|\bk^{\prime}|^2 \,\exp\s\big[2 \big(h\big(0,\bk^{\prime}\big)- h(0,\bk)\big)\big]\s \int_{\mathbb{r}^2}\s d\bl\ , |\bl|^2 \,(k_1\,l_2-k'_2\,l_1)\ , \frac{\toc\big(t',\bk^{\prime},\bl\big)}{k'_2\,l_2\,(k'_2+l_2 ) } \label{hst_clminphysicalspace_w1w1_fs_solution}\end{aligned}\ ] ] and &\hskip 4 mm \times\s \exp\s\big[h(0,\bk^{\prime\prime})-h(0,\bk)+ h(0,\bl^{\prime\prime})- h(0,\bl ) \notag\\[2pt ] & \hskip 18 mm + h(0,\bk''\s+\s\bl'')-h(0,\bk\s+\s\bl ) \big ] \toc_0(\bk'',\bl '' ) \notag\\[2pt ] & + \frac{k_2\,l_2\,(k_2+l_2)}{|\bk|^2\,|\bl|^2\,|\bk+\bl|^2 } \notag\\[2pt ] & \hskip 4mm\times\s \int_0^{t } dt ' \ , \frac{|\bk^{\prime}|^2\,|\bl^{\prime}|^2\,|\bk^{\prime}+\bl^{\prime}|^2}{k^{\prime}_2\,l^{\prime}_2\,(k^{\prime}_2+l^{\prime}_2 ) } \exp\s\big[h(0,\bk^{\prime})-h(0,\bk)+ h(0,\bl^{\prime})- h(0,\bl ) \notag\\[2pt ] & \hskip 60 mm + h(0,\bk^{\prime}+\bl^{\prime})- h(0,\bk+\bl)\big ] \notag\\ & \hskip 15 mm \times\s\bigg [ ( k_1+l_1)\,\frac{(k^{\prime}_2+l^{\prime}_2)^2}{|\bk^{\prime}+\bl^{\prime}|^2}\,\foci(t',\bk^{\prime},\bl^{\prime } ) + ( k^{\prime}_2+l^{\prime}_2 ) \bigg(1 -\frac{2(k_1+l_1)^2}{|\bk^{\prime}+\bl^{\prime}|^2 } \bigg)\,\focii(t',\bk^{\prime},\bl^{\prime } ) \notag\\ & \hskip 22 mm -k_1\,\frac{(k_2^{\prime})^2}{|\bk^{\prime}|^2}\,\foci(t',-\bk^{\prime}-\bl^{\prime},\bl^{\prime } ) -k_2^{\prime}\,\bigg(1-\frac{2(k_1)^2}{|\bk^{\prime}|^2}\bigg)\,\focii(t',-\bk^{\prime}-\bl^{\prime},\bl^{\prime } ) \notag\\ & \hskip 22 mm -l_1\,\frac{(l_2^{\prime})^2}{|\bl^{\prime}|^2 } \,\foci(t',-\bk^{\prime}-\bl^{\prime},\bk^{\prime } ) -l^{\prime}_2\,\bigg(1-\frac{2(l_1)^2}{|\bl^{\prime}|^2}\bigg ) \focii(t',-\bk^{\prime}-\bl^{\prime},\bk^{\prime } ) \bigg ] \label{hst_clminphysicalspace_w1w1w1_fs_solution}\end{aligned}\ ] ] here , and are , respectively , the initial conditions of and , and & \bl'=\l(l_1,l_2+l_1(t - t')\r)\end{aligned}\ ] ] , \notag\\ h(0,\bk')-h(0,\bk ) = & -(t - t ' ) 
\bigg[(k_1)^2+\frac{1}{6}\,\big(\big(k_2+k^{\prime}_2\big)^2+(k_2)^2+\big(k^{\prime}_2\big)^2 \big ) \bigg ] , \quad \text{etc.}\end{aligned}\ ] ] in the derivation of , we have used which can be verified directly on the basis of from .one prominent feature of the formal solutions and is the presence of the mixed modes of time and wave numbers such as , , and , which characterize the turbulent energy transfer among various wave numbers as time proceeds , as to be demonstrated below .there is a singularity at contained in ] and of .we may understand their consequences in and through the limit of .we can approach in from different directions . to simplify the analysis , we focus on the limits of we set first and in to obtain and we then have alternatively , under a fixed , taking in gives \,\soc_0((0,k_2 ) ) \notag\\ & -2\,k_2 \int_0^t dt ' \,\exp\s\big[-2\,(k_2)^2\,(t - t')\big]\s \int_{\mathbb{r}^2}\s d\bl\,|\bl|^2\,l_1\ , \frac{\toc\big(t',(0,k_2),\bl\big)}{k_2\,l_2\,(k_2+l_2 ) } \label{hst_clminphysicalspace_w1w1_fs_solution_k1=0}\end{aligned}\ ] ] consequently , due to the expectantly bounded and integral in at any finite time , ( see also below ) .it follows from the equality of the two limits that similarly , we consider the case of .we have from , under fixed , and \toc_0(\mathbf{0},\bl^{\prime\prime})\end{aligned}\ ] ] these two limits should be the same , and thus , we have we have some observations on the restrictions and effects of the initial conditions and as follows . 1 .the related term in contains a possible singularity at under , or at , which needs to be removed by the distribution of .similarly , the related term in contains possible singularities at , or under , or , at certain s , which need to be removed by the adequate distribution of .therefore , we impose the constraints that or under the expected invariance of time translation .otherwise , say , the limits of did not exist at some , we could then take as an initial instant and infer the validity of at from the application of to the new setting , a contradiction .+ the constraints above suggest the transformations of & \foc(\bk,\bl,\bm)=k_2\,l_2\,m_2\,(k_2+l_2+m_2)\,\dfoc(\bk,\bl,\bm ) \label{singularityremovaltransform}\end{aligned}\ ] ] with & \dfoc(\bk,\bl,\bm)=\dfoc(-\bk,-\bl,-\bm ) = \dfoc(\bk,\bm,\bl)=\dfoc(\bm,\bl,\bk)=\dfoc(\bl,\bk,\bm ) \notag\\[4pt ] & = \dfoc(-\bk-\bl-\bm,\bl,\bm ) = \dfoc(-\bk-\bl-\bm,\bk,\bm ) = \dfoc(-\bk-\bl-\bm,\bk,\bl ) \label{hst_divergencefreeinphysicalspace_wiwjwkwl_fs_transf}\end{aligned}\ ] ] following from through .these transformations are compatible with in the limit of and the forms of through , and they also make and satisfied automatically . + if we substitute into and and we require that we get the constraints of 2 . under fixed , the first term on the right - hand side of tends to \,\soc_0(\bk'')\quad \text{at large}\ t \label{hst_clminphysicalspace_w1w1_fs_solution_larget}\end{aligned}\ ] ] the constraints of and imply that is bounded for all the wave numbers and is negligible at large .therefore , will have negligible effects on , , at large time .+ in the case of and , indicates that will have negligible effects on at large time . in the case of , says that .consequently , will have negligible effects on at large time .3 . 
under fixed and with , the first term on the right - hand side of has the asymptote of \toc_0(\bk^{\prime\prime},\bl^{\prime\prime})\ \\text{at large}\ t \label{hst_clminphysicalspace_w1w1_fs_solution_larget_02}\end{aligned}\ ] ] we expect that is bounded under supposedly bounded with as a consequence , we conclude that the effect of on will become negligible at large time .the effect of on is described by the term of \notag\\[4pt ] & \hskip 18 mm \times\s \int_{\mathbb{r}^2}\s d\bl\ , |\bl|^2 \,(k_1\,l_2-k'_2\,l_1)\ , \frac{\toc_0(\bk'',(l_1,l_2+l_1t'))}{k''_2\,(l_2+l_1t')\,(k''_2+l_2+l_1 t ' ) } \notag\\[4pt ] & \hskip 28 mm \times\s \frac{|\bk''|^2\,[(l_1)^2+(l_2+l_1t')^2]\,[(k_1+l_1)^2+(k''_2+l_2+l_1t')^2]}{|\bl|^2\,|\bk'+\bl|^2}\ , \notag\\[4pt]&\hskip 28 mm \times\s \exp\s\bigg [ -t ' \big((k_1)^2+(l_1)^2+(k_1+l_1)^2\big ) -\frac{t'}{3}\big ( \big(k^{\prime}_2\big)^2 + k^{\prime}_2\,k^{\prime\prime}_2 + \big(k^{\prime\prime}_2\big)^2 \big)\notag\\[4pt ] & \hskip 41 mm -\frac{t'}{3 } \big ( ( l_2)^2 + l_2\,\big(l_2+l_1\,t'\big ) \s+\s\big(l_2+l_1\,t'\big)^2 \big)\notag\\[4pt ] & \hskip 41 mm -\frac{t'}{3}\big(\s \big(k^{\prime}_2+l_2\big)^2 \s+\s \big(k^{\prime}_2+l_2\big)\big(k^{\prime\prime}_2+l_2+l_1\,t'\big ) \s+\s \big(k^{\prime\prime}_2+l_2+l_1\,t'\big)^2\big ) \bigg ] \label{insofw1w1w1onw1w1}\end{aligned}\ ] ] from and .the constraint of makes finite .furthermore , under expectantly bounded , is bounded and goes to zero rapidly in the limits of or .therefore , under , rapidly approaches zero at large . also , the exponential functions contained in the integrand of approaches zero at large under .consequently , under , is expected to be very small and has a negligible effect on at large .similarly , we can argue that , in the case of and , has a negligible effect on , which is also guaranteed by the adoption of below .the case of is trivial due to .the above conclusion of negligible effects is drawn based solely on the formal transient solutions for and without the enforcement of the constraints of inequality and the maximization of the objective function .therefore , it does not exclude the impacts of the initial distributions on and at large time via the constraints and the maximization which shape the optimal control starting at with , and .for example , the existence of asymptotic state solutions of various exponential time rates to be discussed may be viewed as the evidence bearing such impacts . the negligible effects discussed above are more relevant to the possibility that two different sets of initial conditions for , , may evolve into the same asymptotic solution of , , at great .the discussion above has used the implicit assumptions that , at large time , the -term in is much smaller than the integral term and the -term in much smaller than the other integral term .these assumptions seemingly hold if both the integral terms evolve at large time according to with being constant of any value , given the presence of in the two exponential functions in and .however , the complication caused by the dependence of the two exponential functions on the wave numbers needs to examined .some scenarios can occur , for example , the initial distribution terms may be greater than or have the same order of magnitude as the integral terms , such as under the condition of certain small turbulent fluctuations and so on . 
in this case , the solutions may be dominated or significantly modified by the initial distribution terms and decay in a rather complicated fashion as indicated by the initial condition related terms in and , in contrast to the constant exponential time rates of the asymptotic state solutions to be discussed .this point may also have certain relevance to the issue of stability analysis to be considered later .there are certain intrinsic equalities associated with which can be established as follows . to this end , we first introduce resorting to the symmetry of and the transformation of and which gives , we can show that \label{intrinsicequality_wholek1}\end{aligned}\ ] ] moreover , we have \label{intrinsicequality_halfk1}\end{aligned}\ ] ] which can be proved by using on the basis of the transformation of and and from , and then , next , we define and using arguments similar to the ones above , we can also show that \label{intrinsicequality_bkp}\end{aligned}\ ] ] the significance of , and may be understood by recasting in the form of \,\soc_0(\bk^{\prime\prime } ) \notag\\ & \s\s\s + \frac{2(k_2)^2}{|\bk|^4 } \int_0^t dt ' \,|\bk^{\prime}|^2\,\exp\s\big[2 \big(h\big(0,\bk^{\prime}\big)- h(0,\bk)\big)\big ] \,l(\bk , t , t')\end{aligned}\ ] ] the non - negativity of ] is a solution of , is a solution too , which can be verified directly . therefore , is a closed loop in the polar coordinate system of with ] . fig .[ ngd ] indicates the existence of the minimum of the loop , denoted as , which is negative and can be obtained by taking with to be solved from one can verify that .the specific on the loop as a function of can be solved from ( [ k1min] under ; there are two distinct solutions , denoted by and , respectively , a special case of which is illustrated in fig .[ mcurve ] .+ -3 mm + here , represents the lower branch and the upper branch of the loop displayed in fig .[ ngd ] , , and = 0 , & k'_2=\mpeak , \ , \mvalley ; \\ [ 4pt ] > 0 , & k'_2\in ( -\infty,\,\mpeak)\cup(\mvalley,\,+\infty ) \end{array } \r .\ ] ] and are related through \end{aligned}\ ] ] 5 .recall that there exist such that constraint requires that the two relations above and the behavior of illustrated in figs .[ ngd ] and [ mcurve ] imply that and should lie preferably in a small neighborhood of when is predominantly negative so as to satisfy the constraint of .this observation offers us a ground to estimate the support of , , as follows .+ as illustrated by the specific curve of fig .[ mcurve ] , there is a unique such that and , i.e. , \label{lowerboundfornkpositive}\end{aligned}\ ] ] we may take this as a lower bound for the set under fixed , considering , and the behavior of .furthermore , there is a unique with or <1 \label{upperboundfornknegative}\end{aligned}\ ] ] this may be treated as a upper bound for the set , .+ with the help of and , we may have an estimate for the support of where } \big\{\{k_1\}\s\times\s\big[\mvalley',\,\mpeak'\big]\big\},\quad \maxsupportnbkppos:=\big\{-\bk':\ \bk'\in \maxsupportnbkpneg\big\ } \label{possiblesupportofnkp_subdomains}\end{aligned}\ ] ] the boundaries of with , are , respectively , sketched in fig .[ nsupportmax ] .+ -3 mm + the above support estimate helps us to fix numerically if is taken as the control variable under .6 . with the above estimate of , ( [ n_controlvariable] met automatically .we notice that this estimate is obtained by focusing on the variations of and along the axis of under fixed . 
to be comprehensive in the support estimate, we may also take into account the variation of along the direction , considering that is relatively small in the region where is small .for example , the predominantly negative values of should be achieved in an appropriate range of and in a neighborhood of ; it then follows that ( [ n_controlvariable] may be met even by a lower bound of beyond with being positive in the associated region of expansion , possibly resulting in a more robust numerical computation and larger turbulent energy .this expansion beyond is compatible with and .we may also enlarge the estimate of by taking , say ] apparently reflect the meaning of .for instance , the case of lower has greater supports for and which contain greater subdomains of higher wave numbers , which in turn tend to dissipate more of the turbulent energy according to the term of in and result in the slower growth rate of the turbulent energy .this feature may also imply the complicity of and other non - asymptotic decaying cases of the homogeneous shear turbulence .we notice that the above estimates for the supports of , and are expected to hold adequately for the fourth - order model too , since we have obtained them without resorting to any approximations to . as part of the solution, we need to determine the value of the upper bound for the exponential rate of growth within the reduced model .it may be inferred from the establishment of that the satisfaction of underlies the existence of such a bound . to explore the issue in detail, we consider the case of under .we discuss first the simpler equivalent constraints of under the equality condition of , \ , \lasy(k^0_1,k'_2 ) \geq 0 , \notag\\[4pt ] & \int_{\mathbb{r}}\s dk'_2\,\lasy(k^0_1,k'_2)= 0,\ \\lasy(k^0_1,k'_2)\not=0,\ \ k^0_1\in(\konemin,0),\ \\sigma\in[0,0.5 ) \label{nonnegativityofsoc11_reduced_forsigmamax}\end{aligned}\ ] ] within the reduced model in which is the control variable , we can construct mathematically a whose positive and negative values , respectively , distribute only in the peak and valley regions of .for the sake of simple illustration , we take a discontinuous distribution of -l_v(k^0_1,\sigma ) , & k'_2 \in \big(\mvalley(k^0_1,\sigma)-\delta_p(k^0_1,\sigma ) , \ \mvalley(k^0_1,\sigma)+\delta_p(k^0_1,\sigma)\big);\\[4pt ] 0 , & \text{others } , \end{array } \r .\notag\\[6pt ] & \hskip 20 mm 0<l_v(k^0_1,\sigma)=l_p(k^0_1,\sigma)\leq 1,\quad 0<\delta_p(k^0_1,\sigma)<<|\mvalley(k^0_1,\sigma)| \label{lasyascv_lasyapproxdistribution}\end{aligned}\ ] ] it is then trivial to verify that ( [ nonnegativityofsoc11_reduced_forsigmamax] is satisfied automatically , due to , etc .this specific example has demonstrated that , for the reduced model with as the continuous control variable , we can construct mathematically a non - trivial feasible solution with consequently , we conclude that the issue of can not be resolved within the context of the reduced model with as the control variable ; we need to seek a solution possibly with as the control variable . 
in the reduced model with as the control variable , is a derived quantity defined by and the intrinsic equality of holds automatically .in contrast to the case above , there are more possible ways for to behave : i ) the predominant positive and negative values of may not lie , respectively , in the peak and valley regions of for some .ii ) it may take positive and negative values alternately , with the number of peaks and valleys more than that of .iii ) it may allow the occurrence of these possible mathematical behaviors introduce possible ways to violate ( [ nonnegativityofsoc11_reduced_forsigmamax] .for example , let us consider \ , \lasy(k^0_1,k'_2 ) \geq 0 , \notag\\[4pt ] & \int_{\mathbb{r}}\s dk'_2\,\lasy(k^0_1,k'_2 ) < 0,\ \ \text{for some } k^0_1\in(\konemin,0 ) \label{nonnegativityofsoc11_reduced_forsigmamax_b}\end{aligned}\ ] ] we may understand its consequence by adopting with and obtaining from ( [ nonnegativityofsoc11_reduced_forsigmamax_b] this inequality may be violated for all the in a scenario as follows : is close to so that it can not produce the sufficient height contrast between the peak and the valley , and the ratio on the left - hand side of is lower than . that is allowed in the reduced model implies that a transient solution may approach an asymptotic state of in the reduced model , which is impossible in the fourth - order model due to the constraint of .therefore , if a transient solution does not decay , the reduced model may predict that it may have a exponential growth rate of .that is , the reduced model may not be suitable for the simulation of transient solutions which do not decay . as mentioned above, we may treat as the control variable to determine through optimization .a direct search of an optimal solution of in a space of functions poses a challenge .a simpler strategy is to adopt a specific form for constructed with the help of certain function bases and symmetries ; the unknown parameters contained in the specific form will be determined through the objective maximization under the constraint of inequality .similar to the galerkin method in the calculus of variations , such a treatment transforms the optimal control problem into an optimization problem in a finite - dimensional vector space whose dimension is equal to the number of unknown parameters involved .there are a few possible ways to deal with .the first is to adopt simply as the control variable .the second is to incorporate the limits of , and and the support estimate of by taking the special transformation of where is the control variable and the result of sub - subsection [ subsubsec : sigmamax ] holds .one can also adopt different transformations to meet the limit constraints . considering that , to avoid the apparent singularity at in numerical simulations , the point of on the support boundarymay be relocated to with and small ( along with its neighborhood points on the boundary ) , we present the first possibility here for discussion , and the others can be worked out in a similar fashion .they are to be tested numerically for the sake of comparison . 
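A minimal sketch of the Galerkin-style reduction described above: the control is expanded over a small fixed basis, and the expansion coefficients are found by maximising an objective under pointwise inequality constraints and one integral equality. The basis, the toy objective and the particular constraints below are assumptions chosen only to exhibit the structure of the resulting finite-dimensional problem.

    import numpy as np
    from scipy.optimize import minimize

    x = np.linspace(-1.0, 1.0, 201)                      # collocation points
    basis = np.stack([np.ones_like(x), x, x**2, x**3])   # assumed basis functions

    def control(c):                                      # control built from coefficients c
        return c @ basis

    # toy linear objective: maximise the integral of the control against a kernel
    kernel = np.exp(-x**2)
    objective = lambda c: -(control(c) * kernel).sum() * (x[1] - x[0])

    # pointwise lower bound on the control plus a zero-mean equality constraint
    cons = [{"type": "ineq", "fun": lambda c: control(c) + 1.0},
            {"type": "eq",   "fun": lambda c: control(c).sum()}]

    res = minimize(objective, x0=np.zeros(4), constraints=cons, method="SLSQP")
    print(res.x, -res.fun)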
considering that expression for is restricted to , a triangle mesh over will be constructed with nodes and linear triangle elements whose collections are denoted , respectively , by there are a point matrix and a connectivity matrix associated with the mesh .the point matrix is of which stores in its -th column the coordinates of node , ; the connectivity matrix is of whose -th column contains the numbers of the three nodes in triangle , the three nodes ordered in a counterclockwise sense .the values of at the nodes of are denoted as the distribution of in can be approximated through the linear interpolation of {ij}\big)\,\shapefunction_i\big(\bk^{-};\trianglesnneg_j\big ) \label{l_valuedatkn}\end{aligned}\ ] ] here , is the characteristic function , and , , , , are the linear interpolation shape functions associated with triangle .the distribution of in can be found through of . for the sake of computational convenience below, we recast in the form of {ij}\big)\,\shapefunction_i\big(\bk^{-};\trianglesnneg_j\big ) \label{l_valuedatkn_ae}\end{aligned}\ ] ] here , stands for ` almost everywhere ' , since the equality may not hold when is in a common edge between two neighboring triangles or coincides with a node. this approximation will not have significant effects on the computations of , and the intrinsic equality , with adequate mesh distributions to be explained below . substituting into ,we obtain {ij}\big)\ , \int^{k_2}_{-\infty}\s dk'_2\,m(\bk;k'_2)\,\ , \shapefunction_i(\bk';\trianglesnneg_j)\ , \characteristicfunctiontrianglesnnegj(\bk ' ) \label{hst_clminphysicalspace_w1w1_fs_transf_asymp_sol_reduced_trianglemesh_l01}\end{aligned}\ ] ] here , we should point out that the a.e .property of might cause a potential problem in the integration with respect to , due to the possible double counting in the summation of and for located in a common edge between two neighboring triangles ; this double counting affects the validity of only if it contributes to the line integration . we can eliminate this problem by one of two ways : ( i ) to generate the triangle mesh in such that no common edge is parallel to the axis of ; ( ii ) to choose in such that it does not lie in any common edge parallel to the axis of .the latter can be easily implemented since we need to impose the constraint of non - negativity only at a finite number of points inside to be discussed .equation can be rewritten , through rearrangement and combination , as it is linear in with the coefficients as continuous functions of .now , becomes {ij}\big ) \int_{-\infty}^0\sdk_1\s \int_{\mathbb{r}}\s dk_2\ , \frac{1}{k_1\,|\bk|^2}\ , \int^{k_2}_{-\infty}\s dk'_2\,m(\bk;k'_2)\ , \shapefunction_i(\bk';\trianglesnneg_j)\ , \characteristicfunctiontrianglesnnegj(\bk ' ) \label{objectivefunction_01_asymp_reduced_trianglemesh_l}\end{aligned}\ ] ] the approximate nature of should not affect the validity of since the area measure of all the edges is zero and the values of {ij}\big)$ ] are supposedly finite .the equation can be recast as this objective function is linear in . following from , it is to be maximized under the constraints of and , whose consequences are as follows .firstly , combining and gives the coefficients are functions of and the above inequality needs to hold in the estimated support of , given by and sketched in fig .[ betasupportmax ] . due to the equivalence between ( [ nonnegativityofsoc11_reduced_forsigmamax] and , it is sufficient to enforce the inequality in of . 
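A minimal sketch of the piecewise-linear interpolation just described: the point matrix stores node coordinates column-wise, the connectivity matrix stores the three (counterclockwise) node numbers of each triangle, and values inside a triangle are obtained from the linear shape functions, i.e. the barycentric coordinates. The tiny two-triangle mesh and the nodal values are illustrative assumptions.

    import numpy as np

    # point matrix (2 x number-of-nodes) and connectivity matrix (3 x number-of-triangles)
    P = np.array([[0.0, 1.0, 0.0, 1.0],
                  [0.0, 0.0, 1.0, 1.0]])
    T = np.array([[0, 1],
                  [1, 3],
                  [2, 2]])                    # counterclockwise node ordering per column
    values = np.array([0.0, 1.0, 2.0, 3.0])   # nodal values of the interpolated field

    def interpolate(point, P, T, values):
        """Evaluate the piecewise-linear interpolant at `point`, or return None
        if the point lies outside every triangle (the characteristic function)."""
        for tri in T.T:
            a, b, c = P[:, tri[0]], P[:, tri[1]], P[:, tri[2]]
            M = np.column_stack([b - a, c - a])
            lam1, lam2 = np.linalg.solve(M, np.asarray(point, float) - a)
            lam = np.array([1.0 - lam1 - lam2, lam1, lam2])   # barycentric coordinates
            if np.all(lam >= -1e-12):                         # point is inside this triangle
                return lam @ values[tri]                      # linear shape functions
        return None

    print(interpolate((0.25, 0.25), P, T, values))   # 0.75
    print(interpolate((0.75, 0.75), P, T, values))   # 2.25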
considering the above - mentioned requirement of not - located at any common edge of the triangular meshes parallel to the axis of , we can select adequately a finite set of collocation points inside the support , on which we apply so as to approximate it with a finite number of linear constraints of secondly , the intrinsic equality of ( [ n_controlvariable] requires that {ij}\big ) \int_{-\infty}^0\s dk_1\ , \int_{\mathbb{r}}\s dk_2\ , \shapefunction_i\big(\bk;\trianglesnneg_j\big ) \ , \characteristicfunctiontrianglesnnegj(\bk ) = 0\end{aligned}\ ] ] or the above treatment of as the control variable has the advantage of computations only in the wave number space .however , it does not provide any detailed information about the third order correlations .also , it can not resolve the issue of .we now study the case that ( or effectively its anti - symmetric part ) is used as the control variable .a comprehensive distribution of should be determined with the fourth order model . motivated by the structure of , the definition of and the limiting constraints of , and , and the symmetry of , we present , amongst several choices , a partition form of ^ 2 \ ,\notag\\[4pt]&\hskip 10mm\times\s \big [ \gasy(\bk',\bl)+\gasy(\bl,-\bk'-\bl)+\gasy(-\bk'-\bl,\bk ' ) \big ] \label{toc_generalsymmetries_d}\end{aligned}\ ] ] here , is the characteristic function and supposedly has the symmetry of the support of is taken the same as that of , the symmetries of are less stringent than those of .in fact , without a partition as such or similar , it is difficult in numerical simulation to satisfy of .the inclusion of the characteristic function in is to guarantee that the resultant has the same support as the estimated . for numerical simulation of , we adopt a quasi - triangle mesh , i.e. , a tensor - product of two triangle meshes over in the fashion detailed below : first , we resort to the triangle mesh generated in over .next , due to , we have a corresponding triangle mesh over with & \maxsupportnbkppostriangles=\big\{\trianglesnpos_j:\ \trianglesnpos_j=\trianglesnneg_j,\ , j=1,2,\cdots,\trianglenumbertotalsnneg\big\ } \label{nodestriangles_pos}\end{aligned}\ ] ] the corresponding point matrix and connectivity matrix are given by which reflects that it then follows from above that is meshed by & \pointmatrixsn=\big\{\pointmatrixsnneg,\ , \pointmatrixsnpos\big\},\qquad \connectivitymatrixsn=\big\{\connectivitymatrixsnneg,\ , \connectivitymatrixsnpos\big\}\end{aligned}\ ] ] we now adopt the tensor - product of the triangle meshes over .this treatment is motivated mainly by its simple mesh generation , its easy implementations of the symmetry properties of and the notion of turbulent energy cascade if necessary .the values of at the nodes of are denoted as we can take as the primary basis set , considering that and can be found through due to and . 
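Before turning to the discretisation of the higher-order control below, it is worth noting that either choice of unknowns leads, after collocation, to a linear programme: a linear objective in the nodal values, linear inequality constraints collected at the collocation points, one linear equality (the intrinsic zero-integral condition) and box bounds. The sketch below assembles and solves a toy instance; the random coefficient arrays are placeholders standing in for the assembled shape-function integrals.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    n_nodes, n_colloc = 40, 200

    c_obj = rng.normal(size=n_nodes)                   # objective coefficients (to be maximised)
    A_colloc = rng.normal(size=(n_colloc, n_nodes))    # one constraint row per collocation point
    A_eq = rng.normal(size=(1, n_nodes))               # the intrinsic equality, one row

    # linprog minimises, so maximise c_obj @ x by minimising -c_obj @ x;
    # non-negativity constraints "A_colloc @ x >= 0" become "-A_colloc @ x <= 0"
    res = linprog(-c_obj,
                  A_ub=-A_colloc, b_ub=np.zeros(n_colloc),
                  A_eq=A_eq, b_eq=np.zeros(1),
                  bounds=[(-1.0, 1.0)] * n_nodes)
    print(res.status, res.fun)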
next ,since is of integral form and the tensor - product of triangle meshes is adopted in , we resort to a quasi - bilinear interpolation to find the distribution of in , {ik};[\connectivitymatrixsnneg]_{jl}\big)\,\shapefunction_i\big(\bk^{-};\trianglesnneg_k\big)\,\,\shapefunction_j\big(\bl^{-};\trianglesnneg_l\big),\ \ \bk^{-}\in\trianglesnneg_k,\ \bl^{-}\in\trianglesnneg_l \label{g_valuedatknln}\end{aligned}\ ] ] {ik};[\connectivitymatrixsnpos]_{jl}\big)\,\shapefunction_i\big(\bk^{-};\trianglesnneg_k\big)\,\,\shapefunction_j\big(\bl^{+};\trianglesnpos_l\big),\ \ \bk^{-}\in\trianglesnneg_k,\ \bl^{+}\in\trianglesnpos_l \label{g_valuedatknlp}\end{aligned}\ ] ] and here , comes from .we now need to discuss how the full content of can be satisfied . 1 . the application of to the elements of , along with , yields which will be imposed explicitly .together with through and the shape function property , these constraints are also sufficient to guarantee 2 .we now test for the symmetry .equation indicates that is automatically met through construction . for the rest two cases ,we consider first .that and implies that and from the adopted mesh generation over . consequently , the last equality above gives as desired . in the case of ,we apply and to get therefore , the symmetry of is satisfied automatically if is enforced .the last of requires that which will be imposed explicitly . to help the computation of and the implementation of , we introduce a unified relation of {il};\l[\connectivitymatrixsnneg\r]_{jm}\r)\,\shapefunction_i\big(\bl^{-};\trianglesnneg_l\big)\,\shapefunction_j\big(\bm;\trianglesnneg_m\big)\,\characteristicfunctiontrianglesnnegl(\bl^{-})\,\characteristicfunctiontrianglesnnegm(\bm ) \notag\\[4pt ] & \hskip 20 mm + \gasy\s\l(\l[\connectivitymatrixsnneg\r]_{il};\l[\connectivitymatrixsnpos\r]_{jm}\r)\,\shapefunction_i\big(\bl^{-};\trianglesnneg_l\big)\,\shapefunction_j\big(\bm;\trianglesnpos_m\big)\,\characteristicfunctiontrianglesnnegl(\bl^{-})\,\characteristicfunctiontrianglesnposm(\bm ) \big ] \label{g_valuedatlnmpm_ae}\end{aligned}\ ] ] a remark like that of can be made here . 
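The coefficients assembled in the expressions that follow are integrals of shape-function products over individual triangles. A minimal sketch of evaluating such integrals, by mapping a fixed quadrature rule from the reference triangle onto each physical triangle, is given below; the three-point rule and the toy integrand are assumptions (the real integrands also involve the kernel factors written out in the next equations).

    import numpy as np

    # degree-2 exact rule on the reference triangle {(0,0),(1,0),(0,1)}
    ref_pts = np.array([[1/6, 1/6], [2/3, 1/6], [1/6, 2/3]])
    ref_wts = np.array([1/6, 1/6, 1/6])

    def integrate_over_triangle(f, a, b, c):
        a, b, c = (np.asarray(v, float) for v in (a, b, c))
        J = np.column_stack([b - a, c - a])      # affine map to the physical triangle
        detJ = abs(np.linalg.det(J))
        pts = a + ref_pts @ J.T                  # mapped quadrature points
        return detJ * np.sum(ref_wts * np.array([f(p) for p in pts]))

    # example: integrate f(k) = k1 * k2 over the triangle (0,0)-(1,0)-(0,1);
    # the exact value is 1/24
    print(integrate_over_triangle(lambda k: k[0] * k[1], (0, 0), (1, 0), (0, 1)))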
substituting and into and using and, we obtain {il};\l[\connectivitymatrixsnneg\r]_{jm};\bk\r)\,\gasy\s\l(\l[\connectivitymatrixsnneg\r]_{il};\l[\connectivitymatrixsnneg\r]_{jm}\r ) \notag\\[4pt ] & + \hat{a}\s\l(\l[\connectivitymatrixsnneg\r]_{il};\l[\connectivitymatrixsnpos\r]_{jm};\bk\r)\,\gasy\s\l(\l[\connectivitymatrixsnneg\r]_{il};\l[\connectivitymatrixsnpos\r]_{jm}\r ) \big ] \label{hst_clminphysicalspace_w1w1_fs_transf_asymp_sol_reduced_trianglemesh}\end{aligned}\ ] ] where {il};\l[\connectivitymatrixsnneg\r]_{jm};\bk\r ) \notag\\ = \ , & \int^{k_2}_{\mvalley'}\s d k'_2 \,m(\bk;k'_2)\ , \notag\\ & \hskip 10mm\times\s \bigg [ \int_{\trianglesnneg_l}\s d\bl\ , \big[|\bl|^2-|\bk'+\bl|^2\big]\,(k_1\,l_2-k'_2\,l_1)\ , \chi_{\maxsupportnbkpneg}(\bk'+\bl ) \notag\\ & \hskip 35mm\times\s [ k_1 l_1\,(k_1+l_1)]^2\ , \shapefunction_j\big(\bk';\trianglesnneg_m\big)\ , \shapefunction_i\big(\bl;\trianglesnneg_l\big)\ , \characteristicfunctiontrianglesnnegm(\bk')\ , \notag\\[4pt ] & \hskip 18 mm -\int_{\trianglesnneg_l } d\bl\ , |\bl|^2\,(k_1\,l_2-k'_2\,l_1)\ , \chi_{\maxsupportnbkpneg}(\bk')\ , \notag\\ & \hskip 35mm\times\s [ k_1 l_1\,(k_1-l_1)]^2\ , \shapefunction_i\big(\bl^{-};\trianglesnneg_l\big)\ , \shapefunction_j\big(\bk'-\bl;\trianglesnneg_m\big)\ , \characteristicfunctiontrianglesnnegm(\bk'-\bl ) \bigg ] \label{gsoc_solutioncoefficient_neg}\end{aligned}\ ] ] and {il};\l[\connectivitymatrixsnpos\r]_{jm};\bk\r ) \notag\\ = \ , & \int^{k_2}_{\mvalley'}\s d k'_2 \,m(\bk;k'_2)\ , \notag\\ & \hskip 10mm\times\s \bigg [ \int_{\trianglesnneg_l}\s d\bl\ , \big(|\bk'-\bl|^2-|\bl|^2\big)\,(k_1\,l_2-k'_2\,l_1)\ , \chi_{\maxsupportnbkp}(\bk'-\bl ) \notag\\ & \hskip 35mm\times\s [ k_1 l_1\,(k_1-l_1)]^2\ , \shapefunction_j\big(\bk';\trianglesnneg_m\big)\ , \shapefunction_i\big(\bl;\trianglesnneg_l\big)\ , \characteristicfunctiontrianglesnnegm(\bk')\ , \notag\\[4pt ] & \hskip 18 mm + \int_{\trianglesnneg_l } d\bl\ , |\bl|^2\,(k_1\,l_2-k'_2\,l_1)\ , \chi_{\maxsupportnbkpneg}(\bk')\ , \notag\\ & \hskip 35mm\times\s [ k_1 l_1\,(k_1+l_1)]^2\ , \shapefunction_i\big(\bl^{-};\trianglesnneg_l\big)\ , \shapefunction_j\big(\bk'+\bl;\trianglesnneg_m\big)\ , \characteristicfunctiontrianglesnnegm(\bk'+\bl ) \notag\\[4pt ] & \hskip 18 mm -\int_{\trianglesnneg_l } d\bl\ , |\bl|^2\,(k_1\,l_2-k'_2\,l_1)\ , \chi_{\maxsupportnbkpneg}(\bk')\ , \notag\\ & \hskip 35mm\times\s [ k_1 l_1\,(k_1-l_1)]^2\ , \shapefunction_i\big(\bl^{-};\trianglesnneg_l\big)\ , \shapefunction_j\big(\bl-\bk';\trianglesnneg_m\big)\ , \characteristicfunctiontrianglesnnegm(\bl-\bk ' ) \bigg ] \label{gsoc_solutioncoefficient_pos}\end{aligned}\ ] ] next , substitution of into gives {il};\l[\connectivitymatrixsnneg\r]_{jm}\r ) \gasy\s\l(\l[\connectivitymatrixsnneg\r]_{il};\l[\connectivitymatrixsnneg\r]_{jm}\r ) \notag\\[4pt ] & + \hat{c}\s\l(\l[\connectivitymatrixsnneg\r]_{il};\l[\connectivitymatrixsnpos\r]_{jm}\r ) \gasy\s\l(\l[\connectivitymatrixsnneg\r]_{il};\l[\connectivitymatrixsnpos\r]_{jm}\r ) \big ] \label{lp_objectivefunction_trianglemesh_a}\end{aligned}\ ] ] where {il};\l[\connectivitymatrixsnneg\r]_{jm}\r ) = \int_{\maxsupportbeta } d\bk\,\frac{1}{|k_1|\,|\bk|^2}\ , \hat{a}\s\l(\l[\connectivitymatrixsnneg\r]_{il};\l[\connectivitymatrixsnneg\r]_{jm};\bk\r)\,\end{aligned}\ ] ] and {il};\l[\connectivitymatrixsnpos\r]_{jm}\r ) = \int_{\maxsupportbeta } d\bk\,\frac{1}{|k_1|\,|\bk|^2}\ , \hat{a}\s\l(\l[\connectivitymatrixsnneg\r]_{il};\l[\connectivitymatrixsnpos\r]_{jm};\bk\r)\end{aligned}\ ] ] the objective function of can be recast , through 
rearrangement and combination , in the form of \label{lp_objectivefunction}\end{aligned}\ ] ] which is linear in and .this function is to be maximized following from. equation can also be recast in terms of and , \end{aligned}\ ] ] and then , equation results in the linear constraint of \geq 0 \label{lp_constraint_continuous}\end{aligned}\ ] ] the coefficients are functions of .we can apply the constraint to the collocation points of to obtain a finite number of linear constraints of \leq 0 , \ \\forall \m_1,\ \m_2 \label{lp_constraint_atcollocationpoints}\end{aligned}\ ] ] the support of implies that in addition , and require that & \gasy\s\l(\nodesnneg_i;\nodesnpos_i\r)=0,\qquad \forall i , j \in\{1,2,\cdots , \nodenumbertotalsnneg\},\ \ i\not = j \label{lp_constraint_fromsymmetry}\end{aligned}\ ] ] constraint is cast in the equivalent form of & -\gasy\s\l(\nodesnneg_i;\nodesnpos_j\r)\leq 1,\ \\forall i , j \in\{1,2,\cdots , \nodenumbertotalsnneg\ } \label{lp_constraint_lowerbounds}\end{aligned}\ ] ] 99 v. barbu , _ analysis and control of nonlinear infinite dimensional systems _ , academic press , new york , 1993 .s. boyd and l. vandenberghe , _ convex optimization _ , cambridge university press , new york , 2009 .p. s. bernard and c. g. speziale , _ bounded energy states in homogeneous turbulent shear flow - an alternative view_. asme journal of fluids engineering 114 ( 1992 ) 29 - 39 .f. a. de souza , v. d. nguyen and s. tavoularis , _ the structure of highly sheared turbulence_. journal of fluid mechanics 303 ( 1995 ) 155 - 167 .p. g. drazin and w. h. reid , _ hydrodynamic stability _ , cambridge university press , cambridge , 2004 .s. f. edwards and w. d. mccomb , _ statistical mechanics far from equilibrium _ , j. phys . a 2 ( 1969 ) 157 - 171 .j. c. isaza and l. r. collins , _ on the asymptotic behaviour of large - scale turbulence in homogeneous shear flow_. journal of fluid mechanics 637 ( 2009 ) 213 - 239 .j. c. isaza , z. warhaft and l. r. collins , _ experimental investigation of the large - scale velocity statistics in homogeneous turbulent shear flow_. physics of fluids 21 ( 2009 ) 065105 .d. d. joseph , _ stability of fluid motions i , ii _ , springer - verlag , new york , 1976 .i. lasiecka , _ mathematical control theory of coupled pdes _ ,siam , philadelphia , 2002 .m. j. lee , j. kim and p. moin , _ structure of turbulence at high shear rate_. journal of fluid mechanics 216 ( 1990 ) 561 - 583 .j. piquet , _ turbulent flows . models and physics _ , springer , berlin , 1999 .r. robert and j. sommeria , _ relaxation towards a statistical equilibrium state in two - dimensional perfect fluid dynamics_. physical review letters 69 ( 1992 ) 2776 - 2779 .j. j. rohr , e. c. itsweire , k. n. helland and c. w. van atta , _ an investigation of the growth of turbulence in a uniform - mean - shear flow_. journal of fluid mechanics 187 ( 1988 ) 1 - 33 .p. sagaut and c. cambon , _ homogeneous turbulence dynamics _ , cambridge university press , new york , 2008 .l. tao and m. ramakrishna , _ multi - scale turbulence modelling and maximum information principle .part 1_. _ arxiv:1009.1691v1 [ physics.flu-dyn]_ , 2010 . s. tavoularis , _ asymptotic laws for transversely homogeneous turbulence shear flows_. physics of fluids 28 ( 1985 ) 999 - 1001. s. tavoularis and u. karnik , _ further experiments on the evolution of turbulent stresses and scales in uniformly sheared turbulence_. journal of fluid mechanics 204 ( 1989 ) 457 - 478 .
we consider two - dimensional homogeneous shear turbulence within the context of optimal control , a multi - scale turbulence model containing the fluctuation velocity and pressure correlations up to the fourth order ; the model is formulated on the basis of the navier - stokes equations , reynolds average , the constraints of inequality from both physical and mathematical considerations , the turbulent energy density as the objective to be maximized , and the fourth order correlations as the control variables . without imposing the maximization and the constraints , the resultant equations of motion in the fourier wave number space are formally solved to obtain the transient state solutions , the asymptotic state solutions and the evolution of a transient toward an asymptotic under certain conditions . the asymptotic state solutions are characterized by the dimensionless exponential time rate of growth which has an upper bound of ; the asymptotic solutions can be obtained from a linear objective convex programming . for the asymptotic state solutions of the reduced model containing the correlations up to the third order , the optimal control problem reduces to linear programming with the primary component of the third order correlations or a related integral quantity as the control variable ; the supports of the second and third order correlations are estimated for the sake of numerical simulation ; the existence of feasible solutions is demonstrated when the related quantity is the control variable . the relevance of the formulation to flow stability analysis is suggested .
the notion of a noninvasive measurement a measurement that does not disturb the system being measured is undisputed in classical physics because one can assign a real physical value to every point in phase space at all times .even so , the situation becomes complicated if we introduce explicit detectors since these may disturb the system . in quantum physics , the notion of a noninvasive measurement is always problematic .one can not assign a value to an observable without discussing the measurement procedure .strong projective measurements ( and therefore the majority of general measurements ) are certainly invasive .a good candidate for a noninvasive measurement scheme is a _ weak measurement _ . in general , by reducing the coupling of the detector system to the system under measurement , the invasiveness is reduced at the price of an increased detector noise .this leads to paradoxes of unusually large values for single measurement results after a subsequent postselection , or a quasiprobability for the measured distribution after the detector noise has been removed .there is growing interest in such measurements . in this paper , we answer the question of when our intuitive criteria ( defined below ) of noninvasiveness and time symmetry of measurements are satisfied , for both classical and quantum cases .time - reversal symmetry of observables is a fundamental symmetry of physics , valid in classical physics and in general because it is a good symmetry of quantum electrodynamics in low - energy physics ( in high - energy physics combined with parity and charge conjugation ) .this symmetry is generally probed by the measurement of single , non - time - resolved measurements , such as the measurement of electric dipole moments of particles .however , time - reversal symmetry also constrains the results of time - resolved measurements with _multiple _ measurements . for such considerations, one must consider the invasiveness of the measurements themselves which will tend to break time - reversal symmetry .a _ measurement scheme _ is a description of how to measure _observables_functions of phase space for classical physics or hermitian operators for quantum physics .a measurement takes place on a _ system under measurement _ which is a member of the _ ensemble under measurement_. as usual , systems of the ensemble are considered to be identically distributed and statistically independent . returning to the _ measurement scheme _ , it should be a description of ( a ) what the detector system is and how it is prepared , ( b ) how the detector system is coupled to the system under measurement , and ( c ) how the detector system is itself measured , and how the measured value is interpreted .the measurement scheme , essentially a description of the detectors , should be generally independent of the _ ensemble under measurement _ , and only ( b ) , the coupling to the system of interest , should depend on the observable .also , the measurement of the detector system must be defined in terms of axioms both classical and quantum ( e.g. by projection postulate ) .the measurement result should contain the _ inherent _ statistical distribution of the measured system .the measurement result also contains _ detector noise _ resulting , in an similar fashion , from the statistical and quantum properties of the detector system . 
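The next paragraph postulates that the recorded distribution is the convolution of the system's intrinsic distribution with the detector-only (null-measurement) noise, so that the noise can be removed by deconvolution. As a concrete numerical illustration of that step, the sketch below recovers an assumed bimodal intrinsic distribution from a simulated record blurred by Gaussian detector noise; all distributions and widths here are assumptions.

    import numpy as np

    x = np.linspace(-15, 15, 2047)           # odd length so x = 0 is a grid point
    dx = x[1] - x[0]

    true = 0.5 * np.exp(-(x - 1.5) ** 2 / 2) + 0.5 * np.exp(-(x + 1.5) ** 2 / 2)
    true /= true.sum() * dx                  # intrinsic distribution of the system
    noise = np.exp(-x ** 2 / 8)              # broad detector-only (null-measurement) noise
    noise /= noise.sum() * dx

    measured = np.convolve(true, noise, mode="same") * dx   # what the experiment records

    # deconvolve by division in Fourier space (kernel re-centred to index 0),
    # with a small floor on the denominator as a crude regularisation
    F_meas = np.fft.rfft(measured)
    F_noise = np.fft.rfft(np.fft.ifftshift(noise))
    F_noise_safe = np.where(np.abs(F_noise) > 1e-8, F_noise, 1.0)
    recovered = np.fft.irfft(F_meas / F_noise_safe, n=x.size) / dx

    print(np.abs(recovered - true).max())    # small compared with the peak of `true`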
by the measurement of many systems from an ensemble, the probability distribution of the measurement can itself be measured .the detector noise probability distribution of a _null measurement_a ` measurement ' where the detector system is prepared but not coupled to the system under measurement can be determined .we _ postulate _ that the measurement scheme is expressed by a convolution , and in this case the detector noise may be removed by deconvolution .the measurement schemes considered in this paper all possess this last property .we consider time - resolved measurements of observables measured at times , with outcomes occurring with probabilities .the probability density contains all the information about the experiment and we formulate _ criteria _ for noninvasiveness and time symmetry in terms of , or more exactly , by requiring equality between values measured in different experiments .an arbitrary operation is non - disturbing if the probability density of other measurements is unchanged by the test operation s addition or removal . in other words , integrating over the single measurement should yield the same distribution that would be obtained if that measurement were never performed .therefore , our criterion of _ noninvasiveness _ of the measurement reads equation ( [ eq : noninvasivedefinition ] ) equates probabilities between two different experiments . in the first ,the measurement is integrated out and in the second , the slash notation indicates that the variable was not measured at all .this defines noninvasiveness of single measurements on a given experiment .more generally , if new measurements of observables can be inserted at intermediate times without changing the previous probability density as in ( [ eq : noninvasivedefinition ] ) then the all of them are noninvasive .the noninvasiveness is stronger if ( [ eq : noninvasivedefinition ] ) is satisfied for a fixed but arbitrary other measurements .we assume that time reversal is a good symmetry for the equations of motion of the system and investigate whether this leads to a corresponding symmetry expressed in the results of measurements performed on the system .we should note that time reversal symmetry holds only for nondissipative , hamiltonian systems .however , physical dissipation is always a result of ignoring fast - changing and fine - grained degrees of freedom , often modeled by a heat bath coupled weakly to the system .if one had access to all the degrees of freedom and the heat bath , one could reverse the full phase space probability and restore time symmetry . even if it is not practically possible to reverse fine - grained degrees of freedom ,an alternate solution is to restrict ourselves to states in equilibrium coupled to a heat bath , which are time - symmetric themselves in the thermodynamic limit . to express the expected time - reversal symmetry of a set of measurements , we begin by denoting the time - reversed version of an object by , i.e. 
, position : and momentum time - reversed experiment involves the time reversed initial state , time - reversed measured quantities with results , and also reversed time and therefore , ordering of the measurements .hence , for the probability , our criterion of _ time symmetry of measurements _ reads where we compare the probability densities of the forward ( ) and reversed ( ) sets of measurements .in such a form , classically ( [ tsym ] ) holds for equilibrium and non - equilibrium systems and is independent of the validity of charge conjugation and parity symmetries and also of relativistic invariance . when fulfilled assuming for the moment that the measurements are non - invasive the result ( [ eq : timesymmetryofmeasurements ] ) leads to the principle of detailed balance and reciprocity of thermodynamic fluxes .the above criteria ( [ eq : noninvasivedefinition ] ) and ( [ eq : timesymmetryofmeasurements ] ) must be confronted with real detection protocols . for each measurement, there is a detection protocol that includes some interaction between the original system and an ancilla that is later decoupled with the imprinted information retrieved from the system .we should add the remark that the internal dynamics of the detector may be irreversible , but this is irrelevant , because we ask only about the behavior of the system .note also that , for the time symmetry to hold , the measurements should not disturb the system in the sense of the criterion ( [ eq : noninvasivedefinition ] ) , since any disturbance would create an asymmetry between before and after the measurement .the majority of measurements are invasive and irreversible , both classical and quantum .however , there exists a special class of measurements , defined both classically and quantum mechanically , which are noninivasive under certain conditions .they are described by an instantaneous interaction between the system and detector , where is the measured observable , is the detector s momentum and is the coupling strength ( see details later in the text ) .the initial state of the detector is the zero mean gaussian .the observer finally registers the position which is shifted by .the result contains also the internal detection noise , which is subtracted / deconvoluted .for all _ finite _ the scheme is invasive , except if the observables are compatible ( vanishing poisson bracket or commutator ) or if initially , which makes sense only classically ( we do not want divergent position ) .however , the scheme becomes noninvasive ( both classically and quantum ) _ in the limit _ , while rescaling the detector s result by this is the _ weak measurement _ .surprisingly , classical and quantum weak measurements differ with respect to time symmetry ( [ tsym ] ) .the behavior of different types of measurements is summarized in table [ tab_mes ] .the aim of this paper is to explain the origin of this difference between classical and quantum measurements .we will also show the asymmetry explicitly by giving an example of a measurement of a simple two - level system and propose an experimentally feasible realization by charge measurements on a quantum dot connected to a reservoir ..different types of measurements may satisfy noninvasiveness and/or time symmetry .the exceptions include position and/or momentum measurement in a simple harmonic oscillator , two - time correlations and other accidental symmetries or quasiclassical systems . 
[ cols="<,<,<",options="header " , ]we will show in next sections that in the _ classical _ weak measurement limit one can find where the average is taken in the initial state in the phase space and denotes a classical analogue of the heisenberg picture for the observable .this clearly satisfies noninvasiveness and time symmetry , because are commuting numbers and we can reorder them under time reversal .now , in the quantum case , we will get for , where with the initial density matrix .the superoperators act as , for the observable operator .this quantity is no longer a probability but a quasiprobability and still satisfies noninvasiveness ( [ eq : noninvasivedefinition ] ) .however , the time symmetry ( [ eq : timesymmetryofmeasurements ] ) is violated , except for compatible measurements ( e.g. space - like separated ) .mathematically , this is because we replace the classical -number multiplication ( obviously a commuting operation ) by the quantum anticommutator of operators ( therefore noncommuting ) . we can not reorder superoperators under time reversal . for slow measurements ,each operator in ( [ qaq ] ) is replaced with , where turns on and off slowly compared to relevant timescales of the system .this slow measuring smoothes the resulting distribution so that any antisymmetric contributions vanish and therefore time symmetry ( [ eq : timesymmetryofmeasurements ] ) will still apply . roughly speaking ,the more classical is the system , the more time - symmetric it is .the time symmetry ( [ eq : timesymmetryofmeasurements ] ) can be tested by comparing moments of the distribution , we emphasize that the quantities in ( [ eq : t_moments ] ) are expectation values of products of measurement results , and should not be confused with expectation values of observables in an ensemble .the ordering of to is mathematically irrelevant , but serves as a reminder of the ordering of measurements in the experiment .linear correlations of quantum weak measurements in the limit of zero measurement strength are given by we can freely permute the in the left hand side but not the in the right hand side ( they do not commute and the order reflects that ) . this asymmetry is only present for fast measurements of three or more incompatible observables .this does not need a specific system .in contrast , only specific systems and observables do not show the asymmetry ; one such an exception is e.g. position measurement in a simple harmonic oscillator . 
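As a concrete illustration of the ordering asymmetry just described, the sketch below evaluates a third-order weak-measurement correlation for a single qubit by applying the symmetrized (anticommutator) multiplications in the order the measurements are made and tracing against the initial state. The Hamiltonian, the initial state and the choice of measuring sigma_x at three different times are assumptions; the only point is that the forward and reversed orderings give different numbers, whereas a classical product of readouts could be reordered freely.

    import numpy as np
    from scipy.linalg import expm

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    H = 0.7 * sz                                      # assumed qubit Hamiltonian
    rho0 = 0.5 * (np.eye(2) + 0.6 * sx + 0.3 * sz)    # assumed initial state

    def heisenberg(A, t):
        U = expm(-1j * H * t)
        return U.conj().T @ A @ U                     # A(t) in the Heisenberg picture

    def weak_corr(schedule, rho):
        """Correlation of weak readouts from the quasiprobability:
        Tr[ A_n(t_n) o ... o A_1(t_1) o rho ] with  A o X = (A X + X A)/2,
        applied in the order the measurements are performed."""
        X = rho.astype(complex)
        for A, t in schedule:
            At = heisenberg(A, t)
            X = 0.5 * (At @ X + X @ At)
        return np.trace(X).real

    # the same observable, sigma_x, measured weakly at three different times
    forward = [(sx, 0.0), (sx, 1.0), (sx, 2.0)]
    backward = list(reversed(forward))                # same readout product, opposite order

    print(weak_corr(forward, rho0))    # ~ +0.10 with these assumed parameters
    print(weak_corr(backward, rho0))   # ~ -0.10: the ordering matters, unlike classically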
in the case ofcompatible or only two ( not necessarily compatible ) measurements the ordering is irrelevant and the symmetry ( [ eq : t_moments ] ) holds .let us take a classical system with the probability density in phase space with being a pair of canonical generalized position and momentum .the evolution is given by the hamiltonian and can be expressed compactly using the liouville operator , defined by where \ ] ] is the poisson bracket .one has or .let us consider a direct sequential measurement of quantities measured at times , with the results , respectively .the probability distribution is naturally postulated as alternatively , it can be written as where .the above quantity coincides with ( [ cac ] ) , is positive and normalized so it is a normal probability .as we already noted , it satisfies noninvasiveness ( [ eq : noninvasivedefinition ] ) and time symmetry ( [ eq : timesymmetryofmeasurements ] ) .now , the quantum direct measurement is governed by the projection postulate .it is obviously invasive , violates ( [ eq : noninvasivedefinition ] ) and ( [ eq : timesymmetryofmeasurements ] ) , which is not at all surprising . looking for quantum noninvasiveness , we have to abandon direct measurements .since we want to compare classical and quantum noninvasive measurements , we will consider indirect measurements both classical and quantum .let us now construct a model of a weak measurement which functions both classically and quantum mechanically .we have no direct access to the quantity at time but we couple a detector for an instant .the interaction hamiltonian , added to the system , reads where is the detector s momentum , and is the measurement s strength .we will use a very compact notation that highlights quantum - classical analogies and differences .this is why many formulae below apply both to classical and quantum cases , with differences only in the mathematical objects ( e.g. numbers or operators , phase - space density or density matrix , operator or superoperator ) .the quantum liouville superoperator reads /\rmi\hbar ] ) . as in the classical evolution of phase space density , operators in the heisenberg picture evolve as . for a single measurement ,the total initial state is a product , where is the state of the detector .after the measurement , the total density is where classically ( multiplication by ) , quantum - mechanically ( anticommutator ) , and the ( super)operator is given classically by , and quantum mechanically by /\rmi\hbar ] ) .this is a generic symmetric gaussian state .if measured classically the initial variances read and .quantum mechanically ( under projective measurement ) and . note that for they reduce to the classical result while is imposed by the heisenberg uncertainty principle .we register directly the value of . 
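A toy numerical illustration of the detector model just described, in its classical reading: the pointer is prepared with a zero-mean Gaussian spread, shifted by g times the system value, and the readout is rescaled by 1/g. The two-valued system observable and the unit detector width are assumptions; the rescaled mean reproduces the true average for any g, while the spread of individual readouts grows like 1/g at small g, which is the weak-measurement trade-off.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 500_000
    A = rng.choice([-1.0, 1.0], size=n, p=[0.3, 0.7])    # "system" values, <A> = 0.4

    for g in (1.0, 0.3, 0.1):
        pointer = rng.normal(0.0, 1.0, size=n) + g * A   # detector readout
        a = pointer / g                                  # rescaled single-shot results
        print(f"g={g:4.1f}  mean={a.mean():+.3f}  std={a.std():6.2f}")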
however , the way of measuring of is in principle irrelevant , both classical and quantum , and may be well disturbing because the detector will not interact with the system anymore .the detector ( classical or quantum ) can evolve irreversibly , we are only interested in the data extracted from the system .we apply a sequence of such measurements , using identical , independent detectors , but coupled at different times to possibly different observables .it is convenient to define a _result - conditioned _ density , normalized by the final _ result - integrated _ density .the probability density of a given sequence of results is given by or .now , is given by where is the zero - mean gaussian noise with the variance .the quantity reads this is classically a standard probability density but not a positive definite density matrix in quantum mechanics .it is clear when defining and .now the quantum is only a quasiprobability .one can write down the convolution relation analogous to ( [ roconv ] ) , both and have a well - defined limit , and . then ( [ qweak ] ) reduces to ( [ qdef ] ) classically . in the quantum case , with or equivalently , which coincides with ( [ qaq ] ) .the effect of disturbance ( both classical and quantum ! ) is of the order so it vanishes in the limit .one can relate correlation functions the leading contribution to such correlation functions is of the order , while the lowest correction due to disturbance is of the order , as follows from ( [ roconv ] ) and ( [ qweak ] ) .both classical and quantum satisfy noninvasiveness ( [ eq : noninvasivedefinition ] ) , but only in the limit .there are exceptions when noninvasiveness holds for an arbitrary .in particular is independent of and always a real positive probability for _ compatible _ observables if classically or =0 ] ) , and is the measured observable .the interaction is followed by von neumann projection of the ancilla onto a position eigenstate which destroys the ancilla .the system can however be measured again with the next ancilla , as shown in figure [ weakm ] .the density matrix after the ^th^ measurement is where is the initial prepared state of ancilla . by inserting identity operations ,the measurement interaction can be expressed as shifts of the ancilla wavefunction , in ( [ eq : weak_rho_recursive_with_idents ] ) , the the state of ancilla which has the shifted wavefunction is written as .the joint probability is the probability of measuring the ancillas in a set of position eigenstates with positions given by in ( [ eq : joint_prob_with_integrals ] ) , is defined recursively by using gaussian wavefunctions , a change of variables to and separates the joint probability density into a quasiprobability signal ( ) and detector noise ( ) . equation ( [ eq : weak_measurements_quasiprobability ] ) defined the joint quasiprobability density for the series of von neumann measurements .the quasiprobability has a well - defined limit . 
in this limit for time - resolved measurement ,the averages with respect to this quasiprobability are given by which is equivalent to ( [ tinor ] ) .the genuine , measured probability is positive definite because it contains also the large detection noise which is gaussian , white and completely independent of the system , compared to the signal .an alternative , equivalent approach is based on gaussian positive operator - valued measures ( povms ) and special kraus operators .let us begin with the basic properties of povm .the kraus operators for an observable described by with continuous outcome need only satisfy .the act of measurement on the state defined by the density matrix results in the new state .the new state yields a normalized and positive definite probability density .the procedure can be repeated recursively for an arbitrary sequence of ( not necessarily commuting ) operators , the corresponding probability density is given by .we now define a family of kraus operators , namely .it is clear that should correspond to exact , strong , projective measurement , while is a weak measurement and gives a large error .in fact , these kraus operators are exactly those associated with the von neumann measurements previously described .we also see that strong projection changes the state ( by collapse ) , while gives , and hence this case corresponds to weak measurement .however , the repetition of the same measurement times effectively means one measurement with so , with , even a weak coupling results in a strong measurement . for an arbitrary sequence of measurements , we can write the final density matrix as the convolution with . here , and .the quasi - density matrix is given recursively by with the initial density matrix for .we can interpret in ( [ conv ] ) as some internal noise of the detectors which , in the limit , should not influence the system . we _define _ the quasiprobability and abbreviate . in this limit ( [ quasi ] )reduces to note that , so the last measurement does not need to be weak ( it can be even a projection ) .the averages with respect to are easily calculated by means of the generating function ( [ gene ] ) , e.g. , , for . as a straightforward generalization to continuous measurement, we obtain for time ordered observables , ., coupled capacitively to the measured dot . the fluctuations of the current in the junction biased by the voltage depend on the dot s occupation with the proportionality constant . ] an effective model of weakly detecting the dot s charge using an electric junction is shown in fig . 
[ qpc ] .the junction is treated as another dot between two reservoirs but in a broad level regime .the complete hamiltonian , consisting of the dot part ( [ hdot ] ) , and the junction part , reads ,\nonumber\\ & & \hat{n}_l=\hat{\psi}_l^{\dag}(e)\hat{\psi}_l(e),\:\hat{n}'=\hat{d}^{\dag}\hat{d},\end{aligned}\ ] ] where is the total number of elementary charges in the left reservoir , is the capacitance between the dot and the qpc , , denote effective tunneling rate and level energy of the qpc and is the bias voltage .we measure current fluctuations in the junction , , with the current in heisenberg picture defined as .such fluctuations have already been measured experimentally at low and high frequencies .most of fluctuations are just generated by the shot noise in the junction .now , we consider a finite , but still very large capacitance .we expect a contribution from the system dot s charge fluctuation to of the order .we assume separation of the system s and detector s characteristic frequency scales , namely which also includes the broad level approximation for the detector s dot .there exists a special parameter range , where the coupling is strong enough to extract information about which is not blurred by feedback and cross - correlation terms ( left inequality ) , but weak enough not to drive the system dot out of equilibrium ( right inequality ) . in this limit the dominating contributions to the detector current s third cumulant are given by with where and effective transmission .although the term in is much smaller than the first one , other terms , corresponding to cross correlations and back action , are negligible compared to the last term .99 leggett a j and garg a 1985 _ phys . rev .lett . _ * 54 * 857 von neumann j 1932 _ mathematical foundations of quantum mechanics _( princeton : princeton u.p . )wiseman h m and milburn g j 2009 _ quantum measurement and control _( cambridge : cambridge university press ) kraus k 1983 _ states , effects and operations _ ( berlin : springer ) aharonov y , albert d z and vaidman l 1988 _ phys .lett . _ * 60 * 1351 bednorz a and belzig w 2010 _ phys .lett . _ * 105 * 106803 bednorz a , belzig w and nitzan a 2012 _ new j. phys . _* 14 * 013009 lundeen j s , sutherland b , patel a , stewart c and bamber c 2011 _ nature _ * 474 * 188 ruskov r , korotkov a n and mizel a 2006 _ phys .lett . _ * 96 * 200404 jordan a n , korotkov a n and bttiker m 2006 _ phys .lett . _ * 97 * 026805 williams n s and jordan a n 2008 _ phys ._ * 100 * 026804 palacios - laloy a , mallet f , nguyen f , bertet p , vion d , esteve d and korotkov a n 2010 _ nat .* 6 * 442 streater r f and wightman a s 1964 _ pct , spin and statistics , and all that _( new york : benjamin ) sozzi m 2008 _ discrete symmetries and cp violation _( new york : oxford u.p . ) greenberg o w 2002 _ phys .lett . _ * 89 * 231602 van kampen n g 2007 _ stochastic processes in physics and chemistry _ , ( amsterdam : north - holland ) onsager l 1931 _ phys . rev . _ * 37 * 405 aharonov y , bergmann p g and lebowitz j l 1964 _ phys . rev . _* 134 * b1410 gell - mann m and hartle j 1994 _ physical origins of time asymmetry _ eds .halliwell j , perez - mercader j and zurek w ( cambridge : cambridge university press ) 311 ( _ preprint _ arxiv : gr - qc/9304023 ) aharonov y , popescu s and tollaksen j 2010 _ phys . today_ * 63 * i.11 27 berg b , plimak l i , polkovnikov a , olsen m k , fleischhauer m and schleich w p 2009 _ phys . rev . _a * 80 * 033624 tsang m 2009 _ phys .rev . 
_ a * 80 * 033840 hofmann h f 2010 _ phys .rev . _ a * 81 * 012103 dressel j , agarwal s , and jordan a n 2010 _ phys .lett . _ * 104 * 240401 chou k , su z , hao b and yu l 1985 _ phys . rep . _ * 118 * 1 dirac p a m 1958 _ the principles of quantum mechanics _( new york : oxford u.p . ) lu w , ji z , pfeiffer l , k. w. west k w and rimberg a j 2003 _ nature _ * 423 * 422 sukhorukov e v , jordan a n , gustavsson s , leturcq r , ihn t and ensslin k 2007 _ nature phys . _ * 3 * 243 blanter y m and bttiker m 2000 _ phys .* 336 * 1 schwinger j 1961 _ j. math .* 2 * 407 keldysh l v 1965 _ sov . phys jetp _ * 20 * 1018 kadanoff l p , baym g 1962 _ quantum statistical mechanics _( new york : benjamin ) kamenev a and levchenko a 2009 _ advances in phys . _ * 58 * 197 utsumi y 2007 _ phys .b _ * 75 * 035333 reulet b , senzier j , and prober d e 2003 _ phys . rev. lett . _ * 91 * 196601 bomze y , gershon g , shovkun d , levitov l s and reznikov m 2005 _ phys . rev . lett . _ * 95 * 176601 gershon g , bomze y , sukhorukov e v and reznikov m 2008 _ phys . rev . lett ._ * 101 * 016803 gabelli j and reulet b 2009 _ j. stat ._ p01049 doi:10.1088/1742 - 5468/2009/01/p01049
measurements in classical and quantum physics are described in fundamentally different ways . nevertheless , one can formally define similar measurement procedures with respect to the disturbance they cause . obviously , strong measurements , both classical and quantum , are invasive they disturb the measured system . we show that it is possible to define general weak measurements , which are noninvasive : the disturbance becomes negligible as the measurement strength goes to zero . classical intuition suggests that noninvasive measurements should be time symmetric ( if the system dynamics is reversible ) and we confirm that correlations are time - reversal symmetric in the classical case . however , quantum weak measurements defined analogously to their classical counterparts can be noninvasive but not time symmetric . we present a simple example of measurements on a two - level system which violates time symmetry and propose an experiment with quantum dots to measure the time - symmetry violation in a third - order current correlation function .
a crucial ingredient of many cosmological models is the idea that galaxies and large - scale structure in the universe grew by a process of gravitational instability from small initial perturbations . in the most successful versions of this basic idea ,the primordial fluctuations that seeded this process were generated during a period of inflation which , in its simplest form , is expected to produce fluctuations with relatively simple statistical properties ( starobinsky 1979 , 1980 , 1982 ; guth 1980 ; guth & pi 1981 ; linde 1982 ; albrecht & steinhardt 1982 ) . in particular the primordial density field in these models is taken to form a statistically homogeneous ( i.e. stationary ) gaussian random field ( bardeen et al .this basic paradigm for structure formation has survived numerous observational challenges , and has emerged even stronger after recent confrontations with the 2df galaxy redshift survey ( 2dfgrs ; percival et al .2001 ) and the wilkinson microwave anisotropy probe ( wmap ; hinshaw et al .so successful has the standard paradigm now become that many regard the future of cosmology as being largely concerned with improving estimates of the parameters of this basic model rather than searching for alternatives .there are , however , a number of suggestions that this confidence in the model may be misplaced and the focus on parameter estimation may be somewhat premature .for example , the wmap data have a number of unusual properties that are not yet completely understood ( efstathiou 2003 ; chiang et al .2003 ; dineen & coles 2003 ; eriksen et al .2003 ) . among the possibilities suggested by these anomaliesis that the cosmic microwave background ( cmb ) sky is not statistically homogeneous and isotropic , and perhaps not gaussian either .the latter possibility would be particularly interesting as it might provide indications of departures from the simplest versions of inflation ( e.g. linde & mukhanov 1997 ; contaldi , bean & magueijo 2000 ; martin , riazuelo & sakellariadou 2000 ; gangui , pogosian & winitzki 2001 ; gupta et al .2002 ; gangui , martin & sakellariadou 2002 ; bartolo , matarrese & riotto 2002 ) . whether wmap is hinting at something primordial or whether there are systematic problems with the data or its interpretation is unclear .either way , these suggestions , as well as general considerations of the nature of scientific method , suggest that it is a good time to stand back from the prevailing paradigm and look for new methods of analysis that are capable of testing the core assumptions behind it rather than taking them for granted . in this paperwe introduce a new method for the statistical analysis of all - sky cmb maps which is complementary to usual approaches based on the power spectrum but which also furnishes a simple and direct test of the statistical assumptions underpinning the standard cosmological models .our method is based on the properties of the phases of the ( complex ) coefficients obtained from a spherical harmonic expansion of all sky maps .the advantages of this approach are that it hits at the heart of the `` random phase '' assumption essential to the definition of statistically homogeneous and isotropic gaussian random fields . perhaps most importantly from a methodological point of view ,is intrinsically non parametric and consequently makes minimal assumptions about the data .the layout of this paper is as follows . 
in the next sectionwe discuss some technical issues relating to the properties of spherical harmonic phases which are necessary for an understanding of the analysis we present . in section 3we explain a practical procedure for assessing the presence of a particular form of departure from the random phase hypothesis using kuiper s statistic . in section 4we discuss results obtained by applying the method to cobe - dmr maps and data from wmap , as well as some toy examples .we discuss the outcomes and outline ideas for future work in section 5 .the method we discuss in this paper is based on recent results arising from the study of correlations between fourier domain for random fields defined over flat two or three dimensional spaces . in two dimensions , which is the case closest to the spherical expansion we use in this paper , the fourier expansion is of the form with and the fourier transform is complex , i.e. .\ ] ] the quantity is the phase of the mode corresponding to wavevector .a two dimensional flat surface is a reasonable approximation to a small patch of the sky , so a fourier approach has been taken in studies of high - resolution cmb maps ( bond & efstathiou 1987 ; coles & barrow 1987 ) .the extension to three - dimensions is trivial . in orthodox cosmologies ,initial density fluctuations constitute a statistically homogeneous and isotropic gaussian random field ( bardeen et al .1986 ) . strictly speakingthis means that the real and imaginary parts of are drawn independently from gaussian distributions with a variance that depends only on . if this assumption holds , the phases are independent and uniformly random on the interval ] , then the phase differences are also uniformly random on the same interval . finally , the behaviour of the phase differences is relatively simple to visualize , and interpret in terms of the dynamics of non - linearly evolving structure ( coles & chiang 2000 ) .however the preceding argument seems to demonstrate that statistical homogeneity and isotropy require that or , in other words , that the phase differences are random .so how can be useful statistically ?the answer to this question is that although the expectation value taken over the ensemble of possible configurations of a random field may indeed be zero , any average taken over a finite region within single realization will not correspond to this value .departures from the global expectation value contain information about higher order fluctuations . in the case of non linear gravitational clustering developing within a finite numerical box ,what is key is the scale of the non - linearity compared with the size of the box .if non - linearity exists only on small scales , then the are very close to uniform on the unit circle and the box average is close to the ensemble average ( `` the box is a fair sample '' ) .if the scale of structure is close to the scale of the box then becomes extremely non - uniform ( coles & chiang 2000 ) . 
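The statements above can be checked directly on a simulated map: for a Gaussian random field generated with independent random phases, both the Fourier phases and the phase differences between neighbouring modes are consistent with a uniform distribution on [0, 2pi). The sketch below does this for an assumed power-law spectrum on a small periodic grid.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 256
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.hypot(kx, ky)
    power = np.zeros_like(k)
    power[k > 0] = k[k > 0] ** -3.0                    # assumed power-law spectrum

    # Gaussian field: complex coefficients with random phases, then inverse FFT
    coeff = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) * np.sqrt(power / 2)
    field = np.fft.ifft2(coeff).real                   # taking the real part keeps it Gaussian

    phases = np.angle(np.fft.fft2(field))              # phases of the generated map
    diffs = np.diff(phases, axis=1) % (2 * np.pi)      # phase differences along one k-axis

    # crude uniformity check: the histogram of the differences should be flat
    hist, _ = np.histogram(diffs[k[:, :-1] > 0], bins=16,
                           range=(0, 2 * np.pi), density=True)
    print(np.round(hist * 2 * np.pi, 2))               # entries close to 1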
using a bigger box actually makes it harder to characterize non - linear structure on a given scale using phase differences .for related comments see hikage , matsubara & suto ( 2003 ) .we can describe the distribution of fluctuations in the microwave background over the celestial sphere using a sum over a set of spherical harmonics : here is the departure of the temperature from the average at angular position on the celestial sphere in some coordinate system , usually galactic .the are spherical harmonic functions which we define in terms of the legendre polynomials using i.e. we use the condon - shortley phase convention . in equation( 6 ) , the are complex coefficients which can be written .\ ] ] note that , since is real , the definitions ( 7 ) and ( 8) require the following relations between the real and imaginary parts of the : if is odd then while if is even and if is zero then from this it is clear that the mode always has zero phase , and there are consequently only independent phase angles describing the harmonic modes at a given . if the primordial density fluctuations form a gaussian random field as described in sec .2.1 , then the temperature variations induced across the sky form a gaussian random field over the celestial sphere . by analogy with the above discussion above ,this means that where is the angular power spectrum , the subject of much scrutiny in the context of the cosmic microwave background ( e.g. hinshaw et al .2003 ) , and is the kronecker delta function . since the phases are random , the stochastic properties of a statistically homogeneous and isotropic gaussian random field are fully specified by the , which determines the variance of the real and imaginary parts of both of which are gaussian . to look for signs of phase correlations in cmb maps one can begin with a straightforward generalization of the ideas introduced above for fourier phases .there are , however , some technical points to be mentioned .first , the set of spherical harmonics , and consequently the phases of the harmonic coefficients , is defined with respect to a particular coordinate system .this is also true in the fourier domain but , as we discussed above , dealing with translations of the coordinate system in that case is relatively straightforward because and are equivalent .this is not so for spherical harmonics : the definitions of and are quite different , and the distinction is introduced when the -axis of a three - dimensional coordinate system is chosen to be the polar axis .changing this axis muddles up the spherical harmonic coefficients in a much more complex way than is the case for fourier modes .another point worth stressing is that the sphere is a finite space , so when analyzing the statistical properties of maps on a single sky one never has an exact ensemble average .this is less of an issue when dealing with spherical harmonic modes at high , because the average over a single sky is then close to that over the probability distribution , but at low there is the perennial problem of so - called `` cosmic variance '' .suppose one were to construct phase differences in the manner of section 2.1 , i.e. 
then the distribution of the for high should tend to uniform if the field is statistically homogeneous over the sphere whether or not the temperature fluctuations are gaussian .however , at low , the finiteness of the sky come into play and the distribution over the small number of modes available will depart from the uniform case .these fluctuations at low depend on higher order statistical information about the form of the fluctuations , so in this case cosmic variance can actually be helpful . we shall explain how we cope with this aspect of phases and the behaviour in more detail in section 3 . for the time being we note that , if the orthodox cosmological interpretation of temperature fluctuations is correct , the phases of the should be random and so should phase differences of the form and .the aim of this paper is introduce a method of checking whether this is so . in studying the properties of phase correlations of cmb maps ,our intention is to characterize departures from the standard cosmological framework . the usual hypothesis , that primordial density fluctuations form a statistically homogeneous and isotropic gaussian random field , results in uncorrelated phases .most discussions of phase correlations in the literature refer to them as diagnostic of non gaussianity in some form .what this actually means is usually that the field in question is not a homogeneous and isotropic gaussian random field .fields can be defined which are non gaussian but are both homogeneous and isotropic ; coles & barrow ( 1987 ) give examples .such fields certainly do not have random phases , but the form of phase association that characterizes them can be quite complex ( watts & coles 2003 ; matsubara 2003 ; hikage , matsubara & suto 2003 ) .however , watts & coles ( 2003 ) established an important connection between the form of non gaussianity exhibited by such fields and the bispectrum .the bispectrum is now a standard part of the cosmologists armoury for studies of large scale structure in three dimensions ( e.g. peebles 1980 ; luo 1994 ; heavens 1998 ; matarrese , verde & heavens 1997 ; scoccimarro et al .1998 ; scoccimarro et al .1998 ; scoccimmarro , couchman & frieman 1999 ; verde et al . 2000 , 2001 , 2002 ) and for cmb studies ( ferreira , magueijo & gorski 1998 ; heavens 1998 ; magueijo 2000 ; contaldi et al . 2000 ; phillips & kogut 2001 ; sandvik & magueijo 2001 ; santos et al . 2001; komatsu et al . 2002; komatsu et al .the bispectrum , being essentially the three point correlation function in harmonic space , can be generalized to higher order polyspectra ( stirling & peacock 1996 ; verde & heavens 2001 ; hu 2001 ; cooray 2001 ) .however the _ estimation _ of these polyspectra generally involves taking averages that assume that the field one is dealing with is indeed statistically homogeneous and they are therefore not in themselves tests of that hypothesis .on the other hand , it is also possible for fields to be gaussian in some sense , but not strictly gaussian in the sense we defined in sec .2.1 . as a consequence of strict gaussianity , all the -dimensional joint probability distributions of the field values at different spatial locations are multivariate gaussian ( bardeen et al .fields can be constructed in which , say , the one point distribution is gaussian but the higher order joint distributions are not .these would be gaussian in some sense , but again would not have random phases and may or may not be statistically homogeneous and isotropic . 
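A short sketch of extracting the spherical harmonic phases and the phase differences discussed above from an all-sky map. It assumes the healpy package is available and uses a flat input spectrum purely for illustration; for such a random-phase test sky the phases at a fixed multipole, and their differences between adjacent m, should be consistent with uniformity.

    import numpy as np
    import healpy as hp

    nside, lmax = 64, 32
    cl = np.ones(lmax + 1)                     # assumed angular power spectrum
    sky = hp.synfast(cl, nside, lmax=lmax)     # a Gaussian, random-phase test sky

    alm = hp.map2alm(sky, lmax=lmax)           # complex a_lm, m >= 0 only
    ell = 10                                   # pick one multipole as an example
    idx = [hp.Alm.getidx(lmax, ell, m) for m in range(ell + 1)]
    phases = np.angle(alm[idx]) % (2 * np.pi)  # phi_l0 ... phi_ll (phi_l0 is 0 or pi)

    diffs = np.diff(phases) % (2 * np.pi)      # phase differences between adjacent m
    print(np.round(phases, 3))
    print(np.round(diffs, 3))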
in the fourier or spherical harmonic domain , one can construct fields in which the harmonic coefficients are gaussian but not independent . such fields are gaussian , but either statistically inhomogeneous or statistically anisotropic , and would have non - random phases . any or all of these deviations from simple gaussianity might be expected to occur in cmb in various circumstances . statistical inhomogeneity and/or anisotropy over the sky might result from incorrectly subtracted foreground emission from the galaxy . alternatively , the scanning pattern used to produce an all - sky map will generally not sample the sky uniformly , resulting in a signal - to - noise ratio that varies over the celestial sphere . perhaps more exotically , compact universe models with a non - trivial topology should result in repeated features in the temperature pattern on the sky ( levin , scannapieco & silk 1998 ; scannapieco , levin & silk 1999 ; levin 2002 ; rocha et al . 2002 ) . all these possibilities should result in some form of phase correlation , and indeed there is already some evidence from the preliminary release of the wmap data ( chiang et al . 2003 ; eriksen et al . 2003 ) . the focus of most discussions of higher - order cmb statistics is the possible detection of primordial non gaussianity . as we discussed in the introduction , most versions of the inflationary universe idea produce primordial fluctuations that are extremely close to the strictly gaussian form . however , there are circumstances in which the primordial fluctuations could be non gaussian . in such cases there will be information concerning the level of non gaussianity in the distribution of phases , but this usually appears in a form that is picked up by the bispectrum ( watts & coles 2003 ) . clearly there are many different possible forms of departure from the standard statistical assumption and consequently many possible signatures in the distribution of phases . our aim in this paper is to look for a simple method that is a powerful complement to the usual approach via the polyspectra and hopefully sheds some light on previous debates about departures from gaussianity in the cobe dmr data ( ferreira , magueijo & gorski 1998 ; pando , valls - gabaud & fang 1998 ; bromley & tegmark 1999 ; magueijo 2000 ) . as we shall see , a test based on the distribution of harmonic phases and/or phase differences fits the bill rather nicely . the approach we take in this paper is to assume that we have available a set of phases corresponding to a set of spherical harmonic coefficients obtained from a data set , either real or simulated . we can also form from these phases a set of phase differences as described in the previous section . let us assume , therefore , that we have generic angles , . under the standard statistical assumption these should be random , apart from the constraints described in section 2.2 . the first thing we need is a way of testing whether a given set of phase angles is consistent with being drawn from a uniform distribution on the unit circle . this is not quite as simple as it seems , particularly if one does not want to assume any particular form for the actual distribution of angles , such as a bias in a particular direction ; see fisher ( 1993 ) . fortunately , however , there is a fully non parametric method available , based on the theory of order statistics , and known as kuiper s statistic ( kuiper 1960 ) . kuiper s method revolves around the construction of a statistic , , obtained from the data via the
following prescription .first the angles are sorted into ascending order , to give the set .it does not matter whether the angles are defined to lie in ] or whatever .each angle is divided by to give a set of variables , where . from the set of we derive two values and where and kuiper s statistic , , is then defined as anomalously large values of indicate a distribution that is more clumped than a uniformly random distribution , while low values mean that angles are more regular .the test statistic is normalized by the number of variates , , in such a way that standard tables can be constructed to determine significance levels for any departure from uniformity ; see fisher ( 1993 ) . in this context , however , it is more convenient to determine significance levels using monte carlo simulations of the `` null '' hypothesis of random phases .this is partly because of the large number of samples available for test , but also because we can use them to make the test more general .the first point to mention is that a given set of phases , say belonging to the modes at fixed is not strictly speaking random anyway , because of the constraints noted in equations ( 9)(11 ) .one could deal with this by discarding the conjugate phases , thus reducing the number of data points , but there is no need to do this when one can instead build the required symmetries into the monte carlo generator .in addition , suppose the phases of the temperature field over the celestial sphere were indeed random , but observations were available only over apart of the sky , such as when a galactic cut is applied to remove parts of the map contaminated by foregrounds .in this case the mask may introduce phase correlations into the observations so the correct null hypothesis would be more complicated than simple uniform randomness .as long as any such selection effect were known , it could be built into the monte carlo simulation .one would then need to determine whether from an observed sky is consistent with having been drawn from the set of values of generated over the monte carlo ensemble .there is also a more fundamental problem in applying this test to spherical harmonic phases .this is that a given set of depends on the choice of a particular coordinate axis .a given sky could actually generate an infinite number of different sets of because the phase angles are not rotationally invariant .one has to be sure to take different choices of -axis into consideration when assessing significance levels , as a random phase distribution has no preferred axis while systematic artifacts may . a positive detection of non randomness may result from a chance alignment of features with a particular coordinate axis in the real sky unless this is factored into the monte carlo simulations to . 
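before turning to rotations , it is worth noting that the kuiper construction itself is simple to implement . the following minimal sketch ( python , angles supplied as a plain array in radians ) computes the statistic as prescribed above ; no finite - sample normalisation is applied , since significance levels here are calibrated against monte carlo skies rather than against published tables .

```python
import numpy as np

def kuiper_v(angles):
    # kuiper's statistic v = d_plus + d_minus for angles on [0, 2pi):
    # map the angles to x_i = theta_i / (2 pi), sort them, and compare with the
    # uniform cumulative distribution from above and from below
    x = np.sort(np.asarray(angles) / (2.0 * np.pi))
    n = len(x)
    i = np.arange(1, n + 1)
    d_plus = np.max(i / n - x)
    d_minus = np.max(x - (i - 1) / n)
    return d_plus + d_minus

# example: uniform phases give a small v, clumped phases give a large v
rng = np.random.default_rng(1)
v_uniform = kuiper_v(rng.uniform(0, 2 * np.pi, size=33))
v_clumped = kuiper_v(rng.normal(np.pi, 0.1, size=33) % (2 * np.pi))
```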
for both the real sky and the monte carlo skies we therefore need not a single value of but a distribution of -values obtained by rotating the sky over all possible angles . a similar approach is taken by hansen , marinucci & vittorio ( 2003 ) . this method may seem somewhat clumsy , but if a test is to be sensitive to departures from statistical homogeneity , one should not base it on measures that are rotationally invariant , such as those suggested by ferreira , magueijo & gorski ( 1998 ) , as these involve averaging over the very fluctuations one is trying to detect . in view of the preceding discussion we need to know how to transform a given set of into a new set when the coordinate system is rotated into a different orientation . the method is fairly standard , but we outline it here to facilitate implementation of our approach . any rotation of the cartesian coordinate system can be described using a set of three euler angles , , , which define the magnitude of successive rotations about the coordinate axes . in terms of a rotation operator , defined so that a field transforms according to , a vector is transformed as ; here is a matrix representing the operator , i.e. the wigner functions describe the rotation operator used to realise the transformations of covariant components of tensors of arbitrary rank . the functions , written as , transform a tensor from to . consider a tensor defined under the coordinate system and apply the rotation operator ; we get : this means that the transformation of the tensor under the rotation of the coordinate system can be represented as a matrix multiplication . an alternative representation of the functions in terms of the euler angles is in this representation the parts of the function depending upon the individual euler angles are separated . the form the elements of a rotation matrix and only depend upon the angle . using this representation makes calculating the values of the easier , since the possess a number of symmetry properties meaning that only one quarter of the elements need to be directly calculated ; the values of the remainder can be inferred . figure ( [ fig : symmetries ] ) shows how the following symmetries can be used to map the elements in the top quadrant of the rotation matrix into each of the other quadrants . first , which maps elements in region i into region ii as shown in figure 1 . next , which maps elements in region i into region iii .
and finally , this maps elements in region i into region iv .the relations ( 22 ) to ( 24 ) can be used in conjunction with equation ( [ wignerlittled ] ) , to find all the elements of the wigner functions from the values of in the upper quadrant .similar , but more complicated , symmetry relations exist for the .for a given value of we can construct a sum of spherical harmonics in the coordinate system as follows we can use this and the transformation in equation ( [ wigner1 ] ) to find the transformed sum , in the coordinate system this gives an expression for the rotated coefficients finding the rotated coefficients therefore requires a simple matrix multiplication once the appropriate function is known .to apply this in practice one needs a fast and accurate way of generating the matrix elements for the rotation matrix .there are elements needed to describe the rotation of each mode and the value of each element depends upon the particular values of for that rotation .varshalonich , moskalev & khersonskii ( 1988 ) provide a set of matrices for the for the modes .for example , the case is fortunately , rather than working these matrices out by hand , varshalovich et al .( 1988 ) includes a set of iteration formulae , which , in conjunction with the symmetry relations for the , can be used to generate matrices for the higher modes from the previous ones .the first iteration formula provides a way of generating an element in the mode by using the equivalent element in the matrices for the modes and ..\end{aligned}\ ] ] applying this formula repeatedly allows the central nine elements to be generated for all of the matrices .another iteration formula uses the values already in the matrix to calculate the element directly to the left .for example the element could be generated using the elements and ..\end{aligned}\ ] ] similar iteration functions exist to move up a matrix and across a matrix to the right .figure ( [ fig : iterationmethod ] ) shows the iteration scheme .iteration formula ( [ upiteration ] ) is used to generate the elements coloured in yellow from the elements coloured in cyan .the elements coloured in yellow are then used to calculate the elements coloured in blue .the yellow and blue elements are used to calculate the elements coloured in red .the red elements are used to calculate the elements coloured in green .the symmetry relations are used to calculate the elements in the other three quadrants .once the matrix has been calculated it can be used with the matrix to find the fourth matrix , and so on and so forth .this method is fairly straightforward to implement but care is needed .the main problem with it is that the second iteration formula shown , equation ( [ acrossiteration ] ) includes a term in which a factor is divided by . 
if or then leading to numerical problems . an alternative approach is to calculate each using factorials . the procedure encodes the definition of the given in edmonds ( 1960 ) : d^{\,l}_{m^{\prime}m}(\beta ) = \left [ \frac{(l+m^{\prime})!\,(l-m^{\prime})!}{(l+m)!\,(l-m)!}\right]^{\frac{1}{2}}\sum_{k}\binom{l+m}{l-m^{\prime}-k}\binom{l-m}{k}(-1)^{l-m^{\prime}-k}\left(\cos\tfrac{\beta}{2}\right)^{2k+m^{\prime}+m}\left(\sin\tfrac{\beta}{2}\right)^{2l-2k-m^{\prime}-m } . in this expression the variable k varies between and the smallest of and . this method avoids the problem of dividing through by when is small . the results are far more stable and satisfy both of the techniques given above for testing the rotation code . the procedure can be tested using a random number generator to produce matrices for the full range of possible euler angles . there are two main tests that can be made of the results . first , the rotation code should generate the correct values for the matrices given in varshalovich et al . ( 1988 ) . since the iteration formulae are for the , they will be applicable to the if and are set to zero . choosing a value of and using it in matrices ( [ wigner2 ] ) and ( [ wigner3 ] ) should reproduce the precise calculation of the matrices as given by varshalovich et al . second , the calculated matrices should be rotation matrices , so they should preserve the length of a vector . we define the vector of coefficients for a particular mode as and use this to define the length of to be and for . in order to apply these ideas to make a test of cmb fluctuations , we first need a temperature map from which we can obtain a measured set of . employing equation [ rotatedcoefficients ] with some choice of euler angles yields a rotated set of the . it is straightforward to choose a set of angles such that random orientations of the coordinate axis can be generated . the wigner function needed to compute this transformation is generated using the techniques described in section 3.3 . for the purposes of this paper we only compute the effect of rotation on low values of . there is no difficulty in principle in extending this method to very high spherical harmonics , but the computational generation of the wigner functions scales rapidly with , so we have chosen to limit ourselves to =20 . once a rotated set has been obtained , kuiper s statistic is calculated from the relevant transformed set of phases . for each cmb map , 3000 rotated sets are calculated by this kind of resampling of the original data , producing 3000 values of . the values of the statistic are binned to form a measured ( re - sampled ) distribution of . the same procedure is applied to the 1000 monte carlo sets of drawn from a uniformly random distribution , i.e. each set is rotated 3000 times and a distribution of under the null hypothesis is produced . these realizations are then binned to create an overall global average distribution under the null hypothesis . in the following applications , the distribution of the 3000 values of v was split into 100 equally - spaced bins ranging from to 3.0 for different and 100 equally - spaced bins ranging from to 5.2 for different .
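returning to the computation of the rotation matrices , the following sketch implements the factorial - sum route in python . it uses wigner 's explicit formula for the reduced matrix elements , which is equivalent ( up to phase and ordering conventions , which vary between references ) to the edmonds form quoted above , together with the euler - angle phase factors and the matrix multiplication of equation [ rotatedcoefficients ] ; the function names and the ordering of the coefficient vector are assumptions made for illustration only .

```python
import numpy as np
from math import factorial, cos, sin

def wigner_small_d(l, mp, m, beta):
    # reduced wigner matrix element d^l_{m'm}(beta) from an explicit factorial sum;
    # no recursion is involved, so there is no division by small matrix elements
    pref = np.sqrt(factorial(l + mp) * factorial(l - mp) * factorial(l + m) * factorial(l - m))
    total = 0.0
    for s in range(max(0, m - mp), min(l + m, l - mp) + 1):
        num = (-1.0) ** (mp - m + s)
        den = factorial(l + m - s) * factorial(s) * factorial(mp - m + s) * factorial(l - mp - s)
        total += (num / den
                  * cos(beta / 2) ** (2 * l + m - mp - 2 * s)
                  * sin(beta / 2) ** (mp - m + 2 * s))
    return pref * total

def wigner_D(l, alpha, beta, gamma):
    # full (2l+1)x(2l+1) matrix: exp(-i m' alpha) d^l_{m'm}(beta) exp(-i m gamma),
    # rows indexed by m' = -l..l and columns by m = -l..l (one common convention)
    ms = np.arange(-l, l + 1)
    d = np.array([[wigner_small_d(l, mp, m, beta) for m in ms] for mp in ms])
    return (np.exp(-1j * np.outer(ms, np.ones_like(ms)) * alpha) * d
            * np.exp(-1j * np.outer(np.ones_like(ms), ms) * gamma))

def rotate_alm(alm_l, alpha, beta, gamma):
    # rotate the vector (a_{l,-l}, ..., a_{l,l}) of one multipole; sign conventions vary
    l = (len(alm_l) - 1) // 2
    return wigner_D(l, alpha, beta, gamma) @ alm_l
```

a simple check of the kind described in the text is that rotate_alm preserves the length of the coefficient vector for any choice of euler angles .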
in order to determine whether the distribution of is compatible with a distribution drawn from a sky with random phases , we use a simple test , using where the summation is over all the bins and is the number expected in the bin from the overall average distribution . the larger the value of the less likely the distribution functions are to be drawn from the same parent distribution . values of are calculated for the 1000 monte carlo distributions and is calculated from the distribution of . if the value of is greater than a fraction of the values of , then the phases depart from a uniform distribution at significance level . we have chosen 95 per cent as the level at which the data are said to display signatures that are not characteristic of a statistically homogeneous gaussian random field . as a first check on the suitability of our method for detecting evidence of non - gaussianity in cmb skymaps , sets of spherical harmonic coefficients were created with phases drawn from non - uniform distributions . three types of distributions were used to test the method : ( i ) the phases were coupled , ( ii ) the phases were drawn from a cardioid distribution , and ( iii ) the phases were drawn from a wrapped cauchy distribution ; see fisher ( 1993 ) . the fake sets of coefficients were created in a manner that ensured that the constraints on the coefficients given in equations ( 9 )( 11 ) were preserved . the first non uniform distribution of phases is constructed using the following relationship where is a random number chosen between 0 and 2π . the results of the simulation with are shown in panel ( i ) of tables [ tab : ngpr ] and [ tab : ngpdr ] . the sky is taken to be non - gaussian if is larger than 95 % of for a particular mode . the phase difference method should be particularly suited to detecting this kind of deviation from gaussianity because the phase differences are all equal in this case ; the distribution of phase differences is therefore highly concentrated rather than uniform . nevertheless , the non - uniformity is so clear that it is evident from the phases themselves , even for low modes . on the other hand , assessing non - uniformity from the quadrupole mode on its own is difficult even in this case because of the small number of independent variables available . the cardioid distribution was chosen because studies of phase evolution in the non - linear regime have shown that phases rapidly move away from their initial values ; however , they wrap around many multiples of 2π and the observed phases appear random ( chiang and coles 2000 ; watts et al . 2003 ) . this evolution can be distinguished by the phase differences , which appear to be drawn from a roughly cardioid distribution ( watts et al . 2003 ) . the cardioid distribution has a probability density function given by f(\theta ) = \frac{1}{2\pi}\left [ 1 + 2\rho\cos(\theta - \mu ) \right ] , \quad 0\leq\theta<2\pi , 0\leq\rho\leq1/2 , where \rho is the mean resultant length and \mu the mean direction . as \rho\rightarrow 0 the distribution converges to a uniform distribution . the kurtosis of the distribution is given by . sets of coefficients were produced from distributions with . the distributions themselves were generated using rejection methods . the results are given in panel ( ii ) of tables [ tab : ngpr ] and [ tab : ngpdr ] . the results with are not shown : they indicate the coefficients are taken from a gaussian random field , as expected . furthermore , with the non - uniformity detected in the phases and phase differences for is not particularly significant ; however , the kurtosis for this distribution is not especially large .
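as an illustration of the machinery described at the start of this section , the significance test could be organised as follows ; the function names are invented for the sketch , and the exact bin edges used in the paper are not fully recoverable from the text , so they are passed in explicitly .

```python
import numpy as np

def chi2_against_mean(hist, mean_hist):
    # chi-square distance between one binned v distribution and the mean null distribution
    mask = mean_hist > 0
    return float(np.sum((hist[mask] - mean_hist[mask]) ** 2 / mean_hist[mask]))

def phase_uniformity_significance(v_data, v_mc, bin_edges):
    # v_data:    the ~3000 resampled values of v from the observed sky
    # v_mc:      a list of arrays, one per monte carlo sky, resampled in the same way
    # bin_edges: e.g. 100 equally spaced bins (the exact edges are an assumption here)
    h_mc = np.array([np.histogram(v, bins=bin_edges)[0] for v in v_mc])
    mean_hist = h_mc.mean(axis=0)
    chi2_data = chi2_against_mean(np.histogram(v_data, bins=bin_edges)[0], mean_hist)
    chi2_mc = np.array([chi2_against_mean(h, mean_hist) for h in h_mc])
    # fraction of monte carlo skies less discrepant than the data; > 0.95 flags a departure
    return float(np.mean(chi2_mc < chi2_data))
```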
[ tab : ngpr ] results for non - gaussian skies based on phases between consecutive values of at a fixed . the results show the significance levels of detected departures from random phases for simulated distributions corresponding to ( i ) coupled phases , ( ii ) a cardioid distribution and ( iii ) a wrapped cauchy distribution . in the latter two cases various parameter choices are shown as described in the text . these results are obtained from 1000 monte carlo skies . the issue of non - gaussianity in the cobe - dmr data has been controversial ( ferreira , magueijo & gorski 1998 ; bromley & tegmark 1999 ; magueijo 2000 ) , but is likely to involve systematic problems identified in the data set we use . note however that the strong detection we have found is at a different harmonic mode than that originally claimed by ferreira et al . ( 1998 ) , who used a diagnostic related to the bispectrum . there need not be a conflict here because , as we discussed above , this method is not sensitive to the same form of phase correlations as the bispectrum . we do not wish to go further into the statistical properties of the cobe - dmr data here . the results we have presented are from the original data , which are now known to have suffered from some systematic problems ( banday , zaroubi & gorski 2000 ) . the effect we are seeing is therefore probably not primordial , making this largely an academic discussion . the data have also partly been superseded by wmap . it does , however , illustrate the importance of using complementary approaches to identify all possible forms of behaviour in data sets even when they are of experimental , rather than cosmological , origin . the wmap instrument comprises 10 differencing assemblies ( consisting of two radiometers each ) measuring over 5 frequencies ( 23 , 33 , 41 , 61 and 94 ghz ) . the wmap team have released an internal linear combination ( ilc ) map that combined the five frequency band maps in such a way as to maintain unit response to the cmb signal while minimising the foreground contamination . the construction of this map is described in detail in bennett et al . ( 2003 ) . to further improve the result , the inner galactic plane is divided into 11 separate regions and the weights determined separately . this takes account of the spatial variations in the foreground properties . the final map covers the full sky and should represent only the cmb signal , although it is not anticipated that foreground subtraction is perfect . the wmap ilc map is issued with estimated errors in the map - making technique . this uncertainty could be included in the monte carlo simulations of the null hypothesis used to construct sampling distributions of our test statistic . this would be difficult to do rigorously as the errors are highly correlated , but is possible in principle . we have decided not to include the experimental errors in this analysis because it is just meant to illustrate the method : a more exhaustive search for cmb phase correlations ( for which such considerations would be relevant ) will have to wait for cleaner maps than are available now .
following the release of the wmap 1 yr data , tegmark , de oliveira - costa & hamilton ( 2003 ; toh ) have produced a cleaned cmb map . they argued that their version contained less contamination outside the galactic plane compared with the internal linear combination map produced by the wmap team . the five band maps are combined with weights depending both on angular scale and on the distance from the galactic plane . the angular scale dependence allows for the way foregrounds are important on large scales whereas detector noise becomes important on smaller scales . toh also produced a wiener filtered map of the cmb that minimises the rms errors in the cmb . features with a high signal - to - noise are left unaffected , whereas statistically less significant parts are suppressed . while their cleaned map contains residual galactic fluctuations on very small angular scales probed only by the w band , these fluctuations vanish in the filtered map . the three all - sky maps are available in healpix format ( gorski , hivon & wandelt 1999 ) . we derived the for each map using the anafast routine in the healpix package . the results are shown in panels ( ii ) to ( iv ) in tables [ tab : wmapl ] and [ tab : wmapm ] . the results for the ilc map suggest there are departures from the random phase hypothesis for the phases of and 16 and the phase differences for =14 and 16 . the departure for =16 appears very strong , with the value being larger than 995 of the 1000 values obtained for the mc skies . for these data we have also explored the effect of measuring differences between the phases at different for the same value of . ( we did not present results in this case for the cobe - dmr data , as we only discussed the even -modes in that case and differences between adjacent would require every mode . ) notice that the confidence levels of non - uniformity are larger for at fixed than for at fixed . this is an interesting feature of the wmap data . to investigate this mode further we therefore reconstructed the temperature pattern on the sky using only the spherical harmonic modes for , with their appropriate , and compared this with a map with the same power spectrum but with random phases ( figure [ fig : ilc16 ] ) . the result is a striking confirmation of the above argument that our method is very sensitive to departures from statistical homogeneity . the modes are clearly forming a more structured pattern than the random - phase counterpart . the alignment of these modes with the galactic plane is striking ; it may be that the angular scale corresponding to relates to the width of the zones used to model the galactic plane . the two maps produced by tegmark et al . ( 2003 ; toh ) also suggest that the phases for the scale are non - random . the cleaned map shows departures from the random phase hypothesis at greater than 95% confidence for the phases of =16 and phase differences of and 16 . the wiener filtered map finds departures for the phases corresponding to and phase differences in =14 . both the cleaned and the wiener filtered maps of toh display similar band patterns to those observed in the map for the ilc ; we show the wiener filtered toh map for comparison.
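a hedged sketch of the single - multipole reconstruction used for figure [ fig : ilc16 ] is given below , assuming the healpy package and its packed ordering of the coefficients ; whether this matches the pipeline actually used for the figure is an assumption , and the function is written purely for illustration .

```python
import numpy as np
import healpy as hp

def single_ell_map(alm, lmax, ell, nside=64, randomize_phases=False, rng=None):
    # build a map from the m >= 0 coefficients of one multipole only (healpy packed alm);
    # with randomize_phases=True the moduli |a_{lm}| are kept but the phases are replaced
    # by uniform random ones, giving the random-phase counterpart used for comparison
    rng = rng or np.random.default_rng()
    out = np.zeros(hp.Alm.getsize(lmax), dtype=complex)
    for m in range(ell + 1):
        idx = hp.Alm.getidx(lmax, ell, m)
        c = alm[idx]
        if randomize_phases and m > 0:
            c = np.abs(c) * np.exp(1j * rng.uniform(0, 2 * np.pi))
        out[idx] = c
    return hp.alm2map(out, nside)

# usage sketch (ilc_map would be a full-sky healpix map of the ilc data):
# alm = hp.map2alm(ilc_map, lmax=20)
# data_16 = single_ell_map(alm, 20, 16)
# null_16 = single_ell_map(alm, 20, 16, randomize_phases=True)
```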
the morphological appearance of these fluctuations as a band across the galactic plane may be indicative of residual foreground still present in the cmb map after subtraction .this is the most likely explanation for the alignment of features relative to the galactic coordinate system , since the scanning noise is not aligned in this way .further evidence for this is provided by naselsky et al .2003 from cross correlations of the phases of cmb maps with various foreground components . in any caseone should certainly caution against jumping to the conclusion that this represents anything primordial .these are preliminary data with complex variations in signal - to - noise across the sky .nevertheless , the results we present here do show that the method we have presented is useful for testing realistic observational data for departures from the standard cosmological statistics .in this paper we have presented a new method of testing cmb data for departures from the homogeneous statistical behaviour associated with gaussian random fields generated by inflation .our method uses some of the information contained within the phases of the spherical harmonic modes of the temperature pattern and is designed to be most sensitive to departures from stationarity on the celestial sphere .the method is relatively simple to implement , and has the additional advantage of being performed entirely in `` data space '' .confidence levels for departures from the null hypothesis are calculated forwards using monte carlo simulations rather than by inverting large covariance matrices .the method is non - parametric and , as such , makes no particular assumptions about data .the main statistical component of our technique is a construction known as kuiper s statistic , which is a kind of analogue to the kolmogorov - smirnov test , but for circular variates . in order to illustrate the strengths and weaknesses of our approachwe have applied it to several test data sets . to keep computational costs down , but also to demonstrate the usefulness of non - parametric approaches for small data sets , we have concentrated on low order spherical harmonics .we first checked the ability of our method to diagnose non - uniform phases of the type explored by watts , coles & melott ( 2003 ) and matsubara ( 2003 ) .the method is successful even for the low order modes that have few independent phases available .we then applied the method to quadratic non gaussianity fields of the form discussed by watts & coles ( 2003 ) and liguori , matarrese & moscardini ( 2003 ) .our method is not sensitive to the particular form of phase correlation displayed by such fields as , although non gaussian , they are statistically stationary .phase correlations are present in such fields , but do not manifest themselves in the simple phase or phase - difference distributions discussed in this paper .these two examples demonstrate that our method is a useful diagnostic of statistical non - uniformity and its applicability to non gaussian fields is likely to be restricted to cases where the pattern contains strongly localized features , such as cosmic strings or textures ( e.g. 
contaldi et al . 2000 ) . we next turned our attention to the cobe dmr data . claims and counter - claims have already been made about the possible non gaussian nature of these data , the most likely explanation of the observed features being some form of systematic error . our test does show up non - uniformity in the phase distribution at high confidence levels for the mode and , less robustly , at . this is a different signal to that claimed previously to be indicative of non gaussianity . its interpretation remains unclear . finally we applied our method to various representations of the wmap preliminary data release . in this case we find clear indications at and , less significantly , at . the distribution of the harmonics on the sky shows that this result is not a statistical fluke . the data are clearly different from a random phase realisation with the same amplitudes and there is certainly the appearance of some correlation of this pattern with the galactic coordinate system . we are not claiming that this proves there is some form of primordial non gaussianity in this dataset . indeed the wmap data come with a clear warning that their noise properties are complex , so it would be surprising if there were no indications of this in the preliminary data . it will be useful to apply our method to future releases of the data in order to see if the non - uniformity of the phase distribution persists as the signal - to - noise improves . at the moment , however , the most plausible interpretation of our result is that it represents some kind of galactic contamination , consistent with other ( independent ) claims ( dineen & coles 2003 ; chiang et al . 2003 ; eriksen et al . 2003 ; naselsky et al . 2003 ) . as a final comment we mention that the aptitude of our method for detecting spatially localised features ( or departures from statistical homogeneity generally ) suggests that it may be useful as a diagnostic of the repeating fluctuation pattern produced on the cmb sky in cosmological models with compact topologies ( levin , scannapieco & silk 1998 ; scannapieco , levin & silk 1999 ; rocha et al . 2002 ) ; see levin ( 2002 ) for a review . we shall return to this issue in future work . we acknowledge the use of the legacy archive for microwave background data analysis ( lambda ) . support for lambda is provided by the nasa office of space science . the cobe datasets were developed by the nasa goddard space flight centre under the guidance of the cobe science working group and were provided by the nssdc . we acknowledge nasa and the wmap science team for their data products . we thank joao magueijo for supplying us with the spherical harmonic coefficients he used for the cobe 4-year data ( magueijo 2000 ) . we also thank michele liguori for letting us use his non gaussian simulations . we are grateful to pedro ferreira , sabino matarrese and mike hobson for useful discussions related to this work . a preliminary version of some of the work presented in this paper was performed by john earl and dean wright as an undergraduate project , which subsequently won a prize awarded by the software company tessella . patrick dineen is supported by a university research studentship . this work was also partly supported by pparc grant number ppa / g / s/1999/00660 . albrecht a. , steinhardt p.j . , 1982 , phys . rev . lett . , 48 , 1220 banday a.j . , zaroubi s. , gorski k.m . , 2000 , apj , 533 , 575 bardeen j.m . , bond j.r . , kaiser n. , szalay a.s . , 1986 , apj , 304 , 15 bartolo n. , matarrese s. , riotto a. , 2002 , phys . rev . d.
65 , 103505 bennett c.l .et al . , 1996 ,apj , 464 , l1 bennett c.l .et al . , 2003 ,apj , 538 , 1 bennet c.l .et al . , 2003 ,apjs , 148 , 1 bond j.r . , efstathiou g. , 1987 , mnras , 226 , 655 bromley b.c ., tegmark m. , 1999 , apj , 524 , l79 chiang l .- y . , 2001 , mnras , 325 , 405 chiang l .- y . , coles p. , 2000 ,mnras , 311 , 809 chiang l .- y . ,coles p. , naselsky p.d ., 2002 , mnras , 337 , 488 chiang l .- y ., naselsky p.d . , coles p. , 2002, astro - ph/0208235 chiang l .- y ., naselsky p.d ., verkhodanov o.v . ,way m.j . , 2003 , apj , 590 , l65 coles p. , barrow j.d . , 1987 , mnras , 228 , 407 coles p. , 1988 ,mnras , 234 , 509 coles p. , jones b.j.t . , 1991 , mnras , 248 , 1 coles p , chiang l.y . ,2000 , nat , 406 , 376 colley w.n . ,gott j.r . , 2003 , mnras , 344 , 686 contaldi c.r . , bean r. , magueijo j. , 2000 , phys . lett . b. , 468 , 189 contaldi c.r . , ferreira p.g . , magueijo j. , gorski k.m . , 2000 , apj , 534 , 25 cooray a. , 2001 , phys . rev .d. , 64 , 043516 dineen p. , coles p. , 2003 ,mnras , in press , astro - ph/0306529 edmonds a.r . , 1960 ,angular momentum in quantum mechanics , 2nd edition .princeton university press , princeton .eriksen h.k . ,hansen f.k . ,banday a.j . ,gorski k.m . ,lilje p.b ., 2003 , astro - ph/0307507 ferreira p.g . , magueijo j. , gorski k.m . , 1998 ,apj , 503 , l1 fisher n.i . , 1993 , statistical analysis of circular data .cambridge university press , cambridge .gangui a. , martin j. , sakellariadou m. , 2002 , phys .d. 66 , 083502 gangui a. , pogosian l. , winitzki s. , 2001 , phys .d. 64 , 043001 grski k.m ., hivon e. , wandelt b.d . , 1999 , in proceedings of the mpa / eso conference _ evolution of large - scale structure _ , eds .banday , r.s .sheth and l. da costa , printpartners ipskamp , nl , pp .37 - 42 ( also astro - ph/9812350 ) gupta s. , berera a. , heavens a.f . ,matarrese s. , 2002 , phys . rev .d. , 66 , 043510 guth a.h ., 1981 , phys .d. , 23 , 347 guth a.h . , pi s .- y . , 1982 , phys ., 49 , 1110 hansen f.k ., marinucci d. , vittorio n. , 2003 , phys . rev .d. , 67 , 123004 heavens a.f ., 1998 , mnras , 299 , 805 hikage c. , matsubara t. , suto y. , 2003 , apj submitted .hinshaw g.f .et al . , 2003 , apjs , 148 , 135 hu w. , 2001 , phys .d. , 64 , 083005 jain b. , bertschinger e. , 1996 , apj , 456 , 43 jain b. , bertschinger e. , 1998 , apj , 509 , 517 komatsu e. , et al . , 2002 , apj , 566 , 19 komatsu e. , et al . , 2003 , apjs , 148 , 119 kuiper n.h . , 1960 , koninklijke nederlandse akademie van wetenschappen , proc .a , lxiii , pp .38 - 49 levin j. , 2002 , phys, 365 , 251 levin j. , scannapieco e. , silk j. , 1998 , phys .d. , 58 , 103516 liguori m. , matarrese s. , moscardini l. , 2003 , apj , 597 , 57 linde a.d ., 1982 , phys .b. , 108 , 389 linde a.d ., mukhanov v. , 1997 , phys . rev .d. , 56 , 535 luo x , 1994 , apj , 427 , l71 magueijo j. , 2000 , apj , 528 , l57 martin j. , riazuelo a. , sakellariadou m. , 2000 , phys .d. , 61 , 083518 matarrese s. , verde l. , heavens a.f . , 1997 , mnras , 290 , 651 matarrese s. , verde l. , jimenez r. , 2000 , apj , 541 , 10 matsubara t. , 2003 , apj , 591 , l79 naselsky p.d . , verkhodanov o.v ., chiang l .- y . ,novikov i.d . , 2003 ,apj , submitted , astro - ph/0310235 pando j. , valls - gabaud d. , fang l. , 1998 , phys .81 , 4568 peebles p.j.e ., 1980 , the large - scale structure of the universe .princeton university press , princeton .phillips n.g .kogut a. , 2001 , apj , 548 , 540 rocha g. , cayon l. , bowen r. , canavezes a. 
, silk j , banday a.j ., gorski k.m . , 2002 , mnras , submitted , astro - ph/0205155 ryden b.s . , gramann m. , 1991 , apjl , 383 , l33 sachs r.k . , wolfe a.m. , 1967 , apj , 147 , 73 sandvik h.b . , magueijo j. , 2001 , mnras , 325 , 463 santos m.g .et al . , 2001 ,, 88 , 241302 scannapieco e. , levin j. , silk j. , 1999 , mnras , 303 , 797 scherrer r.j . , melott a.l . , shandarin s.f . , 1991 , apj , 377 , 29 scoccimarro r. , colombi s. , fry j.n ., frieman j.a . , hivon e. , melott a.l . , 1998 ,apj , 496 , 586 smoot g.f .et al . , 1992 ,apj , 396 , l1 soda j. , suto y. , 1992 , apj , 396 , 379 starobinsky a.a . , 1979 , pisma zh .fiz . , 30 , 719 starobinsky a.a . , 1980 ,b. , 91 , 99 starobinsky a.a ., 1982 , phys .b. , 117 , 175 stirling a.j . , peacock j.a ., 1996 , mnras , 283 , l99 tegmark m. , de oliveira - costa a. , hamilton a.j.s . , 2003 , astro - ph/0302496 varshalovich d.a . , moskalev a.n ., khersonskii v.k . , 1988 , quantum theory of angular momentum .world scientific , singapore .verde l. , heavens a.f . , 2001 ,apj , 553 , 14 verde l , jimenez r. , kamionkowski m. , matarrese s. , 2001 , mnras , 325 , 412 verde l , wang l. , heavens a.f . ,kamionkowski m. , 2000 , mnras , 313 , 141 verde l. et al .2002 , mnras , 335 , 432 watts p.i.r . , coles p. , 2003 ,mnras , 338 , 806 watts p.i.r ., coles p. , melott a.l ., 2003 , apj , 589 , l61
we study the statistical properties of spherical harmonic modes of temperature maps of the cosmic microwave background . unlike other studies , which focus mainly on properties of the amplitudes of these modes , we look instead at their phases . in particular , we present a simple measure of phase correlation that can be diagnostic of departures from the standard assumption that primordial density fluctuations constitute a statistically homogeneous and isotropic gaussian random field , which should possess phases that are uniformly random on the unit circle . the method we discuss checks for the uniformity of the distribution of phase angles using a non - parametric descriptor based on the use of order statistics , which is known as kuiper s statistic . the particular advantage of the method we present is that , when coupled to the judicious use of monte carlo simulations , it can deliver very interesting results from small data samples . in particular , it is useful for studying the properties of spherical harmonics at low , for which there are only a small number of independent values of and which therefore furnish only a small number of phases for analysis . we apply the method to the cobe - dmr and wmap sky maps , and find departures from uniformity in both . in the case of wmap , our results probably reflect galactic contamination or the known variation of signal - to - noise across the sky rather than primordial non - gaussianity . cosmic microwave background ; cosmology : theory ; large - scale structure of the universe ; methods : statistical
we consider the problem of communicating over an unknown and arbitrarily varying channel , with the help of feedback .we would like to minimize the assumptions on the communication channel as much as possible , while using the feedback link to learn the channel .the main questions with respect to such channels are how to define the expected communication rates , and how to attain them universally , without channel knowledge . the traditional models for unknown channels are compound channels , in which the channel law is selected arbitrarily out of a family of known channels , and arbitrarily varying channels ( avc s ) , in which a sequence of channel states is selected arbitrarily .the well known results for these models do not assume adaptation .therefore , the avc capacity , which is the supremum of the communication rates that can be obtained with vanishing error probability over any possible occurrence of the channel state sequence , is in essence a worst - case result .for example , if one assumes that , the channel output at time , is determined by the probability law where is the channel input , and is an arbitrary sequence of conditional distributions , clearly no positive rate can be guaranteed a - priori , as it may happen that all have zero capacity , and therefore the avc capacity is zero. this capacity may be non - zero only if a constraint on is defined . in this paperwe use the term `` arbitrarily varying channel '' in a loose manner , to describe any kind of unknown and arbitrary change of the channel over time , and the acronym `` avc '' to refer to the traditional model .other communication models , which allow positive communication rates over such avc s were proposed by the authors and others .although the channel models considered in these papers are different , the common feature distinguishing them from the traditional avc setting is that the communication rate is adaptively modified using feedback .the target rate is known only a - posteriori , and is gradually learned throughout the communication process . by adapting the rate , one avoids worst case assumptions on the channel , and can achieve positive communication rates when the channel is good .however , in the aforementioned communication models , the distribution of the transmitted signal is fixed and independent of the feedback , and only the rate is adapted . specifically in the `` individual channel '' model for reasons explained therein , the distribution of the channel input is fixed to a predefined prior .likewise , eswaran show that for a fixed prior , the mutual information of the averaged channel can be attained .clearly , with this limitation these systems are incapable of universally attaining the channel capacity in many cases of interest .for example , consider even the simple case where the channel is a compound memoryless channel , i.e. the conditional distributions are all constant but unknown . in the last paper ,the problem of universal communication was formulated as that of a competition against a reference system , comprised of an encoder and a decoder with limited capabilities . for the case where the channel is modulo - additive with an individual , arbitrary noise sequence ,it was shown possible to asymptotically perform at least as well as any finite - block system ( which may be designed knowing the noise sequence ) , without prior knowledge of the noise sequence . 
however, this result crucially relies on the property of the modulo - additive channel , that the capacity achieving prior is the uniform i.i.d .prior for any noise distribution . to extend the result to more general models, we would like to be able to adapt the input behavior .the key parameter to be adapted is the `` prior '' , i.e. the distribution of the codebook ( or equivalently the channel input ) , since it plays a vital role in the converse as well as the attainability proof of channel capacity and is the main factor in adapting the message to the channel . in a crude waywe may say that previous works achieve various kinds of `` mutual information '' for a fixed prior and any channel from a wide class , by mainly solving problems of universal decoding and rate adaptation . however to obtain more than the `` mutual information '' , i.e. the `` capacity '', one would need to select the prior in a universal way .prior adaptation using feedback is a well known practice for static or semi - static channels .two familiar examples are bit and power loading performed in digital subscriber lines ( dsl - s ) , and precoding for in multi - antenna systems which is performed in practice in wireless standards such as wifi , wimax and lte . if the channel can be assumed to be static for a period of time sufficient to close a loop of channel measurement , feedback and coding , then an input prior close to the optimal one can be chosen . in the theoretical setting of the compound memoryless channel where , where is unknown but fixed , a system with feedback can asymptotically attain the channel capacity of , without prior knowledge of it , by using an asymptotically small portion of the transmission time to estimate the channel , and using an estimate of the optimal prior and the suitable rate during the rest of the time .all models for prior adaptation that we are aware of , use the assumption that the knowledge of the channel at a given time yields non trivial statistical information about future channel states , but do not deal with arbitrary variation .the question that we deal with in this paper is : assuming a channel which is _ arbitrarily _ changing over time , is there any merit in using feedback to adapt the input distribution , and what rates can be guaranteed ? as a target , we would have liked to consider the most general variation of the channel ( as in the unknown vector channel model ) , however to start our exploration , we focus on channel models which are memoryless in the input , i.e. whose behavior at a certain time does not depend on any previous channel _ inputs_.the most general model that does not include memory of the input is that of an unknown sequence of memoryless channels ( which is in essence an avc without constraints ) and this is the main model considered in this paper .the motivation for avoiding memory of the input can be appreciated by considering the negative examples in .we now give a brief overview of the structure and the results of this paper . in section [ sec : problem_statement_and_notation ] we state the problem , and define several communication rates ( as a function of the channel sequence ) that would be of interest . 
in order to focus thoughts on questions related to the problem of determining the _ prior _ , we initially adopt an abstract model of the communication system , stripping off the details of communication , such as decoding , channel estimation , overheads , error probability , etc .we begin by presenting an easier synthetic problem , in which all previous channels are known ( section [ sec : toy_problem ] ) .this problem may represent a channel which changes its behavior in a block - wise manner and remains i.i.d .memoryless during each block ( a subset of the original problem ) .this problem is related to standard prediction problems ( section [ sec : toy_categorization ] ) , and used as a tool to gain insight into the prediction problem involved , present bounds on what can be achieved universally , and develop the techniques that will be used later on .furthermore , we show that even for this easier problem there is no hope to attain the channel capacity universally and we would have to settle for lower rates ( section [ sec : regret_lb ] ) .the attained rate is the maximum over the prior , of the averaged mutual information ( theorem [ theorem : prior_predictor_exp ] ) . in section [ sec : arbitrary_channel_var ] , we return to the main problem , and show that the rate that can be attained when the past channel is not known , but is estimated from the output , is lower .we focus on the capacity of the time - averaged channel .we show this rate is the best achievable rate that does not depend on the order of the channel sequence ( theorem [ theorem : c_overlinew_optimality ] ) , and present the main result showing that this rate is indeed achievable ( theorem [ theorem : c_overlinew_achievability ] ) .furthermore , this rate meets or exceeds the avc capacity , and essentially equals the `` empirical capacity '' defined by eswaran .we present a scheme based on rateless coding and combines a prior predictor that attains this rate . in section [ sec : arbitrary_var_exp_prior_predictor ] , the prior predictor is developed under abstract assumptions regarding the channel estimation and decoding rate . in section [ sec :mainproof ] , we present and analyze the full communication system and prove the main result .finally , section [ sec : discussion ] is devoted to discussion and comments .we denote random variables by capital letters and vectors by boldface . however for probabilitieswhich are sometimes treated as vectors we use regular capital letters .we apply superscript and subscript indices to vectors to define subsequences in the standard way , i.e. , denotes the mutual information obtained when using a prior over a channel , i.e. it is the mutual information between two random variables with the joint probability . denotes the channel capacity . for discrete channels ,the channel is sometimes presented as a matrix where is in the -th column and the -th row .logarithms and all information quantities are base unless specified otherwise .we denote by the unit simplex , i.e. the set of all probability measures on . denotes a bernoulli random variable with probability to be . 
denotes an indicator function of an event or a condition , and equals 1 if the event occurs and 0 otherwise . we use `` '' to denote simple mathematical inductions , where the same rule is repeatedly applied , for example . a hat denotes an estimated value , and a line denotes an average value . the empirical distribution of a vector of length is a function representing the relative frequency of each letter , where the subscript identifies the vector . the conditional empirical distribution of two equal length vectors is defined as let be sets defining the input and output alphabets , respectively . both are assumed to be finite , unless stated otherwise ( section [ sec : arbitrary_channel_var ] , among others , does not require finiteness ) . let be a sequence of memoryless channels over channel uses . each is a conditional distribution where and represent an input and output symbol respectively . the conditional distribution of the output vector given the input vector is given by : the sequence of channels is arbitrary and unknown to the transmitter and the receiver . we assume the existence of common randomness ( i.e. that the transmitter and the receiver both have access to some random variable of choice ) . there exists a feedback link between the receiver and the transmitter . to simplify , we assume the feedback is completely reliable , has unlimited bandwidth and is instantaneous , i.e. arrives at the encoder before the next symbol . we assume the system is rate adaptive , which means that the message is represented by an infinite bit sequence , and the system may choose how many bits to send . the error probability is measured only over the bits which were actually sent ( i.e. over the first bits , where is the rate reported by the receiver ) . the system setup is presented in figure [ fig : system_adaptive ] . ( figure [ fig : system_adaptive ] is a block diagram : the message and the common randomness enter the transmitter , which feeds the channel ; the receiver outputs the decoded message and the rate , and sends feedback back to the transmitter ; the common randomness is shared by the transmitter and the receiver . ) to simplify , we assume that there are no constraints on the channel input ( such as power constraints ) . if such constraints exist they can be accommodated by changing the set of potential priors . since the channel sequence is arbitrary there is no positive rate which can be guaranteed a - priori . instead , we define a target rate as a function of the channel sequence . [ def : attainability_of_rw ] we say that a sequence of rate functions is asymptotically attainable , if for every there is large enough such that there is a system with feedback and common randomness over channel uses , in which , for _ every _ sequence , the rate is or more , with probability of at least , while the probability of error is at most . in the next section we propose several potential target rates and then ask which of these are attainable .
with respect to the sequence can define various meaningful information theoretic measures .the maximum possible rate of reliable communication is the capacity when the sequence is known a - priori ( in other words , the capacity with full , non causal , channel state information at the transmitter and the receiver ) and is given by : note that if constraints on the sequence existed , then we would have an equality . the maximum rate that can be obtained with a single _ fixed _ prior when the sequence is known is : , the capacity of the time - averaged channel is : where we define the time - averaged channel as clearly , where the first inequality results from the order of maximization and the other results from the convexity of the mutual information with respect to the channel .for each of the above target rates we would like to find out whether it is achievable under the definitions above .as we shall see , is not achievable , is achievable , and is achievable only under further constraints imposed on the problem . a rigorous proof that is the capacity of the channel sequence is left out of the scope of this paper . for our purpose , it is sufficient to observe that is an upper bound on the achievable rate , because the mutual information between channel input and output is maximized by a memoryless ( not i.i.d . ) input distribution . to see intuitively how can be achieved ,consider that since can be arbitrarily large while the input and output alphabets , and thus the set of channels , remain constant , we may sort the channels into groups of similar channels , and apply block coding to each group .a close result pertaining to stationary ergodic channels appears in ( * ? ? ?* ( 3.3.5 ) ) .in this section we present a synthetic problem , which will help us examine the achievability of the target rates defined above in a simplified scenario , draw the links to universal prediction , and introduce the techniques that will be used in the sequel .we focus on the problem of setting a prior at time .we assume that at each time instance , the system has full knowledge of the sequence of past channels .the prior prediction mechanism sets based on the knowledge of .then , we assume that bits are conveyed during time instance .a predictor attains a given target rate if for all sequences we have , and .this abstract problem can apply to a situation where the channel sequence is constant during long blocks , and changes its value only from block to block , or from one transmission to another . in this case denotes the block index , and denoting by the constant block length , at most bits can be sent in block .if the channel is constant over long blocks it is reasonable to assume that past channels can be estimated .note that in addition we made the assumption that is achievable , although this communication rate is unknown to the transmitter in advance , i.e. we ignored the problem of rate adaptation .therefore the synthetic problem is a subset of the original problem and upper bounds that we show here apply also to the original problem .we begin by discussing the achievability of for the synthetic problem .the target rate is special in being an additive function for each value of . 
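as an aside , the capacity of the time - averaged channel is straightforward to evaluate numerically once the sequence of channels is given as row - stochastic matrices : average the matrices and run the standard blahut - arimoto iteration . the sketch below ( in python ) is purely illustrative and is not the coding scheme analysed in this paper ; the function names are assumptions made for the example .

```python
import numpy as np

def mutual_information(p, W):
    # i(p, w) in bits for an input distribution p and a row-stochastic channel matrix w[x, y]
    q = p @ W                                   # induced output distribution
    qs = np.where(q > 0, q, 1.0)                # guard unused output symbols
    Ws = np.where(W > 0, W, 1.0)                # guard zero transition probabilities
    return float(np.sum(p[:, None] * W * np.log2(Ws / qs)))

def blahut_arimoto(W, iters=200):
    # capacity max_p i(p, w) of a discrete memoryless channel by the blahut-arimoto iteration
    p = np.full(W.shape[0], 1.0 / W.shape[0])
    for _ in range(iters):
        q = p @ W
        qs = np.where(q > 0, q, 1.0)
        Ws = np.where(W > 0, W, 1.0)
        d = np.exp(np.sum(W * np.log(Ws / qs), axis=1))   # exp of per-input kl divergence
        p = p * d
        p /= p.sum()
    return mutual_information(p, W), p

def averaged_channel_capacity(channels):
    # capacity of the time-averaged channel for a list of channel matrices w_1, ..., w_n
    W_bar = np.mean(np.stack(channels), axis=0)
    return blahut_arimoto(W_bar)
```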
universally attaining under the conditions specified above , falls into a widely studied category of universal prediction problems .below , we present this class of problems and review some results that will be important for our discussion .these prediction problems have the following form : let be a strategy in a set of possible strategies , and be a state of nature . a loss function associates a loss with each combination of a strategy and a state of nature .the total loss over occurrences is defined as . the universal predictor assigns the next strategy given the past values of the sequence , and before seeing the current value .there is a set of reference strategies ( sometimes called experts ) , which are visible to the universal predictor .the target of universal prediction is to provide a predictor which is asymptotically and universally better than any of the reference strategies , in the sense defined below . for a given sequence , denote the losses of the universal predictor and the reference strategies as and , respectively .denote the regret of the universal predictor with respect a specific reference strategy as the excessive loss : is a function of the sequence and the predictor .the target of the universal predictor is to minimize the worst case regret , i.e. attain the reference strategies may be defined in several different ways . in the simplest form of the problemthe competition is against the set of fixed strategies .the exact minimax solution is known only for very specific loss functions , and a solution guaranteeing is not known for general loss functions .however there are many prediction schemes which perform well for a wide range of loss functions ( see references above ) . in the information theoretic framework, the log - loss , where is a probability distribution over is the most familiar loss function , and used in analyzing universal source encoding schemes , since represents the optimal encoding length of the symbol when assigned a probability .it exhibits an asymptotical minimax regret of .however in the more general setting the asymptotical minimax regret decreases in a slower rate of .there are several loss functions which are characterized by a `` smoother '' behavior for which better minimax regret is obtained ( * ? ? ?* theorem 3.1 , proposition 3.1 ) .for some of these loss functions , a simple forecasting algorithm termed `` follow the leader '' ( fl ) can be used ( * ? ? ?* theorem 1 ) . in fl ,the universal forecaster picks at every iteration the strategy that performed best in the past , i.e. minimizes the cumulative loss over the instances from to .the archetype of loss functions for which it is not possible to obtain a better convergence rate than is the absolute loss , where and ] , the channel can be defined as follows : if then , otherwise , .these channels are depicted in figure [ fig : w0w1_example_channels ] , where transitions are denoted by solid lines for probability , and dashed lines for probability .we consider the same prediction problem , under the simplifying assumption that the channel is chosen only between the two channels above , and the forecaster knows this limitation , i.e. 
only the sequence of states is unknown .it is clear from convexity of the mutual information , and the symmetry with respect to ( interchanging the values of leads to the same mutual information ) , that any solution can only be improved by taking a uniform distribution over .therefore , without loss of generality , the input distribution can be defined by a single value ] . for this choicethe output will always be uniformly distributed .we have : and similarly , therefore we can write : hence , even under this limited scenario , the loss function behaves like the absolute loss function , and therefore the normalized minimax regret ( and the redundancy in attaining ) is at least .note that the relation to the absolute loss implies that the simple fl predictor , can not be applied to our problem .an example to illustrate this and some further details are given in appendix [ sec : fl_failure_example ] .since in the rest of the paper we will focus on the rate function , it is interesting to note that , although this rate is smaller , in general , than , the minimum redundancy in obtaining it can not be better than . to show this , we only need to show that in the context of the counter - example shown above , . for a specific sequence of channels , denote by the relative frequency with which channel appears .the averaged channel is .it is easy to see that the capacity of this channel is obtained by placing the entire input probability on the two useful inputs of the channel that appears most of the time .that is , if we place the input probability on the useful inputs of and obtain the rate , and otherwise obtain .hence the capacity of the averaged channel is .on the other hand , } \left ( ( 1-p ) \cdot ( 1-q ) + p q \right ) = \max(p,1-p ) .\end{split}\ ] ] using the example above , we can also see why is not universally achievable with an asymptotically vanishing normalized regret by a sequential predictor . in the example, the capacities of the two channels are .suppose the sequence of channel states is generated randomly i.i.d . .then for any sequential predictor of , the expected loss in each time instance is = \half ( 1-q ) + \half q = \half ] : returning to we have : therefore recursively applying : notice that .this completes the first part of showing that the increase in is bounded . for the second part we shall use the following lemma which relates the exponential weighting of a function to its maximum , andis proven in appendix [ sec : proof_of_lemma1 ] : [ lemma : f_exp_weight_ub ] let be a real non - negative bounded function ] . on the other hand, is the maximum achievable rate which is independent of the order of the sequence , or , in other words , which is fixed under permutation of the sequence .this observation is formalized in the following theorem : [ theorem : c_overlinew_optimality ] let ( for ) be a sequence of rate functions , which are oblivious to the order of . 
if the sequence is asymptotically attainable according to definition [ def : attainability_of_rw ] , then there exists a sequence such that .note that depends on through the average over channels .since both and are oblivious to the order of , theorem [ theorem : c_overlinew_optimality ] implies they are not achievable .following is a rough outline of the proof .consider the channel generated by uniformly drawing a random permutation of the indices , using the channels in a permuted order .if a system guarantees a rate , which is fixed under permutation , then this rate would be fixed for all drawing of , and therefore for the channel we described , the system can guarantee the rate a - priori .hence , the capacity of this channel must be at least .the next stage is to show that the feedback capacity of this channel is at most . due tothe fact we select the channels from the set without replacement , the proof is a little technical and will be deferred to appendix [ sec : proof_cw_max_rate ] .however to give an intuitive argument , if we replace the channel described above , by a similar channel , obtained by randomly drawing at each time instance one of , this time _ with _ replacement , then this new channel is simply the dmc with channel law .therefore feedback does not increase the capacity and its feedback capacity is simply .the main point in the proof is to show there is no difference in feedback - capacity between the two channels , and the main tool is hoeffding s bounds on sampling without replacement .another interesting property of the rate is that it meets or exceeds the random - code capacity of any memoryless avc defined over the same alphabet , and thus attaining yields universality over all avc s ( see section [ sec : discussion_avc ] ) . through the relation to avc capacitywe can see that common randomness is essential to obtain , as it is essential for obtaining the random - code capacity . after settling for ,the next question that naturally arises is : what is the best convergence rate of the regret , with respect to this target ? in section [ sec : regret_lb ] we have shown that even in the context of the synthetic problem of section [ sec : toy_problem ] ( with full knowledge of past channels ) , the regret with respect to is at least , and this lower bound naturally holds in the current problem , where only partial knowledge of past channels is available .the following theorem formalizes claim that is achievable according to definition [ def : attainability_of_rw ] : [ theorem : c_overlinew_achievability ] for every there exists and a constant , such that for any there is an adaptive rate system with feedback and common randomness , where for the problem of section [ sec : problem_setting ] , over any sequence of channels : 1 .the probability of error is at most 2 .the rate satisfies with probability at least 3 . [ corollary : symbolwise_random_numerical ] specific values for can be obtained as follows .let be parameters of choice .then the constants and are given in the proof , by , , where constants used in these equations are defined in , , , - , .for any , and .[ corollary : symbolwise_random_prior_predictor1 ] the same holds if is determined ( e.g. by an adversary ) as a function of the message and all previous channel inputs and outputs .a numerical example is given after the proof ( example [ example : symbolwise_random_prior_predictor ] ) .the proof of the theorem is given in section [ sec : mainproof ] . 
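before moving on to the communication scheme , the following small simulation revisits the two - channel example above ( example [ example : prediction_channel1 ] ) and the remark that the simple fl predictor can not be used , in the spirit of appendix [ sec : fl_failure_example ] . the per - symbol rate is reduced to its q / ( 1 - q ) form derived above ( constants dropped ) , and the channel sequence is chosen adaptively against the fl predictor ; everything here is an illustrative sketch , not the paper's construction .

```python
# per-symbol rate when a fraction q of the input mass sits on the input pair that is
# noiseless under w1 and 1-q on the pair that is noiseless under w0 (constants dropped)
def rate(q, channel):
    return q if channel == 1 else 1.0 - q

T = 1000
counts = [0, 0]                      # how many times each channel has been seen so far
fl_total = fixed_total = 0.0
for _ in range(T):
    q_fl = 1.0 if counts[1] > counts[0] else 0.0   # follow the leader
    ch = 0 if q_fl == 1.0 else 1                   # worst-case next channel against fl
    fl_total += rate(q_fl, ch)
    fixed_total += rate(0.5, ch)                   # the fixed prior q = 1/2 as reference
    counts[ch] += 1

print(fl_total / T, fixed_total / T)               # roughly 0.0 versus 0.5
```

fl keeps jumping to the inputs of the channel it has seen most often and earns essentially nothing , while the fixed prior q = 1/2 earns half of the maximal rate on the very same sequence ; this is the behavior that motivates the smoother , exponentially weighted predictors used in the sequel .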
in this section we give the communication scheme , up to some details which will be completed later on ( section [ sec : mainproof_decoding_cond ] ) . one of the issues that we ignored in the synthetic problem is the determination of the rate before knowing the channel . to solve this problem we use rateless codes . we divide the available time into multiple such blocks as done by eswaran and in . we fix a number of bits per block . in each block , bits from the message string are sent . at each block , a codebook of codewords is generated randomly and i.i.d . ( in time and message index ) according to the prior . is determined by a prediction scheme which is specified below . the random drawing of the codewords is carried out by using the common randomness , and the codebook is known to both sides . the relevant codeword matching the message sub - string is sent to the receiver symbol by symbol . at each symbol of the block and for each codeword in the codebook , the receiver evaluates a decoding condition that will be specified later on . roughly speaking , the condition measures whether there is enough information from the channel output to reliably decode the message . the receiver decides to terminate the block if the condition holds , and informs the transmitter . when this happens , the receiver determines the decoded codeword as one of the codewords that satisfied . then , using the known channel output , and the decoded input over the block which was decoded , the receiver computes an estimate of the averaged channel over the block . the specific estimation scheme will be specified in section [ sec : mainproof_decoding_cond ] . the receiver calculates a new prior for the next block according to the prediction scheme that will be specified below . the receiver sends the new prior to the transmitter . alternatively , the receiver may send the estimated channel , and the new prior can be calculated at each side separately . the new block starts at the next symbol , and the process continues , until symbol is reached . the last block may terminate before decoding .

[ figure : an illustration of the combination of a rateless scheme with prior prediction . each box represents a rateless block in which bits are transmitted . ]

in this section we present the prediction algorithm . we denote by the index of the block , and by the averaged channel over the block , i.e.
if the block starts at symbol and ends at , then .the length of the -th block is denoted .we use an exponentially weighted predictor mixed with a uniform prior .the motivation for using the uniform prior is explained in the next section .let be the uniform prior over .we define the predictor as : where where is an estimate of the mutual information of the averaged channel over block , , and is interpreted as an estimate of the number of bits that would have been sent with the alternative prior .this estimate is defined later on in section [ sec : mainproof_attained_rate ] .the parameters and will be chosen later on . is the potential function defined in .the term normalizes to .the following lemma formalizes the claim that the predictor resulting of - , asymptotically achieves a rate : [ lemma : rateless_prior_predictor_lemma ] let , be a set of non - negative concave functions of the prior , let denote a set of non - negative numbers , and be arbitrary positive constants satisfying and .define the target rate define the actual rate over channel uses as : define the sequential predictor as the result of and .let satisfy : then for the value of specified below it is guaranteed that : where and the value of attaining the result above is : the lemma is proven in appendix [ sec : proof_rateless_prior_predictor_lemma ] .the proof uses similar techniques to those introduced in section [ sec : toy_performance ] , however , different from the previous analysis , due to mixing with the uniform prior , the `` blackwell '' condition ( in the previous case ) only approximately holds . on the other hand ,the use of the uniform prior enables relating to for any other , and thus obtain from an upper bound on the gain related to an alternative prior .the trade - off between the two is expressed in the two last factors in , one of which is increasing with and the other decreasing . since by , , the claim of the lemma appears similar to theorem [ theorem : prior_predictor_exp ] , with taking the place of the function .however two important properties of the lemma , distinguishing it from the rather standard claim of theorem [ theorem : prior_predictor_exp ] are that the bound does not depend on the number of blocks ( i.e. the number of prediction steps ) , and that no upper bound on is assumed . the rate represents a bound on mutual information , but in the context of the lemma it enough to consider it as an arbitrary rate that caps .it affects the setting of and the resulting loss .also , does not have to correspond to the actual number of symbols and serves here merely as a scaling parameter for the communication rate .the lemma sets a value of but not for , since will have additional roles in the next section . in this section a motivation for the prediction algorithm , and especially for the use of the uniform priorunder abstract assumptions it is shown to achieve the capacity of the averaged channel .this section is intended merely to give motivation and is not formally necessary for the proof of theorem [ theorem : c_overlinew_achievability ] . to simplify the discussion ,let us make abstract assumptions regarding the decoding condition and the channel estimation : 1 .the decoding condition yields block lengths satisfying : with an equality for all blocks except the last one which is not decoded .this implies the rate equals the mutual information of the averaged channel .2 . 
the averaged channels over all previous blocks are known and available for the predictor with these assumptions , the prediction problem can be considered separately from decoding and channel estimation issues . supposing that blocks were transmitted , the achieved rate is . since , using this can be written as .the target is to find a prediction scheme for , such that for any sequence , one will have with .there are two main difficulties compared to the prediction problem discussed in section [ sec : toy_problem ] : 1 .the problem is not directly posed as a prediction problem with an additive loss .the loss is not bounded : if for some , then the rate becomes zero regardless of other blocks .the first issue is resolved by posing an alternative problem which has an additive loss , and using the convexity of the mutual information with respect to the channel ( as will be exemplified below in the abstract case ) . regarding the second issue ,notice that if the channel has zero capacity ( always , or from some point in time onward ) , it is possible that one of the blocks will extend forever and will never be decoded .however we must avoid a situation where the channel has non - zero capacity ( which our competition enjoys ) , while a badly chosen prior yields .this may happen for example in the channels of example [ example : prediction_channel1 ] , if the predictor selects to use the pair of inputs that yield zero capacity .if this happens then the scheme will get stuck since the block will never be decoded , and hence there will be no chance to update the prior .in addition , notice that selecting some inputs with zero probability makes the predictor blind to the channel values over these inputs . to resolve these difficultieswe construct the predictor as a mixture between an exponentially weighted predictor and a uniform prior .we use a result by shulman and feder , which bounds the loss from capacity by using the uniform prior : {shulman_prior } } \geq c \cdot \beta(c ) \stackrel{\cite[(17)]{shulman_prior}}{\geq } \frac{c}{|\mathcal{x}| \cdot ( 1-e^{-1})},\ ] ] where is the channel capacity and is defined therein .this guarantees that if the capacity is non - zero , then the uniform prior will yield a non - zero rate , and hence the block will not last indefinitely . under the abstract assumptions made here , the following is known and can be substituted in lemma [ lemma : rateless_prior_predictor_lemma ] : this yields the following result : [ lemma : c_overlinew_achievability_w_side_info ] for the scheme of section [ sec : arbitrary_var_rateless_scheme ] under the abstraction specified above , with and and properly chosen , the following holds : for any sequence of channels , the rate satisfies : where is the capacity of the averaged channel and where .the parameters of the scheme required to attain the result are specified in and respectively .note that the bound is increasing with , so it appears that that it can be improved by taking the minimal value of .however in the actual system , there are be fixed overheads related to the communication scheme , and a large block size would be needed to overcome them . 
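the following sketch simulates the abstraction above : the averaged channel of every past block is known exactly , every decoded block carries exactly k bits , and ( purely to keep the code short , an extra simplification not made in the text ) the channel is held constant inside each block . the predictor is the exponentially weighted average of the competing priors mixed with the uniform prior , evaluated over a discretized set of binary - input priors ; k , eta and the mixing weight are arbitrary illustrative parameters .

```python
import numpy as np

def mi(p, W):
    """mutual information in bits of a binary-input channel W under the prior (p, 1-p)."""
    q = np.array([p, 1.0 - p])
    p_xy = q[:, None] * W
    p_y = p_xy.sum(axis=0)
    m = p_xy > 0
    return float((p_xy[m] * np.log2(p_xy[m] / (q[:, None] * p_y)[m])).sum())

W0 = np.array([[1.0, 0.0], [0.5, 0.5]])          # z-channel
W1 = np.array([[0.5, 0.5], [0.0, 1.0]])          # mirrored z-channel
block_channels = [W0 if b % 2 == 0 else W1 for b in range(100)]   # arbitrary block sequence
k, eta, lam = 100.0, 0.05, 0.1                   # bits per block, weighting, uniform mixing
grid = np.linspace(0.0, 1.0, 201)                # discretized set of competing priors

gains = np.zeros_like(grid)      # estimated bits each competing prior would have carried so far
lengths = []
for W in block_channels:
    w = np.exp(eta * (gains - gains.max()))      # exponential weighting (shifted for stability)
    p_b = (1.0 - lam) * float((w * grid).sum() / w.sum()) + lam * 0.5   # mix with the uniform prior
    l_b = k / max(mi(p_b, W), 1e-9)              # block length needed to carry k bits
    lengths.append(l_b)
    gains += l_b * np.array([mi(p, W) for p in grid])   # credit every competing prior

n = sum(lengths)
W_avg = sum(l * W for l, W in zip(lengths, block_channels)) / n
C_avg = max(mi(p, W_avg) for p in grid)
rate = len(block_channels) * k / n
print(rate, C_avg)                               # achieved rate versus capacity of the averaged channel
```

since the mutual information is convex in the channel , the bits collected against the individual block channels are at least as many as against their average , so the printed rate lands at or above the capacity of the averaged channel , which is consistent with the lower bound of lemma [ lemma : c_overlinew_achievability_w_side_info ] .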
taking any fixed and large enough ,the normalized regret is bounded by , which converges to zero , but at a worse rate than we had in section [ sec : toy_exp_prior_predictor ] .note that the claims of lemma [ lemma : c_overlinew_achievability_w_side_info ] are stronger than the claims that appeared in the conference paper on the subject , for the same problem , mainly in terms of the improved convergence rate with . also , the scheme used here is slightly different than the one in the conference paper ( in equation ). the proof corresponding to the scheme presented in the conference paper can be found in an early version uploaded to arxiv . to prove lemma [ lemma : c_overlinew_achievability_w_side_info ] , lemma [ lemma : rateless_prior_predictor_lemma ]is used with defined in .the rate guaranteed by lemma [ lemma : rateless_prior_predictor_lemma ] is approximately .using convexity of the mutual information with respect to the channel this is at least , and since this is true for any , the rate is at least .the detailed proof appears in appendix [ sec : proof_c_overlinew_achievability_w_side_info ] .in this section we prove theorem [ theorem : c_overlinew_achievability ] , regarding the attainability of . the principles of the prediction scheme have been laid in the previous section , and here we plug - in a suitable decoding condition and a channel estimator .suppose that during a certain block of length we have used the i.i.d .prior .in order to estimate the channel after the block has ended and was decoded , we use the following estimate : where here and throughout the current section , denote the -length input and output vectors over the block , and is the empirical distribution of the pair ( for ) .the estimator is the joint empirical distribution divided by the ( known ) marginal distribution of the input .since we mix a uniform prior into , all are bounded away from zero , which makes the estimator statistically stable , in comparison with the more natural estimator given by the empirical conditional distribution : in which the denominator may turn out to be zero .a drawback of the proposed estimator is , that it does not generally yield a legitimate probability distribution , i.e. . the result of using this estimator is that in the calculations , we will see values that formally appear like probabilities but are not . 
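the following numerical sketch contrasts the two estimators discussed above ( the channel , the prior and the block length are arbitrary illustrative choices ) : dividing the empirical joint distribution by the known input prior is always well defined but its rows need not sum to one , whereas the empirical conditional distribution can produce a 0/0 for inputs that happen not to be used within the block .

```python
import numpy as np

rng = np.random.default_rng(1)
X, Y, n = 3, 3, 100
q = np.array([0.699, 0.300, 0.001])              # i.i.d. prior used during the block (known)
W = rng.dirichlet(np.ones(Y), size=X)            # some memoryless channel for the block

xs = rng.choice(X, size=n, p=q)
ys = np.array([rng.choice(Y, p=W[x]) for x in xs])
counts = np.zeros((X, Y))
np.add.at(counts, (xs, ys), 1.0)                 # empirical joint counts of (x, y)

W_hat = counts / (n * q[:, None])                # estimator used in the text: joint / known prior
with np.errstate(invalid="ignore", divide="ignore"):
    W_cond = counts / counts.sum(axis=1, keepdims=True)   # "natural" conditional estimate

print(W_hat.sum(axis=1))    # rows need not sum to 1: a "false" conditional probability
print(W_cond)               # rows may be nan (0/0) for inputs that never occurred
```

mixing the uniform prior into the transmitted distribution , as the scheme does , keeps every input probability bounded away from zero and therefore keeps the variance of the first estimator under control .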
to distinguish them from legitimate probabilities we term these values `` false '' probabilities , andmark them with a .these functions usually approximate or estimate a legitimate probability .formally , a false probability or can be any non - negative function of or ( respectively ) .note that until this point we did not need the assumption that the output alphabet is finite , since the channel was given to the predictor rather than being estimated , and it is the first time this assumption is used .the function that we use as an optimization target for selecting the prior for the next block is , as before , the mutual information .the reason is that since our aim is to achieve the capacity of the averaged channels , the `` competing '' schemes , for each prior , achieve the mutual information of the averaged channel .however , since the estimates of past channels are false - probabilities , we need to define how to apply the mutual information to them .we do this by simply plugging - in the false channel into the standard formula of .this substitution results in what we define as the _ false mutual information _ : where cases of or are resolved using the convention .the following lemma shows that most of the properties of the mutual information function needed for our previous analysis in section [ sec : arbitrary_var_exp_prior_predictor ] are maintained .[ lemma : fmi_properties ] the function defined in is 1 .non negative 2 .concave with respect to 3 .convex with respect to 4 .upper bounded by , where ] where then where in the equations above we wrote the empirical distribution in explicitly as a normalized sum of indicator functions .we currently assume and we ll return to the case of at the end . from the above we have that : since at symbol in the block , which is one symbol before decoding , none of the codewords satisfies the decoding condition , including the correct codeword ( which corresponds to the true channel input ) , we have the same holds for the last block . as for ,from we have that and because is measured on a single symbol , we can bound : the equality above can be obtained using lemma [ lemma : fmi_as_remp ] , or by definition , using the fact that only for a single pair , . combining and using we have : in the case of , and holds due to. the last inequality means the conditions of lemma [ lemma : rateless_prior_predictor_lemma ] with respect to are satisfied , with replaced by . under the conditions of the lemma, it guarantees that : where is the offset defined in the lemma , with replaced by .we use the convexity of with respect to the channel ( lemma [ lemma : fmi_properties ] ) in order to relate the sum above to the capacity of the estimated averaged channel : substituting in we obtain : because the actual rate that the scheme achieves is not but , we have : considering the second term , notice that the expression for in lemma [ lemma : rateless_prior_predictor_lemma ] , is sublinear in , i.e. is decreasing with , and therefore , and we can replace the offset term in by . as for the factor we have }_{\delta_1 } .\end{split}\ ] ] and by using we have the desired result .we would now like to show the convergence of to .as mentioned above , and are statistically dependent . to avoid conditioning on , we first write in an alternative form . plugging the explicit form of from into the definition of , we have : recall that the averaged channel is we would like to show that . 
define , \ ] ] then although are not i.i.d ., they constitute a bounded martingale difference sequence , where the martingale is , as we will show below .first , by , each component is bounded , so they be bounded in absolute value by .on average over the common randomness , each symbol is generated independent of the past ( given ) .in other words , for someone not knowing the specific codebook , the knowledge of past values of does not yield any information about when is given . define the state variable .note that is only generated as a function of past symbols and therefore can be considered as part of the state at time .we have : & = \frac{\pr(x_k = x , y_k = y | s_{k-1})}{n \cdot \hat q_{b_k}(x ) } - \frac{w_k(y|x)}{n } \\ & = \frac{\hat q_{b_k}(x ) \cdot w_k(y|x)}{n \cdot \hat q_{b_k}(x ) } - \frac{w_k(y|x)}{n } = 0 .\end{split}\ ] ] now , since the previous value of the sum is only a function of , by applying the iterated expectations law we have \\ & = \e \left\ { \e \left [ \gamma_k(x , y ) \bigg| s_{k-1 } , \sum_{j=1}^{k-1 } \gamma_j \right ] \bigg| \sum_{j=1}^{k-1 } \gamma_j \right\ } = 0 , \end{split}\ ] ] which shows is a martingale. we can now apply hoeffding - azuma inequality ( * ? ? ?* a.1.3) and obtain : the above holds for each value of separately . to bound the norm we use the unionbound : \right\ } \\ & \leq \sum_{x , y } \pr \left\ { \left| { \breve{w}}_a(y|x ) - \overline{w}(y|x ) \right| > t \right\ } \\ & \stackrel{\eqref{eq:1580}}{\leq } 2 |\mathcal{x}| \cdot |\mathcal{y}| \cdot e^{-\frac{2 n \lambda^2 t^2}{|\mathcal{x}|^2 } } .\end{split}\ ] ] to guarantee the above holds with probability at most we choose to make the rhs equal : this is summarized in the following proposition : [ prop : symbolwise_scheme_channel_convergence2 ] for any , and for defined above , observe that a large improves the channel estimate convergence ( reduces ) , since it increases the minimum rate at which each input symbol is sampled .this is the additional role of that we did not have in lemma [ lemma : c_overlinew_achievability_w_side_info ] .the final step is to link the difference in the channels to the difference in capacities . for this purposewe use the following lemma : [ lemma : fmi_lp_bound ] let be an input distribution on the discrete alphabet , a conditional distribution , and a false conditional distribution .define where assuming we have : and where for , by convention .furthermore is concave and monotonically non - decreasing for .note that the lemma is also true with respect to legitimate distributions .the proof of the lemma is based on cover and thomas bound on entropy , and s inequality , and appears in appendix [ sec : lp_bound_on_capacity ] .we now combine the results above as follows : choose a value of .we denote by the event of any decoding error occurring in any of the blocks , and by the event .we use and overline to denote complementary events . consider the event . in this case, we have and from lemma [ lemma : fmi_lp_bound ] this implies where .from proposition [ prop : symbolwise_scheme_predictor_rate ] we have that : to summarize , if then . 
by the unionbound and propositions [ prop : symbolwise_scheme_channel_convergence2],[prop : symbolwise_scheme_err_probability ] , we have : note that although lemma [ lemma : fmi_lp_bound ] is stated for general norms , we have used it here only with respect to the norm , since it is relatively simple to obtain bounds on the convergence of by using the well known hoeffding - azuma inequality per channel element ( ) and the union bound .however as the distribution of tends to a mutlivariate gaussian distribution , using norm seems to be more suited .indeed , applying lemma [ lemma : fmi_lp_bound ] with norm , together with the ( yet unpublished ) bound on the convergence of vector martingales due to hayes yields tighter bounds on the probability of having a small difference for large alphabet sizes .we now substitute the numerical expressions for the various overheads , and set the parameters of the scheme to optimize the convergence rate . are parameters of choice , and together with they determine .our purpose is to choose that will approximately minimize .this part is rather tedious .we write and collect all the relations below : \\ \delta_c & = & - 2 \delta_w \cdot |\mathcal{y}| \log ( \delta_w ) \\\delta_w & = & \frac{|\mathcal{x}|}{\lambda } \sqrt{\frac{1}{2n } \ln \left ( \frac{2 |\mathcal{x}| \cdot |\mathcal{y}|}{\delta_0 } \right ) } \\ \dpred & = & \frac{k}{n } + i_{\max } \cdot \lambda + c_1 \sqrt{\frac{\ln ( n)}{n } } \lambda^{-\half } .\\ c_1 & = & 2 \sqrt{k \cdot |\mathcal{x}| ( |\mathcal{x}|-1 ) \cdot i_{\max}}\end{aligned}\ ] ] since , , therefore . to make we need , and making this assumption, we have that the last element in is bounded by . further assuming that ( this holds trivially for the values of of lemma [ lemma : fmi_as_remp ] when ) , and ( for some arbitrary polynomial decay rate ) we have \\&= \frac{\log n}{k } ( d_{\epsilon } + \tfrac{5}{4 } k_0 + \tfrac{3}{2 } ) .\end{split}\ ] ] using these bounds and extracting the constants we can upper bound by : where element stems from , from and from , and the constants are : as we shall see , element is negligible .therefore we first optimize the sum of and with respect to , using lemma [ lemma : ab_alphabeta ] .we write the sum as with .since is required to be integer , we write it as a function of a real valued parameter : , and assume .then , and therefore . by optimizing the bound with respect to using lemma [ lemma : ab_alphabeta ], we obtain where we defined substituting in ( and upper bounding element by ) , we obtain : to determine we notice that it is a trade - off between element which is increasing in and either or which are decreasing . minimizing any combination separately ( i.e. 
or ) using lemma [ lemma : ab_alphabeta ] , yields the same decay rate , and of the form therefore this determines the best decay rate possible for .note that we do not have to worry about the case , since in this case the term in will exceed and theorem [ theorem : c_overlinew_achievability ] will be true in a void way .substituting we have : \cdot \left ( \frac{\ln^2 ( n)}{n } \right)^{\tfrac{1}{4 } } + c_5 \cdot \left ( \frac{\ln n}{n^2 } \right)^{\frac{1}{3 } } \\&\leq \left [ \tfrac{3}{2 } \cdot \left ( \tfrac{5}{2 } \frac{c_2 c_4 ^ 2}{c_{\lambda } } \right)^{\tfrac{1}{3 } } + \frac{c_3}{c_{\lambda } } + i_{\max } \cdot c_{\lambda } + 1\right ] \cdot \left ( \frac{\ln^2 ( n)}{n } \right)^{\tfrac{1}{4 } } \\&= c_{\delta } \cdot \left ( \frac{\ln^2 ( n)}{n } \right)^{\tfrac{1}{4 } } , \end{split}\ ] ] where in the last inequality we substituted the expression for and assumed . in the last step we defined we now revisit the assumptions we have made along the way . * in , we assumed .this requires that , and a sufficient condition is . * for we assumed . substituting leads to , and a sufficient condition is * for we assumed .we may simply determine and set .* for we assumed , i.e. * the application of lemma [ lemma : rateless_prior_predictor_lemma ] to obtain proposition [ prop : symbolwise_scheme_predictor_rate ] requires that and . since is sufficient that , or .furthermore for we assumed , so we require . substituting leads to the sufficient condition : + to summarize , the results holds for where is the maximum of the conditions of , , and of : . \end{split}\ ] ] this proves corollary .the claims of the theorem are milder and are easily deduced from this corollary . given , let , and choose any and .choose large enough so that the error probability given by the corollary satisfies , and .this guarantees that for , the requirements of the corollary are met the error probability is , and the probability to fall short of the rate is at most .this concludes the proof of theorem [ theorem : c_overlinew_achievability ] .following is a numerical example for the calculation of and in theorem [ theorem : c_overlinew_achievability ] .[ example : symbolwise_random_prior_predictor ] for , and we obtain and .choosing we obtain and .the convergence rate is rather slow and we have only for . during the proof of theorem [theorem : c_overlinew_achievability ] we assumed the channel sequence is unknown but fixed .it is easy to see that the same proof holds even if the channel sequence is determined by an online adversary .the error probability ( proposition [ prop : symbolwise_scheme_err_probability ] ) is maintained regardless of channel behavior , because the probabilistic assumptions made refer to the distribution of codewords that were _ not _ transmitted .proposition [ prop : symbolwise_scheme_predictor_rate ] does not make any assumptions on the channel as it connects the communication rate with the _ measured _ channel .the main difference is with respect to channel convergence . for the proof of proposition [ prop : symbolwise_scheme_channel_convergence2 ] to hold we need to show that remains a bounded martingale difference sequence , which boils down to verifying still holds , i.e. that has zero mean conditioned on the past . adding the message to the state variable defined before , i.e. 
redefining , where is the message bit sequence , we have that holds even when the channel is a function of .although channels with memory of the input are not considered in this paper , the scheme presented above can be used over such channels as well . in this case, the performance of the scheme can be characterized as follows : [ lemma : prior_prediction_channel_with_memory ] when the scheme of theorem [ theorem : c_overlinew_achievability ] is operated over a general channel , the results of the theorem hold if the averaged channel is redefined as follows : note that for each pair , is a random variable depending on the history , and therefore , different from the main setting considered in this paper , is also a random variable .the definition above coincides with the previous definition of when the channel is memoryless in the input .this lemma is used in to show competitive universality for channels with memory of the input . _proof : _ as in the proof of corollary [ corollary : symbolwise_random_prior_predictor1 ] it is easy to see that assumptions on the channel apply only to proposition [ prop : symbolwise_scheme_channel_convergence2 ] showing the convergence of the average estimated channel to . to show proposition [ prop : symbolwise_scheme_channel_convergence2 ] holds ,we need to show that remains a bounded martingale difference sequence , where now is defined as : . \end{split}\ ] ] as in , we have .equation now becomes & = \frac{\pr(x_k = x , y_k = y | s_{k-1})}{n \cdot \hat q_{b_k}(x ) } \\ & \qquad - \frac{1}{n } \pr(y_k = y | x_k = x , \vr x^{k-1 } , \vr y^{k-1 } ) \\ & = \frac{\hat q_{b_k}(x ) \cdot \pr(y_k = y | x_k = x , \vr x^{k-1 } , \vr y^{k-1})}{n \cdot \hat q_{b_k}(x ) } \\ & \qquad - \frac{1}{n } \pr(y_k = y | x_k = x , \vr x^{k-1 } , \vr y^{k-1 } ) \\ & = 0 .\end{split}\ ] ] the rest of the proof of proposition [ prop : symbolwise_scheme_channel_convergence2 ] remains the same .in this section we discuss the relation of the current results to existing results pertaining to unknown channels and make some comments on schemes presented here .it is interesting to compare the target rate with the avc capacity. we will give a short background on the avc and the relation to the current problem .in the traditional avc setting , the channel model is similar to the setting assumed here , but slightly more constrained .the channel in each time instance is assumed to be chosen arbitrarily out of a set of channels , each of which is determined by a state .frequently , constrains on the state sequence ( such as maximum power , number of errors ) are defined .the avc capacity is the maximum rate that can be transmitted reliably , for every sequence of states that obeys the constraints .the avc capacity may be different depending on whether the maximum or the average error probability over messages is required to tend to zero with block length , on the existence of feedback , and on whether common randomness is allowed , i.e. 
whether the transmitter and the receiver have access to a shared random variable .the last factor has a crucial effect on the achievable rate as well as on the complexity of the underlying mathematical problem : the characterization of avc capacity with randomized codes is relatively simple and independent on whether maximum or average error probability is considered , while the characterization of avc capacity for deterministic codes is , in general , still an open problem .randomization has a crucial role , since we consider the worst - case sequence of channels .this sequence of channels is chosen after the deterministic code was selected ( and therefore sometimes viewed as an adversary ) , enabling the worst - case sequence of channels to exploit vulnerabilities that exist in the specific code .as an example , for every symmetrizable avc ( * ? ? ?* definition 2 ) , the avc capacity for deterministic codes is zero ( * ? ? ?* theorem 1 ) .when randomization does exist , the random seed is selected `` after '' the channel sequence was selected ( mathematically , the probability over random seeds is taken after the maximum error probability over all possible sequences ) , and therefore prevents tuning the channel to the worst - case code .when randomization exists , the channel inputs may be made to appear independent from the point of view of the adversary , thus limiting effective adversary strategies .therefore the results in the current paper assume common randomness exists .we would now like to compare the target rate with the randomized avc capacity .the discrete memoryless avc capacity without constraints may be characterized as follows : let be the set of possible channels that are realized by different channel states ( for example in a binary modulo - additive channel with an unknown noise sequence , there are two channels in the set one in which and another in which ) . this set is traditionally assumed to be finite , i.e. there is a finite number of `` states '' , however this constraint is immaterial for the comparison .the randomized code capacity of the avc is ( * ? ? ?* theorem 2 ) : where is the convex hull of , which represents all channels which are realizable by a random drawing of channels from . over channel states in . ] in the example , would be the set of all binary symmetric channels .when input or state constraints exist , they affect simply by including in the set of -s and in only those priors , or channels , that satisfy the constraints ( respectively ) .the converse of is obtained by choosing the worst - case channel and implementing a discrete memoryless channel ( dmc ) where the channel law is , by a random selection of channels from .hence it is clear that the randomized code capacity can not be improved by feedback .in contrast , the deterministic code avc capacity can be improved by feedback , and in some cases made to equal to the randomized code capacity . therefore , most existing works on feedback in avc deal with the deterministic case . since by definition , we have from , , i.e. our target rate meets or exceeds the avc capacity . while in the traditional setting ,a - priori knowledge of or state constraints on the channel is necessary in order to obtain a positive rate , here we attain a rate possibly higher than the avc capacity , without prior knowledge of . this is important since without such constraints , i.e. 
when the channel sequence is completely arbitrary , the avc capacity is zero .this property makes the system presented here universal , with respect to the avc parameters , a universality which also holds in an online - adversary setting ( corollary [ corollary : symbolwise_random_prior_predictor1 ] ) .we can view the difference between and as the difference between the capacities of the worst realizable channel , and the specific channel representing the average of the sequence of channels that actually occurred .this difference is obtained by adapting the communication rate to the capacity of the average channel , and adapting the input prior to the prior that achieves this capacity , whereas in the avc setting , the rate and the prior are determined a - priori , based on the worst - case realizable channel . as we noted above, feedback can not improve the randomized avc capacity .therefore the improvement is attained not merely by the use of feedback , but by allowing the communication rate to vary , whereas in the traditional avc setting , one looks for a fixed rate of communication which can be guaranteed a - priori ( note that the improvement is not in the worst case ) . in allowing the rate to vary ,we have lost the formal notion of capacity ( as the supremum of achievable rates ) , thereby making the question of setting the target rate more ambiguous , but nevertheless improved the achieved rates .the capacity of the averaged channel is a slight generalization of the notion of _ empirical capacity _ defined by eswaran .the only difference is releasing the assumption made there , that the set of channel states is finite .the empirical capacity of eswaran is in itself a generalization of the empirical capacity for modulo additive channels defined by shayevitz and feder .eswaran assume the prior is given a - priori and attain the empirical mutual information .the scheme used here is similar to the scheme they presented in its high level structure .we can view the current result ( theorem [ theorem : c_overlinew_achievability ] ) as an improvement over the previous work , i.e. attaining the capacity , rather than the mutual information , by the addition of the universal predictor .our result answers the question raised there , whether the empirical capacity is attainable .another small extension is in corollary [ corollary : symbolwise_random_prior_predictor1 ] , showing that the result holds in an adversarial setting .this extension is outside our main focus of communicating over unknown channels , and is only used to strengthen the claim on universality with respect to the avc parameters .the main result ( theorem [ theorem : c_overlinew_achievability ] ) could be derived in a conceptually simpler but crude scheme , by combining the results of eswaran or our previous paper with theorem [ theorem : prior_predictor_exp ] .the transmission time may be divided into multiple fixed - size blocks , and in each block , one of these schemes is operated , with an i.i.d .prior chosen by a predictor . using eswaran s result , for example , and ignoring some details such as finite - state assumptions, one would obtain the rate over each block , where is the averaged channel over the block .the channel can be well estimated ( e.g. 
using training symbols or using the communication scheme itself ) .assuming it is known , if the prediction scheme of theorem [ theorem : prior_predictor_exp ] is operated over it will guarantee the average rate over the blocks will be asymptotically at least for any , and using convexity , . since this holds for any this achieves the capacity of the average channel .note that here it appears that there is no need for the uniform prior , however this is somewhat hidden in the assumption that the channel is known .furthermore there is no need to worry about rateless blocks extending `` forever '' since the commnication scheme is re - started on each of the blocks . in a related paper we presented the concept of the iterated finite block capacity of an infinite vector channel , which is similar in spirit to the finite state compressibility defined by lempel and ziv . roughly speaking , this value is the maximum rate that can be reliably attained by any block encoder and decoder , constrained to apply the same encoding and decoding rules over sub - blocks of finite length .the positive result is that is universally attainable for all modulo - additive channels ( i.e. over all noise sequences ) .the result is obtained by a system similar to the one described in section [ sec : arbitrary_var_rateless_scheme ] , while the input prior is fixed to the uniform prior .the result uses two key properties of the modulo additive channel : 1 .the channel is memoryless with respect to the input ( i.e. current behavior is not affected by previous values of the input ) .the capacity achieving prior is fixed for any noise sequence .the current work is a step toward removing the second assumption .the capacity of the averaged channel is a bound on the rate that can be obtained reliably by a transmitter and a receiver operating on a single symbol , since the channel that this system `` sees '' can be modeled as a random uniform selection of a channel out of , which we term the `` collapsed channel '' . by combining symbols into a single super - symbol, we can extend the result and obtain a rate which is equal to or better from the rate obtained by block encoder and decoder operating over chunks of symbols .therefore the current result suggests that it is possible to attain for all vector channels that are memoryless in the input , i.e. that have the form defined in , for an arbitrary sequence of channels ( compared to only an arbitrary noise sequence , in the previous result ) .it is interesting to consider the converse ( theorem [ theorem : c_overlinew_optimality ] ) from the following point of view : suppose a competitor is given the entire sequence of channels , but is allowed to take from this sequence only the `` histogram '' ( a list of channels and how many times they occurred ) , and devise a communication system based on this information .the rate that can be guaranteed in this case is limited by . on the other hand , assuming common randomness exists , it is enough to know in order to attain without feedback .to see this intuitively , we may apply a random interleaver and use the fact the interleaved channel is similar to a dmc with the channel law .therefore even if one knows the entire histogram of the sequence , the average channel , which contains less information , contains all information necessary for communication . 
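as a numerical companion to the comparison above , the sketch below takes the two - channel set used in the earlier examples , approximates the randomized - code avc capacity ( the maximum over priors of the worst channel in the convex hull ) by a crude two - dimensional grid search , and compares it with the capacity of the averaged channel for one particular histogram of occurrences ; the 70/30 split , the grids and the channels are arbitrary illustrative choices .

```python
import numpy as np

def mi(p, W):
    q = np.array([p, 1.0 - p])
    p_xy = q[:, None] * W
    p_y = p_xy.sum(axis=0)
    m = p_xy > 0
    return float((p_xy[m] * np.log2(p_xy[m] / (q[:, None] * p_y)[m])).sum())

W0 = np.array([[1.0, 0.0], [0.5, 0.5]])
W1 = np.array([[0.5, 0.5], [0.0, 1.0]])
priors = np.linspace(0.0, 1.0, 201)
mixes = np.linspace(0.0, 1.0, 201)               # conv(W) = { a*W0 + (1-a)*W1 }

# randomized-code avc capacity: best prior against the worst channel in the convex hull
C_avc = max(min(mi(p, a * W0 + (1.0 - a) * W1) for a in mixes) for p in priors)

# capacity of the averaged channel for one particular histogram of occurrences
share_w0 = 0.7                                   # say w0 occurred 70% of the time
C_bar = max(mi(p, share_w0 * W0 + (1.0 - share_w0) * W1) for p in priors)

print(C_avc, C_bar)                              # C_avc <= C_bar, as claimed above
```

the averaged channel is determined by the histogram alone , so the comparison is consistent with the observation above that nothing in the histogram beyond the average channel is needed for communication .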
to illustrate this , consider the deterministic setting , where instead of a sequence of channel laws we have a sequence of deterministic functions is a particular case of our problem , with . even in this case , according to theorem [ theorem : c_overlinew_optimality ] , a competitor knowing the list of functions up to order , will not be able to guarantee a rate better than , where , i.e. a channel created by counting for each , the normalized number of times a certain would appear as output . comparing the amount of information in the channel histogram and the averaged channel in this case ,there are functions , and therefore the distribution is given by real numbers .on the other hand , the average channel is a probability distribution from to and is specified by real numbers .an interesting property revealed through the example , is that although the setting is deterministic , the result is given in terms of probability functions .these `` probabilities '' are only averages related to the deterministic function sequence , but this shows that the formulation via probabilities ( or frequencies ) is more natural than by specifying the function between the input and output .we assumed the feedback channel has unlimited rate , and is free of delays and errors .this was done mainly to focus the discussion and simplify the results .it is clear from the scheme presented , that because the amount of information required to be fed back to the transmitter can be made small , the capacity of the average channel could be attained even if the feedback link has any small positive rate and a fixed delay .if the feedback channel is such that errors can be mitigated by coding with finite delay , then errors can be accommodated as well .specifically , we show in appendix [ sec : zero_feedback_rate ] that when the feedback rate is limited , or there is a fixed delay , the penalty is a gap of at most symbols between the blocks , and that the normalized loss from this effect tends to zero .therefore we have ( with the notation of theorem [ theorem : c_overlinew_achievability ] ) , with any positive feedback rate and any fixed delay .the gap may be reduced by using the time of the -th block to transfer the channel information from block and use it only in block ( i.e. insert a delay of one block in the prediction scheme ) , however this approach is not analyzed here . throughout the course of this paper , as we have gradually madeour assumptions more realistic , we have seen a deterioration of the rate of convergence , of the achieved rate to the target rate .we denote by the gap between the guaranteed rate and the target rate , and focus on the dominant polynomial power , while ignoring the terms .we have in the synthetic problem of section [ sec : toy_problem ] ( assuming `` block - wise '' variation ) [ sec : toy_problem ] , when using the rateless scheme under assumptions of perfect average channel knowledge [ sec : arbitrary_channel_var ] , and when releasing the abstract assumptions [ sec : mainproof ] . the first deterioration ( between and ) is mainly attributed to the rateless coding scheme .more specifically , it stems from mixing with the uniform prior , which is necessary to bound the regret per block when the blocks have variable lengths . the second deterioration ( between and )can be attributed mainly to the fact that the number of bits per block has to increase in a certain rate in order to balance overheads created by the universal decoding procedure ( and reduces the rate of adaptation ) . 
while the rate of convergence which was achieved deteriorates , the only upper bound we presented on the convergence rate is ( [ sec : regret_lb ] ) , which is tight only for the first case .we do not know whether better convergence rates can be attained in theorems [ lemma : c_overlinew_achievability_w_side_info],[theorem : c_overlinew_achievability ] .variation '' ) & arbitrarily varying channel , with side information on average channel and without communication overheads & arbitrarily varying channel & notes + reference & [ sec : toy_problem ] , theorem [ theorem : prior_predictor_exp ] & [ sec : arbitrary_var_exp_prior_predictor ] , lemma [ lemma : c_overlinew_achievability_w_side_info ] & [ sec : problem_setting],[sec : mainproof ] , theorem [ theorem : c_overlinew_achievability ] & + attainability & no & no & no & = capacity of = mean capacity + attainability & yes & no & no & = mean mutual information with fixed prior + attainability & yes & yes & yes & = capacity of the time - averaged channel 1 .best attainable rate not using time structure ( theorem [ theorem : c_overlinew_optimality ] ) .2 . ( section [ sec : discussion_avc ] ) + normalized regret lower bound & & & & + normalized regret attained & & & & + the results in this paper were obtained by exponential weighting .this scheme was selected mainly due to its simplicity and elegance .unfortunately , the exponential weighting is performed over a continuous domain ( of probabilities ) , and therefore it is not immediately implementable . of course, the simplest practical solution could be discrete sampling of the unit simplex and replacement of the integrals by sums .since the mutual information is continuous , it is possible to bound the error resulting from this discretization .an alternative way is to quantize the set of priors . instead of competing against a continuum of reference schemes, we first reduce the number of reference schemes to a finite one , by creating a `` codebook '' of priors .this codebook is designed so that the penalty in the mutual information resulting from rounding to the nearest codeword , is small .this quantization is useful in terms of the feedback link , which now only has to convey the index .having quantized the priors , we may replace the predictors shown here by standard schemes used for competition against a finite set of references , .see a rough analysis of this approach in appendix [ sec : prior_quantization_analysis ] .an alternative approach is to bypass the explicit calculation of the predictor and use a rejection - sampling based algorithm to generate a random variable .this approach is demonstrated in appendix [ sec : rejection_sampling_predictor ] .zinkevich proposed a computationally efficient online algorithm , based on gradient descent , to solve a problem of minimizing the sum of convex functions , each revealed to the forecaster after the decision was made ( a similar setting to that of lemma [ lemma : rateless_prior_predictor_lemma ] ) . to apply zinkevich s results to our problem, some modifications are required .the mutual information does not have a bounded gradient ( which is required by ) , but this could be bypassed by keeping away from the boundary of , i.e. from these points for which one of the elements of is or .one way to accomplish this is by mixing with the uniform prior when defining the target rate , and use as a target , and then bounding the loss induced by this mixture . 
in the rateless scheme ,a bound on the maximum value of ( of lemma [ lemma : rateless_prior_predictor_lemma ] ) is required and can be obtained using the same methods presented here .another application of sequential algorithms to solve problems related to avc s was proposed by buchbinder who used a sequential algorithm to solve a problem of dynamic transmit power allocation , where the current channel state is known but future states are arbitrary . in the communication scheme proposed in section [sec : arbitrary_var_rateless_scheme ] we chose to use an i.i.d . prior during each block , and update the prior only at the end of the block .this choice is motivated by the following considerations : * assuming no explicit training symbols are transmitted , the estimation of the channel is done based on the encoded sequence , which is known to the receiver only after decoding ( at the end of the block ) . * varying the prior throughout the block inserts memory into the channel input , which complicates the analysis .the result of this is a relatively slow update of the prior , essentially limited by the block size , which is determined based on communication related considerations ( overheads and error probabilities ) .an alterative would be learning the channel through random training symbols ( see for example ) , and updating the prior from time to time , without relation to the rateless blocks . in section[ sec : regret_lb ] we have shown a lower bound on the redundancy in attaining by using a counter example with .it is worth mentioning that for the set of binary channels , the normalized regret is not necessarily .for this set of channels , the optimal prior does not reach the boundaries of ] .it is possible to show that the loss function satisfies conditions 1,2,4 in cesa - bianchi and lugosi s book theorem 3.1 ( but not condition 3 ) .this fact together with experimental results showing convergence of the fl predictor , suggests that the normalized minimax regret in this case may converge like .in the prediction scheme of theorems [ lemma : c_overlinew_achievability_w_side_info],[theorem : c_overlinew_achievability ] , we mixed a uniform prior with an exponentially weighted predictor .this mixing has two advantages : 1 . enabling to bound the instantaneous regret caused by a large block due to a low mutual information 2 .enabling channel estimation by making sure all input symbols have a non zero probability .note that alternative solutions are use of training symbols at random locations and termination and re - transmission of blocks whose length exceeds a threshold . mixing the exponentially weighted predictor with a uniform distribution is a technique used in prediction problems with partial monitoring , where the predictor only has access to its own loss ( or a function of it ) and not to the loss of the competitors , and effectively assigns some time instances for sampling the range of strategies . 
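the following sketch illustrates the zinkevich - style modification discussed above : the transmitted prior is a mixture of a simplex point with the uniform prior , which keeps the gradient of the mutual information bounded , and the simplex point is updated by projected online gradient ascent with a 1/sqrt(t) step size . the z - channel pair , the step size and the mixing weight are illustrative choices ; the gradient of i(q;w) with respect to q(x) is the divergence between w(.|x) and the output distribution minus one ( in nats ) .

```python
import numpy as np

def mi(q, W):
    p_xy = q[:, None] * W
    p_y = p_xy.sum(axis=0)
    m = p_xy > 0
    return float((p_xy[m] * np.log2(p_xy[m] / (q[:, None] * p_y)[m])).sum())

def grad_mi(q, W):
    """gradient of I(q;W) in nats: D( W(.|x) || p_y ) - 1 for each input x."""
    p_y = q @ W
    g = np.zeros(len(q))
    for x in range(len(q)):
        m = W[x] > 0
        g[x] = float((W[x][m] * np.log(W[x][m] / p_y[m])).sum()) - 1.0
    return g

def project_simplex(v):
    """euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

W0 = np.array([[1.0, 0.0], [0.5, 0.5]])
W1 = np.array([[0.5, 0.5], [0.0, 1.0]])
T, lam = 5000, 0.05
sequence = [W0 if t % 2 == 0 else W1 for t in range(T)]

u, step, gained = np.array([0.9, 0.1]), 1.0 / np.sqrt(T), 0.0
for W in sequence:
    q = (1.0 - lam) * u + lam / 2.0              # mixing keeps q away from the simplex boundary
    gained += mi(q, W)
    u = project_simplex(u + step * (1.0 - lam) * grad_mi(q, W))

best_fixed = max(np.mean([mi(np.array([p, 1.0 - p]), W) for W in (W0, W1)])
                 for p in np.linspace(0.0, 1.0, 401))
print(gained / T, best_fixed)                    # the online prior tracks the best fixed prior
```

on this alternating sequence the online update tracks the best fixed prior closely , in line with standard regret guarantees for online convex optimization ; a more careful treatment would also bound the loss caused by mixing with the uniform prior , as noted in the text .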
in our problem the uniform prior plays two roles .one , is related to the rateless communication scheme , which required to relate the gains of the predictor to the gain of any alternative prior in order to have an upper bound on the latter .the second role is in the convergence of the estimated channel ( proposition [ prop : symbolwise_scheme_channel_convergence2 ] ) .the second role is similar to the role of uniform distribution in partial monitoring problems : the channel can not be estimated for input values that occur with zero probability .note that even without the explicit uniform component , the exponential weighting element in includes a small uniform component . particularly , since referring to , , and however this value is too small for our purpose . in the current paper we assumed the input and output alphabets are finite .in general it is not possible to universally attain or , even in the context of the synthetic problem of section [ sec : toy_problem ] , when the alphabet size is infinite .this is since in the continuous case one is trying to assign a probability to an infinite set of values , where the values producing the capacity may be a small subgroup .consider the following example : let the channel , with input and output ( ) be defined by the arbitrary sequence , , with all .the channel rule is defined by : for any sequential predictor ( even randomized ) we can find a sequence of channels such that the values of the sequence at each step have total probability zero ( since the input distribution may have at most a countable group of discrete values with non zero probability ) . therefore we can always find a sequence of channels where the rate obtained by the predictor would be zero . on the other hand , each channel has infinite capacity ( since it can transmit noiselessly any integer number ) .therefore the value of is infinite ( it is enough to choose a prior suitable for one of the channels in the sum ) .it stands to reason that under suitable continuity conditions on and input constraints on , we may convert the problem to a discrete one , while bounding the loss in this conversion , by discretization of the input i.e. 
by selecting the input from a finite grid , or alternatively assuming a parametrization of the channel .we considered the problem of adapting an input prior for communication over an unknown and arbitrarily varying channel , comprised of an arbitrary sequence of memoryless channels , using feedback from the receiver .we showed that it is possible to asymptotically approach the capacity of the time - averaged channel universally for every sequence of channels .this rate equals or exceeds the randomized avc capacity of any memoryless channel with the same inputs , and thus the system is universal with respect to the avc model .the result holds also when the channel sequence is determined adversatively .we also presented negative results showing which communication rates or minimax regret convergence rates can not be attained universally ( see a summary in table [ tbl : prior_pred_summary ] ) , and presented a simplified synthetic problem relating to prediction of the communication prior , which may have applications for block - fading channels .when examining the role of feedback in combating unknown channel , previous works mainly focused on the gains of rate adaptation , and here we have seen an additional aspect , namely selection of the communication prior , in which feedback improves the communication rate .the results have implications on competitive universality in communication , and suggest that with feedback , it would be possible for any memoryless avc , to universally achieve a rate comparable to that of any finite block system , without knowing the channel sequence .when comparing the results to the traditional avc results , the former setting was prevailed by the notion of capacity , and thus , even when feedback was assumed , it was not used for adapting the communication rate .here we have shown for the first time , that rates equal to or better from the avc capacity can be attained universally , when releasing the constraint of an a - priori guaranteed rate .this demonstrates the validity of the alternative `` opportunistic '' problem setting that has been considered in the last decade for feedback communication over unknown channels , a setting which does not focus on capacity .we thank yishay mansour for helpful discussions on the universal prediction problem . _proof : _ let denote a global maximum of in ( which exists since is concave and is closed ) . then from the concavity of for any ] is a point between and .this proves the lower bound .also , for since this also proves the upper bound . for ,the right inequality can be made tighter , by writing the full tailor expansion : _ proof of lemma [ lemma : ab_alphabeta ] : _ is continuous and differentiable therefore at the maximum .derivation yields , and yields the single solution stated in the lemma .this is a single maximum since is positive for and negative for .[ [ sec : proof_cw_max_rate ] ] proof of theorem [ theorem : c_overlinew_optimality ] : the optimality of averaged channel capacity ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ in this section we prove theorem [ theorem : c_overlinew_optimality ] presented in [ sec : arbitrary_var_target_rate ] ( regarding the optimality of ) . for a given sequence ,consider the `` permutation '' channel generated by uniformly selecting a random permutation of the indices , rearranging the sequence to a permuted sequence , and applying the channel to the input ( i.e. 
using the channels in permuted order). suppose there is a system achieving the rate with probability and error probability . since this rate is fixed for all drawings of , the system can guarantee the rate a-priori (with probability ), and we can convert the rate-adaptive system to a fixed-rate system, delivering a message of bits, with probability of error at most . once we constrain the discussion to the permutation channel induced by the deterministic sequence , we can assume this sequence is known to the transmitter and the receiver. in the main part of the proof we will show that, approximately, . note that because of feedback, may be a function of and , and therefore does not give a tight bound on the rate. as noted in the outline presented in section [sec:arbitrary_var_target_rate], if the channels were selected from with replacement, this result would be obvious, since feedback would not be helpful. in the permuted channel, a system with feedback can use past channel outputs to gain some knowledge about the future behavior of the channel. the point of the proof is to show that there is no considerable gain from this knowledge, and that even knowledge of the actual list of channels that were already picked does not change the mutual information considerably. we denote by the random permutation and by a specific instance of the permutation. we bound the mutual information as follows: where (a) is because conditioning reduces entropy (used twice), and (b) is since (in other words, gives all relevant information on ). this can be seen from the functional dependence graph in fig. [fig:dependence_graph_for_converse_channel]. let be a random variable generated by passing through the channel (i.e. ). next we show that and . [figure [fig:dependence_graph_for_converse_channel]: a dependence graph for the variables of the permutation channel in appendix [sec:proof_cw_max_rate]. each node is a (potentially random) function of the nodes with arrows pointing toward it.] given , the channel law between and is a random pick from the set of channels that are not included in : the average channel given the past indices is an average of values. note that the indices belong to , so the notation may be confusing, but it is used to stress the causal dependence on . considering the random variable generated by calculating this channel over all drawings of , the set becomes a random set of distinct indices from , chosen uniformly from all such sets. is an average of values, sampled uniformly without replacement from the set (for any specific ). it was shown by hoeffding that averages of variables sampled without replacement obey the same bounds (theorem 1 in ) with respect to the probability of deviating from their mean as independent random variables.
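as a quick aside, hoeffding's observation can be checked numerically. the sketch below compares empirical deviation probabilities of sample means drawn with and without replacement from a bounded population, together with the usual hoeffding bound; the population, sample size and threshold are arbitrary illustrative choices and are unrelated to the quantities appearing in the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.uniform(0.0, 1.0, size=1000)   # bounded values, as in hoeffding's setting
mu = population.mean()
k, trials, eps = 200, 5000, 0.03

def tail_prob(replace):
    # fraction of trials in which the sample mean deviates from mu by more than eps
    devs = [abs(rng.choice(population, size=k, replace=replace).mean() - mu) > eps
            for _ in range(trials)]
    return float(np.mean(devs))

print("without replacement:", tail_prob(False))
print("with replacement:   ", tail_prob(True))
print("hoeffding bound (valid in both cases, loose here):", 2 * np.exp(-2 * k * eps**2))
```

in line with the cited result, the without-replacement deviation probability comes out no larger than the with-replacement one, and both respect the bound.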
specifically ,applying hoeffding s bounds ( combining theorem 1 with section 6 in ) , and since = \overline w ] are two random variables satisfying ( for some ] , ] random variable by the inverse transform theorem .a generation of the mixture of an exponentially weighted and a uniform distribution such as in , only requires to toss a coin with probability , which determines whether is generated using the exponentially weighted distribution or using a uniform distribution .therefore the problem of generating the predictors described here , , boils down to the following problem : we would like to generate a random variable distributed according to where and where is a concave function and is bounded . is the unit simplex ( which implicitly refers to the alphabet ) .all integrals below are over the unit simplex .furthermore , we would like to accomplish this without computing any integrals .the first observation is that instead of generating an from it is enough to generate a the probability vector randomly with the probability distribution and then generate an from the ( specific ) probability distribution .the last step can be accomplished using the inverse transform theorem . in this casewe have : \\&= \underset{q \sim w(q)}{\e } \left [ q(x ) \right ] = \int q(x ) w(q ) dq .\end{split}\ ] ] this leaves us with the problem of generating .this is accomplished by rejection sampling .i.e. we first generate a random variable with a different distribution , and if it does not satisfy a given condition , we `` reject it '' and re - generate it , until the condition is satisfied .we first generate a probability distribution uniformly over the unit simplex .there are several algorithms for uniform sampling over the unit simplex .a simple algorithm , for example , is normalizing a vector of i.i.d . exponential random variables .define , and .we will determine later on such that .having generated , we toss a coin with probability for `` accept '' .if is accepted , this is the resulting random variable and we set . otherwise , we draw again and repeat the process .let denote the event of acceptance , and denote the distribution of which is the uniform distribution over the simplex .the distribution of equals the distribution of given that it was accepted .i.e. : which is the desired distribution . to determine ,suppose we know the maximum of .this is usually possible since it is a convex optimization problem .even if this value is not known , a bound on this value will be sufficient .suppose that is the maximizer of and therefore also of .then it is enough to set .an important question from implementation perspective is the average number of iterations required . since the probability of acceptance in each iteration is fixed , the number of iterations is a geometrical random variable , with mean . by lemma [ lemma : f_exp_weight_ub ]we can relate to and bound the average number of iterations . 
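before the expected number of iterations is bounded, the following sketch makes the rejection sampler just described concrete. it assumes a generic bounded concave gain function (here the entropy of the prior, purely for illustration, in place of the gain used in the paper) together with a known bound on its maximum; all names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_uniform_simplex(d, rng):
    # normalizing i.i.d. exponential variables gives a uniform point on the unit simplex
    e = rng.exponential(size=d)
    return e / e.sum()

def sample_exponentially_weighted_prior(g, g_max, eta, d, rng, max_iter=100000):
    # rejection sampling: the target density is proportional to exp(eta * g(q)) on the simplex,
    # where g is concave and g_max is an upper bound on max_q g(q)
    for _ in range(max_iter):
        q = sample_uniform_simplex(d, rng)
        if rng.uniform() < np.exp(eta * (g(q) - g_max)):   # acceptance probability in (0, 1]
            return q
    raise RuntimeError("no sample accepted; increase max_iter or decrease eta")

def entropy(q):
    q = np.clip(q, 1e-12, 1.0)
    return float(-np.sum(q * np.log(q)))

d, eta = 4, 5.0
q = sample_exponentially_weighted_prior(entropy, np.log(d), eta, d, rng)
x = rng.choice(d, p=q)   # finally, a symbol is drawn from the sampled prior (inverse transform)
print(q, x)
```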
using the lemma we have : + \frac{d}{\eta } \ln \left ( \frac{\eta e n g_0}{d } \right ) \\ & \leq \frac{1}{\eta } \ln \left ( \e \left [ g(p ) \right ] \right ) + \frac{d}{\eta } \ln \left ( \frac{\eta e n g_0}{d } \right ) , \end{split}\ ] ] where is the dimension of the unit simplex .we obtain the following bound on : } \cdot \left ( \frac{\eta e n g_0}{d } \right)^{-d } , \ ] ] and the average number of iterations can be bounded : } \\&= \frac{1}{\e \left [ a(p ) \right ] } = \frac{1}{\alpha \e \left [ g(p ) \right ] } \leq \left ( \frac{\eta e n g_0}{d } \right)^{d } .\end{split}\ ] ] since is polynomial in and tends to , grows slower than , however this number is still prohibitively large . 1 .compute the maximum of ( a convex optimization problem ) , or a bound on it .2 . set .3 . draw uniformly over the unit simplex .[ line:2488 ] 4 .toss a coin and with probability return to step [ line:2488 ] .draw randomly according to the distribution . as noted in section [ sec : toy_categorization ] , the relation of the synthetic prediction problem to prediction under the absolute loss function , implies that the fl predictor can not be applied to our problem .here we give a specific example to see why fl fails , based on the channel defined in section [ sec : toy_categorization ] .we construct the following sequence of channels : the channel at is a mixture of with probability and a completely noisy channel . for this channel . at time ,the best a - posteriori strategy is .the sequence of channels from time onward is the alternating sequence .it is easy to see that the resulting cumulative rates are linear functions of and thus the optimum is attained at the boundaries of $ ] and . at each time , since the channel that slightly dominates the past is opposite of the channel that is about to appear , the fl predictor chooses the prior that yields the _ least _ mutual information , and ends up having a zero rate in time instances . on the other hand , by using a uniform fixed prior , a competitor may achieve an average rate of over these symbols .therefore the normalized regret of fl would be at least , and does not vanish asymptotically .p. chow , j. cioffi , and j. bingham , `` a practical discrete multitone transceiver loading algorithm for data transmission over spectrally shaped channels , '' _ ieee trans .communications _ , vol .773 775 , apr .d. love , r. heath , v. lau , d. gesbert , b. rao , and m. andrews , `` an overview of limited feedback in wireless communication systems , '' _ ieee journal on selected areas in communications _ , vol .26 , no . 8 , pp .13411365 , oct .a. mahajan and s. tatikonda , `` a training based scheme for communicating over unknown channels with feedback , '' in _ communication , control , and computing , 2009 .allerton 2009 .47th annual allerton conference on _ , 30 2009-oct . 2 2009 ,1549 1553 .r. ahleswede and n. cai , `` the avc with noiseless feedback and maximal error probability : a capacity formula with a trichotomy , '' _ numbers , information and complexity _ , pp .151176 , 2000 , special volume in honour of r. ahlswede on occasion of his 60th birthday .s. onn and i. weissman , `` generating uniform random vectors over a simplex with implications to the volume of a certain polytope and to multivariate extremes , '' _ annals of operations research _ , vol .189 , pp . 331342 , 2011 .[ online ] .available : http://dx.doi.org/10.1007/s10479-009-0567-7
we consider the problem of universally communicating over an unknown and arbitrarily varying channel , using feedback . the focus of this paper is on determining the input behavior , and specifically , a prior distribution which is used to randomly generate the codebook . we pose the problem of setting the prior as a sequential universal prediction problem , that attempts to approach a given target rate , which depends on the unknown channel sequence . the main result is that , for a channel comprised of an unknown , arbitrary sequence of memoryless channels , there is a system using feedback and common randomness that asymptotically attains , with high probability , the capacity of the time - averaged channel , universally for every sequence of channels . while no prior knowledge of the channel sequence is assumed , the rate achieved meets or exceeds the traditional arbitrarily varying channel ( avc ) capacity for every memoryless avc defined over the same alphabets , and therefore the system universally attains the random code avc capacity , without knowledge of the avc parameters . the system we present combines rateless coding with a universal prediction scheme for the prior . we present rough upper bounds on the rates that can be achieved in this setting and lower bounds for the redundancies .
thinning ( or skeletonization ) is the process of reducing an object to its skeleton . the topology preserving skeleton may be informally defined as a thinned subset of the object that retains the same topology of the original object and often many of its significant morphological properties .thinning is a very active research area thanks to its ability of reducing the amount of information to be processed for example in medical image analysis and visualization as well as simplifying the development of pattern recognition or computer - aided diagnosis algorithms .hence , it is not surprising that thinning gained a pivotal role in a wide range of applications .an exhaustive review of the literature is beyond the scope of this work .two dimensional skeletons have been used for digital image analysis and processing , optical character and fingerprint recognition , pattern recognition and matching , and binary image compression since a long time ago , see for example the survey paper .more recently , three dimensional skeletons have been widely used in computer vision and shape analysis , in computer graphics for mesh animation and in computer aided design ( cad ) for model analysis and simplification , and for topology repair .there is also a vast literature of applications of skeletons in medical imaging .they have been used for route planning in virtual endoscopic navigation , for example in virtual colonoscopy - or bronchoscopy .skeletons have also been an important part of clinical image analysis by providing centerlines of tubular structures . in particular, there is a large body of literature showing applications of skeletons to blood veins centerline extraction from angiographic images , - , and intrathoracic airway trees classification for the evaluation of the bronchial tree structure .also protein backbone models can be produced with techniques based on skeletons .furthermore , many computer - aided diagnostic tools rely on skeletons .for example , skeletons have been used to identify blood vessels stenoses - , tracheal stenoses , polyps and cancer in colon and left atrium fibrosis .there are medical applications of skeletons where topology preservation is essential .non invasively determine the three - dimensional topological network of the trabecular bone is a good example .indeed , many studies demonstrate that the elastic modulus and strength of the bones is determined by the topological interconnections of the bone structure rather than the bone volume fraction , . therefore , topological analysis plays a fundamental role in computer - aided diagnostic tools for osteoporosis .topology preserving thinning is non trivial and a vast literature , briefly surveyed in sec .[ prior ] , has been dedicated to this topic . 
in particular, thinning by iteratively removing _simple points_ is a widely used and effective technique. it works locally and for this reason is efficient and easy to implement. while reading the literature one may notice that thinning algorithms are claimed to be ``topology preserving,'' even though in most cases a precise statement of what that means is left unaddressed. this paper uses _homology theory_ to rigorously define what the virtue of being topology preserving actually consists of. this theory is less intuitive than the concepts used so far, including simple homotopy type, but exhibits some important theoretical and practical advantages that will be highlighted later in the paper. we remark that a homological definition of simple points has already been used in the context of skeletonization in , , but only in the case of cubical complexes. this paper generalizes this idea to cell complexes that are more general than cubical complexes. there are many applications that would benefit from an algorithm that deals with general unstructured simplicial complexes [p. 35]. in fact, the geometry of three-dimensional objects is frequently specified by a triangulated surface, obtained for example by using an isosurface algorithm such as _marching cubes_ [p. 539], applied to voxel data from computed tomography, magnetic resonance imaging or any other three-dimensional imaging technique. another possibility is to obtain the triangulations from the convex hull of point clouds provided, for example, by 3d laser scanners. triangulated surfaces offer two potential advantages over voxel representation. they allow one to adaptively simplify the surface triangulation, see for example fig. 16.20 in [p. 549]. they also allow one to visualize and edit the object efficiently with off-the-shelf software (for example the many visualization and editing tools for stereolithography) and without the staircase artifacts typical of voxel representations of objects with curved boundary. one may even easily print the object with additive manufacturing technology (i.e. 3d printers). another issue that arises when reading the literature is that many different definitions of topology preserving skeleton exist. in some papers, the skeleton is obtained by removing simple pairs in the spirit of simple homotopy theory, by what is well known as _collapsing_ in algebraic topology. the resulting skeleton, if no other constraints are used, has a lower dimension than the input complex. on the contrary, this paper assumes that the skeleton is always a solid object of the same dimension as the initial complex. the difference is highlighted in fig. [fig:def]. [figure [fig:def]: a complex representing an annulus. on the left, the thick cycle represents a 1-dimensional skeleton obtained by means of standard _collapsing_. on the right, the gray triangles represent a 2-dimensional skeleton according to the definition used in this paper; this kind of skeleton is obtained after removing a sequence of top dimensional cells.] in this paper the skeleton of a given complex is defined as a subset that is obtained from after removing a sequence of top dimensional cells. we require that the _homology_ of the initial complex is preserved during this process.
in particular, a top dimensional cell can be safely removed if this does not change the homology of the _complement_ of . fig. [fig:2intersection] provides an intuitive explanation of why the last requirement is desirable. this additional requirement, to the best of our knowledge, is not documented in other papers. we call such a cell a _simple cell_, which is a generalization of the idea of _simple points_ in digital topology. [figure [fig:2intersection]: the thinning algorithm is used to skeletonize a 2-dimensional simplicial complex representing an annulus. let the dark gray triangles belong to the skeleton. on the left, the result obtained by checking whether the removal of a cell changes the topology of the complement. on the right, the result obtained by checking whether the removal of a cell changes the topology of . the numbers inside triangles indicate the iteration number of the while loop in the thinning algorithm when they were removed. both skeletons preserve topology; however, in most applications, the skeleton on the left is preferred.] clearly, in nontrivial cases, the skeleton is not unique. resorting to explicit homology computations to detect simple points as in , , is quite computationally intensive, as the worst-case complexity of homology computations is cubic; see also the discussion in . in this paper, we introduce a much more efficient solution by exploiting the idea of tabulated configurations, i.e. _acyclicity tables_, which are described in detail in sec. [sec:tables]. usually, a skeleton is also required to preserve the shape of the object. in this paper we show a very simple proof-of-concept idea of how to preserve both homology and shape. of course, this is just an example to illustrate how the idea of acyclicity tables can be used together with some additional techniques that guarantee shape preservation. the rest of the paper is organized as follows. in section [prior] the prior work on thinning algorithms is surveyed. section [contrib] analyzes the original contributions of the present paper. in section [sec:intrototopology] the property of being a homology preserving thinning is rigorously stated. in section [sec:tables] the concept of acyclicity tables is introduced, whereas, in section [sec:topologypreservingalgorithm], the topology preserving thinning algorithm is presented. section [sec:benchmarks] discusses the results of the thinning algorithm on a number of benchmarks and, finally, in section [sec:conclusions] the conclusions are drawn. there are hundreds of papers about thinning. most of them fall into two categories. on the one hand, there are papers using morphological operations like erosion and dilation to obtain skeletons, see and references therein. they do not guarantee topology preservation in general. the others use the idea of removing the so-called simple points from the given cell complex, see , , . without pretending to be exhaustive, in the following we summarize previous results. most of the work on thinning regards finding skeletons of 2-dimensional images. a very comprehensive survey on this topic may be found in . this case is well covered in the literature and general solutions exist, see for example - . in case one wants to skeletonize three (or higher) dimensional images, there are far fewer papers available in the literature. most of them rely on case studies, see - . the problem is that it is hard to prove that a rule-based algorithm is general, i.e. that it removes a cell if and only if its removal does not change topology.
in 3d there are more than 134 million possible configurations for a cube neighborhood, and only treating _all_ of them correctly gives a correct thinning algorithm. references , , use explicit homology computations to detect simple points. there are a number of papers presenting thinning algorithms for 3-dimensional images in which the euler characteristic is used to guarantee topology preservation, see for example , and references therein. the problem is that the euler characteristic is a rather coarse measure of topology and it is not sufficient to preserve topology in general for 3-dimensional cubical complexes. for three dimensional images one needs to use both the euler characteristic and connectivity information to preserve topology, but this is not sufficient for four dimensional images. all the strategies presented so far are applicable only to cubical grids (pixels, voxels, ...). to the best of our knowledge, there are just a few papers dealing with 2d grids that are not cubical, and they are restricted to 2d binary images modeled by a quadratic, triangular, or hexagonal cell complex, see - . the main reason for the lack of results on general 2d simplicial complexes may be the absence of regularity in unstructured simplicial grids, which makes case-study algorithms very hard to devise and to implement. this gap in the literature is covered by the present paper. we are also not aware of algorithms that deal with unstructured 3d simplicial complexes or more general cell complexes. there are only some papers that find the 1-dimensional skeleton by using the well-known collapsing of algebraic topology - . again, this gap in the literature is covered by the present paper. in this section the main novelties presented in this paper are summarized: 1. the claim ``topology preserving thinning'' is rigorously defined, for any cell complex, by means of homology theory. a novel topology preserving thinning algorithm that removes simple cells is introduced. conceptually this algorithm falls into the category of thinning algorithms based on simple points and generalizes all previous papers. in fact, the acyclicity tables introduced in this paper give a classification of all possible simple points that can occur in a given cell complex. therefore, no rules are needed, since all of them are encoded into the acyclicity tables. the most important advantage of the novel approach is that acyclicity tables are _automatically_ filled in advance, for any cellular decomposition, with homology computations performed by a computer. therefore, once the tables are available, the implementation of a thinning algorithm is straightforward, since identifying simple cells requires just querying the acyclicity table. no other topological processing is needed. the fact that acyclicity tables are filled _automatically_ and _correctly_, for _all_ possible configurations, provides a rigorous computer-assisted mathematical proof that the homology-based thinning algorithm preserves topology. it is also verified, simply by checking all acyclic configurations, that using the euler characteristic is not enough to ensure preservation of topology in 3-dimensional or higher dimensional cubical and simplicial complexes. however, when one checks both the euler characteristic and that the number of connected components before and after cell removal remains one, then topology is preserved.
checking the euler characteristic together with connectivity does not suffice to preserve topology in 4d. the acyclicity tables for simplicial complexes of dimension 2, 3 and 4 and for cubical complexes of dimension 2 and 3, which can be freely used in any implementation of the proposed algorithm, are provided as supplemental material at . this way, readers are spared from implementing homology computations to produce the acyclicity tables. the thinning algorithm, unlike the standard collapsing of algebraic topology, does not require the whole cell complex data structure but uses only the top dimensional elements of the complex, with obvious memory savings. 7. as a proof of concept, an open source c++ implementation that works for 3-dimensional simplicial complexes is provided to the reader as supplemental material at . we remark that the code is optimized for readability and memory usage and not for speed. when one claims that an algorithm preserves topology, in order to give a precise meaning to this statement, one needs to specify which topological invariant is preserved. in the literature, the invariant is assumed to be, in most cases implicitly, the so-called homotopy type. the problem with this choice is that this strong topological invariant is in general not computable, according to markov . this is the reason why in this paper we propose to use homology theory, which is computable, in place of homotopy theory, even if it is weaker than the former. indeed, homology seems to be the strongest topological invariant that can be rigorously and efficiently computed. therefore, every time we claim that topology is not changed, we implicitly mean that the homology is not changed. homology groups may be used to measure and locate holes in a given space. zero dimensional holes are the connected components. one dimensional holes are handles of a given space, whereas two dimensional holes are voids totally surrounded by the considered space (i.e. cavities). one can look at a -dimensional hole as something bounded by a deformed -sphere. a space is homologically trivial (or _acyclic_) if it has one connected component and no holes of higher dimensions. a rigorous definition of homology groups is not presented in this paper due to the availability of rigorous mathematical introductions in any textbook of algebraic topology, such as , and due to the lack of space. for a more intuitive presentation for non-mathematicians one may consult , . in this paper, we consider in particular two standard ways of representing spaces, namely _simplicial_ and _cubical_ complexes. an _n-simplex_ is the convex hull of n+1 points in general position (point, edge, triangle, tetrahedron, 4-dimensional tetrahedron). a simplex spanned by vertices is denoted by ] is given as input. let ] be the maximal elements in the configuration (i.e. the configuration consists of those elements and the vertices ,[5],[19] ], ] that are the faces of ] is mapped to `0`,`1`,`2` ] is mapped to vertex `3` ], `1` ], `3` ], `0`,`2` ] (indices 5, 6, 8); 3. face: `0`,`1`,`2` ] (index 11). consequently, the index of this configuration is ; one may check, at this position of the provided acyclicity table for 3-dimensional simplices, that this configuration is not acyclic. in the same spirit, we introduce an ordering for the 2-dimensional simplex and for the 4-dimensional simplex. in the case of cubes, unlike the case of simplices, the model cube is expressly needed to specify the location of vertices in the cube.
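the indexing scheme just described amounts to a bitmask over a fixed enumeration of the faces of the model cell. the sketch below illustrates the idea for the 3-dimensional simplex; the particular face ordering chosen here (vertices, then edges, then triangles, each in lexicographic order) is our own illustrative choice and may differ from the ordering used for the tables provided as supplemental material.

```python
from itertools import combinations

# faces of the model 3-simplex on vertices {0, 1, 2, 3}, listed in a fixed order:
# first the vertices, then the edges, then the triangles (an assumed ordering).
MODEL_FACES = (
    [frozenset(c) for c in combinations(range(4), 1)] +
    [frozenset(c) for c in combinations(range(4), 2)] +
    [frozenset(c) for c in combinations(range(4), 3)]
)

def configuration_index(present_faces):
    """bitmask index of a configuration: bit i is set iff the i-th face
    of the model simplex (in the fixed ordering above) is present."""
    present = {frozenset(f) for f in present_faces}
    index = 0
    for i, face in enumerate(MODEL_FACES):
        if face in present:
            index |= 1 << i
    return index

# example: a configuration consisting of the triangle {0,1,2} and all of its faces
config = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]
print(configuration_index(config))   # the position to look up in the acyclicity table
```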
the models for 2- and 3-dimensional cubes are represented in figs. [fig:templates]b and [fig:templates]c. the ordering for the 2-dimensional cube is , whereas for the 3-dimensional cube (voxel) it is . of course, in order to compute the index in the acyclicity table, exactly the same procedure as the one described for the 3-dimensional simplex is used. historically, the acyclicity tables for cubes and simplices were introduced in order to speed up homology computations. in this paper we provide an even stronger result. not only is the homology of the initial set and of its skeleton the same, but one can construct a retraction from the initial set to its skeleton. the existence of a retraction implies the isomorphism in homology, but the existence of a retraction is a stronger property than homology preservation. we demonstrated the existence of a retraction by a brute-force computer-assisted proof, i.e. by checking all acyclic configurations. thus, the following lemma holds. for every acyclic configuration in the boundary of 2-, 3- or 4-dimensional simplices and 2- or 3-dimensional cubes (denoted as ) there exists a simple homotopy retraction from to . at the end of this section, let us define more rigorously a _simple cell_. a cell in a complex is _simple_ if is acyclic. in the supplemental material, we already provide the acyclicity tables for dimensional simplices and dimensional cubes (pixels, voxels), in such a way that the reader can safely bypass the step of constructing them. we note that we do not provide tables for higher dimensional simplices or cubes, since the memory required to store them is huge (all configurations for a -dimensional cube require almost pb, and all configurations for a -dimensional simplex require pb; on the contrary, the acyclicity table for the 3-dimensional simplices provided as supplemental material requires no more than kb). in this section we propose a simple thinning technique that iteratively removes simple cells. the algorithm is valid both for cubes and simplices, provided that the corresponding acyclicity table is used. we want to point out that the algorithm works on top dimensional cells (cubes, simplices); therefore, unlike the case of homological algorithms or collapsing, there is no need to generate the whole lower dimensional cell complex data structure when checking if is simple. the input of the algorithm consists of a list of top dimensional cells in the considered set . the output is a subset of being its skeleton. at first, we present a version of the algorithm that preserves only the topology of . at the beginning, one searches the list to find all the cells that are simple and stores them in a queue. then, the queue is processed as long as it is not empty. in each iteration, an element is removed from the queue. then, with the acyclicity table, one has to check if is simple in the set . we want to point out that elements already removed from the considered set in previous iterations are treated as the exterior of at a given iteration. if is simple in , then it is removed from the set. in this case, all neighbors of (a neighbor is any cell/simplex such that ) that are still in are added to the queue. the details of the presented procedure are formalized in alg. [alg:thinning].
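for concreteness, a minimal python sketch of this queue-based procedure is given below. it assumes a hypothetical `is_simple(c, remaining)` predicate that queries the acyclicity table for the configuration of `c` against the current complement, and a precomputed `neighbors` map; it is an illustration of the procedure, not the c++ implementation provided as supplemental material.

```python
from collections import deque

def thinning(cells, neighbors, is_simple):
    """queue-based thinning: iteratively remove top-dimensional cells whose
    intersection with the complement is acyclic (decided by `is_simple`)."""
    remaining = set(cells)
    queue = deque(c for c in remaining if is_simple(c, remaining))
    enqueued = set(queue)
    while queue:
        c = queue.popleft()
        enqueued.discard(c)
        if c not in remaining or not is_simple(c, remaining):
            continue                      # re-check: the complement may have changed
        remaining.remove(c)               # c is simple: cull it
        for n in neighbors[c]:
            if n in remaining and n not in enqueued:
                queue.append(n)
                enqueued.add(n)
    return remaining                      # the skeleton
```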
list of maximal cells ;list of maximal cells that belong to the skeleton of ; queue ; ; [alg : first ] continue ; ; [ alg : remove ] put all neighbor cells of in to the queue ;[alg : second ] ; [ alg : matred ] we want to stress that alg .[ alg : thinning ] is just an illustration .it may be turned into an efficient implementation by using more efficient data structures ( for instance removing from the list can be replaced by a suitable marking the considered element . ) also searching for intersection of with current should be performed by using hash tables that , for the sake of clarity , are not used explicitly in alg .[ alg : thinning ] .let us now discuss the complexity of the algorithm .clearly the _ for _ loop requires operations .we assume that one can set and check a flag of every cell in a constant time .this flag indicates if a cell is removed from or not .every cell appears in the _ while _ loop only times , where is maximal number of neighbors of a top dimensional cell in the complex .therefore , the _ while _ loop performs at most iterations before its termination .the time complexity of every iteration is , which means that the overall complexity of the procedure is .typically the number is a dimension dependent constant and , in this case , the complexity of the algorithm is .the same complexity analysis is valid for alg .[ alg : matredshapepreserving ] .we now present in alg .[ alg : matredshapepreserving ] a simple idea that enables to preserve the shape of the object in addition to its topology .we stress that the aim of this second algorithm is just to show how to couple topology and shape preservation .list of maximal cells ; list of maximal cells that belong to the skeleton of ; queue ; ; queue ; [ acyclicity ] ; put all the neighbor cells of in to the queue ; ; ; [ shapepreservingcodition ] _ break _ ; ; [ alg : matredshapepreserving ] in alg .[ alg : matredshapepreserving ] there is one basic difference with respect to alg . [ alg : matred ] . in alg .[ alg : matredshapepreserving ] , after removing a single external layer of cells , a check is made at line [ shapepreservingcodition ] to determine whether all cells that remain in are already in the boundary of .once they are , the thinning process terminates .the topology is still preserved due to line [ acyclicity ] . the additional constraint used at line [ shapepreservingcodition ] of alg .[ alg : matredshapepreserving ] is very simple and it gives acceptable results in practice .it may be easily coupled with other techniques to preserve shape already described in literature .finally , we discuss the situation when one wants to keep the skeleton attached to some pieces of the external boundary of the mesh . in this case , when testing whether a top dimensional cell is simple , one should consider as elements in .in other words , elements from are not considered as an interface between the object to skeletonize and its exterior .now we are ready to give a formal definition of skeleton .let us have a simplicial or cubical complex .a _ skeleton _ of , denoted by , is a set of top dimensional simplices or cubes such that : 1 . is obtained from by iteratively removing top dimensional elements , provided that the intersection of with complement is acyclic . consequently , homology groups of and are isomorphic ; 2 .there is no top dimensional element that has an acyclic intersection with complement ( i.e. the process of removing such elements has been run as long as possible . 
) [def:skeleton] we want to point out that sometimes, due to some deep phenomena arising in simple homotopy theory, a skeleton may be redundant. for instance it is possible to have a skeleton of a 3-dimensional ball that is a bing's house instead of being a single top dimensional element. in general it is impossible to avoid this issue due to some intractable problems in topology. in the following, we formally show that the skeleton obtained from alg. [alg:thinning] satisfies def. [def:skeleton]. this fact is shown with a sequence of two simple lemmas. the homology of and are isomorphic. the proof of this lemma is a direct consequence of the mayer-vietoris sequence. let be the elements removed during the course of the algorithm (the enumeration is given by the order in which they were removed by the algorithm). let us show that, for every , the homology of and the homology of are isomorphic. let us write the mayer-vietoris sequence in reduced homology for : _n ( _j=1^i-1 t_j ) . the intersection is acyclic. this is because the intersection of with the set complement is checked in the acyclicity tables to be acyclic. once it is, also is acyclic. therefore, is trivial. also, since is a simplex or cube, it is acyclic. this provides being trivial (we are considering reduced homology). consequently, from the exactness of the presented sequence we have the desired isomorphism between and . the conclusion follows from a simple induction. [lem:acintersection] after termination of the algorithm there is no element that has an acyclic intersection with the complement. let be the elements removed during the course of the algorithm (the enumeration is given by the order in which they were removed by the algorithm). suppose, by contradiction, that a exists such that it has an acyclic intersection with the complement. let denote the index of the last element among that has a nonempty intersection with . if , then would be put into the queue in line [alg:first] of alg. [alg:thinning] and removed from in line [alg:remove] of the algorithm, since no change to its intersection with the complement is made by removing . if , then after removing the intersection of with the complement does not change. therefore, it is acyclic after removing for . when alg. [alg:thinning] removes in line [alg:second], is added to the list and it is going to be removed in line [alg:remove], since removing for does not affect the acyclicity of the intersection of with the complement.
in both cases we showed that is removed from by alg .[ alg : thinning ] .therefore , a contradiction is obtained .skeletons can be used in computer - aided diagnostic tools for coarctation and aneurism , by evaluating the transverse areas of any vessel structure , see for example .aorta coarctation is a congenital heart defect consisting of a narrowing of a section of the aorta .surgical or catheter - based treatments seek to alleviate the blood pressure gradient through the coarctation in order to reduce the workload on the heart .the pressure gradient is dependent on the anatomic severity of the coarctation , which can be determined from patient data .gadolinium - enhanced magnetic resonance angiography ( mra ) has been used in a 8 year old female patient to image a moderate thoracic aortic coarctation , see fig .[ ex1]b shows a rendering of the 3d triangulated surface , obtained by segmenting the mra data , which models the ascending aorta , arch , descending aorta , and upper branch vessels .the interior of the surface has been covered with 94756 tetrahedra . the skeleton of this vessel structure , obtained with alg .2 , is shown in fig .[ ex1]c .cerebrovascular aneurysms are abnormal dilatations of an artery that supplies blood to the brain .magnetic resonance imaging ( mri ) has been used to image the cerebral circulation in a 47 year old female patient , see fig .[ brain]a .[ brain]b shows a rendering of the 3d triangulated surface , obtained from the segmentation of the mri data .the interior of the surface has been covered with 390,081 tetrahedra .the skeleton of this vessel structure , obtained with alg .2 , is shown in fig .[ brain]c .pulmonary arteries connect blood flow from the heart to the lungs in order to oxygenate blood before being pumped through the body .skeletons have been used for quantitative analysis of intrathoracic airway trees in .a 3d triangulated surface , shown in fig . [ pulmonary]b , represents the 3d model of pulmonary airway trees of a 16 year old male patient obtained by segmenting data from computed tomography ( ct ) images , see fig .[ pulmonary]a .the interior of the surface is covered with 236,433 tetrahedra .the topology preserving skeleton obtained by alg . 2is shown in fig .[ pulmonary]c .a 3d triangulated surface that represents the 3d model of a colon is obtained by segmenting data from computed tomography ( ct ) images , see fig .[ excolon ] .the interior of the surface is covered with 2,108,424 tetrahedra .the topology preserving skeleton obtained by alg .2 , which may be used as a colon centerline to guide a virtual colonoscopy , is shown in black in fig .[ excolon ] .a 3d model of a human bone belonging to a 61 year old male patient has been obtained from a stack of thresholded 2d images acquired by x - ray microct scanning . in particular , a region of interest ( roi ) of size mm ( pixels , pixel size m ) is selected in the trabecular region .a stack of 195 2d images has been considered , resulting in a volume of interest ( voi ) of approximately mm . from this 3d model , consisting of about 2.4 millions voxels , a 3d triangulated surface has been obtained , see fig .[ exbone]a .the interior of this surface is covered with 688,773 tetrahedra .the topology preserving skeleton obtained by alg . 
1is shown in fig .[ exbone]b .the results of algorithm [ alg : matred ] on some benchmarks are visible in fig.s [ ex1]-[ex14 ] , whereas the results obtained with algorithm [ alg : matredshapepreserving ] are shown in fig.s [ ex5]-[ex13 ] .this paper introduces a topology preserving thinning algorithm for cell complexes based on iteratively culling simple cells .simple cells , that may be seen as a generalization of simple points in digital topology , are characterized with homology theory . despite homotopy, homology theory has the virtue of being computable .it means that , instead of resorting to complicated rule - based approaches , one can detect simple cells with homology computations .the main idea of this paper is to give a classification of all possible simple cells that can occur in a cell complex with acyclicity tables .these tables are filled in advance automatically by means of homology computations for all possible configurations .once the acyclicity tables are available , implementing a thinning algorithm does not require any prior knowledge of homology theory or being able to compute homology .the fact that acyclicity tables are filled automatically and correctly for all possible configurations provides a rigorous computer - assisted mathematical proof that the homology - based thinning algorithm preserves topology .we believe that such rigorous topological tools simplify the study of thinning algorithms and provide a clear and safe way of obtaining skeletons .van uitert , r , m . summers , `` automatic correction of level set based subvoxel precise centerlines for virtual colonoscopy using the colon outer wall , '' _ ieee trans med .imaging _ ,26 , no . 8 , pp . 10691078 , 2007 .m. ding , r. tong , s.h .liao , j. dong , `` an extension to 3d topological thinning method based on lut for colon centerline extraction , '' _ comput .methods programs biomed .1 , pp . 3947 , 2009 .k. haris , s.n .efstratiadis , n. maglaveras , c. pappas , j. gourassas , g. louridas , `` model - based morphological segmentation and labeling of coronary angiograms , '' _ ieee trans . med .imaging _ , vol .10 , pp . 10031015 , 1999 .frangi , w.j .niessen , r.m .hoogeveen , t. van walsum , m.a viergever , `` model - based quantitation of 3d magnetic resonance angiographic images , '' _ ieee trans . med .10 , pp . 946956 , 1999 .m. niethammer , a.n .stein , w.d .kalies , p. pilarczyk , k. mischaikow , a. tannenbaum , `` analysis of blood vessel topology by cubical homology '' , _ proc . of the international conference on image processing 2002 _ , vol . 2 , 969 - 972 , 2002 .aylward , e. bullitt , initialization , noise , singularities , and scale in height ridge traversal for tubular object centerline extraction , _ ieee trans . med .imaging _ , vol .2 , pp . 6175 , 2002 .m. straka , m. cervenansky , a. la cruz , a. kochl , m. sramek , e. groller , d. fleischmann , `` the vesselglyph : focus & context visualization in ct - angiography , '' _ proc . of ieee visualization2004 _ , pp . 385392 , 2004 .baker , s.s .abeysinghe , s. schuh , r.a .coleman , a. abrams , m.p .marsh , c.f .hryc , t. ruths , w. chiu , t. ju , `` modeling protein structure at near atomic resolutions with gorgon , '' _ j. struct2 , pp . 360373 , 2011 .y. yang , l. zhu , s. haker , a.r .tannenbaum , d.p .giddens , `` harmonic skeleton guided evaluation of stenoses in human coronary arteries , '' _ med .image comput .8 , pp . 490497 , 2005 .e. sorantin , c.s .halmai , b. erdhelyi , k. palgyi , l.g .nyl , l.k .oll , b. geiger , f. 
lindbichler , g. friedrich , k. kiesler , `` spiral - ct - based assessment of tracheal stenoses using 3-d - skeletonization , '' _ ieee trans . med .imaging _ , vol .21 , pp . 263273 , 2002 .d. ravanelli , e. dal piaz , m. centonze , g. casagranda , m marini , m. del greco , r. karim , k. rhode , a. valentini , `` a novel skeleton based quantification and 3d volumetric visualization of left atrium fibrosis using late gadolinium enhancement magnetic resonance imaging , '' _ ieee trans ., 2014 , in press .wehrli , b.r .gomberg , p.k .saha , h.k .song , s.n .hwang , p.j .snyder , `` digital topological analysis of in vivo magnetic resonance microimages of trabecular bone reveals structural implications of osteoporosis , '' _ j. bone miner .16 , no . 8 , pp . 15201531 , 2001 .saha , y. xu , h. duan , a. heiner , g. liang , `` volumetric topological analysis : a novel approach for trabecular bone classification on the continuum between plates and rods , '' _ ieee trans .imaging _ , vol .11 , pp . 18211838 , 2010 .m. niethammer , w.d .kalies , k. mischaikow , a. tannenbaum , `` on the detection of simple points in higher dimensions using cubical homology , '' _ ieee trans .image process .15 , no . 8 , pp .24622469 , 2006 .m. ashwin , g. dinesh , `` a new sequential thinning algorithm to preserve topology and geometry of the image , '' _ international journal of mathematics trends and technology _ , volume 2 .issue 2 , pp . 15 , 2011 .jonker , o. vermeij , on skeletonization in 4d images , _ proceeding sspr 96 proceedings of the 6th international workshop on advances in structural and syntactical pattern recognition _ , springer - verlag london , uk 1996 .k. palgyi , e. balogh , a. kuba , c. halmai , b. erdhelyi , e. sorantin , k. hausegger , `` a sequential 3d thinning algorithm and its medical applications , '' _ information processing in medical imaging _ , lecture notes in computer science , vol .2082 , pp . 409415 , 2001 .l. tcherniavski , p. stelldinger , `` a thinning algorithm for topologically correct 3d surface reconstruction , '' _ viip 2008 , proc .8th iasted international conference on visualization , imaging , and image processing _ ,villanueva ( ed . ) , pp .119124 , acta press , 2008 .p. dotko , r. specogna , `` physics inspired algorithms for ( co)homology computations of three - dimensional combinatorial manifolds with boundary , '' _ comput .22572266 , 2013 .
a topology preserving skeleton is a synthetic representation of an object that retains its topology and many of its significant morphological properties . the process of obtaining the skeleton , referred to as skeletonization or thinning , is a very active research area . it plays a central role in reducing the amount of information to be processed during image analysis and visualization , computer - aided diagnosis or by pattern recognition algorithms . this paper introduces a novel topology preserving thinning algorithm which removes _ simple cells_a generalization of simple points of a given cell complex . the test for simple cells is based on _ acyclicity tables _ automatically produced in advance with homology computations . using acyclicity tables render the implementation of thinning algorithms straightforward . moreover , the fact that tables are automatically filled for all possible configurations allows to rigorously prove the generality of the algorithm and to obtain fool - proof implementations . the novel approach enables , for the first time , according to our knowledge , to thin a general unstructured simplicial complex . acyclicity tables for cubical and simplicial complexes and an open source implementation of the thinning algorithm are provided as additional material to allow their immediate use in the vast number of practical applications arising in medical imaging and beyond . + * keywords : * + skeleton , skeletonization , iterative thinning , topology preservation , algebraic topology , homology , topological image analysis
dna microarray and sequencing technologies allow investigators to measure the transcription levels of a large number of genes within several diverse experimental conditions (or experimental samples) [ ]. the experimental conditions may correspond to either different time points, different environmental samples, or different individuals or tissues. the data resulting from these technologies are usually referred to as _gene expression data_. a gene expression data set may be seen as a data matrix, with rows and columns respectively corresponding to genes and experimental conditions. each cell of this matrix represents the expression level of a gene under a biological condition. the analysis of gene expression data usually involves the search for groups of co-regulated genes, that is, groups of genes that exhibit similar expression patterns. conversely, the analysis may seek samples or conditions (e.g., patients) with similar expression profiles. these may indicate the same attribute, such as a common type or state of a particular disease. vast amounts of gene expression data from numerous experiments are available for detailed analysis through public repositories such as the gene expression omnibus (geo) [ ] at the national center for biotechnology information. in general, unveiling the hidden structure in gene expression data requires the use of exploratory analytical methods such as _clustering_. cluster analysis has been used successfully to analyze a wide variety of transcriptomes [e.g., see the review by ]. as all major biological functions are built on the synergistic interplay of multiple proteins (the role of genes is to produce proteins), clustering similar gene expression patterns into distinct groups corresponds to the belief that different genes that are regulated and co-expressed at the same time and in similar locations are likely to contribute to the same biological functions. classical clustering analysis (e.g., the popular k-means algorithm [ ]) associates a given gene with only one cluster. moreover, all genes in a given cluster must show similar co-regulation patterns across all experimental conditions. these are very stringent conditions for gene expression, as a given protein (the product of a gene) may have the capacity to regulate several different biochemical reactions. in fact, many proteins intervene in a number of different biological processes or biochemical functions, as documented in the gene ontology (go) project [ ], a major bioinformatic initiative to unify the representation of gene and gene product attributes across all species. the go project provides an ontology of controlled vocabularies that describes gene products in terms of their associated biological processes, cellular components and molecular functions in a species-independent manner. classical clustering of genes (or conditions) cannot assign a gene (or a condition) to several different clusters. the approach of biclustering better accommodates the multi-functional character of genes across subsets of experimental conditions. biclustering is the simultaneous clustering of genes (rows) and conditions (columns). in biclustering, a given gene may be associated simultaneously with several different clusters, which may describe distinct biological processes that are run by a cell at a given time and which use a given set of proteins.
seems to be the first to have applied a clustering method to simultaneously cluster rows and columns .he introduced the so - called _ direct clustering _algorithm , a partition - based algorithm that allows for the division of data into submatrices ( biclusters ) .we apply our methods to the analysis of gene expression data associated with retinal detachment ( rd ) , a disorder of the eye that typically leads to permanent vision loss .rd occurs when the sensory layer of the retina ( a thin tissue lining the back of the eye ) pulls away from the pigmented layer of the retina .this results from atrophy or tearing of the retina secondary to a systemic disease such as diabetes or from injury or other disturbances of the eye that allow fluids to enter the space between the sensory and pigmented retinal layers [ ] . surgical intervention to remove the detached parts of the retinais the current standard of care to prevent further progression of the disorder .if not treated properly , the entire retina will progressively detach , leading to complete blindness .better knowledge of the molecular mechanisms involved in the progression of rd is of great interest in order to develop novel drugs to stop or slow the detachment process , either as a substitute for surgical intervention or to use in combination with surgical intervention .molecular events that occur during the progression of rd were studied via transcription profiling [ ] .briefly , 19 retinal biopsies from patients with rd were compared to 19 normal retinal samples using affymetrix microarrays .these arrays covered the human genome , with 54,000 probe - sets .the microarray data are publicly available at the national center for biotechnology information geo website [ ] as gse28133 .transcriptional changes in photoreceptor cells in the retina are the primary target for drug development . in an initial analysis of the retinal transcriptome, used -tests ( as is normally done by bioinformatic labs ) to compare normal versus rd samples . in that analysis ,the rd inflammatory response dominated any other transcriptional changes [ ] .inflammation typically represents a secondary event that follows the initial stimulus that caused the first tissue detachment .unfortunately , the more subtle transcriptional changes in the photoreceptor cells related to the rd disorder were not well detected . in the study by , mutual information techniques indicated that changes existed in the rd transcription profile other than those associated with inflammation , and that they may be a starting point for studying transcriptomic changes associated with the photoreceptor cells in rd .however , the mutual information procedure applied in that study involves iterative optimization of the results and appears to be rather difficult to automate . in this work ,we analyze the rd data with biclustering techniques .we choose biclustering techniques because traditional clustering approaches are not well suited for the analysis of proteins .some proteins assume multiple functions and/or work as hubs that mediate , link or simultaneously synchronize multiple biological processes ( such as the protein tp53 or stat1 [ ] ) . such characteristics of proteins make it very challenging to use traditional approaches for clustering the protein interaction networks .biclustering is well adapted to this aim . in rd ,anti - inflammatory reactions try to stop or slow the further advancement of the detachment , while apoptotic ( i.e. 
, cell death ) mechanisms degrade the parts of the retina that have been detached too long and where the fragile photoreceptor cells have already started to die . as the retinais composed of three layers with more than eight different cell types [ ] , studying the behavior of photoreceptor cells is complex , and biclustering represents a major advantage when needing to account for the multiple overlapping functional responses that occur during rd .good surveys of existing biclustering algorithms are available [ ] .cheng and church s algorithm [ ] and the plaid model [ ] are two of the most popular biclustering methods .it appears that were the first authors to propose the term biclustering for the analysis of microarray data .their algorithm consists of a greedy iterative search that aims to minimize the mean squared residual error . the popular plaid model .they assumed that the expectation of each cell in the data matrix is formed with the contribution ( sum ) of different biclusters .others have generalized the plaid model into a bayesian framework [ ] . from our review of the literature ,it is apparent that most models used for biclustering do not take into account application - specific prior information about genes or conditions and pairwise interactions between genes or conditions . in this work, we propose a model that accounts for this information .we adopt a gaussian plaid model as the model that describes the biclustering structure of the data matrix .in addition , we incorporate prior information on the dependency between genes and between conditions through dedicated relational graphs , one for the genes and another for the conditions .these graphs are conveniently described by auto - logistic models [ ( ) , ] for genes and conditions .the distributions are pairwise - interaction gibbs random fields for dependent binary data .they can be interpreted as generalizations of the finite - lattice ising model [ ] , which is a popular two - state discrete mathematical model for assessing ferromagnetism in statistical mechanics .we will refer to our overall model as the _ gibbs - plaid _ biclustering model .our prior is elicited from similarities obtained from the go annotations . an -nearest - neighbor graph over the genesis built from these similarities .a key parameter of the auto - logistic prior is the so - called temperature parameter ( due to its analogy with the physical process of tempering ) .the normalizing constant of this prior is , in general , unknown and intractable .however , for computational purposes , this constant is needed to implement a stochastic algorithm that aims to estimate the posterior distribution of the genes bicluster memberships when is unknown .this means that the usual mcmc metropolis hastings procedure is not applicable to our model .instead , we adopt a hybrid procedure that mixes the metropolis hastings sampler with a variant of the wang landau algorithm [ ] .the convergence of the proposed algorithm to the posterior distribution of the bicluster membership is guaranteed by the work of .we note that some earlier attempts to incorporate gene dependency information are available in the literature , but they were carried out within the context of clustering ( as opposed to biclustering ) and variable selection . a nice review . 
proposed a bayesian model that incorporates information on pathways and gene networks in the analysis of dna microarray data .they assumed a markov random field prior to capture the gene gene interaction network .the neighborhood between the genes uses the pathway structure from the kyoto encyclopedia of genes and genomes ( kegg ) database [ ] . and have also used biological information to perform a clustering analysis of gene expression data . incorporated go annotations to predict survival time and time to metastasis for breast cancer patients using gene expression data as predictor variables .the potts model has also been used for clustering analysis of gene expression data [ ] . however , in these approaches , the potts model [ ] was used directly as a nonparametric model for clustering [ ] , and not as a prior that accounts for the gene gene interaction on another clustering model .this paper is organized as follows .section [ sec : model ] introduces the proposed gibbs - plaid model for biclustering .section [ sec : posterior ] describes the stochastic algorithm used to estimate the posterior distribution of the model parameters .this includes the combination of the wang landau algorithm with the metropolis hastings sampler .section [ sec : experiments ] shows the results of a simulation carried out to study the performance of the gibbs - plaid model and of the model selection criteria used to determine the number of biclusters present in a data set .section [ sec : applications : rd ] deals with the application of our methodology to the rd data .the supplementary material [ ] provides more complete results of our application to the rd data and a high - resolution image of figure [ fig : bicluster4:network ] .let be the number of genes , and be the number of experimental conditions .let denote the logarithm of the expression level of gene under condition ( , ) .even though we actually work with the logarithm of the expression level , we refer to as the expression level .let be the number of biclusters . for all in the set of genes , in the set of conditions , and , we define the binary variables and as taking values in , so that if and only if gene belongs to bicluster , and if and only if condition belongs to bicluster . the symbols and denote the -dimensional vector of components and the -dimensional vector comprising all the vectors , , respectively . the symbols and are similarly defined for the conditions .let denote the set of parameters of the model , which are made explicit hereafter . in the plaid model , , where is a zero - mean error term and , where denotes the overall data mean , and and are the gene and condition effects associated with bicluster , measured as deviations from the bicluster mean , .hereafter , we denote by the vector of means .the model parameters are given by .the most common distribution for the error term is a normal distribution [ ] .this is the model we adopt here . in the context of gene expression data ,the plaid model is a model for the logarithm of the gene expression levels . in the presence of extreme observations, a more robust model may be more appropriate , such as one with student- distributed errors . 
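to make the additive structure of the plaid model concrete, the following python sketch simulates a small log-expression matrix from a few overlapping biclusters. the sizes, parameter values and variable names (mu0 for the overall mean, mu_k for the bicluster means, alpha and beta for the centred gene and condition effects, sigma for the noise standard deviation) are illustrative assumptions and not the settings used later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

p, q, K = 200, 30, 3           # genes, conditions, biclusters (illustrative sizes)
mu0, sigma = 5.0, 0.5          # overall mean and noise sd (assumed values)

# binary membership labels: rho[i, k] = 1 if gene i is in bicluster k,
# kappa[j, k] = 1 if condition j is in bicluster k
rho = np.zeros((p, K), dtype=int)
kappa = np.zeros((q, K), dtype=int)
for k in range(K):
    rho[rng.choice(p, size=40, replace=False), k] = 1
    kappa[rng.choice(q, size=8, replace=False), k] = 1

Y = mu0 + sigma * rng.standard_normal((p, q))     # background plus noise
for k in range(K):
    mu_k = rng.normal(0.0, 2.0)                   # bicluster-specific mean
    # gene and condition effects, centred so they sum to zero within the bicluster
    alpha = rng.normal(0.0, 1.0, size=p) * rho[:, k]
    alpha -= rho[:, k] * alpha[rho[:, k] == 1].mean()
    beta = rng.normal(0.0, 1.0, size=q) * kappa[:, k]
    beta -= kappa[:, k] * beta[kappa[:, k] == 1].mean()
    # plaid contribution: (mu_k + alpha_i + beta_j) only inside bicluster k
    Y += np.outer(rho[:, k], kappa[:, k]) * (mu_k + alpha[:, None] + beta[None, :])

print(Y.shape)   # (200, 30) simulated log-expression matrix
```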
although some researchers have modeled the log - expression with more complex distributions such as gamma or double exponential distributions [ ] , the associated achievement of any gains within the context of biclustering is arguable .in fact , the simulation study in showed that the gaussian error term in the plaid model is fairly robust to heavily tailed errors .we assume that the variables s given the labels and are independent , that is , where stands for the standard normal density .given the bicluster labels , we define as the set of rows in the bicluster , and as the set of columns in the bicluster , .the bicluster is given by .let be the number of elements in the bicluster .the number of rows and columns in this bicluster will be denoted by and , respectively .note that .let denote the vector of all s in , and stand for the identity matrix of dimension .we further assume that , given the bicluster labels , the prior of the gene effects is a multivariate normal distribution with mean zero and variance covariance matrix given by . as shown in , we may change the parametrization of the model to a proper multivariate normal vector so that .[def : a_k ] similarly , we suppose that the prior for follows a multivariate normal distribution with mean zero and variance covariance matrix given by .note that these prior distributions satisfy the conditions of identifiability in the model , that is , they ensure that the gene and condition effects add up to zero for each bicluster .we set zero - mean independent normal priors with variances , and for the means and , respectively ; and set a scaled inverse chi - squared prior with scale and degrees - of - freedom for the variance .these hyperparameters are to be chosen adequately .for example , in our analysis in section [ sec : experiments ] , we set , and .the gene labels as well as the condition labels are usually assumed to be independent [ ] . more realistically , in this work , we incorporate prior knowledge on the relation between genes and between conditions ( if applicable ) by means of relational graphs .for example , the gene relational graph is an -nearest - neighbor graph for which the nodes correspond to the set of genes and the edges correspond to the set of `` most similar '' or closer genes .it is this notion of similarity that contains the relational information between genes .we define these similarities based on the go annotations , which define the association between gene products and terms .go terms are organized in a directed acyclic graph ( dag ) in which the parent - child relationships are edges . 
in this graph ,a go term can have multiple parents .all the go annotations associated with a term inherit all the properties of the ancestors of those terms .thus , child annotations inherit annotations from multiple parent terms .we adopt lin s pairwise similarity [ ] , which is based on the minimum subsumer of , as a means to build a notion of semantic similarity between any two go annotations .this idea was first introduced by .further details can be found in the supplementary material accompanying this paper [ ] .let denote the distance between genes and induced by lin s similarity between the genes .the gene relational graph is defined as having edge weights equal to here , and are the temperature and kernel bandwidth parameters of the graph , respectively .we assume that for pairs of genes not connected by an edge in the -nearest - neighbor data graph .the larger the weights , the more similar the genes .we will use the notation for nodes that are connected by an edge in the data graph .for example , for the rd data , we fix to define the -nearest - neighbor graph for genes , as this is often recommended for high - dimensional data [ ] . with a set of 4645 probe - sets of the rd data ,we obtain a sparse graph , with a total of 135,498 edges , which is a total of 0.63% connectivity in the graph .this corresponds to an average graph degree ( number of edges spawned from each node ) of 29 .the distribution of the gene labels in this graph is given by the binary gibbs random field \\[-8pt ] & \doteq & \exp \biggl\{\sum^{p}_{i=1}a_{i } \rho_{ik } + \sum_{i \sim i'}b_{ii ' } \bigl(t^{\rho},\sigma_{\rho}\bigr ) \mathbf{1}_{\ { \rho_{ik}=\rho_{i'k}\ } } \biggr\ } , \nonumber\end{aligned}\ ] ] where are hyperparameters that control the amount of membership ( ) in the bicluster , and , for every relation , denotes the indicator function that takes the value if and only if the relation is satisfied .this gibbs field is actually a binary auto - logistic distribution on the labels [ ( ) , ] .this gibbs prior favors biclusters formed by similar genes in the sense of the distances or similarities chosen to build the relational graph .a similar prior relational graph may be built for the conditions if a notion of similarity between the conditions can be defined .this is the case , for example , when the conditions correspond to similar measurements taken over a period of time , such as in gene expression evolution ( i.e. , time - course ) profiles . in this case, the distance between conditions may incorporate a measure of smoothness of the time - course profile during consecutive measurements .alternatively , a measure of correlation may be incorporated in the similarities if a moving average or specific arma process is assumed on the time - course profiles .these aspects of the modeling processes are better explained within the context of specific applications , such as the ones described in section [ sec : experiments ] . 
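as a concrete illustration of the gene relational graph and of the auto-logistic prior above, the python sketch below builds a symmetric k-nearest-neighbour graph from a pairwise distance matrix (random distances stand in for the distances induced by lin's similarity) and evaluates the unnormalised log-prior of a binary label vector. the gaussian-kernel form of the edge weights and the values of the field parameters a_i, the temperature and the bandwidth are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
p, k_nn = 100, 10                      # number of genes, neighbours per gene
t_rho, sigma_rho = 1.0, 1.0            # temperature and kernel bandwidth (assumed)
a = np.zeros(p)                        # external-field terms a_i (assumed zero here)

# stand-in for the GO / Lin-similarity induced distance matrix d[i, i']
X = rng.standard_normal((p, 5))
d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))

# symmetric k-nearest-neighbour adjacency (self excluded, since d[i, i] = 0)
order = np.argsort(d, axis=1)[:, 1:k_nn + 1]
adj = np.zeros((p, p), dtype=bool)
for i in range(p):
    adj[i, order[i]] = True
adj = adj | adj.T

# gaussian-kernel edge weights on the retained edges (assumed functional form)
w = np.where(adj, np.exp(-d ** 2 / (t_rho * sigma_rho ** 2)), 0.0)

def log_prior(rho_k):
    """Unnormalised log auto-logistic prior of a binary label vector rho_k."""
    field = np.dot(a, rho_k)
    # pairwise term rewards neighbouring genes that share the same label
    same = rho_k[:, None] == rho_k[None, :]
    pairwise = 0.5 * np.sum(w * same)     # 0.5: each undirected edge counted once
    return field + pairwise

rho_k = rng.integers(0, 2, size=p)
print(log_prior(rho_k))
```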
for the moment , assume that such a distance between conditions may be defined .we denote the distance between two conditions and by .the condition relational graph is defined to have edge weights equal to as before , and are the temperature and kernel bandwidth parameters of the graph , respectively .and we assume that for pairs of conditions not connected by an edge .the distribution of the condition labels in this graph is then given by the binary auto - logistic distribution \\[-8pt ] & \doteq & \exp \biggl\{\sum^{q}_{j=1 } c_{j } \kappa_{jk } + \sum_{j \sim j ' } d_{jj'}\bigl(t^{\kappa } , \sigma_{\kappa}\bigr ) \mathbf{1}_{\{\kappa_{jk}=\kappa _ { j'k}\ } } \biggr\ } , \nonumber\end{aligned}\ ] ] where are hyperparameters that control the amount of condition membership ( ) in the bicluster .note that in the absence of any prior information on the dependency between conditions , we may assume that all pairs of conditions are far apart and , consequently , that for all pairs .this leads to a prior where all the condition labels are a priori independent .to estimate the posterior of the parameters , especially the one associated with the labels , we use a hybrid stochastic algorithm .first , an augmented model is considered in order to efficiently sample the labels through a block gibbs sampling .this is the swendsen wang algorithm [ ] , which is well known in the physics and imaging literature .we briefly describe it hereafter .the effect and variance parameters are readily sampled using the usual gibbs sampler .however , the temperature hyperparameters associated with the label priors need extra consideration . in order to sample from their posterior , one needs to know the normalizing constant of the priors , which are unfortunately intractable . to solve this impasse, we adopt the wang landau algorithm [ ] , which is a technique that efficiently samples from a grid of finite temperature values by cleverly estimating the normalizing constant at each iteration .the algorithm travels efficiently over all the temperatures by penalizing each visit .the resulting algorithm is also referred to as a flat - histogram algorithm .next , we further explain how the technique is applied to our model .let the number of biclusters be fixed .we denote the partial residuals by .the likelihood is given by consequently , the full conditional probability of the genes labels is given by where and to sample from this full conditional , we use the swendsen wang algorithm [ ] .this algorithm samples the labels in blocks by taking into account the neighborhood system of the data graph .it defines a set of the independent auxiliary binary variables , called the bonds .the bonds are set to with label - dependent probabilities given by the bond is said to be _ frozen _ if .note that necessarily a frozen bond can occur only between neighboring points that share the same label .a set of data graph nodes is said to be connected if , for every pair of nodes in the set , there is a path of frozen nodes in the set connecting with .the swendsen wang algorithm is used to sample the labels as follows : given the labels , each bond is frozen independently of the others with probability if and .otherwise , the bond is set to zero . given the bond variables , the graph is partitioned into its connected components .each connected component is randomly assigned a label .the assignment is done independently , with -to- log - odds equal to . 
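a minimal sketch of the swendsen wang step just described: bonds between neighbouring genes that currently share a label are frozen with a label-dependent probability, the graph of frozen bonds is split into connected components, and each component is relabelled as a block. the freezing probability 1 - exp(-w) and the component-level log-odds function passed in below are simplified stand-ins for the exact quantities of the model, which involve the likelihood of the partial residuals.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(2)

def swendsen_wang_step(rho_k, adj, w, comp_log_odds):
    """One Swendsen-Wang block update of a binary label vector rho_k.

    adj : boolean adjacency of the relational graph
    w   : edge weights (interaction strengths) on the graph
    comp_log_odds : function mapping a set of node indices to the 1-vs-0
                    log-odds for relabelling that component as a block.
    """
    p = len(rho_k)
    # freeze bonds only between neighbours that currently share a label
    same = rho_k[:, None] == rho_k[None, :]
    freeze_prob = np.where(adj & same, 1.0 - np.exp(-w), 0.0)
    frozen = rng.random((p, p)) < freeze_prob
    frozen = np.triu(frozen, 1)
    frozen = frozen | frozen.T

    n_comp, comp = connected_components(csr_matrix(frozen), directed=False)
    new_rho = np.empty_like(rho_k)
    for c in range(n_comp):
        nodes = np.flatnonzero(comp == c)
        prob1 = 1.0 / (1.0 + np.exp(-comp_log_odds(nodes)))
        new_rho[nodes] = int(rng.random() < prob1)   # whole component flips together
    return new_rho

# toy usage: uniform relabelling (log-odds 0) on a random sparse graph
p = 50
adj = rng.random((p, p)) < 0.1
adj = np.triu(adj, 1)
adj = adj | adj.T
w = np.where(adj, 0.5, 0.0)
rho = rng.integers(0, 2, size=p)
rho = swendsen_wang_step(rho, adj, w, lambda nodes: 0.0)
print(rho[:10])
```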
in the special case of the ising model and , more generally , when for all , the labels are chosen uniformly at random . given the gene labels ,the condition labels are sampled in a similar way .we assume that the temperatures and take a finite number of values .let and be the sets of and possible values for and , respectively .we assume that the prior distribution of is a uniform distribution on the grid of values .note that is directly proportional to where and denote the normalizing constants for and , respectively [ see equations ( [ eq : h : rho ] ) and ( [ eq : h : kappa ] ) ] . in general , these constants can not be easily evaluated and are intractable , except for the very simplest cases .mcmc techniques , such as metropolis hastings , are of no use here because the constants change with the value of . instead , in order to obtain samples from the posterior of the labels , we use a stochastic algorithm based on the wang landau algorithm [ ] .the sampling from this algorithm simultaneously provides approximate samples from the posterior of the labels and the parameters and estimates of the posterior probability mass function of . provided a nice exposition of the algorithm and showed its convergence . successfully used a variant of the wang landau algorithm to estimate the posterior of the temperature of the potts model .landau algorithm considers the target joint distribution \\[-8pt ] \quad&&\qquad\propto p\bigl ( y | \sigma^2 , \theta , \rho , \kappa\bigr ) \pi\bigl ( \sigma^2 , \theta\bigr ) \prod_{k=1}^k h_{\rho , k}\bigl ( \rho_k , t^{\rho } \bigr ) h_{\kappa , k}\bigl ( \kappa_k , t^{\kappa } \bigr ) / \psi\bigl ( t^{\rho } , t^{\kappa}\bigr ) , \nonumber\end{aligned}\ ] ] where is given by where is the constant such that . the algorithm samples from iterative stochastic approximations of this distribution ( see the algorithm steps below ) , so that the marginal of the parameters and labels converges to the target marginal and the marginal of converges to , which turns out to be a uniform distribution on the grid of temperatures .the main idea of the stochastic approximation is to replace by an iterative estimate , say .consider equation ( [ eq : joint : target ] ) with replaced by its estimate .since is uniform , then integrating this equation so as to obtain the estimate , and using equation ( [ eq : joint : target : normalizing ] ) , we have that at convergence therefore , the quantities given in the left - hand side of equation ( [ eq : phi ] ) give an estimate of the posterior probability mass function of the temperatures .let be the set of temperatures considered .landau algorithm we have implemented depends on an updating proposal of the form , with and if . the proposal is similarly defined .this proposal corresponds to the proposal of that was used within the context of simulated tempering . suggested a different proposal based on a multinomial distribution .however , their proposal involves considerably more computation .the algorithm proceeds as follows : given and at iteration : sample from the proposal distribution .set with probability otherwise set , where .sample from the proposal distribution . 
set with probability otherwise set , where .[ psi1 ] update : for , set \\[-8pt ] & & \qquad=\log \hat{\psi } ^{(t)}\bigl(t^{\rho } , t^{\kappa}\bigr ) + \gamma^{(t ) } \biggl ( \mathbf{1 } _ { \ { ( t^{\rho,(t+1 ) } , t^{\kappa , ( t+1 ) } ) = ( t^{\rho } , t^{\kappa})\ } } -\frac{1}{m n } \biggr ) .\nonumber\end{aligned}\ ] ] sample and with the swendsen wang algorithm .[ par ] sample using the usual gibbs sampler . in step ( [ psi1 ] ), is a random sequence of real numbers decreasing slowly to 0 .we chose according to the wang landau schedule suggested by .the sequence is kept constant until the histogram of the temperatures is flat , that is , until has equiprobably visited all the values of the grid . at the recurrent time suchthat is approximately uniformly distributed , we set where .when becomes too small , is set to . in practice, a very large number of iterations is needed to reach convergence of the quantities given in equation ( [ eq : phi ] ) [ or equation ( [ eq : log : psi ] ) ] .we carried out a small simulation ( not shown here ) to get a better idea of the number of simulations needed for a problem like ours .the answer lies at about one - half million iterations . a theoretical proof of the convergence of this algorithm is given in the supplementary material [ ] . in step ( [ par ] ) ,the parameters are sampled with a gibbs sampler .the full conditional posterior of the parameters is straightforward to derive ; hence , it is not spelled out here . the temperatures ( and also the set , if appropriate ) are obtained by using the procedure of to elicit their prior critical temperatures from the random cluster models associated with the potts model .the kernel bandwidth parameters and are kept constant and set to the corresponding average nearest - neighbor distance [ ] .to build our simulated data sets , we used two different pools of genes : one from the yeast cycle data [ ] and the second from the retinal detachment ( rd ) data [ ] . the yeast cycle data set shows the time - course fluctuation of the log - gene - expression - levels of 6000 genes over 17 time points .the data have been analyzed by several researchers [ ( ) , ] and are a classical example for testing clustering algorithms [ ] .we use the five - phase subset of this data , which consists of genes with expression levels that peak at different time points , corresponding to the five phases of the cell cycle . of the genes ,only are annotated with go terms .the rd data set is described in greater detail in section [ sec : applications : rd ] .we used this data set so as to have simulations that resemble the rd data more closely .we randomly chose 2000 probe - sets ( i.e. , genes ) out of the 4645 probe - sets present in these data in order to study many scenarios for the simulated data .based on lin s pairwise similarities , discussed in section [ sec : prior ] , we built corresponding relational graphs comprising the annotated genes .as with the real data , we simulated 38 conditions for the genes taken from the rd data set . recall that the rd data set consists of a group of 19 biopsies from patients with rd and a control group of 19 non - rd biopsies . as described in ,the patients can be further organized into three classes of rd : early stage ( rd month , 5 patients ) , mid - term stage ( month rd months , 7 patients ) and late stage ( rd months , 7 patients ) . 
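returning to the sampler, the temperature move of step ([psi1]) above can be sketched as follows for a single one-dimensional temperature grid. the neighbouring-value proposal, the metropolis-style acceptance based on the running estimates of the normalising constants, and the flatness test that halves the adaptation step are simplified assumptions rather than the exact implementation described above.

```python
import numpy as np

rng = np.random.default_rng(3)

def wang_landau_temperature_move(state, grid, log_h, log_psi, hist, gamma):
    """One Wang-Landau move over a finite temperature grid.

    state   : current index into `grid`
    log_h   : function giving the unnormalised log-prior of the current
              labels at a given temperature (model-specific)
    log_psi : running estimates of the log normalising constants
    hist    : visit counts used for the flat-histogram check
    gamma   : current adaptation step size
    """
    m = len(grid)
    # propose a neighbouring grid value (clamped at the ends)
    prop = int(state + rng.choice([-1, 1]))
    prop = min(max(prop, 0), m - 1)

    # Metropolis-style ratio with the current normalising-constant estimates
    log_ratio = (log_h(grid[prop]) - log_psi[prop]) - (log_h(grid[state]) - log_psi[state])
    if np.log(rng.random()) < log_ratio:
        state = prop

    # penalise the visited temperature so the chain keeps exploring the grid
    log_psi += gamma * (-1.0 / m)
    log_psi[state] += gamma
    hist[state] += 1

    # flat-histogram check: shrink gamma once all temperatures are visited evenly
    if hist.min() > 0.8 * hist.mean():
        gamma *= 0.5
        hist[:] = 0
    return state, gamma

# toy usage with a made-up log_h
grid = np.linspace(0.5, 2.0, 10)
log_psi = np.zeros(len(grid))
hist = np.zeros(len(grid))
state, gamma = 0, 1.0
for _ in range(2000):
    state, gamma = wang_landau_temperature_move(
        state, grid, lambda t: -1.0 / t, log_psi, hist, gamma)
print(state, gamma)
```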
the relational condition graph associated with the genes from the rd data set was built so that patients in the same group were related in the graph .the distances between patients in the same group were assumed to be the same . for the genes taken from the yeast cycle data set , we simulated 17 conditions , the same number of conditions found in the real data .the modeling of the relational condition graph associated with these genes was inspired by the time dependency in the data .this allowed us to consider biclusters formed by consecutive conditions , which are easier to visualize .thus , for these simulated data , the similarity between conditions was induced by the correlation between time - consecutive conditions . the correlation distance between conditionswas set to the value of the correlation parameter does not affect the relational structure given by the -nearest - neighbor graph . in our simulations , we set . setting as an unknown parameter of the modelwould unnecessarily complicate the model because conducting inference on would involve knowledge of the normalizing constant , which in turns depends on and the temperature .a high value of should guide the model to consider clustering time - consecutive conditions together . as our label prior favors common labels for genes or conditions that are strongly related in the graph , we used a hierarchical clustering ( e.g. , ward s minimum variance method [ ] ) with different tree cutoffs to generate labels for different numbers of biclusters .clusters that split at higher cutoffs in the tree were used as candidates for overlapping biclusters .the expression levels of the bicluster cells associated with the data for genes taken from the yeast cycle data were generated as follows : was generated from a normal distribution ; was generated from a normal , distribution ; the gene effects were generated as normal distributions , with the means equal to , and the variances equal to their prior variances , while keeping the constraint , ( see the last paragraph of section [ sec : plaid : model ] on page ) ; the condition effects were generated similarly ; and the variance was generated from an inverse- . in this fashion , we created data sets with the following numbers of biclusters : . each of these cases was replicated 15 times .figure [ fig : simulated : data ] shows some examples of the simulated data for different values of .the expression levels of the bicluster cells associated with the data for genes taken from the rd data set were generated in the same manner , except for the parameters that were generated from a normal distribution , with mean and variance . 
in this case, we created data sets with the following numbers of biclusters : .each of these cases was replicated 15 times .a measure of similarity between two sets of biclusters and is given by the so - called f1-measure [ ] .the f1-measure is an average between _ recall _ and _ precision _ , two measures of retrieval quality introduced in the text - mining literature [ ] .let be two biclusters , and be the number of genes in and , and be the number of conditions in and , and and be the number of elements in and , respectively .precision and recall are given by recall is the proportion of elements in that are in .precision is the proportion of elements in that are also found in .the f1-measure between and is given by .when several target biclusters ( or estimated biclusters ) are to be compared with known biclusters , we use the f1-measure average : .the estimated biclusters are obtained by using a threshold of 0.5 on the marginal posterior probabilities of the labels from our stochastic algorithm .we show the results of a performance comparison between the gibbs - plaid model and the bayesian penalized plaid model of for each number of biclusters considered .the penalized plaid model uses a parameter , which controls the amount of overlap of the biclusters .it extends the original plaid model of and the nonoverlapping model of , which arise as special cases of the penalized model when is set to zero and infinity , respectively .the case of is also equivalent to our gibbs - plaid model when the temperatures tend toward infinity ( i.e. , a model without prior interaction between the genes or between the conditions ) . fit their model with a gibbs sampler , and showed that its performance is much better than the performance of five other competitive biclustering methods : the samba algorithm of , the improved plaid model of , the algorithm of , the spectral algorithm of , and the fabia procedure of . 
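the recall, precision and f1 computations above can be written directly in terms of the sets of (gene, condition) cells of two biclusters. in the python sketch below, the direction of the recall and precision ratios and the best-match averaging over the true biclusters are assumed conventions, since the exact formulas are not reproduced here; the 0.5 threshold on the posterior membership probabilities follows the text.

```python
def bicluster_cells(genes, conditions):
    """Set of (gene, condition) cells of a bicluster given its row and column sets."""
    return {(i, j) for i in genes for j in conditions}

def f1(bic_a, bic_b):
    """F1-measure between two biclusters given as (genes, conditions) pairs."""
    A = bicluster_cells(*bic_a)
    B = bicluster_cells(*bic_b)
    if not A or not B:
        return 0.0
    shared = len(A & B)
    recall = shared / len(B)        # proportion of B recovered by A (assumed direction)
    precision = shared / len(A)     # proportion of A that is also in B (assumed direction)
    if recall + precision == 0:
        return 0.0
    return 2 * recall * precision / (recall + precision)

def f1_average(estimated, truth):
    """Average, over the true biclusters, of the best F1 achieved by any estimate."""
    return sum(max(f1(e, t) for e in estimated) for t in truth) / len(truth)

# toy example
truth = [({1, 2, 3}, {1, 2}), ({4, 5}, {3, 4})]
estimated = [({1, 2}, {1, 2}), ({4, 5, 6}, {3, 4})]
print(round(f1_average(estimated, truth), 3))
```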
in this section ,we extend this performance comparison by ( a ) including our gibbs - plaid model , ( b ) considering a larger and much more diverse pool of genes in the generation of data sets , and by ( c ) considering a larger number of biclusters in the simulations .the gibbs - plaid model was run with the stopping criterion suggested by , but with the maximum number of iterations fixed at .the penalized plaid model was run for iterations .for both models , we used the last samples after the burn - in period to perform the analysis and comparisons .we set the hyperparameters of the variables and as follows : , and .figure [ fig : yeast : rd : simulation ] shows the results .overall , the gibbs - plaid model performed better than the penalized plaid model and the other five biclustering algorithms .the difference in performance was much larger when the number of biclusters was large ( for the rd data and for the yeast data ) .we stress that these results apply to a large simulation involving very different pools of genes and types of conditions .note that with the rd data , the fabia algorithm did not work for cases with a large number of biclusters ( ) , and that the spectral algorithm did not find any biclusters for all cases ( data set replicates ) with and .moreover , for and , fabia found biclusters in only a single case out of 15 replicates .similarly , for and , the spectral algorithm found biclusters in only a single case .as in the work of , we used two model selection criteria to decide on the appropriate number of biclusters for each data set .we used the aic [ ] and the conditional dic ( dic ) , which was considered in and is given by + p_{c}\bigl ( \tilde{\sigma}^2 , \tilde{\theta } , \tilde{\rho } , \tilde{\kappa } \bigr),\end{aligned}\ ] ] where is the maximum a posteriori estimator of and \\[-8pt ] & & \qquad = -2e_{\sigma^2,\theta , \rho , \kappa } \bigl[\log p\bigl(y| \sigma^2 , \theta,\rho,\kappa\bigr)|y \bigr]+2\log p\bigl(y|\tilde { \sigma}^2 , \tilde{\theta } , \tilde{\rho } , \tilde{\kappa}\bigr),\nonumber\end{aligned}\ ] ] is the corresponding effective dimension .we computed the dic and aic criteria for all the simulated data for different values of . for the data generated from the yeast cycle data , we computed these criteria for biclusters . for the data generated with the rd data , we computed these criteria for biclusters when , for when , for when , and for when . figure [ fig : dic : yeast : rd ] shows the model selection results for some of the simulated data sets .we note that , in general , aic and dic chose the same models for the small data sets generated with the pool of genes of the yeast cycle data .however , for the larger data sets generated with the pool of genes of the rd data , aic tended to reach a minimum before dic did , largely underestimating the true number of biclusters .this suggests an over - penalization of complex models by aic due to the large number of parameters induced by the large number of genes in the data sets .this behavior of aic has been noticed before [ ] . on the other hand, the elbow of the dic s curve ( that is , the start of the flattening of the dic s trajectories ) tended to occur at or after the minimum of the corresponding aic curves . in some cases ,the dic criterion reached a minimum at a number of biclusters that was larger than the true number of biclusters . 
a closer look at the extra biclusters revealed that they were , in general , very small , containing only a couple of conditions or a handful of genes .in addition , at the flattening of the dic curve , the dic s values were not ( statistically ) significantly different when we considered the errors in the dic s estimates ( the vertical segments crossing the curve correspond to plus or minus two standard deviations ; the standard deviations were estimated from 15 replicates ) .therefore , a possible rule of thumb is to select the biclustering model associated with a point in the flat part of the dic curve that falls near the elbow of the curve .this is the rule we applied in the simulations and in the application to a real data set , described hereafter . for the gibbs - plaid model .the top row shows the results associated with the data sets generated for genes from the yeast cycle data ( , ) .the middle and bottom rows show the results associated with the data sets generated for genes from the rd data ( , ) .the bars correspond to plus or minus two standard deviations . ]in this section we show the application of our biclustering approach to the data gathered from a study in which 19 biopsy samples of rd were compared to 19 normal retinal samples [ ] .the data are available at ncbi / geo as gse28133 [ ] .the first step in microarray analysis consists in filtering for potentially relevant alterations in expression levels and removing any changes presumably due to the inherent noise of the system [ ] .such filtering aims at eliminating all genes whose expression measurements are very low , and to whom the resulting measures can be associated with random noise at detection - limit . in our case , points out that the data is well described as a bimodal distribution where the first peak is associated with nonexpressed genes ( i.e. , where random noise at detection - limit was captured ) . in order to separate the random noise peak from the second peak of the bimodal distribution , we followed the exact same preprocessing procedure of and applied a threshold of 31.5 expression units to the expression data .only 32% of all probe - set expression values in the data were retained after the application of the threshold .fundamentally , this filtering step follows the belief that a gene which is not expressed in any of the samples studied can not present changes in expression rates in some samples and , therefore , all changes in the measures are due to random noise . therefore , we filtered out the genes / probe - sets with very low or constant expression values along all samples , which allowed us to concentrate on the highly reliable changes in the transcriptome , reduce the overall noise , and accelerate the subsequent calculations .a further gene filtering step was done based on the intuitive belief that if a gene expression standard deviation is too small , then the gene may have little discriminating strength ( e.g. 
, to discriminate between rd patients from healthy control ones ) and will be less likely to be selected .we studied the effects of performing this preprocessing step in a simulation study ( not shown here ) .we noticed that noisy genes not only increased the computational burden , but could also decrease the biclustering performance .after this filtering step , we obtained a data set of 4645 probe - sets with information for 3182 different genes ( multiple probe - sets may correspond to a single gene ) .we fit the gibbs - plaid biclustering model to these data .the dic criterion chose 47 biclusters , a value close to the elbow , whereas the aic criterion chose 11 biclusters , the value of the minimum aic .the size of the biclusters are shown in a series of histograms in figure [ fig : rd : hist ] .the dic biclustering yielded a total of 20 biclusters that contained more than 80% of the rd samples , and 6 biclusters that contained more than 80% of the non - rd samples .in contrast , the aic biclustering yielded only 5 biclusters that contained more than 80% of the rd samples , and 3 biclusters that contained more than 80% of the non - rd samples .of the 20 dic - yielded biclusters with at least 80% of the rd samples , 18 contained 90% of the rd samples , and 15 contained only rd samples ( i.e. , they were purely rd sample biclusters ) .we are particularly interested in the `` _ significant _ '' biclusters because genes involved in these biclusters can be viewed as biomarkers that discriminate between the patients with rd and those without rd . in what follows, we refer to the biclusters that contain at least 80% of the rd samples or at least 80% of the non - rd samples as _ significant biclusters ._ of particular interest are dic biclusters 4 , 41 and 6 , which respectively consist of 95% , 91% and 84% of the rd samples .the degree of biclustering overlap and association among the significant biclusters may be better studied by computing the amount of shared elements ( either probe - sets or samples ) between each pair of biclusters .we computed the relative _ redundancy _ between each pair of biclusters as the average of the two ratios given by the number of shared elements and the corresponding bicluster sizes .as the dic produced a larger number of smaller biclusters , the corresponding results of biclustering showed less overlap ( i.e. , lower relative redundancy ) than the aic results ( see figure [ fig : rd : hist ] ) .a more detailed inspection of the biclustering results ( see the supplementary material [ ] for complete biclustering results ) revealed that those produced using dic contained the most interesting enrichment of go ontologies related to photoreceptor cells ( i.e. 
, go ontologies `` go:0009416 response to light stimulus '' or further specialized branches of the previous go term , such as `` go:007603 phototransduction , visible light '' ) , which were found in dic bicluster 4 and somehow weaker in bicluster 6 ( dic biclusters 4 and 6 have a relative gene redundancy of 51.8% ) .some other interesting biclusters showed either enrichment of go ontology terms for inflammatory response ( bicluster 41 , which consists of 91% rd samples ) or for cell death ( bicluster 8 , which consists of only 54% rd samples ) .both types of responses have been previously described [ ] , but are not related to photoreceptor cells and are therefore less helpful in establishing a better understanding of the fate of photoreceptor cells .the biclusters obtained using aic had globally similar results with respect to enriched go ontologies. however , the terms related to vision and photoreceptor cells showed less dominant enrichment .in addition , this biclustering contains only a few `` significant '' biclusters .moreover , following our simulation results , the large difference in the number of biclusters suggested by aic and dic indicate that the dic results should be more reliable than those obtained from aic in this case .therefore , in the subsequent analysis , we focused on the results obtained using dic and , in particular , on bicluster 4 , which contained all the rd samples and only one non - rd sample .subsequent inspection of the protein interaction map , and , in both types of graphs , some nodes have a high number of connections while the majority has simply one or two connections .] for the proteins identified in dic bicluster 4 ( formed by 332 probe - sets and representing 301 different proteins ) was performed using the string database of documented protein - protein interactions [ ] .this is displayed in figure [ fig : bicluster4:network ] ( see the supplementary material [ ] for a high - resolution image ) .on the basis of 301 proteins , we obtained a fairly small network of 50 directly interconnected proteins .we decided to construct an extended network by adding proteins that allowed us to link two or more of the 301 proteins from bicluster 4 , and for which the expression values were sufficiently high to call them unambiguously expressed genes .again , the threshold of 31.5 units described above and in was used so as to ensure that only genes with an unambiguous presence be considered for addition to the network .this approach has been successfully applied to identify proteins that are part of regulatory cycles and which are themselves not regulated at the level of transcription , but rather by either phosphorylation [ ] or proteins in the same pathway that are more weakly regulated . using this approach, we constructed an extended network of 50 proteins from the initial network and 68 additional proteins from bicluster 4 , which could then be connected to the network because of the addition of 192 novel proteins that were not present in bicluster 4 ( figure [ fig : bicluster4:network ] ) .are surrounded ( highlighted ) by black rectangles . 
] in the extended network , the proteins identified in bicluster 4 are shown as large nodes , whereas the added proteins are shown as small nodes .all nodes ( proteins ) are divided into three regions that correspond to early , middle and late latency of rd .the regions are colored according to the change of gene expression values ( fold - change ) relative to the control group .the three respective fold - change values are displayed in a blue to red color scale ( saturated blue for down - regulation stronger than 6-fold ; saturated red for up - regulation stronger than 6-fold ) .it is important to note that the majority of proteins added to construct this extended network have node colors that are similar to the color of their neighbors originally identified in bicluster 4 .this confirms that adding these genes conserves well the overall structure of up- or down - regulated groups of proteins .several go ontology features are displayed in figure [ fig : bicluster4:network ] according to the following shapes of the nodes : triangles display genes with `` go:0007601 visual perception , '' parallelograms , genes with `` go:0008219 cell death , '' and rectangles , genes with `` go:0006954 inflammatory response . ''no cases of multiple annotations combining any of these three terms were observed among the 310 proteins that form this network .genes annotated with other functions are shown as circles .proteins involved in cell death and inflammation were key results in the traditional analysis using -tests [ ] .in contrast , proteins with these annotations are fairly rare in dic bicluster 4 , and are found in separate substructures of the enriched network when compared to the down - regulated genes annotated as being involved in visual perception .in fact , most other subnetworks based on dic bicluster 4 are somehow related to signaling , and thus reflect substantial biological and molecular activity in specimens of rd .one may note other relevant subnetworks , such as the one around rhou and arhgap30 ( framed by rectangles at the top left part of figure [ fig : bicluster4:network ] ) , which is highly enriched in gtpases , which in turn are found at the very end of signaling pathways ; the subnetwork around mx1 and rnaf135 ( framed by rectangles at the bottom left part of figure [ fig : bicluster4:network ] ) , which is enriched in up - regulated antiviral activity ; or the subnetwork around ppara , nr4a2 and nr2c1 ( framed by rectangles at the bottom right of figure [ fig : bicluster4:network ] ) , which is enriched in mostly down - regulated nuclear receptors .the surprisingly strong antiviral activity subnetwork mentioned above may be involved in the general acute inflammatory response ; however , it has not been noted in the literature .alternatively , these findings may open novel perspectives for further detailed studies to investigate the potential participation of viral infections as risk factors for rd or as factors related to a worse prognosis at the onset of rd .we have proposed a model for biclustering that incorporates biological knowledge from the gene ontology ( go ) project and experimental conditions ( if available ) .we use this knowledge to specify prior distributions that account for the dependency structure between genes and between conditions .our goal was to determine whether using prior information on the genes and the conditions would improve the biological significance of the biclusters obtained from this method .we incorporated this prior information by efficiently 
modeling mutual interactions between genes ( or conditions ) with discrete gibbs fields . the pairwise interaction between the genesis given by entropy similarities estimated from go .these are embedded into a relational graph with nodes that correspond to genes and edges to similarities .the graph is kept sparse by filtering out gene interactions ( edges ) that arise from genes that do not share much common biological functionality as measured by go . in some cases , the conditionsmay also be compared by building a notion of similarity between them , for example , in the case of gene expression time courses .these similarities can also be represented by a corresponding relational graph . to our knowledge, the introduction of markov models and gibbs fields in the context of biclustering is new .however , this has already been attempted in the fields of clustering and regression . in order to estimate the biclusters, we adopted a hybrid procedure that mixes the metropolis hastings sampler with a variant of the wang landau algorithm . to efficiently sample the labels through a block gibbs sampling, we used an algorithm based on the swendsen wang algorithm .experiments on simulated data showed that our model is an improvement over other algorithms .they also showed that criteria based on the conditional dic and aic may be used to guide the choice of the number of biclusters .the application of gibbs - plaid biclustering to a data set created from rd research brings several advantages and novel insights . in comparison to previous efforts, we noted that biclustering is much more adaptive to biological settings , which are characterized by numerous proteins that have multiple functions and tissues or cells of interest that make use of multiple biological processes at the same time .a detailed inspection of the biclustering results allowed us to identify biclusters that are associated with all major known groups of cellular and molecular events . adding a protein - network component to these results revealed several previously unknown aspects of rd that lead to the generation of new hypotheses regarding : ( i ) proteins directly involved in subsequent changes in photoreceptor cells , and ( ii ) subnetworks of proteins potentially linked to these events .the authors are grateful to leeann chastain at md anderson cancer center for editing assistance .
we propose and develop a bayesian plaid model for biclustering that accounts for the prior dependency between genes ( and/or conditions ) through a stochastic relational graph . this work is motivated by the need for improved understanding of the molecular mechanisms of human diseases for which effective drugs are lacking , and based on the extensive raw data available through gene expression profiling . we model the prior dependency information from biological knowledge gathered from gene ontologies . our model , the gibbs - plaid model , assumes that the relational graph is governed by a gibbs random field . to estimate the posterior distribution of the bicluster membership labels , we develop a stochastic algorithm that is partly based on the wang landau flat - histogram algorithm . we apply our method to a gene expression database created from the study of retinal detachment , with the aim of confirming known or finding novel subnetworks of proteins associated with this disorder .
a highest - density region ( hdr ) for a measurement of interest is a region where the underlying density function exceeds some nominal threshold . given a random sample from that density , hdr estimation typically involves determination of regions where an estimated density is high .kernel density estimation is the most common approach , but its performance is heavily dependent on the choice of the bandwidth parameter .automatic selection of a good bandwidth for hdr estimation is the overarching goal of this article .figure [ fig : hdrvise ] illustrates the bandwidth selection issue for hdr estimation .the left panel shows five kernel density estimates based on random samples of size 1000 from the normal mixture density [ density 4 of marron and wand ( ) ] . in each casethe bandwidth is chosen to minimize the integrated squared error ( ise ) . in the right panelthe same random samples are used , but , instead , the bandwidths are chosen to minimize an error appropriate for estimation of the 20% hdr ( defined formally in section [ sec : asyrisk ] ) .this region is shown as a thick horizontal line at the base of the plot .it is clear from figure [ fig : hdrvise ] that optimality for hdr estimation is quite different from ise - optimality .low ise requires that the two curves be close to each other over the whole real line .however , good estimation of the 20% hdr only requires that the 20% hdrs of the kernel density estimates are close to the true region .in particular , the sharp mode of the underlying density has no bearing upon the hdr and there is no need to estimate it well .for this density it is apparent that a bandwidth considerably larger than ise - optimal bandwidth is best for estimation of the 20% hdr . in this articlewe study an asymptotic risk associated with kernel - based hdr estimation and use our theory to develop a plug - in type bandwidth selector .attractive asymptotic properties of our bandwidth selector are established and good performance is illustrated on simulated data . a self - contained function for use in the ` r ` environment [ r development core team ( ) ] is made available on the internet .the hdr estimation problem has an established literature .contributions include hartigan ( ) , mller and sawitzki ( ) , polonik ( ) , hyndman ( ) , tsybakov ( ) , ballo , cuesta - albertos and cuevas ( ) , ballo ( ) , cadre ( ) , jang ( ) , rigollet and vert ( ) and mason and polonik ( ) . mason and polonik ( ) provide a thorough literature review for the problem .alternative terminology includes estimation of the _ density contours _, _ density level sets _ and _ excess mass regions_. this literature is , however , mainly concerned with theoretical results unconnected with the bandwidth selection problem . 
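for a given bandwidth, a kernel-based hdr estimate of the kind compared in figure [ fig : hdrvise ] can be computed along the following lines. taking the threshold as the tau-quantile of the estimated density evaluated at the sample points is one common plug-in convention (the formal plug-in estimator is defined in the next section); the sample, the bandwidth and the evaluation grid in this python sketch are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, size=1000)        # illustrative sample
h, tau = 0.3, 0.2                          # bandwidth and HDR level (assumed)

def kde(t, x, h):
    """Gaussian kernel density estimate evaluated at the points t."""
    z = (t[:, None] - x[None, :]) / h
    return np.exp(-0.5 * z ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

# plug-in threshold: tau-quantile of the density values at the data points
f_hat_at_data = kde(x, x, h)
f_tau_hat = np.quantile(f_hat_at_data, tau)

# HDR estimate on a grid: the set where the estimated density exceeds the threshold
grid = np.linspace(x.min() - 1, x.max() + 1, 2001)
in_hdr = kde(grid, x, h) >= f_tau_hat

# report the HDR as a union of intervals read off the grid
edges = np.flatnonzero(np.diff(in_hdr.astype(int)))
print("approx. HDR interval endpoints:", grid[edges])
```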
jang ( ) is an applied paper on the use of hdr estimation for astronomical sky surveys .however , the bandwidths used there are chosen via classical ise - based plug - in strategies .the present paper is , to our knowledge , the first to derive theory and bandwidth selection rules that are specifically tailored to the hdr estimation problem .while our proposed practical bandwidth selector relies on asymptotic approximations , its development comes at a time when sample sizes in applications that benefit from smoothing techniques are becoming very large .the area of application that led to this research , flow cytometry , typically has sample sizes in the hundreds of thousands .the astronomical application in jang ( ) involves sample sizes in the tens of thousands .another hdr application is approximation of the highest posterior density region of a parameter in a bayesian analysis , where only a sample from that density is available . in this situation , the sample ,most typically obtained using markov chain monte carlo methods , can arbitrarily large in size .section [ sec : asyrisk ] presents an approximation to the hdr asymptotic risk .numerical studies support its use for bandwidth selection . in section[ sec : bwsel ] we describe plug - in strategies for bandwidth selection .asymptotic performance results are established and a simulation study demonstrates practical efficacy .we conclude with an example on daily temperature maxima in melbourne , australia .proofs are deferred to an .let be a probability density function on the real line . for ,define we call the % _ highest - density region _ of [ cf .hyndman ( ) ] .if is a sequence of independent random variables with density , the kernel estimator of based on is where satisfies , and is called a _ kernel _ and is called the _bandwidth_. let denote the plug - in estimator of , so that the corresponding plug - in estimator of is then . given two borel subsets and of , we define their proximity through a measure on their symmetric difference . the particular measure we consider is given by for all borel subsets of .the error is then then the probability of an observation from lying in precisely one of and .compared with lebesgue measure , puts more weight on regions where the data will tend to be denser .it also has the advantage of admitting a simple monte carlo approximation .this is important in higher - dimensional settings where exact computation of is difficult .in theorem [ thm : mainthm ] , we derive a uniform - in - bandwidth asymptotic expansion for the _ risk _ , which can facilitate a theoretical , optimal choice of bandwidth ( cf . corollary [ cor : opth ] ) .this in turn motivates practical bandwidth selection algorithms whose performance is studied in theorems [ thm : hath ] and [ thm : hathho ] .we will make use of the following conditions on the underlying density , bandwidth sequence and kernel : 1 . is uniformly continuous on .there exist finitely many points such that for , and moreover there exists such that is twice continuously differentiable in ] , where and the nature of this result is somewhat different from the results in the existing literature which have tended to focus ( sometimes in more general settings ) on the order in probability or almost surely of or related measures [ e.g. , ballo , cuesta - albertos and cuevas ( ) , ballo ( ) ] .more recent works have derived results on the limiting behavior of suitably scaled and/or centered versions of [ e.g. 
, cadre ( ) , mason and polonik ( ) ] .rigollet and vert ( ) provide a finite sample upper bound for the risk , uniformly over certain hlder classes , with an unspecified constant in the bound . while these theoretical results are certainly of considerable interest , our aim in providing the asymptotic expansion in theorem [ thm : mainthm ] is to facilitate practical bandwidth selection algorithms for this problem see section [ sec : bwsel ] . in the course of the proof of theorem [ thm : mainthm ], it is shown that so that each is positive .moreover and are nonnegative , and are positive for at least one . indeed , and are certainly positive whenever , where the weights sum to 1 .however , this condition on is far from necessary for and to be positive .it is easily seen from theorem [ thm : mainthm ] that for any sequence of bandwidths satisfying ( a2 ) , if is not bounded away from zero and infinity then along a subsequence . on the other hand ,if is bounded away from zero and infinity , then is bounded .notice that all such sequences are permitted by the condition ( a2 ) . focusing our attention on bandwidth sequences of order and substituting , we have .\ ] ] writing this limit as , we see that is continuous on with as and as , so attains its minimum .if is such that and are positive , then it can be shown ( cf .the proof of corollary [ cor : opth ] below ) , that has a unique minimum .this unique minimizer represents the asymptotically optimal bandwidth for estimating the risk in a small neighborhood of .although we typically expect the minimum of to be unique , the complicated nature of the function and the coefficients , and make it difficult to prove this assertion without additional conditions .the following corollary gives the desired result in one restricted case ; however , we anticipate that the result in fact holds much more widely .[ cor : opth ] assume and .assume further that in we have and the underlying density is symmetric about some point on the real line .then there exists a unique , depending on and but not , such that any sequence of bandwidths that minimizes satisfies as .the additional hypotheses on imply that , and do not depend on , and in fact in the presence of ( a1 ) and ( a3 ) , the conclusion of the corollary also holds under this ( weaker ) condition , as can be seen from the proof .theorem [ thm : mainthm ] yields the asymptotic risk approximation \\[-8pt ] & & \hspace*{18.8pt } { } + b_{3,j } h^2 \{2\phi(b_{2,j } n^{1/2}h^{5/2 } ) - 1\ } \biggr].\nonumber\end{aligned}\ ] ] in section [ sec : bwsel ] we use the right - hand side of ( [ eq : asyrisksim ] ) to develop plug - in bandwidth selection strategies .however , it is prudent to first assess the quality of this approximation to the risk .we now do this through some numerical examples . for a given , and ,the risk is very difficult to obtain exactly .instead , we work with a monte carlo approximation , } \triangle r_\tau \bigr),\ ] ] where },\ldots,\widehat{r}_{h,\tau } ^{[m]} ] . to get the algorithm started we also require _ normal scale _ estimates of , based on the assumption that is a density .normal scale estimates of take the form throughout we take , the standard normal kernel .the full algorithm is : 1 .the inputs are the random sample and parameter specifying the required hdr .2 . let be a robust estimate of scale .( the interquartile range for the standard normal density is approximately , so this factor ensures approximate unbiasedness for normally distributed data . 
)estimate , and using normal scale estimates .explicit expressions for these are , and .4 . estimate , and using kernel estimates , and where , and . 5 .estimate , and using kernel estimates , and where ^{1/7} ] and ^{1/11} ] , ^{1/7} ] .obtain pilot of estimates of , and via gaussian kernel estimates based on these bandwidths : , and .use to obtain pilot estimates of , and .substitute the estimates from steps [ step6 ] and [ step7 ] into the expressions for , and to obtain estimates , and .the selected bandwidth for gaussian kernel estimation of the hdr is where , where was defined in ( [ eq : hatcopt ] ) .binned approximations to [ cf .gonzlez - manteiga , sanchz - sellero and wand ( ) ] are strongly recommended to allow fast processing of large samples .an ` r ` function ` hdrbw ( ) ` that implements the above algorithm has been included in the package ` hdrcde ` [ hyndman ( ) ] which supports hdr estimation .we ran a simulation study in which the performance of was compared with an established ise - based selector : least squares cross validation [ rudemo ( ) , bowman ( ) ] which we denote by .the number of replications in the simulation study was 250 .the hdr estimation error was used throughout the study . figures [ fig : bwsimd4n1000 ] ( ) and [ fig : bwsimd4n100000 ] ( ) summarise the results for the situation where the true is the normal mixture density from section [ sec : intro ] and figure [ fig : hdrvise ] .the improvement gained from using the hdr - tailored bandwidth selector is apparent from the graphics , especially for the lower values of .wilcoxon tests applied to the error ratios showed statistically significant improvement of at the 5% level for and when . for , performed better for , while did better for .this latter result is not a big surprise since good estimation of requires good estimation of the finger - shaped modal region and this , in turn , requires good ise performance . and for and and 250 samples of size 1000 generated from density 4 of marron and wand ( ) .the upper panels are scatterplots of the errors for on the vertical axes and on the horizontal axes .the lower panels are kernel density estimates of . ] and for and and 250 samples of size 100000 generated from density 4 of marron and wand ( ) .the upper panels are scatterplots of the errors for on the vertical axes and on the horizontal axes .the lower panels are kernel density estimates of . ]we performed similar simulation comparisons for the remaining densities 110 of marron and wand ( ) .for the performance of was better than for densities 15 ; whereas did better for densities 610 .this suggests that the asymptotics on which relies have not `` kicked in '' at for these more intricate density functions .we suspect that more sophisticated pilot estimation might improve matters for hdr - based bandwidth selection for lower sample sizes .the simulations show superior performance of , especially where it is the `` winner '' for 9 out of the 10 density functions .the overarching conclusion is that for common density estimation situations is better than .we conclude with an application to data on daily maximum temperatures in melbourne , australia , for the years 19811990 .these data were used in hyndman ( ) to illustrate hdr principles .we revisit them armed with the automatic hdr estimation technology described in section [ sec : pracalg ] . 
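the error criterion used throughout the comparison above, namely the probability under the true density of falling in exactly one of the estimated and true regions, admits the simple monte carlo approximation mentioned earlier. the sketch below assumes that both regions are available as indicator functions and that fresh draws from the true density can be generated, which is the situation in the simulation study.

```python
import numpy as np

rng = np.random.default_rng(5)

def mc_hdr_error(in_est, in_true, sample_from_f, n_mc=100_000):
    """Monte Carlo estimate of mu(R_est symmetric-difference R_true).

    in_est, in_true : vectorised indicator functions of the two regions
    sample_from_f   : function drawing n_mc fresh observations from f
    """
    z = sample_from_f(n_mc)
    # an observation contributes iff it lies in exactly one of the two regions
    return np.mean(in_est(z) != in_true(z))

# toy example: true region [-1, 1] under a standard normal, estimate slightly off
err = mc_hdr_error(
    in_est=lambda z: np.abs(z - 0.1) <= 1.05,
    in_true=lambda z: np.abs(z) <= 1.0,
    sample_from_f=lambda n: rng.normal(size=n),
)
print(round(err, 4))
```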
of interestare the conditional densities of tomorrow s temperature _ given _today s temperature is within a fixed interval .the intervals for the `` today s temperature '' values are , in degrees celsius , figure [ fig : melbmaxhdr ] shows the kernel . ]estimates of the 20% , 50% and 80% hdrs with bandwidths chosen using the rule as detailed in section [ sec : pracalg ] .some interesting bimodality in `` tomorrow s temperature '' is apparent when conditioned on today s temperature being in the 3040 degrees celsius range .throughout the proof , it is convenient to write and and adopt the convention that and for all . observe that so that the main idea of the proof is that the dominant contribution to comes from a union of small intervals , one near each , where is close to . in each of these intervals, we can represent by a sample mean of independent and identically distributed random variables and a small additional remainder term , and hence apply a normal approximation to deduce the result .for clarity of exposition , we now split the proof into several steps : [ step1 ] as a preliminary step , let be another uniformly continuous density , and let .writing for the supremum norm on the real line , we show that there exists such that for all sufficiently small , we have whenever . to see this ,let and choose .the inverse function theorem [ burkill and burkill ( ) , theorem 7.51 ] gives that for with sufficiently small , we can write \ ] ] with as .it follows that when is sufficiently small , and , we have thus .a very similar argument yields the upper bound , and this completes step [ step1 ] .now , for small enough that has two continuous derivatives in ] .indeed , we make a similar claim for every error term in each expression below where the bandwidth appears , but we do not repeat this assertion in future occurrences . as in step [ step1 ] , observe that under ( a1 ) , if is sufficiently small , then there exists such that for ] . by reducing if necessary , for ] .now , since is uniformly continuous under ( a1 ) , as .the inequality ( [ eq : basicineq ] ) , together with the observation ( [ eq : bias ] ) on the bias of , yields that for sufficiently large , for some . here, the final inequality is an application of corollary 2.2 of gin and guillou ( ) ( a consequence of talagrand s inequality ) to the vapnik cervonenkis class of functions [ cf .dudley ( ) , theorems 4.2.1 and 4.2.4 ] .equation ( [ eq : nonmargin ] ) follows immediately , and this completes the proof of step [ step2 ] .[ step3 ] we show that ( [ eq : nonmargin ] ) continues to hold if is replaced by a sequence converging to zero , provided that slowly enough that and . in order to complete the proof of step [ step3 ], it suffices to show that there exists such that we may assume is small enough that has two continuous derivatives in .this enables a straightforward modification to the argument in ( [ eq : bias ] ) using a taylor expansion , leading to now there exists a constant small enough that if we take , then we have when .moreover , , so that for sufficiently large , the same argument as in step [ step2 ] yields this completes the proof of step [ step3 ] .[ step4 ] we seek asymptotic expansions for and . 
to this end ,for uniformly continuous densities that are twice continuously differentiable in , and for , we define the reason for making this definition is that by examining the behavior of under small changes of its arguments from , we will be able to study the difference in ( [ eq : probbound ] ) below .first , for sufficiently small , as .a very similar argument shows that the error term is of the same order as .observe that when and are sufficiently small , has a nonzero derivative in a neighborhood of each .it follows that for sufficiently small values of , we can write ,\ ] ] where .moreover , provided that and as , we have that as .thus we can write as . assuming that and that the above conditions on hold , we have from ( [ eq : partial1 ] ) and ( [ eq : partial2 ] ) that \\[-8pt ] & = & - \{\tilde{f}_{\tau } - f_{\tau } \ } f_{\tau } \sum_{j=1}^{2r } \frac{1}{|f'(x_j)| } + f_{\tau } \sum_{j=1}^{2r } \frac{g(x_j)}{|f'(x_j)| } + \sum_{j=1}^r \int_{x_{2j-1}}^{x_{2j } } g(x ) \,dx \nonumber\\ & & { } + o \biggl\ { \biggl(\sum_{j=1}^{2r } |g(x_j)| \biggr)^2 + \|g'\|_{i_\delta,\infty}\sum_{j=1}^{2r } |g(x_j)| \biggr\}\nonumber\end{aligned}\ ] ] as .we want to apply ( [ eq : puttogether ] ) with , so that . in order to do this , we must recall observation ( [ eq : bias ] ) on the bias of , and the fact that from an application of corollary 2.2 of gin and guillou ( ) .it follows that .similarly , , and a further application of corollary 2.2 of gin and guillou ( ) gives .thus .this in turn implies that with probability one , for sufficiently large , is the unique solution to , or equivalently , as claimed in section [ sec : asyrisk ] .it remains to note that and it follows that we can now substitute in ( [ eq : puttogether ] ) to deduce that equation ( [ eq : probbound ] ) shows that we can write the difference as a sample mean of independent and identically distributed random variables and a small additional remainder term .notice from the bandwidth condition on in ( a2 ) that .next , observe that where is given in ( [ eq : d1d2d3 ] ) .thus , in order to prove that it suffices by ( [ eq : puttogether ] ) and step [ step1 ] to show that for any , but this follows by cauchy schwarz , because step [ step1 ] may be used to show that , and also we therefore deduce ( [ eq : expexpansion ] ) . in a very similar way, we can also use ( [ eq : puttogether ] ) and the fact that where is given in ( [ eq : d1d2d3 ] ) , to deduce that [ step5 ] we can use the results of step [ step4 ] to shrink the region of interest still further . from the result of step [ step3 ]we can write for brevity , we write . 
now , for each , we see that for sufficiently large , is a strictly monotone function of ] .we claim that \\[-8pt ] & & \hspace*{26.6pt } { } + \int_{i_{2j}^n } \bigl|\mathbb{p}\bigl(\widehat{f}_h(x_{2j}^t ) <\widehat{f}_{h,\tau } \bigr ) - { \mathbh}{1}_{\{t \geq0\}}\bigr| \,dt \biggr\ } \rightarrow0\nonumber\end{aligned}\ ] ] as .now there exists such that for all and sufficiently large , we have .thus there exists such that for all sufficiently large , & & \qquad\leq\mathbb{p } \biggl ( \biggl|\frac{\widehat{f}_h(x_{2j-1}^t ) - \mathbb{e}\{\widehat{f}_h(x_{2j-1}^t)\}}{\operatorname{var}^{1/2 } \{\widehat{f}_h(x_{2j-1}^t)\ } } \biggr| \geqc_4 t_n \biggr)\\[-0.8pt ] & & \qquad\quad { } + \mathbb{p } \biggl ( \biggl|\frac{\widehat{f}_{h,\tau } - \mathbb{e}(\widehat{f}_{h,\tau } ) } { \operatorname{var}^{1/2 } ( \widehat{f}_{h,\tau } ) } \biggr| \geq c_4t_n \biggr ) \rightarrow0,\end{aligned}\ ] ] uniformly for . since also uniformly for , we deduce ( [ eq : ijn ] ) .[ step6 ] we also require an asymptotic expansion for , for ] , where is given at ( [ eq : d1d2d3 ] ) .this follows from the expansion ( [ eq : puttogether ] ) and the fact that provided diverges sufficiently slowly , & & \qquad= \frac{1}{h } \int_{-\infty}^\infty k(z)k \biggl(\frac{(nh)^{-1/2}t + hz}{h } \biggr)f(x_j - hz ) \,dz \\[-0.8pt ] & & \qquad= \frac{1}{h}f_{\tau } r(k ) + o(h^{-1}),\end{aligned}\ ] ] uniformly for ] , we choose to diverge to infinity so slowly that : * , uniformly for ] ; * , uniformly for ] . herewe have used the berry esseen inequality to reach the penultimate line .a very similar argument yields a lower bound of the same order .the proof of step [ step7 ] , and hence the proof of theorem [ thm : mainthm ] , is now completed by the observation that .\end{aligned}\ ] ] we may restrict attention to the case where is bounded away from zero and infinity . the important point to noteis that under the hypotheses of the corollary , , and do not depend on , so we write them as , and , respectively . by making the substitution ,there exist positive constants and such that , where . since is continuous with as and , it attains its minimum in . to show this minimum is unique, it suffices to show that has a unique zero in , where now we have there are therefore two cases to consider : if , then is strictly convex , so since and as , we see that has a unique zero in . on the other hand , if , then there exists such that for and for .but if then , for sufficiently small , so from , it again follows that has a unique zero .write for the unique minimum of in , and let .we conclude that any optimal bandwidth sequence , in the sense of minimizing , must satisfy as .we require a bound on for . to this end , let be another density satisfying the same conditions as . from step [ step4 ] of the proof of theorem [ thm : mainthm ] , we see that for sufficiently small values of , there exist precisely values such that .moreover , provided as , we have as . substituting , so that and , we have .it follows that , the crucial fact being that .similarly , and for .thus , and .we deduce that for any , we have , uniformly for ] . 
under the conditions of the theorem , we may integrate by parts twice and apply a taylor expansion to obtain the bias expansion . this expression for the bias can be combined with the standard fact that and the bound on from the proof of theorem [ thm : hath ] to yield . similar computations give . the rest of the proof mirrors the proof of theorem [ thm : hath ] . the authors are grateful to tarn duong , inge koch , steve marron and richard nickl for their comments on aspects of this research , and to the organizers of a workshop on statistical research held at the keystone resort , colorado , usa , on 4th to 8th june , 2007 .
we study kernel estimation of highest - density regions ( hdr ) . our main contributions are twofold . first , we derive a uniform - in - bandwidth asymptotic approximation to a risk that is appropriate for hdr estimation . second , this approximation is used to derive a bandwidth selection rule for hdr estimation possessing attractive asymptotic properties . we also present the results of numerical studies that illustrate the benefits of our theory and methodology .
recent rigorous observations have provided an unprecedented accuracy that has to be taken into account in any cosmological modeling - .nowadays , we have enough discriminating data to investigate the practicability of a proposed inflationary scenario precisely .it is widely believed that the recent planck data favors the simplest inflationary models consisting of a single field slow - roll .although some inflationary models always remain in the valid domain , many of them have been excluded due to incorrect predictions particularly in the density perturbation spectral index on the cmb as well as the power of primordial gravitational waves .this decisive information is at our disposal now , thanks to several experiments and decades of rehearsing on the issue .simultaneously , we are witnessing some remarkable experiments in particle physics and quantum field theory .no one can doubt that cosmology and quantum field theory are tightly bound and any achievement in one of them must be considered as a clue for the other .there are many attempts to find a qft motivation as a decisive sign of an acceptable inflationary scenario - and conversely , the capability of a qft paradigm to include the inflation , is supposed as a supportive sign for the paradigm . on the other hand , after endorsement of the higgs boson existence , the last predicted particle of the standard model , there are more attention on the inflationary capability of symmetry breaking scenario ,- .the measured higgs mass in the lhc also raised another problem : the higgs mass and the top quark mass together increase the chance of being in a metastable vacuum for the electroweak theory - .topologically , a discrete vacuum means domain wall production if the symmetry breaking is * perfect * - .we have known after zeldovich s 1975 paper that domain walls are drastically in contradiction with the observed cosmic mean energy density unless the domain wall energy density is low enough .such low energy scales never provide appropriate outline for a successful inflation although the cmb residual dipole anisotropy might be explained using them .the domain wall problem also appears when one tries to solve the strong cp problem by means of introducing a new axion field .indeed , to explain invisibility of axions due to their weak coupling to matter one could hypothesize more quark spices than the usual standard model quarks , or assume two higgs doublets .the latter case is more appealing in quantum field theory since it offers the modest possible extension to the standard model .moreover , there are yet unconfirmed reports of observing the footprint of higgs - like particles in atlas and cms , which put the multi - higgs theories under the spotlight . 
assuming a two doublet higgs scenario , one inevitably encounters even number of domain walls separated by strings .the number of appearing domain walls are two times the number of the generators .if the energy scale of domain walls is high enough such that domain wall production precedes the inflation , then one has an explanation for not observing such walls , like what happens in magnetic monopoles .domain walls also could leave no significant remnant in the later stages if they disappear soon enough .there are some known mechanisms for destructing a domain wall which could operate alone or in combination with each other .the most famous one is assuming a metastable domain wall which automatically tends to ruin it .of course , the decay time could be very long , for example the decay time of electroweak metastable vacuum , in the case of existence , is of the order of the age of the universe .potentially , unstable domain walls are also among the best candidates for justification of baryogenesis . amidthe other options one can mention destabilizing a domain wall by another defect collision or embarking the symmetron mechanism .there is a very interesting idea that mini black holes could trigger the electroweak vacuum decay in a similar way . on the other hand, one can generalize the natural inflation to include a dynamical modulus in addition to the proposed angular field dynamics .this generalization , respectively , promotes for example to the double fields potential in which the symmetry is broken to discrete symmetry . in this regard ,it worth to have an exhaustive analysis of the potential with explicit broken symmetry , to know both the inflationary behavior and possible domain wall properties .here , we try a double field potential with two discrete vacua as a toy model of domain wall formation and inflation , trying to avoid rendering numerical results before having an analytical picture . by this, we always keep the track of the model parameters employing appropriate approximations .the assumed potential is very close to the original higgs potential except that the continuous symmetry now is broken into a symmetry with two discrete vacuua to produce domain walls . 
to get more familiar with the domain wall properties which potentially could be produced by the proposed model , we first calculate its energy scale for a simple spacial configuration .results show that entering more parameters into the model and making it more sophisticated provides us with more freedom to control the wall energy scale without decreasing the energy scale of the potential at the origin significantly , in contrast to the zeldovich s proposal .next , we discuss the most important possible scenarios in which the potential could accomplish inflation , starting with a complete analytic review of the simple symmetry breaking case according to the recent data .we stick to the most prevailing method of diagnostic in which the slow - roll parameters play the basic role in the analysis .we also overcome the difficulties of dealing with a double field inflation - by treating the potential as an equivalent single field potential .it soon becomes clear that almost all the scenarios are compatible with the famous hill - top new inflationary models - .such models of course are categorized among the super - planckian models with no attainable motivation from known physics , but like other new inflationary models some particular characteristics make them noticeable : they predict very small primordial gravitational waves ,- , much less than what one may hope for detection in a conceivable future . the other point about hill - top models is that there are some techniques to arrange them to work in the supergravity scope - .the outline of the paper is as follows : in the next section , we have a comprehensive review on the previous studies which provide motivations for our survey on descending the symmetry to the .we focus on fundamental theories skipping applied physics and condensed matter .it is interesting that these different theories are related due to a common characteristic of producing domain walls from an original symmetry .then in the third section , we analyze the domain wall characteristics of the proposed potential , where we find a very close approximation as a solution for the domain walls .this approximate solution satisfies the pde s and static virial theorem simultaneously .therefore , we invoke this approximation to deal with the other important domain wall characteristics , including the surface energy density and the wall thickness . afterwards , in the forth section , we propose the potential as the source of inflation , starting from a novel analysis on the ordinary symmetry breaking inflation .we try to have a complete survey on all possible scenarios of a successful inflation .we conclude that the saddle point inflation could reduce the scale of the required energy for inflation . 
this reduction being not such effective to avoid the theory from becoming a super - planckian .the last section is devoted to the conclusion which contains the most important results of the paper .although symmetry could not assumed as a formal part of the standard model of particles , it appears frequently for certain reasons .one of these arenas is extending the higgs sector .in fact , in spite of unnecessity of extra bosons in the standard model , many pioneering theories like supersymmetry and grand unified theories , demand for extending the higgs sector .the simplest and the most well - known extended models demand for two higgs doublet models ( 2hdms ) .2hdms also provide one of the best explanations for axion invisibility .axions are nambo - goldstone bosons of pecci - quinn spontaneous symmetry breaking which originally was invented to solve the strong cp conservation problem .theoretically , spontaneous breaking of the leaves domain wall(s ) attaching to a string , . assuming a multi - higgs model to deal with the recent unconfirmed record of observing a higgs - like bump in lhc s last run , reinforces the existence of such extensions of the standard model higgs sector .if the higgs cousins contain interaction with standard model fermions , which is a very natural postulate , then one can assume symmetry and the appropriate fermionic eigen value to avoid higgs - mediated flavor changing neutral current ( fcnc ) .recently , a survey has proposed the cosmological consequences of explicit breaking of , considering the potential to be where is a complex field defined as . in their analysis ,the responsible term for breaking the symmetry demonstrates only the phase field dependence . in our surveywe let the explicit symmetry breaking term to have modulus dependence , too . we therefore assume also for simplicity, we focus on case since it suffices to inspect the topological domain wall behavior . we are thus led to it is worth mentioning that the above potential form is also a conformally renormalizable extension of natural inflation .natural inflation , was originally , based on the dynamics of the phase of a complex field whose modulus is stabilized severely .then the numbo - goldstone boson becomes massive thanks to the instanton effect . in the qcd case ,instantons break the symmetry down to a discrete subgroup to produce the axion - inflaton potential the key prediction of the above form of inflation is the strong gravitational wave remnant .in fact , strong enough to be detected in the recent planck project .lacking such approval from observation , one could suppose more completion to the original natural model .bestowing modulus dependence on the potential is supposed to be one of the first choices .this choice has also been considered recently in by introducing the above potential coincides with ksvz modification of pecci - quinn theory , in which . in order to promote the above potential to contain two domain walls , which is more desirable in our study, we suggest then assuming , one obtains which is just ( [ ver1 ] ) , with a renaming of the parameters . as it will be introduced later in the paper , our choice of variables for dealing with ( [ ver1 ] ) or ( [ ver2 ] ) is where and from a completely different point of view , in the inflationary paradigm of cosmology , there is a category of potentials , dubbed new inflation , in which the slow roll starts from nearly flat maximum of the potential where the field(s ) is(are ) located near the origin . 
in other words , the slow roll happens to be outward from the origin .these category of inflationary models , survived the tests though they generally suffer super - planckian parameters . among new inflationary theories one could mention inverted hybrid inflation , which was an attempt to merge new inflation with the hybrid inflation .the potential has the following form now if one redefines the parameters as , , and then ( [ inverted ] ) reduces to which is just the cartesian form of ( [ ver3 ] ) .of course in order to restore the tachyonic instability of the field , an additional constraint is needed , but in our study the latter condition wo nt be necessary .in this sectin , we present a brief motivation for explicit symmetry breaking from susy . to avoid lengthening the article , we avoid any introductory entrance to supersymmetry .the reader may consult many comprehensive textbooks on the subject .supersymmetry , if exists at all , must be a broken symmetry .many attempts have been made to introduce a viable explicit or spontaneous mechanism to explain the supersymmetry breaking .here , we consider a d - term ssb by adding an additional gauge symmetry ( fayet - iliopoulos mechanism ) and derive the resultant potential .we will see that the resulting potential , for the case of two charged scalar fields , demonstrates an explicitly broken behavior , when is exhibited in norm - space of the fields .moreover , since lhc has been obtained no approval evidence for minimal supersymmetric standard model ( mssm ) , considering additional fields to the supersymmetry sounds as a next logical step .one of the elegant properties of supersymmetry is the automatic appearance of a scalar potential through f and d auxiliary fields which are originally invented to balance the off - shell bosonic and fermionic degrees of freedom : supersymmetry requires the vacuume expectation value ( vev ) to vanish . in thisregard , non - vanishing ( positive ) vev is considered as a sign for ssb . in other words , if both d - term and f - term super potential contributions ca nt be zero coincidently , then the supersymmetry is broken .one way to do this task is assuming an extra symmetry and let the potential to involve a linear d - term besides the ordinary terms : where represents the fields that acquire charge under new symmetry .the first term , known as fayet - iliopoulos ( fi ) term , satisfies both supersymmetry and gauge symmetry . then supposing the charged scalar field to be massive , one obtains obviously , the above potential has a non - zero minimum which demonstrates breaking of supersymmetry .one has to note that despite broken supersymmetry , the gauge remains unbroken if for all fields .if for all s happen to be massive , then they must appear in pairs with apposite charges to respect the gauge symmetry .of course , there is no obligation for charged scalar fields to be massive since they can be considered massless without losing any bosonic degrees of freedom . in order to get closer to the model considered in this paper , we consider two massive charged scalar fields and and recast and while the charges are normalized to .we obtain let us redefine the potential in polar coordinates by setting and , then we have taking derivative with respect to angular coordinate yields \ ] ] then two groups of solutions will be gained for . 
and .the former is always available but the latter needs an elaborated fine tuning of parameters , particularly , when the field is located near the origin initially , like what happens in a normal new inflationary scenario .therefore , one can consider four attractor paths down from the origin as the slow - roll path , which are equal pairwise . ^ 2+m_\psi ^2 ( \phi_0 ^2-\frac{m_\psi ^2}{\lambda}),\ ] ] ^ 2+m_\phi ^2 ( \phi_0 ^2-\frac{m_\phi ^2}{\lambda}).\ ] ] to protect the gauge symmetry for each field one requires , where stands for the corresponding field .one has to note that the potential exhibits the ssb due to its positive definition , whether the gauge symmetry has been broken or not . in the case of gauge symmetry breaking , the potential develops one - dimensional kinks , different in shape for each scalar . if both scalar fields are involved in the gauge symmetry breaking , then four vacua will develop . for ,all vacua are degenerate and stable , while for , the two vacua which are related to the more massive field , become metastable and ultimately decay to the two stable vacua . since the decay rate decreases exponentially by decreasing the difference of vev s , it is possible to consider the situations in which the metastable lifetime exceeds the age of the universe . here, some more elaboration is in order .if ( [ dssb ] ) is assumed for supersymmetric masses set to zero then one obtains which means that in the space of the charged scalar field norms , the potential shows a symmetry , explicitly broken by an effective mass of one of the fields .to begin , we propose the simplest asymmetric scenario in a two - fields potential in which we require that satisfies the following constraint : for , we know that is a solution of ( [ simplest ] ) for any arbitrary function .this ensures symmetry if we choose and , and consequently breaks this symmetry .the above equation has the general solution of the form or where we consider positive definition for throughout the paper . to achieve a more familiar potential formlet us recast the fields into and and also let the function to have an ordinary symmetry breaking appearance .then the potential ( [ wwe ] ) could be written as the potential ( [ original ] ) shows a full circular symmetry in the first parenthesis resembling the higgs potential . in the inflationary contextthis is a self consistent version of a particular hill - top model .in fact , without the last term in ( [ original ] ) , the circular freedom in the vacuum corresponds to the massless goldstone boson but for the case under consideration , the vacuum is not a continuous minimum and as we will see , the circular field acquires mass as well as its own roll down mechanism which has to be considered in slow roll assumption and could bring about different consequences . from now on , we will discuss the symmetry breaking term with the plus sign in ( [ original ] ) , but a brief argument about this choice is in order .suppose we choose the plus sign in ( [ original ] ) , then by the following redefinition and interchanging the variables , one obtains where so the plus or minus choice for epsilon coefficient is trivial up to a constant .although breaking the symmetry down to is very common in condense matter and superconductivity , it has received less attention in fundamental theories . 
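since the explicit expressions above are garbled in this extraction, the following sympy sketch uses an assumed toy form of the explicitly broken potential, a circularly symmetric quartic plus a small quadratic term in one field, purely to illustrate the vacuum structure being described: two degenerate discrete minima, an origin that is a local maximum for small breaking, and two side saddle points. the form and the parameter values are ours, not necessarily the paper's parametrization.

```python
# assumed toy potential (not necessarily the paper's exact form): an O(2)
# symmetric quartic plus a small psi^2 term breaking U(1) explicitly to Z2.
import sympy as sp

phi, psi = sp.symbols('phi psi', real=True)
lam, eps, phi0 = sp.Integer(1), sp.Rational(1, 10), sp.Integer(1)   # hypothetical values

V = lam/4*(phi**2 + psi**2 - phi0**2)**2 + eps/2*psi**2

crit = sp.solve([sp.diff(V, phi), sp.diff(V, psi)], [phi, psi], dict=True)
H = sp.hessian(V, (phi, psi))

for c in crit:
    eig = list(H.subs(c).eigenvals())
    print(c, ' V =', sp.simplify(V.subs(c)), ' hessian eigenvalues:', eig)
# for eps < lam*phi0**2 this prints two degenerate minima at (phi, psi) = (+-phi0, 0),
# a local maximum at the origin, and two saddle points on the psi axis at
# psi**2 = phi0**2 - eps/lam, i.e. the pair of discrete Z2 vacua described in the text.
```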
in order to have a better perspective about the potential ,let us change the field coordinates into polar coordinates by setting and .the potential then becomes now we are able to discuss the behavior of the potential by taking a differentiation with respect to the radial field . to learn about the extrema , let us find the roots of ; obviously , there could be up to three roots . herewe are able to categorize the potential as bellow { \rm 1 \ ; or\ ; 3 \ ; roots\ ; } { \rm \quad saddle\ ; point\ ; at\ ; the\ ; origin}\label{ue2}\ ] ] ( left ) and ( right ) .the origin appears as local maximum for and is a saddle point for the other case ., title="fig : " ] ( left ) and ( right ) . the origin appears as local maximum for and is a saddle point for the other case ., title="fig : " ] the stronger the inequality ( [ ue2 ] ) , the wider range of recognizes the origin as minimum . for ,the two saddle points are located at on the axis .as the inequality becomes weaker the saddle points move toward the origin where finally meet the origin for .the origin remains a saddle point for ( figure [ fig : lmsp ] ) .since the potential exhibits discrete vacua after symmetry breaking , domain wall production seems inevitable . more technically , it is the vacuum manifold that determines the character of possible topological defects and in our case the zero homotopy group , or zero homotopy set as mathematicians prefer , is not trivial which warns us about the inevitability of domain walls generation through the kibble mechanism .one expects at least one domain wall per horizon volume if the symmetry breaking is * perfect*. the potential ( [ original ] ) has recently been analyzed due to the capability of generating domain walls with a rich dynamics .there is a tight constraint on the existence of domain walls except for very low surface energy densities .let us see why this is so .for the very popular toy model of discrete symmetry breaking in which the effective potential has the well - known form ; where is a real field , the scenario has been already analyzed in many references so it would be adequate to review only the results . for simplicity , we assume a minkowskian background space - time since it suffices to indicate the major properties . after the symmetry breaking is settled down , as a very simple simulation , one could suppose a planar domain wall placed on the plane at .for planar domain walls , everything is independent of and coordinates and as long as we are interested in the static situation , the time dependency is eliminated .so we get the first integral of this equation is where the constant of integration vanishes when we impose the boundary conditions of vanishing of the potential and the spacial field derivative at the infinity .then the domain wall solution for the assumed boundary condition is or and for the surface energy density of the domain wall we obtain the thickness of the wall is defined as . in an expanding universe ,any proper velocity of the walls very soon becomes negligible , which leaves the universe with a non - relativistic network of domain walls , and here the problem arises ; according to the kibble mechanism , domain walls are generally horizon - sized so we can estimate their mass as if we do for a horizon - sized plate i.e. , so the mass energy density can roughly be approximated as .we know that the critical density evolves as . therefore . 
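the kink formulas just reviewed (the tanh profile, its surface energy density and the wall thickness) appear with their symbols stripped; they are the standard ones for the quartic double well and can be checked symbolically. the sketch below assumes the conventional form of that potential and is purely a consistency check, not the paper's notation.

```python
# consistency check of the standard Z2 kink for V = (lambda/4)*(f^2 - eta^2)^2
import sympy as sp

z, f = sp.symbols('z f', real=True)
lam, eta = sp.symbols('lambda eta', positive=True)

Vf = lam/4*(f**2 - eta**2)**2                        # the double-well potential
phi = eta*sp.tanh(sp.sqrt(lam/2)*eta*z)              # candidate kink profile

# static field equation phi'' = dV/dphi and its first integral (1/2) phi'^2 = V
print(sp.simplify(sp.diff(phi, z, 2) - Vf.diff(f).subs(f, phi)))            # -> 0
print(sp.simplify(sp.Rational(1, 2)*sp.diff(phi, z)**2 - Vf.subs(f, phi)))  # -> 0

# with the first integral, sigma = int sqrt(2 V) dphi between the two vacua;
# on the kink trajectory (|f| <= eta), sqrt(2 V) = sqrt(lam/2)*(eta^2 - f^2)
sigma = sp.integrate(sp.sqrt(lam/2)*(eta**2 - f**2), (f, -eta, eta))
print('sigma =', sp.simplify(sigma))                 # -> 2*sqrt(2)*sqrt(lambda)*eta**3/3

# the wall thickness is set by the profile's decay scale
print('thickness ~', 1/(eta*sp.sqrt(lam/2)))
```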
by setting at about the gut scale ,the domain wall energy density reaches the critical density already in the time of wall generation and one expects that in our time this ratio becomes which shows a catastrophic conflict with reality .in order to compromise between the introduced domain walls and the observations , one needs to decrease the energy of the possible domain walls to very small values . as we will derive in ( [ sedw ] ) for the more sophisticated domain walls other parameters involve to determine the wall energy which gives us sort of freedom to prepare the potential to work as inflation .the other remedy is to allow the disappearance of domain walls so early that not only diminish from the density calculations but also not altering the cmb isotropy , considerably .this can be achieved in various ways .for example , one can imagine the potential as an effective potential to demote the wall to an unstable version , by which , one of the vacua will disappear through the biased tunneling effect , or considering some sort of destructing collisions which are generally fatal for a kink stability and for such primordial walls the primordial black holes might be the best candidates .it is worth mentioning that domain walls even at very low energies could cause a residual dipole anisotropy in large scale observations , and such an anisotropy is receiving increasing observational supports .static double - field domain wall solutions corresponding to ( [ original ] ) satisfy the following two coupled euler - lagrange equations these equations can be merged into to find the solution of the above equation with appropriate boundary conditions , we have a guide line ; must be odd with respect to coordinate due to its main role in the discrete symmetry breaking process , while for both sides of the wall in has the same characteristics since both vacua lay at .in other words , has to be even with respect to the z coordinate .moreover ( [ merged ] ) is subject to the following boundary conditions the trajectory of transition between the vacua does nt pass through if , since the origin in this case is a local maximum of the potential . to find the domain wall solution ,first , we employ an appropriate ansatz for one field and derive the other field .then we will check the accuracy of the final solution by comparing it with the numerical solutions .our estimation about the final form must fulfill the boundary conditions .the best choice would be , this hyperbolic form which has been inspired by the kink solution , fully satisfies the boundary conditions and indicates the odd characteristic of .the appearance of is also reasonable since after two times differentiation it will produce the desired factor while the coefficient cancels out by division .next , we put this ansatz solution into ( [ merged ] ) , which after some straightforward calculation one obtains for ; where and are constants of integration . but the above solution has been separated into an odd term and an even term .so to keep the evenness property of we require . to fix the solution we have to find the remaining constant . this can be done by means of the minimum energy theorem and integration , but we utilize the static virial theorem , since both of these two theorems stem from the least action principle , they could be used interchangeably .the static virial theorem has another important consequence of vanishing the tangential pressure for the wall , which we prove before inserting it into our calculation . 
for a typical multi - field potentialthe euler - lagrange equations have the general form where enumerates the fields . for a static solution and a planar wallthese can be written as here is chosen to be the coordinate perpendicular to the wall .multiplying by one obtains =\frac{d}{dz}\left[\frac{1}{2}\left(\frac{d\chi_a}{dz}\right)^2-v(\chi_a)\right]=0,\ ] ] if we require a true vacuum to have zero energy expectation value then the constant of integration in the above equation should be zero .note that the derived static one dimensional version of virial theorem must be valid for the proposed guess if we require it to satisfy the euler - lagrange equation of motion .obtaining another relation among the fields first derivative and the potential , one can use it in order to determine in ( [ mge ] ) by requiring substituting for , and and after some algebraic simplification one finds which results in this solution approximates very closely the more accurate numerical solution ( figure [ fig : domainwall ] ) and the wall width is . as the next step one can calculate the energy - momentum of the wall (1,1,1,0)=\frac{\epsilon\phi_0 ^2}{\cosh^2{\sqrt{\epsilon}z}}\left(1-\frac{2\epsilon}{\lambda\sigma^2}\tanh^2{\sqrt{\epsilon}z}\right),\ ] ] and have been plotted for , and , although the parameters ave been selected very close to the boundary of the approximation validity ( ) still the analytical solutions are very close to numerical solutions .the insets are zoomed to demonstrate the tiny difference .] note that although our domain wall solution ( [ pdw],[sdw ] ) was approximate , the vanishing of is an exact result .if we use this result to find the surface - energy density we find this result seems interesting since now one can control the effect of by and ultimately for the case of one obtains , which is independent of .this result is noticeable because in contrary to the kink case , now it is possible to decrease the wall energy without decreasing the maximum of the potential , in other words , the peak could be chosen , say at the gut or even planck scale , since the wall energy density is low enough to avoid domain wall density domination . to be more clear parameter controls the height of the potential through while is responsible for the wall energy density .the above approximate solution , circumspectly , could be generalized to the below potentail where is an arbitrary real number . in order to adapt our postulate to the well - analyzed groups ,we require to be integer or half integer .since other choices of are related to our line through a field redefinition , the above assumption does nt demote the level of the generality .then one has to note that ( [ gen ] ) shows explicit breaking of symmetry into the where restores ( [ polar ] ) . by this , one obtains domain walls , which are equal up to the free lagrangian . but the picture becomes more contrived if one assumes interactions of and independently , since they acquire different values for each vacuum .now , let us practice the ( [ pdw],[sdw ] ) approximation for the new case .suppose we have a potential with which provides domain walls .first we recast the fields into their cartesian form by and .then for the domain wall , the boundary conditions read and where each line demonstrate two distinct possibilities for each sign .obviously , the odd and even properties of the fields , which are crucial for the next steps , disappear . 
in order to restore the parity characteristics in boundary behavior one needs to rotate the fields by angel , where counts the number of the intended walls when we enumerate them counter clockwise from and is the angular period of the potential . then , for even , the potential remains the same since the rotation angel is an integer times the period .but for odd , further elaboration is needed . in the rotated cordinatethe boundary conditions are which exhibit the desired parity .therefore the next steps are eligible and the approximation becomes and the validity bound of the above approximation becomes narrower by increasing , since it requires .one can return to the original coordinates by a simple reverse rotation , respectively ; proposed potential in an inflationary perspective should be categorized within the `` new inflation '' models , while the inflation is expected to begin near the maximum where the fields leave the origin .new inflationary scenarios received great welcome because they do not have the common problems of the old inflation in completion of inflation .in fact , old scenarios need bubble collisions but a new inflationary scenario can end with a more realistic process of oscillation around a minimum . here, we try to scrutinize how ( [ original ] ) works as an inflationary potential , too .the extreme situation occurs when epsilon is considered as a small perturbation parameter in the original symmetry breaking term and can be ignored for the most of the process , then the potential is readily reduced to the single potential the same as the simple symmetry breaking which has been already explored in some respects . henceforth , in order to have a measure for the remaining part of our survey it is convenient to know the inflationary characteristics of the potential when the asymmetric term is ignorably small . to provide more similaritywe can rewrite the potential as in which we have used obviously , this potential is now in the domain of `` new inflation '' , in which the field rolls away from an unstable equilibrium , here placed at the origin . even after considering the whole potential, this will remain the main theme of the analysis .we can proceed by making a taylor expansion keeping the leading terms and ignoring the rest due to for the roll down path . where it is a well - known result that a potential in the form works properly as an inflationary potential for if it is lfm ( large field model ) i.e. .to be more precautious and to have a measure for the coming procedure , let us see the case more closely .the slow roll parameters are and to estimate the field value at the end of inflation we require , then the appropriate solution would be assuming and utilizing the taylor expansion in favor of the leading terms we can write ( [ rend1 ] ) as one can recognize that the end of inflation happens close to the true minimum . having an estimate for , we are able to obtain the e - folding interval between the time that cosmological scales leave the horizon and the end of inflation where we set for simplicity . 
substituting ( [ rstar ] ) in ( [ efold1 ] ) and making some straightforward approximation again , one obtains +\frac{1}{8}(\frac{1}{r_{*}^2}-\frac{1}{r_{end}^2 } ) = \frac{1}{8\alpha^2}\left[(2-r_*\alpha)^2 - 2(1-\sqrt{2}\alpha)+r _ * ^2 \alpha^2\right].\ ] ] one can solve the above equation with repsect to for the only acceptable solution : so far we supposed to validate our approximation and now appears in the denominator .one can readily justify that is a monotonically decreasing function of .it implies that lowering the raises the field value in which the desired scale leaves the horizon .this statement is reasonable since smaller leads to a decrease in the slope of the potential with respect to the field .this provides us a straightforward method to find the maximum allowed value for in which coincides with the origin i.e. , then the answer will be therefore from ( [ apr ] ) , if one requires , it roughly means . which is in accord with the previous assumption about smallness of . to have an insight, we have to emphasize that this upper bound for coincides with ignorable and undetectably low value for primordial gravitational waves as it will become clear shortly .now let us have a look at the most important observational constraints on any inflationary hypothesis ; spectral index and tensor to scalar perturbation ratio . for the spectral indexwe obtain .\label{bign}\ ] ] this estimation is accurate enough to indicate that for one regains the scale - invariant harrison - zeldovich - peeble s spectrum as expected .recall that we keep assuming .let us use the fact that tensor to scalar perturbation ratio `` '' ; must be smaller than . combining the definition of `` '' with ( [ srp1 ] ) and making some simplification yields which points to a vanishingly small when the horizon exit happens near the origin as mentioned before .substituting for we obtain where is defined as .\ ] ] then for the spectral index we have this estimation is not precise enough yet but helps us to have a better insight . assuming a lower value for means that moves away from the origin but this can not effect the gravitational wave strength considerably , since ( [ ret ] ) can always be approximated as from ( [ bign ] ) one can verify that is a monotonically decreasing function of in the allowed range ( figure [ fig : spectral ] ) . with versus for the range of e - folding ( and ) . therefore one can claim that for the model under investigation , spectral index is a monotonically decreasing function of . ] but we have an accurate observation for the scalar spectral index by planck tt+lowp cl , ; this yields the range of valid for a typical e - folding means which exhibits a relatively stringent fine tuned need of the model . 
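since the symbols in the expressions for the slow-roll parameters, the spectral index and the tensor-to-scalar ratio are lost above, the following numerical sketch evaluates the standard single-field slow-roll formulas for a symmetry-breaking potential of the type discussed, with the field rolling outward from near the origin. the potential form, the vacuum value and the number of e-folds are our own illustrative choices in reduced-planck units, not the paper's fitted numbers.

```python
# illustrative slow-roll cross-check (not the paper's numbers): V = (lambda/4)(phi^2 - v^2)^2,
# field rolling outward from near the origin toward phi = v; reduced Planck mass set to 1.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

v = 20.0          # hypothetical super-Planckian vacuum expectation value
N_star = 60.0     # e-folds before the end of inflation

Vf  = lambda p: (p**2 - v**2)**2 / 4.0          # overall lambda cancels in eps, eta, N
dV  = lambda p: p*(p**2 - v**2)
d2V = lambda p: 3*p**2 - v**2

eps = lambda p: 0.5*(dV(p)/Vf(p))**2            # first slow-roll parameter
eta = lambda p: d2V(p)/Vf(p)                    # second slow-roll parameter

phi_end = brentq(lambda p: eps(p) - 1.0, 1e-6, v - 1e-6)      # end of inflation

# e-folds between phi and phi_end; find the horizon-exit value phi_* giving N_star
N = lambda p: quad(lambda q: Vf(q)/dV(q), phi_end, p)[0]
phi_star = brentq(lambda p: N(p) - N_star, 1e-3, phi_end - 1e-6)

ns = 1.0 - 6.0*eps(phi_star) + 2.0*eta(phi_star)
r  = 16.0*eps(phi_star)
print(f"phi_end={phi_end:.3f}, phi_*={phi_star:.3f}, n_s={ns:.4f}, r={r:.4f}")
```

with these hypothetical numbers the run returns n_s close to 0.96 and r of a few times 10^-2, values in the ballpark the text describes for super-planckian vacuum scales; pushing the horizon-exit point closer to the origin drives r toward zero, as stated above.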
on the other hand ,the estimated lower limit for can help us to determine the maximum of since from ( [ tit ] ) we have which is again a decreasing function of in the permitted region such that remains between and , approximately .one can estimate the tensor to scalar ratio using ( [ rca ] ) in ( [ ttosa ] ) to obtain this function monotonically decreases with repsect to in the acceptable range of , with the maximum value about for .as one expects , the tensor to scalar ratio vanishes for since it requires the initial point to be the fixed point ( figure [ fig : tensortoscalar ] ) .therefore we learn that the model needs a super - planckian value for to work properly according to the available data from planck+wp+bao .there is another piece of observational information yet to be addressed .actually the oldest and the most well - known part of it . from the expansion of the power spectra for the curvature perturbations we have , the planck constraint on implies an upper bound on the inflation energy scale which is readily transformed to the more useful form taking into account the slow - roll paradigm once more , we obtain which equivalently means for the potential ( [ original ] ) the dynamics is limited between two extrema .the first is similar to the previous case where the field starts rolling down from the vicinity of the axis and remains close to it throughout its motion .this is the most probable possibility due to the angular minimum of the potential and this is more or less the only possibility if one takes , because as it was discussed , in this situation , the origin becomes a saddle point and seems like an attractor path .but if we assume , then other paths are possible depending on different initial conditions , reminding that the nearer to the axis the less chance for the path to be chosen , again because of the angular behavior of the potential .but as a possibility , though a very weak one , we consider the ultimate radial path from the origin to one of the side saddle points at and then a curved orbit toward the true vacuum .note that the second part of the path could happen independently from the starting point .at the point where these different trajectories meet , some transient oscillations might happen , which are damped by the following inflationary period .generally , these two extreme paths could bring different expansions as we will discuss .for the moment , let us concentrate on the path from the origin toward the saddle point .remember that although this is not an attractor and the chance of following this path is low , still as we emphasized earlier , we consider it as a bound for what could happen . 
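before following that path in detail, note that the numerical bound on the inflationary energy scale implied by the amplitude constraint quoted earlier in this section does not survive the extraction; as an indicative substitute, the snippet below evaluates the standard single-field relation A_s = V/(24 pi^2 eps M_p^4) with eps = r/16, using the approximate planck amplitude A_s ~ 2.1e-9. the r values are hypothetical.

```python
# indicative evaluation of the inflationary energy scale from the scalar amplitude,
# using the standard slow-roll normalization A_s = V/(24 pi^2 eps Mp^4) and r = 16 eps.
# A_s ~ 2.1e-9 is the approximate Planck value; the r values are hypothetical.
import numpy as np

Mp  = 2.435e18          # reduced Planck mass in GeV
A_s = 2.1e-9            # approximate scalar amplitude

for r in (0.1, 0.01, 1e-3, 1e-4):
    V4 = (1.5 * np.pi**2 * A_s * r) ** 0.25 * Mp   # V^(1/4) in GeV
    print(f"r = {r:7.4f}  ->  V^(1/4) ~ {V4:.2e} GeV")
# r = 0.1 gives roughly 1.8e16 GeV (near the GUT scale); smaller r lowers the
# scale only as r^(1/4), so even a hilltop model with tiny r inflates at a very
# high scale.
```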
as long as we restrict ourselves to move on the axis, the effective potential can be simplified as which can be reordered as it can be recognized now that the potential is similar to ( [ ssb ] ) shifted by a constant .so we can proceed in a similar manner unless this time we set to obtain the amount of e - foldings in this part of the slow - roll path ; actually , the inflation could nt reach the saddle point at and since it is interrupted at due to the growth of the slow - roll parameter .this is a transitive situation and inflation starts quickly in a new path as we will consider shortly .the whole e - foldings through trajectory is to simplify the above expression , we invoke two facts ; first we know that the pivot scale leaves the horizon soon near the origin ( ) and second , the inflation stops essentially before so the last two terms make no considerable contribution and we finally obtain it is seen that the denominator could intensify the e - foldings by providing a semi - flat trajectory on the axis . but remember that this path is not likely to happen due to instability .now we can write however the effect of varying only slightly changes our picture about since all other features along this path more or less resemble the ordinary symmetry breaking case which was discussed earlier and we will consider this again from a different view . but inflation could happen along a completely different path ; starting from the saddle point and ending at the true vacuum . to have an estimation about the selected path due to its low kinetical energy we suppose that the fields remain in the radial minimum throughout their trajectory .therefore for obtaining the equation of the estimated path first we find the radial minimum of the potential in the polar form ( [ polar ] ) ignoring the solution , which correspond to the maximum at the origin , for a given angle the radial minimum obeys the following equation returning to the original cartesian form , we have setting we recover the symmetry as expected . the true vacua also satisfy the above equation . to simplify the analysis ,we will focus on the quarter and solve ( [ path ] ) for to obtain since we assume the regime , expanding the square root and keeping the most important terms we deduce or equivalently finally the above approximation allows us to recast the potential ( [ original ] ) in the form of a single field potential while for the supposed slow - roll path we estimate then obviously for we obtain as we expected , the inflation energy scale is determined by and since the height of the barrier depends on them. under this condition , the potential completely fits to the first new inflationary models called hill - top models with the general shape if we set , and then the model predictions are and which are in agreement with planck+wp+bao joint cl contours for lfm ( ) i.e. although is reduced compared to the original symmetry breaking case thanks to providing a longer curved trajectory with a smaller slope , the potential is still considered a super - planckian model without any known physical motivation . 
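to visualise the curved valley trajectory from the saddle point to the true vacuum discussed above, the sketch below traces the radial minimum r_min(theta) for the same assumed toy potential used in the earlier sketch (a circularly symmetric quartic plus a small psi^2 breaking term). the form and parameters are illustrative assumptions, not the paper's.

```python
# illustrative trace of the "valley" (radial-minimum) trajectory for the assumed
# toy potential V = lam/4 (r^2 - phi0^2)^2 + (eps/2) r^2 sin^2(theta); this form
# is our stand-in, chosen so that theta = pi/2 is the angular maximum and
# theta = 0 the angular minimum, as stated in the text.
import numpy as np

lam, eps, phi0 = 1.0, 0.1, 1.0            # hypothetical parameters, eps < lam*phi0**2

theta = np.linspace(np.pi/2, 0.0, 200)    # from the saddle direction to the vacuum
r_min = np.sqrt(phi0**2 - (eps/lam)*np.sin(theta)**2)   # nonzero root of dV/dr = 0

V_valley = lam/4*(r_min**2 - phi0**2)**2 + eps/2*r_min**2*np.sin(theta)**2
print("saddle:  r = %.4f, V = %.5f" % (r_min[0],  V_valley[0]))
print("vacuum:  r = %.4f, V = %.5f" % (r_min[-1], V_valley[-1]))
# the potential decreases monotonically along the valley from the saddle point on
# the psi axis down to the true vacuum on the phi axis.
```

along this valley the potential falls monotonically from the saddle value to zero at the vacuum, which is the single-field reduction exploited in the text.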
particularly , picci - quinn mechanism which is supposed to happen at qcd scale falls far bellow the needed energy of inflation and even the reheating process in the above hypothesis likely produces undesirable domain walls to explain the invisible axions .the other probable scenario is rolling down the origin of field space .first , let us see which direction is more likely to be chosen for rolling down .returning back to the field space polar coordinates we obtain for a certain , the line is always maximum while is a minimum so one probably expects slow - roll happen on the axis or at least it appears as an effective attractor .but as it is also obvious from ( [ tet ] ) , near the origin all directions appear on the same footing so the field could follow different trajectories later on .in other words , the path chosen is very sensitive to the initial conditions and despite that the axis is the most probable path , other options are not ruled out .simulations confirm this idea ( figure [ fig : slowroll ] ) . to analyze the inflationary scenario in this case ,first let us derive on the field space ; even if the ratio was not very small , we would rely on the smallness of with respect to to establish the following approximation the solution straightforwardly can be obtained as in which is an arbitrary constant which stems from arbitrariness in starting direction of the slow - roll .as gets closer to 1 , more sophisticated approximations will be necessary .for example , the above equation insures that the path outward origin remains linear with a good accuracy for small ratio , until the path meets the radial minimum at in which quite suddenly vanishes .this abrupt redirection of course could raise the chance of isocurvature density perturbations for large or even for small ratio at the end of inflation . but remember that expansion appears as a damping term and prevents the field to acquire much kinetic energy and the subsequent tumbling . in thisregard , numerical simulations confirm the slow - roll parameter predictions and slow - roll continues up to the true vacuum vicinity despite the slow - roll redirection in the field space ( figure [ fig : slowroll1 ] ) . fornow , let us see how the potential looks in an arbitrary radial direction . to this end , we switch to the polar coordinates once more and write the potential for a fixed arbitrary angle . again , we encounter a hill - top model as could be expected and as mentioned before we should nt worry about a slow - roll interrupt in the redirection point since the inflation continues more or less in the same manner up to the true vacuum .note that in a typical new inflation like what we consider , the first stages of inflation are much more effective in producing the e - foldings and the redirection always happens after these critical era such that gives us enough excuse for the mentioned approximations .this time we should define to check this against the previous results , we can take or expecting that both of these choices result in the constraint that we have already had from simple symmetry breaking in ( [ ssb ] ) .both of the above situations reduce ( [ mumu ] ) to then the hill - top constraint of readily yields which is approximately in accord with the previously achieved constraint . on the other hand , the bigger , the smaller required for matching with the observation . 
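the statement above that the trajectory leaves the origin almost radially and only later bends toward the angular minimum can be checked with a direct slow-roll integration. the sketch below integrates dphi_i/dN = -dV/dphi_i / V in reduced-planck units for the same assumed toy potential; the starting angle, the parameter values and the stopping criterion (first slow-roll parameter reaching one) are our own illustrative choices, not the paper's simulation.

```python
# illustrative two-field slow-roll integration (assumed toy potential, hypothetical
# parameters) showing the behaviour described above: the trajectory leaves the
# origin almost radially and then bends toward the phi axis / radial valley
# before inflation ends.  Units: reduced Planck mass = 1.
import numpy as np
from scipy.integrate import solve_ivp

lam, eps_b, phi0 = 1.0, 10.0, 20.0          # hypothetical; eps_b < lam*phi0**2, phi0 super-Planckian

def V(p, s):
    return lam/4*(p**2 + s**2 - phi0**2)**2 + eps_b/2*s**2

def gradV(p, s):
    common = lam*(p**2 + s**2 - phi0**2)
    return np.array([p*common, s*common + eps_b*s])

def rhs(N, y):                               # slow-roll flow: dphi_i/dN = -dV_i/V
    return -gradV(*y) / V(*y)

def end_of_inflation(N, y):                  # first slow-roll parameter reaches 1
    g = gradV(*y)
    return 0.5*np.dot(g, g)/V(*y)**2 - 1.0
end_of_inflation.terminal = True

y0 = [0.1*np.cos(np.pi/3), 0.1*np.sin(np.pi/3)]      # start near the origin, 60 degrees
sol = solve_ivp(rhs, [0.0, 5000.0], y0, events=end_of_inflation,
                max_step=1.0, rtol=1e-8)

print("e-folds until epsilon_V = 1:", round(sol.t[-1], 1))
print("field values at the end:  phi = %.3f, psi = %.3f" % tuple(sol.y[:, -1]))
# the final point lies close to the phi axis near the true vacuum, after the
# trajectory has slid along the radial valley, as described in the text.
```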
herea delicate point has to be taken into account .although in the derivation of ( [ mumu ] ) we have not considered any constraint on the ratio , but in order to have a multi - directional operation , is required , since the other case has the ability to bring about an imaginary in some directions which consequently changes the sign of in ( [ mumu ] ) . roughly speaking , controls the range of the angle in which the negative slope is seen from the origin : when then but in the opposite case this angle for the positive half space is obtained from the division by two here results from considering half plane .this conclusion is not surprising at all since for the origin becomes a saddle point as discussed earlier .when the inequality gets stronger , the axis plays the role of attractor more effectively .so in this regime we can ignore the field in the slow - roll stage such that the potential reduces to which were discussed fully earlier .it might be worth arguing that in the last case ( i.e. ) , the potential for the initial condition in which fields are located far from origin , imitates the chaotic inflation in the form of which continues along the trajectory to the true vacuum . in figure[ fig : sub1 ] one observes the curved path from the saddle point on the axis toward the true vacuum at . to plot these graphs we simply calculated the absolute slope by to be more rigorous , the length of the projected interval between the two saddle points on the plane is and disappears as soon as reaches ( or gets bigger ) .therefore , it is natural to take into account the new path provided for the fields slow - roll . to indicate what we exactly talk about onecan refer to figure [ fig : fieldflow1 ] . as it is depicted in the figures , in the proposed potential, the slow - roll path could be completely different from the radial flow of an ordinary symmetry breaking .it is saying that the path is very sensitive to the initial conditions and the figures are just two possible paths among many .note that in figure [ fig : fieldflow1 ] the slow - roll comprises of two different paths ; a radial and a nearly circular path , in contrast to the case , in which the saddle points unite at the origin .as we have shortly discussed under the motivation topic , the latter case approximately reduces to the inverted hybrid inflation case , with high energy scale domain walls which are formed before the inflationary procedure .let us get back to the main trend and suppose .it was discussed earlier that the path consists of two distinct parts , a radial and a nearly circular one , although the circular part changes in shape as the inequality of becomes weaker .one has to note that if the slow - roll happens to be on the axis , then again the problem reduces to the ordinary symmetry breaking scenario and the circular path does nt appear anymore .the remaining possibility that has less importance relates to the case when is comparable to . as it was discussed earlier , in this casethe slow - roll path is a curved line ( figure [ fig : fieldflow2 ] ) and for such trajectories , there are other methods to deal with the slow - roll process - . 
however , since recent observations are generally in favor of single field inflation , or at last , such models that evolve along an effectively single - field attractor solution , we limited our survey to the situations in which we can approximate our double field potential with a single field one .we estimated the domain wall properties for an explicitly broken symmetric potential introducing an approximation that nicely fits both the euler - lagrange equations with appropriate boundary conditions and the static version of viral theorem .we showed that adding one degree of freedom into our lagrangian in the form of a new field , helps us evade the domain wall domination problem of the ordinary kink without decreasing the scale energy of the potential .the price that has to be paid is relaxing the symmetry as an exact symmetry of the model .this allows us to have super - planckian scale of energy for the peak of the potential while the domain wall energy is sufficiently low to avoid conflict with observation .the exact allowed values of parameters are of course model dependent .more rigorously , it has to be first determined , in which cosmological era and correlation length ( horizon ) , the domain walls will form .the descending of to is not an unprecedented scenario and the same explicit symmetry breaking have been suggested as a remedy for invisible axion and two higgs doublet models . from an observational point of view , the model parameters could be set such that the wall explains any confirmed cmb residual dipole anisotropy . in the other extreme , we proposed the domain wall production to happen before inflation and claimed that for super - planckian values , this scenario could work properly .we thoroughly examined the potential as the source of inflation .we just focused on those cases which are reduced to a single field inflation since such models have received more appreciation after the planck data .our study indicated that all the mentioned scenarios could be classified into the `` new inflationary models '' and almost always into the hill - top subclass of it .we also introduced an analytic , though approximate proof for the well - known simple symmetry breaking potential which indicates nearly complete accordance with the previously obtained numerical values .we tried to encompass all inflationary possibilities of the potential .as briefly mentioned , there is hope to explain the possible cmb dipole anisotropy by means of domain walls , which could simultaneously solve the invisible axion problem .therefore , it is important to make a compromise between cosmological evidence , particle physics requirements and domain wall formation as we tried to do so .ade , et al [ planck collaboration ] , arxiv:1502.02114 .ade , et al .[ planck collaboration ] , astron .571 , a22 .bicep2 2014 results release . national science foundation .p. a. r. ade , et al [ planck collaboration ] , arxiv:1502.01589 .a. r. liddle , arxiv : astro - ph/9910110 , ( 1999 ) .a. linde , arxiv:1402.0526v2 .j. martin , c. ringeval , v. vennin , physics of the dark universe . 5 - 6 , 75 .d. h. lyth , a. riotto , phys .( 1999 ) j. martin , c. ringeval , v. vennin , phys . rev .114 , 081303 .m. eshaghi , m. zarei , v. domcke , n. riazi , a. kiasatpour , jcap 11 , 037 .a. gharibi , advances in modern cosmology , intech .m. khlopov , symmetry 7 , 815 . ( 2015 ) .r. n. greenwood , d. i. kaiser , e. i. sfakianakis , physical review d 87 : 064021 .g. aad , et al .( atlas collaboration ) , new j. phys .043007 . 
we have analyzed a model which is explicitly broken to a model. the proposal results in the generation of two stable domain walls, in contrast with the more common version which is prevalently used to explain axion invisibility for the model. we have been careful to take into account any possible relation with previous studies, even if they apparently belong to different lines of research. we have then scrutinized the domain wall properties of the model, proposing a rigorous approximate solution which simultaneously satisfies the boundary conditions and the static virial theorem. invoking this approximation, we have been able to obtain analytical insight into the effect of the parameters on the domain wall features, particularly on their surface energy density, which is of great importance in cosmological studies when one tries to avoid the domain wall energy domination problem. next, we have focused mainly on the likely inflationary scenarios of the model, including saddle point inflation, again insisting on analytical discussions in order to follow the role of the parameters. we have tried to place each inflationary scenario into the known categories so as to take advantage of the previous detailed studies of inflation carried out over the decades. we have concluded that any successful inflationary scenario requires a large-field realization of the model. calculations are mainly done analytically, and numerical results are used as supporting material.
scattering is a phenomenon where some form of a travelling wave excitation ( light , sound , etc . )deviates from its original trajectory due to a change in the properties of the medium along its path . in the context of electromagnetic waves , given a field incident on an object of known permittivity, it is quite straightforward to calculate the scattered field in various directions . in caseswhere this can not be done analytically , several computational methods can be employed .this is known as the forward scattering problem .the inverse problem , however , is more challenging .it consists of determining the unknown spatial permittivity of an object based on measurements of the scattered field . in order to understand the properties of the scattered field , bucci et al . considered the electric field integral equation ( efie ) approach , and noted that the integral operator in this case is compact . by invoking a theorem due to kolmogorov and fomine concerning the properties of such an operator, it was deduced that the scattered field has a finite dimensional representation .further , the singular values of the operator rapidly decay after a certain threshold , a property attributed to the analyticity of the operator .thus , it was concluded that the scattered field can be represented by a finite number of singular vectors , each associated with a singular value .this critical number was referred to as the _ degrees of freedom _ of the scattered field , and in the two - dimensional case of a circular observation domain bounding a circular scatterer of radius , this number was found to be equal to , where is the free space wavevector . also see for a lucid derivation of this decomposition . the mathematical machinery used in the works of bucci et .al is formidable , as it is rigorous .instead , we present a much simpler route to the same results by invoking certain elegant properties of bessel functions .we start in section [ methods ] by discretizing the efie , and derive expressions for the discrete fourier transform of the field scattered by a bounded dielectric object in the case of transverse electric polarization . using a key property of bessel functions of fixed argument ( demonstrated in appendix [ bessel ] ) , namely that their amplitude monotonically decays to zero as the order is increased beyond a threshold value , we present our main results on the bandlimited nature of the scattered fields in section [ results ] . in this section, we also present numerical validation with the finite element method , and consider the case of an object illuminated by multiple incidence angles .we conclude in section [ discussion ] with a discussion on the hardware implications of our results , and make a few other connections. bounded within a circle of radius is illuminated by an incident field .the scattered field is measured on the circle of radius . to discretized source and observation points , respectively.,scaledwidth=50.0% ]consider a dielectric object bounded in a two - dimensional region of space , , whose relative permittivity , is a function of space , immersed in a medium of constant ( and real ) relative permittivity , .in this situation , if we consider a transverse electric ( te ) polarization ( ) , the -component of the electric field obeys the helmholtz equation ; where is the magnitude of the wavevector in free space , and in the relative permittivity at . 
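for readers who want the equation written out, the te helmholtz equation referred to above has the standard form below (added only as a reading aid; e_z is the out-of-plane field component, k_0 the free-space wavevector and \epsilon_r(\mathbf r) the position-dependent relative permittivity):

\[
\nabla^{2} E_z(\mathbf r) \;+\; k_0^{2}\,\epsilon_r(\mathbf r)\,E_z(\mathbf r) \;=\; 0 .
\]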
the above equation can be recast as an integral equation in terms of the green s function for the homogeneous medium ( with uniform permittivity , ) , , where is the hankel function of second kind and zero order , and , as ; where is the incident field , and is referred to as the dielectric contrast .the above efie is a freedholm integral equation of the second kind .assume that the dielectric object , , is surrounded by a concentric observation circle , , of radius shown in figure [ fig1 ] . to compute the scattered field ( ) on , we discretize eq .( [ intg_eqn ] ) by dividing the region into equi - sized cells of uniform dielectric constant and into equispaced points . following well known techniques for solving such equations , the scattered electric field ( ) at the observation point on related to the total field ( ) at the n points of : where is the distance between the ( observation ) point on and ( source ) point in , is the radius of the circle with the same area as the cell of with dielectric contrast , and is the bessel function of the first kind and first order .note that without loss of generality , and does not depend on .we now approximate the scattered field in a far - field setting ( ) .the cosine rule gives us that , where is the angle between the position vectors corresponding to the source and observation points , and , respectively , and .thus , in the far field , at least to first order , can be approximated as in the amplitude term and as in the phase term , following which ( also using the large argument approximation of the hankel function , ) : \right\ } \times \nonumber \\ & { } & \qquad \exp{(jk_b r_n\cos\theta_{mn } ) } \nonumber \\ & = & h_0 \exp{(jk_b r_n\cos\theta_{mn})}\end{aligned}\ ] ] where is a constant independent of when the observation points are on a circle ( the terms in the curly bracket above ) .finally , the far field approximation for the scattered field can be written as ; we are now in a position to consider the -point discrete fourier transform ( dft ) , , of the scattered electric field , , as obtained in eq .( [ scat - ff ] ) .the fourier component is given by ; a simple reordering of the order of summations in ( [ dftscatf ] ) reveals that the inner summation is , in effect , the dft of a plane wave sampled on a circle of radius ; this is so because the inner summation , in which the source position , , is fixed , contains in the exponent .this can be expanded as , where is the angular position of the source point .since the measurement position , , goes around a circle of radius as goes from to , it is evident that evenly samples points spanning .the dft of the scattered field as derived in eq .( [ dftscatf_simple ] ) can be simplified using the jacobi - anger expansion , as follows ( see appendix [ planewavedft ] for details ) ; } j_{k - qm}(k_br_n).\end{aligned}\ ] ] a careful examination of the inner summand of eq .shows that for a fixed point in region ( i.e. 
fixed ) , the argument of the bessel function is constant ( equal to ) and only its order ( equal to ) changes with .the bessel function , , has the property that for ( see appendix [ bessel ] ) .further , since the maximum possible value of is ( for a dielectric bounded within a cylindrical region of radius ) , it naturally follows that for the fourier component , will be negligible .in other words , the dft , , is bandlimited to this value .it must be mentioned that this bandlimit corresponds to an _ effective _ bandwidth , as the fourier coefficients for are negligible , but not identically zero .we thus arrive at the same result as bucci et .al regarding the degrees of freedom of the scattered field in terms of an effective bandwidth .we note that our approach of discretizing the efie is similar to one previously proposed , wherein , starting from a series representation of the green s function , a truncated fourier series for the scattered electric field is obtained ; we essentially extend this idea and estimate the truncation number in terms of . since the internal field coefficients , , and the contrast , ,are upper bounded in magnitude , the fourier components in eq .( [ dftscatf_simple-2 ] ) that are zero , will continue to be zero , regardless of the particular values of the field and contrast . to see this quantitatively ,consider the fourier coefficient of the scattered field from eq .( [ dftscatf_simple-2 ] ) .we consider the first half of the total coefficients ; this is sufficient since the dft is symmetric ( see appendix [ planewavedft - symmetry ] ) : \ ] ] assuming the contrast to be bounded such that , and the field to be bounded such that , we apply the cauchy - schwartz inequality ( ) to the above relation to obtain thus , if the magnitude of the order of the bessel function , , is large enough such that , we can see that as long as are finite .in other words , the bandlimited nature of scattered field is independent of the object permittivity , and this bandlimit depends only on the size of the object relative to wavelength , .we use a two dimensional vector - element based finite element method ( fem ) to compute the scattered electromagnetic fields in two different configurations . in both configurations ,the scattering object is confined within a cylinder of radius , the fields are computed on a radius of , the computational domain is terminated by applying a radiation boundary condition at a radius , and the te - polarized incident field makes an angle of with the axis . for the numerical convergence of the fem solution , the domain discretization must be on the order of , which results in a mesh having approximately 90,000 elements . in the first configuration ,the cylinder has a uniform permittivity , , while in the second configuration , we allow the permittivity of the cylinder to be random such that for each element is a uniform random variable from 1 to 10 .the scattered fields in both cases are shown in figure [ fig - fields ] , and their corresponding spatial dfts in figure [ fig - fft ] .we find that regardless of the constitutive permittivity of the scattering object , the dft is bandlimited , and that the band limit matches very well with the analytical prediction of from section [ bandlimit - analytical ] . 
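the rapid fall-off of the bessel functions once the order exceeds the argument, which is what produces the effective band limit used above, is easy to check numerically. the sketch below is illustrative only; the wavelength and the radius a of the bounding circle are assumed values, not taken from the paper:

import numpy as np
from scipy.special import jv

wavelength = 1.0                  # assumed unit wavelength
k_b = 2.0 * np.pi / wavelength    # background wavevector
a = 3.0 * wavelength              # assumed radius of the circle bounding the scatterer

orders = np.arange(0, int(3 * k_b * a))
magnitudes = np.abs(jv(orders, k_b * a))

for p in orders[::5]:
    note = "  <-- order beyond k_b*a, negligible" if p > k_b * a else ""
    print("|J_%d(k_b a)| = %.3e%s" % (p, magnitudes[p], note))

running this shows the magnitudes staying of order unity up to an order of roughly k_b a and then collapsing by many orders of magnitude, which is precisely the behaviour the argument above relies on.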
here , , which gives ( see figure [ fig - fft ] ) .it is interesting to note that while the observation circle ( ) is not in the far - field of the scattering object ( ) , the predicted cut - off matches very well with the fem results ( which do not assume any far - field approximation ) .this is because the idea of an effective bandwidth as derived by bucci et al . does not require a far - field approximation , even though we assume it here to simplify the analysis . for a constant permittivity ( ) cylinder ( blue curve ) and for a cylinder with random permittivity ( red curve ) . in the latter ,the permittivity of each element in the cylinder is a uniform random variable such that ] given the measurements ] , where is the distance of the source from the origin and is the angle between the source position vector , and the incident wavevector , .we assume that the angular spacing between multiple incidence angles is uniform .the above matrix system has a solution , giving the field at the source point as ; {nk } \exp[-jk_b \rho_k\cos\theta_{ik}]\ ] ] noting the similarities between eq .( [ scat - ff ] ) and eq .( [ fwd_inc ] ) , it is at once clear that if the dft of the above expression is taken w.r.t . the incident field index , , the result would be a band limited expression , just as was shown with the dft of eq .( [ scat - ff ] ) w.r.t .the observation point index , in section [ dft - plane ] . extending the analogyfurther , it is seen that this band limit is given by the expression ; thus , only upto incidence angles are useful in imposing independent constraints on the unknown permittivity of the object . beyond this number, the field can be reconstructed using sampled values and a suitable interpolation scheme ; no new information can be gained .we note that this number depends only on the relative object size , and does not depend on the object permittivity .we also note that this result has previously been derived in the framework of functional analysis by bucci and isernia .the nyquist - shannon sampling theorem states that for a function that is bandlimited to a maximum frequency component , , the minimum sampling frequency required to reconstruct the function is given by . applying this theorem to the scattered electric field , which is known to be bandlimited , implies that for a fixed object size , , it is optimal to make equally spaced measurements of the scattered field . in an experimental setup it may be necessary to measure the fields scattered due to an object .it is common to have a dedicated antenna for each measurement ; we thus have a lower bound on the hardware complexity of the experiment .further , if it is desired to estimate the scattered field at more points than these , a suitable interpolation scheme can be applied to the sampled values of the field .a setup such as that described above surrounding an object with several antennae is fairly common in experiments which perform breast cancer detection using microwave imaging techniques .although our results apply to a two - dimensional geometry , the same approach can be easily applied to the three - dimensional problem . in the inverse scattering problem ,the measurements of the scattered fields are typically noise corrupted .also , it has been shown that the scattered field is bandlimited to .thus , a robust strategy for determining the unknown permittivity of an object would be to : ( i ) take the dft of the measurements ( i.e. 
obtain ] : thus by breaking up the total integral of eq .( [ besselintegral ] ) into such intervals , we arrive at the result that as grows beyond , monotonically approaches zero . by noting that this behaviour depends simply upon , and by inspection of figure [ phitau ]it can be said that for .the modulus on come from the observation that .consider a -polarized plane wave propagating in the plane , measured on a circular contour in the plane .let there be observation points on this circle of radius , giving an observed electric field vector , , as = \exp\left[-jk_0r_0\cos\left(\frac{2\pi m}{m}\right)\right],\ , m\in[0,m-1],\ ] ] for a plane wave travelling along the axis .the discrete fourier transform ( dft ) of this vector is given by = \overset{m-1}{\underset{m=0}{\sum}}\ , \exp[-j2\pi km / m ] x[m] ] , whose properties are now considered .we can simplify it s symmetric counterpart , ] , which leads to : = \overset{m-1}{\underset{m=0}{\sum}}\ , \exp[j2\pi km / m ] x[m - m] ] . observe that the transformation , where , leaves the above equation unchanged . the fourier component of from eq .( [ planewave ] ) , after applying eqs .( [ ja ] ) and ( [ delta ] ) , is & = & \overset{m-1}{\underset{m=0}{\sum } } \exp\left[\frac{-j2\pi mk}{m}\right ] \\\nonumber & { } & \times \overset{\infty}{\underset{p=-\infty}{\sum}}\,j^pj_p(-k_0r_0 ) \exp\left[\frac{j2\pi mp}{m}\right ] \\ & = & \overset{\infty}{\underset{p=-\infty}{\sum}}\,j^pj_p(-k_0r_0)\,m \delta[k-(p+qm ) ] \nonumber \\ & = & \overset{\infty}{\underset{q=-\infty}{\sum}}\,j^{k - qm } m j_{k - qm}(-k_0r_0)\end{aligned}\ ] ] in other words , the fourier component of comprises the order bessel function and it s orders , all of the same ( fixed ) argument . from the analysis of the properties of as a function of in appendix [ bessel ] , it is clear that the dft of the incident plane wave is band - limited .this is because for , , and therefore \sim 0 $ ] .thus the incident field on a contour of radius can be represented by coefficients in the dft basis .m. franceschetti , d. migliore , and p. minero , the capacity of wire- less networks : information - theoretic and physical limits , " _ ieee transactions on information theory _ , vol .55 , no . 8 , pp . 3413 - 3424 , 2009 .u. khankhoje , j. van zyl , and t. cwik , computation of radar scat- tering from heterogeneous rough soil using the finite - element method , " _ ieee transactions on geoscience and remote sensing _ , vol .51 , no . 6 , pp . 3461 - 3469 , june 2013 .m. klemm , i. j. craddock , j. a. leendertz , a. preece and r. benjamin , radar based breast cancer detection using a hemispherical antenna array experimental results , " _ ieee transactions on antennas and propagation _ , vol .57 , no . 6 , pp . 1692 - 1704 , 2009 .
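as a reading aid, the noise-robust recipe described in the discussion section above (sample the scattered field at roughly the nyquist rate of about 2 k_b a points, take the dft, discard the out-of-band coefficients, and transform back) can be sketched in a few lines. the function names and parameters below are assumptions made for illustration and are not part of the original paper:

import numpy as np

def minimum_samples(k_b, a):
    # nyquist-type estimate: the field is effectively band-limited to orders |k| <= k_b * a,
    # so on the order of 2 * k_b * a equispaced measurements on the circle suffice
    return int(np.ceil(2.0 * k_b * a))

def denoise_scattered_field(e_measured, k_b, a):
    # keep only the dft coefficients inside the effective band; for noisy measurements
    # this mostly removes noise, since the true field has negligible out-of-band content
    m = len(e_measured)
    spectrum = np.fft.fft(np.asarray(e_measured))
    orders = np.fft.fftfreq(m, d=1.0 / m)      # integer dft orders 0, 1, ..., -2, -1
    spectrum[np.abs(orders) > k_b * a] = 0.0
    return np.fft.ifft(spectrum)

in an experiment with one antenna per measurement point, minimum_samples gives a lower bound on the number of antennas needed; additional measurements add redundancy rather than new information.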
in this tutorial paper, we consider the problem of electromagnetic scattering by a bounded two-dimensional dielectric object, and discuss certain interesting properties of the scattered field. using the electric field integral equation, along with techniques from fourier theory and the properties of bessel functions, we show, analytically and numerically, that in the case of transverse electric polarization the scattered fields are spatially bandlimited. further, we derive an upper bound on the number of incidence angles that are useful as constraints in an inverse problem setting (determining the permittivity given measurements of the scattered field). we also show that the above results are independent of the dielectric properties of the scattering object and depend only on its geometry. though these results have previously been derived in the literature using the framework of functional analysis, our approach is conceptually far simpler. implications of these results for the inverse problem are also discussed. keywords: electromagnetic scattering by nonhomogeneous media, inverse problems, integral equations.
black holes are the most extreme objects known in the universe .our representations of physical laws reach their limits in them .the strange phenomena that occur around black holes put to the test our basic conceptions of space , time , determinism , irreversibility , information , and causality .it is then not surprising that the investigation of black holes has philosophical impact in areas as diverse as ontology , epistemology , and theory construction . in black holes , in a very definite sense, we can say that philosophy meets experiment .but , alas , philosophers have almost paid no attention to the problems raised by the existence of black holes in the real world ( for a notable and solitary exception see weingard 1979 ; a recent discussion of some ontological implications of black holes can be found in romero & prez 2014 ) .the purpose of this chapter is to palliate this omission and to provide a survey of some important philosophical issues related to black holes .i do not purport to deliver an exhaustive study ; such a task would demand a whole book devoted to the topic .rather , i would like to set path for future research , calling the attention to some specific problems . in the next sectioni introduce the concept of a black hole .i do this from a space - time point of view , without connection to newtonian analogies .black holes are not black stars ; they are fully relativistic objects and can be understood only from a relativistic perspective .hence , i start saying a few things about space - time and relativity . in the remaining sections of the chapter i present and discuss several philosophical issues raised by the existence and properties of black holes . in particular, i discuss what happens with determinism and predictability in black holes space - times , the implications of the existence of black holes for ontological views of time and the nature of reality , the role of black holes in the irreversibility we observe in the universe , issues related to information and whether it can be destroyed in black holes , the apparent breakdown of causality inside black holes , and , finally , the role played , if any , by black holes in the future of the universe .a black hole is a region of space - time , so i start introducing the concept of space - time ( minkowski 1908 ) . +* definition . *_ space - time is the emergent of the ontological composition of all events_. + events can be considered as primitives or can be derived from things as changes in their properties if things are taken as ontologically prior .both representations are equivalent since things can be construed as bundles of events ( romero 2013b ) .since composition is not a formal operation but an ontological one , space - time is neither a concept nor an abstraction , but an emergent entity . as any entity , space - time can be represented by a concept .the usual representation of space - time is given by a 4-dimensional real manifold equipped with a metric field : it is important to stress that space - time _ is not _ a manifold ( i.e. 
a mathematical construct ) but the `` totality '' of events .a specific model of space - time requires the specification of the source of the metric field .this is done through another field , called the `` energy - momentum '' tensor field .hence , a model of space - time is : the relation between these two tensor fields is given by field equations , which represent a basic physical law .the metric field specifies the geometry of space - time .the energy - momentum field represents the potential of change ( i.e. of event generation ) in space - time .all this can be cast into in the following axioms ( romero 2014b ) .+ the set is a differentiable , 4-dimensional , real pseudo - riemannian manifold .+ the metric structure of is given by a tensor field of rank 2 , , in such a way that the differential distance between two events is : the tangent space of at any point is minkowskian , i.e. its metric is given by a symmetric tensor of rank 2 and trace .+ the metric of is determined by a rank 2 tensor field through the following field equations : where is a second rank tensor whose components are functions of the second derivatives of the metric . both and are constants .+ the elements of represent physical events .+ space - time is represented by an ordered pair : there is a non - geometrical field represented by a 2-rank tensor field on the manifold e. + a specific model of space - time is given by : so far no mention has been made of the gravitational field .the sketched theory is purely ontological , and hence , can not be yet identified with general relativity . to formulate the field equations we introduce the einstein tensor : where isthe ricci tensor formed from second derivatives of the metric and is the ricci scalar .the geodesic equations for a test particle free in the gravitational field are : with an affine parameter and the affine connection , given by : the affine connection is not a tensor , but can be used to build a tensor that is directly associated with the curvature of space - time : the riemann tensor .the form of the riemann tensor for an affine - connected manifold can be obtained through a coordinate transformation that makes the affine connection to vanish everywhere , i.e. the coordinate system exists if for the affine connection . the left hand side of eq .( [ r ] ) is the riemann tensor : when the metric is flat , since its derivatives are zero . if the metric has positive curvature .sometimes it is said that the riemann tensor represents the gravitational field , since it only vanishes in the absence of fields . on the contrary, the affine connection can be set locally to zero by a transformation of coordinates .this fact , however , only reflects the equivalence principle : the gravitational field can be suppressed in any locally free falling system . in other words , the tangent space to the manifold that represents space - time is always minkowskian . to determine the mathematical object of the theory that represents the gravitational field we have to consider the weak field limit of eqs .( [ eq - einstein ] ) .when this is done we find that the gravitational potential is identified with the metric coefficient and the coupling constant is . if _the metric represents the gravitational potential _, then _ the affine connection represents the strength of the field itself_. this is similar to what happens in electrodynamics , where the 4-vector represents the electromagnetic potential and the tensor field represents the strength of the electromagnetic field . 
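for completeness, the field equations, the affine connection and the geodesic equation quoted earlier in this section read, in standard notation (this is the usual textbook form and is added only as a reading aid):

\[
G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2}\,R\,g_{\mu\nu} = \frac{8\pi G}{c^{4}}\,T_{\mu\nu},
\qquad
\Gamma^{\mu}_{\ \alpha\beta} = \tfrac{1}{2}\,g^{\mu\nu}\left(\partial_{\alpha}g_{\nu\beta} + \partial_{\beta}g_{\alpha\nu} - \partial_{\nu}g_{\alpha\beta}\right),
\qquad
\frac{d^{2}x^{\mu}}{d\lambda^{2}} + \Gamma^{\mu}_{\ \alpha\beta}\,\frac{dx^{\alpha}}{d\lambda}\,\frac{dx^{\beta}}{d\lambda} = 0 .
\]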
_ the riemann tensor , on the other hand , being formed by derivatives of the affine connection , represents the rate of change , both in space and time , of the strength of the gravitational field_. the source of the gravitational field in eqs .( [ eq - einstein ] ) , the tensor field , stands for the physical properties of material things .it represents the energy and momentum of all non - gravitational systems . in the case of a point mass and assuming spherical symmetry , the solution of eqs .( [ eq - einstein ] ) represents a schwarzschild black hole .the schwarzschild solution for a static mass can be written in spherical coordinates as : the metric given by eq .( [ schw ] ) has some interesting properties .let s assume that the mass is concentrated at .there seems to be two singularities at which the metric diverges : one at and the other at the length is known as the _ schwarzschild radius _ of the object of mass . usually , at normal densities , is well inside the outer radius of the physical system , and the solution does not apply to the interior but only to the exterior of the object . for a point mass ,the schwarzschild radius is in the vacuum region and the entire space - time has the structure given by ( [ schw ] ) .it is easy to see that strange things occur close to .for instance , for the proper time we get : or when both times agree , so is interpreted as the proper time measure from an infinite distance . as the system withproper time approaches to , tends to infinity according to eq .( [ time2 ] ) .the object never reaches the schwarzschild surface when seen by an infinitely distant observer .the closer the object is to the schwarzschild radius , the slower it moves for the external observer .a direct consequence of the difference introduced by gravity in the local time with respect to the time at infinity is that the radiation that escapes from a given will be redshifted when received by a distant and static observer . since the frequency ( and hence the energy ) of the photondepend on the time interval , we can write , from eq .( [ time2 ] ) : since the redshift is : then and we see that when the redshift becomes infinite .this means that a photon needs infinite energy to escape from inside the region determined by .events that occur at are disconnected from the rest of the universe .the surface determined by is an _event horizon_. whatever crosses the event horizon will never return .this is the origin of the expression `` black hole '' , introduced by john a. wheeler in the mid 1960s .the black hole is the region of space - time inside the event horizon . according to eq .( [ schw ] ) , there is a divergence at .the metric coefficients , however , can be made regular by a change of coordinates .for instance we can consider eddington - finkelstein coordinates .let us define a new radial coordinate such that radial null rays satisfy .( [ schw ] ) we can show that : then , we introduce : the new coordinate can be used as a time coordinate replacing in eq .( [ schw ] ) .this yields : or where is the schwarzschild radius where the event horizon is located ( units ).,width=264 ] notice that in eq .( [ ef ] ) the metric is non - singular at .the only real singularity is at , since there the riemann tensor diverges . in order to plot the space - time in a -plane, we can introduce a new time coordinate . from the metric ( [ ef ] ) or from fig . 
[ falling ]we see that the line , , and constant is a null ray , and hence , the surface at is a null surface .this null surface is an event horizon because inside all cones have in their future ( see figure [ falling ] ) .everything that crosses the event horizon will end at the singularity .this is the inescapable fate for everything inside a schwarzschild black hole .there is no way to avoid it : in the future of every event inside the event horizon is the singularity .however , that no signal coming from the center of the black hole can reach a falling observer , since the singularity is always in the future , and a signal can arrive only from the past .a falling observer will never see the singularity .many coordinates systems can be used to describe black holes . for this reason , it is convenient to provide a definition of a black hole that is independent of the choice of coordinates .first , i will introduce some preliminary useful definitions ( e.g. hawking & ellis 1973 , wald 1984 ) . +* definition . * _ a causal curve in a space - time is a curve that is non space - like , that is , piecewise either time - like or null ( light - like ) . _+ we say that a given space - time is _ time - orientable _ if we can define over a smooth non - vanishing time - like vector field . +* definition . * _ if is a time - orientable space - time , then , the causal future of , denoted , is defined by : _ similarly , + * definition . * _ if is a time - orientable space - time , then , the causal past of , denoted , is defined by _ : the causal future and past of any set are given by : and , a set is said _ achronal _ if no two points of are time - like related .a cauchy surface is an achronal surface such that every non space - like curve in crosses it once , and only once , .a space - time is _ globally hyperbolic _ iff it admits a space - like hypersurface which is a cauchy surface for .causal relations are invariant under conformal transformations of the metric . in this way ,the space - times and , where , with a non - zero function , have the same causal structure .let us now consider a space - time where all null geodesics that start in a region end at .then , such a space - time , , is said to contain a _ black hole _if _ is not _ contained in . in other words , there is a region from where no null geodesic can reach the _ asymptotic flat _ future space - time , or , equivalently , there is a region of that is causally disconnected from the global future . the _ black hole region _ , , of such space - time is ] is invariant under velocity reversal , it follows that if $ ] decreases for the first solution , it will increase for the second .accordingly , the reversibility objection is that the h - theorem can not be a general theorem for all mechanical evolutions of the gas .more generally , the problem goes far beyond classical mechanics and encompasses our whole representation of the physical world .this is because _ all formal representations of all fundamental laws of physics are invariant under the operation of time reversal_. nonetheless , the evolution of all physical processes in the universe is irreversible .if we accept , as mentioned , that the origin of the irreversibility is not in the laws but in the initial conditions of the laws , two additional problems emerge : 1 ) what were exactly these initial conditions ? , and 2 ) how the initial conditions , of global nature , can enforce , at any time and any place , the observed local irreversibility ? 
the first problem is , in turn , related to the following one , once the cosmological setting is taken into account : in the past , the universe was hotter and at some point matter and radiation were in thermal equilibrium ; how is this compatible with the fact that entropy has ever been increasing according to the so - called past hypothesis , i.e. entropy was at a minimum at some past time and has been increasing ever since ?the standard answer to this question invokes the expansion of the universe : as the universe expanded , the maximum possible entropy increased with the size of the universe , but the actual entropy was left well behind the permitted maximum .the source of irreversibility in the second law of thermodynamics is the trend of the entropy to reach the permitted maximum . according to this view, the universe actually began in a state of maximum entropy , but due to the expansion , it was still possible for the entropy to continue growing .the main problem with this line of thought is that is not true that the universe was in a state of maximum disorder at some early time .in fact , although locally matter and radiation might have been in thermal equilibrium , this situation occurred in a regime were the global effects of gravity can not be ignored ( penrose 1979 ) . since gravity is an attractive force , and the universe was extremely smooth ( i.e structureless ) in early times , as indicated , for instance , by the measurements of the cosmic microwave background radiation, the gravitational field should have been quite far from equilibrium , with very low global entropy ( penrose 1979 ) .it seems , then , that the early universe was _ globally _ out of the equilibrium , being the total entropy dominated by the entropy of the gravitational field .if we denote by a scalar formed out by contractions of the weyl tensor , the initial condition is required if entropy is still growing today .the answer to the second question posed above , namely , ` how the second law is locally enforced by the initial conditions , which are of global nature ? ' , seems to require a coupling between gravitation ( of global nature ) and electrodynamics ( of local action ) . in what followsi suggest that black holes can provide the key for this coupling ( for the role of cosmological horizons in this problem see romero & prez 2011 ) .the electromagnetic radiation field can be described in the terms of the 4-potential , which in the lorentz gauge satisfies : with and the 4-current .the solution is a functional of the sources .the retarded and advanced solutions are : the two functionals of are related to one another by a time reversal transformation . 
the solution ( [ ret ] )is contributed by sources in the past of the space - time point and the solution ( [ adv ] ) by sources in the future of that point .the integrals in the second term on the right side are the surface integrals that give the contributions from i ) sources outside of and ii ) source - free radiation .if is the causal past and future , the surface integrals do not contribute .the linear combinations of electromagnetic solutions are also solutions , since the equations are linear and the principle of superposition holds .it is usual to consider only the retarded potential as physical meaningful in order to estimate the electromagnetic field at : .however , there seems to be no compelling reason for such a choice .we can adopt , for instance ( in what follows i use a simplified notation ) , if the space - time is curved ( ) , the null cones that determine the causal structure will not be symmetric around the point . in particular , the presence of event horizons can make very different the contributions from both integrals . hawking s black hole area theorem ( hawking 1971 ) ensures that in a time - orientable space - time such that for all null vectors holds , the area of the event horizons of black holes either remains the same or increases with cosmic time . more precisely : + * theorem . * _ let be a time - orientable space - time such that for all null .let and be space - like cauchy surfaces for the globally hyperbolic region of the space - time with , and be , , where denotes an event horizon. then . _+ the fact that astrophysical black holes are always immersed in the cosmic background radiation , whose temperature is much higher than the horizon temperature , implies that they always accrete and then , by the first law of black holes ( bardeen et al .1973 ) , .the total area of black holes increases with cosmic time .the accretion should include not only photons but also charged particles .this means that the total number of charges in the past of any point will be different from their number in the corresponding future .this creates a local asymmetry that can be related to the second law .we can introduce a vector field given by : \ ; dv \neq 0.\ ] ] if , with there is a preferred direction for the poynting flux in space - time .the poynting flux is given by : where and are the electric and magnetic fields and is the electromagnetic energy - momentum tensor . in a black hole interiorthe direction of the poynting flux is toward the singularity . in an expanding , accelerating universe , it is in the global future direction .we see , then , that a time - like vector field , in a general space - time , can be _there is a global to local relation given by the poynting flux as determined by the curvature of space - time that indicates the direction along which events occur .physical processes , inside a black hole , have a different orientation from outside , and the causal structure of the world is determined by the dynamics of space - time and the initial conditions .macroscopic irreversibility , where is the stefan - boltzmann constant . ] and time anisotropy emerge from fundamental reversible laws .there is an important corollary to these conclusions .local observations about the direction of events can provide information about global features of space - time and the existence of horizons and singularities .presentism is a metaphysical thesis about what there is .it can be expressed as ( e.g. 
crisp 2003 ) : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ presentism_. it is always the case that , for every , is present . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the quantification in this scheme is unrestricted , it ranges over all existents . in order to render this definition meaningful , the presentist must provide a specification of the term ` present ' .crisp , in the cited paper , offers the following definition : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ present_. the mereological sum of all objects with null temporal distance ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the notion of temporal distance is defined loosely , but in such a way that it accords with common sense and the physical time interval between two events . from these definitionsit follows that the present is a thing , not a concept .the present is the ontological aggregation of all present things .hence , to say that ` is present ' , actually means is part of the present " .the opposite thesis of presentism is eternalism , also called four - dimensionalism .eternalists subscribe the existence of past and future objects .the temporal distance between these objects is non - zero .the name four - dimensionalism comes form the fact that in the eternalist view , objects are extended through time , and then they have a 4-dimensional volume , with 3 spatial dimensions and 1 time dimension .there are different versions of eternalism .the reader is referred to rea ( 2003 ) and references therein for a discussion of eternalism .i maintain that presentism is incompatible with the existence of black holes .let us see briefly the argument , considering , for simplicity , schwarzschild black holes ( for details , see romero & prez 2014 ) .the light cones in schwarzschild space - time can be calculated from the metric ( [ schw ] ) imposing the null condition . then : where i made .notice that when , , as in minkowski space - time .when , , and light moves along the surface .the horizon is therefore a _null surface_. for , the sign of the derivative is inverted .the inward region of is time - like for any physical system that has crossed the boundary surface .as we approach to the horizon from the flat space - time region , the light cones become thinner and thinner indicating the restriction to the possible trajectories imposed by the increasing curvature . on the inner side of the horizon the local direction of timeis ` inverted ' in the sense that all null or time - like trajectories have in their future the singularity at the center of the black hole .there is a very interesting consequence of all this : an observer on the horizon will have her present _ along _ the horizon .all events occurring on the horizon are simultaneous .the temporal distance from the observer at any point on the horizon to any event occurring on the horizon is zero ( the observer is on a null surface so the proper time interval is necessarily zero ) . 
if the black hole has existed during the whole history of the universe , all events on the horizon during such history ( for example the emission of photons on the horizon by infalling matter ) are _ present _ to any observer crossing the horizon .these events are certainly not all present to an observer outside the black hole .if the outer observer is a presentist , she surely will think that some of these events do not exist because they occurred or will occur either in the remote past or the remote future .but if we accept that what there is can not depend on the reference frame adopted for the description of the events , it seems we have an argument against presentism here . before going further into the ontological implications ,let me clarify a few physical points .i remark that the horizon 1 ) does not depend on the choice of the coordinate system adopted to describe the black hole , 2 ) the horizon is an absolute null surface , in the sense that this property is intrinsic and not frame - dependent , and 3 ) it is a non - singular surface ( or ` well - behaved ' , i.e. space - time is regular on the horizon ) . in a worlddescribed by special relativity , the only way to cross a null surface is by moving faster than the speed of light . as we have seen, this is not the case in a universe with black holes .we can then argue against presentism along the following lines .+ argument : * : there are black holes in the universe .* : black holes are correctly described by general relativity . * : black holes have closed null surfaces ( horizons ) . * therefore , there are closed null surfaces in the universe .argument : * : all events on a closed null surface are simultaneous with any event on the same surface .* : all events on the closed null surface are simultaneous with the birth of the black hole .* : some distant events are simultaneous with the birth of the black hole , but not with other events related to the black hole . *therefore , there are events that are simultaneous in one reference frame , and not in another .simultaneity is frame - dependent . since what there existcan not depend on the reference frame we use to describe it , we conclude that there are non - simultaneous events .therefore , presentism is false .let us see which assumptions are open to criticism by the presentist .an irreducible presentist might plainly reject .although there is significant astronomical evidence supporting the existence of black holes ( e.g. camenzind 2007 , paredes 2009 , romero and vila 2014 ) , the very elusive nature of these objects still leaves room for some speculations like gravastars and other exotic compact objects .the price of rejecting , however , is very high : black holes are now a basic component of most mechanisms that explain extreme events in astrophysics , from quasars to the so - called gamma - ray bursts , from the formation of galaxies to the production of jets in binary systems .the presentist rejecting black holes should reformulate the bulk of contemporary high - energy astrophysics in terms of new mechanisms . in any case, is susceptible of empirical validation through direct imagining of the super - massive black hole `` shadow '' in the center of our galaxy by sub - mm interferometric techniques in the next decade ( e.g. falcke et al . 2011 ) . 
in the meanwhile ,the cumulative case for the existence of black holes is overwhelming , and very few scientists would reject them on the basis of metaphysical considerations only .the presentist might , instead , reject .after all , we _ know _ that general relativity fails at the planck scale .why should it provide a correct description of black holes ?the reason is that the horizon of a black hole is quite far from the region where the theory fails ( the singularity ) .the distance , in the case of a schwarzschild black hole , is . for a black hole of 10 solar masses , as the one suspected to form part of the binary system cygnus x-1 , this means km . and for the black hole in the center of the galaxy , about 12 million km .any theory of gravitation must yield the same results as general relativity at such distances .so , even if general relativity is not the right theory for the classical gravitational field , the correct theory should predict the formation of black holes under the same conditions .there is not much to do with , since it follows from the condition that defines the null surface : , where is the proper temporal separation . ] ; similarly only specifies one of the events on the null surface . a presentist might refuse to identify ` the present ' with a null surface .after all , in minkowskian space - time or even in a globally time - orientable pseudo - riemannian space - time the present is usually taken as the hyperplane perpendicular to the local time .but in space - times with black holes , the horizon is not only a null surface ; it is also a surface locally normal to the time direction . in a minkowskian space - time the plane of the present is not coincident with a null surface .however , close to the event horizon of a black hole , things change , as indicated by eq .( [ cones - schw ] ) . as we approach the horizon , the null surface matches the plane of the present . on the horizon ,both surfaces are exactly coincident .a presentist rejecting the identification of the present with a _ closed _ null surface on an event horizon should abandon what is perhaps her most cherished belief : the identification of ` the present ' with hypersurfaces that are normal to a local time - like direction .the result mentioned above is not a consequence of any particular choice of coordinates but an intrinsic property of a black hole horizon .this statement can be easily proved .the symmetries of schwarzschild space - time imply the existence of a preferred radial function , , which serves as an affine parameter along both null directions .the gradient of this function , satisfies ( ) : thus , is space - like for , null for , and time - like for . the 3-surface given by is the horizon of the black hole in schwarzschild space - time . 
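the horizon scales quoted above are easy to reproduce. the following minimal sketch uses rounded physical constants and an assumed mass of about four million solar masses for the black hole at the galactic centre (the value commonly adopted in the literature); it is included only as an illustration:

# schwarzschild radius r_s = 2 G M / c^2 for the two cases mentioned in the text
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

def schwarzschild_radius(mass_kg):
    return 2.0 * G * mass_kg / c**2

print("10 solar masses : r_s ~ %.1f km" % (schwarzschild_radius(10 * M_sun) / 1e3))
print("galactic centre : r_s ~ %.1f million km" % (schwarzschild_radius(4e6 * M_sun) / 1e9))

this gives about 30 km for the stellar-mass case and about 12 million km for the galactic-centre black hole, in agreement with the figures quoted above.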
from eq .( [ ra ] ) it follows that over , and hence is a null surface[multiblock footnote omitted ] .premise , perhaps , looks more promising for a last line of presentist defence .it might be argued that events on the horizon are not simultaneous with any event in the external universe .they are , in a very precise sense , cut off from the universe , and hence can not be simultaneous with any distant event .let us work out a counterexample .the so - called long gamma - ray bursts are thought to be the result of the implosion of a very massive and rapidly rotating star .the core of the star becomes a black hole , which accretes material from the remaining stellar crust .this produces a growth of the black hole mass and the ejection of matter from the magnetised central region in the form of relativistic jets ( e.g. woosely 1993 ) .approximately , one of these events occur in the universe per day .they are detected by satellites like _ swift _( e.g. piran and fan 2007 ) , with durations of a few tens of seconds .this is the time that takes for the black hole to swallow the collapsing star .let us consider a gamma - ray burst of , say , 10 seconds . before these 10 seconds , the black hole did not exist for a distant observer . afterwards, there is a black hole in the universe that will last more than the life span of any human observer .let us now consider an observer collapsing with the star . at some instant she will cross the null surface of the horizon . this will occur within the 10 seconds that the collapse lasts for .but for all photons that cross the horizon are simultaneous , including those that left long after the 10 seconds of the event and crossed the horizon after traveling a long way . for instance, photons leaving the planet of one million years after the gamma - ray burst , might cross the horizon , and then can interact with .so , the formation of the black hole is simultaneous with events in and , but these very same events of are simultaneous with events that are in the distant future of .the reader used to work with schwarzschild coordinates perhaps will object that never reaches the horizon , since the approaching process takes an infinite time in a distant reference frame .this is , however , an effect of the choice of the coordinate system and the test - particle approximation ( see , for instance , hoyng 2006 , p.116 ) . if the process is represented in eddington - finkelstein coordinates , it takes a finite time for the whole star to disappear , as shown by the fact that the gamma - ray burst are quite short events .accretion / ejection processes , well - documented in active galactic nuclei and microquasars ( e.g. mirabel et al .1998 ) also show that the time taken to reach the horizon is finite in the asymptotically flat region of space - time .my conclusion is that black holes can be used to show that presentism provides a defective picture of the ontological substratum of the world .black holes are often invoked in philosophical ( and even physical ) discussions about production and destruction of ` information ' .this mostly occurs in relation to the possibility hypercomputing and the application of quantum field theory to the near horizon region .i shall review both topics here .the expression ` hypercomputing ' refers to the actual performance of an infinite number of operations in a finite time with the aim of calculating beyond the turing barrier ( turing , 1936 . 
for a definition of a turing machine see hopcrof & ullman 1979 ) .it has been suggested that such a hypercomputation can be performed in a kerr space - time ( nmeti & david 2006 , nmeti & handrka 2006 ) .the kerr space - time belongs to the class of the so - called malament - hogarth ( m - h ) space - times .these are defined as follows ( hogarth 1994 ) : + * definition . *_ is an m - h space - time if there is a future - directed time - like half - curve and a point such that and . _+ the curve represents the world - line of some physical system . because has infinite proper time , it may complete an infinite number of tasks .but , at every point in , it is possible to send a signal to the point .this is because there always exists a curve with future endpoint which has finite proper time .we can think of as the `` sender '' and as the `` receiver '' of a signal . in this way , the receiver may obtain knowledge of the result of an infinite number of tasks in a finite time . in a kerr space - timethis scheme can be arranged as follows .the `` sender '' is a spacecraft orbiting the kerr black hole with a computer onboard .the `` receiver '' is a capsule ejected by the orbiter that falls into the black hole .as the capsule approaches the inner horizon it intersects more and more signals from the orbiter , which emits periodically results of the computer calculations into the black hole . by the time the capsule crosses the inner horizon it has received all signals emitted by the computer in an infinite time ( assuming that both the black hole and the orbiter can exist forever )this would allow the astronauts in the capsule to get answers to questions that require beyond - turing computation !( nmeti & david 2006 ) .the whole situation is depicted in figure [ c - p2 ] .remains in the exterior space - time for an infinite amount of time , whereas falls into the black hole . in the time it takes the latter to reach the inner horizon ,the former arrives to the conformal infinity .the lines that connect both trajectories represent signals sent from to .,width=264 ] there are many reasons to think that the described situation is physically impossible .i shall mention the following ones : 1 ) the required inner black hole structure does not correspond to an astrophysical black hole generated by gravitational collapse . 
in a real black hole the cauchy horizon is expected to collapse into a ( probably null ) singularity due to the backscattered gravitational wave tails that enter the black hole and are blueshifted at the cauchy horizon ( see next section and brady 1999 ) . the instability of the cauchy horizon seems to be a quite general feature of any realistic black hole interior model . 2 ) the black hole is not expected to exist for an infinite duration : it should evaporate through hawking radiation over a very long ( but always finite ) time . 3 ) the performance of infinite operations would require an infinite amount of energy ( bunge 1977 , romero 2014 ) . even if the universe were infinite , a finite spacecraft can not manipulate infinite amounts of energy . 4 ) if signals are periodically sent to the receiver , the blueshifted electromagnetic radiation would burn the capsule by the time it crosses the cauchy horizon . németi & dávid ( 2006 ) argue that this might be circumvented by sending just one signal with the final result . this suggestion faces the problems of the actual infinite : for any moment there will always be a further moment ; when , then , would the spaceship send this signal ? 5 ) the universe seems to be entering into a de sitter phase , so particle horizons will appear and block part of the accessible space - time to the spacecraft , limiting its resources . i think that the cumulative argument is strong enough to support a _ hypercomputing avoidance conjecture _ : the laws of physics are such that no actual hypercomputation can be performed . i turn now to another issue related to black holes and information : the destruction of information by black holes . this seems to be a topic of high concern for quantum field theorists , to the point that the presumed destruction of information in a black hole is called the `` black hole information paradox '' . i maintain that such a paradox does not exist : black holes can not destroy any information . the reason is that information is not a property of physical systems . it is not like the electric charge , mass , or angular momentum . information is an attribute of _ languages _ , and languages are constructs , i.e. elaborated fictions . to say that black holes can destroy information is like saying that they can destroy syntax . let us review the situation in a bit more detail . the application of quantum field theory to the near horizon region of a black hole results in the prediction of thermal radiation ( hawking 1974 ) . a temperature , then , can be associated with the horizon : $ t = \hbar c^{3 } / ( 8 \pi g k_{\rm b } m ) $ ( [ t ] ) . we can write the entropy of the black hole as : $ s_{\rm bh } = ( k_{\rm b } c^{3 } / 4 g \hbar ) \ , a $ , where $ a $ is the area of the event horizon . the area of a schwarzschild black hole is : $ a_{\rm s } = 16 \pi ( g m / c^{2 } )^{2 } $ ( [ a_s ] ) . in the case of a kerr - newman black hole , the area is : $ a_{\rm kn } = 4 \pi [ ( g m / c^{2 } + \sqrt { ( g m / c^{2 } )^{2 } - a^{2 } - r_{q}^{2 } } )^{2 } + a^{2 } ] $ ( [ a_kn ] ) , with $ a = j / ( m c ) $ and $ r_{q}^{2 } = g q^{2 } / ( 4 \pi \epsilon_{0 } c^{4 } ) $ . notice that expression ( [ a_kn ] ) reduces to ( [ a_s ] ) for $ a = q = 0 $ . the formation of a black hole implies a huge increase of entropy . just to compare , a star has an entropy many orders of magnitude lower than the corresponding black hole of the same mass . this tremendous increase of entropy is related to the loss of all the structure of the original system ( e.g. a star ) once the black hole is formed . the analogy between area and entropy allows one to state a set of laws for black hole thermodynamics ( bardeen et al . 1973 ) : * first law ( energy conservation ) : $ dm = t \ , ds + \omega \ , dj + \phi \ , dq + \delta m $ . here , $ \omega $ is the angular velocity , $ j $ the angular momentum , $ q $ the electric charge , $ \phi $ the electrostatic potential , and $ \delta m $ is the contribution to the change in the black hole mass due to the change in the external stationary matter distribution .
* second law ( entropy never decreases ) : in all physical processes involving black holes the total surface area of all the participating black holes can never decrease . * third law ( nernst s law ) : the temperature ( surface gravity ) of a black hole can not be zero . since $ t \propto \kappa $ , with $ \kappa = 0 $ for extremal charged and extremal kerr black holes , these are thought to be limit cases that can not be reached in nature . * zeroth law ( thermal equilibrium ) : the surface gravity ( temperature ) is constant over the event horizon of a stationary axially symmetric black hole . if a temperature can be associated with black holes , then they should radiate as any other body . the luminosity of a schwarzschild black hole is : $ l = a_{\rm s } \sigma t^{4 } $ . here , $ \sigma $ is the stefan - boltzmann constant . this expression can be written as : $ l = \hbar c^{6 } / ( 15360 \ , \pi g^{2 } m^{2 } ) $ . the lifetime of a black hole is then : $ \tau \simeq 5120 \ , \pi g^{2 } m^{3 } / ( \hbar c^{4 } ) $ ( [ age ] ) . notice that the black hole heats up as it radiates ! this occurs because when the hole radiates , its mass decreases and then according to eq . ( [ t ] ) the temperature must rise . the black hole then will lose energy and its area will decrease slowly , violating the second law of thermodynamics . however , there is no violation if we consider a _ generalised second law _ , which always holds : _ in any process , the total generalised entropy never decreases _ ( bekenstein 1973 ) . unfortunately , many physicists think that entropy and information are the same thing . this confusion seems to come from j. von neumann , who advised , not without some sarcasm , claude shannon to adopt the expression ` entropy ' to name the information characterised in the mathematical theory of communications developed by shannon and weaver ( 1949 ) : `` you should call it entropy , for two reasons . in the first place your uncertainty function has been used in statistical mechanics under that name , so it already has a name . in the second place , and more important , nobody knows what entropy really is , so in a debate you will always have the advantage . '' ( quoted by floridi 2010 ) . shannon s information ` entropy ' , although formally defined by the same expression , is a much more general concept than statistical thermodynamic entropy . information ` entropy ' is present whenever there are unknown quantities that can be described only by a probability distribution .
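as a rough numerical illustration of the scales implied by the expressions reconstructed above ( the hawking temperature , luminosity , and evaporation time ) , the following short python sketch evaluates them for a solar - mass black hole . the constant values and function names are ours , not part of the original text .

import math

hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m / s
G = 6.674e-11            # m^3 kg^-1 s^-2
k_B = 1.380649e-23       # J / K
M_sun = 1.989e30         # kg
year = 3.156e7           # s

def hawking_temperature(M):
    # eq. ( [ t ] ) : t = hbar c^3 / ( 8 pi g k_b m )
    return hbar * c**3 / (8.0 * math.pi * G * k_B * M)

def hawking_luminosity(M):
    # blackbody estimate : l = hbar c^6 / ( 15360 pi g^2 m^2 )
    return hbar * c**6 / (15360.0 * math.pi * G**2 * M**2)

def evaporation_time(M):
    # eq. ( [ age ] ) : tau = 5120 pi g^2 m^3 / ( hbar c^4 )
    return 5120.0 * math.pi * G**2 * M**3 / (hbar * c**4)

print(hawking_temperature(M_sun))      # ~ 6e-8 K , several orders of magnitude below the cmb temperature
print(evaporation_time(M_sun) / year)  # ~ 2e67 yr , enormously longer than the age of the universe

for a solar - mass hole the temperature is far below that of the cosmic microwave background and the evaporation time dwarfs the age of the universe , which is the point made later in this chapter about the fate of astrophysical black holes .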
when some physicists write about a ` principle of information conservation ' ( e.g. susskind & lindesay 2010 ) , what they really mean is that the entropy of an isolated system in equilibrium should not increase , since it already is at its maximum value .when a black hole accretes matter , however , the entropy increases ( they say that `` information is destroyed '' ) .even if the black hole finally radiates away the whole mass absorbed , the radiation will be thermal , so the entropy of matter will continue to increase . as pointed out by penrose, these considerations do not take into account the entropy of the gravitational field .the state of maximum entropy of this field is gravitational collapse ( penrose 2010 ) .as the black hole evaporates , the entropy of gravitation decreases .eventually , after the black hole complete evaporation , radiation will be in thermal equilibrium and gravity in a maximally ordered state .after a huge amount of time , the universe might return to a state of minimum overall entropy .black holes , in this sense , might act as some ` entropy regeneration engines ' , restoring the initial conditions of the universe .there is yet another sense of the so - called black hole information paradox , related to the breakdown of predictability of quantum mechanics in presence of black holes .the paradox here appears because of a confusion between ontological and epistemic determinism ( see sect .[ sect3 ] above ) .a fundamental postulate of quantum mechanics is that complete description of a system is given by its wave function up to when the system interacts .the evolution of the wave function is determined by a unitary operator , and unitarity implies epistemic determinism : initial and boundary conditions allow to solve the dynamic equation of the system and the solution is unique .if a system is entangled and one component cross the event horizon , measurements of the second component and knowledge of the initial state will , however , not allow to know the state of the component fallen into the black hole .epistemic determinism fails for quantum mechanics in presence of black holes .i confess not to see a problem here , since quantum interactions are by themselves already non - unitary .ontic determinism , the kind that counts , is not in peril here , and epistemic determinism was never part of a full theory of quantum mechanics .we have seen that black hole space - times are singular , at least in standard general relativity . moreover ,singularity theorems formulated by penrose ( 1965 ) and hawking & penrose ( 1970 ) show that this is an essential feature of black holes .nevertheless , essential or true singularities should not be interpreted as representations of physical objects of infinite density , infinite pressure , etc . since the singularities do not belong to the manifold that represents space - time in general relativity , they simply can not be described or represented in the framework of such a theory .general relativity is incomplete in the sense that it can not provide a full description of the gravitational behaviour of any physical system .true singularities are not within the range of values of the bound variables of the theory : they do not belong to the ontology of a world that can be described with 4-dimensional differential manifolds .let us see this in more detail ( for further discussions see earman 1995 ) .a space - time model is said to be singular if the manifold is _incomplete_. 
a manifold is incomplete if it contains at least one _ inextendible _ curve .a curve is inextendible if there is no point in such that as , i.e. has no endpoint in . a given space - time model has an _ extension _ if there is an isometric embedding , where is another space - time model and is an application onto a proper subset of .a _ singular _ space - time model contains a curve that is inextendible in the sense given above .singular space - times are said to contain singularities , but this is an abuse of language : singularities are not ` things ' in space - time , but a pathological feature of some solutions of the fundamental equations of the theory .singularity theorems can be proved from pure geometrical properties of the space - time model ( clarke 1993 ) .the most important of these theorems is due to hawking and penrose ( 1970 ) : + * theorem .* let be a time - oriented space - time satisfying the following conditions : 1 . for any non space - like is the ricci tensor obtained by contraction of the curvature tensor of the manifold . ] .2 . time - like and null generic conditions are fulfilled .3 . there are no closed time - like curves .at least one of the following conditions holds * \a .there exists a compact achronal set without edge . *there exists a trapped surface .there is a such that the expansion of the future ( or past ) directed null geodesics through becomes negative along each of the geodesics .then , contains at least one incomplete time - like or null geodesic .+ if the theorem has to be applied to the physical world , the hypothesis must be supported by empirical evidence .condition 1 will be satisfied if the energy - momentum satisfies the so - called _ strong energy condition _ : , for any time - like vector .if the energy - momentum is diagonal , the strong energy condition can be written as and , with the energy density and the pressure .condition 2 requires that any time - like or null geodesic experiences a tidal force at some point in its history .condition 4a requires that , at least at one time , the universe is closed and the compact slice that corresponds to such a time is not intersected more than once by a future directed time - like curve .the trapped surfaces mentioned in 4b refer to surfaces inside the horizons , from where congruences focus all light rays on the singularity .condition 4c requires that the universe is collapsing in the past or the future .i insist , the theorem is purely geometric , no physical law is invoked .theorems of this type are a consequence of the gravitational focusing of congruences .singularity theorems are not theorems that imply physical existence , under some conditions , of space - time singularities .material existence can not be formally implied .existence theorems imply that under certain assumptions there are functions that satisfy a given equation , or that some concepts can be formed in accordance with some explicit syntactic rules .theorems of this kind state the possibilities and limits of some formal system or language .the conclusion of the theorems , although not obvious in many occasions , are always a necessary consequence of the assumptions made . 
in the case of singularity theorems of classical field theories like general relativity , what is implied is that under some assumptions the solutions of the equations of the theory are defective beyond repair . the correct interpretation of these theorems is that they point out the _ incompleteness _ of the theory : there are statements that can not be made within the theory . in this sense ( and only in this sense ) , the theorems are like gödel s famous theorems of mathematical logic . to interpret the singularity theorems as theorems about the existence of certain space - time models is wrong . using elementary second order logic it is trivial to show that there can not be non - predicable objects ( singularities ) in the theory ( romero 2013b ) . if there were a non - predicable object in the theory , then $ \exists x \ , \forall p \ , \neg p x $ ( [ p ] ) , where the quantification over properties $ p $ is unrestricted . the existential quantification , on the other hand , means that $ \exists x \ , ( x = x ) $ . let us call the property ` being identical to itself ' $ p_{1 } $ . then , formula ( [ p ] ) reads : $ \neg p_{1 } x $ , i.e. $ \neg ( x = x ) $ , which is a contradiction , i.e. it is false for any value of $ x $ . i conclude that there are no singularities nor singular space - times . there is just a theory with a restricted range of applicability . the reification of singularities can lead one to accept an incredible ontology . we read , for instance , in a book on foundations of general relativity : `` [ ... ] a physically realistic space - time _ must _ contain such singularities . [ ... ] there exist causal , inextendible geodesics which are incomplete . [ ... ] if a geodesic can not be extended to a complete one ( i.e. if its future endless continuation or its past endless continuation is of finite length ) , then either the particle suddenly ceases to exist or the particle suddenly springs into existence . in either case this can only happen if space - time admits a ` singularity ' at the end ( or the beginning ) of the history of the particle . '' kriele ( 1999 ) , p. 383 . this statement and many similar ones found in the literature commit the elementary fallacy of confusing a model with the object being modelled . space - time does not contain singularities . some of our space - time models are singular . it is this incomplete character of the theory that prompts us to go beyond general relativity in order to get a more comprehensive view of the gravitational phenomena . as it was very clear to einstein , his general theory breaks down when the gravitational field of quantum objects starts to affect space - time . another interesting feature of black hole interiors is the existence , according to the unperturbed theory , of a region with closed time - like curves ( ctcs ) in kerr and kerr - newman black holes . this is the region interior to the second horizon ; chronology violation is generated by the tilt of the light cones around the rotation axis in this part of space - time ( e.g. andréka , németi , & wüthrich 2008 ) . the interior event horizon is also a cauchy horizon : a null hypersurface which is the boundary of the future domain of dependence for cauchy data of the collapse problem . it is impossible to predict the evolution of any system inside the cauchy horizon ; such horizons are an indication of the breakdown of predictability in the theory . these horizons , however , exhibit highly pathological behaviour ; small time - dependent perturbations originating outside the black hole undergo an infinite gravitational blueshift as they evolve towards the horizon . this blueshift of infalling radiation gave the first indications that these solutions may not describe the generic internal structure of real black holes . simpson & penrose ( 1973 ) pointed this out more than 40 years ago , and since then linear perturbations have been analysed in detail . poisson & israel ( 1990 ) showed that a scalar curvature singularity forms along the cauchy horizon of a charged , spherical black hole in a simplified model . this singularity is characterised by the exponential divergence of the mass function with advanced time . the key ingredient producing this growth of curvature is the blueshifted radiation flux along the inner horizon ( see also gnedin & gnedin 1993 and brady 1999 for a review ) . since then , the result was generalised to kerr black holes ( e.g.
brady & chambers 1996 , hamilton & polhemus 2011 ) . these , and other results about the instability of the kerr black hole interior , suggest that ctcs actually do not occur inside astrophysical black holes . according to eq . ( [ age ] ) , an isolated black hole of about a solar mass would have a lifetime of more than $ 10^{66 } $ yr . this is 56 orders of magnitude longer than the age of the universe . however , if the mass of the black hole is small , then it could evaporate within the hubble time . a primordial black hole , created by extremely energetic collisions shortly after the big bang , should have a sufficiently large initial mass in order to exist today . less massive black holes must have already evaporated . what happens when a black hole loses its mass so it can not sustain an event horizon anymore ? as the black hole evaporates , its temperature rises . when it is cold , it radiates low energy photons . when the temperature increases , more and more energetic particles will be emitted . at some point gamma rays would be produced . if there is a population of primordial black holes , their radiation should contribute to the diffuse gamma - ray background . this background seems to be dominated by the contribution of unresolved active galactic nuclei , and current observations indicate that if there were primordial black holes their mass density should be less than a small fraction of the total cosmological density . after producing gamma rays , the mini black hole would produce leptons , quarks , and super - symmetric particles , if they exist . at the end , the black hole would have a quantum size and the final remnant will depend on the details of how gravity behaves at planck scales . the final product might be a stable , microscopic object with a mass close to the planck mass . such particles might contribute to the dark matter present in the galaxy and in other galaxies and clusters . the cross - section of black hole relics is extremely small ( frolov and novikov 1998 ) , hence they would be basically non - interacting particles . a different possibility , advocated by hawking ( 1974 ) , is that , as a result of the evaporation , nothing is left behind : all the energy is radiated . independently of the problem of mini black hole relics , it is clear that the fate of stellar - mass and supermassive black holes is related to the fate of the whole universe . in an ever expanding universe , or in an accelerating universe as our actual universe seems to be , the fate of the black holes will depend on the acceleration rate . the local physics of the black hole is related to the cosmic expansion through the cosmological scale factor ( faraoni & jacques 2007 ) . a schwarzschild black hole embedded in a friedmann - lemaitre - robertson - walker ( flrw ) universe can be represented by a generalisation of the mcvittie metric ( e.g. gao et al . 2008 ) : $ ds^{2 } = \frac{\left[1-\frac{2 g m(t)}{a(t)c^{2}r}\right]^{2}}{\left[1+\frac{2 g m(t)}{a(t)c^{2}r}\right]^{2 } } c^{2}dt^{2}-a(t)^{2}\left[1+\frac{2 g m(t)}{a(t)c^{2}r}\right]^{4 } ( dr^{2 } + r^{2}d\omega^{2 } ) $ ( [ cosmicbh ] ) . assuming that $ m(t ) = m_{0 } \ , a(t ) $ , with $ m_{0 } $ a constant , the above metric can be used to study the evolution of the black hole as the universe expands . if the equation of state for the cosmic fluid is given by $ p = w \rho $ , with $ w $ constant , then for $ w < -1 $ the universe accelerates its expansion in such a way that the scale factor diverges in a finite time . this time is known as the big rip .
for a given value of $ w < -1 $ , the big rip will occur after a finite number of gyr . the event horizon of the black hole and the cosmic apparent horizon will coincide for some time , and then the inner region of the black hole would be accessible to all observers . in the case $ w \geq -1 $ , expansion will continue for an infinite time . black holes will become more and more isolated . as long as their temperature is higher than that of the cosmic microwave background radiation ( cmb ) , they will accrete photons and increase their mass . when , because of the expansion , the cmb temperature falls below that of the black holes , they will start to evaporate . in the very long run , all black holes will disappear . if massive particles decay into photons on such long timescales , the final state of the universe will be that of a dilute photon gas . cosmic time will cease to make any sense for such a state of the universe , since whatever exists will be on a null surface . without time , there will be nothing else to happen . penrose ( 2010 ) , however , has suggested that a countable sequence of open flrw space - times , each representing a big bang followed by an infinite future expansion , might occur , since the past conformal boundary of one copy of flrw space - time can be `` attached '' to the future conformal boundary of another , after an appropriate conformal rescaling . since bosons obey the laws of conformally invariant quantum theory , they will behave in the same way in the rescaled sections of the cyclical universe . for bosons , the boundary between different cycles is not a boundary at all , but just a space - like surface that can be passed across like any other . fermions , on the other hand , remain confined to each cycle , where they are generated and decay . most of the fermions might be converted into radiation in black holes . if this is correct , black holes would then be the key to the regeneration of the universe . in this chapter i have overviewed some philosophical problems related to black holes . the interface between black hole physics and philosophy remains mostly unexplored , and the list of topics i have selected is by no means exhaustive . the study of black holes can be a very powerful tool to shed light on many other philosophical issues in the philosophy of science and even in general relativity . evolving black holes , black hole dependence of the asymptotic behaviour of space - time , the nature of inertia , the energy of the gravitational field , quantum effects in the near horizon region , turbulent space - time during black hole mergers , the classical characterisation of the gravitational field , and regular black hole interiors are all physical topics that have philosophical significance . in black holes our current representations of space , time , and gravity are pushed to their very limits . the exploration of such limits can pave the way to new discoveries about the world and our ways of representing it : discoveries in both science and philosophy . i thank mario bunge , daniela pérez , gabriela vila , federico lopez armengol , and santiago perez bergliaffa for illuminating discussions on science and black holes . i am also very grateful to florencia vieyro for help with the figures . my work has been partially supported by the argentinian agency anpcyt ( pict 2012 - 00878 ) and the spanish mineco under grant aya2013 - 47447-c3 - 1-p . falcke , h. , markoff s. , bower g. c. , gammie , c. f. , moscibrodzka , m. & maitra , d. 2011 .
the jet in the galactic center : an ideal laboratory for magnetohydrodynamics and general relativity . in : g. e. romero , r. a. sunyaev & t. belloni ( eds . ) , _ jets at all scales _ , proceedings of the international astronomical union , iau symposium , volume 275 , pp . 68 - 76 , cambridge university press , cambridge . hogarth , m. 1994 , non - turing computers and non - turing computability . in : hull , d. , forbes , m. , burian , r. ( eds . ) , _ proceedings of the biennial meeting of the philosophy of science association 1994 _ , pp . 126 - 138 , university of chicago press , chicago .
black holes are extremely relativistic objects . physical processes around them occur in a regime where the gravitational field is extremely intense . under such conditions , our representations of space , time , gravity , and thermodynamics are pushed to their limits . in such a situation philosophical issues naturally arise . in this chapter i review some philosophical questions related to black holes : in particular , the relevance of black holes for the metaphysical dispute between presentists and eternalists , the origin of the second law of thermodynamics and its relation to black holes , the problem of information , black holes and hypercomputing , the nature of determinism , and the breakdown of predictability in black hole space - times . i maintain that black hole physics can be used to illuminate some important problems on the border between science and philosophy , in both epistemology and ontology . * pacs * 04.70.bw , 97.60.lf , 98.80.-k , 01.70.+w . + * keywords : * black holes , cosmology , philosophy of science .
cities are often defined by their domineering cultural characteristics .however , particular cities themselves harbor a large variety of different cultural districts . streets separated by just a few blocks may give very different impressions .these implicit boundaries and classifications are not documented on official maps , and usually are only learned with much time and experience living in a particular city .+ we believe that having a sense of these districts is valuable to a much wider population .some examples are : + 1 .new businesses : for business - owner or entrepreneur looking to open a new restaurant or expand to a different location , knowing which areas of a city harbor restaurants very similar to or different than that particular business is doubtless a valuable insight .newcomers : for tourists , people moving in , or anyone else new to the city , it is often an arduous and daunting task to get a sense of things such as where they are most likely to find a good thai restaurant , _ the _ block to go for dim sum , or the best area for a dressy , upscale dinner with good wine .anyone looking to explore : even people who have already have a sense of the city can be surprised by a hole - in - the - wall cafe or undiscovered area .the lda model we describe can identify the most highly weighted classification for a particular area as well as secondary classifications .this allows it to uncover more hidden characteristics of a particular area besides the most highly weighted .this is interesting information in itself , but can also be used as a backbone of a recommendation system .if a person really enjoys a particular area of town , this model could discover and rank other non - obvious areas that share similar traits .+ from a cognitive science point of view , we think trying to model these questions is an interesting experiment to test the accuracy of methods like lda and probabilistic mixture models to model human cognition .recent cognitive science research has had major successes in probabilistic generative models of human cognition [ 12 , 13 ] .specifically , research by tenenbaum shows strong support for bayesian concept learning [ 14 ] and sanborn et .use dirichlet process mixture models for category learning that emulates human learning [ 15 ] .using techniques like these in this paper , we try to recreate the kind of map a local might build up in their head over time of the different subsections of their city .+ the yelp academic dataset was released in 2013 and has grown to include over 42,000 businesses with over 1 million reviews [ 9 ] .the dataset has been used in academic papers for sentiment analysis , word layout systems , and recommendation engines , among other research areas .the quality and sheer size of the dataset is of high value to our research and its natural language user reviews are pivotal to our cultural detection and classification system .latent dirichlet allocation ( lda ) , first introduced by blei et .al . in 2003[ 1 ] , has been applied to numerous and diverse fields : from computer vision [ 2,3 ] to recommendation systems [ 4 ] to spam filtering [ 5 ] .lda hypothesizes that a collection of documents can be treated as a `` bag of words '' where each document d is generated by the following process , given hyperparameters and : + 1 .assume each topic has a fixed distribution over all words in d that is 2 .choose the document s topic distribution 3 .to generate each word w : + a. choose a topic from + b. 
choose a word from the chosen topic s word distribution . + using this model , lda is able to learn the topic mixtures for the documents on which it is trained , in an unsupervised manner . + to implement lda , we used tools from the python library gensim , which provides functionality to analyze semantic structure in texts [ 6 ] . based on the results of the expectation maximization algorithm used by huang et al . [ 7 ] to determine the optimal number of topics for yelp restaurant reviews in phoenix , we chose k = 50 as the number of topics to extract . we used hyperparameters $ \alpha $ and $ \beta $ with symmetric 1.0/k priors . + we cleaned the reviews to remove punctuation , numbers , and a list of stopwords made up of the `` english stop words '' list in the scikit - learn python library [ 11 ] . additionally , we specified that after this initial cleaning , the model should only consider the 40,000 median frequency words . this eliminated words that only appeared a handful of times , as well as generic food - related words that appeared many times . these words provide little information gain and removing them dramatically improved the convergence time of our lda training . + we trained the model on all restaurant reviews ( around 1.1 million ) from las vegas . training uses the online inference algorithm described by hoffman et al . [ 8 ] and results in an lda topic model object that can be queried with new , unseen documents to return an optimal topic distribution ( a minimal code sketch of this pipeline is given below ) . we used our model to predict topic weights for each restaurant . in addition , the model contains the static word distributions for each topic . our lda model produced 50 topics . each topic is a collection of word - weight couples . words with high corresponding weight values are most representative of the topic . the topic word weights are normalized such that they sum to 1 . table i is a small sampling of selected topics our model generated . table i displays the topic # , the label we chose for the topic based on its word distribution , and the word distribution . the full list of topics and their weights can be viewed in the appendix . + table i ( excerpt ) : topic 2 ( mexican food ) : 0.043*tacos + 0.037*taco + 0.026*asada + 0.024*carne + 0.024*mexican + 0.019*burrito + 0.010*salsa + 0.009*fries + 0.009*beans + 0.008*roberto s . topic 7 ( night club ) : 0.013*music + 0.012*fun + 0.009*club + 0.007*cool + 0.006*party + 0.006*lounge + 0.005*group + 0.005*floor + 0.004*dance + 0.004*girls . topic 45 ( casino ) : 0.027*hotel + 0.022*casino + 0.020*room + 0.010*stay + 0.009*downtown + 0.006*staying + 0.006*pool + 0.005*street + 0.005*stayed + 0.005*fremont . + table ii ( excerpt ) : pho vietnam restaurant , top topics 4 , 15 ( pho , vietnamese , rolls , broth ) ; myxx hookah lounge , top topics 7 , 15 ( music , fun , club , cool , party ) ; romano s macaroni grill , top topics 24 , 30 ( pasta , italian , bread , server ) . + we used our trained lda model to predict topic distributions for each of the 3855 restaurants . a restaurant s topic distribution is a collection of coupled topic numbers and corresponding weights . topics with high corresponding weight values are most representative of the restaurant . these topic distribution weights are normalized such that they sum to 1 . a sampling of topic predictions is shown in table ii . we tested several clustering methods in order to group restaurants into appropriate clusters . we assume that culinary districts in a city are characterized by closeness and similarity of restaurants .
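as referenced above , here is a minimal sketch of the gensim - based training and inference pipeline just described . it is illustrative only : variable names such as ` reviews ' and ` new_restaurant_reviews ' , and the exact cleaning thresholds , are our assumptions rather than the authors code .

import re
from gensim import corpora, models
from sklearn.feature_extraction import text   # provides the english stop words list

stopwords = text.ENGLISH_STOP_WORDS

def tokenize(doc):
    # lowercase, strip punctuation and numbers, drop stop words
    doc = re.sub(r"[^a-z ]", " ", doc.lower())
    return [w for w in doc.split() if w not in stopwords]

# `reviews` is assumed to be a list of raw yelp review strings for las vegas restaurants
texts = [tokenize(r) for r in reviews]
dictionary = corpora.Dictionary(texts)
dictionary.filter_extremes(keep_n=40000)       # keep roughly the 40,000-word vocabulary cut described above
corpus = [dictionary.doc2bow(t) for t in texts]

k = 50
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=k,
                      alpha='symmetric', eta='symmetric')   # symmetric 1/k priors, online variational bayes

# topic distribution for a previously unseen restaurant, from its concatenated reviews
new_bow = dictionary.doc2bow(tokenize(new_restaurant_reviews))
topic_weights = lda.get_document_topics(new_bow)

the resulting ` topic_weights ' is the list of ( topic number , weight ) couples referred to in the text .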
in our model , therefore , we represent each restaurant as a combined vector of its coordinate position and its lda - assigned topic weight distribution . this vector has 52 dimensions , 2 of which represent the spatial location of the restaurant , and 50 of which represent the restaurant s lda topic weights . + 1 . scaling procedure + since the spatial coordinates and topic weights are measured in different spaces , their values are on different scales . to prevent our results being arbitrarily skewed by these different units of scale , we used a scaling procedure , multiplying the topic weight distributions by a constant s . + by varying s , we can give the topic weights more or less influence over the clustering . when s = 0 , the clustering is equivalent to clustering based only on location . as s increases , topic weights are given more control over the clustering . in the limit of large s , the clustering is done purely by topic similarity . + our goal was to find an s such that close - together clusters of restaurants would be grouped into a single cluster , and points on the outer edges of these clusters would identify themselves with the cluster that best matched their topic distribution . in this way , we allow for a chinese restaurant to escape a nearby cluster of italian restaurants . + since we are not using pure spatial features , our clustering may result in some clusters overlapping and interweaving . this allows our model to be representative of the real world of cultural mixing and fuzzy cultural boundaries . + to determine a reasonable scaling factor , we constructed a plausible scenario . a chinese restaurant lies between two clusters : a primarily italian cluster and a primarily chinese cluster . the restaurant is 0.25 mi from the italian cluster s center and 0.75 mi from the chinese cluster s center . we want to choose an s such that the distance from the restaurant to the chinese cluster s center is smaller than its distance to the italian cluster s center : that is , we want the chinese restaurant to be classified into the `` chinese restaurant '' cluster despite it being closer in spatial coordinates to the italian cluster s center . the distance function is the euclidean distance between the two combined vectors . let us assume the restaurant and the chinese cluster share the same topic distribution , and that the restaurant and the italian cluster share zero topics in their distributions . our goal is then to find the smallest s for which this condition holds . this calculation of s , however , assumes topic distributions are all - or - none , when in fact most restaurants are a mixture of a few topics . in fact , we determined the mean number of topic assignments a restaurant received to be 5 . we found a typical restaurant to have 1 dominating topic comprising at least 0.5 of the weight and 4 subtopics comprising the rest of the weight . we performed a more advanced analysis of the same scenario to determine s ; the analysis can be found in equation 10 of the appendix . normalization + since some topics are inherently more common than other topics due to the high prevalence of some restaurant types , we wanted to avoid our model becoming unfairly skewed by very common topics such as a `` pizza '' topic . there are 355 pizza restaurants in las vegas , comprising 9.2% of all las vegas restaurants .
to avoid the scenario where all clusters are labeled as `` pizza '' simply because of the uniformly large number of these restaurants across all clusters , we vertically normalize the topic weights for each restaurant . each restaurant has a 50 - dimensional topic weight vector , and for each of the 50 topics we divide every restaurant s weight by the average weight of that topic across all restaurants . this normalization can be thought of as dividing out the background of a city s restaurant distribution , ensuring clusters will be dominated by notable exceptions to the average : we do not want to point out that pizza restaurants are pretty much evenly distributed in high quantities all around vegas , but rather discover when they , or another type of restaurant , appear in _ notably _ high quantities . we then horizontally re - normalize each topic vector so that the values remain at the same scale . determining the number of clusters + to determine the optimal number of clusters , we first used the elbow method , which looks at the percentage of variance explained as a function of the number of clusters . the idea is that we should choose a number of clusters such that adding more clusters does not significantly improve the modeling . we performed clustering with k = 5 to k = 35 clusters and plotted a variance quantity vs. k , where the variance quantity is the sum of the normalized intra - cluster sums of squares [ 16 ] . figure 1 shows a plot of the log of this quantity vs. k . + the elbow method involves visually choosing the elbow of the graph , where the slope changes most drastically . we determined our elbow happens at k = 30 . however , determining the elbow of a graph is not a well - defined process , and in fact this is one of the known weaknesses of the elbow method . + because of the shortcomings of the elbow method , we also used the gap statistic [ 10 , 16 ] to determine the optimal k with which to cluster . the gap statistic is a way to standardize the comparison of the `` variance explained '' metric used in the elbow method . the gap statistic takes the approach of standardizing the variance explained against a null reference distribution of the data ( a distribution with no apparent underlying clusters ) . the gap statistic method involves calculating the difference between the variance explained for the dataset and the variance explained for the null reference distribution . this difference is known as the gap statistic . the value of k that yields the greatest gap statistic ( greatest difference in variance ) is the optimal value for clustering the data . figure 2 shows the results of the gap statistic . + the gap statistic predicts that k = 30 is the optimal number of clusters for our data . this confirms that our identification of the elbow was indeed correct . + 1 . k - means clustering + k - means clustering is a clustering algorithm that will find centroids to cluster a data set . the k - means algorithm converges on a centroid distribution that minimizes the sum of squares of distances between cluster centroids and the corresponding data points that are classified by them . + using tools from the python library scikit - learn [ 11 ] , we performed k - means clustering on all 3855 vegas restaurants with k - means++ initialization and 300 iterations , specifying k = 30 . we used a euclidean distance metric for our clustering and classification of the 52-dimensional restaurant vectors . the result of this clustering can be seen in figure 3 . + the clusters have a median of 148 members each , with a standard deviation of 80 . gaussian mixture model + a gaussian mixture model ( gmm ) is a probabilistic generative
model that assumes all the data points are generated from a mixture of a finite number of gaussian distributions . the gmm , in principle , is a weighted sum of component gaussian densities . each gaussian distribution can be thought of as a cluster that can classify data points . + also using tools from the python library scikit - learn , we trained our gmm with the expectation maximization algorithm on the 3855 restaurants . the result of this clustering can be seen in figure 3 . the gmm clusters have a median of 200 members each , with a mean of 158 members and a standard deviation of 40 members . + most notably , the gmm clusters varied from the k - means clusters in shape : the k - means clusters were nearly always spherical in shape , due to k - means minimizing distance . the gaussian mixture model , however , is not limited to spherical clusters , as the gaussian distributions that define its clusters are shaped by variances in each dimension . this results in some elongated elliptical clusters . + some of the clusters consist of only restaurants that lie on a particular street . it may be that this behavior is actually beneficial . oftentimes cultural districts within a city are highly street - based , and the gmm model is flexible enough to detect clusters like this . the result of the gmm clustering can be seen in figure 4 . + once we determine relevant spatial and topical clusters , we are tasked with labeling the clusters . to determine the labeling of a cluster , we take the average topic vector for all restaurants in the cluster . we then choose the top _ two _ topics that describe a cluster and use their human - attributed labels . these labels overlaid atop their cluster distributions are shown in figure 5 . + we chose to display the top two labels to uncover not only the most frequent topic within a cluster but also underlying categories which might be less obvious . + using our gaussian mixture clustering , we were able to enhance these labelings with appropriate orientations . since each cluster is represented by a gaussian with two - dimensional variances , we are able to rotate the labelings to align with the direction of maximum gaussian variance . these rotated labels have a tendency to orient with streets . the waffles / brunch label in the top left displays this rather useful property . these oriented labels overlaid atop their gaussian cluster distributions are shown in figure 6 . + while clustering restaurants on space and topics illuminates a city s many cultural centers , it does not show how a specific topic is distributed throughout the city . to show this distribution for a given topic we plotted topic similarity in a heatmap . we ran our lda inference on a novel restaurant s reviews , and from this we got a topic distribution for that novel restaurant . we divided the city into a 20x20 grid of squares . for each square we calculated the average topic similarity from the center of the square to all restaurants in the city . we used a gaussian weight metric to scale topic similarity by proximity .
for each square we calculated a similarity metric , defined as a gaussian - weighted average of the topic similarity between the novel restaurant and every restaurant in the city , where : the average runs over all restaurants in the city ; the topic distribution of the novel restaurant is compared with that of each restaurant using a euclidean distance metric between topic distributions ; and the gaussian weight for each restaurant depends on the distance between the center of the square and the position of that restaurant . our similarity metric thus takes into account the topic similarity of the novel restaurant to each other restaurant in the city . we use a gaussian weight to scale these topic similarities by distance . this allows restaurants near the square s center location to have most of the influence over the square s color . we calculated our similarity metric for every square in our grid and colored our heat map red for high values and blue for low values . the results of our heatmap , generated by comparing the topics of the restaurant `` pho vietnamese restaurant '' to the restaurants of vegas , are shown in figure 7 . + in figure 8 , the x indicates the actual location of the restaurant ( the similarity calculations were conducted without including this restaurant ) . our heat map shows that the restaurant is in an area of high topic similarity , which is accurate ( pho vietnamese restaurant is located in las vegas s chinatown district ) . we found that the resultant lda topics ( appendix table iii ) were well - defined and descriptive . we observed that the words within a given topic fit well into a particular category of food type or culture , and we had very little trouble labeling them based on the given words and weights . additionally , the topics themselves seem reasonably distinct from each other , with only a few overlapping topics . the general area in which we saw the most overlap was the buffet restaurants topic . topics # 0 , # 5 , # 48 each concerned buffet restaurants . however , looking at the words in each , we were able to distinguish `` seafood / buffet '' ( # 5 ) and `` upscale / buffet '' ( # 0 ) from a more general `` buffet '' topic ( # 48 ) . + looking at the k - means clustering of las vegas restaurants , we observed that our clustering classifies areas defined beforehand : it put labels of `` pho '' and `` dim sum '' on chinatown , and `` luxe '' , `` steakhouse '' , `` upscale '' , and `` seafood / buffet '' over the strip . interestingly , it also split these clusters into smaller subclusters , for example separating a `` dim sum '' and `` ramen '' cluster from a `` pho '' and `` soup '' cluster . this behavior may or may not be ideal : it may be identifying actual sub - districts , or future work may involve a final cluster merge step in which two clusters close in distance and topic similarity can be merged into a single cluster . + the gmm also distinguished these already - known cultural areas . the shapes and sizes of the clusters themselves were slightly more varied and often aligned with a particular street . this ability is interesting because oftentimes districts may be very street - based . + as all this learning was unsupervised , we are very interested in finding a metric to quantitatively determine accuracy across these different models . one potential way to do this would be to conduct cognitive studies with people who live in or are familiar with particular cities . for example , we could compare our map with maps described by las vegas residents , or get a measure of how accurate they believe our map is . + the automatic spatial and topical clustering and labeling approach outlined above is a general method that can be applied to any city . figure 8 in the appendix shows the results of labeling two other cities ( phoenix and edinburgh )
with this method . these maps can be analytical tools with various applications including but not limited to determining new restaurant placement , understanding cultural regions of a city , discovering unexplored areas of one s city , choosing where to live , or what route to take on a stroll to the park . like the automatic cultural labeling method , the topic heat map can be used as a useful analytics tool . this map can be used to determine certain cultural hotbeds , both known and hidden . a hidden cultural hotbed may present a market opportunity for continued growth . the topical heat map of a city may be an especially valuable asset to a new restaurant or chain looking to strategize where exactly to place a new store location . the heat map could actually be used to perform a detailed analysis on what kind of location , for different types of restaurants , is optimal ( see iv part 3 for more detail ) . 1 ) using the timestamps on reviews , it is possible to filter reviews based on when they were written . this would allow for creating dynamic maps using reviews within a moving time window to see how culture changes : how new clusters emerge , split and merge . + 2 ) in our study we use the elbow method and gap statistic to predetermine an appropriate number of clusters to use . instead , it may prove valuable to use a nested chinese restaurant process to learn a hierarchy of clusters and subclusters . for example , this could split chinatown into various subclusters under the general chinese cluster . this could be used to label the graphs at various scales and zoom levels . additionally , using a chinese restaurant process as part of a nonparametric mixture model would allow the model to flexibly add more clusters as needed , and may be more likely to find the optimal number of clusters . + 3 ) the similarity heatmap we developed , along with yelp star ratings , could be used to analyze what kind of placement makes a restaurant successful . for example , it may be that placing a restaurant right in the center of an area of very high similarity creates direct competition and comparison that is actually detrimental . at the same time , it may be that placing a restaurant in an area where it is completely out of place is also a bad idea . a detailed analysis of where restaurants with varying star ratings fall on a similarity heatmap could provide valuable insight to businesses about what kind of placement is optimal . + 4 ) using yelp user data and the classifications from this model , it is possible to create a recommendation system . recommendations could be general areas or specific restaurant suggestions : for example , if a user likes several restaurants in a specific area / cluster , we imagine recommending to them another restaurant or area that shares similar topics . we would like to thank joshua tenenbaum for supporting this work . references + [ 1 ] d. blei , a. ng , m. jordan , latent dirichlet allocation , journal of machine learning research 3 , pp . 993 , 2003 . [ 2 ] y. wang , p. sabzmeydani , g. mori , semi - latent dirichlet allocation : a hierarchical model for human action recognition , in lecture notes in computer science , vol . 4814 , a. elgammal , b. rosenhahn , r. klette , eds . , heidelberg : springer berlin , 2007 , pp . 240 . [ 3 ] m. lienou , h. maitre , m. datcu , semantic annotation of satellite images using latent dirichlet allocation , geoscience and remote sensing letters , ieee , vol . 7 , pp . 28 , july 2009 . [ 4 ] r. krestel , p. fankhauser , w.
nejdl , latent dirichlet allocation for tag recommendation , proc . of the third acm conf . on recommender systems , new york , 2009 , pp . 61 . [ 5 ] i. bíró , j. szabó , a. benczúr , latent dirichlet allocation in web spam filtering , proc . of the 4th int . workshop on adversarial information retrieval on the web , new york , 2008 . [ 6 ] gensim python library , https://radimrehurek.com/gensim/ . [ 7 ] j. huang , s. rogers , e. joo , improving restaurants by extracting subtopics from yelp reviews , presented at iconference , berlin , 2014 . [ 8 ] m. hoffman , d. blei , f. bach , online learning for latent dirichlet allocation , in advances in neural information processing systems 23 , 2010 . [ 9 ] yelp academic dataset , https://www.yelp.com/academic_dataset . [ 10 ] r. tibshirani , g. walther , t. hastie , estimating the number of clusters in a data set via the gap statistic , j. r. statist . soc . b , vol . 63 , part 2 , pp . 411 , 2001 . [ 11 ] f. pedregosa , g. varoquaux , a. gramfort , v. michel , b. thirion , o. grisel , m. blondel , p. prettenhofer , r. weiss , v. dubourg , j. vanderplas , a. passos , d. cournapeau , m. brucher , m. perrot , e. duchesnay , scikit - learn : machine learning in python , journal of machine learning research , vol . 12 , pp . 2825 , 2011 . [ 12 ] t. griffiths , n. chater , c. kemp , a. perfors , and j. b. tenenbaum , probabilistic models of cognition : exploring representations and inductive biases , trends in cognitive sciences , vol . 14 , pp . 357 , 2010 . [ 13 ] n. chater , j. b. tenenbaum , and a. yuille , probabilistic models of cognition : conceptual foundations , trends in cognitive sciences , vol . 10 , 2006 . [ 14 ] j. b. tenenbaum , rules and similarity in concept learning , advances in neural information processing systems 12 , pp . 59 , 2000 . [ 15 ] a. n. sanborn , t. l. griffiths , d. j. navarro , psychological review , vol . 117 , pp . 1144 , 2010 . [ 16 ] the data science lab , finding the k in k - means clustering , december 2013 , https://datasciencelab.wordpress.com/2013/12/27/finding-the-k-in-k-means-clustering/ . appendix table iii ( full list of topics and word distributions ; first entry shown ) : topic 0 ( buffet / upscale ) : 0.027*wicked + 0.026*spoon + 0.015*buffet + 0.014*dishes + 0.012*cosmopolitan + 0.011*stone + 0.011*mac + 0.011*marrow + 0.011*bone + 0.010*gelato
topic models are a way to discover underlying themes in an otherwise unstructured collection of documents . in this study , we specifically used the latent dirichlet allocation ( lda ) topic model on a dataset of yelp reviews to classify restaurants based on their reviews . furthermore , we hypothesize that within a city , restaurants can be grouped into clusters based on both location and similarity . we used several different clustering methods , including k - means clustering and a probabilistic mixture model , in order to uncover and classify districts , both well - known and hidden ( i.e. cultural areas like chinatown , or hearsay like `` the best street for italian restaurants '' ) , within a city . we use these models to display and label different clusters on a map . we also introduce a topic similarity heatmap that displays the similarity distribution in a city relative to a new restaurant .
at a summer school in les houches france in summer 1972 , james bardeen , building on earlier work of brandon carter , initiated research on gravitational lensing by spinning black holes .bardeen gave a thorough analytical analysis of null geodesics ( light - ray propagation ) around a spinning black hole ; and , as part of his analysis , he computed how a black hole s spin affects the shape of the shadow that the hole casts on light from a distant star field .the shadow bulges out on the side of the hole moving away from the observer , and squeezes inward and flattens on the side moving toward the observer .the result , for a maximally spinning hole viewed from afar , is a d - shaped shadow ; cf .figure [ fig4:kerrlens ] below .( when viewed up close , the shadow s flat edge has a shallow notch cut out of it , as hinted by figure [ fig8:nofingerprint ] below . ) despite this early work , gravitational lensing by black holes remained a backwater of physics research until decades later , when the prospect for actual observations brought it to the fore .there were , we think , two especially memorable accomplishments in the backwater era .the first was a 1978 simulation of what a camera sees as it orbits a non - spinning black hole , with a star field in the background .this simulation was carried out by leigh palmer , maurice pryce and bill unruh on an evans and sutherland vector graphics display at simon fraser university .palmer , pryce and unruh did not publish their simulation , but they showed a film clip from it in a number of lectures in that era .the nicest modern - era film clip of this same sort that we know of is by alain riazuelo ( contained in his dvd and available on the web at ) ; see figure [ fig3:schrays ] and associated discussion below . and see for an online application by thomas mller and daniel weiskopf for generating similar film clips . also of much interest in our modern era are film clips by andrew hamilton of what a camera sees when falling into a nonspinning black hole ; these have been shown at many planetariums , and elsewhere .the other most memorable backwater - era accomplishment was a black and white simulation by jean - pierre luminet of what a thin accretion disk , gravitationally lensed by a nonspinning black hole , would look like as seen from far away but close enough to resolve the image . in figure[ fig15:disk]c below , we show a modern - era colour version of this , with the camera close to a fast - spinning black hole . gravitational lensing by black holes began to be observationally important in the 1990s .kevin rauch and roger blandford recognised that , when a hot spot , in a black hole s accretion disk or jet , passes through caustics of the earth s past light cone ( caustics produced by the hole s spacetime curvature ) , the brightness of the hot spot s x - rays will undergo sharp oscillations with informative shapes .this has motivated a number of quantitative studies of the kerr metric s caustics ; see , especially and references therein .these papers caustics are relevant for a source near the black hole and an observer far away , on earth in effect , on the black hole s `` celestial sphere '' at radius . in our paper , by contrast , we are interested in light sources that are usually on the celestial sphere and an observer or camera near the black hole . 
for this reversed case , we shall discuss the relevant caustics in sections [ subsec : kerrouter ] and [ subsec : fingerprint ] .this case has been little studied , as it is of primarily cultural interest ( everyone " wants to know what it would look like to live near a black hole , but nobody expects to make such observations in his or her lifetime ) , and of science - fiction interest .our paper initiates the detailed study of this culturally interesting case ; but we leave a full , systematic study to future research .most importantly , we keep our camera outside the ergosphere by contrast with alain riazuelo s and hamilton s recent simulations with cameras deep inside the ergosphere and even plunging through the horizon . in the 1990sastrophysicists began to envision an era in which very long baseline interferometry would make possible the imaging of black holes specifically , their shadows and their accretion disks .this motivated visualizations , with ever increasing sophistication , of accretion disks around black holes : modern variants of luminet s pioneering work .see , especially , fukue and yokoyama , who added colours to the disk ; viergutz , who made his black hole spin , treated thick disks , and produced particularly nice and interesting coloured images and included the disk s secondary image which wraps under the black hole ; marck , who laid the foundations for a lovely movie now available on the web with the camera moving around close to the disk , and who also included higher - order images , as did fanton et . and beckwith and done .see also papers cited in these articles . in the 2000s astrophysicistshave focused on perfecting the mm - interferometer imaging of black - hole shadows and disks , particularly the black hole at the centre of our own milky way galaxy ( sgr a * ) .see , e.g. , the 2000 feasibility study by falcke , melia and agol . see also references on the development and exploitation of grmhd ( general relativistic magnetohydrodynamical ) simulation codes for modelling accretion disks like that in sgr a * ; and references on detailed grmhd models of sgr a * and the models comparison with observations .this is culminating in a mm interferometric system called the _ event horizon telescope _ , which is beginning to yield interesting observational results though not yet full images of the shadow and disk in sgr a*. all the astrophysical visualizations of gravitational lensing and accretion disks described above , and all others that we are aware of , are based on tracing huge numbers of light rays through curved spacetime . a primary goal of today s state - of - the - art , astrophysical ray - tracing codes ( e.g. , the chan , psaltis and zel s massively parallel , gpu - based code gray ) is very fast throughput , measured , e.g. , in integration steps per second ; the spatial smoothness of images has been only a secondary concern . for our _ interstellar _ work , by contrast, a primary goal is smoothness of the images , so flickering is minimised when objects move rapidly across an imax screen ; fast throughput has been only a secondary concern . 
with these different primary goals , in our own code ,called dngr , we have been driven to employ a different set of visualization techniques from those of the astrophysics community techniques based on propagation of ray bundles ( light beams ) instead of discrete light rays , and on carefully designed spatial filtering to smooth the overlaps of neighbouring beams ; see section [ sec : dngr ] and [ sec : app ] .although , at double negative , we have many gpu - based workstations , the bulk of our computational work is done on a large compute cluster ( the double negative render - farm ) that does not employ gpus . in [ app : comparecodes ] we shall give additional comparisons of our dngr code with astrophysical codes and with other film - industry cgi codes . our work on gravitational lensing by black holesbegan in may 2013 , when christopher nolan asked us to collaborate on building realistic images of a spinning black hole and its disk , with imax resolution , for his science fiction movie _ interstellar_. we saw this not only as an opportunity to bring realistic black holes into the hollywood arena , but also an opportunity to create a simulation code capable of exploring a black hole s lensing with a level of image smoothness and dynamics not previously available . to achieve imax quality ( with 23 million pixels per image and adequately smooth transitions between pixels ) , our code needed to integrate not only rays ( photon trajectories ) from the light source to the simulated camera , but also bundles of rays ( light beams ) with filtering to smooth the beams overlap ; see section [ sec : dngr ] , [ subsec : app - raybundle ] , and [ subsec : app - filtering ] . and because the camera would sometimes be moving with speeds that are a substantial fraction of the speed of light , our code needed to incorporate relativistic aberration as well as doppler shifts and gravitational redshifts .thorne , having had a bit of experience with this kind of stuff , put together a step - by - step prescription for how to map a light ray and ray bundle from the light source ( the celestial sphere or an accretion disk ) to the camera s local sky ; see [ subsec : app - raytrace ] and [ subsec : app - raybundle ] .he implemented his prescription in mathematica to be sure it produced images in accord with others prior simulations and his own intuition .he then turned his prescription over to our double negative team , who created the fast , high - resolution code dngr that we describe in section [ sec : dngr ] and [ sec : app ] , and created the images to be lensed : fields of stars and in some cases also dust clouds , nebulae , and the accretion disk around _interstellar _ s black hole , gargantua . 
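as a small , self - contained illustration of the strong - field geometry that such a code must reproduce , the following sketch evaluates bardeen s classic parametrisation of the shadow edge seen by a distant observer , the parametrisation behind the d - shaped shadow described at the start of this introduction . this is not dngr code ; it uses only the standard constants of motion for unstably trapped photon orbits , the spin value 0.999 quoted below is reused for concreteness , and the variable names and the equatorial - observer choice are ours .

```python
# Sketch (not DNGR): Bardeen's parametrisation of a spinning black hole's
# shadow edge, as seen by a distant observer at inclination theta_o.
# Geometrized units G = c = M = 1; a is the spin parameter a/M.
import numpy as np

def photon_orbit_constants(r, a):
    """Constants of motion b (axial angular momentum) and q (Carter constant)
    for a photon unstably trapped on a constant-r orbit of radius r."""
    b = -(r**3 - 3.0*r**2 + a**2 * r + a**2) / (a * (r - 1.0))
    q = -r**3 * (r**3 - 6.0*r**2 + 9.0*r - 4.0*a**2) / (a**2 * (r - 1.0)**2)
    return b, q

def shadow_edge(a, theta_o, n=2000):
    """Celestial coordinates (alpha, beta) of the shadow edge on the
    distant observer's sky (Bardeen's construction)."""
    # Radii of the prograde / retrograde circular photon orbits bound the parameter.
    r1 = 2.0 * (1.0 + np.cos(2.0/3.0 * np.arccos(-a)))
    r2 = 2.0 * (1.0 + np.cos(2.0/3.0 * np.arccos(+a)))
    alphas, betas = [], []
    for r in np.linspace(r1 + 1e-6, r2 - 1e-6, n):
        b, q = photon_orbit_constants(r, a)
        beta2 = q + a**2 * np.cos(theta_o)**2 - b**2 / np.tan(theta_o)**2
        if beta2 >= 0.0:
            alphas.append(-b / np.sin(theta_o))
            betas.append(np.sqrt(beta2))
    return np.array(alphas), np.array(betas)

if __name__ == "__main__":
    a, theta_o = 0.999, np.pi / 2          # fast spin, equatorial observer
    al, be = shadow_edge(a, theta_o)
    print("shadow extent in alpha: [%.3f, %.3f] M" % (al.min(), al.max()))
    print("shadow extent in beta : [-%.3f, %.3f] M" % (be.max(), be.max()))
    # The flattened edge of the resulting curve is the D-shaped feature for fast spin.
```

for fast spin the curve s flattened side is the signature bardeen predicted ; for a = 0 it reduces to a circle whose impact - parameter radius is sqrt(27) m .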
in section [ sec : starfield ], we use this code to explore a few interesting aspects of the lensing of a star field as viewed by a camera that moves around a fast - spinning black hole ( 0.999 of maximal a value beyond which natural spin - down processes become strong ) .we compute the intersection of the celestial sphere ) with the primary , secondary , and tertiary caustics of the camera s past light cone and explore how the sizes and shapes of the resulting caustic curves change as the camera moves closer to the hole and as we go from the primary caustic to higher - order caustics .we compute the images of the first three caustics on the camera s local sky ( the first three critical curves ) , and we explore in detail the creation and annihilation of stellar - image pairs , on the secondary critical curve , when a star passes twice through the secondary caustic . we examine the tracks of stellar images on the camera s local sky as the camera orbits the black hole in its equatorial plane .and we discover and explain why , as seen by a camera orbiting a fast spinning black hole in our galaxy , there is just one image of the galactic plane between adjacent critical curves near the poles of the black hole s shadow , but there are multiple images between critical curves near the flat , equatorial shadow edge .the key to this from one viewpoint is the fact that higher - order caustics wrap around the celestial sphere multiple times , and from another viewpoint the key is light rays temporarily trapped in prograde , almost circular orbits around the black hole . by placing a checkerboard of paint swatches on the celestial sphere ,we explore in detail the overall gravitational lensing patterns seen by a camera near a fast - spinning black hole and the influence of aberration due to the camera s motion . 
whereas most of section [ sec : starfield ] on lensing of stellar images is new , in the context of a camera near the hole and stars far away , our section [ sec : disk ] , on lensing of an accretion disk , retreads largely known ground .but it does so in the context of the movie _ interstellar _ , to help readers understand the movie s black - hole images , and does so in a manner that may be pedagogically interesting .we begin with a picture of a gravitationally lensed disk made of equally spaced paint swatches .this picture is useful for understanding the multiple images of the disk that wrap over and under and in front of the hole s shadow .we then replace the paint - swatch disk by a fairly realistic and rather thin disk ( though one constructed by double negative artists instead of by solving astrophysicists equations for thin accretion disks ) .we first compute the lensing of our semi - realistic disk but ignore doppler shifts and gravitational redshifts , which we then turn on pedagogically in two steps : the colour shifts and then the intensity shifts .we discuss why christopher nolan and paul franklin chose to omit the doppler shifts in the movie , and chose to slow the black hole s spin from that , , required to explain the huge time losses in _ interstellar _ , to .and we then discuss and add simulated lens flare ( light scattering and diffraction in the camera s lenses ) for an imax camera that observes the disk something an astrophysicist would not want to do because it hides the physics of the disk and the lensed galaxy beyond it , but that is standard in movies , so computer generated images will have continuity with images shot by real cameras .finally , in section [ sec : conclusion ] we summarise and we point to our use of dngr to produce images of gravitational lensing by wormholes . throughoutwe use geometrized units in which ( newton s gravitation constant ) and ( the speed of light ) are set to unity , and we use the mtw sign conventions .our computer code for making images of what a camera would see in the vicinity of a black hole or wormhole is called the double negative gravitational renderer , or dngr which obviously can also be interpreted as the double negative general relativistic code . onto the celestial sphere viaa backward directed light ray ; and the evolution of a ray bundle , that is circular at the camera , backward along the ray to its origin , an ellipse on the celestial sphere . ]the ray tracing part of dngr produces a map from the celestial sphere ( or the surface of an accretion disk ) to the camera s local sky .more specifically ( see [ subsec : app - raytrace ] for details and figure [ fig1:skymapa ] for the ray - tracing geometry ) : 1 . in dngr, we adopt boyer - lindquist coordinates for the black hole s kerr spacetime . at each eventwe introduce the locally non - rotating observer , also called the fiducial observer or fido in the membrane paradigm : the observer whose 4-velocity is orthogonal to the surfaces of constant , the kerr metric s space slices .we regard the fido as _ at rest _ in space , and give the fido orthonormal basis vectors , , that point along the spatial coordinate lines . 2 .we specify the camera s coordinate location ; its direction of motion relative to the fido there , a unit vector in the camera s reference frame ; and the camera s speed relative to the fido .3 . 
in the camera s reference frame, we set up a right - handed set of three orthonormal basis vectors , with along the direction of the camera s motion , perpendicular to and in the plane spanned by and , and orthogonal to and .see figure [ fig1:skymapa ] .and we then set up a spherical polar coordinate system for the camera s local sky ( i.e. for the directions of incoming light rays ) in the usual manner dictated by the camera s cartesian basis vectors .4 . for a ray that originates on the celestial sphere ( at ), we denote the boyer - lindquist angular location at which it originates by .we integrate the null geodesic equation to propagate the ray from the camera to the celestial sphere , thereby obtaining the map of points on the camera s local sky to points on the celestial sphere .if the ray originates on the surface of an accretion disk , we integrate the null geodesic equation backward from the camera until it hits the disk s surface , and thereby deduce the map from a point on the disk s surface to one on the camera s sky . for more details on this case ,see [ subsec : dngrdisks ] . 7 .we also compute , using the relevant doppler shift and gravitational redshift , the net frequency shift from the ray s source to the camera , and the corresponding net change in light intensity .dngr achieves its imax - quality images by integrating a bundle of light rays ( a light beam ) backward along the null geodesic from the camera to the celestial sphere using a slightly modified variant of a procedure formulated in the 1970s by serge pineault and robert roeder .this procedure is based on the equation of geodesic deviation and is equivalent to the optical scalar equations that have been widely used by astrophysicists in analytical ( but not numerical ) studies of gravitational lensing ; see references in section 2.3 of .our procedure , in brief outline , is this ( see figure [ fig1:skymapa ] ) ; for full details , see [ subsec : app - raybundle ] . 1 . in dngr , we begin with an initially circular ( or sometimes initially elliptical ) bundle of rays , with very small opening angle , centred on a pixel on the camera s sky .2 . we integrate the equation of geodesic deviation backward in time along the bundle s central ray to deduce the ellipse on the celestial sphere from which the ray bundle comes . more specifically , we compute the angle that the ellipse s major axis makes with the celestial sphere s direction , and the ellipse s major and minor angular diameters and on the celestial sphere .3 . we then add up the spectrum and intensity of all the light emitted from within that ellipse ; and thence , using the frequency and intensity shifts that were computed by ray tracing , we deduce the spectrum and intensity of the light arriving in the chosen camera pixel .novel types of filtering are key to generating our imax - quality images for movies . in dngrwe use spatial filtering to smooth the interfaces between beams ( ray bundles ) , and temporal filtering to make dynamical images look like they were filmed with a movie camera . for details , see [ subsec : app - filtering ] . 
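step 2 of the ray - bundle procedure rests on the equation of geodesic deviation ; for reference , in the mtw sign conventions adopted in this paper it reads ( our transcription and notation : k is the tangent to the bundle s central null ray , \xi the separation vector to a neighbouring ray , \zeta an affine parameter ) :

```latex
% standard equation of geodesic deviation (MTW conventions); our notation.
\frac{D^{2}\xi^{\alpha}}{d\zeta^{2}}
  \;=\; -\,R^{\alpha}{}_{\beta\gamma\delta}\, k^{\beta}\, \xi^{\gamma}\, k^{\delta}
```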
in [ subsec :app - implementation ] we describe some details of our dngr implementation of the ray - tracing , ray - bundle , and filtering equations ; in [ subsec : app - codefarm ] we describe some characteristics of our code and of double negative s linux - based render - farm on which we do our computations ; in [ subsec : dngrdisks ] we describe our dngr modelling of accretion disks ; and in [ app : comparecodes ] we briefly compare dngr with other film - industry cgi codes and state - of - the - art astrophysical simulation codes .in this subsection we review well known features of gravitational lensing by a nonspinning ( schwarzschild ) black hole , in preparation for discussing the same things for a fast - spinning hole .we begin , pedagogically , with a still from a film clip by alain riazuelo , figure [ fig2:schlens ] .the camera , at radius ( where is the black hole s mass ) is moving in a circular geodesic orbit around the black hole , with a star field on the celestial sphere .we focus on two stars , which each produce two images on the camera s sky .we place red circles around the images of the brighter star and yellow diamonds around those of the dimmer star . as the camera orbits the hole , the images move around the camera s sky along the red and yellow curves . .picture courtesy alain riazuelo , from his film clip ; coloured markings by us . ]images outside the einstein ring ( the violet circle ) move rightward and deflect away from the ring .these are called _primary images_. images inside the einstein ring ( _ secondary images _ ) appear , in the film clip , to emerge from the edge of the black hole s shadow , loop leftward around the hole , and descend back into the shadow .however , closer inspection with higher resolution reveals that their tracks actually close up along the shadow s edge as shown in the figure ; the close - up is not seen in the film clip because the images are so very dim along the inner leg of their tracks . at all times ,each star s two images are on opposite sides of the shadow s centre .this behaviour is generic .every star ( if idealised as a point source of light ) , except a set of measure zero , has two images that behave in the same manner as the red and yellow ones .outside the einstein ring , the entire primary star field flows rightward , deflecting around the ring ; inside the ring , the entire secondary star field loops leftward , confined by the ring then back rightward along the shadow s edge .( there actually are more , unseen , images of the star field , even closer to the shadow s edge , that we shall discuss in section [ subsec : kerrstarfield ] . ) . ] as is well known , this behaviour is easily understood by tracing light rays from the camera to the celestial sphere ; see figure [ fig3:schrays ] .the einstein ring is the image , on the camera s sky , of a point source that is on the celestial sphere , diametrically opposite the camera ; i.e. , at the location indicated by the red dot and labeled caustic " in figure [ fig3:schrays ] .light rays from that caustic point generate the purple ray surface that converges on the camera , and the einstein ring is the intersection of that ray surface with the camera s local sky .[ the caustic point ( red dot ) is actually the intersection of the celestial sphere with a caustic line ( a one - dimensional sharp edge ) on the camera s past light cone .this caustic line extends radially from the black hole s horizon to the caustic point . 
]the figure shows a single star ( black dot ) on the celestial sphere and two light rays that travel from that star to the camera , gravitationally deflecting around opposite sides of the black hole .one of these rays , the primary one , arrives at the camera outside the einstein ring ; the other , secondary ray , arrives inside the einstein ring . because the caustic point and the star on the celestial sphere both have dimension zero , as the camera moves , causing the caustic point to move relative to the star , there is zero probability for it to pass through the star .therefore , the star s two images will never cross the einstein ring ; one will remain forever outside it and the other inside and similarly for all other stars in the star field .however , if a star with finite size passes close to the ring , the gravitational lensing will momentarily stretch its two images into lenticular shapes that hug the einstein ring and will produce a great , temporary increase in each image s energy flux at the camera due to the temporary increase in the total solid angle subtended by each lenticular image .this increase in flux still occurs when the star s actual size is too small for its images to be resolved , and also in the limit of a point star . for examples ,see riazuelo s film clip .( large amplifications of extended images are actually seen in nature , for example in the gravitational lensing of distant galaxies by more nearby galaxies or galaxy clusters ; see , e.g. , . ) , as seen by a camera in a circular , equatorial geodesic orbit at radius .the red curves are the trajectories of primary images , on the camera s sky , for stars at celestial - sphere latitudes .the yellow curves are the trajectories of secondary images for stars at .the picture in this figure is a still from our first film clip archived at and is copyright 2015 warner bros .entertainment inc ._ interstellar _ and all related characters and elements are trademarks of and warner bros .entertainment inc .the full figure appears in the second and later printings of _ the science of interstellar _ , and is used by permission of w. w. norton & company , inc .this image may be used under the terms of the creative commons attribution - noncommercial - noderivs 3.0 ( cc by - nc - nd 3.0 ) license .any further distribution of these images must maintain attribution to the author(s ) and the title of the work , journal citation and doi .you may not use the images for commercial purposes and if you remix , transform or build upon the images , you may not distribute the modified images . ] for a camera orbiting a spinning black hole and a star field ( plus sometimes dust clouds and nebulae ) on the celestial sphere , we have carried out a number of simulations with our code dngr .we show a few film clips from these simulations at .figure [ fig4:kerrlens ] is a still from one of those film clips , in which the hole has spin ( where is the hole s spin angular momentum per unit mass and is its mass ) , and the camera moves along a circular , equatorial , geodesic orbit at radius . 
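before examining the spinning hole of figure [ fig4:kerrlens ] in detail , the einstein ring of the nonspinning case just reviewed can be located with a few lines of standard schwarzschild geodesic integration . the sketch below is our own illustration , not dngr : it launches rays backward from a static camera , finds the launch angle whose ray arrives from the point on the celestial sphere diametrically opposite the camera ( the caustic point of figure [ fig3:schrays ] ) , and reports that angle together with the shadow s angular radius . the camera radius of 6 m is an assumed example value , the celestial sphere is idealised as infinitely distant , and all variable names are ours .

```python
# Sketch (not DNGR): angular radius of the Einstein ring for a static camera
# near a Schwarzschild black hole, star field idealised as a celestial sphere
# at infinity.  Geometrized units G = c = M = 1.  The camera radius is an
# illustrative choice, not a value taken from the paper.
import numpy as np
from scipy.integrate import solve_ivp

R_CAM = 6.0          # camera radius in units of M (assumed example value)
U_CAM = 1.0 / R_CAM

def sweep_to_infinity(b, phi_max=20.0):
    """Integrate u(phi) = 1/r(phi) for a photon launched inward from the camera
    with impact parameter b; return the azimuth swept when it reaches r = infinity
    (u = 0), or None if it never escapes within phi_max."""
    def rhs(phi, y):
        u, up = y
        return [up, 3.0*u**2 - u]          # u'' = 3 M u^2 - u  (M = 1)
    up0 = np.sqrt(max(1.0/b**2 - U_CAM**2 * (1.0 - 2.0*U_CAM), 0.0))  # inward launch
    escaped = lambda phi, y: y[0]          # u = 0  -> reached infinity
    escaped.terminal, escaped.direction = True, -1
    captured = lambda phi, y: y[0] - 0.5   # u = 1/2 -> crossed the horizon r = 2M
    captured.terminal, captured.direction = True, +1
    sol = solve_ivp(rhs, (0.0, phi_max), [U_CAM, up0],
                    events=[escaped, captured], rtol=1e-10, atol=1e-12)
    return sol.t_events[0][0] if sol.t_events[0].size else None

def sky_angle(b):
    """Angle between the ray's sky position and the direction to the hole, as
    measured by a static observer: sin(delta) = b sqrt(1 - 2M/r) / r."""
    return np.arcsin(b * np.sqrt(1.0 - 2.0/R_CAM) / R_CAM)

# Einstein ring: the ray must arrive from the caustic point diametrically
# opposite the camera, i.e. sweep exactly pi in azimuth.  Bisect on b between
# the critical impact parameter 3*sqrt(3) M and a tangential launch.
b_lo, b_hi = 3.0*np.sqrt(3.0) + 1e-9, R_CAM / np.sqrt(1.0 - 2.0/R_CAM) - 1e-9
for _ in range(60):
    b_mid = 0.5 * (b_lo + b_hi)
    dphi = sweep_to_infinity(b_mid)
    if dphi is None or dphi > np.pi:
        b_lo = b_mid                       # swept too far: move away from critical b
    else:
        b_hi = b_mid
print("shadow half-angle  : %.2f deg" % np.degrees(sky_angle(3.0*np.sqrt(3.0))))
print("einstein-ring angle: %.2f deg" % np.degrees(sky_angle(0.5*(b_lo + b_hi))))
```

the bisection works because the swept azimuth falls from many windings near the critical impact parameter to less than pi for a tangential launch , so a crossing of pi is guaranteed and the root is the ring s angular radius measured from the centre of the shadow .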
in the figure we show in violet two critical curves analogs of the einstein ring for a nonspinning black hole .these are images , on the camera sky , of two caustic curves that reside on the celestial sphere ; see discussion below .we shall discuss in turn the region outside the secondary ( inner ) critical curve , and then the region inside .as the camera moves through one full orbit around the hole , the stellar images in the outer region make one full circuit along the red and yellow curves and other curves like them , largely avoiding the two critical curves of figure [ fig4:kerrlens]particularly the outer ( primary ) one .we denote by five - pointed symbols four images of two very special stars : stars that reside where the hole s spin axis intersects the celestial sphere , and ( and ) .these are analogs of the earth s star polaris . by symmetry, these pole - star images must remain fixed on the camera s sky as the camera moves along its circular equatorial orbit . outside the _ primary ( outer ) critical curve _ ,all northern - hemisphere stellar images ( images with ) circulate clockwise around the lower red pole - star image , and southern - hemisphere stellar images , counterclockwise around the upper red pole - star image . between the primary and secondary ( inner ) critical curves ,the circulations are reversed , so at the primary critical curve there is a divergent shear in the image flow .[ for a nonspinning black hole ( figure [ fig2:schlens ] above ) there are also two critical curves , with the stellar - image motions confined by them : the einstein ring , and a circular inner critical curve very close to the black hole s shadow , that prevents the inner star tracks from plunging into the shadow and deflects them around the shadow so they close up . ] after seeing these stellar - image motions in our simulations , we explored the nature of the critical curves and caustics for a camera near a fast - spinning black hole , and their influence .our exploration , conceptually , is a rather straightforward generalisation of ideas laid out by rauch and blandford and by bozza .they studied a camera or observer on the celestial sphere and light sources orbiting a black hole ; our case is the inverse : a camera orbiting the hole and light sources on the celestial sphere . just as the einstein ring , for a nonspinning black hole , is the image of a caustic point on the celestial sphere the intersection of the celestial sphere with a caustic line on the camera s past light cone so the critical curves for our spinning black hole are also images of the intersection of the celestial sphere with light - cone caustics . but the spinning hole s light - cone caustics generically are 2-dimensional surfaces ( folds ) in the three - dimensional light cone , so their intersections with the celestial sphere are one - dimensional : they are closed _caustic curves _ in the celestial sphere , rather than caustic points .the hole s rotation breaks spherical symmetry and converts non - generic caustic points into generic caustic curves .( for this reason , theorems about caustics in the schwarzschild spacetime , which are rather easy to prove , are of minor importance compared to results about generic caustics in the kerr spacetime . 
)we have computed the caustic curves , for a camera near a spinning black hole , by propagating ray bundles backward from a fine grid of points on the camera s local sky , and searching for points on the celestial sphere where the arriving ray bundle s minor angular diameter passes through zero .all such points lie on a caustic , and the locations on the camera sky where their ray bundles originate lie on critical curves . around a black hole with spin parameter .as the camera moves , in the camera s reference frame a star at travels along the dashed - line path . ] for a camera in the equatorial plane at radius , figure [ fig5:astroid603 ] shows the primary and secondary caustic curves .these are images , in the celestial sphere , of the primary and secondary critical curves shown in figure [ fig4:kerrlens ] .the primary caustic is a very small astroid ( a four - sided figure whose sides are fold caustics and meet in cusps ) .it spans just in both the and directions .the secondary caustic , by contrast , is a large astroid : it extends over in and in .all stars within of the equator encounter it as the camera , at , orbits the black hole .this is similar to the case of a source near the black hole and a camera on the celestial sphere ( far from the hole , e.g. on earth ) . there, also , the primary caustic is small and the secondary large . in both casesthe dragging of inertial frames stretches the secondary caustic out in the direction .because the spinning hole s caustics have finite cross sections on the celestial sphere , by contrast with the point caustics of a nonspinning black hole , stars , generically , can cross through them ; see , e.g. , the dashed stellar path in figure [ fig5:astroid603 ] . as is well known from the elementary theory of fold caustics (see , e.g. , section 7.5 of ) , at each crossing two stellar images , on opposite sides of the caustic s critical curve , merge and annihilate ; or two are created . andat the moment of creation or annihilation , the images are very bright . of figure [ fig5:astroid603]a . in the right still , images 1 and 2 are about to annihilate as their star passes through caustic point . ]figure [ fig6:createannihilate ] ( two stills from a film clip at ) is an example of this . as the star in figure [ fig5:astroid603]a , at polar angle , travels around the celestial sphere relative to the camera , a subset of its stellar images travels around the red track of figure [ fig6:createannihilate ] , just below the black hole s shadow .( these are called the star s `` secondary images '' because the light rays that bring them to the camera have the same number of poloidal turning points , one and equatorial crossings , one as the light rays that map the secondary caustic onto the secondary critical curve ; similarly these images red track is called the star s `` secondary track '' . ) at the moment of the left still , the star has just barely crossed the secondary caustic at point of figure [ fig5:astroid603]a , and its two secondary stellar images , # 2 ( inside the secondary critical curve ) and # 3 ( outside it ) have just been created at the point half way between # 2 and # 3 , where their red secondary track crosses the secondary critical curve ( figure [ fig6:createannihilate]a ) . in the meantime, stellar image # 1 is traveling slowly , alone , clockwise , around the track , outside the critical curve . between the left and right stills , image # 2 travels the track counter clockwise and images 1 and 3 , clockwise . 
immediately after the rightstill , the star crosses the secondary caustic at point in figure [ fig5:astroid603]a , and the two images # 1 ( outside the critical curve ) and # 2 ( inside it ) annihilate at the intersection of their track with the critical curve ( figure [ fig6:createannihilate]b ) .as the star in figure [ fig5:astroid603]a continues on around the celestial sphere from point to , the lone remaining image on the track , image # 3 , continues onward , clockwise , until it reaches the location # 1 of the first still , and two new images are created at the location between # 2 and # 3 of the first still ; and so forth . .the camera s orbit is a circular , equatorial geodesic with radius .( a ) for a star at latitude ( above the equatorial plane ; essentially the same star as in figure [ fig5:astroid603]a ) .( b ) for a star at ( above the equator ) .the tracks are labeled by the order of their stellar images ( the number of poloidal , , turning points on the ray that brings an image to the camera ) . ]figure [ fig7:latitude]a puts these in a broader context .it shows the tracks of _ all _ of the images of the star in figure [ fig5:astroid603]a .each image is labeled by its _ : the number of poloidal turning points on the ray that travels to it from its star on the celestial sphere ; or , equally well ( for our choice of a camera on the black hole s equator ) , the number of times that ray crosses the equator . the order-0 track is called the primary track , and ( with no ray - equator crossings ) it lies on the same side of the equator as its star ; order-1 is the secondary track , and it lies on the opposite side of the equator from its star ; order-2 is the tertiary track , on the same side of the equator as its star ; etc .the primary track ( order 0 ) does not intersect the primary critical curve , so a single primary image travels around it as the camera orbits the black hole .the secondary track ( order 1 ) is the one depicted red in figure [ fig6:createannihilate ] and discussed above .it crosses the secondary critical curve twice , so there is a single pair creation event and a single annihilation event ; at some times there is a single secondary image on the track , and at others there are three .it is not clear to us whether the red secondary track crosses the tertiary critical curve ( not shown ) ; but if it does , there will be no pair creations or annihilations at the crossing points , because the secondary track and the tertiary critical curve are generated by rays with different numbers of poloidal turning points , and so the critical curve is incapable of influencing images on the track .the extension to higher - order tracks and critical curves , all closer to the hole s shadow , should be clear .this pattern is qualitatively the same as when the light source is near the black hole and the camera far away , but in the hole s equatorial plane .and for stars at other latitudes the story is also the same ; only the shapes of the tracks are changed .figure [ fig7:latitude]b is an example .it shows the tracks for a star just above the black hole s equatorial plane , at . 
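the critical curves and caustics described above can also be located , in any lensing map , as the places where the jacobian of the sky - to - source map is singular ; dngr finds them via the ray bundles minor diameter , but the following toy sketch ( ours , weak - field , a point - mass lens rather than a kerr hole ) shows the equivalent jacobian criterion at work : the determinant changes sign exactly on the einstein ring , where image pairs are created or annihilated and the magnification diverges .

```python
# Sketch: locating a lensing critical curve as the zero set of the Jacobian of
# the sky-to-source map.  DNGR uses ray bundles (the beam's minor diameter
# passing through zero); here we apply the equivalent Jacobian test to a toy,
# weak-field point-mass lens whose map is analytic and whose critical curve is
# the Einstein ring.  Everything below is our illustration, not the authors' code.
import numpy as np

THETA_E = 1.0   # Einstein angle of the toy lens (arbitrary units)

def sky_to_source(theta_x, theta_y):
    """Weak-field point-lens map from image-plane angle to source-plane angle:
    beta = theta * (1 - theta_E^2 / |theta|^2)."""
    r2 = theta_x**2 + theta_y**2
    f = 1.0 - THETA_E**2 / r2
    return theta_x * f, theta_y * f

def jacobian_det(theta_x, theta_y, h=1e-4):
    """Finite-difference determinant of d(beta)/d(theta); it vanishes on
    critical curves, where image pairs appear or annihilate and the
    magnification 1/|det| diverges."""
    bxp, byp = sky_to_source(theta_x + h, theta_y)
    bxm, bym = sky_to_source(theta_x - h, theta_y)
    bxq, byq = sky_to_source(theta_x, theta_y + h)
    bxr, byr = sky_to_source(theta_x, theta_y - h)
    j11, j21 = (bxp - bxm) / (2*h), (byp - bym) / (2*h)
    j12, j22 = (bxq - bxr) / (2*h), (byq - byr) / (2*h)
    return j11 * j22 - j12 * j21

# Scan image-plane positions along a line and flag sign changes of det J.
xs = np.linspace(0.2, 2.5, 400)
dets = np.array([jacobian_det(x, 0.3) for x in xs])
crossings = xs[np.where(np.sign(dets[:-1]) != np.sign(dets[1:]))[0]]
print("critical curve crossed near theta_x =", crossings)   # ~ the Einstein ring
```

in principle the same sign - change test , applied to a numerically ray - traced kerr sky map , traces out nested critical curves like those of figure [ fig4:kerrlens ] , and their celestial - sphere images are the caustics ; the ray - bundle approach described above accomplishes the same identification .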
the film clips at exhibit these tracks and images all together , andshow a plethora of image creations and annihilations .exploring these clips can be fun and informative .the version of dngr that we used for _ interstellar _ showed a surprisingly complex , fingerprint - like structure of gravitationally lensed stars inside the secondary critical curve , along the left side of the shadow .we searched for errors that might be responsible for it , and finding none , we thought it real .but alain riazuelo saw nothing like it in his computed images . making detailed comparisons with riazuelo, we found a bug in dngr .when we corrected the bug , the complex pattern went away , and we got excellent agreement with riazuelo ( when using the same coordinate system ) , and with images produced by andy bohn , francois hebert and william throwe using their cornell / caltech sxs imaging code .since the sxs code is so very different from ours ( it is designed to visualize colliding black holes ) , that agreement gives us high confidence in the results reported below .fortunately , the bug we found had no noticeable impact on the images in _interstellar_. with our debugged code , the inner region , inside the secondary critical curve , appears to be a continuation of the pattern seen in the exterior region .there is a third critical curve within the second , and there are signs of higher - order critical curves , all nested inside each other .these are most visible near the flattened edge of the black hole s shadow on the side where the horizon s rotation is toward the camera ( the left side in this paper s figures ) .the dragging of inertial frames moves the critical curves outward from the shadow s flattened edge , enabling us to see things that otherwise could only be seen with a strong zoom - in .the higher - order critical curves and the regions between them can also be made more visible by moving the camera closer to the black hole s horizon . in figure [ fig8:nofingerprint ]we have moved the camera in to with , and we show three nested critical curves . the primary caustic ( the celestial - sphere image of the outer , primary , critical curve ) is a tiny astroid , as at ( figure [ fig5:astroid603 ] ) .the secondary and tertiary caustics are shown in figure [ fig9:astroid ] . in the equatorial plane of a black hole that has spin .( b ) blowup of the equatorial region near the shadow s flat left edge .the imaged star field is adapted from the tycho-2 catalogue of the brightest 2.5 million stars seen from earth , so it shows multiple images of the galactic plane . ] .points on each curve that are ray - mapped images of each other are marked by letters a , b , c , d. ( b ) the tertiary caustic and tertiary critical curve ., title="fig : " ] .points on each curve that are ray - mapped images of each other are marked by letters a , b , c , d. 
( b ) the tertiary caustic and tertiary critical curve ., title="fig : " ] by comparing these three figures with each other , we see that ( i ) as we move the camera closer to the horizon , the secondary caustics wrap further around the celestial sphere ( this is reasonable since frame dragging is stronger nearer the hole ) , and ( ii ) at fixed camera radius , each order caustic wraps further around the celestial sphere than the lower order ones .more specifically , for , the secondary caustic ( figure [ fig9:astroid]a ) is stretched out to one full circuit around the celestial sphere compared to of a circuit when the camera is at ( figure [ fig5:astroid603 ] ) , and the tertiary caustic ( figure [ fig9:astroid]b ) is stretched out to more than six circuits around the celestial sphere !the mapping of points between each caustic and its critical curve is displayed in film clips at . for the secondary caustic at ,we show a few points in that mapping in figure [ fig9:astroid]a . the left side of the critical curve ( the side where the horizon is moving toward the camera ) maps into the long , stretched - out leftward sweep of the caustic . the right side maps into the caustic s two unstretched right scallops .the same is true of other caustics and their critical curves .returning to the gravitationally lensed star - field image in figure [ fig8:nofingerprint]b : notice the series of images of the galactic plane ( fuzzy white curves ) . above and below the black hole s shadowthere is just one galactic - plane image between the primary and secondary critical curves , and just one between the secondary and tertiary critical curves .this is what we expect from the example of a nonspinning black hole .however , near the fast - spinning hole s left shadow edge , the pattern is very different : three galactic - plane images between the primary and secondary critical curves , and eight between the secondary and tertiary critical curves .these multiple galactic - plane images are caused by the large sizes of the caustics particularly their wrapping around the celestial sphere and the resulting ease with which stars cross them , producing multiple stellar images .an extension of an argument by bozza [ paragraph preceding his eq .( 17 ) ] makes this more precise .( this argument will be highly plausible but not fully rigorous because we have not developed a sufficiently complete understanding to make it rigorous . )consider , for concreteness , a representative galactic - plane star that lies in the black hole s equatorial plane , inside the primary caustic , near the caustic s left cusp ; and ask how many images the star produces on the camera s sky and where they lie . to answer this question , imagine moving the star upward in at fixed until it is above all the celestial - sphere caustics ; then move it gradually back downward to its original , equatorial location . 
when above the caustics , the star produces one image of each order : a primary image ( no poloidal turning points ) that presumably will remain outside the primary critical curve when the star returns to its equatorial location ; a secondary image ( one poloidal turning point ) that presumably will be between the primary and secondary critical curves when the star returns ; a tertiary image between the secondary and tertiary critical curves ; etc .when the star moves downward through the upper left branch of the astroidal primary caustic , it creates two primary images , one on each side of the primary critical curve .when it moves downward through the upper left branch of the secondary caustic ( figure [ fig9:astroid]a ) , it creates two secondary images , one on each side of the secondary caustic . andwhen it moves downward through the six sky - wrapped upper left branches of the tertiary caustic ( figure [ fig9:astroid]b ) , it creates twelve tertiary images , six on each side of the tertiary caustic . and because the upper left branches of all three caustics map onto the left sides of their corresponding critical curves , all the created images will wind up on the left sides of the critical curves and thence the left side of the black hole s shadow . and by symmetry , with the camera and the star both in the equatorial plane , all these images will wind up in the equatorial plane .so now we can count . in the equatorial plane to the left of the primary critical curve ,there are two images : one original primary image , and one caustic - created primary image .these are to the left of the region depicted in figure [ fig8:nofingerprint]a . between the primary and secondary critical curvesthere are three images : one original secondary image , one caustic - created primary , and one caustic - created secondary image .these are representative stellar images in the three galactic - plane images between the primary and secondary critical curves of figure [ fig8:nofingerprint ] . and between the secondary and tertiary critical curves there are eight stellar images : one original tertiary , one caustic - created secondary , and six caustic - created tertiary images. these are representative stellar images in the eight galactic - plane images between the secondary and tertiary critical curves of figure [ fig8:nofingerprint ] .this argument is not fully rigorous because : ( i ) we have not proved that every caustic - branch crossing , from outside the astroid to inside , creates an image pair rather than annihilating a pair ; this is very likely true , with annihilations occurring when a star moves out of the astroid .( ii ) we have not proved that the original order- images wind up at the claimed locations , between the order- and order- critical curves .a more thorough study is needed to pin down these issues . .as the camera , moves around a circular , equatorial , geodesic orbit at radius , stars move along horizontal dashed lines relative to the camera .( b ) this checkerboard pattern as seen gravitationally lensed on the camera s sky .stellar images move along the dashed curves .the primary and secondary critical curves are labeled 1cc " and 2cc " .( c ) blowup of the camera s sky near the left edge of the hole s shadow ; 3cc " is the tertiary critical curve.,title="fig : " ] . 
as the camera ,moves around a circular , equatorial , geodesic orbit at radius , stars move along horizontal dashed lines relative to the camera .( b ) this checkerboard pattern as seen gravitationally lensed on the camera s sky .stellar images move along the dashed curves .the primary and secondary critical curves are labeled 1cc " and 2cc " .( c ) blowup of the camera s sky near the left edge of the hole s shadow ; 3cc " is the tertiary critical curve.,title="fig : " ] [ fig10:checker ] figure [ fig10:checker ] is designed to help readers explore this multiple - image phenomenon in greater detail .there we have placed , on the celestial sphere , a checkerboard of paint swatches ( figure [ fig10:checker]a ) , with dashed lines running along the constant - latitude spaces between paint swatches , i.e. , along the celestial - sphere tracks of stars . in figure[ fig10:checker]b we show the gravitationally lensed checkerboard on the camera s entire sky ; and in figure [ fig10:checker]c we show a blowup of the camera - sky region near the left edge of the black hole s shadow .we have labeled the critical curves 1cc , 2cc and 3cc for primary , secondary , and tertiary .the multiple images of lines of constant celestial - sphere longitude show up clearly in the blow - up , between pairs of critical curves ; and the figure shows those lines being stretched vertically , enormously , in the vicinity of each critical curve .the dashed lines ( star - image tracks ) on the camera s sky show the same kind of pattern as we saw in figure [ fig7:latitude ] . .the orbit , which has constant boyer - lindquist coordinate radius , is plotted on a sphere , treating its boyler - lindquist coordinates as though they were spherical polar coordinates . ] the multiple images near the left edge of the shadow can also be understood in terms of the light rays that bring the stellar images to the camera . those light rays travel from the celestial sphere inward to near the black hole , where they get temporarily trapped , for a few round - trips , on near circular orbits ( orbits with nearly constant boyer - lindquist radius ) , and then escape to the camera .each such nearly trapped ray is very close to a truly ( but unstably ) trapped , constant- ray such as that shown in figure [ fig8:ringoffire ] .these trapped rays ( discussed in and in chapters 6 and 8 of ) wind up and down spherical strips with very shallow pitch angles .as the camera makes each additional prograde trip around the black hole , the image carried by each temporarily trapped mapping ray gets wound around the constant- sphere one more time ( i.e. , gets stored there for one more circuit ) , and it comes out to the camera s sky slightly closer to the shadow s edge and slightly higher or lower in latitude .correspondingly , as the camera moves , the star s image gradually sinks closer to the hole s shadow and gradually changes its latitude actually moving away from the equator when approaching a critical curve and toward the equator when receding from a critical curve .this behaviour is seen clearly , near the shadow s left edge , in the film clips at . 
[ figure caption ( fig12:aberration ) : the celestial sphere is covered by the paint - swatch checkerboard of figure [ fig10:checker]a , the camera is at radius and is moving in the azimuthal direction , and the camera speed is : ( a ) that of a prograde , geodesic , circular orbit ( same as figure [ fig10:checker]b ) , ( b ) that of a zero - angular - momentum observer ( a fido ) , and ( c ) at rest in the boyer - lindquist coordinate system . the coordinates are the same as in figure [ fig10:checker]b . ]
the gravitational lensing pattern is strongly influenced not only by the black hole s spin and the camera s location , but also by the camera s orbital speed . we explore this in figure [ fig12:aberration ] , where we show the gravitationally lensed paint - swatch checkerboard of figure [ fig10:checker]a for a black hole with spin , a camera in the equatorial plane at radius , and three different camera velocities , all in the azimuthal direction : ( a ) camera moving along a prograde circular geodesic orbit [ coordinate angular velocity in the notation of [ subsec : app - raytrace ] ] ; ( b ) camera moving along a zero - angular - momentum orbit [ , eq . ( [ eq : kerrquantities ] ) , which is speed as measured by a circular , geodesic observer ] ; and ( c ) a static camera , i.e. at rest in the boyer - lindquist coordinate system [ , which is speed as measured by the circular , geodesic observer ] . the huge differences in lensing pattern , for these three different camera velocities , are due , of course , to special relativistic aberration . ( we thank alain riazuelo for pointing out to us that aberration effects should be huge . ) for prograde geodesic motion ( top picture ) , the hole s shadow is relatively small and the sky around it , large . as seen in the geodesic reference frame , the zero - angular - momentum camera and the static camera are moving in the direction of the red / black dot , i.e. toward the right part of the external universe and away from the right part of the black - hole shadow , at about half and 4/5 the speed of light respectively . so the zero - angular - momentum camera ( middle picture ) sees the hole s shadow much enlarged due to aberration , and the external universe shrunken ; and the static camera sees the shadow enlarged so much that it encompasses somewhat more than half the sky ( more than steradians ) , and sees the external universe correspondingly shrunk .
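the aberration at work in figure [ fig12:aberration ] is the familiar special - relativistic formula applied in the fido s local rest frame . in our own notation : for a camera moving with speed \beta relative to the fido , a ray that arrives from a sky direction making angle \theta_f with the direction of motion ( as measured by the fido ) is seen by the camera to arrive from angle \theta_c , with

```latex
% standard special-relativistic aberration of sky directions (our notation):
% beta is the camera's speed relative to the FIDO; theta_f and theta_c are the
% angles between the arrival direction and the direction of motion, in the
% FIDO frame and in the camera frame respectively.
\cos\theta_{\rm c} \;=\; \frac{\cos\theta_{\rm f} + \beta}{\,1 + \beta\cos\theta_{\rm f}\,}
```

sky directions bunch toward the camera s direction of motion and spread out behind it , which is why a camera moving away from much of the shadow sees the shadow swell and the external universe shrink .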
despite these huge differences in lensing patterns ,the multiplicity of images between critical curves is unchanged : still three images of some near - equator swatches between the primary and secondary critical curves , and eight between the secondary and tertiary critical curves .this is because the caustics in the camera s past light cone depend only on the camera s location and not on its velocity , so a point source s caustic crossings are independent of camera velocity , and the image pair creations and annihilations along critical curves are independent of camera velocity .we have used our code , dngr , to construct images of what a thin accretion disk in the equatorial plane of a fast - spinning black hole would look like , seen up close . for our own edification, we explored successively the influence of the bending of light rays ( gravitational lensing ) , the influence of doppler frequency shifts and gravitational frequency shifts on the disk s colours , the influence of the frequency shifts on the brightness of the disk s light , and the influence of _ lens flare _ due to light scattering and diffraction in the lenses of a simulated 65 mm imax camera .although all these issues except lens flare have been explored previously , e.g. in and references therein , our images may be of pedagogical interest , so we show them here .we also show them as a foundation for discussing the choices that were made for _interstellar s _ accretion disk . and before being placed around a black hole .body : this paint - swatch disk , now in the equatorial plane around a black hole with , as viewed by a camera at and ( ) , ignoring frequency shifts , associated colour and brightness changes , and lens flare .( figure from _ the science of interstellar _ , used by permission of w. w. norton & company , inc , and created by our double negative team , tm & warner bros .entertainment inc .( s15 ) ) .this image may be used under the terms of the creative commons attribution - noncommercial - noderivs 3.0 ( cc by - nc - nd 3.0 ) license .any further distribution of these images must maintain attribution to the author(s ) and the title of the work , journal citation and doi .you may not use the images for commercial purposes and if you remix , transform or build upon the images , you may not distribute the modified images . ]figure [ fig13:diskpaintswatch ] illustrates the influence of gravitational lensing ( light - ray bending ) . to construct this image, we placed the infinitely thin disk shown in the upper left in the equatorial plane around a fast - spinning black hole , and we used dngr to compute what the disk would look like to a camera near the hole and slightly above the disk plane .the disk consisted of paint swatches arranged in a simple pattern that facilitates identifying , visually , the mapping from the disk to its lensed images .we omitted frequency shifts and their associated colour and brightness changes , and also omitted camera lens flare ; i.e. , we ( incorrectly ) transported the light s specific intensity along each ray , unchanged , as would be appropriate in flat spacetime .here , , , , and are energy , time , area , solid angle , and frequency measured by an observer just above the disk or at the camera . 
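the quantity being transported here is the specific intensity , and the relativistically correct transport law , the one switched on for figure [ fig15:disk]c below via liouville s theorem , is that i_\nu / \nu^3 is constant along each ray . in our notation , with g the net ( doppler plus gravitational ) frequency shift between emission and reception :

```latex
% specific intensity and its Liouville-theorem transport law (standard results;
% our notation): g is the net frequency shift from the emitter to the camera.
I_{\nu} \;=\; \frac{dE}{\,dt\,dA\,d\Omega\,d\nu\,},
\qquad
\frac{I_{\nu}}{\nu^{3}} = \text{const along a ray}
\;\;\Longrightarrow\;\;
I_{\nu}^{\rm cam}(\nu_{\rm cam}) \;=\; g^{3}\, I_{\nu}^{\rm em}(\nu_{\rm em}),
\qquad
g \equiv \frac{\nu_{\rm cam}}{\nu_{\rm em}} .
```

since the artists disk emits a black - body spectrum , the doppler - and gravity - shifted light is again a black - body spectrum with its temperature rescaled by the same factor g , which is the origin of the colour changes in figure [ fig15:disk]b .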
in the figurewe see three images of the disk .the upper image swings around the front of the black hole s shadow and then , instead of passing behind the shadow , it swings up over the shadow and back down to close on itself .this wrapping over the shadow has a simple physical origin : light rays from the top face of the disk , which is actually behind the hole , pass up over the top of the hole and down to the camera due to gravitational light deflection ; see figure 9.8 of .this entire image comes from light rays emitted by the disk s top face . by looking at the colours , lengths , and widths of the disk s swatches and comparing with those in the inset, one can deduce , in each region of the disk , the details of the gravitational lensing . in figure [ fig13:diskpaintswatch ] , the lower disk image wraps under the black hole s shadow and then swings inward , becoming very thin , then up over the shadow and back down and outward to close on itself .this entire image comes from light rays emitted by the disk s bottom face : the wide bottom portion of the image , from rays that originate behind the hole , and travel under the hole and back upward to the camera ; the narrow top portion , from rays that originate on the disk s front underside and travel under the hole , upward on its back side , over its top , and down to the camera making one full loop around the hole .there is a third disk image whose bottom portion is barely visible near the shadow s edge .that third image consists of light emitted from the disk s top face , that travels around the hole once for the visible bottom part of the image , and one and a half times for the unresolved top part of the image . as in figure [ fig13:diskpaintswatch ] and with the same geometry . ] in the remainder of this section [ sec : disk ] we deal with a moderately realistic accretion disk but a disk created for _ interstellar _ by double negative artists rather than created by solving astrophysical equations such as .in [ subsec : dngrdisks ] we give some details of how this and other double negative accretion disk images were created .this artists _ interstellar _ disk was chosen to be very anemic compared to the disks that astronomers see around black holes and that astrophysicists model so the humans who travel near it will not get fried by x - rays and gamma - rays .it is physically thin and marginally optically thick and lies in the black hole s equatorial plane .it is not currently accreting onto the black hole , and it has cooled to a position - independent temperature , at which it emits a black - body spectrum .figure [ fig14:disknatural999 ] shows an image of this artists disk , generated with a gravitational lensing geometry and computational procedure identical to those for our paint - swatch disk , figure [ fig13:diskpaintswatch ] ( no frequency shifts or associated colour and brightness changes ; no lens flare ) .christopher nolan and paul franklin decided that the flattened left edge of the black - hole shadow , and the multiple disk images alongside that left edge , and the off - centred disk would be too confusing for a mass audience .although _ interstellar _ s black hole had to spin very fast to produce the huge time dilations seen in the movie for visual purposes nolan and franklin slowed the spin to , resulting in the disk of figure [ fig15:disk]a .but with the black hole s spin slowed from to for reasons discussed in the text .( b ) this same disk with its colours ( light frequencies ) doppler shifted and gravitationally 
shifted .( c ) the same disk with its specific intensity ( brightness ) also shifted in accord with liouville s theorem , .this image is what the disk would truly look like to an observer near the black hole ., title="fig : " ] but with the black hole s spin slowed from to for reasons discussed in the text .( b ) this same disk with its colours ( light frequencies ) doppler shifted and gravitationally shifted .( c ) the same disk with its specific intensity ( brightness ) also shifted in accord with liouville s theorem , .this image is what the disk would truly look like to an observer near the black hole ., title="fig : " ] but with the black hole s spin slowed from to for reasons discussed in the text .( b ) this same disk with its colours ( light frequencies ) doppler shifted and gravitationally shifted .( c ) the same disk with its specific intensity ( brightness ) also shifted in accord with liouville s theorem , .this image is what the disk would truly look like to an observer near the black hole ., title="fig : " ] the influences of doppler and gravitational frequency shifts on the appearance of this disk are shown in figures [ fig15:disk]b , c . since the left side of the disk is moving toward the camera and the right side away with speeds of roughly , their light frequencies get shifted blueward on the left and redward on the right by multiplicative factors of order 1.5 and 0.4 respectively when one combines the doppler shift with a percent gravitational redshift .these frequency changes induce changes in the disk s perceived _ colours _( which we compute by convolving the frequency - shifted spectrum with the sensitivity curves of motion picture film ) and also induce changes in the disk s perceived _ brightness _ ; see [ subsec : dngrdisks ] for some details . in figure [ fig15:disk]b ,we have turned on the colour changes , but not the corresponding brightness changes . as expected ,the disk has become blue on the left and red on the right . in figure[ fig15:disk]c , we have turned on both the colour and the brightness changes . notice that the disk s left side , moving toward the camera , has become very bright , while the right side , moving away , has become very dim .this is similar to astrophysically observed jets , emerging from distant galaxies and quasars ; one jet , moving toward earth is typically bright , while the other , moving away , is often too dim to be seen .christopher nolan , the director and co - writer of _ interstellar _ , and paul franklin , the visual effects supervisor , were committed to make the film as scientifically accurate as possible within constraints of not confusing his mass audience unduly and using images that are exciting and fresh .a fully realistic accretion disk , figure [ fig15:disk]c , that is exceedingly lopsided , with the hole s shadow barely discernible , was obviously unacceptable . a ( no colour or brightness shifts ) with lens flare added a type of lens flare called `` veiling flare '' , which has the look of a soft glow and is very characteristic of imax camera lenses .this is a variant of the accretion disk seen in _interstellar_. 
( figure created by our double negative team using dngr , and tm & warner bros .entertainment inc .( s15 ) ) this image may be used under the terms of the creative commons attribution - noncommercial - noderivs 3.0 ( cc by - nc - nd 3.0 ) license .any further distribution of these images must maintain attribution to the author(s ) and the title of the work , journal citation and doi .you may not use the images for commercial purposes and if you remix , transform or build upon the images , you may not distribute the modified images . ] the first image in figure [ fig15:disk ] , the one without frequency shifts and associated colour and brightness changes , was particularly appealing , but it lacked one element of realism that few astrophysicists would ever think of ( though astronomers take it into account when modelling their own optical instruments ) .movie audiences are accustomed to seeing scenes filmed through a real camera a camera whose optics scatter and diffract the incoming light , producing what is called _ lens flare_. as is conventional for movies ( so that computer generated images will have visual continuity with images shot by real cameras ) , nolan and franklin asked that simulated lens flare be imposed on the accretion - disk image .the result , for the first image in figure [ fig15:disk ] , is figure [ fig16:disklensflare ] .this , with some embellishments , is the accretion disk seen around the black hole gargantua in _interstellar_. _all _ of the black - hole and accretion - disk images in _ interstellar _ were generated using dngr , with a single exception : when cooper ( matthew mcconaughey ) , riding in the ranger spacecraft , has plunged into the black hole gargantua , the camera , looking back upward from inside the event horizon , sees the gravitationally distorted external universe within the accretion disk and the black - hole shadow outside it as general relativity predicts . because dngr uses boyer - lindquist coordinates , which do not extend smoothly through the horizon , this exceptional image had to be constructed by double negative artists manipulating dngr images by hand . in 2002one of our authors ( james ) formulated and perfected the following ( rather obvious ) method for applying lens flare to images .the appearance of a distant star on a camera s focal plane is mainly determined by the point spread function of the camera s optics . for christopher nolan s films we measure the point spread function by recording with hdr photography ( see e.g. ) a point source of light with the full set of 35 mm and 65 mm lenses typically used in his imax and anamorphic cameras , attached to a single lens reflex camera .we apply the camera s lens flare to an image by convolving it with this point spread function .( for these optics concepts see , e.g. , . ) for the image [ fig15:disk]a , this produces figure [ fig16:disklensflare ] .more recent work does a more thorough analysis of reflections between the optical elements in a lens , but requires detailed knowledge of each lens construction , which was not readily available for our _ interstellar _ work . as discussed above , the accretion disk in _interstellar _ was an artist s conception , informed by images that astrophysicists have produced , rather than computed directly from astrophysicists accretion - disk equations such as . 
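returning to the lens - flare step described above , a minimal sketch of applying a point spread function to a rendered frame by convolution follows . the psf below is a made - up stand - in ( a gaussian core , a soft veiling - flare - like halo , and a faint streak ) rather than double negative s measured imax - lens psf , and the fft - based convolution is simply the obvious way to implement the step .

```python
# Sketch: applying lens flare by convolving a rendered image with a point
# spread function (PSF), as described in the text.  The PSF here is a made-up
# stand-in, not the measured IMAX-lens PSF used for the film.
import numpy as np
from scipy.signal import fftconvolve

def toy_psf(size=129, halo_sigma=6.0, streak_strength=0.02):
    """Normalised stand-in PSF: a narrow core, a soft halo, and a faint streak."""
    y, x = np.mgrid[-(size//2):size//2 + 1, -(size//2):size//2 + 1]
    r2 = x**2 + y**2
    core = np.exp(-r2 / 2.0)                           # sharp core
    halo = 0.05 * np.exp(-r2 / (2.0 * halo_sigma**2))  # veiling-flare-like glow
    streak = streak_strength * np.exp(-np.abs(y)) / (1.0 + 0.1 * np.abs(x))
    psf = core + halo + streak
    return psf / psf.sum()

def add_lens_flare(image, psf):
    """Convolve each colour channel with the PSF (energy-preserving kernel)."""
    return np.stack([fftconvolve(image[..., c], psf, mode="same")
                     for c in range(image.shape[-1])], axis=-1)

if __name__ == "__main__":
    frame = np.zeros((256, 256, 3))
    frame[100:110, 120:200, :] = [1.0, 0.8, 0.5]       # a bright disk-like strip
    flared = add_lens_flare(frame, toy_psf())
    print("peak before/after:", frame.max(), round(float(flared.max()), 3))
```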
in our work on _ interstellar _ , we developed three different types of disk models : * an infinitely thin , planar disk , with colour and optical thickness defined by an artist s image ; * a three dimensional ` voxel ' model ; * for close - up shots , a disk with detailed texture added by modifying a commercial renderer , mantra .we discuss these briefly in [ subsec : dngrdisks ] .in this paper we have described the code dngr , developed at double negative ltd . , for creating general relativistically correct images of black holes and their accretion disks .we have described our use of dngr to generate the accretion - disk images seen in the movie _ interstellar _ , and to explain effects that influence the disk s appearance : light - ray bending , doppler and gravitational frequency shifts , shift - induced colour and brightness changes , and camera lens flare .we have also used dngr to explore , for a camera orbiting a fast spinning black hole , the gravitational lensing of a star field on the celestial sphere including the lensing s caustics and critical curves , and how they influence the stellar images pattern on the camera s sky , the creation and annihilation of image pairs , and the image motions .elsewhere we describe our use of dngr to explore gravitational lensing by hypothetical wormholes ; particularly , the influence of a wormhole s length and shape on its lensing of stars and nebulae ; and we describe the choices of length and shape that were made for _ interstellar _ s wormhole and how we generated that movie s wormhole images . for very helpful advice during the development of dngr and/or during the research with it reported here , we thank christopher nolan , alain riazuelo , jean - pierre luminet , roger blandford , andy bohn , francois hebert , william throwe , avery broderick and dimitrios psaltis . for contributions to dngr and its applications ,we thank members of the double negative r&d team sylvan dieckmann , simon pabst , shane christopher , paul - george roberts , and damien maupu ; and also double negative artists fabio zangla , shaun roth , zoe lord , iacopo di luigi , finella fan , nicholas new , tristan myles , and peter howlett .the construction of dngr was funded by warner bros .entertainment inc . , for generating visual effects for the movie _ interstellar_. we thank warner bros . for authorising this code s additional use for scientific research , and in particular the research reported in this paper .in this section we give a step - by - step prescription for computing ( i ) the ray - tracing map from a point on the camera s local sky to a point on the celestial sphere , and also ( ii ) the blue shift of light emitted from the celestial sphere with frequency and received at the camera with frequency . throughoutwe set the black hole smass to unity , so distances are measured in terms of .the foundations for our prescription are : ( i ) the kerr metric written in boyer - lindquist coordinates where ( ii ) the 3 + 1 split of spacetime into space plus time embodied in this form of the metric ; and ( iii ) the family of fiducial observers ( fidos ) whose world lines are orthogonal to the 3-spaces of constant , and their orthonormal basis vectors that lie in those 3-spaces and are depicted in figure [ fig1:skymapa ] , we shall also need three functions of and of a ray s constants of motion ( axial angular momentum ) and ( carter constant ) , which appear in the ray s evolution equations ( [ eq : rays2 ] ) below i.e . 
, in the equations for a _null _ geodesic : \ ; , \;\ ; \theta = q-\cos^2\theta\left({b^2\over\sin^2\theta } - a^2\right)\;. \label{eqs : prtheta}\ ] ] and we shall need the function , which identifies the constants of motion for rays ( photons ) that are unstably trapped in constant- orbits around the black hole . this function is defined parametrically by where is the radius of the trapped orbit , which runs over the interval , with \right\}\ ; , \;\ ; r_2 \equiv 2 \left\{1 + \cos\left[{2\over3 } \arccos(+a)\right]\right\}\;. \label{eq : r1r2}\ ] ] the prescription that we use , in dngr , for computing the ray - tracing map and the blue shift is the following concrete embodiment of the discussion in section [ subsec : raytracing ] . 1 .specify the camera s location , and its speed and the components , , and of its direction of motion relative to the fido at its location ; and specify the ray s incoming direction on the camera s local sky .[ note : if the camera is in a circular , equatorial geodesic orbit around the black hole , then is the geodesic angular velocity at the camera s radius , and the other quantities are defined in equations ( [ eq : kerrquantities ] ) . ] 2 .compute , in the camera s proper reference frame , the cartesian components ( figure [ fig1:skymapa ] ) of the unit vector that points in the direction of the incoming ray 3 . using the equations for relativistic aberration ,compute the direction of motion of the incoming ray , , as measured by the fido in cartesian coordinates aligned with those of the camera : and from these , compute the components of on the fido s spherical orthonormal basis : [ eq : nftransform ] 4 .compute the ray s canonical momenta ( covariant coordinate components of its 4-momentum ) with its conserved energy set to unity as a convenient convention : is the energy measured by the fido .( note : can also be regarded as the ray s wave vector or simply as a tangent vector to the ray . ) then compute the ray s other two conserved quantities : its axial angular momentum and its carter constant : ( note : if we had not set , then would be and would be ( carter constant) . ) 5 .determine , from the ray s constants , whether it comes from the horizon or the celestial sphere by the following algorithm : ( a ) are both of the following conditions satisfied ? : ( v.1 ) with and given by equations ( [ eq : r1r2 ] ) and the function given by equations ( [ eq : boqo ] ) ; and ( v.2 ) the ray s value of lies in the range ( b ) if the answer to ( a ) is _ yes _ , then there are no radial turning points for that , whence if at the camera s location , the ray comes from the horizon ; and if there , it comes from the celestial sphere .( c ) if the answer to ( a ) is _ no _ , then there are two radial turning points for that , and if the camera radius is greater than or equal to the radius of the upper turning point , then the ray comes from the celestial sphere ; otherwise , it comes from the horizon . 
here is the largest real root of , with defined by equations ( [ eqs : prtheta ] ) .if the ray comes from the celestial sphere , then compute its point of origin there as follows : beginning with the computed values of the constants of motion and canonical momenta , and beginning in space at the camera s location , integrate the ray equations \ ; , \nonumber \\ { d p_\theta\over d\zeta } = { \partial \over \partial \theta}\left[- { \delta\over 2\rho^2}\ , p_r^2 - { 1\over 2\rho^2 } \,p_\theta^2 + \left({r+\delta\theta\over 2 \delta \rho^2}\right ) \right ] \ ; \label{eq : rays2}\end{aligned}\ ] ] + numerically , backward in time from to , where is either or some very negative value .( this super - hamiltonian version of the ray equations is well behaved and robust at turning points , by contrast , for example , with a commonly used version of the form , , ... ; eqs .( 33.32 ) of mtw . )the ray s source point on the celestial sphere is , .compute the light s blue shift , i.e. the ratio of its frequency as seen by the camera to its frequency as seen by the source at rest on the celestial sphere , from in this section we give a step - by - step prescription for evolving a tiny bundle of rays ( a light beam ) along a reference ray , from the camera to the celestial sphere , with the goal of learning the beam s major and minor angular diameters and on the celestial sphere , and the angle from on the celestial sphere to the beam s major axis ( cf .figure [ fig1:skymapa ] ) . as in the previous subsection, we set the black hole s mass to unity , .our prescription is a concrete embodiment of the discussion in section [ subsec : raybundle ] ; it is a variant of the sachs optical scalar equations ; and it is a slight modification of a prescription developed in the 1970s by serge pineault and robert roeder . from to in equation ( 6 ) of .this simple change of one foundational equation produced many changes in the final evolution equations for the ray bundle .most importantly , it changed to in eqs .( [ eq : repsios ] ) and ( [ eq : impsios ] ) ; the former diverges at the celestial sphere for our ingoing ray bundles , while the latter is finite there . for a detailed derivation of our prescription ,see . ]our prescription relies on the following functions , which are defined along the reference ray .( a ) the components of the ray bundle s 4-momentum ( wave vector ) on the fido s orthonormal spherical basis ( b ) a function defined on the phase space of the reference ray by -2 p_{\hat \phi}\ , \left(a^2+r^2\right)^3 \cot \theta\right\rgroup \label{eq : calm}\\ \fl \quad -a^2 p_{\hat \phi}\ , \sin \theta \left\lgroup 2 \sqrt{\delta } \,p_{\hat \theta}\ , \sin \theta \left[a^2 ( r-1 ) \cos 2\theta+a^2 ( r-1)+2 r^2 ( r+3)\right ] \right .\fl \quad\quad \left .+ a^2 \delta p^{\hat t}\ , \cos 3\theta+\cos \theta \left[a^4 ( 7 p^{\hat t}-4 p^{\hat r})+a^2 r ( p^{\hat t } \,(15 r-22)-8 p^{\hat r } \,(r-2 ) ) \right.\right .\fl \quad\quad\quad \left.\left .-4 r^3 ( p^{\hat r}\ , ( r-4)-2 p^{\hat t}\ , ( r-3))\right]\right\rgroup \bigg\ } \;. 
\nonumber \end{aligned}\ ] ] \(c ) a complex component of the weyl tensor , evaluated on vectors constructed from the ray s 4-momentum ; its real and imaginary parts work out to be : \right.\nonumber\\ \fl \left .+ 6 p_{\hat r}^2 \left[p_{\hat \theta}^2-(p^{\hat t})^2\ , w - p^{\hat t}\ , p_{\hat \phi } \,s ( w-1)-p_{\hat \phi}^2 ( w+1)\right ] + 2 p^{\hat r } \left[p_{\hat \phi}\ , s ( w-1 ) \left(p_{\hat \phi}^2 - 3 p_{\hat \theta}^2\right )\right.\right.\nonumber\\ \fl \left . \left .+ 3 ( p^{\hat t})^3 w + 3 ( p^{\hat t})^2 p_{\hat \phi}\ , s ( w-1)-3 p^{\hat t}\ , ( w+2 ) \left(p_{\hat \theta}^2-p_{\hat \phi}^2\right)\right]-3 ( p^{\hat t})^4 w-2 ( p^{\hat t})^3 p_{\hat \phi}\ , s ( w-1 ) \right .\nonumber\\ \fl \left .+ 6 ( p^{\hat t})^2 \left[p_{\hat \theta}^2 ( w+1)- p_{\hat \phi}^2\right ] -2 p^{\hat t}\ , p_{\hat \phi } \,s ( w-1 ) \left(p_{\hat \phi}^2 - 3 p_{\hat \theta}^2\right)\right\rgroup \ ; + \ ; 2\ , q_2\,p_{\hat \theta}\ , \left\lgroup3 p_{\hat \phi}\ , w \left(p_{\hat \phi}^2-p_{\hat \theta}^2\right)\right . \nonumber\\ \fl \left .+ p_{\hat r}^3 s(1- w)+3 p_{\hat r}^2 [ p^{\hat t } \,s ( w-1)+p_{\hat \phi}\ , ( w+2 ) ] -p_{\hat r}\ , \left[-s ( w-1 ) \left(p_{\hat \theta}^2 - 3 p_{\hat \phi}^2\right ) + 3 ( p^{\hat t})^2 s ( w-1 ) \right .\right.\nonumber\\ \fl \left.\left .+ 6 p^{\hat t } \,p_{\hat \phi}\ , ( w+2)\right ] + ( p^{\hat t})^3 s ( w-1)+3 ( p^{\hat t})^2 p_{\hat \phi } \,(w+2)-p^{\hat t}\ , s ( w-1 ) \left(p_{\hat \theta}^2 - 3 p_{\hat \phi}^2\right)\right\rgroup \bigg\ } \ ; , \label{eq : repsios } \end{aligned}\ ] ] \right .\nonumber\\ \fl \left .+ 3 p_{\hat r}^2 \left[p_{\hat \theta}^2 ( w+1)-2 ( p^{\hat t})^2 w -p^{\hat t}\ , p_{\hat \phi } \,s ( w-1)-p_{\hat \phi}^2\right]+p_{\hat r}\ , \left[p_{\hat \phi } \,s ( w-1 ) \left(p_{\hat \phi}^2 - 3 p_{\hat \theta}^2\right ) \right .\nonumber\\ \fl \left.\left .+ 3 ( p^{\hat t})^3 w+3 ( p^{\hat t})^2 p_{\hat \phi}\ , s ( w-1 ) -3 p^{\hat t } \,(w+2 ) \left(p_{\hat \theta}^2-p_{\hat \phi}^2\right)\right]-(p^{\hat t})^3 p_{\hat \phi } \,s ( w-1)\right\rgroup \nonumber\\ \fl + 3 ( p^{\hat t})^2 \left[p_{\hat \theta}^2-p_{\hat \phi}^2 ( w+1)\right]-p^{\hat t}\ , p_{\hat \phi } \,s ( w-1 ) \left(p_{\hat \phi}^2 - 3 p_{\hat \theta}^2\right ) + q_1\,p_{\hat \theta}\ ,\left\lgroup3 p_{\hat \phi } \,w \left(p_{\hat \phi}^2-p_{\hat \theta}^2\right ) \right .\\ \fl + p_{\hat r}^3 s(1- w)+3 p_{\hat r}^2 [ p^{\hat t } \,s ( w-1)+p_{\hat \phi}\ , ( w+2)]-p_{\hat r}\ , \left[-s ( w-1 ) \left(p_{\hat \theta}^2 - 3 p_{\hat \phi}^2\right ) + 3 ( p^{\hat t})^2 s ( w-1 ) \right .\fl \left .+ 6 p^{\hat t } \,p_{\hat \phi } \,(w+2)\right]+(p^{\hat t})^3 s ( w-1)+3 ( p^{\hat t})^2 p_{\hat \phi } \,(w+2)-p^{\hat t } \,s ( w-1 ) \left(p_{\hat \theta}^2 - 3 p_{\hat \phi}^2\right)\right\rgroup \bigg\ } \;. \nonumber \label{eq : impsios } \end{aligned}\ ] ] here , , , and are given by we shall state our prescription for computing the shape and orientation of the ray bundle on the celestial sphere separately , for two cases : a ray bundle that begins circular at the camera ; and one that begins elliptical . 
then , we shall briefly sketch the pineault - roeder foundations for these prescriptions .if the ray bundle begins at the camera with a tiny circular angular diameter as measured by the camera , then our prescription for computing its shape and orientation at the celestial sphere is this : 1 .introduce five new functions along the reference ray : , , , , and [ which are denoted , , , and by pineault and roeder ] .give them and their derivatives the following initial conditions at the camera , : here a dot means .2 . integrate the following differential equations backward along the ray toward its source at the celestial sphere , i.e. from to .these are coupled to the ray equations ( [ eq : rays2 ] ) for the reference ray . here ^ 2 + [ \im(\psi_{0*})]^2}\ ; , \quad \psi = \arg(\psi_{0 * } ) -2\chi \;.\ ] ] 3 . upon reaching the celestial sphere at ,evaluate the angular diameters of the bundle s major and minor axes using the equations here is the angular radius that the bundle would have had at the celestial sphere if spacetime curvature had not affected it : where the right hand side is to be evaluated at the camera . also evaluate the bundle s orientation angle at the celestial sphere from \ ; , \label{eq : mu}\ ] ] with the right hand sides evaluated at the celestial sphere , , and with .the first term , , is the contribution from parallel transporting the bundle s major axis along the reference ray ; the second term is the deviation from parallel transport caused by spacetime curvature .our prescription , for a ray bundle that begins elliptical at the camera , is this : 1 . in the camera s proper reference frame ,specify the cartesian components of the bundle s major axis ( a unit vector ) , subject to the constraint that be orthogonal to the ray direction [ equation ( [ eq : cameraangles ] ) ] , .2 . compute the major - axis direction in the fido frame using the transformation equations where is the camera s speed ( in the direction ) relative to the fido at its location .3 . in the camera s reference frame ,specify ( a ) the bundle s angular diameter _ along its major axis _ , and ( b ) the ratio of the angular diameter along the minor axis to that along the major axis .these quantities are the same in the fido frame as in the camera frame : they are insensitive to special relativistic aberration .4 . then the initial conditions for the integration variables , , , , are here is the cartesian inner product , and and are two orthogonal unit vectors that span the plane orthogonal to the fiducial ray : 5 .continue and conclude with steps ( ii ) and ( iii ) of [ subsec : circatcamera ] . for completenesswe briefly describe the pineault - roeder foundations for these ray - bundle - evolution prescriptions . at an arbitrary location along the reference ray , in the reference frame of a fido there , denote by the components of the fiducial ray s unit tangent vector .( these are the same s as in the previous two subsections , extended to the fiducial ray s entire world line . 
) and at this arbitrary , introduce the unit basis vectors and [ eqs .( [ eq : bdef ] ) ] that span the plane orthogonal to the fiducial ray .notice that far from the black hole ( at the celestial sphere ) , where ( negative because the reference ray is incoming ) , , the only nonzero components of these basis vectors are and ; so and .we can think of the transverse plane at as the complex plane , and think of the direction as the real axis for complex numbers , and as the imaginary axis .then any transverse vector can be represented by a complex number , with to describe the ray bundle and its evolution , ( following pineault and roeder ) we introduce two complex numbers ( transverse vectors ) and , whose real and imaginary parts are functions that are evolved by the ray - bundle equations ( [ eq : bundleevolution ] ) : the outer edge of the ray bundle , at location along the reference ray , is described by the complex number where is also evolved by the ray - bundle equations . here is a parameter that ranges from to , and as it changes , it carries the head of the vector around the bundle s edge .that vector , of course , is given by {\bf a } + { \im}[(\xi \,e^{i \sigma } + \eta \,e^{-i\sigma})e^{i\chi } ] { \bf b}\ ; \label{eq : vecyval}\ ] ] [ cf .( [ eq : vecascomplexno ] ) ] , and its tail sits on the reference ray while its head sits on the edge of the bundle .one can verify that the shape of this , with varying from 0 to , is an ellipse .let us denote by and the arguments of and , so , .then eq .( [ eq : ydef ] ) reads .as varies from 0 to , the modulus of ( equal to the length of the vector ) reaches its largest value when the phase of the two terms is the same , i.e. when , or equivalently when .this maximal value of is and the argument of at the max ( the angle of to ) is the major diameter ( not angular diameter ) of the elliptical ray bundle , as measured by a fido at the location , is this multiplied by : when the bundle reaches the celestial sphere , its measured angular diameter is the rate of increase of this major diameter with distance traveled .but at the celestial sphere , the distance traveled is equal to the affine parameter , so the measured angular diameter is .now , the fido - measured light frequency is equal to at the celestial sphere , and the real and imaginary parts of and are increasing linearly with , so it turns out that the formula becomes the first of eqs .( [ eq : deltapm ] ) ; and the angle of the major axis to the real axis , eq .( [ eq : muevolve ] ) , becomes eq .( [ eq : mu ] ) . by an argument similar to the first part of the paragraph before last, one can deduce that anywhere along the evolving ray bundle and correspondingly that the minor angular diameter at the celestial sphere is the second of eqs .( [ eq : deltapm ] ) .the evolution equations ( [ eq : bundleevolution ] ) are the equation of geodesic deviation for the separation vector whose spatial part in a fido reference frame is the vector [ eq .( [ eq : vecyval ] ) ] at fixed ; see section ii of pineault and roeder , and see . 
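the complex - number parametrisation above makes the bundle's ellipse geometry easy to check numerically . the short python sketch below ( ours , not part of dngr ; the variable names and sample values are our choices ) samples the edge vector y(σ) = (ξ e^{iσ} + η e^{-iσ}) e^{iχ} and confirms the relations implied by this parametrisation : the longest edge vector has length |ξ| + |η| , the shortest has length ||ξ| - |η|| , and the major axis makes the angle ( arg ξ + arg η )/2 + χ with the real ( b1 ) axis , modulo π .

```python
import numpy as np

def bundle_ellipse(xi, eta, chi, n=4096):
    """Sample Y(sigma) = (xi*e^{i sigma} + eta*e^{-i sigma}) * e^{i chi} and return
    the numerically found major length, minor length and major-axis angle (mod pi)."""
    sigma = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    Y = (xi * np.exp(1j * sigma) + eta * np.exp(-1j * sigma)) * np.exp(1j * chi)
    r = np.abs(Y)
    k = int(np.argmax(r))
    return r[k], r.min(), np.angle(Y[k]) % np.pi   # an axis direction is only defined mod pi

if __name__ == "__main__":
    xi, eta, chi = 1.3 * np.exp(0.4j), 0.5 * np.exp(-1.1j), 0.25
    major, minor, angle = bundle_ellipse(xi, eta, chi)
    print("major:", round(float(major), 5), " closed form:", abs(xi) + abs(eta))
    print("minor:", round(float(minor), 5), " closed form:", abs(abs(xi) - abs(eta)))
    print("angle:", round(float(angle), 5),
          " closed form:", round((0.5 * (np.angle(xi) + np.angle(eta)) + chi) % np.pi, 5))
```

in dngr , the corresponding quantities at the celestial sphere are the ones delivered by equations ( [ eq : deltapm ] ) and ( [ eq : mu ] ) .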
in dngrwe treat stars as point sources of light with infinitesimal angular size .we trace rays backwards from the camera to the celestial sphere .if we were to treat these rays as infinitely thin , there would be zero probability of any ray intersecting a star ; so instead we construct a beam with a narrow but finite angular width , centred on a pixel on the camera s sky and extending over a small number of adjacent pixels , we evolve the shape of this beam along with the ray , and if the beam intercepts the star , we collect the star s light into it .this gives us some important benefits : * the images of our unresolved stars remain small : they do nt stretch when they get magnified by gravitational lensing .* the fractional change in the beam s solid angle is directly related to the optical magnification due to gravitational lensing and hence to the intensity ( brightness ) change in the image of an unresolved star .* when sampling images of accretion discs or extended structures such as interstellar dust clouds , we minimise moir artefacts by adapting the resampling filter according to the shape of the beam .another consequence is that each star contributes intensity to several pixels .the eye is sensitive to the sum of these and as a single star crosses this grid , this sum can vary depending on the phase of the geometric image of the star on the grid . ]this can cause distracting flickering of stars as they traverse the virtual camera s image plane .we mitigate this effect by modulating the star s intensity across the beam with a truncated gaussian filter and by setting the beam s initial radius to twice the pixel separation . with this filter ,the sum is brightest when the geometric image of the star falls exactly on a pixel and dimmest when centred between four pixels .the shape and width of the filter are designed to give a maximum 2% difference between these extremes , which we found , empirically , was not noticeable in fast changing _ interstellar _ scenes .this result assumes the final size of the ellipse is small and the shape of the beam does not change significantly between adjacent pixels . in extreme casesthese assumptions can break down , leading to a distortion in the shape of a star s image , flickering , and aliasing artefacts . in these caseswe can trace multiple beams per pixel and resample ( see e.g. chapter 7 of ) .however , _ interstellar _ s black hole , gargantua , when visualized , has a moderate spin , which gives rise to few areas of extreme distortion , so we did not observe these problems in images created for the movie . for moving images , we need to filter over time as well as space .a traditional film camera typically exposes the film for half the time between individual frames of film , so for a typical 24 fps ( frame per second ) film , the exposure time will be 1/48 s. ( this fraction is directly related to the camera s `` shutter angle '' : the shutters in film cameras are rotating discs with a slice cut out .the film is exposed when the slice is over the emulsion , and the film moves onto the next frame when covered .the most common shape is a semicircular disk which is a `` 180 degree shutter '' . )any movement of the object or camera during the exposure leads to motion blur . 
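returning to the pixel filter for point stars described earlier in this section : the anti - flickering idea is easy to prototype . the python fragment below ( our illustration ; the production filter's exact shape and width are not reproduced , and the value of sigma here is our choice ) deposits a point star onto the pixel grid with a gaussian weight truncated at twice the pixel separation , and compares the summed weight when the star's geometric image lands exactly on a pixel centre with the sum when it is centred between four pixels ; the difference between these two extremes is the brightness ripple that the filter design keeps below the 2% level quoted above .

```python
import numpy as np

def star_weight_sum(offset_x, offset_y, radius=2.0, sigma=0.9):
    """Total weight a point star deposits on the pixel grid when its geometric
    image sits at (offset_x, offset_y) relative to the nearest pixel centre.

    Weights follow a Gaussian of width sigma (in pixel separations), truncated
    at radius = twice the pixel separation; sigma is an illustrative choice."""
    total = 0.0
    r_int = int(np.ceil(radius)) + 1
    for i in range(-r_int, r_int + 1):
        for j in range(-r_int, r_int + 1):
            d2 = (i - offset_x) ** 2 + (j - offset_y) ** 2
            if d2 <= radius ** 2:                      # truncation of the Gaussian
                total += np.exp(-d2 / (2.0 * sigma ** 2))
    return total

if __name__ == "__main__":
    on_pixel = star_weight_sum(0.0, 0.0)    # star exactly on a pixel centre
    between = star_weight_sum(0.5, 0.5)     # star centred between four pixels
    ripple = (on_pixel - between) / on_pixel
    print(f"summed weight on pixel: {on_pixel:.4f}, between pixels: {between:.4f}")
    print(f"brightness ripple: {100.0 * ripple:.2f}%")
```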
in our case , if the camera is orbiting a black hole , stellar images close to a critical curve appear to zip around in frantic arcs and would appear as streaks of light on a 1/48 s photograph .we aim to reproduce that effect when creating synthetic images for films .this motion blur in computer graphics is typically simulated with monte carlo methods , computing multiple rays per pixel over the shutter duration . in order to cleanly depict these streaks with monte carlo methods, we would need to compute the paths of many additional rays and the computational cost of calculating each ray is very high . instead ,in dngr we take an analytic approach to motion blur ( cf .figure [ figa1:motionblur ] ) by calculating how the motion of the camera influences the motion of the deflected beam : the camera s instantaneous position influences the beam s momentary state , and likewise the camera s motion affects the time derivatives of the beam s state .we augment our ray and ray bundle equations ( [ eq : rays2 ] ) and ( [ eq : bundleevolution ] ) to track these derivatives throughout the beam s trajectory and end up with a description of an elliptical beam being swept through space during the exposure .the beam takes the form of a swept ellipse when it reaches the celestial sphere , and we integrate the contributions of all stars within this to create the motion - blurred image .. ] these additional calculations approximately double the computation time , but this is considerably faster than a naive monte carlo implementation of comparable quality .we can put our analytic method into a more formal setting , using the mathematical description of motion blurred rendering of sung _ , as follows .we introduce the quantity which represents the resulting intensity of a sample located at coordinates on the virtual image plane of the camera , corresponding to a ray direction , and subtending a solid angle , .the sum is over each object , , in the scene , is the time the shutter is open , and represents the radiance of the object directed back towards the camera along from object .the shutter and optics may filter the incident radiance ( specific intensity ) and this is accounted for by the term .the term is a geometric term that takes the value 1 if object is visible or 0 if occluded .considering just the distortion of the celestial sphere to start with , we start with a ray in the direction at time , in the middle of the exposure .we determine whether this ray gets swallowed up by the black hole or continues to the celestial sphere using equations ( [ eq : b1b2 ] ) and ( [ eq : qob ] ) .this corresponds to .we evolve the initial direction and solid angle subtended by the sample into a new direction at the celestial sphere , , and ellipse , using the method described in [ subsec : app - raytrace ] and [ subsec : app - raybundle ] .when the ray strikes the celestial sphere , we integrate over the area , weighting the integration with our truncated gaussian filter , corresponding to .( we assume there is no time - dependent aspect to this filter and ignore the term .we also assume the celestial sphere is static . 
)our motion blur method approximates with the following expression : where the partial derivative is evaluated at the centre of the beam and at .this represents a ray that can be swept over an arc in the celestial sphere by varying the value of .this is convolved with the beam shape and filter and occlusion terms to give : in this form , the consequences of our approximations become clearer : by evaluating only at the beam centre and shutter - centre , a sample that crosses the black hole s shadow during the exposure would be given a constant value of ( either 0 or 1 ) instead of the correct expression that transitions during the exposure . this would lead to a hard edge at the shadow if the camera performed a fast pan .luckily , in _interstellar _ , the design of the shots meant this rarely happened : the main source of motion blur was the movement of the spaceship around the black hole , and not local camera motion . to handle any cases where this approximation might cause a problem , we also implemented a hybrid method where we launch a small number of monte carlo rays over the shutter duration and each ray sweeps a correspondingly shorter analytic path .we used a very similar technique for the accretion disk .dngr was written in c++ as a command - line application .it takes as input the camera s position , velocity , and field of view , as well as the black hole s location , mass and spin , plus details of any accretion disk , star maps and nebulae .each of the 23 million pixels in an imax image defines a beam that is evolved as described in [ subsec : app - raybundle ] .the beam s evolution is described by the set of coupled first and second order differential equations ( [ eq : rays2 ] ) and ( [ eq : bundleevolution ] ) that we put into fully first order form and then numerically integrate backwards in time using a custom implementation of the runge - kutta - fehlberg method ( see , e.g. , chapter 7 of ) .this method gives an estimate of the truncation error at every integration step so we can adapt the step size during integration : we take small steps when the beam is bending sharply relative to our coordinates , and large steps when it is bending least .we use empirically determined tolerances to control this behaviour . evolving the beam along with its central ray triplesthe time per integration step , on average , compared to evolving only the central ray .the beam either travels near the black hole and then heads off to the celestial sphere ; or it goes into the black hole and is never seen again ; or it strikes the accretion disk , in which case it gets attenuated by the disk s optical thickness to extinction or continues through and beyond the disk to pick up additional data from other light sources , but with attenuated amplitude .we use automatic differentiation to track the derivatives of the camera motion through the ray equations .each pixel can be calculated independently , so we run the calculations in parallel over multiple cpu cores and over multiple computers on our render - farm .we use the openvdb library to store and navigate volumetric data and autodesk s maya to design the motion of the camera .( the motion is chosen to fit the film s narrative . 
) a custom plug - in running within maya creates the command line parameters for each frame .these commands are queued up on our render - farm for off - line processing .a typical imax image has 23 million pixels , and for _ interstellar _ we had to generate many thousand images , so dngr had to be very efficient .it has 40,000 lines of c++ code and runs across double negative s linux - based render - farm .depending on the degree of gravitational lensing in an image , it typically takes from 30 minutes to several hours running on 10 cpu cores to create a single imax image .the longest renders were those of the close - up accretion disk when we shoe - horned dngr into mantra .for _ interstellar _ , render times were never a serious enough issue to influence shot composition or action .our london render - farm comprises 1633 dell - m620 blade servers ; each blade has two 10-core e5 - 2680 intel xeon cpus with 156 gb ram . during production of _interstellar _ , several hundred of these were typically being used by our dngr code .our code dngr includes three different implementations of an accretion disk : _ thin disk _ : for this , we adapted our dngr ray - bundle code so it detects intersections of the beam with the disk . at each intersection, we sample the artist s image in the manner described in section [ subsec : raybundle ] and [ subsec : app - filtering ] above , to determine the colour and intensity of the bundle s light and we attenuate the beam by the optical thickness ( cf . [ subsec : app - implementation ] ) ._ volumetric disk _ : the volumetric accretion disk was built by an artist using sidefx houdini software and stored in a volumetric data structure containing roughly 17 million voxels ( a fairly typical number ) .each voxel describes the optical density and colour of the disk in that region .we used extinction - based sampling to build a mipmap volume representation of this data .the length of the ray bundle was split into short piecewise - linear segments which , in turn , were traced through the voxels .we used the length of the major axis of the beam s cross section to select the closest two levels in the mipmap volume ; and in each level we sampled the volume data at steps the length of a voxel , picking up contributions from the colour at each voxel and attenuating the beam by the optical thickness .the results from the two mipmap levels were interpolated before moving on to the next line segment . 
_ close - up disk with procedural textures _ : side effects software s renderer , mantra , has a plug - in architecture that lets you modify its operation .we embedded the dngr ray - tracing code into a plug - in and used it to generate piecewise - linear ray segments which were evaluated through mantra .this let us take advantage of mantra s procedural textures and shading language to create a model of the accretion disk with much more detail than was possible with the limited resolution of a voxelised representation .however this method was much slower so was only used when absolutely necessary ._ disk layers close to the camera _ the accretion - disk images in _ interstellar _ were generated in layers that were blended together to form the final images .occasionally a layer only occupied space in the immediate vicinity of the camera , so close that the infuences of spacetime curvature and gravitational redshifts were negligible .these nearby layers were rendered as if they were in flat spacetime , to reduce computation time ._ frequency shifts ._ in our visualizations that included doppler and gravitational frequency shifts , we modelled the accretion disk as a constant - temperature blackbody emitter and we estimated the star temperatures from published nasa data .the doppler and gravitational frequency shifts correspond to a temperature shift in the blackbody spectra on arrival at the camera .we convolve those temperature - shifted black body spectra with the published spectral sensitivity curves of a typical motion picture film to generate separate red , green and blue radiance values , , and . for the volumetric disk ,these are calculated at each step and used in place of artist - created radiance values . in figure[ fig15:disk]b we removed the doppler - induced intensity change by dividing figure [ fig15:disk]d s rgb triplet of radiance values by a weighted mean that , empirically , leaves the eye - perceived colours unchanged : .this dimmed the blue side of the disk and brightened the red side .we set the white balance of our virtual camera to render a 6500 k blackbody spectrum with equal red , green and blue pixel values by applying a simple gain to each colour channel .we did not model the complex , nonlinear interaction between the colour - sensitive layers that occurs in real film . near the end of section [ subsec : previous ] , we compared our code dngr with the state - of - the - art astrophysical visualization code gray . the most important differences dngr s use of light - beam mappings vs. gray s use of individual - ray mappings , and dngr s use of ordinary processors vs. gray s use of gpus were motivated by our different goals : smoothness of images in movies vs. fast throughput in astrophysics .so far as we are aware , our code is unique in using light - beam - based mappings .no other code , astrophysical or film cgi , uses them . however , some film cgi codes use flat - spacetime mappings from the camera s image plane to the source plane that are mathematically equivalent to our light beams .specifically , they use _ ray differential _ techniques described by igehy , which track , along a ray , the transverse derivatives of the locations of nearby rays , thereby producing the matrix , where is a ray s source - plane location as a function of its image - plane location . 
tracking these differentials in flat spacetime is moderately easy because they are constant along straight paths between reflections , refractions , etc ; and they change via simple matrix transformation laws at interaction locations . tracking them through curved spacetime would entail the same geodesic - deviation technique as underlies our ray - bundle mappings .in [ sec : pointstars ] we have described our methods of imaging star fields , using stars that are point sources of light : we feed each star s light into every light beam that intersects the star s location , with an appropriate weighting .other gravitational - lensing codes deal with star fields differently .for example , mller and frauendiener calculate where each point - star ends up in the camera s image plane , effectively producing the inverse of the traditional ray - tracing algorithm .they do this for lensing by a non - spinning black hole ( schwarzchild metric ) which has a high degree of symmetry , making this calculation tractable . doing so for a spinning black hole would be much more challenging .a common method for rendering a star - field is to create a 2d source picture ( environment map ) that records the stars positions as finite - sized dots in the source plane ( see e.g. ) .this source picture is then sampled using the evolved rays from the camera .this has the disadvantage that stars can get stretched in an unrealistic way in areas of extreme magnification , such as near the critical curves described in section [ subsec : kerrcaustics ] of this paper . as we discussed in [ sec : pointstars ] , our dngr light - beam technique with point stars circumvents this problem as does the mller - frauendiener technique with point stars .10 marck , jean - alain 1996 , short - cut method of solution of geodesic equations for schwarzschild black hole _ class .quantum grav . _ * 13 * 393402 .see also marck j - a and luminet j - p 1997 plongeon dans un trou noir _pour la science _ hors - srie `` les trous noirs '' ( july 1997 ) 5056 .marck , jean - alain 1991 general relativity calculation underlying movie online at ` https://www.youtube.com/watch?v=5oqop50ltrm ` ; conversion into a movie first appeared in the documentary film _ infinitely curved _ by delesalle a , lachize - rey m and luminet j - p , cnrs / arte , france 1994 .mckinney j c and blandford r d 2009 stability of relativistic jets from rotating , accreting black holes via fully three - dimensional magnetohydrodynamics simulations _ mon not roy astron soc _ * 394 * l126l130 museth k , lait j , johanson j , budsberg j , henderson r , alden m , cucka p , hill d and pearce a 2013 openvdb : an open - source data structure and toolkit for high - resolution volumes , in _ acm siggraph 2013 courses _( new york : acm )
_ interstellar _ is the first hollywood movie to attempt depicting a black hole as it would actually be seen by somebody nearby . for this , our team at _ double negative visual effects _ , in collaboration with physicist kip thorne , developed a code called dngr ( double negative gravitational renderer ) to solve the equations for ray - bundle ( light - beam ) propagation through the curved spacetime of a spinning ( kerr ) black hole , and to render imax - quality , rapidly changing images . our ray - bundle techniques were crucial for achieving imax - quality smoothness without flickering ; and they differ from physicists image - generation techniques ( which generally rely on individual light rays rather than ray bundles ) , and also differ from techniques previously used in the film industry s cgi community . this paper has four purposes : ( i ) to describe dngr for physicists and cgi practitioners , who may find interesting and useful some of our unconventional techniques . ( ii ) to present the equations we use , when the camera is in arbitrary motion at an arbitrary location near a kerr black hole , for mapping light sources to camera images via elliptical ray bundles . ( iii ) to describe new insights , from dngr , into gravitational lensing when the camera is near the spinning black hole , rather than far away as in almost all prior studies ; we focus on the shapes , sizes and influence of caustics and critical curves , the creation and annihilation of stellar images , the pattern of multiple images , and the influence of almost - trapped light rays , and we find similar results to the more familiar case of a camera far from the hole . ( iv ) to describe how the images of the black hole gargantua and its accretion disk , in the movie _ interstellar _ , were generated with dngr including , especially , the influences of ( a ) colour changes due to doppler and gravitational frequency shifts , ( b ) intensity changes due to the frequency shifts , ( c ) simulated camera lens flare , and ( d ) decisions that the film makers made about these influences and about the gargantua s spin , with the goal of producing images understandable for a mass audience . there are no new astrophysical insights in this accretion - disk section of the paper , but disk novices may find it pedagogically interesting , and movie buffs may find its discussions of _ interstellar _ interesting . classical and quantum gravity * 32 * ( 2015 ) 065001 . received 27 november 2014 , revised 12 january 2015 accepted for publication 13 january 2015 published 13 february 2015
chemotaxis , the directed movement of cells in response to chemical gradients , plays an important role in many biological fields , such as embryogenesis , immunology , cancer growth , and wound healing . at the macroscopic level ,chemotaxis models can be formulated in terms of the cell density and the concentration of the chemical signal .a classical model to describe the time evolution of these two variables is the ( patlak- ) keller - segel system , suggested by patlak in 1953 and keller and segel in 1970 . assuming that the time scale of the chemical signal is much larger than that of the cell movement , the classical parabolic - elliptic keller - segel equations read as follows : where is a bounded domain or , with homogeneous neumann boundary and initial conditions .the parameter is the secretion rate at which the chemical substance is emitted by the cells .the nonlinear term models the cell movement towards higher concentrations of the chemical signal .this model exhibits the phenomenon of cell aggregation .the more cells are aggregated , the more the attracting chemical signal is produced by the cells .this process is counterbalanced by cell diffusion , but if the cell density is sufficiently large , the nonlocal chemical interaction dominates diffusion and results in a blow - up of the cell density . in two space dimensions , the critical threshold for blow - upis given by if is a bounded connected domain with boundary and in the radial and whole - space case .the existence and uniqueness of smooth global - in - time solutions in the subcritical case is proved for bounded domains in and in the whole space in . in the critical case ,a global whole - space solution exists , which becomes unbounded as .furthermore , there exist radially symmetric initial data such that , in the supercritical case , the solution forms a -singularity in finite time . motivated by numerical and modeling issues , the question how blow up can be avoided has been investigated intensively the last years .it has been suggested to modify the chemotactic sensitivity ( modeling , e.g. , volume - filling effects ) , to allow for degenerate cell diffusion , or to include suitable growth - death terms .we refer to for references .another idea is to introduce additional cell diffusion in the equation for the chemical concentration .this diffusion term avoids , even for arbitrarily small diffusion constants , the blow - up and leads to the global - in - time existence of weak solutions .the model , which is investigated in this paper , reads as follows : where is the additional diffusion constant .we impose the homogeneous neumann boundary and initial conditions the advantage of the additional diffusion term is that blow - up of solutions is translated to large gradients which may help to determine the blow - up time numerically .another advantage is that the enlarged system exhibits an interesting entropy structure ( see below ) . at first sight, the additional term seems to complicate the mathematical analysis .indeed , the resulting diffusion matrix of the system is neither symmetric nor positive definite , and we can not apply the maximum principle to the equation for the chemical signal anymore . 
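to make the role of the additional diffusion term concrete , here is a small one - dimensional illustration in python . it assumes one common way of writing the modified parabolic - elliptic system , ∂_t n = ∂_x ( ∂_x n - χ n ∂_x c ) and 0 = ∂_xx c + δ ∂_xx n - c + α n with homogeneous neumann boundary conditions ; the chemotactic sensitivity χ , the explicit time stepping and all parameter values are our choices for illustration and are not taken from this paper , whose scheme ( introduced below ) is a fully implicit finite volume method .

```python
import numpy as np

def neumann_laplacian(N, h):
    """1d finite-difference Laplacian with homogeneous Neumann boundaries."""
    L = np.zeros((N, N))
    for i in range(N):
        if i > 0:
            L[i, i - 1] = 1.0
            L[i, i] -= 1.0
        if i < N - 1:
            L[i, i + 1] = 1.0
            L[i, i] -= 1.0
    return L / h**2

def step(n, L, h, dt, alpha, chi, delta):
    """One explicit time step of the modified parabolic-elliptic system (sketch)."""
    N = n.size
    # elliptic equation for the chemical: (-Lap + I) c = alpha*n + delta*Lap(n)
    c = np.linalg.solve(np.eye(N) - L, alpha * n + delta * (L @ n))
    # interface fluxes  F = dn/dx - chi * n * dc/dx
    dn = np.diff(n) / h
    dc = np.diff(c) / h
    n_face = 0.5 * (n[:-1] + n[1:])
    F = dn - chi * n_face * dc
    F = np.concatenate(([0.0], F, [0.0]))       # zero-flux boundaries
    return n + dt * np.diff(F) / h

if __name__ == "__main__":
    N, h = 200, 1.0 / 200
    x = (np.arange(N) + 0.5) * h
    L = neumann_laplacian(N, h)
    n = 1.0 + 5.0 * np.exp(-200.0 * (x - 0.5) ** 2)   # a cell-density bump
    dt, alpha, chi, delta = 2.0e-6, 1.0, 1.0, 1.0e-2
    mass0 = n.sum() * h
    for _ in range(2000):
        n = step(n, L, h, dt, alpha, chi, delta)
    print("mass conserved:", np.isclose(n.sum() * h, mass0),
          " max density:", round(float(n.max()), 3))
```

the zero - flux discretisation conserves the discrete mass exactly , which is one of the structural properties the finite volume scheme analysed below is designed to preserve .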
it was shown in that all these difficulties can be resolved by the observation that the above system possesses a logarithmic entropy , which is dissipated according to suitable gagliardo - nirenberg inequalities applied to the right - hand side lead to gradient estimates for and , which are the starting point for the global existence and long - time analysis . in this paper, we aim at developing a finite volume scheme which preserves the entropy structure on the discrete level by generalizing the scheme proposed in .in contrast to , we are able to prove the existence of discrete solutions and their numerical convergence to the continuous solution for all values of the initial mass . moreover , we show that the discrete solution converges for large times to the homogeneous steady state if or are sufficiently small , using a new discrete logarithmic sobolev inequality ( proposition [ prop.dlsi ] ) . in the literature, there exist several approaches to solve the classical keller - segel system numerically .the parabolic - elliptic model was approximated by using finite difference or finite element methods .also a dynamic moving - mesh method , a variational steepest descent approximation scheme , and a stochastic particle approximation were developed . concerning numerical schemes for the parabolic - parabolic model ( in which is added to the second equation in ) , we mention the second - order central - upwind finite volume method of , the discontinuous finite element approach of , and the conservative finite element scheme of .we also cite the paper for a mixed finite element discretization of a keller - segel model with nonlinear diffusion .there are only a few works in which a numerical analysis of the scheme was performed .filbet proved the existence and numerical convergence of finite volume solutions .error estimates for a conservative finite element approximation were shown by saito .epshteyn and izmirlioglu proved error estimates for a fully discrete discontinuous finite element method .convergence proofs for other schemes can be found in , e.g. 
, .this paper contains the first numerical analysis for the keller - segel model with additional cross - diffusion .its originality comes from the fact that we `` translate '' all the analytical properties of on a discrete level , namely positivity preservation , mass conservation , entropy stability , and entropy dissipation ( under additional hypotheses ) .the paper is organized as follows .section [ sec.main ] is devoted to the description of the finite volume scheme and the formulation of the main results .the existence of a discrete solution is shown in section [ sec.ex ] .a discrete version of the entropy - dissipation relation and corresponding gradient estimates are proved in section [ sec.est ] .these estimates allow us to obtain in section [ sec.conv ] the convergence of the discrete solution to the continuous one when the approximation parameters tend to zero .a proof of the discrete logarithmic sobolev inequality is given in section [ sec.dlsi ] .the long - time behavior of the discrete solution is investigated in section [ sec.long ] .finally , we present some numerical examples in section [ sec.num ] and compare the discrete solutions to our model with those computed from the classical keller - segel system .in this section , we introduce the finite volume scheme and present our main results .let be an open , bounded , polygonal subset .an admissible mesh of is given by a family of control volumes ( open and convex polygons ) , a family of edges , and a family of points which satisfy definition 9.1 in .this definition implies that the straight line between two neighboring centers of cells is orthogonal to the edge .for instance , voronoi meshes are admissible meshes ( * ? ? ? * example 9.2 ) .triangular meshes satisfy the admissibility condition if all angles of the triangles are smaller than ( * ? ? ?* example 9.1 ) .we distinguish the interior edges and the boundary edges .the set of edges equals the union . for a control volume ,we denote by the set of its edges , by the set of its interior edges , and by the set of edges of included in .furthermore , we denote by d the distance in and by m the lebesgue measure in or .we assume that the family of meshes satisfies the following regularity requirement : there exists such that for all and all with , it holds this hypothesis is needed to apply discrete sobolev - type inequalities . introducing for the notation we definethe transmissibility coefficient the size of the mesh is denoted by let be some final time and the number of time steps . then the time step size and the time points are given by , respectively , we denote by an admissible space - time discretization of composed of an admissible mesh of and the values and .the size of this space - time discretization is defined by .let be the linear space of functions which are constant on each cell .we define on the discrete norm , discrete seminorm , and discrete norm by , respectively , where , , and for .we are now in the position to define the finite volume discretization of - .let be a finite volume discretization of .the initial datum is approximated by its projection on control volumes : and is the characteristic function on . denoting by and approximations of the mean value of and on , respectively, the numerical scheme reads as follows : for all and . 
here , , , and the approximation is computed from with .this scheme is based on a fully implicit euler discretization in time and a finite volume approach for the volume variable .the implicit scheme allows us to establish discrete entropy - dissipation estimates which would not be possible with an explicit scheme .this approximation is similar to that in except the additional cross - diffusion term in the second equation .the numerical approximations and of and are defined by ,\ ] ] and .furthermore , we define approximations and of the gradients of and , respectively . to this end , we introduce a dual mesh : for and , let be defined by : * if , is the cell ( `` diamond '' ) whose vertices are given by , , and the end points of the edge . * if , is the cell ( `` triangle '' ) whose vertices are given by and the end points of the edge .an example of construction of can be found in .clearly , defines a partition of .the approximate gradient is a piecewise constant function , defined in by where is given as in and is the unit vector normal to and outward to .the approximate gradient is defined in a similar way .our first result is the existence of solutions to the finite volume scheme .[ thm.ex ] let be an open , bounded , polygonal subset and let be an admissible discretization of .the initial datum satisfies , in . then there exists a solution to - satisfying properties and show that the scheme is positivity preserving and mass conserving .it is also entropy stable ; see below .let be a sequence of admissible space - time discretizations indexed by the size of the discretization .we denote by the corresponding meshes of .we suppose that these discretizations satisfy uniformly in , i.e. , does not depend on .let be a finite volume solution , constructed in theorem [ thm.ex ] , on the discretization .we set .our second result concerns the convergence of to a weak solution to - .[ thm.conv ] let the assumptions of theorem [ thm.ex ] hold . furthermore , let be a sequence of admissible discretizations satisfying uniformly in , and let be a sequence of finite volume solutions to -. then there exists such that , up to a subsequence , and is a weak solution to - in the sense of for all test functions ) ] in semi - logarithmic scale . in all cases shown , the convergence seems to be of exponential rate .the rate becomes larger for larger values of or smaller values of which is in agreement with estimate .in fact , the constant is proportional to ( see theorem [ thm.long ] ) and the rate improves if is smaller . as a numerical check , we computed the evolution of the relative entropies for different grid sizes and different time step sizes .figure [ fig.ent2 ] shows that the decay rate does not depend on the time step or the mesh considered . in this subsection, we explore the behavior of the solutions to for different values of .we choose with a cartesian grid , , and .we consider two nonsymmetric initial functions with mass : where , , and ( see figure [ fig.init ] ) .we consider first the case , which corresponds to the classical parabolic - elliptic keller - segel system . 
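as a concrete illustration of the mesh notation used above , the python fragment below ( ours , not from the paper ; the mesh size and helper names are arbitrary ) assembles the matrix built from the transmissibility coefficients τ_σ = m(σ)/d(x_K , x_L) on a uniform cartesian mesh , which is admissible in the sense recalled in this section , and checks two structural properties of the two - point flux discretisation : symmetry and zero row sums , the discrete counterpart of the no - flux boundary condition and of mass conservation .

```python
import numpy as np

def cartesian_transmissibilities(nx, ny, lx=1.0, ly=1.0):
    """Assemble the diffusion matrix built from tau_sigma = m(sigma)/d(x_K, x_L)
    on a uniform nx-by-ny Cartesian mesh of (0,lx)x(0,ly); cell centres play the
    role of the points x_K, and boundary edges carry no flux (Neumann)."""
    hx, hy = lx / nx, ly / ny
    N = nx * ny
    A = np.zeros((N, N))
    idx = lambda i, j: j * nx + i
    for j in range(ny):
        for i in range(nx):
            K = idx(i, j)
            # interior edges shared with the right and the upper neighbour
            for (di, dj, m_sigma, dist) in [(1, 0, hy, hx), (0, 1, hx, hy)]:
                ii, jj = i + di, j + dj
                if ii < nx and jj < ny:
                    Lcell = idx(ii, jj)
                    tau = m_sigma / dist          # tau_sigma = m(sigma)/d(x_K, x_L)
                    A[K, K] += tau; A[Lcell, Lcell] += tau
                    A[K, Lcell] -= tau; A[Lcell, K] -= tau
    return A

if __name__ == "__main__":
    A = cartesian_transmissibilities(8, 4)
    print("symmetric:", np.allclose(A, A.T))
    print("zero row sums (no-flux, mass conservation):", np.allclose(A.sum(axis=1), 0.0))
```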
in this case ,our finite volume scheme coincides with that of .we recall that solutions to the classical parabolic - elliptic model blow up in finite time if the initial mass satisfies ( in the non - radial case ) .the numerical results at a time just before the numerical blow - up are presented in figure [ fig.nonsymm.0 ] .we observe the blow - up of the cell density in finite time , and the blow - up occurs at the boundary , as expected .more precisely , it occurs at that corner which is closest to the global maximum of the initial datum .next , we choose and .according to theorem [ thm.ex ] , the numerical solution exists for all time .this behavior is confirmed in figure [ fig.nonsymm.23 ] , where we show the cell density at time . at this time , the solution is very close to the steady state which is nonhomogeneous .we observe a smoothing effect of the cross - diffusion parameter ; the cell density maximum decreases with increasing values of .we consider , as in the previous subsection , the domain with a cartesian grid , , and . here , we consider the radially symmetric initial datum with and . since andthe initial datum is radially symmetric , we expect that the solution to the classical keller - segel model ( ) blows up in finite time .figure [ fig.symm.0 ] shows that this is indeed the case , and blow - up occurs in the center of the domain .in contrast to the classical keller - segel system , when taking , the cell density peak moves to a corner of the domain and converges to a nonhomogeneous steady state ( see figure [ fig.symm.3 ] ) .the time evolution of the norm of the cell density shows an interesting behavior ( see figure [ fig.symm.linfty ] ) .we observe two distinct levels .the first one is reached almost instantaneously .the norm stays almost constant and the cell density seems to stabilize at an intermediate symmetric state ( figure [ fig.symm.3]a ) .after some time , the norm increases sharply and the cell density peak moves to the boundary ( figure [ fig.symm.3]b ) .then the solution stabilizes again ( figure [ fig.symm.3]c ) .we note that we obtain the same steady state when using a gaussian centered at .we consider the domain and compute the approximate solutions on a cartesian grid with .the secretion rate is again , and we choose the initial data and , defined in - with mass .if , the solution blows up in finite time and the blow up occurs in a corner as in the square domain ( see figure [ fig.nonsymm.r0 ] ) . if , the approximate solutions converge to a non - homogeneous steady state ( figure [ fig.nonsymm.r3 ] ) .interestingly , before moving to the corner , the solution evolving from the nonsymmetric initial datum shows some intermediate behavior ; see figure [ fig.nonsymm.r3]b .the domain is still the rectangle , we take a cartesian grid , , and .we choose the initial datum , defined in , with .clearly , the approximate solution to the classical keller - segel model blows up in finite time in the center of the rectangle .when , the cell density peak first moves to the closest boundary point before moving to a corner of the domain , as in the square domain ( figure [ fig.symm.r ] ) .however , in contrast to the case of a square domain , there exist _ two _ intermediate states , one up to time and another in the interval , and one final state for long times ( see figure [ fig.symm.rl ] ) .we note that the same qualitative behavior is obtained using .c. budd , r. carretero - gonzlez , and r. 
russell . precise computations of chemotactic collapse using moving mesh methods . _ j. comput . phys . _ 202 ( 2005 ) , 463 - 487 .
m. burger , j. a. carrillo , and m .- t . wolfram . a mixed finite element method for nonlinear diffusion equations . _ kinetic and related models _ 3 ( 2010 ) , 59 - 83 .
a. guionnet and b. zegarlinski . lectures on logarithmic sobolev inequalities . in : j. azéma ( eds . ) , _ séminaire de probabilités _ , vol . 36 , pp . 1 - 134 , lect . notes math . 1801 , springer , berlin , 2003 .
a finite volume scheme for the ( patlak- ) keller - segel model in two space dimensions with an additional cross - diffusion term in the elliptic equation for the chemical signal is analyzed . the main feature of the model is that there exists a new entropy functional yielding gradient estimates for the cell density and the chemical concentration . the main features of the numerical scheme are positivity preservation , mass conservation , entropy stability , and , under additional assumptions , entropy dissipation . the existence of a discrete solution and its numerical convergence to the continuous solution are proved . furthermore , temporal decay rates for the convergence of the discrete solution to the homogeneous steady state are shown using a new discrete logarithmic sobolev inequality . numerical examples indicate that the solutions exhibit intermediate states and that there exist nonhomogeneous stationary solutions with a finite cell density peak at the domain boundary .
diffusion processes are widely used to model financial asset price processes . for example , suppose that we have multiple stocks , say , stocks whose price processes are denoted by for , and are the log price processes .let .then a widely used model for is [ see , e.g. , definition 1 in ] where , is a -dimensional drift process ; is a matrix for any , and is called the ( instantaneous ) _ covolatility process _ ; and is a -dimensional standard brownian motion .the _ integrated covariance _ ( icv ) matrix is of great interest in financial applications , which in the one dimensional case is known as the _integrated volatility_. a widely used estimator of the icv matrix is the so - called _ realized covariance _ ( rcv ) matrix , which is defined as follows .assume that we can observe the processes s at high frequency synchronously , say , at time points : then the rcv matrix is defined as \\[-8pt ] & & \eqntext{\mbox{where } \delta\mathbf{x}_{\ell } = \pmatrix { \delta x^{(1)}_\ell\cr \vdots\cr \delta x^{(p)}_\ell } : = \pmatrix { x^{(1)}_{\tau_{n,\ell}}-x^{(1)}_{\tau_{n,\ell-1}}\cr \vdots\cr x^{(p)}_{\tau_{n,\ell}}-x^{(p)}_{\tau_{n,\ell-1}}}.}\end{aligned}\ ] ] in the one dimensional case , the rcv matrix reduces to the _ realized volatility_. thanks to its nice convergence to the icv matrix as the observation frequency goes to infinity [ see ] , the rcv matrix is highly appreciated in both academic research and practical applications . the tick - by - tick dataare usually not observed synchronously , and moreover are contaminated by market microstructure noise . on sparsely sampled data ( e.g. , 5-minute data for some highly liquid assets , or subsample from data synchronized by refresh times [ ] ), the theory in this paper should be readily applicable , just as one can use the realized volatility based on sparsely sampled data to estimate the integrated volatility ; see , for example , . having a good estimate of the icv matrix , in particular , its spectrum ( i.e., its set of eigenvalues ) , is crucial in many applications such as principal component analysis and portfolio optimization ( see , e.g. , the pioneer work of markowitz ( ) and a more recent work [ ] ) .when the dimension is high , it is more convenient to study , instead of the eigenvalues , the associated _ empirical spectral distribution _ ( esd ) a naive estimator of the spectrum of the icv matrix is the spectrum of the rcv matrix .in particular , one wishes that the esd of would approximate well when the frequency is sufficiently high . from the large dimensional random matrix theory ( ldrmt ), we now understand quite well that in the high dimensional setting this good wish wo nt come true .for example , in the simplest case when the drift process is 0 , covolatility process is constant , and observation times are equally spaced , namely , , we are in the setting of estimating the usual _ covariance matrix _ using the _ sample covariance matrix _ , given i.i.d .-dimensional observations . from ldrmt , we know that if converges to a non - zero number and the esd of the true covariance matrix converges , then the esd of the sample covariance matrix also converges ; see , for example , , , and .the relationship between the _ limiting spectral distribution _ ( lsd ) of in this case and the lsd of can be described by a marenko pastur equation through stieltjes transforms , as follows .[ propmpgen ] assume on a common probability space : for and for , with i.i.d . 
with mean 0 and variance 1 ; [ asmyn ] with as ; [ asmconvsigma ] is a ( possibly random ) nonnegative definite matrix such that its esd converges almost surely in distribution to a probability distribution on as ; and s are independent .let be the ( nonnegative ) square root matrix of and .then , almost surely , the esd of converges in distribution to a probability distribution , which is determined by in that its stieltjes transform solves the equation in the special case when , where is the identity matrix , the lsd can be explicitly expressed as follows .[ propmp ] suppose that s are as in the previous proposition , and for some .then the lsd has density and a point mass at the origin if , where the lsd in this proposition is called the marenko pastur law with ratio index and scale index , and will be denoted by mp in this article . in practice , the covolatility process is typically not constant .for example , it is commonly observed that the stock intraday volatility tends to be u - shaped [ see , e.g. , , ] or exhibits some other patterns [ see , e.g. , ] . in this article , we shall allow them to be not only varying in time but also stochastic .furthermore , we shall allow the observation times to be random .these generalizations make our study to be different in nature from the ldrmt : in ldrmt the observations are i.i.d . ; in our setting , the observations may , first , be dependant with each other , and second , have different distributions because ( i ) the covolatility process may vary over time , and ( ii ) the observation durations may be different . in general , for any time - varying covolatility process , we associate it with a constant covolatility process given by the square root of the icv matrix .\ ] ] let be defined by replacing with the constant covolatility process ( and replacing with 0 , and with another independent brownian motion , if necessary ) in ( [ eqx ] ) .observe that and share the same icv matrix at time 1 . based on , we have an associated rcv matrix which is estimating the same icv matrix as .since and are based on the same estimation method and share the same targeting icv matrix , it is desirable that their esds have similar properties .in particular , based on the results in ldrmt and the discussion about constant covolatility case in section [ ssecldrmt ] , we have the following property for : if the esd converges , then so does ; moreover , their limits are related to each other via the marenko pastur equation ( [ eqstfh0 ] ) .does this property also hold for ?our first result ( proposition [ propnotconv ] ) shows that even in the most ideal case when the covolatility process has the form for some deterministic ( scalar ) function , such convergence results may _ not _ hold for . in particular , the limit of ( when it exists ) changes according to how the covolatility process evolves over time .this leads to the following natural and interesting question : how does the lsd of rcv matrix depend on the time - variability of the covolatility process ? answering this question in a general context without putting any structural assumption onthe covolatility process seems to be rather challenging , if not impossible . 
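a minimal numerical sketch of the objects just discussed may help fix ideas : the rcv matrix built from synchronous log - price increments as in ( eqrcv ) , its esd , and the marenko pastur density that serves as the benchmark in the constant - covolatility case . the parameter values and function names below are illustrative assumptions :

```python
import numpy as np

def rcv_matrix(X):
    """Realized covariance from an (n+1) x p array of synchronously observed log prices."""
    dX = np.diff(X, axis=0)          # increments Delta X_l, shape (n, p)
    return dX.T @ dX                 # sum of outer products Delta X_l Delta X_l^T

def esd(S):
    """Empirical spectral distribution: sorted eigenvalues of a symmetric matrix."""
    return np.sort(np.linalg.eigvalsh(S))

def mp_density(x, y, sigma2=1.0):
    """Marcenko-Pastur density with ratio y = p/n and scale sigma2 (absolutely continuous part)."""
    a = sigma2 * (1 - np.sqrt(y)) ** 2
    b = sigma2 * (1 + np.sqrt(y)) ** 2
    out = np.zeros_like(x, dtype=float)
    inside = (x > a) & (x < b)
    out[inside] = np.sqrt((b - x[inside]) * (x[inside] - a)) / (2 * np.pi * sigma2 * y * x[inside])
    return out

# constant-covolatility toy example: p stocks, n equispaced increments, ICV = identity
rng = np.random.default_rng(0)
p, n = 100, 400
X = np.cumsum(rng.standard_normal((n, p)) / np.sqrt(n), axis=0)
X = np.vstack([np.zeros(p), X])      # prepend X_0 = 0
lam = esd(rcv_matrix(X))             # eigenvalues should roughly fill the MP support with y = p/n
grid = np.linspace(0.01, 4.0, 200)
print(lam[:3], lam[-3:])
print(mp_density(grid, y=p / n)[:5])
```

with constant covolatility the eigenvalues of the rcv matrix roughly fill the marenko pastur support ; the results described next quantify how a time - varying covolatility pulls the esd away from this benchmark .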
for a class ( see section [ sectheory ] ) of processes , we do establish a result for rcv matrices that s analogous to the marenko pastur theorem ( see proposition [ propmprcv ] ) , which demonstrates clearly how the time - variability of the covolatility process affects the lsd of rcv matrix .proposition [ propmprcv ] is proved based on theorem [ thmmpweightedcov ] , which is a marenko pastur type theorem for _ weighted _ sample covariance matrices .these results , in principle , allow one to recover the lsd of icv matrix based on that of rcv matrix .estimating high dimensional icv matrices based on high frequency data has only recently started to gain attention .see , for example , ; who made use of data over long time horizons by proposing a method incorporating low - frequency dynamics ; and who studied the estimation of icv matrices for portfolio allocation under gross exposure constraint . in ,under sparsity assumptions on the icv matrix , banding / thresholding was innovatively used to construct consistent estimators of the icv matrix in the spectral norm sense . in particular , when the sparsity assumptions are satisfied ,their estimators share the same lsd as the icv matrix .it remains an open question that when the sparsity assumptions are not satisfied , whether one can still make good inference about the spectrum of icv matrix . for processes in class ( see section [ sectheory ] ) , whose icv matrices do not need to be sparse , we propose a new estimator , the _ time - variation adjusted realized covariance _ (tvarcv ) matrix .we show that the tvarcv matrix has the desirable property that its lsd exists provided that the lsd of icv matrix exists , and furthermore , the two lsds are related to each other via the marenko pastur equation ( [ eqstfh0 ] ) ( see theorem [ thmconvedf ] ) .therefore , the tvarcv matrix can be used , for example , to recover the lsd of icv matrix by inverting the marenko pastur equation using existing algorithms .the rest of the paper is organized as the following : theoretical results are presented in section [ sectheory ] , proofs are given in section [ secproofs ] , simulation studies in section [ secsimulation ] , and conclusion and discussions in section [ secconclusion ] . _ notation . _ for any matrix , denotes its spectral norm . for any hermitian matrix , stands for its esd . for two matrices and , we write ( , resp . ) if ( , resp . ) is a nonnegative definite matrix . for any interval , and any metric space , stands for the space of cdlg functions from to .additionally , stands for the imaginary unit , and for any , we write as its real part and imaginary part , respectively , and as its complex conjugate .we also denote , and .we follow the custom of writing to mean that the ratio converges to 1 . finally , throughout the paper , etc . 
denote generic constants whosevalues may change from line to line .proposition [ propmpgen ] asserts that the esd of sample covariance matrix converges to a limiting distribution which is uniquely determined by the lsd of the underlying covariance matrix .unfortunately , proposition [ propmpgen ] does not apply to our case , since the observations under our general diffusion process setting are not i.i.d .proposition [ propnotconv ] below shows that even in the following most ideal case , the rcv matrix does not have the desired convergence property .[ propnotconv ] suppose that for all , is a -dimensional process satisfying ,\ ] ] where is a nonrandom ( scalar ) cdlg process .let , and so that the icv matrix is .assume further that the observation times are equally spaced , that is , , and that the rcv matrix is defined by ( [ eqrcv ] ) .then so long as is not constant on , for any , there exists such that if , in particular , does not converge to the marenko pastur law mp .observe that mp is the lsd of rcv matrix when .the main message of proposition [ propnotconv ] is that , the lsd of rcv matrix depends on the whole covolatility process _ not only through _ , _ but also on how the covolatility process varies in time_. it will also be clear from the proof of proposition [ propnotconv ] ( section [ seccountereg ] ) that , the more `` volatile '' the covolatility process is , the further away the lsd is from the marenko pastur law mp .this is also illustrated in the simulation study in section [ secsimulation ] . to understand the behavior of the esd of rcv matrix more clearly, we next focus on a special class of diffusion processes for which we ( i ) establish a marenko pastur type theorem for rcv matrices ; and ( ii ) propose an alternative estimator of icv matrix .suppose that is a -dimensional process satisfying ( [ eqx ] ) , and is cdlg .we say that belongs to class if , almost surely , there exist ;{\mathbb{r}}) ] are the drift and volatility processes for stock , and s are ( one - dimensional ) standard brownian motions . if the following conditions hold : the _ correlation matrix _ process of is constant in ] ; then belongs to class . the proof is given in the supplementary article [ ] .equation ( [ eqx2 ] ) is another common way of representing multi - dimensional log - price processes .we note that if are log price processes , then over short time period , say , one day , it is reasonable to assume that the correlation structure of does not change , hence by this proposition , belongs to class . observe that if a diffusion process belongs to class , the drift process , and s and are independent of , then where `` '' stands for `` equal in distribution , '' is the nonnegative square root matrix of , and consists of independent standard normals .therefore the rcv matrix where .this is similar to the in proposition [ propmpgen ] , except that here the `` weights '' may vary in , while in proposition [ propmpgen ] the `` weights '' are constantly .motivated by this observation we develop the following marenko pastur type theorems for weighted sample covariance matrices and rcv matrices .[ thmmpweightedcov ] suppose that assumptions and in proposition [ propmpgen ] hold .assume further that : for and , with i.i.d . 
with mean 0 , variance 1 and finite moments of all orders ; is a ( possibly random ) nonnegative definite matrix such that its esd converges almost surely in distribution to a probability distribution on as ; moreover , has a finite second moment ; [ asmconvw ] the weights are all positive , and there exists such that the rescaled weights satisfy moreover , almost surely , there exists a process ;{\mathbb{r}}_+) ] such that the observation times are independent of ; moreover , there exists such that the observation durations additionally , almost surely , there exists a process such that } \rightarrow \upsilon_s:=\int_0^s \upsilon_r\ , dr \qquad\mbox{as } n\rightarrow\infty\mbox { for all } 0\leq s\leq1,\ ] ] where for any , ] almost surely ; [ asmtrsigmagrow ] almost surely ; [ asmsigmalsd ] almost surely , as , the esd converges to a probability distribution on ; [ asmsigmanorm ] there exist and such that for all , almost surely ; [ asm2delta ] the in [ asmgammadep ] and in [ asmsigmanorm ] satisfy that [ asmpn ] as ; and [ asmobstime ] there exists such that for all , moreover , s are independent of .we have the following convergence theorem regarding the esd of our proposed estimator tvarcv matrix . [ thmconvedf ]suppose that for all , is a -dimensional process in class for some drift process , covolatility process and -dimensional brownian motion , which satisfy assumptions [ asmdrift][asm2delta ] above .suppose also that and satisfy [ asmpn ] , and the observation times satisfy [ asmobstime ] .let be as in ( [ eqtvarcv ] ) . then, as , converges almost surely to a probability distribution , which is determined by through stieltjes transforms via the same marenko pastur equation ( [ eqstfh0 ] ) as in proposition [ propmpgen ] . the proof of theorem [ thmconvedf ] is given in section [ secpfthm ] .the lsd of the targeting icv matrix is in general not the same as the lsd , but can be recovered from based on equation ( [ eqstfh0 ] ) . in practice , when one has only finite number of samples , the articles [ , and etc . ]studied the estimation of the population spectral distribution based on the sample covariance matrices .in particular , applying theorem 2 of to our case yields .[ coresdrec ] let , and define as in theorem 2 of el karoui ( ) . if are bounded in , then , as , almost surely .therefore , when the dimension is large , based on the esd of tvarcv matrix , we can estimate the spectrum of underlying icv matrix well .we collect some either elementary or well - known facts in the following .the proofs are given in the supplemental article [ ] .[ lemmadriftnegligible ] suppose that for each , and , are all -dimensional vectors .define if the following conditions are satisfied : with ; there exists a sequence such that for all and all , all the entries of are bounded by in absolute value ; almost surely .then almost surely , where for any two probability distribution functions and , denotes the levy distance between them .[ lemmatrdiff ] let with , and be with hermitian , and .then the following two lemmas are similar to lemma 2.3 in .[ lemmanormsum ] let with , and be an hermitian nonnegative definite matrix .then .[ lemmadiffest ] let with and , be a hermitian nonnegative definite matrix , any matrix , and .then : [ lemmanormdiff ] for any hermitian matrix and with , .both lemmas [ lemmanormsum ] and [ lemmadiffest ] require the real part of ( or , ) to be nonnegative . 
in our proof of theorem [ thmmpweightedcov ] , the requirements will be fulfilled thanks to the following lemma .[ lemmarealpos ] let with , be a hermitian nonnegative definite matrix , , .then [ lemmapostrace ] let with , be any matrix , and be a hermitian nonnegative definite matrix . then .[ lemmaunitm ] suppose that .then for any , the equation admits at most one solution in .the following result is an immediate consequence of lemma 2.7 of .[ lemmaconcstdnormal ] for where s are i.i.d .random variables such that , and for some , there exists , depending only on , and , such that for any nonrandom matrix , [ propstieltjesconv ] supposethat are real probability measures with stieltjes transforms .let be an infinite set with a limit point in .if exists for all , then there exists a probability measure with stieljes transform if and only if in which case in distribution . by assumption , is positive and non - constant on , and is cdlg , in particular , right - continuous ; moreover , .hence , there exists and \subseteq[0,1] ] , where consists of independent standard normals .hence , if we let \subseteq[c , d]\} ] with functions and defined by ( [ eqmpsupp ] ) . by the formula of , hence , for any , there exists such that for all , that is , by ( [ eqspecmono ] ) , when the above inequality holds , to prove theorem [ thmmpweightedcov ] , following the strategies in , , , we will work with stieltjes transforms .proof of theorem [ thmmpweightedcov ] for notational ease , we shall sometimes omit the sub / superscripts and in the arguments below : thus , we write instead of , instead of , instead of , instead of , etc .also recall that , which converges to . by assumption ( a.vi )we may , without loss of generality , assume that the weights are independent of s .this is because , if we let be the result of replacing with independent random variables with the same distribution that are also independent of , and , then , and so by the rank inequality [ see , e.g. , lemma 2.2 in ] , and must have the same lsd .we proceed according to whether is a delta measure at or not .if is a delta measure at , we claim that is also a delta measure at , and the conclusion of the theorem holds .the reason is as follows . by assumption [ asmconvw ] , hence by weyl s monotonicity theorem again , for any however , it follows easily from proposition [ propmpgen ] that converges to the delta measure at , hence so does .below we assume that is _ not _ a delta measure at .let be the identity matrix , and be the stieltjes transform of .by proposition [ propstieltjesconv ] , in order to show that converges , it suffices to prove that for all with sufficiently large , exists , and that satisfies condition ( [ eqconditionst ] ) .we first show the convergence of for with sufficiently large .since for all , , it suffices to show that has at most one limit . 
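as a side illustration of the object theorem [ thmmpweightedcov ] is about , before the remaining technical steps of the proof , a small simulation of a weighted sample covariance matrix with time - varying weights ; all parameter choices are illustrative assumptions :

```python
import numpy as np

def weighted_sample_cov(Sigma_sqrt, weights, rng):
    """S_n = (1/n) sum_l w_l Sigma^{1/2} Z_l Z_l^T Sigma^{1/2} with i.i.d. standard normal Z_l."""
    p = Sigma_sqrt.shape[0]
    n = len(weights)
    Z = rng.standard_normal((p, n))
    return (Sigma_sqrt @ (Z * weights) @ Z.T @ Sigma_sqrt) / n

rng = np.random.default_rng(4)
p, n = 150, 600
Sigma_sqrt = np.eye(p)                       # population covariance Sigma = I
t = (np.arange(n) + 0.5) / n
for w in [np.ones(n), 3 * t ** 2]:           # two weight profiles, both integrating to 1 on [0, 1]
    S = weighted_sample_cov(Sigma_sqrt, w, rng)
    lam = np.linalg.eigvalsh(S)
    print(np.round(np.quantile(lam, [0.05, 0.5, 0.95]), 3))
```

even though both weight profiles have the same average , the resulting eigenvalue distributions differ , which is exactly the dependence on the rescaled weights that the theorem makes precise .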
for notational ease ,we denote by .we first show that where .in fact , by lemma [ lemmaconcstdnormal ] and assumptions ( a.i)and [ asmsigmanormrcv ] , for any , using markov s inequality we get that for any , hence , choosing , using borel cantelli and that yield the convergence ( [ eqrinorm ] ) follows .next , let where note that by lemma [ lemmarealpos ] , for any , we shall show that observe the following identity : for any matrix , and for which and are both invertible , see equation ( 2.2 ) in .writing taking the inverse , using ( [ eqmultidecom ] ) and the definition ( [ eqmn ] ) of yield taking trace and dividing by we get where by ( 5.2 ) in the proof of lemma [ lemmarealpos ] in the supplementary article [ ] , .hence , therefore in order to show ( [ eq2sttrsame ] ) , by assumption [ asmconvw ] , it suffices to prove define where .observe that for every , is independent of .for any with and any , define which belongs to by lemma [ lemmapostrace ] , and then by a similar argument for ( [ eqbddcoef ] ) and using assumption [ asmconvw ] , hence , it suffices to show that \\[-8pt ] \max_{\ell=1,\ldots , n } p^{\varepsilon}\bigl|m_{(\ell)}(z ) - { \widetilde}{m}_{(\ell ) } ( z)\bigr| & \rightarrow & 0\qquad \mbox{almost surely.}\nonumber\end{aligned}\ ] ] we shall only prove the second convergence .in fact , where since for all , it suffices to show that to prove this , recall that , by lemma [ lemmaconcstdnormal ] and the independence between and , for any , where in the last line we used lemma [ lemmanormdiff ] and assumption [ asmsigmanormrcv ] .hence , for any , choosing and using borel cantelli again , we get \\[-8pt ] & & \hspace*{44pt}\qquad { } - \frac{{\operatorname{tr } } ( { \sigma}^{1/2 } ( s_{(j,\ell ) } - zi)^{-1}{\sigma}^{1/2})}{p}\biggr| \rightarrow0.\nonumber\end{aligned}\ ] ] furthermore , by lemma [ lemmatrdiff ] and assumption [ asmsigmanormrcv ] , recall that , \\[-8pt ] & & \qquad\leq 2 \frac { \|{\sigma}\| } { p v } \leq\frac{c p^\delta}{pv}. \nonumber\end{aligned}\ ] ] the convergence ( [ eqzeta ] ) follows .we now continue the proof of the theorem .recall that , and consists of i.i.d .random variables with finite moments of all orders . by lemma [ lemmadiffest](ii ) and ( [ eqwipos ] ) , where in the last line we used lemma [ lemmanormdiff ] , assumption [ asmsigmanormrcv ] , the assumption that ( and hence ) and ( [ eqdiffm ] ) , and ( [ eqrinorm ] ) .furthermore , similar to ( [ eqdifftracenorm0 ] ) , by lemma [ lemmaconcstdnormal ] and the independence between and , for any , where in the last line we use lemmas [ lemmanormdiff ] , [ lemmanormsum ] and ( [ eqwipos ] ) , and assumption [ asmsigmanormrcv ] .hence , choosing and using borel cantelli again ,we get \\[-8pt ] & & \hspace*{23.2pt}\quad { } - \frac{{\operatorname{tr } } ( { \sigma}^{1/2 } ( s_{(\ell ) } - zi)^{-1}(m_{(\ell ) } { \sigma } + i)^{-1}{\sigma}^{1/2})}{p}\biggr| \rightarrow 0 . \nonumber\end{aligned}\ ] ] furthermore , by lemmas [ lemmadiffest](i ) , [ lemmanormsum ] and ( [ eqwipos ] ) , the assumption that ( and hence ) and ( [ eqdiffm ] ) , and assumption [ asmsigmanormrcv ] , \\[-8pt ] & & \qquad\leq \max_{\ell=1,\ldots , n } \bigl|m_{(\ell ) } - m_n\bigr|\cdot\bigl\|\bigl(s_{(\ell ) } - zi\bigr)^{-1 } \bigr\|\cdot\|{\sigma}\|^2\nonumber\\ & & \qquad\leq \max_{\ell=1,\ldots , n } \bigl|m_{(\ell ) } - m_n\bigr|\cdot\frac{c p^{2\delta}}{v } \rightarrow0 . 
\nonumber\end{aligned}\ ] ] finally , similar to ( [ eqdifftrace0 ] ) , by lemmas [ lemmatrdiff ] and [ lemmanormsum ] , and assumption [ asmsigmanormrcv ] , combining ( [ eqdiffnorm ] ) , ( [ eqconcnum ] ) , ( [ eqdifftrace ] ) and ( [ eqdifftrace2 ] ) , we see that ( [ eqdito0 ] ) , and hence ( [ eq2sttrsame ] ) holds .now we are ready to show that admits at most one limit .[ claimconvm ] suppose that converges to , then where is the unique solution in to the following equation : writing right - multiplying both sides by and using ( [ eqmultidecom ] ) we get taking trace and dividing by we get where , recall that , is the stieltjes transform of . hence , if , then however , by the same arguments for ( [ eqconcnum0 ] ) and ( [ eqdifftrace0 ] ) we have and where , recall that which belongs to by lemma [ lemmapostrace ]. then by ( [ eqconvtm ] ) , assumption [ asmconvw ] and lemma [ lemmaunitm ] , must also converge , and the limit , denoted by , must be the unique solution in to the equation ( [ eqntm ] ) .now by ( [ dfnmtilde ] ) , ( [ eqmmtilde ] ) and assumption [ asmconvw ] , we get the convergence for in the claim . that follows from the expression and that .we now continue the proof of the theorem . by the convergence of to andthe previous claim , but ( [ eq2sttrsame ] ) implies that observing that , , and is not a delta measure at , we obtain that .hence , and by ( [ eqntm ] ) , . based on this , we can get another expression for , as follows . by ( [ eqnm ] ) , we have & = & -\frac{1}{z}\cdot\frac{1}{y { \widetilde}{m}(z)}\cdot\biggl(1- \int_0 ^ 1 \frac{1}{1+y { \widetilde}{m}(z ) w_s } \,ds\biggr)\nonumber\\[-9.5pt]\\[-9.5pt ] & = & -\frac{1}{z}\cdot\frac{1}{y { \widetilde}{m}(z)}\cdot\bigl(1- \bigl(1-y\bigl ( 1 + z m(z ) \bigr)\bigr)\bigr)\nonumber\\[-2pt ] & = & -\frac{1}{z}\cdot\frac{1 + zm(z)}{{\widetilde}{m}(z ) } , \nonumber\vspace*{-1pt}\end{aligned}\ ] ] where in the third line we used the definition ( [ eqntm ] ) of .we can then derive another formula for . by ( [ eqnm ] ) , by using that is a probability distribution . dividing both sides by and using ( [ eqnmf2 ] ) yield and hence since , that by lemma [ lemmapostrace ] and ( [ eqwipos ] ) , for any , both and belong to , hence so do and .we proceed to show that for those with sufficiently large , there is at most one triple that solves the equations ( [ eqnm ] ) , ( [ eqnm ] ) and ( [ eqntmf2 ] ) .in fact , if there are two different triples , both satisfying ( [ eqnm ] ) , ( [ eqnm ] ) and ( [ eqntmf2 ] ) . then necessarily , and . now by ( [ eqnm ] ) , by ( [ eqntmf2 ] ) , therefore , \\[-9.5pt ] & & { } \times \int_{\tau\in{\mathbb{r}}}\frac{\tau^2}{(\tau m_1(z ) + 1)(\tau m_2(z ) + 1 ) } \,d{h}(\tau ) .\nonumber\vspace*{-1pt}\vadjust{\goodbreak}\end{aligned}\ ] ] however , since , , and hence , for with sufficiently large , ( [ eq1 ] ) can not be true .it remains to verify ( [ eqconditionst ] ) , that is , in fact , using ( [ eqnm ] ) we get that since , for all .moreover , by ( [ eqnm ] ) and that , , hence by the dominated convergence theorem , the right - hand side of ( [ eqmcond ] ) converges to as .the tvarcv matrix has the form of weighted sample covariance matrices as studied in theorem [ thmmpweightedcov ] ; however , assumption [ asmwdep ] therein is not satisfied , and we need another proof . 
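both theorem [ thmconvedf ] below and the recovery of the icv spectrum described earlier rest on the marenko pastur equation ( eqstfh0 ) . the displayed form of that equation is lost in the extraction , so the standard silverstein form m(z) = \int dH(\tau) / ( \tau ( 1 - y - y z m(z) ) - z ) is assumed in the following hedged sketch , which solves it by a naive damped fixed - point iteration and compares the result with the empirical stieltjes transform of a simulated sample covariance matrix :

```python
import numpy as np

def empirical_stieltjes(S, z):
    """m_p(z) = (1/p) tr((S - z I)^{-1}) for a symmetric matrix S and complex z."""
    p = S.shape[0]
    return np.trace(np.linalg.inv(S - z * np.eye(p))) / p

def mp_stieltjes(z, y, H_support, H_weights, n_iter=500):
    """Damped fixed-point iteration for the (assumed) Marcenko-Pastur / Silverstein equation
    m(z) = int dH(tau) / (tau * (1 - y - y*z*m(z)) - z), for Im z > 0."""
    m = -1.0 / z
    for _ in range(n_iter):
        m_new = np.sum(H_weights / (H_support * (1 - y - y * z * m) - z))
        m = 0.5 * m + 0.5 * m_new
    return m

rng = np.random.default_rng(1)
p, n = 200, 800
y = p / n
Z = rng.standard_normal((p, n))
S = Z @ Z.T / n                                  # sample covariance, true Sigma = I
z = 1.5 + 0.5j
print(empirical_stieltjes(S, z))                 # finite-sample value
print(mp_stieltjes(z, y, np.array([1.0]), np.array([1.0])))   # limit for H = delta_1
```

the two printed values should be close ; for a general h one would evaluate the right - hand side on a discretized support in the same way .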
theorem [ thmconvedf ] is a direct consequence of the following two convergence results .[ proprvldp ] under assumption [ asmtrsigmagrow ] , namely , suppose that then , almost surely , the proof is given in the supplemental article [ ] . next , recall that and are defined by ( [ eqsigmabr ] ) and ( [ eqsigmawt ] ) , respectively . [ propconvedf ] the assumptions of theorem [ thmconvedf ] , both and converge almost surely . converges to defined by the lsd of is determined by in that its stieltjes transform satisfies the equation this can be proved in very much the same way as theorem [ thmmpweightedcov ] , by working with stieltjes transforms .however , a much simpler and transparent proof is as follows .proof of proposition [ propconvedf ] the convergence of is obvious since we now show the convergence of . as in the proof of theorem [ thmmpweightedcov ] , for notational ease , we shall sometimes omit the superscript in the arguments below : thus , we write instead of , instead of , instead of , etc .first , note that where by performing an orthogonal transformation if necessary , without loss of generality , we may assume that the index set .then by assumptions [ asmgammadep ] and [ asmobstime ] , for , are i.i.d . . write and . with the above notation, can be rewritten as by assumptions [ asmdrift ] , [ asmgammadep ] and [ asmobstime ] , there exists such that for all and , hence s are uniformly bounded .we will show that \\[-8pt ] & & \qquad= \max_{\ell=1,\ldots , n } |\mathbf{z}_\ell^t\breve{\sigma}_p \mathbf{z}_\ell / p - 1 |\rightarrow0 \qquad\mbox{almost surely},\nonumber\end{aligned}\ ] ] which clearly implies that to prove ( [ eqldpendo ] ) , write where and are and matrices , respectively .then by a well - known fact about the spectral norm , in particular , by assumptions [ asmgammadep ] , [ asmsigmanorm ] and [ asm2delta ] , hence .now using the fact that consists of i.i.d .standard normals and by the same proof as that for ( [ eqrinorm0 ] ) we get to complete the proof of ( [ eqldpendo ] ) , it then suffices to show that we shall only prove the first convergence ; the second one can be proved similarly .we have observe that for all , by assumption [ asmgammadep ] , by the burkholder davis gundy inequality , we then get that for any , there exists such that now we are ready to show that .in fact , for any , for any , by markov s inequality , ( [ equlnorm ] ) , hlder s inequality and ( [ eqrtnmoments ] ) , }{p^{k } { \varepsilon}^{k}}\\ & \leq & c p^{1+k\delta_2 + k\delta_1 - k}.\end{aligned}\ ] ] by assumption [ asm2delta ] , , hence by choosing to be large enough , the right hand side will be summable in , hence by borel cantelli , almost surely , .we now get back to as in ( [ eqwtsigmaneat ] ) . by ( [ eqldpnorm ] ) , for any , almost surely , for all sufficiently large , for all , hence , almost surely , for all sufficiently large , where . hence , by weyl s monotonicity theorem , for any , next , by lemma [ lemmadriftnegligible ] , has the same lsd as .moreover , by using the same trick as in the beginning of the proof of theorem [ thmmpweightedcov ] , has the same limit as , where , and consists of i.i.d .standard normals . for , it follows easily from proposition [ propmpgen ] that it converges to .moreover , by theorems 1.1 and 2.1 in , is differentiable and in particular continuous at all .it follows from ( [ eqfsandwich ] ) that must also converge to .in this section , we present some simulation studies to illustrate the behavior of esds of rcv and tvarcv matrices . 
in particular , we show that the esds of rcv matrices that have the same targeting icv matrix can be quite different from each other , depending on the time variability of the covolatility process .our proposed estimator , the tvarcv matrix , in contrast , has a very stable esd .we use in particular a reference curve which is the marcenko pastur law .the reason we compare the esds of rcv and tvarcv matrices with the marcenko pastur law is that the marcenko pastur law is the lsd of defined in ( [ eqrcv0 ] ) , which is the rcv matrix estimated from sample paths of constant volatility that has the same targeting icv matrix as . as we will see soon in the following two subsections , when the covolatility process is time varying, the esd of rcv matrix can be very different from the marcenko pastur law , while the esd of tvarcv matrix always matches the marcenko pastur law very well . in the simulationbelow , we assume that , or in other words , satisfies ( [ eqxsimplest ] ) with a deterministic ( scalar ) process , and a -dimensional standard brownian motion . the observation times are taken to be equidistant : . we present simulation results of two different designs : one when is piecewise constant , the other when is continuous ( and non - constant ) . in both cases ,we compare the esds of the rcv and tvarcv matrices .results for different dimension and observation frequency are reported . in all the figures below ,we use red solid lines to represent the lsds of given by the marcenko pastur law , black dashed line to represent the esds of rcv matrices , blue bold longdashed line to represent the esds of tvarcv matrices .we first consider the case when the volatility path follows piecewise constants .more specifically , we take to be t\in[1/4,3/4),\cr b^{1/2}\times10^{-2 } , & \quad ,}\qquad \mbox{where } a + b = 8.\ ] ] we plot the esds of rcv and tvarcv matrices for the case when , in the left and right panel , respectively .the curves corresponding parameters are reported in the legend .note that since all pairs of have the same summation , in all cases the targeting icv matrices are the same .we see clearly from figure [ figcomprv ] that , the esds of rcv matrices can be very different from each other even though the rcv matrices are estimating the same icv matrix ; while for tvarcv matrices , the esds are almost identical .we illustrate in this subsection the case when the volatility processes have continuous sample paths . in particular , we assume that satisfies ( [ eqxsimplest ] ) with .\ ] ] we see from figure [ figp100n1000cos ] similar phenomena as in design i about the esds of rcv and tvarcv matrices for different pairs of and . , ; right panel : , . 
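a runnable sketch of the design - i experiment just described , with piecewise - constant volatility and a + b = 8 ; which of the two levels applies on the middle half of the interval is an assumption here , and plotting is omitted :

```python
import numpy as np

def simulate_rcv(p, n, gamma, rng):
    """RCV matrix for dX_t = gamma(t) dW_t with p independent components and n equispaced observations."""
    t = (np.arange(n) + 0.5) / n
    scale = gamma(t) / np.sqrt(n)                    # standard deviation of each increment
    dX = rng.standard_normal((n, p)) * scale[:, None]
    return dX.T @ dX

def piecewise_gamma(a, b):
    # gamma(t) = sqrt(a)*1e-2 outside [1/4, 3/4) and sqrt(b)*1e-2 inside (assignment assumed), a + b = 8
    return lambda t: np.where((t >= 0.25) & (t < 0.75), np.sqrt(b) * 1e-2, np.sqrt(a) * 1e-2)

rng = np.random.default_rng(2)
p, n = 100, 1000
for a, b in [(4.0, 4.0), (7.0, 1.0)]:                # same integrated variance, different time profiles
    rcv = simulate_rcv(p, n, piecewise_gamma(a, b), rng)
    lam = np.linalg.eigvalsh(rcv) / (np.trace(rcv) / p)   # normalize so the mean eigenvalue is 1
    print((a, b), np.round(np.quantile(lam, [0.05, 0.5, 0.95]), 3))
```

the constant case ( a , b ) = ( 4 , 4 ) stays close to the marcenko pastur benchmark , while the unbalanced case spreads the eigenvalues out , which is the qualitative behavior reported for the rcv matrices in this section .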
we have shown theoretically and via simulation studies that : * the _ limiting spectral distribution _ ( lsd ) of the rcv matrix depends not only on that of the icv matrix , but also on the time - variability of the covolatility process ; * in particular , even with the same targeting icv matrix , the _ empirical spectral distribution _ ( esd ) of the rcv matrix can vary substantially , depending on how the underlying covolatility process evolves over time ; * for a class of processes , our proposed estimator , the _ time - variation adjusted realized covariance _ ( tvarcv ) matrix , possesses the following desirable properties as an estimator of the icv matrix : as long as the targeting icv matrix is the same , the esds of tvarcv matrices estimated from processes with different covolatility paths will be close to each other , sharing a unique limit ; moreover , the lsd of the tvarcv matrix is related to that of the targeting icv matrix through the same marcenko pastur equation as in the sample covariance matrix case . furthermore , we establish a marcenko pastur type theorem for weighted sample covariance matrices . for a class of processes , we also establish a marcenko pastur type theorem for rcv matrices , which explicitly demonstrates how the time - variability of the covolatility process affects the lsd of the rcv matrix . in practice , for given and , based on the ( observable ) esd of the tvarcv matrix , one can use existing algorithms to obtain an estimate of the esd of the icv matrix , which can then be used in applications such as portfolio allocation and risk management . we are very grateful to the editor , the associate editor and anonymous referees for their very valuable comments .
we consider the estimation of integrated covariance ( icv ) matrices of high dimensional diffusion processes based on high frequency observations . we start by studying the most commonly used estimator , the _ realized covariance _ ( rcv ) matrix . we show that in the high dimensional case when the dimension and the observation frequency grow in the same rate , the limiting spectral distribution ( lsd ) of rcv depends on the covolatility process _ not only through the targeting icv _ , _ but also on how the covolatility process varies in time_. we establish a marenko pastur type theorem for weighted sample covariance matrices , based on which we obtain a marenko pastur type theorem for rcv for a class of diffusion processes . the results explicitly demonstrate how the time variability of the covolatility process affects the lsd of rcv . we further propose an alternative estimator , the _ time - variation adjusted realized covariance _ ( tvarcv ) matrix . we show that for processes in class , the tvarcv possesses the desirable property that its lsd depends solely on that of the targeting icv through the marenko pastur equation , and hence , in particular , the tvarcv can be used to recover the empirical spectral distribution of the icv by using existing algorithms . . .
in the context of the european program ssa , the aim of the project _ sara - part i feasibility study of an innovative system for debris surveillance in leo regime _ was to demonstrate the feasibility of a european network based on optical sensors , capable of complementing the use of radars for the identification and cataloging of debris in the high part of the leo region , to lower the requirements on the radar system in terms of power and performances .the proposal relied on the definition of a wide - eye optical instrument able to work in high leo zone and on the development of new orbit determination algorithms , suitable for the kind and amount of data coming from the surveys targeting at leos .taking into account the performances expected from the innovative optical sensor , we have been able to define an observing strategy allowing to acquire data from every object passing above a station of the assumed network , provided the is good enough .still the number of telescopes required for a survey capable of a rapid debris catalog build - up was large : to reduce this number we have assumed that the goal of the survey had to be only one exposure per pass .a new algorithm , based on the first integrals of the kepler problem , was developed by dm to solve the critical issue of leos orbit determination .standard methods , such as gauss , require at least three observations per pass in order to compute a preliminary orbit , while the proposed algorithm needs only two exposures , observed at different passes .this results in a significant reduction of the number of required telescopes , thus of the cost of the entire system . for leo ,the proposed method takes into account the nodal precession due to the quadrupole term of the earth geopotential . because of the low altitude of the orbits and the availability of sparse observations , separated by several orbital periods , this effect is not negligible and it must be considered since the first step of preliminary orbit computation .the aim was to perform a realistic simulation .thus in addition to the correlation and orbit determination algorithms , all the relevant elements of the optic system were considered : the telescope design , the network of sensors , the observation constraints and strategy , the image processing techniques .starting from the esa - master2005 population model , cgs provided us with simulated observations , produced taking into account the performances of the optical sensors .these data were processed with our new orbit determination algorithms in the simulations of three different operational phases : catalog build - up , orbit improvement , and fragmentation analysis .the results of these simulations are given in the sections [ sec : survey_res ] , [ sec : task_results ] , and [ sec : frag ] .the only way to validate a proposed system , including a network of sensors and the data processing algorithms , was to perform a realistic simulation . this does not mean a simulation including all details , but one in which the main difficulties of the task are addressed .the output of such a simulation depends upon all the assumptions used , be they on the sensor performance , on the algorithms and software , on the physical constraints ( e.g. , meteorological conditions ) . 
in what follows ,we list all the assumptions used in the catalog build - up simulation , in the orbit improvement simulation , and in the fragmentation detection simulation .we discuss the importance of each one in either surveying capability or detection capability or orbit availability and accuracy .all the assumptions turn out to be essential to achieve the performance measured by the simulations .we are assuming a network consisting of optical sensors only , with the following properties : 1 .the telescope and the camera have a * large field of view : 45 square degrees * ( arcsec ) .the telescope has * quick motion capability * , with mechanical components allowing a * 1 s exposure every 3 s * , with each image covering new sky area : the motion in the 2 s interval must be , with stabilization in the new position .3 . the camera system has a * quick readout * , to be able to acquire the image from all the ccd chips * within the same 2 s * used for telescope repositioning , and this with a low readout noise , such that the main source of noise is the sky background .4 . the camera system needs to have * high resolution * , comparable to the seeing . a pixel scale of about 1.5 arcsec is the best compromise , known to allow for accurate astrometry .then field of view of 24000 arcsec implies a camera system with 256 megapixel .the camera has by design a * fill factor 1*. the fill factor is the ratio between the effective area , on the focal plane , of the active sending elements and the area of the field of view . 6 .the * telescope aperture needs to be large * enough to detect the target debris population , we are assuming an * effective aperture of 1 meter*. that is , the unobstructed photon collecting area has an area equal to a disk of 1 meter diameter .the * network of sensors * includes * 7 geographically distributed stations , each with 3 telescopes * available for leo tracking . 8 .the telescope is assumed to have * tasking capabilities * , consisting in the possibility of * non - sidereal tracking at a programmed rate up to 2000 arcsec / s * ( relative to the sidereal frame ) while maintaining the image stable .the above assumptions about the sensor hardware require a significant effort in both technological development and resources : a discussion on the feasibility of each of them is beside the scope of this paper .we need just to point out that a design for an innovative sensor with such properties does exist and has been presented in .the large field of view is needed to cope with the tight requirements on surveying capability resulting from a population of space objects with very fast angular motion , up to 2000 arcsec / s .the surveying capability is further enhanced by taking an image on a new field of view every 3 s , and by the use of 21 telescopes . thus it should not be surprising that such a network has the capability of observing objects in orbits lower than those previously considered suitable for optical tracking. 
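a quick arithmetic check of the camera size implied by assumptions 1 and 4 above ( a field of view of 45 square degrees , i.e. roughly 24000 arcsec on a side , sampled at a 1.5 arcsec pixel scale ) ; the numbers are taken from the text and only verified here :

```python
# 45 square degrees at a 1.5 arcsec pixel scale implies a ~256 megapixel camera system
side_arcsec = 24000.0                 # quoted side length of the field of view, in arcsec
pixel_scale = 1.5                     # arcsec per pixel
print((side_arcsec / pixel_scale) ** 2 / 1e6)   # ~256 megapixels
print((side_arcsec / 3600.0) ** 2)              # ~44.4 square degrees, consistent with the quoted 45
```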
the large fill factor also contributes by making the surveying capability deterministic , that is objects in the field of view are effectively observable every time .manufacturing imperfections unavoidably decrease the fill factor , but with good quality chips the reduction to a value around does not substantially change the performance , while detectors with values in the range would result in a severely decreased performance .both properties ( large aperture and large fill factor ) are feasible as a result of an innovative design based on the fly - eye concept , that is the telescope does not have a monolithic focal plane but many of them , each filled by the sensitive area of a separate , single chip camera .the comparatively high astrometric resolution is essential to guarantee the observation accuracy , in turn guaranteeing the orbit determination accuracy .if the goal is to perform collision avoidance , there is no point in having low accuracy orbits .thus a low astrometric resolution survey would require a separate tasking / follow up network of telescopes for orbit improvement . in our assumed network allthe tasking is performed with the same telescopes .the telescope effective aperture , which corresponds in the proposed design to a primary mirror with meters diameter , is enough to observe leos in the cm diameter range if it is coupled with a computationally aggressive image processing algorithm , discussed below and in sec .[ sec : s2n ] .the selection of the stations locations is a complicated problem , because it has to strike a compromise between the requirement of a wide geographical distribution and the constraints from meteorology , logistics and geopolitics ; see sec .[ sec : network ] . to simulate the outcome of such a complicated selection process, we have used a network which is ideal neither from the point of view of geographical distribution nor for meteorological conditions , but it is quite realistic .the assumed sensors are optimized for leo , but they are also very efficient to observe medium earth orbit ( meo ) , geostationary orbit ( geo ) , and any other earth orbit above 1000 km .the same sensors could also be used for near earth objects ; the required changes have only to do with longer exposure times and can be implemented in software .we are assuming the observations from the optical sensor network are processed with algorithms and the corresponding software , having the following properties : 1 .the * scheduler * of the optical observations is capable of * taking into account the geometry of light and the phase * ( which is defined as the sun - object - observer angle ) , in such a way that the objects passing above the station are imaged and the phase is minimized .the * image processing * includes a procedure to * detect long trails at low signal to noise * , with a loss due to the spreading of the image on pixels proportional to .3 . the * astrometric reduction algorithms * allow for * sub - pixel accuracy , even for long trails * , and taking properly into account star catalog errors .the * correlation and orbit determination * algorithms allow us to compute preliminary orbits starting from a * single trail per pass * , and correlating passes separated by several orbital periods of the objects ( e.g. 
, a time span of the order of a day ) .the assumptions 9 - 12 are different from the previous ones because they are all about software .of course a significant software development effort is necessary , but we consider part of our current research effort to ensure that the algorithms which could lead to the assumed results already exist . the synthetic observations used in the simulation have been obtained by taking one exposure for each pass , in the visibility interval ( 15 of elevation , illuminated by the sun , station in darkness ) . within this interval ,we have assumed the best third from the point of view of the phase angle is used ( near the shadow of the earth ) . since the apparent magnitude of the objects is a steep function of the phase angle ,the number of observations with sufficient is significantly increased ( by a factor 3 - 4 ) .such a _ light aware _scheduler was not actually available , but we have tested that a simple observing strategy exists leading to this result . the idea is to use a dynamic barrier formed by frequently visited fields of view .the barrier could be bordering the earth shadow at the altitude of the objects being targeted : both simple computations and a numerical simulation show that this can be achieved by using telescopes with the performances outlined in assumptions 1 - 6 .the _ trailing loss _ , that is the decrease of the signal due to the spreading of the image on pixels , appears to limit the sensitivity of the detector for objects with a high angular velocity , like arcsec / s ( typical values for an object at an altitude of 1400 km ) .this appears to defeat the approach used in astronomy , that is increasing the exposure time to observe dimmer objects .however , even for a stationary object such as a star , the increase in is only with the square root of the exposure time .thus , if an algorithms is available to _ sum up _ the signal from adjacent pixels , in such a way that accumulates with , the increase of exposure time is as effective as for a stationary target .such algorithms exist and are discussed in sec .[ sec : s2n ] .the actual implementation in operational software and field testing are assumptions .the observations have to be reduced astrometrically in an accurate way , with rms error of 0.4 arcsec when the pixel is good .geo and geostationary transfer orbit ( gto ) data from the esa optical ground sensor in teide ( canary islands ) , reduced by university of bern , show a typical rms arcsec of the residuals from the orbit determination performed by our group . for low on each pixel the astrometric error is assumed to increase , see sec .[ sec : s2n ] . when the rms grows to arcsec , the observations can result in orbit determination failure and/or accuracy requirements non compliance .improvements in the astrometric reduction procedure , to remove systematic star catalog errors , are already implemented in asteroid orbit catalogs such as the ones available online from the astdys-2 and neodys-2 development systems .they are based on the star catalog debiasing algorithms proposed by .the assumption is that an ad hoc astrometric reduction software is developed .correlation and orbit determination algorithms , developed and implemented in software by our group , have the capability to use significantly less observations with respect to classical methods to compute a preliminary orbit . 
as an example, we can use two trails from different passes of the same object above either the same or a different station to compute an orbit with covariance matrix .these methods , and those used for successive orbit improvement , are discussed in sec .[ sec : orbdet ] .the correlation software we use is capable of computing orbits starting from sparse uncorrelated observations of leo ( also of geo , ) .the amount of data to be used as input for initial catalog build - up is limited to 1 exposure per pass .the advantage with respect to the traditional approach , requiring three separate observations in the same pass , is such that the surveying capability for a given sensor network is increased by a factor 3 .note that in this case the software with the assumed performances actually exists and is being tested in simulations like the ones we are discussing in this paper .the assumption is only that the existing software is upgraded to operational .to perform a debris observation some conditions shall be verified : a minimum elevation angle , the orbiting object must be in sunlight , etc .. these conditions are strongly dependent on the object orbit parameters , on the observatory location and on the seasonal factors .there are also other observational constraints that have been taken into account , such as the distance from the moon and the galactic plane .the first constraints to the network architecture are purely geometrical and are due to the horizon .an orbiting object at an altitude is visible only up to a given distance from a station , beyond which the object is below a minimum elevation , being a reasonable value . for an object at km , the distance to the objectis thus limited to about 3100 km , and the distance to the groundtrack of the object to about 2500 km ; see fig .[ fig : horizon ] , showing these values in km as a function of the object altitude .moreover , for a station at a latitude of , this figure also shows the half width of the equatorial band such that , if the groundtrack is in there , the object is not observable .this argument favors the stations located at low latitudes .1400 km , ,scaledwidth=70.0% ] the second consideration is how the object presents itself with a groundtrack passing near a station .figure [ fig : groundtra ] shows the groundtrack for a nearly circular orbit with km and inclination of 60 .the oval contour shows the maximum visibility range , for an equatorial station and an elevation .typically a leo has 4 passes / day above the required altitude as seen from a station at low latitudes .note that the constraints discussed so far apply equally to a radar sensor .the main difference with radar arises because of the geometry of sunlight .the requirements for an optical station are the following : 1 .the ground station is in darkness , e.g. , the sun must be at least 10 - 12 below the horizon , that is the sky is dark enough to begin operations , typically about 30 - 60 minutes after sunset and before sunrise ( this is strongly dependent on the latitude and the season at the station ) ; 2 .the orbiting object is in sunlight ; 3 .the atmosphere is clear ( no dense clouds ) .the condition on sunlight is quite restrictive : the low orbiting objects are fully illuminated in all directions only just after sunset and before sunrise . 
in the figures [ fig : shadow1]-[fig : shadow3 ] the bold line represents the earth shadow boundary at 1400 km above ground .the shadow also depends upon the season , the figures have been drawn for march 20 , a date close to an equinox .the earth shadow region , where the orbiting object is invisible , is represented in gray .the circles represent the iso - elevation regions of the sky above the horizon , which is the outer curve ; the center is the local zenith .the labeled lines ( 30 , 60 , 90 and 120 ) represent the iso - phase curves for objects at 1400 km above ground , that is the directions in the sky where the objects have a specific phase angle .the phase angle is a very critical observing parameter for a debris .the optical magnitude of an object ( generally all solar system moving objects ) depends , among other parameters , by the phase angle : the smaller the phase angle the brighter the object .the strength of this effect also depends upon the optical properties of the object surface , such as the albedo .anyway the effect is large , e.g. , at a phase of the apparent magnitude could increase by magnitudes with respect to an object at the same distance but with phase .thus an observation scheduling taking into account the need of observing with the lowest possible phase increases very significantly the optical sensor performance .the figures show that the regions where the phase angles are smaller are close to the earth shadow boundary .very low phases can be achieved only near sunset and sunrise , by looking in a direction roughly opposite to the sun ( see fig . [fig : shadow1 ] ) . for leosthere is a central portion of the night , lasting several hours , in which for a either an equatorial or a tropical station the observations are either impossible or with a very unfavorable phase , e.g. , for the station at north latitude on march 20 from about 22 hours to 2 hours of the next day ( see fig . [fig : shadow2 ] ) . on the contrary , for a station at an high latitude ( both north and south ) the dark period around midnight when leo can not be observed does not occur , because the earth shadow moves south ( for a north station ; see fig .[ fig : shadow3 ] ) .it is clear that , by combining the orbital geometry of passages above the station with the no shadow condition , it is possible to obtain objects which are unobservable from any given low latitude station , at least for a time span until the precession of the orbit ( due to earth s oblateness , /day for an altitude of 1400 km and inclination ) changes the angle between the orbit plane and the direction to the sun . on the other hand , high latitude stations can not observe low inclination objects , and operate for a lower number of hours per year because of shorter hours of darkness in summer and worse weather in winter .a trade off is needed , which suggests to select some intermediate latitude stations , somewhere between 40 and 50 both north and south .the meteorological constraints can be handled by having multiple opportunities of observations from stations far enough to have low meteorological correlation .this implies that an optimal network needs to include both tropical and high latitude stations , with a good distribution also in longitude . 
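the nodal precession rate mentioned above ( the printed value is lost in the extraction ) can be reproduced with the standard first - order j2 formula ; the inclination used below is an assumed example value , not the one from the original text :

```python
import numpy as np

mu = 398600.4418        # earth's gravitational parameter, km^3/s^2
R_E = 6378.14           # equatorial radius, km
J2 = 1.08263e-3

a = R_E + 1400.0        # semi-major axis of a circular orbit at 1400 km altitude
inc = np.radians(60.0)  # assumed inclination (example value only)

n = np.sqrt(mu / a ** 3)                                    # mean motion, rad/s
raan_dot = -1.5 * n * J2 * (R_E / a) ** 2 * np.cos(inc)     # first-order secular node rate, rad/s
print(np.degrees(raan_dot) * 86400.0)                       # degrees per day (about -2.5 deg/day here)
```

for other inclinations the rate scales with the cosine of the inclination , which is why the relative geometry between the orbit plane and the sun changes over weeks rather than days .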
beside the need for geographic distribution as discussed above, the other elements to be considered in the selection of the network are the following : * geopolitics : the land needs to belong to europe , or to friendly nations .the limitations of the european continent implies that sites in minor european islands around the world are needed as well as observing sites in other countries .* logistics : some essentials like electrical power , water supply , telecommunications , airports , harbors , and roads have to be available .* meteo : the cloud cover can be extremely high , in some geographic areas , and especially in ( local ) winter . other meteorological parameters such as humidity , seeing , wind play an important role .high elevation observing sites over the inversion layer are desirable , but in mid ocean there are not many mountains high enough . * orography : an unobstructed view of the needed sky portions , down to 15 of elevation , is necessary .astronomical observatories are not so demanding , especially in the pole direction . *light pollution : an observing site with low light pollution is essential , to lower the sky background , which is the main source of noise ..[tab : network ] geographical coordinates of the proposed network of stations [ cols=">,>,>,>",options="header " , ] in conclusion , our study suggests that the simulated network of telescopes can detect and catalog the fragments generated by a catastrophic event in high leo after just a few days from the event .some caveats and conclusions can be stated .first , the detection of a stream of fragments , with low ejection velocity , within 24 hours is like detecting a single object , because the fragments are not spread along the entire orbit . for this reason it might not be possible to detect a large fraction of the fragments in 1 day , if bad meteorological conditions are present on critical stations .then , the gabbard diagram , built with the output of the orbit determination simulation after 6 days , shows that the orbital information is more than enough to assess the fragmentation event ( parent body , energy , etc . ) .the results of the catalog build - up simulation show that more than 98% of the leo objects with perigee height above 1100 km and diameter greater than 8 cm can be cataloged in 2 months . as fig.[fig : simul12_2months ] shows , a central area around 1100 km of orbital perigee altitude has been identified where radar sensors and the optical network should operate in a cooperative way .all the numbered orbits are accurate enough to allow follow up observations with no trailing loss , and the orbit accuracy from the improved orbits is compliant with the accuracy we have used , which corresponds to collision avoidance requirements . finally the simulated network of telescopes is able to detect and catalog the fragments generated by a catastrophic event just a few days after the event .the significance of our results for the design of a space situational awareness system is in the possibility to use optical sensors to catalog and follow up space debris on orbits significantly lower than those previously considered suitable .of course this is true only provided a list of technological assumptions , both in hardware and in software , spelled out in section [ sec : assumptions ] , are satisfied . 
if these technologies are available , then it is possible to trade off between an upgraded system of optical sensors and a radar system with higher energy density .we wish to thank cgs for providing us with the expected sensor performances and realistic simulated observations , taking into account the characteristics of the fly - eye telescope , the meteorological model to account for cloud cover , and the statistical model for the signal to noise ratio .this work was performed under esa / esoc contract 22750/09/d / hk _ sara - part i feasibility study of an innovative system for debris surveillance in leo regime _ , with cgs as prime contractor .bowell e. , hapke , b. , domingue , d. , lumme , k. , peltionemi , j. , harriw , a. w. , `` application of photometric models to asteroids '' , _ asteroids ii , proceedings of the conference _ ,university of arizona press , tucson , 1989 , pp .524556 .farnocchia , d. , tommei , g. , milani , a. , rossi , a. , `` innovative methods of correlation and orbit determination for space debris '' , _ celestial mechanics and dynamical astronomy _ , vol .107 , no . 12 , pp .169185 , 2010 .fujimoto k. , maruskin j. d. , scheeres d. j. , `` circular and zero - inclination solutions for optical observations of earth - orbiting objects '' , _ celestial mechanics and dynamical astronomy _ , vol .2 , pp . 157182 , 2010 .milani , a. , gronchi , g. f. , farnocchia , d. , tommei , g. , and dimare , l. , `` optimization of space surveillance resources by innovative preliminary orbit methods '' , _ proceedings of the fifth european conference on space debris _ , darmstadt , germany , 2009 .schildknecht , t. , musci , r. , ploner , m. , beutler , g. , flury , w. , kuusela , j. , de leon cruz , j. , de fatima dominguez palmero , l. , `` optical observations of space debris in geo and in highly - eccentric orbits '' , _ advances in space research _ , vol .5 , pp . 901911 , 2004 .
we present the results of a large scale simulation , reproducing the behavior of a data center for the build - up and maintenance of a complete catalog of space debris in the upper part of the low earth orbits region ( leo ) . the purpose is to determine the performances of a network of advanced optical sensors , through the use of the newest orbit determination algorithms developed by the department of mathematics of pisa ( dm ) . such a network has been proposed to esa in the space situational awareness ( ssa ) framework by carlo gavazzi space spa ( cgs ) , istituto nazionale di astrofisica ( inaf ) , dm , and istituto di scienza e tecnologie dellinformazione ( isti - cnr ) . the conclusion is that it is possible to use a network of optical sensors to build up a catalog containing more than 98% of the objects with perigee height between 1100 and 2000 km , which would be observable by a reference radar system selected as comparison . it is also possible to maintain such a catalog within the accuracy requirements motivated by collision avoidance , and to detect catastrophic fragmentation events . however , such results depend upon specific assumptions on the sensor and on the software technologies .
in this paper we apply evolutionary optimization techniques to compute optimal rule - based trading strategies based on financial sentiment data .the number of application areas in the field of sentiment analysis is huge , see especially for a comprehensive overview .the field of finance attracted research on how to use specific financial sentiment data to find or optimize investment opportunities and strategies , see e.g. , , and .this paper is organized as follows .section [ financial - sentiments ] describes the financial sentiment data used for the evolutionary approach to optimize trading strategies and portfolios .section [ evolutionary - investment - strategy - generation ] presents an evolutionary optimization algorithm to create optimal trading strategies using financial sentiment data and how to build a portfolio using single - asset trading strategies .section [ numerical - results ] contains numerical results obtained with the presented algorithm and a comparison to classical risk - return portfolio optimization strategies as proposed by using stock market data from all stocks in the dow jones industrial average ( djia ) index .section [ conclusion ] concludes the paper .we apply financial sentiment data created by psychsignal .the psychsignal technology utilizes the wisdom of crowds in order to extract meaningful analysis , which is not achievable through the study of single individuals , see for a general introduction to measurement of psychological states through verbal behavior .let a group of individuals together be a crowd .not all crowds are wise , however four elements have been identified , which are required to form a wise crowd : diversity of opinion , independence , decentralization and aggregation as proposed by .these four elements are sometimes present in some forms of social media platforms , e.g. in the financial community stocktwits , from which the crowd wisdom used for the evolutionary approach described in this paper is derived .emotions are regarded as being unique to individual persons and occurring over brief moments in time .let a mood be a set of emotions together .in order to quantify the collective mood of a crowd , distinct emotions of individual members within the crowd must be quantified .subsequently , individual emotions can be aggregated to form a collective crowd mood .psychsignals natural language processing engine is tuned to the social media language of individual traders and investors based on the general findings of e.g. and of for the financial domain .the engine further targets and extracts emotions and attitudes in that distinct language and categorizes as well as quantifies these emotions from text .the methodology is based on the linguistic inquiry and word count ( liwc ) project , which is available publicly .see also for a description of an algorithm on how to generate such a semantic lexicon for financial sentiment data directly .the main idea is to assign a degree of bullishness or bearishness on stocks depending on the messages , which are sent through stocktwits , which utilizes twitter s application programming interface ( api ) to integrate stocktwits as a social media platform of market news , sentiment and stock - picking tools .stocktwits utilized so called _ cashtags _ with the stock ticker symbol , similar to the twitter _ hashtag _ , as a way of indexing people s thoughts and ideas about companies and their respective stocks .the available sentiment data format is described in tab . 
[tab : psychsignal ] .the data was obtained through quandl , where psychsignal s sentiment data for stocks can be accessed easily .stocktwits sentiment data format per asset . [ cols="<,<",options="header " , ]in this paper an evolutionary optimization approach to compute optimal rule - based trading strategies based on financial sentiment data has been developed .it can be shown that a portfolio composed out of the single trading strategies outperforms classical risk - return portfolio optimization approaches in this setting .the next step is to include transaction costs to see how this active evolutionary strategy loses performance when transaction costs are considered .future extensions include extensive numerical studies on other indices as well as using and comparing different evaluation risk metrics or a combination of metrics .one may also consider to create a more flexible rule - generating algorithm e.g. by using genetic programming .finally , to achieve an even better out - of - sample performance the recalibrating of the trading strategy can be done using a rolling horizon approach every month .a. brabazon and m. oneill .intra - day trading using grammatical evolution . in a.brabazon and m. oneill , editors , _ biologically inspired algorithms for financial modelling _ , pages 203210 .springer , 2006 .n. oliveira , p. cortez , and n. areal .automatic creation of stock market lexicons for sentiment analysis using stocktwits data . in _ proceedings of the 18th international database engineering & applications symposium _ , pages 115123 .acm , 2014 .
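as a rough illustration of how rule-based trading signals can be derived from such bullish/bearish intensity series , the sketch below thresholds a smoothed bullish-minus-bearish spread . the column names , thresholds and smoothing window are placeholders for illustration only ; they are not the rules evolved by the algorithm described in the paper .

import pandas as pd

def sentiment_signal(df, window=5, enter=0.2, exit_=-0.2):
    """long/flat signal from a daily bullish-vs-bearish intensity spread.

    df is assumed to have columns 'bullish_intensity' and 'bearish_intensity'
    (placeholder names); returns +1 (hold the stock) or 0 (stay in cash)."""
    spread = df["bullish_intensity"] - df["bearish_intensity"]
    smoothed = spread.rolling(window, min_periods=1).mean()
    signal = pd.Series(0, index=df.index)
    position = 0
    for t, s in smoothed.items():
        if position == 0 and s > enter:
            position = 1          # enter when the crowd mood turns clearly bullish
        elif position == 1 and s < exit_:
            position = 0          # exit when the crowd mood turns clearly bearish
        signal[t] = position
    return signal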
in this paper we apply evolutionary optimization techniques to compute optimal rule-based trading strategies based on financial sentiment data . the sentiment data was extracted from the social media service stocktwits and quantifies the level of bullishness or bearishness of the online trading community towards certain stocks . numerical results for all stocks from the dow jones industrial average ( djia ) index are presented , and a comparison to classical risk - return portfolio selection is provided . * keywords : * evolutionary optimization , sentiment analysis , technical trading , portfolio optimization
in survival and medical studies it is quite common that more than one cause of failure may be directed to a system at the same time .it is often interesting that an investigator needs to estimate a specific risk in presence of other risk factors . in statistical literature analysis of such riskis known as competing risk model .parametric inference of competing risk models are studied by many authors assuming that competing risks follow different lifetime distributions such as gamma , exponential and weibull distribution ; see for example , , .however determination of the cause of the failure is more difficult many times to observe than to follow up the time to failure . under the assumption that risks follow exponential distribution , inference of the model studied by . considered the same model , studied by miyakawa under the assumption that every member of a certain target population either dies of a particular cause , say cancer or by other causes .model can be more flexible by considering the fact that some individual may be alive at the end of experiment i.e. data are censored .such models are available in literature even under bayesian set up ; see for example , . herewe consider competing risk under progressive type - ii censoring .the censoring scheme is defined as follows .consider n individuals in a study and assume that there are k causes of failure which are known . at time of each failure , one of more surviving units may be removed from the study at random .the data from a progressively type - ii censored sample is as follows : .note that the complete and type - ii right censored samples are special cases of the above scheme when and and respectively .for an exhaustive list of references and further details on progressive censoring , the reader may refer to the book by .the main focus of this paper is the analysis of parametric competing risk model when the data is progressively censored .no work has been done taking modified weibull as parametric distribution of cause of the failures .modified weibull has different forms .we consider a specific form mentioned in section 2 which is not well - studied . in bayesian analysiswe choose reference prior .use of reference prior is rare as it makes the computational problem harder , though the best motivation for prior selection can be obtained through reference prior .also many times expression of the prior may not be tractable .however using gibbs combined with slice can provide a solution to this problem .bayes estimators and credible intervals are also obtained .the organization of the chapter is as follows . in section 2we describe the model and present the definitions and notation used throughout the paper .the bayesian estimation of the different parameters are considered in section 3 .numerical results are provided in section 4 .we illustrate the performance of those techniques in section 5 using real data set .finally some conclusions are drawn in section 6 .we assume are n independent , identically distributed ( i.i.d . 
)modified weibull random variable .further and are independent for all and .we observe the sample assuming all the causes of failure are known .we assume that the s are modified weibull distribution with parameters for and for .the distribution function of has the following form : , and 2 .therefore for each cause , the pdf of failure time can be given by , the likelihood function of the observed data is \\ & \times & \prod_{i = m_{1}}^{m } [ f_{2}(t_{i } ; \alpha , \beta , \lambda_{1})s_{1}(t_{i } ; \alpha , \beta , \lambda_{2})]\end{aligned}\ ] ] where evaluating the above likelihood we get e^{(\lambda_{1 } + \lambda_{2})\alpha\sum_{i = 1}^{m}(r_{i } + 1)(1 - e^{(\frac{t_{i}}{\alpha})^{\beta } } ) } \end{aligned}\ ] ] the key idea is to derive reference prior described by ( 1992 ) is to get prior that maximizes the expected posterior information about the parameters .let is expected information about given the data .then where is the kullback - leibler distance between posterior and prior can be given by if the set up is as follows , , where is and is .we define where is the parameter of interest and is a nuisance parameter .let we can show , where is the constant that makes this distribution a proper density .we will be using this fact to get the expression for the prior in conditional distribution at each gibbs sampler step .* reference prior for gibbs sampling set - up * in the gibbs sampling set - up we try to reduce the posterior as the conditional distribution of one parameter given the other parameters and the data . for example , we are interested in finding the complication of finding reference prior can be much simplied by using gibbs sampler method . generation of random number from these conditional densities is not tractable through inverse transform method due to its highly complicated form .therefore we use slice sampler to generate sample from those conditional distribution .steps of the algorithms and calculation of hpd region can be provided as follows : 1 .choose a starting value of .2 . use slice sampling to generate using above .3 . use this new and earlier to generate new .4 . use new and earlier to generate new .use new to generate new .repeat step 2 - 5 m times to generate new ( ) .mean bayesian estimate of is given by:- 8 . to obtain hpd(highest posterior density ) region of ,we order , as .then 100(1 - ) hpd region of become + therefore 100(1 - ) hpd region of becomes where is such that for all .similarly , we can obtain the hpd credible interval for , and .in this section we conduct a simulation study to investigate the performance of the proposed bayes estimators . we generate sample points from the actual distribution assuming the parameters provided along with different censoring schemes and then apply it to our model to predict the parameters and compare the results . to generate data setscurrently we assign the cause of failure as 1 or 2 with probability 0.5 .we have generated 200 data points .then we have divided it in 2 cases . in case 1we consider all 200 data points ( which is * type 2 censoring * ) and in case 2 we will apply censoring on those data ( which is * progressive censoring * ) and then predict the parameters .below are the different schemes of data sets used for validation .1 . , type 2 censoring 2 . , progressive type 2 censoring 3 . , type 2 censoring 4 . 
, progressive type 2 censoring * scheme 1 * .results comparison for scheme 1 [ cols="<,<,<",options="header " , ] [ fig : case2 ] the results obtained for hpd region of apparently seem to look awkward as its width is very high .however taking posterior median as true value of the parameters of modified weibull , we can easily cross - check it provides a good fit for histograms of the distribution of cause - specific data .this verifies the distributional assumption of this parametric approach .we consider the bayesian analysis of the competing risks data when they are type - ii progressively censored .numerical simulation shows the posterior median calculated via slice sampling combined with gibbs sampler works quite well . in this articlewe consider with two causes of failure only , the work can be extended to more than two causes of failure .we select parameters to follow reference prior which is very flexible .the similar work can be done for type - i progressively censored data .the work is on progress .slice sampling uses step out methods where selection of width plays an important role .an automated choice of such selection based on the data and distribution can enhance the quality of the algorithm .abdel - hamid , alaa h and al - hussaini , essam k ( 2011 ) .inference for a progressive stress model from weibull distribution under progressive type - ii censoring ._ journal of computational and applied mathematics _ * 235 * , no .17 , 5259 - 5271 .asgharzadeh , a. , valiollahi , r. and kundu , d. ( 2015 ) .prediction for future failures in weibull distribution under hybrid censoring _ international journal of scientific & engineering research _ * 6 * , 16251632 .damien , paul and wakefield , jon and walker , stephen ( 1999 ) .gibbs sampling for bayesian non - conjugate and hierarchical models by using auxiliary variables ._ journal of the royal statistical society .series b , statistical methodology _ * 5 * , 331344 .satagopan , jm and ben - porat , l and berwick , m and robson , m and kutler , d and auerbach , ad ( 2004 ) , a note on competing risks in survival data analysis _ british journal of cancer _ vol .91 , no . 7 , 12291235
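the deterministic-scan gibbs sampler with slice-sampling updates and the hpd construction described above can be sketched as follows . the log full-conditional densities are left as user-supplied callables , since the actual reference-prior posterior of the modified weibull parameters is not reproduced here ; the step width and iteration counts are illustrative defaults .

import numpy as np

def slice_update(x0, logf, w=1.0, max_steps=50, rng=np.random):
    """one univariate slice-sampling update (step-out and shrinkage)."""
    logy = logf(x0) + np.log(rng.uniform())       # level defining the horizontal slice
    left = x0 - w * rng.uniform()
    right = left + w
    j = int(max_steps * rng.uniform()); k = max_steps - 1 - j
    while j > 0 and logf(left) > logy:            # step the interval out
        left -= w; j -= 1
    while k > 0 and logf(right) > logy:
        right += w; k -= 1
    while True:                                   # shrink until a point lies in the slice
        x1 = left + (right - left) * rng.uniform()
        if logf(x1) > logy:
            return x1
        if x1 < x0:
            left = x1
        else:
            right = x1

def gibbs_slice(log_conditionals, init, n_iter=5000):
    """deterministic-scan gibbs sampler; log_conditionals[i](x, theta) is the
    log full conditional of parameter i given the current values in theta."""
    theta = np.array(init, dtype=float)
    chain = np.empty((n_iter, theta.size))
    for t in range(n_iter):
        for i in range(theta.size):
            theta[i] = slice_update(theta[i], lambda x, i=i: log_conditionals[i](x, theta))
        chain[t] = theta
    return chain

def hpd_interval(draws, alpha=0.05):
    """shortest interval containing a fraction 1-alpha of the ordered posterior draws."""
    s = np.sort(np.asarray(draws))
    m = int(np.floor((1.0 - alpha) * s.size))
    widths = s[m:] - s[:s.size - m]
    j = int(np.argmin(widths))
    return s[j], s[j + m]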
in this paper we study bayesian analysis of the modified weibull distribution under a progressively censored competing risk model . we use deterministic scan gibbs sampling combined with slice sampling to generate samples from the posterior distribution , which is formed by taking the reference prior as the prior distribution . a real - life data analysis is presented for illustrative purposes . * bayesian analysis of modified weibull distribution under progressively censored competing risk model * _ keywords : _ modified weibull , competing risk , likelihood function , slice sampling .
inspiralling compact binaries containing spinning black holes ( bhs ) are plausible sources for the network of second generation gravitational wave ( gw ) detectors like the advanced ligo ( aligo ) , advanced virgo , kagra , geo - hf and the planned ligo - india .the inspiral dynamics and associated gws from compact binaries can be accurately described using the post - newtonian ( pn ) approximation to general relativity . moreover ,an optimal detection technique of _ matched filtering _ is employed to detect and characterize inspiral gws from such binaries . in this technique, one cross correlates the interferometric output data with a bank of templates that theoretically model inspiral gws from spinning binaries .the construction of these templates involves modeling the two gw polarization states , and , associated with such events , in an accurate and efficient manner . at present , gw frequency and associated phase evolution , crucial inputs to compute , are known to 3.5pn order for non - spinning compact binaries whereas the amplitudes are available to 3pn order . in the case of spinning components ,the spin effects enter the dynamics and gw emission via spin - orbit ( so ) and spin - spin ( ss ) interactions . additionally , , and , the two spin and orbital angular momenta , for generic spinning compact binaries precess around the total angular momentum due toso and ss interactions .this forces substantial modulations of the emitted gws from inspiralling generic spinning compact binaries .therefore , it is important to incorporate various spin effects while constructing inspiral gw templates for spinning compact binaries . at present , gw frequency evolution and amplitudes of for maximally spinning bh binaries are fully determined to 2.5pn and 2pn orders , respectively , while incorporating all the relevant spin induced effects .there exist inspiral waveforms for precessing binaries , implemented in the lalsimulation package of ligo scientific collaboration ( lsc ) , that employ the _ precessing convention _ of .an attractive feature of this convention is its ability to remove all the spin precession induced modulations from the orbital phase evolution .this allows one to express the orbital phase as an integral of the orbital frequency , namely .therefore , in this convention , the inspiral waveform from precessing binaries can be written as the product of a non - precessing carrier waveform and a modulation term that contains all the precessional effects .the convention involves a _ precessing source frame _ whose basis vectors satisfy the evolution equations .the angular frequency is constructed in such a manner that the three basis vectors and always form an orthonormal triad .subsequently , has to be , where is the usually employed precessional frequency for .the relevant expression for can be obtained by collecting the terms that multiply in equation ( 9 ) in .this triad defines an orbital phase such that , where is the unit vector along binary separation vector .furthermore , one can express in the co - moving frame ( ) as .it was argued in that should only be proportional to , leading to .thus , the adiabatic condition for the sequence of circular orbits , namely , gives the desired result , i.e. , .it should be obvious that the above adiabatic condition can also imply . in practice , the precessional equation for employed to construct and to evolve . 
as a consequence , is no longer proportional to and this leads to pn corrections to ( see section 2.1 of for detailed calculation ) .these observations motivated us to provide a set of pn accurate equations to obtain temporally evolving quadrupolar order for generic spinning compact binaries in an -based precessing convention . in the next section, we present our -based precessing convention and explore its data analysis implications in the later section .in this section , we introduce a -based precessing source frame ( , , ) , to develop a -based precessing convention , where is unit vector along .the precessional dynamics of and are provided by , where and is the usual precessional frequency of .it should be obvious that is identical to as .it is possible to construct a -based co - moving triad ( ) and define an orbital phase such that and .also , the time derivatives of is given by .consequently , the frame independent adiabatic condition for circular orbits , namely , leads to , where .this results in the following 3pn accurate differential equation for , ^ 2\biggr\ } \,,\ ] ] where , , and .the pn expansion parameter is defined as . the kerr parameters and of the two compact objects of mass and specify their spin angular momenta by , where and are the unit vectors along and .the use of to describe binary orbits also modifies the evolution equation for ( or ) .this is because the so interactions are usually incorporated in terms of and , in the literature .these terms require modifications due to the 1.5pn order relation between and which can be obtained from equation ( 8) in .the pn accurate expression for along with the 3pn additional terms is given by equation ( 9 ) in .notice that these additional terms are , for example , with respect to equation ( 3.16 ) in that provides pn accurate expression for while invoking to describe binary orbits .we now model inspiral gws from spinning binaries in our -based precessing convention .the expressions for quadrupolar order and , written in the frame - less convention , read [ eq_hp_hx ] where and are the and components of and in an inertial frame associated with , the unit vector that points from the source to the detector , while is the distance to the binary .these and components of and can be expressed in terms of the cartesian components of and . in order to obtain and , we require to solve numerically the differential equations for and .we use equation ( [ eq_phidot ] ) for while the differential equation for , and are given by equations ( 9 ) and ( 13 ) in .it easy to see that the evolution of and depends upon the time variation of , and .therefore , we also need to solve differential equations for , and .these differential equations that include the leading order so and ss interactions can be obtained from equation ( 15 ) in . 
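numerically , the coupled evolution of the orbital phase , the frequency parameter and the unit vectors along the orbital and spin angular momenta can be integrated as one first-order ode system . the sketch below only shows the bookkeeping ( state packing and integration ) ; the post-newtonian right-hand sides are left as user-supplied callables rather than the actual expressions referenced above , and the tolerances are illustrative .

import numpy as np
from scipy.integrate import solve_ivp

def make_rhs(dphi_dt, dv_dt, dl_dt, ds1_dt, ds2_dt):
    """bundle user-supplied pn right-hand sides into one state derivative.

    state = [phi, v, lx, ly, lz, s1x, s1y, s1z, s2x, s2y, s2z]."""
    def rhs(t, y):
        phi, v = y[0], y[1]
        lh, s1, s2 = y[2:5], y[5:8], y[8:11]
        return np.concatenate((
            [dphi_dt(v, lh, s1, s2)],          # orbital phase evolution
            [dv_dt(v, lh, s1, s2)],            # frequency-parameter evolution
            dl_dt(v, lh, s1, s2),              # precession of the orbital plane
            ds1_dt(v, lh, s1, s2),             # spin precession, body 1
            ds2_dt(v, lh, s1, s2),             # spin precession, body 2
        ))
    return rhs

def evolve(rhs, y0, t_span, v_max):
    """integrate until t_span ends or the frequency parameter reaches v_max."""
    def stop(t, y):
        return v_max - y[1]
    stop.terminal = True
    return solve_ivp(rhs, t_span, y0, events=stop,
                     rtol=1e-9, atol=1e-12, dense_output=True)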
in practice, we numerically solve the differential equations for , , , and to obtain temporally evolving cartesian components of and .note that we do not solve the differential equation for .this is because the temporal evolution of can be estimated using the relation .the required initial values for the cartesian components of , , and are given by freely choosing the following five angles : , , , and .the initial cartesian components of , , and as functions of the above angles are given by equations ( 16 ) in .note that this choice of initial conditions is influenced by the lalsuite spintaylort4 code of lsc .additionally , we let the initial value to be where hz ( relevant for aligo ) and the initial phase to be zero . inwhat follows , we explore the data analysis implications of these inspiral waveforms that employ the -based precessing convention .we employ the _ match _ to compare inspiral waveforms constructed via the and -based precessing conventions .our comparison is influenced ( and justified ) by the fact that the precessing source frames of these two conventions are functionally identical .this should be evident from the use of _ the same _ precessional frequency , appropriate for , to obtain pn accurate expressions for both the -based and -based .therefore , the _ match _ estimates probe influences of the additional 3pn order terms present in the differential equations for and in our approach .note that these 3pn order terms are not present in the usual implementation of the precessing convention as provided by the lalsuite spintaylort4 code .our match computations involve and , the two families of inspiral waveforms arising from the and -based precessing conventions .the inspiral waveform families are adapted from the lalsuite spintaylort4 code of lsc while families arise from our approach ( equations ( [ eq_hp_hx ] ) ) .we employ the quadrupolar order expressions for while computing and in the present analysis .moreover , the two families are characterized by identical values of and .also , the initial orientations of the two spins in the -based inertial frame are also chosen to be identical .the computation of from with the help of equation ( 8) in ensures that and orientations at the initial epoch are physically equivalent .therefore , our match computations indeed compare two waveform families with physically equivalent orbital and spin configurations at the initial epoch .note that we terminate and inspiral waveform families when their respective parameters reach ( ) .figure [ figure : q_m_phi ] represents the result of our computations .the binary configurations have initial dominant so misalignments as and we let the initial orbital plane orientation in the -based inertial frame to take two values leading to edge - on ( ) and face - on ( ) binary orientations . for these two configurations , we need to choose to be and , respectively . 
moreover , we choose , , .let us note that orientations ( from ) for these configurations will be slightly different from or due to the 1.5pn accurate relation between and .plots for the accumulated orbital phase ( ) and the associated match ( ) estimates as functions of the mass ratio for maximally spinning bh binaries inspiralling in the ] .we find that the variations in estimates are quite independent of the initial orbital plane orientations .we see a gradual decrease in values as we increase the value and this variation is reflected in the gradual increase of .incidentally , this pattern is also observed for configurations having somewhat smaller initial dominant so misalignments .however , the estimates are close to unity for tiny values and this is expected as precessional effects are minimal for such binaries .therefore , the effect of the above discussed additional 3pn order terms are more pronounced for high mass ratio compact binaries having _ moderate _dominant so misalignments .we find that the match estimates are less than the optimal 0.97 value for a non - negligible fraction of unequal mass spinning compact binaries .it may be recalled that such an optimal match value roughly corresponds to a loss in the ideal event rate .we , therefore , conclude that the additional 3pn order terms in frequency and phase evolution equations in our approach should not be neglected for a substantial fraction of unequal mass binaries .9 sathyaprakash b s and schutz b f 2009 _ living rev . relativity _ * 12 *
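for reference , a stripped-down version of the match computation used in the comparison above can be written as follows . it assumes a flat ( white ) noise spectrum and maximizes only over a relative time shift , whereas the actual estimates also involve the detector noise curve and a phase maximization ; it is an illustrative sketch , not the lalsuite implementation .

import numpy as np

def simple_match(h1, h2):
    """normalized overlap of two real waveforms, maximized over a time shift,
    with a white noise spectrum assumed purely for illustration."""
    n = 1
    while n < 2 * max(len(h1), len(h2)):
        n *= 2                                # zero-pad to avoid circular wrap-around
    a = np.fft.fft(h1, n)
    b = np.fft.fft(h2, n)
    corr = np.fft.ifft(a * np.conj(b)).real   # <h1, h2> as a function of relative lag
    norm = np.sqrt(np.dot(h1, h1) * np.dot(h2, h2))
    return corr.max() / norm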
it is customary to use a precessing convention , based on the newtonian orbital angular momentum , to model inspiral gravitational waves from generic spinning compact binaries . a key feature of such a precessing convention is its ability to remove all spin precession induced modulations from the orbital phase evolution . however , this convention usually employs a post - newtonian ( pn ) accurate precessional equation , appropriate for the pn accurate orbital angular momentum , to evolve the precessing source frame based on the newtonian orbital angular momentum . this motivated us to develop inspiral waveforms for spinning compact binaries in a precessing convention that explicitly uses the pn accurate orbital angular momentum to describe the binary orbits . our approach introduces certain additional 3pn order terms in the orbital phase and frequency evolution equations with respect to the usual implementation of the precessing convention based on the newtonian orbital angular momentum . the implications of these additional terms are explored by computing the match between inspiral waveforms that employ the two precessing conventions . we found that the match estimates are smaller than the optimal value , namely 0.97 , for a non - negligible fraction of unequal mass spinning compact binaries .
among the various sources of energy , oil remains as one of the most valuable ones , considering its extensive use in the daily life , such as in the production of gasoline , plastic , etc . after discovering a petroleum reservoir, one can extract about 15 - 50% of the oil by using and maintaining the initial pressure in the reservoir through water flooding ( first and second phase oil recovery ) ; however , 50 - 85% of oil remains in the reservoir after this , so called conventional recovery .this is the motivation for developing new extraction techniques in order to recover the most oil possible .one of these eor techniques consists of adding bacteria to the reservoirs and using their bioproducts and effects to improve the oil production , which is called meor .besides all meor experiments , it is worth pointing out that meor has been already used successfully in oil reservoirs .nevertheless , the meor technology is not yet completely understood and there is a strong need for reliable mathematical models and numerical tools to be used for optimizing meor .the bioproducts formed due to microbial activity are acids , biomass , gases , polymers , solvents and surfactants .the main purpose of using microbes ( bacteria ) is to modify the fluid and rock properties in order to enhance the oil recovery .these microbes and the produced surfactants have the advantage to be biodegradable , temperature tolerant , ph - hardy , non - harmful to humans and lower concentrations of them can produce similar results as chemical surfactants .we can describe briefly the model presented as follows : we inject water , bacteria and nutrients to a reservoir .the bacteria consume nutrients and produce more bacteria and surfactants . as time passes , some bacteria die or reproduce .the surfactants reduce the oil - water ift , allowing the recovery of more oil .the consideration of ifa in the model allows to include the biological production of surfactants at the oil - water interface , reduces the hysteresis and also enables to include that bacteria is mainly living at the oil - water interface , which is believed to be a very important feature for meor .there exist different systems where the ifa is important .for example , another important application of microorganisms is in the soil remediation .specially , surfactants can increase bioavailability and degradation of soil contaminants , for example petroleum - derived hydrocarbons . nevertheless , although the general theory for ifa was established , the development of ifa based models for particular applications remains a current challenge . in this workwe will derive for the first time a mathematical model for meor which includes ifa .it is worth to be mentioned that further developments of the present model , which are considering a formal upscaling from pore to core and in this way better describe the evolution of the micro scale are possible but beyond the aim of this study .mathematical models for meor are based on coupled nonlinear partial differential equations ( pdes ) and ordinary differential equations ( odes ) , which are very difficult to be solved .therefore , it is necessary to use advanced numerical methods and simulations to predict the behavior on time of the unknowns in this complex system .for example , in they used a semi - implicit finite difference technique and in they used comsol multiphysics , that is a commercial pde solver using finite elements together with variable - step back differentiation and newton method . 
even though it is possible to buy commercial software in the petroleum industry for simulation, it is preferable to do the discretization of the equations and write an own code to perform numerical simulations , in order to implement new relations that are not included in the commercial ones .most of the meor models are based on non - realistic simplifications ( for example , only one transport equation for the bacteria is considered , hysteresis in the capillary pressure is neglected , the oil - water interfacial area is not included , numerical simulations are just made in 1d ) . in this general context ,the objective of the research reported in the present article was to develop and implement ( in a 2d porous media ) an accurate numerical simulator for meor . to summarize , the new contributions of this paper are * the development of a multidimensional comprehensive mathematical model for + meor , which includes bacteria , nutrients , surfactants and two - phase flow . * the inclusion of the role of ifa in meor .* the inclusion of the tendency of bacteria to move to the oil - water interface . * the inclusion of the biological production of surfactants at the oil - water interface .the paper is structured as follows * reservoir modeling .we introduce the basic concepts , ideas and equations for modeling meor .in addition , we explain the new phenomena we can model in meor when we include the ifa .* discretization and implementation .we explain the techniques we used for the discretization , namely finite differences and tpfa for the spatial discretization and backward euler ( be ) for the time discretization .we also describe the algorithm we used for numerically solving the mathematical model for meor . * results and discussion .we present the results of the numerical experiments by studying the effects of the new relations we proposed for modeling meor .* conclusion and future work .let us consider a porous medium filled with water and oil .we assume that the fluids are immiscible and incompressible . for knowing the amount of a phase in the representative element volume ( rev ), we introduce the saturation of phase ( for oil and for water ) given by the ratio of volume of phase ( in rev ) over the volume of voids ( in rev ) . in the case where the porous medium is just filled with two fluids, we have that . in the oil - water interfacethere is a surface free energy due to natural electrical forces , which attract the molecules to the interior of each phase and to the contact surface .the ift keeps the fluids separated and it is defined by the quantity of work needed to separate a surface of unit area from both fluids . capillary pressure is the difference in pressure between two immiscible phases of fluids occupying similar pores due to ift between the phases .it is known that is not a well - defined function because to one value of water saturation , corresponds more than one value of , due to being dependent on the history .it means that different saturation values are expected during imbibition as in drainage .this phenomenon occurring in many porous media systems is called hysteresis .we write darcy s law and the mass conservation equations for each phase ( ) where is the porosity , the volumetric flow rate per area , the source / sink term , the density , the absolute permeability and the phase mobility , with the relative permeability and the viscosity . 
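for concreteness , the standard form of these two-phase flow equations ( written for a wetting phase w and a non-wetting phase o , with gravity included ) is recalled below ; this is textbook notation rather than a verbatim reproduction of the expressions above .

\[
\mathbf{q}_\alpha = -\,\frac{k\,k_{r\alpha}(S_\alpha)}{\mu_\alpha}\,\bigl(\nabla p_\alpha - \rho_\alpha\,\mathbf{g}\bigr),
\qquad
\frac{\partial(\phi\,S_\alpha)}{\partial t} + \nabla\cdot\mathbf{q}_\alpha = q_\alpha,
\qquad \alpha \in \{w,\,o\},
\]
\[
S_w + S_o = 1, \qquad p_c = p_o - p_w,
\]
with the phase mobility \(\lambda_\alpha = k_{r\alpha}/\mu_\alpha\).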
in this workwe consider that the porosity does not change over time .defining the average pressure , and , using and , we can reformulate the problem solving for and an extended description of the previous equations can be found e.g. in and . considering a porous medium filled with two fluids , the surface where they make contactis called ifa .mathematically , we compute the specific ifa as a ratio of the ifa in the rev over the volume of rev . for understanding better the importance of in the oil recovery ,let us consider fig .[ fig1 ] , where we observe that splitting the square in four pieces , the ifa increases by a factor of 2 .then , we can recover faster the oil in the zones with larger ifa . when darcy made his experiments and deduced his law , he just considered a single - phase flow . in the case of two - phase flow , we just extend darcy s law for two fluids , but we may expect there are more forces involve than the gradient of the hydraulic head . in developed equations of momentum balance for phases and interfaces , based on thermodynamic principles .in addition , equations of balance of mass for phases and interfaces are considered . after performing various transformations , the following balance equation of specific interfacial area for the oil - water interface is obtained where is the interfacial velocity , is the rate of production / destruction of specific ifa and is the interfacial permeability .based on a thermodynamic approach , demonstrated that including the ifa in the capillary pressure relation reduces the hysteresis under equilibrium conditions . in order to close our model , which includes the specific oil - water ifa, we have to provide a relation that accounts for interfacial forces .this relation can be obtained by fitting surfaces to data coming from models or experiments . in , they used a bi - quadratic relationship .however , this relation does not fulfill the requirements . in this work, we use the next relation with , , and constants . from the previous parameterization, we can isolate the capillary pressure for solving the specific ifa equation , we need to provide the mathematical expression for . in proposed the following relation based on physical arguments where is a parameter characterizing the strength of change of specific ifa due to a change of saturation the path is in general unknown , but in the main drainage and imbibition curves , is a known function of .in addition , it is possible to compute this derivative for .for all other paths , we interpolate using these three values of .experimental investigations focused on simultaneously measuring , and are often difficult , expensive and subject to limitations , thus only a few have been reported in the literature , indicating a need for further experimental studies characterizing the relationship . 
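one way to use such a fitted capillary pressure - saturation - interfacial area surface in a simulator is sketched below : given the current saturation and specific interfacial area , the capillary pressure is recovered by a one-dimensional root find on the fitted surface . the quadratic surface and its coefficients are placeholders chosen only to make the sketch runnable ; they are not the calibration used in this work .

import numpy as np
from scipy.optimize import brentq

# placeholder fitted surface a_wn(S_w, p_c); real coefficients come from data or
# pore-scale models, these values are illustrative only.
COEF = {"a0": 2.0e3, "a1": -1.5e3, "a2": 4.0e-2, "a3": -1.0e-5}

def awn_surface(sw, pc):
    c = COEF
    return (c["a0"] * (1.0 - sw) + c["a1"] * (1.0 - sw) ** 2
            + c["a2"] * pc * (1.0 - sw) + c["a3"] * pc ** 2)

def capillary_pressure(sw, awn, pc_min=0.0, pc_max=5.0e4):
    """invert the fitted surface for p_c at given (S_w, a_wn).

    assumes awn_surface(sw, pc) - awn changes sign on the bracket [pc_min, pc_max]."""
    residual = lambda pc: awn_surface(sw, pc) - awn
    return brentq(residual, pc_min, pc_max)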
for describing the movement of bacteria , nutrients and surfactants , we consider the following transport equations ( ) where the reaction rate terms are given by and in the general casethe dispersion coefficients are given by where the fluid velocity of the aqueous phase is given by .in the previous equations , , , are the concentrations of bacteria , nutrients and surfactants , the longitudinal dispersivity , the transverse dispersivity , the effective diffusion coefficients of bacteria , nutrients and surfactants in the water phase and the dirac delta .we consider that the bacteria , nutrients and surfactants live on the water , so their transport due to the convection is given by the term .we include gravity effects on the bacteria considering the settling velocity of bacteria . for including that the bacteria has a tendency to live in the oil - water interface , we add the chemotactic velocity in the bacteria transport equation .we propose the following expression for the chemotactic velocity where is a diffusive term .it is for the first time when such a chemotaxis term is included in the modeling transport of bacteria in two - phase porous media . including the chemotaxis in meor models is important because besides the external constrains , it also determines the distribution of bacteria in the soil .let us analyses the reaction terms for the transport equations . for modeling the growth of bacteria, we use the monod - type model where is the observed maximum growth rate and the half saturation constant , being the nutrient concentration level when . on the other hand, we consider a linear death of bacteria , given by . due to nutrients and bacteria being involved in the generation of surfactants ,we introduce the surfactant yield coefficients .for the nutrients consumed for bacteria , we consider the yield coefficient , which we included in the term . in the absence of ifa , one relation for the production rate of surfactants is given by where is the maximum specific biomass production rate , and the critical nutrient concentration for metabolism term , that models a need of minimum for obtaining surfactants .one of the characteristics that a surfactant should have is biological production at the oil - water interface . in order to consider this effect in our model, we consider the production rate of surfactant as a function of the nutrient concentration and ifa . to our knowledge, there are not experimental studies to deduce a mathematical relation of the surfactant production in function of the ifa ; therefore , we need experiments for . given the mathematical characteristics of the monod - type function , we propose the following expression for the production rate of surfactants where is the half saturation constant .the pressure , saturation and ifa equations are coupled with these transport equations under the assumptions that the two - phase flows are incompressible and immiscible , both viscosities are constants , the presence of dissolved salt in the wetting phase is neglected and the system is isothermal .one of the main objectives of applying meor is to reduce the via surfactant effect on the oil - water ift .there exist several experiments showing the impact of surfactants in reducing the ift .common initial ift values are of the order of mn / m and we aim to lower this value mn / m . 
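the reaction terms introduced above can be gathered into a small helper like the one below : monod growth of bacteria on nutrients , a linear death term , nutrient consumption through a yield coefficient , and a monod-type dependence of the surfactant production on the specific interfacial area . the yield coefficients and half-saturation constants are illustrative values , not calibrated ones .

def reaction_rates(cb, cn, awn,
                   mu_max=0.5, k_n=1.0, k_d=0.02,
                   y_b=0.5, y_s=0.3, k_a=50.0):
    """reaction terms for bacteria (cb), nutrients (cn) and surfactants.

    growth follows monod kinetics on the nutrient concentration; surfactant
    production is additionally limited by a monod-type factor in the specific
    interfacial area awn, following the structure proposed in the text."""
    growth = mu_max * cn / (k_n + cn) * cb          # bacterial growth
    death = k_d * cb                                # linear bacterial decay
    r_bacteria = growth - death
    r_nutrients = -growth / y_b                     # nutrients consumed per new biomass
    r_surfactant = y_s * growth * awn / (k_a + awn) # production at the oil-water interface
    return r_bacteria, r_nutrients, r_surfactant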
one mathematical model for the ift reduction is given by where is the initial ift , and are fitting parameters , which define the efficiency of the surfactant , moderating the concentration where the ift drops dramatically and the minimal ift achieved after the surfactant action .when the surfactant concentration increases , the ift and decrease .for considering this effect in our model , we include the dependence of the in eq .[ pca ] , resulting in the following capillary pressure expression where we also include the porosity and permeability .then , the ifa becomes the residual oil saturation after water flooding is believed to be distributed through the pores in the petroleum reservoir in the form of immobile globules , being the capillary and viscous interactions the main forces acting on these globules .the capillary number relates the surface tension and viscous forces acting in the interface , the bond number relates the buoyancy to capillary forces and the trapping number quantifies the force balance .then , the mathematical expressions for these numbers are given by where is the contact angle between the oil - water interface and is the angle of flow relative to the horizontal .at the end of water flooding , the capillary number is in the range to . in order to increase the capillary number , from eq .[ trap ] we observe that increasing the flow rate , the water viscosity or lowering the ift are the three possibilities . in , they stated that meor could improve the oil extraction if we can obtain a capillary number between and . for relating the residual oil saturation and the capillary number , we use the following relation ( ) ^{\tfrac{1}{t_2}-1 } \big ) , \ ] ] where and are the maximum and minimum residual oil saturation and both and are fitting parameters estimated from the experimental data . giving the mathematical expressions for the ift reduction , the trapping number and the residual oil saturation reduction , we can account in the model the effect of the surfactants in improving the oil recovery . in summary, we propose the next set of equations as the first complete model for meor including ifa effects * pressure * + + * saturation * + + * interfacial area * + + * bacterial concentration * + + * nutrient concentration * + + * surfactant concentration * + + * relative permeabilities * + + * capillary pressure * + + * interfacial tension * + + * trapping number * + + * residual oil saturation * + ^{\tfrac{1}{t_2}-1 } \big ) $ ] .after having set the model equations , we proceed to define the space domain .we consider a rectangular domain with a uniform cell - centered grid with half - cells at the boundaries .[ fig2 ] shows a uniform cell - centered grid in a space domain of length and width .+ ] + discretization of time is achieved considering a uniform partition from the initial time until the final time with the time step .after discretizing the space and time , now we discretize the derivatives and integrals .considering an arbitrary function , using its taylor expansion we get the following approximation for the derivatives the approximation of first order is used on the boundaries of the spatial domain and time derivatives while the second order approximation is used in the cell - centered grill . 
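the taylor-expansion based derivative approximations described above amount to one-sided first-order differences at the boundary half-cells and centred second-order differences at the interior cell centres ; a minimal sketch on a uniform one-dimensional grid is given below .

import numpy as np

def ddx(u, dx):
    """first derivative of a 1-d field on a uniform grid: one-sided first-order
    differences at the two boundaries, centred second-order differences inside."""
    d = np.empty_like(u, dtype=float)
    d[0] = (u[1] - u[0]) / dx                 # forward difference, o(dx)
    d[-1] = (u[-1] - u[-2]) / dx              # backward difference, o(dx)
    d[1:-1] = (u[2:] - u[:-2]) / (2.0 * dx)   # central difference, o(dx^2)
    return d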
to discretize time derivatives , we consider the be method finally , for approximating integrals , we use the midpoint rule in the previous section we developed a two - phase flow model for meor with transport equations including ifa effects .we are interested in the solution of this system . in order to perform numerical simulations , it is necessary to discretize these equations . in this work, we used a cell - centered finite - volume method called tpfa .a detailed description about how to use and implement tpfa in matlab can be found in .as we consider a cell centered grid , we do not know the values of the parameters on the walls , so it is necessary to use an approximation on the boundaries .depending on the parameter , we should consider different technique approximations , in order to get stability and correct results .regarding the permeability of the medium , we approximate by the harmonic mean the reason for considering this harmonic mean comes from the computation of an effective permeability when we consider a layered system with different values of permeability and a flux perpendicular to these layers , finding that the effective permeability of a system with two layers is given by the previous equation . for the rest of the parameters that we need to approximate on the walls , we simply use the average value there are several algorithms to solve reactive transport models . in this work , for solving the pressure , saturation and ifa equations , we use an implicit scheme .the use of these iterative formulations is very common , for example in and they solved the richards equation using this technique . regarding the two - phase flow , in they solved the pressure and saturation equations using the same iterative scheme .the convergence of this implicit scheme can be followed from , and . due tothe capillary pressure is a function of the saturation and interfacial area , both of them being unknowns , we use an inner iteration in order to upgrade the values of the functions depending on the saturation and interfacial area and solve this system of equations ( pressure , saturation and ifa equations ) until a stopping criterion is reached . for initializing the iteration, we consider the solution at the previous time step when we discretized the pressure and saturation equation , we used the chain rule for computing the gradient of the capillary pressure in order to improve the stability of the scheme . to solve the transport equations, we use an iterative solver we write the three system of equations in the same matrix , looking for the solution of iteratively until a stopping criterion is reached . due towe use an iterative scheme , it is necessary to have a measurement of the error . for this work, we use the following -norm then , the algorithm for solving the model equations is the following 1 . we solve the pressure equation using the previous values of water saturation and ifa .2 . we solve the saturation equation using the updated values of pressure but the previous values of ifa .3 . we solve the ifa equation using the updated values of saturation .we compute the errors , and . 5 .if the errors are less than a given tolerance , we proceed to solve the concentration equations .otherwise , we upgrade the values for the inner iteration and we solve again the three equations . if any of the errors does not get less than in a given maximum number of iterations , we halve the time step and try again . 
if we halve the time step in a maximum number of times , we have to check if the problem is well - posed .we solve the concentration equations iteratively until the error is less than or we proceed as mentioned before halving the time step .7 . if the concentration error is less than , we compute the ift , and .we move to the next time step and we repeat the process until we reach the final time t and we plot the results .following all previous work , we can finally perform numerical experiments to study the effects of meor considering the oil - water ifa . in order to formulate the model , we considered the next works : ( transport equations ) , and ( ifa ) , ( reduction of ift ) and ( reduction of ) . using the best estimate of physical parameters from the existing experiments , we were not able to obtain physically plausible results .we interpret this to be due to the disparate experimental conditions used in the cited works , leading to results which are physically incompatible .thus the first conclusion of our work is that the existing experimental literature for meor and interfacial area is incomplete , and that dedicated experiments encompassing the full process of microbial growth , transport and surfactant production together with changing ifa , new relations for the rate of production / destruction of ifa and new capillary pressure surfaces are needed . in lieu of complete and compatible experimental data ,we have thus conducted numerical simulations with what we deem plausible data , to highlight the dominating physical processes in the system . + we consider a porous medium of length m and width m. we set the initial water saturation as .we inject water , bacteria and nutrients into the left boundary and oil , water , bacteria , nutrients and surfactants flow out through the right boundary .there is not flux through the upper and bottom boundary .for the water and oil pressures , we take the same conditions as in : kpa and kpa ; leading to an average pressure of kpa and initial capillary pressure of kpa . on the left boundary, we have a flux boundary condition .due to we inject water , the left boundary condition for the water saturation is . regarding the right boundary condition for the water saturation , we consider a neumann condition with zero value .we choose the initial value and left boundary of ifa evaluating eq .[ chiquita ] with the initial and left values of water saturation , capillary pressure , ift , permeability and porosity respectively .we consider that there is neither bacteria nor nutrients initially in the porous media and we inject them on the left boundary with a concentration of . we also consider a no - flux boundary condition for the surfactant concentration on the left boundary . 
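the coupling algorithm listed above ( inner fixed-point iteration over pressure , saturation and interfacial area , followed by the transport solve , with time-step halving on failure ) can be organised as in the sketch below . the individual solvers and the error norm are passed in as callables , since their discrete forms are not reproduced here ; the state keys , tolerance and iteration limits are illustrative .

def advance_one_step(state, dt, solve_pressure, solve_saturation, solve_ifa,
                     solve_transport, error, tol=1e-6, max_inner=50, max_halvings=5):
    """one time step of the splitting scheme sketched above.

    state is a dict with (at least) keys 'p', 's', 'a' and 'conc' holding the
    pressure, water saturation, specific interfacial area and concentrations;
    the solver callables and the error norm supply the discrete equations."""
    for _ in range(max_halvings):
        new = dict(state)                      # start the inner iteration from time level n
        for _ in range(max_inner):
            p = solve_pressure(new, state, dt)
            s = solve_saturation({**new, "p": p}, state, dt)
            a = solve_ifa({**new, "p": p, "s": s}, state, dt)
            errs = (error(p, new["p"]), error(s, new["s"]), error(a, new["a"]))
            new.update(p=p, s=s, a=a)
            if max(errs) < tol:                # inner fixed-point iteration converged
                new["conc"] = solve_transport(new, state, dt)
                return new, dt
        dt *= 0.5                              # no convergence: halve the time step and retry
    raise RuntimeError("time step halved too often; check well-posedness of the problem")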
regarding the relation in eq .[ porque ] , we use eq .[ pepo ] evaluating with the initial average ifa , ift , permeability and porosity .table [ tab:1 ] shows the value parameters used in the numerical simulations .[ tab:1 ] first , we focus on the ifa effects using the parameters in table [ tab:1 ] .[ fig3 ] shows the evolution in time of water saturation .we notice that in the beginning the upper part has greater water saturation but over time the water saturation in the whole porous media is approaching to the entry saturation .[ fig4 ] shows the evolution in time of capillary pressure .the capillary pressure presents an expected behavior where it is a decreasing function of the water saturation .+ fig .[ fig5 ] shows the evolution in time of ifa .we notice that the ifa decreases over time , due to the increment of water saturation .+ in this work we have introduced the term in the bacteria transport equation in order to model the tendency of surfactants to live in the oil - water interface . to our knowledge, there are not experimental measurements of the coefficient .then , after simulations , we set the value .[ fig6 ] shows the bacterial concentrations in the reservoir after 3.5 hours of water injection for different scenarios .+ from fig .[ fig5 ] , we observe that the ifa is increasing from left to right and is greater in the lower part , then we expect to have greater bacterial concentration on the lower part when we consider the chemotaxis ; result that we can observe from fig .[ fig6 ] . in this work , we have also introduced a monod - type term in the surfactant production rate in order to model the surfactant production at the oil - water interface .[ fig7 ] shows the surfactant concentrations in the reservoir after 3.5 hours of water injection for the different scenarios .we observe less surfactant production when we included the monod - type term because the ifa is increasing from left to right , so when we do not include the surfactant production on the ifa , the surfactant production is overestimated .the main goal of meor it is to enhance the oil recovery using bacteria .[ fig8 ] shows the residual oil saturation profiles after 10 hours of injection for the different scenarios .+ when we just include chemotaxis , we observe the greatest recovery of residual oil saturation , due to the bacteria moves to the zones with greater ifa .when we just included the surfactant production on the ifa , we observe the lowest recovery of residual oil saturation , due to the surfactant production is now limited by the ifa .when we combine both effects , we obtain a greater recovery of residual oil saturation than in the case where we do not include the ifa effects . 
in order to have a measure of the improvements in the oil extraction ,we compute the oil recovery .[ fig9 ] shows the oil recovery as a function of the pore volume injected in the reservoir .we notice that 10 hours of water injection equals to 2.5 pore volumes .we observe that after injecting approximately 1 pore volumes of water , the surfactant starts to lower the interfacial tension and raise the oil production .when we include the production of surfactants on the ifa , we observe a delay effect in the oil recovery .this delay is due to the production of surfactants is also determined for the ifa that is increasing from left to right .then , the rate production of surfactants is less than in the case we do not include the surfactant production on the ifa .when we include the chemotaxis , we observe a faster effect of the surfactants , due to faster migration of bacteria in the reservoir .when we include both chemotaxis and production of surfactant on the ifa , the oil recovery is between these two profiles , being greater than the profile not including the ifa effects .the election of the parameters in table 1 determined all previous results .it is necessary to estimate all these parameters in the laboratory in order to corroborate the model assumptions .these numerical examples give a better understanding of the mechanisms involve in meor .a new , comprehensive model for meor , which includes two - phase flow , bacteria , nutrient and surfactant transport and considers the role of the oil - water ifa , chemotaxis and reduction of residual oil saturation due to the action of surfactants has been developed .the model particularly includes the oil - water ifa in order to reduce the hysteresis in the capillary pressure relationship , to include the effects of observed bacteria migration towards the oil - water interface and biological production of surfactants at the oil - water interface . to our knowledge, the present work is the first study concerning these effects in the context of meor . in particular , the first time to consider the oil - water ifa and chemotaxis for meor .the meor model consists on a system of nonlinear coupled pdes and odes , whose solution represents a challenge by itself . in order to have an efficient and stable scheme, we used an implicit stepping that considers a linear approximation of the capillary pressure gradient .the time discretization of the equations was obtained using be and the spatial discretization using fd and tpfa . in order to model that surfactants are produce at the oil - water interface , we considered the production rate of surfactants as a function of the nutrient concentration and ifa in the form of a monod - type function . to include the chemotaxis , we added the gradient of the ifa in the transport equation for the bacteria .we obtained different water flux profiles and oil recovery predictions when we considered the ifa in the model . in the numerical experiments, we observed an improvement in the oil recovery when we included the ifa effects . even though real reservoirs are more complex than the model presented , this work is useful for understanding the main phenomena involved in the recovery of petroleum .moreover , for further calibrating of the present meor model , it is necessary to perform more experiments in the laboratory . through our model , we hope to convince the community for the importance of including ifa and chemotaxis in simulation of meor and to inspire further experiments focusing on these relevant effects . 
finally , we propose further work inspired in this work .we solved the equations for the pressure , saturation and ifa iteratively , verifying the convergence rate numerically .nevertheless , it is necessary to do a theoretical analysis of the convergence of the scheme in order to determinate the maximum time step size for having convergence . in order to have a more complete model , we should extend it considering more phenomena , for example bioclogging , surfactant transportation in the oil phase and changes in the viscosities .it is necessary to investigate new relations for the production / destruction rate of ifa because currently there is just one model based on physical arguments but not experimental results .aarnes , j.e . , gimse , t. , lie , k .- a .: an introduction to the numerics of flow in porous media using matlab . in : hasle , g. , lie , k .- a ., quak , e. ( eds . ) geometric modelling , numerical simulation , and optimization : applied mathematics at sintef .. 265306 .springer , heidelberg ( 2007 ) armstrong , r.t . ,wildenschild , d. : microbial enhanced oil recovery in fractional - wet systems : a pore - scale investigation .porous media ( 2012 ) .doi : http://dx.doi.org/10.1007/s11242 - 011 - 9934 - 3[10.1007/s11242 - 011 - 9934 - 3 ] bollag , j.m .: interactions of soil components and microorganisms and their effects on soil remediation. revista de la ciencia del suelo y nutricin vegetal ( 2008 ) .doi : http://dx.doi.org/10.4067/s0718 - 27912008000400006[10.4067/s0718 - 27912008000400006 ] bringedal , c. , berre , i. , pop , i.s ., radu , f.a .: upscaling of non - isothermal reactive porous media flow with changing porosity .porous media ( 2016 ) .doi : http://dx.doi.org/10.1007/s11242 - 015 - 0530 - 9[10.1007/s11242 - 015 - 0530 - 9 ] centler , f. , thullner , m. : chemotactic preferences govern competition and pattern formation in simulated two - strain microbial communities .frontiers in microbiology ( 2015 ) .doi : http://dx.doi.org/10.3389/fmicb.2015.00040[10.3389/fmicb.2015.00040 ] chen , d. , pyrak - nolte , l.j . , griffin , j. , giordano , n.j .: measurement of interfacial area per volume for drainage and imbibition .water resour .doi : http://dx.doi.org/10.1029/2007wr006021[10.1029/2007wr006021 ] gharasoo , m. , centler , f. , fetzer , i. , thullner , m. : how the chemotactic characteristics of bacteria can determine their population patterns .soil biology and biochemistry ( 2014 ) .doi : http://dx.doi.org/10.1016/j.soilbio.2013.11.019[10.1016/j.soilbio.2013.11.019 ] hassanizadeh , s.m ., gray , w.g . :mechanics and thermodynamics of multiphase flow in porous media including interphase boundaries .water resour .doi : http://dx.doi.org/10.1016/0309 - 1708(90)90040-b[10.1016/0309 - 1708(90)90040-b ] hassanizadeh , s.m ., gray , w.g . : toward an improved description of the physics of two - phase flowwater resour .doi : http://dx.doi.org/10.1016/0309 - 1708(93)90029-f[10.1016/0309 - 1708(93)90029-f ] hommel , j. , lauchnor , e. , phillips , a. , gerlach , r. , cunningham , a.b ., helmig , r. , ebigbo , a. , class , h. : a revised model for microbially induced calcite precipitation : improvements and new insights based on recent experiments .water resour .doi : http://dx.doi.org/10.1002/2014wr016503[10.1002/2014wr016503 ] joekar - niasar , v. 
, hassanizadeh , s.m .: uniqueness of specific interfacial area - capillary pressure - saturation relationship under non - equilibrium conditions in two - phase porous media flow .porous media ( 2012 ) .doi : http://dx.doi.org/10.1007/s11242 - 012 - 9958 - 3[10.1007/s11242 - 012 - 9958 - 3 ] kovrov - kovar , k. , egli , t. : growth kinetics of suspended microbial cells : from single - substrate - controlled growth to mixed - substrate kinetics .microbiology and molecular biology reviews * 62*(3 ) , 646666 ( 1998 ) kumar , k. , pop , i.s . , radu , f.a . : convergence analysis of mixed numerical schemes for reactive flow in a porous medium .siam j. num .doi : http://dx.doi.org/10.1137/120880938[10.1137/120880938 ] kumar , k. , pop , i.s ., radu , f.a . : convergence analysis for a conformal discretization of a model for precipitation and dissolution in porous media " , numerische mathematik ( 2014 ) .doi : http://dx.doi.org/0.1007/s00211 - 013 - 0601 - 1[0.1007/s00211 - 013 - 0601 - 1 ] lacerda , e. , da silva , c.m . ,priimenko , v.i . ,pires , a.p . :microbial eor : a quantitative prediction of recovery factor .society of petroleum engineers ( 2012 ) .doi : http://dx.doi.org/10.2118/153866-ms[10.2118/153866-ms ] li , j. , liu , j. , trefry , m.g . , park , j. , liu , k. , haq , b. , johnston , c.d . ,volk , h. : interactions of microbial - enhanced oil recovery processes . transp . porous media ( 2011 ) .doi : http://dx.doi.org/10.1007/s11242 - 010 - 9669 - 6[10.1007/s11242 - 010 - 9669 - 6 ] li , y. , abriola , l.m . ,phelan , t.j ., ramsburg , c.a . ,pennell , k.d . : experimental and numerical validation of the total trapping number for prediction of dnapl mobilization .doi : http://dx.doi.org/10.1021/es070834i[10.1021/es070834i ] musuuza , j.l ., attinger , s. , radu , f.a .: an extended stability criterion for density - driven flows in homogeneous porous media . adv .water resour .doi : http://dx.doi.org/10.1016/j.advwatres.2009.01.012[10.1016/j.advwatres.2009.01.012 ] nick , h.m ., raoof , a. , centler , f. , thullner , m. , regnier , p. : reactive dispersive contaminant transport in coastal aquifers : numerical simulation of a reactive henry problem . journal of contaminant hydrology ( 2013 ) .doi : http://dx.doi.org/10.1016/j.jconhyd.2012.12.005[10.1016/j.jconhyd.2012.12.005 ] nielsen , s.m ., nesterov , i. , shapiro , a.a . : microbial enhanced oil recovery a modeling study of the potential of spore - forming bacteria " , comput .doi : http://dx.doi.org/10.1007/s10596 - 015 - 9526 - 3[10.1007/s10596 - 015 - 9526 - 3 ] niessner , j. , hassanizadeh , s.m .: a model for two - phase flow in porous media including fluid - fluid interfacial area .water resour .doi : http://dx.doi.org/10.1029/2007wr006721[10.1029/2007wr006721 ] patel , i. , borgohain , s.,kumar , m. , rangarajan , v. , somasundaran , p. , sen , r. : recent developments in microbial enhanced oil recovery . renewable and sustainable energy reviews ( 2015 ) .doi : http://dx.doi.org/10.1016/j.rser.2015.07.135[10.1016/j.rser.2015.07.135 ] pennell , k.d ., pope , g.a ., abriola , l.m . : influence of viscous and buoyancy forces on the mobilization of residual tetrachloroethylene during surfactant flushing .doi : http://dx.doi.org/10.1021/es9505311[10.1021/es9505311 ] pop , i.s ., radu , f. , knabner , p. : mixed finite elements for the richardsequation : linearization procedure . j. comput . and appldoi : http://dx.doi.org/10.1016/j.cam.2003.04.008[10.1016/j.cam.2003.04.008 ] pop , i.s ., van duijn , c.j . 
, hassanizadeh , s.m . :horizontal redistribution of fluids in a porous medium : the role of interfacial area in modeling hysteresis . adv .water resour .doi : http://dx.doi.org/10.1016/j.advwatres.2008.12.006[10.1016/j.advwatres.2008.12.006 ] porter , m.l ., wildenschild , d. , grant , g. , gerhard , j.i . : measurement and prediction of the relationship between capillary pressure , saturation , and interfacial area in a napl - water - glass bead system .water resour .doi : http://dx.doi.org/10.1029/2009wr007786[10.1029/2009wr007786 ] radu , f.a ., pop , i.s . ,attinger , s. : analysis of an euler implicit - mixed finite element scheme for reactive solute transport in porous media .numerical methods for partial differential equations ( 2010 ) .doi : http://dx.doi.org/10.1002/num.20436[10.1002/num.20436 ] radu , f.a ., nordbotten , j.m . , pop , i.s . , kumar , k. : a robust linearization scheme for finite volume based discretizations for simulation of two - phase flow in porous media . j. comput . and appl . math .doi : http://dx.doi.org/10.1016/j.cam.2015.02.051[10.1016/j.cam.2015.02.051 ] viramontes - ramos , s. , portillo - ruiz , m.c ., ballinas - casarrubias , m.l . , torres - muoz , j.v . , rivera - chavira , b.e ., nevrez - moorilln , g.v . : selection of biosurfactan / bioemulsifier - producing bacteria from hydrocarbon - contaminated soil .brazilian journal of microbiology ( 2010 ) .doi : http://dx.doi.org/10.1590/s1517 - 83822010000300017[10.1590/s1517 - 83822010000300017 ] van wijngaarden , w.k . , vermolen , f.j . ,van meurs , g.a.m . ,vuik , c. : modelling biogrout : a new ground improvement method based on microbial - induced carbonate precipitation .porous media ( 2011 ) .doi : http://dx.doi.org/10.1007/s11242 - 010 - 9691 - 8[10.1007/s11242 - 010 - 9691 - 8 ] wu , z. , yue , x. , cheng , t. , yu , j. , yang , h. : effect of viscosity and interfacial tension of surfactant - polymer flooding on oil recovery in high - temperature and high - salinity reservoirs .journal of petroleum exploration and production technology ( 2014 ) .doi : http://dx.doi.org/10.1007/s13202 - 013 - 0078 - 6[10.1007/s13202 - 013 - 0078 - 6 ] yuan , c.d ., pu , w.f . , wang , x.c . , sun , l. , zhang , y.c . , cheng , s. : effects of interfacial tension , emulsification , and surfactant concentration on oil recovery in surfactant flooding process for high temperature and high salinity reservoirs . energy fuels ( 2015 ) .doi : http://dx.doi.org/10.1021/acs.energyfuels.5b01393[10.1021/acs.energyfuels.5b01393 ]
the focus of this paper is the derivation of a non - standard model for microbial enhanced oil recovery ( meor ) that includes the interfacial area ( ifa ) between the oil and water . we consider the continuity equations for water and oil , a balance equation for the oil - water interface and advective - dispersive transport equations for bacteria , nutrients and surfactants . surfactants lower the interfacial tension ( ift ) , which improves the oil recovery . therefore , we include in the model parameterizations of the ift reduction and of the residual oil saturation as functions of the surfactant concentration . we consider , for the first time in the context of meor , the role of the ifa in enhanced oil recovery ( eor ) . the motivation for including the ifa in the model is to reduce the hysteresis in the capillary pressure relationship , and to include the effects of the observed bacteria migration towards the oil - water interface and of the biological production of surfactants at the oil - water interface . an efficient and robust linearization scheme was implemented , based on an implicit scheme that considers a linear approximation of the capillary pressure gradient , resulting in an efficient and stable method . a comprehensive , 2d implementation based on the two - point flux approximation ( tpfa ) has been achieved . illustrative numerical simulations are presented . we explain the differences in the oil recovery profiles obtained when the ifa and meor effects are taken into account . the model can also be used to design new experiments in order to gain a better understanding and optimization of meor .

* keywords * bacteria , interfacial area , interfacial tension , microbial enhanced oil recovery , surfactant
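as a purely illustrative aside , the two - point flux approximation mentioned above computes the flux between two neighbouring cells from a transmissibility built on the harmonic average of the cell permeabilities . the following python sketch shows this idea with made - up values ; the function names and parameters are hypothetical and do not reproduce the actual implementation used for the simulations .

def harmonic_mean(k1, k2):
    # harmonic average of the two cell permeabilities at their shared face
    return 2.0 * k1 * k2 / (k1 + k2)

def tpfa_flux(p1, p2, k1, k2, mobility, dx, area):
    # two-point flux approximation: flux from cell 1 to cell 2 through one face
    trans = area * harmonic_mean(k1, k2) / dx      # geometric transmissibility
    return mobility * trans * (p1 - p2)            # phase mobility assumed given

# tiny example with made-up values (pressures in pa, permeabilities in m^2)
print(tpfa_flux(p1=2.0e5, p2=1.8e5, k1=1.0e-13, k2=2.0e-13,
                mobility=5.0e2, dx=1.0, area=1.0))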
graphs are used to describe a wide range of situations in a precise yet intuitive way . different kinds of graphs are used in modelling techniques depending on the investigated fields , which include computer science , chemistry , biology , quantum computing , etc . when system states are represented by graphs , it is natural to use rules that transform graphs to describe the system evolution . there are two main streams in the research on graph transformations : ( i ) the algorithmic approaches , which describe explicitly , with a concrete algorithm , the result of applying a rule to a graph ( see e.g. ) , and ( ii ) the algebraic approaches , which define abstractly a graph transformation step using basic constructs borrowed from category theory . in this paper we will consider the latter . the basic idea of all approaches is the same : states are represented by graphs and state changes are represented by rules that modify graphs . the differences lie in the kind of graphs that may be used , and in the definitions of when and how rules may be applied . one critical point when defining graph transformation is that one can not delete or copy part of a graph without considering the effect of the operation on the rest of the graph , because deleted / copied items may be linked to others . for example , rule in figure [ fig_rulesandgraphs](a ) specifies that a node shall be deleted and rule that a node shall be duplicated ( labels the copy ) . what should be the result of applying these rules to the grey node of graph in figure [ fig_rulesandgraphs](b ) ? different approaches give different answers to this question . the most popular algebraic approaches are the double - pushout ( dpo ) and the single - pushout ( spo ) , which can be illustrated as follows :

\[ \begin{array}{ccc} \xymatrix{ \ar@{}[rd]|{po} l \ar[d]_{m} & \ar@{}[rd]|{po} k \ar[l]_{l} \ar[d]^{d} \ar[r]^{r} & r \ar[d]^{m'} \\ g & d \ar[l]^{l'} \ar[r]_{r'} & h } & \hspace{5mm} & \xymatrix{ \ar@{}[rd]|{po} l \ar[d]_{m} \ar[r]^{\psi} & r \ar[d]^{m'} \\ g \ar[r]_{\psi'} & h } \\ \mbox{double pushout rewrite step} & & \mbox{single pushout rewrite step} \end{array} \]

in the dpo approach , a rule is defined as a span and a match is a morphism . a graph rewrites into a graph using rule and match if the diagram above to the left can be constructed , where both squares are pushouts . conditions for the existence and uniqueness of graph need to be studied explicitly , since it is not a universal construction . with dpo rules it is easy to specify the addition , deletion , merging or cloning of items , but their applicability is limited . for example , rule of figure [ fig_rulesandgraphs ] is not applicable to the grey node of ( as it would leave dangling edges ) , and a rule like is usually forbidden as the _ pushout complement _ would not be unique . in the spo approach , a rule is a _ partial _ graph morphism and a match is a total morphism . a graph rewrites into a graph using rule and match if a square like the one above to the right can be constructed , which is a pushout in the category of graphs and partial morphisms .
deleting , adding and merging items can easily be specified with spo rules , and the approach is appropriate for specifying the deletion of nodes in an unknown context , thanks to partial morphisms . the deletion of a node causes the deletion of all edges connected to it , and thus applying rule to would result in graph in figure [ fig_rulesandgraphs](b ) . however , since a rule is defined as a single graph morphism , copying of items ( as in rule ) can not be specified directly in spo . a more recent algebraic approach is the sesqui - pushout approach ( sqpo ) . rules are spans like in the dpo , but in the left square of a rewriting step , graph is built as a _ final pullback complement _ . this characterises with a universal property , making it possible to apply rule , obtaining the same result as in the spo approach ( ) , as well as rule , obtaining as result . also has a side effect : when a node is copied , all the edges of the original node are copied as well . rules do not specify explicitly which context edges are deleted / copied ; this is determined by the categorical constructions that define rule application . in general , in all algebraic approaches , the items that are preserved by a rule will retain the connections they have with items which are not in the image of the match . this holds also for items that are copied in the sqpo approach . however , there are situations in which the designer should be able to specify which of the edges connecting the original node should be copied when a node is copied , depending for example on the direction of the edges ( incoming or outgoing ) , or on their labels , if any . for example , if the graphs of figure [ fig_rulesandgraphs ] represent web pages ( nodes ) and hyperlinks among them ( edges ) , it would be reasonable to expect that the result of copying the grey page of with rule would be graph rather than , so that new hyperlinks are created only in the new page , and not in the pages pointing to the original one . as another example , the fork and clone system commands in linux both generate a clone of a process , but with different semantics . the two commands differ precisely in the way the environment of the cloned process is dealt with : see for more details . these examples motivate the rewriting approach that we introduce in this paper . in order to give the designer the possibility of controlling how the nodes that are preserved or cloned by a rule are embedded in the context graph , we propose a new algebraic approach to graph transformation where rules are triples of arrows with the same source . arrows and are the usual left- and right - hand sides , while is a mono called the _ embedding _ : it will play a role in controlling which edges from the context are copied . the resulting rewriting approach , called agree ( for algebraic graph rewriting with controlled embedding ) , is presented in sect . [ sec : pbcpo ] . as usual for the algebraic approaches , agree rewriting will be introduced abstractly for a category satisfying suitable requirements , which will be introduced in sect . [ sec : preliminaries ] . for the knowledgeable reader we anticipate that we will require the existence of _ partial map classifiers _ .
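to make the contrast concrete before moving to the formal development , the following python fragment clones a node of a small directed graph in two ways : copying all incident edges ( the sqpo - like behaviour recalled above ) and copying only the outgoing edges ( the behaviour one would expect in the web - page example ) . the representation of graphs as node sets and edge lists and the function names are ours and purely illustrative ; they are not part of the agree formalism .

nodes = {"p1", "p2", "p3"}
edges = [("p1", "p2"), ("p2", "p3"), ("p3", "p2")]   # p2 is the page to be cloned

def clone_all_edges(nodes, edges, v, copy):
    # clone v together with every incident edge (incoming and outgoing),
    # as in sesqui-pushout style cloning
    new_edges = [(copy if s == v else s, copy if t == v else t)
                 for (s, t) in edges if v in (s, t)]
    return nodes | {copy}, edges + new_edges

def clone_outgoing_edges(nodes, edges, v, copy):
    # clone v but duplicate only its outgoing edges: pages pointing to v
    # keep pointing to the original page only
    new_edges = [(copy, t) for (s, t) in edges if s == v]
    return nodes | {copy}, edges + new_edges

print(clone_all_edges(nodes, edges, "p2", "p2*")[1])
print(clone_outgoing_edges(nodes, edges, "p2", "p2*")[1])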
after discussing an example of social networks in sect .[ sec : examples ] , in sect .[ sec : sqpo ] we show that agree rewriting can simulate both sqpo rewriting ( restricted to mono matches ) and _ rewriting with polarised cloning _ .finally some related and future works are briefly discussed in sect .[ sec : discussion ] .two appendices collect the proofs of the main results , that were omitted in the published version of the present paper .we start recalling some definitions and a few properties concerning pullbacks , partial maps and partial map classifiers : a survey on them can be found in .let be a category with all pullbacks .we recall the following properties : * monos are stable under pullbacks , i.e. if is the pullback of and is mono , then is mono as well .* the _ composition _ property of pullbacks : in a commutative diagram as below on the left , if squares ( a ) and ( b ) are pullbacks , so is the composed square ; + |{pb~(a ) } \bullet \ar[d ] \ar[r ] \ar@/^3ex/[rr]_{= } & \ar@{}[rd]|{pb~(b ) } \bullet \ar[d ] \ar[r ] & \bullet \ar[d ] \\\bullet \ar[r ] \ar@/_3ex/[rr]^{= } & \bullet \ar[r ] & \bullet \\ } \qquad\qquad \xymatrix=3pc=1pc { \ar@{}[rd]|{pb~(c ) } \bullet \ar[d ] \ar@/^3ex/[rr]_{= } \ar@{-->}[r ] & \bullet \ar[d ] \ar[r ] \ar@{}[rd]|{pb~(d ) } & \bullet \ar[d ]\\ \bullet \ar[r ] & \bullet \ar[r ] & \bullet \\ } \ ] ] * and the _ decomposition _ property : in a commutative diagram as the one made of solid arrows above on the right , if square ( d ) and the outer square are pullbacks , then there is a unique arrow ( the dotted one ) such that the top triangle commutes and square ( c ) is a pullback .a _ stable system of monos _ of is a family of monos including all isomorphisms , closed under composition , and ( _ stability _ ) such that if is a pullback of and , then .-partial map _ over , denoted , is a span made of a mono in and an arrow in , up to the equivalence relation whenever there is an isomorphism with and .category has an _-partial map classifier _ if is a functor and is a natural transformation , such that for each object of , the following holds : for each -partial map there is a unique arrow such that square ( [ pb : pmc ] ) is a pullback . in this caseit can be shown ( see ) that for each object , that preserves pullbacks , and that the natural transformation is _ cartesian _ , which means that for each the naturality square ( [ pb : eta ] ) is a pullback . 
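since pullbacks are the basic building block of all the constructions used below , it may help to recall their concrete form in the category of sets , where the pullback of two functions f : x -> z and g : y -> z consists of the pairs on which f and g agree , together with the two projections . the following python fragment is only a set - theoretic illustration of this and is not part of the categorical development of the paper .

def pullback(x, y, f, g):
    # pullback of f : x -> z and g : y -> z in the category of sets:
    # all pairs (a, b) on which f and g agree, with the two projections
    p = {(a, b) for a in x for b in y if f(a) == g(b)}
    proj_x = {pair: pair[0] for pair in p}
    proj_y = {pair: pair[1] for pair in p}
    return p, proj_x, proj_y

x = {1, 2, 3}
y = {"a", "bb", "ccc"}
f = lambda n: n % 2          # parity of a number
g = lambda s: len(s) % 2     # parity of the length of a string
print(sorted(pullback(x, y, f, g)[0]))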
for each mono in will use the notation , thus is defined by the pullback square ( [ pb : olm ] ) .|{pb } x \ar@ { > ->}[d]|{{\phantom{\big(}m{\phantom{\big ) } } } } \ar[r]|{{\phantom{\big(}f{\phantom{\big ) } } } } & y \ar@ { > ->}[d]|{{\phantom{\big(}\eta_y{\phantom{\big ) } } } } \\ z \ar[r]|{{\phantom{\big(}\varphi(m , f){\phantom{\big ) } } } } & t(y ) } \ ] ] |{pb } x \ar@ { > ->}[d]|{{\phantom{\big(}\eta_x{\phantom{\big ) } } } } \ar[r]|{{\phantom{\big(}f{\phantom{\big ) } } } } & y \ar@ { > ->}[d]|{{\phantom{\big(}\eta_y{\phantom{\big ) } } } } \\t(x ) \ar[r]|{{\phantom{\big(}t(f){\phantom{\big ) } } } } & t(y ) \\ } \ ] ] |{pb } x \ar@ { > ->}[d]|{{\phantom{\big(}m{\phantom{\big ) } } } } \ar[r]|{{\phantom{\big(}{\mathit{id}}_x{\phantom{\big ) } } } } & x \ar@ { > ->}[d]|{{\phantom{\big(}\eta_x{\phantom{\big ) } } } } \\ z \ar[r]|{{\phantom{\big(}{\overline{m}}{\phantom{\big ) } } } } & t(x ) \\ } \ ] ] before discussing some examples of categories that have -partial map classifiers , let us recall the definition of some categories of graphs .[ de : graphs ] the category of _ graphs _ is defined as follows .a _ graph _ is made of a set of _ nodes _ , a set of _ edges _ and two functions , called _ source _ and _ target _ , respectively .as usual , we write when , and .morphism _ of graphs is made of two functions and , such that in for each edge in .given a fixed graph , called _ type graph _ , the category of _ graphs typed over _ is the slice category .[ def : pol - cat ] a _ polarized graph _ is a graph with a pair of subsets of the set of nodes such that for each edge one has and .morphism _ of polarized graphs , where and , is a morphism of graphs such that and .this defines the _ category _ of polarized graphs .a morphism of polarized graphs is _ strict _ , or _strictly preserves the polarization _, if and . informally , if is a partial map , a total arrow representing it should agree with on the `` items '' of on which it is defined , and should map any item of on which is not defined in a unique possible way to some item of which does not belong to ( the image via of ) .for example , in the partial map classifier is defined as and for functor , while the natural transformation is made of the inclusions . for each partial function , function extends by mapping to when and to when is not in the image of .( b ) in in the partial map classifier is such that embeds into the graph made of the disjoint union of with a node and with an edge for each pair of vertices in .the total morphism is defined on the set of nodes exactly as in , and on each edge similarly , but consistently with the way its source and target nodes are mapped .figure [ fig_partialmapclassexamples](a ) shows an example of a partial map and the corresponding extension to the total morphism . in the graphical notationwe use edges with double tips to denote two edges , one in each direction ; arrows and node marked with are added to by the construction . and are instances of the general result that all elementary toposes have -partial map classifier , for the family of all monos .these include , among others , all _ presheaf categories _( i.e. , functor categories like , where is a small category ) , and the slice categories like where is a topos and an object of . in fact is the presheaf category where has two objects , and two non - identity arrows . 
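the set - based partial map classifier described above admits a very concrete reading : t(y) adds a single fresh ` undefined ' element to y , and the partial map represented by a mono m : x -> z and an arrow f : x -> y extends to a total function z -> t(y) that sends everything outside the image of m to that fresh element . the python sketch below illustrates this reading with hypothetical names ; it is not meant as a faithful implementation of the construction for graphs or other presheaf categories .

UNDEF = "undefined"   # the single fresh element added by the functor t

def t(y):
    # partial map classifier object in set: y together with one extra element
    return set(y) | {UNDEF}

def classify(m, f, z):
    # extend the partial map represented by the mono m : x -> z and the
    # arrow f : x -> y to the total function phi(m, f) : z -> t(y),
    # sending every element outside the image of m to the extra element
    defined = {m[a]: f[a] for a in m}
    return {c: defined.get(c, UNDEF) for c in z}

x = {0, 2}
m = {a: a for a in x}        # inclusion of x into z = {0, 1, 2}
f = {0: "p", 2: "q"}
z = {0, 1, 2}
print(t({"p", "q"}))
print(classify(m, f, z))     # 1, not in the domain, is sent to "undefined"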
as a consequence , also the category of typed graphs has partial map classifiers for all monos . figure [ fig_partialmapclassexamples](b ) shows an example : the partial map classifier of a graph typed over is obtained by adding to all the nodes of and , for each pair of nodes of the resulting graph , one instance of each edge that is compatible with the type graph . the category of _ polarized graphs _ of def . [ def : pol - cat ] ( that will be used later in sect . [ subsec : pbcpovspolclo ] ) is an example of a category which has -partial map classifiers for a family which is a proper subset of all monos . it is easy to check that strict monos form a stable system of monos ( denoted ) for category , and that has an -partial map classifier . morphism embeds a polarised graph into , which is the disjoint union of with a node ( having polarity ) and with an edge for each pair of nodes . the total morphism is defined exactly as in the category of graphs .

in this section we introduce the agree approach to rewriting , defining rules , matches and rewrite steps . the main difference with respect to the dpo and sqpo approaches is that a rule has an additional component , called the _ embedding _ , that enriches the interface and can be used to control the embedding of preserved items . we assume that is a category with all pullbacks , with a stable system of monos , with an -partial map classifier , and with pushouts along monos in .

[ def : pbcpo ] a _ rule _ is a triple of arrows with the same source , with in . arrows and are the _ left- _ and _ right - hand side _ , respectively , and is called the _ embedding _ .

\[ \xymatrix{ l & k \ar[l]_{l} \ar[r]^{r} \ar@{>->}[d]^{t} & r \\ & t_k & } \]

a _ match _ of a rule with left - hand - side is a mono in .

\[ \xymatrix{ \ar@{}[dr]|{pb~(remark)} l \ar@{>->}[d]^{m} \ar@{>->}@/_3ex/[dd]_(.6){\eta_l} & \ar@{}[dr]|{po~(b)} k \ar[l]_{l} \ar[r]^{r} \ar@{>->}[d]_{n} \ar@{>->}@/^3ex/[dd]^(.6){t} & r \ar[d]^{p} \\ \ar@{}[dr]|{pb~(a)} g \ar[d]^{\overline{m}} & d \ar[l]_{g} \ar[r]^{h} \ar[d]_{n'} & h \\ t(l) & t_k \ar[l]^{l' = \varphi(t , l)} & } \]

[ def : pbcpo - rewriting ] given a rule and a match , an agree _ rewrite step _ is constructed in two phases as follows ( see diagram ( [ eq : agree - rew ] ) ) :

( a ) let and , then is the pullback of .

( remark ) in diagram ( [ eq : agree - rew ] ) is a pullback of and is a pullback of because , thus by the decomposition property there is a unique such that , and is a pullback of . therefore is a mono in by stability .

( b ) let be as in the previous remark . then is the pushout of .

using the agree approach , the web page copy operation can be modelled using the rule shown in figure [ fig_webpagecopy ] . this rule is typed over the type graph . nodes denote web pages , solid edges denote links and dashed edges describe the subpage relation . the different node colours ( gray and black ) are used just to define the match , whereas the * inside some nodes is used to indicate that the node is a copy . when this rule is applied to graph , only out - links are copied , because the pages that link to the copied one remain the same , that is , they only have a link to the original page , not to its copy . the subpage structure is not copied . note that all black nodes of and are mapped to -nodes of and , respectively . in the general case just presented , the embedding could have a non - local effect on the rewritten object .
in the following example , based on category *set * , the rule simply preserves a single element and is the identity .if applied to set , its effect is to delete all the elements not matched by , as shown .we say that this rewrite step is _ non - local _ , because it modifies the complement of the image of in . in the rest of this section we present a condition on rules that ensures the locality of the rewrite steps .in order to formulate this condition in the general setting of a category with -partial map classifiers , we need to consider a generalisation of the notion of complement of a subset in a set , that we call _ strict complement_. for instance , in category , the strict complement of a subgraph in a graph is the largest subgraph of disjoint from ; thus , the union of and is in general smaller than . intuitively , we will say that an agree rewrite step as in diagram ( [ eq : agree - rew ] ) is _ local _ if the strict complement of in is preserved , i.e. , if restricts to an isomorphism between and . for the definitions and results that follow , we assume that category , besides satisfying the conditions listed at the beginning of this section , has a final object and a _ strict _ initial object ( i.e. , each arrow with target must have as source ) ; furthermore , the unique arrow from to , that we denote , belongs to . for each object of will denote by the unique arrow to the final object , and by the unique arrow from the initial object . for each mono in _ characteristic arrow _ of is defined as , ( see pullback ( a ) in diagram ( [ pb : sub - class - new ] ) ) .object is called the _-subobject classifier_. |{pb~(c ) } k \ar@ { > ->}[d]|{{\phantom{\big(}n{\phantom{\big ) } } } } \ar[r]|{{\phantom{\big(}l{\phantom{\big ) } } } } & \ar@{}[rd]|{pb~(a ) } l \ar@ { > ->}[d]|{{\phantom{\big(}m{\phantom{\big ) } } } } \ar[r]|{{\phantom{\big(}{1}_l{\phantom{\big ) } } } } & { 1}\ar@ { > ->}[d]^{\eta_{{1}}}_{{\mathit{true } } } \\\ar@{}[rd]|{pb~(d ) } d \ar[r]|{{\phantom{\big(}g{\phantom{\big ) } } } } & \ar@{}[rd]|{pb~(b ) } g \ar[r]|{{\phantom{\big(}\chi_m = \varphi(m,{1}_l){\phantom{\big ) } } } } & t({1 } ) \\d\setminus k \ar@{->}[u]|{{\phantom{\big(}d\setminus n{\phantom{\big ) } } } } \ar[r]|(.5){{\phantom{\big(}g\setminus l{\phantom{\big ) } } } } & g\setminus l \ar@{->}[u]|{{\phantom{\big(}g\setminus m{\phantom{\big ) } } } } \ar[r]|(.6){{\phantom{\big(}{1}_{g\setminus l}{\phantom{\big ) } } } } & { 1}\ar@{->}[u]^{{{\mathit{false}}}}_{t({!})\circ { { \overline}{\,!\ , } } } } \ ] ] by exploiting the assumption that and that is strict initial , it can be shown that is isomorphic to , with , and this yields an arrow . in category ( with the family of all injective functions ) arrows and are the coproduct injections of the subobject classifier ( which is a two element set ) , and are also known as and , respectively . 
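the graph - theoretic intuition mentioned above , namely that the strict complement of a subgraph l in a graph g is the largest subgraph of g disjoint from l , can be phrased concretely : keep the nodes of g outside l and only the edges whose endpoints both survive . the following python sketch , with an ad - hoc representation of graphs chosen by us , only illustrates this intuition and not the general categorical definition given next .

def strict_complement(g_nodes, g_edges, l_nodes):
    # largest subgraph of g disjoint from the subgraph induced by l_nodes:
    # drop the nodes of l and every edge touching one of them
    nodes = g_nodes - l_nodes
    edges = [(s, t) for (s, t) in g_edges if s in nodes and t in nodes]
    return nodes, edges

g_nodes = {"a", "b", "c", "d"}
g_edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
print(strict_complement(g_nodes, g_edges, {"a"}))
# the union of the subgraph on {"a"} and its strict complement misses the
# edges ("a", "b") and ("d", "a"), so it is in general smaller than g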
in the complement of an injective function can be defined as the pullback of along . we generalise this to the present setting as follows . [ def : complement ] let be a category that satisfies the conditions listed at the beginning of section [ sec : pbcpo ] , has final object , strict initial object , and such that . let be a mono in , and be its characteristic arrow defined by pullback ( a ) of diagram ( [ pb : sub - class - new ] ) . then the _ strict complement of in ( with respect to ) _ is the arrow obtained as the pullback of and , as in square ( b ) of diagram ( [ pb : sub - class - new ] ) . furthermore , for each pair of monos and in and for each pair of arrows and such that square ( c ) of diagram ( [ pb : sub - class - new ] ) is a pullback , arrow as in square ( d ) is called the _ strict complement _ of in ( with respect to and ) . it is easy to check that arrow exists and is uniquely determined by the fact that square ( b ) is a pullback ; furthermore , square ( d ) is a pullback as well , by decomposition . we will now exploit the notion of strict complement to formalize the locality of agree rewriting . [ def : embspec ] an agree rule is _ local _ if is such that is an iso . an agree rewrite step as in diagram ( [ eq : agree - rew ] ) is _ local _ if arrow is an iso . the definition of local rewrite steps is as expected , but that of local rules deserves some comments . essentially , in the first phase of agree rewriting , when building the pullback ( a ) of diagram ( [ eq : agree - rew ] ) , the shape of determines the effect of the rule on the strict complement of in , which is mapped by to . it can be proved that is isomorphic to , therefore if the rule is local we have that is isomorphic to , and this guarantees that the strict complement of in is preserved in the rewrite step . these considerations provide an outline of the proof of the main result of this section , which is reported in appendix [ sec : appproofs ] . [ prop : locality ] let be a local rule . then , with the notations as in diagram ( [ eq : agree - rew ] ) , for each match the resulting rewrite step is local .

huge network data sets , like social networks ( describing personal relationships and cultural preferences ) or communication networks ( the graph of phone calls or email correspondents ) , are becoming more and more common . these data sets are analyzed in many ways , ranging from the study of disease transmission to targeted advertising . selling network data sets to third parties is a significant part of the business model of major internet companies . usually , in order to preserve the confidentiality of the sold data set , only `` anonymized '' data is released . the structure of the network is preserved , but personal identification information is erased and replaced by random numbers . this anonymized network may then be subject to further processing to make sure that it is not possible to identify the nodes of the network ( see for a discussion about re - identification issues ) . we are going to show how agree rewriting can be used for such an anonymization procedure . of course , due to space limitations we can not deal with a complete example and will focus on the first task of the anonymization process : the creation of a clone of the social network in which only non - sensitive links are copied .
we model the following idealized scenario : the administrator of a social network sells anonymized data sets to third parties so that they can be analyzed without compromising confidentiality . our graphs are made of four kinds of nodes : customer ( grey nodes ) , administrator of the social network ( white node ) , user of the social network ( black nodes ) and square nodes that model the fact that data will undergo post - processing . links of the social network can be either public ( black solid ) or private ( dashed ; the latter denote sensitive information that should not be disclosed ) ; moreover , we use another type of edges ( grey ) , denoting the fact that a node `` knows '' , or has access to , another node . the corresponding type graph is shown in figure [ fig_graphs ] .

[ figure [ fig_graphs ] : the type graph , and the graphs and . ]

the rule depicted in figure [ fig_anonymizerule ] shows an example that anonymizes a portion of a social network with nodes ( typically , portions of a fixed size are sold ) . graph consists of a clique of all copies of the matched black nodes ( denoted by * ) with public links , and a graph representing the construction applied to the rest of . to enhance readability , we just indicated that the graph inside the dotted square should be completed according to : a copy of the nodes of the type graph should be added , together with all possible edges that are compatible with the type graph . this allows the cloning of the subgraph defined by the match , limited to public edges . in the right - hand side a new square node is added , marking the cloned nodes for post - processing . the application of this rule to graph in figure [ fig_graphs ] with a match not including the top black node produces graph .

as recalled in the introduction , in the sqpo approach a rule is a span and a rewriting step for a match is made of a first phase where the _ final pullback complement _ is constructed , and next a pushout with the right - hand side is performed . [ def : fpbc ] in diagram ( [ eq : fpbc ] ) , is a _ final pullback complement _ of if

1 . the resulting square is a pullback , and

2 . for each pullback and arrow such that , there is a unique arrow such that and .

\[ \xymatrix{ l \ar[dd]_{m} & & k \ar[ll]_{l} \ar[dd]^{n} & & k' \ar[ll]_{h} \ar[dd]^{e} \ar@/_3ex/[llll]_{d} \\ & & & & \\ g & & d \ar[ll]^{a} & & d' \ar@{-->}[ll]_{g} \ar@/^3ex/[llll]^{f} } \]
, , then the first phase of the agree rewriting algorithm of definition [ def : pbcpo - rewriting ] actually builds the final pullback complement of the left - hand side of the rule and of the match .this will allow us to relate the agree approach with others based on the construction of final pullback complements .[ theorem : sqpo ] let be a category with pullbacks , with a stable system of monos and with an -partial map classifier .let be an arrow in and be a mono in .consider the naturality square built over on the left of figure [ fig : fpbcaspb ] , which is a pullback because is cartesian , and let be the pullback of .then is a final pullback complement of , where is the only arrow making the right triangle commute and the top square a pullback .|{{\phantom{\big(}m{\phantom{\big ) } } } } \ar@ { > ->}[dddd]|{{\phantom{\big(}\eta_l{\phantom{\big ) } } } } & & k \ar[ll]|{{\phantom{\big(}l{\phantom{\big ) } } } } \ar@ { > ->}[dddd]|(.35){{\phantom{\big(}\eta_k{\phantom{\big)}}}}|(.5){\hole } \ar@ { > ->}[ddr]|(.35){{\phantom{\big(}n{\phantom{\big ) } } } } \\ & & & \\ & g \ar[ddl]|{{\phantom{\big(}\overline{m}{\phantom{\big ) } } } } & & d \ar[ddl]|(.35){\phantom{\big(}n'\phantom{\big ( } } \ar[ll]|(.3){{\phantom{\big(}a{\phantom{\big)}}}}\\ & \\ t(l ) & & t(k ) \ar[ll]|{{\phantom{\big(}t(l){\phantom{\big ) } } } } } ] by the decomposition property we have that is a pullback complement of , and by stability .we have to show that the pullback complement is final , i.e. that given a pullback and an arrow such that , as shown on the right of figure [ fig : fpbcaspb ] , there is a unique arrow such that and .we present here the _ existence _ part , while the proof of _ uniqueness _ is in appendix [ sec : appproofs ] .note that is in by stability . by the properties of the -partial map classifier , there is a unique arrow such that and the square is a pullback. 
we will show below that , hence by the universal property of the pullback there is a unique arrow such that and .it remains to show that : by exploiting again pullback , it is sufficient to show that ( i ) and ( ii ) .in fact we have , by simple diagram chasing : \(i ) \(ii ) we still have to show that .this follows by comparing the following two diagrams , where all squares are pullbacks , either by the statements of section [ sec : preliminaries ] or ( the last to the right ) by assumption .clearly , also the composite squares are pullbacks , but then the bottom arrows must both be equal to , as in equation ( [ pb : pmc ] ) .therefore we conclude that .|{{\phantom{\big(}\eta_l{\phantom{\big ) } } } } \ar@{}[rd]|{pb~(\ref{pb : eta } ) } & k \ar[l]|{{\phantom{\big(}l{\phantom{\big ) } } } } \ar@ { > ->}[d]|{{\phantom{\big(}\eta_k{\phantom{\big ) } } } } \ar@{}[rd]|{pb~(\ref{pb : pmc } ) } & k ' \ar[l]|{{\phantom{\big(}h{\phantom{\big ) } } } } \ar@ { > ->}[d]|{{\phantom{\big(}e{\phantom{\big ) } } } } \ar@/_4ex/[ll]|{{\phantom{\big(}d{\phantom{\big ) } } } } \\t(l ) & t(k ) \ar[l]|{{\phantom{\big(}t(l){\phantom{\big ) } } } } & d ' \ar[l]|(.5){{\phantom{\big(}\varphi(e , h){\phantom{\big ) } } } } } \quad\quad \xymatrix=4pc=3pc { l \ar@ { > ->}[d]|{{\phantom{\big(}\eta_l{\phantom{\big ) } } } } \ar@{}[rd]|{pb~(\ref{pb : olm } ) } & l \ar[l]|{{\phantom{\big(}id_l{\phantom{\big ) } } } } \ar@ { > ->}[d]|{{\phantom{\big(}m{\phantom{\big ) } } } } & k ' \ar[l]|{{\phantom{\big(}l \circ h{\phantom{\big ) } } } } \ar@ { > ->}[d]|{{\phantom{\big(}e{\phantom{\big ) } } } } \ar@/_4ex/[ll]|{{\phantom{\big(}d{\phantom{\big ) } } } } \\t(l ) & g \ar[l]|{{\phantom{\big(}{\overline{m}}{\phantom{\big ) } } } } & d ' \ar[l]|(.5){{\phantom{\big(}f{\phantom{\big)}}}}}\ ] ] the statement of theorem [ theorem : sqpo ] can be formulated equivalently in a more abstract way , as the fact that composing functor with a pullback along one gets a functor that is right adjoint to the functor taking pullbacks along .this alternative presentation and its proof are presented in appendix [ app : abstract ] .using theorem [ theorem : sqpo ] it is easy to show that the agree approach is a conservative extension of the sqpo approach , because the two coincide if the embedding of the agree rule is the arrow injecting into its partial map classifier . [theorem : agreevssqpo ] let be a category with all pullbacks , with -partial map classifiers for a stable system of monos , and with pushouts along arrows in .let be a rule and be a match in .then in words , the application of rule to match using the sqpo approach has exactly the same effect of applying to the same rule enriched with the embedding using the agree approach . since the embedding of the rule is arrow , phase ( a ) of agree rewriting ( definition [ def : pbcpo - rewriting ] ) is exactly the construction that is shown , in theorem [ theorem : sqpo ] , to build as a final pullback complement of , therefore it coincides with the construction of the left square of the sqpo approach . the second phase , i.e. 
the construction of the pushout of and is identical for both approaches by definition .we now show that agree rewriting allows to simulate rewriting with polarized cloning on graphs , which is defined in by using the polarized graphs of definition [ def : pol - cat ] .polarization is used in rewriting to control the copies of edges not matched but incident to the matched nodes .[ defi : pol - depol ] the _ underlying graph _ of a polarized graph is .this defines a functor which has both a right- and a left - adjoint functor denoted and , resp ., i.e. .functor maps each graph to the polarized graph _ induced by _ , defined as , and each graph morphism to itself ; it is easy to check that is a _ strict _ polarized graph morphism .furthermore we have that , and we denote the unit of adjunction as , thus .functor maps each graph to the polarized graph , where a node is in ( resp . in ) if and only if it has at least one outgoing ( resp .incoming ) edge in .since has a left adjoint , we have that preserves limits and in particular pullbacks .the category has final pullback complements along strict monos : their construction is given in ( * ? ? ? * appendix ) .[ defi : psqpo ] a _ psqporewrite rule _ is made of a span of graphs and a polarized graph with underlying graph .a _ psqpomatch _ of the psqporewrite rule is a mono in .a _ psqporewriting step _ is constructed as follows : 1 .the left - hand - side of the rule gives rise to a morphism in .the match gives rise to a strict mono in .then is constructed as the final pullback complement of in category .2 . since , we get in .then is built as the pushout of in category .recall that , as observed in sect .[ sect : examples - classifiers ] , category has an -partial map classifier . this will be exploited in the next result .[ theorem : psqpo ] let be a psqporule made of span and polarized graph . consider the component on of the natural transformation , and let and , thus .furthermore , let be a mono. then the first phase of psqpo rewriting consists of building the final pullback complement of in category . according to theorem [ theorem : sqpo ] ,since is strict such final pullback complement can be obtained as the top square in the diagram below to the left , where both squares are pullbacks in .the second phase consists of taking the pushout of morphisms and in . by applying functor to the left diagramwe obtain the diagram below to the right in , where both squares are pullbacks because preserves limits .in fact , recall that , that and that ; the fact that can be checked easily by comparing the construction of the ( -)partial map classifiers in and in . 
|{pb } { \mathrm{pol}}(l ) \ar@ { > ->}[d]^{{\mathrm{pol}}(m ) } \ar@ { > ->}@/_4ex/[dd]_(.6){\eta_{{\mathrm{pol}}(l)}}^(.6){= } & { \mathbb{k } } \ar[l]_{{\widehat}{l } } \ar@ { > ->}[d]_{n } \ar@ { > ->}@/^4ex/[dd]^(.6){\eta_{{\mathbb{k}}}}_(.6){= } \\ \ar@{}[dr]|{pb } { \mathrm{pol}}(g ) \ar[d]^{{\overline}{{\mathrm{pol}}(m ) } } & { \mathbb{d } } \ar[l]_{g } \ar[d]_{q={\overline{n } } } \\ { \mathbb{t}}({\mathrm{pol}}(l ) ) & { \mathbb{t}}({\mathbb{k } } ) \ar[l]^{{\mathbb{t}}({\widehat}{l } ) } \\ } \qquad \xymatrix=4pc=1.3pc { \ar@{}[dr]|{pb } l \ar@ { > ->}[d]^{m } \ar@ { > ->}@/_4ex/[dd]_(.6){\eta_{l}}^(.6){= } & k \ar[l]_{l } \ar@ { > ->}[d]|{{\mathrm{depol}}(n ) } \ar@ { > ->}@/^6ex/[dd]^(.6){t}_(.6){= } \\ \ar@{}[dr]|{pb } g \ar[d]^{{\overline{m } } } & { \mathrm{depol}}({\mathbb{d } } ) \ar[l ] \ar[d ] \\t(l ) & t_k \ar[l ] \\ } \ ] ] now , the first phase of agree rewriting with rule and match consists of taking the pullback in of and the only arrow that makes the outer square of the right diagram a pullback .this arrow is precisely , and therefore the pullback is exactly the lower square of the right diagram .the second phase consists of taking the pushout of and of the only arrow that makes the diagram commute ; but is such an arrow , thus the pushout is the same computed by the psqpo approach and this concludes the proof .in this paper we presented the basic definitions of a new approach to algebraic graph rewriting , called agree .we showed that this approach subsumes other algebraic approaches like sqpo ( sesqui - pushout ) with injective matches ( and therefore dpo and spo under mild restrictions , see ( * ? ? ? * propositions 12 and 14 ) ) , as well as its polarised version psqpo .the main feature provided by this approach is the possibility , in a rule , of specifying which edges shall be copied as a side effect of the copy of a node .this feature offers new facilities to specify applications in which copy of nodes shall be done in an unknown context , and thus it is not possible to describe in the left - hand side of the rule all edges that shall be copied together with the node . as an example , the anonymization of parts of a social network was described in sect .[ sec : examples ] .the idea of controlling explicitly in the rule how the right - hand side should be embedded in the context graph is not new in graph rewriting , as it is a standard ingredient of the algorithmic approaches .for example , in node label controlled ( nlc ) graph rewriting and its variations productions are equipped with _ embedding rules _ , which allow one to specify how the right - hand side of a production has to be embedded in the context graph obtained by deleting the corresponding left - hand side .the name of our approach is reminiscent of those older ones .adaptive star grammars is another framework where node cloning is performed by means of rewrite rules of the form where graph has a shape of a star and is a graph .cloning operation , see ( * ? ? ?* definitions 5 and 6 ) , shares the same restrictions as the sesqui - pushout approach : nodes are cloned with all their incident edges . in a general framework for graph transformations in span - categories , called _ contextual graph rewriting _ , briefly cr , has been proposed . using cr , thanks to the notions of rule and of match that are more elaborated than in other approaches , it is possible to specify cloning as in agree rewriting , and even more general transformations : e.g. 
, one may create multiple copies of nodes / edges as a side effect , not only when cloning items . the left - hand sides of cr rules allow one to specify elements that must exist for the rule to be applicable , called , and also a context for , i.e. a part of the graph that will be universally quantified when the rule is applied , called . a third component plays the role of embedding the context in the rest of the graph . the rule for copying a web page shown in figure [ fig_webpagecopy ] could be specified using cr as rule , where and . finding a match for a rule in a graph involves finding a smallest subgraph of that contains and its complete context . thus , even if cr is more general , our approach enhances the expressiveness of classical algebraic approaches with a form of controlled cloning using simpler and possibly more natural rules . bauderon's pullback approach is also related to our proposal . it was proposed as an algebraic variant of the above - mentioned nlc and ed - nlc algorithmic approaches . bauderon's approach is similar , in part , to the pullback construction used in the first phase of our rewriting step , but a closer analysis is needed and is planned as future work . we also intend to explore whether there are relevant applications where agree rewriting in its full generality ( i.e. , with possibly non - local rules ) could be useful . concerning the applicability of our approach to other structures , in practice the requirement of the existence of partial map classifiers looks quite demanding . agree rewriting works in categories of typed / colored graphs , which are used in several applications , because they are slice categories over graphs , and thus toposes . however , the categories of attributed graphs , which are not toposes , are even more widely used . under which conditions our approach can be extended or adapted to such structures is an interesting topic that we intend to investigate .

we are grateful to the anonymous reviewers of former versions of this paper for the insightful and constructive criticisms .

bauderon , m. , jacquet , h. : pullback as a generic graph rewriting mechanism . applied categorical structures 9(1 ) , 6582 ( 2001 ) cockett , j. , lack , s. : restriction categories i : categories of partial maps . theoretical computer science 270(12 ) , 223259 ( 2002 ) cockett , j. , lack , s. : restriction categories ii : partial map classification . theoretical computer science 294(12 ) , 61102 ( 2003 ) corradini , a. , duval , d. , echahed , r. , prost , f. , ribeiro , l. : agree - algebraic graph rewriting with controlled embedding . in : parisi - presicce , f. , westfechtel , b. ( eds . ) graph transformations , icgt 2015 . lncs , vol . 9151 . springer ( 2015 ) corradini , a. , heindel , t. , hermann , f. , könig , b. : sesqui - pushout rewriting . in : corradini , a. , ehrig , h. , montanari , u. , ribeiro , l. , rozenberg , g. ( eds . ) graph transformations , icgt 2006 . lncs , vol . 4178 , pp . 3045 . springer ( 2006 ) corradini , a. , montanari , u. , rossi , f. , ehrig , h. , heckel , r. , löwe , m. : algebraic approaches to graph transformation - part i : basic concepts and double pushout approach . in : rozenberg , pp . 163246 drewes , f. , hoffmann , b. , janssens , d. , minas , m. : adaptive star grammars and their languages . 411(34 - 36 ) , 30903109 ( 2010 ) duval , d. , echahed , r. , prost , f. : graph rewriting with polarized cloning . corr abs/0911.3786 ( 2009 ) , http://arxiv.org/abs/0911.3786 duval , d. , echahed , r. , prost , f.
: graph transformation with focus on incident edges . in : ehrig , h. , engels , g. , kreowski , h. , rozenberg , g. ( eds . ) graph transformations , icgt 2012 .lncs , vol .7562 , pp .springer ( 2012 ) duval , d. , echahed , r. , prost , f. , ribeiro , l. : transformation of attributed structures with cloning . in : gnesi ,s. , rensink , a. ( eds . ) fundamental approaches to software engineering , fase 2014 .lncs , vol . 8411 , pp .310324 . springer ( 2014 ) dyckhoff , r. , tholen , w. : exponentiable morphisms , partial products and pullback complements. journal of pure and applied algebra 49(1 - 2 ) , 103116 ( 1987 ) echahed , r. : inductively sequential term - graph rewrite systems . in : ehrig , h. , heckel , r. , rozenberg , g. , taentzer , g. ( eds . ) graph transformations , icgt 2008 .lncs , vol . 5214 , pp .springer ( 2008 ) ehrig , h. , heckel , r. , korff , m. , lwe , m. , ribeiro , l. , wagner , a. , corradini , a. : algebraic approaches to graph transformation - part ii : single pushout approach and comparison with double pushout approach . in : rozenberg , pp .247312 ehrig , h. , pfender , m. , schneider , h.j . : graph - grammars : an algebraic approach . in : 14th annual symposium on switching and automata theory , iowa city , iowa ,usa , october 15 - 17 , 1973 . pp .ieee computer society ( 1973 ) engelfriet , j. , rozenberg , g. : node replacement graph grammars . in : rozenberg , pp .194 hay , m. , miklau , g. , jensen , d. , towsley , d.f . , li , c. : resisting structural re - identification in anonymized social networks .vldb j. 19(6 ) , 797823 ( 2010 ) lwe , m. : algebraic approach to single - pushout graph transformation .109(1&2 ) , 181224 ( 1993 ) lwe , m. : graph rewriting in span - categories . in : graph transformations , icgt 2010 .lncs , vol . 6372 , pp .springer ( 2010 ) mitchell , m. , oldham , j. , samuel , a. : advanced linux programming .landmark series , new riders ( 2001 ) rozenberg , g. ( ed . ) : handbook of graph grammars and computing by graph transformations , volume 1 : foundations .world scientific ( 1997 )this section is devoted to the proof of proposition [ prop : locality ] and to part of the proof of theorem [ theorem : sqpo ] .let be a category satisfying all conditions of definition [ def : complement ] , where is an -partial map classifier .let us start with a technical lemma .[ lemma : locality ] object is isomorphic to for each , and furthermore is an iso for each . _proof.__first , let us look at the diagram to the right where is any object . in this diagram the topsquare is a pullback of shape ( [ pb : eta ] ) and the bottom square is a pullback because , up to the isomorphism between and we may replace by and by , so that the bottom square becomes the image by of a pullback square .thus , is isomorphic to and , up to this iso , is .now , let us look at the diagram to the right where is any arrow . 
in this diagram the topsquare is a pullback of shape ( [ pb : eta ] ) and the bottom square is a pullback because it is the image by of a pullback square .thus , is an iso .|{pb } l \ar@ { > ->}[d]|{{\phantom{\big(}\eta_l{\phantom{\big ) } } } } \ar[r]|{{\phantom{\big(}{1}_l{\phantom{\big ) } } } } & { 1}\ar@ { > ->}[d]|{{\phantom{\big(}{\mathit{true}}{\phantom{\big ) } } } } \\\ar@{}[rd]|{pb } t(l ) \ar[r]|{{\phantom{\big(}t({1}_l){\phantom{\big ) } } } } & t({1 } ) \\ t({0 } ) \ar@ { > ->}[u]|{{\phantom{\big(}t({0}_l){\phantom{\big ) } } } } \ar[r]|(.6){{\phantom{\big(}{1}_{t({0})}{\phantom{\big ) } } } } & { 1}\ar@ { > ->}[u]|{{\phantom{\big(}{\mathit{false}}{\phantom{\big ) } } } } \\ } $ ] =4.0pc=1.8pc @[dr]|pb k @ > ->[d]|_k |l & l @ > ->[d]|_l+ @[dr]|pb t(k ) |t(l ) & t(l ) + t(0 ) @ >->[u]|t(0_k ) |_t(0 ) & t(0 ) @ >->[u]|t(0_l ) + let us recall the statement of the proposition , for the readers convenience : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ let be a local rule . then , with the notations as in diagram ( [ eq : agree - rew ] ) , for each match the resulting rewrite step is local ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ by definition [ def : embspec ] we have to show that if is such that is an iso , i.e. the rule is local , then arrow is an iso as well .consider the diagram in figure [ fig : sm ] , where the left part depicts the first phase of an agree rewriting step , together with several arrows to the -subobject classifier .the right part is obtained by pulling back ( part of ) the left part along , obtaining the depicted strict complements ( see definition [ def : embspec ] ) .now , in triangle arrow is iso by hypothesis , and is iso by lemma [ lemma : locality ] .therefore is an iso as well .furthermore the square around is a pullback , because it is obtained by pulling back ( along ) the pullback around , and therefore is an iso .|{{\phantom{\big(}m{\phantom{\big ) } } } } \ar@/_3ex/[dddd]|{{\phantom{\big(}\eta_l{\phantom{\big ) } } } } & & k \ar[ll]|{{\phantom{\big(}l{\phantom{\big ) } } } } \ar@ { > ->}[dd]|{{\phantom{\big(}n{\phantom{\big ) } } } } \ar@ { > ->}@/_3ex/[dddd]|(.2){{\phantom{\big(}t{\phantom{\big ) } } } } \ar@ { > ->}@/^4ex/[dddddd]|(.49){{\phantom{\big(}\eta_k{\phantom{\big ) } } } } \\ & & & { g{\setminus}l } \ar[dd]|(.33){\hole}|{{\phantom{\big(}{\overline{m}}{\setminus}id_l{\phantom{\big)}}}}|(.65){\hole } \ar@{-->}[llld]|(.3)\hole|(.37)\hole|(.45)\hole|(.6){{\phantom{\big(}g{\setminus}m{\phantom{\big ) } } } } \ar@{ .. 
>}[rd]|\hole & & { d{\setminus}k } \ar[dd]|{{\phantom{\big(}n'{\setminus}id_k{\phantom{\big ) } } } } \ar@{-->}[llld]|(.55){{\phantom{\big(}d{\setminus}n{\phantom{\big ) } } } } \ar@{ .. >}[ld ] \ar[ll]|{{\phantom{\big(}g { \setminus}l{\phantom{\big ) } } } } \\ g\ar@{ .. >}[rd]|{{\phantom{\big(}\chi_m{\phantom{\big ) } } } } \ar[dd]|{{\phantom{\big(}{\overline{m}}{\phantom{\big ) } } } } & & d \ar[dd]|{{\phantom{\big(}n'{\phantom{\big ) } } } } \ar@{ .. >}[ld]|{{\phantom{\big(}\chi_n{\phantom{\big ) } } } } \ar[ll]|(.13)\hole|{{\phantom{\big(}g{\phantom{\big ) } } } } & & 1 \ar@{=>}[llld]|(.45){{\phantom{\big(}\mathit{false}{\phantom{\big)}}}}|(.59)\hole|(.68)\hole|(.75)\hole \\ & t(1 ) & & { t(l ) { \setminus}l } \ar@{ .. >}[ru ] \ar@{-->}[llld]|(.26)\hole|(.37)\hole|(.42)\hole|(.53)\hole|(.62)\hole & & { t_k { \setminus}k } \ar[dd]|{{\phantom{\big(}{\overline{t}}{\setminus}id_k{\phantom{\big ) } } } } \ar@{-->}[llld]|(.7){{\phantom{\big(}t_k{\setminus}t{\phantom{\big ) } } } } \ar@{ .. >}[lu ] \ar[ll]|{{\phantom{\big(}l ' { \setminus}l{\phantom{\big)}}}}\\ t(l ) \ar@{ .. >}[ru]|{{\phantom{\big(}t(1_l){\phantom{\big ) } } } } & & t_k \ar[ll]|{{\phantom{\big(}l ' = \varphi(t , l){\phantom{\big ) } } } } \ar@{ .. >}[ul]|(.4){{\phantom{\big(}\chi_t{\phantom{\big ) } } } } \ar[dd]|{{\phantom{\big(}{\overline{t}}{\phantom{\big ) } } } } \\ & & & \ar@{}[uurr]|(.67){(\ddagger ) } & & t(k ) { \setminus}k \ar[uull]|{{\phantom{\big(}t(l){\setminus}l{\phantom{\big)}}}}|(.77)\hole \ar@{-->}[llld]|{{\phantom{\big(}t(k){\setminus}\eta_k{\phantom{\big ) } } } } \ar@{ .. >}[uuul]|(.61)\hole|(.68)\hole \\ & & t(k ) \ar@{ .. >}[uuul]|{{\phantom{\big(}t(1_k){\phantom{\big ) } } } } \ar[uull]|{{\phantom{\big(}t(l){\phantom{\big ) } } } } } \ ] ] let as redraw the right diagram of figure [ fig : fpbcaspb ] for the reader s convenience , enriched with some additional information .|{{\phantom{\big(}v{\phantom{\big ) } } } } \ar@{ .. >}@/^3ex/[dddr]|{{\phantom{\big(}w{\phantom{\big ) } } } } \ar@{ .. >}[d]_z \ar@<0ex>@{}[ddl]|(.4){{\ensuremath{\langle6\rangle } } } \ar@<0ex>@{}[dddr]|(.4){{\ensuremath{\langle7\rangle}}}\\ & & & k ' \ar@/_2ex/[dlll]|{{\phantom{\big(}d{\phantom{\big ) } } } } \ar@<1ex>@{}[dlll]|{{\ensuremath{\langle1\rangle } } } \ar[ddr]|{{\phantom{\big(}e{\phantom{\big ) } } } } \ar[dl]|{{\phantom{\big(}h{\phantom{\big ) } } } } \ar@<0ex>@{}[ddd]|{{\ensuremath{\langle2\rangle } } } \\l \ar@ { > ->}[ddr]|{{\phantom{\big(}m{\phantom{\big ) } } } } \ar@ { > ->}[dddd]|{{\phantom{\big(}\eta_l{\phantom{\big ) } } } } & & k \ar[ll]|{{\phantom{\big(}l{\phantom{\big ) } } } } \ar@ { > ->}[dddd]|(.23){{\phantom{\big(}\eta_k{\phantom{\big)}}}}|(.4){\hole}|(.53){\hole } \ar@ { > ->}[ddr]|(.35){{\phantom{\big(}n{\phantom{\big)}}}}|(.63){\hole } \\ & & & & d ' \ar@{ .. 
>}[dl]|{{g } } \ar@/_1ex/[dlll]|{{\phantom{\big(}f{\phantom{\big ) } } } } \ar@/^3ex/[dddll]|{\phantom{\big(}\varphi(e , h)\phantom{\big ( } } \ar@/^13ex/[dddllll]|{\phantom{\big(}\varphi(e , d)\phantom{\big ( } } \ar@<1ex>@{}[dlll]|{{\ensuremath{\langle3\rangle } } } \ar@<1ex>@{}[dddll]|(.4){{\ensuremath{\langle5\rangle}}}\\ & g \ar[ddl]|{{\phantom{\big(}\overline{m}{\phantom{\big ) } } } } \ar@{}[ddr]|{{\ensuremath{\langle4\rangle } } } & & d \ar[ddl]|(.35){\phantom{\big(}n'\phantom{\big ( } } \ar[ll]|(.3){{\phantom{\big(}a{\phantom{\big)}}}}\\ & \\ t(l ) & & t(k ) \ar[ll]|{{\phantom{\big(}t(l){\phantom{\big ) } } } } } \ ] ] we have to prove that the arrow , that was shown to exists in the first part of the proof , is the only arrow that satisfies and .suppose indeed that is another arrow such that and . since is a pullback , in order to show that it is sufficient to show that , because commutativity of and uniquely determines a mediating arrow .to show , recall that by the properties of the -partial map classifier there is a unique arrow such that and the square is a pullback .therefore it is sufficient to show that is a pullback .first , it commutes , as .next , let be such that .we have to show that there is a unique such that and . for _ existence_ , an arrow is determined by exploiting the pullback ( it is a pullback again by the properties of ) .in fact we have .thus there is an arrow such that both and hold .it remains to show , i.e. that . by exploiting pullback , it is sufficient to show that ( i ) and ( ii ) .in fact , we have ( i ) , and ( ii ) .finally , the _ uniqueness _ of follows by the observation that commutativity of and uniquely determines a mediating morphism to regarded as pullback object of .this appendix is dedicated to a more abstract , equivalent presentation of the statement of theorem [ theorem : sqpo ] and of its proof . by exploiting the characterization of the final pullback complement as an adjoint functor, we get a proof which hides some diagram chasing by using general properties of partial map classifiers and adjunctions .first we state a lemma about decomposing the arrow , then we recall the definitions of slice categories and pullback functors , and finally we get a new point of view on theorem [ theorem : sqpo ] .[ lemma : mf ] let be a category with pullbacks and with an -partial map classifier for a stable system of monos . for each -partial map , with , we have . if in addition is the pullback of some with in , then . for the first point, the left diagram below is composed of two pullbacks of shape ( [ pb : olm ] ) and ( [ pb : eta ] ) , respectively , therefore it is a pullback . 
since it has shape ( [ pb : pmc ] ) , we conclude that .

[ diagrams : the left and right composite pullback diagrams used in the two points of the proof ]

for the second point , similarly , the right diagram above is the composition of a pullback of shape ( [ pb : olm ] ) and of the left square , which is a pullback by assumption , thus it is a pullback . since it has shape ( [ pb : pmc ] ) , we can conclude that . for each object in a category , the _ slice category _ over is denoted : its objects are the arrows in and an arrow in , with and in , is an arrow in such that . for each endofunctor and each object in , let us still denote by the functor which maps each object of to and each arrow of to . for each arrow in a category with pullbacks , the _ pullback functor _ associated with is denoted ; on objects , it maps each to such that the square below on the left is a pullback square ; on arrows , using the decomposition property of pullbacks , it maps each to the unique , where and , such that is a pullback of ( below on the right ) . in fact , `` the '' pullback functor is defined only up to isomorphism , but this will not raise any problem .

[ diagrams : the pullback square defining the action on objects ( left ) and the decomposition defining the action on arrows ( right ) ]

1 . the pullback functor has a right adjoint and the counit of the adjunction is a natural isomorphism .

2 . the arrow _ has final pullback complements _ , i.e. , for each there is a pair of composable arrows which are a final pullback complement of .

let be a category with pullbacks and let be a stable system of monos of . then the composition of consecutive -partial maps is defined in the usual way , using a pullback in . this yields the category of -partial maps over and the inclusion functor , which maps each object to and each arrow to . according to ( sec . 2.1 of ) , has an -partial map classifier if and only if the functor has a right adjoint , and then the -partial map classifier is made of the endofunctor on and of the unit of the adjunction , . thus , the functor is defined as for each object and for each arrow . now , exploiting theorem [ thm : dt ] we can state and prove theorem [ theorem : sqpo ] in a more abstract framework , as follows . [ theorem : sqpo - abstract ] let be a category with pullbacks and with an -partial map classifier for a stable system of monos . then for each mono in the functor is the right adjoint to the functor . in addition , the counit of the adjunction is a natural isomorphism . let us sketch this proof by describing the unit and counit of the adjunction . for the counit , since we have , and since the natural transformation is cartesian we have . then the counit is the resulting natural isomorphism . for the unit , let be an object in and let in ( see the diagrams below ) . let be the fourth arrow in this pullback , then by stability and by lemma [ lemma : mf ] we have . let and let be the fourth arrow in this pullback . by definition of pullback , there is a unique arrow such that and . it follows that is an arrow in . moreover , let , then . since is cartesian , the decomposition property of pullbacks implies that is the pullback of , so that is in and . then it can be checked that the arrows define a natural transformation , which is the unit of the adjunction .

[ diagrams : the two pullback squares used in the construction of the unit ]
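the adjunction - theoretic statements above are formulated for an arbitrary category with pullbacks and a stable system of monos . as a concrete sanity check , the following small python sketch ( purely illustrative , not part of the paper ) computes pullbacks and the partial map classifier in the category of finite sets , where t ( x ) adjoins a single `` undefined '' point and the classifying arrow of a partial map sends every element outside the domain of definition to that point .

BOT = "_|_"   # the extra point of t(x) = x + { undefined }

def pullback(f, g):
    """pullback of f : y -> z and g : w -> z , as a subset of y x w."""
    return {(y, w) for y in f for w in g if f[y] == g[w]}

def T_obj(X):
    """partial map classifier on objects : adjoin a fresh undefined point."""
    return set(X) | {BOT}

def classify(dom, f, Z):
    """classifying arrow z -> t(y) of a partial map defined on dom (a subset of z) by f."""
    return {z: (f[z] if z in dom else BOT) for z in Z}

# tiny example : two functions into the common codomain {0, 1}
f = {"a": 0, "b": 1}
g = {"u": 0, "v": 0, "w": 1}
print(pullback(f, g))            # {('a', 'u'), ('a', 'v'), ('b', 'w')}

# a partial map from z = {0, 1, 2} to y = {'p', 'q'} defined only on {0, 1}
print(T_obj({"p", "q"}))         # {'p', 'q', '_|_'}
print(classify({0, 1}, {0: "p", 1: "q"}, {0, 1, 2}))   # {0: 'p', 1: 'q', 2: '_|_'}

the general constructions above specialize to this familiar option - type picture in the category of finite sets .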
the several algebraic approaches to graph transformation proposed in the literature all ensure that if an item is preserved by a rule , so are its connections with the context graph where it is embedded . but there are applications in which it is desirable to specify different embeddings . for example when cloning an item , there may be a need to handle the original and the copy in different ways . we propose a conservative extension of classical algebraic approaches to graph transformation , for the case of monic matches , where rules allow one to specify how the embedding of preserved items should be carried out .
this paper will present an overview of some recent developments in the application of random matrix analysis to the _ topological combinatorics _ of surfaces .such applications have a long history about which we should say a few words at the outset .the combinatorial objects of interest here are _a map is an embedding of a graph into a compact , oriented and connected surface with the requirement that the complement of the graph in should be a disjoint union of simply connected open sets .if the genus of is , this object is referred to as a _ g - map_. the notion of -maps was introduced by tutte and his collaborators in the 60s as part of their investigations of the four color conjecture . in the early 80s bessis , itzykson and zuber ,a group of physicists studying t hooft s diagrammatic approaches to large n expansions in quantum field theory , discovered a profound connection between the problem of enumerating -maps and random matrix theory .that seminal work was the basis for bringing asymptotic analytical methods into the study of maps and other related combinatorial problems .subsequently , in the early 90s , other physicists realized that the matrix model diagrammatics described in provide a natural means for discretizing the einstein - hilbert action in two dimensions . from that and a formal double scaling limit, they were able to put forward a candidate for so - called _2d quantum gravity_. this generated a great deal of interest in the emerging field of string theory .we refer to for a systematic review of this activity and to for a description of more recent developments related to topological string theory .all of these applications were based on the postulated existence of a asymptotic expansion of the free energy associated to the random matrix partition function , where denotes the size of the matrix , as becomes large .the combinatorial significance of this expansion is that the coefficient of should be the generating function for the enumeration of -maps ( ordered by the cardinality of the map s vertices ) .in the existence of this asymptotic expansion and several of its important analytical properties were rigorously established .this analysis was based on a riemann - hilbert problem originally introduced by fokas , its and kitaev to study the 2d gravity problem .the aim of this paper is to outline how the results of and its sequel have been used to gain new insights into the map enumeration problem . in particular, we will be able to prove and significantly extend a conjecture made in about the closed form structure of the generating functions for map enumeration . over time combinatorialistshave made novel use of many tools from analysis including contour integrals and differential equations . in this workwe also introduce nonlinear partial diferential equations , in particular a hierarchy of conservation laws reminiscent of the _ shallow water wave equations _ ( see ( [ toda ] ) ) .this appears to make contact with the class of _ differential posets _ introduced by stanley ( see remark [ diffposet ] ) .the general class of matrix ensembles we analyze has probability measures of the form \right\ } dm,\,\ , \mbox{where}\\ \label{i.001b } v_j(\lambda ; \ t_{j } ) & = & \frac{1}{2 } \lambda^{2 } + \frac{t_{j}}{j } \lambda^{j}\end{aligned}\ ] ] defined on the space of hermitean matrices , , and with a positive parameter , referred to as the _ string coefficient_. 
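to make the ensemble concrete , the following short python sketch ( with sample parameter values , not those of the paper ) evaluates the potential and the unnormalized weight exp ( - tr v ( m ) / g_s ) on a random hermitian matrix ; the trace of a polynomial in m is just the sum of the potential over the eigenvalues .

import numpy as np

# illustrative only : v_j ( lam ; t_j ) = lam^2 / 2 + ( t_j / j ) lam^j as in the text ,
# evaluated spectrally on a sample hermitian matrix m .
def V(lam, t, j):
    return 0.5 * lam**2 + (t / j) * lam**j

def log_weight(M, t=0.5, j=4, g_s=1.0):
    lam = np.linalg.eigvalsh(M)              # tr v(m) = sum of v over the eigenvalues
    return -np.sum(V(lam, t, j)) / g_s

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
M = (A + A.T) / 2                            # a random real symmetric (hence hermitian) matrix

# the spectral evaluation agrees with the direct trace of the matrix polynomial
direct = np.trace(0.5 * M @ M + (0.5 / 4) * np.linalg.matrix_power(M, 4))
print(log_weight(M), -direct)                # the two numbers coincide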
the normalization factor , which serves to make a probability measure , is called the _ partition function _ of this unitary ensemble . in previous treatments , ,we have used the parameter instead of . this was in keeping with notational usages in some areas of random matrix theory ; however , since here we are trying to make a connection to some applications in quatum gravity , we have adopted the notation traditionally used in that context .this also is why we have scaled the time parameter by in this paper . for general polynomial weights it is possible to establish the following fundamental asymptotic expansion , of the logarithm of the _ free energy _associated to the partition function .more precisely , those papers consider weights of the form with even . we introduce a renormalized partition function , which we refer to as a _ tau function _ representation , where .the principal object of interest is the _large n _ asymptotic expansion of this representation for which one has the result as while with , called the t hooft parameter , held fixed .moreover , for for some , a. [ unif ] the expansion is uniformly valid on compact subsets of ; b. [ analyt ] extends to be complex analytic in ; c. [ diff ] the expansion may be differentiated term by term in with uniform error estimates as in ( [ unif ] ) the meaning of ( [ unif ] ) is that for each there is a constant , , depending only on and such that for in a compact subset of .the estimates referred to in ( [ diff ] ) have a similar form with and replaced by their mixed derivatives ( the same derivatives in each term ) and with a possibly different set of constants .recently these results were extended to the case where is odd . in this caseone should replace the normalized partiiton function ( [ tausquare ] ) by its szeg representation in terms of eignevalues ( [ szego ] ) .to explain the topological significance of the as generating functions , we begin with a precise definition of the objects they enumerate . on a compact , oriented and connected surface is a pair ) ] is an isotopical class of inclusions ; * the complement of in is a disjoint union of open cells ( faces ) ; * the complement of the vertices in is a disjoint union of open segments ( edges ) . when the genus of x is one refers to the map as a .what effectively showed was that the partial derivatives of evaluated at `` count '' a geometric quotient of a certain class of _ labelled g - maps_. as a means to reduce from enumerating these labelled -maps to enumerating -maps , it is natural to try taking a geometric quotient by a `` relabelling group '' more properly referred to as a _ cartographic group _ .this labelling has two parts ; first the vertices of the same valence , have an order labelling and second at each vertex one of the edges is distinguished .given that x is oriented , this second labelling gives a unique ordering of the edges around each vertex .the fact that the coefficients of the free energy expansion ( [ i.002 ] ) enumerate this class of labelled -maps is a consequence of ( [ i.002 ] ) ( i ) which enables one to evaluate a mixed partial derivative of in terms of the _ gaussian unitary ensemble _ ( gue ) where correlation functions of matrix coefficients all reduce to two point functions .( a precise description of this correspondence may be found in ) . to help fix these ideas we consider the case of a -regular -map ( i.e. , every vertex has the same valence , ) of size ( i.e. 
, the map has vertices ) which is the main interest of this paper .the cartographic group in this case is generated by the symmmetric group which permutes the vertex labels and factors of the cyclic group which rotates the distinguished edge at a given vertex in the direction of the holomorphic ( counter - clockwise ) orientation on .the order of the cartographic group here is the same as that of the product of its factors which is . on the other handthe generating function for -maps in this setting is given by where = the number of labelled -regular -maps on vertices .the factor perfectly cancels the order of the cartographic group , making this series appear to indeed be the ordinary generating function for pure -maps .however , for some -maps the cartographic action may have non - trivial isotropy and this can create an `` over - cancellation '' of the labelling .this happens when a particular relabelling of a given map can be transformed back to the original labelling by a diffeomorphism of the underlying riemann surface . in this eventthe two labellings are indistinguishable and the diffeomorphism induces an automorphism of the underlying map .in addition , the element of the cartographic group giving rise to this situation is an element of the isotropy group of the given map .hence , as a generating function for the geometric quotient , ( [ gquoti ] ) is expressible as where = the number of vertices of , = the number of faces of and aut( ) = the automorphism group of the map .we have included the -dependent form , ( [ gquotiii ] ) , of since that will play an important role later on and also to observe that this is in fact a _ bivariate _ generating function for enumerating -maps with a fised number of vertices and faces .moreover , in this -regular setting , one sees that the bivariate function is self - similar .this is a direct consequence of euler s relation : the presence of geometric factors such as is not uncommon in enumerative graph theory , a classical example being that of erds - rnyi graphs . in the quantum gravity setting these factors also have a natural interpretation in terms of the discretization of the reduction to conformal structures via a quotient of metrics by the action of the diffeomorphism group .we refer to for further details on this attractive set of ideas . in , and explicitly computed for the case of valence .we quote , from the same paper , the following conjecture ( some notation has been changed to be consistent with ours ) : _ it would of course be very interesting to obtain in closed form for any value of .the method of this paper enabled us to do so up to , but works in the general case , although it requires an incresing amount of work .we conjecture a general expression of the form _ _ with a polynomial in , the degree of which could be obtained by a careful analysis of the above procedure . " _ here is equal , up to a scaling , to the generating function for the catalan numbers ; below it will signify which is similarly related to the generating function for the higher catalan numbers ( [ catalan ] ) . over the yearsthere have been a number of attempts to systematically address this question by studying the resolvent of the random matrix and associated schwinger - dyson equations .our methods take a different approach .the main purpose of this paper is to show how this conjecture can be verified and significantly extended . 
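for orientation , the `` higher catalan numbers '' referred to here can be taken , in one standard normalization , to be the fuss - catalan numbers ( the paper's normalization may differ by a simple rescaling of the argument , so the python check below is only illustrative ) . their generating function satisfies the functional equation z = 1 + x z^nu , which the snippet verifies on truncated power series ; for nu = 2 they reduce to the ordinary catalan numbers 1 , 1 , 2 , 5 , 14 , mentioned above .

from math import comb

# one standard normalization of the higher ( fuss - ) catalan numbers :
#   c_nu ( j ) = binom ( nu * j , j ) / ( ( nu - 1 ) * j + 1 )
def higher_catalan(nu, j):
    return comb(nu * j, j) // ((nu - 1) * j + 1)

def series_pow(a, k, order):
    """coefficients of a(x)**k , truncated at the given order."""
    out = [1] + [0] * order
    for _ in range(k):
        out = [sum(out[i] * a[m - i] for i in range(m + 1)) for m in range(order + 1)]
    return out

nu, order = 3, 6
z = [higher_catalan(nu, j) for j in range(order + 1)]
rhs = [1] + series_pow(z, nu, order)[:order]      # coefficients of 1 + x * z(x)**nu
print(z)      # [1, 1, 3, 12, 55, 273, 1428]
print(rhs)    # matches z , verifying the functional equation to this order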
in particular, we will show that for the case of even valence , , [ thm51 ] for , for all .the top coefficient and the constant term are respectively given by \qquad\end{aligned}\ ] ] where ( see thm [ result ] ) is proportional to the coefficient in the asymptotic expansion at infinity of the equation in the painlev i hierarchy and .our methods can be extended to the case of odd and the derivation of the analogue to theorem [ thm51 ] is in progress ( see section [ sec:6 ] ) .the route to getting these results passes through nonlinear pde , in particular a class of nonlinear evolution equations known as conservation laws which come from studying scaling limits of the recursion operators for orthogonal polynomials whose weights match those of the matrix models .this appeal to orthogonal polynomials also motivated the approaches of and . however to give a rigorous _ and _ effective treatment to the problem of finding closed form expressions for the coefficients of the asymptotic free energy , ( [ i.002 ] ) , requires essential use of riemann - hilbert analysis on the riemann hilbert problem for orthogonal polynomials that was introduced in .though we will not review this analysis here , we will state the consequences of it needed for our applications and reference their sources .in section [ sec:2 ] we present the necessary background on orthogonal polynomials and introduce the main equations governing their recurrences operators : the _ difference string equations _ and the _ toda lattice equations_. in section [ sec:44 ] we describe how ( [ i.002 ] ) can be used to derive and solve ( in the case of even valence ) the continuum limits of these equations which relates to the nonlinear evolution equations alluded to earlier . in section [ sec:4 ]we outline the proof theorem [ thm51 ] and in section [ sec:6 ] we describe the extension of this program to the case of odd valence and briefly mention what has been accomplished in that case thus far .this will also help to illuminate the full picture behind the idea of conservation laws for random matrices .let us recall the classical relation between orthogonal polynomials and the space of square - integrable functions on the real line , , with respect to exponentially weighted measures .in particular , we want to focus attention on weights that correspond to the random matrix weights , , ( [ i.001b ] ) with even .( recently this relation has been extended to the cases of odd , with the orthogonal polynomials generalised to the class of so - called _ non - hermitean _ orthogonal polynomials ; however , for this exposition we will stick primarily with the even case . ) to that end we consider the hilbert space of weighted square integrable functions .this space has a natural polynomial basis , , determined by the conditions that for the construction of this basis and related details we refer the reader to .with respect to this basis , the operator of multiplication by is representable as a semi - infinite tri - diagonal matrix , is commonly referred as the _ recursion operator _ for the orthogonal polynomials and its entries as _( when is an even potential , it follows from symmetry that for all . )we remark that often a basis of orthonormal , rather than monic orthogonal , polynomials is used to make this representation . 
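since the recursion coefficients just introduced drive everything that follows , a quick numerical illustration may help . the sketch below ( illustrative parameter values , not the paper's ) computes the off - diagonal recurrence coefficients of the monic orthogonal polynomials for the quartic weight exp ( - ( lam^2 / 2 + t lam^4 / 4 ) / g_s ) from hankel determinants of its moments , using the classical identity b_n^2 = d_{n+1} d_{n-1} / d_n^2 with d_0 = 1 ; for t = 0 this reproduces the gaussian ( hermite - type ) values b_n^2 = n g_s .

import numpy as np

# illustrative check : recurrence coefficients from hankel determinants of
# numerically computed moments of the quartic weight .
def moments(t, g_s, kmax, L=10.0, npts=200001):
    lam = np.linspace(-L, L, npts)
    w = np.exp(-(0.5 * lam**2 + 0.25 * t * lam**4) / g_s)
    d = lam[1] - lam[0]
    return [float(np.sum(lam**k * w) * d) for k in range(kmax)]

def b_squared(t=0.0, g_s=1.0, nmax=5):
    m = moments(t, g_s, 2 * nmax + 2)
    D = [1.0] + [np.linalg.det([[m[i + j] for j in range(n)] for i in range(n)])
                 for n in range(1, nmax + 2)]
    return [D[n + 1] * D[n - 1] / D[n] ** 2 for n in range(1, nmax + 1)]

print(np.round(b_squared(t=0.0), 3))   # ~ [1, 2, 3, 4, 5] , the gaussian case
print(np.round(b_squared(t=0.1), 3))   # a small quartic deformation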
in that casethe analogue of ( [ multop ] ) is a symmetric tri - diagonal matrix .as long as the coefficients do not vanish , these two matrix representations can be related through conjugation by a semi - infinite diagonal matrix of the form .similarly , the operator of differentiation with respect to , which is densely defined on , has a semi - infinite matrix representation , , that can be expressed in terms of as where the `` minus '' subscript denotes projection onto the strictly lower part of the matrix . from the canonical ( heisenberg ) relation on , one sees that & = & 1,\end{aligned}\ ] ] where here in the bracket and on the right hand side are regarded as multiplication operators . with respect to the basis of orthogonal polynomialsthis may be re - expressed as & = & g_s i .\end{aligned}\ ] ] the relations implicit in ( [ fl1 ] ) have been referred to as _ string equations _ in the physics literature .in fact the relations that one has , row by row , in ( [ fl1 ] ) are actually successive differences of consecutive string equations in the usual sense . however , by continuing back to the first row one may recursively de - couple these differences to get the usual equations . to make this distinction clear we will refer to the row by row equations that one has directly from ( [ fl1 ] ) as _ difference string equations_. depends smoothly on the coupling parameter in the weight ( see [ i.001b ] ) .the explicit dependence can be determined from the fact that multiplication by commutes with differentiation by .this yields our second fundamental relation on the recurrence coefficients , \,,\end{aligned}\ ] ] which is equivalent to the equation of the semi - infinite toda lattice hierarchy .the toda equations for are one may apply standard methods of orthogonal polynomial theory to deduce the existence of a semi - infinite lower unipotent matrix such that where ( for a description of the construction of such a unipotent matrix we refer to proposition 1 of . ) this is related to the hankel matrix where is the moment of the measure , by with where denotes the principal sub - matrix of whose determinant may be expressed as ( see szeg s classical text ) , \right\ } d^{n } \lambda,\end{aligned}\ ] ] where .we set .we sometimes need to extend the domain of the tau functions to include other parameters , such as , as we have done here .doing this presents no difficulties in the prior constructions .the diagonal elements may in fact be expressed as where which agrees with the definition of the tau function given in ( [ tausquare ] ) .the second equality follows by reducing the unitarily invariant matrix integrals in ( [ szego2 ] ) to their diagonalizations which yields ( [ szego1 ] ) .tracing through these connections , from to , one may derive the fundamental identity relating the random matrix partition function to the recurrence coefficients , which is the basis for our analysis of continuum limits in the next section .( note that and therefore . )we will also need a differential version of this relation : ( hirota ) = -g_s \frac{\partial}{\partial t_1 } \log \left [ \frac{z^{(n+1)}(t_1 , t_{2\nu})}{z^{(n)}(t_1 , t_{2\nu } ) } \right]\\ \label{b } b_{n , g_s}^2 & = g_s^2 \frac{\partial^2}{\partial t_1 ^ 2 } \log \tau^2_{n , g_s } = g_s^2 \frac{\partial^2}{\partial t_1 ^ 2 } \log z^{(n)}(t_1 , t_{2\nu})\,,\end{aligned}\ ] ] ( a derivation of this lemma may be found in . 
)it follows from ( [ b ] ) and ( [ i.002 ] ) that [ two - leg ] is a uniformly valid asymptotic asymptotic expansion in the sense of ( [ i.002 ] iii ) . in order to effectively utilize the relations ( [ fl1 ] , [ fl2 ] )it will be essential to keep track of how the matrix entries of powers of the recurrence operator , , depend on the original recurrence coefficients .that is best done via the combinatorics of weighted walks on the index lattice of the orthogonal polynomials . for the case of even potentials ,the relevant walks are _dyck paths _ which are walks , , on which , at each step , can either increase by 1 or decrease by 1. set then step weights , path weights and the -entry of are , respectively , given by the _ difference string equations _ are given ( for the -valent case ) by ( [ fl1 ] ) : = g_s i \,.\ ] ] by parity considerations , when the potential is even , the only non - tautological equations come from the diagonal entries of ( [ string - star2 ] ) : _ the entry _gives in terms of dyck paths this becomes where denotes the lattice location of the path after the downstep and we have used the relation on the left hand side of the equation . we illustrate this more concretely for the case of . referring to ( [ motzkin ] ) ,the relevant path classes here are note that the structure of the path classes does not actually depend upon .this is a reflection of the underlying spatial homogeneity of these equations .thus , for the purpose of describing the path classes , one can translate to . now applying ( [ weights ] )the difference string equation becomes , for , where , for this example , we have set the parameter equal to .we now pass to a more explicit form of of the _ toda equations _ ( [ fl2 ] ) in the case : once agian we illustrate these equations in the tetravalent case ( ) .the relevant path class is : applying ( [ weights ] ) , the tetravalent toda equations become where we have again used the relation and then set the parameter .the continuum limits of the difference string and toda equations will be described in terms of certain scalings of the independent variables , both discrete and continuous . as indicated at the outset , the positive parameter sets the scale for the potential in the random matrix partition function and is taken to be small .the discrete variable labels the lattice _ position _ on that marks , for instance , the orthogonal polynomial and recurrence coefficients .we also always take to be large and in fact to be of the same order as ; i.e. 
, as and tend to and respectively , they do so in such a way that their product remains fixed at a value close to .in addition to the _ global _ or _ absolute _ lattice variable , we also introduce a _local _ or _ relative _ lattice variable denoted by .it varies over integers but will always be taken to be small in comparison to and independent of .the dyck lattice paths naturally introduce the composite discrete variable into the formulation of the difference string and toda equations which we think of as a small discrete variation around a large value of .the spatial homogeneity of those equations manifests itself in their all having the same form , independent of what is , while in those equations varies over , the _ bandwidth _ of the toda / difference string equations .taking will insure the necessary separation of scales between and .we define as a _ spatial _ variation close to which will serve as a continuous analogue of the lattice location along a dyck path relative to the starting location of the path .we also introduce the self - similar scalings : that are natural given ( [ sss ] ) . in terms of these scalings , ( [ b - asymp ] ) may be rewritten as and , by app .a , we mention here that the variables as defined above differ slightly from their usage in related works where for appropriate parameters .we also introduce a shorthand notation to denote the expansion of the coefficients of around .[ shorthand ] for , where the subscript denotes the operation of taking the derivative with respect to of each coefficient of : as valid asymptotic expansions these representations denote the asymptotic series whose successive terms are gotten by collecting all terms with a common power of in ( [ f1k ] ) . in what followswe will frequently abuse notation and drop the evaluation at .in particular , we will write in doing this these series must now be regarded as formal but whose orders are still defined by collecting all terms in and of a common order .( recall that so that ) .they will be substituted into the difference string and the toda equations to derive the respective continuum equations . at any point in this process, if one evaluates these expressions at and one may recover valid asymptotic expansions in which the and have their original significance as valid asymptotic expansions of the recursion coefficients .we are now in a position to study the toda lattice equations ( [ bn ] ) expanded on the formal asymptotic series ( [ f1kform ] ) : from now on we will take , since its role in determining the structure of the asymptotic expansions of the is now completed , and set .collecting terms in these equations order by order in orders of we will have a hierarchy of equations that , in principle , allows one to recursively determine the coefficients of ( [ bs - asymp ] ) .we will refer to this hierarchy as the _ continuum toda equations_. ( note that one has such a hierarchy for each value of . 
) of course this is a standard procedure in perturbation theory .the equations we will derive are pdes in the form of evolution equations in which , now regarded as a continuous variable , is the independent _ spatial _ variable and is the _ temporal _ variable .one must still determine , at each level of the hierarchy , which solution of the ode is the one that corresponds to the expressions given for in ( [ b - shift_g ] ) .this amounts to a kind of solvability condition .this process was carried out fully in and .we will now state the results of that analysis .[ cont ] the continuum limit , to all orders , of the toda lattice equations as is given by the following infinite order partial differential equation for : where is a partition , with , of ; ; is the _ length _ of ; and is the _ size _ of ; and are coefficients to be described in the next proposition . by ( [ burgers ] ) , .the above result effectively reduces the determination of the hierarchy to an enumeration in terms of a pair of partitions .the first class of partitions is fairly straightforward and amounts to keeping track of the partial derivatives that enter into the expressions at a given level .thus at the terms are products of partial derivatives of various orders ( the _ parts _ ) which must add up to .these terms correspond to the tableaux at the level of the _ hasse - young graph _ shown in the left panel of figure [ zigzag ] .( in this we ignore , for now , the powers of that are internal to the asymptotic series and its -derivatives . )the other type of partition relates to how the dyck paths enter our equations .we have already seen that a dyck path is completely determined by specifying when its downsteps occur .the toda equations depend explicitly on where these downsteps ( the appearing in the equations as stated at the start of this subsection ). however these two specifications can be related and the downstep times are encoded in terms of a partition that measures deviation of the path from a standard zig - zag path .the following proposition gives an explicit closed form expression for the coefficients in terms of both classes of partitions . ( _ app .a.3_)[d - coeffs ] where is the set of _ restricted partitions _( meaning that ) , , and is the monomial symmetric polynomial associated to .the relation of inclusion between partitions , means that for all .the right panel of figure [ zigzag ] exemplifies , in the case when , a geometric realization of the partitions being summed over in the above formula .such a partition corresponds to a zig - zag path contained in the rectangle which starts at the leftmost corner with the step being a up - step and terminates at the rightmost corner with the step .the red path corresponds to the distinguished partition which records the downsteps at times so that in this case the initial ( ) step is a downstep .the green path illustrates a typical which , in this case , takes downsteps at times .given such a partition - path , one may project the initial point of each of its downsteps to the horizontal axis as indicated in the figure for the red and green paths .then equals the signed separation between the green point and the red point , reading from right to left .the symmetric polynomial then gets evaluated at these separation values in the above formula for .this description could also have been formulated in terms of the _ border strips _ associated to the _ skew tableaux _ but we will not elaborate on that here .[ h ] hasse - young graph ( courtesy d. 
eppstein ) ; right : partition walks , title="fig:",width=192 ] hasse - young graph ( courtesy d. eppstein ) ; right : partition walks , title="fig:",width=192 ] one is now in a position to deduce the form of the toda hierarchy .this is done by setting so that .one then collects _ all _ terms of order in the resulting expansion of ( [ burgers ] ) and this will be a partial differential equation in and that we refer to as the equation in the continuum toda hierarchy . at leading order in the hierarchy one observes that , for general , the continuum toda equation is an inviscid burgers equation with initial data .a solution exists and is unique for sufficiently small values of .it may be explicitly calculated by the method of characteristics , also known as the _ hodograph _ method in the version we now present .consider the ( hodograph ) relation among the independent variables , [ hodlemma ] a local solution of ( [ burgers ] ) is implicitly defined by ( [ hodograph ] ) .the annihilator of the differential of ( [ hodograph ] ) is a two - dimensional distribution locally on the space .an initial curve over the -axis ( parametrized as the graph of a function ) , transverse to the locus where locally determines a unique integral surface foliated by the integral curves of the vector field of the _ characteristic _ vector field equation ( [ char1 ] ) requires that along an integral curve of the characteristic vector field , is constant ; i.e. , by ( [ char2 ] ) which is equivalent to ( [ burgers ] ) . using ( [ char3 ] ) to set pins down our solution uniquely .we note that the numerical coefficients appearing in these burgers equations depend only on the total number of dyck paths in .one finds from ( [ hodograph ] ) and the self similar form of , when , is the catalan number .for general these are the _ higher catalan numbers _ which play a role in a wide variety of enumerative combinatorial problems .the _ continuum difference string _ hierarchies may be derived from the difference string equations ( [ diff - string ] ) in a manner completely analogous to what was done with the toda equations in the previous subsection .expanding ( [ diff - string ] ) on the asymptotic series ( [ f1kform ] ) we arrive at the following asymptotic equations . the equations at leading order , , are or , equivalently , which one directly recognizes as the spatial derivative of the hodograph solution ( [ hodograph ] ) . evaluating that solution at yields which is the functional equation for the generating function of the higher catalan numbers , mentioned in the previous subsection .the terms of the equations at can be computed directly and are found to have the form & + & 2s\left(c_\nu \partial_w \sum_{\begin{array}{c } 0 \leq k_j < g \\k_1 + \dots + k_{\nu } = g\\ \end{array } } f_{k_1}\cdots f_{k_{\nu}}\right)\\ \nonumber + \partial_w \sum_{k=0}^{g-1 } \frac{f_{k w^{(2g-2k)}}}{(2g-2k+1 ) ! } & + & 2\nu s \left(f_1^{(\nu - 1)}[2g-2 ] + f_2^{(\nu - 1)}[2g-4 ] + \cdots + f_{g}^{(\nu - 1)}[0]\right ) = 0 , \end{aligned}\ ] ] where ] appearing in ( [ cont - string ] ) .[ higher_exact ] by symmetry under the action of the symmetric group , it will suffice to check this identity for the single monomial term corresponding to .we directly calculate where in the second line we have made changes of variables replacing by . 
using thiswe observe that the inner summands of have the form in the second line the two inner summations of the first line have been rewritten in terms of clusters of partitions adjacent to a given partition of size . to get the last line we use the fact that .the proposition now follows from this observation , ( [ potential ] ) and ( [ fg ] ) .as a consequence of this result one sees that the continuum difference string equation is directly integrable : + \widehat{f}_2^{(\nu - 1)}[2g-4 ] + \cdots + \widehat{f}_{g}^{(\nu - 1)}[0]\right ) + \frac{1}{2s}\sum_{k=0}^{g-1 } \frac{f_{k w^{(2g-2k)}}}{(2g-2k+1)!}\right\}.\end{aligned}\ ] ] setting and applying ( [ stringeqn ] ) to eliminate this reduces to + \widehat{f}_2^{(\nu - 1)}[2g-4 ] + \cdots + \widehat{f}_{g}^{(\nu - 1)}[0]\right)\big|_{w = 1 } \right\}.\end{aligned}\ ] ] it is immediate from this representation that is a rational function of .apriori this _ anti - derivative _ should also include a constant term ( in ; it could depend on ) .this would lead to a term of the form .however , in it is shown , by an independent argument , that the pole order in at is always greater than one .hence the constant of integration must be zero . with further effortthis can be refined to [result ] where is a polynomial of degree in whose coefficients are rational functions of over the rational numbers and .[ diffposet ] a key element in the proof of proposition [ higher_exact ] is the observation that differentiation with respect to adjusts the multinomial labelling of partial derivatives in the expansion according to the edges of the hasse - young graph ( fig [ zigzag ] ) .this graph describes the adjacency relations between young diagrams of differing sizes .the edges describe which partitions of size are _ covered _ by a given partition of size .conversely it describes which partitions of size cover a partition of size which in the setting described here acts as an anti - differentiation operator .this kind of structure was called a differential poset by stanley and systematically examined in .recalling the basic identity ( [ hirota ] ) we have , by taking logarithms , where the initial value is given by the recursion relations of the hermite polynomials . as in , we can use formula to recursively determine in terms of solutions to the continuum equations .we use the asymptotic expansion of which has the form ( [ bs - asymp ] ) : note that the left hand side of equation ( [ tauk-2nddiff ] ) has the form of a centered second difference , .it follows that this expression has an expansion for large involving only even derivatives of the spatial variable .we have , at order , where . in was shown that is rational in with poles located only at .however we will now prove the more refined result stated in theorem [ thm51 ] .the proof of this result is by induction on .( the base case of is established by direct calculation . )we assume that ( [ note ] ) holds for all .we state here , without proof , some straightforward lemmas and propositions describing the derivatives of ( [ note ] ) ( details may be found in where similar lemmas are proved for the ) . set [ lem51 ] [ lem52 ] for and , c_\ell^{k , j-1}(\nu ) + \nu(2k + \ell + ( j-3 ) ) c_{\ell -1 } ^{k , j-1}(\nu)\\ c_\ell^{(k , j)}(\nu ) & = & 0 \qquad \ell < 0 , \qquad \ell \geq 3k-3+j\\ c_\ell^{(k,0)}(\nu ) & = & c_\ell^{(k)}(\nu).\end{aligned}\ ] ] [ lem53 ] for and where . 
[ prop52 ] for and but in fact , by the following vanishing lemma [ lem54 ] for , the minimal pole order of the expansion in proposition [ prop52 ] is .in particular the minimal pole orders coming from terms involving on the right - hand side of ( [ hirota2 ] ) are all greater than .[ prop53 ] the terms of by ( [ rational ] ) .this result shows that the minimal pole order coming from the terms in ( [ hirota2 ] ) is once again greater than .the preceding lemmas and propositions provide explicit laurent expansions ( in ) for all terms on the right hand side of ( [ hirota2 ] ) with two exceptions : with a small modification ( [ e1 ] ) may be brought in line with proposition [ prop52 ] , [ prop54 ] with .all other coefficients are then specified by the corresponding recursions stated in lemmas [ lem51 ] - [ lem53 ] with set to .a variant of the vanishing lemma [ lem54 ] also holds for : for .it follows that the minimal pole order of the expansion in proposition [ prop54 ] is at least and so the corresponding contribution to the minimal pole order of ( [ hirota2 ] ) is .finally we observe that for is a rational function of and its -derivatives , [ prop55 ] \\ \nonumber & + & ( p-3 ) !\left(-\frac{1}{w}\right)^{p-2}\end{aligned}\ ] ] each line of the above proposition can be established directly by induction starting with the base case for .it then follows from proposition 3.1(iii ) of that the minimal pole order contributed by ( [ e0 ] ) is .we are now in a position to outline the ( of theorem [ thm51 ] ) in ( theorem 1.3 ) it was shown that where denotes a polynomial of degree in .we first want to determine the relation between this degree and the pole order . to this endwe observe from propositions [ prop52 ] , [ prop53 ] , [ prop54 ] , and [ prop55 ] that the right hand side of ( [ hirota2 ] ) , evaluated at , is a rational function in which approaches a finite constant value as . from the form of the left hand side of ( [ hirota2 ] ) evaluated at also sees that its asymptotic order ( as ) is the same as that of .hence , and this shows that ( [ note ] ) is valid up to the determination of the minimal and maximal pole orders at . in the preceding lemmas and proposiitons we have seen that , for all terms on the right hand side of ( [ hirota2 ] ) , the minimal pole order is .furthermore , from these same representations together with proposition 3.1(iii ) of one sees that , with the possible exception of the genus 0 terms in ( [ e0exp ] ) , the maximal pole order of the terms on the right hand side of ( [ hirota2 ] ) is .the apparent maximal pole order in ( [ e0exp ] ) is which exceeds the stated bound when .this maximal order comes from terms containing the factor which are , specifically , {w=1}\\ & = & \frac{f_{0 w^{(2g+2)}}}{2f_0}|_{w=1 } \left[1 - \frac{(2\nu+1)(\nu-1)}{2\nu(\nu+1 ) } z_0 + \frac{(\nu-1)^2}{2\nu ( \nu+1 ) } z_0 ^ 2\right]\\ & = & \frac{f_{0 w^{(2g+2)}}}{2f_0}|_{w=1 } \mathcal{o}(\nu - ( \nu-1)z_0).\end{aligned}\ ] ] hence the maximal pole order contributed by the genus 0 terms is , in fact , which is for and for .this establishes that , for , the pole orders on the right hand side of ( [ hirota2 ] ) are bounded between and .moreover , for , the case by case checking of terms on the right hand side of ( [ hirota2 ] ) that has been carried out in this subsection , shows that the maximal pole order is realized by the term in proposition [ prop53 ] corresponding to the partition of having minimal length ( = ) ; i.e. 
, the partition whose young diagram is a single row .this implies that the residue of the maximal order pole is which is non - vanishing by theorem [ result ] .hence the maximal order pole is realized .now , given that has the form ( [ eg_ratl ] ) with , it follows from direct calculation that raises the minimum pole degree by and the maximum pole degree by with the coefficient at this order given by ( [ leadcoeff ] ) .this establishes ( [ note ] ) for .the cases of may be established separately by direct calculation ( see , for example , section 1.4.2 of ) .to establish ( [ note3 ] ) first note that by euler s relation , for a -map where is the number of ( -valent ) vertices and is the number of faces . since , one immediately sees that the number of vertices of such a map must satisfy the inequality it follows that for .( must vanish at least simply at since since . ) via cauchy s theorem these conditions may be re - expressed as for where in the second line we have rewritten as a rational function of ( [ eg_ratl ] ) and employed the change of variables which may be deduced from the string equation ( [ stringeqn ] ) .this yields a contour integral in centerd at .now one can see that these vanishing conditions are satisfied if and only if for which in turn proves ( [ note3 ] ) .finally we turn to the determination of the constant . by proposition [ prop53 ] , contributions to the constant term of only from the first sum on the right hand side of ( [ hirota2 ] ) .the parts of this coming from and are , by propositions [ prop55 ] and [ prop54 ] respectively , and . at higher genus , , the contribution to the constant termis determined by lemma [ lem51 ] to be .hence , by ( [ hirota2 ] ) we have from which ( [ note3 ] ) immediately follows .in the case when is odd in the weight ( [ genpot ] ) for , there is clearly a problem in applying the method of orthogonal polynomials as it was outlined in section [ sec:2 ] .very recently , however , a generalization of the _ equilibrium measure _ ( which governs the leading order behavior of the free energy associated to ( [ rmt ] ) ) was developed and applied to this problem , .it is based on generalizing to a class of complex valued non - hermitean orthogonal polynomials on a contour in the complex plane other than the real axis .these extensions were motivated by new ideas in approximation theory related to complex gaussian quadrature of integrals with high order stationary points .but even when the issue of existence of appropriate orthogonal polynomials has been resolved , there are still a number of significant obstacles to deriving results like theorem [ thm51 ] that are not present when the valence is even . 
for odd valencethere is an additional string of recurrence coefficients , the diagonal coefficients of , whose asymptotics needs to be analyzed .this in turn requires that the lattice paths used to define and analyze the toda and difference string equations must be generalized to the class of _ motzkin paths _ which can have segments where the lattice site remains fixed rather than always taking a step ( either up or down ) as was the case for dyck paths .nevertheless , all these constructions have been carried out in to derive the hierarchies of continuum toda and difference string equations when the valence is odd .the recurrence coefficients again have asymptotic expansions with continuum representations given by {\tilde{w } = 1}\end{aligned}\ ] ] the off - diagonal coefficients have corresponding representations which are much as they were in the even valence case , the coefficients in these expansions have a self - similar structure given by at leading order the continuum toda equations are and the leading order continuum difference string equations are where the coefficients of the matrix in ( [ toda ] ) are specified by and those of the matrix in ( [ string ] ) by the index appearing in the trinomial coefficients corresponds to the number of flat steps in the motzkin paths giving rise to that term .it is straightforward to see that ( [ toda ] ) may be rewritten in conservation law form as where the coefficients in the flux vector are given by recently , , we have determined that the equations ( [ string ] ) are in fact a differentiated form of the generalized hodograph solution of the conservation law ( [ law ] ) .this hodograph solution is given by analogous to what was done in theorem [ thm51 ] we expect to determine closed form expressions for all the coefficients in the topological expansion with odd weights .the first few of these , for the trivalent case , are where is implicitly related to by the polynomial equation is in fact the generating function for a _ fractional _ generalization of the catalan numbers .its coefficient counts the number of connected , non - crossing symmetric graphs on equi - distributed vertices on the unit circle .nothing has been said , in this article , about the eigenvalues of the random matrix although this is at the heart of the riemann hilbert analysis underlying all of our results .the essential link comes through the _ equilibrium measure _ , or density of states , for these eigenvalues .when in ( [ i.001b ] ) , this equilibrium measure reduces to the well - known wigner semi - circle law . as changes this measure deforms ; but , for satisfying the bounds implicit in ( [ i.002 ] ) ( i ) , its support remains a single interval , ] ( which corresponds to ] so that ( [ eqmeas ] ) remains a positive measure along an appropriate connected contour ( `` single interval '' ) in the complex -plane .for this continuation may be made up to a boundary curve in the complex -plane passing through ( with a corresponding image in the complex plane ) .extension of this to more general values of is in progress .the mechanism for carrying out this continuation is to regard ( [ eqmeas ] ) as a -parametrized family of holomorphic quadratic differentials .the candidate for the measure s support is then an appropriate bounded real trajectory of the quadratic differential . 
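the gaussian reference point mentioned above is easy to visualize numerically : for vanishing coupling the density of states is the wigner semicircle law . the following sketch ( illustrative scaling conventions , not tied to the paper's normalization ) samples a gue matrix and compares the empirical spectral density with the semicircle on [ -2 , 2 ] .

import numpy as np

# sample a gue matrix , rescale the eigenvalues by sqrt(n) , and compare the
# histogram with the semicircle density sqrt(4 - x^2) / (2 pi) on [-2, 2] .
rng = np.random.default_rng(0)
N = 800
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (A + A.conj().T) / 2
eigs = np.linalg.eigvalsh(H) / np.sqrt(N)

hist, edges = np.histogram(eigs, bins=40, density=True)
centers = (edges[:-1] + edges[1:]) / 2
semicircle = np.sqrt(np.clip(4 - centers**2, 0, None)) / (2 * np.pi)
print(float(np.max(np.abs(hist - semicircle))))   # small for large n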
outside the boundary curve, the riemann - hilbert analysis used in this paper may be analytically deformed and our results extended .the boundary may be regarded as a curve of critical parameters for this deformation .this curve is precisely the locus where the riemann invariants , that determine the edge of the spectrum ( as described in [ 71 ] ) exhibit a shock .this scenario is reminiscent of that for the small -limit of the nonlinear schrdinger equation in which the analogue of our boundary curve is the envelope of _ dispersive shocks_. in that setting it is the zakharov - shabat inverse scattering problem that shows one how to pass through the dispersive shocks and describe a continuation of measure - valued solutions with so - called _ multi - gap _ support .it is our expectation that coupling gravity to an appropriate conformal field theory ( to thus arrive at a bona fide string theory ) will play a similar role in our setting to determine a unique continuation through the boundary curve of critical parameters to a unique equilibrium measure with multi - cut support .we also hope that this will help bring powerful methods from the study of dispersive limits of nonlinear pde into the realm of random matrix theory .* acknowledgement . *the author wishes to thank msri for its hospitality and the organizers for the excellent fall 2010 program on random matrix theory .most of the new results described here had their inception during that happy period .n. m. ercolani and k. d. t .-mclaughlin , _ asymptotics of the partition function for random matrices via riemann hilbert techniques , and applications to graphical enumeration _ , int .res . not .* 14 * , 755820 , 2003 .s. kamvissis , k. d. t .-mclaughlin and p. d. miller , _ semiclassical soliton ensembles for the focusing nonlinear schrdinger equation_. annals of mathematics studies , * 154 * princeton university press , princeton , nj , 2003 .
this paper presents an overview of the derivation and significance of recently derived conservation laws for the matrix moments of hermitean random matrices with dominant exponential weights that may be either even or odd . this is based on a detailed asymptotic analysis of the partition function for these unitary ensembles and their scaling limits . as a particular application we derive closed form expressions for the coefficients of the genus expansion for the associated free energy in a particular class of dominant even weights . these coefficients are generating functions for enumerating _ g_-maps , related to graphical combinatorics on riemann surfaces . this generalizes and resolves a 30 + year old conjecture in the physics literature related to quantum gravity .
iterative message passing algorithms for decoding low - density parity - check ( ldpc ) codes have been the focus of research over the past decade and most of their properties are well understood , .these algorithms operate by passing messages along the edges of a graphical representation of the code known as the tanner graph and are optimal when the underlying graph is a tree .message passing decoders perform remarkably well which can be attributed to their ability to correct errors beyond the traditional bounded distance decoding capability . however , in contrast to bounded distance decoders ( bdds ) , iterative decoders can not guarantee correction of a fixed number of errors at relatively short code lengths .this is due to the fact that the associated tanner graphs for short length codes have cycles and the decoding becomes suboptimal and there exist a few low - weight patterns ( termed as near codewords or trapping sets ) uncorrectable by the decoder .it is now well established that the trapping sets lead to the phenomenon of error floor . roughly , error floor is an abrupt change in the frame error rate ( fer ) performance of an iterative decoder in the high signal - to - noise ratio ( snr ) region .the error floor problem is well understood for iterative decoding over binary erasure channel ( bec ) .the decoder fails when the received vector contains erasures in locations corresponding to a stopping set .for the awgn channel , richardson in presented a numerical method to estimate error floors of ldpc codes .he established a relation between trapping sets and the fer performance of the code in the error floor region ( the necessary definitions will be given in the next section ) .the approach from was further refined by stepanov _et al _ in .vontobel and koetter established a theoretical framework for finite length analysis of message passing iterative decoding based on graph covers .this approach was used by smarandache _et al _ in to analyze performance of ldpc codes from projective and for ldpc convolutional codes .for the binary symmetric channel ( bsc ) , error floor estimation based on trapping sets was proposed in and we adopt the notation from . in this paper , we make the following two fundamental contributions : ( a ) give necessary and sufficient conditions for a column - weight - three ldpc code to correct three errors , and ( b ) propose a construction method which results in a code satisfying the above conditions .we consider hard decision decoding for transmission over bsc .the bsc is a simple yet useful channel model used extensively in areas where decoding speed is a major factor .note that the problem of recovering from a fixed number of erasures is solved for the bec .if the tanner graph of a code does not contain any stopping sets up to size ( the size of minimum stopping set is ) , then the decoder is guaranteed to recover from any erasures .an analogous result for the bsc is still unknown .the problem of guaranteed error correction capability is known to be difficult and in this paper , we present a first step toward such result .previously , expansion arguments were used to show that message passing can correct a fixed fraction of errors . 
however , the code length needed to guarantee such correction capability is generally very large and to correct three errors , the length would be in the order of a few hundred thousand .also , these arguments can not be used for column - weight - three codes .column - weight - three codes are of special importance as their decoders have very low complexity and are used in a wide range of applications .we also show that the slope of the frame error rate ( fer ) is dependent on the critical number of the most relevant trapping sets and hence the slope can be improved by avoiding such trapping sets .we provide a technique to construct codes which outperform empirically best known codes of the same length .our method can be seen as a modification of the progressive edge growth ( peg ) technique proposed in .the rest of the paper is organized as follows . in section [ section2 ]we establish the notation , describe the gallager a algorithm and define trapping sets .in section [ section3 ] we present the main theorem which gives the necessary and sufficient conditions to correct three errors . in section [ section4 ]we describe a technique to construct codes satisfying the conditions of the theorem and provide numerical results .we conclude with a few remarks in section [ section5 ]in this section , we establish the notation and describe a hard decision decoding algorithm known as gallager a algorithm .we then characterize the failures of the gallager a decoder with the help of fixed points .we also introduce the notions of trapping sets and critical number .the tanner graph of an ldpc code , , is a bipartite graph with two sets of nodes : variable ( bit ) nodes and check ( constraint ) nodes .every edge in the bipartite graph is associated with a variable node and check node .the check nodes / variable nodes connected to a variable node / check node are referred to as its neighbors .the degree of a node is the number of its neighbors . in a regular ldpc code , each variable node has degree of and each check node has degree .the girth is the length of the shortest cycle in . in this paper , represents a variable node , represents an even degree check node and represents an odd degree check node .gallager in proposed two simple binary message passing algorithms for decoding over the bsc ; gallager a and gallager b. see for a detailed description of gallager b algorithm . for column - weight - three codes , which are the main focus of this paper , these two algorithms are the same .every round of message passing ( iteration ) starts with sending messages from variable nodes ( first half of the iteration ) and ends by sending messages from check nodes to variable nodes ( second half of the iteration ) .initially , the variable nodes send their received values to the neighboring checks . in the iteration , a variable node, sends the following message , , along edge to its neighboring check node ; if all incoming messages to other than the message from are equal to a certain value , it sends that value ; else , it sends the received value .a check node sends to a variable node , the modulo two sum of all incoming messages except the message from . at the end of each iteration ,an estimate of each variable node is made based on the incoming messages and possibly the received value .the decoder is run until a valid codeword is found or for a maximum number of iterations is reached , whichever is earlier .see for a detailed description of the messages passed in gallager a algorithm . 
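a compact python sketch of the gallager a update rules just described is given below ( the parity - check matrix and received word are toy examples , not codes considered in the paper ; the bit estimates use decision rule a , defined in the next paragraph ) .

# minimal sketch of the gallager a decoder : binary extrinsic messages ,
# unanimity - based bit estimates .
def gallager_a(H, r, max_iter=20):
    m, n = len(H), len(r)
    checks = [[j for j in range(n) if H[i][j]] for i in range(m)]
    var_chks = [[i for i in range(m) if H[i][j]] for j in range(n)]
    v2c = {(j, i): r[j] for j in range(n) for i in var_chks[j]}   # initial messages
    est = list(r)
    for _ in range(max_iter):
        # check to variable : xor of all incoming messages except the recipient's
        c2v = {(i, j): sum(v2c[(k, i)] for k in checks[i] if k != j) % 2
               for i in range(m) for j in checks[i]}
        # variable to check : forward the common extrinsic value , else the received value
        for j in range(n):
            for i in var_chks[j]:
                o = [c2v[(k, j)] for k in var_chks[j] if k != i]
                v2c[(j, i)] = o[0] if o and all(x == o[0] for x in o) else r[j]
        # decision rule a : unanimous incoming messages override the received value
        est = [c2v[(var_chks[j][0], j)]
               if all(c2v[(i, j)] == c2v[(var_chks[j][0], j)] for i in var_chks[j])
               else r[j] for j in range(n)]
        if all(sum(est[j] for j in checks[i]) % 2 == 0 for i in range(m)):
            break
    return est

H = [[1, 1, 0, 1, 1, 0, 0],    # toy parity - check matrix , hamming - like
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
print(gallager_a(H, [0, 0, 0, 0, 0, 0, 1]))   # single error , decodes to all - zero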
_ a note on the decision rule : _ different rules to estimate a variable node after each iteration are possible and it is likely that changing the rule after certain iterations may be beneficial . however , the analysis of various scenarios is beyond the scope of this paper .for column - weight - three codes only two rules are possible .* decision rule a : if all incoming messages to a variable node from neighboring checks are equal , set the variable node to that value ; else set it to received value * decision rule b : set the value of a variable node to the majority of the incoming messages ; majority always exists since the column - weight is three we adopt decision rule a throughout this paper .we now characterize failures of the gallager a decoder using fixed points and trapping sets .much of the following discussion appears in ,,, and we include it for sake of completeness .consider an ldpc code of length and let be the binary vector which is the input to the gallager a decoder .let be the support of .the support of is defined as the set of all positions where . a decoder failure is said to have occurred if the output of the decoder is not equal to the transmitted codeword . is called a _ fixed point _ if for every edge and its associated variable node that is , the message passed from variable nodes to check nodes along the edges are the same in every iteration .since the outgoing messages from variable nodes are same in every iteration , it follows that the incoming messages from check nodes to variable nodes are also same in every iteration and so is the estimate of a variable after each iteration .in fact , the estimate after each iteration coincides with the received value .it is clear from above definition that if the input to the decoder is a fixed point , then the output of the decoder is the same fixed point . without loss of generality , we assume that the all zero codeword is sent over bsc and the input to the decoder is the error vector .so , a fixed point with small weight means that few errors lead to decoder failure . a detailed discussion about different kinds of decoder failures is given in the support of a fixed point is known as a trapping set .a trapping set is a set of variable nodes whose induced subgraph has odd degree checks .our definition of a trapping set gives necessary and sufficient conditions for a set of variable nodes to form a trapping set .we state the following theorem which is a consequence of fact 3 from .[ thm1] let be a set consisting of variable nodes with induced subgraph .let the checks in be partitioned into two disjoint subsets ; consisting of checks with odd degree and consisting of checks with even degree .let and . is a trapping set if : ( a ) every variable node in is connected to at least two checks in and at most one checks in and ( b ) no two checks of are connected to a variable node outside .see . if the variable nodes corresponding to a trapping set are in error , then a decoder failure occurs . however , not all variable nodes corresponding to trapping set need to be in error for a decoder failure to occur . the minimal number of variable nodes that have to be initially in error for the decoder to end up in the trapping set will be referred to as _critical number _ for that trapping set . a set of variable nodes which if in error lead to a decoding failure is known as a _ failure set_. 
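conditions ( a ) and ( b ) of theorem [ thm1 ] can be tested mechanically for a candidate set of variable nodes . the sketch below ( again assuming a dense 0/1 parity - check matrix , and not taken from the paper ) reports whether the candidate is a trapping set together with the number of variable nodes and of odd - degree checks in its induced subgraph , i.e. the ( a , b ) labels used below .

```python
import numpy as np
from itertools import combinations

def is_trapping_set(H, T):
    """Test conditions (a) and (b) of the theorem for a candidate set T of
    variable nodes in a column-weight-three code with parity-check matrix H.
    Returns (is_ts, a, b): a = |T|, b = number of odd-degree checks in the
    induced subgraph."""
    T = sorted(T)
    deg_in_T = H[:, T].sum(axis=1)                 # degree of each check within the induced subgraph
    odd = np.nonzero(deg_in_T % 2 == 1)[0]         # odd-degree checks O
    even = np.nonzero((deg_in_T > 0) & (deg_in_T % 2 == 0))[0]   # even-degree checks E
    # (a) every variable node of T sees at least two checks of E and at most one check of O
    for v in T:
        nbrs = np.nonzero(H[:, v])[0]
        if np.intersect1d(nbrs, even).size < 2 or np.intersect1d(nbrs, odd).size > 1:
            return False, len(T), len(odd)
    # (b) no two checks of O share a neighbouring variable node outside T
    outside = np.setdiff1d(np.arange(H.shape[1]), T)
    for c1, c2 in combinations(odd, 2):
        if np.any(H[c1, outside] & H[c2, outside]):
            return False, len(T), len(odd)
    return True, len(T), len(odd)
```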
_ remarks _ 1 .to `` end up '' in a trapping set means that , after a possible finite number of iterations , the decoder will be in error , on at least one variable node from at every iteration .the notion of a failure set is more fundamental than a trapping set .however , from the definition , we can not derive necessary and sufficient conditions for a set of variable nodes to form a failure set .3 . a trapping set is a failure set .subsets of trapping sets can be failure sets . more specifically , for a trapping set of size , there exists at least one subset of size equal to the critical number which is a failure set .4 . the critical number of a trapping set is not fixed .it depends on the outside connections of checks in .however , the maximum value of critical number of a trapping set is .in this section , we establish the necessary and sufficient conditions for a column - weight - three code to correct three errors .we first illustrate three trapping sets and show that the critical number of these trapping sets is three thereby providing necessary condition to correct three errors .we then prove that avoiding structures isomorphic to these trapping sets in the tanner graph is sufficient to guarantee correction of three errors .[ trappingsets ] shows three subgraphs induced by different number of variable nodes .let us assume that in all these induced graphs , no two odd degree checks are connected to a variable node outside the graph . by the conditions of theorem [ thm1 ] , all these induced subgraphs are trapping sets . fig .[ sixcycle ] is a trapping set , fig .[ 53trappingset ] is a trapping set and fig .[ weight8codeword ] is a trapping set .note that a is isomorphic to a six cycle .and the trapping set is a codeword of weight eight .[ sixcycle ] [ 53trappingset ] [ weight8codeword ] the critical number for trapping set is three .there exist and trapping sets with critical number three .for the trapping set , the result follows from definition .we omit the proof for and trapping sets due to space considerations .detailed proofs can be found in the longer version of the paper . to correct three errors in a column - weight - three ldpc code by gallager a algorithm ,it is necessary to avoid trapping sets and and trapping sets with critical number three in its tanner graph. follows from the above discussion .we now state and prove the main theorem .if the tanner graph of a column - weight - three ldpc codes has girth eight and no set of variable nodes induces a subgraph isomorphic to trapping set or a subgraph isomorphic to trapping sets , then any three errors can be corrected using gallager a algorithm ._ sketch of proof : _ in a column - weight - three code three variable nodes can induce only one of the five subgraphs given in fig . [ errorconfigs ] and the proof proceeds by examining these subgraphs one at a time .the complete proof involves many arguments and here we just illustrate the methodology of the proof by considering two possible subgraphs .the proof for the remaining subgraphs appears in the longer version of the paper .* subgraph 1 : * since the girth of the code is eight , it has no six cycles and hence the configuration in fig .[ config1 ] is not possible .* subgraph 5 : * the three variable nodes in error induce a subgraph as shown in fig .[ config5 ] . in first half of first iteration and send incorrect messages . 
in the second half of the first iteration , and send incorrect messages to neighboring variables except to and . if there is no variable node which receives three incorrect messages , a valid codeword is reached after the first iteration . on the contrary , assume there exists a variable node , say , which receives three incorrect messages ( w.l.o.g . we can assume that is connected to and ) . also , there can not be two such variable nodes as that would introduce a six cycle or a graph isomorphic to trapping set . also , there can be at most three variable nodes which receive two incorrect messages , say , and . let the other checks connected to these variables be and respectively . in the first half of the second iteration , and send all correct messages , sends all incorrect messages , send incorrect messages to and respectively . in the second half of the second iteration , send incorrect messages to their neighbors except to . and send incorrect messages to neighboring variables except to and . there can not be a variable node which is connected to one check from and to one check from . also , there can not be a variable node which is connected to all the three checks and as this would introduce a graph isomorphic to trapping set . however , there can be at most two variable nodes which receive two incorrect messages from the checks and , say and . let the other checks connected to and be and . at the end of the second iteration , and receive one incorrect message , and receive two incorrect messages . in the first half of the third iteration , and send two incorrect messages each , and send one incorrect message each . in the second half of the third iteration , and send incorrect messages to their neighbors except to and . and send incorrect messages to their neighbors except to and . it can be shown that there can not exist a variable node which receives three incorrect messages . at the end of the third iteration , and receive all correct messages and no variable node receives all incorrect messages . so , if a decision is made , a valid codeword is reached and the decoder is successful . [ config1 ] [ config2 ] [ config3 ] [ config4 ] [ config5 ] _remark : _ it is worth noting that the complete proof is more involved than the proofs which use expansion arguments . however , the result is also more precise and holds for codes of small lengths . in this section , we describe a technique to construct codes which can correct three errors . codes capable of correcting a fixed number of errors show superior performance on the bsc at low values of the transition probability . this is because the slope of the fer curve is related to the minimum critical number : a code which can correct $k$ errors has minimum critical number at least $k+1$ and the slope of its fer curve is at least $k+1$ . we restate the arguments from to make this connection clear . let $\alpha$ be the transition probability of the bsc and $c_k$ be the number of configurations of received bits for which $k$ channel errors lead to a codeword ( frame ) error . the frame error rate ( fer ) is given by $$\mathrm{fer}(\alpha)=\sum_{k=i}^{n} c_k\,\alpha^{k}(1-\alpha)^{n-k},$$ where $i$ is the minimal number of channel errors that can lead to a decoding error ( the size of the instantons ) and $n$ is the length of the code . on a semilog scale the fer is given by the expression $$\log\mathrm{fer}(\alpha)=\log c_i+i\log\alpha+(n-i)\log(1-\alpha)+\log\Big(1+\sum_{k=i+1}^{n}\frac{c_k}{c_i}\,\alpha^{k-i}(1-\alpha)^{i-k}\Big).$$ in the limit $\alpha\to 0$ we note that $$\lim_{\alpha\to 0}\,(n-i)\log(1-\alpha)=0 \qquad\text{and}\qquad \lim_{\alpha\to 0}\,\log\Big(1+\sum_{k=i+1}^{n}\frac{c_k}{c_i}\,\alpha^{k-i}(1-\alpha)^{i-k}\Big)=0,$$ so the behavior of the fer curve for small $\alpha$ is dominated by the term $c_i\alpha^{i}$ : the $\log\mathrm{fer}$ vs $\log\alpha$ graph is close to a straight line with slope equal to the minimal critical number $i$ . if two codes $\mathcal{c}_1$ and $\mathcal{c}_2$ have minimum critical numbers $i_1$ and $i_2$ such that $i_1>i_2$ , then the code $\mathcal{c}_1$ will perform better than $\mathcal{c}_2$ for small enough $\alpha$ , independent of the number of trapping sets .
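for a concrete parity - check matrix , the guaranteed correction of three errors ( equivalently , a minimum critical number of at least four and hence an fer slope of at least four ) can be certified by brute force , reusing the gallager a decoder sketched in section [ section2 ] . the fragment below is an illustrative sketch rather than the authors' code ; it assumes transmission of the all - zero codeword , as in the failure analysis above , and is feasible only for moderate code lengths since every pattern of weight at most three must be tried .

```python
from itertools import combinations
import numpy as np

def corrects_all_errors_up_to(H, w=3, max_iter=50):
    """Exhaustively test every error pattern of weight <= w (all-zero codeword
    assumed sent over the BSC).  Relies on the gallager_a_decode sketch given
    earlier.  Returns (True, None) on success, otherwise (False, pattern)."""
    n = H.shape[1]
    for k in range(1, w + 1):
        for support in combinations(range(n), k):
            r = np.zeros(n, dtype=int)
            r[list(support)] = 1                       # k bit flips on the BSC
            est, ok = gallager_a_decode(H, r, max_iter)
            if not ok or est.any():                    # decoder failed or converged to the wrong word
                return False, support
    return True, None
```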
from the discussion in section [ section3 ] and section [ section4 ] , it is clear that for a code to have a fer curve with slope at least , the corresponding tanner graph should not contain the trapping sets shown in fig . [ trappingsets ] as subgraphs .we now describe a method to construct such codes .the method can be seen as a modification of the peg construction technique used by hu _the algorithm is as follows : note that checking for a graph isomorphic to trapping set at every step of code construction is computationally complex .since , the peg construction empirically gives good codes , it is unlikely that it introduces a weight - eight codeword .however , once the graph is grown fully , it can be checked for the presence of weight - eight codewords and these can be removed by swapping few edges . using the above algorithm ,a column - weight - three code with variable nodes and check nodes was constructed .the code has slight irregularity in check degree .there is one check node degree five and one check node with degree seven , but the majority of them have degree six .the code has rate 0.5 . in the algorithm, we restrict maximum check degree to seven .the performance of the code on bsc is compared with the peg code of same length .the peg code is empirically the best known code at that length on awgn channel .however , it has fourteen trapping sets .[ pegnewvsold ] shows the performance comparison of the two codes .as can be seen , the new code performs better than the original peg code at small values of .in this paper , we have given conditions for a column - weight - three code to correct three errors .since , the check degree does not play any part in the proof , it follows that the result is independent of code rate .a direction for future work is extending the analysis to more number of errors and higher column weight codes .preliminary investigation shows a lot of promise .the complexity of the proof , even in the case of three errors , suggests that solving the problem for an arbitrary number of errors will be a challenge . on the code construction front, we have shown that avoiding trapping sets with minimum critical number is the criterion to suppress error floor . however , the conditions for correcting more errors could be more complicated thereby increasing the complexity of code construction . deriving bounds on lengths and minimum distance of codes which avoid certain structures also need to be investigatedthis work is funded by nsf under grant ccf-0634969 and insic - ehdr program .d. j. c. mackay and m. j. postol , `` weaknesses of margulis and ramanujan margulis low - density parity - check codes , '' in _ proceedings of mfcsit2002 , galway _ , ser .electronic notes in theoretical computer science , vol .74.1em plus 0.5em minus 0.4em elsevier , 2003 .[ online ] .available : http://www.inference.phy.cam.ac.uk/mackay/abstracts/margulis.html p.o. vontobel and r. koetter , `` graph - cover decoding and finite - length analysis of message - passing iterative decoding of ldpc codes , '' 2005 .[ online ] .available : http://www.citebase.org/abstract?id=oai:arxiv.org:cs/0512078 s. k. chilappagari , s. sankaranarayanan , and b. vasic , `` error floors of ldpc codes on the binary symmetric channel , '' in _ international conference on communications _, vol . 3 , june 11 - 15 2006 , pp .10891094 .a. 
shokrollahi , `` an introduction to low - density parity - check codes , '' in _ theoretical aspects of computer science : advanced lectures_.1em plus 0.5em minus 0.4emnew york , ny , usa : springer - verlag new york , inc . , 2002 , pp .175197 .s. sankaranarayanan , s. k. chilappagari , r. radhakrishnan , and b. vasic , `` failures of the gallager b decoder : analysis and applications , '' in _ ucsd center for information theory and its applications inaugural workshop _ , feb 6 - 9 2006 .[ online ] .available : htpp//ita.5i.net / papers/160.pdf m. ivkovic , s. k. chilappagari , and b. vasic , `` eliminating trapping sets in low - density parity check codes using tanner graph lifting , '' in _ international symposium on information theory _ , june 24 - 29 2007 , pp .
in this paper , we provide necessary and sufficient conditions for a column - weight - three ldpc code to correct three errors when decoded using the gallager a algorithm . we then provide a construction technique which results in codes satisfying these conditions . we also assess the performance of the constructed codes numerically via simulation results .
this study is concerned with steady three - dimensional free - surface profiles that are caused by a disturbance to a free stream .these profiles are characterised by the distinctive kelvin ship wave patterns that are observed at the stern of a vessel or even behind a duck swimming in an otherwise still body of water .while free - surface flows of this type have ongoing practical applications to ship hull design , as we mention below , the structure of these patterns has sparked renewed interest in the physics literature , with observations that ships moving sufficiently fast may give rise to wake angles that decrease with ship speed , in apparent contradiction to the well - known kelvin angle of , which is derived from linear theory .in contrast to these approaches , our purpose here is to treat the fully _ nonlinear _ equations , and present algorithms for the accurate computation of nonlinear ship wave profiles .the mathematical analysis of ship wave patterns has a very long history , the overwhelming majority of which concerns linear theories .for example , for the classic problem of flow past a pressure distribution applied to the surface of the fluid , if the pressure is small enough then the kinematic and bernoulli boundary conditions on can be linearised onto the undisturbed plane .this framework is used to model the wave pattern caused by an air - cushioned vehicle such as a hovercraft or a high - speed `` flat ship '' with a small draft .another approach is to consider the ship wave pattern due to a thin ship . in this casethe no - flux conditions on the ship hull are linearised onto the centreplane , while the thinness of the ship is assumed to produce small - amplitude waves , so the free surface conditions are again linearised onto the plane . this set - up has obvious applications to ship hull design , especially for vessels with narrow hulls .another geometry of interest involves flow past a submerged object , such as a spheroid , or , in a fluid of finite - depth , a bottom topography .if the magnitude of the disturbance is again small , then the usual linearisation of the surface conditions applies .furthermore , one can apply the thin ship approximation to submerged bodies as well .flows past submerged bodies have applications to submarine design and detection , for example . in all of the linear formulations cited above, the linear problem of laplace s equation in a known domain can be solved in principle with fourier transforms .the velocity potential and free surface are then given as quadruple integrals that involve the havelock potential ( the green s function or fundamental solution ) .in practice , the challenge of evaluating the resulting singular integrals with rapidly oscillating integrands has lead to analytical approximations such as the method of stationary phase , although accurate numerical computations have been conducted more recently . 
of particular interest here , we note that the havelock potential is the velocity potential for the linearised flow past a single submerged point source singularity .thus we see that the problem of computing the wave pattern caused by turning on a submerged source in a uniform stream acts as a building block for all the other flows mentioned ( as an example , the thin - ship theory effectively states that the flow past a thin ship hull is equivalent to the flow past a distribution of point sources on the centreplane whose strength is proportional to the hull slope ) .our focus in this study is to compute nonlinear flows , for which the full nonlinear boundary conditions on the actual displaced free surface apply .nonlinear versions of the above problems have been considered by a number of authors ( see , for example ) .in particular , following the framework of forbes , the approach we are most interested in is to apply a boundary - integral technique that relies on green s second formula .the result is a singular integro - differential equation which holds on the unknown free surface .that is , the free - surface problem in three dimensions is reduced to a two - dimensional problem for the free surface and the velocity potential . to proceed numerically, the rough approach is to place a mesh of grid points over the truncated -plane , so that the integro - differential equation and bernoulli s equation can both be applied at each of the half - mesh points .a radiation - type condition for the four unknown functions is applied at each of the grid points upstream .newton s method is then used to solve the resulting nonlinear system of equations for the unknowns ( which are slopes , and the values of , on the upstream grid points ) . as discussed by forbes ,moderate efficiencies can be gained by exploiting the symmetry of the problem and using an inexact newton s method which re - uses the jacobian a number of times if possible .in more recent times , over a series of papers , pru , vanden - broeck and cooker have applied forbes formulation to solve fully three - dimensional nonlinear ship wave problems and have typically used meshes of between and grid points .the same authors applied the same formulation to study three - dimensional solitary waves with typical meshes of grid points , while forbes & hocking used a mesh of points when applying the method to a three - dimensional withdrawal problem . to put the method into context , other approaches for three - dimensional ship wave problems use a similar grid size ; for example , tuck & scullen apply a mesh of grid points with their rankine source method , while similar resolution is provided for a rankine source method in .the level of grid refinement demonstrated for the three - dimensional problems just mentioned is to be contrasted with the vast literature on two - dimensional flows .for example , by applying a boundary - integral method in two dimensions combined with a straight - forward newton approach , authors can easily use in excess of 1000 grid points over the two - dimensional surface or , in more recent times , even 2000 points .although most authors end up using fewer than 1000 points for their two - dimensional calculations , generally an accepted procedure is to continue to refine the mesh until the results are grid - independent , at least visually . 
turning our attention back to three - dimensional flows , with less than 100 points used along the -direction ,the resolution over each wavelength is simply not of a sufficient standard for any claims about grid - independence to be made .indeed , this is one of the key reasons why there has been little to no detailed study of the effect of high nonlinearity for three - dimensional ship wave problems . in the present paper ,we use a variation of the numerical scheme developed by forbes for the problem of flow past a submerged source singularity , and apply jacobian - free newton - krylov methods and exploit graphics processing unit ( gpu ) acceleration to drastically increase the grid refinement and decrease the run - time when compared with schemes published in the literature .we choose this particular geometric configuration since , as mentioned above , it can be thought of as the most fundamental flow type within the class that produces three - dimensional ship wave patterns .further , this is precisely the geometry that forbes used when presenting the boundary - integral technique described above .thus we have a direct correspondence and a bigger picture view of how far the community has progressed since that time .finally , all of our ideas should generalise for other configurations ( such as flows past pressure distributions ) , provided there is a linear problem that arises in the small disturbance regime . in the following section we formulate the problem of interest and provide a summary of the boundary - integral technique developed by forbes and pru and vanden - broeck .the numerical scheme is described in section [ sec : numerical ] , which leads to a nonlinear system of equations where is the vector of unknowns of length .the damped newton s method approach leads to the iteration where is the iterate in the sequence and the damping parameter $ ] is chosen such that at every iterate .the newton step satisfies where is the jacobian matrix .the integral nature of our governing equations results in all of the entries in contributing to the evaluation of each component of that corresponds to enforcing the integral equation , which means that the lower - half of the jacobian is fully dense .this density has been a significant factor in limiting the number of grid points used in previously published numerical simulations .a key aspect of our approach is the use of a jacobian - free newton - krylov method to solve the system ( [ eq : nonlinearsys ] ) .a jacobian - free newton - krylov method requires the action of the jacobian only in the form of jacobian - vector products , which can be approximated using difference quotients without ever forming the jacobian itself . in practice , the underlying krylov subspace iterativesolver requires preconditioning in order to achieve a satisfactory rate of convergence , meaning the overall method is not typically fully matrix - free ; however , for preconditioning purposes , an approximation of the jacobian is all that is required , and this is where significant savings can be made .while jacobian - free newton - krylov methods are most commonly associated with problems for which the jacobian matrices are sparse , they have been used successfully in a number of applications that give rise to dense jacobian matrices . 
in each of these applications , a sparse approximation of the jacobian was used in constructing the preconditioner .we take the same approach in this work .the type of approximation we find to be the most effective involves a banded structure , with its nonzero entries coming from the linearised problem for a havelock source mentioned above .we emphasise that this approximation is used only for preconditioning purposes ; the action of the dense jacobian is still felt throughout the newton solver , which distinguishes our approach from the inexact method of forbes and others . in section [ sec :results ] we present our results .we choose to present most of our results for a particular set of parameter values , which includes the same froude number as used by forbes , and a moderately large value of the dimensionless strength of the submerged source . while forbes showed results computed with a mesh of grid points in 1989 , we are able to easily use a mesh on a modern desktop pc , computed in under 75 minutes . by utilising graphics processing unit ( gpu ) acceleration on a more powerful workstation ,the same solution was computed in roughly 3.5 minutes .furthermore , with this technology we are able to significantly improve upon the resolution , and generate results for a mesh ( in under 2 hours ) .this sort of resolution is important for three - dimensional ship wave problems , as it provides opportunities to explore the effect that nonlinearity has on the flow field in the same way as has been done in numerous instances for two - dimensional flows .finally , we close the paper in section [ sec : discussion ] with our discussion , including directions as to where our work can be applied .we consider the irrotational flow of an inviscid , incompressible fluid of infinite depth , bounded above by a free surface , upon which gravity is acting .the effects of surface tension are ignored .suppose that initially there is a free stream of fluid travelling with uniform speed in the positive -direction , and that a source singularity of strength is introduced at a distance below the surface .the disturbance caused by the source will lead to transient waves being generated on the free - surface .we are interested in the steady - state problem that arises in the long - time limit of this flow .the problem is nondimensionalised by scaling all lengths with respect to and all speeds with respect to . by labelling the free - surface ,the dimensionless problem is to solve laplace s equation for the velocity potential : except at the source singularity itself , whose dimensionless location is at .the appropriate limiting behaviour is where is the dimensionless source strength . on the free surfacethere are the kinematic and dynamic boundary conditions being satisfied , where the second of two dimensionless parameters in the problem is the depth - based froude number finally , the flow will approach the free stream both far upstream ( the radiation condition ) and infinitely far below the free surface , providing the final two conditions the governing equation ( [ eqn : nondimlap ] ) subject to ( [ eqn : nondimsource])-([eqn : nondimfar ] ) make up a nonlinear free - surface problem with no known analytical solution . in order to solve ( [ eqn : nondimlap])-([eqn : nondimfar ] ) numerically , we first reformulate the problem in terms of an integral equation using green s second formula .the full derivation is provided in forbes , while very similar approaches are outlined in a variety of other papers . 
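before turning to the integral reformulation , it is worth recording the dimensionless system ( [ eqn : nondimlap])-([eqn : nondimfar ] ) that has just been described in words . the latex fragment below is a sketch of the standard formulation for this configuration ; the sign of the source term and whether the strength $\epsilon$ absorbs a factor of $4\pi$ are normalisation assumptions , and the precise statement is that of forbes .

```latex
% sketch of the dimensionless formulation (conventions assumed, not verbatim from the paper)
\nabla^2\Phi = 0
  \quad \text{in the fluid } z<\zeta(x,y),\ (x,y,z)\neq(0,0,-1),            % (eqn:nondimlap)
\Phi \to x + \frac{\epsilon}{\sqrt{x^2+y^2+(z+1)^2}}
  \quad \text{as } (x,y,z)\to(0,0,-1),                                      % (eqn:nondimsource)
\Phi_z = \zeta_x\Phi_x + \zeta_y\Phi_y
  \quad \text{on } z=\zeta(x,y),                                            % (eqn:nondimkin)
\tfrac12\bigl(\Phi_x^2+\Phi_y^2+\Phi_z^2\bigr) + \frac{\zeta}{F^2} = \tfrac12
  \quad \text{on } z=\zeta(x,y),                                            % (eqn:nondimdyn)
F = \frac{U}{\sqrt{gL}}, \qquad
(\Phi,\zeta)\to(x,0)\ \text{as } x\to-\infty, \qquad
\nabla\Phi\to(1,0,0)\ \text{as } z\to-\infty.                               % (eqn:nondimup)-(eqn:nondimfar)
```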
by setting ,the final boundary - integral equation is which holds for any point in the -plane . here and are the kernel functions the integral equation ( [ eqn : integroeqn ] ) identically satisfies laplace s equation ( [ eqn : nondimlap ] ) and the kinematic condition ( [ eqn : nondimkin ] ) , as well as the limiting condition ( [ eqn : nondimsource ] ) and the far - field conditions ( [ eqn : nondimup])-([eqn : nondimfar ] ) .thus we are left to solve ( [ eqn : integroeqn ] ) and the dynamic condition ( [ eqn : nondimdyn ] ) .it proves convenient to rewrite ( [ eqn : nondimdyn ] ) with the help of ( [ eqn : nondimkin ] ) to be while our focus is on generating numerical solutions to ( [ eqn : nondimlap])-([eqn : nondimfar ] ) , it will prove instructive to note the linearised problem which arises in the weak source strength limit .the problem is formulated by writing , , and considering the formal limit . as a result, the linear problem becomes subject to the linearised kinematic and dynamic conditions the near - source behaviour ( [ eqn : nondimsource ] ) and the far - field conditions ( [ eqn : nondimup])-([eqn : nondimfar ] ) remain the same .as discussed in the introduction , the solution to this linear problem can be found using fourier transforms ; however , for our purposes we shall pursue the equivalent boundary - integral approach as that used for the nonlinear problem .this time if we set , the application of green s second formula gives where again , the integral equation ( [ eqn : integroeqnlinear ] ) identically satisfies laplace s equation ( [ eqn : nondimlaplinear ] ) , the kinematic condition ( [ eqn : nondimkinlinear ] ) , the far - field conditions ( [ eqn : nondimup])-([eqn : nondimfar ] ) and the near - source condition ( [ eqn : nondimsource ] ) .for the discretisation of the nonlinear boundary - integral equation ( [ eqn : integroeqn ] ) , we use a slight variant of the method outlined in pru and vanden - broeck , which is based on the original approach of forbes .this involves laying a regular mesh of nodes on the free surface with spacings of and in the and directions , respectively . 
for a given and , we shall refer to the mesh as being an mesh .the free - surface position and the velocity potential are represented by discrete values and at the points .we define the vector of unknowns to be ^t , \label{eq : unknowns}\end{aligned}\ ] ] comprising the -derivatives of the functions and at the free - surface mesh points , together with the values of and at the upstream boundary of the truncated domain .the values of these unknowns are related via nonlinear equations , of the form ( [ eq : nonlinearsys ] ) , which we now derive .given the elements of the vector of unknowns ( [ eq : unknowns ] ) , the remaining values of are obtained by trapezoidal - rule integration using the values of : the values of are then computed by fitting a cubic spline through the points for .values of and at each grid point are similarly computed using .we must now enforce the integro - differential equation ( [ eqn : integroeqn ] ) , which will be evaluated on the half - mesh points using two - point interpolation .the domain is truncated to the rectangle .the singularity in the second integral of ( [ eqn : integroeqn ] ) is removed by the addition and subtraction of the term where with the second integral of the equation ( [ eqn : integroeqn ] ) becomes where the integral now contains the singularity ; it can be evaluated exactly in terms of logarithms .the integrals in the approximation to equation ( [ eqn : integroeqn ] ) are discretised using the trapezoidal rule and then evaluated for all half - mesh points .this results in nonlinear algebraic equations for the unknowns in the vector .an additional equations are given by evaluating the free surface condition ( [ eqn : freesurfcond ] ) at the half mesh points .the final equations are provided to enforce the far - field condition ( [ eqn : nondimup ] ) on the relevant boundary of the truncated domain by applying the upstream radiation condition using the approach outlined by scullen .the idea here is to enforce an equation of the form along the boundary for the four functions , , and .the value of represents how fast the functions decay to zero upstream , and in our calculation was taken to be ( larger values of were found to amplify the small spurious upstream waves mentioned below ) .this method for applying the radiation condition gives us the equations for , where second derivatives are computed by a forward difference approximation on the first derivative .we now have equations for our vector of unknowns ( [ eq : unknowns ] ) . in order to optimise our schemewe have ordered these equations very carefully .this ordering is explained in [ appendixa ] .this numerical scheme has two main sources of error .the first is truncation error introduced when approximating the infinite domain of integration with a finite domain .this truncation has the potential to lead to errors if the chosen upstream truncation point ( ) is too close to the source , as the upstream radiation condition ( [ eqn : numupradiation ] ) may no longer be accurately enforced . indeed, truncating the domain upstream appears to generate very small nonphysical waves on the surface , as discussed later . 
truncating the domain downstream ( at )may also introduce significant errors as the amplitude of the wavetrain decays slowly with space , and contribution to the integrals from the truncated waves is nonzero .the second main source of error is from the discretisation of the integrals .both the mesh spacing and the chosen integration weighting scheme will have an effect on the accuracy of the final result .the system ( [ eq : nonlinearsys ] ) is solved with a jacobian - free newton - krylov method . at the outer , nonlinear level , this is simply the damped newton iteration ( [ eq : newtonstep0 ] ) , with chosen via a simple linesearch to ensure a sufficient decrease in the nonlinear residual is obtained with each iteration . at the inner , linear level , the system ( [ eq : newtonstep ] ) is solved using the iterative generalised minimum residual algorithm with right preconditioning .after iterations of this algorithm , the approximate solution for the newton correction is found by projecting obliquely onto the preconditioned krylov subspace where we are now using the notation , .the matrix is the preconditioner matrix a sparse approximation to which is discussed in more detail in the next subsection .its function is to reduce the dimension of the krylov subspace required to obtain a sufficiently accurate solution for .krylov subspace methods are very attractive as linear solvers in the context of nonlinear newton iteration , because they do not require explicit formation of the jacobian matrix .indeed , only the action of the jacobian matrix in the form of jacobian - vector products is required to assemble a basis for the preconditioned krylov subspace .these jacobian - vector products can be approximated without needing to form by using first order difference quotients : where represents an arbitrary vector used in building the krylov subspace , and is a suitably - chosen shift . since the newton correction is solved for only approximately , and the action of the jacobian in computing this solution is itself only approximated , we are left with an inexact newton method , which exhibits superlinear , rather than quadratic , convergence .the reduction in the convergence rate is of little practical consequence , given the enormous performance gains realised by removing the burden of forming the ( dense ) jacobian matrix .furthermore , only solving for the newton correction approximately can actually improve performance in the early stages of the nonlinear iteration , by not wasting operations computing an extremely accurate value of the newton correction which , even if it were computed exactly , would only reduce the nonlinear residual by so much . for most values of the parameters and , it proves sufficient to use a flat surface as the initial guess in the newton iteration , which corresponds to : for and .another approach is to use the exact solution to the linear problem outlined in section [ sec : linearproblem ] ( given in , for example ) .however , for highly nonlinear solutions with large values of , a further alternative approach is to apply a bootstrapping process in which a solution is computed using for a moderate value of , and then this solution is used as an initial guess for a slightly larger , and so on . in forming the preconditioner matrix ,the goal is to construct an approximation to the jacobian that is cheap to form and to factorise , such that the spectrum of the preconditioned jacobian exhibits a clustering of eigenvalues . 
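the complete inner - outer structure just described is compact enough to sketch . the python fragment below is an illustration , not the kinsol - based implementation used in the paper ; the tolerances and the difference - quotient scaling are assumptions . it combines the damped newton outer iteration with a matrix - free gmres inner solve in which the jacobian - vector product is approximated by a first - order difference quotient . note that scipy's gmres applies the preconditioner on the left , whereas the paper uses right preconditioning ; the principle is unchanged .

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_solve(F, x0, apply_prec, newton_tol=1e-9, gmres_tol=1e-3,
               max_newton=40, shift=1e-7):
    """Jacobian-free Newton-Krylov iteration (illustrative sketch).
    F(x) returns the residual vector of the nonlinear system;
    apply_prec(r) applies the (block-banded) preconditioner inverse."""
    x = x0.copy()
    n = x.size
    for _ in range(max_newton):
        Fx = F(x)
        res = np.linalg.norm(Fx)
        if res < newton_tol:
            break
        def Jv(v, x=x, Fx=Fx):
            # matrix-free Jacobian-vector product: J(x) v ~ [F(x + h v) - F(x)] / h
            nv = np.linalg.norm(v)
            if nv == 0.0:
                return np.zeros(n)
            h = shift * max(1.0, np.linalg.norm(x)) / nv   # one common scaling choice
            return (F(x + h * v) - Fx) / h
        J = LinearOperator((n, n), matvec=Jv, dtype=float)
        M = LinearOperator((n, n), matvec=apply_prec, dtype=float)
        d, _ = gmres(J, -Fx, M=M, rtol=gmres_tol)          # older SciPy: tol=gmres_tol
        tau = 1.0                                          # damping parameter in (0, 1]
        while tau > 1e-4 and np.linalg.norm(F(x + tau * d)) >= res:
            tau *= 0.5                                     # backtrack until the residual decreases
        x = x + tau * d
    return x
```

the callable apply_prec is exactly what the block - banded preconditioner discussed next provides .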
a common starting point in building sucha preconditioner is to consider a matrix constructed from the same problem under simplified physics . in the present context, this is achieved by applying our numerical scheme to the linearised governing equations which apply formally in the limit .these equations make up the well - studied linear problem of computing the havelock potential for flow past a submerged point source , as discussed in the introduction and section [ sec : linearproblem ] .the numerical discretisation of the integrals in ( [ eqn : integroeqnlinear ] ) allows for easy differentiation by hand , so that all elements of the linear jacobian can be calculated exactly , requiring considerably less computational time .the details are included in [ appendixb ] .+ in figure [ fig : twojacvis ] , the jacobian matrix for the full nonlinear problem ( ( a ) `` nonlinear jacobian '' ) for with parameters and is compared to its counterpart for the linear problem ( ( b ) `` linear jacobian '' ) by means of the magnitude of their entries .the comparison confirms that , although there are slight differences in the magnitude of these entries ( in particular , the grey triangular regions near the diagonal in the upper - left submatrix in figure [ fig :twojacvis](a ) do not appear in figure [ fig : twojacvis](b ) ) , the general structure of the two matrices is the same .the eigenvalue spectra of the nonlinear jacobian before and after preconditioning with the linear jacobian are exhibited in figure [ fig : twojaceig ] .the figure reveals that the application of the preconditioner has resulted in a tight clustering of the eigenvalues around unity , confirming its effectiveness .while the linear jacobian is significantly cheaper to compute than its nonlinear counterpart , its lower - right submatrix is nonetheless fully dense , which would ultimately limit the number of mesh nodes that could be used in the discretisation due to storage and factorisation considerations .therefore , we focus attention on the lower - right submatrix of the two jacobians ( figure [ fig : twojacvis ] ( c ) , ( d ) ) , which reveals that the magnitudes of the entries decay with distance from the main block diagonal .this observation suggests using a block - banded approximation to this portion of the matrix for our preconditioner , whereby we keep only the nonzero entries of the lower - right submatrix of the linear jacobian within a stated block bandwidth , with block sizes . by varying this bandwidth, the sparsity of the preconditioner can be controlled such that the storage and factorisation costs are manageable .the method of storing , factorising and applying the preconditioner is outlined in [ appendixc ] . in figure[ fig : threebbandeigenplots ] we illustrate that even with block bandwidth ( that is , a block diagonal approximation ) , the linear jacobian still functions effectively as a preconditioner , providing the required eigenvalue clustering .the tightness of this clustering can be further improved by increasing the bandwidth , as the results for and confirm .we have computed solutions using both a standard desktop computer with all code written in matlab , and using a more powerful workstation with gpu accelerator using a mixture of matlab and cuda ( compute unified device architecture ) code . in all cases the kinsol implementation of the jacobian - free newton - krylov method was used . 
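before moving to the results , as an illustration of how the block - banded preconditioner described above can be assembled , the sketch below keeps only the entries of a cheaply formed linear - problem jacobian that lie within a prescribed block bandwidth of the block diagonal , and factorises the result once per newton solve . for brevity the whole matrix is banded here , whereas in the paper only the dense lower - right submatrix is treated this way ; the names and the dense input format are assumptions .

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def block_banded_preconditioner(J_lin, block, bandwidth):
    """Keep only the entries of the linear-problem Jacobian J_lin (dense array)
    within `bandwidth` blocks of the block diagonal, with square blocks of
    size `block`.  Returns a callable that applies the factorised
    preconditioner inverse to a residual vector."""
    n = J_lin.shape[0]
    keep = np.zeros((n, n), dtype=bool)
    nblocks = -(-n // block)                       # ceiling division handles a ragged last block
    for bi in range(nblocks):
        r0, r1 = bi * block, min((bi + 1) * block, n)
        c0 = max(0, bi - bandwidth) * block
        c1 = min(min(nblocks, bi + bandwidth + 1) * block, n)
        keep[r0:r1, c0:c1] = True
    P = sp.csc_matrix(np.where(keep, J_lin, 0.0))
    lu = splu(P)                                   # sparse LU, reused for every GMRES application
    return lambda r: lu.solve(r)
```

the returned callable can be passed directly as apply_prec to the jacobian - free newton - krylov sketch given earlier .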
in the following ,recall that an mesh involves grid points in the direction and grid points in the direction .we present results obtained by solving our system of nonlinear equations on a typical desktop computer for a contemporary mesh ( , , ) as well as for a significantly finer mesh ( , , ) .the parameter values we focus on are and , which are representative of a moderately small froude number and a moderately nonlinear flow regime .for the contemporary mesh , the resulting problem size is sufficiently small that the full preconditioner ( without taking the banded approximation ) can be formed and factorised without difficulty on today s desktop machines . using the jacobian - free newton - krylov method with this dense preconditioner , the solution was obtained in under 26 seconds . calculating numerical solutions likethis one in such a small time is useful for exploring the effect of different parameter values on the free surface ; however , as can be seen in figure [ fig:4 ] , the resulting surface is rather coarse , and does not reveal much detail of the wave pattern . by using the block - banded preconditioner with our jacobian - free newton - krylov method ,we are able to compute the solution on the much finer mesh ( ) in under 75 minutes on the desktop computer . a block bandwidth of is used for the jacobian , which means it essentially fills all of the available system memory .this level of mesh refinement represents a comfortable size of problem for the given machine , and produces a free surface profile that is significantly smoother than the one computed with a mesh ( again , see figure [ fig:4 ] ) . with a modest degree of further refinement , the problem may still be solved on the desktop computer , however the effectiveness of the preconditioner is reduced owing to the limited number of bands that can be accommodated in memory . by coding the nonlinear discretisation in cuda and executing each evaluation ( hereafter a `` function evaluation '' ) on the gpu, we were able to significantly accelerate the computations as demonstrated in table [ tab : funcevaltimes ] .here we are experiencing an approximately 25 times speed up in function evaluation times over the multicore matlab code for the larger meshes .this leads to a reduced overall runtime , for example , calculating the solution on the same mesh with gpu acceleration took only 3.5 minutes .this dramatic reduction in computational time coupled with the extra system memory available on the workstation allowed us to produce solutions on much finer meshes in a practical amount of time .our most detailed solution using a mesh with and , was computed in 1.5 hours .the corresponding free surface profile is illustrated in figure [ fig:5 ] ..a comparison of the function evaluation times using multicore matlab on the desktop pc and the workstation with and without gpu acceleration for different meshes with parameters and .time is in seconds . [cols="^,^,^,^ " , ] + as mentioned in the introduction , a common procedure in the free - surface literature is to explore grid independence by computing solutions on a given truncated domain with more grid points ( twice as many , say ) and visually comparing the free surface profiles to test whether the grid refinement has not significantly altered the solution .similarly , authors often keep the spatial increment the same and increase the size of the truncated domain ( make it twice as long , say ) , again to test whether the solution changes . 
for steady two - dimensional flows ,this exercise is reasonably straight forward ( in principle ) , as the free surface profile is a curve .examples of these tests for two - dimensional problems that involve a downstream wavetrain can be found in , all of which were published at a time when demonstrating grid independence was still a difficult issue .more recently , equivalent tests of grid independence have been attempted for three - dimensional flows past disturbances . in this case , as the wave pattern is a two - dimensional surface , the domain was divided in half , with one part showing a solution computed with a particular grid , and the other part with a solution computed with a more refined or extended grid .such a comparison is also given in figure [ fig:4 ] .what we can see from figure [ fig:4 ] is that the solution computed on the mesh is clearly not grid independent , as the more refined surface corresponding to a mesh appears to be different , even on this larger scale .we have conducted the same comparison exercise for a variety of parameter sets and meshes for our problem , and conclude that the number of grid points used presently in the literature ( for a range of very similar problems ) is not nearly enough for authors to claim their solutions are grid independent .similarly , noting that pru and coauthors call these visual comparisons ` accuracy checks ' , we would not say that solutions computed with contemporary meshes are accurate .of course it is understandable that these coarse meshes have been used in published studies , given the dense nature of the nonlinear jacobian , the lack of a jacobian - free approach such as we are using here , and computational power .we hope that the algorithms presented here will allow much more accurate computations in the future .another obvious approach for observing the degree of grid independence is to plot the centreline of the free surface ( ) for a number of difference meshes , as shown in figure [ fig : centrelineep1fr7all ] .in addition to the and meshes used in figure [ fig:4 ] , we have also included the centreline plot for the mesh used in figure [ fig:5 ] . recall that this latter mesh was implemented a workstation with gpu acceleration .we see there is quite good agreement between the solutions for the and meshes , at least over the first four or five wavelengths .further downstream the amplitudes of the waves appear to agree well , but the actual wavelength is slightly out .this comparison suggests that while we can not yet claim our solutions will not be affected by further grid refinement , we argue that meshes of the order of and are required for solutions to begin to appear independent of the mesh spacing and truncation .and computed on three different grids .the dashed curve has 91 nodes in the -direction with .the surface made up by solid circles has 361 nodes with .note that each circle here represents an actual grid point ( the illusion of uneven grid spacing is due to the vastly different scales in the and directions ) .the solid curve has 721 nodes with .the inset shows a close up of this comparison near . 
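the visual comparisons just described can be supplemented by a simple quantitative check . the fragment below is an illustrative measure , not one used in the paper : it interpolates the coarse - mesh centreline onto the fine - mesh abscissae over their common extent and reports the maximum difference relative to the largest wave amplitude .

```python
import numpy as np
from scipy.interpolate import interp1d

def centreline_difference(x_coarse, zeta_coarse, x_fine, zeta_fine):
    """Relative maximum difference between two centreline profiles zeta(x, 0)
    computed on different meshes (a rough grid-convergence indicator)."""
    lo, hi = max(x_coarse[0], x_fine[0]), min(x_coarse[-1], x_fine[-1])
    mask = (x_fine >= lo) & (x_fine <= hi)
    coarse_on_fine = interp1d(x_coarse, zeta_coarse, kind='cubic')(x_fine[mask])
    diff = np.max(np.abs(coarse_on_fine - zeta_fine[mask]))
    return diff / np.max(np.abs(zeta_fine[mask]))
```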
]it is worth making some comments about the truncation errors we discussed at the end of section [ sec : numerical ] .first , we note that truncating the domain upstream at has the effect of introducing very small spurious ( almost two - dimensional ) waves throughout the domain .these may be seen in figures [ fig:4 ] and [ fig:5 ] , both ahead of the source and also outside of the kelvin wedge .this numerical artefact has been an issue for two - dimensional flows for many years , and the associated spurious waves have been eliminated by employing a variety of upstream boundary conditions .a detailed discussion for two - dimensional flows is given by grandison & vanden - broeck . in our scheme , the enforcement of the radiation condition via ( [ eqn : numupradiation ] ) has the effect of dramatically reducing the size of these spurious waves ( the coefficient is chosen based on these observations ) .this issue deserves further attention .further , we note any truncation of the domain at will introduce errors in the system , as the contribution from the wavetrain to the integrals for will be ignored .visually , we can see in figure [ fig : centrelineep1fr7all ] that the final wavelength of the free surface seems affected by this truncation .again , strategies have been developed to deal with these errors in much simpler two - dimensional problems , and similar work is needed for the types of three - dimensional flows considered here .the free - surface profiles presented in figures [ fig:4]-[fig:5 ] are computed for the moderately small value of the froude number , . in this regime ,the transverse waves , which run perpendicular to the flow direction , are prominent .these are the waves we observe in the centreline plot in figure [ fig : centrelineep1fr7all ] .the other type of waves are the divergent waves , whose crests appear to form ridges pointing diagonally away from the source .the amplitude of the transverse waves decays as increases , leaving the divergent waves to dominate at larger distances away from the source .it is the divergent wave pattern that characterises the well - known v - shaped kelvin wake .a free - surface profile computed for and is presented in figure [ fig:7 ] . for this moderately large froude number, we see that the divergent waves dominate closer to the source , making it more difficult to view the transverse waves .note that the wavelength of the transverse waves increases with froude number , which means we need to truncate further downstream for larger froude numbers in order to capture the same amount of detail .the solution in this figure was computed using a mesh of on a workstation with gpu acceleration . with this resolution, we can see fine details of the surface in part ( a ) of the figure .we have considered the fully nonlinear problem of the free - surface flow past a submerged point source . following forbes , we apply a boundary - integral technique based on green s second formula to derive a singular integro - differential equation for the velocity potential and the shape of the surface .this equation , together with bernoulli s equation , is discretised and satisfied at midpoints on a two - dimensional mesh .the resulting system of nonlinear algebraic equations is solved using newton s method . 
in the past, numerical approaches of this sort were hindered by the fact that the jacobian matrix in newton s method is dense .our contribution is to apply a jacobian - free newton - krylov method to solve the nonlinear system , thus avoiding the need to ever form or factorise the jacobian . as such, we are able to use much finer meshes than used in the past by other authors .further , in order to ensure efficiency , we use a banded matrix preconditioner whose nonzero entries come from the linearised problem . finally , we code the function to run efficiently on a gpu , to greatly speed up function evaluation times .the resolution of the mesh we use is now essentially up to the standard of many two - dimensional schemes published in the literature .as discussed in the introduction , the problem of flow past a source singularity can be thought of as a building block for more complicated configurations such as flow due to a steadily moving applied pressure distribution ( like a hovercraft ) , a thin ship hull , or a submerged body ( like a submarine ) .the next stage in this research is to adapt the present techniques for these more complicated flows .we expect that the key ideas developed in this paper will generalise in a straightforward manner , provided there is a natural linearised version of the problem at hand . with the accuracy and efficiency of our approach ,one may be able to devise appropriate optimisation schemes for designing ship hulls with minimal resistance , and so on .our approach should also translate to time - dependent problems , such as the study by pru et al . , who apply a similar boundary integral approach , discretised with meshes , to solve for time - dependent flows past a pressure distribution ( see for a thorough discussion of further issues that arise in time - dependent problems ) .we leave all this work for further study . with the degree of accuracyour numerical schemes allow , we are now in a position to explore the effect of strong nonlinearity on the wave pattern , as has been done extensively in the two - dimensional analogue . for example , as the nonlinearity in a steady ship wave problem increases ( for our problem this tendency comes from increasing ) , the waves will become more nonlinear in shape , perhaps with sharper crests .given the flow is steady , we expect that the waves will ultimately `` break '' when the most nonlinear wave reaches a limiting configuration ( this occurs when the highest wave crest reaches the dimensionless height ) .while this general behaviour is well understood for two - dimensional waves , with studies of highly nonlinear waves producing highly accurate calculations of near - breaking waves ( the breaking point corresponding to the stokes limiting configuration with a angle at the wave crest ) , the highly nonlinear regime for fully three - dimensional problems is relatively unexplored .indeed , the extra dimension makes the pattern structure much more complicated , and so it is not always obvious what part of the domain will break first . as such , the challenge of generalising the two - dimensional results to three dimensions remains .swm acknowledges the support of the australian research council via the discovery project dp140100933 .the authors thank prof .kevin burrage for the use of high performance computing facilities and acknowledge further computational resources and support provided by the high performance computing and research support ( hpc ) group at queensland university of technology .99 p. n. 
tuck , j. i. collins , and w. h. wells . on ship wave patterns and their spectra ._ j. ship res . _ , 15:1121 , 1971 .e. o. tuck and d. c. scullen .a comparison of linear and nonlinear computations of waves made by slender submerged bodies ._ , 42:255264 , 2002 .e. o. tuck , d. c. scullen , and l. lazauskas .ship - wave patterns in the spirit of michell . in _iutam symposium on free surface flows _ , pages 311318 .springer .f. ursell . on kelvin s ship - wave pattern ._ j. fluid mech ._ , 8:418431 , 1960 .s. l. wade , b. j. binder , t. w. mattner , and j. p. denier . on the free - surface flow ofvery steep forced solitary waves . _j. fluid mech ._ , 739:121 , 2014 .j. m. williams . limiting gravity waves in water of finite depth .a _ , 302:139188 , 1981 .y. zhang and s. zhu .open channel flow past a bottom obstruction ._ , 30:487499 , 1996 .the left - hand side of is a vector valued function made up of six different functions taken from the numerical scheme .the free surface condition ( [ eqn : freesurfcond ] ) and boundary integral equation ( [ eqn : integroeqn ] ) evaluated at the half mesh points are denoted and , respectively , for and .we also have the radiation conditions ( [ eqn : numupradiation ] ) denoted : for .we order these equations as ^t,\end{aligned}\ ] ] which results in the jacobian structure illustrated in figure [ fig : twojacvis ] .to construct the linear jacobian , we need to apply the same numerical discretisation outlined in section [ sec : numerical ] to the linear problem derived in section [ sec : linearproblem ] . the singularity in ( [ eqn : integroeqnlinear ] ) is dealt with in the same way as with the nonlinear problem , by adding and subtracting the term ( [ eqn : i2dashdash ] ) , except that now , which simplifies the details .the linear system then becomes {3_{i , j , k,\ell } } -\zeta^*_{x_{i , j}}i,\\ \textbf{e}_{3_{\ell}}&=x_1\phi_{x_{1,\ell}}+n\phi_{1,\ell}-x_1(n+1),\\ \textbf{e}_{4_{\ell}}&=\frac{x_1}{\delta x}\phi_{x_{2,\ell}}+(n-\frac{x_1}{\delta x})\phi_{x_{1,\ell}}-n,\\ \textbf{e}_{5_{\ell}}&=x_1\zeta_{x_{1,\ell}}+n\zeta_{1,\ell},\\ \textbf{e}_{6_{\ell}}&=\frac{x_1}{\delta x}\zeta_{x_{2,\ell}}+(n-\frac{x_1}{\delta x})\zeta_{x_{1,\ell } } , \end{aligned}\label{eqn : systemequation}\ ] ] for , where * e * is constructed from these equations and is given by and is the weighting function for numerical integration .as before , can be evaluated exactly in terms of logarithms .the next step is to determine how , , and depend on the unknowns in ( [ eq : unknowns ] ) .we first expand the trapezoidal - rule integration of in ( [ eqn : zetaapprox ] ) which gives similarly , we expand as this result immediately provides the values for using two point interpolation substituting this expression and its equivalent in and into ( [ eqn : systemequation ] ) gives the resulting linear system in terms of the unknowns , +\frac{\epsilon}{\left({x^*_k}^2+{y^*_\ell}^2 + 1 \right)^\frac{1}{2}}\notag\\ & -\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{m } w(i , j)\left[\zeta_{x_{i , j } } -\frac{1}{2}(\zeta_{x_{k,\ell}}+\zeta_{x_{k+1,\ell}})\right]k_{3_{i , j , k,\ell}}-\frac{1}{2}(\zeta_{x_{k,\ell}}+\zeta_{x_{k+1,\ell}})i,\notag\\ \textbf{e}_{3_{\ell}}&=x_1\phi_{x_{1,\ell}}+n\phi_{1,\ell}-x_1(n+1),\label{eqn : systemequationsinzetax}\\ \textbf{e}_{4_{\ell}}&=\frac{x_1}{\delta x}\phi_{x_{2,\ell}}+(n-\frac{x_1}{\delta x})\phi_{x_{1,\ell}}-n,\notag\\ \textbf{e}_{5_{\ell}}&=x_1\zeta_{x_{1,\ell}}+n\zeta_{1,\ell},\notag\\ \textbf{e}_{6_{\ell}}&=\frac{x_1}{\delta 
x}\zeta_{x_{2,\ell}}+(n-\frac{x_1}{\delta x})\zeta_{x_{1,\ell}},\notag\end{aligned}\ ] ] for , . finally , to calculate the linear jacobian , the equations in ( [ eqn : systemequationsinzetax ] ) can be differentiated with respect to , , and to give : for , and , .the derivatives for can be easily calculated and will not be explicitly written here .our preconditioner is formed by ordering these jacobian entries in the manner described in [ appendixa ] .[ sec : invpre ] as shown in figure [ fig : twojacvis ] , the preconditioner can be divided up into four equal submatrices of size .this preconditioner can then be factorised using the block decomposition , = \left[\begin{matrix } i & 0\\ ca^{-1 } & i\\ \end{matrix}\right ] \left[\begin{matrix } a & 0\\ 0 & d - ca^{-1}b\\ \end{matrix}\right ] \left[\begin{matrix } i & a^{-1}b\\ 0 & i\\ \end{matrix}\right],\ ] ] where , , , and are primarily given by equations ( [ eq : prea ] ) , ( [ eq : preb ] ) , ( [ eq : prec ] ) and ( [ eq : pred ] ) , respectively .thus we can solve the system by performing the following operations , = \left[\begin{matrix } \textbf{b}_1\\ \textbf{b}_2-ca^{-1}\textbf{b}_1 \end{matrix}\right],\quad \left[\begin{matrix } \textbf{s}_1\\ \textbf{s}_2 \end{matrix}\right ] = \left[\begin{matrix } a^{-1}\textbf{t}_1\\ ( d - ca^{-1}b)^{-1}\textbf{t}_2 \end{matrix}\right],\quad \left[\begin{matrix } \textbf{r}_1\\ \textbf{r}_2 \end{matrix}\right ] = \left[\begin{matrix } \textbf{s}_1-a^{-1}b\textbf{s}_2\\ \textbf{s}_2 \end{matrix}\right].\ ] ] this method provides several advantages .first , is tridiagonal , allowing for easy storage and fast factorisation and inversion when needed .second , and are only used in matrix vector multiplication operations and thus can be implemented as functions that perform these operations rather than stored as matrices .furthermore , , and are block diagonal , and each diagonal block is identical within a given matrix , meaning need only be computed for one block .finally , appears only in the schur complement , which we store and factorise in the preconditioner set - up phase .these advantages mean we only store a matrix for and a block - banded matrix for the schur complement when constructing and factorising the preconditioner .
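the block solve described above can be made concrete with a short python sketch. the following is an illustration only (not the code used in this work) of how the factorised preconditioner might be applied inside a krylov iteration, assuming the tridiagonal block a is stored in banded form, the schur complement s = d - c a^{-1} b has been assembled as a sparse matrix and is factorised once during set-up, and b and c are available only through matrix-vector routines b_mv and c_mv; all names and shapes are illustrative.

....
# sketch only: apply the block preconditioner via its LDU factorisation,
# assuming A is tridiagonal (banded storage), S = D - C A^{-1} B is given
# as a sparse matrix, and B_mv / C_mv return matrix-vector products.
import numpy as np
from scipy.linalg import solve_banded
from scipy.sparse.linalg import splu, LinearOperator, gmres

def make_preconditioner(A_banded, S_sparse, B_mv, C_mv, n):
    S_lu = splu(S_sparse.tocsc())                # factorise Schur complement once

    def A_solve(v):
        return solve_banded((1, 1), A_banded, v) # cheap tridiagonal solve

    def apply(b):
        b1, b2 = b[:n], b[n:]
        t1, t2 = b1, b2 - C_mv(A_solve(b1))      # lower block factor
        s1, s2 = A_solve(t1), S_lu.solve(t2)     # block-diagonal solve
        r1, r2 = s1 - A_solve(B_mv(s2)), s2      # upper block factor
        return np.concatenate([r1, r2])

    return LinearOperator((2 * n, 2 * n), matvec=apply)

# hypothetical usage inside a JFNK step (J_op and residual F not shown):
# M = make_preconditioner(A_banded, S, B_mv, C_mv, n)
# dx, info = gmres(J_op, -F(x), M=M)
....

because a is tridiagonal the banded solve is inexpensive, and only the schur complement is stored and factorised explicitly, which mirrors the storage remarks above.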
the nonlinear problem of steady free-surface flow past a submerged source is considered as a case study for three-dimensional ship wave problems. of particular interest is the distinctive wedge-shaped wave pattern that forms on the surface of the fluid. by reformulating the governing equations with a standard boundary-integral method, we derive a system of nonlinear algebraic equations that enforce a singular integro-differential equation at each midpoint on a two-dimensional mesh. our contribution is to solve the system of equations with a jacobian-free newton-krylov method together with a banded preconditioner that is carefully constructed with entries taken from the jacobian of the linearised problem. further, we are able to utilise graphics processing unit acceleration to significantly increase the grid refinement and decrease the run-time of our solutions in comparison with schemes that are presently employed in the literature. our approach provides opportunities to explore the nonlinear features of three-dimensional ship wave patterns, such as the shape of steep waves close to their limiting configuration, in a manner that has been possible in the two-dimensional analogue for some time. keywords: three-dimensional free-surface flows, nonlinear gravity waves, kelvin ship wave patterns, boundary integral method, preconditioned jacobian-free newton-krylov method, gpu acceleration
argumentation has evolved as an important field in ai , with abstract argumentation frameworks ( afs , for short ) as introduced by dung being its most popular formalization .several semantics for afs have been proposed ( see e.g. for an overview ) , but here we shall focus on the so - called preferred semantics .reasoning under this semantics is known to be intractable .an interesting approach to dealing with intractable problems comes from parameterized complexity theory which suggests to focus on parameters that allow for fast evaluations as long as these parameters are kept small .one important parameter for graphs ( and thus for argumentation frameworks ) is tree - width , which measures the `` tree - likeness '' of a graph . to be more specific , tree - widthis defined via a certain decomposition of graphs , the so - called tree decomposition .recent work describes novel algorithms for reasoning in the preferred semantics , such that the performance mainly depends on the tree - width of the given af , but the running times remain linear in the size of the af . to put this approach to practice, we shall use the _ sharp _ framework , a c++ environment which includes heuristic methods to obtain tree decompositions , provides an interface to run algorithms on these decompositions , and offers further useful features , for instance for parsing the input . fora description of the _ sharp _ framework , see .the main purpose of our work here is to support the theoretical results from with experimental ones .therefore we use different classes of afs and analyze the performance of our approach compared to an implementation based on answer - set programming ( see ) .our prototype system together with the used benchmark instances is available as a ready - to - use tool from http://www.dbai.tuwien.ac.at / research / project / argumentation / dynpartix/.[ [ argumentation - frameworks . ] ] argumentation frameworks .+ + + + + + + + + + + + + + + + + + + + + + + + + an _ argumentation framework ( af ) _ is a pair where is a set of arguments and is the attack relation . if we say attacks .an is _ defended _ by a set iff for each , there exists a such that .an af can naturally be represented as a digraph .[ example : argumentation_framework ] consider the af , with and , , , , , , , .the graph representation of is given as follows : node[arg](a) + + ( 1,0 ) node[arg](b) + + ( 1,0 ) node[arg](c) + + ( 1,0 ) node[arg](d) + + ( 1,0 ) node[arg](e) + + ( 1,0 ) node[arg](f) + + ( 1,0 ) node[arg](g) ; ( a ) edge ( b ) ( c ) edge ( b ) ( d ) edge ( e ) ( f ) edge ( e ) ( g ) edge ( f ) ; ( c ) edge ( d ) ( d ) edge ( c ) ( e ) edge ( g ) ; we require the following semantical concepts : let be an af .a set is ( i ) _ conflict - free _ in , if there are no , such that ; ( ii ) _ admissible _ in , if is conflict - free in and each is defended by ; ( iii ) a _ preferred extension _ of , if is a -maximal admissible set in . for the af in example [ example : argumentation_framework ] , we get the admissible sets , and . 
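(the following brute-force python sketch, included here only for illustration, recomputes these admissible sets directly from the definitions; dynpartix itself uses the tree-decomposition algorithms described later. the attack relation is the one of the example af, which also appears in the input file shown further below.)

....
# brute-force recomputation of the admissible sets of the example AF,
# directly from the definitions above; for illustration only.
from itertools import combinations

args = set("abcdefg")
attacks = {("a", "b"), ("c", "b"), ("c", "d"), ("d", "c"),
           ("d", "e"), ("e", "g"), ("f", "e"), ("g", "f")}

def conflict_free(S):
    return not any((x, y) in attacks for x in S for y in S)

def defends(S, a):
    # every attacker b of a must itself be attacked by some member of S
    return all(any((s, b) in attacks for s in S)
               for (b, t) in attacks if t == a)

def admissible(S):
    return conflict_free(S) and all(defends(S, a) for a in S)

subsets = [frozenset(c) for r in range(len(args) + 1)
           for c in combinations(sorted(args), r)]
adm = [S for S in subsets if admissible(S)]
preferred = [S for S in adm if not any(S < T for T in adm)]
print("admissible:", [sorted(S) for S in adm])
print("preferred: ", [sorted(S) for S in preferred])
....

this naive enumeration is exponential in the number of arguments and is only feasible for tiny examples, which is exactly why algorithms whose cost is governed by the tree-width are of interest.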
consequently , the preferred extensions of this framework are .+ the typical reasoning problems associated with afs are the following : ( 1 ) credulous acceptance asks whether a given argument is contained in at least one preferred extension of a given af ; ( 2 ) skeptical acceptance asks whether a given argument is contained in all preferred extensions of a given af .credulous acceptance is -complete , while skeptical acceptance is even harder , namely -complete .[ [ tree - decompositions - and - tree - width . ] ] tree decompositions and tree - width .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + as already outlined , tree decompositions will underlie our implemented algorithms .we briefly recall this concept ( which is easily adapted to afs ) .a _ tree decomposition _ of an undirected graph is a pair where is a tree and is a set of so - called bags , which has to satisfy the following conditions : ( a ) , i.e. is a cover of ; ( b ) for each , is connected ; ( c ) for each , for some .the width of a tree decomposition is given by .the _ tree - width _ of is the minimum width over all tree decompositions of .it can be shown that our example af has tree - width and next we illustrate a tree decomposition of width : = [ rectangle , draw , rounded corners=2pt ] ( r) child node [ trd ] ( l1) child node [ trd ] ( l2) child node [ trd](r1) child node [ trd](r2) ; dynamic programming algorithms traverse such tree decompositions ( for our purposes we shall use so - called normalized decompositions , however ) and compute local solutions for each node in the decomposition .thus the combinatorial explosion is now limited to the size of the bags , that is , to the width of the given tree decomposition .for the formal definition of the algorithms , we refer to ._ dynpartix _ implements these algorithms using the _ sharp _ framework , which is a purpose - built framework for implementing algorithms that are based on tree decompositions .figure [ fig : architectureofsharp ] shows the typical architecture , that systems working with the _ sharp _ framework follow . in fact , _ sharp _ provides interfaces and helper methods for the preprocessing and dynamic algorithm steps as well as ready - to - use implementations of various tree decomposition heuristics , i.e. minimum - fill , maximum - cardinality - search and minimum - degree heuristics ( cf . ) .= [ very thick , draw = black,>=latex ] ( a ) [ box ] parsing ; ( in ) at ( a ) [ above=9 mm ] ; at ( a ) [ above=9mm , right=4.5mm , anchor = south east ] input ; ( b ) [ right of = a , box ] preprocessing ; ( c ) [ right of = b , box]tree decomposition ; ( d ) [ right of = c , box]normalization ; ( e ) [ right of = d , box]dynamic algorithm ; ( out ) at ( e ) [ above=9 mm ] solutions ; ( in ) ( a ) node ; ( a ) ( b ) ; ( b ) ( c ) ; ( c ) ( d ) ; ( d ) ( e ) ; ( e ) ( out ) ; _ dynpartix _ builds on normalized tree decompositions provided by _ sharp _ , which contain four types of nodes : leaf- , branch- , introduction- and removal - nodes . to implement our algorithms we just have to provide the methods and data structures for each of these node types ( see for the formal details ) . 
in short ,the tree decomposition is traversed in a bottom - up manner , where at each node a table of all possible partial solutions is computed .depending on the node type , it is then modified accordingly and passed on to the respective parent node .finally one can obtain the complete solutions from the root node s table ._ sharp _ handles data - flow management and provides data structures where the calculated ( partial ) solutions to the problem under consideration can be stored .the amount of dedicated code for _ dynpartix _ comes to around 2700 lines in c++ .together with the _ sharp _ framework ( and the used libraries for the tree - decomposition heuristics ) , our system roughly comprises of 13 000 lines of c++ code .currently the implementation is able to calculate the admissible and preferred extensions of the given argumentation framework and to check if credulous or skeptical acceptance holds for a specified argument .the basic usage of _ dynpartix _ is as follows : .... > ./dynpartix [ -f < file > ] [ -s < semantics > ] [ --enum | --count | --cred < arg > |--skept < arg > ] .... the argument ` -f < file > ` specifies the input file , the argument ` -s < semantics > ` selects the semantics to reason with , i.e. either admissible or preferred , and the remaining arguments choose one of the reasoning modes .[ [ input - file - conventions ] ] input file conventions : + + + + + + + + + + + + + + + + + + + + + + + we borrow the input format from the _ aspartix _ system .dynpartix _ thus handles text files where an argument is encoded as arg(a ) and an attack is encoded as att(a , b ) .for instance , consider the following encoding of our running example and let us assume that it is stored in a file inputaf . ....att(a , b ) .att(c , b ) .att(c , d ) .att(d , c ) .att(d , e ) .att(e , g ) .att(f , e ) .att(g , f ) . ....[ [ enumerating - extensions ] ] enumerating extensions : + + + + + + + + + + + + + + + + + + + + + + + first of all , _ dynpartix _ can be used to compute extensions , i.e. admissible sets and preferred extensions .for instance to compute the admissible sets of our running example one can use the following command : .... > ./dynpartix -f inputaf -s admissible .... [ [ credulous - reasoning ] ] credulous reasoning : + + + + + + + + + + + + + + + + + + + + _ dynpartix _ decides credulous acceptance using proof procedures for admissible sets ( even if one reasons with preferred semantics ) to avoid unnecessary computational costs .the following statement decides if the argument is credulously accepted in our running example . ....> ./dynpartix -f inputaf -spreferred --cred d .... indeed the answer would be _ yes _ as is a preferred extension .[ [ skeptical - reasoning ] ] skeptical reasoning : + + + + + + + + + + + + + + + + + + + + to decide skeptical acceptance , _ dynpartix _ uses proof procedures for preferred extensions which usually results in higher computational costs ( but is unavoidable due to complexity results ) . to decide if the argument is skeptically accepted , the following command is used : .... > ./dynpartix -f inputaf -s preferred --skept d .... here the answer would be _ no _ as is a preferred extension not containing . [ [ counting - extensions ] ] counting extensions : + + + + + + + + + + + + + + + + + + + + recently the problem of counting extensions has gained some interest .we note that our algorithms allow counting without an explicit enumeration of all extensions ( thanks to the particular nature of dynamic programming ; see also ) . 
counting preferred extensions with _dynpartix _ is done by ....> ./dynpartix -f inputaf -s preferred --count ....in this section we compare _ dynpartix _ with _ aspartix _ , one of the most efficient reasoning tools for abstract argumentation ( for an overview of existing argumentation systems see ) . for our benchmarks we usedrandomly generated afs of low tree - width . to ensure that afs are of a certain tree - width we considered random grid - structured afs .in such a grid - structured af each argument is arranged in an grid and attacks are only allowed between neighbours in the grid ( we used a 8-neighborhood here to allow odd - length cycles ) . when generating the instances we varied the following parameters : the number of arguments ; the tree - width ; and the probability that an possible attack is actually in the af .the benchmark tests were executed on an intelcore2 cpu 6300.86ghz machine running suse linux version 2.6.27.48 .we generated a total of 4800 argumentation frameworks with varying parameters as mentioned above .the corresponding runtimes are illustrated in figure [ figure : benchmarks ] .the two graphs on the left - hand side compare the running times of _ dynpartix _ and _ aspartix _ ( using dlv ) on instances of small treewidth ( viz . 3 and 5 ) .for the graphs on the right - hand side , we have used instances of higher width .results for credulous acceptance are given in the upper graphs and those for skeptical acceptance in the lower graphs .the y - axis gives the runtimes in logarithmic scale ; the x - axis shows the number of arguments .note that the upper - left picture has different ranges on the axes compared to the three other graphs .we remark that the test script stopped a calculation if it was not finished after 300 seconds . for these caseswe stored the value of 300 seconds in the database .[ [ interpretation - of - the - benchmark - results ] ] interpretation of the benchmark results : + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we observe that , independent of the reasoning mode , the runtime of _ aspartix _ is only minorly affected by the tree - width while _ dynpartix _ strongly benefits from a low tree - width , as expected by theoretical results . for the _ credulous acceptance _problem we have that our current implementation is competitive only up to tree - width .this is basically because _ aspartix _ is quite good at this task .considering figures [ figure : benchmarks_a ] and [ figure : benchmarks_b ] , there is to note that for credulous acceptance _ aspartix _ decided every instance in less than 300 seconds , while _ dynpartix _ exceeded this value in 4% of the cases .now let us consider the _skeptical acceptance _ problem . as mentioned before , skeptical acceptance is much harder computationally than credulous acceptance , which is reflected by the bad runtime behaviour of _indeed we have that for tree - width , _ dynpartix _ has a significantly better runtime behaviour , and that it is competitive on the whole set of test instances . as an additional comment to figures [ figure : benchmarks_c ] and [ figure : benchmarks_d ] , we note that for skeptical acceptance , _ dynpartix _ was able to decide about 71% of the test cases within the time limit , while _ aspartix _ only finished 41% .finally let us briefly mention the problem of _ counting preferred extensions_. 
on the one side we have that _ aspartix _ has no option for explicit counting extensions , so the best thing one can do is enumerating extensions and then counting them .it can easily be seen that this can be quite inefficient , which is reflected by the fact that _ aspartix _ only finished 21% of the test instances in time . on the other handwe have that the dynamic algorithms for counting preferred extensions and deciding skeptical acceptance are essentially the same and thus have the same runtime behaviour .we identify several directions for future work .first , a more comprehensive empirical evaluation would be of high value .for instance , it would be interesting to explore how our algorithms perform on real world instances .to this end , we need more knowledge about the tree - width typical argumentation instances comprise , i.e. whether it is the case that such instances have low tree - width . due to the unavailability of benchmark libraries for argumentation , so far we had to omit such considerations .second , we see the following directions for further development of _ dynpartix _ : enriching the framework with additional argumentation semantics mentioned in ; implementing further reasoning modes , which can be efficiently computed on tree decompositions , e.g. ideal reasoning ; and optimizing the algorithms to benefit from recent developments in the sharp framework .
the aim of this paper is to announce the release of a novel system for abstract argumentation that is based on decomposition and dynamic programming. we provide a first experimental evaluation to show the feasibility of this approach.
the most obvious way to generate a three - dimensional mesh in a mutateable way would be to simply take a representation of the shape , and directly mutate it .if the shape was the level set of a sum of spherical harmonics , then you could just mutate the proportions of each spherical harmonic , and the shape would change correspondingly . in a shape represented by a mesh , the mesh vertices could be mutated directly .in biology , the way that morphologies can be mutated seems richer than in either of these examples .for instance , in both of the examples above , a child organism would be unlikely to be just a scaled version of its parent , because too many mutations would have to coincide .it would be unlikely to find left - right symmetry evolving in either of the above methods unless the morphology was explicitly constrained . in nature, the link between morphology and the organism s genome is much more complicated than for the examples above .modeling the chemical processes behind the development of an organism is an active field , which is described in detail in _ _ on growth , form and computers__ .a widely used model that describes organism development has been presented by kumar and bentley . in this work , the same philosophy is adopted , that an emergent system needs to be parameterised by a genome , and the morphology needs to be a result of the system s dynamics . however , the emergent system used here is a network of identical neural networks , or cellular neural network .these were described by chua and yang and a well - known review of work on cellular neural networks was written by cimagalli and balsi .a paper by wilfried elmenreich and istvn fehrvri uses a cellular neural network to reconstruct images , and the architecture of their cellular neural network appears similar to the method in this paper . here, though , the output of the network is information that is used to grow a mesh , and the top - level topology of the cellular neural network here has to be flexible enough to allow for differing numbers of neighbours for each cell .cellular neural networks are capable of simulating a large variety of systems , and have been demonstrated to be able to model conway s game of life , which is known to be turing complete , and so it is at least plausible that they could generate complicated structured patterns that resemble biological morphologies .the calculation presented here takes place on a network of vertices .there are a certain number of discrete timesteps .each vertex , , at each time , , has a real - valued output vector , .each vertex has a number of neighboring vertices .each vertex , at each timestep has an input vector , , such that is a function of neighboring vertices outputs in the previous timestep : where ,, is the set of neighbours of vertex .the function that maps from neighboring outputs to inputs , , is given the superscript , , to denote that it can vary from vertex to vertex .this is simply to allow for slightly different processing when the vertex might have different numbers of neighbours or have a slightly different geometry .the mapping from input , , to output , is calculated using a feed - forward neural network with a sigmoid activation function .the neural network is described in c - like pseudocode : .... 
double [ ] evaluate(double [ ] input ) { for ( int j = 0 ; j < input.length ; j++ ) neuron[0][j].value = input[j ] ; for ( int i = 1 ; i < nlayers ; i++ ) //the zero - th layer is skipped .{ for ( int j = 0 ; j < number of neurons in layer i ; j++ ) { double a = -neuron[i][j].threshold ; for ( int k = 0 ; k < number of neurons in layer ( i-1 ) ; k++ ) a + = ( neuron[i - 1][k].value - 0.5 ) * neuron[i][j].weights[k ] ; neuron[i][j].value = 1.0 / ( 1.0 + exp(-a ) ) ; } } for ( int j = 0 ; j < number of neurons in final layer ; j++ ) output[j ] = neuron[last][j].value ; return output ; } .... the neural network is parameterised by each neuron s weights vector and threshold .these form the mesh s genetic code - any mutation or crossover or other operation on the mesh s genetic code simply varies these weights and thresholds .the vertex network here is a three dimensional mesh - consisting of vertices with a three dimensional position , and faces with three vertices .each vertex s neighbours are any vertex with which it shares a face .the input function , , that gives each vertex s input vector as a function of its neighbours output vectors , does the following : * if the output vector is length , then the input vector is length : each output number becomes three input numbers regardless of the mesh topology .* the first input number to this vertex is its own output from the previous timestep . *the second input number is the average output from all its neighbours from the previous timestep . *the third input number is a measure of the dispersion of its neighbours . *some inputs are reserved for things like the orientation of the vertex or its adjacent faces . in this way , each vertex can communicate with its neighbours , but in a symmetry preserving way .the architecture of this network and how it relates to the neural networks on each vertex is shown in figure [ archnet ] the timestep has one final component : the mesh updates according to the vertex output vector .each vertex has a normalised three - dimensional vector , , describing the direction that it s position , can grow .it then grows according to the following : where is a normalization factor so that the mesh as a whole can only grow so fast , and is the zero - th element of the output of vertex at this time .the mesh then checks to see if any face has an area that is above a threshold ( which in some cases can be altered by a different one of the output elements of its vertices ) , and if so , places a new vertex in the middle of the face , and replaces itself with three new faces that integrate the new central vertex . the growing direction , , for the new vertex depends on both the normal of the original face , and a weighted sum of the growing directions of the three parent vertices , the weighting being determined by a vertex output element .finally , if any faces share two vertices , adjacent faces are checked to see if it they would be improved by switching the common edge : faces and might be rearranged as , , depending on their relative orientation , the length compared to , and whether vertices or have already got too few adjacent faces ( since this operation would reduce that number ) .start with a simple three - dimensional mesh with vertices and triangular faces .assign each vertex an output vector of length , and a growing direction , . 
for each timestep ,* calculate a length input vector for each vertex based on the outputs of its neighbours .* calculate each vertex s output vector using a neural network each vertex has an identical neural network to the others .* update the mesh according to the vertex output : move the vertices , check to see if any new vertices should be added , and adjust the mesh accordingly , and consider switching a few edges . in the examples shown below , there are fifteen outputs , forty - five inputs and thirty neurons in the hidden layer . of the forty - five inputs ,four are overridden with the vertex growth direction ( three inputs ) , the height of the vertex ( one input ) .three outputs are used to guide mesh growth : one moves the vertex along its growth direction , one influences the area required for a face split , and the last influences the growth direction of any vertices that are placed as a result of a face splitting that this vertex is part of .it is outside the scope of this document to discuss how to implement a genetic algorithm , but the basic idea is that you have a population of genomes , each specifying the free parameters of a neural network that generates the 3d mesh .the population gradually replaces worse genomes with better ones , and generates new genomes from old ones , allowing for mutation and optionally crossover .if the genomes are chosen completely randomly , with each weight or threshold chosen to be evenly distributed between -2 and + 2 , and run the algorithm for 200 timesteps , then the shapes generated look like those shown in figure [ unselected ] .note that even without selection , the shapes are quite diverse and already slightly interesting .not many genomes have simply failed to produce any mesh other than the initial .an example is given here to demonstrate that this is suitable for use in genetic algorithms , where meshes are chosen to maximise the heuristic where is the maximum z - height of the mesh at that x , y position .this has been chosen to roughly mimic the selection pressure on trees the larger the horizontal surface area , the more light the organism will receive , but only if high enough to escape the shade of competitors .this heuristic encourages the formation of a canopy above ten units of height , and the mesh will necessarily have a trunk in order to reach that height .an example after several generations of selection is show in figure [ over10area ] .it is apparent that the shape look somewhat like a tree .a fairly simple algorithm is presented that can generate interesting shapes according to a genetic code that is suitable for use in a genetic algorithm .an example heuristic was given which demonstrated its suitability for use in a genetic algorithm .the neural networks here are meant to be an analogy to the process that forms morphology in nature : the outputs of the neural network are meant to be analogous to the state of the cells in this region of the organism , including any chemical markers or hormones that might influence the organism s local growth .neural network were chosen to be the equivalent process in these simulations because the implementation is simple .this technique might find applications in reconstructing asteroid shapes from light curves , or in biomedical imaging , or in computer graphics .the method outlined here was developed in a non - academic setting , and published because it appears to be novel .however , the author acknowledges that there may be relevant papers that should have been cited but were 
neglected .any comments or suggestions would be gratefully received .eduardo gomez - ramirez and giovanni egidio pazienza ., chapter the game of life using polynomial discrete time cellular neural networks , pages 719726 .springer berlin heidelberg , berlin , heidelberg , 2007
there are a number of ways to procedurally generate interesting three-dimensional shapes; here a method is presented in which a cellular neural network is combined with a mesh-growth algorithm. the aim is to create a shape from a genetic code in such a way that even a crude search can find interesting shapes. identical neural networks are placed at each vertex of a mesh and communicate with the networks on neighboring vertices. the outputs of these networks determine how the mesh grows, allowing interesting shapes to be produced emergently and mimicking some of the complexity of biological organism development. since the networks' parameters can be freely mutated, the approach is amenable to use in a genetic algorithm.
this paper is part of a larger program to develop mathematical methods to quantitatively study performance of models for flocking .the main underlying motivation for the current work is to inform development of methods for programming driverless cars to enable coherent motion at high speed , even under dense traffic conditions .this is obviously an important problem , not only because it can lead to enormous cost savings to have smooth and dense traffic on our busier highways , but also because failures may cost lives .we study models that assume that each car is programmed identically and that can observe relative velocities and positions of nearby cars . in this workwe take nearby to mean only the car in front and behind .however the methods we develop will be applicable to larger interactions ( and these will be explored in future work ) .we will assume that the system is linearized .various examples and analyses of nonlinear systems exist .but the emphasis here is on linear systems where we can allow for many parameters ( to take the neighbors into account ) and still perform a meaningful analysis .there are two main aspects in our analysis .the first is the asymptotic stability .this can be analyzed via the eigenvalues of the matrix associated with the first order differential equation .section [ chap : stability ] is devoted to establishing necessary and sufficient conditions for a class of systems to be asymptotically stable .even though this is a fairly straightforward calculation , we have not found it in this generality in the literature .the second , more delicate aspect of the problem is related to the fact that we may have arbitrarily many cars following each other , hundreds or even thousands . in this situation , even if all our systems are known to be asymptotically stable , transients may still grow exponentially in the number of cars . the spectrum of the linear operator does not help us to recognize this problem ( ) .a dramatic example of this can be found in where eigenvalues have real part bounded from above by a negative number and yet transients grow exponentially in .this kind of exponential growth underscores the need for different ( non - spectral ) methods to analyze these systems .the main result of our paper represents one such alternative approach .we establish that for the parameter values of interest ( e.g. 
asymptotically stable systems ) , solutions are well approximated by travelling wave signals with two distinct signal velocities , one positive ( in direction of increasing agent number ) and one negative .ever since the inception ( , ) of the subject , systems with periodic boundary conditions have been popular ( , , and ) because they tend to be easier to study .however the precise connection between these systems and more realistic systems with non - trivial boundary conditions has always been somewhat unclear .our current program differs from earlier work in two crucial ways .the first is that we make precise what the impact of our analysis is for the ( more realistic ) systems on the line : namely in this paper we derive an expression for the velocity with which disturbances propagate in systems with periodic boundary , and in we numerically verify that this holds on the line as well .the second is that we consider all possible nearest neighbor interactions : we do not impose symmetries .this turns out to be of the utmost importance : when we apply these ideas in it turns out that the systems with the best performance are asymmetric .asymmetric systems ( though not the same as ours ) have also been considered by and with similar results .however their methods are perturbative , and spectral based . in and asymmetric interactions are also studied , and it was shown that in certain cases they may lead to exponential growth ( in ) in the perturbation . in the later of these , the model is qualitatively different because absolute velocity feedback is assumed ( their method is also perturbative and not global ) .signal velocities were employed in earlier calculations namely and .these calculations have in common that they were done for _ car - following _ models .we are interested in a more general framework , namely where automated pilots may pay attention _ also _ to their neighbor _ behind _ them or indeed other cars further afield .our model is _ strictly decentralized_. there are two reasons to do that .first , in high speed , high / density traffic , small differences in measured absolute velocity may render that measurement useless , if not dangerous , for the feedback .secondly , the desired velocity , even on the highway , may not be constant .it will depend on weather , time of day , condition of the road , and so on .for these reasons we limit ourselves to strictly _ decentralized _ models that only use information relative to the observers in the cars ( see and ) .many authors study models featuring a term proportional to velocity minus desired velocity ( see e.g. , , , , , , and ) .we consider a model of a _ decentralized _ flock of moving agents ( e.g. cars ) , where each agent s acceleration depends linearly on on the differences between its own relative position and velocity , and those of some subset of neighbors .letting be the position of the , and its desired distance within the flock ( typically times a fixed spacing ) , the general linear decentralized flock satisfies where is the set of neighbors for agent , and and are the coefficients for how the difference of positions and velocities respectively between agent and affect the acceleration of agent .the above model is more general than that considered in this work , we restrict ourselves to a leaderless decentralized flock with identical agents and periodic boundary .these restrictions imply and depend only on , and that the neighborhood sets be shift invariant , e.g. 
.we will also restrict ourselves to nearest neighbor systems . to further simplify the resulting equations ,we introduce the change of variables ( see for more details ) .we also introduce constants and , define for and where all indices are treated mod , and define similarly. it will be convenient to allow negative indices for by setting , similarly for . in this notation ,the flock equations become the following : [ defn : normalized system ] the system is given by the equation where the matrices and defined implicitly above are circulant matrices , as and depend only on .they also have row sums equal to 0 , as the decentralized condition has implied that we will accordingly refer to and as laplacian matrices . * remark :* it is well known that circulant matrices have orthogonal eigenbases , and are diagonalized by the discrete fourier transform ( see ) .this is the reason periodic boundary conditions are so convenient. it will be useful to write the equations of as a first order system : this system has a 2-dimensional family of coherent solutions , namely : where and are arbitrary elements of .these correspond to the generalized eigenspace of for the eigenvalue 0 .it is easy to see that all solutions converge to one of these coherent solutions if and only if all other eigenvalues of have negative real part . with a slight abuse of notationwe will call this case asymptotically stable ( see for precise definitions ) : the system in equation [ eqn : first - order ] is called asymptotically stable if it has a single eigenvalue equal to 0 with algebraic multiplicity 2 , and all other eigenvalues have strictly negative real parts .[ defn : asympt ] the discrete fourier transform will play a fundamental role in our analysis .we define and as follows : denote and set denote the vector by : we furthermore define the moments of and : and observe that can be expanded as an analogous expansion for can also be given .in this section we state and prove necessary and sufficient conditions for nearest neighbor systems to be asymptotically stable .let and be the laplacians defined in definition [ defn : normalized system ] .the eigenvalues of are with associated eigenvector ( where ) .similarly , and form eigenpairs for . [ prop : evals lapl ] this follows immediately from the previous remark as and are circulant matrices . * remark : * even though and have bases of orthogonal eigenvectors , does not . instead ,the eigenvectors of the matrix lie within two - dimensional subspaces which are orthogonal to each other .each of these may be spanned by two not necessarily orthogonal eigenvectors , or by an eigenvector and a ( jordan ) generalized eigenvector .this is made precise below : the eigenvalues ( ) of are given by the solutions of with associated eigenvectors given by .[ prop:2 ] let be an eigenvalue of , with eigenvector written as . then which implies first that and then that .the latter shows that is an eigenvalue of the circulant matrix , which from proposition [ prop : evals lapl ] has eigenvalues given by , for .this implies satisfies for some .finally , letting be as above , it is straightforward to show are eigenvectors with eigenvalue . 
,blue ellipse : .[ fig : phasevelocities1],height=240 ] define and by and define the curve to be the set of all satisfying for some ] given by , so that } g(\phi) ] , and the functions and are continuous , then there must be some and so that either or , in which case is not asymptotically stable .we are now in a position to state and prove the main theorem of this section .recall that we identify and with and in definition [ defn : normalized system ] .suppose is as defined in definition [ defn : normalized system ] , with .then is asymptotically stable for all if and only if , , and .[ theo : main1 ] let be asymptotically stable for all .first , proposition [ prop : ix1=0 ] implies , which for implies .next , equation [ eq : decentralized ] implies that , which implies similarly . as for , proposition [ prop : routh ] implies we must have and . to prove the other direction ,let , and .the same calculation as above shows and for , then proposition [ prop : routh ] implies is asymptotically stable .the main result of this section is the determination of the signal velocity in asymptotically stable systems as characterized in theorem [ theo : main1 ] .the signal velocity is the velocity with which disturbances ( such as a short pulse ) propagate through the flock . in general ,signal velocities in dispersive media may be difficult to determine .the reason is that a pulse consists of a superposition of plane waves , typically each with a different phase velocity .if the component plane waves have different phase velocities , the pulse may spread out over time ( dispersion ) , and the determination of arrival time of the signal may becomes problematic . for detailswe refer to . for nearest neighbor systems of definition [ defn : normalized system ]we define : * remark : * from now on we will restrict our attention to ( stable ) systems satisfying the conditions of definition [ defn : normalized system ] and the conclusions of theorem [ theo : main1 ] . note that for these systems . in order to simplify notation we willalso ( without loss of generality , because of theorem [ theo : main1 ] ) re - scale and so that the values of and are 1 from now on. * remark : * from the definitions it is clear that can be identified with and that is the complex conjugate of . it will be convenient in this section to relabel these eigenvalues so that runs from to .for simplicity of notation , we will however write as . 
let as in definition [ defn : normalized system ] and theorem [ theo : main1 ] .then and the eigenvalues of can be expanded as ( with and ) : [ prop : expansion eigenvals ] expand given in proposition [ prop:2 ] in powers of using after a substantial but straightforward calculation the result is obtained .the phase velocity of the time - varying sinusoid on the real line is defined by the evolution of points of constant phase : , which gives the phase velocity .disturbances in the positions of agents in the flock may be decomposed in terms of solutions to equation [ eqn : normalized system ] which are damped sinusoidal waves as functions of time and agent number .we define phase velocity in units of number of agents per unit time , as follows .[ def : phase_velocity ] the set of solutions has phase velocity .on our way to studying the propagation velocity of disturbances in the system , we will characterize its phase velocities .we first establish the following : [ lemma : nu_m_opposite_signs ] for as in theorem [ theo : main1 ] , the imaginary parts of the eigenvalues have opposite signs for .set and .as are roots of , and we can identify .we have as is symmetric , so .solving gives .but and because is asymptotically stable , so and have opposite signs . for in theorem [ theo : main1 ] , phase velocities are given by for .[ lem : phasevelocity ] lemma [ lemma : nu_m_opposite_signs ] implies and have opposite signs .redefine ( if necessary , see proposition [ prop:2 ] ) the subscripts " and " so that has positive imaginary part , and has negative imaginary part .we now derive phase velocities where denotes going from agent 0 " towards agent the expression for the entry of the time - evolution of the solution corresponding to the eigenvalue is ( see proposition [ prop : evals lapl ] ) is ( up to an arbitrary multiplicative constant ) : comparing this to definition [ def : phase_velocity ] shows these two solutions have phase velocities and as given in [ eq : phase_velocity_c ] . from proposition [ prop : expansion eigenvals ]we see that the eigenvalues close to the origin form four branches which intersect at the origin .namely can be + 1 or -1 , and the counter can be positive or negative .this is illustrated in figure [ fig : phasevelocities2 ] .so for given we get two phase velocities : one in each direction . 
for as in theorem [ theo : main1 ], the phase velocities of lemma [ lem : phasevelocity ] can be expanded as ( ) : ^{1/2 } } + \varepsilon\;\dfrac{g_v^2(1 + 2\rho_{v,1})}{16[g_v^2(1 + 2\rho_{v,1})^2 - 2g_x]^{3/2 } } \right)\\ & & + { \mathop{\mathcal{o}}\nolimits}((m\theta)^4 ) \\\end{aligned}\ ] ] the real parts of the associated eigenvalues can be expanded as : ^{1/2 } } \right)+{\mathop{\mathcal{o}}\nolimits}((m\theta)^4)\ ] ] [ lem : phasevelocity2 ] with the reduction as described in the remark in at the beginning of section [ chap : signal ] , theorem [ theo : main1 ] implies , and equation ( [ eq : decentralized ] ) implies .we can then compute all of the moments substituting the expansion from proposition [ prop : expansion eigenvals ] into the expressions for the phase velocity from lemma [ lem : phasevelocity ] , and using the above expressions for the moments and gives the desired expansion .for any set of initial conditions and , there are unique constants and so that the solution of the system has the form our main result of this paper is to show that the first sum represents a signal travelling to the left ( decreasing agent number ) , that may be approximated by a travelling wave with a single signal velocity . likewise, the second sum represents a signal travelling to the right .we first need a small technical lemma .[ lemma : exp_ab ] there is a such that for all , satisfying and , it follows that .using we have .the term multiplying expression is a convergent power series , so is continuous , and approaches 1 as and .the desired inequality then follows .we now address the first term in equation [ eq : zkt_solution_expansion ] .[ prop : timedomain1 ] let be as in theorem [ theo : main1 ] , and as given in lemma [ lem : phasevelocity2 ] ( ) .suppose the initial conditions are such that for all , in the expansion in equation ( [ eq : zkt_solution_expansion ] ) .in addition , suppose that the coefficients satisfy for some .fix and .then , for all |{\operatorname{re}}(\nu_{m+})| + |m\theta| and as defined above may be made sufficiently small , by taking sufficiently large ) . the first sum in equation [ eq : abs_zktmfm ] has terms , each has ;the entire sum is then bounded by . for ] .in addition , if , then all terms on the r.h.s .of the above inequality tend to 0 as .an analogous result to proposition [ prop : timedomain1 ] can be proved for the case when for all .if we write , where has expansion with all and has expansion with all , we have using proposition [ prop : timedomain1 ] and the aforementioned analogous result to bound the two terms on the right establishes equation [ eq : maintheorem ] . if , then as . for , both and , as , which proves that all terms on the r.h.s of equation [ eq : maintheorem ] go to zero as . ,blue : , orange : , red : .the maximum phase velocities occur at , these are the signal velocities and of theorem [ theo : timedomain1 ] ._ , height=288 ] * remark : * it is interesting to note that the signal velocity we determine is actually equal to the group velocity at .the group velocity is defined as .it is not necessarily true that group velocity in these kinds of systems equals signal velocity . in the system studied in are different .see for more information .* remark : * a similar argument as the one in theorem [ theo : timedomain1 ] easily shows that eigenfunctions with wave numbers greater than will die out before .thus for considerations on time - scales longer than that , these are irrelevant. 
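before the concluding remarks, the expansions above can be checked numerically. the python sketch below builds circulant laplacians for a nearest-neighbour flock, assembles the first-order system in the standard companion form (an assumption here, since the matrix is only given implicitly in the text), and reports the largest real parts of the spectrum together with the two low-wavenumber phase velocities; the parameter values are purely illustrative and are not taken from this work, so they should be checked against the stability conditions before being reused.

....
# numerical sanity check, assuming dz/dt = [[0, I], [L_x, L_v]] z with
# circulant nearest-neighbour laplacians; parameter names are illustrative.
import numpy as np

def laplacian(N, g, rho):
    # row k couples to k-1 and k+1 with weights g*rho and g*(1-rho); row sum 0
    L = np.zeros((N, N))
    for k in range(N):
        L[k, (k - 1) % N] += g * rho
        L[k, k]           -= g
        L[k, (k + 1) % N] += g * (1 - rho)
    return L

N = 200
Lx = laplacian(N, g=1.0, rho=0.5)    # symmetric position coupling
Lv = laplacian(N, g=1.5, rho=0.4)    # slightly asymmetric velocity coupling
M = np.block([[np.zeros((N, N)), np.eye(N)], [Lx, Lv]])

ev = np.linalg.eigvals(M)
ev = ev[np.argsort(-ev.real)]
print("largest real parts:", np.round(ev.real[:4], 6))  # double zero, rest < 0

# low-wavenumber branch: circulant eigenvalues at m = 1, then the quadratic
theta = 2 * np.pi / N
lam_x = Lx[0, :] @ np.exp(1j * theta * np.arange(N))
lam_v = Lv[0, :] @ np.exp(1j * theta * np.arange(N))
nu = np.roots([1, -lam_v, -lam_x])
print("Im(nu)/theta at m=1 (phase velocities up to sign convention):",
      np.round(nu.imag / theta, 3))
....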
it also ( conveniently ) turns out that very often the greatest phase velocities are associated with the lowest wave numbers .a typical case is seen in figure [ fig : phasevelocities3 ] .one can show that in those asymptotically stable cases where is close to -1/2 , we have that has a local maximum at .in fact lemma [ lem : phasevelocity2 ] implies that for : which has a local maximum at .though experiments with cars have been done on circular roads ( see ) , our interest in the system with periodic boundary conditions of as defined in definition [ defn : normalized system ] stems from the applicability to traffic systems with non - periodic boundary conditions .the primary motivation for studying the former is that they enable us to analyze how disturbances propagate , and under the assumption that this propagation does not depend on boundary conditions apply that to the latter systems to find the transients .some remarks on how that works are given in the introduction and is the subject of .a relative novelty here is that we consider all strictly decentralized systems , not just symmetric ones . in section [ chap : stability ]we give precise conditions on the parameters so that decentralized systems with periodic boundary condition are asymptotically stable . in its generalitystated here this is new , though related observations have been made in and .the main importance here is that we use these conditions on the parameters to show that in these systems disturbances travel with constant a constant signal velocity , and as our main result we determine that velocity in section [ chap : signal ] .this explains why in these cases , approximations of these systems with large , by the wave equation are successful ( see for example ) .it can be shown however that for other parameter values diffusive behavior may occur ( see .is the desired distance between cars . ) at time 0 agent 0 receives a different initial condition .they are color coded according to the velocity of the agent .the black curves indicate the theoretical position of the wavefront calculated via the signal velocity .note that these velocities depend on the direction , and that the signal velocity is measured in number of cars per time unit .due to the different velocities of the cars , these curves are not straight lines.__,height=288 ] finally we test our prediction of the signal velocity in a numerical experiment .our theory described the error due to approximating the disturbance signal as having a pair of signal velocities as a sum of three terms ( see equation [ eq : maintheorem ] ) , which asymptotically go to zero for large , subject to a constraint on the decay of the fourier coefficients of the initial disturbance . in this numerical experimentwe give agent number at time is a different initial velocity from the others .we note that even though this type of impulse disturbance does not have the fourier coefficient decay required by our theory , we nonetheless observe two distinct signal velocities as predicted .the result can be seen in figure [ fig : signalvelocity ] .that signal propagates forward ( in the direction 1,2,3 , .. ) through the flock as well as backwards ( in the direction , , , ... ) . in figurewe color coded according to the speed of the agents , who are stationary until the signal reaches them . in blackwe mark when the signal is predicted to arrive , according to the theoretically predicted signal velocities .one can see the excellent agreement .l. 
Brillouin, _Propagation of electro-magnetic waves in material media_, Congrès International d'Électricité, vol. 2, 739-788, 1933. Also appeared in: L. Brillouin, _Wave propagation and group velocity_, Academic Press, 1960. Y. Sugiyama, M. Fukui, M. Kikuchi, K. Hasebe, A. Nakayama, K. Nishinari, S. Tadaki, S. Yukawa, _Traffic jams without bottlenecks: experimental evidence for the physical mechanism of the formation of a jam_, New Journal of Physics 10, no. 3, 033001, 2008.
We investigate a system of coupled oscillators on the circle, which arises from a simple model for the behavior of large numbers of autonomous vehicles. The model considers asymmetric, linear, decentralized dynamics, where the acceleration of each vehicle depends on the relative positions and velocities between itself and a set of local neighbors. We first derive necessary and sufficient conditions for asymptotic stability, and then derive expressions for the phase velocity of propagation of velocity disturbances through this system. We show that the high frequencies exhibit damping, which implies the existence of two well-defined _signal velocities_ such that low-frequency disturbances travel through the flock with one signal velocity in the direction of increasing agent numbers and with the other in the opposite direction.
the spallation neutron source ( sns ) is a high intensity pulsed accelerator for neutron production . to commission andrun the sns efficiently , high level physics application software for modeling , integrated operation and accelerator physics studies is required ; in particular , construction of an object - oriented , accelerator - hierarchy programming framework .java is chosen as the core programming language because it provides object - oriented scope and existing interfaces to the controls software ( _ e.g. _ java channel access ) and database information ( jdbc , xml ) .the sns physics application software environment includes the sns global database , a java - based software infrastructure ( xal ) , and existing lattice tools such as trace-3d and mad .the core part of this environment is the xal infrastructure , which includes links to the sns database , epics channel access signals , shared extensible markup language ( xml ) files among applications and external modeling tools , as well as built - in accelerator physics algorithms .the present plan for quick on - line modeling during the sns commissioning is to use trace-3d for the linac and mad for the ring .data synchronization at the epics level for the sns pulsed nature is also in progress , and will be included in the xal infrastructure later .the sns global database contains static information about beam line devices ( magnets , diagnostics , etc . ) , power supplies , magnet measurement , global coordinates , as well as other accelerator equipment .the table schemas , entities and relationships are described in .the basic accelerator hierarchy is constructed from the database information .for example information for constructing representative beamline sequences , their constituent lattice and diagnostic components , and the mapping of beamline components to their respective epics process variables ( pvs ) all comes from the global database .although it is possible to directly query the database from the java based xal framework , an intermediate xml file containing the accelerator hierarchy is created instead .the structure of the xml files is based on the xal class view .the global database to local xml file translation is a stand - alone program outside the xal , which obviates the need for each xal based applications to query the database for initialization .the xal infrastructure is a java class structure providing a programming interface with an accelerator hierarchy view .xal is a variant of ual 2.0 , and detailed api information for the xal can be found on - line . a schematic diagram depicting the xal infrastructure relationship to other accelerator components is shown in fig . [ xal_cs ] .the xal provides application programs with connections to the static data via xml files and the run - time data via java channel access . the xal class hierarchy is shown in fig . [ xal_cs ] .at the top of the xal class hierarchy is the sns accelerator .the accelerator is composed of different accelerator sequences , _e.g. _ medium energy beam transport ( mebt ) , drift tube linac ( dtl ) , ring .the sequences are composed of nodes , _e.g. _ quadrupoles , bpms , correctors .there is a built - in capability to include algorithms in xal , but initially we are using an external model ( trace-3d ) for the linac applications . 
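The accelerator hierarchy just described (an accelerator composed of sequences, sequences composed of nodes, and each node mapped to its EPICS process variables) can be sketched schematically as follows. The real infrastructure is the Java-based XAL class structure initialized from the XML translation of the global database; the Python dataclasses below are only a structural illustration, and the class names and the example PV string are made up rather than taken from the XAL API or the database.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class AcceleratorNode:
    name: str                                           # e.g. a quadrupole, BPM, or corrector
    kind: str
    pvs: Dict[str, str] = field(default_factory=dict)   # role -> EPICS process variable name

@dataclass
class AcceleratorSeq:
    name: str                                           # e.g. "MEBT", "DTL", "Ring"
    nodes: List[AcceleratorNode] = field(default_factory=list)

@dataclass
class Accelerator:
    sequences: List[AcceleratorSeq] = field(default_factory=list)

    def nodes_of_kind(self, kind: str) -> List[AcceleratorNode]:
        return [n for s in self.sequences for n in s.nodes if n.kind == kind]

# Illustrative construction (names and PV strings are hypothetical):
mebt = AcceleratorSeq("MEBT", [AcceleratorNode("Q01", "quadrupole",
                                               {"fieldSet": "MEBT_Mag:Q01:B_set"})])
sns = Accelerator([mebt])
```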
regarding scripting possibilities , xal class objects directly with jython are being tested , without the need for interface code .all the run - time information for the applications will be obtained through epics channel access .the xal provides the connectivity to the epics channel access via the channel access ( ca ) classes as shown in fig .[ xal_cs ] .because the sns is a pulsed machine , for many applications the data correlation among pulses is vital .the ca classes provide both synchronized and non - synchronized methods for data taking .the data synchronization will be described in detail in section [ sync ] .most of the existing accelerator modeling software packages are written in languages other than java . in order to run applications from java - based xal, the software packages must be compiled as shared libraries , then connected to the shared libraries via the java native interface ( jni ) .the file i / o is done through xml parsing provided by xal , for example , storing the calculated result in xml files .thus the information is portable , share - able , and can be accessed remotely .the jni calls also require arranging the running threads carefully because programs normally tend to execute its own threads before starting the non - java threads .data synchronization is an important feature for a pulsed accelerator ( 1 ms beam pulses at 60 hz ) .the sns real time data link will synchronize the clocks of all iocs across the accelerator at 60 hz rate , ensuring a good synchronization of the time - stamps being applied to pvs .however , it may be difficult for high level applications to reliably gather sets of data from across the accelerator , all from the same pulse . to facilitate this ,a data - silo data time correlator is being written .the data - silo method is shown schematically in fig .[ silo ] . for a requested pv set, the correlator returns the most recent collection of time - correlated data .the behavior of the datasilo class is configurable by three parameters : the maximum time to wait since start of request , maximum width of the time bin , and the maximum number of channels allowed to be missing from the synchronized data set .the correlator is implemented as the c++ datasilo class which allows the application s programmer to : * add and remove epics process variables from the datasilo set ; * dynamically define the maximum wait time , maximum bin number , and maximum missing bins allowed ; * obtain the most recent synchronized set ( no waiting ) ; wait up to the maximum time to obtain a synchronized set ( blocking ) * choose the earliest , latest , or mean time stamp from a synchronized set .the sns global database is close to the end of design phase and has been tested with sns mebt data .the xal infrastructure is constructed and tested with a modeling tool , trace-3d .the channel access part of the xal will be tested with simulated ioc signals .scripting tools such as matlab and python will be used in the mebt commissioning this spring .the authors would like to thank the sns controls and diagnostics groups for kindly providing us all the epics , database and other support .we would also like to thank dr .n. malitsky for his help on the early xal development . .n.malitsky , _ et al ._ , `` a prototype of the ual 2.0 application toolkit '' , icalepcs 2001 , san jose , ca , usa , november , 2001 , physics/0111096 . .b.oerter , _ et al ._ , `` sns timing system '' , icalepcs 2001 , san jose , ca , usa , november , 2001 , cs.ar/0111032 .
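The data-silo correlation rule described in section [sync] can be illustrated with a small sketch. The actual correlator is the C++ DataSilo class driven by Channel Access monitors; the function below only mimics the binning logic (maximum bin width and maximum number of missing channels) on already-collected time-stamped readings, and the maximum-wait/blocking behaviour is omitted.

```python
from collections import defaultdict

def correlated_set(samples, channels, max_bin_width, max_missing):
    """Return the most recent time-correlated set of readings, or None.

    `samples` is an iterable of (channel, timestamp, value) tuples; this is a
    schematic re-statement of the DataSilo binning rule, not the real class.
    """
    by_channel = defaultdict(list)
    for ch, t, val in samples:
        by_channel[ch].append((t, val))

    # Try candidate anchor times from newest to oldest.
    anchors = sorted((t for readings in by_channel.values() for t, _ in readings),
                     reverse=True)
    for t0 in anchors:
        window = {}
        for ch in channels:
            hits = [(t, v) for t, v in by_channel.get(ch, [])
                    if abs(t - t0) <= max_bin_width / 2.0]
            if hits:
                window[ch] = max(hits)        # newest reading inside the bin
        if len(channels) - len(window) <= max_missing:
            return window                     # earliest/latest/mean time stamp could be chosen here
    return None
```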
the architecture for spallation neutron source accelerator physics application programs is presented . these high level applications involve processing and managing information from the diagnostic instruments , the machine control system , models and static databases ; they will be used to investigate and control beam behavior . primary components include an sns global database and java - based application toolkit , called xal . a key element in the sns application programs is time synchronization of data used in these applications , due to the short pulse length ( 1 ms ) , pulsed ( 60 hz ) nature of the device . the data synchronization progress is also presented .
quantum manipulation applied to information processing or information storage is actually a continuously growing research area searching for practical , easy and optimal solutions in evolution of quantum systems . for spin based resources , single two level systems on well known exact and optimal control solutions in terms of energy or time . despite, bigger spin systems in general have a complex behavior still not completely known contrasting with those solutions .spin interactions have been analyzed in terms of transference and control of entanglement in bipartite qubits , chains and lattices . on these arrangements ,several approaches have extended research on more complex systems depending on external parameters ( temperature , strength of external fields , geometry , etc . ) .anysotropic ising model for bipartite systems in lets a block decomposition when their evolution is written in a non - local basis instead of traditional computational basis .it means becomes a direct sum of two subspaces , each one generated by a pair of bell states . while , becomes in the semi - direct product . in these terms , control can be reduced to two control problems in each block and exact solutions can be found .blocks can be configured as a function of the direction of external driven interactions being included .such scheme lets control transformations between pairs of bell states on demand , therefore on the complete state in spite of the possibility to reconfigure those pairs .thus , the procedure sets a control method to manipulate quantum information on matter based on bell states as computational grammar instead of traditional computational basis , letting the transformation between any pair of elements of this basis under well known control procedures for .thus , reduction or decomposition schemes from large systems in terms of simpler problems based on isolated two level subsystems could to state easier and universal ( but not globally optimal in general ) control procedures to manipulate them .complexity of multiqubit systems grows unpredictably with their size in terms of entanglement properties , usefulness and control . in particular , their manipulation do not exhibit scalable rules departing from their smallest systems on which they are constituted .recently , a series of works considering driven magnetic fields together with bipartite ising interaction show the evolution matrix expressed in a non - local basis splits the dynamics on two weak related information subsystems .the splitting could be selected as function of field direction . in this sense ,current work shows the generalization of that decomposition scheme for an extended kind of -partite two level spin systems when is based on non - local generalized bell states , the bell gems basis .structure developed could be understood as a splitting of the quantum information channels into information channels weakly related , in the form of two level subsystems .problem stated in this work is established for a general hamiltonian for coupled two level systems on . if it is written as a linear combination of tensor products of pauli matrices : where , and , a set of time dependent real functions in general .sometimes , as in the second expression in ( [ hamiltonian ] ) , will be represented as a number when it is expressed in base-4 with digits , . 
is its term in that base .additionally , and for are respectively the unitary matrix and traditional pauli matrices assumed being expressed for the computational basis of each part .then , it represents a generalized model for spin and polarization systems obeying the schrdinger equation for its associated evolution operator . without loss of generality, the identity term can be dropped because it only contributes with a global phase .thus , hamiltonian is traceless , remaining evolution operator belongs to and it can be written as a linear combination of eigenvectors projectors : clearly , these eigenstates become invariant under time evolution . decomposition , as it was obtained in can be induced in general by pairing eigenvalues under an arbitrary rule and then considering orthogonal states obtained by pairs as rotations of each defined pair of eigenvalues : these transformations directly split the whole hilbert space in a direct sum of subspaces generated by each pair of states and alternatively by the corresponding pair of eigenstates .there are lots of possibilities for last states in terms of eigenstates and pairings . for them ,separability or entanglement properties are not necessarily assured as in , which corresponds only to a particular interaction .this new basis transforms the diagonal structure of hamiltonian on the basis into a block structure on the basis : with this , each block is a `` localized '' hamiltonian which can be expressed on a pauli - like basis in : and : where : in addition , this structure simplifies and translates quantum error correction procedures into specific flip errors .it means , can be written as a sum of hamiltonian operators on different two level subspaces .this block structure is inherited to the evolution matrix via the -time ordered integral : assuring the possibility to apply quantum control optimal schemes : it implies that ( because one arbitrary factor phase , , in some block depends on remaining phase factors in spite of ) .informally , we will call to this factorization , the `` decomposition '' due to each block structure ( in reality ) . then, hilbert space becomes the direct sum of subspaces generated for each associated pair , . in each subspace, there is a complex mixing dynamics of probabilities , but without mix subspace probabilities .for , bell gems form a orthogonal basis of entangled states for particles : where . at this point , can be considered as the traditional pauli matrices .in addition , is the set of digits of when it is written in base-4 with digits ( , it means . in a similar way , numbers written in base-2 with digits ( . then it is possible express the components of in that basis , obtaining a master expression to determine the necessary restrictions to fit their elements in the states for the decomposition : where ( here , can be removed in spite of the initial discussion ) . in last expressions , the product has some properties in spite of pauli matrices properties . because are traceless and ( negative sign only if ) , then is non - zero only if : a ) , b ) completely different between them , and c ) are equal by pairs .a remark is convenient in this stage . 
in some works , as in , bell gems are preferred be defined using for and because it lets to have real coefficients when they are expressed in the computational basis ( other alternative definitions can introduce specific phase factor in ) .we will adopt last definition in the following , which does not produce changes in the previous discussion .last analysis conduces to a convenient definition : splitting the set of particles , then two parts , are _ correspondents _ if ( they are in the same position of subscripts as the other in the second one ) . in last terms , a careful but direct analysis and development of ( [ hamiltalpha ] ) shows that the decomposition arises when hamiltonian depicts two types of interactions ( by requiring for all in specific entries ) .the first one ( type i ) comprehends all non local interactions between any correspondent parts in any direction .these terms generate diagonal entries in the hamiltonian expressed in bell gems basis .together , it could be include two local interactions in one specific direction on only one pair up most of correspondent parts generating the off diagonal entries to conform the blocks .note these local interaction terms could be interpreted as external driven fields as in .the second one ( type ii ) is obtained substituting the type i local interactions with non local interactions between pairs of any non correspondent parts included in exactly two pairs of correspondent parts .it means , if with are these two pairs of correspondent parts , then interactions allowed are and .this group of four interactions generates the off diagonal entries to conform blocks .type ii interaction normally could be interpreted as a non driven process .figure 1 resumes those two types of interactions .block decomposition : a ) type i interaction , and b ) type ii interaction.,title="fig : " ] [ fig1 ]some applications of decomposition are foreseen .it can be exploited in quantum control of bigger systems in which control schemes are not well developed as those of dynamics .the selectivity of pairing is related with the non diagonal elements arisen , it means , with the local interactions in type i case and with no correspondent interactions in type ii case .this approach to quantum evolution will let control analytically the flow of quantum information in different geometrical arrangements . in a related but not necessarily equivalent direction , selective block decomposition could be useful for unitary factorization in quantum gate design .finally , other applications in quantum superdense coding could be engineered for multichannel quantum information storage , using each subspace to storage differentiated information which could be necessary to process simultaneously , by example in quantum image processing .
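As a small numerical illustration of the constructions above, the sketch below builds a Hamiltonian of the form (1) from Pauli strings indexed in base 4, diagonalizes it, and shows that pairing eigenvectors and rotating within each pair, as in equation (3), yields a 2x2 block structure. The coefficients and the pairing rule are arbitrary illustrative choices; in particular, the pairing is not the Bell-gems pairing singled out in the paper.

```python
import numpy as np
from functools import reduce

# Pauli matrices indexed 0..3: identity, sigma_x, sigma_y, sigma_z.
sigma = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def pauli_string(digits):
    """Tensor product sigma_{i_1} x ... x sigma_{i_d} for a base-4 multi-index."""
    return reduce(np.kron, (sigma[i] for i in digits))

def hamiltonian(coeffs):
    """H = sum_I h_I sigma_I, with coeffs = {(i_1,...,i_d): h_I} and real h_I."""
    d = len(next(iter(coeffs)))
    H = np.zeros((2 ** d, 2 ** d), dtype=complex)
    for digits, h in coeffs.items():
        H += h * pauli_string(digits)
    return H

def paired_basis(H, pairing, theta=0.3):
    """Rotate eigenvectors of H within each pair; H becomes 2x2 block diagonal there."""
    _, V = np.linalg.eigh(H)
    B = np.zeros_like(V)
    c, s = np.cos(theta), np.sin(theta)
    for m, (j, k) in enumerate(pairing):
        B[:, 2 * m] = c * V[:, j] + s * V[:, k]
        B[:, 2 * m + 1] = -s * V[:, j] + c * V[:, k]
    return B

# Two-qubit example with an Ising-like term and one local term (values made up).
H = hamiltonian({(3, 3): 1.0, (1, 0): 0.5})
B = paired_basis(H, [(0, 1), (2, 3)])
print(np.round(B.conj().T @ H @ B, 10))   # two independent 2x2 blocks
```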
Quantum computation and quantum information are continuously growing research areas based on the nature and resources of quantum mechanics, such as superposition and entanglement. In the gate-array version, the use of convenient and appropriate gates is essential. But while the proposed gates take convenient forms for computational algorithms, in practice their design depends on the specific quantum systems and physical resources being used. Gate design is restricted by the properties and limitations of the interactions and physical elements involved, where quantum control plays a deep role. The quantum complexity of multipartite systems and their interactions requires tight control to manipulate their quantum states, both local and non-local ones, but a reducibility procedure still needs to be addressed. This work shows how a general -partite two-level spin system in could be decomposed into subsystems on , allowing control operations to be established. In particular, it is shown that the Bell gems basis is a set of natural states on which this decomposition happens naturally under some interaction restrictions. Thus, by alternating the direction of the local interaction terms in the Hamiltonian, this procedure establishes a universal exchange semantics on that basis. The structure developed can be understood as a splitting of the information channels into pairs of level information subsystems.
in social network analysis , the problem of determining the importance of actors in a network has been studied for a long time ( see , for example , ) .it is in this context that the concept of the _ centrality _ of a vertex in a network emerged .there are numerous measures that have been proposed to numerically quantify centrality which differ both in the nature of the underlying notion of vertex importance that they seek to capture , and in the manner in which that notion is encoded through some functional of the network graph .see , for example , for a recent review and categorization of centrality measures .paths as the routes by which flows ( e.g. , of information or commodities ) travel over a network are fundamental to the functioning of many networks . therefore , not surprisingly , a number of centrality measures quantity importance with respect to the sharing of paths in the network .one popular measure is _betweenness centrality_. first introduced in its modern form by , the betweenness centrality is essentially a measure of how many geodesic ( ie ., shortest ) paths run over a given vertex .in other words , in a social network for example , the betweenness centrality measures the extent to which an actor `` lies between '' other individuals in the network , with respect to the network path structure . as such , it is a measure of the control that actor has over the distribution of information in the network . the betweenness centrality as with all other centrality measures of which we are aware is defined specifically with respect to a single given vertex . in particular , vertex centralities produce an ordering of the vertices in terms of their individual importance , but do not provide insight into the manner in which vertices act together in the spread of information across the network .insight of this kind can be important in presenting an appropriately more nuanced view of the roles of the different vertices , beyond their individual importance .a first natural extension of the idea of centrality in this manner is to pairs of vertices . in this paper, we introduce such an extension , which we term the _ co - betweenness centrality _ , or simply the _ co - betweenness_. the co - betweenness of two vertices is essentially a measure of how many geodesic paths are shared by the vertices , and as such provides us with a sense of the interplay of vertices across the network .for example , the co - betweenness alone quantifies the extent to which pairs of vertices jointly control the distribution of information in the network .alternatively , a standardized version of co - betweenness produces a well - defined measure of correlation between flows over the two vertices .finally , an alternative normalization quantifies the extent to which one vertex controls the distribution of information to another vertex .this paper is organized as follows . in section [ sec : bg ] , we briefly review necessary technical background . in section [ sec : cob ] , we provide a precise definition for the co - betweenness and related measures , and motivate each in the context of an internet communication network .an algorithm for the efficient computation of co - betweenness , for all pairs of vertices in a network , is sketched in section [ sec : computation ] , and its properties are discussed . 
in section [ sec : applications ] , we further illustrate our measures using two social networks whose ties are reflective of communication .some additional discussion is provided in section [ sec : disc ] .finally , a formal description of our algorithm , as well as pseudo - code , may be found in the appendix .let denote an undirected , connected network graph with vertices in and edges in .a _ walk _ on , from a vertex to another vertex , is an alternating sequence of vertices and edges , say , where the endpoints of are .the _ length _ of this walk is said to be .trail _ is a walk without repeated edges , and a _ path _ ,a trail without repeated vertices .a shortest path between two vertices is a path between and whose length is a minimum .such a path is also called a _geodesic _ and its length , the _ geodesic distance _ between and . in the case that the graph is weighted i.e. , there is a collection of edge weights , where , shortest paths may be instead defined as paths for which the total sum of edge weights is a minimum . in the material that follows , we will restrict our exposition primarily to the case of unweighted graphs , but extensions to weighted graphs are straightforward . for additional background of this type , see , for example , the textbook .let denote the total number of shortest paths that connect vertices and ( with ) , and let denote the number of shortest paths between and that also run over vertex .then we define the betweenness centrality of a vertex as a weighted sum of the number of paths through , note that this definition excludes the shortest paths that start or end at .however , in a connected graph we will have whenever or , so the exclusion amounts to removing a constant term that would otherwise be present in the betweenness centrality of every vertex . as an illustration , which we will use throughout this section and the next ,consider the network in figure [ fig : abilene.network ] .this is the abilene network , an internet network that is part of the internet2 project , a research project devoted to development of the ` next generation ' internet .it serves as a so - called ` backbone ' network for universities and research labs across the united states , in a manner analogous to the federal highway system of roads .we use this network for illustration because , as a technological communication network , the notions of connectivity , information , flows , and paths are all explicit and physical , and hence facilitate our initial discussion of betweenness and co - betweenness .later , in section [ sec : applications ] , we will illustrate further with two communication networks from the social network literature . the information traversing this network takes the form of so - called ` packets ' , and the packets flow between origins and destinations on this network along paths strictly determined according to a set of underlying routing protocols ( technically , the abilene network is more accurately described by a directed graph .but , given the fact that routing is typically symmetric in this network , we follow the internet2 convention of displaying abilene using an undirected graph . ) .a reasonable first approximation of the routing of information in this network is with respect to a set of unique shortest paths . 
in this case , the betweenness of any given vertex will be exactly equal to the number of shortest paths through .the vertices in figure [ fig : abilene.network ] correspond to metropolitan regions , and have been laid out roughly with respect to their true geographical locations . intuitively and according to earlier work on centrality in spatial networks , one might suspect that vertices near the central portion of the network , such as denver or indianapolis , have larger betweenness , being likely forced to support most of the flows of communication between east and west .we will see in section [ sec : cob ] that such is indeed the case . until recently ,standard algorithms for computing betweenness centralities for all vertices in a network had running times , which was a stumbling block to their application in large - scale network analyses .faster algorithms now exist , such as those introduced in , which have running time of on unweighted networks and on weighted networks , with an space requirement .these improvements derive from exploiting a clever recursive relation for the partial sums .as we will see , the need for efficient algorithms is even more important in the case of the co - betweenness , and we will make similar usage of recursions in developing an efficient algorithm for computing this quantity .we extend the concept of vertex betweenness centrality to pairs of vertices and by letting denote the number of shortest paths between vertices and that pass through both and , and defining the vertex co - betweenness as thus co - betweenness gives us a measure of the number of shortest paths that run through both vertices and . to gain some insight into the relation between betweenness and co - betweenness , consider the following statistical perspective .recall the abilene network described in the previous section , and suppose that is a measure of the information ( i.e. , internet packets ) flowing between vertices and in the network .similarly , let be the total information flowing through vertex .next , define to be the vector of values , where is the total number of pairs of vertices exchanging information , and , to be the vector of values . a common expression modeling the relation between these two quantities is simply , where is an matrix ( i.e. , the so - called ` routing matrix ' ) of s and s , indicating through which vertices each given routed path goes .now if is considered as a random variable , with uncorrelated elements , then its covariance matrix is simply equal to the identity matrix .the elements of , however , will be correlated , and their covariance matrix takes the form , by virtue of the linear relation between and .importantly , note that the diagonal elements of are the betweenness .furthermore , the off - diagonal elements are the co - betweenness . when shortest paths are not unique , the same results hold if the matrix is expanded so that each shortest path between a pair of vertices and is afforded a separate column , and the non - zero entries of each such column has the value , rather than . in this case , may be interpreted as a stochastic routing matrix . 
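Direct computation of betweenness and co-betweenness from their definitions, by enumerating shortest paths, is straightforward for small graphs such as Abilene. The sketch below does exactly that with networkx; it scales poorly and is intended only as a reference implementation against which the faster recursive algorithm of section [sec:computation] can be checked. The example graph at the end is a toy path graph, not the Abilene topology, and a connected graph is assumed.

```python
import networkx as nx
from itertools import combinations
from collections import defaultdict

def betweenness_and_cobetweenness(G):
    """Brute-force C_B(v) and C_B(v, w) by shortest-path enumeration.

    Each shortest path between s and t contributes 1/sigma_st to every interior
    vertex it visits (betweenness) and to every unordered pair of interior
    vertices it visits (co-betweenness); endpoints are excluded, as in the text.
    Assumes G is connected.
    """
    bet = defaultdict(float)
    cob = defaultdict(float)
    for s, t in combinations(G.nodes(), 2):
        paths = list(nx.all_shortest_paths(G, s, t))
        w = 1.0 / len(paths)
        for p in paths:
            interior = p[1:-1]
            for v in interior:
                bet[v] += w
            for v, u in combinations(interior, 2):
                cob[frozenset((v, u))] += w
    return bet, cob

# Toy example (a path on five vertices, not the Abilene network):
G = nx.path_graph(5)
bet, cob = betweenness_and_cobetweenness(G)
print(bet[2], cob[frozenset((1, 2))])
```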
to illustrate , in figure [ fig : abilene.cob ] , we show a network graph representation of the matrix for the abilene network .the vertices are again placed roughly with respect to their actual geographic location , but are now drawn in proportion to their betweenness .edges between pairs of vertices now represent non - zero co - betweenness for the pair , and are also drawn with a thickness in proportion to their value .a number of interesting features are evident from this graph .first , we see that , as surmised earlier , the more centrally located vertices tend to have the largest betweenness values . andit is these vertices that typically are involved with the larger co - betweenness values .since the paths going through both a vertex and a vertex are a subset of the paths going through either one or the other , this tendancy for large co - betweenness to associate with large betweenness should not be a surprise .also note that the co - betweenness values tend to be smaller between vertices separated by a larger geographical distance , which again seems intuitive .somewhat more surprising perhaps , however , is the manner in which the network becomes disconnected .the seattle vertex is now isolated , as there are no paths that route through that vertex only to and from .additionally , the vertices houston , atlanta , and washington now form a separate component in this graph , indicating that information is routed on paths running through both the first two and the last two , but not through all three , and also not through any of these and some other vertex .overall , one gets the impression of information being routed primarily over paths along the upper portion of the network in figure [ fig : abilene.network ] .a similar observation has been made in , using different techniques .while the raw co - betweenness values appear to be quite informative , one can imagine contexts in which it would be useful to compare co - betweenness across pairs of vertices in a manner that adjusts for the unequal betweenness of the participating vertices .the value is a natural candidate for a standardized version of the co - betweeness in ( [ eq : cob ] ) , being simply the corresponding entry of the correlation matrix deriving from .figure [ fig : abilene.cob.corr ] shows a network graph representation of the quantities in for the abilene network , with edges again drawn in proportion to the values and vertices now naturally all drawn to be the same size . for the abilene network .vertices are all drawn with equal size .edge width is drawn in proportion to the standardized co - betweenness of the two vertices indicent to it . ]much of this network looks like that in figure [ fig : abilene.cob ] .the one notable exception is that the magnitude of the values between the three vertices in the lower subgraph component are now of a similar order to most of the other values in the other component .this fact may be interpreted as indicating that among themselves , adjusting for the lower levels of information flowing through this part of the network , these vertices are as strongly ` correlated ' as many of the others .the co - betweenness may also be used to define a directed notion of the strength of pairwise relationships .let denote the relative proportion of shortest paths through that also go through .this quantity may be interpreted as a measure of the control that vertex has over the information that passes through vertex . 
alternatively , under uniqueness of shortest paths , if from among the set of shortest paths through one is chosen uniformly at random , the value is the probabilty that the chosen path will also go through .we call the _ conditional betweenness _ of , given . note that , in general , .figure [ fig : abilene.cob.condp ] shows a graph representation of the values for the abilene network .( given by eq .( 4 ) ) for the abilene network .edges are drawn with width in proportion to their value of and indicate how one vertex ( at the head ) controls the flow of information through another ( at the tail ) . ] due to the asymmetry of these values in and , arcs are used , rather than edges , with an arc from to corresponding to .the thickness of the arcs is proportional to these values , and is therefore indicative of the control exercised on the vertex at the tail by the vertex at the head . for improved visualization ,we have used a simple circular layout for the vertices .examination of this figure shows symmetry in the relationships between some pairs of vertices , but a strong asymmetry between most others .for example , vertices like indianapolis , which were seen previously to have a large betweenness , clearly exercise a strong degree of control over almost any other vertices with which they share paths .more interestingly , note that certain vertices that are neighbors in the original abilene network have more symmetric relationships than others .the conditional betweenness for atlanta and washington , dc , are fairly similar in magnitude , while those for los angeles and sunnyvale are quite dissimilar , with the latter evidently exercising a noticeably greater degree of control over the former .we discuss here the calculation of the co - betweenness values in ( [ eq : cob ] ) , for all pairs , from which the other quantities in ( [ eq : cob.corr ] ) and ( [ eq : cob.condp ] ) follow trivially . at a first glance , it would appear that an algorithm of running time is necessary , given that the number of vertex pairs grows as the square of the number of vertices .such an implementation would render the notion of co - betweenness infeasible to implement in any but network graphs of relatively modest size .however , exploiting ideas similar to those underlying the algorithms of for calculating the betweenness , a decidedly more efficient implementation may be obtained , as we now describe briefly .details may be found in the appendix .our algorithm for computing co - betweenness involves a three - stage procedure for each vertex . in the first stage, we perform a breadth - first traversal of the network graph , to quickly compute intermediary quantities such as , the number of shortest paths from a source to each other vertex in the network ; in the process we form a directed acyclic graph that contains all shortest paths leading from vertex . in the second stage ,we iterate through each vertex in order of decreasing distance from and compute a score for each vertex that is related to its contribution to the co - betweenness .these contributions are then aggregated in a depth - first traversal of the directed acyclic graph , which is carried out in the third and final stage . 
in order to compute the number of shortest paths in the first stage , we note that the number of shortest paths from to a vertex is the sum of all shortest paths to each parent of in the directed acyclic graph rooted at , namely , in the case of an undirected graph , this can be computed in the course of a breadth - first search with a running time of . in the second stage , we compute using the recursive relation established in theorem 6 of , where denotes the set of child vertices of in the directed acyclic graph rooted at . finally , in the third stage , we compute the co - betweennesses by interpreting the relation as assigning a contribution of to for each of the shortest paths to that run through .we accumulate these contributions at each step of the depth - first traversal when we visit a vertex by adding to for every ancestor of the current vertex .our proposed algorithms exploit recursions analogous to those of to produce run - times that are in the worst case , but in empirical studies were found to vary like in general , or in the case of sparse graphs . here is related to the total number of shortest paths in the network and seems to lie comfortably between and in our experience . in the case of unique shortest paths, it may be shown rigorously that the running time reduces to , and if the network is sparse as well as ` small - world ' ( i.e. , with diameter of size ) .see the appendix for details .we provide in this section additional illustration of the use of co - betweenness , based on two other networks graphs .both graphs originally derive from social network analyses in which one goal was to understand the flow of certain information among actors .our first illustration involves the strike dataset of , which is also analyzed in detail in chapter 7 of .new management took over at a forest products manufacturing facility , and this management team proposed certain changes to the compensation package of the workers .the changes were not accepted by the workers , and a strike ensued , which was then followed by a halt in negotiations . at the request of management , who felt that the information about their proposed changes was not being communicated adequately , an outside consultant analyzed the communication structure among relevant actors .the social network graph in figure [ fig : strike.group ] represents the communication structure among these actors , with an edge between two actors indicating that they communicated at some minimally sufficient level of frequency about the strike .three subgroups are present in the network : younger , spanish - speaking employees ( black vertices ) , younger , english - speaking employees ( gray vertices ) , and older , english - speaking employees ( white vertices ) . in addition , the two union negotiators , sam and wendle , are indicated by asterix next to their names . it is these last two that were responsible for explaining the details of the proposed changes to the employees .when the structure of this network was revealed , two additional actors bob and norm were approached , had the changes explained to them , which they then discussed with their colleagues , and within two days the employees requested that their union representatives re - open negotiations .the strike was resolved soon thereafter . that such a result could follow by targeting bob and norm is not entirely surprising , from the perspective of the network structure . 
both are cut - vertices ( i.e., their removal would disconnect the network ) , and are incident to edges serving as bridges ( i.e. , their removal similarly would disconnect the network ) from their respective groups to at least one of the other groups .co - betweenness provides a useful alternative characterization , one which explicitly emphasizes the patterns of communication in the network , as shown in figure [ fig : strike.group.cob ] . as with figure [fig : abilene.cob ] , vertices ( now arranged in a circular layout ) are drawn in proportion to their betweenness , and edges , to their co - betweenness .bob and norm clearly have the largest betweenness values , followed by alejandro , who we remark also is a cut - vertex , but incident to a bridge to a smaller subnetwork than the other two ( i.e. , four younger spanish - speakers , in comparison to nine younger english - speakers and 11 older english - speakers , for bob and norm , respectively ) .the importance of these three actors on the communication process is evident from the distinct triangle formed by their large co - betweenness values .note that for the two union representatives , the co - betweenness values suggest that sam also plays a non - trivial role in facilitating communication , but that wendle is not well - situated in this regard .in fact , wendle is not even connected to the main component of the graph , since his betweenness is zero ( as is also true for six other actors ) .a plot of the standardized co - betweenness shows similar patterns overall , and we have therefore not included it here .the conditional betweenness for this network primarily shows most of the actors with large arcs pointing to bob and norm , and much smaller arcs pointing the opposite direction .this pattern further confirms the influence that these two actors can have on the other actors in the communication process . however , there are also some interesting asymmetrical relationships among the actors with smaller parts . for example , consider figure [ fig : strike.group.condp ] , which shows the conditional betweenness among the older english - speaking employees .ultrecht , for example , clearly has potential for a large amount of control on the communication of information passing through russ , and similarly , karl , on that through john .our second illustration uses the karate club dataset of . over the course of a couple of years in the 1970s, zachary collected information from the members of a university karate club , including the number of situations ( both inside and outside of the club ) in which interactions occurred between members . during the course of this study , there was a dispute between the club s administrator and the principal karate instructor .as a result , the club eventually split into two smaller clubs of approximately equal size one centered around the administrator and the other centered around the instructor . 
figure [ fig : zachary.net ] displays the network of social interactions between club members .the gray vertices represent members of one of the two smaller clubs and the white vertices represent members who went to the other club .the edges are drawn with a width proportional to the number of situations in which the two members interacted .the graph clearly shows that the original club was already polarized into two groups centered about actors 1 and 34 , who were the key players in the dispute that split the club in two .the co - betweenness for this network is shown in figure [ fig : zachary.net.cob ] . and ) have non - zero betweenness , but are bridges , in the sense that they only serve to connect to other vertices , and hence have zero co - betweenness .( note : the vertices for actors with zero betweenness are drawn to have unit diameter , for purposes of visibility . ) ] as in figure [ fig : zachary.net ] , the layout is done using an energy minimization algorithm . again , as in our other examples , the co - betweenness entries are dominated by a handful of larger values . as might be expected , actors 1 and 34 , who were at the center of the dispute , have the largest betweenness centralities and are also involved in the largest co - betweenness. more interesting , however , is the fact that these two actors have a large co - betweenness with each other despite not being directly connected in the original network graph .this indicates that they are nevertheless involved in connecting a large number of other pairs probably through key intermediaries such as actors 3 and 32 .these latter two actors , while certainly not cut - vertices , nevertheless seem to operate like conduits between the two groups , quite likely due to their direct ties to both actor 1 and either of actors 33 and 34 , the latter of which are both central to the group of white vertices .the co - betweenness for actors 1 and 32 is in fact the largest in the entire network . also of potential interestare the 14 vertices that are isolated from the network in the co - betweenness representation .some of these vertices , such as actor 8 , have strong social interactions with certain other actors ( i.e. , with actors 1 , 2 , 3 and 4 ) , but evidently play a peripheral role in the communication patterns of the network , as evidenced by their lack of betweenness . alternatively , there are the vertices like those representing actors 5 and 11 , who have some betweenness centrality but nonetheless find themselves cut off from the connected component in the co - betweenness graph .an examination of the definition of the co - betweenness tells us that such vertices must be bridge - vertices , in the sense that they only serve to connect pairs of other vertices , _i.e. _ , they only occur in the middle of paths of length two .we introduced in this paper the notion of co - betweenness as a natural and interpretable metric for quantifying the interplay between pairs of vertices in a network graph . as we discussed in different real world examples ,this quantity has several interesting features .in particular , unlike the usual betweenness centrality which orders the vertices according to their importance in the information flow on the network , the co - betweenness gives additional information about the flow structure and the correlations between different actors . 
using this quantity , we were able to identify vertices which are not the most central ones , but which however play a very important role in relaying the information and which therefore appear as crucial vertices in the control of the information flow . in principle, of course , one could continue to define higher - order analogues , involving three or more vertices at a time .but the computational requirements associated with calculating such analogues would soon become burdensome . in the case of triplets of vertices, one can expect algorithms analogous to those presented here to scale no better than .additionally , we remark that , in keeping with the statistics analogy made in section [ sec : cob ] , it is likely that the pairwise ` correlations ' picked up by co - betweenness captures to a large extent the more important elements of vertex interplay in the network , with respect to shortest paths . following the tendancies in the statistical physics literature on complex networks , it can be of some interest to explore the statistical properties of co - betweenness in large - scale networks . some work in this directionmay be found in , where co - betweenness and functions thereof were examined in the context of standard network graph models .the most striking properties discovered were certain basic scaling relations with distance between vertices . on a final note , we point out that , while our discussion here has been focused on co - betweenness for pairs of vertices in unweighted graphs , we have also developed the analogous quantities and algorithms for vertex co - betweenness on weighted graphs and for edge co - betweenness on unweighted and weighted graphs . also see , where a result is given relating edge betweenness to the eigen - values of the matrix edge - betweenness ` covariance ' matrix , defined in analogy to the matrix in section [ sec : cob ] .[ sec : algorithm ] this appendix contains details specific to the proposed algorithm for computing co - betweeness , including a derivation of key expressions , a rough analysis of algorithmic complexity .the pseudo - codes can be found at the address .actual software implementing our algorithm , written in the matlab software enviroment , is available at .central to our algorithm are the expressions in and , the derivations for which we present here . before doing so , however , we need to introduce some definitions and relations .first note that a simple combinatorial argument will show that and for the the sake of notational simplicity , we will assume , without loss of generality , that for the remainder of this discussion .the remaining quantities we need to introduce are notions of the path - dependency of vertices . in the spirit of , we define the `` dependency '' of vertices and on the vertex pair as and we define the dependency of alone on the pair of vertices as similarly , we define the pair - wise dependency of and on a single vertex as and the dependency of alone on as note that unlike , we exclude from the sum in . two relations that follow immediately from these definitions , combined with and ,are and these two relations allow us to show that since by and using eq . 
, we obtain we use this result to re - express the co - betweenness defined in as lastly , to establish the recursive relation in , note that for a child vertex every path to gives rise to exactly one path to by following the edge .this means that and that also note that for we have this allows us to decompose in essentially the same manner as , namely , using and , we then obtain where the last equality is due to the fact that since is a child of we have and thus .standard breadth - first search results put the running time for the first stage of our algorithm at , and since we touch each edge at most twice when we compute the dependency scores , the running time for the second stage is also .since we repeat each stage for each vertex in the network , the first two stages have a running time of .the running time for the depth - first traversal , that occurs during the third stage , depends on the number and length of all shortest paths in the network .overall , we visit every shortest path once and compute a co - betweenness contribution for each edge of every shortest path . for ` small - world ' networks i.e. , networks with an diameter , we must compute contributions , where is the total number of shortest paths in the network .so the overall running time for the algorithm is .empirical evidence suggests that the upper bound for the average ranges from to for common random graph models , and at worst has been seen to reach in the case of a network of airports .( in the latter case , there were extreme fluctuations in so the total number of shortest paths , , might be much smaller than times this upper bound . )this suggests a running time of , though it is an open question to show this rigorously . in the case of sparse networks , where , this reduces to a running time of .chua , e.d .kolaczyk , m. crovella , ieee journal on selected areas in communications , special issue on ` sampling the internet ' , * 24 * , 2263 - 2272 ( 2006 ) .michael , forest products journal * 47 * , 41 - 45 ( 1997 ) .w. de nooy , a. mrvar , v. batagelj , _ exploratory social network analysis with pajek _ , cambridge university press ( cambridge , uk , 2005 ) .
betweenness centrality is a metric that seeks to quantify a sense of the importance of a vertex in a network graph in terms of its ` control ' on the distribution of information along geodesic paths throughout that network . this quantity however does not capture how different vertices participate _ together _ in such control . in order to allow for the uncovering of finer details in this regard , we introduce here an extension of betweenness centrality to pairs of vertices , which we term _ co - betweenness _ , that provides the basis for quantifying various analogous pairwise notions of importance and control . more specifically , we motivate and define a precise notion of co - betweenness , we present an efficient algorithm for its computation , extending the algorithm of in a natural manner , and we illustrate the utilization of this co - betweenness on a handful of different communication networks . from these real - world examples , we show that the co - betweenness allows one to identify certain vertices which are not the most central vertices but which , nevertheless , act as important actors in the relaying and dispatching of information in the network .
maximum likelihood estimation of shape - constrained densities has received a great deal of interest recently .the allure is the prospect of obtaining fully automatic nonparametric estimators , with no tuning parameters to choose .the general idea dates back to , who derived the maximum likelihood estimator of a decreasing density on .a characteristic feature of these shape - constrained maximum likelihood estimators is that they are not smooth .for instance , the grenander estimator has discontinuities at some of the data points .the maximum likelihood estimator of a multi - dimensional log - concave density is the exponential of what call a _ tent function _ ; it may have several ridges .moreover , in this ( and other ) examples , the estimator drops discontinuously to zero outside the convex hull of the data . in some applications, the lack of smoothness may not be a drawback in itself .however , in other circumstances , a smooth estimate might be preferred , because : a. it has a more attractive visual appearance , without ridges or discontinuities that might be difficult to justify to a practitioner ; b. it has the potential to offer substantially improved estimation performance , particularly for small sample sizes , where the convex hull of the data is likely to be rather small ; c. for certain applications , e.g. classification , the maximum likelihood estimator being zero outside the convex hull of the data may present problems ; see section [ sec : classification ] for further discussion . for these reasons , we investigate a smoothed version of the -dimensional log - concave maximum likelihood estimator .the smoothing is achieved by a convolution with a gaussian density , which preserves the log - concavity shape constraint . to decide how much to smooth, we exploit an interesting property of the log - concave maximum likelihood estimator , which provides a canonical choice of covariance matrix for the gaussian density , thereby retaining the fully automatic nature of the estimate . the basic idea , which was introduced by for the case and touched upon in , is described in greater detail in section [ sec : mdp ] . the challenge of computing the estimator , which involves a -dimensional convolution integral , is taken up in section [ sec : computation ] ; see figure [ fig : lcdsmlcd ] for an illustration of the estimates obtained. the theoretical properties of the smoothed log - concave estimator are studied in section [ sec : theory ] .our framework handles both cases where the log - concavity assumption holds and where it is violated . 
in section [ sec : projections ] , we present new results on the infinite - dimensional projection from a probability distribution on to its closest log - concave approximation ; these give further insight into the misspecified setting .a simulation study follows in section [ sec : fsp ] , confirming the excellent finite - sample performance .{lcd2.ps } & \includegraphics[scale=0.32]{smlcd2.ps } \\\mathrm{(a ) } & \mathrm{(b ) } \end{array} ] , and let be the unit simplex in .following , we further define the auxiliary functions by where .then , writing , we have we have applied the basic results of in the last step .an exact expression for is given in appendix b.1 of when its arguments are non - zero and distinct .the taylor approximation of can be used when some of the arguments are small or have similar ( or equal ) values .we have by making an affine transformation of each onto the unit simplex as in section [ sec : covmatrix ] , we reduce the problem to integrating the exponential of a quadratic polynomial over the unit simplex . in general , this has no explicit solution , so it has to be evaluated numerically . gives a brief introduction to the problem of evaluating integrals over the unit simplex , while proposed a combinatorial method .we apply their method , first noting that by integrating out one variable , the dimensionality of the integral can be reduced by one . to see this , consider any positive definite , symmetric matrix ] , and other standard numerical integration methods such as the gaussian quadrature rule , can be applied .the combinatorial method and its variations are implemented in the latest version of the ` r ` package ` logconcdead ` .we found this method to be numerically stable even with several thousand observations , when may be rather small ( note that in such cases , in ( [ eq : integ1d ] ) will typically not be close to zero ) .however , we briefly present below two other ways of computing ; while slower in most cases , they do not require the inversion of , so can be used even when is very small .a. * monte carlo method*. 1 .conditional on , generate independent random vectors from the distribution .approximate by . + the validity of this approximation follows from the strong law of large numbers , applied conditional on . b. * fourier transform*. we can take advantage of the convolution property of the fourier transform as follows .first note that which can be evaluated by extending the auxiliary functions to the complex plane . since , we can invert on a fine grid using the fast fourier transform . since is the convolution of and a multivariate normal density , conditional on , it is straightforward to draw an observation from as follows : a. draw from using the algorithm described in appendix b.3 of or the algorithm of . b. draw , independent of . c. return .it is convenient to define , for , the classes of probability distributions on given by the condition is necessary and sufficient for the existence of a unique upper semi - continuous log - concave density that maximises over all log - concave densities ( * ? ? ? * theorem 2.2 ) .in fact , if has a density , and provided that ( which is certainly the case if is bounded ) , minimises the kullback leibler divergence over all log - concave densities . 
in this sense, is the closest log - concave density to .the density plays an important role in the following theorem , which describes the asymptotic behaviour of the smoothed log - concave maximum likelihood estimator .[ thm : asymp ] suppose that , and write and .let , where with .taking and such that , we have for all that and , if is continuous , .the condition that imposed in theorem [ thm : asymp ] ensures the finiteness of .we see that in general , converges to a slightly smoothed version of the closest log - concave density to .however , if has a log - concave density , then , so is strongly consistent in these exponentially weighted total variation and supremum norms .in fact , suppose that is a sublinear function , i.e. and for all and , satisfying as .it can be shown that under the conditions of theorem [ thm : asymp ] , . despite being smooth and having full support, it turns out that is rather close to .this is quantified in the finite - sample bound below .[ prop : bounds ] if , and , then moreover , where , and . in this subsection, we give new insights into the maps from a probability distribution to its log - concave approximation , and its smoothed version .results such as these enhance our understanding of the behaviour of maximum likelihood estimators in non - convex , misspecified models , where existing results are very limited .theorem [ thm : independent ] below shows that log - concave approximations and their smoothed analogues preserve independence of components . as well asbeing of use in our simulation studies , this is the key result which underpins a new approach to fitting independent component analysis models using nonparametric maximum likelihood .[ thm : independent ] suppose that is a product measure on , so that , say , where and are probability measures on and respectively , with .let denote the log - concave approximation to , and let denote the log - concave approximation to , for . then , writing , where and , we have now suppose further that .let denote the smoothed log - concave approximation to , and let denote the smoothed log - concave approximation to , for .then , for all , our next theorem characterises the log - concavity constraint through the trace of the non - negative definite matrix defined in theorem [ thm : asymp ] .[ thm : covariance ] suppose that . then if and only if has a log - concave density . the `if ' part of this statement is well - known , but the ` only if ' part is new .the two parts together motivate our testing procedure for log - concavity , which is developed in section [ sec : test ] . in most cases , it is very difficult to find explicitly the log - concave approximation to a given distribution .our final result of this section is straightforward to prove , but is of interest because it shows that some log - concave densities can have a large ` domain of attraction ' . [prop : convex ] let be an upper semi - continuous , log - concave density on . then the class of distributions with log - concave approximation is convex .for instance , if is a symmetrised pareto density with and , then it can be shown that its log - concave projection is .thus the class of distributions with whose log - concave projection is the standard laplace density is infinite - dimensional .our simulation study considered the normal location mixture density for 1 , 2 and 3 , where .this mixture density is log - concave if and only if . 
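The comparisons that follow are made in terms of integrated squared error (ISE). As a point of reference, a one-dimensional grid approximation of this criterion, with illustrative Gaussian inputs, might look as follows; the study itself is multivariate and uses the estimators described above rather than these stand-ins.

```python
import numpy as np

def ise_on_grid(grid, f_est, f_true):
    """Integrated squared error of a density estimate, approximated on an equispaced grid."""
    return np.trapz((f_est - f_true)**2, grid)

# illustrative one-dimensional check: a slightly misplaced Gaussian versus the true one
grid = np.linspace(-6.0, 6.0, 2001)
phi = lambda t, m: np.exp(-0.5 * (t - m)**2) / np.sqrt(2 * np.pi)
print(ise_on_grid(grid, phi(grid, 0.1), phi(grid, 0.0)))
```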
for each density , for and , and for sample sizes and , we computed the integrated squared error ( ise ) of the smoothed log - concave maximum likelihood estimator for each of 50 replications .we also computed the ise of the log - concave maximum likelihood estimator and that of a kernel density estimator with a gaussian kernel and the optimal ise bandwidth for each individual data set , which would be unknown in practice .the boxplots of the ises for the different methods are given in figure [ fig : box3d ] for .the analogous plots for the case can be found in . with the gaussian location mixture true density for the smoothed log - concave maximum likelihood estimator smlcd , log - concave maximum likelihood estimator lcd and kernel density estimator with the ` oracle ' optimal ise bandwidth : ( a ) , ; ( b ) , ; ( c ) , ; ( d ) , ; ( e ) , ; ( f ) , . ]we see that when the true density is log - concave , the smoothed log - concave estimator offers substantial ise improvements over its unsmoothed analogue for both sample sizes , particularly at the smaller sample size .it also outperforms by a considerable margin the kernel density estimator with the optimal ise bandwidth .when the log - concavity assumption is violated , the smoothed log - concave estimator is still competitive with the optimal - ise kernel estimator at the smaller sample size , and also improves on its unsmoothed analogue .however , at the larger sample size , the bias caused by the fact that dominates the contribution from the variance of the estimator , and the kernel estimator is an improvement .these results confirm that the smoothed log - concave estimator has excellent performance when the true density is log - concave , and remains competitive in situations where the log - concavity assumption is violated , provided that the modelling bias caused by this misspecification is not too large relative to the sampling variability of the estimator .several tests of log - concavity have been proposed in the literature . and various tests for univariate data , while presented two tests of log - concavity for multivariate data . proposed another multivariate test based on kernel density estimates which had improved finite - sample performance on his simulated examples .however , none of these multivariate tests has theoretical support .suppose , and we seek a size test of has a log - concave density against does not have a log - concave density .motivated by theorem [ thm : covariance ] , we propose the following procedure : a. compute the log - concave maximum likelihood density estimate .b. compute the test statistic , where , as in ( [ eq : sigmas ] ) .c. generate a reference distribution as follows : for , draw conditionally independent samples from . for each bootstrap sample , first compute the log - concave maximum likelihood estimator .then compute , where and .d. reject if .we call this procedure a _ trace _ test .it is justified by the following result : [ thm : test ] suppose that .the trace test is consistent : that is , if is not log - concave , then for each , the power of the test converges to one as .we remark that if , one can also draw bootstrap samples from instead of in step ( c ) . to illustrate the performance of the test , we ran two small simulation studies . 
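Before turning to those studies, here is a minimal sketch of steps (a)-(d) above. The interface is hypothetical: `fit_log_concave` stands for any routine that fits the log-concave maximum likelihood estimator and returns the covariance of the fitted density together with a sampler from it (as remarked above, the bootstrap samples may be drawn from either the smoothed or the unsmoothed fit).

```python
import numpy as np

def trace_test(x, fit_log_concave, B=199, alpha=0.05, rng=None):
    """Bootstrap trace test for log-concavity.

    fit_log_concave(x) -> (Sigma_tilde, sampler): covariance of the fitted
    log-concave MLE and a function sampler(rng) drawing one point from it.
    """
    rng = rng if rng is not None else np.random.default_rng()
    n = x.shape[0]
    Sigma_tilde, sampler = fit_log_concave(x)
    T = np.trace(np.cov(x, rowvar=False) - Sigma_tilde)     # test statistic: trace of A_hat
    T_boot = []
    for _ in range(B):
        xb = np.asarray([sampler(rng) for _ in range(n)])   # bootstrap sample from the fit
        Sigma_tilde_b, _ = fit_log_concave(xb)
        T_boot.append(np.trace(np.cov(xb, rowvar=False) - Sigma_tilde_b))
    crit = np.quantile(T_boot, 1.0 - alpha)
    return T, crit, bool(T > crit)                          # reject H0 when T exceeds the quantile
```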
in the first study, we simulated from the bivariate mixture of normal distributions density , with ( which we recall is log - concave if and only if ) .for each simulation setup , we performed 200 hypothesis tests with .the proportion of times that the null hypothesis was rejected in a size test is reported in table [ tab : sim1 ] . for comparison, we also report the results from the critical bandwidth test proposed by .the permutation test studied by did not perform as well as the critical bandwidth test , so we omitted its results here ..proportion of times out of 200 repetitions that the null hypothesis was rejected with .[ cols="^,^,^,^,^",options="header " , ] the first study confirms that the trace test controls the type i error satisfactorily ( and appears to be less conservative than the critical bandwidth test when ) .the results of the second study , though , are quite striking , and suggest that our new test for log - concavity has considerably improved finite - sample power compared to the critical bandwidth test . noted that the critical bandwidth test can have reduced power due to the boundary bias of the kernel estimators and is quite sensitive to the outliers ( in fact , one also needs to pick a compact region containing the majority of the data , and this choice is somewhat arbitrary ) .our test avoids these issues and performs well even in the presence of outliers or when the true density has bounded support .changing notation slightly from the previous section , we now assume that , , , are independent and identically distributed pairs taking values in .let for , and suppose that conditional on , the random vector has distribution .a _ classifier _ is a measurable function , with the interpretation that the classifier assigns the point to class .the _ misclassification error rate _ , or _risk _ , of is in the case where each distribution has a density , the classifier that minimises the risk is the _ bayes classifier _ , given by ( for all classifiers defined by an as above , we will for the sake of definiteness split ties by taking the smallest element of the . )we will also be interested in the _ log - concave bayes classifier _ and _ smoothed log - concave bayes classifier _ , defined respectively by here , and are the log - concave approximation to and its smoothed analogue , defined in theorem [ thm : asymp ] .in particular , both classifier coincide with the bayes classifier when have log - concave densities .empirical analogues of these theoretical classifiers are given by here , is the number of observations from the class , and and are respectively the log - concave maximum likelihood estimator of and its smoothed analogue , based on .the theorem below describes the asymptotic behaviour of these classifiers .it reveals that the risk of and converges not ( in general ) to the bayes risk , but instead to the risk of and respectively .this is a similar situation to that encountered when a parametric classifier such as linear or quadratic discriminant analysis is used , but the relevant parametric modelling assumptions fail to hold .it suggests that the classifiers and should only be used when the hypothesis of log - concavity can be expected to hold , at least approximately .[ thm : classifiers ] ( a ) : : assume for .let . 
then for almost all , and ( b ) : : now assume for .let .then for almost all , and in fact , the smoothed log - concave classifier is somewhat easier to apply in practical classification problems than its unsmoothed analogue .this is because if is outside the convex hull of the training data for each of the classes ( an event of positive probability ) , then the log - concave maximum likelihood estimates of the densities at are all zero .thus all such points would be assigned by to class 1 . on the other hand, avoids this problem altogether . for these reasons , we considered only in our simulation study and below .we remark that the direct use of ( or any other classifier based on nonparametric density estimation ) is not recommended when , due to the curse of dimensionality . in such circumstancesthere are two options : dimension reduction ( cf . section [ sec : breastcancer ] below ) , or further modelling assumptions such as independent component analysis models . in either case , the methodology we develop remains applicable , but now as part of a more involved procedure . in the wisconsin breast cancer data set ,30 measurements were taken from a digitised image of a fine needle aspirate of different breast masses .there are 357 benign and 212 malignant instances , and we aim to construct a classifier based on this training data set to aid future diagnoses .only the first two principal components of the training data were considered , and these capture 63% of the total variability ; cf .figure [ fig : wbcd](a ) .this was done to make our procedure computationally feasible , to reduce the effect of the curse of dimensionality , and to facilitate plots such as figure [ fig : wbcd ] below .{wbcd_a.ps } & \includegraphics[scale=0.30]{wbcd_b.ps } \\\mathrm{(a ) } & \mathrm{(b ) } \\ \includegraphics[scale=0.76]{wbcd_c.ps } & \includegraphics[scale=0.76]{wbcd_d.ps } \\\mathrm{(c ) } & \mathrm{(d ) } \end{array} ] and ] . writing as and respectively , where and , it follows again by fubini s theorem that of theorem [ thm : covariance ] let , and let denote its log - concave approximation . without loss of generality, we may assume , so it suffices to show that if is the zero matrix , then has a log - concave density .let denote the distribution corresponding to , let and let .for an arbitrary , let and denote the distribution functions of and respectively , and let fix . by applying remark 2.3 of to the convex function and fubini s theorem , we have that all moments of log - concave densities are finite , we have .so , since , we must have .we can therefore integrate by parts as follows : combining ( [ eq : cdfmarginal ] ) , ( [ eq : cdfmarginal2 ] ) and the fact that is continuous , we deduce that .thus , by the fundamental theorem of calculus and the fact that and are both right - continuous .it follows that since was arbitrary , we deduce that , so has a log - concave density . of proposition[ prop : convex ] suppose that the upper semi - continuous log - concave density is the log - concave approximation to .then for each , we see that also maximises over all upper semi - continuous log - concave densities on . 
of theorem [ thm : test ]let denote the second mallows metric on , so , where the infimum is taken over all pairs of random vectors and on a common probability space .recall that the infimum in this definition is attained , and that if , then if and only if both and .let denote the distribution corresponding to the log - concave approximation to , and for to be chosen later , let denote the subset of consisting of those distributions with that have a log - concave density .fix and let .let and denote the empirical distribution of an independent sample of size from and an independent sample from respectively .we will require a bound for that holds uniformly over , and obtain this using the following coupling argument .we may suppose that are independent and identically distributed pairs with and and that and are obtained as the empirical distribution of and respectively .we may further suppose that ; in other words , and are coupled in such a way that they attain the infimum in the definition of the second mallows distance .using standard results on the mallows distance ( e.g. equation ( 8.2 ) and lemma 8.7 of ) , we deduce that for , now let denote the distribution corresponding to the log - concave maximum likelihood estimator constructed from , and let denote the empirical distribution of a sample of size which , conditional on , is drawn independently from . by reducing if necessary, we may assume .it follows that for sufficiently large .the final convergence of the second term here follows from the weak law of large numbers , while for the third term it follows from proposition 2(c ) of and the dominated convergence theorem .let and denote respectively the empirical distribution and the distribution corresponding to the log - concave maximum likelihood estimator of the bootstrap sample drawn from .we deduce from ( [ eq : mallows ] ) , theorem 2.15 of and another application of proposition 2(c ) of that there exists such that now let where . from ( [ eq : mallows ] ) , ( [ eq : hatq ] ) , the dominated convergence theorem and the continuous mapping theorem , we have that as . on the other hand , in the notation of theorem [ thm : asymp ] , where the final claim follows from theorem [ thm : covariance ] and the fact that does not have a log - concave density .note that this claim holds even if , in which case .write , and note that are exchangeable ( so in particular , identically distributed ) .thus , for any , as .we deduce that for any given size of test , the power at any alternative converges to 1 . of theorem [ thm : classifiers ] * ( a ) * note that we have that as for every , and in fact , by theorem 10.8 of , it is almost surely the case that converges to uniformly on compact sets in the interior of the support of . by the strong law of large numbers andthe fact that the boundary of the support of has zero -dimensional lebesgue measure , it therefore follows that for almost all .in fact , with probability one , converges to uniformly on compact sets in the interior of the support of .it follows immediately from this and the dominated convergence theorem that of proposition [ prop : functionals ] the conclusion of theorem [ thm : asymp ] can be stated in the notation of section [ sec : functionals ] as the result therefore follows immediately by the continuous mapping theorem . 
of corollary [ cor : linearfunctionals ] it suffices to show that under condition ( [ eq : funccond ] ), the functional is continuous. fix such that , and choose a sequence such that . then as . thus is continuous, as required.

Carroll, R. J., Delaigle, A. and Hall, P. (2011) Testing and estimating shape-constrained nonparametric density and regression in the presence of measurement error. _J. Amer. Statist. Assoc._, *106*, 191-202.

Gopal, V. and Casella, G. (2010) Discussion of _Maximum likelihood estimation of a multi-dimensional log-concave density_ by M. Cule, R. Samworth and M. Stewart. _J. R. Statist. Soc. B_, *72*, 580-582.

Pal, J. K., Woodroofe, M. and Meyer, M. (2007) Estimating a Pólya frequency function. In _Complex Datasets and Inverse Problems: Tomography, Networks and Beyond_. Vol. 54 of _Lecture Notes - Monograph Series_, 239-249. Ohio: Institute of Mathematical Statistics.

Street, W. N., Wolberg, W. H. and Mangasarian, O. L. (1993) Nuclear feature extraction for breast tumor diagnosis. In _Proc. Electronic Imaging: Science and Technology_. Vol. *1905*, 861-870.
We study the smoothed log-concave maximum likelihood estimator of a probability distribution on . This is a fully automatic nonparametric density estimator, obtained as a canonical smoothing of the log-concave maximum likelihood estimator. We demonstrate its attractive features both through an analysis of its theoretical properties and through a simulation study. Moreover, we use our methodology to develop a new test of log-concavity, and show how the estimator can be used as an intermediate stage of more involved procedures, such as constructing a classifier or estimating a functional of the density. Here again, the use of these procedures can be justified both on theoretical grounds and through their finite-sample performance, and we illustrate their use in a breast cancer diagnosis (classification) problem. Key words: classification; functional estimation; log-concave maximum likelihood estimation; testing log-concavity; smoothing
this paper intends to introduce a new display method using the holography theory by dennis gabor in 1948 .the holography had been expected to be a popular display method for a 3-dimensional image , but the burden of tremendous amount of data processing prohibited the practical application .thus , i would like to introduce a one - dimensional holographic display concept which can reduce the burden of data processing , can adopt simple optical computing method , and has some more practical merits in manufacturing .a one - dimensional hologram can display only a two - dimensional image , but it does not require the lenses like hmd(head mount display ) . instead , this one - dimensional holographic display device has a possibility of showing a real - time two - dimensional information without a lens , within today s technology .this paper contains theoretical considerations about the one - dimensional holography , and the equations that i derived showing the existence of the one - dimensional hologram as well as discussions about practical structures of the display device , the light modulators , and the optical computing device .this work had started by considering the information dimension of a hologram .a traditional two - dimensional hologram can display a three dimensional image , so i speculated that this 2 to 3 relationship between data dimension and image dimension could be transformed into 1 to 2 relationship .so , i first tried to find a one - dimensional hologram for a two - dimensional image by geometrical method , but failed .instead , i found that a diffraction lattice like one - dimensional hologram is formed by some special condition of line image . and, i had conceived a vector and matrix based mathematical technique which can easily express the idea .unexpectedly , this technique was also useful to handle the problem of diffraction efficiency and noise cancelling problem for computed artificial two - dimensional hologram .a phasor expression for a wave from one point source is eq .let represent the relative phase and the amplitude of a wave function , then )}\ ] ] ( , ) the is indeed a complex function , but when , it is a simple function proportional to , and it is actually a constant when computing a one - dimensional hologram mainly discussed in this paper . the traditional wave function is obtained by considering the time term . 
the above phasor expression does not represent a real wave , but the interference pattern of a certain point depends on only relative phase differences between light rays , so the time term disappears when computing the interference pattern .the coherent rays have constant relative phases .the polarization of lights are ignored .a phasor expression for waves from many point sources is eq .( 2 ) by the principle of superposition .let be superpositioned of eq.(1 ) , then )}\ ] ] an actual hologram is a record of the interference pattern on a photographic plate .the interference pattern depends on the illumination .let be the illumination over the space then , ) } \right|^2\ ] ] this can be rewritten as ) } \times \sum \alpha ( \vec{r } ) \exp{(-i[f(\vec{r } ) + \delta])}\ ] ] and , when expanded to a matrix , it is this matrix needs normalization for actual application , but the image reproduction with a hologram may now be certified .if you select as reference light and illuminate it as reproducing light(select for intensity ) on a hologram which represents the above matrix , then represents the modulated lights , take this matrix s diagonals to the first term , remaining first column to the second term , remaining first row to the third term , and others are added to their symmetry conjugated and defined by cosine function , then the result is )}\nonumber\\ & & + 2\exp{(if_1(\vec{r } ) ) } \sum \alpha_{m}\alpha_{n } \cos(f_m(\vec{r})-f_n(\vec{r}))\end{aligned}\ ] ] the first term is 0th order term , the second term represents image , the third term represents conjugate image , and the fourth term is noise term .the above method does not depend on any particular coordinate system , so it can explain volume hologram , as well .when considering the dimensions of storing and displaying hologram , the volume hologram can display three - dimensional image , and can be multiplexed with wavelengths and spatial coordinates of light sources .the two - dimensional hologram can display three - dimensional image , and can not be multiplexed . when considering one - dimensional hologram , one - dimensional hologram can display one - dimensional image(one - line image ) , and can not be multiplexed . in cartesian coordinate system ,when , eq .( 1 ) can be rewritten as ) } \end{aligned}\ ] ] ( , is the coordinate of dot light source ) this expression represents a volume hologram s case .the two - dimensional hologram formation on a plain can be obtained by substitution .the thickness of zero can not exist in real world , so an actual two - dimensional hologram by photographic method is a thin volume hologram and in fact , it is significantly advantageous to reduce image noise .if is substituted with zero again , it could be called as a one - dimensional hologram .but , it is a hologram on a physical line .it is hard to find physical meaning . instead ,if a hologram on a plain is expressed with single axes information , then it also can be called as the one - dimensional hologram . the phase of eq .( 8) is relative to the source of light .it is possible to transform the expression to be relative to the origin of coordinate system . when the light is parallel , the of eq . ( 1 ) is infinite , is the distance between the origin and source . 
) } \\ & = & \frac{\alpha}{r}e^{\textstyle 2 \pi i[\frac{\sqrt{(ox - x)^2+(oy - y)^2+(oz - z)^2}- \sqrt{ox^2+oy^2+oz^2 } } { \lambda } + \delta ] } \\ & \approx & \frac{\alpha}{r}\exp { ( 2 \pi i[\frac { -\frac{ox x + oy y + oz z}{\sqrt{ox^2+oy^2+oz^2 } } } { \lambda } + \delta ] ) } \\ & = & \frac{\alpha}{r}\exp { ( -2 \pi i[\frac { \frac{\textstyle\vec{r}\cdot\hat{x}x + \vec{r}\cdot\hat{y}y + \vec{r}\cdot\hat{z}z}{\textstyle r}}{\lambda } + \delta ] ) } \\ & = & \frac{\alpha}{r}\exp { ( -2 \pi i[\frac{\textstyle\hat{r}\cdot\hat{x}x + \hat{r}\cdot\hat{y}y + \hat{r}\cdot\hat{z}z}{\lambda } + \delta ] ) } \end{aligned}\ ] ] when , can be changed to constant , therefore )}\ ] ] now , one dimension can be reduced by limiting plain with .therefore the result is , )}\ ] ] ( ) } $ ] is more adept for final result , but eq . ( 9 ) shall be used for convenience ) according to the method of eq .( 2 ) , the expression for many points is )}\ ] ] at this time , when (onstant ) ( all the points are on same latitude in polar coordinate ) , the above can be rewritten as )}\ ] ] the real hologram information is obtained by applying the method of eq . ( 4 ) for eq .let , then the result is , ) } \nonumber\\ & & \times \exp { ( 2 \pi i\frac { \hat{r}\cdot\hat{y}}{\lambda}y)}\sum \alpha \exp { ( 2 \pi i[\frac{\textstyle\hat{r}\cdot\hat{x}}{\lambda}x + \delta ] ) } \nonumber \\ & = & \sum \alpha \exp { ( -2 \pi i[\frac{\hat{r}\cdot\hat{x}}{\lambda}x + \delta ] ) } \times \sum \alpha \exp { ( 2 \pi i[\frac{\textstyle\hat{r}\cdot\hat{x}}{\lambda}x + \delta ] ) } \nonumber \\ & = & i(x ) \nonumber \\ & = & \left ( \begin{array}{cccc } 1 & \alpha_2e^{2 \pi i \frac{\textstyle ( \hat{r_1}-\hat{r_2 } ) \cdot\hat{x}}{\textstyle \lambda}x } & \alpha_3e^{2 \pi i \frac{\textstyle ( \hat{r_1}-\hat{r_3 } ) \cdot\hat{x}}{\textstyle \lambda}x } & \cdot \\\alpha_2e^{2 \pi i \frac{\textstyle ( \hat{r_2}-\hat{r_1 } ) \cdot\hat{x}}{\textstyle \lambda}x } & \alpha_2 ^ 2 & \alpha_{2}\alpha_{3}e^{2 \pi i \frac{\textstyle ( \hat{r_2}-\hat{r_3 } ) \cdot\hat{x}}{\textstyle \lambda}x } & \cdot \\\alpha_3e^{2 \pi i \frac{\textstyle ( \hat{r_3}-\hat{r_1 } ) \cdot\hat{x}}{\textstyle \lambda}x } & \alpha_{3}\alpha_{2}e^{2 \pi i \frac{\textstyle \ ! ( \hat{r_3}\!\!-\hat{r_2}\ ! ) \!\cdot\hat{x}}{\textstyle \lambda}x } & \alpha_3 ^ 2 & \cdot \\ \cdots & \cdots & \cdots & \cdot \end{array } \right ) \nonumber \\\end{aligned}\ ] ] the term of was cancelled .so , this is expressed with one - dimensional data which depend on axes only .therefore , eq . ( 11 ) represents a one - dimensional hologram in this paper . according to the method of eq .( 5),(6 ) , the modulation of reproducing light is and , sorting as eq .( 7 ) , results are ( is omitted ) also , \{1 } is term of 0th order , \{2 } is a term representing the image , \{3 } is a term representing the conjugated image , which confirms that it works as a hologram . 
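As a small numerical illustration of eq. (11), the one-dimensional hologram below is computed for a handful of parallel rays that share the same latitude, so that only the x-components of their unit direction vectors and their relative phases enter. The wavelength, directions, amplitudes, and phases are illustrative values, not taken from the text.

```python
import numpy as np

wavelength = 0.5e-6                           # 500 nm, an illustrative value
x = np.linspace(-2e-3, 2e-3, 4000)            # coordinate along the single hologram axis

# (r_hat . x_hat, amplitude, relative phase) for each parallel ray; the first is the reference
rays = [(0.00, 1.0, 0.0),
        (0.05, 0.5, 0.0),
        (0.08, 0.5, 0.3)]

field = np.zeros_like(x, dtype=complex)
for rx, amp, delta in rays:
    field += amp * np.exp(2j * np.pi * (rx * x / wavelength + delta))

I = np.abs(field)**2                          # one-dimensional hologram data, a function of x only
```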
some of the lights expressed by terms of \{3 } and \{4 } , may not be reproduced because the final unit vectors of light ray always have to satisfy the size of 1 .this means , for example , among the lights of term \{3 } , the lights those are smaller than 1 , can be generated .it is the same situation of a diffraction grating that is expressed with grating equation .in grating equation , the degree of is limited as the absolute value of a sinusoidal function is limited to 1 .term \{2 } can have physical meaning with different wavelengths or latitude angles of incidence , so it is impossible to multiplex the one - dimensional hologram by wavelengths or latitude angles of incidence .the one - dimensional hologram may be used to make a display device as described in figure 1 . to reproduce a image ,a one - dimensional hologram should be expressed with a spatial light modulator and a proper reproducing light should be illuminated , then one line of image shall be displayed . and, the whole plain image is displayed by updating the one - dimensional hologram and the angle of incidence ( of eq .( 12 ) term \{2 } ) of the parallel reproducing light synchronously and in sequence .the natural color is obtained by repeating display with the three primary colors .the incident angle of reproducing light should be adjustable , so a deflection device is needed . there may be many kind of deflection device , but the one - dimensional hologram itself also can be used as a deflector .in fact , the one - dimensional hologram deflector is identical to a cosine diffraction lattice . the deflecting plate 1 , 2 and the one - dimensional hologram are cross structured light modulators .one of the deflecting plate 1 or 2 operates at a time and the other maintains the transparent state .the incident angle of parallel ray in figure 1 is fixed as in figure 2 , then one of the deflectors 1 and 2 deflects the parallel ray by deflection range 1 or 2 in figure 2 .this structure makes it possible to eliminate the 0th order light by total internal reflection .it is considerable to replace one of the one - dimensional hologram deflectors with a multiplexed volume hologram . theoretically , it is possible to display a two - dimensional image with above scheme , but there are some more considerable problems to actually develop and operate this display device .they are developing optical modulation device , noise cancelling of image , and the fast computation of the interference pattern . and , the comparison with conventional two - dimensional hologram method or with controllable diffraction lattice method is needed to verify the usefulness of the one - dimensional hologram display method .a hologram display device requires very high resolution spatial light modulator than conventional display device .recently , it had been announced that liquid crystal display device has reached the resolution of . 
however , this resolution is still not enough to display a hologram .the hologram method display device has no relation between the image resolution and the resolution of optical modulation device .the resolution of modulator is related to the field of view , precisely , it is related to the angle between a light from a picture element of an image and the reference light of hologram .when , eq .( 11 ) can be rewritten as , this shows that a hologram is the sum of spatial periodic structures which is expressed with .the possible maximum value of is 2 and at least two pixel is needed to express one spatial period , so , the resolution of light modulator should be to display a image without the limitation of visual field . to express natural color ,if about of blue ray wavelength is substituted for , then a modulator of approximately resolution is required .when using previously mentioned liquid crystal display device of resolution as spatial light modulator , from , the maximum field of view is , this is capable of displaying about wide virtual screen at distance , which has no practical use .fortunately , there have been continuous researches for other types of optical modulation devices . as one of them , according to recently opened japan ntt docomo s patent document , they have mentioned that higher than resolution may be obtained by using a photo - refractive crystal .this is capable of displaying about wide screen at distance , but still it is not fully enough .the resolution mentioned above is the possible resolution for the two - dimensional hologram .the resolution of modulator can be improved by using one - dimensional hologram . to display a two - dimensional hologram ,one pixel electrodes should be placed for each pixel , each electrode should have a controlling circuit , each circuit should have at least two interface wires , all these elements should be placed on a transparent plate with a matrix form . figure 3 is a light modulation device structure for hologram display suggested by ntt docomo . to display a one - dimensional hologram ,all the structures mentioned above may not be placed on the displaying transparent plate , except the pixel electrodes . displaying the figure 4clearly does nt need the matrix of figure 3 .only the pixel electrodes are needed to be placed for display and all other elements may be placed at the edge of each electrode .this will improve the display resolution almost to the limit of wiring technology .it seems that the recent wiring technique is enough for the goal of resolution .a practical display device needs to consider about the problem of image quality .the image quality is determined by resolution of image , luminosity and noise .the resolution problem shall not be discussed , because the holography is intrinsically high resolution display , regardless of the modulator s resolution . and , the luminosity problems may be solved by multiple modulating of phase modulation method . then the noise remains . the 4th term of eq .( 7 ) and term \{4 } of eq .( 12 ) are the noise terms .these noises are caused from the assumption that light modulation happens instantly at a surface .when reproducing light is illuminated to hologram of eq . 
( 11 ) , the energy distribution of all modulated lights by hologram without normalization is when the number of image elements is and assuming that all image elements have identical luminosity for convenience , the eq .( 14 ) can be rewritten as .the first term is the 0th order term except diagonals , the second term is the total luminosity of image , third term is the total luminosity of conjugate image and fourth term is sum of 0th order diagonals and noise term . when with same the total luminosity of image is sufficiently less than reference light , the energy of noise term becomes negligible than the energy of image .this shows that the noise term is especially important for the computer generated hologram on a plain , and negligible when a volume hologram is used .but , noise can be eliminated by simply throwing the noise term and using row 1 and column 1 of eq . ( 5 ) , except diagonals .that is , instead of the expression of eq .( 3 ) ) } \right|^2\ ] ] , by adding row 1 and column 1 those elements are complex conjugates with one another .so , it is expressed with cosine function . applying eq .( 15 ) to ( 11 ) to get expression of one - dimensional hologram , the negative value becomes possible , so , it needs different way for normalization .this one - dimensional hologram can be called as multiplexed cosine diffracting lattice .one of the most big problem in hologram display device is its tremendous data processing burden .when displaying 3-d image with a hologram , there is no other way except improving the algorithm , but when displaying 2-d image , it is possible to compute only partial area of modulator , and can reuse its data on whole area to improve the speed of computing . when using one - dimensional holography , this situation becomes better . for two - dimensional hologram ,all the hologram pixels ( ) must be computed by all the image pixels( ) .but , for one - dimensional , just one column of the hologram pixels( ) are computed by one column of the image pixels( ) , and repeats this for number of the row line of image( ) .this increases computing speed by equals the aperture size of human eyes are between to .so , for clean visuality , let the size of partial hologram to be , and let the hologram resolution to be , then the one - dimensional hologram may be computed 20,000 times faster than the 2-d displaying two - dimensional hologram .but , current digital calculator may not be able to handle the required computing burden for real time color motion picture display .fortunately , there is other solution for one - dimensional hologram calculation .it is possible to make adjustable one - dimensional interference pattern then read it with photo - sensor array .a coherent light started from a source is modulated and diffused by light modulation device with input signal , this light is modulated into multiple parallel rays by lens , and gain hologram output data by reading interference patterns from those parallel rays with photo - sensor array .this is shown on figure 5 . in this case , the calculation speed depends on the speed of sufficient light gathering at the photo - sensor , a laser has sufficient power with care of only generated heat .the reference light was not indicated on figure 5 .it is out of range from radical axis , so , it can not be illuminated through the lens , it should be illuminated diagonally from axis direction . in this case , the noise removing method of eq .( 16 ) ca nt be used , so , small values of should be used . 
in order to do so , a multi - layered one - dimensional hologram may be used .the light modulation efficiency should be lowered at each hologram , and the modulation is repeated with multiple layer . when looking at expression from eq . ( 11 ) , ) } \times \sum \alpha \exp { ( 2 \pi i [ \frac{\hat{r}\cdot\hat{x}}{\lambda}x + \delta ] ) } \ ] ] it shows that only values are used for one - dimensional hologram calculation .therefore , the structure of figure 5 can be applied to all latitude lines regardless of .also because , reducing value and properly increasing value results in same , so , input pixels can be changed to more paraxial , and at the same time , it makes the size of the photosensor array larger .the method in figure 5 can be formed in a thin shape with tens thousand pixel lineal ccd in one - dimensional holography , but when applied to two - dimensional hologram , it would encounter the problems of embodying in a thick shape , illuminating the reference light very out of ranged from radical axis , and making hundreds million pixel ccd .as mentioned above , a one - dimensional hologram can be regarded as a multiplexed diffraction lattice , too .so , the comparison of one - dimensional and diffraction lattice is considerable . when examining the calculation speed of diffraction lattice to display an image , for a diffraction lattice ,each column of the lattice line pixel ( ) should be computed by each pixels of the image , and this should be repeated for the number of the row lines of image ( ) , then repeated again for the number of the column lines of image ( ) .this amount of calculation equals to that of the one - dimensional hologram . considering the computing speed , diffraction lattice is not bad , but there are other problems in diffraction lattice method .the light modulator for displaying the diffraction lattice should be changed for each image pixels .when one line image of one - dimensional hologram consists of 2000 pixels , the modulator for diffraction lattice should be reconfigured 2000 times more than one - dimensional hologram .this means that the light modulator and all the elements of figure 5 should have 2000 times faster speed than those of one - dimensional hologram . to compare the data transfer rate ,let us assume that light modulator consists of parts by resolution , the displaying image consists of pixels , and let us choose the frame rate of 48 frames per a second(24 is traditional frame rate , but a hologram or a lattice can express only one color at a time , thus some extra frames are needed .it seems that 72 monochrome frames are not required for 24 color frames , when 6 frames are used for three times of shape refreshing , and two times of color refreshing , 48 frames are enough .48 is chosen because it is about 50 that is easy to handle . ) , then the time limit for a frame of two - dimensional hologram is about , for a line frame of one - dimensional hologram is about , and for a dot frame of the diffraction lattice is .(a pixel image is quite high resolution for plain pictures , but it is only not so bad resolution for eyeglasses type display devices of wide visual angle . ) and , with , the transfer rates are calculated as giga times per a second for two - dimensional hologram , giga times for one - dimensional hologram , and tera times for diffraction lattice .these results show that the one - dimensional holography is most efficient .some other ways of using diffraction lattice exist they avoid calculation and transmission of data . 
Materials that self-arrange their fringes under an applied voltage are known, and the use of an acoustic wave as the diffraction lattice is also known. However, the arrangement speeds of these methods seem unlikely to meet the time limit, because the state of the molecules in such a material is determined by their neighbouring molecules, and that information is exchanged only at the speed of an acoustic wave. The acoustic wave method may still be worth considering for one-dimensional holography if it is capable of expressing the required resolution. One-dimensional holography is a new display method with balanced characteristics between conventional two-dimensional holography and the diffraction lattice method. It is still a theoretical method, so there appear to be no precedents and few references. It is not, however, a unique theory: it is an application of common theory to a special problem, and thus it can be verified theoretically without difficulty. The dedication of many researchers is required before one-dimensional holography can be put to practical use. In particular, research on a fast-responding light-modulating material seems essential; as the modulation method, a phase-modulating or polarization-modulating material may be adequate. A more precise design of the optical calculator is also required, and other computing methods, such as analog computing devices or faster DSPs, could be investigated. Many other developments may be needed as well.

Young-Cheol Kim, "Appling 1 dimensional hologram to display device", in _Journal of the Institute of Electronics Engineers of Korea Proceedings on Semiconductors and Devices_ (The Institute of Electronics Engineers of Korea, Seoul, Korea, 2005), pp.
This paper introduces a new concept of a one-dimensional hologram, which represents a one-line image, and a new kind of display structure that uses it. This one-dimensional hologram is similar to a superposition of diffraction lattices, and its interference patterns can be computed efficiently with a simple optical computing structure. This is a proposal for a new kind of display method.
invariant manifolds and their intersections are important features that organize qualitative properties of dynamical systems .three types of manifolds have been prominent in the subject : ( 1 ) compact invariant tori , ( 2 ) stable and unstable manifolds of equilibria and periodic orbits , and ( 3 ) slow manifolds of multiple time scale systems .interval arithmetic and verified computing have been used extensively to give rigorous estimates and existence proofs for invariant tori and occasionally to locate stable and unstable manifolds , but this paper is the first to employ these methods to locate slow manifolds .each of these three cases pose numerical challenges to locate the manifolds .many methods that locate invariant tori assume that the flow on the tori is smoothly conjugate to a constant flow with dense orbits .existence of this conjugacy confronts well known small divisor problems and the winding vector of the flow must satisfy diophantine conditions in order for this problem to be solvable .typically , the numerical methods produce a fourier expansion of the conjugacy which is determined up to a translation .the manifolds are located by projection onto a discrete set of fourier modes and solving a fixed point equation for the coefficients of the conjugacy .the computation of stable and unstable manifolds of equilibria and periodic orbits is a `` one - sided '' boundary value problem .the manifolds consist of trajectories that are asymptotic to the equilibrium or periodic orbit . in the case of an equilibrium point of an analytic vector field ,the local stable and unstable manifolds are analytic graphs that have convergent asymptotic expansions whose coefficients can be determined iteratively .the most challenging aspect of computations of two dimensional manifolds arises from the way that trajectories do or do not spread out in the manifold as one departs from the equilibrium or periodic orbit .as illustrated by the lorenz manifold , the manifolds can twist and fold in ways that present additional geometrical complications for numerical methods .the development of rigorous bounds for these invariant manifolds follows similar principles to the verified computation of individual trajectories .multiple time scale vector fields , also known as singularly perturbed differential equations , occur in many settings : systems of chemical reactions , lasers , fluid dynamics and models of the electrical activity of neurons are a few examples .borrowing terminology from fluid dynamics , the solutions of these systems can have ( boundary ) layers in which the fast time scale determines the rate at which the solution varies as well as long periods of time during which the solution evolves on the slow time scale .the slow motion typically occurs along _ slow manifolds _ that are locally invariant .the slow manifolds play a prominent role in qualitative analysis of the dynamics and bifurcations of multiple time scale systems .indeed , model reduction procedures are frequently employed that replace a model by a lower dimensional model that approximates the motion along a slow manifold and ignores the fast dynamics of the original model .the ideal for this type of model reduction is an algorithm that computes the slow manifold exactly .that ideal seems very difficult to achieve and is not addressed in this paper . instead , we seek rigorous bounds for the location of the slow manifold that are tight enough to give information that can be used in the analysis of bifurcations of the system . 
to explain the methods we introduce in the simplest terms , we focus upon _ slow - fast _ systems that contain an explicit parameter that represents the ratio of time scales .moreover , we restrict attention to systems that have two slow variables and one fast variable and use a single example as a test case . in principle , the methods generalize to the case of codimension one slow manifolds , and the definitions and existence proofs in sections [ s_overmethod ] and [ s_existence ] have obvious higher dimensional analogues . in practice , however , due to the scarcity of tools for computational geometry in higher dimensions , implementing a higher dimensional version would be a significant extension of the work described in this paper .we comment on generalizations from the setting of systems with two slow and one fast variable in the discussion at the end of the paper , but leave consideration of further details to future work .slow manifolds of multiple time scale systems present unique theoretical and numerical challenges compared to the computation of invariant tori and ( un)stable manifolds .the first of these challenges is that theory is developed primarily in terms of `` small enough '' values of the parameter measuring the time scale ratio of a slow - fast system . numerically , one always works with specific values of .the convergence of trajectories as is singular , making it difficult to develop methods framed in terms of asymptotic expansions in .divergent series are the rule rather than the exception in this context .the rich history of numerical integration methods for stiff systems and the large literature on reduction methods for kinetic equations of chemical systems reflect the difficulty of computing attracting slow manifolds , the simplest case for this problem . computing slow manifolds of saddle - typepresents the additional challenge that most nearby trajectories diverge from the slow manifold on the fast time scale in both forward and backward time .the second theoretical difficulty in finding slow manifolds is that they are only locally invariant in most problems of interest .the local invariance is accompanied by a lack of uniqueness : possible manifolds intersect fast subspaces in open sets whose diameter is exponentially small in ; i.e. , bounded by for a suitable . methods based upon root finding of a discretized set of equations must choose a specific solution of the discretized equations .we compute enclosures of slow manifolds by exploiting transversality properties that improve as while being suitable for fixed values of .the methods do not identify a unique object and are well suited to locating locally invariant slow manifolds . if is a hypersurface and is a vector field , then transversality of to is a _ local _ property : verification does not rely upon computation of trajectories of . for a slow - fast vector field with one fast variable , translation of a normally hyperbolic critical manifold along the fast direction produces a transverse hypersurface when the translation distance is large enough .translation distances proportional to suffice . in this paper, we use piecewise linear surfaces as enclosing manifolds . 
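To fix the notation used in the rest of this section, the toy system below (not the example analysed later in the paper) is written in the slow-fast form with one fast variable and two slow variables. Its critical manifold is the zero set of the fast right-hand side, and freezing the slow variables after rescaling time gives the layer equation.

```python
import numpy as np

eps = 0.01

def f(x, y, z):                # fast equation: eps * dx/dt = f(x, y, z)
    return y + z - x**3 + x

def g(x, y, z):                # slow equations: dy/dt = g1, dz/dt = g2
    return np.array([-x, 0.1 - z])

def full_vector_field(x, y, z):
    """Right-hand side of the slow-fast system on the slow time scale."""
    return np.array([f(x, y, z) / eps, *g(x, y, z)])

def layer_vector_field(x, y, z):
    """Layer equation: rescale time and set eps = 0, so the slow variables are frozen."""
    return np.array([f(x, y, z), 0.0, 0.0])

# points of the critical manifold {f = 0} are equilibria of the layer equation
print(abs(f(1.0, 0.0, 0.0)) < 1e-12, layer_vector_field(1.0, 0.0, 0.0))
```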
for the examplewe consider , transversality at vertices of a face of implies transversality of the entire face .this reduces the computational complexity of checking transversality sufficiently that iterative refinement of the enclosures was feasible .since slow manifolds are objects that are defined asymptotically in terms of the parameter , they are not directly computable using finite information .one part of this paper is devoted to the development of a mathematical framework within which slow manifolds are defined for fixed values of .we define _ computable slow manifolds _ and relate this concept to the slow manifolds studied in geometric singular perturbation theory .all computations and statements in this paper are for computable slow manifolds .this is similar in spirit to the finite resolution dynamics approach of luzzatto and pilarczyk .our work is motivated by the study of tangencies of invariant manifolds .significant global changes in the dynamics of a system have been observed to occur at bifurcations involving tangencies .proving the existence of tangencies is intrinsically complicated because the manifolds themselves must be tracked over a range of parameters .computer - aided proofs of tangencies of invariant manifolds have previously been studied by arai and mischaikow in , and wilczak and zgliczyski in . in section [ s_tang ], we prove that a tangency bifurcation involving a computable slow manifold occurs in the singular hopf normal form introduced in .slow - fast differential equations have the form : where , , , and .we assume that the vector field is smooth ( ) , although most of this paper can easily be adapted to the finitely differentiable setting . here and are the fast and slow variables , respectively . throughout the paper we consider the case and of two slow variables and one fast variable .we define the _ critical manifold _, as the set the critical manifold is normally hyperbolic at points where is hyperbolic ; i.e. , has no eigenvalue whose real part is zero .points where is singular are referred to as folds . on the normally hyperbolic pieces of the critical manifold , given as a function of , .the corresponding differential equation is called the slow flow .if one instead rescales time with and puts in , one gets the _ layer equation _ : note that the manifold is exactly the set of critical points for the layer equation .singular perturbation theory studies how the solutions to ( [ eq_slowfast ] ) for small , but positive , can be understood by studying solutions to ( [ eq_slowsystem ] ) and ( [ eq_fastsystem ] ) .when is normally hyperbolic and is sufficiently small , geometric singular perturbation theory ensures that the critical manifold perturbs to a _slow manifold_. slow manifolds are _ locally invariant _ and close to the critical manifold. however , slow manifolds are not unique , although different choices are within distance from each other .we denote slow manifolds by .the purpose of this work is to compute approximations of that are guaranteed to be of a certain accuracy .this is achieved by computing two approximations that enclose the slow manifold .the two approximations of the slow manifold are triangulated surfaces transverse to the vector field .to prove the transversality , we use interval analysis , to be explained in subsection [ ss_valnum ] .interval analysis is a general technique that enables mathematically rigorous proofs of inequalities on a digital computer . to simplify notationwe denote the two slow variables by and , i.e. 
, from now on , and the vector field in the slow variables is denoted by .we also assume that and are independent of . to summarize , the systems we study are of the following form : where , and .we will sometimes use the notation ._ interval analysis _ was introduced by moore in as a method to use a digital computer to produce mathematically rigorous results with only approximate arithmetic .tucker is a modern introduction to the subject , and more advanced topics are discussed by neumaier .the main idea is to replace floating point arithmetic with a set arithmetic ; the basic objects are intervals of points rather than individual points .together with directed rounding this method yields an enclosure arithmetic that allows for the rigorous verification of inequalities . to use interval analysis to produce a mathematical proof , often called _ ( auto-)validated numerical methods _, one has to prove that the statement at hand can be reduced to a finite number of inequalities , and then verify that these inequalities are satisfied .interval arithmetic is used for the verification .the objects used to describe sets in validated numerics are typically convex sets in some coordinate system , e.g. , intervals , parallelograms , or ellipsoids . in this paper we will employ triangular meshes of surfaces , an approach that previously , in this setting , only has been used in . the study of invariant manifolds is central to the theory of dynamical systems .the behavior of a system can often be understood by understanding its invariant structures .numerical computations of invariant manifolds are important in many applications .there are no universally applicable methods to compute invariant manifolds ; to be efficient , they have to be tailored for the specific class of problems one is studying . computing invariant manifolds of slow - fast systems is particularly challenging . two existing methodsare , and no rigorous methods exist .the main idea of our method is to refine a first order approximation of the manifold by local modifications that maintain transversality of the enclosing manifolds .interval arithmetic is used to make the local computation of transversality rigorous .this is similar in spirit to the methods developed in to study the phase portraits of planar polynomial vector fields . even in the planar casethe verified computation of phase portraits is a challenging task , and the few methods that exist include .this section describes our method to compute enclosures of the slow manifold of a slow - fast system of the form ( [ eq_slowfast_12 ] ) .we start by giving an overview of the main ideas of the method .there are five main steps in the algorithm : 1 .triangulation of the critical manifold , 2 . computing the correction term for the slow manifold , 3 . constructing left and right perturbations of the slow manifold , 4 . proving that the left and right perturbations enclose the manifold , and 5 . tightening the enclosure by contracting the left and right perturbations towards each other .the first step is to compute a triangulation of the critical manifold , which is adapted to its geometry .the manifold is defined implicitly by the condition . in the example we consider in section [ s_method ], we solve this equation to obtain explicit expressions for the functions of the form whose graphs lie in the critical manifold .alternatively , one computes approximations to using , e.g. 
, automatic differentiation and continuation procedures .there are many software packages to compute triangulations of surfaces ; we use cgal via its matlab interface .when a part of the critical manifold is represented as the graph of a function , its domain in the plane of the slow variables can be triangulated , and then this triangulation can be lifted to the graph , as illustrated in figure [ f_triangulation ] .so that the triangles in the lifted triangulation have similar diameters , we choose triangles in the plane of the slow variables to have diameters that depend upon the gradient of .we stress that the rest of the algorithm is independent from how the triangulation of the critical manifold is constructed . rather than using axis parallel patches, one could , e.g. , use approximate trajectory segments of the reduced system to determine the piece of the domain of the slow variables , where the slow manifold is computed .\(a ) ( b ) we compute an approximation to the slow manifold using a procedure similar to that employed in stiff integrators that use rosenbrock methods .the tangent space to the critical manifold is orthogonal to the vector . according to the fenichel theory ,the slow manifold is close to the critical manifold in the topology , so its tangent space is approximately normal to . at a point in the ( lifted ) triangulation of , we look for a nearby point at which the vector field is orthogonal to .since and the normal hyperbolicity implies that , is an approximate solution to this equation . setting to this value, we take as a point of the triangulation of the approximate slow manifold .the critical manifold and the approximation to the slow manifold are illustrated in figure [ f_critslowmfd ] ( a ) and ( b ) , respectively .we next perturb this triangulation of the approximate slow manifold in both directions parallel to the -axis , as in figure [ f_critslowmfd ] ( c ) , by a factor , where is a natural number that will be specified later . in case that is very small ,we replace it by a term .this procedure yields two surfaces that are candidates for the enclosing surfaces that we seek .\(a ) ( b ) ( c ) to verify that the surfaces enclose the slow manifold , we check whether the flow of the full system ( [ eq_slowfast_12 ] ) is transversal to the candidate surfaces . asthe candidate surfaces are piecewise linear , we have to define what we mean by transversality at the edges and vertices of the triangulation .[ d_cone ] let be a triangulated , piecewise linear two dimensional manifold .since is a manifold , it locally separates into two sides .we say that a vector is transverse to if and point to opposite sides of .a smooth vector field is transverse to if it is transverse to at every point of .figure [ f_cone](a ) illustrates this definition .trajectories of the flow generated by will all cross from one side to another if is transverse to .if and are triangulated surfaces transverse to the flow with opposite crossing directions , then they form enclosing surfaces for the slow manifold we seek .\(a ) ( b ) [ f_triangle ] transversality is a condition that is local to each face of the triangulation , so we can check it on each face of the triangulation separately . 
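As a concrete reading of the correction step (step 2) illustrated in figure [ f_critslowmfd ](b), the sketch below shifts a critical-manifold point along the fast direction so that the vector field becomes approximately orthogonal to the normal of the critical manifold, for the case where the critical manifold is available as a graph x = h(y, z). The derivatives are approximated by finite differences purely for illustration; in a verified computation the exact expressions would be used, and f, g1, g2, h and the step d are placeholders.

```python
import numpy as np

def corrected_point(y, z, h, f, g1, g2, eps, d=1e-6):
    """Shift a critical-manifold point along x so that the vector field (f/eps, g1, g2)
    is approximately orthogonal to the normal (1, -h_y, -h_z) of the critical manifold,
    using the linearisation f(x, y, z) ~ f_x * (x - h(y, z))."""
    x0 = h(y, z)
    hy = (h(y + d, z) - h(y - d, z)) / (2 * d)        # partial derivatives of h
    hz = (h(y, z + d) - h(y, z - d)) / (2 * d)
    fx = (f(x0 + d, y, z) - f(x0 - d, y, z)) / (2 * d)
    x1 = x0 + eps * (hy * g1(x0, y, z) + hz * g2(x0, y, z)) / fx
    return np.array([x1, y, z])
```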
to check the transversality condition on one face, we estimate the range of the inner product of the vector field with the normal of the face , as illustrated in figure [ f_triangle](b ) .details about the existence of locally invariant , normally hyperbolic manifolds inside the enclosure are addressed in section [ s_existence ] below .the final part of the algorithm is to iteratively update the location of the vertices by moving them towards each other in small steps along the fast direction .we check that the transversality properties still hold , see figure [ f_updatemfd ] .this tightening step is stopped when no more vertices can be moved .note that the vertices of all triangulations : the critical manifold , the approximate slow manifold , and the two perturbed manifolds , all have the same components .\(a ) ( b ) ( c )the method outlined in the previous section constructs two triangulated surfaces , in the phase space of a slow - fast system , that are transversal to the flow for the given . in this section we discuss the existence of locally invariant manifolds enclosed between these two triangulations .we denote the two enclosing surfaces by and , and the region enclosed between them by .note that and are graphs over the same compact region , so is well defined . specifically ,if for some compact set of slow variables , , , then , ( y , z)\in d\} ] .for other systems , any suitable method for finding a sufficiently accurate approximation to can be used . to construct the vertices of a delaunay triangulation of , as shown in figure [ f_triangulation](a ) , we start with a triangulation of the domain of , but want the diameter of the triangles on to be almost uniform . setting , , and with to be chosen later ,we select the following points in the plane as vertices of a triangulation : note that these points are aligned along lines parallel to the fold curve where .let denote the delaunay triangulation generated by the set and its lift to , using the map .clearly is a homeomorphism ; i.e. , the set of vertices , edges , and faces of , denoted by , , and , are defined by , , and , respectively . and are shown in figures [ f_triangulation](a ) and [ f_triangulation](b ) , respectively .our next step is to perturb , as illustrated in figure [ f_critslowmfd ] , so that it lies closer to the slow manifold we are trying to enclose .fenichel theory , , guarantees that for sufficiently small , is the graph of a function with domain and . to compute triangulations , that approximate , we write in the form substituting into the equation , we get that : to compute and , we use that , and hence and in addition , since , thus , we can solve equation for , up to , and substitute for using , obtaining which in our case , considering reads : for , that we will use in section [ s_tang ] , we get : we put , and define : is our approximation to the slow manifold , shown together with in figure [ f_critslowmfd](b ) .heuristically , it is to at the vertex points .let denote the following map that moves points parallel to the -axis : we define our candidate enclosing surfaces as : where .the initial choice for in our implementation was , but we would have chosen a smaller if that had failed .the verification step of the algorithm includes a loop that divides by a factor upon failure and repeats the transversality test .note that the region that is enclosed by and is disjoint from the critical manifold so long as .the construction of , , and is shown in figure [ f_critslowmfd ] . 
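to make the construction just described concrete , the following python sketch triangulates a rectangular domain of the slow variables , lifts the triangulation to the graph of a critical - manifold sheet , applies a first - order correction , and shifts the result in both fast directions to obtain the two candidate enclosing surfaces . the functions `h0` and `h1` , the slow vector - field components `g1` and `g2` , the partial derivative `f_x` and the perturbation size `delta` are placeholders for expressions that are not reproduced in the text ; in particular the correction formula used here is the generic fenichel - type first - order ansatz , not necessarily the paper 's exact formula , so this is an illustration of the geometry rather than a verified computation .

```python
import numpy as np
from scipy.spatial import Delaunay

def h0(y, z):
    # hypothetical explicit branch x = h0(y, z) of the critical manifold f = 0
    return np.sqrt(np.maximum(y, 1e-12))

def h1(y, z, g1, g2, f_x):
    # generic first-order slow-manifold correction (assumption):
    # h1 = (dh0/dy * g1 + dh0/dz * g2) / f_x, evaluated on the critical manifold
    d = 1e-6
    h0_y = (h0(y + d, z) - h0(y - d, z)) / (2 * d)
    h0_z = (h0(y, z + d) - h0(y, z - d)) / (2 * d)
    x0 = h0(y, z)
    return (h0_y * g1(x0, y, z) + h0_z * g2(x0, y, z)) / f_x(x0, y, z)

def build_enclosure(y_grid, z_grid, g1, g2, f_x, eps, delta):
    """triangulate the slow domain, lift it, correct it, and perturb it in the fast direction;
    delta is the perturbation size (chosen in the verification step, e.g. a power of eps)."""
    Y, Z = np.meshgrid(y_grid, z_grid)
    pts2d = np.column_stack([Y.ravel(), Z.ravel()])
    tri = Delaunay(pts2d)                               # triangulation of the slow variables
    y, z = pts2d[:, 0], pts2d[:, 1]
    x_approx = h0(y, z) + eps * h1(y, z, g1, g2, f_x)   # approximate slow manifold
    m_plus = np.column_stack([x_approx + delta, y, z])  # right candidate surface
    m_minus = np.column_stack([x_approx - delta, y, z]) # left candidate surface
    return tri.simplices, m_plus, m_minus
```

in the verification step described next , `delta` would be divided by a fixed factor and the two surfaces rebuilt whenever the transversality test fails .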
to prove that a slow manifold is located between and , it suffices to prove that the vector field is transversal to each face of the triangulations , with opposite crossing directions for and . for the remainder of this subsection ,we restrict our attention to a single triangle .local transversality , i.e. , the verification of transversality on each face in the triangulation implies global transversality of and .let be one face in or .we denote its vertices by , , and and its edges by , , and with the edge between the vertices and . to verify that the vector field is transverse , it suffices to prove that the inner product between the normal of the face and the vector field is non - zero .note that in contrast to most work on slow - fast systems , this condition , which is the main condition checked by our algorithm , becomes _ easier _ to verify as .the reason is that as , the condition becomes essentially one - dimensional .we denote the normal to the face , normalized so that the first component is positive , by .this is possible because the first component is zero exactly at the folds , where the critical manifold fails to be normally hyperbolic . with this notation, the condition that we have to verify is condition ( [ eq_transcond ] ) is equivalent to a verification that , \lambda_1+\lambda_2+\lambda_3=1,\ ] ] which is an enclosure of the range of a function on a compact domain .this problem is the one we solve with interval analysis . directly enclosing ( [ eq_transcondface ] ) using interval analysis in order to verify that the function is non - zero is , however , not optimal .the reason is that the problem is sufficiently sensitive that we would have to split the domains into a very fine subdivision , and since this has to be done on each face , such a procedure would be prohibitively slow .our actual approach is based on monotonicity ; first we prove that is monotone on the face and on its restriction to the edges. then we compute for the three vertices and verify that the interval hull of the results , i.e. , the smallest representable interval containing the results , does not contain .note that this amounts to showing that the dot - product does not change sign on the face .we introduce if on all of then has no critical points inside of and we can restrict our attention to the edges , i.e. the boundary of . consider an edge \} ] in the plane . since is fixed after the verification step we henceforth drop the indices on and .our aim is to produce enclosures that are as tight as possible , given the mesh size .we , therefore , try to improve the enclosure .the procedure is illustrated in figure [ f_updatemfd ] .we do this by iteratively updating each of the vertices in the triangulation by moving them towards each other along the segment joining them .this segment is parallel to the x - axis due to our earlier constructions .the moves are done in two steps : ( 1 ) a tentative move is made of a vertex , and ( 2 ) the transversality conditions of all faces attached to this vertex are verified .when the transversality holds , the vertex is fixed at its new position and we proceed to the next vertex . the efficiency of this procedure will depend on several factors , primarily the ordering of the vertices and how much the vertices are moved . 
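the vertex test at the heart of the transversality check can be sketched as follows . a minimal interval class stands in for a verified interval library ( the implementation in the paper uses intlab , whose directed rounding plain floating point does not provide ) , `field` is a placeholder for an interval extension of the vector field , and the sketch assumes that monotonicity of the dot product on the face has already been established , so that evaluating it at the three vertices and taking the interval hull is sufficient .

```python
class Interval:
    # toy interval arithmetic without directed rounding; for illustration only
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, hi if hi is not None else lo
    def __add__(self, other):
        other = other if isinstance(other, Interval) else Interval(other)
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        other = other if isinstance(other, Interval) else Interval(other)
        p = [self.lo * other.lo, self.lo * other.hi, self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    __radd__, __rmul__ = __add__, __mul__

def hull(intervals):
    return Interval(min(i.lo for i in intervals), max(i.hi for i in intervals))

def face_is_transversal(vertices, normal, field):
    """check that the dot product of the vector field with the face normal keeps one sign,
    using the interval hull of its values at the three vertices of the face."""
    dots = []
    for v in vertices:                                   # v = (x, y, z) of one vertex
        f = field([Interval(c) for c in v])              # interval enclosure of the field
        d = sum((Interval(n) * fi for n, fi in zip(normal, f)), Interval(0.0))
        dots.append(d)
    h = hull(dots)
    return h.lo > 0.0 or h.hi < 0.0                      # zero excluded => transversal
```

checking the sign of the interval hull at the vertices is much cheaper than enclosing the dot product over the whole face , which is why the monotonicity step is carried out first .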
by moving a vertex only a fraction of what seems to be possible , the effect of the ordering of the vertices can be minimized .the penalty of smaller updates is that the procedure has to be run more times .larger moves might be possible if an appropriate sorting algorithm were used , but we have not found an effective and efficient sorting criterion . instead , we heuristically determine an update factor that optimizes the accuracy vs complexity . given a right vertex , , and a left vertex , , such that , we move each of them towards each other by an amount we run the procedure to refine the enclosures of the slow - manifold several times , until no further improvement is possible .the quantity we use to measure the quality of the enclosures is the average distance between the two triangulations at the vertices .let denote the number of vertices of the triangulations ; by construction and have the same number of vertices , edges , and faces .the only difference between and is the values of the -coordinates .we put if the triangulation is fine enough will be .this fact is investigated numerically in section [ s_numres ] . in order to ensure that there are manifolds inside of the set enclosed by and , we need to have invariant cone fields on , as introduced in section [ s_existence ] . in this subsectionwe describe how such cone fields - one horizontal and one vertical - are constructed .recall , see , that a standard horizontal or vertical cone for a phase space with variables is a set or , respectively , and that a cone is the image of a standard cone under an invertible linear map .equivalently , a cone is the set of points where a non - degenerate indefinite quadratic form is non - negative . since horizontal and vertical cones are traditionally in the expanding and contracting directions , respectively , we will call the cone in the normal direction the vertical cone , and the cone in the direction of the slow manifold the horizontal cone . also recall that a cone field is invariant if it is mapped into itself by the derivative of the dynamics , i.e. , if the set where the quadratic form is non - negative is mapped by the derivative into the set where the quadratic form at the image point under the map is non - negative . 
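the following sketch illustrates , in a purely heuristic way , what invariance of a cone under a linear map means : vectors in a standard cone defined by an indefinite quadratic form are sampled and their images are tested against the same form . the matrix `d` is a hypothetical stand - in for the derivative of the time - flow map obtained from the variational equation ; the paper carries out this check rigorously , in the eigenbasis of the flow derivative and with interval arithmetic , which this sketch does not attempt .

```python
import numpy as np

def in_cone(v, q):
    # membership in the cone { v : v^T q v >= 0 } defined by the indefinite form q
    return v @ q @ v >= 0.0

def cone_maps_into_itself(d, q, n_samples=2000, seed=0):
    """heuristic check that d maps the cone of q into itself, by sampling."""
    rng = np.random.default_rng(seed)
    for _ in range(n_samples):
        v = rng.normal(size=q.shape[0])
        v /= np.linalg.norm(v)
        if in_cone(v, q) and not in_cone(d @ v, q):
            return False
    return True

# example: a 'vertical' cone |x1| >= |(x2, x3)| around the fast direction,
# i.e. q = diag(1, -1, -1), and a hypothetical flow derivative that expands x1
q = np.diag([1.0, -1.0, -1.0])
d = np.diag([3.0, 0.5, 0.5])
print(cone_maps_into_itself(d, q))   # True for this expanding example
```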
for the case at hand we will use for both the horizontal and vertical cones in an appropriate coordinate system , such that the normal direction is in the vertical cone .a cone field is a map that associates a cone to each point of its domain .given that only has one nonlinear component , we will use constant cone fields .to prove that the cone fields are invariant , we solve the variational equation for the time flow map , and use the eigendirections of the derivative of the flow as a basis , in which we represent the standard horizontal and vertical cones with .we verify that the vertical and horizontal cone fields are invariant , and that the vertical cone contains the fast direction , which ensures that defined in projects injectively onto the slow variables , and , thus , is a graph over them .the flow time needs to be large enough for us to be able to prove the separation of the horizontal and vertical directions , but small enough that we do not move away too far in phase space .the value turned out to be a good choice .an implementation of the method described above has been made using the intlab package for interval arithmetic .a detailed description of the main algorithm is given as algorithm [ mainalgorithm ] .the algorithm that checks if the vector field is transversal to a face is given as algorithm [ transversalityalgorithm ] .algorithm [ mainalgorithm ] takes a triangulation as input .that triangulation can be computed with any method , not necessarily the one outlined in section [ ss_triangulation ] . in algorithm [ transversalityalgorithm ] the function returns if .[ mainalgorithm ] transversal = false exit(fail ) [ transversalityalgorithm ] this section we describe the results of several experiments illustrating the behavior of the enclosure computations . given a system and a domain , there are two numbers that can be changed , the number , which controls the mesh size , and the value of . in the experiments below, we use the normal form , , for the singular hopf bifurcation discussed in section 3 .we choose the same values of the constants as in the first part of : , , , and .we enclose the branch of the critical manifold with .the results of four experiments are described below , in each of them we present the results as a plot of vs . in the first experiment, we fix the domain as a small strip : ] and give the results for several values of ( defined implicitly by changing ) . in the second , we take a square domain : ] for comparison .our third example analyzes the effect and usefulness of the tightening step described in section [ ss_tight ] . in our fourth example, we investigate the heuristic constant in the denominator of ( [ eq_update ] ) ; the domain and constants are from the first example with its finest mesh .note that our domains are such that , which means that the assumptions from section [ s_existence ] are satisfied , i.e. , all trajectories with initial conditions in leave in both forward and backward time , and tangencies of the vector field with occur along a plane where they have quadratic tangency . 
during the computations we use the function defined in ( [ eq_sdef ] ) to prove the monotonicity properties that enables us to efficiently prove transversality .we note that for the example at hand , is a trivial calculation shows that if and only if and is a multiple of , so monotonicity always holds on the right branch of the critical manifold .the convergence rate of the enclosures at the vertex points should ideally be , since we have corrected for the linear term in the asymptotic expansion of .our interpolating surfaces between the vertex points are , however , linear .the discretization size thus puts a curvature dependent restriction on the tightness of the enclosure . in figure [ f_varyiota](a ) , we illustrate how , for different values of first decreases , but then reaches a plateau . looking at as a function of , we see that as the mesh size decreases ( increases ) , is approximately proportional to , as expected .this gives a heuristic picture of how depends on : first , there will be a period of quadratic convergence , where the accuracy depends on ; while at the end , the accuracy oscillates around some fixed value and depends on the mesh size . in the intermediate region , the accuracy depends both on the ratio of time scales and the mesh size . in this region ,the exponent will decrease from to .figure [ f_varyiota](b ) illustrates the quadratic convergence region for the finest mesh size from figure [ f_varyiota](a ) . as the plateau is reached defined in starts to increase . for enclosure is too wide for all trajectories inside to be slow . in table[ t_varyiotaslopes ] we give the slopes on the interval ] , for some different values of . the third row gives the maximum value of , where the flow is slow , i.e. , . [ cols="^,^,^,^,^,^,^,^",options="header " , ] * remark . * note that since the position of as well as the map to are computed using interval arithmetic , their computed positions have errors due to over estimation associated with them .these errors have to be taken into account when choosing the value at which to place the half - plane , the interval boundaries and , and the functions and . generally , placing at greater values of results in tighter bounds for the slow manifold , and the repelling nature of the slow manifold spreads trajectories that were initially close in the fundamental domain far apart , making it easier to verify assumptions [ a_tangency].(ii - iv ) .we found the size of the -sets constructed in section [ sss_wu ] to be large enough to keep the validated numerical integration to short enough to not accumulate prohibitively large errors , while being small enough to be efficiently computable .* to give further insight into what happens after the bifurcation we note that the following set is forward invariant . for other values of the parameters , similar sets can be constructed . 
for , we verify that the above conditions are satisfied , with , for the point .thus , and for a part of the unstable manifold past the tangential bifurcation .computation of the slow manifolds in a normal form for singular hopf bifurcation served as a case study for this paper .a singular hopf bifurcation in slow - fast systems with two slow and one fast variable occurs when an equilibrium point crosses between attracting and repelling slow manifolds .the dynamics associated with this crossing a _ folded saddle - node type ii _ in the singular limit is complicated .the small amplitude oscillations emanating from the equilibrium point are part of _ mixed mode oscillations _ in some examples , notably the model originally studied by koper .subsidiary bifurcations occur , including tangency between the repelling slow manifold and the two dimensional unstable manifold of the equilibrium point .tangency bifurcations form part of the boundary of the parameter space region in which mixed mode oscillations occur in the koper model , making them essential to understanding global aspects of the dynamics in this and other systems .since there are no analytic methods for locating the tangency bifurcations , this paper uses verified computing methods to prove the existence of tangency bifurcations between a slow manifold and an unstable manifold of an equilibrium point for the first time .some of our ideas generalize to the case of slow manifolds of saddle type . to compute normally hyperbolic manifolds of saddle type , see e.g. , one usually first computes the manifold s stable and unstable manifolds , and then intersects them . to compute a saddle slow manifold in a three dimensional ambient space using our ideas , one could compute enclosures of the stable and unstable manifolds , as presented in this paper . the existence argument given in section [ s_existence ]can be modified to this setting , under appropriate assumptions on the dynamics on the slow manifold .generalization to slow manifolds of saddle type in higher dimensional ambient spaces is substantially more challenging .we made several design decisions while constructing our algorithm for computing slow manifolds .this section discusses details of several and motivates our choices . * our enclosureswere constructed as pairs of enclosing transversal piecewise linear surfaces .there are several alternative approaches to how to construct and refine the vertices of the enclosing triangulated surfaces and . for the examples in sections [ s_numres ] and [ s_tang ] we used rectangular patches in the domain of the slow variablesinstead , one could construct the triangulations of the original domain in the slow variables by considering a dynamically defined region , constructed by flowing a set of initial conditions on the critical manifold with the slow flow , and use a discretization of those trajectories as the vertices of the triangulation .* we considered other possibilities for moving vertices in section [ ss_impbound ] ; namely , to move them along trajectories of the flow of ( [ eq_slowfast_12 ] ) , or to move them along the normal of the triangulation . both of these methods have serious disadvantages . 
when moving vertices along the flow of the system , we have to carefully check whether the vertices are moved past edges , thereby destroying the integrity of the triangulation .if the triangulation remains a graph over the domain , it is possible to generate a new triangulation by a delaunay - type algorithm , and lift it to the surface , but if two vertices flow to the same coordinate this is no longer possible. additionally , this method of moving vertices moves the two enclosing surfaces by different amounts , so that we obtain an enclosure of a smaller part of the slow manifold .a third drawback is that the triangulations might develop very acute triangles .finally , numerical integration of a large number of vertices is slow compared to the approach that we use .moving vertices along the normals combines the worst of both methods : we no longer control the triangulations , and we might introduce violations of the transversality conditions .* the tightening procedure described in section [ ss_impbound ] only updates one vertex at the time , i.e. , we move one vertex a big step and if all the faces attached to it are still transversal to the flow , then we move it .an alternative would be to move not only the vertex itself , but at the same time all vertices attached to it by an edge .such a procedure would work as follows : when it is one vertex `` turn '' , only update it by a fraction of its potential improvement , and simultaneously move the ones it attaches to , by a smaller amount .the smaller neighbour updates should be such that the expected value of the total update of each vertex stays the same as in section [ ss_impbound ] .the benefit of such an approach is that the triangulation is not skewed as much in each step , so it should be easier to verify the transversality condition . in practice , however , the gain of this approach is negligible , compared to a slight increase of the denominator of ( [ eq_update ] ) .there are also disadvantages of such an approach , primarily in its computational complexity .each time an update is made , one has to not only locate all its neighbouring vertices and update them , but also locate all of their neighbouring faces and check the transversality condition on them . in the results presented in section [ s_numres ] , we thus only update one vertex at the time . *we construct invariant cone fields on to prove that it contains normally hyperbolic locally invariant manifolds .we constructed these manifolds by flowing a `` ribbon '' around the inflowing boundaries of the enclosure .the property that our enclosures were aligned with the flow in the sense that for one of the slow variables the vector field is non - zero , was crucial for proving the existence of computable slow manifolds . in generalone could also use the invariant cone fields to show that the graph transform is well defined , by adapting the method in . 
to prove the convergence of such a scheme would require very careful estimates of the expansion and contraction rates , and the norms of the nonlinear components of the vector field . an alternative is to define an extension of the vector field outside of that has a slow manifold that is invariant rather than just locally invariant . global invariance together with normal hyperbolicity would give a unique manifold for the extension using the technique from . given normal hyperbolicity , ensured by the existence of the cone field , either method would give the existence of a ( non - unique ) normally hyperbolic manifold , which is the graph over the slow variables . either of these approaches , however , involves many subtle details that need to be clarified for the case at hand . * if the mesh size of piecewise linear enclosing surfaces remains fixed as decreases , then the curvature of the slow manifold becomes a limiting factor in the tightness of enclosures . with smoother enclosing manifolds , tighter enclosures are likely to be possible . we did not attempt this because the transversality calculations for piecewise linear systems were particularly simple in the singular hopf normal form we studied . t. j. was funded by a postdoctoral fellowship from _ vetenskapsrådet _ ( the swedish research council ) . j. g. and p. m. were partially supported by a grant from the national science foundation . , _ on a fast and accurate method to enclose all zeros of an analytic function on a triangulated domain _ , in proceedings of para - 2008 , to appear in lecture notes in computer science 6126/6127 , springer - verlag , 2011 .
slow - fast dynamical systems have two time scales and an explicit parameter representing the ratio of these time scales . locally invariant slow manifolds along which motion occurs on the slow time scale are a prominent feature of slow - fast systems . this paper introduces a rigorous numerical method to compute enclosures of the slow manifold of a slow - fast system with one fast and two slow variables . a triangulated first order approximation to the two dimensional invariant manifold is computed `` algebraically '' . two translations of the computed manifold in the fast direction that are transverse to the vector field are computed as the boundaries of an initial enclosure . the enclosures are refined to bring them closer to each other by moving vertices of the enclosure boundaries one at a time . as an application we use it to prove the existence of tangencies of invariant manifolds in the problem of singular hopf bifurcation and to give bounds on the location of one such tangency .
network modeling is the recent interest of a wide interdisciplinary academic field which studies complex systems such as social , biological and physical systems . by using a networked representation ,it is possible to compare , in the same framework , systems that are originally very different , so that the identification of some universal properties becomes much easier .moreover , a network description of complex system allows to obtain related information by means of completely statistical coarse - grained analyses , without taking into account the detailed characterization of the system .so universality and simplicity are two fundamental principles that are interested in the study of the emergence of collective phenomena in systems with many interacting components . from a general point of view , complex networks are connected graphs with , at most , a single edge between nodes where nodes stand for individuals and an edge corresponding to the interaction between individuals .the collective behavior of nodes is complex in the sense that it can not be directly predicted and characterized in terms of the behavior of each individual .the collective behavior is the responsible of interactions that occurs when pairs of components are connected with links .it is simple to find various systems in both nature and society that can be described in this manner .the most studied class of network modelling is the communication networks such as the internet and the world wide web .a second class is related to social networks such as sexual contact networks , friendship networks and scientific collaboration networks .the last large class is concerned to biological networks such as metabolic networks and food webs .rumors have been a basic element of human interaction for as long as people have had questions about their social environment .rumors are known for spreading between people quickly and easily since they are easy to tell , but hard to prove .sometime , a rumor contains harmful information , so it is impossible to ignore , and can has damaging and perhaps even deadly consequences .we know it is probably bad for us , and we know it can hurt those around us , but we often find it hard to resist becoming active participants in the rumor spreading process . in business settings , it can greatly impact financial markets .despite its obvious negative connotations , a rumor has the capacity to satisfy certain fundamental personal and social needs and can shape the public opinion in a country . to a great extent , rumors help people make sense of what is going on around them . in this case , rumors spreading becomes a means by which people try to get the facts , to obtain enough information so that it reduces their psychological discomfort and relieves their fears .a rumor can be interpreted as an infection of the mind .daley and kendall ( dk ) have introduced the original model of rumor spreading . in the dk modela closed and homogeneously mixed population can be classified into three distinct classes .these classes are called ignorants , spreaders and stiflers . 
the ignorants , those who have not heard the rumor yet , so they are susceptible to become infected by rumor .the second class consists of the spreaders , those who have heard the rumor and are still interested to transmit it .eventually , the stiflers , those who have heard the rumor but have lost interest in the rumor and have ceased to transmit it .when the pairwise contacts between spreader and others occur in the society , the rumor is propagated through the population .if a spreader meets an ignorant , the last one turns into a new spreader with probability ; otherwise , the spreader meets another spreader or stifler , so they conclude that the rumor is known and do not spread the rumor anymore , therefore , turning into stiflers with probability .an important variant model of dk is the maki - thompson ( mk ) model . in the mk model ,the rumor is spread by directed contacts of the spreaders with others .furthermore , in the contacts of the type spreader - spreader , only the initiating spreader becomes a stifler .therefore , there is no double transition to the stifler class . in the past ,the dk and the mk models were used extensively to study rumor spreading . in the above - mentioned models of rumor spreading ,the authors have investigated the rumor spreading in the homogeneous networks that their degree distributions are very peaked around the average value , with bounded fluctuations . while , in the last years , a huge amount of experimental data yielded undoubtful evidences that real networks present a strong degree heterogeneity , expressed by a broad degree distribution .recently , the model that we call the standard rumor model has been studied in ref . where authors studied a new model of rumor spreading on complex networks which , in comparison with previous models , provides a more realistic description of this process . in standard rumor model unlike previous rumor models that stifling process is the only mechanism that results in cessation of rumor spreading , authors assumed two distinct mechanisms that cause cessation of a rumor , stifling and forgetting . in reality, cessation can occur also purely as a result of spreaders forgetting to tell the rumor , or their disinclination to spread the rumor anymore .they took forgetting mechanism into account by assuming that individuals may also cease spreading a rumor spontaneously ( i.e. , without any contact ) with probability .furthermore , in the standard rumor spreading model , each node has an infectivity equal to its degree , and connectivity is uniform across all links .the generalization of the standard rumor model considered in ref . has been studied in ref . by introducing an infectivity function that determines the number of simultaneous contacts that a given node ( individual ) may establish with its connected neighbors and a connectivity strength function for the direct link between two connected nodes .these lead to a degree - biased propagation of rumors .to read more about social networks , one can refer to . in the above - mentioned models of rumor spreading, it has been assumed that in both interactions of stifling process ( i.e. , spreader - spreader and spreader - stifler ) the initiating spreader becomes a stifler with the same rate . 
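as a concrete illustration of these contact rules , the sketch below runs a discrete - time monte carlo version of the maki - thompson dynamics , extended with a forgetting mechanism as in the standard rumor model , on an erdos - renyi graph . the parameter names `lam` ( spreading ) , `sigma` ( stifling , applied to both spreader - spreader and spreader - stifler contacts ) and `delta` ( forgetting ) are stand - ins for the symbols that do not survive in the text , and the discrete - time update is only an approximation of the underlying continuous - time process .

```python
import random
import networkx as nx

IGNORANT, SPREADER, STIFLER = 0, 1, 2

def run_rumor(g, lam=1.0, sigma=1.0, delta=0.1, seed=0):
    """discrete-time maki-thompson dynamics with forgetting; returns the final stifler density."""
    rng = random.Random(seed)
    state = {v: IGNORANT for v in g}
    first = rng.choice(list(g))
    state[first] = SPREADER
    spreaders = {first}
    while spreaders:
        for u in list(spreaders):
            nbrs = list(g.neighbors(u))
            if nbrs:
                w = rng.choice(nbrs)                       # contact initiated by the spreader
                if state[w] == IGNORANT and rng.random() < lam:
                    state[w] = SPREADER                    # spreading process
                    spreaders.add(w)
                elif state[w] != IGNORANT and rng.random() < sigma:
                    state[u] = STIFLER                     # only the initiating spreader stifles
                    spreaders.discard(u)
            if state[u] == SPREADER and rng.random() < delta:
                state[u] = STIFLER                         # spontaneous forgetting
                spreaders.discard(u)
    return sum(1 for s in state.values() if s == STIFLER) / g.number_of_nodes()

g = nx.erdos_renyi_graph(1000, 0.006, seed=1)
print(run_rumor(g))   # final fraction of the population that heard the rumor
```

a single run returns the final density of stiflers ; averaging over many runs and graph realizations gives the quantities studied in the following sections .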
in this paperwe leave this assumption and assign a distinct rate for each interaction .more precisely , we introduce the generalized model in which the encounter of the spreader - spreader ( spreader - stifler ) leads to the stifler - spreader ( stifler - stifler ) with rate ( ) .we study in detail the dynamics of a generalized rumor model on some complex networks through analytic and numerical studies , and investigate the impact of the interaction rules on the efficiency and reliability of the rumor process . the rest of the paper is organized as follows . in section 2 ,we introduce the standard model of rumor spreading and shortly review epidemic dynamics of this model . in section 3we introduce the generalized rumor spreading model and analytically study the dynamics of this model on complex social networks in detail .the influence of the topological structure of the network in rumor spreading is studied by analyzing the behavior of several global parameters such as reliability , efficiency in section 4 .finally , our conclusions are presented in the last section .the rumor model is defined as follows .each of the individuals ( the nodes in the network ) can be classified in three distinct states with respect to the rumor as , the ignorant or the individual has not heard the rumor yet , , the spreader or the individual is aware of the rumor and is willing to transmit it , and , the stifler or the individual has heard the rumor but has lost the interest in it , and does not transmit it anymore . based on maki and thompson model , the directed contact between spreaders and the rest of the population is the main requirement for spreading the rumor . from mathematical point of view, these contacts only can occur along the links of an undirected graph , where and denote the nodes and the edges of the graph , respectively .the model that we call the standard model has been studied in ref . . by following ,the possible processes that can occur between the spreaders and the rest of the population are * spreading process : whenever a spreader meets an ignorant , the ignorant becomes a spreader at a rate . *stifling processes : * * when a spreader contacts another spreader , the initiating spreader becomes a stifler at a rate .* * when a spreader encounters a stifler , the spreader becomes a stifler at a rate .* forgetting process: there is a rate for a spreader to forget spreading a rumor spontaneously ( i.e. , without any contact ) .the individuals in social complex networks not only be in three different states but also belong to different connectivity ( degree ) classes , therefore we denote , and for densities of the ignorant , spreader , and stifler nodes ( individuals ) with connectivity at time , respectively .these quantities satisfy the normalization condition for all classes .we shortly review some classical results of standard model , where nekovee et al . described a formulation of this model on networks in terms of interacting markov chains , and used this framework to derive , from first - principles , mean - field equations for the dynamics of rumor spreading on complex networks with arbitrary degree correlations as follows : where the conditional probability means that a randomly chosen link emanating from a node of degree leads to a node of degree .moreover , we suppose that the degrees of nodes in the whole network are uncorrelated , i.e. 
, where is the degree distribution and is the average degree .they have used approximate analytical and exacted numerical solutions of these equations to examine both the steady - state and the time - dependent behavior of the model on several models of social networks such as homogeneous networks , random graphs and uncorrelated scale - free ( sf ) networks .they have found that , as a function of the rumor spreading rate , their model shows a new critical behavior on networks with bounded degree fluctuations , such as random graphs , and that this behavior is absent in sf networks with unbounded fluctuations in node degree distribution .furthermore , the initial spreading rate at which a rumor spreads is much higher in sf networks as compared to random graphs . in standard modelthe authors have mainly focused on critical threshold in several models of social networks but in the following section we introduce generalized rumor model and we concentrate on the final fraction of the population that heard the rumor , , when the spreading rate is fixed , , and we vary the value of other parameters of our model .in the standard rumor model , authors have assumed that both stifling processes , the and the , have the same rate . but in this paper , we leave this assumption and define and for and interactions , respectively .we will show that this separation of rates leads to notable results .the other interactions and their rates of our model are the same as the standard model .now , the mean - field rate equations can be rewritten as eq .( 4 ) can be integrated exactly to yield : where is the initial density of ignorant nodes with connectivity , and we have used the auxiliary function in order to get a closed relation for finding the final fraction of the population that heard the rumor , , it is more useful to focus on the time evolution of . assuming an homogeneous initial distribution of ignorant , i.e. , ( without lose of generality , we can put ) .the spreading process starts with one element becoming informed of a rumour and terminates when no spreaders are left in the population , i.e. , thus according to normalization condition , at the end of the epidemic we have after rather lengthy calculations , similar to what has been done in refs . , one can find the expansion of as }\ ] ] where is a finite and positive integral that has the form . at the end of epidemic ,the final fraction of the population that heard the rumor , , is given by regardless of the network topology and configuration , for any form of p(k ) , above relation can be simplified by expanding the exponential for the first order in , one obtains of the most important practical aspects of any rumor mongering process is whether or not it reaches a high number of individuals that heard the rumor . this value is simply given by the density of stiflers , , at the end of the epidemic and is called `` reliability '' of the rumor process . for obvious practical purposes , any algorithm or process that emulates an effective spreading of a rumor will try to find the conditions that under these the reliability reaches as much as possible value .another important quantity is the efficiency of the process which is the ratio between the reliability and the load imposed to the network .load means number of messages on average each node sending to its neighbors in order to propagate the rumor . 
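a direct way to explore these rate equations numerically is to integrate them over the degree classes of an uncorrelated network . the sketch below is a plausible reconstruction of the mean - field equations of the standard model , with the placeholder names `lam` ( spreading ) , `sigma` ( stifling ) and `delta` ( forgetting ) , the uncorrelated kernel k' p(k') / <k> , and a homogeneous initial seeding of spreaders ; it returns the final density of stiflers .

```python
import numpy as np
from scipy.integrate import solve_ivp

def mean_field_final_size(p, lam=1.0, sigma=1.0, delta=0.1, t_max=200.0):
    """integrate degree-class rate equations of the standard rumor model on an
    uncorrelated network; p[k] is the degree distribution, indices are degrees."""
    ks = np.arange(len(p), dtype=float)
    mean_k = np.sum(ks * p)

    def rhs(t, y):
        i, s, r = np.split(y, 3)                        # densities per degree class
        theta_s = np.sum(ks * p * s) / mean_k           # link points to a spreader
        theta_sr = np.sum(ks * p * (s + r)) / mean_k    # link points to a spreader or stifler
        di = -lam * ks * i * theta_s
        ds = lam * ks * i * theta_s - sigma * ks * s * theta_sr - delta * s
        dr = sigma * ks * s * theta_sr + delta * s
        return np.concatenate([di, ds, dr])

    s0 = np.full_like(p, 1e-3)                          # small homogeneous spreader seed
    i0 = 1.0 - s0
    r0 = np.zeros_like(p)
    sol = solve_ivp(rhs, (0.0, t_max), np.concatenate([i0, s0, r0]), rtol=1e-8)
    i, s, r = np.split(sol.y[:, -1], 3)
    return np.sum(p * r)                                # final density of stiflers

# example: a truncated power-law degree distribution (scale-free-like network)
kmax = 200
p = np.zeros(kmax + 1)
p[3:] = np.arange(3, kmax + 1, dtype=float) ** -3.0
p /= p.sum()
print(mean_field_final_size(p))
```

the generalized model introduced in the next section amounts to using two different stifling rates in the spreader - spreader and spreader - stifler contributions to the `ds` term .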
for these purposes ,one does not only want to have high reliability levels , but also the lowest possible cost in terms of network load .this is important in order to reduce the amount of processing power used by nodes participating in the spreading process . in order to characterize this trade - off between reliability and cost , we use time as a practical measure of efficiency .similar to ref . , we call a rumor process less efficient than another if it needs more time to reach the same level of reliability .to illustrate the effect of separation of stifling process rate ( for both and interactions ) into and for and , respectively , we consider a standard scale - free ( sf ) and erds - rnyi ( er ) network .the sf network has generated according to , the number of nodes is and the average degree is .the er network is a homogenous network that has the size with . throughoutthe rest of the paper we set without loss of generality and vary the value of and .1 shows the time evolution of the density of spreaders for different values of the stifling process rates , and , when the forgetting process rate is .we define two different models according to variation of and as following * model 1 : varies with condition \{ and =1 } , * model 2 : varies with condition \{ and =1}. we have performed large scale numerical simulations by applying two stated conditions on sf and er networks .figs .1 ( fig .2 ) corresponding to model 1 ( model 2 ) illustrates , as expected , that the number of individuals who spread the rumor increases as the stifling process rate ( ) decreases . in the cases in which the is fixed , i.e. , , the maximum value of spreaders in fig .1 ( a ) ( fig .2 ( a ) ) , the case in which , is greater than the corresponding values in fig.1 ( b ) ( fig .2 ( b ) ) , the case in which , although the lifetime of spreaders in latter is greater . on the other worlds ,when the is fixed and , more individuals participate in spreading the rumor . on the other hand , in model 1 the time it takes for to reach its final value , i.e., no spreaders are left in the population , very slightly varies with different amount of , but clear differences arise between the time that spreaders die out for different amount of in model 2 .3 ( a ) ( fig . 3 ( b ) ) shows the final densities of stiflers for sf ( er ) network . itobvious that in model 2 on both networks , the lower leads to higher reliability ( blue - solid curves ) . on the other hand , the time it takes for to reach its asymptotic value slightly increases when decreases , the clear differences arise for the two lowest values of . imposing the condition of model 1 on both networks , the red - dashed curves, illustrates that the lower leads to higher reliability but unlike the previous case , the time it takes for to reach its asymptotic value slightly decreases when decreases as the inset figures show . generally , from fig .3 , after the comparison of the cases in which , we can conclude that the model 1 ( blue - solid curves ) leads to more reliable rumor spreading model but under condition of model 2 ( red - dashed curves ) the society reaches the steady state with respect the rumor in less time . 
to make the comparison between the number of stiflers in the sf and er networks at the end of epidemic , we have plotted the fig .as shows this figure , in the er network the number of stiflers at the end of the process is definitely higher than sf network , so the sf network appears less reliable .it results that er networks allow a larger reliability to this epidemic process .this is not straightforward and one may think that the existence of hubs in sf networks helps propagate the rumor .however , a closer look at the spreading dynamics reveals us that the presence of hubs introduces conflicting effects in the dynamics . while hubs may in principle contact with a larger number of individuals , spreader - spreader and spreader - stifler interactions get favored on the long run .more precisely , it is very likely that a hub in the spreader state turns into a stifler before contacts all its ignorant neighbors .once a few hubs are turned into stiflers many of the neighboring individuals could be isolated and never get the rumor . in this sense ,homogeneous networks allow for a more capillary propagation of the rumor , since all individuals contribute almost equally to the rumor spreading .on the other hand , the sf network reaches the steady state with respect the rumor in less time than the homogeneous network ( at the same condition ) . in this sense , sf network has a better efficiency .in this paper we introduced a generalized model of rumor spreading on complex social networks .unlike previous rumor models , our model incorporates two distinct rates for stifling processes .we have defined and for and interactions , respectively .our simulations showed that in the condition , when is smaller than , the society reaches the steady state with respect the rumor in less time . on the other hand , when , the higher level of reliability is obtained .this result is valid for both homogeneous and heterogeneous ( scale - free ) networks . by analyzing the behavior of several global parameters such as reliability and efficiency , we studied the influence of the topological structure of the network in rumor spreading .our results showed that while networks with homogeneous connectivity patterns reach a higher reliability , scale - free topologies need a less time to reach a steady state with respect the rumor .99 s. n. dorogovtsev , and j. f. mendes , adv . phys .* 51 * ( 2002 ) 1079 .r. albert , and a - l .barabsi , rev .mod . phys .* 74 * ( 2002 ) 47 .s. h. strogatz , nature ( london ) * 410 * ( 2001 ) 268 .watts , and s.h .strogatz , nature ( london ) * 393 * ( 1998 ) 440 .r. pastor - satorras , and a. vespignani , _ evolution and structure of internet : a statistical physics approach _ , cambridge university press , cambridge , uk ( 2004 ) .b. a. huberman , _ the laws of the web _ , mit press , cambridge , ma ( 2001 ) .f. liljeros et al . ,nature ( london ) * 411 * ( 2001 ) 907 .l. a. n. amaral , a. scala , m. barthelemy , and h.e .stanley , proc .* 97 * ( 2000 ) 11149 .m. e. j. newman , phys .e * 64 * ( 2001 ) 016131 .h. jeong et al . , nature ( london ) * 411 * ( 2000 ) 651 .j. m. montoya , and r. v. sol , j. theor .* 214 * ( 2001 ) 405 .s. galam , physica a * 320 * ( 2003 ) 571 .a. j. kimmel , j. behav . fin . * 5 * ( 2004 ) 134 .m. kosfeld , j. math . econ .* 41 * ( 2005 ) 646 .d. j. daley , and d. g. kendal , j. inst .* 1 * ( 1965 ) 42 .daley , and j. gani j , _ epidemic modelling _, cambridge university press , cambridge , uk ( 2000 ) .d. p. 
maki , _ mathematical models and applications , with emphasis on social , life , and management sciences _ , prentice - hall , englewood cliffs , nj ( 1973 ) .b. pittel , j. appl . probab . * 27 * ( 1987 ) 14 .a. sudbury , j. appl .* 22 * ( 1985 ) 443 . c. lefevre , and p. picard , j. appl* 31 * ( 1994 ) 244 .b. pittel , j. app . prob . * 27 * ( 1987 ) 14 .a. noymer , j. mathematical sociology * 25 * ( 2001 ) 299 . c. lefevre , and p. picard , j. appl* 31 * ( 1994 ) 244 .m. e. j. newman , s. forest , and j. balthrop , phys .e * 66 * ( 2002 ) 035101 .m. nekoveea , y. moreno , g. bianconic , and m. marsili , physica a * 374 * ( 2007 ) 457 .f. roshani , and y. naimi , phys .rev , e * 85 * ( 2012 ) 036109 . c. castellano , s. fortunato , and v. loreto , rev.mod.phys * 81* ( 2009 ) 591 .p. holme , and jari saramki , phys . rep .* 519 * ( 2012 ) 97 .h. yang , z. wu , c. zhou , t. zhou , and b. wang , phys .e * 80 * ( 2009 ) 046108 .j. borge - holthoefer , s. meloni , b. goncalves , and y. moreno , j. stat.phys .doi : 10.1007/s10955 - 012 - 0595 - 6 ( 2012 ) y. moreno , m. nekovee , and a. pacheco , phys .e * 69 * ( 2004 ) 066130 .
we introduce a generalized rumor spreading model and investigate some of its properties on different complex social networks . unlike previous rumor models , in which both the spreader - spreader ( ) and the spreader - stifler ( ) interactions have the same rate , we define and for the and interactions , respectively . the effect of varying these two rates on the final density of stiflers is investigated . furthermore , the influence of the topological structure of the network on rumor spreading is studied by analyzing the behavior of several global parameters such as reliability and efficiency . our results show that while networks with homogeneous connectivity patterns reach a higher reliability , scale - free topologies need less time to reach a steady state with respect to the rumor . keywords : complex networks , rumor spreading , reliability , efficiency + pacs number(s ) : 89.75.hc , 02.50.ey , 64.60.aq
rate - compatible coding schemes are desirable to provide different error protection requirements , or accommodate time - varying channel characteristics .especially , we would like to design a pair of encoder and decoder which can adapt both different code length and different code rate without changing their basic structure in the hybrid automatic repeat - request ( harq ) protocols .in such cases , rate compatible punctured convolutional ( rcpc ) codes or rate compatible punctured turbo ( rcpt ) codes are typical coding techniques , which are broadly applied in modern wireless communication systems , such as lte ( long term evolution ) . recently , as the first constructive capacity - achieving coding scheme , polar codes reveal the advantages of error performance and many attractive application prospects . according to the original code construction , polar codes are also able to support rate compatibility partially since the code rate can be precisely adjusted by adding or deleting one information bit .however , the code length still is limited to the power of two , i.e. , .consequently , puncturing code bits and shortening the code length becomes the key technique of designing good rate - compatible punctured polar ( rcpp ) codes . to the best of the authors knowledge, the puncturing schemes of polar codes can be summarized as two categories .first , some code bits are punctured in the encoder and the decoder has no _ a priori _ information about these bits which can be regarded as the ones transmitting over zero - capacity channels . in this paper, we call this category as the capacity - zero ( c0 ) puncturing mode .second , the values of the punctured code bits are predetermined and known by the encoder and decoder .thus the associated channels can be regarded as one - capacity channels .we use the capacity - one ( c1 ) puncturing mode to sketch the feature of this category . for the puncturing schemes under the c0 mode ,eslami _ et al . _first proposed a stopping - tree puncturing to match arbitrary code length under the belief propagation ( bp ) decoding .then , shin _ et al . _proposed a reduced generator matrix method to efficiently improve the error performance of the rcpp codes under the successive cancellation ( sc ) decoding , whereas searching the good polarizing matrices is still a time consuming process . in ,a heuristic puncturing approach was proposed for the codes with short length . in , an efficiently universal puncturing scheme , named quasi - uniform puncturing algorithm ( qup )was proposed and the corresponding rcpp codes can outperform the performance of turbo codes in 3g/4 g wireless systems . on the other hand , for the puncturing schemes under the c1 mode , wang _ et al . _ first introduced the concept of capacity - one puncturing and devised a simple puncturing method by finding columns with weight 1 to improve the error performance of sc decoding .later , the author in exploited the structure of polar codes and proposed a reduced - complexity search algorithm to jointly optimize the puncturing patterns and the values of the punctured bits . 
to sum up , for the mainstream sc / sc - like decoding , most of the current puncturing schemes under the c0 or c1 modes are heuristic methods and lack of a systematic framework to design the rcpp codes .intuitively , the optimal punctured scheme under the sc decoding can be obtained by enumerating each punctured pattern and calculating the relative upper bound of block error rate ( bler ) .obviously , this exhausted search is intractable due to the prohibitive complexity .theoretically , like the optimization of rcpc or rcpt codes , rcpp codes can also be constructed by the optimization of the distance spectra ( ds ) or weight enumeration function ( wef ) for different punctured patterns .but due to the high complexity of ds / wef calculation of polar codes , it is also unrealistic to design rcpp codes based on these metrics .hence , designing a feasible and computable measurement is crucial for the optimization of rcpp codes under the sc decoding . in this paper , we establish a complete framework to design and optimize the rcpp codes under the sc / sc - like decoding .based on this framework , we obtain the optimal puncturing schemes for both modes .the main contributions of this paper can be summarized as follows .\(1 ) first , we propose a new tool , called polar spectra ( ps ) , to simplify the performance evaluation of rcpp codes under sc decoding .conceptually , polar spectra are defined on the code tree and include two categories : ps1 and ps0 , which represent the number of paths with the same hamming weight or complemental hamming weight ( the number of zeros ) respectively .based on ps , we introduce two kinds of path weight enumeration function ( pwef1 and pwef0 ) to indicate the distribution of ( complemental ) path weight .furthermore , three performance metrics , the spectrum distance for pwef0 ( sd0 ) , the spectrum distance for pwef1 ( sd1 ) , and joint spectrum distance ( jsd ) for the entire ps , are defined to optimize the distribution of ( complemental ) path weight under two puncturing modes ( c0 and c1 ) .\(2 ) second , for the c0 mode , thanks to the easily analyzed property of ps , we prove that the quasi - uniform puncturing ( qup ) algorithm proposed in can maximize the metrics sd1 and jsd .moreover , we analyze the structure feature of this puncturing and obtain the exact number of equivalent puncturing tables .\(3 ) third , for the c1 mode , we propose a new reversal quasi - uniform puncturing ( rqup ) and prove that this scheme can maximize the metrics sd0 and jsd .the remainder of the paper is organized as follows .section [ section_ii ] describes the preliminaries of polar codes , including polar coding , decoding algorithm , and upper bounds analysis of bhattacharyya parameter .section [ section_iii ] describes the puncturing modes of rcpp codes and sketches out the en-/decoding process .the concepts of polar spectra , pwefs ( pwef0 and pwef1 ) , and spectrum distances ( sd0 , sd1 , and jsd ) are introduced in section [ section_iv ] . 
the qup algorithm is presented and proved to be the optimal one under the c0 puncturing mode in section [ section_v ] .similarly , the rqup scheme is proposed and proved to maximize the sd0 and jsd under the c1 puncturing mode in section [ section_vi ] .section [ section_vii ] provides the numerical analysis for various puncturing schemes and simulation results for rcpp and turbo codes in lte systems .finally , section [ section_viii ] concludes the paper .in this paper , calligraphy letters , such as and , are mainly used to denote sets , and the cardinality of is defined as .the cartesian product of and is written as and denotes the -th cartesian power of .we write to denote an -dimensional vector and to denote a subvector of , .further , given an index set and its complement set , we write and to denote two complementary subvectors of , which consist of with or respectively. we use to denote the expectation operation of a random variable . throughout this paper, means logarithm to base 2 " , and stands for the natural logarithm . given a b - dmc with input alphabet and output alphabet , the channel transition probabilities can be defined as , and and the corresponding reliability metric , bhattacharyya parameter , can be expressed as applying channel polarization transform for independent uses of b - dmc , after channel combining and splitting operation , we can obtain a group of polarized channels , .the bhattacharyya parameters of these channels satisfy the following recursion by using of the channel polarization , the polar coding can be described as follows .given the code length , the information length and code rate , the indices set of polarized channels can be divided into two subsets : one set to carry information bits and the other complement set to assign the fixed binary sequence , named frozen bits .so a message block of bits is transmitted over the most reliable channels with indices and the others are used to transmit the frozen bits .so a binary source block consisting of information bits and frozen bits can be encoded into a codeword by where the matrix is the -dimension generator matrix .this matrix can be recursively defined as , where " denotes the -th kronecker product , is the bit - reversal permutation matrix , and ] and =\frac{n}{2} ] .first , we prove the right - side inequality . due to , we have next , let and , we need to prove . due to , we have \\ & \ge\frac{1}{2m}\left[2^{n-1}-\sum\limits_{k = 0}^{n-2 } \left(n-2-k\right)2^k\right]\\ & \overset{(1)}{=}\frac{1}{2m}\left[2^{n-1}-\left(2^{n-1}-n\right)\right]=\frac{n}{2m}>0 \end{aligned}\ ] ] where the equality ( 1 ) is derived from the summation of arithmetico - geometric sequence . [ corollary8 ] for the c0 mode , the jsd corresponding to qup satisfies . by theorem [ theorem7 ] ,for the qup puncturing , there is only one prune path corresponding to a subtree with a depth and this path contains the leftmost path of the parent subtree .so we have . like the proof in theorem [ theorem8 ] , we have . for the left - side inequality , by theorem [ theorem8 ] , we have \ge n-2 ] and respectively .the proof is similar to that of theorem [ theorem8 ] and corollary [ corollary8 ] .in this section , at first , we compare various puncturing schemes under the c0 or c1 modes by calculating the spectrum distances sd0/sd1. 
then rcpp codes based on different puncturing schemes under sc or scl decodings are evaluated .furthermore , the blers of rcpp and turbo codes are also compared via simulations over awgn channels .we compare the spectrum distances of various puncturing schemes . for the c0 mode, we mainly concern three typical puncturing schemes , such as qup algorithm , the algorithm proposed by eslami __ and that proposed by shin __ . on the other hand , for the c1 mode , we mainly investigate two puncturing schemes , such as rqup algrithm proposed in this paper and the algorithm proposed by wang _. _ . for the latter ,given the generator , the index of column with column weight 1 is selected as the punctured position .however there may be many selections for the column weight 1 as stated in ( * ? ? ?* algorithm1 ) . in order to simplify evaluation , we use a puncturing table where the last code bits are punctured as a reference of wang algorithm . for all the puncturing schemes under the c0 or c1 modes , the spectrum distances sd1/sd0 versus code length ( ) are shown in fig .[ fig_sd0_sd1_vs_code_length ] . among the three puncturing schemes( qup / shin / eslami ) under the c0 mode , the sd1 of qup algorithm is larger than that of the others due to the optimal polar spectra ps1 .similarly , the sd0 of rqup is better than that of wang method due to the optimal ps0 .recall that the polar spectra of qup and rqup schemes are symmetrical , the sd1 of qup and sd0 of rqup are overlapped as depicted in fig .[ fig_sd0_sd1_vs_code_length ] .further , we observe that the sd1 ( sd0 ) of qup ( rqup ) is distributed between and which is consistent with theorem [ theorem8 ] ( [ theorem16 ] ) . for jsds of all the puncturing schemes, we can observe the similar results , that is , qup or rqup have the maximal jsds under the c0 or c1 modes . due to the limitation of space , these results are not shown here .however , the performance comparison just based on jsd may result a bias conclusion . as an example , sd1 versus sd0 at the code length for all the schemes is drawn in fig . [ fig_sd1_vs_sd0 ] . in this -d chart ,the point a located at is relative to the sd1/sd0 of the original polar code with the code length .recall that the aim of rcpp codes optimization is to approach the spectrum distances of the parent codes as close together as possible , that is , in this chart , the more one point corresponding to a puncturing scheme is close to the point a , the better this scheme will achieve an error performance .all points relative to qup are concentrated at and all points relative to rqup at . obviously , among the three puncturing schemes ( qup / shin / eslami ) under the c0 mode , qup has the maximal value of sd1 when the value of sd0 is fixed .on the other hand , given the fixed sd1 , the sd0 of qup is larger than that of wang scheme .first , we compare the error performance of rcpp codes with various puncturing schemes under the bi - awgn channels .the gaussian approximation algorithm is applied to construct these codes . given the sc decoding and the parent code length , the bler performance comparisons of rcpp codes based on all the puncturing schemes with the code length are shown in fig .[ fig_varrate_sc_comp ] for the code rate , and respectively . and code rate ( the parent code length ) . 
] for the low code rate , compared with other schemes , such as wang , rqup and shin algorithms, we can see that qup achieves the best error performance .on the other hand , for the high code rate , rqup is the best one among all the puncturing schemes . these results are consistent with the analysis in section [ section_v ] and [ section_vi ] .further , we find that the error performance of qup is worse than that of rqup in the high code rate and vice versa in the low code rate .especially , we observe that the schemes of qup , rqup and wang can achieve almost the same performance and they are better than shin or eslami schemes .these phenomena may imply that the code rate is a critical value .so rqup will be the best scheme when and qup will be the best one when .next we compare the performance of rcpp and turbo codes under awgn channel .rcpp codes are constructed from the parent code with the code length by qup or rqup schemes and ca - scl is used as a decoding algorithm with the maximum list size .an eight - state turbo code in 3gpp lte standard is used as a reference .a crc code is used in all concatenation coding schemes ( both for turbo and rcpp codes ) .the log - map algorithm is applied in turbo decoding and the maximum number of iterations is . andbler of we investigate the relationship of bit signal noise ratio ( snr ) and code length for these two codes .the performance curves of vs code length ( ) for the lte turbo and rcpp codes ( punctured by qup and rqup algorithms ) at the bler of and code rate are shown in fig .[ fig_coding_gain33 ] . in most cases, rcpp codes can achieve additional coding gains relative to lte turbo codes .for the low code rate , as shown in fig .[ fig_coding_gain33 ] , a maximum db additional gain can be obtained at the code length and the rcpp codes punctured by qup algorithms can achieve slightly better performance than those codes punctured by rqup . on the other hand , for the high code rate , a maximum db performance gaincan be attained at the code length and the bler of .in contrast to the case of low code rate , the rqup algorithms can generate better rcpp codes in this case . for the medium code rate , additional gaincan be obtained and the rcpp codes punctured by qup or rqup algorithms can achieve the same performance . due to the limitation of space , these results are not shown .in this paper , we propose a theoretic framework based on the polar spectra to analyze and design rate - compatible punctured polar code .guided by the spectrum distances , two simple quasi - uniform puncturing methods ( qup and rqup ) are proposed to generate the puncturing tables under the c0/c1 modes . by the analysis of the performance metrics , such as sd0/sd1/jsd, we prove that these two algorithms can achieve the maximal value of corresponding spectrum distance .simulation results in awgn channel show that the performance of rcpp codes by qup or rqup can be equal to or exceed that of the turbo codes at the same code length . without loss of generality , we consider the polarization of the first scenario , that is , . 
under the c0 mode , due to , for , the transition probabilities of polarized channel can be written by so the channel is degraded to a punctured channel and .on the other hand , we analyze the llr of polarized channel .let denote the llr of b - dmc .due to , the corresponding probability density function ( pdf ) is , where is the dirac function .since the source bit is relative to a variable node , the pdf of the corresponding llr can be derived as where is the convolutional operation .so the polarized channel has the same reliability as that of the original b - dmc , that is , . by the coding relationship , the channel transition probabilities can be presented as where the punctured vector is composed of the punctured code bits ( ) , andthe corresponding received vector can be written by . under the c1 mode , we assume is only true for each specific pair . therefore , for the specific vector , we can write . let and .furthermore , these two vectors satisfy and respectively .so we have where .recall that the value of the punctured bit in the c1 mode is known by the decoder . apparently , puncturing the code bit is a good selection because this bit is only involved one source bit .hence , in order to ensure that the bit is punctured and has a fixed value to the decoder , as shown in fig .[ fig_two_channel](c ) , the puncturing table should be .let and denote the llrs of the source bits and ( ) respectively .considering the check node constraint , we have where is the hyperbolic tangent function .therefore , we can conclude that .e. arikan , `` channel polarization : a method for constructing capacity achieving codes for symmetric binary - input memoryless channels , '' _ ieee trans .inf . theory _55 , no . 7 , pp . 3051 - 3073 , july 2009 .
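as a complement to the appendix , the following small sketch spells out the llr conventions it uses : under the c0 mode the punctured code bit carries a zero llr ( the dirac - at - zero density above ) , while under the c1 mode the known punctured bit is treated as perfectly reliable ; the check - node combination is the hyperbolic - tangent rule quoted above . the clipping constant and the helper names are assumptions of this sketch .

```python
import math

# llr bookkeeping for punctured positions (sketch; names and constants are ours).
LLR_C0 = 0.0       # c0 mode: punctured bit unknown to the decoder -> zero llr
LLR_C1 = 1.0e3     # c1 mode: punctured bit known -> "infinite" llr, clipped

def check_node(l1, l2):
    """Check-node combination: 2*artanh(tanh(l1/2)*tanh(l2/2))."""
    x = math.tanh(l1 / 2.0) * math.tanh(l2 / 2.0)
    x = max(min(x, 1.0 - 1e-12), -1.0 + 1e-12)   # numerical guard
    return 2.0 * math.atanh(x)

def variable_node(l1, l2, u=0):
    """Variable-node combination given the already-decided partial-sum bit u."""
    return l2 + (1 - 2 * u) * l1
```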
polar codes are the first class of constructive channel codes achieving the symmetric capacity of the binary - input discrete memoryless channels . but the corresponding code length is limited to the power of two . in this paper , we establish a systematic framework to design the rate - compatible punctured polar ( rcpp ) codes with arbitrary code length . a new theoretic tool , called polar spectra , is proposed to count the number of paths on the code tree with the same number of zeros or ones respectively . furthermore , a spectrum distance sd0 ( sd1 ) and a joint spectrum distance ( jsd ) are presented as performance criteria to optimize the puncturing tables . for the capacity - zero puncturing mode ( punctured bits are unknown to the decoder ) , we propose a quasi - uniform puncturing algorithm , analyze the number of equivalent puncturings and prove that this scheme can maximize sd1 and jsd . similarly , for the capacity - one mode ( punctured bits are known to the decoder ) , we also devise a reversal quasi - uniform puncturing scheme and prove that it has the maximum sd0 and jsd . both schemes have a universal puncturing table without any exhausted search . these optimal rcpp codes outperform the performance of turbo codes in lte wireless communication systems . polar codes , rate - compatible punctured polar ( rcpp ) codes , polar spectra , path weight enumerating function ( pwef ) , spectrum distance ( sd ) .
in economics discounting " refers to weighting the future relative to the present .the choice of a discounting function has enormous consequences for long run environmental planning .for example , in a highly influential report on climate change commissioned by the uk government , stern uses a discounting rate of , which on a 100 year horizon implies a present value of ( meaning the future is worth as much as the present ) . in contrast , nordhaus argues for a discount rate of , which implies a present value of , and at other times has advocated rates as high as , which implies a present value of .the choice of discount rate is perhaps the biggest factor influencing the debate on the urgency of the response to global warming .stern has been widely criticized for using such a low rate .this issue is likely to surface again with the upcoming calderon report in july 2014 .a simple argument to motivate discounting is based on opportunity cost . under a constant , continuously compounded rate of interest , a dollar invested today will yield at time , so an environmental problem that costs to fix at time is equivalent to an investment of now .economists present a variety of reasons for discounting , including impatience , economic growth , and declining marginal utility ; these are embedded in the ramsey formula , which forms the basis for the standard approaches to discounting . here we adopt the net present value approach , which treats the real interest rate as the measure of the trade - off between consumption today and consumption next year , without delving into the factors influencing the real interest rate .we estimate the stochastic real interest rate process using historical data .it is often argued that , based on past trends in economic growth , future technologies will be so powerful compared with present technologies that it is more cost - effective to encourage economic growth , or to solve other problems such as aids or malaria , than it is to take action against global warming now .analyses supporting this conclusion typically study discounting by working with an interest rate that is fixed over time , ignoring fluctuations about the average .this is mathematically convenient , but it is also dangerous : in this problem , as in many others , fluctuations play a decisive role .a proper analysis takes fluctuations in the real interest rate , caused partly by fluctuations in growth , into account .when the real interest rate varies the discounting function becomes , \label{d}\ ] ] where the expectation ] for the ou process this is and is the correlation time .we estimate ( measured in units of 1/year ) by evaluating the empirical auto - correlation and fitting it with an exponential .once is determined the parameter is obtained from the ( empirical ) standard deviation , ,$ ] which is given by the correlation function since .hence in order to have an idea about the robustness of the estimation procedure we split the constructed real interest rate data from each country into four equally spaced blocks .in each block we estimate the parameters of the ou model applying the method described above , except for the parameter , which is always estimated using the complete data set . 
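the estimation step just described , and the way the fitted parameters enter the discounting function , can be sketched as follows . this is a minimal sketch with our own function names and illustrative values , not the code behind table [ tab3 ] : the reversion rate alpha is read off a log - linear fit of the empirical autocorrelation , k follows from the stationary variance , and d(t ) is then estimated by monte carlo , which also illustrates that it decays more slowly than exp(-mt ) , i.e. that the effective long - run rate lies below the mean short rate .

```python
import numpy as np

# hedged sketch: fit an OU model dr = -alpha*(r - m)*dt + k*dW to a real-rate
# series, then Monte-Carlo the discount function D(t) = E[exp(-int_0^t r ds)].

def estimate_ou(r, dt=1.0, max_lag=30):
    """(m, alpha, k) from a real-rate series sampled at spacing dt (years)."""
    r = np.asarray(r, dtype=float)
    m = r.mean()
    x = r - m
    var = x.var()
    lags = np.arange(1, max_lag + 1)
    acf = np.array([np.mean(x[:-l] * x[l:]) / var for l in lags])
    acf = np.clip(acf, 1e-6, None)                    # guard the logarithm
    alpha = -np.polyfit(lags * dt, np.log(acf), 1)[0]  # log-linear fit of exp decay
    k = np.sqrt(2.0 * alpha * var)                     # stationary variance k^2/(2*alpha)
    return m, alpha, k

def discount_ou(m=0.02, alpha=0.1, k=0.02, r0=0.02,
                t_max=200.0, dt=0.25, n_paths=10_000, seed=0):
    """Monte Carlo estimate of D(t) on a grid of times."""
    rng = np.random.default_rng(seed)
    steps = int(t_max / dt)
    r = np.full(n_paths, float(r0))
    integral = np.zeros(n_paths)
    d = np.empty(steps)
    for s in range(steps):
        integral += r * dt
        d[s] = np.exp(-integral).mean()
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        r += -alpha * (r - m) * dt + k * dw            # rates may go negative
    t = dt * np.arange(1, steps + 1)
    return t, d

t, d = discount_ou()
# comparing d with np.exp(-0.02 * t) shows the slower, Jensen-type decay.
```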
the main reason to avoid estimating on small blocks is because the time series of some countries are too short .instead the quoted uncertainty in is the standard least square error , computed by fitting an exponential to the autocorrelation function of the real interest time series .table [ tab3 ] shows the minimum and the maximum values for , and , and their uncertainties under subsampling .we would like to thank national science foundation grant 0624351 .we also acknowledge partial support form the ministerio de ciencia e innovacin under contract no .fis2009 - 09689 and the institute for new economic thinking . arrow , k. j. , cropper , m. l. , gollier , c. , groom , b. , heal , g. m. , newell , r. g. , nordhaus , w. d. , pindyck , r. s. , pizer , w. a. , portney , p. r. , sterner , t. , tol , r. s. j. & weitzman , m. l. how should benefits and costs be discounted in an intergenerational context ? the views of an expert panel .resources for the future , wasington d. c. december 2012 .freeman , m. c. , groom , b. , panopoulou , e. & pantelidis , t. declining discount rates and the fisher effect . inflated past , discounted future ?center for climate change economics and and policy. working paper 129 ( 2013 ) osborne m.f.m .( 1959 ) brownian motion in the stock market . _ operation research 7 _ : 145 - 173 . reprinted in cootner p. h. ( editor ) ( 1964 ) _ the random character of stock market prices _ ( cambridge , massachusetts , m.i.t . press ) .
for environmental problems such as global warming future costs must be balanced against present costs . this is traditionally done using an exponential function with a constant discount rate , which reduces the present value of future costs . the result is highly sensitive to the choice of discount rate and has generated a major controversy as to the urgency for immediate action . we study analytically several standard interest rate models from finance and compare their properties to empirical data . from historical time series for nominal interest rates and inflation covering 14 countries over hundreds of years , we find that extended periods of negative real interest rates are common , occurring in many epochs in all countries . this leads us to choose the ornstein - uhlenbeck model , in which real short run interest rates fluctuate stochastically and can become negative , even if they revert to a positive mean value . we solve the model in closed form and prove that the long - run discount rate is always less than the mean ; indeed it can be zero or even negative , despite the fact that the mean short term interest rate is positive . we fit the parameters of the model to the data , and find that nine of the countries have positive long run discount rates while five have negative long - run discount rates . even if one rejects the countries where hyperinflation has occurred , our results support the low discounting rate used in the stern report over higher rates advocated by others . , , , ,
several solar phenomena exhibit hemispheric asymmetries and their variations .most of the relevant papers focus on the variation of the amplitude of the asymmetry .various periods have been found , 3.7 years , periods between 9 and 12 years , 43.25 , 8.65 and 1.44 years and a time scale of 12 cycles . signatures of the solar hemispheric asymmetry has been claimed in solar wind speed and the cosmic rays .have not found an 11 year period in the normalized north - south asymmetry index .this timescale has to be studied in a different way , by examining the phase lags of the hemispheric cycles .earlier investigations of this behaviour indicated that these phase lags exhibit a long term variation . in our previous paper ( , henceforth paper i )the phase lags of hemispheric cycles have been examined in cycles 1223 by using different methods and a characteristic behaviour has been found : in four consecutive cycles the same hemisphere leads and in the next four consecutive cycles the other hemispheric cycle leads .this characteristic time is reminiscent to that published by .the present work was motivated by the question whether this variation was also working before the greenwich era starting with cycle 12 .the other aspect was raised by the recent work of examining the hemispheric phase lags of the polarity reversals of the poloidal field .they investigated the time interval of 19452011 and this feature can also be examined on a longer time interval in comparison with the phase lags of the hemispheric cycles .the investigation of paper i was based on the greenwich photoheliographic results , henceforth gpr , and the debrecen photoheliographic data , henceforth dpd , . for the present extension sunspot data for the previous cycles have been gathered from the observations of johann caspar staudacher for 17491798 and samuel heinrich schwabe for 18251867 .the observations of staudacher were sparse ( figure [ spares ] ) , in certain years only a few observations were made .after two and a half cycles without any data schwabe has carried out a long , continuous series of observations covering 43 years .he made observations more regularly than staudacher and he identified the sunspot groups , although sometimes he considered two groups as a single one if they were at the same longitude but at different latitudes .his observations provide position and area data as well as identifying numbers of groups making possible to track the development of certain sunspot groups .the gpr covers cycles 1220 , it provides position and area data for sunspot groups , the dpd covers the time interval since cycle 21 up to now , it contains position and area data for not only sunspot groups but also for all observable individual sunspots .both catalogues present the data on a daily basis . as figure [ spares ] shows , hemispheric sunspot data are not available in electronic form for cycle 11 , between the schwabe data and the start of gpr , butthis gap can be filled by using the data of sprer that have been read into the computer manually .this dataset is based on the observations of carrington and sprer between 1854 and 1878 for cycles 10 and 11 .sprer took the observed sunspot groups into account once and weighted them by their area summarizing in five carrington rotations .he named the obtained data hemispheric frequencies .+ considering the differences between the datasets of different observational periods the input data are somewhat different . 
in the time intervals of gpr anddpd the monthly sums of sunspot groups are used , in the time intervals of staudacher s and schwabe s observations the monthly sums of sunspots and the monthly sums of sunspot areas are considered .no calibrations have been made between these datasets because on the one hand there is no overlap between the datasets of staudacher and schwabe , on the other hand all cycles were considered to be separate entities .only the north - south differences were targeted within each cycle and the strengths of the cycles were not compared to each other . because of sprer s weighting method his dataset is not comparable directly with those of schwabe and the gpr .the present work uses the monthly values of number of sunspots ( ) which is not the well - known international sunspot number ( _ issn_) and sunspot group number ( ) which consider all sunspots and sunspot groups , respectively as often as they were observable instead of the sunspot group number ( _ sgn _ ) used in paper i.as is mentioned above , there are missing days in the periods of staudacher and schwabe .the approximately true profiles of these cycles can only be reconstructed by using some reasonable substitutions to fill the gaps .the monthly sums of sunspots has been calculated in such a way that the monthly mean value of the observed days was applied for the missing days and these daily values have been summed up for the month ( middle panels of figures [ recost ] and [ recosc ] ) .as is discernible in the uppermost panels of figures [ recost ] and [ recosc ] the strengths of these modified cycles fit to the issn cycles .the modified hemispheric values are plotted in figure [ hemn ] .the hemispheric minima between the cycles are denoted by vertical dashed and solid lines for the northern and southern hemispheres , respectively .the minimum is determined as the time of the lowest monthly value of the inter - cycle profile by smoothing the cycle profiles with a 21-month window . in order to describe each individual cycle as a wholethe centers of weight of both hemispheric cycle profiles have been computed by using the original unsmoothed cycle profiles .the positions of the centers of weight ( and ) are plotted for all hemispheric cycles in figure [ hemn ] , their time difference is the measure of the hemispheric phase lags .figure [ hemn24 ] shows the same diagrams for cycles 1224 , this is the gpr dpd era . as cycle 24 is incomplete at the time of this work its center of weight has not been considered .figure [ hemn24 ] is similar to figure 1 of paper i except the input data . in paperi the monthly values of sunspot group number ( ) were comupted by counting all the sunspot groups only once in a month .this can not be done by using the other datasets so that , although an intercalibration can not be carried out , at least the types of input data can be as consistent as possible .figure [ hem1011 ] has been plotted for cycles 10 and 11 by using sprer s data .there are no smoothings on these hemispheric profiles because the area weighted hemispheric sunspot data are summarized over five carrington rotations .the centers of weight are calculated as in the case of and . 
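the phase - lag measure used above can be summarised in a short sketch : the time coordinate of the activity - weighted center of weight of each unsmoothed hemispheric cycle profile , and the difference of the two . the function names and the sign convention ( negative lag meaning northern leading ) are our labelling , not taken from the paper .

```python
import numpy as np

# sketch of the center-of-weight phase lag between hemispheric cycle profiles.

def center_of_weight(times, activity):
    """Activity-weighted mean time of a monthly hemispheric cycle profile."""
    t = np.asarray(times, dtype=float)
    a = np.asarray(activity, dtype=float)
    return float((t * a).sum() / a.sum())

def phase_lag(times, north, south):
    """Negative value: the northern hemispheric cycle leads in time."""
    return center_of_weight(times, north) - center_of_weight(times, south)
```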
to eliminate the uncertainty of determination of sunspot groups the hemispheric monthly umbral area of sunspots ( ) or sunspot groups ( )have been calculated .figure [ hemas ] shows the based on the data of schwabe smoothed with a 21-month window while figure [ hemagd ] depicts the by using data of gpr and dpd smoothed with an 11-month window .the hemispheric umbral area is measured in millionth of solar hemispheres ( msh ) .there is no such a plot for the staudacher era because his data contain the daily sum of the area of sunspots which does not allow us to distinguish between the hemispheres .the centers of weight and the hemispheric phase lags are also determined from the and profile , they are plotted in the same way as in the case of ( second panel of figure [ hists ] ) .the long - term variation of the hemispheric phase lags can be studied by different methods .one of them uses the difference between the averages of the normalized asymmetry index in the ascending and descending phases : where and are the monthly group numbers of the northern and southern hemispheres respectively , the indices a and d denote the ascending and descending phases respectively .it can be seen that if this difference is positive then the northern hemispheric cycle leads ( third panel of figure [ hists ] where the vertical axis is reverted in order to compare them more easily ) .another method is the study of the difference between the maxima of the hemispheric latitudinal distributions of active regions ( bottom panel of figure [ hists ] ) , this method exploits the equatorward shift of the active belt during the solar cycle .this means that the bulge of the latitudinal distribution of the activity is closer to the equator in the leading hemisphere , i.e. ( denotes the latitude ) is negative if the northern hemisphere leads in time .the bars of the bottom panel of figure [ hists ] are calculated by averaging the absolute values of these diferences over the cycles .the differences between the time coordinates of the centers of weight of the hemispheric cycle profiles are plotted in the first and second panels of figure [ hists ] by using monthly number of sunspots or sunspot groups and the monthly value of sunspot area , respectively . similarly to the result in paper i , the hemispheric phase lags alternating by four cycles are also recognizable during the gpr and dpd era by using the above described four different methods .the case is different during the pre - greenwich era , beacuse there is no uniform pattern in that period .cycles 1 , 4 and 9 do not fit into the 4 + 4 alternation by using the first method . since the ascending phase of cycle 7 is missing and the descending phase of cycle 24 is not complete as yet these cycles are disregarded in these studies .the presented methods do not allow to determine the real hemispheric centers of weight without full coverages of cycles thus these cycles are missing from figure [ hists ] .examining the ascending phase of cycle 24 ( figures [ hemn24 ] and [ hemagd ] ) it can be seen that this phase is similar to the case of cycle 16 .the ascending phase of cycle 24 might indicate northern leading because of the northern predominance of the activity .however , the ascending phase of cycle 16 also exhibited northern predominance but the examination of the entire hemispheric cycle profiles showed southern leading .this means that the real phase lag can only be determined after the full cycle is completed . 
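for reference , the asymmetry - index criterion described above can be written compactly as follows ; the split index i_max marking the cycle maximum is assumed to be supplied , and the sign convention matches the one stated above ( a positive difference indicating northern leading ) .

```python
import numpy as np

# sketch of equation [deltaai]: AI = (N - S)/(N + S) per month, averaged over
# the ascending phase minus its average over the descending phase.

def asymmetry_index(north, south):
    n = np.asarray(north, dtype=float)
    s = np.asarray(south, dtype=float)
    return (n - s) / (n + s)

def delta_ai(north, south, i_max):
    """i_max: index of the cycle maximum separating the two phases."""
    ai = asymmetry_index(north, south)
    return float(ai[:i_max].mean() - ai[i_max:].mean())
```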
also formulated a precautious expectation on this phase lag .there are no area data for the staudacher era as described above as well as for the sprer period . studying the n - s phase shift by using the area data we can conclude that cycle 9 here also is an exception to the rule .the method of asymmetry index gives fairly similar results .when in equation [ deltaai ] is positive / negative then the northern / southern hemisphere leads during the cycle .it can be clearly seen , with reversed vertical axis , in the third panel of figure [ hists ] that this was the case during the greeenwich and dpd eras . during the pre - greenwich period cycles 2 and 10do not fit into the 4 + 4 alternation by using this third method .the fourt panel of figure [ hists ] shows the hemispheric phase lags obtained from the hemispheric sprer diagrams .it can be seen that the 4 + 4 alternation can be pointed out on the gpr and dpd data , but can not be clearly perceivable in the pre - greenwich age , because cycles 3 and 10 are exceptions to the 4 + 4 alternation ..authenticities of the cycles . + /mean right / wrong results on the basis of figure [ hists ] . [cols="^,^,^,^,^,^",options="header " , ] it can be discernible that the different methods result in different behavioural patterns in the pre - greenwich cycles . in order to determine the authenticity of each cycle , let the authenticity of the cycles mean the number of right cases from all the investigatable methods .it can be seen in table [ auth ] created by using figure [ hists ] that there are four cycles with two - thirds , two cycles with a half and another two cycles with one authenticity . in spite of the low coverages of the eight full pre - greenwich cyclesthere are six cycles with authenticity higher than a half .obviously , the results of these kinds of investigations will be better and reliable if the sunspot datasets are full or almost full .this is why the long - term databases are so important .however , cycle 10 shows half authenticity in this study by using reconstructed data ; it can not be disregarded that this cycle and cycle 11 fitted to the 4 + 4 rule in the work of on the zrich data ( see paper i ) . investigated the asymmetries of hemispheric activity cycles in connection with the timings of polar field reversals by examining the supersynoptic maps of the mount wilson observatory starting in 1970 .they did not study any long - term variations or regularities in this relationship because of the short time interval but the backward extension of the set of the times of polarity reversals makes it possible . 
like the long - term sunspot studies and all long - term investigations the work with these data has also to compromise with the broad variety of sources and types of observations .the most suitable set of dates has been published by .their procedure is based on the method of who reconstructed the large scale surface magnetic field distribution by using h - alpha synoptic charts .large regions of radial magnetic fields of opposite polarities are separated by borderlines indicated by filament bands of mainly east - west direction .it is a century old finding that these filaments migrate toward the poles thus by tracking the poleward migration of these borderlines the time of the polarity reversal can be determined .have compiled a set of reversal dates from different sources covering the period 18701981 .the kodaikanal h - alpha and ca ii k spectroheliograms cover the period 19041964 , their reliability in identifying the opposite polarity regions has been checked by comparing them to magnetograms . after 1964 magnetograms were used .the period 18701903 has been covered by using the limb filament observations of .after 1981 i used the reversal dates published by and for cycle 24 .the upper panel of figure [ polrev ] shows the n - s differences between the polarity reversal dates of the poloidal magnetic field while for the sake of comparison the lower panel shows the phase lags of the hemispheric cycles . in those cases ( cycles 12 , 14 , 16 , 19 , 20 and 24 ) when there were two or more polarity changes i have taken into account the dates of the final reversals because the magnetic field can be strongly varying around the time of polarity reversal .as it can be seen in table 1 of the northern and southern polarity reversals took place simultaneously in cycles 11 and 13 .the time differences are zero in these cycles and these results neither contradict to nor corroborate the examined long - term variation but the pattern of the other cycles is conspicuous .the variation of the poloidal polarity reversals in the upper panel of figure [ polrev ] seems to fit to the regularity of the variation of hemispheric phase lags by 4 + 4 cycles .cycle 20 is the only exception to that regularity .the similarity of the toroidal and poloidal phase lags is remarkable merely by visual inspection but their comparison is even more informative in figure [ poltor ] showing the diagram of the relationship between these two kinds of phase lags .two regression lines are indicated , the steeper one disregards the dot of the non - fitting cycle 20 , the less steep line takes it into account .apparently , the hemispheric poloidal fields sense the statuses of the hemispheric toroidal fields therefore their phase relationsiphs correspond to those of the toroidal fields .this corroborates and generalizes the existence of the phase - lag variation of 4 + 4 cycles during cycles 12 - 23 .the study published in paper i has been extended in two ways , temporally and physically .the phase lags of hemispheric cycles have been examined in the pre - greenwich era , eight additional cycles were more or less suitably covered by the necessary sunspot data .the results show that the phase lag variation by 4 + 4 cycles can be more or less recognized but with certain exceptions .therefore it can not be stated for sure that this variation was working in pre - greenwich cycles .either it may have been absent or its existence can not be pointed out because of the decreasing observational coverage . 
otherwise , there are six cycles with two - thirds or more and just two cycles with a half authenticity during the pre - greenwich times. an objective physical cause may be the uncertain status of cycle 4 around 1790 , where a cycle may have been lost , a statement debated by .this cycle , as a single entity , fits unambiguously into the set of 4 + 4 cycles but the next documented cycle of the group does not .the next group contains two fitting and one unfitting cycle .it can not be excluded that the case of `` missing cycle '' temporarily distorted this long - term variation similarly to the gnevyshev - ohl rule .as it can be seen in the middle panel of figure [ recost ] the so - called lost cycle can be observable in the northern hemispheric activity by using the modified sunspot data while in the original data of staudacher ( uppermost panel of this figure ) can not .the lowermost panel may strengthen the existence of the lost cycle because the mean hemispheric latitudes rise after 1792 as at the beginning of a new cycle and decrease after 1794 however the southern activity continuously decreases after the maximum of cycle 4 . a more convincing corroboration of the phase lag variations of 4 + 4 cycles is obtained by the other extension of the study , the examination of the differences between the polarity reversals of the poloidal field on the gpr - dpd era .figure [ polrev ] shows these differences in comparison to the hemispheric phase lags .the two column diagrams are fairly similar with a single exception , cycle 20 .this implies that the regularity of 4 + 4 cycles in the phase lags is a more general feature of the solar dynamo and involves both the toroidal and poloidal process .the two topologies are continuously alternating by being transformed into each other but these diagrams may rise a question of `` chicken - and - egg '' type .it should be noted that the columns of the poloidal diagram belong to those of the toroidal one , for instance the phase lag between the reversals of hemispheric poloidal fields denoted by 18 happened around the maximum of cycle 18 .thus the presented alternation may mean that this specific temporal feature of the solar cycle is ruled by the long term behaviour of the hemispheric toroidal fields .the temporally leading hemispheric cycle is able to initiate an earlier polar reversal than the opposite hemispheric cycle .the presented variation of 4 + 4 cycles should be the evolutional property of the toroidal field. it would be premature to speculate about any underlying mechanisms .relevant phase relations have been targeted earlier in different ways . as well as examined theoretically phase relations between and fields .apparently , a yet unknown agent has to be identified that might be responsible for this long term behavioural pattern needing long term memory .the research leading to these results has received funding from the european community s seventh framework programme ( fp7/2010 - 2013 ) under grant agreement 284461 .the staudacher and schwabe data are courtesy of rainer arlt .thanks are due to andrs ludmny for reading and discussing the manuscript .the author is deeply indebted to those people for the inspiration who asked the following question what guarantees that this variation will be continued before the gpr - era and after cycle 23 ? in several conversations .altrock , r. c. 2003 , , 216 , 343 alvestad , j. 2015 , solar polar fields vs. solar cycles , see : http://www.solen.info/solar/polarfields/polar.html arlt , r. 
2009 , , 255 , 143 arlt , r. , leussu , r. , giese , n. , mursula , k. , & usoskin , i. g. 2013 , , 433 , 3165 + see : http://www.aip.de/members/rarlt/sunspots/schwabe ballester , j. l. , oliver , r. , & carbonell , m. 2005 , , 431l , 5 chang , h. y. 2008 , new astronomy , 13 , 195 fnyi , j. 1908 , , 37 , 107 gleissberg , w. 1939 , the observatory , 62 , 158 royal observatory , greenwich , greenwich photoheliographic results , 1874 - 1976 , in 103 volumes , see : http://solarscience.msfc.nasa.gov/greenwch.shtml gyri , l. , baranyi , t. , & ludmny , a. 2011 , iau symp . 273 , 403 + see : http://fenyi.solarobs.unideb.hu/dpd/index.html krivova , n.a ., solanki , s.k . , & beer , j. 2002 , , 396 , 235 krymsky , g. f. , krivoshapkin , p. a. , mamrukova , v. p. , & gerasimova , s. k. 2009 , astron .lett . , 35 , 333 li , k. j. , gao , p. x. , zhan , l. s. , shi x. j. , & zhu , w. w. 2009 , , 394 , 231 makarov , v.i . , & sivaraman , k.r .1986 , bull .astr . soc .india , 14 , 163 mcintosh , p. s. 1972 , rev .of geophys . and space phys . ,10 , 837 murakzy , j. , & ludmny , a. 2012 , , 419 , 3624 ( paper 1 ) ricco , a. 1914 , , 3 , 17 schlichenmaier , r. , & stix , m. 1995 , , 302 , 264 sprer , g. 1874 , publication der astronomischen gesellschaft xiii , leipzig sprer , g. 1878 , publ .potsdam , nr1 .stix , m. 1976 , , 47 , 243 svalgaard , l. , & kamide , y. 2013 , , 763 , id.23 temmer , m. , rybk , j. , bendk , p. , veronig , a. , vogler , f. , otruba , w. , ptzi , w. , & hanslmeier , a. 2006 , , 447 , 735 usoskin , i.g . , mursula , k. , arlt , r. , & kovaltsov , g.a .2009 , , 700 , l154 usoskin , i.g . , mursula , k. , & kovaltsov , g.a .2001 , , 370 , l31 vizoso , g. , & ballester , j.l .1990 , , 229 , 540 waldmeier , m. 1957 , zeitschrift fr astrophysik , 43 , 149 waldmeier , m. 1971 , , 20 , 332 zhang , l. , mursula , k. , & usoskin , i. 2013 , , 552 , a84 zieger , b. , & mursula , k. 1998 , , 25 , 841 zolotova , n.v . , & ponyavin , d.i .2011 , , 736 , id.115 zolotova , n.v . ,ponyavin , d.i . ,marwan , n. , & kurths , j. 2009 , , 503 , 197
the solar northern and southern hemispheres exhibit differences between the intensities and time profiles of the activity cycles . the time variation of these properties has been studied in a previous article on the data of cycles 1223 . the hemispheric phase lags exhibited a characteristic variation : the leading role has been exchanged between the hemispheres by four cycles . the present work extends the investigation of this variation with the data of schwabe and staudacher in cycles 14 and 710 as well as sprer s data in cycle 11 . the previously found variation can not be clearly recognized using the data of staudacher , schwabe and sprer . however , it is more interesting that the phase lags of the reversals of the magnetic fields at the poles follow the same variation as that of the hemispheric cycles in cycles 12 - 23 , _ i.e. _ in four cyles one of the hemispheres leads and the leading role jumps to the opposite hemisphere in the next four cycles . this means that this variation is a long term property of the entire solar dynamo mechanism , both the toroidal and poloidal fields , that hints at an unidentified component of the process responsible for the long term memory .
in the course of learning a spatial environment , an animal forms an internal representation of space that enables spatial navigation and planning .the hippocampus plays a key role in producing this map through the activity of location - specific place cells . at the neurophysiological level, these place cells exhibit spatially selective spiking activity . as the animal navigates its environment , the place cell fires only at a discrete location its place field ( figure [ pcs]a - b ) .it is believed that the entire ensemble of place cells serves as a neuronal basis of the animal s spatial awareness .remarkably , place cells spike not only during active navigation but also during quiescent wake states and even during sleep .for example , the animal can `` replay '' place cells in sequences that correspond to the physical routes traversed during active navigation or `` preplay '' sequences that represent possible future trajectories , either in direct or reversed order , while pausing at a decision point .this phenomenon implies that , after learning , the animal can explore and retrieve spatial information by cuing the hippocampal network , which may in turn be viewed as a physiological correlate of `` mental exploration '' .it bears noting , however , that the actual functional units for spatial information processing in the hippocampal network are not individual cells but repeatedly activated groups of place cells known as cell assemblies ( see and figure [ pcs]c ) .although the physiological properties of the place cell assemblies remain largely unknown , it is believed that the cells constituting an assembly synaptically drive a certain readout unit downstream from the hippocampus . in the `` reader - centric '' view , this readout neuron a small network or , most likely , a single neuron is what actually defines the cell assembly , by actualizing the information provided by its activity .the identity of the readout neurons in some cases is suggested by the network s anatomy .for example , there are direct many - to - one projections from the ca3 region of the hippocampus to the ca1 region .since replays are believed to be initiated in ca3 , this implies that the ca1 place cells may serve as the readout neurons for the activity of the ca3 place cells . assuming that contemporaneous spiking of place cells implies overlap of their respective place fields ( figure [ pcs]a - b ), it is possible to decode the rat s current location from the ongoing spiking activity of a mere 40 - 50 neurons .this suggests that the readout neurons may be wired to encode spatial connectivity between place fields by responding to place cell coactivity ( see figure [ pcs]a - c and ) .a natural assumption underlying both the trajectory reconstructing algorithms and various path integration models is that the representation of spatial locations during physical navigation is reproducible .if the rat begins locomotion at a certain location and at a certain moment of time , , and then returns to the same location at a later time , , then the population activity of the place cells at and is the same .similarly , if spatial information is consistently represented during replays , then the activity packet in the hippocampal network should be restored upon replaying " a closed path . whereas the correspondence between place cell activity and spatial locations ( i.e. 
, place fields ) during physical navigation is enforced by sensory and proprioceptive inputs , the consistency of spatial representation during replay must be attributable solely to the network s internal dynamics . herewe develop a model that accounts for how a neuronal network could maintain consistency of spatial information over the course of multiple replays or preplays .this model is based on the discrete differential geometry theory developed in , which reveals that key geometric concepts can be expressed in purely combinatoric terms .the choice of this theory is driven in part by recent work that indicates that the hippocampus provides a topological framework for spatial information rather than a geometric or cartesian map .the results suggest that to maintain consistency of spatial information during path replay , the synaptic connections between the place cells and the readout neurons must adhere to a zero holonomy principle . , and .the two red rectangles mark the periods during which the cells are coactive . *b. * the gold , green and blue areas represent place fields .place cell firing rate is maximal at the center of the place field and attenuates towards its periphery ; this pattern can be closely approximated by gaussian distribution .place cell cofiring reflects overlap between respective place fields : cells and are coactive in the location , cells and are coactive in the location and so on .the red links mark distances between the centers of the place fields and the triple overlap domain , ( dark region in the center ) . *c. * a schematic representation of a cell assembly : the three place cells on the top synapse onto a readout neuron ( red dot ) , which activates within the cell assembly field . * d. * during replay , the place cells repeat on a millisecond time scale the order of spiking that they exhibit during active navigation . ]* the simplicial model of the cell assembly network*. a convenient framework for representing a population of place cell assemblies is provided by simplicial topology . in this approach ,an assembly of place cells , , , ... , , is represented by a abstract simplex ( not to be confused with a geometric simplex ) containing vertexes , ] be a simplex representing an assembly of three cells with the firing rates , and .if equation ( [ q ] ) holds over , then the readout neuron fires with the rate in response to the coactivity of , and , suppose that equation ( [ q ] ) also holds for an adjacent ( maximally overlapping ) cell assembly , represented by an adjacent simplex ] adjacent to , then , once the value is found from ( [ v2sol1 ] ) , the firing rate at can be obtained from and , and so on ( figure [ thickpath]b ) .in other words , once the synaptic connections are specified for all simplexes , equation ( [ q ] ) can be used to describe the conditions for transferring the activity vector over the entire complex .notice however , that equations ( [ v1eq])-([v2sol1 ] ) do not specify the mechanism responsible for generating place cell activity ; they only describe the conditions required to ignite the cell assemblies in a particular sequence .while the subsequent simplexes and in the simplicial path ( [ g ] ) are not necessarily adjacent , the activity according to equation ( [ q ] ) is propagated along a sequence of adjacent maximal simplexes , such as depicted in figure [ thickpath]b . *discrete holonomy*. 
using the notation equation ( [ v2sol1 ] ) defined over a simplex can be rewritten in matrix form where the `` transfer matrix '' propagates the population activity vector from the incoming facet of the simplex into the activity vector of the outgoing , opposite facet shared with the next simplex ( edges ] respectively on figure [ thickpath]a ) , in which the vertex of the simplex shuts off and the vertex of the adjacent simplex activates . if there is a total of simplexes in the path ( for the closed simplicial path shown on figure [ thickpath]b ) then the corresponding chain of equations ( [ prop ] ) will produce if the simplicial path is closed , then the activity vector should be restored upon completing the loop , i.e. , . according to ( [ kk ] ), this will happen if the product of the transfer matrices along yields a unit matrix , it can be directly verified , however , that condition ( [ prod1 ] ) is not satisfied automatically : the product of transfer matrices ( [ prod1 ] ) has the structure which differs from the unit matrix ( see appendix ) .this implies that a population activity vector is in general altered by translations around closed simplicial paths , . to formulate this another way, the spiking condition ( [ q ] ) does not automatically guarantee that the readout neurons will consistently represent spatial connectivity ; the latter requires additional constraints ( [ prod1 ] ) , irrespective of the mechanism that shifts the activity bump .* discrete geometry of a dressed simplicial complex*. * a. * discrete holonomy : a population activity vector ( red arrow ) changes its direction from simplex to simplex as described by ( [ prop ] ) . upon completing a closed path ,the starting and ending vectors may differ , , which indicates nonzero holonomy . *b. * a elementary closed path of the order encircling a vertex .the pivot " vertex carries the discrete curvature coefficients defined by ( [ curv ] ) . *c. * a higher dimensional elementary closed path consisting of -dimensional simplexes ( one such exemplary simplex is shadowed ) sharing the same -dimensional face , the pivot simplex , shown in red .the dimensional pivot simplex shown in red carries the curvature coefficients . ]mathematically , a mismatch between the starting and the ending orientation of the population activity vector is akin to the differential - geometric notion of holonomy which , on riemannian manifolds , measures the change of a vector s orientation as a result of a parallel transport around a closed loop .hence , the requirement ( [ prod1 ] ) that the activity vector should be the same after completing a closed simplicial trajectory implies that the discrete holonomy along paths in should vanish . * discrete curvature*. 
in differential geometry , zero holonomy on a riemannian manifoldis achieved by requiring that the riemannian curvature tensor associated with the connection vanishes at every point .this condition is established by contracting closed paths to infinitesimally small loops encircling a point and translating in parallel a unit vector around that loop .the difference between the starting and the ending orientations of defines the curvature at the point .an analogous procedure can be performed on a discrete manifold .however , there is a natural limit to shrinking simplicial paths : in a -dimensional complex , the tightest simplicial paths consist of simplexes which intersect the same dimensional face ( see figure [ holonomy]b ) .such a path we will call an `` elementary closed path '' , following .the order of such a path is defined by the number of -dimensional simplexes encircling a simplex . in the following we will use the short notation for the pivot " simplexes whereas the elementary simplicial path encircling will be denoted as . in order to ensure zero holonomy of place cell activity along _ all _ closed paths in ,it is sufficient to verify that the holonomy vanishes for all elementary closed paths .the product of the matrices encircling the pivot ( figure [ holonomy]b ) has the same form as equation ( [ mgamma ] ) ; however , the coefficients at the bottom row of the matrix can be viewed as the curvatures defined at .thus , to ensure zero holonomies , the conditions , ... , , must be imposed on the connection coefficients at every pivot simplex of a -dimensional dressed cell assembly simplicial complex .for example , an elementary closed path encircling a vertex with simplexes enumerated as shown on figure [ holonomy]c yields the holonomy matrix the values , , , , of the bottom row that distinguish from the unit matrix should be considered as discrete curvatures defined at the pivot vertex ( see figure [ holonomy]c and ) , which need to vanish in order to ensure a consistent representation of space during replays .since there exists a finite number of pivot simplexes , the number of constraints ( [ curv ] ) on a given dressing is finite .thus , the scope of nontrivial zero holonomy conditions ( [ prod1 ] ) drastically reduces and the task of ensuring consistency of translations of the population activity vectors over becomes tractable .nevertheless , zero curvature conditions ( [ curv ] ) are in general quite restrictive and impose nontrivial constraints on the synaptic architecture of the place cell assemblies . as the simplest illustration ,consider the case when the firing rates of all the place cells and readout neurons are the same : , and all the connection strengths from the place cells to the readout neuron in all cell assemblies are identical : , giving a constant connection dressing .it can be shown that in this case the resulting transfer matrix is idempotent , that is , so that the zero curvature condition ( [ curv ] ) is satisfied identically for the even order elementary closed paths and can not be satisfied if the paths order is odd . 
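the zero - holonomy requirement ( [ prod1 ] ) can be checked numerically once the transfer matrices along a closed simplicial path are known . the sketch below only assumes that these matrices have already been built from the dressing ( their explicit form is given in the appendix ) and measures how far their ordered product deviates from the identity .

```python
import numpy as np

# sketch of the zero-holonomy check; transfer_matrices is assumed to be the
# list M_1, ..., M_L ordered along the closed path, as in equation ([kk]).

def holonomy(transfer_matrices):
    """Ordered product M_L ... M_2 M_1 of the transfer matrices."""
    h = np.eye(transfer_matrices[0].shape[0])
    for m in transfer_matrices:
        h = m @ h
    return h

def holonomy_defect(transfer_matrices):
    """Maximal deviation of the holonomy from the identity; zero means the
    population activity vector is restored after completing the loop."""
    h = holonomy(transfer_matrices)
    return float(np.max(np.abs(h - np.eye(h.shape[0]))))
```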
under more general and physiologically more plausible assumptions equation ( [ curv ] ) does not necessarily restrict the order of the cell assemblies .however , the domain of permissible dressings , is significantly restricted by ( [ curv ] ) , as compared to the domain occupied by the synaptic parameters of the unconstrained cell assembly networks .the zero curvature constraints ( [ curv ] ) affect the net statistics of the synaptic weights .since the structure of the full space of marginal dressings and of the corresponding probability measures is too complex , we considered a family of connections parametrized as in which the fluctuations are normally distributed , in the absence of zero curvature constraints , cell assemblies are uncoupled and the synaptic fluctuations are statistically independent , so that the joint probability distribution of is under zero - curvature conditions ( [ curv ] ) the parameters of the synaptic architecture are coupled ( figure [ coupling ] ) and the probability distribution for a particular variable is obtained by averaging the joint distribution ( [ indep ] ) under delta - constraints : where is the normalization constant and denotes integration over all . in the appendixit is demonstrated that for weak fluctuations , the shape of the distribution ( [ gauss ] ) remains gaussian , but its width decreases : .thus , zero curvature conditions narrow the distribution of the uncorrelated weights , i.e. , produce a tuning " of the synaptic connections .this result also applies to the synaptic weights : in cases where the place fields are distributed regularly , so that the coefficients have a well defined mean , , and a small multiplicative variance , , the coefficients are approximately defined by the ratios of the synaptic weights , and therefore the zero curvature conditions produce the same effect on as on , i.e. , reduce the variability of synaptic weights .understanding the effects produced by zero curvature constraints ( [ curv ] ) on a wider range of fluctuations is mathematically more challenging .the qualitative results obtained here , however , may generalize beyond the limit of small multiplicative synaptic noise and could eventually be experimentally verified .a physiological implication of the result ( [ tune ] ) is that the distribution of the unconstrained synaptic weights in a network that does not encode a representation of space ( e.g. , measured _ in vitro _ ) should be broader than the distribution measured _ in vivo _ in healthy animals , which can be tested once such measurements become technically possible .the task of encoding a consistent map of the environment imposes a system of constraints on the hippocampal network ( i.e. 
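the narrowing ( "tuning" ) of the weight distribution can also be illustrated numerically : for small fluctuations the constraint of one elementary path is linear , and conditioning i.i.d . gaussian fluctuations on a linear constraint is an orthogonal projection onto the constraint hyperplane , which shrinks every marginal standard deviation . the constraint coefficients below are placeholders , not the linearized curvature coefficients of any particular complex .

```python
import numpy as np

# monte carlo illustration of the tuning effect: i.i.d. gaussian synaptic
# fluctuations conditioned on one linearized zero-curvature constraint
# sum_i c_i * eps_i = 0 (the coefficients c_i are placeholders).

rng = np.random.default_rng(1)
n_weights, sigma = 6, 0.1
c = rng.normal(size=n_weights)

eps = rng.normal(0.0, sigma, size=(100_000, n_weights))     # unconstrained
# conditioning an isotropic gaussian on c.eps = 0 equals orthogonal projection
eps_tuned = eps - np.outer(eps @ c, c) / (c @ c)

print(eps[:, 0].std(), eps_tuned[:, 0].std())   # the constrained std is smaller
```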
, on the coefficients ) that enforce the correspondence between place cell activity and the animal s location in the physical world .here we show that zero holonomy is a key condition , which is implemented by requiring that curvatures vanish at the pivot simplexes .this approach works within a combinatorial framework , but a similar intuition guided a geometric approach , where the place cells ability to encode the location of the animal but not the path leading to that location was achieved by imposing the conditions of stoke s theorem on the synaptic weights of the hippocampal network , which were viewed as functions of cartesian coordinates .our model is based on the same requirement of path - invariance of place cell population activity , implemented on a discrete representation of space a dressed abstract simplicial complex involving geometric information about the animal s environment . in particular , note that the concepts of `` curvature '' and `` holonomy '' are defined in combinatorial , not geometric , terms .this is an advantage in light of ( and indeed was motivated by ) recent work indicating that the hippocampus provides a topological framework for spatial experience rather than cartesian map of the environment , and it also makes our model somewhat more realistic .it does , however , lead to a number of technical complications .for example , discrete connections ( [ b ] ) defined over are nonabelian , so using the approach of would require a nontrivial generalization of stoke s theorem , which is valid only in spaces with abelian differential - geometric connections .our approach is based on the analysis of discrete holonomies suggested in the pioneering work of which , in fact , explains the mathematical underpinning of the stoke s theorem approach in both abelian and nonabelian cases .indeed , the zero - holonomy constraint ensures that no matter what direction the activity is propagated in the network ( forward , backward , or skipping over some cell assemblies ) , the integrity of the spatial information remains intact .* generality of the approach*. a key instrument of our analyses is equation ( [ q ] ) , which describes the conditions necessary for propagating spiking conditions over the cell assembly network .the exact form of this equation is not essential a physiologically more detailed description of near - threshold neuronal spiking could be used to establish more accurate zero holonomy and curvature constraints on the hippocampal network s synaptic architecture , which should be viewed as a general requirement for any spatial replay model .the assumption of maximally - overlapping place cell assemblies may also be relaxed , since equation ( [ q ] ) can be applied in cases where the order of the cell assemblies varies , that is , when the simplicial complex is not a manifold but a quasimanifold ( see figure [ qmanif ] and ) .unfortunately , implementing the `` zero holonomy '' principle in this case would require rather arduous combinatorial analysis .for example , propagating the activity packets using ( [ kk ] ) would impose relationships between the dimensionalities of the maximal simplexes and their placement in , i.e. , require a particular cell assembly network architecture . * a replay in simplicial quasimanifold*. an example of a simplicial quasi - manifold containing and simplexes .the activity of cells in the simplexes is induced from the simplexes approaching its sides .two simplicial paths are shown by gray triangles , marked by red dotted lines . 
] * learning the constraints*. in this paper , the requirements ( [ prod1 ] ) and ( [ curv ] ) enforcing path consistency of place cell replay are imposed on a fully trained network : it is assumed that the place fields have had time to stabilize and that the cell assemblies with constant weights have had time to form . in a more realistic approach, these constraints should modulate the hippocampal network s training process .for example , if the unconstrained network is trained by minimizing a certain cost functional then the constraints ( [ curv ] ) would contribute an additional curvature term " , defined , e.g. , via lagrange multipliers , physiologically , the network may be trained by `` ringing out '' the violations of the conditions ( [ curv ] ) in the neuronal circuit , i.e. , by replaying sequences and adapting the synaptic weights to get rid of the centers of non - vanishing holonomy .curiously , the role played by in ( [ action ] ) resembles the role played by the curvature term in the hilbert einstein action of general relativity theory , which ensures that , in the absence of gravitational field sources , the solution of the hilbert einstein equations describes a flat space - time . by analogy , the constraints imposed by ( [ curv ] )may be viewed as conditions that enforce `` synaptic flatness '' of the hippocampal cognitive map .it is worth noting that the mechanism suggested here is an implementation of the zero holonomy condition in this simplest case of the reader - centric cell assembly theory that is consistent with physiology .the place cell readout might involve , instead of a single neuron , a small network of a few neurons ( not yet identified experimentally ) , which might require a different implementation of zero holonomy principle , depending on the specific architecture of such a network .if the readout network is a cluster of synchronously activated downstream neurons , then this cluster of cells could be viewed as a `` meta - neuron '' and the proposed approach would apply to this case as well .more complicated architectures would require modifications , but it is reasonable that the reproducibility of the population vector would require zero holonomy in all cases .i thank v. brandt and r. phenix for their critical reading of the manuscript and the reviewers for helpful comments .. the work was supported in part by houston bioinformatics endowment fund , the w. m. keck foundation grant for pioneering research and by the nsf 1422438 grant .* transfer matrix * construction is carried out for the case , since higher dimensions are similar . in the matrix form ,equation ( [ v2sol1 ] ) defined over the simplex , can be written as , in which the matrix transfers the activity vector defined over the incoming edge ] of the _ same _ simplex ( e.g. , from the edge ] of on figure [ thickpath]a ) , to ignite the readout neuron of the next cell assembly , which shares the edge $ ] with the vector ( [ fp1 ] ) needs to be transformed into by the diagonal matrix . together, these two operations produce the transfer matrix a direct verification shows that a product of transfer matrices that stat and end at the same simplex , has the form ( [ mgamma ] ) , in which are order polynomials of the coefficients ( [ mu ] ) . * coupling between simplexes*. a schematic illustration of a maximal simplex span by three pivot vertexes , , and , and shared by three overlapping elementary paths , and in a cell assembly complex . 
as a result, the synaptic connectivity coefficients will appear in three sets of discrete curvatures , , and , , , , bootstrapping the constraints ( [ curv ] ) . ] * tuning of the fluctuation distribution*. in case when the fluctuations are small , , the constraints ( [ curv ] ) uncouple ( figure [ coupling ] ) yielding linearized curvature coefficients where are constant coefficients and the summation is over the vertexes of the elementary path ( figure [ holonomy]b ) . to simplify the expression ( [ eps ] ) , we rewrite it using indexes and , and exponentiate the delta - functions , using the joint distribution ( [ indep ] ) and the linearized expressions ( [ lincurv ] ) in ( [ expp ] ) produces where the are the coefficients obtained by collecting the terms proportional to produced by ( [ lincurv ] ) . completing the square and integrating over yields a gaussian integral over a positive quadratic form , where is the row of the matrix . evaluating ( [ pp ] ) yields where since the second term in the parentheses is positive , , which indicates narrowing of the uncoupled distribution ( [ gauss ] ) .the magnitude of the correction in ( [ epss ] ) depends on the topological structure of the coactivity complex ( e.g. , its dimensionality and the statistics of the pivots orders , ) and on the dressing parameters , . in the approximation ( [ param ] ) , , the diaginal matrix elemens of the matrix are of the order , and hence the m. e. hasselmo , l. m. giocomo , m. p. brandon and m. yoshida , _ cellular dynamical mechanisms for encoding the time and place of events along spatiotemporal trajectories in episodic memory _ , behav .brain res .. , vol .215(2 ) , pp . 261 - 74 ( 2010 ) .e. brown , l. frank , d. tang , m. quirk and m. wilson , a statistical paradigm for neural spike train decoding applied to position prediction from ensemble firing patterns of rat hippocampal place cells , _ j. neurosci . _ ,18 : 7411 - 7425 ( 1998 ) .m. arai , v. brandt and y. dabaghian , _ the effects of theta precession on spatial learning and simplicial complex dynamics in a topological model of the hippocampal spatial map _ , plos comput biol 10(6 ) : e1003651 ( 2014 ) .t. jahans - price , t. gorochowski , m. wilson , m. jones and r. bogacz , _ computational modelling and analysis of hippocampal - prefrontal information coding during a spatial decision - making task _ ,neurosci . 8( 2014 ) .p. poirazi , t. brannon and b. mel , _ arithmetic of subthreshold synaptic summation in a model ca1 pyramidal cell _ , neuron 37 , pp .977 - 987 ( 2003 ) .p. poirazi , t. brannon and b. mel , _ pyramidal neuron as two - layer neural network _ ,neuron 37 , pp . 989 - 999 ( 2003 ) .a. wallach , d. eytan , a. gal , c. zrenner and s. marom , _ neuronal response clamp _ , frontiers in neuroengineering 4 ( 2011 ) .l. floriani , m. mesmoudi , f. morando and e. puppo . , _ non - manifold decomposition in arbitrary dimensions _ , in discrete geometry for computer imagery , a. braquelaire , j .- o .lachaud , and a. vialard , editors , springer berlin heidelberg , pp .69 - 80 , ( 2002 ) .
place cells in the rat hippocampus play a key role in creating the animal's internal representation of the world. during active navigation, these cells spike only in discrete locations, together encoding a map of the environment. electrophysiological recordings have shown that the animal can revisit this map mentally, during both sleep and awake states, reactivating the place cells that fired during exploration in the same sequence in which they were originally activated. although consistency of place cell activity during active navigation is arguably enforced by sensory and proprioceptive inputs, it remains unclear how a consistent representation of space can be maintained during spontaneous replay. we propose a model that accounts for this phenomenon and suggests that spatially consistent replay requires a number of constraints on the hippocampal network, affecting both its synaptic architecture and the statistics of synaptic connection strengths.
imagine the following scenario .members of your organization are located throughout a crowded conference hall .you know a rumor that you want to spread to all the members of your organization , but you do not want anyone else in the hall to learn it . to maintain discreetness , communication occurs only through whispered one - on - one conversations held between pairs of nearby members of your organization . in more detail ,time proceeds in rounds . in each round , each member of your organization can attempt to initiate a whispered conversation with a single nearby member in the conference hall . to avoid drawing attention , each member can only whisper to one person per round . in this paper , we study how quickly simple random strategies will propagate your rumor in this imagined crowded conference hall scenario . [ [ the - classical - telephone - model . ] ] the classical telephone model .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + at first encounter , the above scenario seems mappable to the well - studied problem of rumor spreading in the classical _ telephone model . _ in more detail , the telephone model describes a network topology as a graph of size with a computational process ( called _ nodes _ in the following ) associated with each vertex in . in this model, an edge indicates that node can communicate directly with node .time proceeds in rounds . in each round, each node can initiate a _ connection _( e.g. , place a telephone call ) with a neighbor in through which the two nodes can then communicate .there exists an extensive literature on the performance of a random rumor spreading strategy called push - pull in the telephone model under different graph assumptions ; e.g. , .the push - pull algorithm works as follows : _ in each round , each node connects to a neighbor selected with uniform randomness ; if exactly one node in the connection is _ informed _( knows the rumor ) and one node is _ uninformed _ ( does not know the rumor ) , then the rumor is spread from the informed to the uninformed node ._ an interesting series of papers culminating only recently established that push - pull terminates ( with high probability ) in rounds in graphs with vertex expansion , and in rounds in graphs with graph conductance .( see section [ sec : prelim ] for definitions of and . )it might be tempting to use these bounds to describe the performance of the push - pull strategy in our above conference hall scenario_but they do not apply_. a well - known quirk of the telephone model is that a given node can accept an unbounded number of incoming connections in a single round .for example , if a node has neighbors initiate a connection in a given round , in the classical telephone model is allowed to accept all connections and communicate with all neighbors in that round . in our conference hall scenario , by contrast , we enforce the natural assumption that each node can participate in at most one connection per round .( to share the rumor to multiple neighbors at once might attract unwanted attention . ) the existing analyses of push - pull in the telephone model , which depend on the ability of nodes to accept multiple incoming connections , do not carry over to this bounded connection setting .[ [ the - mobile - telephone - model . ] ] the mobile telephone model .+ + + + + + + + + + + + + + + + + + + + + + + + + + + in this paper , we formalize our conference hall scenario with a variant of the telephone model we call the _ mobile telephone model . 
_ our new model differs from the classical version in that it now limits each node to participate in at most one connection per round .we also introduce two new parameterized properties .the first is _ stability _ , which is described with an integer . for a given ,the network topology must remain stable for intervals of at least rounds before changing .the second property is _ tag length _ , which is described with an integer .for a given , at the beginning of each round , each node is allowed to publish an _ advertisement _ containing bits that is visible to its neighbors .notice , for and , the mobile telephone model exactly describes the conference hall scenario that opened this paper . our true motivation for introducing this model , of course , is not just to facilitate covert cavorting at conferences .we believe it fits many emerging peer - to - peer communication technologies better than the classical telephone model .in particular , in the massively important space of mobile wireless devices ( e.g. , smartphones , tablets , networked vehicles , sensors ) , standards such as bluetooth le , wifi direct , and the apple multipeer connectivity framework , all depend on a _ scan - and - connect _ architecture in which devices scan for nearby devices before attempting to initiate a reliable unicast connection with a single neighbor. this architecture does not support a given device concurrently connecting with many nearby devices .furthermore , this scanning behavior enables the possibility of devices adding a small number of advertisement bits to their publicly visible identifiers ( as we capture with our tag length parameter ) , and mobility is fundamental ( as we capture with our graph stability parameter ) .[ [ results . ] ] results .+ + + + + + + + in this paper , we study rumor spreading in the mobile telephone model under different assumptions regarding the connectivity properties of the graph as well as the values of model parameters and .all upper bound results described below hold with high probability in the network size .we begin , in section [ sec : prop ] , by studying whether and still provide useful upper bounds on the efficiency of rumor spreading once we move from the classical to mobile telephone model .we first prove that offline optimal rumor spreading terminates in rounds in the mobile telephone model in any graph with vertex expansion .it follows that it is _ possible _ , from a graph theory perspective , for a simple distributed rumor spreading algorithm in the mobile telephone model to match the performance of push - pull in the classical telephone model .( the question of whether simple strategies _ do _ match this optimal bound is explored later in the paper . 
) at the core of this analysis are two ideas : ( 1 ) the size of a maximum matching bridging a set of informed and uninformed nodes at a given round describes the maximum number of new nodes that can be informed in that round ; and ( 2 ) we can , crucially , bound the size of these matchings with respect to the vertex expansion of the graph .we later leverage both ideas in our upper bound analysis .we then consider graph conductance and uncover a negative answer .in particular , we prove that offline optimal rumor spreading terminates in rounds in graphs with conductance , maximum degree , and minimum degree .we also prove that there exist graphs where rounds are required .these results stand in contrast to the potentially much smaller upper bound of for push - pull in the classical telephone model .in other words , once we shift from the classical to mobile telephone model , conductance no longer provides a useful upper bound on rumor spreading time . in section [ sec : b0 ], we turn our attention to studying the behavior of the push - pull algorithm in the mobile telephone model with and . , there are several natural modifications we must make to push - pull for it to operate as intended under the new assumptions of the mobile telephone model . ]our goal is to determine whether this standard strategy approaches the optimal bounds from section [ sec : prop ] . for the case of vertex expansion, we provide a negative answer by constructing a graph with constant vertex expansion in which push - pull requires rounds to terminate .whether there exists _ any _ distributed rumor spreading algorithm that can approach optimal bounds with respect to vertex expansion under these assumptions , however , remains an intriguing open question .for the case of graph conductance , we note that a consequence of a result from is that push - pull in this setting comes within a factor of the ( slow ) optimal bound proved in section [ sec : prop ] .in other words , in the mobile telephone model rumor spreading might be slow with respect to a graph s conductance , but push - pull matches this slow spreading time .finally , in section [ sec : b1 ] , we study push - pull in the mobile telephone model with . in more detail , we study the natural variant of push - pull in this setting in which nodes use their -bit tag to advertise at the beginning of each round whether or not they are informed .we assume that informed nodes select a neighbor in each round uniformly from the set of their uninformed neighbors ( if any ) .we call this variant _ productive push _ ( ppush ) as nodes only attempt to push the rumor toward nodes that still need the rumor. notice , in the classical telephone model , the ability to advertise your informed status trivializes rumor spreading as it allows nodes to implement a basic flood ( uninformed nodes pull only from informed neighbors)which is clearly optimal . in the mobile telephone model , by contrast ,the power of is not obvious : a given informed node can only communicate with ( at most ) a single uninformed neighbor per round , and it can not tell in advance which such neighbor might be most useful to inform . our primary result in this section , which provides the primary upper bound contribution of this paper , is the following : in the mobile telephone model with and stability parameter , ppush terminates in rounds , where . 
in other words , for , ppush terminates in rounds , matching ( within log factors ) the performance of the optimal algorithm in the mobile telephone model _ and _ the performance of push - pull in the classical telephone model .an interesting implication of this result is that the power gained by allowing nodes to advertise whether or not they know the rumor outweighs the power lost by limiting nodes to a single connection per round . as the stability of the graph decreases from toward , the performance of ppush is degraded by a factor of . at the core of this resultis a novel analysis of randomized approximate distributed maximal matchings in bipartite graphs , which we combine with the results from section [ sec : prop ] to connect the approximate matchings generated by our algorithm to the graph vertex expansion .we note that it is not _ a priori _ obvious that mobility makes rumor spreading more difficult .it remains an open question , therefore , as to whether this factor is an artifact of our analysis or a reflection of something fundamental about changing topologies .[ [ returning - to - the - conference - hall . ] ] returning to the conference hall .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the ppush algorithm enables us to tackle the question that opens the paper : _ what is a good way to discreetly spread a rumor in a crowd ?_ one answer , we now know , goes as follows .if you know the rumor , randomly choose a nearby member that does not know the rumor and attempt to whisper it in their ear .when you do , also instruct them to make some visible sign to indicate to their neighborhood that they are now informed ; e.g. , turn your conference badge upside down " .( this signal can be agreed upon in advance or decided by the source and spread along with the rumor . )this simple strategy which effectively implements ppush in the conference hall will spread the rumor fast with respect to the crowd topology s vertex expansion , and it will do so in a way that copes elegantly and automatically to any level of encountered topology changes . more practically speaking , we argue that in the new world of mobile peer - to - peer networking , something like ppush is probably the right primitive to use to spread information efficiently through an unknown and potentially changing network .the telephone model described above was first introduced by frieze and grimmett . a key problem in this model is _rumor spreading _ : a rumor must spread from a single source to the whole network . in studying this problem , algorithmic simplicityis typically prioritized over absolute optimality . the push algorithm ( first mentioned ) , for example , simply has every node with the message choose a neighbor with uniform randomness and send it the message . the pull algorithm ( first mentioned ) , by contrast ,has every node without the message choose a neighbor with uniform randomness and ask for the message .the push - pull algorithm combines those two strategies . in a complete graph , both push andpull complete in rounds , with high probability leveraging epidemic - style spreading behavior .karp et al . proved that the average number of connections per node when running push - pull in the complete graph is bounded at . 
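To make the round structure of these strategies concrete, the following is a minimal simulation sketch of push-pull in the classical telephone model, where a node may serve any number of incoming connections in one round. The graph encoding and function names are our own illustrative choices, not code from the works cited above.

```python
import random

def push_pull_classical(adj, source, max_rounds=10**6):
    """Push-pull in the classical telephone model: each round every node calls
    one uniformly random neighbor, and a node may serve any number of callers.
    `adj` maps a node to the list of its neighbors."""
    informed = {source}
    for r in range(1, max_rounds + 1):
        newly_informed = set()
        for v in adj:
            u = random.choice(adj[v])                 # the neighbor v calls
            if v in informed and u not in informed:
                newly_informed.add(u)                 # push: v hands u the rumor
            elif v not in informed and u in informed:
                newly_informed.add(v)                 # pull: v asks u for the rumor
        informed |= newly_informed
        if len(informed) == len(adj):
            return r
    return None

# on the complete graph with 64 nodes this typically finishes in O(log n) rounds
complete = {v: [u for u in range(64) if u != v] for v in range(64)}
print(push_pull_classical(complete, source=0))
```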
in recent years, attention has turned toward studying the performance of push - pull with respect to graph properties describing the connectedness or expansion characteristics of the graph .one such measure is _ graph conductance _, denoted , which captures , roughly speaking , how well - knit together is a given graph .a series of papers produced increasingly refined results with respect to , culminating in the 2011 work of giakkoupis which established that push - pull terminates in rounds with high probability in graphs with conductance .this bound is tight in the sense that there exist graphs with this diameter and conductance . around this same time ,chierichetti et al . motivated and initiated the study of push - pull with respect to the graphs vertex expansion number , , which measures its expansion characteristics .follow - up work by giakkoupis and sauerwald proved that there exist graphs with expansion where rounds are necessary for push - pull to terminate , and that push alone achieves this time in regular graphs .fountoulakis et al . proved that push performs better in this case , rounds given even stronger expansion properties . a 2014 paper by giakkoupis proved a matching bound of for push - pull in any graph with expansion .recent work by daum et al . emphasized the shortcoming of the telephone model mentioned above : it allows a single node to accept an unlimited number of incoming connections .they study a restricted model in which each node can only accept a single connection per round .we emphasize that the mobile telephone model with and is equivalent to the model of .this existing work proves the existence of graphs where pull works in polylogarithmic time in the classical telephone model but requires rounds in their bounded variation .they also prove that in any graph with maximum degree and minimum degree , push - pull completes in rounds , where is the performance of push - pull in the classical telephone model .our work picks up where leaves off by : ( 1 ) studying the relationship between rumor spreading and graph properties such as and under the assumption of bounded connections ; ( 2 ) leveraging small advertisement tags to identify simple strategies that close the gap with the classical telephone model results ; and ( 3 ) considering the impact of topology changes . finally , from a centralized perspective , baumann et al . proved that in a model similar to the mobile telephone model with and ( i.e. , a model where you can only connect with a single neighbor per round but can learn the informed status of all neighbors in every round ) there exists no ptas for computing the worst - case rumor spreading time for a push - pull style strategy in a given graph .we will model a network topology with a connected undirected graph . for each , we use to describe s neighbors and to describe .we define and . for a given node ,define . forgiven set , define and let describe the number of edges with one endpoint in and one endpoint in . as in , we define the _ graph conductance _ of a given graph as follows : for a given , define the _ boundary _ of , indicated , as follows : : that is , is the set of nodes not in that are directly connected to by an edge .we define . as in , we define the _ vertex expansion _ of a given graph as follows : notice that , despite the possibility of for some , we always have ] , we have and .[ fact : prob ]we introduce a variation of the classical telephone model we call the _ mobile telephone model_. 
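The two expansion measures can be computed by brute force on small examples, which is handy for checking intuitions such as the star graph discussed later (constant conductance but vanishing vertex expansion). The sketch below assumes the standard definitions (conductance minimizes cut(S)/vol(S) over sets with at most half the total volume; vertex expansion minimizes |boundary(S)|/|S| over non-empty sets with at most n/2 nodes), which we believe match the formulas elided above; it is only practical for very small graphs.

```python
from itertools import combinations

def cut_edges(adj, S):
    """Number of edges with exactly one endpoint in S."""
    return sum(1 for v in S for u in adj[v] if u not in S)

def conductance(adj):
    """Brute-force phi(G): minimize cut(S)/vol(S) over sets S whose volume
    (sum of degrees) is at most half of the total volume."""
    nodes, best = list(adj), float("inf")
    total_vol = sum(len(adj[v]) for v in nodes)
    for k in range(1, len(nodes)):
        for S in map(set, combinations(nodes, k)):
            vol = sum(len(adj[v]) for v in S)
            if 0 < vol <= total_vol / 2:
                best = min(best, cut_edges(adj, S) / vol)
    return best

def vertex_expansion(adj):
    """Brute-force alpha(G): minimize |boundary(S)|/|S| over non-empty sets S
    with |S| <= n/2, where boundary(S) is the set of nodes outside S adjacent to S."""
    nodes, best = list(adj), float("inf")
    for k in range(1, len(nodes) // 2 + 1):
        for S in map(set, combinations(nodes, k)):
            boundary = {u for v in S for u in adj[v]} - S
            best = min(best, len(boundary) / len(S))
    return best

# a star with 7 points: constant conductance, but vertex expansion only 2/n
star = {0: list(range(1, 8)), **{i: [0] for i in range(1, 8)}}
print(conductance(star), vertex_expansion(star))
```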
this model describes a network topology in each round as an undirected connected graph .we assume a computational process ( called a _ node _ ) is assigned to each vertex in .time proceeds in synchronized rounds . at the beginning of each round, we assume each node knows its neighbor set .node can then select at most one node from and send a connection proposal . a node that sends a proposal can not also receive a proposal .however , if a node does not send a proposal , and at least one neighbor sends a proposal to , then can select at most one incoming proposal to accept .( a slightly stronger variation of this model is that the accepted proposal is selected arbitrarily by an adversarial process and not by .our algorithms work for this strong variation and our lower bounds hold for the weaker variation . )if node accepts a proposal from node , the two nodes are _ connected _ and can perform an unbounded amount of communication in that round .we parameterize the mobile telephone model with two integers , and . if , then we allow each node to select a _tag _ containing bits to advertise at the beginning of each round .that is , if node chooses tag at the beginning of a round , all neighbors of learn before making their connection decisions in this round .we also allow for the possibility of the network topology changing , which we formalize by describing the network topology with a dynamic graph .we bound the allowable changes in with a _ stability _parameter . for a given , must satisfy the property that we can partition it into intervals of length , such that all static graphs in each interval are the same . for ,the graph can change every round .we use the convention of stating to indicate the graph never changes . in the mobile telephone model we studythe _ rumor spreading problem _ , defined as follows : a single distinguished source begins with a _ rumor _ and the problem is solved once all nodes learn the rumor .as summarized above , a series of recent papers established that in the classical telephone model push - pull terminates with high probability in rounds in graphs with vertex expansion , and in rounds in graphs with graph conductance .the question we investigate here is the relationship between and and the optimal offline rumor spreading time in the mobile telephone model .that is , we ask : once we bound connections , do and still provide a good indicator of how fast a rumor can spread in a graph ?our goal in this section is to prove the following property regarding optimal rumor spreading in our model and its relationship to the graph s vertex expansion : fix some connected graph with vertex expansion . the optimal rumor spreading algorithm terminates in rounds in in the mobile telephone model .[ thm : alpha ] in other words , it is at least theoretically possible to spread a rumor in the mobile telephone model as fast ( with respect to ) as push - pull in the easier classical telephone model . in the analysis below ,assume a fixed connected graph with vertex expansion and .[ [ connecting - maximum - matchings - to - rumor - spreading . ] ] connecting maximum matchings to rumor spreading .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the core difference between our model and the classical telephone model is that now each node can only participate in at most one connection per round .unlike in the classical telephone model , therefore , the set of connections in a given round must describe a matching . 
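Because a round's connections form a matching, the number of nodes that can be newly informed in that round is bounded by a maximum matching between the informed set and its uninformed neighbors. The following sketch computes such a matching with the standard augmenting-path (Kuhn) routine; the function and variable names are ours.

```python
def max_round_matching(informed, adj):
    """Maximum matching between the informed set and its uninformed neighbors.
    Its size bounds how many new nodes can be informed in a single round of the
    mobile telephone model.  Returns {uninformed node: informed partner}."""
    match = {}

    def augment(v, visited):
        for u in adj[v]:
            if u in informed or u in visited:
                continue
            visited.add(u)
            if u not in match or augment(match[u], visited):
                match[u] = v
                return True
        return False

    for v in informed:
        augment(v, set())
    return match

# example: on the path a-b-c with only a informed, at most one node can be
# informed in the next round
print(len(max_round_matching({"a"}, {"a": ["b"], "b": ["a", "c"], "c": ["b"]})))
```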
to make this more concrete ,we first define some notation .in particular , given some , let be the bipartite graph with bipartitions and the edge set , , and . also recall that the _ edge independence number _ of a graph , denoted , describes the maximum matching on .we can now formalize our above claim as follows : fix some .the maximum number of concurrent connections between nodes in and in a single round is .[ lem : match ] we can connect the smallest such maximum matchings in our graph to the optimal rumor spreading time .our proof of the following lemma combines the connection between matchings and rumor spreading captured in lemma [ lem : match ] , with the same high - level analysis structure deployed in existing studies of rumor spreading and vertex expansion in the classical telephone model ( e.g. , ) : let .it follows that optimal rumor spreading in terminates in rounds .[ lem : gamma ] assume some subset know the rumor .combining lemma [ lem : match ] with the definition of , it follows that : ( 1 ) if , then at least new nodes can learn the rumor in the next round ; and ( 2 ) if , then at least new nodes can learn the rumor .so long as case holds , the number of informed nodes grows by at least a factor of in each round .by fact [ fact : prob ] , after rounds , the number of informed nodes has grown to at least .therefore , after at most rounds , the set of informed nodes is of size at least . at this pointwe can start applying case to the shrinking set of uninformed nodes .again by fact [ fact : prob ] , after additional rounds , the number of uninformed nodes has reduced to at most .therefore , after at most rounds , the set of uninformed nodes is reduced to a constant .after this point , a constant number of additional rounds is sufficient to complete rumor spreading .it follows that rounds is enough to solve the problem .[ [ connecting - maximum - matching - sizes - to - vertex - expansion . ] ] connecting maximum matching sizes to vertex expansion .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + given lemma [ lem : gamma ] , to connect rumor spreading time to vertex expansion in our mobile telephone model , it is sufficient to bound maximum matching sizes with respect to . in particular , we will now argue that ( the details of this constant factor do not matter much ; happened to be convenient for the below argument ) .theorem [ thm : alpha ] follows directly from the below result combined with lemma [ lem : gamma ] .let .it follows that .[ lem : msize ] we can restate the lemma equivalently as follows : _ for every , , the maximum matching on is of size at least ._ we will prove this equivalent formulation .to start , fix some arbitrary subset such that .let be the size of a maximum matching on .recall that .therefore , if we could show that , we would be done .unfortunately , it is easy to show that this is not always the case .consider a partition in which a single node is connected to large number of nodes in , and these are the only edges leaving .the vertex expansion in this example is large while the maximum matching size is only ( as all nodes in share as an endpoint ) . to overcome this problem ,we will , in some instances , instead consider a related smaller partition such that is small enough to ensure our needed property .in more detail , we consider two cases regarding the size of : _ the first case _ is that . by definition ,it follows that , which more than satisfies our claim . 
_ the second ( and more interesting ) case _ is that .let be a maximum matching of size for .let be the endpoints in in .we define a smaller partition .note , by the case assumption , .we now argue that every node in is also in . to see why ,assume for contradiction that there exists some that is not in .because , there must exist some edge , where .notice , however , because is in it is not in .if follows that we could have added to our matching defined on the assumption that is maximum .we have established , therefore , that .it follows : from which it follows that , as needed to satisfy the claim . in the classical telephone model push - pull terminates in rounds in a graph with conductance .here we prove optimal rumor spreading might be much slower in the mobile telephone model . to establish the intuition for this result ,consider a star graph with one center node and points .it is straightforward to verify that the conductance of this graph is constant .but it is also easy to verify that at most one point can learn the rumor per round in the mobile telephone model , due to the restriction that each node ( including the center of the star ) can only participate in one connection per round . in this case , every rumor spreading algorithm will be a factor of slower than push - pull in the classical telephone model .below we formalize a fine - grained version of this result , parameterized with maximum and minimum degree of the graph .we then leverage theorem [ thm : alpha ] , and a useful property from , to prove the result tight .fix some integers , such that .there exists a graph with minimum degree and maximum degree , such that every rumor spreading algorithm requires rounds in the mobile telephone model .in addition , for every graph with minimum degree and maximum degree , the optimal rumor spreading algorithm terminates in rounds in the mobile telephone model .[ thm : phi ] fix some and as specified by the theorem statement .consider a generalization of the star , , where the center is composed of a clique containing nodes , and there are point nodes , each of which is connected to ( and only to ) all nodes in the center clique .we first establish that the graph conductance of is constant . to see why, we consider three cases for each set considered in the definition of .the first case is that includes only center nodes .here it follows : next consider the case where contains only point nodes . hereit follows : finally , consider the case where contains points nodes and center nodes .an important observation is that .it follows that : because , given that every edge is symmetrically adjacent to the center , it would otherwise follow : .we now lower bound the conductance by as follows : if , then for each node in , at least half of its neighbors are outside , which shows that . on the other hand ,suppose .then at least of the edges between the center clique and point nodes go out of , because and . andat least of the edges inside the clique go out of , because .since the conductance is simply a weighted average of these two ratios , we have . having established the conductance is constant , to conclude the lower bound component of the theorem proof , it is sufficient to note that at most of the points can learn the rumor per round .it follows that every rumor spreading algorithm requires at least rounds . 
finally , to prove that rounds is always sufficient for a graph with minimum and maximum degrees and , respectively , we leverage the following property ( noted in , among other places ) : for every graph with vertex expansion and graph conductance , , which directly implies . combining this observation with theorem [ thm : alpha ] , the claimed upper bound follows .we now study the performance of push - pull in the mobile telephone model with and . we investigate its performance with respect to the optimal rumor spreading performance bounds from section [ sec : prop ] . in more detail, we consider the following natural variation of push - pull , adapted to our model : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ in even rounds , nodes that know the rumor choose a neighbor at random and attempt to establish a connection to push the message . in odd rounds , nodes that do not know the rumor choose a neighbor at random and attempt to establish a connection to pull the message ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ we study this push - pull variant with respect to both graph conductance and vertex expansion . [ [ graph - conductance - analysis . ] ] graph conductance analysis .+ + + + + + + + + + + + + + + + + + + + + + + + + + + we begin by considering the performance of this algorithm with respect to graph conductance .theorem [ thm : phi ] tells us that for any minimum and maximum degree and , respectively , the optimal rumor spreading algorithm completes in rounds , and there are graphs where rounds are necessary .interestingly , as noted in section [ sec : related ] , daum et al . proved that the above algorithm terminates in rounds , where is the optimal performance of push - pull in the classical telephone model . because in the classical setting , the above algorithm should terminate in rounds in our model nearly matching the bound from theorem [ thm : phi ] .put another way , rumor spreading potentially performs poorly with respect to graph conductance , but push - pull with nearly matches this poor performance .notice , we are omitting from consideration here the mobile telephone model with graphs that can change ( non - infinite ) . the analysis from does not hold in this case and new work would be required to bound push - pull in this setting with less stable graphs . by contrast ,when studying uniform rumor spreading below for , we explicitly include the graph stability as a parameter in our time complexity .[ [ vertex - conductance - analysis . ] ] vertex conductance analysis . 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + arguably , the more important optimal time complexity bound to match is the bound established in theorem [ thm : alpha ] , as it is similar to the performance of push - pull in the telephone model .we show , however , that for , the algorithm can deviate from the optimal performance of theorem [ thm : alpha ] by a factor in .this observation motivates our subsequent study of the case where we prove that uniform rumor spreading can nearly match optimal performance with respect to vertex expansion .[ lem : badgraph ] there is a graph with constant vertex expansion , in which the above algorithm would need at least rounds to spread the rumor , with high probability .we start with describing the graph .the graph has two sides , the left side is a complete graph with nodes , and the right side is an independent set of size .the connection between and is made of two edge - sets : ( 1 ) a matching of size , ( 2 ) a full bipartite graph connecting to a subset of where .see . note that this graph has constant vertex expansion .we argue that , w.h.p ., per round at most new nodes of get informed ( regardless of the current state ) .hence , the spreading takes rounds , despite the good constant vertex expansion of . rounds . the nodes on the left side form a complete graph , and the nodes on the right side are only connected to , via a matching of size and a complete bipartite graph to subset of size .,scaledwidth=60.0% ] first , let us consider the push process : each node in pushes to a node in with probability .hence , over all nodes of , we expect pushes to , which , by chernoff , means we will not have more than such pushes , with high probability . on the other hand , each node in pushes to at most one node in .hence , the total number of successful pushes to is at most , with high probability .now , we consider the restricted pull process ( rpull ) : for each node , with probability 1/ , the pull lands in . hence ,overall , we expect pulls to land in .hence , with high probability , at most nodes of get informed by pulling nodes of . on the other hand , the vast majority of the pulls of -nodes lands in but due to the restriction in the rpull, nodes of can only respond to of these pulls , which is many .hence , the number of -nodes informed via pulls is at most , with high probability . taking both processes into account, we get that per round , new nodes of get informed , which means rumor spreading will require at least rounds .in the previous section , we proved that push - pull in the mobile telephone model and fails to match the optimal vertex expansion bound by a factor in in the worst case .motivated by this shortcoming , we turn our attention to the setting where . in particular , we consider the following natural variant of push - pull adapted to our model with .we call this algorithm _ productive push _ ( or , ppush ) as nodes leverage the -bit tag to advertise their informed status and therefore keep connections productive . 
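For experimentation, here is a rough simulation of the push-pull variant above under the single-connection restriction (b = 0): informed nodes propose in even rounds, uninformed nodes in odd rounds, and a node that proposes cannot also accept. How a receiver breaks ties among incoming proposals is our own choice here; the model allows it to be adversarial.

```python
import random

def push_pull_mobile(adj, source, max_rounds=10**6):
    """Push-pull in the mobile telephone model with b = 0 and a static graph:
    informed nodes propose in even rounds (push), uninformed nodes in odd
    rounds (pull); every node takes part in at most one connection per round."""
    informed = {source}
    for r in range(max_rounds):
        senders_are_informed = (r % 2 == 0)           # even rounds: push
        proposals, proposers = {}, set()
        for v in adj:
            if (v in informed) == senders_are_informed:
                proposers.add(v)
                proposals.setdefault(random.choice(adj[v]), []).append(v)
        for receiver, senders in proposals.items():
            if receiver in proposers:
                continue                               # a proposer cannot accept
            partner = random.choice(senders)           # accept one proposal only
            if (receiver in informed) != (partner in informed):
                informed.add(receiver)
                informed.add(partner)
        if len(informed) == len(adj):
            return r + 1
    return None
```

Running this on a star, or on the clique-plus-independent-set construction above (with illustrative sizes), makes the slowdown relative to the classical telephone model easy to observe.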
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ at the beginning of each round , each node uses a single bit to advertise whether or not it is _ informed _ ( knows the rumor ) .each informed node that has at least one uninformed neighbor , chooses an uninformed neighbor with uniform randomness and tries to form a connection to send it the rumor ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ fix a dynamic network of size with vertex expansion and stability factor at least , .the ppush algorithm solves rumor spreading in in rounds , with high probability in .[ thm : ppush ] to prove this theorem , the core technical part is in studying the success of ppush over a stable period of rounds , which we present in section [ sec : upper:1 ] .this analysis bounds the number of new nodes that receive the message in the stable period with respect to the size of the maximum matching defined over the informed and uninformed partitions at the beginning of the stable period . in section [ sec : upper:2 ] , we connect this analysis back to the vertex expansion of the graph ( leveraging our earlier analysis from section [ sec : prop ] connecting to edge independence numbers ) , and carry it through over multiple stable periods until we can show rumor spreading completes . to study the effectiveness of the ppush rumor spreading algorithm we connect it to the problem of generating a graph matching . in more detail ,we will lower bound the size of a matching induced by ppush over a limited round execution and show that this matching size lower bounds the extent to which ppush spreads the rumor to new nodes .we will leverage this matching result to prove our final theorem about the performance of ppush .we first formalize the notion of _ inducing _ a matching with an execution of ppush .we will confine our attention to running ppush in bipartite graphs , as this will be the only relevant context for our subsequent use of these results . at a high - level ,the matching induced a fixed number of rounds of ppush is simply the set of edges where started with the rumor , did not , and was the first node informed . in more detail , consider an execution consisting of rounds of ppush on a bipartite graph with bipartitions and , where all ( and only ) the nodes in being the execution with the rumor .we can define the _ matching induced _by as follows : the first time a node succeeds in having its proposal accepted by a node , we add to the matching . 
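A minimal sketch of ppush as boxed above: each node advertises a one-bit informed flag, and informed nodes propose only to uniformly chosen uninformed neighbors. The generator-of-graphs interface is our own way of modelling a changing topology and is not part of the original specification; which incoming proposal an uninformed node accepts is immaterial, since accepting any one of them informs it.

```python
import itertools
import random

def ppush(graphs, source, max_rounds=10**6):
    """Productive push (b = 1).  `graphs` yields the adjacency dict for each
    round, so a changing topology (finite stability tau) can be modelled.
    Informed nodes propose to a uniformly random uninformed neighbor; every
    uninformed node that receives at least one proposal accepts one of them."""
    informed = {source}
    for r, adj in enumerate(itertools.islice(graphs, max_rounds), start=1):
        proposals = set()
        for v in informed:
            uninformed_nbrs = [u for u in adj[v] if u not in informed]
            if uninformed_nbrs:
                proposals.add(random.choice(uninformed_nbrs))
        informed |= proposals                  # each proposed-to node accepts once
        if len(informed) == len(adj):
            return r
    return None

# static-topology usage: ppush(itertools.repeat(G), source=0)
```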
notice the size of the matching induced by is a lower bound on the total number of nodes in that learn the rumor in ( the matching can be strictly smaller in the case where a node in informs multiple nodes in ) .fix a bipartite graph with bipartitions and , such and has a matching of size .assume is a subgraph of some ( potentially ) larger network , and all uninformed neighbors in of nodes in are also in .fix an integer , , where is the maximum degree of .consider an round execution of ppush in in which the nodes in start with the rumor and the nodes in do not . with constant probability : at least nodes in learn the rumor .[ thm : matching ] we start with some helpful notation . for any and , let be the subgraph of induced by the nodes and . similarly , let and be the neighbor and degree functions , respectively , defined over .we begin with the special case .we then move to our main analysis which handles all .( notice , our result below provides an approximation of which is tighter than the approximation for this case claimed by theorem [ thm : matching ] .we could refine the theorem claim to more tightly capture performance for small , but we leave it in the looser more general form for the sake of concision in the result statement . ) consider the maximal matching in the bipartite graph , and let be the set of informed endpoints of this matching .divide nodes of into classes based on their degree in the bipartite graph , by putting nodes of degree in ] .now each node in pushes to its pair in the matching with probability .hence , we expect uninformed endpoints of to be informed by getting a push from their matching pair . if , chernoff bound already shows us that with high probability in this expectation and thus with at least constant probability , the matching size is at least , hence establishing the lemma s claim .suppose on the contrary that .now , let be the set of -nodes adjacent to .call each node high - degree if .each -node pushes to each of its adjacent neighbors with probability at least , which means each high - degree node in gets at least one push with probability at least .note that there are at least edges of incident on .hence , the number of edges incident on is also at least .now either at least edges are incident on high - degree nodes of , or at least edges are incident on low - degree nodes of . 
in the former case , since each high - degree node has degree at most , there must be at least high degree nodes .since each of these gets hit with probability at least , we expect at least such hits .due to the negative correlation of these hits , the lemma s claim follows from chernoff bound .suppose on the contrary that we are in the latter case and edges are incident on low - degree nodes .each low - degree node gets hit with probability at least .since summation of degrees among low - degree nodes is at least , we get that the expected number of hit low - degree nodes is at least .a chernoff bound concentration then completes the proof .we start now the proof of the case by making a claim that says if for a given large subset of that has a relatively small degree sum , a couple rounds of the algorithm run on this subset will either generate a large enough matching , or leave behind a subset with an even smaller degree sum .fix any ] , for some constant that we fix later .notice , for each , we can define , where for each , the variable is a indicator variable indicating that sent a proposal to .notice , for , and are independent .therefore , is defined as the sum of independent random variables .it follows that we can apply a chernoff bound to achieve concentration on the mean ] let be the total number of proposals received by nodes from nodes . by linearity of expectation : as defined , is _ not _ necessarily the sum of independent random variables , as there could be dependencies between different values .however , it is straightforward to verify that for any , and are _ negatively associated _ : receiving more proposal can only reduce the number of proposals received by . because we can apply a chernoff bound to negatively associated random variables , we can achieve concentration around the expected value for . note that \geq c\log{n} ] . for any , we can bound the probability that , for some sufficiently large constant , to be polynomially small in ( with an exponent that increases with ) . by a union bound ,the probability that any node in receives more than values is still polynomially small in .assume , therefore , that this upper bound holds for all nodes .it follows that if , the number of unique nodes receiving proposals is at least .we can simple combine the high probability bounds on the size of being large and the size of being small ( for every relevant ) , by a simple union bound on all of those events , as each holds with high probability in and we certainly have at most such events .pulling together the pieces , we have shown that if the degree sum on is too large , then with high probability in at least new nodes are informed in this round satisfying the lemma . to conclude, we note that the above analysis only applies to the number of nodes in that are informed by nodes in .it is , of course , possible that some nodes in are also informed by nodes outside of .this behavior can only help this step of the proof as we are proving a lower bound on the number of informed nodes and this can only increase the actual value .we now leverage lemma [ lem : matching1 ] to prove theorem [ thm : matching ] .the following argument establishes a base case that satisfies the lemma preconditions of lemma [ lem : matching1 ] and then repeatedly applies it times . either : ( 1 ) a matching of sufficient size is generated along the way ( i.e. 
, case of the lemma statement applies ) ; or ( 2 ) we begin round with a set with size in that has an average degree in which case it is easy to show that in the final round we get a matching of size .fix a bipartite graph with bipartitions and with a matching of size , and a value , as specified by the theorem statement preconditions . if , the claim follows directly from .assume in the following , therefore , that .we claim that we can apply lemma [ lem : matching1 ] to , , and . to see why , notice that this definition of satisfies the preconditions , , and .it also satisfies the condition requiring all of the uninformed neighbors of to be in . finally , because we fixed , it holds that : , as there are nodes in each with a maximum degree of .consider this first application of lemma [ lem : matching1 ] .it tells us that , w.h.p . , either we finish after one or two rounds , or after a single round we identify a smaller bipartitate graph , where and satisfy all the properties needed to apply the lemma to , , and .we can keep applying this lemma inductively , each time increasing the value of , until either : ( 1 ) we get through ; ( 2 ) an earlier application of the lemma generates a sufficiently large matching to satisfy the theorem ; or ( 3 ) at some point before either option 1 or 2 , the lemma fails to hold .since the third possibility happens with probability polynomially small in at each application , we can use a union bound and conclude that with high probability , it does not happen in any of the iterations . ignoring this negligible probability ,we focus on the other two possibilities . before that , let us discuss a small nuance in applying the lemma times .we need to ensure that the specified sets are always of size at least , as required to keep applying the lemma .notice , however , that we start with an set of size , and the lemma guarantees it decreases by a factor of at most . therefore , after applications , .going back to the two possibilities , if option 2 holds , we are done . on the other hand ,if option 1 holds , we have one final step in our argument . in this case, we end up with having identified a bipartite subgraph with a maximum matching of size at least .we also know as . in this case , it holds trivially that at most nodes of have .hence , at least nodes have degree at most .now , each of these proposes to its own match in with probability at least .thus , we expect nodes of to receive proposals directly from their matches. 
note that these events are independent .moreover , we have as otherwise the claim of the theorem would be trivial .therefore , w.h.p ., nodes of receive proposals from their pairs .hence , at least nodes of get informed , thus completing the proof .divide the rounds into _ stable phases _ each consisting of rounds , such that the graph does not change during a stable phase .we label these phases .let be the node set for .let , for some phase , be the subset of _ informed _ nodes that know rumor at the beginning of phase .let , where is the node set of .let be the approximation factor on the maximum matching provided by theorem [ thm : matching ] .we define the notion of a _ good _ phase with respect to this approximation factor : notice , the factor of in the above definition comes from lemma [ lem : msize ] , from our earlier analysis connecting the size of a maximum matching across any partition to the vertex expansion of the graph .we next bound the number of good phases needed to complete rumor spreading .intuitively , until the rumor spreads to at least half the nodes , each good phase increases the number of informed nodes by a fractional factor of at least .given that we start with informed node , after such increases , the number of informed nodes is at least ( by fact [ fact : prob ] ) .therefore , we need good phases to get the rumor to at least half the nodes . once we have informed half the nodes , we can flip our perspective .we now decrease our uninformed nodes by a factor of at least .if we start with no more than uninformed nodes , then after good phases , the number of uninformed nodes left is no more than ( also by fact [ fact : prob ] ) .similar to before , good phases is sufficient to reduce these remaining nodes down to a constant number at which we can complete the rumor spreading .( see the proof of lemma [ lem : gamma ] for the details of this style of argument . )we capture this intuition formally as follows : we begin by focusing on the probability that a given phase is good .it is in this analysis that we pull together many of the threads woven so far throughout this paper .in particular , we consider the partition between informed and uniformed nodes .the maximum matching between these partitions describe the maximum number of new nodes that might be informed .we can leverage theorem [ thm : matching ] to prove that with at least constant probability , we spread rumors to at least a -fraction of this matching .we then leverage our earlier analysis of the relationship between matchings and vertex expansion , to show that this matching generates a factor of increase in informed nodes ( or decrease in uninformed , depending on what stage we are in the analysis ) .this matches our definition of _ good_. formally : consider the maximum matching between the partitions and .let .a direct implication of lemma [ lem : msize ] ( see the equivalent formulation in the first line of the proof ) , is the following : we now apply theorem [ thm : matching ] to bound what fraction of this matching we can expect to inform in the rounds of the phase that follows . 
in more detail ,set to be the nodes in from , set to be the uninformed neighbors of nodes in , and set .it is easy to verify that these values satisfy the preconditions of theorem [ thm : matching ] .the theorem tells us that with constant probability , at least new nodes learn the message in this phase .combined with our case analysis from above , it follows that if if , then this is at least new nodes , and if then this is at least new nodes . in both cases ,we have satisfied the definition of _ good _ with constant probability . we know from lemma [ lem : good:2 ] that each phase is good with constant probability . we know from lemma [ lem : good ] , that good phases are sufficient .we must now combine these two observations to determine how many phases are needed to generate good phases with high probability .let be the number of good phases out of the first phases .by linearity of expectation and lemma [ lem : good:2 ] : . combining this observation with lemma [ lem : good ]it follows that the _ expected time _ to solve rumor spreading with ppush is in .we are seeking , however , a high probability bound to prove theorem [ thm : ppush ] .we can not simply apply a chernoff bound to concentrate around , as for , and are not necessarily independent .our final theorem proof will leverage a stochastic dominance argument to overcome this obstacle . according to lemma [ lem : good:2 ], there exists some constant probability that lower bounds , for every phase , the probability that the phase is good . for each , we define a trivial random variable that is with independent probability , and otherwise . by definition , for each phase , regardless of the history through phase , stochastically dominates .it follows that if is greater than with some probability , then is greater than with probability at least .a chernoff bound applied to , for ( where is a sufficiently large constant define with respect to the constant from lemma [ lem : good:2 ] and the chernoff form , and is provided lemma [ lem : good ] ) , provides that is at least with high probability in .it follows the same holds for . by lemma [ lem : good ] , this is a sufficient number of good phases to solve rumor spreading . to obtain the final round bound we first note that the upper bound on phases simplifies as : n. fountoulakis and k. panagiotou .rumor spreading on random regular graphs and expanders . in _ approximation , randomization , and combinatorial optimization .algorithms and techniques _ , pages 560573 .springer , 2010 .
in this paper , we study push - pull style rumor spreading algorithms in the _ mobile telephone model _ , a variant of the classical _ telephone model _ in which each node can participate in at most one connection per round ; i.e. , you can no longer have multiple nodes pull information from the same source in a single round . our model also includes two new parameterized generalizations : ( 1 ) the network topology can undergo a bounded rate of change ( for a parameterized rate that spans from no changes to changes in every round ) ; and ( 2 ) in each round , each node can advertise a bounded amount of information to all of its neighbors before connection decisions are made ( for a parameterized number of bits that spans from no advertisement to large advertisements ) . we prove that in the mobile telephone model with no advertisements and no topology changes , push - pull style algorithms perform poorly with respect to a graph s vertex expansion and graph conductance as compared to the known tight results in the classical telephone model . we then prove , however , that if nodes are allowed to advertise a single bit in each round , a natural variation of push - pull terminates in time that matches ( within logarithmic factors ) this strategy s performance in the classical telephone model even in the presence of frequent topology changes . we also analyze how the performance of this algorithm degrades as the rate of change increases toward the maximum possible amount . we argue that our model matches well the properties of emerging peer - to - peer communication standards for mobile devices , and that our efficient push - pull variation that leverages small advertisements and adapts well to topology changes is a good choice for rumor spreading in this increasingly important setting .
quantum key distribution protocols allow two users , alice ( ) and bob ( ) , to establish a shared secret key secure against an all - powerful adversary eve ( ) who is bounded only by the laws of physics - an end unattainable through classical means alone .several such protocols have been developed since the original bb84 ( the reader is referred to for a general survey ) and many of them include rigorous proofs of unconditional security .such a proof of security generally involves determining a bound on the protocol s key - rate ( to be defined shortly , though roughly speaking , it is the ratio of secret key bits to qubits sent ) as a function of the observed noise in the quantum channel . in this paper , we consider several qkd protocols , and derive key - rate expressions based on multiple channel statistics .furthermore , our key - rate bounds will utilize statistics from mismatched measurements ; that is to say , those measurement outcomes where a and b s choice of bases are incompatible - events which are typically discarded by the protocol specification ( there are exceptions as we mention next section ) .in fact , by using these mismatched measurement results , the key - rate bounds we derive demonstrate that many of the protocols we consider here can actually tolerate higher levels of noise than previously thought .thus , the primary contributions of this paper are two - fold .first , we derive a general approach to deriving key - rate expressions , in the asymptotic scenario , for a wide - range of discrete variable qkd protocol utilizing all possible channel statistics , including mismatched measurement results .secondly , by applying this technique to several , very different protocols ( including a limited - resource bb84 , an extended b92 , and a two - way semi - quantum protocol ) , we not only derive new key - rate expressions applicable to arbitrary , possibly asymmetric quantum channels , but , in many cases , our new bounds are substantial improvements over previous work with these protocols . along the way , we will also use our method to investigate optimal qkd protocols for asymmetric channels .we are not the first to consider the use of mismatched measurement outcomes for quantum key distribution . indeed , in the 1990 s barnett et al . showed that mismatched measurement results may be used to better detect an eavesdropper using an intercept - and - resend attack . in ,mismatched measurement bases were applied to the four - state and six - state bb84 protocols .this method was shown to improve the key rate ( as determined by the devetak - winter equation in ) for certain quantum channels , namely the amplitude damping channel and rotation channel . in ,mismatched measurement results were actually used to distill a raw key ( as opposed to being used only for channel tomography ) - a modified bb84 protocol was adopted and this method was shown to improve the key rate for certain channels . 
in , a modified four - state , two - basis bb84 was used where the first basis was the standard computational basis ( ) , while the second consisted of states and where .the authors of that work showed that for small , mismatched measurement bases can still be used to gain good channel estimates while also allowing and to use mismatched measurement bases to distill their key ( since , for small , even with differing bases , their measurement results will be nearly correlated ) .mismatched measurements were used in in order to get better channel statistics for a single - state semi - quantum protocol first introduced in .though single - state semi - quantum protocols utilize two - way quantum channels , they admit many simplifications which ease their security analysis . in this paper, we consider a multi - state semi - quantum protocol ( which are more difficult to analyze ) and show mismatched measurements improve its key - rate . in ,it was proven , using mismatched measurement bases , that the three - state bb84 protocol from has a key rate equal to that of the full four state bb84 protocol assuming a symmetric attack .also a four - state protocol using three bases has a key rate equal to that of the full six - state bb84 protocol . in this paper, we will arrive at the same conclusion , though using a more information theoretic argument .however , we will also analyze other protocols , adapt our approach to two - way quantum channels , consider an optimized qkd protocol , and , for all considered protocols , derive new key - rate expressions suitable for asymmetric channels .our method , building off of our conference paper in ( where we only considered a three - state bb84 protocol ) , is very general and applicable to multiple qkd protocols , both one - way and , as we will demonstrate , two - way ( those which utilize a two - way quantum channel allowing a qubit to travel from to , then return to thus passing through the adversary twice ) . after an introduction to our notation, we will first explain the parameter estimation method and our technique. we will then apply it to variants of the bb84 protocol , confirming the results of mentioned above ( though also deriving key - rate expressions for arbitrary channels ) .we will consider the extended b92 protocol and derive an improved key - rate bound for it .we will then use our method to consider an `` optimal '' qkd protocol .finally , we will analyze a multi - state semi - quantum protocol from which relies on a two - way quantum channel .this new proof of security will derive a more optimistic bound on the key rate expression than the one previously constructed in ( the latter did not use mismatched measurement bases ) .we now introduce some notation we will use . let be the shannon entropy function , namely : where all logarithms in this paper are base two unless otherwise specified .we will occasionally use the notation to mean .we denote by the binary shannon entropy function , namely : .we write to mean the von neumann entropy of the density operator .if is finite dimensional ( and all systems in this paper are finite dimensional ) , then let be its eigenvalues . in this case .if acts on a bipartite hilbert space we will often write . when we write we mean the result of tracing out s portion of ( that is , ) . similarly for and for systems with three or more subspaces .given density operator we will write to mean and to mean . 
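the entropy functions introduced above are straightforward to evaluate numerically ; the following minimal python sketch collects them ( shannon entropy , binary entropy , von neumann entropy from the eigenvalues of a density operator , a partial trace , and the conditional entropy built from them ) . the function names are conventions of this sketch , not part of the protocol analysis .

```python
import numpy as np

def shannon_entropy(p):
    """H(p) with logarithms base two; 0*log(0) is treated as 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def binary_entropy(x):
    """h(x) = -x log2 x - (1-x) log2 (1-x)."""
    return shannon_entropy([x, 1.0 - x])

def von_neumann_entropy(rho):
    """S(rho) = -sum_i lam_i log2 lam_i over the eigenvalues of rho."""
    lam = np.linalg.eigvalsh(rho)
    return shannon_entropy(lam[lam > 1e-12])

def partial_trace(rho, dims, keep):
    """Trace out every subsystem whose index is not listed in `keep`."""
    dims = list(dims)
    rho = np.asarray(rho).reshape(dims + dims)
    for i in reversed(range(len(dims))):
        if i in keep:
            continue
        rho = np.trace(rho, axis1=i, axis2=i + len(dims))
        dims.pop(i)
    d = int(np.prod(dims)) if dims else 1
    return rho.reshape(d, d)

def conditional_entropy(rho, dims, cond_on):
    """S(X|Y) = S(rho) - S(rho_Y), with rho_Y the reduced state on the
    subsystems listed in cond_on."""
    return von_neumann_entropy(rho) - von_neumann_entropy(partial_trace(rho, dims, cond_on))

# sanity check: for a maximally entangled two-qubit state S(A|B) = -1
bell = np.zeros((4, 4))
bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5
print(conditional_entropy(bell, [2, 2], cond_on=[1]))   # -> -1.0
```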
we denote by to be the conditional von neumman entropy defined as .if the context is clear , we will forgo writing the `` '' subscript .when we talk about qubits , we will often refer to the , , and bases , the states of which we denote : , , and , where : will now describe the parameter estimation method and what may be gleaned from it .this extends the preliminary work we did in our conference paper where we only considered three states from two bases .let be a unitary operator acting on the finite dimensional hilbert space where and ( will model eve s attack operation ) ; models the qubit `` transit '' space , while will model the adversary s private quantum memory .without loss of generality , we may describe s action on states of the form , where is some arbitrary , normalized state in , as follows : where the are states in which are not necessarily normalized nor orthogonal .unitarity of imposes certain obvious restrictions on these states which will become important momentarily .let be the computational basis .let ] thus implying ] ( while also utilizing the work performed in section [ section : sym ] ) . doing so yields a result shown in figure [ fig : bb84-keyrate1 ] - in particular , the key rate drops to zero when . when , the key rate drops to zero at as also shown in figure [ fig : bb84-keyrate1 ] .this latter is exactly the noise tolerance of the full six - state bb84 protocol as shown in ( without preprocessing ) .note that the value of and do not appear in these expression ( so long as they are both in the interval ) .since we are also assuming perfect parameter estimation , they also do not affect that process .-bb84 protocol under a symmetric attack .solid line is the key - rate of -bb84 and can tolerate up to noise ( same as the four - state bb84 protocol ) while the dashed line shows the key - rate of -bb84 , in which case the protocol withstands up to error ( same as the full six - state bb84 without preprocessing ) . ]this result confirms , independently , that found in where it was also shown , using mismatched measurement bases , though using a different analytical technique , that three states were sufficient to achieve the noise tolerance of the usual four state bb84 and four states from three bases was enough to attain the noise tolerance of the six - state protocol . in the next sections we show how our method easily extends to other protocols which do not necessarily follow the bb84-style encoding and which may utilize a two - way quantum communication channel .we will use the more general , non - symmetric , expression ( equation [ eq : bb84-general ] ) in a later subsection .we will also comment on general attacks at the end of this section . in this section ,we apply our method to the analysis of the extended b92 protocol introduced in .this protocol , like the standard b92 , uses two non - orthogonal states to encode the raw key bits , however it extends the protocol by allowing to send other qubit states ( beyond the two only allowed by standard b92 ) for parameter estimation purposes .we will use our technique to derive more optimistic noise tolerances for this protocol than prior work in .this also serves to show the generality of our technique .we begin by introducing the protocol using our terminology .denote by -b92 the protocol shown in protocol [ alg : b92 ] .* input * : let be the set of possible states that may send to under the restrictions that . 
*quantum communication stage * : the quantum communication stage of the protocol repeats the following process : 1 . will send a qubit state , choosing randomly according to some publicly known distribution ( we assume and are chosen with equal probability and that all states in have non - zero probability of being chosen ) .2 . chooses a random basis and measures the qubit in this basis .if chose to send or , she sets her raw key bit to be or respectively .4 . if observes a or , he sets his key bit to be or respectively . 5 . and announce , over the authenticated classical channel , whether this is a _successful _ iteration : namely , whether choose or and whether observed or ( of course they do not disclose their actual preparations or observations ) .all other iterations , along with a suitable , randomly chosen , subset of successful iterations , are used for parameter estimation as described in section [ section : pe ] . to compute the key rate of this extended b92 protocol , we must first , as before , describe the joint quantum system held by , , and conditioning on `` successful '' iterations .recall that ( thus ) .we also define , where is the conjugate transpose of .it is not difficult to show that this quantum system is : \\ & + \frac{1}{n ' } \left [ \frac{1}{2}{\ket{01}\bra{01}}_{ab } \otimes { \ket{e_1}\bra{e_1 } } + \frac{1}{2}{\ket{10}\bra{10}}_{ab } \otimes { \ket{f_1}\bra{f_1}}\right],\end{aligned}\ ] ] where we have adopted the same notation for s attack as in equations [ eq : u - states ] and [ eq :u - states - f ] , and is a normalization term to be discussed shortly .tracing out s system and using the fact that , from equation [ eq : u - states - f2 ] , we find , we have : \\ & + \frac{1}{n'}\left [ \frac{1}{2}{\ket{0}\bra{0}}_a \otimes { \ket{e_1}\bra{e_1 } } + \frac{1}{2}{\ket{1}\bra{1}}_a\otimes{\ket{f_1}\bra{f_1}}\right].\end{aligned}\ ] ] in order to apply theorem [ thm : entropy ] , we write the above state in the following form : where we defined : from this , theorem [ thm : entropy ] may be directly applied to compute the key rate of this extended b92 protocol , providing us with a key - rate expression for any asymmetric channel ( computing is , as before , trivial ) . writing out the expression ,however , is not enlightening as it would entail simply copying equation [ eq : thm : entropy ] from theorem [ thm : entropy ] .instead , we will illustrate by writing out the case for a symmetric attack ( again , an enforceable assumption ) allowing us to take advantage of certain simplifications in the expressions .furthermore , it will allow us to compare with prior work to demonstrate the advantage to using mismatched measurement bases for this protocol .we stress , however , that symmetry is not required at this point. let denote the error rate of the channel as before .assuming a symmetric attack ( and thus ; see section [ section : sym ] ) , the normalization term simplifies to : \notag\\ & = \bar{\alpha}^2(1-q ) + \alpha^2q + q\notag\\ & = 1 - \alpha^2(1 - 2q).\label{eq : b92-n}\end{aligned}\ ] ] where , to derive the last equality , we used the fact that . let ( the two are equal when faced with a symmetric attack ) and . by theorem [ thm :entropy ] , we have : where : may be evaluated directly : ( again we use the notation to mean . 
)furthermore , we have : bounds on and are found as described in section [ section : sym ] .all that remains is to compute .but this is simply : to evaluate the key - rate assuming this symmetric attack , we must optimize over all ] .case 1 : which , of course , implies that . in this case ,equation [ eq : lemma : positive - ent ] holds if and only if .but : the case when is similar ( in that case , we have ) .note that , we require so that remains in the domain of . of course , since we are working with quantum states , the cauchy - schwarz inequality will always guarantee this condition for the equations we are interested in .the above lemma implies that each term from theorem [ thm : entropy ] ( equation [ eq : thm : cond - entropy ] ) is non - negative and , since this is a lower - bound , they may be removed if needed .thus , applying theorem [ thm : entropy ] and lemma [ lemma : positive ] , we compute a lower - bound on the conditional entropy of the state , given by equation [ eq : sqkd - state ] , as : ,\ ] ] where , , and : in the case the attack is symmetric , this expression becomes : ,\ ] ] where : we will soon see that , even though we discarded several terms which might have increased the conditional entropy ( thus increasing the key rate of the protocol ) , we will still produce a very optimistic lower - bound .we leave as potential future study , the problem of computing the entropy of those ignored systems .our goal now is to determine a bound on .in particular we must lower - bound . to do so, we again utilize the parameter estimation process described in section [ section : pe ] .notice that , if reflects , his operation is , essentially , the identity operator .thus , conditioning on s choice to reflect , the two - way quantum channel becomes , in essence , a one - way channel with a qubit leaving s lab , attacking it via the unitary operator , and the qubit returning to .let us first consider this operator .we may write its action on basis states as follows ( again s lab is cleared to some state `` '' at the start of the iteration ) : due to the linearity of and , these states are : now , may perform parameter estimation on this operator using the process described in section [ section : pe ] in order to learn statistics on the quantities . to evaluate the key - rate bound , and require a bound on . to do so , they may first determine bounds on using the process described in section [ section : pe ] .from this , a bound on the desired quantity may be found by expanding : bounds on the `` '' quantities which appear in the right - hand side of the above expression may be found using the cauchy - schwarz inequality . as before, we will illustrate assuming a symmetric attack allowing us to boil down the above expressions to something simpler to parameterize . if we assume s attack is symmetric ( an assumption that , again , may be enforced ) , and if , we find : . if , then we additionally find .now , consider the probability that measures if she initially sent and reflects .it is clear that this quantity is identical to equation [ eq : paa ] , replacing `` '' states with `` '' states .call this probability . using the above ,this becomes : now , let be the basis error rate when reflects ( i.e. , the basis error when a qubit travels forwards to , then backwards to , through the two - way channel ) .it is clear that this is a statistic which and may estimate .furthermore , . 
substituting equation [ eq : sqkd : g03 ] into equation [ eq : qa ] and solving for yields : ( note that above we used the fact that . ) at this point , we already have enough to find reasonable estimates of .indeed , the cauchy - schwarz inequality enforces the condition that ( though if is used for parameter estimation , we get ) . also , using equation [ eq : prbounds ] and the cauchy - schwarz inequality we have : .the second scenario involves the two channels being correlated in that . for each channel scenario, we will consider two cases : and . for the independent channel case ,when we see our key rate remains positive for all .when it remains positive for all .for the correlated channels , when the key rate remains positive for all .when it remains positive for all .these are improvements from the original bound from ( which did not use mismatched measurement outcomes ) .see table [ table : sqkd ] for a summary of this data .we have thus shown not only that our methods may be extended to two - way quantum channels , but also that this semi - quantum protocol can actually withstand a much higher level of noise than previously thought .improvements in the parameter estimation process for two - way channels may lead to even better bounds ( recall , we were forced to drop certain terms which could only have improved the result ) .r|cc & independent & correlated + old bound from & & + + new bound using & & + new bound using & & in the previous sections , we considered only collective attacks .however , all protocols considered in this paper may be made permutation invariant in the usual way .thus , the results of apply : namely , proving security against collective attacks is sufficient to show security against arbitrary , general attacks . in the asymptotic scenario , which we considered here , the key - rate expressions will remain the same .we have shown , using information theoretic arguments , that the use of mismatched measurement results can be used to improve the key rate bounds and noise tolerances for various protocols , both one - way and two - way .we applied this technique to a general , limited - resource bb84 , an extended b92 , an optimized qkd protocol , and a semi - quantum protocol . for the bb84 protocol , we confirmed through alternative means , the results of : namely that three states is sufficient to obtain full bb84 security , while four states in three bases is enough to obtain the full six - state bb84 level of security .our new key - rate bounds for the extended b92 protocol show it has a higher tolerance to noise than previously thought .similarly we derived improved key - rate bounds for the semi - quantum protocol of boyer et al. in all cases , we did not require any symmetry assumptions ( we evaluated our key - rate bounds using a symmetric attack for illustrative and comparative purposes only ) .one might attempt to use this technique on other two - way protocols beyond the class of semi - quantum ones .we did consider this - however the primary advantage of many two - way , fully quantum protocols ( i.e. , not semi - quantum that we considered here ) is that there are no mismatched measurement basis choices .thus a modification to the protocols would be required - an improved key rate may be found in this case , but one must then ask what the resulting advantage would be to the modified protocol . 
we leave this as an open question . we also leave as future study improving our parameter estimation method for two - way protocols . when deriving the key - rate bound for the sqkd protocol , we were forced to drop several terms which could only have increased the noise tolerance of the protocol . finally , and very importantly , it remains to study the performance of this method in finite key settings and when imperfect parameter estimation occurs . all results in this paper assumed and could perform enough iterations so as to derive arbitrarily precise estimates of various statistics . in a practical setting , there will always be some error . taking this into account , and deriving key - rate expressions in the finite key setting , is important future work . charles h. bennett and gilles brassard . quantum cryptography : public key distribution and coin tossing . in _ proceedings of ieee international conference on computers , systems and signal processing _ , volume 175 . new york , 1984 . walter o. krawec . asymptotic analysis of a three state quantum cryptographic protocol . in _ ieee international symposium on information theory , isit 2016 , barcelona , july 10 - 15 , 2016 _ , pages 2489 - 2493 , 2016 .
in this paper , we derive key - rate expressions for several different quantum key distribution protocols . our key - rate equations utilize multiple channel statistics , including those gathered from mismatched measurement bases - i.e. , when alice and bob choose incompatible bases . in particular , we will consider a limited - resource form of bb84 , an extended b92 , and a two - way semi - quantum protocol . for the first protocol , we will show it has the same tolerance to noise as the full bb84 ( a result already known , however we provide an alternative , more information theoretic , proof ) . for the last two protocols , we demonstrate that their tolerance to noise is higher than previously thought . along the way , we will also consider an optimal qkd protocol for various quantum channels . finally , all the key - rate expressions which we derive in this paper are applicable to any arbitrary , not necessarily symmetric , quantum channel .
cell adhesion and the adhesion of vesicles to the membranes of cells and cellular organelles is mediated by the binding of receptor and ligand proteins that are anchored in the adhering membranes .central questions are how the binding affinity of the anchored proteins can be measured and quantified , how this affinity is affected by characteristic properties of the proteins and membranes , and how it is related to the affinity of soluble variants of the receptor and ligand proteins without membrane anchors . for soluble receptors and ligands that are free to diffuse in three dimensions ( 3d ) , the binding affinity can be quantified by the binding equilibrium constant \text{3d}}{[\text{r}]_\text{3d}[\text{l}]_\text{3d } } \label{k3d}\ ] ] where \text{3d} ] and \text{3d} ] , \text{2d} ] are the _ area _ concentrations of bound receptor - ligand complexes , unbound receptors , and unbound ligands .the binding of membrane - anchored receptors and ligands in cell adhesion zones has been experimentally investigated with fluorescence methods and with several mechanical methods involving hydrodynamic flow , centrifugation , or micropipette setups that use red blood cells as force sensors .however , the values obtained from different methods can differ by several orders of magnitude , which indicates a ` global ' dependence of on the membrane adhesion system , besides the dependence on local receptor and ligand interactions .in this article , we present a general theory that relates the binding constant of membrane - anchored receptor and ligand molecules to the binding constant of soluble variants of these molecules .this theory describes how depends both on overall characteristics of the membranes and on molecular properties of the receptors and ligands .quantifying is complicated by the fact that the binding of membrane - anchored receptors and ligands depends on the local separation of the membranes , which varies along the membranes , and in time because of thermally excited membrane shape fluctuations .experiments that probe imply averages in space and time over membrane adhesion regions and measurement durations . 
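a minimal sketch of the two definitions , making the different dimensions explicit ( copy numbers divided by a volume give a binding constant with units of volume , copy numbers divided by a membrane area give a binding constant with units of area ; the function names are illustrative ) :

```python
def binding_constant_3d(n_rl, n_r, n_l, volume):
    """K3d = [RL]3d / ([R]3d [L]3d) from the numbers of bound complexes and of
    unbound receptors and ligands in a volume; the result has units of volume."""
    return (n_rl / volume) / ((n_r / volume) * (n_l / volume))

def binding_constant_2d(n_rl, n_r, n_l, area):
    """K2d = [RL]2d / ([R]2d [L]2d) from the corresponding numbers in a membrane
    adhesion zone of a given area; the result has units of area."""
    return (n_rl / area) / ((n_r / area) * (n_l / area))
```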
in our theory , we first determine the binding constant for a given local separation , and then average over the distribution of local membrane separations that describes the spatial and temporal variations of .the two key overall membrane characteristics that emerge from this theoretical approach are the average separation and relative roughness of the two apposing membranes , which are the mean and standard deviation of the distribution .our theory quantifies the dependence of on the average separation and relative membrane roughness , and helps to understand why different experimental methods can lead to values of that differ by orders of magnitude ( see discussion and conclusions ) .our theory is validated in this article by a detailed comparison to data from monte carlo ( mc ) simulations .such a comparison is essential to test simplifying assumptions and heuristic elements in relating to the binding constant of soluble variants of receptors and ligands without membrane anchors .our theoretical results for the ratio of the binding constants agree with detailed results from mc simulations without any data fitting , which indicates that our theory captures the essential features of the ` dimensionality reduction ' due to membrane anchoring .the mc simulations are based on a novel model of biomembrane adhesion in which the membranes are represented as discretized elastic surfaces , and the receptors and ligands as anchored molecules that diffuse continuously along the membranes and rotate around their anchoring points .we use the mc simulations to determine both the binding constant of these membrane - anchored molecules and the binding constant of soluble variants of the molecules that have the same binding interactions but are free to move in 3d . in previous elastic - membrane models of biomembrane adhesion , determining both and and the molecular characteristics affecting these binding constantshas not been possible because the receptors and ligands are not explicitly represented as anchored molecules . instead, the binding of receptors and ligands has been described implicitly by interactions that depend on the membrane separation . in other previous elastic - membrane models , receptors and ligandsare described by concentration fields rather than individual molecules , or receptor - ligand bonds are treated as constraints on the local membrane separation . in our accompanying article, we compare our theory for the binding equilibrium of membrane - anchored receptor and ligand molecules to detailed data from molecular dynamics simulations of a coarse - grained molecular model of biomembrane adhesion , and extend this theory to the binding kinetics of membrane - anchored molecules .in this section , we introduce our elastic - membrane model of biomembrane adhesion . 
in this model ,the overall configurational energy of rod - like receptors and ligands is the sum of the elastic energies and of the two membranes , the total interaction energy of the receptor and ligand molecules , and the total anchoring energy of these molecules .the conformations of the two apposing membranes can be described in monge representation via their local deviations out of a reference plane .we discretize this reference plane into a quadratic lattice with lattice spacing , which results in a partitioning of the membranes into approximately quadratic patches .the elastic energy and of the membranes then can be written as \label{elastic_energy}\ ] ] with where and are the local deviations of the membranes at lattice site out of the reference plane .the elastic energy ( [ elastic_energy ] ) is the sum of the bending energy with rigidity and the contribution from the membrane tension .the bending energy depends on the total curvature with discretized laplacian the tension contribution depends on the local area increase of the curved membranes with respect to the reference - plane .the whole spectrum of bending deformations is captured in this model if the lattice spacing of the discretized membranes is about 5 nm , which is close to the membrane thickness .the total interaction energy represents the interactions of all receptor - ligand complexes . in our model ,the binding potential of a single receptor and a single ligand depends on the distance between the binding sites located at the tips of the rod - like receptor and ligand molecules , and on the two angles and that describe the relative orientation of the molecules . for our rod - like receptors and ligands ,the angle is the angle between the receptor and the binding vector connecting the two binding sites , and the angle is the angle between the ligand and this vector .we use two angles and for the relative orientation to ensure that the binding sites of the receptor and ligand do not overlap .the total interaction energy of the receptors and ligands in eq .( [ overall_energy ] ) is the sum of the potential energies ( [ binding_potential ] ) of all bound receptor - ligand complexes .the total anchoring energy is the sum of the anchoring energies of all receptors and ligands . in our model ,the anchoring energy of a single receptor or ligand is described by the harmonic potential with anchoring strength .the anchoring angle is the angle between the receptors or ligands and the local membrane normal ( see appendix a for further details ) .in this section , we derive our general theory for the binding constants and of rigid , rod - like receptors and ligands .the starting point of our theory is the binding free energy and of membrane - anchored and soluble receptor and ligand molecules .we first summarize a standard theory for the binding free energy of soluble molecules , and then extend this theory to the binding free energy of membrane - anchored molecules . from these binding free energies ,we obtain general relations between the binding constants and . in section iv, we compare these theoretical relations to detailed results from mc simulations , and generalize our theory to semi - flexible receptor and ligand molecules .we first consider the binding free energy of a single soluble receptor and a single soluble ligand in a volume . 
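before turning to the binding free energies , a minimal numerical sketch of the discretized elastic membrane energy described above ; the particular finite - difference scheme , the periodic boundaries and the parameter values in the example call are assumptions of the sketch , not the exact choices made in the simulations .

```python
import numpy as np

def elastic_energy(z, kappa, sigma, a=5.0):
    """Discretized bending + tension energy of a membrane height field z[i, j]
    (one possible discretization of eq. (elastic_energy)): the bending term uses
    the five-point lattice laplacian, the tension term the lowest-order local
    area increase, both with periodic boundaries.  a is the lattice spacing
    (about 5 nm, comparable to the membrane thickness); kappa and sigma are the
    bending rigidity and tension, here in units of kBT and nm."""
    lap = (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
           np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4.0 * z) / a**2
    bending = 0.5 * kappa * a**2 * np.sum(lap**2)
    gx = (np.roll(z, -1, 0) - z) / a
    gy = (np.roll(z, -1, 1) - z) / a
    tension = 0.5 * sigma * a**2 * np.sum(gx**2 + gy**2)
    return bending + tension

# e.g. a flat membrane costs no elastic energy (kappa, sigma illustrative only):
# print(elastic_energy(np.zeros((32, 32)), kappa=20.0, sigma=0.1))   # -> 0.0
```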
a standard approach in which this free energy is expanded around its minimum leads to the decomposition - k_bt \ln\left[\frac{\omega_b}{4 \pi}\right ] \label{dg3d}\end{aligned}\ ] ] into the minimum binding energy and the translational and rotational free - energy contributions and . here , and are the translational and rotational phase - space volume of the bound ligand relative to the receptor .the translational phase - space volume of the bound ligand is where , , and are the standard deviations of the distributions for the coordinates , , and of the binding vector that connects the two binding sites .the -direction here is taken to be parallel to the direction of the receptor - ligand complex . for a preferred collinear binding of the receptor and ligand as in the binding potential of eq .( [ binding_potential ] ) , the rotational phase space volume of the bound ligand is where is the standard deviation of the binding - angle distribution .the unbound ligand translates and rotates freely with translational phase - space volume and rotational phase - space volume . in analogy to eq .( [ dg3d ] ) , the binding free energy of a receptor and a ligand molecule that are anchored to two apposing planar and parallel membranes of area and separation can be decomposed as - k_b t\ln\left[\frac{\omega_b\omega_\text{rl}}{\omega_\text{r}\omega_\text{l}}\right ] \label{dg2d}\end{aligned}\ ] ] where is the translational phase space area of the bound ligand relative to the receptor in the two directions and parallel to the membranes , and , , and are the rotational phase space volumes of the unbound receptor r , unbound ligand l , and bound receptor - ligand complex rl relative to the membranes .we have assumed here that the binding angle variations are small compared to the overall rotations of the bound rl complex , i.e. we have assumed that the anchoring potential is ` soft ' compared to the binding potential .the rational phase space volume for the binding angle and the minimal binding energy then are not affected by the anchoring , and the overall rotational phase space volume of the bound complex can be approximated as the product of the rotational phase space volume for the binding angle and the phase space volume for the rotations of the whole complex relative to the membrane .for the harmonic anchoring potential ( [ vanchor ] ) , the rotational phase space volumes of the unbound molecules are for simplicity , we consider here receptors and ligands with identical anchoring strength .the remaining task now is to determine the phase space volume for the rotations of the bound rl complex relative to the membrane .we find that these rotations can be described by the effective configurational energy ( see appendix b ) the first term of this effective energy is the sum of the anchoring energies ( [ vanchor ] ) for the receptor and ligand in the complex .the two anchoring angles for the bound receptor and ligand here are taken to be approximately equal , which holds for binding angles and binding angle variations that are small compared to the anchoring angle variations , or in other words , for binding potentials that are ` hard ' compared to the anchoring potentials .the second term of the effective energy ( [ hef ] ) is a harmonic approximation for variations in the length of the receptor - ligand complex , i.e. in the distance between the two anchoring points of the complex . 
for rod - like receptor and ligand molecules , variations in the length of the complex result from variations of the binding angle and binding - site distance .the preferred length and effective spring constant of the rl complex in the effective energy ( [ hef ] ) are then approximately ( see appendix b ) where and are the lengths of the rod - like receptor and ligand , is the average of the distance between the binding sites in the direction of the complex , is the standard deviation of this distance , and is the standard deviation of the binding - angle distribution for preferred collinear binding as in our model . for a given separation of the membranes , the length and anchoring angle of the receptor - ligand complex are related via the effective configurational energy ( [ hef ] ) then only depends on the single variable . with this effective configurational energy , the rotational phase space volume of the bound rlcomplex can be calculated as the integration in eq .( [ omegarl ] ) can be easily evaluated numerically for specific values of the spring constants and , of the preferred length of the complex , and of the membrane separation . from the binding free energies and given in eqs .( [ dg3d ] ) and ( [ dg2d ] ) and the relations ] between the binding free energies and binding constants , we obtain the general result which relates the binding constant of receptors and ligands anchored to parallel and planar membranes of separation to the binding constant of soluble variants of the receptors and ligands without membrane anchors . in deriving eq .( [ k2dl ] ) , we have assumed that the binding interface is not affected by the membrane anchoring , which holds for anchoring potentials that are much softer than the binding potential .the minimum binding energy and the standard deviations and of the binding vector coordinates in the two directions perpendicular to the complex are then the same for the soluble and the membrane - anchored receptor - ligand complex . for simplicity , we take the two directions and perpendicular to the complex to be identical with the two directions along the membranes .the ratio of the translational phase space volume of the soluble rl complex and the translational phase space area of the bound complex then is approximately .can be taken into account via and .however , since the values of the standard deviations , , and in the directions and perpendicular to the complex and the direction parallel to the complex are typically rather similar , we neglect this effect here . ] in membrane - membrane adhesion zones , the local separation is not fixed but varies because of thermally excited shape fluctuations of the membranes .our mc simulations show that the distribution of this local separation is well approximated by the gaussian distribution /(\sqrt{2\pi } \xi_\perp ) \label{pl}\ ] ] where is the average separation of the membranes or membrane segments , and is the relative roughness of the membranes .the relative roughness is the standard deviation of the local membrane separation , i.e. the width of the distribution . the same gaussian behavior of is also found in molecular dynamics simulations ( see our accompanying manuscript ) .the gaussian behavior of holds for situations in which the adhesion of two apposing membrane segments is mediated by a single type of receptors and ligands as in our simulations. our mc simulations also reveal that the equilibrium constant for fluctuating membranes can be obtained in two rather different ways . 
on the one hand, we can determine directly from its definition in eq .( [ k2ddef ] ) by measuring the area concentrations ] , and ] , \text{2d} ] in this equation are obtained from thermodynamic averages of the numbers of receptor - ligand complexes , of unbound receptors , and of unbound ligands for the membrane area of our simulations with periodic boundary conditions .we define a receptor and ligand to be bound if the binding distance and the two angles and in eq .( [ binding_potential ] ) are smaller than the cutoff values and , respectively .these cutoff values include 99% of the area of the gaussian functions and in eq .( [ binding_potential ] ) for the parameter values and used in our simulations .we only allow the binding of a single ligand to a single receptor . in our simulations , the numbers and of receptors and ligands varies between and .the binding constant of soluble variants of the receptors and ligands is determined from eq .( [ k3d ] ) .these soluble receptor and ligand molecules exhibit the same binding potential ( [ binding_potential ] ) as the membrane - anchored molecules , but translate and rotate freely in a box of volume with periodic boundary conditions .for the parameters of the binding potential given above , we obtain the value . for simplicity , we consider two membranes with identical rigidity and tension in our simulations with flexible membranes .we use the value in all our simulations , which is a typical value for lipid membranes .our mc simulations with flexible membranes involve three types of mc moves : ( i ) the lateral diffusion of a receptor or ligand along the membranes is taken into account by moves in which the coordinates of the anchoring points in the reference plane are continuously and randomly shifted to new values .the local deviation of this anchoring point in the direction perpendicular to the reference plane is determined by linear interpolation of the local deviations of the discretized membranes ( see appendix a for further details ) .( ii ) the rotational diffusion of the rod - like receptors and ligands is taken into account by random continuous rotational moves around the anchor points .( iii ) shape fluctuations of the membranes can be taken into account by moves in which the local deviations and are randomly shifted to new values .our mc simulations with parallel and planar membranes only involve the mc moves ( i ) and ( ii ) .we first consider results from our mc simulations with rigid , rod - like receptors and ligands anchored to parallel and planar membranes . in fig .[ figure_mc - planar ] , mc data for the function are compared to our theory for various values of the anchoring strength and length of the receptors and ligands . the full lines in this figure result from eq .( [ k2dl ] ) of our theory and do not involve any fit parameters .the dashed lines in the figure are interpolations of the mc data points . for the binding potential of the receptors and ligands used in our simulations ,the average distance between the two binding sites in the direction of the receptor - ligand complex is , the standard deviation of this distance is , the standard deviation of the binding angle is , and the binding constant of soluble variants of the receptors and ligands is ( see above ) . with these values for , , , and ,the function can be calculated from the eqs .( [ omegar ] ) , ( [ l0 ] ) , ( [ krl ] ) , ( [ omegarl ] ) , and ( [ k2dl ] ) of our theory for the various anchoring strengths and molecular lengths of fig . 
[ figure_mc - planar ] .the function exhibits a maximum value at a preferred local separation of the receptors and ligands , and is asymmetric with respect to .this asymmetry reflects that the receptor - ligand complexes can tilt at local separations smaller than , but need to stretch at local separations larger than . fig .[ figure_mc - planar ] illustrates that strongly depends both on the length and anchoring strength of the receptors and ligands .the decrease of for increasing length results from a decrease of the rotational phase space volume of the receptor - ligand complex . with increasing length of the receptors and ligands ,the rl complexes become effectively stiffer because in eq .( [ omegarl ] ) increases from for to and for and , respectively .the effective stiffness determines the variations of the rescaled length of the complexes , and an increase of this stiffness reduces the rotational phase space volume of the complexes for a fixed local separation of the membranes .changes in the anchoring strength of the receptors and ligands strongly affect the rotational free energy change during binding .with decreasing , the effective width of the function increases because the tilting of the complexes at small separations is facilitated ( see eq .( [ xirl ] ) ) .the decrease of the maximum value of the function with decreasing reflects that a more flexible anchoring of receptors and ligands for smaller values of results in a larger loss of rotational entropy upon binding and , thus , a larger rotational free energy change . in our mc simulations with flexible membranes ,the two membranes exhibit a relative roughness that results from thermally excited membrane shape fluctuations , and are ` free to choose ' an optimal average separation at which the overall free energy is minimal .in figs .[ figure_mc - fluc - ka8 ] and [ figure_mc - fluc - l4 ] , mc data from these simulations are compared to our theory .the full lines in these figures are calculated from averaging our theoretical results for over the local membrane separation according to eq .( [ k2d ] ) , and do not involve any fit parameters . in this calculation , we approximate the distribution of the local membrane separation , which reflects the membrane shape fluctuations , by the gaussian distribution ( [ pl ] ) , and choose the average separation of this distribution such that the binding constant of eq .( [ k2d ] ) is maximal , because maxima of correspond to minima of the overall binding free energy of the adhering membranes .the width of the distribution is the relative membrane roughness .the dashed lines in the figs .[ figure_mc - fluc - ka8 ] and [ figure_mc - fluc - l4 ] are calculated with the dashed interpolation functions for from fig .[ figure_mc - planar ] . 
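the averaging and maximization just described are simple to reproduce numerically ; the sketch below takes any function k2d_of_l ( the theoretical curve or an interpolation of the mc data for planar membranes ) , averages it over the gaussian distribution of the local separation , and locates the preferred average separation at which the averaged binding constant is maximal . the integration cut - offs and the search interval are assumptions of the sketch .

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def k2d_rough(k2d_of_l, lbar, xi_perp):
    """Binding constant of fluctuating membranes, eq. (k2d): average of K2d(l)
    over the gaussian distribution of the local separation with mean lbar and
    relative roughness xi_perp."""
    def integrand(l):
        p = np.exp(-(l - lbar)**2 / (2.0 * xi_perp**2)) / (np.sqrt(2.0 * np.pi) * xi_perp)
        return k2d_of_l(l) * p
    lo = max(0.0, lbar - 6.0 * xi_perp)
    val, _ = quad(integrand, lo, lbar + 6.0 * xi_perp)
    return val

def preferred_separation(k2d_of_l, xi_perp, l_min, l_max):
    """Preferred average separation: the lbar that maximizes the averaged K2d,
    i.e. minimizes the overall binding free energy of the adhering membranes."""
    res = minimize_scalar(lambda lb: -k2d_rough(k2d_of_l, lb, xi_perp),
                          bounds=(l_min, l_max), method='bounded')
    return res.x, -res.fun

# e.g. with a hypothetical peaked K2d(l) in nm^2 as a stand-in:
# k2d_of_l = lambda l: 100.0 * np.exp(-0.5 * ((l - 10.0) / 1.0)**2)
# print(preferred_separation(k2d_of_l, xi_perp=1.5, l_min=5.0, l_max=15.0))
```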
the figs .[ figure_mc - fluc - ka8](a ) and [ figure_mc - fluc - l4](a ) illustrate that the binding constant decreases with increasing relative roughness of the membranes .the full theory lines in these figures do not involve any data fitting and agree overall well with the mc data .slight deviations between the mc data and theory appear to result predominantly from a slight overestimation of the function in our theory ( see fig .[ figure_mc - planar ] ) .the average over local separations of eq .( [ k2d ] ) with the gaussian approximation ( [ pl ] ) does not seem to contribute significantly to these slight deviations , because the dashed lines in figs .[ figure_mc - fluc - ka8](a ) and [ figure_mc - fluc - l4](a ) tend to agree with the mc data within statistical errors .these dashed lines are calculated based on the dashed interpolations of the mc data for in fig .[ figure_mc - planar ] .for roughnesses that are much larger than the effective width of the functions shown in fig .[ figure_mc - planar ] , the binding constant is inversely proportional to at the optimal average separation for binding ( see eq .( [ k2dlim ] ) ) . in the scaling plot of fig .[ figure_mc - fluc - ka8](b ) , therefore tends to constant , limiting values for large roughnesses . based on eq .( [ xirl ] ) , the effective width of the function can be estimated as , , and for the receptor and ligand lengths , , and of fig .[ figure_mc - fluc - ka8](b ) . because of the smaller value of , the blue curve in fig .[ figure_mc - fluc - ka8](b ) for the receptor and ligand length approaches its limiting value faster than the other two curves .[ figure_mc - fluc - l4](b ) illustrates that the preferred average separation of the two adhering membranes decreases with the relative roughness of the membranes .the lines in this figure result from maximizing in eq .( [ k2d ] ) with respect to the average separation of the gaussian distribution for the functions shown in fig . [ figure_mc - planar](b ) .the full lines are based on our theoretical calculations of and do not involve any data fitting .the dashed lines are calculated based on the dashed interpolations of the mc data for in fig .[ figure_mc - planar](b ) . for small and intermediate roughnesses , the lines in fig .[ figure_mc - fluc - l4](b ) agree well with the data points from our mc simulations in which the membranes can ` freely choose ' a preferred average separation . for large roughnesses ,the mc data deviate from the theory lines because of the fluctuation - induced repulsion of the impenetrable membranes , which is not taken into account in our theory . in the roughness rangein which the fluctuation - induced repulsion of the membranes is negligible , the preferred average separation decreases because of the asymmetry of the function . at zero roughness ,the preferred average separation is identical to the local separation at which is maximal . for larger roughnesses , the average of over the local separations in eq .( [ k2d ] ) is maximal at average separations smaller than because is asymmetric , with a pronounced ` left arm ' that reflects tilting of the receptor - ligand complexes .the preferred average separation decreases for decreasing anchoring strength because of smaller tilt energies . 
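the function k2d ( l ) itself is controlled by the rotational phase - space volume of eq . ( [ omegarl ] ) , which , as noted above , is easy to evaluate numerically . a minimal quadrature sketch , assuming the geometric relation l_rl = l / cos(theta) between local separation , tilt angle and complex length , equal anchoring angles of receptor and ligand in the bound complex , and treating the preferred length , the effective spring constant and the anchoring strength as given inputs ( eqs . ( [ l0 ] ) and ( [ krl ] ) ) :

```python
import numpy as np
from scipy.integrate import quad

def omega_rl(l, k_a, k_rl, l0, kBT=1.0):
    """Rotational phase-space volume of the bound receptor-ligand complex at
    local membrane separation l, from the Boltzmann weight of the effective
    configurational energy H_ef = k_a*theta^2 + (k_rl/2)*(L_rl - l0)^2."""
    def integrand(theta):
        l_rl = l / np.cos(theta)          # assumed length of the tilted complex
        h_ef = k_a * theta**2 + 0.5 * k_rl * (l_rl - l0)**2
        return np.exp(-h_ef / kBT) * np.sin(theta)
    val, _ = quad(integrand, 0.0, 0.5 * np.pi - 1e-9)
    return 2.0 * np.pi * val

# the asymmetry of K2d(l) discussed above shows up directly in omega_rl:
# tilting (l < l0) costs only the soft anchoring energy, while stretching
# (l > l0) is penalized by the stiffer effective spring of the complex, e.g.
# print(omega_rl(l=9.0,  k_a=4.0, k_rl=5.0, l0=10.0),
#       omega_rl(l=11.0, k_a=4.0, k_rl=5.0, l0=10.0))   # illustrative values
```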
for roughnesses that are large compared to the width of the functions , the preferred average separation in our theory can be estimated from eq .( [ barl0 ] ) , which leads to , , and for the anchoring strengths , , and of fig .[ figure_mc - fluc - l4](b ) and the preferred length of the receptor - ligand complex with the molecular lengths . in this section , we extend our theory to semi - flexible receptors and ligands and compare this extended theory to mc data .each semi - flexible receptor and ligand in our mc simulations consist of two rod - like segments , an anchoring segment and an interacting segment , that are connected by a flexible joint with bending energy and stiffness ( see also fig . [ figure_mcsnapshot - flexible ] ) .the overall configurational energy ( [ overall_energy ] ) then contains the total bending energy of all receptors and ligands as an additional term . as additional type of mc move ,our simulations with semi - flexible receptor and ligand molecules involve continuous rotational moves around the flexible joints connecting the two rod - like segments of the molecules .the anchoring segment of a semi - flexible receptor or ligand is attached to the membrane via the same anchoring potential ( [ vanchor ] ) as the rod - like receptors and ligands .the interacting segments of a semi - flexible receptor and ligand interact via the same binding potential ( [ binding_potential ] ) .since the binding constant of soluble receptors and ligands only depends on the binding potential , our semi - flexible receptors and ligands have the same value of as our rod - like receptors and ligands , irrespective of their stiffness .in contrast , the maximum value of the binding constant of membrane - anchored semi - flexible receptors and ligands decreases with decreasing stiffness ( see fig . [ figure_mc - flex ] ) . the mc data for in this figure result from simulations with parallel and planar membranes ( see fig .[ figure_mcsnapshot - flexible ] ) . in these simulations ,both rod - like segments of a receptor or ligand have the length , and the anchoring segment is anchored to the membrane with strength .we consider semi - flexible receptors and ligands with the three different stiffness values , , and .an infinite stiffness corresponds to rod - like receptors and ligands with length . the blue data in fig .[ figure_mcsnapshot - flexible ] for infinite therefore correspond to the yellow data of fig .[ figure_mc - planar ] for and .we find that the function for the semi - flexible receptor and ligand molecules can be described for large stiffness by a reduced effective anchoring strength in our theory for rod - like molecules .this effective anchoring strength can be calculated from the standard deviation of the angle of the interacting segment of the semi - flexible molecules with respect to the membrane normal . for the anchoring strength as in fig .[ figure_mc - flex ] , the standard deviation of the angle is 0.597 for , 0.547 for , 0.519 for , and 0.489 for infinite , which corresponds to rod - like receptors and ligands with .we obtain the same standard deviations for the angle of rod - like molecules with the effective anchoring strengths , , and for , , and , respectively .the lines in fig . 
[ figure_mc - flex ] represent our theoretical results based on eq .( [ k2dl ] ) for rod - like molecules with these effective anchoring strengths and with the values , , and for , , and , respectively , which are obtained from the standard deviations of the end - to - end distance determined in mc simulations of soluble rl complexes .the preferred length of the semi - flexible rl complexes are obtained from a fit to the mc data in fig .[ figure_mc - flex ] .the theoretical results for are in good agreement with the mc data for the stiffness , which is much larger than the anchoring strength . for the smaller stiffnesses and , the theoretical results deviate more strongly from the mc data , which indicates that our extended theory based on effective anchoring strengths is valid for .we have presented here a general theory for the binding equilibrium constant of rather stiff membrane - anchored receptors and ligands .this theory generalizes our previous theoretical results by describing how depends both on the average separation and thermal nanoscale roughness of the apposing membranes , and on the anchoring , length and flexibility of the receptors and ligands .a central element of this theory is the calculation of the rotational phase space volume of the bound receptor - ligand complex , which is based on an effective configurational energy of the complex ( see eqs .( [ hef ] ) to ( [ omegarl ] ) ) . in our previous theory for the preferred average membrane separation for binding ,the rotational phase space volume of the bound complex was determined from the distribution of anchoring angles of the complex observed in simulations . in the theory presented here , the dependence of on the average membrane separation and relative roughness results from averaging over the distribution of local membrane separations with mean and standard deviation . for relative roughnesses that are much larger than the the width of the function , the binding constant is inversely proportional to at average membrane separations equal to the preferred average separation according to eq .( [ k2dlim ] ) . in our previous theory ,this inverse proportionality resulted from the entropy loss of the membranes upon receptor - ligand binding .our theories relate the binding constant of the membrane - anchored receptor and ligand proteins to the binding constant of soluble variants of the proteins without membrane anchors by determining the translational and rotational free energy changes of anchored and soluble proteins upon binding . in a complementary approach of wu et al. , the binding constant of receptors and ligands anchored to essentially planar membranesis determined based on ranges of motion of bound and unbound receptors and ligands in the direction perpendicular to the membranes . in this article, we have corroborated our theory by a comparison to detailed data from mc simulations .our general results for the ratio of the binding constants of membrane - anchored and soluble receptors and ligands agree with the mc results without any data fitting .our mc simulations are based on a novel elastic - membrane model in which the receptors and ligands are described as anchored molecules that diffuse continuously along the membranes and rotate at their anchoring points . 
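returning to the effective anchoring strengths used above for the semi - flexible receptors and ligands , the matching procedure is a one - dimensional root search : compute the angle standard deviation predicted for a rod - like molecule with harmonic anchoring , and find the anchoring strength that reproduces the standard deviation measured for the interacting segment . the sketch below assumes the equilibrium weight exp(-k_a theta^2 / 2 kBT) sin(theta) for the angle relative to the membrane normal and an angle range of [0 , pi] ; both are assumptions of the sketch rather than details taken from the simulations .

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def angle_std(k_a, kBT=1.0):
    """Standard deviation of the anchoring angle of a rod-like molecule with
    harmonic anchoring energy (k_a/2)*theta^2, for the assumed weight
    exp(-k_a*theta^2 / (2*kBT)) * sin(theta) on [0, pi]."""
    w = lambda th: np.exp(-0.5 * k_a * th**2 / kBT) * np.sin(th)
    z,  _ = quad(w, 0.0, np.pi)
    m1, _ = quad(lambda th: th * w(th), 0.0, np.pi)
    m2, _ = quad(lambda th: th**2 * w(th), 0.0, np.pi)
    return np.sqrt(m2 / z - (m1 / z)**2)

def effective_anchoring_strength(sigma_measured, k_lo=0.05, k_hi=200.0):
    """Anchoring strength of an equivalent rod-like molecule that reproduces the
    measured angle standard deviation of the interacting segment."""
    return brentq(lambda k: angle_std(k) - sigma_measured, k_lo, k_hi)

# e.g. effective_anchoring_strength(0.547) returns a softer effective k_a
# (in units of kBT) for one of the angle spreads quoted above; the numerical
# value depends on the assumed angular weight and range.
```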
in our accompanying article , we compare our general theoretical results for to detailed data from molecular dynamics simulations of biomembrane adhesion with both transmembrane and lipid - anchored receptors and ligands , and extend our theory to the binding rate constants and .our theoretical results are rather general and hold for membrane - anchored molecules whose anchoring is ` soft ' compared to their binding and bending , which is realistic for a large variety of biologically important membrane receptors and ligands such as the t - cell receptor and its mhc - peptide ligand or the cell adhesion proteins cd2 , cd48 , and cd58 .the dependence of the binding constant on the average separation and relative roughness of the membranes helps to understand why mechanical methods that probe the binding kinetics of membrane - anchored proteins during initial membrane contacts can lead to values for the binding equilibrium constant that are orders of magnitude smaller than the values obtained from fluorescence measurements in equilibrated adhesion zones . in equilibrated adhesion zones that are dominated by a single species of receptors and ligands ,the average membrane separation is close to the preferred average separation for binding , and the relative membrane roughness is reduced by receptor - ligand bonds . during initial membrane contacts , in contrast ,both the membrane separation and roughness are larger , which can lead to significantly smaller values for according to our theory . in our mc simulations, we have focused on membranes that adhere _ via _ a single species of receptors and ligands .the average membrane separation then is identical to the preferred average separation of these receptors and ligands for binding .however , our elastic - membrane model can be generalized to situations in which membrane adhesion is mediated by different species of receptors or ligands , e.g. by long and short pairs of receptors or ligands as in t - cell adhesion zones , or to situations in which the binding of receptors and ligands is opposed by repulsive membrane - anchored molecules , e.g. by molecules of the cellular glycocalyx .these situations have been previously investigated with elastic - membrane models in which the molecular interactions of receptors and ligands or repulsive molecules are described implicitly by interaction potentials that depend on the local membrane separation . at sufficiently large concentrations ,long and short receptor and ligand molecules segregate into domains in which the adhesion is dominated either by the short or by the long molecules .the domain formation is caused by a membrane - mediated repulsion between long and short receptor - ligand complexes , which arises from membrane bending to compensate the length mismatch . in each domain ,the average separation of the membranes is close to the preferred average separation of the dominating receptors and ligands .within such a domain , the distribution of the local membrane separation has a single peak centered around the preferred average separation of the dominating receptors and ligands . 
averaged over whole adhesion zones with multiple domains , the distribution has two peaks that are centered around the preferred average separations of the long and short pairs of receptors and ligands .similarly , short receptor and ligand molecules and longer repulsive molecules segregate at sufficiently large molecular concentrations .several groups have investigated experimentally how varying the length of membrane - anchored receptors or ligands affects cell adhesion .chan and springer found an increased cell - cell adhesion efficiency in hydrodynamic flow for elongated variants of cd58 , compared to wild - type cd58 .patel et al . observed that cells with long variants of p - selectin bind more efficiently under shear flow to cells with the binding partner psgl-1 , compared to shorter variants of p - selectin .from adhesion frequencies in a micropipette setup , huang et al . obtained higher on - rates for long p - selectin constructs attached to red - blood - cell surfaces , compared to short p - selectin constructs , and identical off - rates for both constructs .these results indicate that initial cell - cell adhesion events probed in hydrodynamic flow or with micropipette setups can be more efficient for elongated receptors or ligands , presumably due to reduced cytoskeletal repulsion . in a different approach , milstein et al . investigated the cd2-mediated adhesion efficiency of t cells to supported membranes that contain either wild type cd48 or elongated variants of cd48 . for elongated variants of cd48 ,milstein et al . observed less efficient cell adhesion after one hour compared to wild type cd48 at identical concentrations .this observation is in qualitative agreement with our findings that the binding constant decreases with increasing length of receptors and ligands ( see fig .3(a ) ) , and increasing flexibility ( see fig .besides increasing the length , the addition of protein domains may lead to a larger flexibility of the elongated variants of cd48 compared to the wildtype .we have focused here on receptors and ligands with preferred collinear binding and preferred perpendicular membrane anchoring , i.e. with a preferred anchoring angle of zero relative to the membrane normal .a preferred non - zero anchoring angle can be simply taken into account by changing the anchoring energy ( [ vanchor ] ) to .for a preferred collinear binding of rod - like receptors and ligands , the preferred binding angle is 0 . for receptors and ligands anchored to parallel and planar membranes as in sections iii.b and iii.c, the anchoring angles of a receptor and ligand in a bound complex then are identical , and identical to the tilt angle of the receptor - ligand complex .the tilt angle here is defined as the angle between the membrane normal and the line connecting the two anchor points of the receptor - ligand complex . for a preferred non - zero binding angle ,the receptor - ligand complex is kinked .the anchoring angles and of a receptor and ligand in a bound complex then depend not only on the tilt angle of the complex , but also on the torsional angle of the complex around the tilt axis , the lengths and of the receptor and ligand , and the preferred binding angle . 
the rotational phase space volume of such a kinked rl complex can be calculated by integrating over the tilt angle and torsional angle of the complex , where is the generalized effective configurational energy of the complex with anchoring angles and .the rod - like receptors and ligands and rod - like segments of semi - flexible receptors and ligands considered here can freely rotate around their axes in the bound and unbound state . for proteins , in contrast , such rotations will be restricted in the bound complex , which leads to an additional loss of rotational entropy upon binding . however , this additional loss of rotational entropy is identical both for the membrane - anchored complex in 2d and the soluble complex in 3d and , thus , does not affect the ratio of the binding constants , provided the binding interface of the receptor - ligand complex is not affected by membrane anchoring , as assumed in section iii.c .in our elastic - membrane model of biomembrane adhesion , the conformations of the two apposing membranes are described by local deviations at lattice sites of a reference plane .the receptors and ligands of this model move continuously along the membranes and , thus , ` in between ' the discretization sites of the membrane . the anchor position and anchoring angle of a receptor or ligand can be obtained by linear interpolation from the local membrane deviations , , , and at the four lattice sites 1 , 2 , 3 , and 4 around the receptor or ligand ( see fig . [ linearinter ] ) .the anchor position of the receptor or ligand within a quadratic patch of the reference plane with corners 1 , 2 , 3 , and 4 can be described by the parameters and with .the local membrane deviation of the anchor out of the reference plane then follows from linear interpolation : to calculate the anchoring angle of a receptor or ligand molecule , we first need to determine the membrane normal at the site of the anchor .the membrane normal can be calculated from the two tangent vectors and of the membrane at site ( see fig . [ linearinter ] ) .the tangent vector is where and are the unit vectors along the and axis .the angle between the vector and the axis can be obtained from with and illustrated in fig .[ planform ] . for simplicity ,all lengths here are normalized by the lattice spacing .similarly , the tangent vector is where the angle between the vector and the axis can be obtained from from and , the membrane normal vector can be calculated as the anchoring angle between the rod - like receptor or ligand and the membrane normal then follows as where is a unit vector pointing in the direction of the receptor or ligand .in this section , we derive the effective configurational energy ( [ hef ] ) of a receptor - ligand complex and eqs .( [ l0 ] ) and ( [ krl ] ) for the preferred length and the effective spring constant of the complex .the length of a receptor - ligand complex is the distance between the two anchor points of the receptor and ligand . for rod - like receptors and ligands , variations in this lengthmainly result from variations in the binding angle and in the binding - site distance in the direction of the complex . for small binding angles , variations of the binding - site distance in the two directions and perpendicular to the complex can be neglected .the length of the complex is then where and are the lengths of the receptor and ligand . 
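as an illustration of the interpolation and anchoring - angle construction described above , the following minimal python sketch computes the interpolated deviation , the local membrane normal and the anchoring angle from the four corner deviations . the bilinear weights and the corner ordering 1=(0,0) , 2=(1,0) , 3=(1,1) , 4=(0,1) are assumptions made here for illustration , and all lengths are measured in units of the lattice spacing .

```python
import numpy as np

def anchor_geometry(z1, z2, z3, z4, x, y, direction):
    """Interpolated membrane deviation, local normal and anchoring angle.

    z1..z4 are the deviations at the patch corners (assumed ordering
    1=(0,0), 2=(1,0), 3=(1,1), 4=(0,1)); x, y in [0, 1] locate the anchor
    inside the patch; `direction` points along the rod-like molecule."""
    # bilinear interpolation of the out-of-plane deviation at the anchor
    z = (1 - x) * (1 - y) * z1 + x * (1 - y) * z2 + x * y * z3 + (1 - x) * y * z4
    # tangent vectors of the interpolated membrane patch
    t_x = np.array([1.0, 0.0, (1 - y) * (z2 - z1) + y * (z3 - z4)])
    t_y = np.array([0.0, 1.0, (1 - x) * (z4 - z1) + x * (z3 - z2)])
    # membrane normal from the cross product of the two tangents
    n = np.cross(t_x, t_y)
    n /= np.linalg.norm(n)
    # anchoring angle between the rod direction and the membrane normal
    u = np.asarray(direction, dtype=float)
    u /= np.linalg.norm(u)
    theta_a = np.arccos(np.clip(np.dot(n, u), -1.0, 1.0))
    return z, n, theta_a

print(anchor_geometry(0.0, 0.1, 0.2, 0.1, 0.5, 0.5, direction=[0.0, 0.0, 1.0]))
```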
in harmonic approximation , the variations in the binding angle and binding - site distance in the direction parallel to the complex can be described by the configurational energy where and are spring constants that are related to the standard deviations and of the distributions for the binding angle and binding - site distance via and .we assume now that is much larger than the thermal energy , which implies small binding angles . from expanding eq .( [ ldef ] ) up to second order in , we obtain the average length and the variance of the length to leading order in .the thermodynamic averages here are calculated as the variations in the end - to - end distance of the receptor - ligand complex then can be described by the second term of effective configurational energy ( [ hef ] ) with the effective spring constant ( see eq .( [ krl ] ) ) .the shape of the function introduced in eq .( [ k2dl ] ) is determined by , i.e. by the rotational phase space volume of the rl complex as a function of the local separation .the mean value and standard deviation of therefore is identical to the mean value and standard deviation of .we first consider here the moments of .the zeroth moment is the integral { \rm d}l \label{moa}\\ & \simeq 2 \pi \int_0^{\infty } \left [ \int_{-\infty}^{\infty } e^{-h_\text{rl}(\theta_a , l_\text{rl}(\theta_a))/k_b t } { \rm d}l \right]\sin\theta_a{\rm d}\theta_a \label{mob}\\ & \simeq \frac{\sqrt{2 } \pi^{3/2 } k_b t}{\sqrt{k_a k_\text{rl } } } f_d\left(\sqrt{\frac{k_b t}{k_a}}\right ) \label{moc}\end{aligned}\ ] ] where is the dawson function .the approximate result ( [ moc ] ) holds for anchoring strengths for which the integrand is practically 0 at the upper limit of the integration over in eq .( [ moa ] ) .this approximate result then is obtained by interchanging the order of the integrations over and , and by extending integration limits to infinity .we assume that the binding interaction is rather ` hard ' compared to the anchoring , which implies . in the same way , the first and second moment of are obtained as \end{aligned}\ ] ] and \end{aligned}\ ] ] for . from these moments ,we obtain the mean and the standard deviation of the functions and for . the mean value is the preferred average separation of the membranes for large relative membrane roughnesses . 
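as a small numerical illustration of how the mean value and standard deviation follow from the zeroth , first and second moments , one may evaluate the moments of a weight function on a grid ; the gaussian weight , the preferred length and the spring constant used below are hypothetical stand - ins and not the actual rotational phase - space volume of the complex .

```python
import numpy as np

def mean_and_std_from_moments(l, w):
    # zeroth, first and second moments of the (non-normalized) weight w(l)
    m0 = np.trapz(w, l)
    m1 = np.trapz(l * w, l)
    m2 = np.trapz(l**2 * w, l)
    mean = m1 / m0
    std = np.sqrt(m2 / m0 - mean**2)
    return mean, std

l = np.linspace(0.0, 40.0, 4001)              # local separations (hypothetical units)
l0, k_rl, kT = 15.0, 2.0, 1.0                 # hypothetical preferred length, spring constant, k_B T
w = np.exp(-0.5 * k_rl * (l - l0)**2 / kT)    # illustrative harmonic weight function
print(mean_and_std_from_moments(l, w))        # ~ (15.0, 0.707)
```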
from eq .( [ moc ] ) and the rotational phase space volume of the unbound receptors and ligands , we obtain the integral of the function .( [ intk2dapprox ] ) results from the approximation for of the dawson function and is rather precise compared to eq .( [ intk2d ] ) , with a relative error of 0.1 % for , and much smaller relative errors for larger values of .( [ k2dlim ] ) for the binding constant at large membrane roughnesses follows from eq .( [ intk2dapprox ] ) .to obtain general scaling relations for the roughness and local orientation of fluctuating membranes , we consider here a tensionless quadratic membrane segment with projected area and periodic boundary conditions in monge parametrization .the shapes of this quadratic membrane segment can be described by the fourier decomposition \label{fourier}\ ] ] with and where , and are integers .the summation in eq .( [ fourier ] ) extends over half the -plane with .the bending energy of a given membrane shape with fourier coefficients and then is with .since the fourier modes are decoupled , the mean - squared amplitude of each mode can be determined independently as the local mean - square deviation of the membrane from the average location then can be calculated \nonumber\\ & = \sum_{\boldsymbol{q}}\frac{2 k_b t}{\kappa q^4 l^2 } \nonumber\\ & \simeq \left(\frac{l}{2\pi}\right)^2\int_{\pi / l}^{\pi / a}\frac{2 k_b t}{\kappa q^4 l^2 } \pi q\,\text{d}q \simeq \frac{k_b tl^2}{4\pi^3 \kappa } \label{msd}\end{aligned}\ ] ] after converting the sum over the wavevectors into an integral over half the -plane from to where is molecular length scale . similarly , the local mean - square gradient of the on average planar membrane can be calculated as \nonumber \\ & = \sum_{\boldsymbol{q}}\frac{2 k_b t}{\kappa q^2 l^2 } \nonumber\\ & \simeq \left(\frac{l}{2\pi}\right)^2\int_{\pi / l}^{\pi / a}\frac{2 k_b t}{\kappa q^2 l^2 } \pi q\,\text{d}q = \frac{k_b t}{2\kappa\pi } \ln\left(\frac{l}{a}\right ) \label{mgrads}\end{aligned}\ ] ] according to eq .( [ msd ] ) , the roughness is proportional to the linear size of the quadratic membrane segment , which in turn is proportional to the lateral correlation length of the membrane . 
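a rough numerical check of this scaling can be obtained by summing the discrete bending modes directly , as sketched below ; the bending rigidity and cutoff values are hypothetical , and the prefactor is not expected to match eq . ( [ msd ] ) exactly because it depends on the infrared cutoff convention .

```python
import numpy as np

def roughness_squared(L, kappa=10.0, a=1.0, kT=1.0):
    """Sum the bending modes <|h_q|^2> = 2 k_B T / (kappa q^4 L^2) over half
    the q-plane for a segment of size L (kappa in units of k_B T, lengths in
    units of the molecular cutoff a)."""
    dq = 2 * np.pi / L
    n_max = int(np.pi / (a * dq)) + 1
    total = 0.0
    for nx in range(0, n_max + 1):
        for ny in range(-n_max, n_max + 1):
            if nx == 0 and ny <= 0:
                continue                      # keep only half the q-plane, skip q = 0
            q = dq * np.hypot(nx, ny)
            if q <= np.pi / a:
                total += 2 * kT / (kappa * q**4 * L**2)
    return total

for L in (50.0, 100.0, 200.0):
    print(L, roughness_squared(L) / L**2)     # roughly constant -> roughness grows as L
```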
in our mc simulations with tensionless membranes , the lateral correlation length is proportional to the mean distance of the receptor - ligand complexes . fig . [ roughness_scaling ] illustrates that the relative roughness in our tensionless mc simulations is proportional to this mean distance of the receptor - ligand complexes .
adhesion processes of biological membranes that enclose cells and cellular organelles are essential for immune responses , tissue formation , and signaling . these processes depend sensitively on the binding constant of the membrane - anchored receptor and ligand proteins that mediate adhesion , which is difficult to measure in the ` two - dimensional ' ( 2d ) membrane environment of the proteins . an important problem therefore is to relate to the binding constant of soluble variants of the receptors and ligands that lack the membrane anchors and are free to diffuse in three dimensions ( 3d ) . in this article , we present a general theory for the binding constants and of rather stiff proteins whose main degrees of freedom are translation and rotation , along membranes and around anchor points ` in 2d ' , or unconstrained ` in 3d ' . the theory generalizes previous results by describing how depends both on the average separation and thermal nanoscale roughness of the apposing membranes , and on the length and anchoring flexibility of the receptors and ligands . our theoretical results for the ratio of the binding constants agree with detailed results from monte carlo simulations without any data fitting , which indicates that the theory captures the essential features of the ` dimensionality reduction ' due to membrane anchoring . in our monte carlo simulations , we consider a novel coarse - grained model of biomembrane adhesion in which the membranes are represented as discretized elastic surfaces , and the receptors and ligands as anchored molecules that diffuse continuously along the membranes and rotate at their anchor points .
the various quantum - state reconstruction techniques developed during recent years have made it possible to completely reconstruct an unknown state of a quantum mechanical system provided that many identical copies of the state are available .these reconstruction methods are nowadays routinely applied to the evaluation of the experiments where quantum states are generated , manipulated and transmitted .the field was pioneered in the beginning of nineties in quantum optics , where the optical homodyne tomography has been devised for reconstruction of the quantum state of traveling light field . since then, many other reconstruction methods applicable to various physical systems have been developed .the inference of quantum states plays very important role in the present - day experiments .most of the reconstruction methods , such as the direct sampling in optical homodyne tomography , are based on a direct linear inversion of the experimental data .this approach is conceptually simple and feasible .however , it may lead to certain unphysical artifacts such as the negative eigenvalues of the reconstructed density matrix . in order to avoid these unphysical artifacts ,an estimation method based on statistical maximum - likelihood principle has been devised for the reconstruction of a generic quantum state .this approach guarantees the positive semidefiniteness and trace normalization of the reconstructed density matrix .these necessary conditions are incorporated as constraints , so as a certain prior information from the statistical point of view .remarkably , the maximum likelihood estimation can be interpreted as a genuine generalized quantum measurement and can be related to the information gained by optimal measurement and the fisher information .given current interest in the quantum - information processing , it is of paramount importance to reconstruct not only the quantum states but also the transformations of these states the quantum mechanical processes .the examination of quantum communication channels and the evaluation of the performance of quantum gates are the examples of practical applicability of quantum - process reconstruction .all necessary properties of the deterministic quantum transformations , namely the complete positivity and trace preservation can be again incorporated within the maximum - likelihood approach as the appropriate constraints .compared with other reconstruction methods the maximum - likelihood approach seems to be computationally more difficult .therefore several simplifications and approximations of the maximum - likelihood technique have been suggested recently . in this paperwe present a unified approach to the maximum - likelihood reconstruction of quantum states and quantum processes .extremal equations for the reconstructed quantum state and for quantum process are derived in section ii .these equations can easily be solved numerically by means of repeated iterations .particular attention will be paid to the probing of the quantum process by entangled states which attracted considerable attention recently . 
in section iiiwe consider a realistic scenario where an unknown quantum transformation is probed by unknown states and the measurements are performed on both the input and output states .we propose a method for simultaneous estimation of the unknown probe states and the quantum process from the collected experimental data .the comparison of the exact maximum - likelihood method with the approximate ones is carried out in section [ comparison ] .finally , the conclusions are given in section v.let us start with a brief review of the maximum - likelihood reconstruction of a quantum state .we assume a finite number of identical samples of the physical system , each in the same but unknown quantum state described by the density operator .having these systems our task is to infer the unknown quantum state from the results of the measurements performed on them .we consider the positive operator - valued measure ( povm ) that yields probabilities of individual outcomes , , \qquad p_l \geq 0 , \qquad \sum_l p_l = 1.\ ] ] if the povm is tomographically complete it is possible to determine the true state directly by inverting the linear relations ( [ state_probabilities ] ) between the probabilities and the elements of the density matrix . however , there is no way how to find out the exact probabilities since only a finite number of samples of physical systems can be investigated . in the case of occurrences of outcomes the relative detection frequencies represent the only data that could be used for reconstructing the true state .the maximum - likelihood approach to this reconstruction problem consists in finding a density operator that generates through eq .( [ state_probabilities ] ) probabilities which are as close to the observed frequencies as possible , , } & \label{state_estimation } \\ & { \displaystyle { \cal l}[f_l , p_l(\rho ) ] = \sum_l f_l \ln p_l . } & \label{state_likelihood}\end{aligned}\ ] ] the measure ] , has been treated with the help of the numerical up - hill simplex method . a more analytical approach to the probleminvolves a formulation of nonlinear extremal operator equation for the density matrix that maximizes the log - likelihood functional , where the lagrange multiplier reads = \sum_l f_l=1.\ ] ] the crucial advantage of the equation ( [ state_extremal_eq ] ) is that it is suitable for iterative solution , as has been demonstrated on many particular reconstruction problems .a combination of equation ( [ state_extremal_eq ] ) and hermitian conjugate equation leads to the symmetric extremal equations in the manifestly positive semidefinite form , \right)^{1/2}.\ ] ] the iterations preserve the positive semidefiniteness and trace normalization of the density operator . 
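a minimal sketch of this iteration , assuming the povm elements and the measured relative frequencies are available as numpy arrays , might look as follows ; the qubit example at the end uses noise - free frequencies , so the iteration converges towards the true state .

```python
import numpy as np

def maxlik_state(povm, freqs, dim, iters=1000):
    """R-rho-R iteration: rho <- N[ R(rho) rho R(rho) ],
    with R(rho) = sum_l (f_l / p_l) Pi_l and p_l = Tr[rho Pi_l]."""
    rho = np.eye(dim, dtype=complex) / dim               # maximally mixed starting point
    for _ in range(iters):
        probs = [np.trace(rho @ P).real for P in povm]
        R = sum((f / max(p, 1e-12)) * P for f, p, P in zip(freqs, probs, povm))
        rho = R @ rho @ R                                # keeps rho positive semidefinite
        rho /= np.trace(rho).real                        # restore unit trace
    return rho

# illustrative qubit example: POVM built from the three Pauli measurements
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
povm = [(I2 + sgn * s) / 6 for s in (sx, sy, sz) for sgn in (+1, -1)]
rho_true = np.array([[0.8, 0.3], [0.3, 0.2]], dtype=complex)
freqs = [np.trace(rho_true @ P).real for P in povm]      # noise-free 'frequencies'
print(np.round(maxlik_state(povm, freqs, dim=2), 3))     # converges towards rho_true
```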
while density operator describes the state of physical system , the linear completely positive ( cp ) map describes the generic transformation of physical system from quantum state to quantum state .the mathematical formulation of cp maps relies on the isomorphism between linear cp maps from operators on the hilbert space to operators on the hilbert space and positive semidefinite operators on hilbert space , = { \rm tr}_{\cal h } \left [ { s } \ , \rho_{\rm in}^{\rm t } \ !\otimes \openone_{\cal k } \right],\ ] ] where is an identity operator on the space and denotes the transposition .the deterministic quantum transformations preserve the trace of the transformed operators , ={\rm tr}_{\cal{h}}[\rho_{\rm in}] ] is satisfied at each iteration step .the density matrix representing the cp map can be in fact prepared physically in the laboratory if we first prepare a maximally entangled state on the hilbert space and then apply a cp map to one part of this entangled state . in this way the quantum - process tomography can be transformed to the quantum - state tomography .more generally , this suggests that it may be useful to employ entangled quantum states as probes of the unknown quantum process .let denote the entangled state on the hilbert space that serves as a probe of the cp map that is applied to the subsystem . a joint generalized measurement described by the povms if performed on the output hilbert space .the log - likelihood functional has the form ( [ process_likelihood ] ) , only the formula for the probability changes to ,\ ] ] where stands for the partial transposition in the subsystem .consequently , the operator appearing in the extremal eqs .( [ process_extremal_sym_eq ] ) and ( [ process_extremal_sym_eq_lambda ] ) must be calculated as follows , .\ ] ] apart from these modifications of and one can proceed as before and solve eqs .( [ process_extremal_sym_eq ] ) and ( [ process_extremal_sym_eq_lambda ] ) by means of repeated iterations .up to now quantum states and processes have been treated independently . however , this is just a simplification typical for the realm of physical experiments . widely accepted strategy how to approach a complex problem is to specify some partial subproblems , address them separately and merge the solutions .this technique usually gives good answer in the technical sense .though this is possible even in quantum theory , there are no fundamental reasons for such a factorization . to consider the full problem without splitting it into isolated subproblems is technically more advanced but could be advantageous .this strategy will be demonstrated on the synthesis of the problems treated separately in the previous section .let us assume the estimation of the generic process with the help of set of probe states , identity of which is also unknown .what is only known to the experimentalists are the output of certain measurements performed on the ensemble of probe states and on the ensemble of transformed probe states . in this senseall the considerations are done _ ab initio _ , since only results of generic measurements are required .a quantum object could be considered as known only to the extent specified by some preceding measurements .all the physically relevant results will be derived exclusively from the acquired data , where input states and their transformation are inseparably involved .states and their transformation should be considered as quantum objects . 
as suchthey are affected by quantum fluctuations , since in every experiment a certain portion of the noise will be present on the microscopic level . in the following the probe quantum states will be treated as unknown mixed states and they will be inferred together with the unknown quantum process . in accordance with the theory presented abovelet us consider the set of probe states on the space . by means of unknown quantum process these statesare transformed onto output states in the space .the observation must be more complex now involving the detection on the ensemble of both the input and the output states .for this purpose the corresponding povm elements will be denoted by and .the diagram involving detected signals and measurements is shown in fig .[ fig_simultaneous ] .let denotes the relative frequency of detection of the povm element in the input space and denotes the relative frequency of detection of the povm element in the output space .the frequencies , , and , , approximate the true probabilities and of individual outcomes , respectively , , }\\ ~ \vspace*{-2 mm } ~ \\ { \displaystyle p_{ml } = { \rm tr}_{\cal k}\ !\left [ \rho_{m,{\rm out } } \pi_{ml } \right ] = { \rm tr}\ ! \left [ { s } ( \rho_m^{\rm t } \otimes \pi_{ml } ) \right ] , } \end{array}\ ] ] where the relation ( [ cp_map ] ) was used .the estimated process and probe states should maximize the constrained log - likelihood functional - { \rm tr}\left [ \lambda { s } \right ] . }\end{array}\ ] ] the additivity of log likelihood reflects the independence of observations performed on the input and output states with the same degree of credibility .the lagrange multipliers and fix necessary constraints the trace normalization of the states , = 1 ] .this is equivalent to assuming that the lagrange multiplier is proportional to identity operator . in order to compare explicitly the exact maximum - likelihood estimation of quantum process with approximate method presented in refs . we have carried out extensive numerical simulations. quantitative comparison of the two approaches was based on the variances of estimates ( exact ) and ( approximate ) , \right\rangle_{\rm ens } , \\\sigma_{\rm a}^2 = \left\langle { \rm tr}[({s}_{\rm a } - { s}_{\rm true})^2 ] \right\rangle_{\rm ens } , \end{array}\ ] ] where denotes averaging over an ensemble of all possible experimental data and denotes the true cp map . for a given fixed cp map , input states , and output measurements , we have repeated times a simulation of the measurements and reconstruction of the cp maps and .subsequently we have calculated variances ( [ variances ] ) as statistical averages over the acquired ensemble .we have found that the exact maximum - likelihood estimation yields in all cases much lower variance than approximate approach .this is a direct consequence of the fact that the exact treatment takes into account all constraints imposed by quantum mechanical laws on the estimated operator .a typical example is shown in fig .[ fig_comparison ] . in this case , the quantum process is a unitary transformation ( [ pi4_rotation_process ] ) of a single qubit .six different input states are considered eigenstates of three pauli matrices , , and . copies of each input state are used . on each corresponding output state , a spin projection along axes , and is measured times . as can be seen in fig .[ fig_comparison ] , the variance is approximately twice smaller than variance , which is a significant difference . 
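the probability formula above and the hilbert - schmidt distance of eq . ( [ variances ] ) can be sketched numerically as follows ; the identity - channel operator and the qubit state used in the consistency check are illustrative choices only , not the processes studied in the simulations .

```python
import numpy as np

def output_probability(S, rho_in, Pi_out):
    """p_ml = Tr[ S (rho_m^T (x) Pi_ml) ] for a process represented by S."""
    return np.trace(S @ np.kron(rho_in.T, Pi_out)).real

def hs_variance(estimates, S_true):
    """Ensemble-averaged Hilbert-Schmidt distance, < Tr[(S_est - S_true)^2] >."""
    return float(np.mean([np.trace((S - S_true) @ (S - S_true)).real
                          for S in estimates]))

# consistency check with the identity channel on a qubit:
# S_id = sum_ij |i><j| (x) |i><j|, so measuring Pi = |0><0| on the output
# must reproduce <0|rho|0>.
d = 2
basis = np.eye(d)
S_id = sum(np.kron(np.outer(basis[i], basis[j]), np.outer(basis[i], basis[j]))
           for i in range(d) for j in range(d)).astype(complex)
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
Pi0 = np.diag([1.0, 0.0]).astype(complex)
print(output_probability(S_id, rho, Pi0))   # -> 0.7
```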
in fact , for cp maps which do not represent unitary transformations , such as pauli damping channel , the difference may be even stronger .the unified approach to inference of quantum states and quantum processes from experimental noisy data has been presented .the proposed technique based on the maximum - likelihood principle preserves all properties of the states and the processes imposed by quantum mechanics .this method is very versatile and can handle data from many different experimental configurations such as the probing of quantum processes with entangled states or a simultaneous reconstruction of an unknown process and unknown states that are used to probe this process .the extremal equations ( [ state_extremal_sym_eq ] ) , ( [ process_extremal_sym_eq])([process_extremal_sym_eq_lambda ] ) , and ( [ simultaneous_extremal_sym_eq__states])([simultaneous_extremal_sym_eq_lambda ] ) for the most likely quantum state and process can be very efficiently solved numerically by means of repeated iterations .the exact maximum likelihood estimation of quantum objects has been compared with the approximate methods .the approximate ones yield estimates whose variance is typically substantially larger than in the case of the exact approach .this comparison clearly illustrates the importance of keeping all the constraints imposed by quantum theory .loosely speaking there is always a choice either to acquire less portion of the data and then to adopt more sophisticated algorithm for its evaluation or vice versa .the efficient and precise reconstruction technique discussed in the present paper can find applications in design and evaluation of quantum - information devices and contemporary quantum experiments .
the maximum - likelihood principle unifies inference of quantum states and processes from experimental noisy data . particularly , a generic quantum process may be estimated simultaneously with unknown quantum probe states provided that measurements on probe and transformed probe states are available . drawbacks of various approximate treatments are considered .
the structural modeling of amorphous materials poses a particular challenge to condensed matter science .the initial hurdle to overcome is devising a computer model that accurately represents a small fragment of the material .experimental data is inevitably the result of a system average involving macroscopic number of atoms in a continuously variable range of conformations .the result is that such data tend to be smooth with very limited information content .while the information provided by experiments is evidently of critical importance to understanding these materials , such information is incomplete ( e.g. , the information in the data is incapable of uniquely specifying the structure ) .the impressive advances in protein crystallography help to illustrate the challenge : in any crystalline system , diffraction measurements yield a palisade of functions . from the information entropy it is easy to show that there is vastly more information in the sharply defined function for the crystal than the smooth function characteristic of a glass or amorphous material .the structure factor for the crystal is nearly sufficient to uniquely invert the data to obtain the structure , a stark contrast with the situation for amorphous materials .this argument also emphasizes the need to use _ all _ available experiments in modeling . despite our lamentations about the limitations of information - based modeling , it is clearly wise to build models consistent with experimental information : our concern is that this information is highly incomplete by itself. the limitations of information from experimental data beg for a molecular dynamics ( md ) or monte carlo modeling approach using accurate interatomic interactions .if properly implemented , such a scheme will enforce the proper local ordering , chemistry etc .however , these approaches suffer from their own shortcomings : despite superficial similarities to the physical process of making a glass ( quenching from the melt ) , such simulations are carried out with unphysically rapid quenches , models that are tiny ( especially if accurate interactions are used ) , and of course the interactions themselves are never perfect . despite these cautions , such simulations have met with many successes in a range of materials .an ideal modeling approach should merge the information - based method and the computer simulation scheme .there is no unique way to accomplish this , and the bottom line `` is that whatever scheme is adopted , it must produce models that agree with all known information .we are aware of three efforts in this direction : our experimentally constrained molecular relaxation '' ( ecmr ) method , a bayesian method for biomolecules and a related scheme used on amorphous carbon .these methods vary in many details , but are similar in spirit and all have met with success in the problems approached . hydrogenated a - si ( a - si :h ) is one of the most important electronic materials .while there is slight variability in pair - correlation functions measured for different samples , fluctuation electron microscopy ( fem ) experiments probing triplet or higher atomic correlations show dramatic variation from sample to sample . even in this most venerable amorphous electronic materialthere is a lack of understanding about the difference in network topology on the medium range length scale between samples with different fem data . 
in this paperwe further develop our ecmr method to form models of a - si including medium range order implied by fluctuation electron microscopy ( fem ) measurements .the inverse approach takes a very different route to model materials .the focus here is on available experimental information pertaining to the materials under study .the challenge is to construct a model that is consistent with a given set of experimental data , and additionally an approximate total energy functional . in the context of materialsmodeling , the primary interest is on structure determination and the resulting electronic properties , but the formalism is also useful to construct empirical potentials .although there exists no general proof that a many - body potential can be constructed uniquely within this approach , henderson has shown a connection between pair potentials and radial distributions that states for a system under given temperature and pressure two pair potentials that produce same radial distribution functions can differ only by an additive constant .lyubartsev and laaksonen have followed this idea to construct interaction potentials from radial distribution functions via reverse monte carlo simulation and apply it to aqueous sodium chloride ( nacl ) solution .soper has developed empirical potential structure refinement ( epsr ) where total diffraction data can be inverted into a set of partial structure factors by extending an earlier method of edwards and enderby and reverse monte carlo method .zunger has recently applied the inverse band structure approach to find atomic configurations for a given set of electronic and optical properties in alloys .the reverse monte carlo ( rmc ) method developed by mcgreevy and coworkers describes how to construct a physical structure ( i.e. a 3-dimensional model ) of a material using the information included in the structure factors . instead of using any conventional energy functional ,a generalised penalty function is constructed involving experimental structural data and some suitable constraints , which is then minimized by using the metropolis monte carlo algorithm .the set of configurations obtained in this method can be used for further analysis of structural , electronic and vibrational properties .the method does not generate interaction potentials and in absence of sufficient information , configurations obtained from rmc may not be physically meaningful .one usually addresses this problem by adding further information , but often this proves to be difficult to optimize via simple monte carlo scheme .ecmr has been designed to overcome some of the problems above .mathematically , ecmr offers an approximate solution to the constrained optimization problem : _ find a set of coordinates that is a minimum of an accurate energy functional subject to the constraint that the coordinates reproduce one or more experimental data sets_. in practice it may be useful to impose other constraints too , for example on atomic coordination or chemical order . in the following , we apply ecmr to model medium range order using fem data as experimental information and an empirical total energy functional .medium range order ( mro ) is defined as structural ordering that exists between the short range ( typically 3 - 5 ) and the long range ( ) length scale .quantifying order at this length scale is somewhat ambiguous and requires information beyond radial ( pair ) distribution functions . 
until recently ,there has been a very few direct experimental evidence to detect mro . in ionic and covalent glasses, mro manifests itself in the first sharp diffraction peak ( fsdp ) of the total factor structure factor . this feature corresponds to real space ordering in materials at the intermediate length scale .the well known staebler - wronski effect is an example where creation of metastable dangling bonds in hydrogenated amorphous silicon upon exposure to visible light has been observed to occur in the material with diminishing medium range order .fluctuation electron microscopy clearly reveals that structure of thin films of amorphous silicon are much more complex than a continuous random network model .higher order correlation functions are the most suitable candidates for studying the signature of mro in amorphous networks . however ,obtaining experimental structural information beyond the 2-body correlation function is non - trivial and there exists no simple and direct scheme of systematic analysis of the full 3- and 4-body correlation functions .treacy and gibson have addressed the problem experimentally by developing a low resolution electron microscopy technique known as fluctuation electron microscopy ( fem ) .fem can detect mro because it is sensitive to 3- and 4-body correlation functions .it was shown that the fluctuation in the diffracted intensities can be measured by the normalized variance of the intensities , and is directly related to 3- and 4-body correlation functions containing the information at the medium range length scale .we apply our ecmr technique starting with two very different models of a - si : the first is a paracrystalline model of amorphous si proposed by khare and the second includes voids in continuous random networks . in our work , we start from each of these models and apply our ecmr method to obtain final configurations displaying fem signal , which we call model - a and model - b respectively . in both the cases , one observes the presence of strong fem signal , and the model is also consistent with other physical observables such as structure factors , electronic and vibrational density of states .before we proceed to model generation , we briefly mention the key equations of fluctuation electron microscopy ( fem ) that have been used here in conjunction with ecmr method to generate amorphous network containing medium range order . for a detailed description of fem and ecmr , we refer to refs and ref respectively . in fem, we estimate mro by measuring the normalized variance of the dark - field image intensity instead of intensity itself . the normalized variance is defined as : v(k , q)= - 1 [ vk ] the variable is the magnitude of the scattering vector and defines the characteristic length scale of mro . in a variable coherence microscopy ,one fixes the value of and varies in order to determine the degree of mro present in the length scale of inverse .following treacy and gibson , we are interested in the fluctuation in the intensity for varying at a fixed spatial resolution . 
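a minimal numerical sketch of this normalized variance , assuming the dark - field intensities have already been sampled over image positions ( and sample orientations ) , might read :

```python
import numpy as np

def normalized_variance(intensities):
    """Normalized variance of eq. [vk]: V = <I^2>/<I>^2 - 1."""
    I = np.asarray(intensities, dtype=float)
    return np.mean(I**2) / np.mean(I)**2 - 1.0

print(normalized_variance([1.0, 1.0, 1.0]))   # homogeneous image -> 0
print(normalized_variance([0.2, 1.0, 1.8]))   # spatial fluctuations -> ~0.43
```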
the intensity due to scattering from a volume centered at * r * of size proportional to is given by , i(*k * , q ) = f^2(*k * ) ^2 _ 0 t ( 1 + _ 0 d^3*r_12 * g_2(*r_12 * ) f_k(*r_12 * ) a_q(*r_12 * ) ) where is the radial distribution function , is the coherence function describing incoming illumination , and is the microscope response function .the intensity in the above expression involves only and therefore does not carry information about mro .it is the second moment of the intensity that includes 3- and 4-body correlation functions , which provide information at the medium range length scale .a mathematical expression of and its derivation is given by voyles .computer simulations have recently indicated that amorphous silicon or germanium films may contain some nano - sized crystalline grains embedded in a crn matrix .this model of amorphous silicon is called paracrystalline , and simulation of fem data using these models have been observed to interpret experimental results .it is proposed that the size and shape of the grains are related to the height and position of the peaks in the fem signal , and an appropriate concentration ( typically 20% 30% by number ) of such crystalline grains in amorphous matrix can reproduce correct structural , vibration and electronic properties .however , the model is not unique . since we know from reverse monte carlo simulation that it is possible to generate configurations of amorphous silicon having almost identical structure factor observed in experiment but with drastically different local bonding , it is necessary to explore the possibility of constructing models that do not explicitly contain nano - sized grains in the networks to start with .we have studied the problem along this direction via reverse monte carlo and modified wooten - winer - weaire ( www ) method and observed that direct inclusion of fem signal in crn introduces strain in the network .the resulting network shows a strong fem signal and maintains other properties of a - si , but does not produce any visible ordering ( such as distorted crystals that is expected from paracrystalline models ) except occasional occurrences of few schlfli clusters .it is instructive to study the stability of paracrystalline models via ecmr . to this end, we first generate a starting configuration containing grain(s ) of diamond crystal by creating voids of nanometer size in a crn , and then construct a generalized cost function involving fem signal , a suitably chosen energy functional ( modified stillinger weber potential ) and the structure factor as follows : = _ m - sw + _i=1 ^ 3 _ i _ i [ penalty ] _ 1 & = & _ j ( v_c(k_j ) - v_exp(k_j))^2 + _ 2 & = & 1 - ( r - r_c ) + _ 3 & = & _ j ( s_c(k_j ) - s_exp(k_j))^2 here is the modified stillinger - weber potential , and stand for fem data and structure factor respectively , and and are appropriate weight factors ( for each data set ) which may change during the course of simulation .our starting configuration is a 4056-atom continuous random network that contains a 216-atom grain of diamond crystal .this starting configuration shows the presence of a weak fem signal by construction .we minimize the cost function in equation via metropolis monte carlo algorithm by moving the crystal and interface atoms . 
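a minimal sketch of such a metropolis minimization is given below ; the user - supplied cost function , the move size , the fictitious temperature and the list of movable ( crystal and interface ) atoms are placeholders standing in for the actual ecmr implementation rather than a description of it .

```python
import numpy as np

def ecmr_metropolis(positions, movable, cost, step=0.1, temperature=0.05,
                    n_trials=10000, rng=None):
    """Metropolis minimization of a generalized penalty
    xi = E_mSW + sum_i lambda_i * xi_i (FEM, coordination and structure-factor
    terms bundled inside the user-supplied `cost`); only the atoms listed in
    `movable` (crystal and interface atoms) are displaced."""
    rng = rng or np.random.default_rng()
    pos = positions.copy()
    current = cost(pos)
    for _ in range(n_trials):
        i = rng.choice(movable)
        trial = pos.copy()
        trial[i] += rng.normal(scale=step, size=pos.shape[1])
        new = cost(trial)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if new <= current or rng.random() < np.exp(-(new - current) / temperature):
            pos, current = trial, new
    return pos, current
```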
during the monte carlo minimization, the topological constraint of the crystaline grain is relaxed so that the atoms in the grain are free to evolve away from ( diamond ) crystalline geometry , and yet maintain other constraints ( such as the fem signal , structure factor etc . ) .the inclusion of the latter is important because of the difference in structure factors of crystalline and amorphous environment of si .the use of modified stillinger - weber potential controls the network strain , and maintains the total energy of the system during monte carlo simulation as minimum as possible . in figure [ fem ] , we have plotted the simulated fem signal obtained from the final configuration along with the experimental data .a structural analysis of this final configuration shows that the crystal and interface atoms have moved significantly to form a distorted ordered structure away from the perfect crystal .a schlfli cluster analysis has shown the presence of cluster which originates from diamond crystal structure .the bond and dihedral angle distributions have been plotted in figures [ bond ] and [ dihed ] respectively .no significant differences have been observed in the bond angle and dihedral distributions compared to its crn counterpart .the electronic density of states ( edos ) for the final fem - fitted model ( model - a ) is plotted in figure [ edos ] using a tight - binding model hamiltonian .the density of electronic states show a gap with some states in the gap .this is due to the presence of few 3-fold and 5-fold coordination defects in the model .a very different approach to understand the fem signal and hence mro in amorphous silicon is to study the presence of voids in the network structure .voids are a universal feature in amorphous silicon , and the characteristic of voids depends largely on the growth condition of the materials .the presence of voids is considered to be one of reasons of low density of amorphous silicon compared to its crystalline counterpart .small angle scattering of neutrons , electrons , and x - rays have been widely used to detect the characteristic presence of voids in both amorphous and hydrogenated amorphous silicons .theoretical modeling of voids in amorphous silicon by biswas _et al . _ have indicated the presence of rapidly increasing structure factor for wave vectors below 1 , which is supported by experiments . in this work ,we have developed models with voids in large continuous random network and have studied the variation of fem signal with different number of voids and its size . in order to test the viability of the model ,we first start with a 1000-atom paracrystalline model and remove the grain of crystal .the resulting model continues to show the presence of fem signal but the strength of the signal decreases as the wave vector increases . in figure[ para ] , we have plotted the fem signal for a paracrystalline model with and without the crystalline grain .it is clear from the figure that the first two peaks have not changed their positions and heights significantly .the formation of voids creates some coordination defects and introduces strain in the network , which can be minimized by structural relaxation of the network . using the first - principles density functional code siesta , we have relaxed the network to minimize the strain and to reduce the number of defects . 
while the surface of the voids reconstructs ,the voids continue to exist in the relaxed model with a strong presence of the fem signal .this observation suggests that presence of voids in amorphous network can also produce fem signal as in paracrystalline model .together with the presence of increasing structure factor at low wave vectors and fem data , it appears that voids in amorphous silicon networks introduce some correlation that can affect the higher order correlation functions .furthermore , introduction of voids does not change the other characteristic material properties significantly ( such as vibration and electronic density of states ) . in figure [ void ] we have plotted the results obtained from a model containing a single void of radius 12 . using our ecmr method ,we have minimized the generalised penalty function ( equation ) by moving the interface atoms .the void persists , but the surface of the void reconstructs to match with the normalized variance of intensity obtained from fem experiments . in figure [ fig - nvoid ] , we have plotted the simulated fem signal for different number of voids . the signal is observed to be maximum for four voids while minimum for two voids as shown in the figure .it is important to note that similar trends have been observed in case of paracrystalline model , where signal strength is observed to be dependent on the number of crystalline grains present in the sample .we have also studied the role of rotation of the sample for a model with given number of voids .the result is shown in the figure [ fig - rvoid ] .for the model with four voids of linear size between 6 to 10 , we find that the signal is more or less independent of 25 to 100 orientations of the model .we have used fluctuation electron microscopy data to incorporate medium range order in amorphous silicon starting with continuous random networks .we have discussed two models that are capable of producing the characteristic fem signal observed in experiments maintaining structural , electronic and vibrational properties of amorphous silicon . 
the first model ( model - a ) consists of a crn with nano - sized ordered grains in the network , while the second model ( model - b ) is based on the presence of voids in the network . our study clearly indicates that the fem signal is sensitive to the presence of small ordered grains and voids in the network . the fem signal is found to be determined by fluctuations or inhomogeneities due to voids or phase - separated regions of nano - meter size dispersed in an approximately homogeneous medium described by a continuous random network . we have shown that either crystalline inclusions or voids are possible explanations for the measured fem data . dad thanks the us nsf for support under grants dmr 0605890 and 0600073 . pb acknowledges the support of the university of southern mississippi under grant no . the authors would like to thank john abelson and paul voyles for providing experimental fem data and many conversations , and mike treacy for the program for schläfli cluster analysis . large crystalline grains in the paracrystalline model produce the characteristic 3rd crystalline peak in the radial distribution which is absent in amorphous silicon . this limits the size and volume of the small crystallites present in the network . our work suggests that the grains are not crystalline but do contain some topological character of the ( diamond ) crystal , which is supported by the schläfli cluster analysis . computer modeling has indicated that the introduction of small grains of crystal always introduces a fluctuation of intensity , measured via the normalized variance , that is independent of the matrix ( be it completely disordered , amorphous or otherwise ) . in our work , we move the crystalline and interface atoms with a view to searching for configurations that would further enhance the fem signal . this is necessary to produce models compatible with the experimental radial distribution function . figure caption : clusters found in the fem - fitted network ( model - a ) that originate from diamond crystals ; the linear dimensions of the clusters are 9.1 ( left ) and 9.8 ( right ) respectively ( a high quality figure is available from the authors on request ) . figure caption : fem signal obtained from a 1000-atom paracrystalline model with a 429-atom crystalline grain ; the fem signal after removing the grain is also plotted in the figure ( indicated as single void ) for comparison .
ideal models of complex materials must satisfy all available information about the system . generally , this information consists of experimental data , information implicit to sophisticated interatomic interactions and potentially other _ a priori _ information . by jointly imposing first - principles or tight - binding information in conjunction with experimental data , we have developed a method : experimentally constrained molecular relaxation ( ecmr ) that uses _ all _ of the information available . we apply the method to model medium range order in amorphous silicon using fluctuation electron microscopy ( fem ) data as experimental information . the paracrystalline model of medium range order is examined , and a new model based on voids in amorphous silicon is proposed . our work suggests that films of amorphous silicon showing medium range order ( in fem experiments ) can be accurately represented by a continuous random network model with inhomogeneities consisting of ordered grains and voids dispersed in the network .
social networks have existed for thousands of years , but it was not until recently that researchers have started to gain scientific insights into phenomena like the _ small world property_. the rise of the internet has enabled people to connect with each other in new ways and to find friends sharing the same interests from all over the planet .a social network on the internet can manifest itself in various forms .for instance , on _ facebook _ , people maintain virtual references to their friends .the contacts stored on mobile phones or email clients form a social network as well .the analysis of such networks both their static properties as well as their evolution over time is an interesting endeavor , as it reveals many aspects of our society in general .a classic tool to model human behavior is _game theory_. it has been a fruitful research field in economics and sociology for many years .recently , computer scientists have started to use game theory methods to shed light onto the complexities of today s highly decentralized networks .game theoretic models traditionally assume that people act autonomously and are steered by the desire to maximize their benefits ( or utility ) . under this assumption , it is possible to quantify the performance loss of a distributed system compared to situations where all participants collaborate perfectly . a widely studied measure which captures this loss of social welfare is the _ price of anarchy _ ( poa ) .even though these concepts can lead to important insights in many environments , we believe that in some situations , the underlying assumptions do not reflect reality well enough .one such example are social networks : most likely people act less selfishly towards their friends than towards complete strangers .such altruistic behavior is typically not considered in game - theoretic models . in this article, we propose a game theoretic framework for social networks .social networks are not only attractive to their participants , e.g. , it is well - known that the user profiles are an interesting data source for the pr industry to provide tailored advertisements . moreover , social network graphs can also be exploited for attacks , e.g. , email viruses using the users address books for propagating , worms spreading on mobile phone networks and over the internet telephony tool skype have been reported ( e.g. , ) .this article investigates rational inoculation strategies against such viruses from our game theoretic perspective , and studies the propagation of such viruses on the social network .this article makes a first step to combine two active threads of research : social networks and game theory .we introduce a framework taking into consideration that people may care about the well - being of their friends .in particular , we define the _ windfall of friendship _ ( wof ) which captures to what extent the social welfare improves in social networks compared to purely selfish systems . in order to demonstrate our framework , as a case study, we provide a game - theoretic analysis of a _ virus inoculation game_. concretely , we assume that the players have the choice between inoculating by buying anti - virus software and risking infection .as expected , our analysis reveals that the players in this game always benefit from caring about the other participants in the social network rather than being selfish .intriguingly , however , we find that the windfall of friendship may not increase monotonically with stronger relationships . 
despite the phenomenon being an `` ever - green '' in political debates , to the best of our knowledge ,this is the first article to quantify this effect formally .this article derives upper and lower bounds on the windfall of friendship in simple graphs .for example , we show that the windfall of friendship in a complete graph is at most ; this is tight in the sense that there are problem instances where the situation can indeed improve this much .moreover , we show that in star graphs , friendship can help to eliminate undesirable equilibria .generally , we discover that even in simple graphs the windfall of friendship can attain a large spectrum of values , from constant ratios up to , being the network size , which is asymptotically maximal for general graphs .also an alternative friendship model is discussed in this article where the relative importance of an individual friend declines with a larger number of friends . while the windfall of friendship is still positive ,we show that the non - monotonicity result is no longer applicable .moreover , it is proved that in both models , computing the best and the worst friendship nash equilibrium is -hard .the paper also initiates the discussion of implications on convergence .we give a potential function argument to show convergence of best - response sequences in various models and for simple , cyclic graphs .moreover , we report on our simulations which indicate that the convergence times are typically higher in social contexts , and hence constitute a certain price of friendship .finally , to complement our formal analysis of the worst equilibria , simulation results for average case equilibria are discussed .the remainder of this article is organized as follows .section [ sec : relwork ] reviews related work and section [ sec : model ] formally introduces our model and framework .the windfall of friendship on general graphs and on special graphs is studied in sections [ sec : general ] and [ sec : cliquestar ] respectively .section [ sec : relative ] discusses an alternative model where the relative importance of a friend declines if the total number of friends increases .aspects of best - response convergence and implications are considered in section [ sec : convergence ] .we report on simulations in section [ sec : simulations ] .finally , we conclude the article in section [ sec : conclusion ] .social networks are a fascinating topic not only in social sciences , but also in ethnology , and psychology .the advent of social networks on the internet , e.g. , _ facebook _ , _ linkedin _ , _ myspace _ , _ orkut _ , or _ xing _ , to name but a few , heralded a new kind of social interactions , and the mere scale of online networks and the vast amount of data constitute an unprecedented treasure for scientific studies .the topological structure of these networks and the dynamics of the user behavior has a mathematical and algorithmic dimension , and has raised the interest of mathematicians and engineers accordingly . 
the famous _ small world experiment _ conducted by stanley milgram 1967 has gained attention by the algorithm community and inspired research on topics such as decentralized search algorithms , routing on social networks and the identification of communities .the dynamics of epidemic propagation of information or diseases has been studied from an algorithmic perspective as well .knowledge on effects of this cascading behavior is useful to understand phenomena as diverse as word - of - mouth effects , the diffusion of innovation , the emergence of bubbles in a financial market , or the rise of a political candidate .it can also help to identify sets of influential players in networks where marketing is particularly efficient ( _ viral marketing _ ) .for a good overview on economic aspects of social networks , we refer the reader to , which , i.a . , compares random graph theory with game theoretic models for the formation of social networks .recently , game theory has also received much attention by computer scientists .this is partly due to the various actors and stake - holders who influence the decentralized growth of the internet : game theory is a useful tool to gain insights into the internet s socio - economic complexity .many aspects have been studied from a game - theoretic point of view , e.g. , _ routing _ , _ multicast transmissions _ , or _network creation _ .moreover , computer scientists are interested in the algorithmic problems offered by game theory , e.g. , on the existence of pure equilibria .this article applies game theory to social networks where players are not completely selfish and autonomous but have friends about whose well - being they care to some extent .we demonstrate our mathematical framework with a virus inoculation game on social graphs .there is a large body of literature on the propagation of viruses .miscellaneous misuse of social networks has been reported , e.g. , _ _email viruses _ _ have used address lists to propagate to the users friends .similar vulnerabilities have been exploited to spread worms on the _ mobile phone network _ and on the internet telephony tool _ _skype__. there already exists interesting work on game theoretic and epidemic models of propagation in social networks .for instance , montanari and saberi attend to a game theoretic model for the diffusion of an innovation in a network and characterize the rate of convergence as a function of graph structure .the authors highlight crucial differences between game theoretic and epidemic models and find that the spread of viruses , new technologies , and new political or social beliefs do not have the same viral behavior .the articles closest to ours are .our model is inspired by aspnes et al .the authors apply a classic game - theoretic analysis and show that selfish systems can be very inefficient , as the price of anarchy is , where is the total number of players .they show that computing the social optimum is -hard and give a reduction to the combinatorial problem _ sum - of - squares partition_. they also present a approximation .moscibroda et al . have extended this model by introducing malicious players in the selfish network .this extension facilitates the estimation of the robustness of a distributed system to malicious attacks .they also find that in a non - oblivious model , intriguingly , the presence of malicious players may actually _ improve _ the social welfare . 
in a follow - up work which generalizes the social context of to arbitrary bilateral relationships , it has been shown that there is no such phenomenon in a simple network creation game .the _ windfall of malice _ has also been studied in the context of congestion games by babaioff et al .in contrast to these papers , our focus here is on social graphs where players are concerned about their friends benefits .there is other literature on game theory where players are influenced by their neighbors . in_ graphical economics _ , an undirected graph is given where an edge between two players denotes that free trade is allowed between the two parties , where the absence of such an edge denotes an embargo or an other restricted form of direct trade .the payoff of a player is a function of the actions of the players in its neighborhood only .in contrast to our work , a different equilibrium concept is used and no social aspects are taken into consideration .note that the nature of game theory on social networks also differs from _ cooperative games _( e.g. , ) where each coalition of players has a certain characteristic cost or payoff function describing the collective payoff the players can gain by forming the coalition .in contrast to cooperative games , the `` coalitions '' are fixed , and a player participates in the `` coalitions '' of all its neighbors . a preliminary version of this article appeared at acm ec 2008 , andthere have been several interesting results related to our work since then .for example , studies auctions with spite and altruism among bidders , and presents explicit characterizations of nash equilibria for first - price auctions with random valuations and arbitrary spite / altruism matrices , and for first and second price auctions with arbitrary valuations and so - called regular social networks ( players have same out - degree ) . by rounding a natural linear program with region - growing techniques ,chen et al . present a better , -approximation for the best vaccination strategy in the original model of , where is the support size of the outbreak distribution .moreover , the effect of autonomy is investigated : a benevolent authority may suggest which players should be vaccinated , and the authors analyze the `` price of opting out '' under partially altruistic behavior ; they show that with positive altruism , nash equilibria may not exist , but that the price of opting out is bounded .we extend the conference version of this article in several respects .the two most important additions concern _ relative friendship _ and _ convergence_. we study an additional model where the relative importance of a neighbor declines with the total number of friends and find that while friendship is still always beneficial , the non - monotonicity result no longer applies : unlike in the absolute friendship model , the windfall of friendship can only increase with stronger social ties .in addition , we initiate the study of convergence issues in the social network .it turns out that it takes longer until an equilibrium is reached compared to purely selfish environments and hence constitutes a price of friendship .we present a potential function argument to prove convergence in some simple cyclic networks , and complement our study with simulations on kleinberg graphs .we believe that the existence of and convergence to social equilibria are exciting questions for future research ( see also the related fields of _ player - specific utilities _ and _ local search complexity _ ) . 
finally , there are several minor changes , e.g. , we improve the bound in theorem [ thm : monotone ] from to .this section introduces our framework . in order to gain insights into the windfall of friendship, we study a virus inoculation game on a social graph .we present the model of this game and we show how it can be extended to incorporate social aspects .the virus inoculation game was introduced by .we are given an undirected network graph of players ( or nodes ) , for , who are connected by a set of edges ( or _ links _ ) .every player has to decide whether it wants to _ inoculate _( e.g. , purchase and install anti - virus software ) which costs , or whether it prefers saving money and facing the risk of being infected .we assume that being infected yields a damage cost of ( e.g. , a computer is out of work for days ) . in other words , an instance of a gameconsists of a graph , the inoculation cost and a damage cost .we introduce a variable for every player denoting s chosen _strategy_. namely , describes that player is protected whereas for a player willing to take the risk , . in the following, we will assume that , that is , we do not allow players to _ mix _ ( i.e. , use probabilistic distributions over ) their strategies .these choices are summarized by the _ strategy profile _ , the vector .after the players have made their decisions , a virus spreads in the network .the propagation model is as follows .first , one player of the network is chosen uniformly at random as a starting point .if this player is inoculated , there is no damage and the process terminates .otherwise , the virus infects and all unprotected neighbors of .the virus now propagates recursively to their unprotected neighbors .hence , the more insecure players are connected , the more likely they are to be infected .the vulnerable region ( set of players ) in which an insecure player lies is referred to as s _ attack component_. we only consider a limited region of the parameter space to avoid trivial cases .if the cost is too large , no player will inoculate , resulting in a totally insecure network and therefore all players eventually will be infected . on the other hand , if , the best strategy for all players is to inoculate .thus , we will assume that and in the following . in our game , a player has the following expected cost : [ actualcost] the _ actual individual cost _ of a player is defined as where denotes the size of s attack component .if is inoculated , stands for the size of the attack component that would result if became insecure . in the following ,let refer to the actual cost of an insecure and to the actual cost of a secure player .the total _ social cost _ of a game is defined as the sum of the cost of all participants : .classic game theory assumes that all players act selfishly , i.e. , each player seeks to minimize its individual cost . in order to study the impact of such selfish behavior , the solution concept of a _nash equilibrium _ ( ne ) is used .a nash equilibrium is a strategy profile where no selfish player can unilaterally reduce its individual cost given the strategy choices of the other players. we can think of nash equilibria as the stable strategy profiles of games with selfish players .we will only consider pure nash equilibria in this article , i.e. , players can not use random distributions over their strategies but must decide whether they want to inoculate or not . 
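since the cost expressions above were lost in this version of the text , a small sketch may help make the model concrete . the following python snippet is a minimal illustration , under the assumption ( consistent with the expressions used later in the analysis ) that an insecure player pays the expected damage , i.e. , its attack component size times l / n , while a secure player pays the inoculation cost c ; the graph , the parameter values and the function names are ours and not part of the original model .

```python
from collections import deque

def attack_components(n, edges, secure):
    """Partition the insecure players into attack components
    (maximal connected sets of insecure players)."""
    adj = {i: [] for i in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    comp = {}      # insecure player -> component id
    sizes = []     # component id -> number of insecure players in it
    for s in range(n):
        if secure[s] or s in comp:
            continue
        cid, size = len(sizes), 0
        queue = deque([s])
        comp[s] = cid
        while queue:
            u = queue.popleft()
            size += 1
            for w in adj[u]:
                if not secure[w] and w not in comp:
                    comp[w] = cid
                    queue.append(w)
        sizes.append(size)
    return comp, sizes

def actual_costs(n, edges, secure, C, L):
    """Actual individual cost: C if inoculated, k_i * L / n otherwise,
    where k_i is the size of player i's attack component (assumed form)."""
    comp, sizes = attack_components(n, edges, secure)
    return [C if secure[i] else sizes[comp[i]] * L / n for i in range(n)]

# toy example: a path of 5 players, only the middle one inoculated
n, edges = 5, [(0, 1), (1, 2), (2, 3), (3, 4)]
secure = [False, False, True, False, False]
costs = actual_costs(n, edges, secure, C=1.0, L=4.0)
print(costs, "social cost:", sum(costs))
```

the same routine can be reused to evaluate candidate strategy profiles when experimenting with ( friendship ) best responses .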
in a pure nash equilibrium, it must hold for each player that given a strategy profile , implying that player can not decrease its cost by choosing an alternative strategy . in order to quantify the performance loss due to selfishness , the ( not necessarily unique ) nash equilibriaare compared to the optimum situation where all players collaborate . to this endwe consider the _ price of anarchy _( poa ) , i.e. , the ratio of the social cost of the worst nash equilibrium divided by the optimal social cost for a problem instance .more formally , our model for social networks is as follows .we define a _ friendship factor _ which captures the extent to which players care about their _ friends _ ,i.e. , about the players _ adjacent _ to them in the social network .more formally , is the factor by which a player takes the individual cost of its neighbors into account when deciding for a strategy . can assume any value between 0 and 1 . implies that the players do not consider their neighbors cost at all , whereas implies that a player values the well - being of its neighbors to the same extent as its own .let denote the set of neighbors of a player .moreover , let be the set of inoculated neighbors , and the remaining insecure neighbors .we distinguish between a player s _ actual cost _ and a player s _ perceived cost_. a player s actual individual cost is the expected cost arising for each player defined in definition [ actualcost ] used to compute a game s social cost . in our social network ,the decisions of our players are steered by the players _ perceived cost_. [ perceived cost] the _ perceived individual cost _ of a player is defined as in the following , we write to denote the perceived cost of an insecure player and for the perceived cost of an inoculated player .this definition entails a new notion of equilibrium .we define a _ friendship nash equilibrium _( fne ) as a strategy profile where no player can reduce its _cost by unilaterally changing its strategy given the strategies of the other players .formally , given this equilibrium concept , we define the _ windfall of friendship _ . [def : wfdef ] the _ windfall of friendship _ is the ratio of the social cost of the worst nash equilibrium for and the social cost of the worst friendship nash equilibrium for : implies the existence of a real windfall in the system , whereas denotes that the social cost can become _greater _ in social graphs than in purely selfish environments .in this section we characterize friendship nash equilibria and derive general results on the windfall of friendship for the virus propagation game in social networks . it has been shown that in classic nash equilibria ( ), an attack component can never consist of more than insecure players .a similar characteristic also holds for friendship nash equilibria . as every player cares about its neighbors , the maximal attack component size in which an insecure player still does not inoculate depends on the number of s insecure neighbors and the size of their attack components .therefore , it differs from player to player .we have the following helper lemma .[ lemma : ac - size ] the player will inoculate if and only if the size of its attack component is where the are the attack component sizes of s insecure neighbors assuming is secure .player will inoculate if and only if this choice lowers the perceived cost . 
by definition [ perceived cost ] ,the perceived individual cost of an inoculated player is and for an insecure player we have for to prefer to inoculate it must hold that for all instances of the virus inoculation game and , it holds that the proof idea for is the following : for an instance we consider an arbitrary fne with . given this equilibrium ,we show the existence of a ne with larger social cost ( according to , our best response strategy always converges ) .let be any ( e.g. , the worst ) fne in the social model .if is also a ne in the same instance with then we are done .otherwise there is at least one player that prefers to change its strategy .assume is insecure but favors inoculation .therefore s attack component has on the one hand to be of size at least and on the other hand of size at most ( cf lemma [ lemma : ac - size ] ) .this is impossible and yields a contradiction to the assumption that in the selfish network , an additional player wants to inoculate .it remains to study the case where is secure in the fne but prefers to be insecure in the ne .observe that , since every player has the same preference on the attack component s size when , a newly insecure player can not trigger other players to inoculate .furthermore , only the players inside s attack component are affected by this change .the total cost of this attack component increases by at least applying lemma [ lemma : ac - size ] guarantees that this results in since a player only gives up its protection if .if more players are unhappy with their situation and become vulnerable , the cost for the ne increases further . in conclusion , there exists a ne for every fne with for the same instance which is at least as expensive .the upper bound for the wof , i.e. , , follows directly from the definitions : while the poa is the ratio of the ne s social cost divided by the social optimum , is the ratio between the cost of the ne and the fne . as the fne s cost must be at least as large as the social optimum cost the claim follows .note that aspnes et al . proved that the price of anarchy never exceeds the size of the network , i.e. , .consequently , the windfall of friendship can not be larger than due to theorem 4.2 .the above result leads to the question of whether the windfall of friendship grows monotonically with stronger social ties , i.e. , with larger friendship factors .intriguingly , this is not the case .[ thm : monotone ] for all networks with more than three players , there exist game instances where does not grow monotonically in .we give a counter example for the star graph which has one center player and leaf players .consider two friendship factors , and where .we show that for the large friendship factor , there exists a fne , , where only the center player and one leaf player remain insecure . for the same setting but with a small friendship factor , at least two leaf players will remain insecure , which will trigger the center player to inoculate , yielding a fne , , where only the center player is secure .consider first .let be the insecure center player , let be the insecure leaf player , and let be a secure leaf player .in order for to constitute a nash equilibrium , the following conditions must hold : for , let be the insecure center player , let be one of the two insecure leaf players , and let be a secure leaf player . 
in order for the leaf players to be happy with their situation but for the center player to prefer to inoculate , it must hold that : now choose ( note that due to our assumption that , ) . this yields the following conditions : , , and . these conditions are easily fulfilled , e.g. , with and . observe that the social cost of the first fne ( for ) is , whereas for the second fne ( for ) . thus , as we have chosen and as , due to our assumption , . this concludes the proof . reasoning about best and worst nash equilibria raises the question of how difficult it is to compute such equilibria . we can generalize the proof given in and show that computing the most economical and the most expensive fne is hard for any friendship factor . [ np - completeness ] computing the best and the worst pure fne is -complete for any friendship factor . ( _ sketch _ ) again , deciding the existence of a rfne with cost less than or more than is at least as hard as solving the _ vertex cover _ or _ independent dominating set _ problem , respectively . note that verifying whether a proposed solution is correct can be done in polynomial time , hence the problems are indeed in . the proof is similar to theorem [ np - completeness ] , and we only point out the difference for condition ( a ) : an insecure player in the attack component bears the cost , and changing its strategy reduces the cost by at least $\delta_{i} = k_i l / n + f |\gamma_{\overline{sec}}(p_i)| k_i l / ( |\gamma(p_i)| n ) - c - f |\gamma_{\overline{sec}}(p_i)| ( k_i - 1 ) l / ( |\gamma(p_i)| n ) = k_i l / n - c + f |\gamma_{\overline{sec}}(p_i)| l / ( |\gamma(p_i)| n )$ , and hence , it holds that , resulting in becoming secure . according to lemma [ ffneversmaller1 ] and lemma [ ffneversmaller1relative ] , the social context can only improve the overall welfare of the players , both in the absolute and the relative friendship model . however , there are implications beyond the players welfare in the equilibria : in social networks , the dynamics of how the equilibria are reached are different . in , aspnes et al .
have shown that best - response behavior quickly leads to some pure nash equilibrium , from any initial situation .their potential function argument however relies on a `` symmetry '' of the players in the sense insecure players in the same attack component have the same cost .this no longer holds in the social context where different players take into account their neighborhood : a player with four insecure neighbors is more likely to inoculate than a player with just one , secure neighbor .thus , the distinction between `` big '' and `` small '' components used in can not be applied , as different players require a different threshold .nevertheless , convergence can be shown in certain scenarios .for example , the hardness proofs of lemmas [ np - completeness ] and [ rnp - completeness ] imply that equilibria always exist in the corresponding areas of the parameter space , and it is easy to see that the equilibria are also reached by best - response sequences .similarly , in the star and complete networks , best - response sequences converge in linear time .linear convergence time also happens in more complex , cyclic graphs .for example , consider the cycle graph where each player is connected to one left and one right neighbor in a circular fashion .to prove best response convergence from arbitrary initial states , we distinguish between an initial phase where certain structural invariants are established , and a second phase where a potential function argument can be applied with respect to the view of only one type of players . each event when one player is given the chance to perform a best response is called a _ round_. [ thm : cycconv ] from any initial state and in the cycle graph , a best response round - robin sequence results in an equilibrium after changes , both in case of absolute and relative friendship equilibria .after two round - robin phases where each player is given the chance to make a best response twice ( at most changes or rounds ) , it holds that an insecure player which is adjacent to a secure player can not become secure : since preferred to be insecure at some time , the only reason to become secure again is the event that a player becomes insecure in s attack component at time ; however , since has a secure neighbor and hence can only have more insecure neighbors than , can not prefer a larger attack component than , which yields a contradiction to the assumption that becomes secure while its neighbor is still secure .moreover , by the same arguments , there can not be three consecutive secure players .therefore , in the best response rounds after the two initial phases , there are the following cases .case ( a ) : a secure player having two insecure neighbors becomes insecure ; case ( b ) : a secure player with one secure neighbor becomes insecure ; and case ( c ) : an insecure player with two insecure neighbors becomes secure . in order to prove convergence , the following potential function is used : where the attack components in contain more than players and the attack components in contain at most players in case of absolute friendship equilibria ; for relative friendship equilibria we use .in other words , the threshold to distinguish between small and big components is chosen with respect to players having _ two insecure neighbors _ ; in case of absolute fnes : and in case of relative fnes : note that it holds that .we now show that case ( a ) and ( c ) reduce by at least one unit in each best response . 
moreover , case ( b ) can increase the potential by at most one .however , since we have shown that case ( b ) incurs less than times , the claim follows by an amortization argument ._ case ( a ) : _ in this case , a new insecure player is added to an attack component in ._ case ( b ) : _ a new insecure player is added to an attack component in or to an attack component in ( since is `` on the edge '' of the attack component , it prefers a larger attack component ) . _case ( c ) : _ an insecure player is removed from an attack component in .the proof of theorem [ thm : cycconv ] can be adapted to show linear convergence in general 2-degree networks where players have degree at most two . in order to gain deeper insights into the convergence behavior, we conducted several experiments .this section briefly reports on the simulations conducted on kleinberg graphs ( using clustering exponent ) .although the existence of equilibria and the best - response convergence time complexity for general graphs remain an open question , during the thousands of experiments , we did not encounter a single instance which did not converge .moreover , our experiments indicate that the initial configuration ( i.e. , the set of secure and insecure players ) as well as the relationship of to typically has a negligible effect on the convergence time , and hence , unless stated otherwise , the following experiments assume an initially completely insecure network and and .all experiments are repeated 100 times over different kleinberg graphs .all our experiments showed a positive windfall of friendship that increases monotonically in , both for the relative and the absolute friendship model .figure [ fig : socialcost ] shows a typical result . maybe surprisingly , it turns out that the windfall of friendship is often not due to a higher fraction of secure players , but rather the fact that the secure players are located at strategically more beneficial locations ( see also figure [ fig : numsec ] ) .we can conclude that there is a windfall of friendship not only for the worst but also for `` average equilibria '' .[ ht ] + [ ht ] + the box plots in figure [ fig : boxplotcost ] give a more detailed picture of the cost for .the overall cost of pure ne is typically higher than the cost of rfne which is in turn higher than the cost of fne .[ ht ] + besides social cost , we are mainly interested in convergence times .we find that while the convergence time typically increases already for a small , the magnitude of plays a minor role .figure [ fig : convf ] shows the typical convergence times as a function of .notice that the convergence time more than doubles when changing from the selfish to the social model but is roughly constant for all values of .[ ht ] +this article presented a framework to study and quantify the effects of game - theoretic behavior in social networks .this framework allows us to formally describe and understand phenomena which are often well - known on an anecdotal level .for instance , we find that the windfall of friendship is always positive , and that players embedded in a social context may be subject to longer convergence times .moreover , interestingly , we find that the windfall of friendship does not always increase monotonically with stronger social ties .we believe that our work opens interesting directions for future research . 
we have focused on a virus inoculation game , and additional insightsmust be gained by studying alternative and more general games such as potential games , or games that do and do not exhibit a braess paradox .also the implications on the games dynamics need to be investigated in more detail , and it will be interesting to take into consideration behavioral models beyond equilibria ( e.g. , ) . finally , it may be interesting to study scenarios where players care not only about their friends but also , to a smaller extent , about friends of friends . what about practical implications ?one intuitive takeaway of our work is that in case of large benefits of social behavior , it may make sense to design distributed systems where neighboring players have good relationships .however , if the resulting convergence times are large and the price of the dynamics higher than the possible gains , such connections should be discouraged .our game - theoretic tools can be used to compute these benefits and convergence times , and may hence be helpful during the design phase of such a system .we would like to thank yishay mansour and boaz patt - shamir from tel aviv university and martina hllmann and burkhard monien from paderborn university for interesting discussions on relative friendship equilibria and aspects of convergence .t. moscibroda , s. schmid , and r. wattenhofer . .in _ proc .25th annual acm symposium on principles of distributed computing ( podc ) _ , 2006. also appeared in _journal internet mathematics ( i m ) , volume 6 , number 2 _ , 2009 .
this article investigates selfish behavior in games where players are embedded in a social context . a framework is presented which allows us to measure the _ windfall of friendship _ , i.e. , how much players benefit ( compared to purely selfish environments ) if they care about the welfare of their friends in the social network graph . as a case study , a virus inoculation game is examined . we analyze the corresponding nash equilibria and show that the windfall of friendship can never be negative . however , we find that if the valuation of a friend is independent of the total number of friends , the social welfare may not increase monotonically with the extent to which players care for each other ; intriguingly , in the corresponding scenario where the relative importance of a friend declines , the windfall is monotonic again . this article also studies convergence of best - response sequences . it turns out that in social networks , convergence times are typically higher and hence constitute a price of friendship . while such phenomena may be known on an anecdotal level , our framework allows us to quantify these effects analytically . our formal insights on the worst case equilibria are complemented by simulations shedding light onto the structure of other equilibria . game theory , social networks , equilibria , virus propagation , windfall of friendship
two polarization - entangled photons can be generated in a standard parametric down - conversion ( pdc ) experiment . the entanglement can be accessed because the photons can be distinguished by their momentum : a photon moving left and the one moving right . interestingly , if the two pdc photons go through polarizing beam splitters and the h - polarized one is sent to alice , whereas the v - polarized one is sent to bob , the two parties can still detect entanglement , but this time it occurs in the momentum degree of freedom . this phenomenon is known as duality in entanglement and does not occur in the case of distinguishable particles . duality in entanglement provides an interaction - free test of particle indistinguishability and can be observed in various physical implementations . the idea of the test relies on the fact that for distinguishable particles superselection rules ( ssr ) restrict the set of all the possible measurements . as a result , the entanglement can not be observed after the swap of labels . however , such a rule can be lifted by the introduction of a proper additional state , called a reference frame . in this work we investigate its impact on the duality of entanglement . it is well known that in the first quantization picture the symmetrization / anti - symmetrization of the wave function can sometimes be considered as entanglement which can not be operationally accessed . nevertheless , due to this fact fermionic and bosonic properties can be simulated with distinguishable particles if one prepares them in a proper symmetric or anti - symmetric state . such a preparation requires entanglement consumption , which in this case can be considered as a resource to simulate the identicality of particles . however , it is not obvious that every bosonic / fermionic property can be simulated only by symmetrization / anti - symmetrization . some properties may also be simulated with alternative resources . here , we ask what alternative resources contained in reference frames can be used to simulate duality in entanglement . in particular , we identify the minimal conditions needed for such a reference frame to enable distinguishable particles to exhibit the duality . the result pinpoints the aspects of indistinguishability captured by the entanglement duality . this highlights issues that should be considered while preparing the duality - based tests of identicity of particles . moreover , it contributes to a formulation of a resource theory of indistinguishability . we follow ref . and consider two pdc photons where denotes the operator that creates a photon with polarization and momentum . the above state is clearly entangled if we distinguish particles by their momentum ( for details see ) . for a system of distinguishable particles we use different creation operators the entanglement depends on the choice of indexing . as momentum is consistent with the type of the particle , the state or , but in order to detect entanglement one also needs to measure some other complementary states , say . however , the second measurement can only be done if . this follows from ssr which prohibit states superposing different particle types from being the eigenstates of quantum observables and the fact that is associated with the particle and with the particle . because of the distinguishability of particles , there is no duality in entanglement for this system .
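before moving to the first quantization picture , a short numerical sketch may help illustrate the duality described above for the bosonic case . the two - photon state , the qubit encodings and the function names below are illustrative assumptions rather than the exact state of the pdc experiment ; the point is only that the same symmetrized amplitude is entangled under either labelling , by momentum or by polarization .

```python
import numpy as np

# single-photon basis: index = 2*pol + mom, with pol in {H=0, V=1}, mom in {L=0, R=1}
H, V, Lm, Rm = 0, 1, 0, 1
def idx(pol, mom):
    return 2 * pol + mom

# assumed PDC-like state: one H photon moving left and one V photon moving right,
# plus the reverse term, written as a symmetrized two-particle amplitude psi[i, j]
psi = np.zeros((4, 4))
for (p1, m1, p2, m2) in [(H, Lm, V, Rm), (V, Lm, H, Rm)]:
    psi[idx(p1, m1), idx(p2, m2)] += 1.0
psi = psi + psi.T            # bosonic symmetrization
psi /= np.linalg.norm(psi)

def entanglement_entropy(block):
    """Entropy of the Schmidt spectrum of a 2x2 conditional amplitude block."""
    s = np.linalg.svd(block, compute_uv=False)
    s = s / np.linalg.norm(s)
    p = s ** 2
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

# labeling by momentum: left-moving photon vs right-moving photon;
# the remaining degree of freedom is polarization
block_mom = np.array([[psi[idx(p1, Lm), idx(p2, Rm)] for p2 in (H, V)] for p1 in (H, V)])
# labeling by polarization: the H photon vs the V photon;
# the remaining degree of freedom is momentum
block_pol = np.array([[psi[idx(H, m1), idx(V, m2)] for m2 in (Lm, Rm)] for m1 in (Lm, Rm)])

print("entanglement (labels = momentum):    ", entanglement_entropy(block_mom))
print("entanglement (labels = polarization):", entanglement_entropy(block_pol))
# both print 1.0 ebit -- the duality; for distinguishable particles only one labelling is available
```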
in the first quantization picturethe state ( [ eq : bose creation ] ) can be written as ) would be operationally hyper - entangled and it would pass the indistinguishability test based on duality in entanglement . in the second quantization picture it is of the form one can interpret this in the following way by adding entanglement to the system the state ( [ eq : bose creation2 ] ) becomes symmetric and behaves like a state of indistinguishable particles .now , we ask if the state ( [ eq : bose creation2 ] ) can pass the duality test without symmetrization .this is possible if the measurements are performed on an extended system . here, is the state of the original system and is the state of an ancilla , which is commonly known as a reference frame .to illustrate the idea we consider . this way the extended system is in the state , where and the tensor product denotes the fact that each copy of the system occupies a different mode . for convenience we set and and use the convention that the first state in the product is -polarized and the second -polarized .we stress that this is just a rewriting and the above state is still operationally mixed due to the underlying ssr , i.e. , observing the state is only possible if .however , if we add its copy we obtain means that alice has a state and bob has .the state ( [ 2copies ] ) consists of the operationally mixed and entangled parts .this is because is only possible if , but is allowed for any and ( up to the normalisation constraint ) . in order to verify the entanglement in the above state we consider the peres - horodecki criterion , i.e. , the negativity of the partially transposed density matrix .however , the matrix to which we apply this condition needs to be modified .because some coherences are unobservable due to the ssr , they need to be excluded from the effective density matrix ( we locally apply the so - called _ twirling _ operation ) the first four terms are diagonal , whereas the last two correspond to observable coherencies and are responsible for the entanglement . after applying a partial transpositionwe obtain this is a block diagonal matrix and the last block , corresponding to the last two terms , has eigenvalues .this confirms that the state is entangled .the example shows an idea of bypassing ssr with an additional subsystem .however , it is not clear that the observable entanglement in the effective state originates from the original subsystem , since the additional one is also entangled .we will resolve this issue in the next section .let us consider a reference frame in a werner state where , is the same as in ( [ simplestate ] ) and the identity is expressed in terms of states ( [ n1 ] ) and ( [ n2 ] ) the state ( [ sr ] ) is separable for .the total state of the system is coherencies in the first term are unobservable due to the ssr , therefore this part of the state is effectively diagonal .the second term is the same as in the example from the previous section . in order to confirm the entanglement in the above statewe apply once again the peres - horodecki criterion .the effective partially transposed density matrix is block diagonal and the only relevant block is a submatrix , which contains an off - diagonal term and is proportional to the one considered before its eigenvalues are and we see that the total state is entangled for any value of . 
in particular , if we can confirm entanglement in momentum and at the same time we know that the state of the reference frame ( [ sr ] ) is separable .therefore , the entanglement comes solely from the original state and the reference frame is only used to activate it .the above examples show that it is not necessary that the reference frame is entangled , but rather that it contains non - zero off - diagonal terms . in addition, the reference frame has two important additional features . just like in the case of particle number ssr , although the state is separable , it can not be prepared locally .this is because the local preparation would require a violation of local ssr .in addition , although particles and in the original system are different , preparation of a reference frame capable of activating the dual entanglement requires the same type of particles ( either bosons or fermions ) .if the reference frame consisted of other particles , say and , one would not be able to bypass the ssr .one may ask what type of physical property of the reference frame ( what type of a resource ) is responsible for the activation of the dual form of entanglement . to answer this question we first recall some previous results on nonclassical correlations in the presence of ssr .it was shown in that in the presence of a particle number ssr in order to violate bell inequality with the entanglement contained solely in the original system one needs a reference frame with a non - zero _ superselection - induced variance _ ( siv ) .siv is a resource that arises in bipartite systems which are subject to the particle number ssr .it corresponds to a local uncertainty of the particle number , despite the fact that the global particle number is fixed .it is defined as the variance of the local particle number where the factor of four is due to normalisation . for mixed statesone can introduce siv of formation , which is analogous to the entanglement of formation ( eof ) where represents a probability distribution over pure states that create the mixed state .however , contrary to eof , the minimization is not over all possible pure states , but only over those obeying ssr .in our case the ssr corresponds to a lack of a superposition between different particle types .this is equivalent to saying that every pure state needs to have a well defined global number of particles and .we recall the definitions ( [ n1 ] ) and ( [ n2 ] ) where was associated with particle and was associated with particle .the ability to activate dual entanglement comes from the off - diagonal terms in the reference frame .they originate from a superposition which is the only element of the state ( [ sr ] ) for which the local numbers of and are uncertain .one may therefore associate this uncertainty with the resource that is responsible for the activation of the dual entanglement .it can be measured by the siv of one type of particle , say particle , which we will denote as siv .following the result in we have if we require that is separable , then siv must be less than . to conclude , we argued that the variance of the local particle number ( siv ) is a resource which can activate dual entanglement in an entangled state of distinguishable particles .this shows that some properties of indistinguishable particles can be simulated with distinguishable ones without the need of state symmetrization / anti - symmetrization , provided one has an access to a properly engineered reference frame. 
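since the explicit density matrices were stripped from this version of the text , the following self - contained sketch illustrates the two computations the argument relies on : the peres - horodecki ( partial transposition ) test , applied here to a generic two - qubit werner state as a stand - in for the reference frame , and a local - particle - number variance in the spirit of siv for a simple two - branch superposition . the states , dimensions and thresholds are illustrative assumptions , not the exact states of ( [ sr ] ) , ( [ n1 ] ) and ( [ n2 ] ) .

```python
import numpy as np

def partial_transpose(rho, dims=(2, 2)):
    """Partial transpose over the second subsystem of a bipartite state."""
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

def is_ppt(rho, dims=(2, 2), tol=1e-10):
    """True if all eigenvalues of the partial transpose are non-negative
    (for two qubits, PPT is equivalent to separability)."""
    evals = np.linalg.eigvalsh(partial_transpose(rho, dims))
    return bool(evals.min() >= -tol)

# generic two-qubit Werner state: p * |phi+><phi+| + (1-p) * I/4  (illustrative stand-in)
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
proj = np.outer(phi_plus, phi_plus)
for p in (0.2, 1/3, 0.5, 0.9):
    rho = p * proj + (1 - p) * np.eye(4) / 4
    print(f"p = {p:.2f}: PPT (separable) = {is_ppt(rho)}")
# with this parametrisation the state stops being PPT just above p = 1/3

# a toy SIV-like quantity: variance of the local number of A-type particles for a
# two-branch superposition where only the "left" half is local -- an assumed example
local_nA_values = np.array([2, 0])              # local A-count in each branch
amplitudes = np.array([1, 1]) / np.sqrt(2)
probs = np.abs(amplitudes) ** 2
mean = (probs * local_nA_values).sum()
siv_like = 4 * ((probs * local_nA_values ** 2).sum() - mean ** 2)
print("local A-number variance (x4):", siv_like)
```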
it would be interesting to show that other features that are commonly considered to be typical bosonic or fermionic properties ( such as bunching or anti - bunching ) can also be simulated with siv or with some resource other than symmetrization / anti - symmetrization , which in most cases requires entanglement .
the entanglement between two bosons or fermions can be accessed if there exists an auxiliary degree of freedom which can be used to label and effectively distinguish the two particles . for some types of entanglement between two indistinguishable particles one can observe _ duality _ , i.e. , if the entanglement is present in the hilbert space and an auxiliary hilbert space is used to label the particles , then if we used as a label the entanglement would be present in . for distinguishable particles this effect does not occur because of superselection rules which prevent superpositions of different types of particles . however , it is known that superselection rules can be bypassed if one uses special auxiliary states that are known as reference frames . here we study properties of reference frames which allow for an observation of a duality in entanglement between two distinguishable particles . finally , we discuss the consequences of this result from the resource - theoretic point of view .
animals face a recurring alternative between continuing to forage in a patch or gambling on switching to a different patch with possibly better returns . optimal foraging theory purports that animal foraging choices have been shaped by natural selection and should maximize absolute fitness . similarly , optimal foraging theory considers that both human and nonhuman animals can take into account the foraging choices of their competitors while making their own choices .thus , interactions among competitors are increasingly important to understanding how real foraging choices can be shaped as animals compete for resources .competitive interactions are typically of two types : exploitative competition , when different animals consume common limited resources ( e.g. , two predators hunting the same prey ) ; and interference competition , when direct interactions such as territoriality negatively affect the foraging of other animals . yet , broad empirical facts on the link between optimal and real foraging choices are scarce due to the complication of gathering field data or constructing experiments .importantly , biological and socio - economic systems share many common features in terms of distributed resources and competition , and thus financial systems have provided a fruitful and intriguing setting to test biological theories of behavior because of their high quality quantifiable and dynamic behavioral data . as far as we know ,however , financial traders have not been examined from the perspective of foraging .day traders face the classical foraging trade - off of trading the same stock multiple times in a row patch exploitation or switching to a different stock patch exploration .for instance , each trader can trade multiple stocks within a class of stocks she has expertise in ( e.g. , technology stocks , banks stocks , transportation stocks , etc . ) and is faced with the foraging choice of buying and selling the same stock multiple times in a row ( e.g. buy a stock at a low price and vice versa for selling ) or switching their trading to a different stock where returns are potentially higher ( fig .1 ) . by analogy to foraging in a physical habitat where energy is invested in traveling and hunting , traders either exploit the returns related to one stock ( i.e. , a patch ) or explore a different patch while potentially experiencing cognitive costs for switching between patches .moreover , the returns in each patch are shaped by exploitative competition , where the foraging choices of other traders , even within a short period of time , can increase or decrease the quality and availability of resources as they choose to buy or sell their stocks .thus , if a trader is willing to buy and the majority of traders are also buying then the stock price increases , in turn , the trader s return will be reduced . 
in this paperwe investigated the extent to which professional traders exploration and exploitation choices can be explained by foraging heuristics that respond to short - term competition with other traders .additionally , we analyzed whether traders trading choices are associated with their net income intake .a significant relationship would mean a real correspondence between trading choices and absolute returns ; whereas a lack of relationship would suggest a maladaptive behavior for absolute market returns .we studied the second - by - second trading decisions of day traders at a typical small - to - medium sized trading firm from january 1 , 2007 to december 31 , 2008 .we recorded when a trader begins to trade a stock , how much he subsequently traded the same stock , and when he switched to explore a different stock . in our data, traders typically ( of the time ) made more than transactions , and more than switches , per day ( fig .these novel data cover more than 300 thousand trades made on approximately different stocks across a very wide range of sectors and on various exchanges , mostly from nyse , the blue chip " exchange , and nasdaq , the exchange known for high tech and volatile stocks .in particular , the stocks include high technology firms , diversified financials , shipping , natural resources , construction , chemicals , insurance , steel , etc .the top 5 stocks traded at the firm over our time period in terms of number of trades and volume were jp morgan chase & co. , mechel steel group oao , goldman sachs group , apple inc . , and potash corporation of saskatchewan inc .a typical small - to - medium day trading firm invests the money of the owners of the firm in stocks and hires traders to make the firm s investments .day traders make only intraday trades ; they typically do not hold inventories of stocks beyond a single day . rather , they enter and exit positions each day during normal trading hours of 9:30 am and 4:00 pm ( est ) . our day traders are point - and - clickers . "they make trades in real time 98% of the time ( the 1.2% of the trades done algorithmically were omitted and did not affect the results ) . though they sit in the same firm , day traders typically trade different stocks from each other and trade independently of each other .trading different stocks diversifies the firm s holdings , exploits specialized trading knowledge , and avoids accidentally trading against each other s positions .these dynamics mean that traders have little incentive to mimic each other s trades , information gathering behavior , or trading decisions .the firm was located in the us . our sample of day traders under study was 30 .this sample of 30 traders was the full number of traders for which there was complete data on all decisions and behaviors measured over our observation period . 
by contrast , the other traders at the firm ( ) all worked for truncated interludes or worked erratically , which made their measurement unbalanced and unsystematic , and vulnerable to selection and small sample size biases .all traders at the firm were men of an average age of 35 years old and a range between 22 and 50 years of age .they used the same technology to trade , had access to the same public information sources , and were subject to an equivalent incentive scheme .traders were paid a base salary plus commissions on trades .the firm did not share with us their commission formula .they did indicate that like typical firms , the commission was based on end of the day earnings over a range of time to remove as possible chance fluctuations . at the time of observation , our sample of traders traded about half of the stocks available on these exchanges on average .it is likely that the specific company stocks that were not traded were ones that lent themselves to holding long - term positions rather than trading on intraday shifts in price .all trading related data was automatically captured by the firm s trading system , which is specially designed for accuracy in recording , and used by most other firms in the industry .this automated and electronic capture system works unobtrusively to avoid interference with trading .the capture system fulfills us securities and exchange commission requirements that all trades be recorded and archived for up to 7 years .the net income data were calculated by the firm using standard industry metrics . in our study, we analyzed all the trades of all the stocks of all the traders in our sample .the study conforms to institutional review board ( irb ) criteria .there was no subject interaction , all data was 100% archival , and the firm and the subjects were anonymized .legally , all data used in the study is owned by the company .all traders at the firm know the firm owns the data and that their communications and trading behavior is recorded by law .we received written permission from the firm to use these data for research purposes and publishing contingent on identifying characteristics of the firm and its traders remaining confidential and anonymous .to measure the extent to which traders exploration and exploitation choices can be shaped by the foraging choices of their competitors , we introduced a novel measure that captures the difference between a trader s resource intake and competitors expected intake over a short period of time what we called short - term comparative return and tested whether foraging choices can be explained by traders trying to maximize their daily short - term comparative returns .the short - term comparative return associated with each transaction was calculated as the difference between actual traded prices decided on by each trader and the average prices in the market within a relevant time window . since the anticipation of and response to the actions of competitors can be manifested by acting before them or by waiting and acting after them , we followed theory and defined context limits according to the smallest time window ( 5 minutes ) where it has been shown that individual transactions can impact the returns of others in the market . 
for each trader and each of his transactions on day , we defined the short - term comparative return as , where is the traded price and and are the average and standard deviation of stock s price on day within a five - minute interval , and ( ) for selling ( buying ) transactions ( fig .the stock s average price and standard deviation are a mirror of the foraging choices of competitors , since prices move according to the stock s consumption or demand .these price statistics are computed using the wrds database , which has all the recorded transactions made around the world for each stock .thus and always indicate , respectively , a positive and negative short - term comparative return relative to the actions of competitors at that time . to test how well the time window captures the changing foraging choices and depletion of resources over a period of time , we calculated lagged and leading short - term comparative returns using the stock s average price and standard deviation within five - minute intervals 5 minutes before and respectively 5 minutes after the observed five - minute interval of each transaction .again , we used the wrds database to calculate these values . if the distribution of lagged short - term comparative returns is similar to the actual distribution of short - term comparative returns then it would suggest that the prices within the actual time window are , in fact , representative of the actions of others over a recent short period of time and not simply artifacts of the five - minute interval . usingwilcoxon signed rank test for testing paired and non - normally distributed distributions , we found that in 28 out of 30 traders the actual and lagged short - term comparative returns were significantly similar ( table 1 ) , which confirms that the actual time window is a reasonable context to use .additionally , we repeated the same analysis but with lags ( , ) greater than 1 hour and found in all traders the actual and lagged returns were significantly different ( table 1 ) , meaning that these prices are representative of the actions of others only in the short - term . to know whether traders short - term comparative returns are associated with their foraging choices , we divided the total number of transactions of each trader in day according to their exploration index , or trading patch , and their exploitation index , or position within the patch .figure 1 presents an illustrative example of how we divided the number of transactions .this example shows that a trader in a day had a total of 14 transactions ( green bars ) allocated in 4 different patches ( gray regions ) . regarding the exploration index ,the first two transactions were characterized by for , the next 4 transactions by for , the next 6 transactions by for , and the last 2 transactions by for .additionally , these transactions were characterized by their exploitation indices , , , , , , , , , , , , and . note that each time the trader visits a new patch , the exploitation index is reset to 1 . for each trader , we modeled short - term comparative returns as a function of the importance of exploration and exploitation using a multivariate regression model that takes the form .table 1 indicates that both exploration and exploitation are negatively associated with short - term comparative returns . in line with optimal foraging theory, these results reveal diminishing payoffs per resource , i.e. , daily comparative returns can decrease in proportion of the number of stocks exploited or explored . 
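as a concrete illustration of the quantities just defined , the sketch below computes the z - score style short - term comparative return for each transaction , assigns the exploration ( patch ) and exploitation ( position within the patch ) indices from the sequence of traded stocks , and fits the per - trader regression by ordinary least squares . the toy data , the sign convention for buys and sells and the function names are assumptions for illustration ; the actual inputs in the study come from the firm s records and the wrds database .

```python
import numpy as np

def comparative_return(price, mean_5min, std_5min, side):
    """Short-term comparative return: z-score of the traded price against the
    stock's mean/std in the surrounding five-minute window; side = +1 for a
    sell (higher price is better), -1 for a buy (assumed sign convention)."""
    return side * (price - mean_5min) / std_5min

def exploration_exploitation(stocks):
    """Exploration index E (which patch, counting patch visits) and
    exploitation index P (position within the current patch) per transaction."""
    E, P = [], []
    patch, pos, last = 0, 0, None
    for s in stocks:
        if s != last:          # switching stock opens a new patch
            patch += 1
            pos = 0
        pos += 1
        E.append(patch)
        P.append(pos)
        last = s
    return np.array(E), np.array(P)

# toy day of trading: (stock, traded price, 5-min mean, 5-min std, side)
trades = [("AAPL", 101.0, 100.5, 0.4, +1), ("AAPL", 100.2, 100.6, 0.4, -1),
          ("JPM",   40.1,  40.0, 0.2, +1), ("JPM",   39.8,  40.0, 0.2, +1),
          ("JPM",   40.3,  40.1, 0.2, -1), ("AAPL", 100.9, 100.8, 0.3, +1)]

R = np.array([comparative_return(p, m, s, side) for _, p, m, s, side in trades])
E, P = exploration_exploitation([t[0] for t in trades])

# per-trader OLS fit of R = b0 + be*E + bp*P
X = np.column_stack([np.ones_like(R), E, P])
beta, *_ = np.linalg.lstsq(X, R, rcond=None)
print("E:", E, "P:", P)
print("coefficients (b0, be, bp):", beta)
```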
to illustrate this point , we used and of one single trader , and assuming that in one particular day that trader made transactions exploring 65 different patches , the trader would have , where are the predicted short - term comparative returns from the regression model without considering , i.e. , this decline is relative to the trader s average returns over the same transactions .in contrast , if the trader would have explored one single patch , the total returns would have changed to .if one multiplies by say the average difference between traded price and average price in the market ( ) times the average volume of stocks per transaction ( ) in our data , translates into a _ _ r__elative loss compared to the trader s average performance over the same transactions of and .17 , respectively .note that this negative return is a relative measure of performance and should not be interpreted as the actual payoff . instead , it reflects the possibility of different expected outcomes .this resulting relative loss indicates that when foraging is compounded over many choices of exploitation and exploration , different activity patterns can impact the daily short - term comparative returns of traders .similarly , the relative loss can also be examined by quantifying the decline in generated by exploitation and exploration patterns separately when considering a constant number of transactions per patch .figure 3 shows that when considering exploration only , the lower the value of the higher the decline of ( dashed line ) ; and the opposite behavior is observed when considering exploitation only ( solid line ) .importantly , the relationship between exploitation and exploration patterns reveals that an optimal pattern for jointly maximizing traders exists , i.e. the intersection between the two curves . to test whether traders foraging choices respond to maximize their daily short - term comparative returns , we measured the extent to which the observed number of transactions per patch agreed with the optimal transactions per patch . to find , we used the equality of returns from the exploration and exploitation curves to describe the intersection point of the curves in order to then estimate the expected optimal number of transactions .mathematically , we calculated the value that maximizes given by , where is the mean number of total transaction of trader , and and are , respectively , the importance of exploration and exploitation taken from the multivariate regression model for each trader separately ( table 1 ) .thus , the expected optimal number of transaction per patch is the positive root of .interestingly , we found that exploration and exploitation choices can , in fact , be explained by traders trying to maximize their daily short - term comparative returns .we measured the deviation between the optimal and the distribution of actual values of using the normalized model error ( nme ) for each individual case . here, the nme was computed as the difference between and the observed median value of divided by the difference between the observed median value and the observed value of at the or quantiles , depending on whether the optimal value is lower or larger than the observed median value .the nme makes no particular assumption about the distribution of observed values .nme values between can be taken as cases where the optimal value is significantly similar to the observed values .we found only 3 cases with nme values greater than 1 ( fig .4 ) . 
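the optimisation and the error measure described above can be written down compactly . the return model below , in which the exploration and exploitation indices enter linearly with the fitted coefficients , is a reconstruction of the stripped formula and should be read as an assumption ; likewise the 2.5% / 97.5% quantile levels used in the nme are assumed , since the exact levels were lost in extraction .

```python
import numpy as np

def expected_return(n, T, be, bp):
    """Assumed daily comparative return when T transactions are split into
    patches of n trades each: exploration indices contribute be * sum(E) and
    exploitation indices bp * sum(P) (a reconstruction, not the paper's exact formula)."""
    n_patches = T / n
    sum_E = n * n_patches * (n_patches + 1) / 2.0   # each patch index repeated n times
    sum_P = n_patches * n * (n + 1) / 2.0           # positions 1..n within each patch
    return be * sum_E + bp * sum_P

def optimal_patch_size(T, be, bp):
    """Positive root of d(expected_return)/dn = 0 under the model above."""
    return np.sqrt(be * T / bp)     # be and bp are both negative, so the ratio is positive

def normalized_model_error(n_opt, observed):
    """NME: distance of n_opt from the observed median, scaled by the distance from
    the median to the 2.5% or 97.5% quantile (quantile levels are an assumption)."""
    med = np.median(observed)
    q = np.percentile(observed, 97.5 if n_opt > med else 2.5)
    return abs(n_opt - med) / abs(q - med)

# toy per-trader inputs: mean daily transactions and regression coefficients
T_mean, be, bp = 120.0, -0.004, -0.020
n_opt = optimal_patch_size(T_mean, be, bp)

# sanity check: the closed form should agree with a grid search over integer patch sizes
grid = np.arange(1.0, T_mean + 1.0)
n_grid = grid[np.argmax([expected_return(n, T_mean, be, bp) for n in grid])]

observed_patch_sizes = np.random.default_rng(0).poisson(5, size=500) + 1
print("optimal n*:", round(n_opt, 2), "grid argmax:", n_grid)
print("NME:", round(normalized_model_error(n_opt, observed_patch_sizes), 2))
```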
importantly , this number of cases falls within the number of rejections $ ] that one would expect with confidence from a binomial model .thus , one can not reject the hypothesis that this model is a good approximation to the observed exploration and exploitation choices of traders .broadly , our findings reveal that traders choices can be explained by foraging heuristics that maximize their daily short - term comparative returns . finally , to test whether traders choices are associated with their net income intake , we introduced two additional return metrics .the first metric , which we called actual relative return , provides information about the amount of money made by traders relative to the expected amount made by competitors .it is calculated similar to the short - term comparative returns measure except that it does not take into account the standard deviation ; and instead , it multiplies returns by the number of stocks sold or bought .the second metric , which we called net income intake , , is simply the amount made by traders ; it is does not compare it with competitors .figure 5a shows a significant and positive association between short - term comparative returns and actual relative returns , confirming that traders choices respond to short - term competition with other traders .in contrast , figure 5b shows no association between comparative returns and net income intake , revealing a significant deviation between traders short - term returns and their absolute returns .this suggest that traders potential focus on short - term competition may come at the cost of missing net income optimizing opportunities .optimal foraging theory has proven useful for understanding how the fitness and survivability of animals depends on the trade - off between effort expended and absolute resources gained .it has further been shown that human and nonhuman animals rarely make the core foraging trade - off independently : their foraging choices are influenced by the choices their competitors make .nonetheless , the study of the relationship between optimal and real foraging choices remains nascent .here we investigated whether the exploration and exploitation choices of day traders can be explained by short - term exploitative competition .traders foraging choices may be more abstract , stochastic , and rapid than foraging choices in physical environments , yet the same mechanisms may underpin the allocation of vast financial and material resources under competition .our study analyzed the investing choices made by a cohort of 30 day traders at one firm .by analogy to foraging in the physical world , these traders sought to find the most beneficial compromise between the costs and benefits of continued foraging within a patch ( i.e. , consecutively buying and selling of the same stock ) or switching to forage in a new patch ( i.e. 
, trading a different stock ) , where the returns to trading are affected by the foraging choices made by competitors .we measured traders short - term comparative returns as the marginal difference between their actual returns to trading a stock and the mean returns possible based on the competitors foraging choices in the market within a relevant period of time .we found that traders short - term comparative returns are subject to an important trade - off between exploration and exploitation .we could not reject the hypothesis that traders exploration and exploitation choices can be explained by traders following short - term choices that focus on maximizing their daily short - term comparative returns .while a complete determination of the drivers of these choices is beyond our analysis , one possible account for the observed behavior is that traders first visit the patch in which they do best , then next best , and so on .thus , traders may choose patches that descend in worth , assuring at least early success , while limiting exposure to unpredictable shifts in competition in a patch that might create losses for the trader .such trading choices , however , may be different under new algorithmic trading where price transactions are previously fixed .foraging animals appear to optimally decide what patch of resources will offer the best returns to their efforts and how long to stay in a patch before moving onto the next best patch .remarkably , our findings revealed that stock traders trading choices can be explained by similar foraging heuristics that respond to short - term competition with other traders .however , there were important differences too .we found no one - best relationship between different trading choices and net income intake , suggesting that traders choices can be short - term win oriented and , paradoxically , maybe maladaptive for absolute market returns .this implies that traders net income intake might be more strongly associated to global outcomes , social contagion , or sporadic big losses and wins .while the same problem is not true of animal foraging since the resources gained from each patch are also the net payoffs , it would be interesting to investigate whether maladaptive foraging behavior can arise under rapid changing environments . in financial settings , it remains to see the extent to which this deviation between short - term choices and net income intake can influence the instability of markets .we thank alex bentley , esteban freidin , cristin huepe , rudolf rohr , and michael schnabel for useful comments on a previous draft .funding was provided by the kellogg school of management , northwestern university , the northwestern university institute on complex systems ( nico ) , and the army research laboratory under cooperative agreement w911nf-09 - 2 - 0053 .nsf voss grant ( oci-0838564 ) .ss also thanks conacyt .moro , e. , vicente , j. , moyano , l. g. , gerig , a. , farmer , j. d. vaglica , g. , lillo , f. & mantegna , r. n. 2009 market impact and trading profile of hidden orders in stock markets .e _ * 80 * , e066102. rode , c. , cosmides , l. , hell , w. & tooby , j. 19999 when and why do people avoid unknown probabilities in decision under uncertainty ? testing some predictions from optimal foraging theory_ cognition _ * 72 * , 269 - 304 ..traders detailed information . 
for each trader, the table shows the total number of transactions made and the mean number of daily transactions over the observation period. the wilcoxon signed rank tests and are for lags of 5 mins and 1 hr, respectively. note that values of are considered statistically significant. the coefficients, and are taken from the multivariate regression model that takes the form. corresponds to standard errors. the correlation values and correspond, respectively, to the association of daily short - term comparative returns with actual relative returns and net income intake ( see text ). , and correspond, respectively, to statistical significance levels of 10, 5 and 1 percent. calculations are performed with the software stata.
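the return metrics and the regression are described above only in words, so the sketch below gives one schematic reading of those descriptions. it assumes per - transaction columns for the trader's return, the competitors' mean return and its standard deviation within the relevant time window, and the traded volume; the column names and the exact aggregation are assumptions, not the authors' computation ( the caption indicates the original analysis was done in stata ).

# illustrative sketch only: column names and normalization are assumptions.
# "ret" is the trader's return on a transaction, "mkt_ret" the mean return
# available to competitors in the same window, "mkt_sd" its standard deviation,
# and "volume" the number of stocks bought or sold.
import pandas as pd
from scipy import stats

def daily_metrics(df: pd.DataFrame) -> pd.DataFrame:
    """aggregate per-transaction quantities into daily metrics for one trader."""
    df = df.copy()
    # short-term comparative return: excess over competitors, scaled by their spread
    df["comparative"] = (df["ret"] - df["mkt_ret"]) / df["mkt_sd"]
    # actual relative return: same excess, unscaled, weighted by traded volume
    df["relative"] = (df["ret"] - df["mkt_ret"]) * df["volume"]
    # net income intake: the trader's own profit, with no comparison to competitors
    df["income"] = df["ret"] * df["volume"]
    return df.groupby("date")[["comparative", "relative", "income"]].sum()

def associations(daily: pd.DataFrame):
    """correlations of comparative returns with the two outcome metrics."""
    r_relative = stats.spearmanr(daily["comparative"], daily["relative"])
    r_income = stats.spearmanr(daily["comparative"], daily["income"])
    return r_relative, r_income

under this reading, a positive first correlation and a vanishing second one would reproduce the qualitative pattern reported in figures 5a and 5b.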
optimal foraging theory purports that animal foraging choices evolve to maximize returns, such as net energy intake. empirical research in both human and nonhuman animals reveals that individuals often attend to the foraging choices of their competitors while making their own foraging choices. due to the complications of gathering field data or constructing experiments, however, broad facts relating theoretically optimal and empirically realized foraging choices are only now emerging. here, we analyze foraging choices of a cohort of professional day traders who must choose between trading the same stock multiple times in a row ( patch exploitation ) or switching to a different stock ( patch exploration ) with potentially higher returns. we measure the difference between a trader's resource intake and the competitors expected intake within a short period of time, a difference we call short - term comparative returns. we find that traders choices can be explained by foraging heuristics that maximize their daily short - term comparative returns. however, we find no one - best relationship between different trading choices and net income intake. this suggests that traders choices can be short - term win oriented and, paradoxically, maybe maladaptive for absolute market returns.
one of the fundamental security services in modern computer systems is _ access control _ , a mechanism for constraining the interaction between ( authenticated ) users and protected resources .generally , access control is enforced by a trusted component ( historically known as the _ reference monitor _ ) , which typically implements two functions : an _ authorization enforcement function _ ( aef ) and an _ authorization decision function _ ( adf ) .the aef traps all attempts by a user to interact with a resource ( usually known as a _ user request _ ) and transforms that request into one or more _ authorization queries _ ( also known as _ authorization requests _ ) which are forwarded to the adf .most access control systems are policy - based .that is , an administrator specifies an authorization policy , which , in its simplest form , encodes those authorization requests that are authorized .the adf takes an authorization query and an authorization policy as input and returns an authorization decision .for this reason , it is common to refer to the aef and adf as the _ policy enforcement point _ ( pep ) and _ policy decision point _ ( pdp ) , respectively ; it is this terminology that we will use henceforth .an authorization policy is merely an encoding of the access control requirements of an application using the authorization language that is understood by the pdp .it is necessary , therefore , to make a distinction between an _ ideal policy _ and a _ realizable policy _ : the former is an arbitrary function from requests to decisions ; the latter is a function that can be evaluated by the pdp . given a particular policy language, there might be some ideal policies that are not realizable , which may be a limitation of the policy language in practice .the access control system used in early versions of unix , for example , is rather limited .an important consideration , therefore , when designing an access control system is the _ expressivity _ of the policy language .the increasing prevalence of open , distributed , computing environments means that we may not be able to rely on a centralized authentication function to identify authorized users .this means that authorization decisions have to be made on the basis of ( authenticated ) user attributes ( rather than user identities ) . in turn, this means that the structure of authorization queries needs to be rather more flexible than that used in closed , centralized environments .the draft xacml 3.0 standard , for example , uses a much `` looser '' query format than its predecessor xacml 2.0 .however , if we have no control over the attributes that are presented to the pdp , then a malicious user ( or a user who wishes to preserve the secrecy of some attributes ) may be able to generate authorization decisions that are more `` favorable '' by withholding attributes from the pep .a second important consideration , therefore , is whether authorization policies are guaranteed to be `` monotonic '' in the sense that providing fewer attributes in an authorization query yields a less favorable outcome ( from the requester s perspective ) .there is an extensive literature on languages for specifying authorization policies , most approaches proposing a new language or an extension of an existing one .the proliferation of languages led ferraiolo and atluri to raise the question in of whether a _ meta - model _ for access control was needed and possible to achieve , hinting at xacml and rbac as potential candidates . 
in response, barker proposed a meta - model , which sought to identify the key components required to specify access control policies , based on a term - rewriting evaluation . in this paper , we do not present `` yet another language '' for access control policies , nor do we claim to have a `` unifying meta - model '' .we focus instead on reasoning about the properties of a language .indeed , we advocate the idea that a language is just a tool for policy designers : just as some programming languages are better suited to particular applications , it seems unlikely that there exists a single access control model ( or meta - model ) that is ideal in all possible contexts . on the contrary , we believe that providing the structure to formally analyse a language might be valuable to a policy designer , in order to understand the suitability of a particular language as the basis for a specific access control system .we conclude this section by summarizing the structure and contributions of the paper . in sec .[ sec : framework ] we propose a general framework for access control , whose role is not to be used as an off - the - shelf language , but as a way to identify and reason about the key aspects of a language . in sec .[ sec : monotonicity ] we define monotonicity and completeness in the context of our framework . then in sec .[ sec : general - abac ] we define two attribute - based models , respectively monotonic and complete , by building on existing results from the literature on multi - valued and partial logic .the main body of the paper ends with discussions of related and future work .in this section we describe the various components of our framework and introduce our formal definition of access control models and policies . broadly speaking, we provide a generic method for designing access control models and for furnishing access control policies , which are written in the context of a model , with authorization semantics .we also introduce the notion of an ideal policy , which is an abstraction of the requirements of an organization , and relate this concept to that of an access control policy . from an external viewpoint ,an access control mechanism is a process that constrains the interactions between users and data objects .those interactions are modeled as access requests , with the mechanism incorporating two functions : one to determine whether a request is authorized or not and one to enforce that decision .the overall process must be total , in the sense that its behavior is defined for _ every _ possible interaction ( which may include some default behavior that is triggered when the decision function is unable to return a decision ) . in general , designing a particular access control mechanism for a particular set of requests is the final concrete objective of any access control framework ( although we are also clearly interested in expressing general properties of the framework ) .we define an access control mechanism using an _ access control policy _ , together with an _ interpretation function _ which provides the authorization semantics for a policy .intuitively , a policy is simply a syntactical object , built from _ atomic policies _ and _ policy connectives_. the interpretation function provides the denotational semantics of the policy , by returning a function from requests to decision , thus defining the expected behavior of the pdp . 
clearly, a policy can be interpreted in different ways, and an interpretation function can interpret different policies, as long as they are built from the same atomic policies and connectives. an _ access control model _ defines an _ access control language _, which consists of a set of atomic policies and policy connectives, and an interpretation function. in other words, an access control model specifies a set of access control policies and a unique way to interpret each of these policies. an _ access control mechanism _, then, is an instance of an access control model if its policy belongs to the language of the model and if its interpretation function is that of the model. in order to provide a framework within which policies can be constructed, we introduce the notion of access control model, which is a tuple, where is a set of _ requests _, a set of _ atomic authorization policies _, a set of _ policy connectives _, a set of ( authorization ) _ decisions _, and, for each, is a total function from to defining the _ evaluation _ of policy for all requests in. each -ary policy connective in is identified with a function. we construct an authorization policy using elements of and. we extend the evaluation function for atomic policies to arbitrary policies: that is, provides a method of evaluating requests with respect to a policy. we say that defines the _ authorization semantics _ of the model. the syntax by which policies are defined and the extension of the authorization semantics for atomic policies to non - atomic policies are fixed ( for all models ), as specified in definition [ def : policy - term ] below. nevertheless, different choices for, and give rise to very different models having very different properties. a _ policy term _ is defined by a ( rooted ) _ policy tree _, in which leaf nodes are _ atomic policies _ and each non - leaf node is a policy connective ( we may also use the term _ policy operator _ ). more formally we have the following definition: [ def : policy - term ] let be a model. then every atomic policy in is a _ policy term _. if are policy terms, then for each -ary operator, is a policy term. for each policy term, we define in other words, authorization policies are represented as policy trees and policies are evaluated from the bottom up by ( a ) evaluating atomic policies and ( b ) combining the decisions returned for atomic policies using the relevant policy connectives. we write to denote the set of policies that can be expressed within. given a set of queries and a set of decisions, an _ ideal access control policy _ is a total function. we say that an ideal policy is _ realizable _ by an access control model if, and only if, there exists a policy term such that for any query, ; in the interests of simplicity we will abuse notation and write and.
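to make the bottom - up evaluation of policy terms concrete, the following sketch represents atomic policies as leaves and connectives as functions over decisions, and evaluates a policy tree by a post - order traversal. this is only one possible reading of definition [ def : policy - term ]; the class and function names are illustrative, not part of the framework.

# minimal sketch of policy terms and their bottom-up evaluation; names are illustrative.
from dataclasses import dataclass
from typing import Any, Callable, Tuple, Union

@dataclass(frozen=True)
class Atomic:
    """leaf node: an atomic policy, evaluated directly on a request."""
    decide: Callable[[Any], str]          # request -> decision

@dataclass(frozen=True)
class Node:
    """non-leaf node: an n-ary policy connective applied to sub-policies."""
    op: Callable[..., str]                # decisions -> decision
    children: Tuple["Policy", ...]

Policy = Union[Atomic, Node]

def evaluate(p: Policy, request: Any) -> str:
    """post-order traversal: evaluate the leaves, then combine the returned
    decisions using the connective attached to each interior node."""
    if isinstance(p, Atomic):
        return p.decide(request)
    return p.op(*(evaluate(child, request) for child in p.children))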
figure [ fig : pol - tree ] shows two policy trees each having the same atomic policies , and .the figure also shows two evaluations of the tree for the same request , where and .the symbols , and denote allow , deny and inapplicable decisions , respectively .the policy trees are evaluated using a post - order traversal , in which each leaf node is assigned a value according to the semantics defined by and each interior node is assigned a value by combining the values assigned to its child nodes .the policies in figure [ fig : pol - tree ] make use of three operators taken from table [ tab : operators ] .both and are similar to the allow - overrides operator familiar from xacml ( and also the two conjunction operators from kleene s 3-valued logic ) and only differ in the way in which is combined with .the unary operator implements a deny - by - default rule , thus .[ h ] in general , an access control model does not specify any policy in particular ( unless the language is so restricted that it can only specify one policy ) .to some extent , an access control model ( in the sense in which we use the term in this paper ) is analogous to a programming language : it describes the syntax that is used to build access control policies ( analogous to programs ) and the semantics of the run - time mechanisms that will be used to handle input data ( access control requests in this context ) .a realizable policy is in this case analogous to a program written in the syntax of the model , that is interpreted using the authorization semantics of the model , while an ideal policy is analogous to the set of functional requirements .note that an ideal policy can be realized by different access control models : and with .in other words , different access control mechanisms may be able to enforce the same security requirements . and may be realizable by different policy terms from the same access control model : and with .in other words , security requirements can be enforced by the same mechanism using different policies .however , an ideal policy may not be realizable by any policy term for a given model ; the extent to which a model can realize the set of ideal policies provides us with a notion of the _ completeness _ of a model ( as we discuss in section [ sec : completeness ] ) .a model provides the global structure from which access control policies can be built .a simple example of a model is the protection matrix model , which can be viewed as a set of triples , where is a subject , an object and an access mode .a query is also a triple , and is authorized if , and only if , it belongs to the set representing the matrix .hence , we define the set of queries to be the set of all triples , the set of decisions , where stands for an authorized access and for a denied one , the set of atomic policies , the set of operators , where is the standard boolean disjunction , and the interpretation function to be : for instance , the policy authorizing only the accesses and can be defined as . models can also consider richer sets of queries .indeed , recent work considers the possibility that , in order to make a decision , an access control system might require more attributes than the traditional subject - object - action triple . 
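to illustrate, the protection matrix model described above can be written as an instance of the evaluator sketched in the previous listing: queries are subject - object - action triples, decisions are authorized / denied, atomic policies test a single triple, and the only connective is boolean disjunction. the concrete triples below are made up for the example.

# the protection-matrix model as an instance of the generic evaluator above.
# decisions: "1" = authorized, "0" = denied; the triples used are invented.
def triple_policy(triple):
    """atomic policy that authorizes exactly one (subject, object, action) triple."""
    return Atomic(decide=lambda q, t=triple: "1" if q == t else "0")

def boolean_or(*decisions):
    """standard boolean disjunction lifted to the decision set {"0", "1"}."""
    return "1" if "1" in decisions else "0"

# a matrix authorizing two accesses, e.g. (alice, file1, read) and (bob, file2, write)
matrix = Node(op=boolean_or,
              children=(triple_policy(("alice", "file1", "read")),
                        triple_policy(("bob", "file2", "write"))))

assert evaluate(matrix, ("alice", "file1", "read")) == "1"
assert evaluate(matrix, ("carol", "file1", "read")) == "0"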
in order to define requests and atomic policies it is necessary to identify sets of attributes and the values that each of those attributes may take .role - based access control , to take a simple example , defines the sets of roles , users and permissions , together with user - role and permission - role assignment relations .we now introduce the notions of attribute vocabulary and attribute - based access control , which are intended to be as general as possible and allow for the construction of requests and policies .[ def : abac ] let denote a set of attribute names , and denote a set of attribute domains .let be a function , where denotes the set of attribute values associated with attribute .then defines an _ attribute vocabulary_. when no confusion can occur , we will simply write to denote an attribute vocabulary .a request is modeled as a set of name - value pairs of the form , where .we denote the set of requests by , omitting when it is obvious from context .we say an attribute name - value pair is _ well - formed _ if and .we assume that a pdp can recognize ( and discard ) name - value pairs in a request that are not well - formed .attribute - based access control ( abac ) policies are modular .hence , a policy component may be incomplete or two policy components may return contradictory decisions .thus , it is common to see additional decisions used to denote a policy `` gap '' or `` doubt '' indicating different reasons why policy evaluation could not reach a conclusive ( allow or deny ) decision .we write , where indicates that is neither nor . in table [ tab : operators ] we summarize the characteristics of some useful 3-valued operators , most of which are self - explanatory .the operator acts as a policy filter : if , and evaluates to otherwise .the operator models policy unanimity : evaluates to a conclusive decision only if both and do . in sec .[ app : conflict - abac ] we describe a model with a 4-valued decision set .abac is designed for open distributed systems , meaning that authenticated attributes and policy components may need to be retrieved from multiple locations .thus , some languages assume that policy evaluation may fail : it may be , for example , that a policy server or policy information point is down .ptacl relies on a three - valued logic , and considers sets of decisions in order to model indeterminacy .xacml 3.0 considers a six - valued decision set , three of those decisions representing different indeterminate answers .an access control model provides a policy designer with a language to construct a policy . that language may well have an impact on the policies that can be expressed and the properties of those policies . 
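table [ tab : operators ] itself is not reproduced in this text, so the following sketch only approximates the operators it describes: kleene - style allow - overrides and deny - overrides combinations over the three decisions allow, deny and `` not applicable '', a unanimity operator that is conclusive only when both operands agree, and the unary deny - by - default operator mentioned earlier. the precise definitions in the paper may differ in how the inconclusive decision is treated.

# approximate three-valued policy connectives over {"allow", "deny", "na"};
# these mimic kleene-style combinations and are not the paper's exact table.
A, D, NA = "allow", "deny", "na"

def allow_overrides(d1, d2):
    """allow wins; otherwise deny wins over "not applicable"."""
    if A in (d1, d2):
        return A
    return D if D in (d1, d2) else NA

def deny_overrides(d1, d2):
    """deny wins; otherwise allow wins over "not applicable"."""
    if D in (d1, d2):
        return D
    return A if A in (d1, d2) else NA

def unanimity(d1, d2):
    """conclusive only if both operands agree on a conclusive decision."""
    return d1 if d1 == d2 and d1 != NA else NA

def deny_by_default(d):
    """unary operator mapping "not applicable" to deny."""
    return D if d == NA else d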
in this section we study two specific properties of access control models, monotonicity ( a kind of safety property ) and completeness ( an expressivity property ), and we present two models satisfying these properties in section [ sec : general - abac ]. informally, a policy is monotonic whenever removing information from a request does not lead to a `` better '' policy decision. such a property is of particular relevance in open systems, where users might be able to control what information they supply to the access control mechanism. a model in which all realizable policies are monotonic is not vulnerable to attribute hiding attacks. that is, a malicious user gains no advantage by suppressing information when making a request. we model information hiding using a partial ordering on ; the intuitive interpretation of is that contains less information than. for instance, an attribute query is less than another query when. we also need to specify what it means for a decision to `` benefit '' a user, and thus we assume the existence of an ordering relation on ; again, the intuitive interpretation of is that the decision is of greater benefit than. for instance, we can consider the ordering over, such that if and only if or. given a set of authorization queries and a set of decisions, a policy is _ monotonic _ if, and only if, for all, implies. we say that an access control model is _ monotonic _ if for all, is monotonic. note that our definition of a monotonic policy applies equally well to an ideal policy or a realizable policy term with authorization semantics. however, the notion of monotonicity is dependent on the request ordering. for instance, without further characterization, the request ordering for the access matrix could be reduced to equality, making any policy trivially monotonic. however, more complex situations can be considered by adding extra information, such as an ordering over subjects or objects. tschantz and krishnamurthi have shown that xacml 2.0 is not monotonic ( although they called the property `` safety '' rather than monotonicity ). we show in section [ sec : abacm ], provided certain restrictions are imposed on the structure of requests, that it is possible to develop a monotonic, attribute - based ( xacml - like ) access control model, using results from partial logic. given a model, any realizable policy clearly corresponds to an ideal policy. however, there may exist an ideal policy ( for and ) that does not belong to and cannot, therefore, be enforced by the policy decision point. trivially, for example, a model without any atomic policies does not realize any policies. it follows that the set of ideal policies that can be realized by a model represents an intuitive notion of expressivity. a model that can realize every ideal policy is said to be complete. more formally: an access control model is _ complete _ if, and only if, for any ideal policy,. the completeness of a model will depend on the authorization vocabulary, the definition of atomic policies, the set and. the access matrix model defined in section [ sec : examples ], for example, is complete. [ thm : am_complete ] the model is complete.
on the other hand, it is easy to show that xacml is not complete, unless we allow the inclusion of xacml conditions, which are arbitrary functions. indeed, consider two attributes and with two respective attribute values and: it is not possible to construct a policy that evaluates to to, intuitively because any target not applicable to cannot be applicable to. we propose an attribute - based access control model in section [ sec : abacc ] in which the representation of atomic policies can distinguish attribute name - value pairs, from which we can prove a completeness result. however, it is worth observing that, in general, if a model is both monotonic and complete, then the ordering over requests is limited to the identity relation, as illustrated above with the access matrix. [ thm : monotonicity - completeness ] given any model, if is complete and monotonic and if, then is the identity relation. informally, this result states that if we wish to have a ( non - trivial ) monotonic model then we cannot expect to have a complete model. instead, what we should aim for is a model that realizes at least all _ monotonic ideal policies _, and such a model is said to be _ monotonically - complete _. in section [ sec : general - abac ], we show how to define monotonically - complete and complete attribute - based access control models that have similar characteristics to xacml. it could be argued that the main objective of xacml is to provide a standard addressing as many practical concerns as possible, rather than a language with formal semantics. nevertheless, the design choices can and should be analyzed with respect to the properties they entail. we do not claim here that xacml _ should _ be monotonic, complete, or monotonically - complete, but we show instead how, building from existing logical results, one can instantiate an access control model with these properties. the results in this section can provide guidance to the designer of an access control system. she can choose, for example, between a system that realizes only and all monotonic policies, and a system in which all policies are realizable, but some may be non - monotonic. clearly, the choice depends on the demands of the application and the constraints of the underlying environment. while we cannot make this choice for the policy designer, our framework can at least help her make an informed decision. if the attribute vocabulary were countably infinite ( and the cardinality of the decision set is greater than ) then the number of ideal policies would be uncountably infinite ( by a standard diagonalization argument ). however, the number of realizable policies can, at best, be countably infinite, by construction. accordingly, it is only meaningful to consider completeness if we assume that the attribute vocabulary is finite ( but unbounded ). in practice, of course, all attribute values will be stored as variables and there will be an upper limit on the size of such variables, so the attribute vocabulary will be finite and bounded, albeit by a very large number.
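before turning to the concrete abac models, the monotonicity definition can be made operational for small, finite vocabularies: since requests are finite sets of name - value pairs ordered by subset inclusion, monotonicity of a given policy function can be checked by brute force. the sketch below does this; the decision ordering used ( deny and `` not applicable '' below allow ) is one plausible choice rather than the paper's exact ordering, and the enumeration is exponential, so this is only a testing aid, not part of the framework.

# brute-force monotonicity check over the subset ordering of requests.
# "policy" is any function from a frozenset of (name, value) pairs to a decision.
from itertools import chain, combinations

def all_requests(pairs):
    """every subset of a finite set of well-formed name-value pairs."""
    pairs = list(pairs)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(pairs, k) for k in range(len(pairs) + 1))]

def leq_decision(d1, d2):
    """assumed decision ordering: allow is maximal, deny and "na" are incomparable."""
    return d1 == d2 or d2 == "allow"

def is_monotonic(policy, pairs):
    """check: q1 <= q2 (as sets) implies policy(q1) <= policy(q2) (as decisions)."""
    reqs = all_requests(pairs)
    return all(leq_decision(policy(q1), policy(q2))
               for q1 in reqs for q2 in reqs if q1 <= q2)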
recall from definition [ def : abac ] that, given a vocabulary, we write to denote the set of requests. note that a request may contain ( well - formed ) pairs having the same attribute name and different values. one obvious example arises when is the `` role '' attribute name and is the identifier of a role. we define the set of atomic policies to be the set of well - formed name - value pairs. that is. then we define note that the above interpretation of atomic policies is by no means the only possibility. in the context of a three - valued decision set, we might return if and, if and otherwise. in the context of a four - valued decision set, we could return if, since such a request both matches and does not match the attribute value for attribute. we discuss these possibilities in more detail in sec. [ app : conflict - abac ]. the ordering on, denoted by, is simply subset inclusion. we define the ordering on, where if and only if or. it is worth observing that if a request contains at most one value for each attribute, then each atomic policy is monotonic. more formally, if we define the set of queries, we can prove the following proposition. [ thm : single_monotonicity ] for all requests such that and for all atomic policies, we have. we will see in the following section that we can define a complete abac model that accepts requests from, but we can no longer ensure monotonicity. we now define a monotonic and monotonically - complete attribute - based access control ( abac ) model, defined to be. it is not merely of academic interest because it incorporates a number of features that are similar to xacml. in particular, we can * construct targets from conjunctions and disjunctions of atomic policies ; * use the operators and to model deny - overrides and allow - overrides policy - combining algorithms ; * construct ( xacml ) rules, policies and policy sets using policies of the form, since if ( corresponding to `` matching '' a request to `` target '' and then evaluating policy ). the correspondence with xacml cannot be exact, given that xacml is not monotonic. the main difference lies in the way in which and handle the decision. the operators and are what crampton and huth called intersection operators, whereas the policy - combining algorithms in xacml are union operators. informally, an intersection operator requires both operands to have conclusive decisions, while a union operator ignores inconclusive decisions. thus, for example, , whereas the xacml deny - overrides algorithm would return given the same arguments. a practical consequence of the design goals of is that the decision will be returned more often than for analogous policies in xacml ( or other non - monotonic languages ). in practice, the policy enforcement point will have to either ask the requester to supply additional attributes in the request, or deny all requests that are not explicitly allowed. [ thm : abac - monotonically - complete ] is monotonic and monotonically complete. let us first observe that the operators, , , and, as defined in table [ tab : operators ], are monotonic with respect to. following proposition [ thm : single_monotonicity ], we know that atomic policies are monotonic, and by direct induction, we can conclude that any policy in is monotonic, and thus that is monotonic. now, let be a monotonic ideal policy. we show that there exists a policy such that realizes.
is a finite partially ordered set, so we may enumerate its elements using a topological sort. that is, we may write the policy as a case construction whose default branch evaluates to \oplus(d_1, \dots, d_k). clearly, given an operator defined over, if is monotonic according to, then is also monotonic with respect to. it follows that we can still safely use the operators generated by the operators, , and, and we can deduce that any realizable policy is also monotonic. however, we lose the result of monotonic completeness, and we can no longer ensure that any monotonic operator can be generated from this set of operators. obtaining such a result requires a deeper study of four - valued logic, and we leave it for future work. much of the work on specification of access control languages can be traced back to the early work of woo and lam, which considered the possibility that different policy components might evaluate to different authorization decisions. more recent work has considered larger sets of policy decisions or more complex policy operators ( or both ), and proposes a formal representation of the corresponding metamodel. the `` metamodels '' in the literature are really attempts to fix an authorization vocabulary, by identifying the sets and relations that will be used to define access control policies. in contrast, our framework makes very few assumptions about access control models and policies that are written in the context of a model. in this, our framework most closely resembles the work of tschantz and krishnamurthi, which considered a number of properties of a policy language, including determinism, totality, safety and monotonicity. the notion of a monotonic operator ( as defined by ) is somewhat different from ours. this is in part because a different ordering on the set of decisions is used and because monotonicity is concerned with the inclusion of sub - policies and the effect this has on policy evaluation. this contrasts with our approach, where we are concerned with whether the exclusion of information from a request can influence the decision returned. ( in fact, our concept of monotonicity is closer to the notion of safety defined in: if a request is `` lower '' than, then the decision returned for is `` lower '' than that of. ) we would express their notion of monotonicity in the following way: a policy operator is monotonic ( in the context of model ) if for all and all, if, then for any and any policy. moreover, our framework is concerned with arbitrary authorization vocabularies and queries, unlike that of tschantz and krishnamurthi, which focused on the standard subject - object - action request format. the only assumption we make is that all policies can be represented using a tree - like structure and that policy decisions can be computed from the values assigned to leaf nodes and the interpretation of the operators at each non - leaf node. in addition, we define the notion of _ completeness _ of a model, which is concerned with the expressivity of the policy operators. there exists prior work on comparing the expressive power of different access control models or the extent to which one model is able to simulate another.
in this paper, we show how our framework enables us to establish whether a model based on a particular set of atomic policies, decision set and policy connectives is complete. we can, therefore, compare the completeness of two different models by, for example, fixing an authorization vocabulary and comparing the completeness of models that differ in one or more of the model components ( that is, ones that differ in the set of connectives, decision sets, atomic policies and authorization semantics ). while this is similar in spirit to earlier work, this is not the primary aim of this paper, although it would certainly be a fertile area for future research. we have presented a generic framework for specifying access control systems, within which a large variety of access control models arise as special cases, and which allows us to reason about the global properties of such systems. a major strength of our approach is that we do not provide `` yet another access control language ''. the framework is not intended to provide an off - the - shelf policy language or pdp ( unlike xacml, for example ), nor is it intended to be an access control model ( in the style of rbac96, say ). rather, we try to model all aspects of an access control system at an abstract level and to provide a framework that can be instantiated in many different ways, depending on the choices made for request attributes, atomic policies, policy decisions and policy evaluation functions. in doing so we are able to identify and reason about general properties of such systems. there are many opportunities for future work. the notions of monotonicity and completeness are examples of general properties of an access control model that we can characterize formally within our framework. we have already noted that there are at least two alternative semantics for atomic policies having the form for a three - valued decision set and even more alternatives for a four - valued decision set. it would be interesting to see how these alternative semantics affect monotonicity and completeness. we would like to study the composition of access control models, and under what circumstances composition preserves monotonicity and completeness. further properties that are of interest include policy equivalence and policy ordering ( where, informally, one policy is `` more restrictive '' than if it denies every request that is denied by ), which may allow us to define what it means for a realizable policy to be `` optimal '' with respect to an ( unrealizable ) ideal policy. moreover, our definition of monotonicity is dependent on the ordering on the set of decisions. monotonicity, in the context of the ordering, for example, is a stronger property than the one we have considered in this paper. again, it would be interesting to investigate the appropriateness of different forms of monotonicity. furthermore, although xacml is proven not to be monotonic, it is not known under which conditions it can be monotonically - complete, and whether additional operators are needed to prove this property, which is also likely to depend on the decision orderings considered. in this paper, we have assumed that there exists an ideal policy and that such a policy is fixed. generally, however, a system evolves over time, and an access control policy will need to be updated to cope with changes to that system that affect the users, resources, or context.
thus it may be more realistic to specify an initial ideal policy, which might be extremely simple, and the access control policy that best approximates it, and then define rules by which the access control policy may evolve. with this in mind, it makes sense to regard the access control policy ( or components thereof ) as a protected object. security is then defined in terms of properties that `` reachable '' access control policies must satisfy. typical examples of such properties are `` liveness '' and `` safety ''. including administrative policies within our framework and investigating properties such as liveness and safety will be an important aspect of our future work in this area. j. crampton and c. morisset, `` ptacl : a language for attribute - based access control in open systems, '' in _ principles of security and trust - first international conference ( post 2012 ), proceedings _, ser. lecture notes in computer science, vol. 7215, 2012, pp. 390 - 409. a. griesmayer and c. morisset, `` automated certification of authorisation policy resistance, '' in _ esorics _, ser. lecture notes in computer science, j. crampton, s. jajodia, and k. mayes, eds., vol. 8134. springer, 2013, pp. 574 - 591. d. ferraiolo and v. atluri, `` a meta model for access control : why is it needed and is it even possible to achieve ? '' in _ proceedings of the 13th acm symposium on access control models and technologies _. acm, 2008, pp. 153 - 154. p. rao, d. lin, e. bertino, n. li, and j. lobo, `` an algebra for fine - grained integration of xacml policies, '' in _ sacmat _, b. carminati and j. joshi, eds. acm, 2009, pp. 63 - 72. m. c. tschantz and s. krishnamurthi, `` towards reasonability properties for access - control policy languages, '' in _ sacmat _, d. f. ferraiolo and i. ray, eds. acm, 2006, pp. 160 - 169. j. crampton and m. huth, `` an authorization framework resilient to policy evaluation failures, '' in _ esorics _, ser. lecture notes in computer science, d. gritzalis, b. preneel, and m. theoharidou, eds. springer, 2010, pp. 472 - 487. e. bertino, b. catania, e. ferrari, and p. perlasca, `` a logical framework for reasoning about access control models, '' _ acm transactions on information and system security _, vol. 6, no. 1, pp. 71 - 127, 2003. j. crampton and m. huth, `` a framework for the modular specification and orchestration of authorization policies, '' in _ nordsec _, ser. lecture notes in computer science, t. aura, k. jarvinen, and k. nyberg, eds., vol. 7127. springer, 2010. n. damianou, n. dulay, e. lupu, and m. sloman, `` the ponder policy specification language, '' in _ policy _, ser. lecture notes in computer science, m. sloman, j. lobo, and e. lupu, eds., vol. 1995. springer, 2001, pp. 18 - 38. n. li, q. wang, w. h. qardaji, e. bertino, p. rao, j. lobo, and d. lin, `` access control policy combining : theory meets practice, '' in _ sacmat _, b. carminati and j. joshi, eds. acm, 2009, pp. 135 - 144. q. ni, e. bertino, and j. lobo, `` d - algebra for composing access control policy decisions, '' in _ asiaccs _, w. li, w. susilo, u. k. tupakula, r. safavi - naini, and v. varadharajan, eds. acm, 2009, pp. 298 - 309. l. habib, m. jaume, and c.
morisset, `` a formal comparison of the bell & lapadula and rbac models, '' in _ ias _, m. rak, a. abraham, and v. casola, eds. ieee computer society, 2008, pp. s. osborn, r. sandhu, and q. munawer, `` configuring role - based access control to enforce mandatory and discretionary access control policies, '' _ acm transactions on information and system security _, vol. 3, no. 2, pp. 85 - 106, 2000.
there have been many proposals for access control models and authorization policy languages , which are used to inform the design of access control systems . most , if not all , of these proposals impose restrictions on the implementation of access control systems , thereby limiting the type of authorization requests that can be processed or the structure of the authorization policies that can be specified . in this paper , we develop a formal characterization of the features of an access control model that imposes few restrictions of this nature . our characterization is intended to be a generic framework for access control , from which we may derive access control models and reason about the properties of those models . in this paper , we consider the properties of monotonicity and completeness , the first being particularly important for attribute - based access control systems . xacml , an xml - based language and architecture for attribute - based access control , is neither monotonic nor complete . using our framework , we define attribute - based access control models , in the style of xacml , that are , respectively , monotonic and complete .
consider the incidence of a time - harmonic acoustic wave onto a bounded, penetrable, and isotropic elastic solid, which is immersed in a homogeneous and compressible air or fluid. due to the interaction between the incident wave and the solid obstacle, an elastic wave is excited inside the solid region, while the acoustic incident wave is scattered in the air / fluid region. this scattering phenomenon leads to an air / fluid - solid interaction problem. the surface of the elastic solid divides the whole three - dimensional space into a bounded interior domain and an open exterior domain, where the elastic wave and the acoustic wave occupy, respectively. the two waves are coupled together on the surface via the interface conditions: continuity of the normal component of velocity and continuity of traction. the acoustic - elastic interaction problems have received ever - increasing attention due to their significant applications in geophysics and seismology. these problems have been examined mathematically by using either the variational method or the boundary integral equation method. many computational approaches have also been developed to numerically solve these problems, such as the boundary element method and the coupling of finite and boundary element methods. since the work by berenger, the perfectly matched layer ( pml ) technique has been extensively studied and widely used to simulate various wave propagation problems, which include acoustic waves, elastic waves, and electromagnetic waves. the idea of the pml is to surround the domain of interest by a layer of fictitious material of finite thickness which absorbs all the waves coming from inside the computational domain. it has been proven to be an effective approach to truncate open domains in wave computation. combined with the pml technique, the adaptive finite element method ( fem ) has recently been developed to solve the diffraction grating problems and the obstacle scattering problems. despite the large amount of work done so far, these studies were concerned with a single wave propagation problem, i.e., either an acoustic wave, or an elastic wave, or an electromagnetic wave. it is very rare to study rigorously the pml problem for the interaction of multiple waves. this paper aims to investigate the adaptive finite element pml method for solving the acoustic - elastic interaction problem. an exact transparent boundary condition ( tbc ) is developed to reduce the problem equivalently into a boundary value problem in a bounded domain. the pml technique is adopted to truncate the unbounded physical domain into a bounded computational domain. the variational approach is taken to incorporate naturally the interface conditions which couple the two waves. the well - posedness and exponential convergence of the solution are established for the truncated pml problem by using a pml equivalent tbc. the proofs rely on the error estimate between the two transparent boundary operators. to efficiently resolve the solution with possible singularities, the a posteriori error estimate based adaptive fem is developed to solve the truncated pml problem. the error estimate consists of the pml error and the finite element discretization error, and provides a theoretical basis for the mesh refinement. numerical experiments are reported to show the competitive behavior of the proposed method. the paper is organized as follows. in section 2, we introduce the model equations for the acoustic - elastic interaction problem.
in section 3, we present the pml formulation and prove the well - posedness and convergence of the solution for the truncated pml problem . in section 4 , we discuss the numerical implementation and show some numerical experiments . the paper is concluded with some general remarks in section 5 .in this section , we introduce the model equations for acoustic and elastic waves , and present an interface problem for the acoustic - elastic interaction . in addition , an exact transparent boundary condition is introduced to reformulate the scattering problem into an boundary value problem in an bounded domain .consider an acoustic plane wave incident on a bounded elastic solid which is immersed in a homogeneous compressible air / fluid in three dimensions .the problem geometry is shown in figure [ fig : geo ] . due to the wave interaction ,an elastic wave is induced inside the solid region , while the scattered acoustic wave is generated in the open air / fluid region .the wave propagation described above leads to an air / fluid - solid interaction problem .the surface of the solid divides the whole three - dimensional space into the interior domain and the exterior domain , where the elastic wave and the acoustic wave occupies , respectively .let the solid be a bounded domain with a lipschitz boundary .the exterior domain is assumed to be connected and filled with a homogeneous , compressible , and inviscid air / fluid with a constant density .denote by , where are sufficiently large such that .define .let be the unit normal vector on directed from into , and let be the unit outward normal vector on .let the elastic solid be impinged by a time - harmonic sound wave , which satisfies the three - dimensional helmholtz equation : where is the wavenumber , is the angular frequency , and is the speed of sound in the air / fluid .the total acoustic wave field also satisfies the helmholtz equation : the total field consists of the incident field and the scattered field : where scattered field is required to satisfy the sommerfeld radiation condition : the time - harmonic elastic wave satisfies the three - dimensional navier equation : where is the displacement of the elastic wave , and the stress tensor is given by the generalized hook law : here are the lam parameters satisfying , and is the displacement gradient tensor given by substituting into yields to couple the acoustic wave equation and the elastic wave equation , the kinematic interface condition is imposed to ensure the continuity of the normal component of the velocity : in addition , the dynamic interface condition is required to ensure the continuity of traction : where denotes the matrix - vector multiplication .the acoustic - elastic interaction problem can be formulated into the following coupled boundary value problem : given , to find such that we refer to for the discussion on the well - posedness of the boundary value problem . from now on, we assume that the acoustic - elastic interaction problem has a unique solution .given , we define the dirichlet - to - neumann ( dtn ) operator as follows : where is the solution of the exterior dirichlet problem of the helmholtz equation : it is well - known that the exterior problem has a unique solution ( cf . , e.g. , ) .thus the dtn operator is well - defined and is a bounded linear operator .using the dtn operator , we reformulate the boundary value problem from the open domain into the bounded domain : given , to find such that where . 
to study the well - posedness of , we define which is endowed with the inner product : for any and , where is the frobenius inner product of square matrices and . clearly , is a norm on .let be the sesquilinear form : the acoustic - elastic interaction problem is equivalent to the following weak formulation : find such that since we assume that the variational problem has a unique weak solution , the general theory in babuka and aziz ( * ? ? ?5 ) implies that there exists a constant such that the following inf - sup condition is satisfied this section , we introduce the pml formulation for the acoustic - elastic interaction problem and establish its well - posedness .an error estimate will be shown for the solutions between the original scattering problem and the pml problem .now we turn to the introduction of an absorbing pml layer .as is shown in figure [ fig : geo1 ] , the domain is surrounded by a pml layer of thickness which is denoted as .define .let be the pml function which is continuous and satisfies here is a constant and is an integer .following , we introduce the pml by the complex coordinate stretching : let . introduce the new function : it is clear to note that in since in . it can be verified from and that satisfies where the pml differential operator is defined by where it can be verified from and that the outgoing wave in decays exponentially .therefore , the homogeneous dirichlet boundary condition can be imposed on to truncate the pml problem .we arrive at the following truncated pml problem : find such that where define which is endowed with the inner product for any and .obviously , is a norm on .the weak formulation of the truncated pml problem reads as follows : find such that on and where , and the sesquilinear form is defined by we will reformulate the variational problem imposed in the domain into an equivalent variational formulation in the domain , and discuss the existence and uniqueness of the weak solution to the equivalent weak formulation .to do so , we need to introduce the transparent boundary condition for the truncated pml problem .we start by introducing the approximate dtn operator associated with the pml problem .given , let on , where is the solution of the following boundary value problem in the pml layer : the pml problem can be reduced to the following boundary value problem : find such that where .the weak formulation of is to find such that where the sesquilinear form is defined by the following lemma establishes the relationship between the variational problem and the weak formulation .the proof is straightforward based on our constructions of the transparent boundary conditions for the pml problem .the details of the proof is omitted for simplicity .any solution of the variational problem restricted to is a solution of the variational ; conversely , any solution of the variational problem can be uniquely extended to the whole domain to be a solution of the variational problem in .now we turn to estimating the error between and .the key is to estimate the error of the boundary operators and . [ boe ] for any , there exists a constant such that where and is a sufficiently large constant such that .the proof can follow similar arguments as that in ( * ? ? ? 
* theorem 3.8 ) .for the sake of simplicity , we do not elaborate on the details here .let be the constant in the inf - sup condition .if then the pml variational problem has a unique weak solution , which satisfies the error estimate where is the unique weak solution of the variational problem .it suffices to show the coercivity of the sesquilinear form defined in in order to prove the unique solvability of the weak problem . using lemma [ boe ] , and the assumption , we get for any in that it remains to show the error estimate .it follows from that which completes the proof upon using lemma [ boe ] and the trace theorem .in this section we introduce the finite element approximations of the pml problem .let be a regular tetrahedral partition of the domain and are also regular tetrahedral partitions of and , respectively .let and be the conforming linear finite element space over and , respectively , and the finite element approximation to the pml problem reads as follows : find such that on and for any , let be its extension in such that introduce the sesquilinear form as follows : the weak formulation for is : given , find such that on , on , and in this paper we will not elaborate on the well - posedness of and simply make the following assumption : there exists a unique solution to the boundary value problem in the pml layer .in order to obtain a constant independent of pml parameter in the inf - sup condition , we define by using the general theory in ( * ? ? ? * chap .5 ) , we know that there exists a constant such that the constant depends on the domain and the wave number . for any , which is extended to be a function according to . then there exists a constant independent of and such that where is the unit outward normal vector on . for any such that on and on . by the inf - sup condition in and using , we know that by cauchy schwarz inequality noting using the triangle inequality and the trace inequality , we conclude that which shows the first estimate in the theorem by using the definition of . next , for any such that on , using and the integration by parts ,we obtain it follows from the cauchy schwarz inequality and that which completes the proof after using the trace inequality . for any , which is extended to be a function according to , and , we have first by , , , and , we have using yields recalling that is the unit outer normal to which points outside and is the unit outer normal vector on directed outside , we deduce that where we have used , the definition of , and the identity ( c.f . , ( * ? ? ?* lemma 5.1 ) ) by , , and , which completes the proof . for any ,we denote by its diameter .let denote the set of all sides that do not lie on .for any , stands for its length . 
for any, we introduce the residual: for any interior side not lying on the interface which is the common side of, we define the jump residual across: where we have used the notation that the unit normal vector on points from to. if lies on the interface, then we define the jump residual as for any, we define the local error estimator as [ thmp ] there exists a constant depending only on and the minimum angle of the mesh such that the following a posteriori error estimate holds. let and be scott zhang interpolation operators satisfying the following interpolation estimates: for any and, and where and are the union of all elements in having a non - empty intersection with and the side, respectively. taking and in the error representation formula, we get it follows from the integration by parts and that by and the estimate, we have by lemma [ boe ], we have it follows from that the proof is completed by using the above estimates in and the inf - sup condition. according to the discussion in section 4, we choose the pml medium property as the power function and need to specify the thickness of the layers and the medium parameter. it is clear to note from theorem [ thmp ] that the a posteriori error estimate consists of two parts: the pml error and the finite element discretization error, where in our implementation, we first choose and such that, which makes the pml error negligible compared with the finite element discretization error. once the pml region and the medium property are fixed, we use the standard finite element adaptive strategy to modify the mesh according to the a posteriori error estimate. for any, we define the local a posteriori error estimator the adaptive fem algorithm is summarized in table [ alg ]; a schematic version of the loop is sketched after example 1 below. in the following, we present two examples to demonstrate the competitive numerical performance of the proposed algorithm. the first - order linear element is used for solving the problem. our implementation is based on parallel hierarchical grid ( phg ), which is a toolbox for developing parallel adaptive finite element programs on unstructured tetrahedral meshes. the linear system resulting from the finite element discretization is solved by the pcg solver. * example 1. * we consider a problem with an exact solution. we set the elastic region and the acoustic region, where denotes the ball with radius and centered at the origin. let where. the parameters are chosen as , , , , and such that. first, it is easy to verify that when and are constants, the navier equation reduces to using and, we have from a straightforward calculation that which shows that satisfies in. it can be verified that the interface conditions are also satisfied by letting. let and consider the following acoustic - elastic interaction problem with the dirichlet boundary condition: we may test the adaptive fem algorithm by solving the above boundary value problem. figure [ ex1:err ] displays the errors of and against the number of nodal points in and in, respectively. it clearly shows that the adaptive fem yields quasi - optimal convergence rates, i.e., and where and are the a posteriori error estimators for and, respectively. figure [ ex1:meshp ] plots the adaptive mesh of for solving and figure [ ex1:sp ] plots the mesh on a cross section of the domain on the -plane. figure [ ex1:meshu ] plots the adaptive mesh of for solving and figure [ ex1:su ] plots the mesh on the cross section of the domain on the -plane.
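the adaptive strategy of table [ alg ] is the usual solve - estimate - mark - refine loop driven by the local error indicators, with the pml medium chosen as a power function so that the pml error stays negligible. the sketch below records that structure only; the mesh, solver, local estimator and refinement routines are placeholders, not the phg - based implementation used here, and the maximum marking strategy is one common choice rather than the authors' exact rule.

# schematic adaptive fem loop (solve - estimate - mark - refine); the mesh and solver
# objects are placeholders, not the phg-based implementation of the paper.

def sigma(t, sigma0, delta, m):
    """power-function pml profile on [0, delta]: vanishes at the interface, grows like t**m."""
    return sigma0 * (t / delta) ** m

def adaptive_solve(mesh, solve, estimate_local, refine, tol, theta=0.5, max_iter=30):
    for _ in range(max_iter):
        p_h, u_h = solve(mesh)                    # discrete acoustic and elastic fields
        eta = {K: estimate_local(K, p_h, u_h) for K in mesh.elements()}
        total = sum(e ** 2 for e in eta.values()) ** 0.5
        if total < tol:
            break
        # maximum strategy: refine elements whose indicator is close to the largest one
        threshold = theta * max(eta.values())
        marked = [K for K, e in eta.items() if e >= threshold]
        mesh = refine(mesh, marked)
    return mesh, p_h, u_h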
( figure captions for example 1 : - error estimates and the a posteriori error estimates ; adaptive meshes and their cross sections on the -plane. ) * example 2. * this example concerns the scattering of the incident plane wave. the dirichlet boundary condition on the pml layer outer boundary is set by. we choose , , , , and. let the elastic region and the acoustic region be and, respectively. here, and \times[-0.6,0.6]\times[-0.6,0.6]. the pml domain is, i.e., the thickness of the pml layer is 0.4 in each direction. in this example, the elastic solid is a rectangular box with a small rectangular dent on the surface. the solutions of and may have singularities around the corners of the dent. we choose and for the medium property to ensure the pml error is negligible compared to the finite element error. for this example, we set the numerical solution on the very fine mesh to be a reference solution since there is no analytic solution. figure [ ex2:err ] shows the errors of and against the number of nodal points and. it is clear to note that the fem algorithm yields a quasi - optimal convergence rate. the surface plots of the amplitude of the fields are shown as follows: figure [ ex2:p ] shows the real part of for the cross section in on the -plane and figure [ ex2:u ] shows the real part of for the cross section in on the -plane. ( figure captions for example 2 : - error estimates and the a posteriori error estimates ; real parts of the fields on the cross sections on the -plane. ) we have studied a variational formulation for the acoustic - elastic interaction problem in and adopted the pml to truncate the unbounded physical domain. the scattering problem is reduced to a boundary value problem by using transparent boundary conditions. we prove that the truncated pml problem has a unique weak solution which converges exponentially to the solution of the original problem by increasing the pml parameters. we incorporate the adaptive mesh refinement with the a posteriori error estimate for the finite element method to handle the problem where the solution may have singularities. numerical results show that the proposed method is effective to solve the acoustic - elastic interaction problem. i. babuska and a. aziz, survey lectures on mathematical foundations of the finite element method, in the mathematical foundations of the finite element method with application to the partial differential equations, ed. by a. aziz, academic press, new york, 1973, 5359. f. d. hastings, j. b. schneider, and s. l. broschat, application of the perfectly matched layer ( pml ) absorbing boundary condition to elastic wave propagation, j. acoust. soc. am., 100 ( 1996 ), 3061 - 3069. g. c. hsiao, on the boundary - field equation methods for fluid - structure interactions, in problems and methods in mathematical physics ( chemnitz, 1993 ), vol. 134, teubner - texte math., 79 - 88, teubner, stuttgart, 1994. g. c. hsiao, r. e. kleinman, and l. s. schuetz, on variational formulations of boundary value problems for fluid - solid interactions, in elastic wave propagation ( galway, 1988 ), vol. 35, north - holland ser. mech., 321 - 326, north - holland, amsterdam, 1989.
consider the scattering of a time - harmonic acoustic incident wave by a bounded , penetrable , and isotropic elastic solid , which is immersed in a homogeneous compressible air or fluid . the paper concerns the numerical solution for such an acoustic - elastic interaction problem in three dimensions . an exact transparent boundary condition ( tbc ) is developed to reduce the problem equivalently into a boundary value problem in a bounded domain . the perfectly matched layer ( pml ) technique is adopted to truncate the unbounded physical domain into a bounded computational domain . the well - posedness and exponential convergence of the solution are established for the truncated pml problem by using a pml equivalent tbc . an a posteriori error estimate based adaptive finite element method is developed to solve the scattering problem . numerical experiments are included to demonstrate the competitive behavior of the proposed method .
the central result in this paper is the development of a new generic procedure for simulation of conditioned diffusions , also called diffusion bridges or pinned diffusions .more specifically , for some given diffusion process , we aim to simulate the functional ,\ ] ] where , is some set that may consist of only one point , is an arbitrarily given suitable test function and is a given state . in recent years , the problem of computing terms such as ( [ cp ] ) has attracted a lot of attention in the literature , sparked by several applications .indeed , many relevant properties of a diffusion process can be advantageously analyzed by considering the process conditioned on certain appropriate events .one so allows `` to study rare events by conditioning on the event happening or to analyze the behaviour of a composite system when only some of its components can be observed , '' as is eloquently put by .for instance , in statistical inference based on a continuous time model , discrete time observations can be enriched to continuous time observations by sampling from the diffusion bridges between the discrete time data ; see and for more information .conditional diffusions have further been successfully used for critical calculations in rare event situations . as an example from computational chemistry, we refer to the review paper of , where diffusion bridges are used for detection of the transition state surface between two stable regions and in configuration space . here, standard monte carlo simulation is prohibitively costly , as the event of such a transition is rare , provided that the `` walls '' in the energy surface between and are high .however , by studying the process conditioned on starting in and ending in , one can efficiently observe on which paths the configuration typically travels from to .other possible applications appear in the field of stochastic environmental models , for instance , regarding the concentration evolutions of pollution in water ; for example see and references therein for a related problem .several approaches for simulation of diffusion bridges have already been studied in the literature . for the theory of diffusion bridges we refer to and the references therein .many existing approaches utilize known radon nikodym densities of the law of the diffusion conditioned on initial and terminal values , with respect to the law of a standard diffusion bridge process ( e.g. , wiener bridge ) on path - space [ as a radon nikodym derivative obtained by doob s h - transform ; see , e.g. , or ] .several other approaches are based on ( partial ) knowledge of the transition densities of the unconditional diffusion ( that is not generically available , of course ) . 
for an overview of many different techniques ,we refer to .first , let us mention the work by who construct a general , rejection - based algorithm for solutions of _ one - dimensional _ sdes , based on the radon nikodym derivative of the law of the solution with respect to the wiener measure .the algorithm gives ( in finite , but random time ) discrete samples of the exact solution of the sde .a simple adaption of this algorithm gives samples of the exact diffusion process conditioned on , by using the law of the corresponding brownian bridge as reference measure ( instead of the wiener measure ) .an overview of related importance sampling techniques is given by .on the other hand , by relying on knowledge of the transition densities of , use a sequential weighted monte carlo framework , including resampling with optimal priority scores . another general technique used for simulation of diffusion bridgesis the markov chain monte carlo method .indeed , and show how the law of a ( multi - dimensional , uniformly elliptic , additive - noise ) diffusion conditioned on can be regarded as the invariant distribution of a stochastic differential equation of langevin type on path - space , that is , of a langevin - type stochastic partial differential equation ( spde ) .thus , in principle mcmc methods are applicable as explored by and . however , this requires the numerical solution of the spde involved .it should be noted that in the uniform ellipticity condition is relaxed leading to a fourth order parabolic spde rather than a second order one .other notable approaches include those of , which treat the case of physically relevant functionals of wiener integrals with respect to brownian bridges , and , who uses an mcmc approach based on successive modifications of the drift of the diffusion process .another approach is the one of developed for one - dimensional diffusions . in order to obtain a sample from the process conditioned on and , start a path of the diffusion from and another path of the diffusion in _ reversed time _ at . if these paths hit at time , consider the concatenated path .the distribution of the process ( conditional on ) equals the distribution of the bridge conditional on being hit by an independent path of the underlying diffusion with initial distribution .as proved by , the probability of this event approaches when .finally , in order to improve the accuracy , is used as initial value of an mcmc algorithm on path space , converging to a sample from the true diffusion bridge .a more general approach is given by which relies on the explicit radon nikodym derivative of the diffusion conditioned on its initial and terminal values and another diffusion , which is modeled like the brownian bridge .in fact , has the same dynamics as , except for an extra term in the drift , which enforces . under certain regularity conditions in particular invertibility of the diffusion matrix provide a girsanov - type theorem , which leads to a representation of the form = \mathbb{e } \bigl [ g(y ) z(y ) \bigr]\ ] ] for functionals defined on path - space and a factor explicitly given as a functional of the path together with quadratic variations of functions of .as such this approach allows for direct monte carlo simulation of ( [ cp ] ) . however , we stress that explicitly depends on which does not exist in many hypo - elliptic applications . 
on the other hand , simulation of the bridge - type process numerically troublesome because of the exploding drift term .the new method presented in this article is inspired by the forward - reverse estimator for the transition density constructed by . given a grid , we prove that \ ] ] equals } { \mathbb{e } [ k_{\varepsilon } ( y(\widehat{t}_l ) - x(t^\ast ) ) \mathcal{y}(\widehat{t}_l ) ] } , \hspace*{-20pt}\ ] ] which can be implemented by monte carlo simulation for any . in ( [ eq : for - rev - intro ] ) is a given grid - point chosen by the user .the process solves the original sde with initial value on the time - interval ] , \times\mathbb{r}^{d}\rightarrow\mathbb { r}^{d\times m} ] , that solves the sde with being a ( from independent ) -dimensional wiener process , and despite its name , we stress that is the solution of an ordinary sde _forward _ in time on the interval ] for suitable .we therefore introduce the reverseprocess that starts at time at a generic state , is defined on an interval ] and similarly for .[ rem : bound - density - restriction ] in fact , for the theorems formulated as below , we only need condition [ ass : bound - density ] for .higher order versions only become necessary in the context of remark [ rem : higher - order - kernel ] .[ rem : ass_bound_density_autonomous ] by the results of , corollary 3.25 , condition [ ass : bound - density ] is satisfied _ in the autonomous case _ provided that ( the vector fields driving ) the forward diffusion and satisfy a uniform hrmander condition , and and are bounded , and bounded ; that is , all the derivatives are bounded as well .we know of no similar study for nonautonomous stochastic differential equations . of course , the seminal work by gives upper ( and lower ) gaussian bounds for the transition density of time - dependent , but uniformly elliptic stochastic differential equations .moreover , prove the existence and smoothness of transition densities for time - dependent sdes under hrmander conditions . in any case ,an extension of the kusuoka stroock result to the time - inhomogeneous case seems entirely possible , in particular since we do not consider time - derivatives , for instance , by first considering the case of piecewise constant coefficients .[ ass : kernel - order ] the kernel satisfies and .moreover , it has lighter tails than a gaussian density in the sense that there are constants and such that in many applications , one would probably choose a compactly supported kernel , which trivially satisfies the above tail - condition .finally , we also introduce some further assumptions put forth for convenience , which could be easily relaxed . [ass : convenience ] the functional together with its gradient and its hessian are bounded .moreover , the coefficient in ( [ mc12 ] ) is bounded .[ rem : convenience ] condition [ ass : convenience ] could be replaced by a requirement of polynomial boundedness .let us consider \\[-8pt ] \nonumber & & \hspace*{38pt}\qquad{}\times{\varepsilon}^{-d } k \biggl ( \frac{y_{y;t}(\widehat{t}_l)-x_{s_{0},x}(t^{\ast})}{{\varepsilon } } \biggr ) \mathcal{y}_{y;t}(\widehat{t}_l ) \biggr],\end{aligned}\ ] ] which can and will be computed using monte carlo simulation . here, we recall the definition of given in ( [ eq : hat - grid ] ) . 
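condition [ ass : kernel - order ] requires , roughly , a normalized kernel whose tails are lighter than gaussian , and , as remarked above , a compactly supported kernel satisfies the tail condition trivially . the sketch below shows one such choice , a symmetric product epanechnikov kernel together with the bandwidth scaling used in the estimator ; this particular kernel is our illustration only , and the numerical examples later in the paper actually use a gaussian kernel .

....
import numpy as np

def epanechnikov_product_kernel(x):
    """Product Epanechnikov kernel on R^d: K(x) = prod_i 0.75 * (1 - x_i**2) on [-1, 1]^d.

    It integrates to one, is symmetric (so odd moments vanish), and has
    compact support, hence trivially lighter-than-Gaussian tails.
    """
    x = np.asarray(x, dtype=float)
    inside = np.all(np.abs(x) <= 1.0, axis=-1)
    vals = np.prod(0.75 * (1.0 - x ** 2), axis=-1)
    return np.where(inside, vals, 0.0)

def scaled_kernel(x, eps):
    """K_eps(x) = eps**(-d) * K(x / eps), the scaling that appears in the estimator."""
    x = np.asarray(x, dtype=float)
    d = x.shape[-1]
    return epanechnikov_product_kernel(x / eps) / eps ** d
....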
by theorem [ thr : conditional - dist - general - grid ] , converges to .\ ] ] [ bias ] assuming conditions [ ass : bound - density ] , [ ass : kernel - order ] and [ ass : convenience ] , there are constants such that the bias of the approximation can be bounded by changing variables in theorem [ gen1 ] , we arrive at in particular , we have that .consider in the following , we use the notation , for , . by taylor s formula , conditions [ ass : kernel - order ] and[ ass : bound - density ] , we get \,dv \\ & = & \int k(v ) \bigl [ { \varepsilon}\partial_{x } p\bigl(t^{\ast } , x_{k } , t_{1 } , y_{1}\bigr ) \cdot v \bigr ] \,dv \\ & & { } + \sum_{|\beta| = 2 } \frac{2}{\beta ! } { \varepsilon}^{2 } \int\!\!\!\int_{0}^{1 } ( 1-t ) \partial^{\beta}_{x } p\bigl(t^{\ast},x_{k } + t { \varepsilon}v , t_{1 } , y_{1}\bigr ) \cdot v^{\beta } \,dt k(v ) \,dv\end{aligned}\ ] ] implying that where , as given in condition [ ass : bound - density ] , and is chosen such that , which is possible by condition [ ass : kernel - order ] . since we can further compute , using , defining , we get the bound with , which is positive for . consequently ,for , we can interpret as a ( gaussian ) transition density , which has moments of all orders , for a suitable normalization constant , for which we can derive explicit upper bounds .thus we finally obtain provided that , as the last expression can be interpreted as \ ] ] for a markov process with transition densities , , , , , which admits finite moments of all orders by construction .note that the constant in the above statement can be explicitly bounded in terms of the bound on , the constants appearing in condition [ ass : bound - density ] and . in the spirit of now introduce a monte carlo estimator for the quantity introduced in ( [ eq : def - heps ] ) .let us denote \\[-8pt ] \nonumber & & { } \times k \biggl ( \frac{y_{y;t}^{m}(\widehat{t}_l ) - x_{s_{0},x}^{n}(t^{\ast})}{{\varepsilon } } \biggr ) \mathcal{y}_{y;t}^{m } ( \widehat{t}_l).\end{aligned}\ ] ] note that = h_{{\varepsilon}} ] for various combinations of and . 
for the remainder of the section , we omit the sub - scripts in , and as we keep the initial times and values fixed .[ lem : z - m - mprime ] for we obtain { \vert}_{{\varepsilon}=0 } \\ & & \qquad= \int g(x_{1 } , \ldots , x_{k } , y_{1 } , \ldots , y_{l-1 } ) g \bigl(x_{1 } , \ldots , x_{k } , y_{1}^{\prime } , \ldots , y_{l-1}^{\prime}\bigr ) \\ & & \hspace*{9pt}\qquad\quad{}\times p\bigl(t^{\ast } , x_{k } , t_{1 } , y_{1}\bigr ) p\bigl(t^{\ast } , x_{k } , t_{1 } , y_{1}^{\prime}\bigr ) \\ & & \hspace*{9pt}\qquad\quad{}\times\prod_{i=1}^{k } p(s_{i-1 } , x_{i-1 } , s_{i } , x_{i } ) \,dx_{i } \prod_{i=2}^{l } p(t_{i-1 } , y_{i-1 } , t_{i } , y_{i } ) \,dy_{i-1 } \\ & & \hspace*{9pt}\quad\qquad{}\times\prod_{i=2}^{l } p \bigl(t_{i-1 } , y_{i-1}^{\prime } , t_{i } , y_{i}^{\prime}\bigr ) \,dy_{i-1}^{\prime}.\end{aligned}\ ] ] moreover , we can bound - \mathbb{e } \bigl[z^{\varepsilon}_{nm } z^{\varepsilon}_{nm^{\prime}}\bigr ] { \vert}_{{\varepsilon}=0 } \bigr{\vert}\le c { \varepsilon}^{2}.\ ] ] in what follows , is a positive constant , which may change from line to line .we have \\ & & \qquad= { \varepsilon}^{-2d } e \biggl [ g \bigl ( x_{s_{1}}^{n } , \ldots , x_{s_{k}}^{n } , y_{\widehat{t}_{l-1}}^{m } , \ldots , y_{\widehat{t}_{1}}^{m } \bigr ) g \bigl ( x_{s_{1}}^{n } , \ldots , x_{s_{k}}^{n } , y_{\widehat{t}_{l-1}}^{m^{\prime } } , \ldots , y_{\widehat{t}_{1}}^{m^{\prime } } \bigr ) \\ & & \hspace*{129pt}\qquad\quad { } \times k \biggl ( \frac{y_{\widehat{t}_l}^{m}-x_{t^{\ast}}^{n}}{{\varepsilon } } \biggr ) k \biggl ( \frac{y_{\widehat{t}_l}^{m^{\prime}}-x_{t^{\ast}}^{n}}{{\varepsilon } } \biggr ) \mathcal{y}_{\widehat{t}_l}^{m } \mathcal{y}_{\widehat{t}_l}^{m^{\prime } } \biggr ] \\ & & \qquad = { \varepsilon}^{-2d } \int g(x_{1 } , \ldots , x_{k } , y_{1 } , \ldots , y_{l-1 } ) g\bigl(x_{1 } , \ldots , x_{k } , y_{1}^{\prime } , \ldots , y_{l-1}^{\prime } \bigr ) \\ & & \hspace*{32pt}\qquad\quad{}\times k \biggl ( \frac{y_{0}-x_{k}}{{\varepsilon } } \biggr ) k \biggl ( \frac{y_{0}^{\prime}-x_{k}}{{\varepsilon } } \biggr ) \prod_{i=1}^{k } p(s_{i-1 } , x_{i-1 } , s_{i } , x_{i } ) \,dx_{i } \\ & & \hspace*{32pt}\qquad\quad{}\times\prod_{i=1}^{l } p(t_{i-1 } , y_{i-1 } , t_{i } , y_{i } ) \,dy_{i-1 } \prod_{i=1}^{l } p \bigl(t_{i-1 } , y_{i-1}^{\prime } , t_{i } , y_{i}^{\prime}\bigr ) \,dy_{i-1}^{\prime } \\ & & \qquad = \int g(x_{1 } , \ldots , x_{k } , y_{1 } , \ldots , y_{l-1 } ) g\bigl(x_{1 } , \ldots , x_{k } , y_{1}^{\prime } , \ldots , y_{l-1}^{\prime}\bigr ) \\ & & \hspace*{8pt}\qquad\quad { } \times k(v ) k\bigl(v^{\prime}\bigr ) p\bigl(t^\ast , x_{k}+{\varepsilon}v , t_{1 } , y_{1}\bigr ) \,dv p \bigl(t^{\ast } , x_{k } + { \varepsilon}v^{\prime } , t_{1 } , y_{1}^{\prime}\bigr ) \,dv^{\prime } \\ & & \hspace*{8pt}\qquad\quad { } \times\prod_{i=1}^{k } p(s_{i-1 } , x_{i-1 } , s_{i } , x_{i } ) \,dx_{i}\\ & & \hspace*{8pt}\qquad\quad { } \times \prod_{i=2}^{l } p(t_{i-1 } , y_{i-1 } , t_{i } , y_{i } ) \,dy_{i-1 } \\ & & \hspace*{8pt}\qquad\quad { } \times\prod_{i=2}^{l } p \bigl(t_{i-1 } , y_{i-1}^{\prime } , t_{i } , y_{i}^{\prime}\bigr ) \,dy_{i-1}^{\prime},\end{aligned}\ ] ] where we changed variables and .thus , for , we arrive at the above expression , which is treated as a problem - dependent constant .using condition [ ass : kernel - order ] [ and the short - hand notation , we now consider \,dv \,dv^{\prime } \\ & & \hspace*{-6pt}\qquad = { \varepsilon}^{2 } \int k(v ) k\bigl(v^\prime\bigr)\\ & & \hspace*{-6pt}\hspace*{21pt}\qquad\quad{}\times \int _ { 0}^{1 
} ( 1-t ) \biggl [ \sum _ { i=1}^{d } \partial_{x}^{2e_{i}}p(x_{k}+t { \varepsilon}v , y_{1 } ) p\bigl(x_{k}+t{\varepsilon}v^{\prime } , y_{1}^{\prime}\bigr ) v_{i}^{2 } \\ & & \hspace*{-6pt}\hspace*{84pt}\qquad\quad { } + \sum_{i=1}^{d } p(x_{k}+t { \varepsilon}v , y_{1 } ) \\ & & \hspace*{-6pt}\hspace*{122pt}\qquad{}\times\partial_{x}^{2e_{i } } p \bigl(x_{k}+t{\varepsilon}v^{\prime } , y_{1}^{\prime } \bigr ) \bigl(v^{\prime}_{i}\bigr)^{2 } \\ & & \hspace*{-6pt}\hspace*{-6pt}\hspace*{84pt}\qquad\quad { } + 2 \sum_{i , j=1}^{d } \partial_{x}^{e_{i } } p(x_{k}+t{\varepsilon}v , y_{1 } ) \\ & & \hspace*{-6pt}\hspace*{156pt}{}\times\partial_{x}^{e_{j } } p\bigl(x_{k}+t { \varepsilon}v^{\prime } , y_{1}^{\prime}\bigr ) v_{i } v^{\prime}_{j } \,dv \,dv^{\prime } \biggr ] \,dt \,dv \,dv^{\prime},\end{aligned}\ ] ] where , for instance , and . by similar techniques as in the proof of theorem [ bias ] , relying once more on the uniform bounds of condition [ ass : bound - density ] , we arrive at an upper bound for a transition density with gaussian bounds .consequently , we obtain - \mathbb{e } \bigl[z^{\varepsilon}_{nm } z^{\varepsilon}_{nm^{\prime}}\bigr ] { \vert}_{{\varepsilon}=0 } \bigr{\vert}\\ & & \qquad\le c { \varepsilon}^{2 } \int\bigl|g(x_{1 } , \ldots , x_{k } , y_{1 } , \ldots , y_{l-1})\bigr| \\ & & \hspace*{30pt}\qquad\quad{}\times\bigl|g\bigl(x_{1 } , \ldots , x_{k } , y_{1}^{\prime } , \ldots , y_{l-1}^{\prime } \bigr)\bigr|\\ & & \hspace*{30pt}\qquad\quad{}\times\prod _ { i=1}^{k } p(s_{i-1 } , x_{i-1 } , s_{i } , x_{i } ) \,dx_{i } s_{{\varepsilon}}^{(1,2)}(x_{k } , y_{1 } ) \,dy_{1 } \\ & & \hspace*{30pt}\qquad\quad{}\times\prod_{i=3}^{l } p(t_{i-1 } , y_{i-1 } , t_{i } , y_{i } ) \,dy_{i-1 } \times s_{{\varepsilon}}^{(1,2)}\bigl(x_{k } , y_{1}^{\prime}\bigr ) \,dy_{1}^{\prime}\\ & & \hspace*{30pt}\qquad\quad{}\times\prod _ { i=3}^{l } p\bigl(t_{i-1 } , y_{i-1}^{\prime } , t_{i } , y_{i}^{\prime } \bigr ) \,dy_{i-1}^{\prime},\end{aligned}\ ] ] which can be bounded by by boundedness of .in fact , we can find densities and with gaussian tails such that - \mathbb{e } \bigl[z^{\varepsilon}_{nm } z^{\varepsilon}_{nm^{\prime}}\bigr ] { \vert}_{{\varepsilon}=0 } \bigr{\vert}\nonumber \\[-8pt ] \\[-8pt ] \nonumber & & \qquad \le c { \varepsilon}^{2 } \int \widetilde{p}\bigl(s_{0},x , t^{\ast},x_{k}\bigr ) \widetilde{q}\bigl(t^{\ast},x_{k},t , y\bigr)^{2 } \,dx_{k}.\ ] ] when we consider ] and denote the constant for the difference by , that is , ; * for , we set \eqqcolon h_{{\varepsilon}}^{(2,1)} ] and denote the constant for the difference by , that is , .[ lem : variance ] the variance of the estimator is given by lemma [ lem : variance ] gives a clarification of the intuitive fact that the variance of explodes as ( and , hence , ) .indeed , as all the terms have a finite limit , the explosion is exclusively caused by the contribution of = { \varepsilon}^{-d } h^{(1,1)}_{\varepsilon} ] .we immediately obtain the following : [ lem : mse - h ] we assume conditions [ ass : bound - density ] , [ ass : kernel - order ] and [ ass : convenience ] hold .then the mean square error of the estimator introduced in ( [ eq : hat - heps ] ) for the term defined in ( [ eq : def - h ] ) satisfies \\ & & \qquad \le \frac{1-n - m}{nm } h^{2 } + \frac{m-1}{nm } h^{(1,2)}_{0 } + \frac { n-1}{nm } h^{(2,1)}_{0 } + \frac{{\varepsilon}^{-d}}{nm } h^{(1,1)}_{0 } \\ & & \qquad\quad{}+ \frac{{\varepsilon}^{-d+2}}{nm } c_{1,1 } + { \varepsilon}^{2 } \biggl [ 2 \frac{1-n - m}{nm } c h + \frac{m-1}{nm } c_{1,2 
} + \frac{n-1}{nm } c_{2,1 } \biggr]\\ & & \quad\qquad { } + \frac{(n-1)(m-1)}{nm } c_0^{2 } { \varepsilon}^{4}.\end{aligned}\ ] ] similar to , we can now choose and the bandwidth so as to obtain convergence proportional to in rmse - sense .[ thr : mse - h - order ] assume conditions [ ass : bound - density ] , [ ass : kernel - order ] and [ ass : convenience ] and set , and dependent on .* if , choose for some . then we have = \mathcal{o}(n^{-1}) ] .insert and the respective choice of in lemma [ lem : mse - h ] .[ rem : higher - order - kernel ] by replacing the kernel by _ higher order _ kernels , is the order of the lowest order ( nonconstant ) monomial such that . ]one could retain the convergence rate even in higher dimensions , as higher order kernels lead to higher order estimates ( in ) in lemmas [ lem : z - m - mprime ] , [ lem : zn - nprime ] and [ lem : znm ] .so far , we have only computed the quantity as given in ( [ eq : def - h ] ) .however , finally we want to compute the conditional expectation .\ ] ] as with defined in ( [ eq : def - h ] ) , we need to divide the estimator for by an appropriate estimator for fact , we choose the forward reverse estimator with .note that we have assumed that . to rule out large error contributions when the denominator is small, we will discard experiments which give too small estimates for the transition density .more precisely , we choose our final estimator to be \\[-8pt ] \nonumber & & { } \times\mathbf{1}_{({1}/{(nm ) } ) { \varepsilon}^{-d } \sum_{n=1}^{n } \sum_{m=1}^{m } k ( ( { y_{\widehat{t}_l}^{m } - x_{t^{\ast}}^{n})}/{{\varepsilon } } ) \mathcal{y}_{\widehat{t}_l}^{m } > \overline{p}/2},\end{aligned}\ ] ] where is a lower bound for ( for fixed ) , which is assumed to be known . and then taking a value at the lower end of a required confidence interval .see remark [ rem : nullfolge ] below for a different version of the theorem . in any case, our numerical experiments suggest that the cut - off can be safely omitted in practice .keep in mind , however , that the ratio of the asymptotic distributions for numerator and denominator may not have finite moments . ][ thr : mse - h ] assume conditions [ ass : bound - density ] , [ ass : kernel - order ] and [ ass : convenience ] and set and dependent on . * if ( or and higher order kernels are used ) , choose , . then we have = \mathcal{o}(n^{-1}) ] .let , and , similarly , let denote the estimator in the denominator , including the normalization factor. moreover , let as defined in ( [ eq : def - h ] ) and let .then we have already established in theorem [ thr : mse - h - order ] that & = & \mathcal{o } \bigl(n^{-p}\bigr ) , \\\mathbb{e } \bigl [ |y_{n } - y|^{2 } \bigr ] & = & \mathcal{o } \bigl(n^{-p}\bigr),\end{aligned}\ ] ] where for and when .moreover , we have obtained in lemma [ lem : variance ] that and .we will now estimate the mean square error for the quotient by splitting it into two contributions , depending on whether is small or large . 
to this end , let for a constant to be specified below satisfying ] , where we assume and , we get , using a simple adaptation of ( [ eq : zm - mprime ] ) for different terminal values and , - \mathbb{e } \bigl [ z_{nm}^{{\varepsilon},\xi}z_{nm^{\prime}}^{\xi } \bigr ] | _ { { \varepsilon}=0 } \bigr{\vert}\nonumber \\ & & \qquad\le\mathbb{e } \biggl [ \frac{{\vert}z^{\varepsilon}_{n , m , m^{\prime } } ( \xi^{m},\xi^{m^{\prime } } ) - z^{\varepsilon}_{n , m , m^{\prime}}(\xi^{m},\xi^{m^{\prime } } ) { \vert}_{{\varepsilon}=0 } { \vert}}{\varphi(\xi^{m } ) \varphi(\xi^{m^{\prime } } ) } \biggr ] \nonumber \\ & & \qquad \le c { \varepsilon}^{2 } \mathbb{e } \biggl [ \frac{\int\widetilde{p}(s_{0 } , x , t^{\ast},x_{k } ) \widetilde{q}(t^{\ast},x_{k } , t , \xi^{m } ) \widetilde { q}(t^{\ast},x_{k},t,\xi^{m^{\prime } } ) \,dx_{k}}{\varphi(\xi^{m } ) \varphi ( \xi^{m^{\prime } } ) } \biggr ] \nonumber \\[-8pt ] \\[-8pt ] \nonumber & & \qquad = c{\varepsilon}^{2 } \int\widetilde{p}\bigl(s_{0},x , t^{\ast},x_{k } \bigr ) \widetilde { q}\bigl(t^{\ast},x_{k } , t , y\bigr)\\ & & \hspace*{28pt}\qquad\quad{}\times \widetilde{q}\bigl(t^{\ast},x_{k},t , y^{\prime}\bigr ) \,dx_{k } \lambda_{a}(dy ) \lambda_{a } \bigl(dy^{\prime}\bigr ) \nonumber \\ & & \qquad \le c { \varepsilon}^{2}.\nonumber\end{aligned}\ ] ] adopting the above notation for the case covered in lemma [ lem : zn - nprime ] and using ( [ eq : zn - nprime ] ) , we get - \mathbb{e } \bigl [ z_{nm}^{{\varepsilon},\xi}z_{n^{\prime}m}^{{\varepsilon},\xi } \bigr ] { \vert}_{{\varepsilon}=0}\bigr { \vert}\\ & & \qquad \le\mathbb{e } \biggl [ \frac{{\vert}z^{\varepsilon}_{n , n^{\prime } , m}(\xi^{m},\xi^{m } ) - z^{\varepsilon}_{n , n^{\prime},m}(\xi^{m},\xi ^{m } ) { \vert}_{{\varepsilon}=0 } { \vert}}{\varphi(\xi^{m } ) \varphi(\xi^{m } ) } \biggr ] \\ & & \qquad \le c { \varepsilon}^{2 } \int\frac{\widetilde{p}(s_{0},x , t^{\ast},y_{1 } ) \widetilde{q}(t^{\ast},y_{1 } , t , y)}{\varphi(y ) } \,dy_{1 } \lambda_{a}(dy).\end{aligned}\ ] ] by assumption the density has gaussian tails , whereas was assumed to have strictly sub - gaussian tails .this implies that the above integral is finite , and we get the bound - \mathbb{e } \bigl [ z_{nm}^{{\varepsilon},\xi } z_{n^{\prime}m}^{{\varepsilon},\xi } \bigr ] { \vert}_{{\varepsilon}=0 } \bigr{\vert}\le c { \varepsilon}^{2}.\ ] ] in a similar way , using ( [ eq : znm ] ) , we get the bound - \lim _ { { \varepsilon}\to0 } { \varepsilon}^{d } \mathbb{e } \bigl [ \bigl(z_{nm}^{{\varepsilon},\xi}\bigr)^{2 } \bigr ] \bigr{\vert}\le c { \varepsilon}^{2}.\ ] ] the respective versions of lemmas [ lem : variance ] , [ lem : mse - h ] and theorem [ thr : mse - h - order ] follow immediately from the bounds ( [ eq : bias - bound - xi ] ) , ( [ eq : zm - mprime - bound - xi ] ) , ( [ eq : zn - nprime - bound - xi ] ) and ( [ eq : znm - bound - xi ] ) , and we can repeat the proof of theorem [ thr : mse - h ] , arriving at the conclusion .we again stress that the nonoptimal complexity rate in theorem [ thr : mse - comp ] can be improved to the optimal one even for by remark [ rem : higher - order - kernel ] .theorems [ thr : mse - h ] and [ thr : mse - comp ] above present the asymptotic analysis of the mse for the forward - reverse estimator . 
in practice , for many methods with very good asymptotic rates , limitations arise due to potentially high constants , and the forward - reverse estimator is no exception .in fact , this can be already seen in a very simple example , where all the estimates can be given explicitly . for , consider the one - dimensional ornstein uhlenbeck process for .the corresponding reverse process satisfies for a brownian motion .moreover , .we first discuss the estimator introduced in ( [ eq : hat - heps ] ) for the numerator of the forward - reverse estimator for with .of course , we expect that the findings for this special case carry over to situations with nonconstant and .after elementary but tedious calculations [ milstein , schoenmakers and spokoiny ( ) , section 4 ] one arrives at = { \frac{1}{\sqrt { 2 \pi \bigl ( { \varepsilon}^2 e^{-2\alpha(t - t^\ast ) } + \sigma^2_t \bigr ) } } } \exp \biggl ( - { \frac { ( e^{-\alpha t}x - y ) ^2}{2 ( { \varepsilon}^2 e^{-2\alpha(t - t^\ast ) } + \sigma^2_t ) } } \biggr)\ ] ] and where thus , all the terms in the mse [ composed of the square of ( [ eq : ou - example - mean ] ) and ( [ eq : ou - example - var ] ) ] exhibit fairly moderate constants , except for the last term in ( [ eq : ou - example - var ] ) . indeed ,when , we have , unless . in other words , the constant in theorem [ thr : mse - h - order ] will be quite large if and . that observation is quite intuitive in view of ( [ eq : ou - example ] ) and ( [ eq : ou - example - reverse ] ) : is contracting to as time increases , whereas is exponentially expanding away from .thus , the probability of and be close to each other is very small .[ rem : ou - example-1 ] note that the last term in ( [ eq : ou - example - var ] ) is the term estimated in lemma [ lem : znm ] .the constant in the lemma depends on the constant in condition [ ass : bound - density ] for the derivatives of the transition density with respect to the -variable . for the ornstein uhlenbeck process, the density is given by therefore , we see that derivatives with respect to ( and , hence , the corresponding constants ) are considerably larger than derivatives with respect to .this explains why the last term ( and no other term ) in ( [ eq : ou - example - var ] ) causes problems for large .[ rem : ou - example - gen ] there is also a source of error due to the form of as a fraction of two terms .the error of an approximation of a quantity of interest by the fraction of the approximations for and for with corresponding ( absolute ) errors and is controlled by the _ relative _errors for and . 
indeed , assume for simplicity that and , then which may be close to if the relative error for the denominator is large .some care is necessary when implementing the forward reverse estimators ( [ eq : h - hat - def ] ) and ( [ eq : h - comp - def ] ) for expectations of a functional of the diffusion bridge between two points or a point and a subset .this especially concerns the evaluation of the double sum .indeed , straightforward computation would require the cost of kernel evaluations which would be tremendous , for example , when .but , fortunately , by using kernels with an ( in some sense ) small support we can get around this difficulty as outlined below ; see also for a similar discussion .we here assume that the kernel used in ( [ eq : h - hat - def ] ) and ( [ eq : h - comp - def ] ) , respectively , has bounded support contained in some ball of radius , an assumption which is easily fulfilled in practice .for instance , even though the gaussian kernel has unbounded support , in practice is negligible outside a finite ball ( with exponential decay of the value as function of the radius ) . therefore , it is easy to choose a ball such that is smaller than some error tolerance outside the ball .depends on the size of the constants in the mse bound .] then , due to the small support of , the following monte carlo algorithm for the kernel estimator is feasible . for simplicity , we take .[ we present the algorithm only for the case of ( [ eq : h - hat - def ] ) , the analysis being virtually equal for ( [ eq : h - comp - def ] ) . ] here , the input variable denotes the grid ( [ eq : full - grid ] ) .simulate trajectories of the forward process on .simulate trajectories of the reverse process on .find the sub - sample evaluate ( [ eq : h - hat - def ] ) by the complexity of the simulation steps ( 2 ) and ( 3 ) in algorithm [ alg : algorithm ] is and elementary computations , respectively .the size of the intersection in step ( 5 ) of algorithm [ alg : algorithm ] is , on average , proportional to .the search procedure itself can be done at a cost of order ( neglecting the cost of comparison between two integers ) .thus , we get the complexity bounds summarized in theorem [ thr : complexity ] below .[ thr : complexity ] assume that samples from the forward process and the reverse process can be obtained at constant cost .furthermore , assume that the cost of checking for equality of integers carries negligible cost .then the following asymptotic bounds hold for the complexity of algorithm [ alg : algorithm ] : * if , we choose , implying that the mse of the output of the algorithm is with a complexity ; * if , we choose and obtain an mse of with a complexity .we present two numerical studies : in the first example , the forward process is a two - dimensional brownian motion , with the standard brownian bridge as the conditional diffusion . in the second example , we consider a heston model whose stock price component is conditioned to end in a certain value . in both examples , we actually use a gaussian kernel and the simulation as well as the functional of interest are defined on a uniform grid with and for and . [ ex : brownian_bridge ] we consider , a two - dimensional standard brownian motion , which we condition on starting at and ending at , that is , the conditioned diffusion is a classical two - dimensional brownian bridge . in particular , the reverse process is also a standard brownian motion , and .we consider the functional where . 
in this simple toy - example, we can actually compute the true solution = \frac{1}{6 } \frac{l+1}{l-1}.\ ] ] as evaluation of the functional is cheap in this case , we use a naive algorithm calculating the full double sum .we choose and , which still gives the rate of convergence obtained in theorem [ thr : mse - h ] . .dashed lines are reference lines proportional to . ] in figure [ fig : ex_bb ] , we show the results for , with the choices and , that is , with and , respectively . in both case , we observe the asymptotic relation predicted by theorem [ thr : mse - h ] .the mse is slightly lower when is closer to the middle of the interval ] .the `` exact '' reference value was computed using the forward - reverse algorithm with very large , corresponding small and a very fine grid for the euler scheme .note that figure [ fig : ex_heston ] depicts the `` relative mse , '' that is , the mse normalized by the squared reference value .we are very grateful to an anonymous referee , who has pointed out to us the way to a much shorter and more transparent proof of the main theorem [ key ] .moreover , the paper has profited from various comments made by the referee , which improved the notation and general presentation of the paper .we are also grateful to g. n. milstein for providing us with enlightening references .
in this paper we derive stochastic representations for the finite dimensional distributions of a multidimensional diffusion on a fixed time interval , conditioned on the terminal state . the conditioning can be with respect to a fixed point or more generally with respect to some subset . the representations rely on a reverse process connected with the given ( forward ) diffusion as introduced in milstein , schoenmakers and spokoiny [ _ bernoulli _ * 10 * ( 2004 ) 281312 ] in the context of a forward - reverse transition density estimator . the corresponding monte carlo estimators have essentially root- accuracy , and hence they do not suffer from the curse of dimensionality . we provide a detailed convergence analysis and give a numerical example involving the realized variance in a stochastic volatility asset model conditioned on a fixed terminal value of the asset .
many astrophysical flows involve dynamically significant magnetic fields , such as molecular clouds , accretion disks , the galactic dynamo , jets , galaxy clusters , stellar dynamos and coronae , the solar wind and the interstellar medium .these problems tend to be three - dimensional , multiscale and turbulent , so there is an ongoing interest in developing high - resolution and efficient magnetohydrodynamics ( mhd ) algorithms for them . in this paper , we outline an extension of the constrained transport algorithm ( evans & hawley 1988 ) to the combination of higher spatial order and zone - centered grids , and with resolution - enhanced tuned derivatives .we then describe how these measures fit together to yield an algorithm that closely approaches the theoretical maximum wavenumber resolution of spectral algorithms .the induction equation for a magnetic field and a velocity field in ideal mhd is analytically , this equation conserves magnetic divergence : .however , this may or may not be the case for a finite - difference treatment of this equation .tth ( 2000 ) reviews the methods taken by various algorithms to treat the divergence in mhd simulations .a spectral code explicitly projects the fourier components so that . for a finite difference code , the magnetic fieldcan be evolved by a constrained transport scheme that preserves the magnetic divergence to machine precision ( evans & hawley 1988 ) . alternatively ,if the discretization does nt conserve magnetic divergence , the divergence can be removed with measures such as periodic use of a poisson solver ( brackbill & barnes 1980 ) , adding a divergence diffusion term to the magnetic evolution , or following an artificial and independently evolving divergence field ( dedner et al .2002 ) to propagate divergence away from where it is produced and then dissipate it . the powell scheme ( powell et al.1999 ) adds a source term to advect divergence rather than let it grow in place .a finite difference code can also employ a vector potential such that , in which case the magnetic divergence is automatically zero .this requires the use of a higher - order advection algorithm to ensure accurate second - derivatives , as is done in the pencil code ( brandenburg & dobler 2002 ) .we denote any finite difference scheme for mhd that explicitly conserves the magnetic divergence to machine precision as constrained transport ( ct ) , and any scheme that does not as unconstrained transport ( ut ) .several variations of ct are possible .if the electric field is differenced as a curl : then the magnetic divergence is preserved to machine precision for most grid types ( see appendix ) .evans & hawley ( 1988 ) introduced ct for staggered grids and tth ( 2000 ) showed that it works for centered grids as well ( see the appendix for explanation of centered and staggered grids ) . londrillo & del zanna ( 2000 ) further showed that high - order ct is possible on staggered grids with a radius - two stencil . in this paper , we show that volume - centered ct is possible on arbitrarily large stencil sizes , with hyperresistivity , and that the resolution of this algorithm at moderately high order approaches the theoretical maximum exhibited by a spectral code . 
in [ algorithms ] , we discuss the specifics of the algorithm , and in [ simulations ] , we describe test simulations that demonstrate the capabilities of ct with high - order spatial derivatives .our algorithm is based on a constrained transport scheme , plus measures to enhance the resolution and maintain stability .high wavenumber resolution is achieved by a combination of high - order and tuned finite differences plus hyperdiffusivity , and stability is achieved by runge - kutta timestepping and hyperdiffusivity .a high - order timestepping scheme for the evolution equations is essential for the stability of most algorithms .the time update for a variable is , where represents some estimate of .one example is the second - order runge - kutta scheme , which estimates and then identifies .another class of algorithms maintains the conservation of mass and momentum by computing fluxes through zone boundaries .a variety of techniques exist for time - extrapolating the fluxes at , such as piecewise parabolic advection ( colella and woodward 1984 ) , total variation diminishing ( harten 1983 ) , riemann solvers ( toro 1999 ) , the method of characteristics ( stone & norman 1992a , hawley & stone 1995 ) , and many more .mhd poses a challenge to time extrapolation because there are seven or eight wavemode characteristics , depending on the technique used for treating magnetic divergence .in particular , the well - known method - of - characteristics algorithm interpolates along the alfvn characteristic while neglecting the fast and slow mode characteristics . for our simulations, we use runge - kutta for time - extrapolation because it does nt invoke any diffusive spatial interpolations ( [ diffusivity ] ) , and because it automatically captures all three mhd wavemode types . in our demonstration implementation, we use second - order runge - kutta while the pencil code ( brandenburg & dobler 2002 ) uses third - order , although either order has proven successful . a common class of algorithmsis based on momentum fluxes that are time - extrapolated with upwind spatial interpolations .the errors from the interpolations required for these flux transport algorithms produce an intrinsic diffusivity that can stabilize the evolution , even in the absence of any explicit diffusive terms .the nature and magnitude of the diffusivity has been characterized in zhong ( 1998 ) and dobler et al ( 2006 ) .runge - kutta timestepping , on the other hand , has no spatial interpolations , and thus no intrinsic diffusivity .one then generally needs an explicit stabilizing diffusivity .one has various options for the form of this diffusivity , with laplacian or hyper - laplacian typically chosen .these diffusivities have the benefit that their magnitude is easily characterized , and the diffusive coefficient can be tuned to have the minimum value necessary to preserve stability .consider : } { \mbox{\boldmath }}^{[4 ] } { { \bf v}}.\ ] ] let the fourier components be where is the wavenumber .they evolve according to } ( k_x^4 + k_y^4 + k_z^4 ) \hat{{{\bf v}}}({{\bf k}}).\ ] ] the term is the laplacian viscosity and the others are higher - order hyperdiffusivities .specifically , and } = \partial_x^4 + \partial_y^4 + \partial_z^4. 
] operator is not .this affects the maximum - possible timestep because in order to be advectively stable , the courant condition implies that the product must be less than a given value , and so the high - k corners of the 3d fourier cube are the most vulnerable to advective instability .in these corners , the term delivers more diffusion than the } \nabla \nabla \nabla \nabla \nabla \nabla \nabla \nabla$}}\cdot { { \bf b}}\label{eqctb}\ ] ] .[ tablevar ] variables in the equations of mhd [ cols= " < , < , < , < " , ]the forcing is the same as used by maron et al .a random forcing field is added to the velocity every timestep .the spectrum of the forcing field is , truncated 2.5 lattice units from the origin in fourier space , and the fourier components have random phases .the forcing power , simulation volume and density are unity , which yields rms velocity and magnetic fields of order unity ( maron 2004 ) .the diffusivities are given in [ simulations ] .we plot the kinetic and magnetic spectra in figure [ figct]_(a)_. the spectra are very similar for constrained and unconstrained transport simulations , and also for different orders .however the spectra alone do not distinguish between simulations of different orders because an error in the derivative manifests itself as an advective dispersion rather than as a diffusivity ( [ rez ] ) .one instead has to examine the fields in real space . in figure [ figct]_(b )_ , we compare the magnetic fields at t=0.4 crossing times . for the comparison ,we examine the difference between the fields integrated over space by computing the norm between simulations and ^ 2 d(\mbox{vol } ) } { \int b_y(i)^2 d\mbox{vol } + \int b_y(j)^2 d(\mbox{vol})}.\ ] ] stone et .( 1992b ) argue that this kind of comparison is more meaningful than merely plotting the overlay of both fields .the constrained transport simulation with polynomial - based finite differences on a radius - eight stencil ( ct8 in table [ tablesim ] ) serves as the basis of comparison .we compare the constrained transport simulations to an unconstrained transport simulation on a radius - eight tuned finite - difference stencil ( ut8 ) .we use ut8 as a stand - in for the spectral algorithm because of its high wavenumber resolution .the spectral algorithm delivers the highest - attainable resolution because spectral derivatives are exact for all wavenumbers . 
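the field comparison used above is a normalized difference : the squared difference of , say , the fields of two runs , integrated over the box and divided by the sum of the two individual squared norms . on a uniform grid the volume element cancels , so the quantity reduces to plain sums over grid points , as in the following sketch .

....
import numpy as np

def field_difference_norm(by_i, by_j):
    """Normalized L2 difference between two gridded fields:
    int (by_i - by_j)^2 dV / ( int by_i^2 dV + int by_j^2 dV ),
    assuming both runs use the same uniform grid (so dV cancels)."""
    by_i = np.asarray(by_i, dtype=float)
    by_j = np.asarray(by_j, dtype=float)
    return np.sum((by_i - by_j) ** 2) / (np.sum(by_i ** 2) + np.sum(by_j ** 2))
....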
with this, a 3d spectral simulation without an aliasing grid - shift correction can resolve structure up to k=2/3 , and with a grid - shift correction it can resolve up to k=.94 ( canuto 1987 ) .the spectral algorithm can also set the magnetic divergence to zero in fourier space at negligible cost .unconstrained transport does not explicitly conserve magnetic divergence and so in model ut8 the divergence is cleaned with a fourier projection every timestep .we also tried applying the correction every fourth timestep and with virtually identical results .the radius - eight stencil of ut8 yields derivatives that are accurate up to k=.56 .the norms given in table ( [ tablek ] ) show how the simulations progressively approach the ct8 result as the stencil size increases .the match is poor for ct1 and better for ct3 .we also note that the radius - three simulation with tuned derivatives ( ct3 t ) performs better than the radius - four simulation with polynomial - based derivatives , establishing the effectiveness of tuned derivatives .this can also be qualitatively seen in figure ( [ figct ] ) , where we see that the fields for ct8 and ut8 are closely aligned ( fig .[ figct ] ) , and that they also closely resemble those for ct3 .we attribute the remaining differences between ct8 and ut8 to the fact that the magnetic divergence is removed spectrally in ut8 , while it is handled by constrained transport in ct8 . collectively , the high - order constrained and unconstrained transport simulations ( ct3 , ct8 & ut8 ) more closely resemble each other than they do the low - order constrained transport simulations ( ct1 & ct2 ) . we conclude that ct3 is already a good approximation to the spectral algorithm .we adapted the vector potential code pencil to run in ct mode and used it to compare the vector potential and ct techniques .we ran an alfvn wave on a grid with zero viscosity and resistivity .( figure [ figalfven ] ) .after ten crossing times , both the vector potential and ct techniques yield wave profiles that agree with each other to within 1 percent .the shape of the profiles are also well - matched with the initial conditions , with a phase error of 10 percent .we also used both techniques to run a turbulent dynamo simulation we started with an initially weak magnetic field in the form of a beltrami wave and applied helical forcing until it grew to a steady state .the box size is , the density is unity , the forcing power is equal to the viscosity is equal to and the resistivity is equal to the rms magnetic field strength is plotted in figure ( [ figdynamo ] ) . 
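for the unconstrained run ut8 the magnetic divergence is removed by a fourier projection . a generic version of that projection ( our sketch for a periodic box , not the code used in the paper ) subtracts the component of the fourier - transformed field parallel to the wavevector :

....
import numpy as np

def clean_divergence_fft(bx, by, bz, lx=2 * np.pi, ly=2 * np.pi, lz=2 * np.pi):
    """Remove div(B) spectrally: B_hat <- B_hat - k (k . B_hat) / k^2.

    Assumes a periodic box of size lx * ly * lz sampled on a uniform grid;
    offered as a generic sketch of Fourier-projection divergence cleaning.
    """
    nx, ny, nz = bx.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=lx / nx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=ly / ny)
    kz = 2 * np.pi * np.fft.fftfreq(nz, d=lz / nz)
    kx, ky, kz = np.meshgrid(kx, ky, kz, indexing="ij")
    k2 = kx ** 2 + ky ** 2 + kz ** 2
    k2[0, 0, 0] = 1.0                       # avoid division by zero for the mean mode
    bxh, byh, bzh = np.fft.fftn(bx), np.fft.fftn(by), np.fft.fftn(bz)
    kdotb = (kx * bxh + ky * byh + kz * bzh) / k2
    bxh -= kx * kdotb
    byh -= ky * kdotb
    bzh -= kz * kdotb
    return (np.fft.ifftn(bxh).real, np.fft.ifftn(byh).real, np.fft.ifftn(bzh).real)
....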
after 30 crossing times ,the values for for the ct and vector potential techniques agree to 1 percent ( figure [ figdynamo ] ) .we ran an alfvn wave test where the propagation axis is oblique to the grid axes , with the initial conditions in gardiner & stone ( 2005 ) : the simulation volume is a unit cube , modeled on a grid of size .the velocity field is quasi - incompressible , with the divergence removed spectrally every 4 timesteps .the kinetic and magnetic diffusivities are all set to zero for this linear problem .we ran two simulations : one with third - order polynomial finite differences and another with third - order tuned finite differences from table ( [ tablek ] ) .after the wave has traveled 16 times around the periodic box , the waveform remains almost indistinguishable from the initial conditions , with the tuned finite differences yielding a more precise result than the polynomial finite differences ( figure [ figalfvenb ] ) .we have developed a new version of the constrained transport algorithm that uses volume - centered fields and hyperresistivity on a high - order finite difference stencil , with tuned finite difference coefficients to enhance high - wavenumber resolution .high - order interpolation allows implementation of staggered dealiasing .together , these measures yield a wavenumber resolution that approaches the ideal value achieved by the spectral algorithm . volume centered fields are desirable because then , and all reside at the same grid location , allowing to be constructed directly from the cross product of and without interpolation .for staggered fields , and reside at the zone faces and on the edges , and so constructing involves spatial interpolation , which reduces wavenumber resolution .high - order stencils and tuned finite difference coefficients both enhance the wavenumber resolution of finite differences .for a radius - three stencil with tuned coefficients , derivatives can be computed to a relative precision of up to a nyquist - scaled wavenumber of . without tuning ,this would be for a radius - three stencil .a radius - one stencil derivative such as is used in zeus ( stone & norman 1992a ) is only accurate up to .the spectral derivative is precise up to , although in practice it is limited to because of aliasing .aliasing limits a finite - difference code to unless the finite - difference grid shift aliasing correction is used ( [ dealiasing ] ) . 
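the stencil resolution figures quoted above can be reproduced , for any accuracy criterion , from the modified wavenumber of a centered first - derivative stencil : applying the stencil to a plane wave gives an effective wavenumber , and the fraction of the nyquist interval over which that effective wavenumber stays within a chosen relative error of the true one is the usable resolution . the sketch below uses the standard polynomial - based central - difference coefficients for stencil radii one and three and an arbitrary one - percent error criterion ; the tuned coefficients of table [ tablek ] and the paper's own precision threshold are not reproduced here , so the printed fractions will differ from the quoted values .

....
import numpy as np

# Standard centered first-derivative coefficients c_j for stencil radii 1 and 3
# (polynomial-based values, not the paper's tuned coefficients).
COEFFS = {
    1: [0.5],
    3: [3.0 / 4.0, -3.0 / 20.0, 1.0 / 60.0],
}

def modified_wavenumber(k, radius):
    """Effective wavenumber of a centered stencil applied to exp(i k x),
    with k in radians per grid spacing (Nyquist is k = pi)."""
    c = COEFFS[radius]
    return sum(2.0 * c[j] * np.sin((j + 1) * k) for j in range(len(c)))

def usable_fraction(radius, rel_err=0.01, samples=4096):
    """Largest fraction of the Nyquist interval with |k_eff - k| <= rel_err * k."""
    ks = np.linspace(1e-6, np.pi, samples)
    ok = np.abs(modified_wavenumber(ks, radius) - ks) <= rel_err * ks
    bad = np.where(~ok)[0]
    return 1.0 if bad.size == 0 else ks[bad[0]] / np.pi

if __name__ == "__main__":
    for r in (1, 3):
        print(f"radius-{r} stencil usable up to k/k_nyquist ~ {usable_fraction(r):.2f}")
....

running the script prints the usable fraction of the nyquist interval for each radius under the one - percent criterion ; tightening or loosening that criterion moves the numbers toward or away from the values quoted in the text .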
hyperresistivity is desirable because it is more effective than laplacian resistivity in diffusing high - wavenumber modes while at the same time preserving low - wavenumber modes .the fact that hyperresistivity can be written as a curl allows its inclusion into ct .if laplacian diffusivity were used instead , too much high - wavenumber structure would be diffused for the high - order or tuned derivatives to matter .the resolution of the algorithm described here approaches that of a spectral code , but because it uses finite differences , it runs faster than a spectral code and is nt restricted to periodic boundary conditions .also , since the finite differences are local , it is easily scalable to thousands of processors .the spectral algorithm is more difficult to scale to large numbers of processors because it involves all - to - all communications between processors .a finite difference code only passes information between processors whose subgrids are adjacent in physical space .lastly , because the code works with the magnetic field rather than the vector potential , boundary conditions are often easier to implement .we received support for this work from nsf career grant ast99 - 85392 , nsf grants ast03 - 07854 and ast06 - 12724 , and nasa grant nag5 - 10103 .we acknowledge stimulating discussions with e. blackman , a. brandenburg , b. chandran , and j. stone , and we also acknowledge the referee , wolfgang dobler , for thorough comments that improved the paper . "constrained transport expresses the magnetic induction equation as a pure curl plus a divergence diffusivity : where is defined in equation ( [ eqctb ] ) .the term serves to diffuse away magnetic divergence , and the finite differences are arranged so that thus , the curl term does not contribute to the evolution of the magnetic divergence , and if the initial conditions are divergence - free , the magnetic divergence remains zero throughout the evolution . to see how constrained transport works , denote the vector field by = where are integers specifying the locations of grid cell centers .there are two basic grid types : centered " and staggered " ( figure [ figstagger ] ) . for a centered grid , scalar and vector quantitiesare located at cell centers . for a staggered grid ,scalar quantities are located at cell centers and vector quantities at cell faces .for instance , we would index the components of as the finite divergence divergence of the curl of is which consists of terms such as one can straightforwardly see that this is zero for finite differences of the form eq .[ stencil ] for both centered and staggered grids . thus , constrained transport can be coordinated with high - order and tuned finite differences , as well as with hyperresistivity . for a staggered grid , " vectors are located at cell edges , whereas the and vectors from which they are constructed are found at cell faces .a staggered grid ct scheme therefore involves spatial interpolation , one example being the method of characteristics scheme for time - interpolating alfvn waves .we use volume - centered fields and runge - kutta timestepping because , among other reasons , no interpolation is required .colella , p. , & woodward , p. r. 1984 , j. comput .phys . , 54 , 174 dedner , a. , kemm , f. , kroner , d. , munz , c .- d . , schnitzer , t. & wesenberg , m. 2002 , j. comput .phys . , 175 , 645 dobler , w. , stix , m. & brandenburg , a. 2006 , apj 638 , 336
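the appendix argument above — that a divergence built from the same antisymmetric , shift - based differences as the curl vanishes identically — is easy to check numerically . the sketch below forms the curl of a random periodic field with radius - one centered differences on a volume - centered grid and evaluates the matching centered divergence ; the radius - one stencil and the periodic box are illustrative simplifications , but the cancellation holds for any stencil of the form discussed in the appendix .

....
import numpy as np

def centered_curl(ex, ey, ez, h=1.0):
    """Curl of a periodic, volume-centered field using radius-one centered differences.

    Every derivative has the antisymmetric form (f[i+1] - f[i-1]) / (2h),
    which is the property the divergence-of-curl cancellation relies on.
    """
    def d(f, axis):
        return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2.0 * h)
    bx = d(ez, 1) - d(ey, 2)   # dEz/dy - dEy/dz
    by = d(ex, 2) - d(ez, 0)   # dEx/dz - dEz/dx
    bz = d(ey, 0) - d(ex, 1)   # dEy/dx - dEx/dy
    return bx, by, bz

def centered_divergence(bx, by, bz, h=1.0):
    def d(f, axis):
        return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2.0 * h)
    return d(bx, 0) + d(by, 1) + d(bz, 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    e = [rng.standard_normal((16, 16, 16)) for _ in range(3)]
    b = centered_curl(*e)
    # the divergence of the discrete curl cancels term by term, so this is ~1e-16
    print("max |div curl E| =", np.max(np.abs(centered_divergence(*b))))
....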
numerical simulations including magnetic fields have become important in many fields of astrophysics . evolution of magnetic fields by the constrained transport algorithm preserves magnetic divergence to machine precision , and thus represents one preferred method for the inclusion of magnetic fields in simulations . we show that constrained transport can be implemented with volume - centered fields and hyperresistivity on a high - order finite difference stencil . additionally , the finite - difference coefficients can be tuned to enhance high - wavenumber resolution . similar techniques can be used for the interpolations required for dealiasing corrections at high wavenumber . together , these measures yield an algorithm with a wavenumber resolution that approaches the theoretical maximum achieved by spectral algorithms . because this algorithm uses finite differences instead of fast fourier transforms , it runs faster and is not restricted to periodic boundary conditions . also , since the finite differences are spatially local , this algorithm is easily scalable to thousands of processors . we demonstrate that , for low - mach - number turbulence , the results agree well with a high - order , non - constrained - transport scheme with poisson divergence cleaning .
rewriting and pattern - matching are of general use for describing computations and deduction . programming with rewrite rules and strategies has been proven most useful for describing computational logics , transition systems or transformation engines , and the notions of rewriting and pattern matching are central notions in many systems , like expert systems ( jrule ) , programming languages based on rewriting ( elan , maude , obj ) or functional programming ( , haskell ) . in this context , we are developing the system , which consists of a language extension adding syntactic and associative pattern matching and strategic rewriting capabilities to existing languages like , and ocaml .this hybrid approach is particularly well - suited when describing transformations of structured entities like trees / terms and documents .one of the main originalities of this system is to be data structure independent .this means that a _ mapping _ has to be defined to connect algebraic data structures , on which pattern matching is performed , to low - level data structures , that correspond to the implementation .thus , given an algebraic data structure definition , it is needed to implement an efficient support for this definition in the language targeted by the system , as or do not provide such data structures .tools like apigen and vas , which is a human readable language for apigen input where used previously for generating such an implementation to use with . however , experience showed that providing an efficient term data structure implementation is not enough . when implementing computational logics or transition systems with rewriting and equational matching ,it is convenient to consider terms modulo a particular theory , as identity , associativity , commutativity , idempotency , or more problem specific equations .then , it becomes crucial to provide the user of the data structure a way to conveniently describe such rules , and to have the insurance that only chosen equivalence class representatives will be manipulated by the program .this need shows up in many situations .for instance when dealing with abstract syntax trees in a compiler , and requiring constant folding or unboxing operators protecting particular data structures .is a language for describing multi - sorted term algebras designed to solve this problem . like apigen , vas or ,its goal is to allow the user of an imperative or object oriented language to describe concisely the algebra of terms he wants to use in an application , and to provide an ( efficient ) implementation of this algebra .moreover , it provides a mechanism to describe normalization functions for the operators , and it ensures that all terms manipulated by the user of the data structure are normal with respect to those rules .includes the same basic functionality as apigen and vas , and ensures that the data structure implementation it provides are maximally shared .also , the generated data structure implementation supports the visitor combinator pattern , as the strategy language of relies on this pattern .even though can be used in any environment , its features have been designed to work in synergy with .thus , it is able to generate correct mappings for the data structure ( i.e. being _ formal anchors _ ) . 
provides a way to define computationally complex constructors for a data structure .it also ensures those constructors are used , and that no _ raw _ term can be constructed .private types in the ocaml language do provide a similar functionality by hiding the type constructors in a private module , and exporting construction functions .however , using private types or normal types is made explicit to the user , while it is fully transparent in .moca , developed by frdric blanqui and pierre weis is a tool that implements normalization functions for theories like associativity or distributivity for ocaml types .it internally uses private types to implement those normalization functions and ensure they are used , but could also provide such an implementation for .the rest of the paper is organized as follows : in section [ sec : tom ] , to motivate the introduction of , we describe the programming environment and its facilities .section [ sec : gom ] presents the language , its semantics and some simple use cases . after presenting how can cooperate with in section [ sec : interact ] ,we expose in section [ sec : structure ] the example of a prover for the calculus of structures showing how the combination of and can help producing a reliable and extendable implementation for a complex system .we conclude with summary and discussions in section [ sec : conclusion ] .is a language extension which adds pattern matching primitives to existing imperative languages .pattern - matching is directly related to the structure of objects and therefore is a very natural programming language feature , commonly found in functional languages .this is particularly well - suited when describing various transformations of structured entities like , for example , trees / terms , hierarchized objects , and documents .the main originality of the system is its language and data - structure independence . from an implementation point of view, it is a compiler which accepts different _ native languages _ like or and whose compilation process consists in translating the matching constructs into the underlying native language .it has been designed taking into account experience about efficient compilation of rule - based systems , and allows the definition of rewriting systems , rewriting rules and strategies . for an interested reader ,design and implementation issues related to are presented in .is based on the notion of formal anchor presented in , which defines a mapping between the algebraic terms used to express pattern matching and the actual objects the underlying language manipulates . thus , it is data structure independent , and customizable for any term implementation .for example , when using as the host language , the sum of two integers can be described in as follows : .... term plus(term t1 , term t2 ) { % match(nat t1 , nat t2 ) { x , zero - > { return x ; } x , suc(y ) - > { return suc(plus(x , y ) ) ; } } } .... here the definition of ` plus ` is specified functionally , but the function ` plus ` can be used as a function to perform addition . 
`nat ` is the algebraic sort manipulates , which is mapped to objects of type ` term ` .the mapping between the actual object ` term ` and the algebraic view ` nat ` has to be provided by the user .the language provides support for matching modulo sophisticated theories .for example , we can specify a matching modulo associativity and neutral element ( also known as list - matching ) that is particularly useful to model the exploration of a search space and to perform list or based transformations . to illustrate the expressivity of list - matchingwe can define the search of a ` zero ` in a list as follows :.... boolean haszero(termlist l ) { % match(natlist l ) { conc(x1*,zero , x2 * ) - > { return true ; } } return false ; } .... in this example , _ list variables _ , annotated by a ` * ` should be instantiated by a ( possibly empty ) list . given a list ,if a solution to the matching problem exists , a ` zero ` can be found in the list and the function returns ` true ` , ` false ` otherwise , since no ` zero ` can be found .although this mechanism is simple and powerful , it requires a lot of work to implement an efficient data structure for a given algebraic signature , as well as to provide a _ formal anchor _ for the abstract data structure .thus we need a tool to generate such an efficient implementation from a given signature .this is what tools like apigen do .however , apigen itself only provides a tree implementation , but does not allow to add behavior and properties to the tree data structure , like defining ordered lists , neutral element or constant propagation in the context of a compiler manipulating abstract syntax tree .hence the idea to define a new language that would overcome those problems .we describe here the language and its syntax , and present an example data - structure description in .we first show the basic functionality of , which is to provide an efficient implementation in for a given algebraic signature .we then detail what makes suitable for efficiently implement normalized rewriting , and how allows us to write any normalization function .an algebraic signature describes how a tree - like data structure should be constructed .such a description contains _ sorts _ and _ operators_. _ operators _ define the different node shapes for a certain _ sort _ by their name and the names and sorts of their children .formalisms to describe such data structure definitions include apigen , schema , types , and . to this basic signature definition ,we add the notion of _ module _ as a set of sorts .this allows to define new signatures by composing existing signatures , and is particularly useful when dealing with huge signatures , as can be the abstract syntax tree definition of a compiler .figure [ fig : lightsyntax ] shows a simplified syntax for signature definition language . in this syntax, we see that a module can import existing modules to reuse its sorts and operators definitions .also , each module declares the sorts it defines with the * sorts * keyword , and declares operators for those sorts with productions .[ cols="<,^ , < " , ] this syntax is strongly influenced by the syntax of sdf , but simpler , since it intends to deal with abstract syntax trees , instead of parse trees .one of its peculiarities lies in the productions using the * * symbol , defining variadic operators .the notation * * is the same as in ( * ? ? ?* section 2.1.6 ) for a similar construction , and can be seen as a family of operators with arities in . 
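for concreteness, the sorts ` nat ` and ` natlist ` used in the two matching examples above could be declared in this signature language roughly as follows; the module name and the exact operator spellings are assumptions of this illustration, chosen to match the patterns ` zero ` , ` suc ` and ` conc ` used earlier. note how ` conc(nat*) ` declares a variadic operator of the kind just discussed.

....
module naturals
sorts nat natlist
abstract syntax
zero -> nat
suc(p : nat) -> nat
conc(nat*) -> natlist
....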
we will now consider a simple example of signature for booleans : .... module boolean sorts bool abstract syntax true - > bool false - > bool not(b : bool ) - > bool and(lhs : bool , rhs : bool ) - > bool or(lhs : bool , rhs : bool ) - > bool .... from this description , generates a class hierarchy where to each sort corresponds an abstract class , and to each operator a class extending this _ sort _ class .the generator also creates a factory class for each module ( in this example , called ` booleanfactory ` ) , providing the user a single entry point for creating objects corresponding to the algebraic terms . like apigen and vas ,relies on the aterm library , which provides an efficient implementation of unsorted terms for the and languages , as a basis for the generated classes . the generated data structurecan then be characterized by strong typing ( as provided by the _ composite _pattern used for generation ) and maximal subterm sharing .also , the generated class hierarchy does provide support for the visitor combinator pattern , allowing the user to easily define arbitrary tree traversals over data structures using high level constructs ( providing congruence operators ) . when using abstract data types in a program , it is useful to also define a notion of canonical representative , or ensure some invariant of the structurethis is particularly the case when considering an equational theory associated to the terms of the signature , such as associativity , commutativity or neutral element for an operator , or distributivity of one operator over another one .considering our previous example with boolean , we can consider the de morgan rules as an equational theory for booleans .de morgan s laws state and .we can orient those equations to get a confluent and terminating rewrite system , suitable to implement a normalization system , where only boolean atoms are negated .we can also add a rule for removing duplicate negation .we obtain the system : s objective is to provide a low level system for implementing such normalizing rewrite systems in an efficient way , while giving the user control on how the rules are applied . to achieve this goal ,provides a _ hook _ mechanism , allowing to define arbitrary code to execute before , or replacing the original construction function of an operator .this code can be any or code , allowing to use pattern matching to specify the normalization rules . to allow _ hooks _ definitions , we add to the syntax the definitions for _ hooks _ , and add and to the productions : lcl & : : = & * * factory \ { * } * + & : : = & * :* * * \ { * } * + & : : = & * ( * ( ) * * ) * + & : : = & * make * * make_before * * make_after * + + & : : = & a _ factory hook _ is attached to the module , and allows to define additional functions .we will see in section [ sub : invariant ] an example of use for such a _ hook_. 
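for reference, the oriented normalization system for booleans alluded to above (its display does not survive in this text) can be reconstructed as the two de morgan rules together with the double-negation rule; the operator hook shown next implements exactly these left-to-right rewrites:

\[
\neg\neg x \to x , \qquad
\neg ( x \wedge y ) \to \neg x \vee \neg y , \qquad
\neg ( x \vee y ) \to \neg x \wedge \neg y .
\]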
an _ operator hook _ is attached to an operator definition , and allows to extend or redefine the construction function for this operator .depending on the , the hook redefines the construction function ( * make * ) , or insert code before ( * make_before * ) or after ( * make_after * ) the construction function .those _ hooks _ take as many arguments as the operator they modify has children .we also define operation types with an appended * insert * , used for variadic operators .those hooks only take two arguments , when the operator they apply to is variadic , and allow to modify the operation of adding one element to the list of arguments of a variadic operator .such _ hooks _ can be used to define the boolean normalization system : .... module boolean sorts bool abstract syntax true - > bool false - > bool not(b : bool ) - > bool and(lhs : bool , rhs : bool ) - > bool or(lhs : bool , rhs : bool ) - > bool not : make(arg ) { % match(bool arg ) { not(x ) - > { return ` x ; } and(l , r ) - > { return ` or(not(l),not(r ) ) ; } or(l , r ) - > { return ` and(not(l),not(r ) ) ; } } return ` make_not(arg ) ; } .... we see in this example that it is possible to use in the _ hook _ definition , and to use the algebraic signature being defined in in the _ hook _ code .this lets the user define _ hooks _ as rewriting rules , to obtain the normalization system .the signature in the case of is extended to provide access to the default construction function of an operator .this is done here with the ` make_not(arg ) ` call .when using the _ hook _ mechanism of , the user has to ensure that the normalization system the hooks define is terminating and confluent , as it will not be enforced by the system .also , combining hooks for different equational theories in the same signature definition can lead to non confluent systems , as combining rewrite systems is not a straightforward task . 
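as a small usage sketch of the hook just defined (the variables ` p ` and ` q ` are assumed to be previously built ` bool ` terms, and the backquote construction syntax is the one shown above), client code only ever observes normal forms:

....
// hypothetical usage sketch: with the make hook installed, the construction
// function already returns normal forms, so no raw not(and(...)) term exists.
bool t = `not(and(p,q));   // yields or(not(p),not(q))
bool u = `not(not(p));     // yields p
....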
however , a higher level strata providing completion to compute normalization functions from their equational definition , and allowing to combine theories and rules could take advantage of s design to focus on high level tasks , while getting maximal subterm sharing , strong typing of the generated code and _ hooks _ for implementing the normalization functions from the strata .can then be seen as a reusable component , intended to be used as a tool for implementing another language ( as apigen was used as basis for ) or as component in a more complex architecture .the tool is best used in conjunction with the compiler .is used to provide an implementation for the abstract data type to be used in a program .the data structure definition will also contain the description of the invariants the data structure has to preserve , by the mean of _ hooks _ , such that it is ensured the program will only manipulate terms verifying those invariants .starting from an input datatype signature definition , generates an implementation in of this data structure ( possibly using internally ) and also generates an anchor for this data structure implementation for ( see figure [ fig : interaction ] ) .the users can then write code using the match construct on the generated mapping and compiles this to plain .the dashed box represents the part handled by the tool , while the grey boxes highlight the source files the user writes .the generated code is characterized by strong typing combined with a generic interface and by maximal sub - term sharing for memory efficiency and fast equality checking , as well as the insurance the hooks defined for the data structure are always applied , leading to canonical terms .although it is possible to manually implement a data structure satisfying those constraints , it is difficult , as all those features are strongly interdependent .nonetheless , it is then very difficult to let the data structure evolve when the program matures while keeping those properties , and keep the task of maintaining the resulting program manageable . in the following example , we see how the use of for the data structure definition and for expressing both the invariants in and the rewriting rules and strategy in the program leads to a robust and reliable implementation for a prover in the structure calculus .we describe here a real world example of a program written using and together .we implement a prover for the calculus of structure where some rules are promoted to the level of data structure invariants , allowing a simpler and more efficient implementation of the calculus rules .those invariants and rules have been shown correct with respect to the original calculus , leading to an efficient prover that can be proven correct .details about the correctness proofs and about the proof search strategy can be found in .we concentrate here on the implementation using .when building a prover for a particular logic , and in particular for the system in the structure calculus , one needs to refine the strategy of applying the calculus rules .this is particularly true with the calculus of structure , because of deep inference , non confluence of the calculus and associative - commutative structures .we describe here briefly the system , to show how and can help to provide a robust and efficient implementation of such a system . atoms in are denoted by structures are denoted by and generated by where , the _ unit _ , is not an atom . 
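the grammar of structures referred to above does not survive in this text; reconstructed from the standard presentation of the calculus of structures (so the notational details are an assumption here), structures are generated by

\[
R \;::=\; \circ \;\mid\; a \;\mid\; \langle R ; \dots ; R \rangle \;\mid\; [\, R , \dots , R \,] \;\mid\; (\, R , \dots , R \,) \;\mid\; \overline{R} ,
\]

where $a$ ranges over atoms, $\circ$ is the unit (which is not an atom), $\langle \dots \rangle$, $[\, \dots \,]$ and $(\, \dots \,)$ build the seq, par and copar structures named just below, and $\overline{R}$ denotes negation.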
is called a _ seq structure _ , is called a _par structure _ , and is called a _ copar structure _ , is the _ negation _ of the structure .a structure is called a _ proper par structure _ if where and . a _ structure context _ , denoted as in , is a structure with a hole .we use this notation to express the deduction rules for system , and will omit context braces when there is no ambiguity .the rules for system are simple , provided some equivalence relations on terms .the seq , par and copar structures are associative , par and copar being commutative too . also , is a neutral element for seq , par and copar structures , and a seq , par or copar structure with only one substructure is equivalent to its content .then the deduction rules for system can be expressed as in figure [ fig : bv ] .because of the contexts in the rules , the corresponding rewriting rules can be applied not only at the top of a structure , but also on each subterm of a structure , for implementing deep inference .deep inference then , combined with associativity , commutativity and as a neutral element for seq , par and copar structures leads to a huge amount of non - determinism in the calculus .a structure calculus prover implementation following strictly this description will have to deal with this non - determinism , and handle a huge search space , leading to inefficiency .the approach when using and will be to identify canonical representatives , or preferred representatives for equivalence classes , and implement the normalization for structures leading to the selection of the canonical representative by using s _ hooks_. this process requires to define the data structure first , and then define the normalization .this normalization will make sure all units in seq , par and copar structures are removed , as is a neutral for those structures .we will also make sure the manipulated structures are _ flattened _ , which corresponds to selecting a canonical representative for the associativity of seq , par and copar , and also that subterms of par and copar structures are ordered , taking a total order on structures , to take commutativity into account . when implementing the deduction rule , it will be necessary to take into account the fact that the prover only manipulates canonical representatives .this leads to simpler rules , and allow some new optimizations on the rules to be performed .we first have to give a syntactic description of the structure data - type the prover will use , to provide an object representation for the _ seq _ , _ par _ and _ copar _ structures ( , and ) . in our implementation , we considered these constructors as unary operators which take a _ list of structures _ as argument . using ,the considered data structure can be described by the following signature : .... module struct imports public sorts struc strucpar struccop strucseq abstract syntax o - > struc a - > struc b - > struc c - > struc d - > struc ... other atom constants neg(a : struc ) - > struc concpar ( struc * ) - > strucpar conccop ( struc * ) - > struccop concseq ( struc * ) - > strucseq cop(copl : struccop ) - > struc par(parl : strucpar ) - > struc seq(seql : strucseq ) - > struc .... to represent structures , we define first some constant atoms . 
among them , the ` o ` constant will be used to represent the unit .the ` neg ` operator builds the negation of its argument .the grammar rule ` par(strucpar ) - > struc ` defines a unary operator ` par ` of sort ` struc ` which takes a ` strucpar ` as unique argument .similarly , the rule ` concpar(struc * ) - > strucpar ` defines the ` concpar ` operator of sort ` strucpar ` .the syntax ` struc * ` indicates that ` concpar ` is a _ variadic - operator _ which takes an indefinite number of ` struc ` as arguments .thus , by combining ` par ` and ` concpar ` it becomes possible to represent the structure by ` par(concpar(a , b , c ) ) ` .note that this structure is flattened , but with this description , we could also use nested ` par ` structures , as in ` par(concpar(a , par(concpar(b , c ) ) ) ) ` to represent this structure . and are represented in a similar way , using ` cop , seq ` , ` conccop ` , and ` concseq ` .so far , we can manipulate objects , like ` par(concpar ( ) ) ` , which do not necessarily correspond to intended structures .it is also possible to have several representations for the same structure .hence , ` par(concpar(a ) ) ` and ` cop(conccop(a ) ) ` both denote the structure ` a ` , as .thus , we define the canonical ( prefered ) representative by ensuring that * , and are reduced when containing only one sub - structure : + * nested structures are flattened , using the rule : * subterms are sorted ( according to a given total lexical order ) : + if . this notion of canonical form allows us to efficiently check if two terms represent the same structure with respect to commutativity of those connectors , neutral elements and reduction rules .the first invariant we want to maintain is the reduction of singleton for _ seq _ , _ par _ and _ copar _ structures .if we try to build a ` cop ` , ` par ` or ` seq ` with an empty list of structures , then the creation function shall return the unit ` o ` .else if the list contains only one element , it has to return this element .otherwise , it will just build the requested structure .as all manipulated terms are canonical forms , we do not have for this invariant to handle the case of a structure list containing the unit , as it will be enforced by the list invariants .this behavior can be implemented as a _ hook _ for the ` seq ` , ` par ` and ` cop ` operators ..... par(parl : strucpar ) - > struc par : make ( l ) { % match(strucpar l ) { concpar ( ) - > { return ` o ( ) ; } concpar(x)- > { return ` x ; } } return ` make_par(l ) ; } .... this simple _ hook _ implements the invariant for singletons for ` par ` , and use a call to the constructor ` make_par(l ) ` to call the intern constructor ( without the normalization process ) , to avoid an infinite loop .similar hooks are added to the description for ` cop ` and ` seq ` operators .we see here how the pattern matching facilities of embedded in can be used to easily implement normalization strategies .the _ hooks _ for normalizing structure lists are more complex .they first require a total order on structures .this can be easily provided as a function , defined in a ` factory ` hook .the comparison function we provide here uses the builtin translation of generated data structures to text to implement a lexical total order. a more specific ( and efficient ) comparison function could be written , but for the price of readability ..... 
factory { public int comparestruc(object t1 , object t2 ) { string s1 = t1.tostring ( ) ; string s2 = t2.tostring ( ) ; int res = s1.compareto(s2 ) ; return res ; } } .... once this function is provided , we can define the hooks for the variadic operators ` concseq ` , ` concpar ` and ` conccop ` .the hook for ` concseq ` is the simplest , since the structures are only associative , with as neutral element .then the corresponding hook has to remove the units , and flatten nested ` seq ` ..... concseq ( struc * ) - > strucseq concseq : make_insert(e , l ) { % match(struc e ) { o ( ) - > { return l ; } seq(concseq(l * ) ) - > { return ` concseq(l*,l * ) ; } } return ` make_concseq(e , l ) ; } .... this _ hook _ only checks the form of the element to add to the arguments of the variadic operator , but does not use the shape of the previous arguments .the _ hooks _ for ` conccop ` and ` concpar ` are similar , but they do examine also the previous arguments , to perform sorted insertion of the new argument .this leads to a sorted list of arguments for the operator , providing a canonical representative for commutative structures ..... concpar ( struc * ) - > strucpar concpar : make_insert(e , l ) { % match(struc e ) { o ( ) - > { return l ; } par(concpar(l * ) ) - > { return ` concpar(l*,l * ) ; } } % match(strucparl ) { concpar(head , tail * ) - > { if(!(comparestruc(e , head )< 0 ) ) { return ` make_concpar(head , concpar(e , tail * ) ) ; } } } return ` make_concpar(e , l ) ; } .... the associative matching facility of is used to examine the arguments of the variadic operator , and decide whether to call the builtin construction function , or perform a recursive call to get a sorted insertion .as the structure calculus verify the de morgan rules for the negation , we could write a hook for the ` neg ` construction function applying the de morgan rules as in section [ sub : canon ] to ensure only atoms are negated .this will make implementing the deduction rules even simpler , since there is then no need to propagate negations in the rules .once the data structure is defined , we can implement proof search in system in a program using the defined data structure by applying rewriting rules corresponding to the calculus rules to the input structure repeatedly , until reaching the goal of the prover ( usually , the unit ) .those rules are expressed using s pattern matching over the data structure .they are kept simple because the equivalence relation over structures is integrated in the data structure with invariants . in this example , and structures are associative and commutative , while the canonical representatives we use are sorted and flattened variadic operators .for instance , the rule of figure [ fig : bv ] can be expressed as the two rules and , using only associative matching instead of associative commutative matching .then , those rules are encoded by the following match construct , which is placed into a strategy implementing rewriting in arbitrary context ( congruence ) to get deep inference , the ` c ` collection being used to gather multiple results : .... 
% match(struc t ) { par(concpar(x1*,cop(conccop(r*,t*)),x2*,u , x3 * ) ) - > { if(`t*.isempty ( ) || ` r*.isempty ( ) ) { } else { strucpar context = ` concpar(x1*,x2*,x3 * ) ; if(canreact(`r*,`u ) ) { strucpar parr = cop2par(`r * ) ; // transform a struccop into a strucpar struc elt1 = ` par(concpar ( cop(conccop(par(concpar(parr*,u)),t*)),context * ) ) ;c.add(elt1 ) ; } if(canreact(`t*,`u ) ) { strucpar part = cop2par(`t * ) ; struc elt2 = ` par(concpar ( cop(conccop(par(concpar(part*,u)),r*)),context * ) ) ; c.add(elt2 ) ; } } } } .... we ensure that we do not execute the right - hand side of the rule if either ` r ` or ` t ` are empty lists .the other tests implement restrictions on the application of the rules reducing the non - determinism .this is done by using an auxiliary predicate function ` canreact(a , b ) ` which can be expressed using all the expressive power of both and in a ` factory ` hook .the interested reader is referred to for a detailed description of those restrictions .also , the search strategy can be carefully crafted using both and constructions , to achieve a very fine grained and evolutive strategy , where usual algebraic languages only allow breadth - first or depth - first strategies , but do not let the programmer easily define a particular hybrid search strategy . while the approach of search strategies may lead to more complex implementations for simple examples ( as the search space has to be handled explicitly ) , it allows us to define fine and efficient strategies for complex cases .the implementation of a prover for system with and leads not only to an efficient implementation , allowing to cleanly separate concerns about strategy , rules and canonical representatives of terms , but also to an implementation that can be proven correct , because most parts are expressed with the high level constructs of and instead of pure . as the data structure invariants in and the deduction rules in are defined algebraically , it is possible to prove that the implemented system is correct and complete with respect to the original system , while benefiting from the expressive power and flexibility of to express non algebraic concerns ( like building a web applet for the resulting program , or sending the results in a network ) .we have presented the language , a language for describing algebraic signatures and normalization systems for the terms in those signatures .this language is kept low level by using and to express the normalization rules , and by using _hooks _ for describing how to use the normalizers .this allows an efficient implementation of the resulting data structure , preserving properties important to the implementation level , such as maximal subterm sharing and a strongly typed implementation .we have shown how this new tool interacts with the language .as provides pattern matching , rewrite rules and strategies in imperative languages like or , provides algebraic data structures and canonical representatives to . 
even though can be used simply within ,most benefits are gained when using it with , allowing to integrate formal algebraic developments into mainstream languages .this integration can allow to formally prove the implemented algorithms with high level proofs using rewriting techniques , while getting a implementation as result .we have applied this approach to the example of system in the structure calculus , and shown how the method can lead to an efficient implementation for a complex problem ( the implemented prover can tackle more problems than previous rule based implementation ) . as the compilation process of s pattern matching is formally verified and shown correct ,proving the correctness of the generated data structure and normalizers with respect to the description would allow to expand the trust path from the high level algorithm expressed with rewrite rules and strategies to the code generated by the compilation of and .this allows to not only prove the correctness of the implementation , but also to show that the formal parts of the implementation preserve the properties of the high level rewrite system , such as confluence or termination .* acknowledgments : * i would like to thank claude kirchner , pierre tienne moreau and all the developers for their help and comments .special thanks are due to pierre weis and frederic blanqui for fruitful discussions and their help in understanding the design issues .comon , h. and j .- p .jouannaud , _ les termes en logique et en programmation _ ( 2003 ) , master lectures at univ .paris sud .http://www.lix.polytechnique.fr / labo/% jean - pierre.jouannaud / articles / cours - tlpo.pdf[http://www.lix.polytechnique.fr / labo/% jean-pierre.jouannaud/articles/cours-tlpo.pdf ] , o. , _ implementing system bv of the calculus of structures in maude _ , in : l. a. i alemany and p. ' egr ' e , editors , _ proceedings of the esslli-2004 student session _ ,universit ' e henri poincar ' e , nancy , france , 2004 , pp . 117127 , 16th european summer school in logic , language and information .kirchner , c. , p .- e .moreau and a. reilles , _ formal validation of pattern matching code _ , in : p. barahone and a. felty , editors , _ proceedings of the 7th acm sigplan international conference on principles and practice of declarative programming _ ( 2005 ) , pp .187197 .kirchner , h. and p .- e .moreau , _ promoting rewriting to a programming language : a compiler for non - deterministic rewrite programs in associative - commutative theories _ ,journal of functional programming * 11 * ( 2001 ) , pp .207251 .moreau , p .- e ., c. ringeissen and m. vittek , _ a pattern matching compiler for multiple target languages _ , in : g. hedin , editor , _12th conference on compiler construction , warsaw ( poland ) _ , lncs * 2622 * ( 2003 ) , pp . 6176 .
this paper presents a language for describing abstract syntax trees and generating an implementation for those trees . the language includes features allowing the user to specify and modify the interface of the data structure . these features provide in particular the capability to maintain the internal representation of data in canonical form with respect to a rewrite system . this explicitly guarantees that the client program only manipulates normal forms for this rewrite system , a feature which is only implicitly used in many implementations .
in a recent paper fletcher , shor and win analyze error correction schemes obtained by ab initio optimization , rather than the adaptation of classical coding techniques .the basic idea is that both encoding and recovery ( or decoding ) can be arbitrary channels , and that for a given number of invocations of a noisy discrete memoryless channel the objective is to bring the channel as close to the identity as possible .if the so - called channel fidelity used as a figure of merit ( as in and ) the optimization is clearly a semidefinite problem , by virtue of jamiolkowski - choi duality . in presented an alternative algorithm for this semidefinite problem , and used it to generate optimal codes for various noisy channels , by alternatingly optimizing the encoding channel and the recovery channel . in the recovery is optimized , which already gives a marked improvement some over previously known codes , specifically for the case of the four - bit correction of the amplitude damping channel . the possibility of further improvements by optimizingalso the encoding is noted in the discussion ( citing also ) .it so turns out that the required computation ( even for the same test case ) was already done in the autumn of 2003 in a collaboration between the first two authors and the third author of this note , with the aim of checking the power - iteration method of against the better established semidefinite method .since these results directly support the perspective forwarded in , we felt it appropriate to make them immediately available .for the theoretical background we refer to either or . in both paperssimilar ideas and notations are used , so they should be readily accessible from each other .the _ amplitude damping channel _ is the qubit channel with kraus operators these channels form a semigroup ( , which contracts to the first `` unexcited '' basis state .the channel fidelity of a noisy channel is defined as where is a maximally entangled vector .note that this requires input and output of the channel to be systems with the same hilbert space , which is adapted to comparing the channel with the identity , which is the unique channel with .this is also closely related to the average fidelity for pure input states ( with the average taken according to the unitarily invariant measure ) .the main virtue of choosing this fidelity as a figure characterizing the deviation from the identity is that it is linear in .such a linear criterion is possible only because the ideal channel is on the boundary of the set of channels .error correction results for the amplitude damping channel ( four copies ) using three methods : ( a ) no coding , ( b ) optimized decoding by fletcher et al . ( with encoding by leung et al . ) and ( c ) our iterative optimization of both , encoding and decoding .,width=264 ] the curve is the dotted line in fig . 1 .the other lines represent for various choices of and .the dashed line uses the encoding by leung et al . with an optimized decoding .in fact , we computed this line by our routines , and it coincides to within pixel resolution with the graph in .the solid line is the result of the iteration , in which and are alternatingly optimized , keeping the other operation fixed .this iteration has a strict improvement over the leung code for .the methods in and have the following characteristic features : * for a known channel these methods yield excellent results , without using any special properties of the channel like symmetry etc .. 
* the optimization of either encoding or decoding is a semidefinite problem for which the solution is a certified global optimum .the process of alternatingly optimizing these therefore improves the objective in every step , and hence converges to a local optimum . however , there is no guarantee for having found the global optimum . *the methods suffer from the familiar explosion of difficulty as the system size is increased .correction schemes like the five qubit code can still be handled on a pc , but a nine qubit code would involve optimization over -matrices , which is set up by multiplying and contracting some matrices in dimensions .this may be possible on large machines , but it is clear that these methods are useless for asymptotic questions . * the iteration method replacing the semidefinite package in has a slight advantage here , because it works with a fixed number of kraus operators .so for the encoding one can put in by hand an isometric encoding , which , as our study shows , is often optimal .this cuts down on dimensions , at least for the optimization of encodings . * for asymptotic coding theory onestill needs codes , which can be described also for very large dimensions , be it by explicit parameterization or by a characterization of typical instances of random codes .it is here that methods transferred from classical coding theory will continue to be useful .a. s. fletcher , p. w. shor and m. z. win , `` optimum error recovery using semidefinite programming '' , quant - ph/0606035 ( 2006 ) m. reimpell and r. f. werner , phys .lett . * 94 * , 080501 ( 2005 ) ( also quant - ph/0307138 ) . m. reimpell , `` quantum error correction as a semi - definite programme '' , 26th a2 meeting , potsdam ( 2003 ) m. reimpell and r. f. werner , `` quantum error correcting codes - ab initio '' , dpg spring conference , mnchen ( 2004 ) m. reimpell , `` ab initio optimization of quantum codes '' , phd thesis , in preparation m. reimell and r. f. werner , `` iterative optimization of quantum error correcting codes '' , qis workshop , cambridge ( 2004 ) k. audenaert and b. de moor , phys . rev .a * 65 * 030302(r ) ( 2002 ) d. w. leung , m. a. nielsen , i. l. chuang and y. yamamoto , phys .a * 56 * , ( 1997 ) b. schumacher , phys .rev . a * 54 * , 2614 , ( 1996 ) m. horodecki , p. horodecki and r. horodecki , phys . rev .a * 60 * , 1888 , ( 1999 )
in a recent paper ( [ 1]=quant - ph/0606035 ) it is shown how the optimal recovery operation in an error correction scheme can be considered as a semidefinite program . as a possible future improvement it is noted that still better error correction might be obtained by optimizing the encoding as well . in this note we present the result of such an improvement , specifically for the four - bit correction of an amplitude damping channel considered in [ 1 ] . we get a strict improvement for almost all values of the damping parameter . the method ( and the computer code ) is taken from our earlier study of such correction schemes ( quant - ph/0307138 ) .
delay required to communicate message from a source to a destination , is a key metric for communication networks .however , evaluating the optimal delay required to deliver the message in a network is not widely considered as it is very difficult to evaluate delay even in networks where no security constraint is imposed on a message .we consider a two - relay network with a secrecy constraint on a message , and do not make any assumption on the statistics of the source - to - relay channels , even on the existence of it .we evaluate the minimum delay required to communicate the message to the destination reliably and securely , and find the algorithm that achieves it .the two - relay network we consider is depicted in figure [ dd ] .the goal of the source is to communicate a _finite _ size message to the destination , while keeping it secret from the relays .source - to - relay 1 and source - to - relay 2 channels are assumed to be block erasure channels , and the states of relay channels change one block to the next in an _arbitrary manner_. furthermore , we assume there is no direct channel from source to the destination , and both relay 1-to - destination and relay 2-to - destination channels are assumed to be noiseless .we study this communication model under three set - ups each of which has a different channel state information ( csi ) assumption : 1 ) genie - aided csi set - up : the source obtains the whole channel state sequence of the relay channels before the communication starts , 2 ) zero - block - delayed csi set - up : the source obtains the state of the relay channels at the beginning of a block , and 3 ) one - block delayed csi set - up : the source obtains the state of the relay channel with a 1 block delayed feedback .we evaluate the minimum number of channel blocks required to communicate message securely and reliably .the main challenge in our problem stems from the fact that since we delay with delay , we focus on the transmission of a message with _ a finite and fixed size_. hence , we can not employ traditional asymptotic approaches to show the message is communicated securely and reliably , since such approaches focus on large message sizes .to that end , we propose encoding strategies for each csi set - up to communicate the finite size message reliably and securely to the destination .our contributions are as follows : * we provide an encoding strategy to achieve the optimal delay of genie aided csi set - up and optimal delay of zero - block delayed set - up .we observe that the optimal delays of two set - ups are equal .* we bound the optimal delay of the one - block delayed csi set - up .we show that the optimal delay of the one - block delayed csi set - up differs from that of the zero - block delayed csi set - up at most one block , if the source - to - relay 1 channel or the source - to - relay 2 channel does not experience an erasure on the channel block arriving after block ._ related work : _ in his seminal paper , wyner introduces the theoretical basis for information theoretic security for the point to point setting , where the adversary eavesdrops the communication between the transmitter and the receiver . in , cai and yeungstudy the information theoretically secure communication of a message in networks with general topologies , where the adversary can eavesdrop an unknown set of communication channels .the authors assume all the channels in the network have the same capacity . 
in , the authors consider the same problem in in networks in which the channels do not need to have the same capacity . in and , the authors consider the communication channels as noiseless channels , whereas the source - to - relay 1 channel and the source - to - relay 2 channel are block erasure channels in our study. in , the authors study information theoretically secure communication over _ noisy networks _ , where each channel is assumed to be block erasure channel .the authors provide upper and lower bounds to the secrecy capacity . in , the authors study a secure communication over broadcast block erasure channel with channel state feedback at the end of each block . in both and ,the channel state changes from one block to the next in an independent and identically distributed fashion , whereas the channel state changes in an arbitrary manner in our study .also , neither of and consider the delay of noisy networks , and both of them consider message size asymptotic regimes .the delay of a noisy network even without a secrecy constraint is very difficult to evaluate .we develop an encoding strategy for the genie aided csi set - up and for the zero - block delayed csi set - up , that achieves the minimum achievable delay of the two - relay network . for the one - block delayed csi set - up , we provide a novel encoding strategy , and characterize the relation of the optimal delay of the one - block delayed csi set - up with that of the zero - block delayed csi set - up .the encoding strategies we provide in the paper also keep the message secret from the relays without any assumption on the channel statistic .we study the communication system illustrated in figure [ dd ] .the source has a message to transmit to the destination over 2-relay network .the source - to - relay 1 and the source - to - relay 2 channel are block erasure channels . in the block erasure channel model , time is divided into discrete blocks each of which contains channel uses .the channel states are assumed to be constant within a block and vary from one block to the next in an arbitrary manner .relay 1-to - destination and relay 2-to - destination channels are assumed to be error - free , i.e there is a wired connection between the relays and the destination . the observed signals at the relays and the destination in the -th block are as follows : where is the transmitted signal at -th block , is the received signal by the relay 1 , is the received signal by relay , and is the received signal by the destination at -th block . with loss of generality , we assume that at each channel use , the source - to - relay channel and the source - to - relay channel accept binary inputs , .channel states and denote the state of the source - to - relay channel and the state of the source - to - relay channel at -th block , respectively .equality denotes that the source to relay channel is in on state , i.e there is no erasure at -th block and denotes that the source to relay channel is in off state , i.e there is an erasure at -th block .define ] , each of which except the last sub - message has bits .the last sub - message is padded with random bits so that it has bits . for the secure transmission of message ,the source generates a set of keys . for each ] , = [ 1,\ ; 1] ] .hence , , if \neq \left[0,\;1\right] ] . 
with a derivation similar to ( [ init1])-([samp_space ] ) , we find that .hence , we conclude that if is an achievable delay , it has to satisfy constraints and .note that these constraints imply that and , since and are integers .in this section , we provide lower and upper bounds for the optimal delay of the one block delayed csi set - up .the tightness of the bounds depend on the number of the consecutive off - off blocks arriving after block .if the first block arriving after block is on - on block , on - off block , or off - on block , the optimal delay of one - block delayed csi set - up differs from that of genie - aided csi set - up at most one block .[ thm1block ] the optimum delay of the one block delayed csi set - up is bounded as follows : where define an on block as a block on which at least one of the source - to - channels is in the on state .block given in theorem [ thm1block ] is the first on - block incoming after block .algorithm [ enc_alg2 ] provides an encoding strategy to achieve delay .we next provide the proof of theorem [ thm1block ] we first explain algorithm 2 and then prove the second inequality in .message is partitioned into sub - messages , , i.e. , ] we prove when .the proof of for case can be done similarly . since , , , and block is the first off - on block that arrives after block .the source employing algorithm sends sub - message on the first on block arriving after block . let the block on which the source sends sub - message is off - on block .then , the index of the off - on block is .the length of the key queues at relay 1 and relay 2 at the end of block are and , both of which are greater than zero .hence , at the end of block , the source employing algorithm is ready to send message , and .now let the block on which the source sends sub - message is either on - off block or on - on block and refer this block as block . at the end of block , the length of the key queue at relay 2 will be zero .then , the source enters into key generation phase at the end of block .the source keeps sending random packets until the end of the first off - on block arriving after block .note that the index of the first off - on block arriving after block is .the random packet sent in block will be stored at key queue at relay 2 as a key and the lengths of key queue at relay and relay are non - zero at the end of block .hence , at the end of block , the source employing algorithm is ready to send message , and . , , , , , [ alg2 ]we study the minimum delay required to communicate the finite size message reliably to the destination in a two - relay network while keeping it secret from the relays , where source - to - relay channels are assumed to be block erasure channels .we provide an encoding strategy to achieve the optimal delay when the relay feedback on the states of the source - relay channels is available on the source with no delay , i.e. , the source obtains the feedback at the beginning of a channel block. then , we consider the case in which there is an one - block delayed relay feedback on the states of the source - to - relay channels , i.e. , the source obtains the feedback at the end of a block .we show that for a set of channel state sequences , the optimal delay with one - block delayed feedback differs from the optimal delay with no - delayed feedback at most one block. 1 a. d.wyner , `` the wire - tap channel '' ._ bell syst .tech . j. _ , 54(8):13551387 , october 1975 .n. cai and r. yeung , `` secure network coding , '' in _ proc .inf . 
theory _ , june 2002 , pp .t cui , t . ho , and j. kliewer `` on secure network coding with nonuniform or restricted wiretap sets , '' _ ieee trans .inf . theory _ ,59 , no . 1 ,166176 , jan . 2013 .a. mills , b. smith , t. clancy , e. soljanin , and s. vishwanath , `` on secure communication over wireless erasure networks , '' in _ proc .inf . theory _ ,jul 2008 , pp .31 , pp . 558567 , 1960 . l. czap , v. m. prabhakaran , c. fragouli , and s. diggavi``secret communication over broadcast erasure channels with state - feedback , '' http://arxiv.org/abs/1408.1800
we consider a two - relay network in which a source aims to communicate a confidential message to a destination while keeping the message secret from the relay nodes . in the first hop , the channels from the source to the relays are assumed to be block - fading , and the channel states change arbitrarily ( possibly non - stationary and non - ergodic ) across blocks . when the relay feedback on the states of the source - to - relay channels is available at the source with no delay , we provide an encoding strategy to achieve the optimal delay . we next consider the case in which there is one - block delayed relay feedback on the states of the source - to - relay channels . we show that for a set of channel state sequences , the optimal delay with one - block delayed feedback differs from the optimal delay with no - delayed feedback by at most one block .