Bell nonlocality is one of the most iconic and paradoxical aspects of quantum phenomenology. The realities of two space-like separated systems can be entangled, such that the choice of whether to measure $B$ or $B'$ on one system can affect the measurement statistics of $A$ on the other. Contextuality generalizes these ideas, noting that the essence of nonlocality survives in single, localized quantum systems: the outcome of an observable $A$ can depend on whether it is co-measured with $B$ or with $C$, even if neither $B$ nor $C$ disturbs the measurement statistics of $A$. Unlike nonlocality, contextuality can be exhibited by classical as well as quantum systems. All contextuality experiments so far have relied on collecting output statistics from a series of joint measurements on some quantum state. While this approach has been immensely successful, all outcomes in these tests can be replicated by classical systems. Suppose two black boxes are tested for contextuality. One contains a genuine quantum system that faithfully outputs the required quantum measurement statistics; the other contains a classical computer which merely simulates the mathematics of quantum theory. Current experimental tests of contextuality cannot distinguish which of the two is quantum. This stands in stark contrast to tests of nonlocality, where local hidden variables can be excluded owing to the space-like separation of Alice's and Bob's choices of measurement settings and no-signalling. The inability to exclude hidden variables is a key limitation of current contextuality experiments, which in turn hinders their use as a resource in device-independent scenarios, such as generating certified random numbers. This article aims to mitigate this limitation. We introduce experimental methods to exclude contextual hidden variables through physical principles and apply them to refine experimental tests of contextuality in quantum systems. Our approach rests on the use of an external quantum mechanical degree of freedom. By synthesizing superpositions of different contextuality measurements conditioned on this degree of freedom, we can imprint the predictions of contextuality into correlations between the system of interest and our quantum mechanical ancilla. The outcomes of contextuality experiments can thus be translated into a bipartite setting.
In doing so, we can replace the standard ad hoc assumption in contextuality tests - that any participating classical system did not reconfigure its answers based on the measurement settings - with the much more physically motivated assumption of no-signalling. This puts tests of contextuality on a similar footing to nonlocality tests, and ultimately narrows the current gap between them. The article is organized as follows. Section [sec:contextualitytests] introduces the formal framework of contextuality. Section [sec:results] outlines the specifics of our proposed experiment, together with proofs of how it distinguishes contextual hidden variables from genuine quantum contextuality. Section [sec:blackbox] describes these ideas within the general black-box framework. Section [sec:discusion] concludes the paper with discussions.

Suppose that a system has $n$ observables, including an observable $A$ which is compatible with several distinct proper subsets of the remaining observables. Each compatible subset constitutes a measurement context for $A$. The non-contextual hypothesis states that the outcome of $A$ is independent of which subset is chosen to be measured alongside it. Quantum systems of three or more Hilbert-space dimensions violate this hypothesis - they are innately contextual. Tests are formulated in terms of non-contextual inequalities: mathematical inequalities that constrain all non-contextual systems. An archetypical example is the $n$-cycle inequality, which in a standard form reads $\Omega = \sum_{i=0}^{n-1} \gamma_i \langle A_i A_{i+1} \rangle \le n-2$, where the observables $A_i$ have dichotomic $\pm 1$ outcomes, all addition of indices is done modulo $n$, the signs $\gamma_i = \pm 1$ contain an odd number of $-1$ entries, and any consecutive pair $A_i$ and $A_{i+1}$ can be measured simultaneously. This inequality is derived under the no-disturbance assumption: the marginal distribution of each $A_i$ is assumed to be independent of the measurement context. Provided this assumption is upheld, any non-contextual hidden variable will satisfy $\Omega \le n-2$. Meanwhile quantum systems can violate it up to the Tsirelson bound, $n\cos(\pi/n)$ if $n$ is even, or $[3n\cos(\pi/n)-n]/[1+\cos(\pi/n)]$ if $n$ is odd. Contextual hidden variables can also violate this inequality. Consider a hidden variable that flips an unbiased coin to generate $+1$ or $-1$ at random for the first observable in the context, and then computes the partner outcome so that each term of $\Omega$ is saturated. This hidden variable will violate the $n$-cycle inequality up to the arithmetic bound of $n$. Thus contextuality is not unique to quantum systems. It is also exhibited by classical processes which take the full complement of observables being simultaneously measured and use this input to generate a context-dependent outcome for each observable, i.e., a contextual hidden variable. This observation limits the capacity of standard contextuality tests to guarantee the non-classicality of an untrusted physical system. An unknown system may violate a noncontextual inequality simply by hosting a computer that executes a suitable contextual hidden-variable model. Indeed many such automata have already been proposed.
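As an aside, the coin-flip strategy just described is easy to make concrete. The sketch below is our illustration, not part of any published protocol; it assumes the $n$-cycle inequality in the form given above, with all $\gamma_i = +1$ except $\gamma_{n-1} = -1$, and simulates a classical box that is told the full measurement context before answering. It saturates the arithmetic bound $n$, and would therefore pass any standard contextuality test.

```python
import random

def contextual_box(context, gamma):
    """A classical 'contextual hidden variable' box.

    The box is told the full context {A_i, A_(i+1)} before answering.
    It flips an unbiased coin for the first outcome and chooses the
    partner outcome so that gamma_i * a_i * a_j = +1 on every run.
    """
    i, _ = context                       # indices of the co-measured observables
    a_i = random.choice([+1, -1])        # unbiased coin flip
    a_j = gamma[i] * a_i                 # pick the partner outcome to saturate the term
    return a_i, a_j

def run_experiment(n=5, runs=20000):
    gamma = [+1] * (n - 1) + [-1]        # standard sign choice (odd number of -1 entries)
    totals, counts = [0.0] * n, [0] * n
    for _ in range(runs):
        i = random.randrange(n)          # Alice picks a context at random
        a_i, a_j = contextual_box((i, (i + 1) % n), gamma)
        totals[i] += a_i * a_j
        counts[i] += 1
    omega = sum(gamma[i] * totals[i] / counts[i] for i in range(n))
    print(f"n = {n}: Omega = {omega:.3f}  (noncontextual bound {n-2}, arithmetic bound {n})")

if __name__ == "__main__":
    run_experiment()
```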
*Framework.* This motivates us to consider the following scenario. Alice wishes to test whether a quantum system inside a black box is contextual, while excluding contextual hidden variables. This may be motivated by a desire to generate quantum random numbers through a certifiably quantum source of contextuality. Her adversary wishes to cheat her by replacing this black box with an imitation that mimics the measurement results through contextual hidden variables. Alice is tasked with distinguishing the genuinely quantum system from the imitation while simultaneously checking for contextuality; the task of discriminating between the two boxes is equivalent to excluding contextual hidden-variable explanations of her experimental data.

Alice has an experimental apparatus which can be pre-configured to measure any compatible pair of observables $A_i$ and $A_{i+1}$. She also has a source of black boxes, supplied by an untrusted third party with the capability to generate classical or quantum boxes. In each run Alice uses the source to generate a black box and inserts it into the experimental apparatus, without knowing whether it contains a quantum or a classical system. The apparatus then returns two measurement outcomes, and at the end of the run Alice uses these outcomes as data points for the corresponding correlation terms in her experiment. We note two properties that are required for the experimental test to conclude that a system is contextual: (i) if an observable is measured twice during a single run, then both answers should be the same [item1]; (ii) the data will violate the $n$-cycle inequality above [item2]. Alice thus screens her data for compliance with both (i) and (ii). Criterion (i) is often applied to strings of measurements: checking that the outcomes of a given observable are consistent within any string of measurements is an operational way of verifying the no-disturbance assumption. This check is often advocated and used in experimental implementations, where the emphasis is on checking that the quantum measurements commute. In the following sections we develop experimental methods for excluding contextual hidden variables and use them to refine existing experimental tests of contextuality. Our refined tests exclude contextual hidden variables through their inability to satisfy both criteria (i) and (ii).

To implement the standard contextuality tests above, Alice needs to choose measurement settings at random, which requires a reliable source of randomness. One method, for example, is to use an ancillary qubit initiated in the state $|+\rangle$. By measuring this qubit in the standard Pauli $Z$ basis, Alice obtains a random bit that determines which measurement she will perform. For example, Alice may use the random bit $z$ to decide which of two sets of mutually commuting observables, $S_0$ or $S_1$, to measure: she configures her apparatus to co-measure $S_0$ if $z = 0$, and $S_1$ if $z = 1$. [Fig. [fig:schematic]: (a) $Z$-basis measurement outcomes of an ancillary qubit are used as a quantum random number generator (QRNG); if the QRNG outputs 0 the observables in $S_0$ are measured, else when it outputs 1 the settings are flipped to $S_1$. In the revamped protocol (b) a control-unitary is used to reverse the order of measurement, so the settings can be flipped retroactively - we first measure the system being tested for contextuality, and afterwards fix the settings by measuring the ancilla in the $Z$ basis. The figure highlights the tight-knit relationship with quantum delayed-choice protocols.] Our proposal is based on reversing this order of events. Instead of pre-selecting the measurement setting, Alice delays this choice until after the system of interest has been measured. Formally, suppose there exists a unitary $U$ that transforms each element of $S_0$ into an element of $S_1$.
For $n$-cycle tests based upon the standard observables, the compatible subsets $S_0$ and $S_1$ satisfy this assumption provided we ensure that $S_0$ and $S_1$ have the same number of elements. Consider the following general strategy. Let $|\psi\rangle$ represent the state of the system of interest, and $|+\rangle$ the state of the ancilla qubit. Instead of measuring the system directly, Alice first applies the aforementioned $U$ controlled on the $Z$ basis of the ancilla qubit. Following this, Alice makes the standard measurements associated with each of the observables in $S_0$, obtaining outcomes $b_1$ and $b_2$. Once these measurement outcomes are obtained, Alice generates the random number $z$ by a $Z$-basis measurement of the ancilla (see Fig. [fig:schematic] for details). When $z = 0$, Alice treats $b_1, b_2$ as the outputs of measuring the elements of $S_0$ and uses them to collect statistics on the corresponding correlation term. Otherwise the outputs are treated as if the elements of $S_1$ were measured, and are used to evaluate the correlation term for the elements of $S_1$. Thus Alice now decides what questions to ask a possibly contextual system of interest _after_ she receives the answers to those questions. The main idea is that the black box's incumbent PhD student must now generate outcomes without knowing exactly which observables are being measured. Such a PhD student can no longer execute a contextual hidden-variable strategy, which uses the full complement of observables being measured to compute its outcomes. In what follows, we demonstrate that an appropriate choice of $S_0$ and $S_1$ allows Alice to exclude classical contextual hidden variables. The exact specifics vary depending on whether the number of observables, $n$, is even or odd.

[sec:evenncycles] *When $n$ is even*, we execute the protocol above with $S_0 = \{A_k, A_{k+1}\}$ and $S_1 = \{A_{k+2}, A_{k+3}\}$ for some $k$. In instances where the ancilla measurement outcome $z$ is $0$, we have effectively measured $A_k$ and $A_{k+1}$ and can thus collect statistics about $\langle A_k A_{k+1}\rangle$. In instances where $z = 1$, similar reasoning allows us to collect information on $\langle A_{k+2} A_{k+3}\rangle$. For example, repetition of the protocol with $k$ set to $0$ allows estimation of $\langle A_0 A_1\rangle$ and $\langle A_2 A_3\rangle$. Reiterating this procedure for each $k$ allows evaluation of each of the correlation terms in the $n$-cycle inequality. Consider a contextual classical box that attempts to mimic these statistics. Without loss of generality, we can model such a box by assuming it contains an internal memory - a contextual hidden variable. The box is assumed to have complete knowledge of Alice's experimental setup (i.e., what $S_0$ and $S_1$ are), but no knowledge about the outcome of Alice's eventual measurement of the ancilla (we will see in Section [sec:blackbox] that the latter assumption can be verified experimentally). In each run Alice makes a measurement for each observable in $S_0$. To successfully fool Alice, the box must replicate the quantum output statistics both for the runs assigned to $S_0$ and for those assigned to $S_1$. In the following we show that this is impossible: a classical box with contextual hidden variables cannot consistently satisfy conditions (i) and (ii) from Sec. [sec:contextualitytests], making it impossible for such a system to mimic genuine quantum contextuality without detection.
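Before turning to the proof below, the assumption that nothing inside the box can know $z$ in advance can be illustrated numerically: the controlled unitary is block diagonal in the ancilla's $Z$ basis, so the ancilla marginal remains the $1/2$-$1/2$ distribution fixed by the $|+\rangle$ preparation, for any system state and any system unitary. The sketch below uses a random placeholder unitary rather than the specific context-swapping $U$ of the protocol.

```python
import numpy as np

def controlled_u(u):
    """Control on the ancilla Z basis: |0><0| (x) 1  +  |1><1| (x) U  (ancilla first)."""
    d = u.shape[0]
    p0, p1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
    return np.kron(p0, np.eye(d)) + np.kron(p1, u)

def random_unitary(d, rng):
    """Random unitary from the QR decomposition of a complex Gaussian matrix."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(1)
d = 3                                      # dimension of the system register
u = random_unitary(d, rng)                 # placeholder for the context-swapping unitary
psi_sys = rng.normal(size=d) + 1j * rng.normal(size=d)
psi_sys /= np.linalg.norm(psi_sys)
plus = np.array([1.0, 1.0]) / np.sqrt(2)   # ancilla prepared in |+>

state = controlled_u(u) @ np.kron(plus, psi_sys)
# reduced density matrix of the ancilla: trace out the system register
rho_anc = np.einsum('ij,kj->ik', state.reshape(2, d), state.conj().reshape(2, d))
print("P(z=0), P(z=1) =", np.real(np.diag(rho_anc)))   # -> [0.5, 0.5]
```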
_Proof of result._ In order to violate the $n$-cycle inequality it is necessary that $\gamma_k\langle A_k A_{k+1}\rangle > -1$ holds independently for each $k$. To see this, write $\Omega$ as $\gamma_k\langle A_k A_{k+1}\rangle + R$, where $R$ collects the remaining terms. Because $R$ contains $n-1$ correlation terms, each of which satisfies $\gamma_i\langle A_i A_{i+1}\rangle \le 1$, we are guaranteed that $R \le n-1$; hence in order to violate $\Omega \le n-2$ we need $\gamma_k\langle A_k A_{k+1}\rangle > -1$.

We take the outcomes $b_1, b_2$ collected from runs with a fixed setting $k$, in conjunction with the output $z$ of the ancilla measurement, and compare the statistics conditioned on the two values of $z$. If we define the probability distributions $P(b_1,b_2) = p(b_1,b_2\,|\,z=0)$ and $Q(b_1,b_2) = p(b_1,b_2\,|\,z=1)$, the relevant quantity is the variational distance between $P$ and $Q$: since the $z=0$ and $z=1$ runs are used to estimate correlation terms that enter $\Omega$ with different signs for at least one value of $k$, reproducing a violation requires this distance to be strictly positive for at least one $k$. Let us now assume a contextual hidden variable supplied the outcomes $b_1$ and $b_2$. This assignment must be made without knowing $z$, the future outcome of measuring the ancilla qubit in the $Z$ basis. We note that $z$ is equiprobable to be $0$ or $1$, since the ancilla is prepared in the state $|+\rangle$ and the control unitary does not affect the statistics of the ancilla (the control unitary commutes with a $Z$-basis measurement of the ancilla, since with respect to the $Z$ basis of the ancilla the control unitary is block diagonal, $\mathbb{1}\oplus U$, where $\oplus$ denotes the direct sum and $\mathbb{1}$ the identity). With no prior knowledge of $z$ available when $(b_1,b_2)$ is assigned, $P = Q$, and hence the variational distance is precisely $0$. A contextual hidden variable therefore cannot satisfy the above requirement and will not violate the $n$-cycle inequality.

Of course, if the experiment is not repeated enough times, a classical system could appear to succeed due to random statistical fluctuations. In practice this problem affects all contextuality tests and has a standard resolution. Quantum systems actually violate the $n$-cycle inequality up to the Tsirelson bound of Sec. [sec:contextualitytests], so the quantum statistics ideally satisfy a stricter inequality. Alice picks a suitable $\epsilon > 0$ and checks whether the experimental outcomes exceed the non-contextual bound by at least $\epsilon$; by the central limit theorem, the probability that statistics generated by a classical hidden variable pass this stricter test dies off exponentially with the number of runs.

*When $n$ is odd*, the protocol is summarized in Fig. [fig3]. Alice intends to evaluate the correlation term $\langle A_k A_{k+1}\rangle$ for some $k$. The main difference is that Alice first implements a non-destructive measurement of the observable $A_k$ on the system, with outcome $b_1$. Once done, she implements the procedure of Sec. [sec:results] with $S_0 = \{A_k\}$ and $S_1 = \{A_{k+1}\}$: she introduces an ancillary qubit and applies the unitary $U$ mapping $A_k$ to $A_{k+1}$, controlled on the $Z$ basis of the ancilla. To determine which measurements she has effectively made, Alice completes the protocol by measuring the ancilla in the $Z$ basis, with outcome $z$. On runs where $z = 0$, Alice attributes the outcomes to two consecutive measurements of $A_k$. Otherwise, Alice treats $b_1$ as an outcome of measuring $A_k$ and $b_2$ as an outcome of measuring $A_{k+1}$. Thus, depending on the outcome $z$, Alice can collect statistics about either $\langle A_k A_k\rangle$ or $\langle A_k A_{k+1}\rangle$. [Fig. [fig3]: in the odd $n$-cycle case we use an alternate protocol, where each run allows us to measure $\langle A_k A_{k+1}\rangle$ for some $k$, or $\langle A_k A_k\rangle$. A non-destructive measurement of the observable $A_k$ on the register is followed by a control unitary mapping $A_k$ to $A_{k+1}$. Finally we repeat the measurement on the register and measure the ancilla in the $Z$ basis, generating an outcome $z$. Post-selecting on runs where the ancilla outcome is $z=1$ we can evaluate $\langle A_k A_{k+1}\rangle$, while the $z=0$ data yields $\langle A_k A_k\rangle$.]

Analogously to the even $n$-cycle case, a classical box with contextual hidden variables must replicate the output statistics of both $\langle A_k A_k\rangle$ and $\langle A_k A_{k+1}\rangle$, for both possible measurement outcomes of the ancilla. This is not possible. _Proof of result._ In order to satisfy criteria (i) and (ii) from Sec. [sec:contextualitytests], a corresponding inequality must be satisfied independently for each $k$. This follows from two observations. Firstly, criterion (i) implies $\langle A_k A_k\rangle = 1$.
Secondly, to violate the $n$-cycle inequality (criterion (ii)) we require $\gamma_k\langle A_k A_{k+1}\rangle > -1$ for all $k$. We can establish this by contradiction: assume the bound is saturated for some $k$ while a violation of the inequality is observed. This would imply that the sum of the remaining correlation terms exceeds $n-1$; but this sum contains only $n-1$ terms, each bounded in magnitude by $1$ - a contradiction. By mirroring the analysis of the even case, we observe that the conditional distributions $P$ and $Q$, defined analogously, must again differ. If $b_1$ and $b_2$ are generated by a contextual hidden variable then, using the same argument as before, the variational distance is exactly zero, provided no prior information about $z$ is available to influence the choice of $b_1$ and $b_2$. Hence the contextual box cannot satisfy the required inequality, implying that either condition (i) or condition (ii) from Sec. [sec:contextualitytests] will be contradicted.

As with all contextuality experiments, the physical implementation of this protocol must satisfy an important caveat. During the implementation Alice needs to collect data for all correlation terms, and for each correlation term she must set up her experiment in Figs. [fig:schematic]-[fig3] using a different gate. If an observable $A_k$ is measured in two different setups, i.e., the setups for $\langle A_{k-1}A_k\rangle$ and $\langle A_k A_{k+1}\rangle$, then the actual measurement must be physically identical in both cases. Otherwise any observed dependence of the outcomes on whether we measure $A_{k-1}$ or $A_{k+1}$ alongside $A_k$ (i.e., any contextuality) could be due to imperfect reconstruction of $A_k$ when changing the settings. In practice this problem is solved in standard contextuality tests by ref. and others; we simply refine the contextuality test of ref. to ensure we also fulfil this basic tenet. The even case is illustrated in Fig. [fig2]; the odd case follows the same principles and is therefore not included here.

[Fig. [fig2]: delayed-choice measurement settings for even $n$-cycles. Panel (a) demonstrates the setup that allows us to measure two of the correlation terms. A unitary is applied to the register, controlled on the state of the ancilla; the unitary is chosen so that it commutes with measurements of the shared observable and maps one measurement context onto the next. Afterwards we measure the two observables on the register, followed by a $Z$-basis measurement on the ancilla. In (b) we extend this protocol to obtain the next correlation term. We must ensure that in both (a) and (b) the measurements associated with the shared observables are physically identical. To do this we follow the techniques in ref.: we introduce a new control unitary chosen so that measurements of the shared observable commute with its action (to measure that observable we post-select on runs where the effect of the gate is the same as an identity transformation). In principle, because the only difference between (a) and (b) is a gate, and this gate does not affect the measurement outcomes of the shared observables, these observables are the same in both setups. Furthermore, the action of the two control unitaries is equivalent to a single controlled gate. Iteratively repeating this method, along with the techniques in ref.,
allows us to acquire the remaining correlation terms in the $n$-cycle inequality.]

So far we have made use of quantum mechanical terminology. This is in line with the paper's focus on describing concrete experiments that can rule out contextual hidden variables. Of course, contextuality is often discussed in a completely black-box setting, where the only properties of a system are its output statistics given input questions, and no assumption is made that the system is quantum mechanical. Our proposal generalizes naturally to this picture. Because the security of the protocol relies on the independence of the control from the system being tested for contextuality, we introduce two parties, Alice and Bob, who share a pair of black boxes. Indeed, since the quantum protocol requires Alice and Bob to share different entangled resource states, one for each of Alice's settings choices, in the $n$-cycle test the black-box analogue actually requires $n$ pairs of boxes, labelled by $k$. We want to recast the measurements in Figs. [fig:schematic]-[fig2] as a series of questions Alice and Bob ask their respective boxes. In each round of the $n$-cycle test Alice (and Bob) select a pair of boxes labelled $k$. Alice asks her box two sequential dichotomic questions and gets answers $b_1$ and $b_2$. Afterwards Bob asks his box a single question, which generates the outcome $z = 0$ or $1$. The outcomes can be treated as the outcomes from Sec. [sec:results] for the aforementioned values of $k$ and $z$.

*Verifying non-determinism of measurement selection.* Recall that our proofs above relied on the crucial assumption that the box does not have access to the value of $z$ prior to outputting $b_1$ and $b_2$. This relies on the assertion that there is no hidden variable that determines the value of $z$ prior to its measurement. The black-box framework gives us a natural way to explicitly verify this assumption. On some runs, after selecting their boxes, Alice and Bob may randomly choose to perform a Bell test between their box pairs. During this Bell test Alice chooses to ask her box one of two possible questions, while Bob chooses to ask his box one of two possible questions, one of which is selected to be the same question, $Q_B$, that Bob would have asked in the delayed-choice contextuality test. The resulting statistics are used to test violation of the CHSH inequality. Violation of this inequality certifies that there is no realistic description for the outcomes of question $Q_B$. This demonstrates that there is no hidden variable predicting Alice's decision of whether to ask the questions in $S_0$ or $S_1$; any capacity for the box to gain this knowledge would violate causality. In the specific case of quantum systems, the Bell test is performed on the final state of the circuits in Figs. [fig2]-[fig3]. The control qubit is assumed to be inside Bob's box, and the register inside Alice's box $k$.
$Q_B$ represents a $Z$-measurement, in line with our constraint that it asks the same question as our refined contextuality test. Provided the register was in some pure state directly before the control-unitary gate, the resulting bipartite system is able to violate the CHSH inequality for suitable choices of measurement settings. Specifically, Bob measures the ancilla in one of two complementary bases, one of which is the $Z$ basis, while an effective qubit is defined on Alice's system of interest, with associated Pauli operators, and two corresponding measurement settings for Alice. Application of these measurement settings allows violation of the CHSH inequality; for more details see the appendices.

Here we have developed experimental techniques for excluding contextual hidden variables and applied them to refine contextuality experiments. By conditioning the measurement settings of the contextuality test on an external ancillary quantum mechanical degree of freedom, we present a method for effectively concealing which observable is being measured in the contextuality test. This thwarts a contextual hidden variable, which needs prior knowledge of which observables are being measured in order to preassign a contextual outcome. One of the major stumbling blocks in the use of contextuality as a quantum resource, in line with nonlocality, has been its capacity to be simulated by classical information processing: how can we use contextuality as a resource if we can replace the same system with a student in a box? In developing an experimental method to certify that a quantum state is contextual while simultaneously excluding contextual hidden variables, our protocol helps address this problem. This innovation may thus lead to potential applications for contextual quantum systems - the natural candidate being, for example, the certified generation of random numbers.

It would be interesting to see whether some of the other problems surrounding contextuality tests could be addressed using similar ideas. For instance, the no-disturbance assumption, which is the direct equivalent of no-signalling in Bell tests, is a hot-button topic in contextuality tests. A key issue is that hidden-variable models which do not satisfy the no-disturbance condition (i.e., the marginal distributions they predict depend on the measurement context) can violate a noncontextual inequality. This opens an interesting possibility: could a refined protocol, which effectively hides the measurement context in any joint measurement, help address this problem? Furthermore, the topic of contextuality as a resource is controversial, particularly since, according to state-independent contextuality protocols, even the completely mixed state is contextual. By imprinting the predictions of contextuality into correlations between the test system and an external ancilla, we give a new interpretation of contextual statistics, which also links back to the quantum circuit formalism. This leaves a host of interesting problems that could be addressed in future work.

Another point of interest is the close relationship between our protocol and quantum delayed choice, a class of quantum protocols which have been demonstrated experimentally in the context of optical interferometry. The measurement statistics of a photon exiting an interferometer depend on a setting of the interferometer, which can be either open or closed.
by controlling whether the interferometer is open or closed on an ancillary quantum degree of freedom, we can effectively toggle the interferometer setting between open and closed after the photon has been measured .this innovation , provides a remarkable way to rule out a hidden variable description for the photon s measurement statistics .while the principles involved are similar to wheeler s delayed choice , quantum delayed choice is known to be more versatile and have applications to ruling out hidden variables in more general settings such as bell tests .quantum delayed choice also furnishes a technique for measuring complementary phenomena using a single configuration of the measurement apparatus .our work highlights the natural confluence of these ideas with tests of quantum contextuality .j.t . would like to acknowledge input and discussions with marek wajs , kavan modi , daniel terno and kihwan kim .this project / publication was made possible through the support of the john templeton foundation ( grant number 54914 ) and the foundational questions institute .the opinions expressed in this publication are those of the author(s ) and do not necessarily reflect the views of the john templeton foundation .the project was also supported by the national research foundation and the ministry of education in singapore through the tier 3 moe2012-t3 - 1 - 009 grant random numbers from quantum processes " as well as the national basic research program of china grant 2011cba00300 , 2011cba00302 , the national natural science foundation of china grants 11450110058 , 61033001 , and 61361136003 .99 j. s. bell , physics 1 , 195 ( 1964 ) s. kochen & e.p .specker , j. math .17 , 59 ( 1967 ) m. kleinmann , o. ghne , j. r. portillo , j. a. larsson , & a. cabello , new journal of physics , 13(11 ) , 113011 .( 2011 ) b. daki , m. suvakov , t. paterek , & .brukner , phys .lett . , 101(19 ) , 190402 ( 2008 ) n. harrigan , t. rudolph & s. aaronson , ( 2008 ) .[ arxiv:0709.1149 ] r. lapkiewicz , p. li , c. schaeff , n. k. langford , s. ramelow , m. wieniak , & a. zeilinger , nature ( london ) 474 , 490 ( 2011 ) .v. dambrosio , i. herbauts , e. amselem , e. nagali , m. bourennane , f. sciarrino , & a. cabello , phys .x. 3 , 011012 ( 2013 ) x. zhang , m. um , j. zhang , s. an , y. wang , d .-deng , c. shen , l .- m .duan , & k. kim , phys .110 , 070401 ( 2013 ) j. ahrens , e. amselem , a. cabello & m. bourennane , sci .rep . 3 , 2170 ( 2013 ) m. um , x. zhang , j. zhang , y. wang , s. yangchao , d .-deng , l .- m .duan & k kim , sci .rep . 3 , 1627 ( 2012 ) m. araujo , m. t. quintino , c. budroni , m. t. cunha , & a. cabello , phys .a , 88(2 ) , 022118 ( 2013 ) s. l. braunstein , & c. m. caves , annals of physics , 202(1 ) , 22 - 56 ( 1990 ) a. a. klyachko , m. a. can , s. binicioglu , & a. s. shumovsky , phys .lett . 101 , 020403 ( 2008 ) b. s. tsirelson , j. sov .36 , 557 ( 1987 ) a. grudka , k. horodecki , m. horodecki , p. horodecki , m. pawowski , & r. ramanathan , phys .a , 90 , 032322 ( 2013 ) j. f. clauser , m. a. horne , a. shimony & r. a. holt , phys .lett . , 23 , 880 ( 1969 ) r. colbeck , & r. renner , nat . phys ., 8(6 ) , 450 - 453 ( 2012 ) s. pironio , a. acn , s. massar , a. boyer de la giroday , d. n. matsukevich , p. maunz , s. olmschenk , d. hayes , l. luo , t. a. manning & c. monroe , nature 464 , 1021 - 1024 ( 2010 ) r. ionicioiu , & d. r. terno , phys .lett . , 107(23 ) , 230406 ( 2011 ) r. ionicioiu , t. jennewein , r. b. mann , & d. r. terno nat .commun . 5 , 3997 ( 2013 ) l. c. 
cleri , r. m. gomes , r. ionicioiu , t. jennewein , r. b. mann , & d. r. terno , foundations of physics , 44(5 ) , 576 - 587 ( 2014 ) r. ionicioiu , r. b. mann , & d. r. terno , phys .lett . , 114 , 060405 ( 2015 ) j. s. tang , y. l. li , x. y. xu , g. y. xiang , c. f. li , & g. c. guo , nat ., 6(9 ) , 600 - 604 ( 2012 ) a. peruzzo , p. shadbolt , n. brunner , s. popescu , & j. l. obrien , science , 338(6107 ) , 634 - 637 ( 2012 ) x. s. ma , s. zotter , j. kofler , r. ursin , t. jennewein , .brukner , & a. zeilinger , nat ., 8(6 ) , 479 - 484 ( 2012 ) p.j. coles , j. kaniewski , s. wehner , nat .commun . 5 , 5814 ( 2014 ) j.a .wheeler , in quantum theory and measurement , j.a.wheeler & w.h .zurek , eds .( princeton univ . press , princeton , nj , 1984 ) ,wheeler , in mathematical foundations of quantum mechanics , a.r .marlow , ed .( academic , new york , 1978 ) , pp . 948 .here we elaborate on the bell test in more detail .we illustrate the idealised scenario where the control qubit is in a state directly before the control unitary gate , while the register is in a pure state . in standard-cycle experiments the system being tested for contextuality is pure which should substantiate this assumption , see for instance .however any additional noise in the system being tested for contextuality should simply decrease the predicted bell violation . given these assumptions the state of the circuit after the control unitary is : we use the two subspaces , where , to define an effective qubit on alice s system of interest .we then equip alice with two observables and defined interms of pauli operators for her effective qubit and . and re - expresses the circuit s final state as let s assume the control qubit goes to bob and the register to alice ; and that bob uses with the standard pauli observables and on his qubit . now alice and bob can test a bell inequality : in terms of the condensed notation , the expectation value on state is : to find the best possible bell violation , we optimize as a function of , . we knowthis quantity is greater than ( or equal to ) the value of when , i.e. : now we can use , together with the choice , and to simplify this to for any this is always greater than 2 , provided alice chooses so that . note that when ( i.e. in the limiting case ) the control unitary obviously generates no entanglement and consequently we do not expect any bell inequality violation when .this is not a problem for our application which should always fall in the regime .hence alice and bob should be able to violate a bell inequality using the output of the circuits in fig .[ fig3 ] and [ fig2 ] .in addition bob may uses and -directions as his two measurement settings during the bell test . by using these settingswe can establish the z - outcomes of the control qubit in the contextuality protocol are not realistic .consequently during any run of the contextuality test , the choice of which two observables are being measured on the register is not predetermined ( because this choice depends on the z - outcome of the control qubit - a non realistic quantity ) .-cycle contextuality tests , are usually performed on -level quantum systems in the state , where we have chosen to label the hilbert space basis .the optimal settings for observables are where any pair and are compatible and for example in the -cycle case we have : the theory of euler rotations will give a systematic method for computing , provided we use the representations of given in refs . 
, we provide another appendix on how this representation is related to - .all other details should follow unaltered .tests of -cycle inequalities based on eq . , are typically implemented in a 3-level quantum system .the observables are : where the ray for and .the state that violates this inequality maximally , and is therefore usually adopted in experimental tests is .the protocol could be implemented by control - unitary based : where form a basis for the 3-d hilbert space , and .this is tantamount to a rotation in the plane spanned by and .all -cycle inequalities for even have a one to one mapping onto a chained bell inequalities ( where alice and bob each have one half of a bell pair and observables ) . in this caseour 4-level system basis vectors from sec .[ sec : evenncycles ] should be identified with a basis for the 2-qubit space according to .the state that violates the chained bell inequality maximally is and the optimal settings are for an exact correspondence with sec .[ sec : evenncycles ] note that with the basis identification above in eq . .similarly , while and .we highlight that for the chsh inequality using the bell state , the settings prescribed by are the problem of finding such that and in sec . [ sec : evenncycles ] , now inheres a systematic solution from the theory of euler rotations .this solution is the form , where each single qubit unitary for , has the free parameters , ( rotation angle and axis ) .parameters are fixed by , while are fixed by . from the form of and in eq .we observe that it is always possible to set .once we have found , it is possible , albeit convoluted to rewrite in the basis .
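As a numerical cross-check of the 3-level construction described in this appendix, the following sketch builds KCBS-type rays for the $5$-cycle (our parametrisation; the phase conventions of the cited references may differ), forms the dichotomic observables $A_j = 2|v_j\rangle\langle v_j| - \mathbb{1}$, and evaluates the cycle expression on the state $(0,0,1)$, with all signs $\gamma_j = -1$ (an odd number, as required for odd $n$).

```python
import numpy as np

n = 5
# adjacent rays are orthogonal when cos^2(theta) = cos(pi/n) / (1 + cos(pi/n))
cos2 = np.cos(np.pi / n) / (1 + np.cos(np.pi / n))
theta = np.arccos(np.sqrt(cos2))

rays = []
for j in range(n):
    phi = 4 * np.pi * j / n            # azimuthal step of 4*pi/5 makes neighbours orthogonal
    rays.append(np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)]))

A = [2 * np.outer(v, v) - np.eye(3) for v in rays]      # dichotomic +-1 observables
psi = np.array([0.0, 0.0, 1.0])                         # maximally violating state

# compatibility check: adjacent observables commute (adjacent rays are orthogonal)
assert all(np.allclose(A[j] @ A[(j + 1) % n], A[(j + 1) % n] @ A[j]) for j in range(n))

value = -sum(psi @ (A[j] @ A[(j + 1) % n]) @ psi for j in range(n))
q_bound = (3 * n * np.cos(np.pi / n) - n) / (1 + np.cos(np.pi / n))
print(f"quantum value = {value:.4f}, noncontextual bound = {n - 2}, "
      f"maximal quantum value = {q_bound:.4f}")
```

Running this prints a value of about 3.944, in agreement with the odd-$n$ maximal quantum violation quoted in the main text, against the noncontextual bound of 3.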
A PhD student is locked inside a box, imitating a quantum system by mimicking the measurement statistics of any viable observable nominated by external observers. Inside a second box lies a genuine quantum system. Either box can be used to pass a test for contextuality and, from the perspective of an external observer, the two are operationally indistinguishable: there is no way to discriminate between them based on the output statistics of any contextuality test. This poses a serious problem for contextuality tests to be used as viable tests of device-independent quantumness, and severely limits the realistic use of contextuality as an operational resource. Here we rectify this problem by developing experimental techniques for distinguishing a contextual system that is genuinely quantum from one that mimics it through clever use of hidden variables.
The brain has a highly developed and complex self-generated dynamical neural activity, and this fact raises a series of interesting issues. Does this self-sustained neural dynamics, its eigendynamics, have a central functional role, organizing overall cognitive computational activities? Or does this ongoing autonomous activity just serve as a kind of background with secondary computational tasks, like non-linear signal amplification or time encoding of neural codes? The answer to this question is important not only for systems neurobiology, but also for research in the field of cognitive computation in general. We review here approaches based on the notion that the autonomous neural dynamics has a central regulating role for cognitive information processing, and we argue that this line of research constitutes an emerging field in both computational neuroscience and cognitive systems research.

Some preliminaries before we start. This is a mostly non-technical review with emphasis on content; an exhaustive and complete discussion of the published work on the subject is not the objective here. Centrally important equations will be given and explained, but for the numerical values of the parameters involved, and for the details of the simulation set-ups, we refer to the literature. The discussion is given generally from the perspective of cognitive system theory, _viz_ bearing in mind the overall requirements of prospective complete cognitive systems, akin to those of real-world living animals.

On the experimental side, the study of self-induced or autonomous neural activity in the brain has seen several developments in recent years, especially through fMRI studies, and we start by discussing some key issues arising in this respect. The vast majority of experiments in cognitive neuroscience study the evoked neural response to certain artificial or natural sensory stimuli, often involving a given task which has been trained previously. It was realized early on that the neural response shows strong trial-to-trial variation, which is often as large as the response itself. This variability in the response to identical stimuli is a consequence of the ongoing internal neural activities (for a discussion see ). Experimentally one typically has no control over the details of the internal neural state, and it is customary to consider it as a source of noise, averaging it out by performing identical experiments many times over. It is, on the other side, well known that the majority of the energy consumption of the brain is spent on internal processes, indicating that the ongoing and self-sustained brain dynamics has an important functional role. Two possibilities are currently discussed: _(a)_ the internal neural activity could be in essence a random process with secondary functional roles, such as non-linear signal amplification or reservoir computing for the spatiotemporal encoding of neural signals (for a theory review see ); _(b)_ the internal neural activity could represent the core of the cognitive information processing, being modulated by sensory stimuli, but not directly and forcefully driven by the input signals. Indications for this scenario arise, e.g., from studies of visual information processing and of the attention system. The overall brain dynamics is still poorly understood, and both possibilities (a) and (b) are likely to be functionally relevant in different areas.
In this review we focus on the ramifications of the second hypothesis. There are indications, in this regard, that distinct classes of autonomously generated internal states correspond to dynamically switching cortical states, and that the time series of the spontaneous neural activity patterns is not random but determined by the degree of mutual relations. Additionally, these spontaneous cortical states may be semantic in nature, having a close relation to states evoked by sensory stimuli and to neural activity patterns induced via thalamic stimulation. A second characteristic recurrently found in experimental studies is the organization of the spontaneously active states into spatially anticorrelated networks, transiently stable in time in terms of firing rates, with rapid switching between subsequent states. These results indicate that certain aspects of the time evolution of the self-sustained neural activity in the brain have the form of transient state dynamics, which we discuss in detail in Sect. [sect_transient_state_dynamics], together with a high associative relation between subsequent states of mind. This form of spontaneous cognitive process has been termed an 'associative thought process'. It is currently under debate which aspects of the intrinsic brain dynamics are related to consciousness. The global organization of neural activity into anticorrelated and transiently stable states has been suggested, on one side, to be of relevance also for the neural foundations of consciousness, _viz_ the 'observing self'. The persistent default-mode network (for a critical perspective see ), _viz_ the network of brain areas active in the absence of explicit stimulus processing and task performance, has been found, on the other side, to be active also under anesthetization and light sedation. It is interesting to note, in this context, that certain aspects of the default resting mode can be influenced by meditational practices.

The term 'neural transients' characterizes evoked periods of neural activity that remain transiently stable after the disappearance of the primary stimulating signal. In the prolonged absence of stimuli, neural architectures based on neural transients relax back to the quiescent default state. Network setups based on neural transients therefore occupy a role functionally in between pure stimulus-response architectures and systems exhibiting continuously ongoing autonomous neural activity. An important class of neural architectures based on neural transients are neural reservoirs, which we now discuss briefly. A recurrent neural net is termed a reservoir if it is not involved in the primary cognitive information processing, having a supporting role. A typical architecture is illustrated in Fig. [figure_reservoir_dynamics]. The reservoir is a randomly connected network of artificial neurons which generally has only transiently stable activity in the absence of inputs, _viz_ the reservoir has a short-term memory. In the standard mode of operation an input signal stimulates the network, giving rise to complex spatiotemporal reservoir activities. Normally there is no internal learning inside the reservoir; the intra-reservoir synaptic strengths are considered fixed. Time-series prediction is the standard application range for reservoir computing. For this purpose the reservoir is connected to an output layer and the activities of the output neurons are compared to a teaching signal. With supervised learning, either online or offline, the links leading from the reservoir to the output then acquire a suitable synaptic plasticity. There are two basic formulations of reservoir computing: the 'echo-state' approach, using discrete-time rate-encoding neurons, and the 'liquid state machine', using continuous-time spiking neurons. In both cases the dimensionality of the input signal, normally consisting of just a single line, is small relative to the size of the reservoir, which may contain up to a few hundred neurons. Many nonlinear signal transformations are therefore performed by the reservoir in parallel, and the subsequent perceptron-like output neurons may solve complex tasks via efficient linear learning rules.
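A minimal echo-state sketch of this training scheme may be helpful (our illustration; the reservoir size, spectral radius and the toy prediction task are arbitrary choices, not taken from the literature reviewed here): a fixed random reservoir is driven by the input, and only the linear readout is fitted, here by ridge regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, t_len = 200, 3000

# toy task: predict the next value of a slow sine from its current value
u = np.sin(np.linspace(0, 60 * np.pi, t_len + 1))
inputs, targets = u[:-1], u[1:]

# fixed random reservoir, rescaled to spectral radius ~0.9 (echo-state property)
w = rng.normal(size=(n_res, n_res)) / np.sqrt(n_res)
w *= 0.9 / np.max(np.abs(np.linalg.eigvals(w)))
w_in = rng.normal(size=n_res) * 0.5

x = np.zeros(n_res)
states = np.zeros((t_len, n_res))
for t in range(t_len):
    x = np.tanh(w @ x + w_in * inputs[t])   # intra-reservoir weights stay fixed
    states[t] = x

# train only the readout (ridge regression), discarding an initial transient
washout, lam = 200, 1e-6
s, y = states[washout:], targets[washout:]
w_out = np.linalg.solve(s.T @ s + lam * np.eye(n_res), s.T @ y)

pred = states @ w_out
rmse = np.sqrt(np.mean((pred[washout:] - targets[washout:]) ** 2))
print(f"readout RMSE on the training signal: {rmse:.4f}")
```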
Neural reservoirs are possible candidates for local cortical networks like microcolumns. The bare-bones reservoir network is not self-active, but feedback links from the output to the reservoir may stabilize ongoing dynamical activity. In any case, reservoir nets are examples of network architectures of type (a), as defined in the previous section. The task of the reservoir, non-linear signal transformation, is performed automatically and has no semantic content; all information is stored in the efferent synaptic links. There is an interesting similarity, on a functional level, between reservoir computing and the notion of a 'global workspace'. The global workspace has been proposed as a globally distributed computational cortical reservoir, interacting with a multitude of peripheral local networks involving tasks like sensory preprocessing or motor output. The global workspace has also been postulated to have a central mediating role for conscious processes, representing the dominating hub nodes of a large-scale, small-world cortical network.

A central question in neuroscience regards the neural code, that is, the way information is transmitted and encoded (see for reviews). Keeping in mind that there is probably no pure information transmission in the brain - this would be a waste of resources, and information is also processed when transmitted - one may distinguish two issues regarding the encoding problem. On one hand, there is the question of how sensory signals are reflected, on relatively short timescales, in subsequent neural activities. Available neural degrees of freedom for this type of short-time encoding are the average firing rates (rate encoding), transient bursts of spikes, and the temporal sequence of spikes (temporal encoding). In addition, either the response of individual neurons may be important, or the response of local ensembles. The subsequent sensory signal processing, on timescales typically exceeding 25-100 ms, may on the other hand involve neural dynamics in terms of transiently stable activity patterns, as discussed earlier in Sect. [subsect_autonomous_brain_dynamics]. In Fig. [figure_transient_states] two types of model transient-state activities are illustrated: alternating subsets of neurons are either active, to various degrees, or essentially silent, resulting in well-characterized transient states having a certain degree of discreteness. This discreteness should be reflected, on a higher level, in the properties of the corresponding cognitive processes.
Of interest in this context is therefore the ongoing discussion of whether visual perception is continuous or discrete in the time domain, on timescales of the order of about 100 ms, with the discrete component of perception possibly related to object recognition. Transient state dynamics in the brain may therefore be related to semantic recognition, a connection also found in models of transient state dynamics based on competitive neural dynamics. In the following we examine the occurrence and the semantic content of autonomous transient state dynamics in several proposed cognitive architectures.

The concept of saddle-point networks is based on the premises (a) that the internal ongoing autonomous dynamics organizes the cognitive computation and (b) that the cognitive behavior is reproducible and deterministic in identical environments. As we discuss in the next section, the first assumption is shared with attractor relic networks, while the second is not. Technically, one considers a dynamical system, _viz_ a set of first-order differential equations, and the set of its saddle points, compare Fig. [figure_scenarios]. The precondition is that every saddle point has only a single unstable direction, all remaining directions being stable. Any trajectory approaching such a saddle point will then leave it, with high probability, close to the unique unstable separatrix, and the system therefore has a unique limiting-cycle attractor. This limiting cycle does not need to be a global attractor, but normally has a large basin of attraction. During one passage most, if not all, saddle points are visited one after the other, giving rise to the transient state dynamics illustrated in Fig. [figure_transient_states], with the trajectory slowing down close to each saddle point. Another condition for this concept to function is the formation of a heteroclinic cycle, which is a set in phase space invariant under time evolution, implying, as illustrated in Fig. [figure_scenarios], that the unstable separatrix of a given saddle point needs to end up as a stable separatrix of another saddle point. Such behavior usually occurs only when the underlying differential equations are invariant under certain symmetry operations, like the exchange of variables. For any practical application these symmetries need to be broken, and the limiting cycle then vanishes together with the heteroclinic sequence. It can however be restored in the form of a heteroclinic channel, if the symmetry breaking is not too strong, by adding a stochastic component to the dynamics. With noise, a trajectory loitering around a saddle point can explore a finite region of phase space close to the saddle point until it finds the unstable direction. Once the trajectory has stochastically found the unstable direction, it leaves the saddle point quickly along this direction, and a heteroclinic channel is restored functionally. Cognitive computation on the backbone of saddle-point networks is therefore essentially based on an appropriate noise level. Cognitive computation with saddle-point networks has been termed 'winnerless competition' in the context of time encoding of natural stimuli, and has been applied to the decision-making problem.
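A standard toy realisation of winnerless competition is the three-variable generalized Lotka-Volterra system with asymmetric inhibition, whose saddle points are connected into a heteroclinic cycle. The sketch below (our choice of parameters, with a weak noise term playing the role of the stochastic component discussed above) integrates it with a simple Euler scheme and reads off the sequence of transiently dominant variables.

```python
import numpy as np

rng = np.random.default_rng(2)

# asymmetric inhibition: each variable inhibits its cyclic predecessor strongly (1.7)
# and its successor weakly (0.5), producing a cyclic sequence of saddle visits
rho = np.array([[1.0, 1.7, 0.5],
                [0.5, 1.0, 1.7],
                [1.7, 0.5, 1.0]])
sigma = np.ones(3)                 # growth rates
dt, steps, noise = 0.01, 60000, 1e-6

a = np.array([0.5, 0.2, 0.1])
trace = np.zeros((steps, 3))
for t in range(steps):
    da = a * (sigma - rho @ a)                       # generalized Lotka-Volterra dynamics
    a = np.clip(a + dt * da + noise * rng.normal(size=3), 1e-12, None)
    trace[t] = a

# long plateaus of one dominant variable, switching in cyclic order
dominant = trace.argmax(axis=1)
switches = np.flatnonzero(np.diff(dominant))
print("sequence of dominant variables:", dominant[np.r_[0, switches + 1]][:12])
```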
In the latter case (decision making), interaction with the environment may generate a second unstable direction at the saddle points, and decision taking corresponds to the choice of unstable separatrix taken by the trajectory.

A trivial form of self-sustained neural activity occurs in attractor networks. Starting from any given initial state, the network state moves to the nearest attractor and stays there, with all neurons having constant firing rates of varying magnitude - the very reason attractor nets have been widely discussed as prototypes for neural memory. As such, an attractor network is useless for a cognitive system, since it needs outside help, or stimuli from other parts of the system, to leave the current attractor. There is a general strategy which transforms an attractor network into one exhibiting transient state dynamics, with the transient neural states corresponding to the fixpoints of the original attractor network. This procedure is applicable to a wide range of attractor networks and consists in expanding the phase space by introducing additional local variables akin to local activity reservoirs. To be concrete, let us denote by $x_i$ the dynamical variables of the attractor network, as illustrated in Fig. [figure_scenarios], and by $\varphi_i$ the additional reservoir variables. We assume that the reservoirs are depleted/filled when the neuron is active/inactive; together with a suitable coupling of the reservoir variables to the neural activities, one can easily achieve that the fixpoints of the attractor network become unstable, _viz_ that they are destroyed, turning into attractor ruins or attractor relics. This situation is illustrated in Fig. [figure_scenarios]. In the expanded phase space no fixpoints are left. It is not the case that the attractors simply acquire additional unstable directions upon enlargement of the phase space, turning them into saddle points; instead, the enlargement of the phase space destroys the original attractors completely. The trajectories will however still slow down considerably close to the attractor ruins, as illustrated in Fig. [figure_transient_states], provided the reservoirs are slow variables, changing only slowly with respect to the typical time constants of the original attractor network. In this case the time constant entering the time evolution of the reservoir, Eq. ([eq_general_dot_phi]), is large; in the limit of an infinite time constant the reservoir becomes static and the dynamics reduces to that of the original attractor network. The dynamics exhibited by attractor relic networks is related to the notion of chaotic itinerancy, which is characterized by trajectories wandering chaotically through phase space, with intermittent transient periods of stability close to attractor ruins. Here we consider the case of attractor relics arising from destroyed point attractors; in the general case one may also consider, e.g., limit cycles or strange attractors. The coupling to slow variables outlined here is a standard procedure for controlling dynamical systems and has been employed in various fashions for the generation and stabilization of transient state dynamics. One possibility is the use of dynamical thresholds for discrete-time rate-encoding neural nets: one considers as a slow variable the sliding-time-averaged activity of a neuron, and the threshold of a neuron is increased/decreased whenever the neuron has been active/inactive for a prolonged period.
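A minimal two-unit illustration of this slow-variable mechanism (our toy parameters, not those of any model reviewed here): two mutually inhibiting rate units, each equipped with a slow fatigue variable that acts as a dynamic threshold. The two single-winner fixpoints of the bare network become attractor relics, and the activity alternates between them with long plateaus and rapid switches.

```python
import numpy as np

def f(u):                      # steep sigmoidal rate function
    return 1.0 / (1.0 + np.exp(-u / 0.05))

# two rate units with mutual inhibition (w) and a slow fatigue variable a_i
# acting as a dynamic threshold; tau_a >> tau_x makes a_i the slow variable
I, w, g = 0.3, 1.0, 1.5
tau_x, tau_a, dt, steps = 1.0, 50.0, 0.1, 40000

x = np.array([0.6, 0.1])
a = np.zeros(2)
winner = np.zeros(steps, dtype=int)
for t in range(steps):
    inp = I - w * x[::-1] - g * a          # cross-inhibition plus fatigue
    x += dt * (-x + f(inp)) / tau_x
    a += dt * (x - a) / tau_a              # slow depletion / recovery
    winner[t] = int(x[1] > x[0])

switches = np.flatnonzero(np.diff(winner))
print("number of winner switches:", len(switches))
if len(switches) > 1:
    print("mean dwell time:", np.mean(np.diff(switches)) * dt)
```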
Another approach is to add slow components to all synaptic weights for the generation of an externally provided temporal sequence of neural patterns. In the following we outline in some detail an approach for the generation of transient state dynamics which takes an unbiased clique-encoding neural net as its starting point, the clique-encoding network being a dense and homogeneous associative network (dHAN).

Transient state dynamics is intrinsically competitive in nature. When the current transient attractor becomes unstable, the subsequent transient state is selected via a competitive process. Transient-state dynamics is a form of 'multi-winners-take-all' process, with the winning coalition of dynamical variables suppressing all other competing activities. Competitive processes resulting in quasi-stationary states with intermittent bursts of change are widespread, occurring in many spheres of the natural and the social sciences; in the context of Darwinian evolution, to give an example, this type of dynamics has been termed 'punctuated equilibrium'. In the context of research on the neural correlates of consciousness, these transiently stable states in the form of winning coalitions of competing neural ensembles have been proposed as essential building blocks for human states of the mind. The competitive nature of transient state dynamics is illustrated in Fig. [figure_competition], where a representative result of a simulation for a dHAN net is presented. During the transition from one winning coalition to the next, many neurons try to become members of the next winning coalition, which in the end is determined by the network geometry, the synaptic strengths and the current reservoir levels of the participating neurons. The transition periods from one transient state to the next are periods of increased dynamical sensitivity: when the network is coupled to sensory inputs, the input signal may tilt the balance in the competition for the next winning coalition, modulating in this way the ongoing internal dynamical activity. Transient state dynamics therefore opens a natural pathway for implementing neural architectures for which, as discussed in the introduction, the eigendynamics is modulated, but not driven, by the sensory data input stream. A concrete example of how to implement this procedure is discussed in Sect. [sect_influence_stimuli]. Only a small fraction of all neurons is active at any given time, in the brain in general and in areas important for memory consolidation in particular.
For various reasons, like the optimization of energy consumption and the maximization of computational capabilities, sparse coding is a ubiquitous and powerful coding strategy. Sparse coding may be realized in two ways: either by small non-overlapping neural ensembles, as in the single-winner-take-all architecture, or by overlapping neural ensembles. The latter pathway draws support both from theoretical considerations and from experimental findings. Experimentally, several studies of the hippocampus indicate that overlapping neural ensembles constitute important building blocks for the real-time encoding of episodic experiences and representations. These overlapping representations are not random superpositions but are associatively connected. A hippocampal neuron could respond, e.g., to various pictures of female faces, but these pictures would tend to be semantically connected - they could, for instance, be pictures of actresses from the same TV series. It is therefore likely that the overlapping representations encoding memories form an associative network, a conjecture that is also consistent with studies of free associations.

There are various ways to implement overlapping neural encoding with neural nets; here we discuss the case of clique encoding. The term clique stems from graph theory and denotes, just as a clique of friends, a subgraph in which (a) every member of the clique is connected with all other members of the clique and (b) no other vertex of the graph is connected to every member of the clique. In Fig. [figure_clique_encoding] a small graph is given together with all of its cliques, along with the associative interconnections between the cliques. One may view the resulting graph, with the cliques as vertices and with the inter-clique associative connections as edges, as a higher-level representation of an implicit hierarchical object definition. The clique (4,5,9) in the original graph of Fig. [figure_clique_encoding] corresponds to a primary object, and the meta-clique [(4,5,9)-(2,4,6,7)-(4,5,6,8)] in the graph of the cliques would in this interpretation encode a meta-object composed of the primary objects (4,5,9), (2,4,6,7) and (4,5,6,8). This intrinsic possibility of hierarchical object definitions under clique encoding has however not yet been explored in simulations and may be of interest for future studies. Cliques can be highly overlapping, and there can be a very large number of cliques in any given graph. We now construct a neural net for which the cliques of the network are the attractors. It is a homogeneously random and dense associative network (dHAN), where the associative relations between cliques are given by the number of common vertices. Starting from this attractor network we introduce slow variables, as discussed in Sect. [subsec_attractor_relic_networks], in terms of local reservoirs. The network then shows spontaneously generated transient state dynamics, with the neural cliques as the attractor ruins.
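For a feel of how numerous and how strongly overlapping the cliques of a dense graph can be, a few lines of networkx suffice (an illustrative random graph, not the specific 9-site example of Fig. [figure_clique_encoding]):

```python
import networkx as nx
from itertools import combinations

# a dense homogeneous random graph as a stand-in for the dHAN backbone
g = nx.gnp_random_graph(20, 0.45, seed=3)

cliques = [tuple(sorted(c)) for c in nx.find_cliques(g)]   # maximal cliques
print(f"{len(cliques)} maximal cliques; sizes:",
      sorted({len(c) for c in cliques}))

# associative relations between cliques = number of shared vertices
overlaps = {(a, b): len(set(a) & set(b))
            for a, b in combinations(cliques, 2) if set(a) & set(b)}
print("example overlapping pair:", next(iter(overlaps.items())))
```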
in a second stepwe will couple the dhan net to sensory stimuli and study the interplay between the internal autonomous dynamical activity and the data input stream .we will find that the cliques acquire semantic content in this way , being mapped autonomously to the statistically independent patterns of the data input stream .the starting point of our considerations is the underlying attractor network , for which we employ a continuous time formulation , with rate encoding neurons , characterized by normalized activity levels ] , with the time evolution where a neuron is active / inactive whenever its activity level is close to unity / zero .the behave functionally as reservoirs , being depleted / refilled for active / inactive neurons .the term on the rhs of eq .( [ eq_dot_phi ] ) is not essential for the establishment of transient state dynamics , but opens an interesting alternative interpretation for the slow variables . vanishes for inactive neurons and takes the value for active neurons .the reservoir levels of all active neurons are drawn together consequently .all members of the currently active winning coalition have then similar reservoir levels after a short time , on the order of .this is a behavior similar to what one would expect for groups of spiking neurons forming winning coalitions via synchronization of their spiking times . for each neuron of the winning coalitionsone could define a degree of synchronization , given by the extent this neuron contributes to the overall synchronization .initially , this degree of synchronization would have a different value for each participating neuron . on a certain timescale , denoted here by , the spiking timeswould then get drawn together , synchronized , and all members of the winning coalition of active neurons would then participate to a similar degree in the synchronized firing .the firing of the winning coalition would however not remain coherent forever .internal noise and external influences would lead to a desynchronization on a somewhat longer time scale .when desynchronized , the winning coalition would loose stability , giving way to a new winning coalition .in this interpretation the reservoirs allow for a `` poor man s '' implementation of self organized dynamical synchronization of neural ensembles , a prerequisite for the temporal coding hypothesis of neural object definition .finally we need to specify the reservoir coupling functions and entering eqs .( [ eq_r_i_plus ] ) and ( [ eq_r_i_neg ] ) .they have sigmoidal form with and a straightforward interpretation : it is harder to excite a neuron with depleted reservoir , compare eq .( [ eq_r_i_plus ] ) , and a neuron with a low reservoir level has less power to suppress other neurons , see eq .( [ eq_r_i_neg ] ) .reservoir functions obeying the relation ( [ eq_res_functions ] ) therefore lead in a quite natural way to transient state dynamics . on a short time scale the system relaxes towards the next attractor ruin in the form of a neural clique .their reservoirs then slowly decrease and when depleted they can neither continue to mutually excite each other , nor can they suppress the activity of out - of - clique neurons anymore . 
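The dynamics just described can be illustrated with a minimal numerical sketch: rate neurons with activities in [0,1], slow reservoir variables that deplete while a neuron is active and refill while it is inactive, and sigmoidal reservoir functions that gate excitation and inhibition as stated above. The exact dhan equations are not reproduced in the text, so the update rules, parameter values and the toy clique network below are our own illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of transient-state dynamics from an attractor
# relic network with slow reservoirs (assumed, simplified equations).
import numpy as np

rng = np.random.default_rng(0)

# toy network: three overlapping cliques on 6 sites
cliques = [(0, 1, 2), (2, 3, 4), (4, 5, 0)]
N = 6
w_exc = np.zeros((N, N))                     # excitatory intra-clique links
for c in cliques:
    for i in c:
        for j in c:
            if i != j:
                w_exc[i, j] = 1.0
w_inh = 1.0 * (w_exc == 0) - np.eye(N)       # all other pairs inhibit

def f(phi):   # depleted reservoir -> harder to excite
    return 1.0 / (1.0 + np.exp(-8.0 * (phi - 0.5)))

def g(phi):   # depleted reservoir -> weaker suppression of others
    return 1.0 / (1.0 + np.exp(-8.0 * (phi - 0.5)))

x   = rng.uniform(0.0, 0.2, N)               # activities in [0, 1]
phi = np.ones(N)                             # reservoirs, full at start
dt, tau_minus, tau_plus = 0.1, 30.0, 100.0   # slow reservoir timescales

trace = []
for t in range(4000):
    # growth rate: gated excitation minus reservoir-weighted inhibition
    r = f(phi) * (w_exc @ x) - (w_inh * g(phi)[None, :]) @ x
    x += dt * x * (1.0 - x) * np.tanh(r)     # relax toward 0 or 1
    x  = np.clip(x + 1e-3 * rng.standard_normal(N), 0.0, 1.0)
    # reservoirs deplete when active, refill when inactive
    phi += dt * ((1.0 - x) * (1.0 - phi) / tau_plus - x * phi / tau_minus)
    trace.append(x.copy())

print(np.round(np.array(trace[::500]), 2))   # successive winning coalitions
```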
at this point , the winning coalition becomes unstable and a new winning coalition is selected via a competitive process , as illustrated in fig .[ figure_competition ] .any finite leads to the destruction of the fixpoints of the original attractor network , which is thus turned into an attractor relic network .the sequence of winning coalitions , given by the cliques of the network , is however not random .subsequent active cliques are associatively connected . the clique ( 1,9 ) of the 9-site network shown in fig . [ figure_clique_encoding ] , to give an example ,could be followed by either ( 4,5,9 ) or by ( 1,2,3 ) , since they share common sites .the competition between these two cliques will be decided by the strengths of the excitatory links and by the history of previous winning coalitions .if one of the two cliques had been activated recently , the constituent sites will still have a depressed reservoir and resist a renewed reactivation . and cliques ( 0,1,2 ) ,... receives sensory signals via the input layer ( middle ) in the form of certain input patterns ( bottom ) ., scaledwidth=48.0% ] the finite state dynamics of the dhan architecture is robust . for the isolated network, we will discuss the coupling to sensory input in the next section , the dynamics is relaxational and dissipative .the system relaxes to the next attractor relic and the reservoirs are relaxing either to zero or to unity , depending on the respective neural activity levels . for a network with a finite number of sites, the long - time state will be a long limiting cycle of transient states . the simulation results shown in fig .[ figure_competition ] are for a set of parameters resulting in quite narrow transitions and long plateaus .the formulation presented here allows for the modelling of the shape of the plateaus and of other characteristics of the transient state dynamics .a smaller would result in shorter plateaus , a longer in longer transition times .one can , in addition , adjust the shape of the reservoir functions and details of eqs .( [ eq_r_i_plus ] ) and ( [ eq_r_i_neg ] ) in order to tune the overall competition for the next winning coalition .the dhan architecture providing therefore a robust framework for the generation of transient state dynamics , offering at the same time ample flexibility and room for fine tuning , paving the way for a range of different applications .the transient state dynamics generated by the dhan architecture is dynamically robust .the dhan dynamics has at the same time windows of increased sensibility to outside influences during the transition periods from one transient state to the subsequent , as shown in fig .[ figure_competition ] .these transition periods are phases of active inter - neural competition , reacting sensibly to the influence of afferent signals .we couple the input signals via an appropriate input layer , as illustrated in fig .[ figure_dhan_input ] , denoting by $ ] the time dependent input signals , which we will take as black - and - white or grey - scaled patterns .we denote by the afferent links to the dhan layer , with the external contribution to the dhan - layer growth rates , compare eq .( [ eq_r_pos_neg_contribution ] ) , given by the rationale behind this formulation is the following .the role of the input signal is not to destabilize the current winning coalition , the afferent signal is therefore shunted off in this case , eq .( [ eq_r_i_ext ] ) .the input signal should influence the competition for the next winning coalition , 
modulating but not driving directly the dhan dynamics .this rational is realized by the above formulation .inactive neurons will receive a bias from the input layer which increases / decreases its chance of joining the next winning coalition for / .a cognitive system with a non - trivial and self - sustained internal neural activity has to decide how and when correlations with the sensory data input stream are generated via correlations encoded in the respective synaptic plasticities .this is clearly a central issue , since the input data stream constitutes the only source for semantic content for a cognitive system .it makes clearly no sense if the afferent links to the dhan layer , _ viz _ the links leading from the input to the internal network supporting a self - sustained dynamical activity , would be modified continuously via hebbian - type rules , since the two processes , the internal and the environmental dynamics , are per se unrelated .it makes however sense to build up correlation whenever the input has an influence on the internal activity , modulating the ongoing associative thought process .from the perspective of the cognitive system such a modulation of the internal dynamics by environmental stimuli corresponds to something novel and unexpected happening .novelty detection is therefore vital for neural networks with a non - trival eigendynamics processing sensory data .the importance of novelty detection for human cognition has been acknowledged indeed since long , and a possible role of dopamine , traditionally associated with reinforcement reward transmission , for the signalling of novelty has been suggested recently .the influence of modulating and of not modulating sensory signals is illustrated in fig .[ figure_input_yes_no ] , where simulation results for a dhan layer containing seven neurons coupled to an intermittent input signal are presented .the signal is not able to deactivate a currently stable winning coalition , compare eq .( [ eq_delta_r_i ] ) , but makes an impact when active during a transition period .the system has the possibility to figure out whenever the later has happened .when the input signal is relevant then in this case the internal contribution to the growth rate is negative and the input makes a qualitative difference .we may therefore define a global novelty signal obeying where we have used eq .( [ eq_r_pos_neg_contribution ] ) , , and where a is implicit on the rhs of the equation .the novelty signal needs to be activated quickly , with .learning then takes place whenever the novelty signal exceeds a certain threshold . 
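Since the defining equations for the novelty signal are not reproduced above, the following is only a hedged sketch of the idea: the signal rises quickly when an afferent input actually tips the internal competition (here taken to be the case when the external contribution outweighs a negative internal growth rate of an inactive neuron) and decays slowly otherwise; learning is enabled only while the signal exceeds a threshold. The gating condition, time constants and function names are our assumptions.

```python
# Hedged sketch of a global novelty signal with fast rise / slow decay.
import numpy as np

def update_novelty(S, r_internal, dr_ext, x, dt,
                   tau_rise=1.0, tau_decay=50.0, theta_active=0.5):
    """One Euler step for the novelty signal S.

    r_internal : internal contributions to the growth rates
    dr_ext     : contributions from the input layer (already shunted to
                 zero for members of the current winning coalition)
    x          : neural activities in [0, 1]
    """
    # input "makes a qualitative difference" for inactive neurons whose
    # internal growth rate is negative but whose total rate is positive
    inactive = x < theta_active
    decisive = inactive & (r_internal < 0.0) & (r_internal + dr_ext > 0.0)
    drive = float(np.sum(dr_ext[decisive]))
    dS = drive / tau_rise - S / tau_decay     # fast activation, slow decay
    return max(S + dt * dS, 0.0)

def learning_gate(S, S_threshold=0.1):
    """Learning takes place only while the novelty signal is above threshold."""
    return S > S_threshold
```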
of a dhan layer containing seven neurons ,compare fig .[ figure_dhan_input ] , the growth rates and the contributions from the input - layer , see eq .( [ eq_delta_r_i ] ) .the first input stimulus does not lead to a deviation of the transient state dynamics of the dhan layer .the second stimulus modulates the ongoing transient state dynamics , influencing the neural competition during the sensitive phase ., scaledwidth=48.0% ] bars problem ., scaledwidth=30.0% ] bars problem ., scaledwidth=48.0% ] having determined when learning takes place , we have now to formulate the rules governing how learning modifies the links afferent to the dhan layer .for this purpose we will use the hebbian principle , that positive interneural correlations are enforced and negative correlations weakened .our system is however continuously active , at no point are activities or synaptic strengths reset .the hebbian principle therefore needs to be implemented as an optimization process , and not as a maximization process , which would lead to a potentially hazardous runaway growth of synaptic strengths .there are four quadrants in the hebbian learning matrix , corresponding to active / inactive pre- and post - synaptic neurons , out of which we use the following three optimization rules : \(a ) the sum over active afferent links leading to active dhan neurons is optimized to a large but finite value , \(b ) the sum over inactive afferent links leading to active dhan neurons is optimized to a small value , \(c ) the sum over active afferent links leading to inactive dhan neurons is optimized to a small value , the , and are the target values for the respective optimization processes , where the superscripts stand for ` active ' , ` inactive ' and ` orthogonal ' .these three optimization rules correspond to fan - in normalizations of the afferent synapses .positive correlations are build up whenever dominates in magnitude , and orthogonalization of the receptive fields to other stimuli is supported by .a small but non - vanishing value for helps to generate a certain , effective , fan - out normalization , avoiding the uncontrolled downscaling of temporarily not needed synapses .knowledge about the environment lies at the basis of all cognition , before any meaningful action can be taken by a cognitive system . for simple organisms this knowledgeis implicitly encoded in the genes , but in general a cognitive system needs to extract this information autonomously from the sensory data input stream , via unsupervised online learning .this task includes signal separation and features extraction , the identification of recurrently appearing patterns , i.e. of objects , in the background of fluctuation and of combinations of distinct and noisy patterns . for the case of linear signal superpositionthis problem is addressed by the independent component analysis and blind source separation , which seeks to find distinct representations of statistically independent input patterns . 
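A discrete-time sketch of the three fan-in optimization rules (a)-(c) given above is shown below. The text states the rules only qualitatively, so the target values, learning rate, activity threshold and function names are illustrative assumptions; in the full model the step would additionally be gated by the novelty signal.

```python
# Sketch of fan-in optimization of the afferent links (rules a-c).
import numpy as np

def optimize_afferent_links(V, y, x, eps=0.01,
                            r_act=2.0, r_ina=0.2, r_orth=0.2, theta=0.5):
    """One optimization step for the afferent links V[i, j] from input
    neuron j (activity y[j]) to dhan neuron i (activity x[i])."""
    act_in  = y >= theta                  # active input-layer neurons
    act_out = x >= theta                  # active dhan neurons

    for i in range(V.shape[0]):
        if act_out[i]:
            # (a) active links onto an active neuron -> large target r_act
            s = V[i, act_in].sum()
            V[i, act_in]  += eps * (r_act - s) * y[act_in]
            # (b) inactive links onto an active neuron -> small target r_ina
            s = V[i, ~act_in].sum()
            V[i, ~act_in] += eps * (r_ina - s)
        else:
            # (c) active links onto an inactive neuron -> small target r_orth
            s = V[i, act_in].sum()
            V[i, act_in]  += eps * (r_orth - s) * y[act_in]
    return V
```

Because each rule drives a sum of links toward a finite target rather than simply growing with correlation, the scheme behaves as an optimization rather than a maximization, which is what prevents runaway synaptic growth.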
in order to examine how our system of an input layer coupled to a dhan layer , as illustrated in fig .[ figure_dhan_input ] , analyzes the incoming environmental signals , we have selected the bars problem .the bars problem constitutes a standard non - linear reference task for feature extraction via a non - linear independent component analysis for an input layer .the basic patterns are the vertical and horizontal bars and the individual input patterns are made up of a non - linear superposition of the basic bars , containing any of them with a certain probability , typically , as illustrated in fig .[ figure_star_pattern ] .our full system then consist of the dhan layer , which is continuously active , and an input layer coding the input patterns consisting of randomly superimposed black / white bars .for the dhan network we have taken a regular 20-site ring , containing a total of 10 cliques , each clique having sites , as illustrated in fig .[ figure_star_pattern ] .the self - sustained transient - state process is continuously active in the dhan layer , modulated by the contributions it receives via the links from the input layer . for the simulationa few thousands of input patterns were presented to the system . in fig .[ figure_crp_graph ] we present for the bars problem the simulation results for the susceptibility of the 10 cliques in the dhan layer to the 10 basic patterns , the 10 individual horizontal and vertical bars , with , , and so on .all cliques have the size and the notation denotes the set of all sites defining the clique . at the startall are drawn randomly .the result is quite remarkable . at the beginning of the simulation the system undergoes an associative thought process without semanticcontent . during the course of the simulation , via the competitive novelty learning scheme , the individual attractor relics of the transient state dynamics , the cliques of the dhan layer , acquire a semantic connotation , having developed pronounced susceptibilities to statistically distinct objects in the sensory data input stream .this can be seen directly inspecting the clique receptive fields of the cliques in the dhan layer with respect to the input neurons , which are also presented in fig .[ figure_crp_graph ] .the clique receptive fields correspond to the averaged receptive fields of their constituent neurons .the data presented in fig . [ figure_crp_graph ]are for the bars problem .we note that simulation for larger systems can be performed as well , with similar results .the learning scheme employed here is based on optimization and not on maximization , as stressed in sect .[ subsec_afferent_link_plasticity ] .the clique receptive fields , shown in fig. 
[ figure_crp_graph ] , are therefore not of black / white type , but differentiated .synaptic modifications are turned progressively off when sufficient signal separation has been achieved .this behavior is consistent with the ` learning by error ' paradigm , which states that a cognitive system learns mostly when making errors and not when performing well .we may take a look at the results presented in fig .[ figure_crp_graph ] from a somewhat larger perspective .the neural activity of newborn animals consists of instinct - like reflexes and homeostatic regulation of bodily functions .the processing of the sensory signals has not yet any semantic content and internal neural activity states do not correspond yet to environmental features like shapes , colors and objects .the neural activity can acquire semantic content , philosophical niceties apart , only through interaction with the environment .this is a demanding task , since the optical or acoustical sensory signals are normally overloaded with a multitude of overlapping primary objects .the animal therefore needs to separate these non - linearly superposed signals for the acquisition of primary knowledge about the environment and to map the independent signals , the environmental object to distinct neural activity patters .this very basic requirement is performed by the dhan architecture .the internal transient states have , at the start of the simulation , no relation to environmental objects and are therfore void of semantic content . in the simulation presented here, there are 10 primary environmental objects , the 5 horizontal and vertical bars of the bars problem . in the settingused these 10 objects are independent and statistically uncorrelated . during the course of the unsupervised and online learning process, the receptive fields of the transiently stable neural states , the cliques in the dhan layer , acquire distinct susceptibilities not to arbitrary superpositions of the primary objects but to the individual primary bars themselves . a sensory signal consisting of the non - linear superposition of two or more barswill therefore lead , in general , to the activation of one of the corresponding cliques . to be concrete , comparing fig .[ figure_crp_graph ] , an input signal containing both the top - most and the bottom - most horizontal bar would activate either the clique or the clique .these two cliques will enter the competition for the next winning coalition whenever the input is not too weak and when it overlapps with a sensitive period .the present state together with its dynamical attention field will then determine the outcome of this competitions and one of the two objects present in this input signal is then recognized .the vast majority of neural nets considered to date for either research purposes , or for applications , are generalized stimulus - response networks .one has typically an input signal and an output result , as , e.g. , in speech recognition . in most settings the network is reset to a predefined default state after a given taskis completed , and before the next input signal is provided .this approach is highly successful , in many instances , but it is clearly not the way the brain works on higher levels . it is therefore important to examine a range of paradigmal formulations for the non - trivial eigendynamics of cognitive systems , evaluating their characteristics and computational capabilities . 
As an example of a concept situated somewhere between a pure stimulus-response net and systems with a fully developed eigendynamics, we have discussed in sect. [subsec_reservoir_computing] the notion of reservoir computing. For reservoir networks the dynamics is, in general, still induced by the input signal and decays slowly in the absence of any input. Any given stimulus, however, encounters an already active reservoir net, the current reservoir activity having been caused by the preceding stimuli. The response of the network therefore depends on the full history of input signals, and time-prediction tasks consequently constitute the standard application scenarios for reservoir computing. A somewhat traditional view, often presumed implicitly, is that the eigendynamics of the brain results from the recurrent interlinking of specialized individual cognitive modules. This viewpoint would imply that attempts to model the autonomous brain dynamics can be considered only after a thorough understanding of the individual constituent modules has been achieved. Here we have examined an alternative route, considering it important to examine the mutual benefits and computational capabilities of a range of theory proposals for the overall organization of the eigendynamics. In sect. [subsec_saddle_point_networks] we examined a first proposal for the organization of the eigendynamics in terms of saddle-point networks. In this framework the internal neural dynamics is guided by heteroclines in a process denoted winnerless competition. This neural architecture aims to model reproducible cognitive behavior, and a single robust attractor in the form of a heteroclinic channel constitutes the eigendynamics in the absence of sensory inputs. In sect. [subsec_attractor_relic_networks] we examined the viewpoint that a non-trivial associative thought process constitutes the autonomous dynamics in the absence of sensory input. For any finite (and isolated) network these thought processes eventually turn into limiting cycles of transient states.
in this architecturethere is however not a unique limiting cycle , but many possible and overlapping thought processes , every one having its respective basin of attractions .the transient state dynamics required for this approach is obtained by coupling an attractor network to slow variables , with the neural time evolution slowing down near the such obtained attractor relics .this is a quite general procedure and a wide range of concrete implementations are feasible for this concept .the coupling of neural nets having a non - trivial eigendynamics to the sensory input is clearly a central issue , which we have discussed in depth in sect .[ sect_influence_stimuli ] , for the case of networks with transient state dynamics based on attractor ruins , emphasizing two functional principles in this context : \(a ) the internal transient state dynamics is based intrinsically on the notion of competitive neural dynamics .it is therefore consistent to assume that the sensory input contributes to this neural competition , modulating the already ongoing internal neural competition .the sensory input would therefore have a modulating and not a forcing influence .the sensory signals would in particular not deactivate a currently stable winning coalition , influencing however the transition from one transiently stable state to the subsequent winning coalition .\(b ) the eigendynamics of the cognitive system and of the sensory signals resulting from environmental activities are , a priori , unrelated dynamically. correlations between these two dynamically independent processes should therefore be built up only when a modulation of the internal neural activity through the sensory signal has actually occurred .this modulation of the eigendynamics by the input data stream should then generate an internal reinforcement signal , which corresponds to a novelty signal , as the deviation of the internal thought process by the input is equivalent , from the perspective of the cognitive system , to something unexpected happening .we have shown , that these two principles can be implemented in a straightforward manner , resulting in what one could call an ` emergent cognitive capability ' .the system performs , under the influence of the above two general operating guidelines , autonomously a non - linear independent component analysis .statistically independent object in the sensory data input stream are mapped during the life time of the cognitive system to the attractor relics of the transient state network .the internal associative thought process acquires thus semantic content , with the time series of transient states , the attractor ruins , now corresponding to objects in the environment .we believe that these results are encouraging and that the field of cognitive computation with autonomously active neural nets is an emerging field of growing importance. it will be important to study alternative guiding principles for the neural eigendynamics , for the coupling of the internal autonomous dynamics to sensory signals and for the decision making process leading to motor output .architectures built up of interconnected modules of autonomously active neural nets may in the end open a pathway towards the development of evolving cognitive systems .gros c. emotions , diffusive emotional control and the motivational problem for autonomous cognitive systems . 
In: Vallverdú J, Casacuberta D, editors. Handbook of research on synthetic emotions and sociable robotics: new applications in affective computing and artificial intelligence. IGI Global; 2009, in press.
Fox MD, Corbetta M, Snyder AZ, Vincent JL, Raichle ME. Spontaneous neuronal activity distinguishes human dorsal and ventral attention systems. Proceedings of the National Academy of Sciences 2006;103:10046-10051.
Fox MD, Snyder AZ, Vincent JL, Corbetta M, Van Essen DC, Raichle ME. The human brain is intrinsically organized into dynamic, anticorrelated functional networks. Proceedings of the National Academy of Sciences 2005;102:9673-9678.
Lin L, Osan R, Shoham S, Jin W, Zuo W, Tsien JZ. Identification of network-level coding units for real-time representation of episodic experiences in the hippocampus. Proceedings of the National Academy of Sciences 2005;102:6125-6130.
The human brain is autonomously active. Understanding the functional role of this self-sustained neural activity, and its interplay with the sensory data input stream, is an important question in cognitive systems research, and we review here the present state of theoretical modelling. The review starts with a brief overview of the experimental efforts, together with a discussion of transient vs. self-sustained neural activity in the framework of reservoir computing. The main emphasis is then on two paradigmatic neural network architectures showing continuously ongoing transient-state dynamics: saddle-point networks and networks of attractor relics. Self-active neural networks are confronted with two seemingly contrasting demands: a stable internal dynamical state and sensitivity to incoming stimuli. We show that this dilemma can be solved by networks of attractor relics based on competitive neural dynamics, where the attractor relics compete on one side with each other for transient dominance, and on the other side with the dynamical influence of the input signals. Unsupervised and local Hebbian-style online learning then allows the system to build up correlations between the internal dynamical transient states and the sensory input stream. An emergent cognitive capability results from this set-up: the system performs online, and on its own, a non-linear independent component analysis of the sensory data stream, all the while being continuously and autonomously active. This process maps the independent components of the sensory input onto the attractor relics, which thereby acquire a semantic meaning.
turbulence , hailed as one of the last major unsolved problems of classical physics , has been the subject of numerous publications as researchers seek to understand the underlying physics , structures , and mechanisms inherent to the flow . the standard test problem for wall - bounded studieshistorically has been turbulent channel flow because of its simple geometry and computational efficiency .even though much insight has been achieved through the study of turbulent channel flow , it remains an academic problem because of its infinite ( computationally periodic ) spanwise direction .the next simplest geometry is turbulent pipe flow , which is of interest because of its real - world applications and its slightly different dynamics .the three major differences between turbulent pipe and channel flow are that pipe flow displays a log layer , but overshoots the theoretical profile until a much higher reynolds numbers ( for a channel versus for a pipe ) , has an observed higher critical reynolds number , and is linearly stable to an infinitesimal disturbance .unfortunately , few direct numerical simulations of turbulent pipe flow have been carried out because of the complexity in handling the numerical radial singularity at the origin .although the singularity itself is avoidable , its presence causes standard high - order spectral methods to fail to converge exponentially . as a result, only a handful of algorithms found in the literature for turbulent pipe flow ; the first reported use for each algorithm is listed in table [ radialdiscr ] .these algorithms typically use a low - order expansion in the radial direction . only shan et al . using concentric chebyshev domains ( `` piecemeal '' ) and loulou et al . using basis spline ( b - spline ) polynomials provide a higher - order examination of turbulent pipe flow . using a spectral element method, this study provides not only a high - order examination but the first exponentially convergent investigation of turbulent pipe flow through direct numerical simulation ( dns ) .[ radialdiscr ] with this dns result , one of the studies that can be performed with the full flow field and time history it provides is an analysis based on an orthogonal decomposition method . in such methods , the flow is expanded in terms of a natural or preferred turbulent basis .one method used frequently in the field of turbulence is karhunen - love ( kl ) decomposition , which uses a two - point spatial correlation tensor to generate the eigenfunctions of the flow .this is sometimes referred to as proper orthogonal decomposition , empirical orthogonal function , or empirical eigenfunction analysis .work in this area was pioneered by lumley , who was the first to use the kl method in homogeneous turbulence .this method was later applied to turbulent channel flow in a series of papers by ball , sirovich , and keefe and sirovich , ball , and handler , who discovered plane waves and propagating structures that play an essential role in the production of turbulence through bursting or sweeping events . to study the interactions of the propagating structures ,researchers have examined minimal expansions of a turbulent flow .these efforts have led to recent work by webber et al . 
, who examined the energy dynamics between kl modes and discovered the energy transfer path from the applied pressure gradient to the flow through triad interactions of kl modes .this present study uses a spectral element navier - stokes solver to generate a globally high - order turbulent pipe flow data set .the karhunen - love method is used to examine the turbulent flow structures and dynamics of turbulent pipe flow .the direct numerical simulation of the three - dimensional time dependent navier - stokes equations is a computationally intensive task .by fully resolving the necessary time and spatial scales of turbulent flow , however , no subgrid dissipation model is needed , and thus a turbulent flow is calculated directly from the equations .dns has one main advantage over experiments , in that the whole flow field and time history are known , enabling analyses such as the karhunen - love decomposition .because of the long time integration and the grid resolution necessary for dns , a high - order ( typically spectral ) method is often used to keep numerical round - off and dissipation error small .spectral methods and spectral elements use trial functions that are infinitely and analytically differentiable to span the element .this approach decreases the global error exponentially with respect to resolution , in contrast to an algebraic decrease with standard methods such as finite difference or finite element methods .this study uses a spectral element navier - stokes solver that has been developed over the past 20 years to solve the navier - stokes equations : in the above equations , is the velocity vector corresponding to the radial , azimuthal , and streamwise direction respectively ; re is the reynolds number ; is the radius of the pipe ; is the kinematic viscosity ; and is the shear velocity based upon the wall shear stress and density .this solver employs a geometrically flexible yet exponentially convergent spectral element discretization in space , in this case allowing the geometry to be fitted to a cylinder .the domain is subdivided into elements , each containing high - order ( usually 1113 ) legendre lagrangian interpolants .the resulting data localisation allows for minimal communication between elements , and therefore efficient parallelization .time discretization is done with third - order operator splitting methods , and the remaining tensor - product polynomial bases are solved by using conjugate gradient iteration with scalable jacobi and hybrid schwarz / multigrid preconditioning .spectral elements are effective in cylindrical geometries and their use elegantly avoids the radial singularity at the origin . the mesh is structured as a box near the origin and transitions to a circle near the pipe walls ( figure [ grid ] ) , maintaining a globally high - order method at both the wall and the origin .in addition to avoiding the numerical error associated with the singularity , the method also avoids the time - step restriction due to the smaller element width at the origin of a polar - cylindrical coordinate system , which could lead to potential violations of the courant - friedrichs - levy stability criteria .= 2 in each slice , as shown in figure [ grid ] , has 64 elements , and there are 40 slices stacked in the streamwise direction , adding up to a length of 20r .each element has 12th - order legendre polynomials in each direction for re .this discretization results in 4.4 million degrees of freedom . 
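The Legendre Lagrangian interpolants mentioned above are, in standard spectral element practice, collocated at Gauss-Lobatto-Legendre (GLL) points within each element. The short sketch below computes the 1D GLL nodes and the associated Lagrange differentiation matrix; it is meant only to illustrate the building blocks of such a discretization, not the solver actually used for the simulations.

```python
# GLL nodes and Lagrange differentiation matrix for one 1D element.
import numpy as np
from numpy.polynomial import legendre as L

def gll_nodes(N):
    """Gauss-Lobatto-Legendre nodes for polynomial order N (N+1 points):
    the endpoints +-1 plus the roots of P_N'(x)."""
    c = np.zeros(N + 1)
    c[-1] = 1.0                       # Legendre coefficients of P_N
    interior = np.sort(L.legroots(L.legder(c)))
    return np.concatenate(([-1.0], interior, [1.0]))

def lagrange_diff_matrix(x):
    """Differentiation matrix D[i, j] = l_j'(x_i) for the Lagrange basis
    on nodes x (barycentric-weight formula)."""
    n = len(x)
    D = np.zeros((n, n))
    w = np.array([np.prod(x[j] - np.delete(x, j)) for j in range(n)])
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = (w[i] / w[j]) / (x[i] - x[j])
        D[i, i] = -np.sum(D[i, np.arange(n) != i])
    return D

x = gll_nodes(12)                     # 12th-order element, as in the text
D = lagrange_diff_matrix(x)
# sanity check: differentiating x**2 on the nodes should give 2x to round-off
print(np.max(np.abs(D @ x**2 - 2 * x)))
```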
near the wall , the grid spacing normalised by wall units ( ) denoted by the superscript is and , where isthe radius and is the azimuthal angle . near the centre of the pipe, the spacing in cartesian coordinates is .the streamwise grid spacing is a constant throughout the domain where is the streamwise coordinate .benchmarking was performed at re with the experiments and dns of eggels et al . and the dns of fukagata and kasagi . for this higher reynolds number flow , 14th - order polynomialswere used , giving grid spacings near the wall of and * , * in the centre , and .eggels et al .and fukagata and kasagi used a spectral fourier discretization in the azimuthal and axial directions and then a 2nd - order finite difference discretization in the radial direction .also , both groups used a domain length of 10r and grid sizes in their dns studies of for , , and directions , respectively .the turbulent flow was tripped with a solenoidal disturbance , and the simulation was run until transition was complete .this was estimated by examining the mean flow and root - mean - squared statistics over sequential periods of 100 until a statistically steady state was achieved , marked by a change of less than 1 percent between the current and previous period . in this case , it took to arrive at a statistically steady state .convergence was checked by examining the total fluctuating energy after 1000 time steps for increasing polynomial orders 8 , 10 , 12 , and 14 with a fixed number of elements ( 2560 ) .the absolute value of the difference of the total fluctuating energy for each simulation and the value obtained using 16th order polynomials was plotted versus the total number of degrees of freedom in figure [ convergence ] .the exponential decay of the error with increasing degrees of freedom shows that our spectral element algorithm achieves geometric or exponential convergence .=3 in the mean flow profiles in figure [ mean_180 ] correspond well with the hot wire anemometry ( hwa ) results of eggels et al . , but as seen in the root - mean - squared ( rms ) statistics shown in figure [ rms_180 ] , the spectral element calculation shows a lower peak and higher peak and compared to the hwa , particle image velocimetry ( piv ) , and laser doppler anemometry ( lda ) results .these results are in contrast to those with channel flow , as reported by gullbrand , where the 2nd order finite difference methods undershoot the spectral method wall - normal velocity rms . when compared to the experimental results of eggels et al . , the spectral element results are in better agreement than the 2nd - order finite difference results .we note that eggels et al .report that their ( piv ) system had difficulties capturing near the wall and near the centerline of the pipe due to reflection , which could explain the deviation of all of the dns results in that area . .the theoretical line is the law of the wall and the log layer .deviations from the log layer are expected in turbulent pipe flow until much higher reynolds numbers.,width=4 ] a second major difference our spectral element method and the previous pipe dns using 2nd - order finite difference is the domain size . 
with the spectral element method , which results in a global high - order convergence , a domain size of 10r yielded unphysical results in the flow , as seen in the bulge in the azimuthal in figure [ l10 ] at and .this unphysical result arose even with a more refined grid , and is therefore not a function of under - resolution .however , when the domain of the spectral element method was extended to 20r , this problem disappeared .we surmise that the 2nd - order finite difference method dissipated some large - scale structures after 10r that the higher - order spectral element case appropriately resolves .this undissipated structure , because of the periodic boundary conditions , then re - enters the inlet and causes the unphysical bulges in the azimuthal rms profile .this result is also supported by the work of jimnez in turbulent channel flow ( re=180 ) that shows large - scale structures that extend well past the domain size of .this benchmark confirms that the spectral element algorithm , at the given grid resolution and domain size , will generate the appropriate turbulent flow field and time history to perform the kl decomposition .= 4 in for completeness , the karhunen - love decomposition method is briefly described here , but for more detail , see .the karhunen - love method is the solution of the two - point velocity correlation kernel equation defined by where is the fluctuating component of the velocity and where the mean is determined by averaging over both homogeneous planes and time .the angle brackets represent an average over many time steps , on the order of , to sample the the entire attractor .the denotes the outer product , establishing the kernel as the velocity two - point correlation between every spatial point and . for turbulent pipe flow , with two homogeneous directions providing translational invariance in the ( azimuthal ) and ( streamwise ) direction , eq .( [ two_point ] ) becomes thus , given the kernel in eq .( [ two_point_pipe ] ) , the eigenfunctions have the form where is the azimuthal wavenumber and the streamwise wavenumber .the determination of is then given by where the denotes the complex conjugate , is the eigenvalue , and is the fourier transform of the fluctuating velocity in the azimuthal and axial direction . in the present problem ,2,100 snapshots of the flow field were taken , corresponding to one snapshot every eight viscous time steps ( ) .the results of each snapshot were projected to an evenly spaced grid with points in and , respectively .the fourier transform of the data was then taken and the kernel assembled .this kernel was averaged over every snapshot to generate the final kernel to be decomposed .since the dimension of k is 303 ( given by three velocity components on 101 radial grid points ) there are 303 eigenfunctions and eigenvalues for each fourier wavenumber pair .the eigenfunctions are ranked in descending order with the quantum number to specify the particular eigenfunction associated with the corresponding eigenvalue .thus , it requires a triplet to specify a given eigenfunction . 
the eigenfunctions are complex vector functions and are normalised so that the inner product of the full eigenfunction is of unit length , namely , where is the kronecker delta .the eigenvalues physically represent the average energy of the flow captured by the eigenfunction , we note that the eigenfunctions , as an orthogonal expansion of the flow field , retain the properties of the flow field , such as incompressibility and boundary conditions of no slip at the wall .the pipe flow is statistically invariant under azimuthal reflection , and taking advantage of it reduces the total number of calculations as well as the memory and storage requirements .we note that , because of its geometry , turbulent channel flow has two more symmetries a vertical reflection , and a x - axis rotation that are not present in the pipe , since a negative radius is equivalent to a 180 degree rotation in the azimuthal direction .a major consequence of this symmetry is that the resulting eigenfunctions are also symmetric , and the modes with azimuthal wave number will be the azimuthal reflection of the modes with wave number , thus reducing the total computational memory needed for this calculation .the kl method provides an orthogonal set of eigenfunctions that span the flow field . as such, the method allows the flow field to be represented as an expansion in that basis , with since the fourier modes are orthogonal to each other , equation [ coeff ] becomes with being the fourier transform of in the azimuthal and streamwise direction with wavenumbers and , respectively .the time history of the eigenfunctions can be used to examine their interactions , such as the energy interaction examined by webber et al . and bursting events by sirovich et al .this section presents the results of our analysis of turbulent pipe flow based on kl decomposition .dns was performed for .the reynolds number based on the centreline velocity is and based on the mean velocity is .this range is above the observed critical reynolds number for pipe flow and exhibits self - sustaining turbulence , as seen from the fluctuating time history of the mean velocity shown in figure [ mean_time ] .the velocity profile , seen in figure [ mean_profile ] , shows the mean velocity with respect to wall units away from the wall ( ) .the profile fits the law of the wall but fails to conform to the log law , and as mentioned in section 1 , the log law is not expected for turbulent pipe flow until much higher reynolds number .= 4 in = 4 in the rms velocity fluctuation profiles and the reynolds stress profile is shown in figures [ rms ] and [ reynolds ] .the streamwise fluctuations peak near .the azimuthal and axial velocities show a weaker peak near and , respectively , and then remain fairly flat throughout the pipe .the reynolds stress has a maximum of 0.68 at . 
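The snapshot-based procedure described above can be sketched compactly for a single wavenumber pair: Fourier transform the fluctuating velocity in the two homogeneous directions, assemble the radial two-point correlation kernel averaged over snapshots, and solve the resulting Hermitian eigenproblem. The code below is a hedged illustration, not the authors' code; in particular, the discrete radial inner-product weight (here an approximate r dr measure) is an assumption, since the text does not spell out the quadrature used.

```python
# Hedged sketch of the KL (POD) eigenproblem for one (m, k) pair.
import numpy as np

def kl_modes(snapshots, r, m, k):
    """snapshots : array (n_snap, 3, n_r, n_theta, n_z) of fluctuating
    velocities; r : radial grid (n_r,).  Returns eigenvalues (descending)
    and eigenfunctions of the two-point correlation kernel for (m, k)."""
    n_snap, n_comp, n_r, n_t, n_z = snapshots.shape
    # Fourier transform in the two homogeneous directions
    uhat = np.fft.fft2(snapshots, axes=(3, 4)) / (n_t * n_z)
    u_mk = uhat[:, :, :, m, k].reshape(n_snap, n_comp * n_r)

    # two-point correlation kernel, averaged over snapshots
    K = np.einsum('sa,sb->ab', u_mk, np.conj(u_mk)) / n_snap

    # approximate radial quadrature weights with the r dr measure (assumed)
    w = np.gradient(r) * r
    W = np.tile(w, n_comp)

    # symmetrized weighted eigenproblem:  K W psi = lam psi
    A = np.sqrt(W)[:, None] * K * np.sqrt(W)[None, :]
    lam, phi = np.linalg.eigh((A + A.conj().T) / 2)
    order = np.argsort(lam)[::-1]
    modes = phi[:, order] / np.sqrt(W)[:, None]   # undo the weighting
    return lam[order].real, modes.reshape(n_comp, n_r, -1)
```

With three velocity components on 101 radial points this kernel is 303 x 303, matching the 303 eigenfunctions per wavenumber pair quoted above.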
[cols="^,^ " , ] the energy spectra of the propagating mode subclasses are shown in figure [ distribe ] .this shows that the tail wavenumber end of the inertial range is populated primarily by the lift modes and the low end of the spectra is populated predominantly by the wall modes .the physical meaning of this is that the energy starts at the wall with large scale structures and is lifted away from the wall into the outer region in small scale structures , represented by the lifting modes .the effect of higher radial quantum number is more zero crossings of the velocities in the radial direction .the effect on the location of coherent vorticity and the number of vortex cores scales with the radial quantum number , making the subclasses invariant with . the wall and lift modes retain their characteristics , in that even with more vortices , they remain close to the wall for the wall mode and close to the centreline for the lift modes , shown in figures [ 623non ] through [ 265non ] .cc & + & +although the karhunen - love method yields a preferred or natural basis for turbulence , one must be careful in the conclusions drawn from the results , as any structure or feature can be reconstructed from any given orthogonal basis .the strength of this method is that it is the most optimal basis , and by focusing on the most energetic modes a low order dynamical representation of the flow is realised .this low order dynamical system reveals three important results , and paints a picture of the energy dynamics in turbulent pipe flow .the first result is the constant phase speed of the propagating modes .this was also found in studies of turbulent channel flow , as the structures represented by the basis advect with a constant group velocity , the same average velocity of burst events .the normal speed locus of the propagating waves is shown in figure [ locus ] for the propagating modes found in the top 50 most energetic modes .for this , the phase speed is plotted in the direction .the locus is nearly circular , which is evidence that these structures propagate as a wave packet or envelope that travels with speed of * * , the point at which the circle intersects the x - axis in figure [ locus ] , which corresponds to the mean velocity in the buffer region at . .the circle is a least - squares fit to the data and represents that the wave packets are acting together as a group with speed of 8.41 , the point where the circle intersects the y - axis.,width=4 ] the second interesting result is that none of the modes with azimuthal wave number exhibit any travelling waves near the wall , and are without exception a streamwise or inclined streamwise vortex in the outer region .the reason is that the mode does not allow for a near - wall travelling wave as found by kerswell , and as such , the basis expansions for these modes have only near - centreline structures .the third result of note is that the different kl structures , as observed from our results , can be seen qualitatively as an expansion that represents the horseshoe ( hairpin ) vortex representation of turbulent structures .this horseshoe structure is supported by a large number of researchers in the turbulence community as the self - sustaining mechanism for turbulence . 
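The least-squares circle fit used for the normal-speed locus above can be sketched with the standard algebraic (Kasa) formulation: fit x^2 + y^2 + Dx + Ey + F = 0 to the phase-speed points and read the envelope speed off where the circle crosses the speed axis. The sample points below are synthetic, chosen only so that the fitted crossing lands near the quoted value of 8.41; they are not the actual mode phase speeds.

```python
# Algebraic least-squares circle fit to a normal-speed locus (sketch).
import numpy as np

def fit_circle(x, y):
    """Fit x^2 + y^2 + D x + E y + F = 0 in the least-squares sense."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    radius = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, radius

rng = np.random.default_rng(1)
theta = rng.uniform(0.2, np.pi - 0.2, 40)
x = 4.2 * np.cos(theta) + 0.05 * rng.standard_normal(40)        # synthetic
y = 4.2 * np.sin(theta) + 4.2 + 0.05 * rng.standard_normal(40)  # data only

cx, cy, r = fit_circle(x, y)
print("envelope (group) speed ~", cy + r)   # crossing of the speed axis
```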
In the representative horseshoe structure (see, for example, the figure by Theodorsen), the structures found in the KL decomposition can be seen. The wall modes represent the leg structure and its perturbation near the wall. The lift modes represent the structure lifting off the wall. The asymmetric modes represent the secondary and tertiary horseshoes that are formed. The turn modes represent the spanwise head of the horseshoe. As mentioned before, since the KL decomposition forms a basis, any flow can be recreated from these eigenmodes, so this is reported as an interesting qualitative result, and as support for the view that the propagating modes are the structures of interest in turbulence. The final dynamical picture of the KL modes, summarising the work done by Kerswell, Webber et al., Sirovich et al., and the current work, is as follows. Webber et al. found that energy enters the flow from the pressure gradient through the shear modes. The shear modes, interacting with the roll modes present in fully developed turbulence, transfer the energy from the shear modes to the roll modes. As shown in this study, and theoretically by Kerswell, the roll modes decay much more slowly than the propagating modes. Because of this slow decay, these streamwise rolls provide an energy-storage role in the turbulence, similar to a capacitance. This allows enough time for the roll to interact with the propagating wave packet shown by Sirovich et al. Because this interaction requires a finite time to occur, the energy storage and slowly decaying nature of the streamwise rolls play an integral role in the self-sustaining nature of turbulence. It is also surmised that the travelling modes found by Kerswell and other researchers are represented in the KL formulation as the most unstable wave packet of KL modes. Following again the work of Webber et al.
and applying the classifications found in this study , the majority of the energy from the roll modes are transferred to the wall modes .the wall modes then interact with themselves through an asymmetric mode catalyst , and with the lift modes through a ring mode catalyst .physically , this gives a picture of wall turbulence energy being transferred near the wall and then lifted up to the outer layer .this interaction between the wall and the lift modes is what populates the inertial range shown in figure [ distribe ] .the energy is ultimately dissipated by viscosity , faster for the higher wavenumber modes .we have presented the use of the karhunen - love expansion method with the results of a globally high - order direct numerical simulation of turbulent pipe flow .the results reveal the structure of the turbulent pipe flow as propagating ( 80.58% total energy ) and non propagating modes ( 19.42% total energy ) .the propagating modes are characterised by a constant phase speed and have four distinct classes : wall , lift , asymmetric , and ring modes .these propagating modes form a travelling wave envelope , forming a circular , normal - speed locus , with advection speed of corresponding to the mean velocity in the buffer region near .the non propagating modes have two subclasses : streamwise roll and shear modes .these represent the energy storage and mean fluctuations of the turbulent flow , respectively .the energy is transferred from the streamwise rolls to the wall modes first , and then later to the lift modes , physically representing the energy transfer from the wall to the outer region .this eigenfunction expansion , using both their structure and their time - dependent coefficients , provides a framework for understanding the dynamics of turbulent pipe flow , and will provide a basis for further analysis and comparison leading to understanding the mechanism of drag reduction by spanwise wall oscillation and the mechanism of relaminarization .we gratefully acknowledge the use of the virginia tech terascale computing facility and the teragrid san diego supercomputing facility .eggels , j.g.m . , unger , f. , weiss , w.h ., westerweel , j. , adrian , r.j . , friedrich , r. , and nieuwstadt , f.t.m . , 1994 .fully developed turbulent pipe flow : a comparison between direct numerical simulation and experiment ._ j. fluid mech . ,_ * 268 * , 175209 .fischer , p.f ., ho , l.w . , karniadakis , g.e , ronouist , e.m . , and patera , a.t . , 1988 .recent advances in parallel spectral element simulation of unsteady incompressible flows ._ comput . & struct . , _ * 30 * , 217231 .tufo , h.m . , and fischer , p.f . , 1999 .terascale spectral element algorithms and implementations , _ proceedings of the acm / ieee sc99 conference on high performance networking and computing , _ ieee computer soc . , gordon bell prize paper .
The results of an analysis of turbulent pipe flow based on a Karhunen-Loève decomposition are presented. The turbulent flow is generated by a direct numerical simulation of the Navier-Stokes equations using a spectral element algorithm at a Reynolds number Re. This simulation yields a set of basis functions that captures 90% of the energy after 2,453 modes. The eigenfunctions are categorised into two classes and six subclasses based on their wavenumber and coherent vorticity structure. Of the total energy, 81% is in the propagating class, characterised by constant phase speeds; the remaining energy is found in the non-propagating subclasses, the shear and roll modes. The four subclasses of the propagating modes are the wall, lift, asymmetric, and ring modes. The wall modes display coherent vorticity structures near the wall, the lift modes display coherent vorticity structures that lift away from the wall, the asymmetric modes break the symmetry about the axis, and the ring modes display rings of coherent vorticity. Together, the propagating modes form a wave packet, as found from a circular normal-speed locus. The energy transfer mechanism in the flow is a four-step process. The process begins with energy being transferred from the mean flow to the shear modes, then to the roll modes. Energy is then transferred from the roll modes to the wall modes, and eventually to the lift modes. The ring and asymmetric modes act as catalysts that aid in this four-step energy transfer. Physically, this mechanism shows how the energy in the flow starts at the wall and then propagates into the outer layer. Corresponding author. E-mail: duggleby.edu. Keywords: direct numerical simulation, Karhunen-Loève decomposition, turbulence, pipe flow, mechanism.
undersampled images are common in astronomy , because instrument designers are frequently forced to choose between properly sampling a small field - of - view , or undersampling a larger field .nowhere is this problem more acute than on the hubble space telescope , whose corrected optics now provide superb resolution ; however , the detectors on hst are only able to take full advantage of the full resolving power of the telescope over a limited field of view . for instance, the primary optical imaging camera on the hst , the wide field and planetary camera 2 , is composed of four separate 800x800 pixel ccd cameras , one of which , the planetary camera ( pc ) has a scale of per pixel , while the other three , arranged in a chevron around the pc , have a scale of per pixel .these latter three cameras , referred to as the wide field cameras ( wfs ) , are currently the primary workhorses for deep imaging surveys on hst .however , these cameras greatly undersample the hst image .the width of a wf pixel equals the full - width at half - maximum ( fwhm ) of the optics in the the near - infrared , and greatly exceeds it in the blue .in contrast , a well - sampled detector would have pixels across the fwhm .other hst cameras such as nicmos , stis and the future advanced camera for surveys ( acs ) also suffer from undersampling to varying degrees .the effect of undersampling on wf images is illustrated by the great eye chart in the sky " in figure 1 .further examples showing astronomical targets are given in section 8 .when the true distribution of light on the sky is observed by a telescope it is convolved by the point - spread function of the optics to produce an observed image , , where represents the convolution operator .this effect is shown for the hst and wfpc2 optics by the upper - right panel in figure 1 .pixelated detectors then again convolve this image with the response function of the electronic pixel , thus .the detected image can be thought of as this continuous convolved image _ sampled _ at the center of each physical pixel .thus a shift in the position of the detector ( know as a `` dither '' ) can be thought of as producing offset samples from the same convolved image .although pixels are typically square on the detector , their response may be non - uniform , and indeed , may , because of the scattering of light and charge carriers , effectively extend beyond the physical pixel boundaries .this is the case in wfpc2 .by contrast , in the nicmos detectors , the electronic pixel is effectively smaller than the physical pixel .fortunately , much of the information lost to undersampling can be restored . in the lower right of figure 1we display an image made using one of the family of techniques we refer to as `` linear reconstruction . ''the most commonly used of these techniques are shift - and - add and interlacing . in interlacing ,the pixels from the independent images are placed in alternating pixels on the output image according to the alignment of the pixel centers in the original images .the image in the lower right corner of figure 1 has been restored by interlacing dithered images . however , due to the occasional small positioning errors of the telescope and the non - uniform shifts in pixel space caused by the geometric distortion of the optics , true interlacing of images is often infeasible . in the other standard linear reconstruction technique ,shift - and - add , a pixel is shifted to the appropriate location and then added onto a sub - sampled image. 
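The sampling picture described above is easy to illustrate in one dimension: convolve a "true" scene with a PSF and with the pixel response, then sample the resulting continuous image at pixel centres offset by the dither. The scales, kernel shapes and grid in the sketch below are illustrative choices, not the WFPC2 values; the final lines show how two half-pixel-dithered exposures interlace onto a finer grid without any further convolution by the pixel.

```python
# 1D illustration of PSF + pixel convolution, dithered sampling, interlacing.
import numpy as np

fine = 0.01                                   # fine grid spacing (pixel units)
x = np.arange(-10, 10, fine)
truth = np.exp(-0.5 * (x / 0.05) ** 2)        # a nearly point-like source

psf = np.exp(-0.5 * (x / 0.4) ** 2); psf /= psf.sum()
pixel = np.where(np.abs(x) <= 0.5, 1.0, 0.0); pixel /= pixel.sum()  # top-hat pixel

convolved = np.convolve(np.convolve(truth, psf, 'same'), pixel, 'same')

def sample(image, offset):
    """One dithered exposure: sample the continuous image at pixel centres
    shifted by `offset` (in pixel units)."""
    centres = np.arange(-9, 10) + offset
    idx = np.round((centres - x[0]) / fine).astype(int)
    return image[idx]

exposure_a = sample(convolved, 0.0)
exposure_b = sample(convolved, 0.5)           # half-pixel dither

# interlacing: the two exposures fill alternating pixels of a 2x finer grid
interlaced = np.empty(exposure_a.size + exposure_b.size)
interlaced[0::2] = exposure_a
interlaced[1::2] = exposure_b
print(interlaced.max())
```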
shift - and - add can easily handle arbitrary dither postions , but it convolves the image yet again with the original pixel , compounding the blurring of the image and the correlation of the noise . in this case , two further convolutions are involved .the image is convolved with the physical pixel , as this pixel is mathematically shifted over and added to the final image .in addition , when many images with different pointings are added together on the final output grid , there is also a convolution by the pixel of the final output grid .this produces a final image the last convolution rarely produces a significant effect , however , as the output grid is usually considerably finer than the detector pixel grid , and convolutions add roughly as a sum of squares ( the summation is exact if the convolving functions are gaussians ) .the importance of avoiding convolutions by the detector pixel is emphasized by comparing the upper and lower right hand images in figure 1 .the deterioration in image quality between these images is due entirely to the single convolution of the image by the wf pixel .the interlaced image in the lower - right panel has had the sampled values from all of the input images directly placed in the appropriate output pixels without further convolution by either or .here we present a new method , drizzle , which was originally developed for combining the dithered images of the hubble deep field north ( hdf - n ) and has since been widely used both for the combination of dithered images from hst s cameras and those on other telescopes .drizzle has the versatility of shift - and - add yet largely maintains the resolution and independent noise statistics of interlacing .while other methods ( _ c.f . _ lauer 1999 ) have been suggested for the linear combination of dithered images , drizzle has the advantage of being able to handle images with essentially arbitrary shifts , rotations and geometric distortion , and , when given input images with proper associated weight maps , creates an optimal statistically summed image .drizzle also naturally handles images with `` missing '' data , due , for instance , to corruption by cosmic rays or detector defects .the reader should note that drizzle does not attempt to improve upon the final image resolution by enhancing the high frequency components of the image which have been suppressed either by the optics or the detector .while such procedures , which we refer to as `` image restoration '' ( in contrast to `` image reconstruction '' ) , are frequently very valuable ( see hanisch and white 1994 for a review ) , they invariably trade signal - to - noise for enhanced resolution .drizzle , on the other hand , was developed specifically to provide a flexible and general method of image combination which produces high resolution results without sacrificing the final signal - to - noise .although the effect of drizzle on the quality of the image can be profound , the algorithm is conceptually straightforward .pixels in the original input images are mapped into pixels in the subsampled output image , taking into account shifts and rotations between images and the optical distortion of the camera . however , in order to avoid re - convolving the image with the large pixel footprint " of the camera , we allow the user to shrink the pixel before it is averaged into the output image , as shown in figure 2 . the new shrunken pixels , or drops " , rain down upon the subsampled output . 
in the case of the hdf-n wfpc2 imaging , the drops were given linear dimensions one-half that of the input pixel , slightly larger than the dimensions of the output pixels . the value of an input pixel is averaged into an output pixel with a weight proportional to the area of overlap between the ``drop'' and the output pixel . note that if the drop size is sufficiently small , not all output pixels have data added to them from each input image . one must therefore choose a drop size that is small enough to avoid degrading the image , but large enough so that after all images are drizzled the coverage is reasonably uniform . the drop size is controlled by a user-adjustable parameter called pixfrac , which is simply the ratio of the linear size of the drop to the input pixel ( before any adjustment due to the geometric distortion of the camera ) . thus interlacing is equivalent to drizzle in the limit of , while shift-and-add is equivalent to . the degree of subsampling of the output is controlled by the user through the scale parameter , which is the ratio of the linear size of an output pixel to an input pixel . when a pixel from an input image with data value $d_{xy}$ and user-defined weight $w_{xy}$ is added to an output image pixel with value $I_{x'y'}$ , weight $W_{x'y'}$ , and fractional pixel overlap $a_{xy}$ ( with $0 \le a_{xy} \le 1$ ) , the resulting value and weight of that output pixel , $I'_{x'y'}$ and $W'_{x'y'}$ , are $$W'_{x'y'} = a_{xy} w_{xy} + W_{x'y'} , \qquad I'_{x'y'} = \frac{a_{xy} d_{xy} w_{xy} s^2 + I_{x'y'} W_{x'y'}}{W'_{x'y'}} ,$$ where a factor of $s^2$ , the square of the scale parameter , is introduced to conserve surface intensity , and where the unprimed and primed indices are used to distinguish the input and output pixel indices . in practice , drizzle applies this iterative procedure to the input data , pixel by pixel , image by image . thus , after each input image is processed , there is a usable output image and weight , $I_{x'y'}$ and $W_{x'y'}$ . the final output images , after all inputs have been processed , can be written as $$W_{x'y'} = a_{x'y'xyi}\, w_{xyi} , \qquad I_{x'y'} = \frac{a_{x'y'xyi}\, d_{xyi}\, w_{xyi}\, s^2}{W_{x'y'}} ,$$ where for these equations , 4 and 5 , we use the einstein convention of summation over repeated indices , and where the input indices $x$ , $y$ and $i$ extend over all input pixels and images . it is worth noting that in nearly all cases $a_{x'y'xyi} = 0$ , since very few input pixels overlap a given output pixel . this algorithm has a number of advantages over the more standard linear reconstruction methods presently used . it preserves both absolute surface and point source photometry ( though see section 5 for a more detailed discussion of point source photometry ) . therefore flux can be measured using an aperture whose size is independent of position on the chip . and as the method anticipates that a given output pixel may receive no information from a given input pixel , missing data ( due for instance to cosmic rays or detector defects ) do not cause a substantial problem , so long as there are enough dithered images to fill in the gaps caused by these zero-weight input pixels . finally , the linear weighting scheme is statistically optimum when inverse variance maps are used as the input weights . drizzle replaces the convolution by the detector pixel in equation 1 with a convolution by the drop , whose linear size is set by the pixfrac .
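to make the accumulation above concrete , here is a minimal python sketch of one drizzle pass ; the mapping of input pixels onto the output grid is reduced to a precomputed list of overlap triples , and the shifts , rotations , geometric distortion and the actual overlap-area computation are left out , so this is a schematic of the update equations rather than the stsdas implementation .

import numpy as np

def drizzle_add_image(data, weight, overlaps, I, W, scale):
    # data, weight : 2-d arrays for one input image
    # overlaps     : iterable of ((x, y), (xo, yo), a) triples, a = fractional
    #                area of the drop of input pixel (x, y) on output pixel (xo, yo)
    # I, W         : running output image and weight arrays, updated in place
    s2 = scale ** 2                        # conserves surface intensity
    for (x, y), (xo, yo), a in overlaps:
        w_new = a * weight[x, y] + W[xo, yo]
        if w_new > 0:
            I[xo, yo] = (a * data[x, y] * weight[x, y] * s2
                         + I[xo, yo] * W[xo, yo]) / w_new
        W[xo, yo] = w_new
    return I, W

calling this routine once per input image , with I and W carried over between calls , reproduces the image-by-image bookkeeping described above .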
as this kernel is usually smaller than the full pixel , and as noted earlier convolutions add as the sum of squares , the effect of this replacement is often quite significant . furthermore , when the dithered positions of the input images map directly onto the centers of the output grid , and pixfrac and scale are chosen so that is only slightly greater than , one obtains the full advantages of interlacing : because the power in an output pixel is almost entirely determined by input pixels centered on that output pixel , the convolutions with both and effectively drop away . nonetheless , the small overlap between adjacent drops fills in missing data . few hst observing proposals have sufficient time to take a number of exposures at each of several dither positions . therefore , if dithering is to be of widespread use , one must be able to remove cosmic rays from data where few , if any , images are taken at the same position on the sky . we have therefore adapted drizzle to the removal of cosmic rays . as the techniques involved in cosmic ray removal are also valuable in characterizing the image fidelity of drizzle , we will discuss them first . here then is a short description of the method we use for the removal of cosmic rays ( a schematic sketch of the comparison step is given below ) :

1 . drizzle each image onto a separate sub-sampled output image using .
2 . take the median of the resulting aligned drizzled images . this provides a first estimate of an image free of cosmic rays .
3 . map the median image back to the input plane of each of the individual images , taking into account the image shifts and geometric distortion . this can be done by interpolating the values of the median image using a program we have named ``blot'' .
4 . take the spatial derivative of each of the blotted output images . this derivative image is used in the next step to estimate the degree to which errors in the computed image shift or the blurring effect of taking the median could have distorted the value of the blotted estimate .
5 . compare each original image with the corresponding blotted image . where the difference is larger than can be explained by noise statistics , the flattening effect of taking the median , or an error in the shift , the suspect pixel is masked .
6 . repeat the previous step on pixels adjacent to pixels already masked , using a more stringent comparison criterion .
7 . finally , drizzle the input images onto a single output image using the pixel masks created in the previous steps . for this final combination a smaller pixfrac than in step 1 will usually be used in order to maximize the resolution of the final image .

figure 3 shows the result of applying this method to data originally taken by cowie and colleagues . the reduction was done using a set of iraf scripts which are now available along with drizzle in the dither package of stsdas . in addition to demonstrating how effectively cosmic rays can be removed from singly dithered images ( i.e. images which share no common pointing ) , this image also displays the degree to which linear reconstruction can improve the detail of an image . in the drizzled image the object to the upper right clearly has a double nucleus ( or a single nucleus with a dust lane through it ) , but in the original image the object appears unresolved . the drizzling algorithm was designed to obtain optimal signal-to-noise on faint objects while preserving image resolution . these goals are unfortunately not fully compatible .
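returning to the cosmic-ray rejection procedure above , the comparison of steps 4 to 6 can be sketched as follows ; the threshold combining the noise model , the derivative term and an assumed registration error uses made-up parameter names ( snr_thresh , shift_err ) and a simplified ccd noise model , so this is only a schematic of the logic , not the stsdas implementation .

import numpy as np

def cosmic_ray_mask(original, blotted, readnoise, gain, snr_thresh=4.0, shift_err=0.08):
    # derivative of the blotted image, used to allow for small registration errors
    dy, dx = np.gradient(blotted)
    deriv = np.abs(dx) + np.abs(dy)

    # poisson plus read noise, estimated from the blotted (cosmic-ray-free) model
    sigma = np.sqrt(np.maximum(blotted, 0.0) / gain + readnoise ** 2)

    # pixels deviating by more than the allowed noise plus shift term are flagged
    limit = snr_thresh * sigma + shift_err * deriv
    return np.abs(original - blotted) > limit

# a second, more stringent pass would then be applied to the neighbours of flagged pixels.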
as noted earlier , non-linear image restoration procedures which attempt to remove the blurring due to the psf and the pixel response function through enhancing the high frequencies in the image , such as the richardson-lucy and maximum entropy methods , directly exchange signal-to-noise for resolution . in the drizzling algorithm no compromises on signal-to-noise have been made ; the weight of an input pixel in the final output image is entirely independent of its position on the chip . therefore , if the dithered images do not uniformly sample the field , the ``center of light'' in an output pixel may be offset from the center of the pixel , and that offset may vary between adjacent pixels . dithering offsets combined with geometric distortion generally produce a sampling pattern that varies across the field . the output psfs produced by the combination of such irregularly dithered datasets may , on occasion , show variations about the true psf . fortunately this does not noticeably affect aperture photometry performed with typical aperture sizes . in practice the variability appears larger in wfpc2 data than we would predict based on our simulations . examination of more recent dithered stellar fields leads us to suspect that this excess variability results from a problem with the original data , possibly caused by charge transfer errors in the ccd . camera optics generally introduce geometric distortion of images . in the case of the wfpc2 , pixels at the corner of each ccd subtend less area on the sky than those near the center . this effect will be even more pronounced in the case of the advanced camera for surveys ( acs ) . however , after application of the flat field , a source of uniform surface brightness on the sky produces uniform counts across the ccd . therefore , point sources near the corners of the chip are artificially brightened compared to those in the center . by scaling the weights of the input pixels by their areal overlap with the output pixel , and by moving input points to their corrected geometric positions , drizzle largely removes this effect . in the case of , this correction is exact . in order to study the ability of drizzle to remove the photometric effects of geometric distortion when pixfrac is not identically equal to one , we created a four times sub-sampled grid of artificial stellar psfs . this image was then blotted onto four separate images , each with the original wf sampling , but dithered in a four-point pattern of half-pixel shifts . as a result of the geometric distortion of the wf camera , the stellar images appear up to brighter in the corners of these images than near the center . these images were then drizzled with a and . aperture photometry on the grid after drizzling reveals that the effect of geometric distortion on the photometry has been dramatically reduced : the rms photometric variation in the drizzled image is 0.004 mags . of course this is not the final photometric error of a drizzled image ( which will depend on the quality of the input images ) , but only the additional error which the use of drizzle would add under these rather optimal circumstances . in practice users may not have four relatively well interlaced images but rather a number of almost random dithers , and each dithered image may suffer from cosmic ray hits .
therefore , in a separate simulation , we have used the shifts actually obtained in the wf2 f814w images of the hdf - n as an example of the nearly random sub - pixel phase that large dithers may produce on hst .in addition , we have associated with each image a mask corresponding to cosmic ray hits from one of the deep hst wf images used in creating figure 3 .when these simulated images are drizzled together , the root mean - square noise in the final photometry ( which does not include any errors that could occur because of missed or incorrectly identified cosmic rays ) is mags .figure 4 displays the results of this process .we have also evaluated the effect of drizzling on astrometry .the stellar images described in the previous section were again drizzled using the hdf shifts as above , setting and .both uniform weight files and cosmic ray masks were used .the positions of the drizzled stellar images were then determined with the imexam " task of iraf , which locates the centroid using the marginal statistics of a box about the star .a box with side equal to 6 _ output _ pixels , or slightly larger than twice the full - width at half maximum of the stellar images , was used .a root mean square scatter of the stellar positions of _ input _ pixels about the true position was found for the images created with uniform weight files and the cosmic - ray masks .however , we find an identical scatter when we down - sample the original four - times oversampled images to the two - times oversampled scale of the test images .thus it appears that _ no _ additional measurable astrometric error has been introduced by drizzle .rather we are simply observing the limitations of our ability to centroid on images which contain power that is not fully nyquist sampled even when using pixels half the original size .drizzle frequently divides the power from a given input pixel between several output pixels . as a result , the noise in adjacent pixels will be correlated . understanding this effect in a quantitative manner is essential for estimating the statistical errors when drizzled images are analysed using object detection and measurement programs such as sextractor and daophot .the correlation of adjacent pixels implies that a measurement of the noise in a drizzled image on the output pixel scale underestimates the noise on larger scales .in particular , if one block sums a drizzled image by pixels , even using a proper weighted sum of the pixels , the per - pixel noise in the block summed image will generally be more than a factor of n greater than the per - pixel noise of the original image .the factor by which the ratio of these noise values differs from n in the limit as we refer to as the noise correlation ratio , .one can easily see how this situation arises by examining figure 5 . in this figurewe show an input pixel ( broken up into two regions , a and b ) being drizzled onto an output pixel plane .let the noise in this pixel be and let the area of overlap of the drizzled pixel with the `` primary '' output pixel ( shown with a heavier border ) be , and the areas of overlap with the other three pixels be and , where , and .now the total noise power added to the image variance is , of course , ; however , the noise that one would measure by simply adding up the variance of the output image pixel - by - pixel would be the inequality exists because all cross terms ( ) are missed by summing the squares of the individual pixels . 
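the cross terms omitted by the pixel-by-pixel sum above can be verified numerically ; in the sketch below a drop is assumed to overlap four output pixels with areas a , b , c and d summing to one ( the numerical values are made up ) , and the summed per-pixel variance is compared with the variance of the block-summed signal .

import numpy as np

rng = np.random.default_rng(1)
a, b, c, d = 0.55, 0.25, 0.15, 0.05     # assumed overlap areas, summing to 1
sigma = 1.0
noise = rng.normal(0.0, sigma, size=100_000)

pixels = np.outer(noise, [a, b, c, d])  # each realization spread over four output pixels
per_pixel_var = pixels.var(axis=0).sum()   # what a pixel-by-pixel measurement sees
summed_var = pixels.sum(axis=1).var()      # variance of the block-summed signal

print(per_pixel_var)   # ~ sigma^2 * (a^2 + b^2 + c^2 + d^2), about 0.39 here
print(summed_var)      # ~ sigma^2 ; the cross terms restore the full variance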
these terms , which represent the correlated noise in a drizzled image , can be significant .in general , the correlation between pixels , and thus , depends on the choice of drizzle parameters and geometry and orientation of the dither pattern , and often varies across an image .while it is always possible to estimate for a given set of drizzle parameters and dithers , in the case where all output pixels receive equivalent inputs ( in both dither pattern and noise , though not necessarily from the same input images ) the situation becomes far more analytically tractable . in this case , calculating the noise properties of a single pixel gives one the noise properties of the entire image .consider then the situation when , , is set to zero .there is then no correlated noise in the output image since a given input pixel contributes only to the output pixel which lies under its center , and the noise in the individual input pixels is assumed to be independent .let represent a pixel from any of the input images , and let be the set of all whose centers fall on a given output pixel of interest .then it is simple to show from equations 4 and 5 that the expected variance of the noise in that output pixel , when , is simply where is the standard deviation of the noise distribution of the input pixel .we term this , as it is the standard deviation calculated with the pixel values added only to the pixels on which they are _centered_. now let us consider a drizzled output image where . in this case, the set of pixels contributing to an output pixel will not only include pixels whose centers fall on the output pixel , but also those for which a portion of the drop lands on the output pixel of interest even though the center does not . we refer to the set of all input pixels whose drops overlap with a given output pixel as and note that .the variance of the noise in a given output pixel is then where is the fractional area overlap of the drop of input data pixel with the output pixel .here we choose the symbol to represent the standard deviation calculated from all pixels that contribute to the output pixel when .the degree to which and differ depends on the dither pattern and the values of and . however , as more input pixels are averaged together to estimate the value of a given output pixel in than in , .when , is by definition equal to .now consider the situation where we block average a region of pixels of the final drizzled image , doing a proper weighted sum of the image pixels .this sum is equivalent to having drizzled onto an output image with a scale size .but as , this approaches the sum over c , or , in the limit of large , . however , a prediction of the noise in this region , based solely on a measurement of the pixel - to - pixel noise , without taking into account the correlation between pixels would produce .thus we see that one can therefore obtain for a given set of drizzle parameters and dither pattern by calculating and and performing the division .however , there is a further simplification that can be made .because we have assumed that the inputs to each pixel are statistically equivalent , it follows that the weights of the individual output pixels in the final drizzled image are independent of the choice of . to see this , notice that the total weight of a final image ( that is the sum of the weights of all of the pixels in the final image ) is independent of the choice of . 
ignoring edge pixels , the number of pixels in the final image with non - zero weight is also independent of the choice of . yetas the fraction of pixels within of the edge scales as , and the weight of an interior pixel can not depend on n , we see that the weight of an interior pixel must also be independent of . as a result . therefore , we find that although must be calculated for any given set of dithers , there is perhaps one case that is particularly illustrative .when one has many dithers , and these dithers are fairly uniformly placed across the pixel , one can approximate the effect of the dither pattern on the noise by assuming that the dither pattern is entirely uniform and continuously fills the output plane . in this casethe above sums become integrals over the output pixels , and thus it is not hard ( though somewhat tedious ) to derive .if one defines where and , then in the case of a filled uniform dither pattern one finds , if and if , using the relatively typical values of and , one finds .this formula can also be used when block summing the output image .for example , a weighted block - sum of pixels is equivalent to drizzling into a single pixel of size .therefore , the correlated noise in the blocked image can be estimated by replacing with in the above expressions .drizzle has now been widely used for many astronomical image combination problems . in this sectionwe briefly note some of these and provide references where further information may be obtained .drizzle was developed for use in the original hubble deep field north , a project to image an otherwise unexceptional region of the sky to depths far beyond those of previous astronomical images .exposures were taken in four filter bands from the near ultraviolet to the near infra - red .the resulting images are available in the published astronomical literature as well as from the space telescope science institute via the world wide web at http://www.stsci.edu/ftp/science/hdf/hdf.html .subsequently drizzle has also been applied to the hubble deep field south . in this caseit was used for the combination of images from nicmos and stis as well as wfpc2 . in order to obtain dithered nicmos and wfpc2 images in parallel with stis spectroscopy ,hst was rotated , as well as shifted , between observations during the hdf - s .all the software developed to handle such challenging observations is now publicly available ( see section 9 ) .the hdf imaging campaigns are atypical as they had a large numbers of dither positions .a more usual circumstance , matching that described in section 3 , is the processing of a small number of dithers without multiple exposures at the same pointing .a good example of such imaging and its subsequent processing is provided in fruchter _et al . 
_( 1999 ) where gamma ray burst host galaxies were observed using the stis and nicmos hst cameras to obtain morphological and photometric information .similarly bally , odell and mccaughrean ( 2000 ) have used drizzle to combine dithered wfpc2 images with single exposures at each dither position in a program to observe disks , microjets and wind - blown bubbles in the orion nebula .examination of these published images may help the reader to obtain a feeling for the results of using the drizzle program .in addition an extensive set of worked examples of combining dithered data using drizzle is available in the dither handbook distributed by stsci .drizzle provides a flexible , efficient means of combining dithered data which preserves photometric and astrometric accuracy , obtains optimal signal - to - noise , and approaches the best resolution that can be obtained through linear reconstruction .an extensively tested and robust implementation is freely available as an iraf task as part of the stsdas package and can be retrieved from the space telescope science institute web page ( http://www.stsci.edu ) .in addition to drizzle , a number of ancillary tasks for assisting with determining the shifts between images and the combination of wfpc2 data are available as part of the dither " package in stsdas .we are continuing to improve drizzle , to increase both ease of use and generality .new versions of drizzle will be incorporated into stsdas software updates .additional capabilities will soon make the alignment of images simpler , and will provide the user with a choice of drizzling kernels , including ones designed to speed up the image combination with minimal change to the output image or weight an enhancement which may prove particularly useful in the processing acs images .although these additions may make drizzle somewhat more flexible , the basic algorithm described here will remain largely unchanged , as it provides a powerful , general algorithm for the combination of dithered undersampled images .drizzle was developed originally to combine the hdf - n datasets .we wish to thank our colleagues in the hdf - n team , and bob williams in particular , for encouraging us , and for allowing us to be part of this singularly exciting scientific endeavor .we also thank ivo busko for his work on the original implementation of the stsdas dither package , hans - martin adorf for many entertaining and thought provoking discussions on the theory of image combination , and stefano casertano for inciting us to develop a more general theory of the correlated noise in drizzled images .finally , we are grateful to anton koekemoer for a careful reading of the text , and to our referee , tod lauer , for numerous suggestions which significantly improved the clarity and presentation of this paper .
we have developed a method for the linear reconstruction of an image from undersampled , dithered data . the algorithm , known as variable - pixel linear reconstruction , or informally as `` drizzle '' , preserves photometry and resolution , can weight input images according to the statistical significance of each pixel , and removes the effects of geometric distortion both on image shape and photometry . this paper presents the method and its implementation . the photometric and astrometric accuracy and image fidelity of the algorithm as well as the noise characteristics of output images are discussed . in addition , we describe the use of drizzling to combine dithered images in the presence of cosmic rays .
in spite of the prevalence and importance of omnivory food webs in natural communities , their population dynamics to date remain poorly understood , even for communities of only three species . even in simple systems a plethora of nonlinear effects , such as flexible consumer behaviour , intraspecific interactions between competing consumers and resources , inhomogeneity of the environment and adaptive foraging , precludes easy theoretical treatment and interpretation . one example of a non-trivial omnivory food web is a system with intraguild predation . intraguild predation assumes that the same organism is both competitor and predator to another member of the food web . the igp models encompass a rich dynamical behaviour including coexistence and alternative stable states . simple mathematical models have been invoked in an attempt to explain the persistence of igp interactions in natural habitats . however , predictions from the mathematical theory of 3-species igp systems state that a high resource carrying capacity promotes the exclusion of intermediate trophic levels and thus destabilizes interactions . what is puzzling is that various empirical studies of omnivory nevertheless document coexistence , not exclusion , over the entire range of natural resource productivities . on the basis of experimental observations a theoretical 3-species omnivory model predicts coexistence only for superior competitive abilities of the ig prey for the communal resource . yet empirical data suggest a robust persistence of igp systems in both terrestrial and aquatic communities . theoretical models that are focused on the aspects of stability and coexistence of species in 3-level systems with the igp , as a rule , largely reduce the complexity of interactions observed in realistic systems . such oversimplifications can influence the population dynamics as well as critically impact species persistence . even though the simplest model of the igp encompasses only three species , a number of empirical studies deal with larger food webs that involve more than three species potentially engaged in igp interactions . spatiotemporal heterogeneity of the environment is often invoked as one of the explanatory mechanisms for the coexistence of multiple species competing for the same resources . it has been observed that such a spatiotemporal heterogeneity can affect the diversity in prey populations . indeed , an inhomogeneity in prey items that share a common resource and predators is critical in determining the responses of the ecological community . for systems with a multiple prey composition various coexistence patterns can be found depending on the levels of resource productivity . it is not yet clear how the diversity in a prey community will affect the behaviour of igp systems . the effect of habitat structure on the igp is discussed in various recent models . for example , a stable coexistence of the intraguild prey due to inhomogeneity of a habitat can be supported by creating temporal refuges for prey and reducing the encounter rates among prey and predators . in addition , the stability of the igp can be enhanced by the inclusion of additional factors such as behaviourally mediated effects .
to include the effect of an increasing diversity of resources and ig predators on population dynamics , the 3-species igp model was recently modified by holt and huxel ( 2007 ) . the authors extended the basic 3-species omnivory model to the so-called ``partial igp'' model , in which ``partial'' overlap among competitors for a single resource exists and both predators have exclusive resources to exploit . it was shown that an alternative resource enhances the tolerance of the ig prey against attacks from ig predators . independently of the competitive status of the ig prey in exploitation of the shared resource , it can persist by utilizing an alternative resource . an extended formulation of the igp model with trophic supplementation has been proposed by daugherty et al . the authors investigated three forms of supplementary feeding outside of the basic igp module and postulated a higher potential for persistence of the ig prey due to its efficient exploitation of external resources . there is growing evidence that in many systems the ig prey has a mutualistic or at least facilitative relationship with the ig predator . including such facilitation in ecological theory will fundamentally change many basic predictions and will enable a better understanding of the functioning of many natural communities . especially in igp systems , emphasis should be given to the elucidation of the effects of facilitation on community composition and stability . contrary to the competitive exclusion principle , in such systems with competitors for a single resource stability stems from commensalism , whereby one consumer can in some way alter the habitat to benefit the other . recently such an interaction was observed in experiments with a microzooplankton food web community . the experimental system included two predators : a tintinnid species _favella ehrenbergii_ and a heterotrophic dinoflagellate species _gyrodinium dominans_ , both of which graze on a phototrophic dinoflagellate _scrippsiella trochoidea_ . the authors showed that the ig predator _f. ehrenbergii_ can precondition a substantial part of the common resource _s. trochoidea_ during its feeding procedure by immobilizing the common prey without ingestion . such preconditioned individuals can be captured more easily by the ig prey _g. dominans_ than the mobile individuals of the same resource species . this mutualistic interaction leads to higher growth rates of the ig prey in the presence of the ig predator . the authors characterized their experimental observations as a facilitative igp relationship with a commensalistic pattern . our motivation for this modeling study was to investigate whether such commensalistic patterns can create loopholes for a stable coexistence of all species in the investigated system . of major interest to us was whether , in the igp system , immobilization or the partitioning of prey populations into distinct groups of individuals offers opportunities for competition avoidance between the two consumer species . we reformulated the 3-species igp model proposed in to include multiple subpopulations of prey . furthermore , we explored the effect of diversification of the resource available to higher level consumers on species persistence by numerical simulations of an extended igp module . specifically , we investigate how the addition of new links to a focal igp module enhances the stability of population dynamics by reducing the competitive interactions of predators for their shared resource .
in order to explain the results of the experimental findings of leder et al .we investigated the influence of multiple traits of the resource community on a stable coexistence in the 3-species model with different types of resource .for this purpose we adapted and reformulated the original model by holt and polis ( 1997 ) and added a new type of interaction .this link specifies the immobilization mechanism that depends on the densities of mobile and immobile resource items and the top predator which creates the immobile resource fraction during feeding .the immobilization term is used to model the interactions between the ig predator and the resource .another type of interaction considered in this paper is a resource turnover mechanism .this mechanism describes mutual interactions between species from distinct resource subpopulations .the interaction term depends exclusively on the resource subpopulation densities .the rate of turnover is constant . if no turnover or immobilization of individuals from one group to another occurs then the basic igp model with a single population of resource is recovered .we discuss the influence of immobilization and transfer of species on the coexistence patterns in a system with different subpopulations of the resource and compare the results with the basic 3-species igp .this paper is organized as follows : in the first section we introduce a general 3-species igp model with a new type of interaction that links the resource pools to the top consumers . in the following sections two distinct igp formulations with resource subpopulations are discussed .both models are derived from the basic igp module by including additional links : the immobilization by the predator and the resource turnover . in the results sectionwe numerically investigate stability of equilibrium densities for various trophic configurations .data from numerical analysis are presented for the igp model with the immobilization and for the model with the resource turnover .at last we discuss results for a general igp model with the resource turnover mechanism and subpopulations of the resource . after sketching the main conclusions we review the model predictions and compare their relevance to the immobilization experiment .furthermore we discuss possible alternative reformulations of the model . in the appendix explicit forms for the steady states for two simple analytical cases and multidimensional system are specified . as a part of a linear stability routine the jacobian matrices for two types of formulationsare given .finally , we carry over to a higher dimensional formulation and describe the parameters choice and the equilibrium densities .we introduce an omnivory model with an igp unit derived from a simple non - spatial lotka - volterra system with the linear functional responses adapted from holt and polis ( 1997 ) .the original model consists of populations of two predators ( ig predator and ig prey ) and a common resource . 
here, we include new features such as a resource differentiation mechanism which affects palatability of a fraction of resource for the predators .specifically , the entire resource population is subdivided into distinct groups under the assumption that the groups differ from each other by the quality and fitness of the individuals .they are consumed by the predators at different group - specific grazing rates .the differentiation of the resource could be due to damage by the predator or initial inhomogeneous distribution of the resource quality .afterwards , we generalize our model to the case of the multiple resources . the food web model for a multiple number of prey subpopulations is sketched in fig .[ fig1 ] a. the top predator and the intermediate predator are engaged in the igp and share a common resource .the resource pools are not independent because there is an exchange of individuals among different subpopulations following the links in fig .another special case of the igp with two distinct populations of resource and is presented in fig .shown is a schematic view of trophic interactions including intraguild predation and two populations and of immobilized and mobile resources respectively .the ig prey competes with the ig predator for both resource types and is also an additional resource for the ig predator .the size of the population increases due to immobilization of individuals from the population by the ig predator .we begin with an overview of a general igp model and all the important trophic links and parameters that are used to define it .later we focus specifically on two different formulations of the general igp model .the general model for a food web with an inhomogeneous resource is derived from the lotka - volterra omnivory model with the interaction term that accounts for the transitions among different pools .the lotka - volterra omnivory model consists of equations .it is used as an approximation for the food web community with the igp and mutually interacting subpopulations of the resources . in the absence of predation a basal population develops according to logistic growth .the set of equations for the population densities are written as follows : _ shared resource _ : s_1\nonumber\\ & & -z_1(s_1,s_2,\ldots , s_n , g , f),\nonumber\end{aligned}\ ] ] _ shared resource ( ) _ : s_k,\nonumber\end{aligned}\ ] ] _ intermediate predator ( ig prey ) _ : _ top predator ( ig predator ) _ : the parameters of the model and main populations are described in details in table [ tab1 ] . 
here is the maximum specific growth rate of the resource population , is the carrying capacity of the resource defined as enrichment factor in the previous models .the subpopulations are derived from the basal resource via immobilization or via individual - to - individual turnover .species from and are consumed by the ig predator at potentially different rates and and by the ig prey at rates and respectively .the differentiation among subpopulations is preserved by a choice of distinct predation pressures , feeding rates and mortality coefficients .the density - independent mortality rates for and are and correspondingly .they are used as factors limiting the growth of the populations in ( [ eq.1 ] ) .a key assumption of the model is that there is only one - directional movement between the basal resource and its fractions .the local interactions among individuals from alternative pools are embedded via functional terms provided in table [ tab2 ] for each type of the igp formulation .these terms account for transitions among the resource items .the general omnivory model ( [ eq.1 ] ) can be reduced to three types of igp formulations : system with immobilization and systems with the resource turnover for subpopulations and for pools . for each of the formulationsspecific expressions of functional forms and are provided in the table [ tab2 ] .the term is responsible for the exchange of individuals among subpopulations due to the species turnover or the immobilization mechanism .the transfer of individuals from the population to happens instantaneously at constant rates correspondingly .analogously are defined as instantaneous migration rates among subpopulations .the terms and are used to evaluate the total predation of the ig prey and the ig predator on the resource . to achieve a stable persistence of all speciesthe ig prey should benefit more from an alternative resource than the ig predator .for this reason , whereas the attack rates of the ig predator are equal for different resource pools , the ig prey establishes a higher predation pressure on subpopulations than on the basal pool .the numerical values for the attack rates are chosen to be close to the experimentally observed values .holt and huxel ( 2007 ) used an extended igp module with alternative resources that are defined independently .they evolve according to their own intrinsic growth rates .as opposed to the formulation given by holt and huxel ( 2007 ) and to a model with trophic supplements here we do not consider external alternative resources . in our model with immobilization the population density in every resource pool varies due to immobilization by the ig predator and consumption by the predators .similarly in the formulation with the resource turnover the transfer mechanism between resource subpopulations plays a role in exchange among the distinct resource pools .alternative pools grow due to the influx of species from the basal resource or the other pools .therefore the sizes of subpopulations are controlled mainly by a number of direct encounters with the ig predator ( immobilization ) or by a species turnover from one resource subpopulation to another .in addition , the individuals in the different pools of the basal resource are distinguished by group - specific predation pressures that establish a top - down regulation of densities of each subpopulation . 
in the following sections we present an explicit formulation of the model with immobilization and of the model with the resource turnover for subpopulations . ' '' '' ' '' '' .[tab1 ] the variables and parameters for the general model ( [ eq.1 ] ) .[ cols= " < , < " , ] ' '' '' ' '' '' 2 the system with the immobilization illustrated in fig .[ fig1 ] b is derived from the equations ( [ eq.1 ] ) for two resource subpopulations by substituting the interaction terms and from table [ tab1 ] .after the substitution the set of equations for the igp model with immobilization yields : + _ mobile resource : _\nonumber\\&&-(f+i_m ) f s_m,\nonumber\end{aligned}\ ] ] _ immobilized resource : _ s_i , \nonumber\end{aligned}\ ] ] _ ig prey : _ g,\nonumber\end{aligned}\ ] ] _ ig predator : _ f,\label{eq.2}\end{aligned}\ ] ] where the state variables and are the densities of mobile and immobilized species .note that the feeding rates of the top predator on both populations and are equal .by contrast , the attack rate of the ig prey on immobilized subpopulation is higher than on mobile species .the relation holds in the presence and in the absence of the predator .this assumption is well justified by the observations of an experiment with artificial immobilization . _ g. dominans _ demonstrate a strongly selective behaviour towards immobilized species when offered in a mixture with mobile cells of _ s. trochoidea_. it was measured that ingestion rates of the predator in the immobilized prey treatment were by a factor of greater than those in the control treatment .the stability of equilibrium densities and the persistence zones of the system ( [ eq.2 ] ) with a non - zero immobilization rate are discussed in section [ sec.3.1 ] . the model with the resource turnoveris derived from the general case ( [ eq.1 ] ) by substituting the functional forms from table [ tab2 ] .it is written as follows : + _ resource _ : s_1\nonumber\\ & & -[f f+t_r s_2 ] s_1\nonumber,\end{aligned}\ ] ] _ resource _ : s_2,\nonumber\end{aligned}\ ] ] _ ig prey _ : g,\nonumber\end{aligned}\ ] ] _ ig predator _ : f.\label{eq.3}\end{aligned}\ ] ] all the parameters are chosen the same as for the system with immobilization ( [ eq.2 ] ) .note that the evolution equations are written as in ( [ eq.2 ] ) but immobilization term is replaced with the transfer term that is dependent on the population densities .the transfer between the two subpopulations occurs each time whenever species from two different pools encounter each other . in the simplest casethe number of encounters is proportional to the population densities of and .if the density of second subpopulation is zero and no differentiation in the resource takes place at than the top predator outcompetes the predator due to a higher predation rate ( ) .this outcome is predicted by the basic igp model . by contrast ,whenever the turnover of species takes place and non - zero densities are produced in the resource pool the intraguild predation introduces a higher pressure on the second subpopulation .this will potentially lead to a negative effect on the population density in and to higher levels of subpopulation .the result of this interaction is that the 3-species coexistence is reached via the igp competition trade - off .we illustrate an emergent dynamical behaviour for the three formulations provided in table [ tab3 ] with stability diagrams . 
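since the explicit coefficients of equations ( 2 ) are not reproduced here , the following python sketch only assumes a lotka-volterra structure of the immobilization model as described in the text : logistic growth of the mobile resource , linear grazing terms , an immobilization flux proportional to the ig predator density , and illustrative parameter values rather than those of table 1 .

import numpy as np
from scipy.integrate import solve_ivp

# illustrative parameters (not the values of table 1): growth r, capacity K,
# attack rates a_*, conversion efficiencies e_*, mortalities m_*, immobilization i_m
p = dict(r=1.0, K=5.0, a_gm=0.4, a_gi=0.8, a_fm=0.6, a_fi=0.6, a_fg=0.3,
         e_g=0.5, e_f=0.5, m_g=0.2, m_f=0.2, m_i=0.1, i_m=0.3)

def igp_immobilization(t, y, p):
    s_m, s_i, g, f = y   # mobile resource, immobilized resource, ig prey, ig predator
    ds_m = p['r'] * (1 - (s_m + s_i) / p['K']) * s_m \
           - p['a_gm'] * g * s_m - (p['a_fm'] + p['i_m']) * f * s_m
    ds_i = p['i_m'] * f * s_m - p['a_gi'] * g * s_i - p['a_fi'] * f * s_i - p['m_i'] * s_i
    dg   = p['e_g'] * (p['a_gm'] * s_m + p['a_gi'] * s_i) * g - p['a_fg'] * f * g - p['m_g'] * g
    df   = p['e_f'] * (p['a_fm'] * s_m + p['a_fi'] * s_i + p['a_fg'] * g) * f - p['m_f'] * f
    return [ds_m, ds_i, dg, df]

sol = solve_ivp(igp_immobilization, (0, 500), [1.0, 0.0, 0.5, 0.5], args=(p,))
print(sol.y[:, -1])   # densities approached at the end of the run

integrating a system of this kind over a grid of carrying capacities and immobilization rates , and classifying the long-term behaviour at each grid point , is the type of computation behind the stability diagrams discussed next .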
due to high dimensionality of the models ( [ eq.1])-([eq.3 ] )the analysis of an entire parameter space is intractable .only several illustrative examples for every formulation will be shown here . in fig .[ fig2 ] the regions of stable positive equilibrium solutions versus immobilization and enrichment are shown .the parameter space is partitioned into several stability zones associated with the regions of coexistence , exclusion of both predators and exclusion of the ig prey at .the boundaries defined for partitioning of the diagram are found from the eigenvalue analysis ( see appendix ) . as shown in fig .[ fig2 ] at low enrichment the densities of both predators decay to zero and the summed abundance of the resource reaches steady state at .the case of zero immobilization has been already considered in previous studies . at low immobilization and at high enrichmentonly the top predator and resource are stable and positive , just as in the 3-species igp model , whereas the coexistence between both predators and common resource is possible only in the regions of intermediate enrichment . a higher mortality rate for the predator results in its extinction in the region of low immobilization in fig . [ fig2]b where an extra resource can no longer support its persistence .only the ig predator and resource persist in this region of parameters .situation is different for higher immobilization where a large region of coexistence for both predators exists .the equilibrium densities shown on the diagrams are defined in the eq .( [ eq.7 ] ) in appendix .[ fig3 ] shows equilibrium densities of the four components of the food web and their dependence on enrichment and immobilization rates .for a high immobilization rate the resource population is dominated by immobilized individuals . meanwhile at low immobilization mobile and immobilized populations increase along the gradient of enrichmentan adverse pattern occurs at high immobilization .the growth rate of the ig predator is noticeably reduced at due to an increase of the competitive trade - off with the ig prey .the dependence of the population densities on the enrichment of resource is shown in fig .[ fig4 ] for the immobilization .meanwhile as predicted from the standard igp model the ig prey is excluded at high enrichment in the model with immobilization at the ig prey benefits from immobilized resource and its persistence is increased at a broader range of carrying capacities .the density of the mobile ( immobilized ) resource subpopulation reach saturation threshold at a higher enrichment ( see fig .[ fig4 ] ) . how sensitive is a stable coexistence to small variations of the attack rates of the intermediate predator ?will our predictions be still valid ? to examine the system behaviour for different attack rates of the coexistence zones are exemplified for different values of in fig .colorcode is assigned according to grazing rate .overall the stability diagram exhibits similar pattern as in fig .specifically , the region of stable coexistence enlarges for higher immobilization . as it seems reasonable the number of stable solutions and the 3-species permanence zone in fig .[ fig5 ] gradually broadens with the increase of predation pressure from predator .simultaneously fewer exclusion steady states for the predator are discovered . in this sectionthe equilibrium solutions and stability of the equilibria are discussed for the system ( [ eq.2 ] ) with the mechanism of species turnover . 
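the partitioning of the stability diagrams rests on an eigenvalue analysis of the community matrix ; a generic numerical version of this test , using finite-difference jacobians instead of the analytic matrices of the appendix , might look as follows ( the right-hand side is assumed to have the same signature as in the earlier ode sketch ) .

import numpy as np

def numerical_jacobian(rhs, y_star, p, eps=1e-6):
    # finite-difference jacobian of rhs(t, y, p) at the equilibrium y_star
    n = len(y_star)
    J = np.zeros((n, n))
    f0 = np.asarray(rhs(0.0, y_star, p))
    for j in range(n):
        y = np.array(y_star, dtype=float)
        y[j] += eps
        J[:, j] = (np.asarray(rhs(0.0, y, p)) - f0) / eps
    return J

def is_stable(rhs, y_star, p):
    # an interior equilibrium is linearly stable if all eigenvalues
    # of the jacobian have negative real part
    eig = np.linalg.eigvals(numerical_jacobian(rhs, y_star, p))
    return np.all(eig.real < 0)

with the right-hand side from the previous sketch , is_stable can be evaluated on a grid of enrichment and transfer ( or immobilization ) values to classify each point of a diagram like those shown here .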
in fig .[ fig6 ] the regions of stable ( unstable ) equilibria are plotted versus the enrichment and the transfer rate .the results are contrasted on the stability diagrams in figs .[ fig6 ] for different predation rates of the ig predators .four different states are localized in the parameter space that corresponds to stable ( unstable ) persistence and exclusion of the ig prey ( ig predator ) .we investigate how the dynamics in the extended igp system responded to variation of enrichment levels . for each case shown in fig .[ fig6 ] an increase of enrichment is accompanied with a series of bifurcations in the system manifested by an invasion of higher trophic levels similar to predictions from the linear food chain theory .for instance , at low enrichment both regimes and are stable .further increase of at fixed results in a chain of bifurcations from a stable regime to an unstable and subsequently to a stable coexistence regime .a further increase of carrying capacity favours an exclusion of and shifts the population densities towards the dominance of the ig predator .an interesting feature is that at low transfer rate only a coexistence of the ig predator and the resource is found .the second subpopulation is extinguished fast due to predation and low transfer rate .the steady states found for low enrichment are similar to the case of a single prey population without transfer mechanism at and .as typified on the diagram in fig .[ fig6 ] d the ig prey levels remain positive . since the ig prey has an advantage as a competitor for the shared resource only the ig predator gets excluded from the system .the stability behaviour of the system ( [ eq.3 ] ) is highly sensitive to the alternations of attack rates of and the productivity of resource .changes of these parameters produce different emergent patterns as shown in fig .the location of states of stable ( unstable ) permanence and the exclusion zone of is still comparable to the patterns shown in fig .[ fig6 ] , however the region of 3-species coexistence gets visibly reduced .the reduction is more evident on the plots fig .[ fig7 ] and . at higher transfer ratesthe coexistence of all 3-species is no longer observed and only the population of ig prey and resource persist .due to low productivity the densities of the basal resource are quickly depleted and the ig predator is driven to extinction . on the contrary ,conditions become more profitable for the ig prey that is released from the igp pressure and simultaneously obtains more benefits by predation on the extra resource . at low transfer rates ( fig .[ fig7]c and d ) the ig predator is excluded independently on carrying capacity of the resource .as expected , with increase of the attack rate of the population of the ig predator is driven to extinction due competition with ig prey .however , situation becomes more favourable for the ig predator at higher values of the transfer coefficient . for high enrichment and intermediate transfer the ig preyis excluded from the system . at a fixed enrichmentseveral alternating states are found along the gradient of ( see fig .[ fig7 ] d ) . for example , at the behaviour of the food web is very sensitive even to a small alternations of .indeed , the system passes through distinct steady states just within a small increment of transfer rate .the exclusion of the ig predator is observed at , the coexistence is found at and the exclusion of the ig prey is achieved at . 
finally at a higher transfer values ( )both predators enter the system and persistence is reached . after presenting the results for the systems ( [ eq.2 ] ) and ( [ eq.3 ] ) we proceed to a more complex situation with of distinct subpopulations of the resource . for a multipopulation modelthe choice of parameters including predation rates can be enormously large .as a consequence more freedom is provided for choosing equilibrium densities that can fit the model ( [ eq.1 ] ) .since it is impossible to investigate the entire range of biologically plausible parameters we make a particular choice of parameters that allow an easier comparison of the case in ( [ eq.1 ] ) with the model ( [ eq.2 ] ) .the details of the procedure are provided in appendix . in this sectionwe show the results of the numerical simulation for the model with prey subpopulations .the system ( [ eq.1 ] ) for the case is integrated numerically .for the calculation of the stability diagrams at different fixed values of enrichment and transfer rate we perform simulations .the results of the simulations for and subpopulations are illustrated in fig .the percentage of stable 3-species coexistence is calculated for every point in the parameter space with fixed enrichment and limiting value .the colorcode is assigned according to the percentage of stable coexistence solutions found for food webs . in all the replicas of the simulated system the steady state densities for and fixed ( see ( [ eq.12 ] ) in the appendix ) .thus only the variations among possible equilibrium densities are examined .the constraints for the parameters of high dimensional system ( [ eq.1 ] ) are given in eqs .( [ eq.14])-([eq.16 ] ) in the appendix . the stability diagrams in fig .[ eq.8 ] show some similarities to the regions of coexistence in figs .[ fig6 ] and [ fig7 ] found for the subpopulation model ( [ eq.3 ] ) .the size of the stability zone expands with the increase of the transfer coefficient. 
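the percentage-of-stable-webs statistic used above can be approximated by a simple monte carlo loop ; the sketch below reuses the hypothetical right-hand side and stability test from the earlier sketches ( the two-pool immobilization system stands in for the n-pool turnover system ) , draws the free attack rates from arbitrary uniform ranges , and does not reproduce the constraints of eqs . ( 14 ) to ( 16 ) , so it only illustrates the bookkeeping .

import numpy as np
from scipy.integrate import solve_ivp

# p, igp_immobilization and is_stable are taken from the sketches above
rng = np.random.default_rng(0)

def stable_fraction(K, i_m, n_webs=1000):
    stable = 0
    for _ in range(n_webs):
        q = dict(p, K=K, i_m=i_m)                 # enrichment and transfer fixed
        q['a_gi'] = rng.uniform(0.4, 1.2)         # randomly drawn attack rates
        q['a_fg'] = rng.uniform(0.1, 0.6)
        sol = solve_ivp(igp_immobilization, (0, 2000), [1.0, 0.1, 0.5, 0.5], args=(q,))
        y_end = sol.y[:, -1]
        if np.all(y_end > 1e-3) and is_stable(igp_immobilization, y_end, q):
            stable += 1
    return stable / n_webs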
at low transfer rates no stable persistence is found , but different alternative traits .the percentage of stable food webs with 3-species is substantially lower for a large system with subpopulations than for .this reduction in stability is independent on the number of simulated food webs and a choice of main parameters of the system .it is possible that an increase of food web connectivity in this case impacts negatively the system ( [ eq.1 ] ) stability .another feature is that for the percentage of stable equilibria at a fixed enrichment value decreases for large values of unlike in previous cases in fig .[ fig8 ] a - c .the results of the numerical simulation demonstrate that for subpopulation up to of stable systems are found at a higher transfer rate and an intermediate enrichment .second , for a larger food web with subpopulations a higher percentage of stable steady states ( up to ) are identified at low transfer rate and at high enrichment .we compare the results of simulation for four cases at fixed enrichment and variable transfer coefficient in fig .the yields are derived for simulations of food webs .the estimations of the number of steady states show that for food webs with a higher percentage of solutions with a stable coexistence are identified than for food web with pools .indeed , the yield for reaches almost meanwhile the percentage of stable food webs found for saturates at for large .the non - monotonic variations of the yields in fig .[ fig9 ] reveal a highly sensitive behaviour of the igp model ( [ eq.1 ] ) to a change in transfer rate in all cases .for the percentage of stable food webs reaches at a low transfer rate .it decreases substantially for higher values of the transfer rate . for is an overall incline from at to at of stable configurations .two types of stable equilibrium solutions are illustrated in fig .[ fig10 ] .both solutions are obtained inside the stable coexistence region as indicated in fig .the system ( [ eq.1 ] ) is simulated with number of subpopulations and initial conditions as defined in the appendix . for the steady state in fig .[ fig10 ] a and the oscillatory state in fig .[ fig10 ] b most of resource subpopulations are unstable and their densities rapidly decline to zero after some initial transient .nevertheless , coexistence in the system is typically supported by one or two resource pools with non - zero densities .+ + +there is growing evidence from theoretical and empirical studies that creating additional trophic links have a stabilizing effect on food webs .generalized models reveal that the stability of food webs can be enhanced when species at higher trophic levels graze upon multiple prey species .in particular , for low dimensional food webs it is demonstrated that an addition of alternative food resources can stabilize the interactions and open up a possibility for feedbacks on population dynamics due to apparent competition .the predictions of our model confirm the main conclusions given in a theoretical study of an extended igp model . 
in the alternative formulations used here the ig preyhas the access to an extra resource beyond the shared resource for which both predators compete .this extra resource is a more attractive resource item for the ig prey and is thus attacked at higher rates by the ig prey whereas the attack rate of the ig predator stays the same .moreover the ig predator indirectly stimulated the growth of the ig prey population by providing this extra resource .our predictions tested by the application of a stability analysis are robust in the sense that they are independent of the form of the interaction term that is responsible for the availability of an additional resource .we demonstrate that for different formulations of the basic igp model with the embedded interactions a stable 3-species coexistence is ultimately reached whenever a moderate strength of the omnivorous links is used .however , the problem to relate the experimental findings to the theoretical predictions of our model ( [ eq.2 ] ) still remains open .since the experiment is aimed to observe a short term populations development it is not easy to find a direct correspondence between empirical population dynamics and theoretically predicted behaviour . in the day batch culture experiment of lderet al . with all 3 species present both predators _ g. dominans _ and _ f. ehrenbergii _ displayed positive growth while the prey population _ s. trochoidea _ displayed simultaneously a sharp decline to almost zero .how can this behaviour be classified according to our theoretical model ? could it be a part of an oscillatory cycle or an unstable state ?it is not easy to answer these questions , however , we can make a guess that the short term evolution observed in the experiment recasts as a part of an oscillatory cycle for a periodic equilibrium state found at intermediate immobilization . a similar type of experiments performed for various initial species densities could furnish a justification of this hypothesis .the above results demonstrate that a persistence of ig predator , ig prey and resource is achieved even at a low value of immobilization rate .moreover , a significantly higher percentage of observed stable configurations is found when the immobilization and transfer links in ( [ eq.1 ] ) and ( [ eq.2 ] ) are strengthened . because our model is an oversimplification of the experimental behaviour the partitioning of the parameter space according to stable versus unstable coexistence could be used as an approximation of the population dynamics found in a real experimental situation .firstly , the conditions for long term stable coexistence found by numerical simulations are not so easy to examine experimentally because of technical and temporal restrictions .experimental samples in ref . are taken during days of incubation due to a decline of the prey population . 
secondly, due to the existence of stable limit cycles predicted by our linear stability analysis (see fig. [fig10]), the oscillatory solutions go through a period of very low densities and might be driven to extinction in the presence of random fluctuations of the environment. we point out that our numerical simulations of the extended 3-species igp module ([eq.1]) do not explore the entire swath of parameters and configurations for the steady states. rather, our analysis focuses on explaining conceptual features of the igp model with diverse prey populations. earlier studies focusing on the dynamics of complex ecological communities demonstrated the importance of multiple prey traits in mitigating predator selection pressures and altering predator-induced behavioural shifts in natural environments. our model can also be adapted to food web communities in which differentiation among prey individuals, namely variation in individual traits such as fitness and mobility, is a result of heterogeneities in their natural habitat and/or adaptation of the species to the local conditions of the habitat. we show that an existing diversity of resource item traits can significantly alter the emergent community patterns. adding new subpopulations of resources with distinct traits that are more vulnerable to an attack from the ig predator facilitates the coexistence of both igp-related predators, which compete for the common food resource. thus, the presence of an alternative resource indirectly induces shifts in exploitative competition. it is important to note that a general mathematical model with density-dependent interactions and immobilization does not render a unique theoretical description of the results of the experiment. our model predictions can be tested against alternative formulations. indeed, the main features of the experimental system can be examined by the inclusion of predation rates that depend on the mobility of the resource species. since slow and immobile individuals can also be found among mobile species, one can use an inhomogeneous distribution of velocities of the resource species in a theoretical model. to guarantee more benefit for the intermediate consumer in catching a certain type of individual, distinct predation rates should be assigned according to the different velocities of the resource species. another question is whether the growth rate of the ig predator will be affected by the inclusion of the time of resource capture.
how will the inclusion of the time lag change the predictions of our immobilization model? these extensions of a general igp model will be a topic for our future investigations. finally, we point out that it is of potential interest for biological control and conservation management to understand the functioning of omnivory and igp systems in relation to global changes of the environment. since igp food webs are widespread in natural communities, their adaptation and resilience behaviour is essential for understanding the restructuring of natural communities. we have used three formulations of a general igp model to explore the effects of increasing diversity in the prey population on higher trophic levels. the reformulated igp model alters the results from the basic igp theory. we show that an increase in the number of trophic interactions in the system via differentiation of the resource can stabilize the population dynamics of the igp module. this conclusion holds even for the densities of the ig prey, which level off at positive values even when the ig predator is a superior competitor for the common basal resource. first, we show that for the system with the immobilization term up to three regions of stable trophic configurations are observed along the enrichment gradient. while at low enrichment both the ig prey and the ig predator are excluded, at high enrichment the presence of only a small concentration of immobilized cells is sufficient to facilitate the coexistence of the competitors in the igp relationship. moreover, the percentage of all admissible trophic configurations for the 3-species persistence increases substantially for higher immobilization. second, given that immobilization is high enough, it prompts the exchange between the pools of mobile and immobilized resource and facilitates a fast decline of the mobile population and a growth of the immobilized subpopulation. as the exchange between the basal mobile resource and the predators gets weaker due to the low density of mobile species, the immobilized individuals become a major food resource for the predators. because the ig prey is a superior competitor for the immobilized resource, a robust coexistence of both predators is easily supported. in addition, along an increasing gradient of immobilization the relative abundance of the ig prey becomes higher than the abundance of the ig predator. restructuring of the basic igp module by adding individual-to-individual turnover facilitates the coexistence and stabilizes the otherwise unstable system. moreover, a strengthening of the interaction link leads to a significantly broader range of enrichment values at which stable coexistence can be found. at low transfer rate two types of equilibria are observed: if the ig predator is a superior competitor for the resources, then at low enrichment both predators are excluded and at high enrichment only the ig predator stays in the system; an increase of the attack rates of the ig prey depresses the population of the ig predator until it is completely excluded. numerical simulations of food web ([eq.1]) with and distinct pools demonstrate that the high-dimensional food webs overall manifest far less stable behaviour than the food webs with only two distinct subpopulations. an interesting feature is that the percentage of stable states for substantially decreases from to with an apparent increase of the value of the transfer rate. by contrast, for food webs with an increase in transfer rate leads to a growth of the percentage of stable coexistence solutions from about to .
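as an illustration of the steady versus oscillatory coexistence regimes summarized above, the following minimal sketch integrates a generic 3-species igp lotka-volterra module (resource, ig prey, ig predator). it is not the model ([eq.1])-([eq.3]) of this paper: the functional forms and all parameter values are illustrative assumptions, chosen only to show how one can distinguish a limit cycle from a steady coexistence state by inspecting the late-time extrema of a long integration.

```python
# minimal sketch of a generic 3-species igp lotka-volterra module; the
# equations and parameter values are illustrative assumptions, not the
# model (eq.1)-(eq.3) analysed in the text.
import numpy as np
from scipy.integrate import solve_ivp

r_grow, k_cap = 1.0, 5.0        # resource growth rate and carrying capacity (enrichment)
a_n, a_p = 1.0, 0.6             # attack rates on the resource (ig prey / ig predator)
b_np = 0.4                      # igp link: attack rate of the ig predator on the ig prey
e_n, e_p, e_np = 0.5, 0.4, 0.4  # conversion efficiencies
m_n, m_p = 0.3, 0.25            # mortality rates of ig prey and ig predator

def igp_rhs(t, y):
    r, n, p = y
    dr = r_grow * r * (1.0 - r / k_cap) - a_n * r * n - a_p * r * p
    dn = e_n * a_n * r * n - b_np * n * p - m_n * n
    dp = e_p * a_p * r * p + e_np * b_np * n * p - m_p * p
    return [dr, dn, dp]

# integrate well beyond a short incubation window to see whether an initial
# sharp decline is transient, part of an oscillation, or a true extinction
sol = solve_ivp(igp_rhs, (0.0, 400.0), [1.0, 0.5, 0.2], rtol=1e-8, atol=1e-10)
late = sol.y[:, sol.t > 200.0]
for name, series in zip(("resource", "ig prey", "ig predator"), late):
    print(f"{name}: late-time min = {series.min():.3f}, max = {series.max():.3f}")
# a persistent gap between late-time min and max indicates a limit cycle,
# whereas min close to max indicates convergence to a steady coexistence state.
```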
in the appendixwe review the steady state solutions for the lotka - volterra models ( [ eq.2 ] ) and ( [ eq.3 ] ) and provide jacobian matrices to examine their local stability for the coexistence of both predators and the resource .first , equilibrium solutions are derived for the 3-species model ( [ eq.2 ] ) with zero immobilization ( ) and zero initial size of immobilized population ( ) .second , the steady states are given for the model ( [ eq.2 ] ) with immobilization ( ) .at last , the equilibrium solutions are presented for the system with the resource turnover ( [ eq.3 ] ) .for every case various trophic configurations are considered : exclusion of both predators , exclusion of ig prey or ig predator and the 3-species coexistence .the equilibrium solution of ( [ eq.2 ] ) for the 3-species coexistence without immobilization is stated as follows : where .the necessary condition for the coexistence requires that the right hand side is positive in ( [ eq.4 ] ) .the expressions for the equilibrium densities for the survival of the ig prey and the resource with exclusion of the ig predator at yield : the condition for persistence of the ig prey and the resource reads : . at zero density of the intermediate consumer ( ) one yields the steady states of the resource and the ig predator : the densities are positive if and only if the condition holds true . for the model ( [ eq.2 ] ) with immobilization we define a set of equilibrium densities to satisfy the equalities below : ,\nonumber\\ q_2&=&i_m f s_m - s_i ( b g+f f ) , \\q_3&=&(a'a s_m+a'b s_i - g f - m_g ) g,\nonumber\\ q_4&=&(f'f ( s_i+s_m ) + g g ' g - m_f ) f.\nonumber\label{eq.7}\end{aligned}\ ] ] the system ( [ eq.7 ] ) has four alternative solutions : exclusion of both predators at ; exclusion of the ig predator at ; the coexistence of resource and the ig predator at ; the 3-species coexistence . in the absence of immobilization mechanism is not active and the model ( [ eq.2 ] ) reduces to the system without immobilization where the equilibrium solutions written as ( [ eq.5 ] ) . upon exclusion of the ig prey in ( [ eq.7 ] ) one obtains expression for the equilibrium densities of resource and the ig predator : note that the size of mobile population is proportional to the size of immobilized population . as is expected the population of immobilized preys is impacted positively by the increase of immobilization .although the predation pressures are equal for both resource subpopulations the immobile population extinguishes faster than the mobile population . indeed , with the increase of predation rate the following approximations hold : and .the matrix diagonal is written in terms of the equilibrium densities and as follows : the solution ( ) is globally asymptotically stable in the phase space if the condition for stability is satisfied . for the stable coexistenceit is necessary that the real parts of all four eigenvalues of the stability matrix are non - positive . to obtain the boundary for stability regions in the parameter space the eigenvalues of the above stability matrixare evaluated numerically at different parameters combinations .the resulting stability diagrams are presented in fig .[ fig2 ] and fig .[ fig5 ] . 
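the numerical evaluation of the eigenvalues of the stability matrix mentioned above can be sketched as follows. the snippet is a generic illustration: it locates an equilibrium of an arbitrary right-hand side (for instance the igp module sketched earlier) with a numerical root finder, builds the jacobian by finite differences rather than from the analytic expressions of the appendix, and declares the equilibrium stable when all eigenvalues have negative real part. the tolerances are assumptions; the yield reported in the simulations corresponds to the fraction of random parameter draws for which such a test succeeds.

```python
# sketch of the numerical stability test behind the stability diagrams:
# find an equilibrium, build a finite-difference jacobian, and check that
# all eigenvalues have negative real part. tolerances are illustrative.
import numpy as np
from scipy.optimize import fsolve

def jacobian_fd(rhs, x, eps=1e-7):
    """central finite-difference jacobian of rhs at x."""
    x = np.asarray(x, dtype=float)
    n = x.size
    jac = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        jac[:, j] = (np.asarray(rhs(x + dx)) - np.asarray(rhs(x - dx))) / (2.0 * eps)
    return jac

def is_stable_equilibrium(rhs, x_guess, tol=1e-6):
    """locate an equilibrium near x_guess and test local asymptotic stability."""
    x_star = fsolve(rhs, x_guess)
    admissible = np.max(np.abs(rhs(x_star))) < tol and np.all(x_star > -tol)
    if not admissible:
        return x_star, False          # no admissible (non-negative) equilibrium found
    eigenvalues = np.linalg.eigvals(jacobian_fd(rhs, x_star))
    return x_star, bool(np.all(eigenvalues.real < 0.0))

# usage with the generic module sketched earlier:
#   rhs = lambda y: igp_rhs(0.0, y)
#   x_eq, stable = is_stable_equilibrium(rhs, x_guess=[1.0, 0.5, 0.2])
```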
as in the previous casethe system [ eq.3 ] for pools permits four steady states : the exclusion of the predators at ; the exclusion of the ig predator at ; the exclusion of the ig prey at ; the coexistence of 3-species .the solution for the coexistence of the resource and the ig prey in the absence of the ig predator is expressed as follows : note that at the ig prey is excluded and steady state density for the resource approach the carrying capacity limit .the positive solution of ( [ eq.10 ] ) exists if the parameters satisfy the inequality : .the steady state for the resource and the ig predator in the absence of ig prey yields : a nontrivial solution for the 3-species coexistence is found by solving for the equilibrium of the following system : finally , the condition for the stable coexistence is provided by solving for the eigenvalues of the stability matrix : ] . in the next step an antisymmetric matrix of individual - to - individual interactionsis defined with the upper diagonal coefficients that obey the inequalities : . the remaining lower diagonal coefficients should satisfy the antisymmetry relation : .the set of equations in ( [ eq.1 ] ) for holds true if the positive mortality rates are expressed from the equations as follows : the equation ( [ eq.15 ] ) for is used to solve for the attack rate .finally , the attack rates and the feeding rates are randomly assigned from the intervals ] correspondingly .the choice of the feeding rates enables to reduce the predation pressure on populations by a factor of .the remaining grazing rates and are defined from the relations : and . finally , two constraints hold to solve for the coefficients and : we want to emphasize that the attack rate should be higher than any of the rates . nevertheless since relation holds the total predation pressure of summed over alternative pools exceeds the predation exclusively on .this far the positive population density of the ig prey is maintained due to the consumption of alternative resources .we provided a special assignment of parameters for the predation , feeding and mortality that enables to fulfil the condition of positive equilibria that are comparable to the realistic biodensities . moreover the stability results can be conclusively compared with two different formulations ( [ eq.1 ] ) and ( [ eq.3 ] ) .00 abrams , p. a. , roth j. , 1994 .the responses of unstable food chains to enrichment .ecol . 8 , 150 - 171 .abrams , p. a. , roth , j. d. , 1994 .the effects of enrichment of three - species food chains with nonlinear functional responses .ecology 75(4 ) , 1118 - 1130 .abrams , p. a. , fung , s. r. , 2010 .prey persistence and abundance in systems with intraguild predation and type-2 functional responses .264 , 1033 - 1042 .amarasekare , p. , 2000 .coexistence of competing parasitoids on a patchily distributed host : local vs spatial mechanisms .ecology 81 , 1286 - 1296 .amarasekare , p. , 2006 .productivity , dispersal and the coexistence of intraguild predators and prey .234 , 121 - 133 .amarasekare , p. , 2007 .spatial dynamics of communities with intraguild predation : the role of dispersal strategies .naturalist 170(6 ) , 819 - 831 .arim , m. , marquet , p. a. , 2004 .intraguild predation : a widespread interaction related to species biology .lett . 7 , 557 - 564 .bonsall , m. b. , holt , r. d. , 2003 .the effects of enrichment on the dynamics of apparent competitive interactions in stage - structured systems .naturalist 162 , 780 - 795 .borer , e. t. , briggs , c. j. , murdoch , w. 
w. , swarbrick , s. l. , 2003 .testing intraguild predation in field system : does numerical dominance shift along a gradient of productivity ?6 , 929 - 935 .brodeur , j. , rosenheim , j. a. , 2000 .intraguild interactions in aphid parasitoids .97 , 93 - 108 .brose , u. , berlow , e. l. , martinez , n. d. , 2005 . from food websto ecological networks : linking non - linear trophic interactions with nutrient competition . in : de ruiter , p. c. , moore , j. c. , wolters , v.(eds . ) , dynamic food webs .elsevier , pp 27 - 36 .bruno , j. f. , stachowicz , j. j. , bertness , m. d. , 2003 .trends evol .18 ( 3),119 - 125 .crowley , p. h.,cox , j. j. , 2011 .intraguild mutualism , trends evol .26(12 ) , 627 - 633 . daugherty , m. p. , harmon , j. p. , briggs , c. j. , 2007 .trophic supplements to intraguild predation .oikos 116 , 662 - 677 .denno , r. f. , fagan , w. f. , 2003 . a stoichiometric perspective on omnivory in arthropod - dominated ecosystems : the importance of nitrogen limitation .ecology 84 , 2522 - 2531 .dewitt , t. j. , langerhans , r. b. , 2003 .multiple prey traits , multiple predators : keys to understanding complex community dynamics .j. sea res .49 , 143 - 155 .diehl , s. , feiel , m. , 2000 .effects of enrichment on three - level food chains with omnivory .naturalist 155 , 200 - 218 .diehl , s. , feissel , m. , 2001 .inraguild prey suffer from enrichment of their resources : a microcosm experiment with ciliates .ecology 82(11 ) , 2977 - 2983 .finke , d. l. , denno , r. f. , 2002 .intraguild predation diminishes in complex - structured vegetation : implication for prey suppression .ecology 83 , 643 - 652 .gardner , m. r. , ashby , w. r. , 1970 .connectance of large dynamic ( cybernetic ) systems : critical values for stability .nature 228 , 784 .gross , t. , rudolf , l.,levin , s. a. , dieckmann , u. , 2009 .generalized models reveal stabilizing factors in food webs . science .325 , 747 - 750 .holt , r. d. , grover , j. , tilman , d.,1994 .simple rules for interspecific dominance in systems with exploitative and apparent competiton .naturalist 144 ( 5 ) , 741 - 77 .holt , r. d. , polis , g. a. , 1997 .a theoretical framework for intraguild predation .naturalist 149 , 745 - 764 .holt , r.d . ,huxel , g. r. , 2007 .alternative prey and the dynamics of intraguild predation : theoretical perspectives .ecology 88 , 2706 - 2712 .hosack , g. r. , li , h. w. , rossignol , p. a. , 2009 .sensitivity of system stability to model structure .modelling 220 , 1054 - 1062 .hutchinson , g. e. , 1961 .the paradox of the plankton .naturalist 95 ( 882 ) , 137 - 145 .ives , a. r.,carpenter , s. r. , 2007 .stability and diversity of ecosystems .science 317 , 58 - 62 .janssen , a. , sabels , m. w. , magalhaes , s.,montserrat , m.,van der hammen , t.,2007 .habitat structure affects intraguild predation , ecology 88(11 ) , 2713 - 2719 .kivan , v. , 1996 . optimal foraging and predator - prey dynamics .49 , 265 - 290 .kivan , v. , diehl , s. , 2005 .adaptive omnivory and species coexistence in tri - trophic food webs .67 , 85 - 99 .leibold , m. a. , 1996 . a graphical model of keystone predators in food webs : trophic regulation of abundance , incidence and diversity patterns in communities .naturalist 147(5 ) , 784 - 812 .leibold , m. a. , hall s. r. , bjornstad o. , 2005 . food web architecture and its effects on consumer resource oscillations in experimental pond ecosystems . in : deruiter , p. c. , moore , j. c.,wolters , v.(eds . ) , dynamic food webs .elsevier , pp .lima , s. l. , dill , l. m. 
, 1990 .behavioral decisions made under the risk of predation : a review and prospectus . can .68 , 619 - 640 .lder , m. g. j. , boersma , m. , kraberg , a. c. , aberle , n. , shchekinova , e. , wiltshire , k. h. , xxxx .even smallest competitors can promote each other : commensalism between marine microzooplankton predators ( submitted ) .moore , j. c. , 2005 .variations in community architecture as stabilizing mechanisms of food webs . in : de ruiter , p. c. , moore , j. c. , wolters , v. , ( eds . ) , dynamic food webs .elsevier , pp .24 - 26 mylius , s. d. , klumpers , k. , de roos , a. m. , persson , l. , 2001 .impact of intraguild predation and stage structure on simple communities along a productivity gradient .naturalist 158 , 259 - 276 .namba , t. , tanabe , t. , maeda , n. , 2008 .omnivory and stability of food webs .complexity 5 , 73 - 85 .oksansen , l. , fretwell , s. d. , arruda , j. , niemela , p. , 1981exploitation ecosystems in gradients of primary productivity .amer . naturalist .118 , 240 - 261 .pimm , s. l.,lawton , j. h. , 1978 . on feeding on more than one trophic level .nature 275 ( 12 ) , 542 - 544 .polis , g. a. , myers , c. a. , holt , r. d. , 1989 . the ecology and evolution of intraguild predation : potential competitors that eat each other .ecol . system .20 , 297 - 330 .polis , g. a. , holt , r. d. , 1992 .intraguild predation : the dynamics and complex trophic interactions , tree 7(5 ) , 151 - 154 .rosenheim , j. a. , wilhoit , r. , armer , c. a. , 1993 .influence of intraguild predation among generalist insect predators on the suppression of an herbivore population .oecologia 96 , 439 - 449 .stoecker , d. k. , evans , g. t. , 1985 .effect of protozoan herbivory and carnivory in a microplankton food web , mar .25 , 159 - 167 .svirezhev , v. m. , logofet , d. o. , 1983 .stability of biological communities .mir publishers , moscow , ussr .thomson , r. m. , hemberg , m. , starzomski , b. m. , shurin , j. b. , 2007 .trophic levels and trophic tangles : the prevalence of omnivory in real food webs .ecology 88(3 ) , 612 - 617 .trussell , g. c. , ewanchuk , p. j. , bertness , m. d. , 2002 .field evidence of trait - mediated indirect interactions in a pocky intertidal food web .lett . 5 , 747 - 756 .vadeboncoeur , v. , mccann , k. s. , vander zanden , m. j. , rasmussen , j. b. , 2005 .effects of multi - chain omnivory on the strength of trophic control in lakes , ecosystems 8 , 682 - 693 .vandermeer , j. , 2006 . omnivory and the stability of food webs .238 , 497 - 504 .woodward , g. , thompson , r. , townsend , c. r. , hildrew , a. c. , 2005 . pattern and process in food webs : evidence from running waters . in : belgrano ,a. , scharler , u. m. , dunne , j. , ulanowicz , r. e. ( eds . ) , aquatic food webs : an ecosystem approach .oxford university press , oxford , pp .
food webs with intraguild predation (igp) are widespread in natural habitats. their adaptation and resilience behaviour is essential for understanding the restructuring of ecological communities. in spite of the importance of igp food webs, their behaviour even for the simplest 3-species systems has not been fully explored. one fundamental question is how an increase in the diversity of the lowest trophic level impacts the persistence of higher trophic levels in igp relationships. we analyze a 3-species food web model with a heterogeneous resource and igp. the model consists of two predators coupled directly via the igp relation and indirectly via competition for the resource. the resource is subdivided into distinct subpopulations. individuals in the subpopulations are grazed at different rates by the predators. we consider two models: an igp module with immobilization by the top predator and an igp module with species turnover. we examine the effect of increasing enrichment and varying immobilization (resource transfer) rate on the stable coexistence of predators and resources. we explore how the predictions from the basic 3-species model are altered when the igp module is extended to multiple resource subpopulations. we investigate which parameters support a robust coexistence in the igp system. for the case of multiple subpopulations of the resource we present a numerical comparison of the percentage of food webs with stable coexistence for different dimensionalities of the resource community. at low immobilization (transfer) rates our model predicts a stable 3-species coexistence only at intermediate enrichment, whereas at high rates a large set of stable equilibrium configurations is found at high enrichment as well. intraguild predation, immobilization, alternative resource, multiple resource traits, stable coexistence
the largest and most powerful solar telescopes are ground-based facilities. however, compared to their counterparts in space such as the solar dynamics observatory (sdo; pesnell et al. 2012), they often lack user-friendly data pipelines to reduce the raw data. only a few ground-based instruments have such a pipeline (e.g., de la cruz rodríguez 2015). the data reduction process is often much more complex than for space missions and differs substantially from instrument to instrument. the need for automated data reduction pipelines that produce science-ready data becomes crucial in the era of big data. therefore, in the framework of the eu-funded solarnet project, we developed a pipeline called ``stools'' to reduce and prepare data acquired with the gregor fabry-pérot interferometer (gfpi; puschmann 2012), large-format imaging cameras, and the high-resolution fast imager (hifi) located at europe's largest solar telescope gregor (schmidt 2012). in sect. 2 we briefly describe the instruments supported by the pipeline. section 3 is dedicated to the description of the pipeline, and sect. 4 reports on the data archive and provides an outlook. the gfpi is the successor of the göttingen fabry-pérot interferometer (bendlin 1992, puschmann 2006), which was attached to the vacuum tower telescope (vtt) at the observatorio del teide and is now installed at gregor. it is an imaging spectropolarimeter which operates in the visible and near-infrared spectral range. the gfpi has two tunable etalons in a collimated setup. the two ccd cameras, one for the broad-band and another for the narrow-band images, have the same image scale and a size of pixels. the spatial scale of the gfpi was derived by comparing gfpi broad-band images from the year 2014 with continuum images from sdo, which results in 0 . hence, the field-of-view (fov) of the gfpi in spectroscopic mode is . the blue imaging channel (bic) of the gfpi is fed by light in the visible below . a beamsplitter allows for simultaneous image acquisition at two different wavelengths with two cameras. the observer aims for a high cadence with these cameras to minimize atmospheric seeing effects and assure high-quality image reconstruction. initially, two pco.4000 ccd cameras with a size of pixels and a spatial sampling of 0 were used. in 2016, these cameras were replaced by two synchronized scmos imagers, and the instrument is now named the high-resolution fast imager (hifi). both cameras write their images into the same file. this requires the same exposure times for both cameras, which can be achieved using neutral density filters. the hifi chips have a size of pixels and a spatial sampling of 0 . of special interest is the frame rate of these cameras, which is 49 hz or 98 hz with the full or a pixels fov, respectively. hifi can be used independently from the gfpi, e.g., to provide context images for the gregor infrared spectrograph
(gris; collados 2012). a careful acquisition of calibration data each day is crucial to produce high-quality science data after the reduction process. the following calibration data are required: dark, flat-field, resolution target, and several pinhole images. in addition, the spectroscopic mode of the gfpi requires a long scan to derive the prefilter curve of the interference filter. it is crucial to have dark images with the same exposure times as all the acquired data. all routines are written in the interactive data language (idl) and are documented in the header. moreover, the routines are strictly named with the prefix ``stools'' in order to avoid overlap with other idl routines. gfpi and hifi data are recorded in the native format of davis, a software package from the company lavision gmbh, göttingen. we will focus on the data reduction of the imaging spectroscopic mode. all computed images and parameters for the data calibration are stored in one single idl save file. single variables are added to or extracted from idl save files with special stools routines. to initialize the pipeline, the user only needs to specify the input and output directories, the observed wavelength, and the on-chip binning of the cameras. first, the average dark images are computed for both cameras. in the next step the flat-field images are averaged for each position of the etalon. other valuable information such as the positions of the etalon, accumulations, and step size is also saved. the pinhole and resolution target images are corrected for dark and flat-field. they are then used to derive the alignment parameters between both gfpi cameras such as rotation, displacement, pivot point, and magnification. the output is stored in an idl structure. the blueshift correction needs to be performed across the fov of the narrow-band images. for this task, flat-field images are taken and interpolated to a narrower and equidistant wavelength grid. the resulting spectral profiles for each pixel are slightly smoothed by convolving them with a normalized kernel of units. a second-order polynomial fit to the line core of the spectral line yields the central position on the wavelength axis. we repeat this step for all spectra across the fov. the average of all central positions is then computed and subtracted from each individual position. hence, we obtain the displacement in wavelength, i.e., the blueshift correction, for all pixels within the fov. the correction is applied by displacing the spectra using cubic spline interpolation. for the narrow-band camera, the aforementioned flat field can now be used together with the blueshift correction to compute a normalized master flat, i.e., a single two-dimensional gain table, where the spectral information is removed. the user can choose either this master flat or the former spectrally dependent flat field to correct the data. the last step before correcting the science observations is to derive the prefilter curve of the gfpi. to this end, we use a long scan with a small step size covering a large fraction of the etalon's scan range. this scan is ideally taken at the solar disk center under quiet-sun conditions to avoid spectral line shifts. after dark, flat-field, and blueshift corrections a gfpi mean spectrum, which represents the average transmission profile of the narrow-band interference filter, is computed.
to extract the trend of the prefilter curve we take advantage of the fourier transform spectrometer (fts; neckel & labs 1984) spectrum from the kitt peak national observatory. the fts spectrum is first convolved with the theoretical gfpi transmission profile and then matched to the observed gfpi mean spectrum by minimizing the wavelength offset. this is followed by a resampling of the fts spectrum to the gfpi wavelength scale. afterwards, the ratio of both spectra is computed. finally, the ratio, with the exception of parts showing large variations, is fitted using a double gaussian. least-squares fitting between model and observations is performed with the mpfit package (markwardt 2009). all calibration data are now ready to be applied to the science data. the pipeline now splits into two different branches. on the one hand, quick-look physical maps are generated, e.g., line-core intensity images, line-of-sight (los) velocity maps, equivalent width maps, seeing quality parameters, etc. the results are stored in an idl structure together with the reduced broad- and narrow-band images (level 1.0 data). on the other hand, the data are prepared for image restoration (level 2.0 data) with multi-object multi-frame blind deconvolution (momfbd; löfdahl 2002, van noort 2005). the restored images are finally written in the flexible image transport system format (fits; wells 1981). one example of a restored line-core image and its associated los velocity map is shown in fig. [fig1]. we follow a similar approach for the hifi data reduction. the user needs to provide the directory of the observations, telescope and user names, and the two wavelengths of the respective interference filters. the average dark, flat-field, pinhole, and resolution target images are then computed and saved as individual fits files. the file name has a two-digit suffix which identifies the type of calibration data. moreover, the fits header includes basic information such as date and time of the observations, wavelength, telescope, dimensions of the images, etc. the target frames are used to align both cameras. the obtained parameters such as displacement, rotation, and magnification are stored in an idl save file. due to the setup of the cameras, both have the same image scale; however, there might be a slight displacement along the vertical and horizontal axes. the science observations are then corrected for dark and for artifacts on the chip by using the flat-field images. if more than one burst of flat-field images was taken, the pipeline chooses the one closest in time to the science observations. an image quality check to sort the images is performed at this point. the reason for this is that hifi, with its high frame rate, acquires more images than are actually needed for the image restoration. usually 500 images are taken per burst and only the 100 best are used for the restoration. the other images are dropped. however, the user can change this criterion. we take advantage of the median filter-gradient similarity (mfgs) introduced by
deng (2015) to scrutinize the image quality. the mfgs code is implemented in an independent idl routine and is applied to every single image. each pixel now has a value between 0 and 1. this value and additional statistics related to the mfgs are stored in a separate idl save file and also added to the fits header. finally, the average mfgs value over the whole image is used as a reference. note that the mfgs value is only calculated for the images of one of the cameras. this is justified since both cameras strictly acquire images at the same time; hence, both were recorded under the same seeing conditions. finally, the images sorted according to their mfgs value (level 1.0 data) are written into a fits file using standard routines of the idl astronomy user's library hosted at . the two bursts of images are written alternately into a single fits file using image extensions (ponz 1994). odd extension numbers belong to images from one filter whereas even numbers belong to the other filter. the file has a detailed primary header with all relevant information about the observations. furthermore, there are extension headers for each image showing only image-specific information, e.g., the wavelength, image statistics, and mfgs information. the data restoration is carried out using the speckle-interferometry code kisip (wöger & von der lühe 2008) to produce science-ready data (level 2.0 data) as shown in fig. . gfpi and hifi data are stored at the data archive of aip (http://gregor.aip.de). in addition, the pipeline generates a webpage, which includes an overview of the acquired observations as well as quick-look images and statistics. the data recorded since 2014 will be made public to the solar community. the stools pipeline is under constant development. an official version will be released in 2017 via a version control system on the above-mentioned webpage under a creative commons license. this work was carried out as a part of the solarnet project supported by the european commission's 7th framework programme under grant agreement no. . the 1.5-meter gregor solar telescope was built by a german consortium under the leadership of the kiepenheuer-institut für sonnenphysik in freiburg (kis) with the leibniz-institut für astrophysik potsdam (aip), the institut für astrophysik göttingen (iag), the max-planck-institut für sonnensystemforschung in göttingen (mps), and the instituto de astrofísica de canarias (iac), and with contributions by the astronomical institute of the academy of sciences of the czech republic (ascr). sjgm is grateful for financial support from the leibniz graduate school for quantitative spectroscopy in astrophysics, a joint project of aip and the institute of physics and astronomy of the university of potsdam.
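the stools pipeline itself is written in idl; purely as an illustration of the frame-selection logic described above, the following python sketch scores every frame of a burst with a simplified median-filter gradient-similarity measure and keeps only the best frames for restoration. the filter size, gradient operator and normalisation are assumptions and may differ from the implementation in stools and from the exact definition of deng (2015).

```python
# illustration only (python, not the idl pipeline code): score each frame of
# a burst with a simplified mfgs-like measure and keep the best n_keep frames.
import numpy as np
from scipy.ndimage import median_filter, sobel

def mfgs_score(image, size=3):
    """mean of a per-pixel gradient-similarity index between an image and its
    median-filtered version; the index lies between 0 and 1."""
    img = image.astype(float)
    med = median_filter(img, size=size)
    g_img = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    g_med = np.hypot(sobel(med, axis=0), sobel(med, axis=1))
    sim = (2.0 * g_med * g_img + 1e-12) / (g_med ** 2 + g_img ** 2 + 1e-12)
    return float(sim.mean())

def select_best_frames(burst, n_keep=100):
    """sort a burst of shape (n_frames, ny, nx) by score, best first."""
    scores = np.array([mfgs_score(frame) for frame in burst])
    order = np.argsort(scores)[::-1]
    return burst[order[:n_keep]], scores[order[:n_keep]]
```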
a huge amount of data has been acquired with the gregor fabry-pérot interferometer (gfpi), large-format facility cameras, and since 2016 with the high-resolution fast imager (hifi). these data are processed with standardized procedures with the aim of providing science-ready data for the solar physics community. for this purpose, we have developed a user-friendly data reduction pipeline called ``stools'' based on the interactive data language (idl) and licensed under a creative commons license. the pipeline delivers reduced and image-reconstructed data with a minimum of user interaction. furthermore, quick-look data are generated as well as a webpage with an overview of the observations and their statistics. all the processed data are stored online at the gregor gfpi and hifi data archive of the leibniz institute for astrophysics potsdam (aip). the principles of the pipeline are presented together with selected high-resolution spectral scans and images processed with stools.
in recent years, multisensor estimation fusion, or data fusion, has received significant attention in target tracking, artificial intelligence, sensor networks and big data (see ), since many practical problems involve information or data from multiple sources. the problem of multisensor estimation fusion is how to optimally fuse sensor data from multiple sensors to provide more useful and accurate information for the purpose of estimating an unknown process state. currently, estimation fusion technology has rapidly evolved from a loose collection of related techniques into an emerging engineering discipline with standardized terminology. generally speaking, there are two traditional architectures for estimation fusion, namely, the centralized fusion structure and the distributed fusion structure. in the centralized architecture, each sensor sends its raw data to the fusion center; theoretically, this is nothing but an estimation problem with distributed data. moreover, the centralized fusion approach can usually reach the optimal linear estimate in the mean squared error (mse) sense. in the distributed architecture, by contrast, each sensor propagates its local estimate to the fusion center, which decreases the computational burden at the fusion center, but it may not attain the optimal linear estimate in the mse sense. due to its important practical significance, distributed estimation fusion has been studied extensively, see , , , , . for multisensor point estimation fusion in the probabilistic setting, many results have been obtained (see, e.g., the books , , ). provides the optimal linear estimation fusion method for a unified linear model. proves that the distributed fusion algorithm is equivalent to the optimal centralized kalman filtering in the case of cross-uncorrelated sensor noises, and the one for the case of cross-correlated sensor noises is proposed in . when the communication bandwidth between the fusion center and the sensors is limited, achieves a constrained optimal estimation at the fusion center. in addition, proposes a lossless linear transformation of the raw measurements of each sensor for distributed estimation fusion. most existing information fusion algorithms are based on sequential estimation techniques such as the kalman filter, the information filter and weighted least-squares methods, which require accurate statistical knowledge of the process and measurement noises. owing to the limitation of human and material resources in real life, the exact statistical characteristics of the noise often cannot be obtained, which may lead to poor performance of the state estimation (see , ). nonlinear target tracking systems in particular are sensitive to the precise distribution of the noise. in many engineering applications, it is easier to obtain upper and lower bounds of an unknown noise. in the unknown-but-bounded setting, the earliest work on the set-membership filter was proposed by at the end of the 1960s, and it was later developed by and . these robust filters are derived through set-membership estimation, usually a bounding ellipsoid containing the true state. moreover, the set-membership filter for nonlinear dynamic systems has also been investigated by , , and references therein.
for multisensor set-membership fusion in the bounded setting, proposes a relaxed chebyshev center covariance intersection (ci) algorithm to fuse the local estimates; geometrically, the fused estimate is the center of the minimum-radius ball enclosing the intersection of the estimated ellipsoids of the sensors. in order to account for the inconsistency problem of the local estimates, proposes a covariance union (cu) method, which is more conservative than ci fusion. however, judging and computing the correlation may be difficult. since the set-membership filter only needs to know the bounds of the noises, rather than their statistical properties, it does not require judging the correlation between sensors, which inspires us to consider set-membership information fusion. for linear dynamic systems, proposes algorithms of multisensor set-membership information fusion that minimize the euclidean estimation error of the state vector. however, for nonlinear dynamic systems, multisensor set-membership information fusion has not received enough research attention. these facts motivate us to further investigate the more challenging set-membership fusion problem for nonlinear dynamic systems. in this paper, two popular fusion architectures are considered: centralized and distributed set-membership information fusion. firstly, each of them can be converted into a semidefinite programming (sdp) problem which can be computed efficiently. secondly, and surprisingly, their analytical solutions can be derived by using a decoupling technique. it is very interesting that they are quite similar in form to the classic information filter in the mse sense. in the two analytical fusion formulae, the information of each sensor can be clearly characterized, and knowledge of the correlation among measurement noises across sensors is not required. finally, multi-algorithm fusion is used to minimize the size of the state bounding ellipsoid by exploiting the complementary advantages of multiple parallel algorithms. a typical numerical example in target tracking demonstrates the effectiveness of the centralized, distributed, and multi-algorithm set-membership fusion algorithms. in particular, it shows that multi-algorithm fusion performs better than both the centralized and the distributed fusion. the rest of the paper is organized as follows. section [sec_2] introduces the problem formulation for the centralized fusion and the distributed fusion. in section [sec_3], the centralized set-membership information fusion algorithm is derived by the s-procedure, schur complement and decoupling technique. section [sec_4] provides the distributed set-membership information fusion algorithm.
a typical example in target trackingis presented in section [ sec_5 ] , while conclusion is drawn in section [ sec_6 ] .consider the -sensor _ centralized _ nonlinear dynamic system with unknown but bounded noises as follows : where is the state of system at time , is the measurement at the sensor , , is the nonlinear function of the state , is nonlinear measurement function of at the sensor , is the uncertain process noise and is the uncertain measurement noise .assume that and are confined to specified ellipsoidal sets where and are the _ shape matrix _ of the ellipsoids and , , respectively .both of them are known symmetric positive - definite matrices .suppose that when the nonlinear functions are linearized , the remainder terms can be bounded by an ellipsoid , respectively .specifically , by taylor s theorem , and can be linearized to where , , are jacobian matrices . and are high - order remainders , which can be bounded in an ellipsoid for , , respectively , i.e. , where and are the centers of the ellipsoids and , respectively ; and are the shape matrices of the ellipsoids and , respectively .note that proposes the monte carlo methods for the bounding ellipsoids of the remainders , which can effectively take advantage of the character of the nonlinear functions , and it can obtain the tighter bounding ellipsoids and to cover the remainders on line . the corresponding centralized set - membership information fusion problem can be formulated as follows .assume that the initial state belongs to a given bounding ellipsoid : where is the center of ellipsoid , and is the shape matrix of the ellipsoid which is a known symmetric positive - definite matrix . at time , given that belongs to a current bounding ellipsoid : where is the center of ellipsoid , and is a known symmetric positive - definite matrix . at next time ,the fusion center can obtain the measurements from the sensor , .for the centralized fusion system , the goal of the fusion center is to determine a prediction ellipsoid and an estimation ellipsoid at time .firstly , in prediction step , we look for and such that the state belongs to whenever i ) is in , ii ) the process noise , and iii ) the remainder .secondly , in the fusion update step , we look for and such that the state belongs to whenever i ) is in , ii ) measurement noises , , and iii ) the remainders , .moreover , we provide a state bounding ellipsoid by minimizing its size " at each time which is a function of the shape matrix denoted by . if we choose trace function , i.e. , , which means the sum of squares of semiaxes lengths of the ellipsoid , the other common size " of the ellipsoid is , which corresponds to the volume of the ellipsoid . in order to emphasize the importance of the interested state vector entry, proposes an objective of the ellipsoid as follows where is the weight coefficient with , and denotes the element in the row and the column of the matrix , .if the bound of the entry of the interested state vector is very important , we can give a larger weight to .when , , which means that each entry of the state vector is treated equally , and it is also equivalent to the trace function .therefore , we can use multi - algorithm fusion to obtain multiple bounding estimated ellipsoids , which squashed along each entry of the state vector as much as possible based on different weighted objective ( [ eqpre_170 ] ) , then the intersection of these bounding ellipsoids can derive a final state bounding ellipsoid with a smaller size . 
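the bounding step for the linearization remainders described above can be sketched numerically. the snippet below is only one possible realisation of such a monte-carlo bounding step: it samples the current state ellipsoid, evaluates the taylor remainder of a user-supplied function, and wraps the sampled remainders in a ball-shaped ellipsoid with a small safety margin. the sampling density, the margin and the ball-shaped (rather than general ellipsoidal) bound are assumptions; the referenced monte-carlo method can produce tighter remainder ellipsoids.

```python
# sketch of a monte-carlo bounding step for the taylor remainder
# delta(x) = f(x) - f(x_hat) - J_f(x_hat) (x - x_hat)
# over the ellipsoid {x_hat + E u : ||u|| <= 1}, where E E^T is the shape matrix.
import numpy as np

def remainder_bound(f, jac_f, x_hat, shape_matrix, n_samples=2000, margin=1.05, rng=None):
    """return (center, shape matrix) of a ball-shaped ellipsoid covering the
    sampled remainders; margin inflates the radius for unsampled points."""
    rng = np.random.default_rng() if rng is None else rng
    x_hat = np.asarray(x_hat, dtype=float)
    n = x_hat.size
    e_factor = np.linalg.cholesky(shape_matrix)
    jac, f0 = jac_f(x_hat), f(x_hat)
    deltas = []
    for _ in range(n_samples):
        u = rng.standard_normal(n)
        u *= rng.uniform(0.0, 1.0) ** (1.0 / n) / np.linalg.norm(u)  # uniform in the unit ball
        x = x_hat + e_factor @ u
        deltas.append(f(x) - f0 - jac @ (x - x_hat))
    deltas = np.asarray(deltas)
    center = 0.5 * (deltas.min(axis=0) + deltas.max(axis=0))
    radius = margin * np.max(np.linalg.norm(deltas - center, axis=1))
    return center, (radius ** 2 + 1e-12) * np.eye(deltas.shape[1])
```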
in this paper , we also consider -sensor _ distributed _ estimation fusion for the nonlinear dynamic system ( [ eqpre_1 ] ) and ( [ eqpre_2 ] ) .the problem is formulated as follows . at time , the local sensor can use the measurements to obtain the bounding ellipsoid by the single sensor recursive method .then , the local estimated ellipsoids are sent to the fusion center without communication delay for .suppose that the initial state belongs to a given bounding ellipsoid : where is the center of ellipsoid , and is the shape matrix of the ellipsoid which is a known symmetric positive - definite matrix . at time , given that belongs to a current bounding ellipsoid : where is the center of ellipsoid , and is a known symmetric positive - definite matrix . at next time , the fusion center can receive the state bounding ellipsoid of the sensor firstly , in prediction step , the goal of the fusion center is to determine a state bounding ellipsoid , i.e. , look for and such that the state belongs to whenever i ) is in , ii ) the process noise , and iii ) the remainder . secondly , in the fusion update step, we look for and such that the state belongs to whenever i ) is in , ii ) is in , .moreover , we provide a state bounding ellipsoid by minimizing its size " in prediction and update step , respectively .in this section , we discuss the centralized set - membership estimation fusion , which includes the prediction step and the fusion update step . by taking full advantage of the character of the nonlinear dynamic system and the recent optimization method proposed in for linear dynamic system, the centralized set - membership estimation fusion can be achieved by solving an sdp problem , which can be efficiently computed by interior point methods and related softwares .furthermore , the centralized set - membership information filter is derived based on the decoupling technique , which can make further to improve the computation complexity of sdp .the analytical formulae of the state prediction and estimation bounding ellipsoid at time are proposed , respectively . in the prediction step ,the state prediction bounding ellipsoid at time can be derived as follows .[ thm_1 ] at time , based on the state bounding ellipsoid , the remainder bounding ellipsoid and the noise bounding ellipsoid , the state prediction bounding ellipsoid can be obtained by solving the optimization problem in the variables , , nonnegative scalars , \label{eqpre_19 } & & ~~\mbox{subject to}~~ -\tau^u\leq0,~ -\tau^w\leq0,~ -\tau^f\leq0,\\[5 mm ] \label{eqpre_20 } & & { { \matrixfont p}}_{k+1|k}^c\succ0,\\[5 mm ] \label{eqpre_21}&&\left[\begin{array}{cc } { { \matrixfont p}}_{k+1|k}^c&\phi_{k+1|k}(\hat{{{\vectorfont x}}}_{k+1|k}^c)\\[3 mm ] ( \phi_{k+1|k}(\hat{{{\vectorfont x}}}_{k+1|k}^c))^t & ~~\xi\\ \end{array}\right]\succeq0,\end{aligned}\ ] ] where ,\\[3 mm ] \label{eqpre_23}\xi & = & \diag(1-\tau^u-\tau^w-\tau^f,\tau^u{{\matrixfont i}},\tau^w{{\matrixfont q}}_k^{-1},\tau^f{{\matrixfont i}}),\end{aligned}\ ] ] is the cholesky factorization of , i.e , , and are denoted by ( [ eqpre_6 ] ) , and is jacobian matrix . 
* proof : * see appendix .the objective function ( [ eqpre_18 ] ) is aimed at minimizing the shape matrix of the predicted ellipsoid , and the constraints ( [ eqpre_19])-([eqpre_21 ] ) ensure that the true state is contained in the the bounding ellipsoid .interestingly , if the objective function is the trace of the shape matrix of the bounding ellipsoid , then the _ analytically optimal solution _ of the optimization problem ( [ eqpre_18])-([eqpre_21 ] ) can be achieved for the sate prediction step .[ cor_1 ] if the objective function , then the analytically optimal solution for the state prediction is as follows : where is the jacobian matrix of the nonlinear state function denoted by ( [ eqpre_3 ] ) , and are the center and shape matrix of the bounding ellipsoid of the remainder denoted by ( [ eqpre_6 ] ) , respectively , and , , are the optimal solution of the decision variables , , , respectively . *proof : * see appendix . when the state equation is linear , there is no the remainder constraint of the nonlinear state equation , i.e. , , it is easy to observe that the optimum ellipsoid derived by theorem [ cor_1 ] coincides with the classical schweppe bounding ellipsoid . in the fusion update step ,the state bounding ellipsoid at time can be derived as follows .* proof : * see appendix .moreover , in order to reduce computation complexity , we can derive an explicit expression of .in lemma [ thm_2 ] , note that a suitable form of the orthogonal complement of can be chosen as follows ,\end{aligned}\ ] ] where ^t,\\ \label{eqpre_118}\psi_{22}&=&\left [ \begin{array}{ccccc } ( { { \matrixfont e}}_{k+1|k}^c)^{-1 } & 0 & 0 & \cdots & 0 \\ -{{\matrixfont j}}_{h_{k+1|k}^1 } & { { \matrixfont i } } & 0 & \vdots & 0 \\ -{{\matrixfont j}}_{h_{k+1|k}^2 } & 0 & { { \matrixfont i } } & \vdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ -{{\matrixfont j}}_{h_{k+1|k}^l } & 0 & 0 & \vdots & { { \matrixfont i}}\\ 0 & -{{\matrixfont b}}_{h_{k+1}^1}^{-1 } & 0 & \vdots & 0 \\ 0 & 0 & -{{\matrixfont b}}_{h_{k+1}^2}^{-1 } & \vdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & -{{\matrixfont b}}_{h_{k+1}^l}^{-1 } \\ \end{array } \right].\end{aligned}\ ] ] if we denote then equation ( [ eqpre_52 ] ) is equivalent to the following form by reordering of the blocks \succeq0,\\ & & \label{eqpre_121}{{\matrixfont b}}=[{{\matrixfont i}}~\underbrace{0,\ldots , 0}_{l~ blocks}],\end{aligned}\ ] ] where and 0 have compatible dimensions .moreover , the decoupled fusion update step is given in the following theorem .* proof : * see appendix . here , we call the equations ( [ eqpre_62])([eqpre_180 ] ) _ centralized set - membership information filter _ , which has following characters : * similar to the information filter , and in ( [ eqpre_62])([eqpre_180 ] ) can be taken as the update information matrix and the gain matrix provided by the -th sensor for the estimator , respectively . are the fusion weights .* is the nonlinear correction term of the state update estimation , which relies on the nonlinear measurement functions , .* when the measurement equations are linear , there are no the remainder constraints , i.e. , , it is easy to observe that the optimum ellipsoid derived by the theorem [ cor_2 ] also similar to the classical schweppe bounding ellipsoid . 
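to make the structure of the analytic prediction step more concrete, the following sketch implements the classical schweppe-type bound referred to above: the state ellipsoid is propagated through the jacobian and combined with the process-noise and remainder ellipsoids by the standard minimum-trace outer ellipsoid of a vector (minkowski) sum. the closed-form weights below play the role of the nonnegative multipliers in corollary [cor_1]; this is an illustration of the same type of bound, not a reproduction of the paper's exact formulae.

```python
# schweppe-type prediction step: propagate the state ellipsoid and take the
# minimum-trace outer ellipsoid of the sum with the noise and remainder
# ellipsoids. the weights are the standard trace-optimal choices.
import numpy as np

def minkowski_sum_trace(shapes):
    """minimum-trace shape matrix of an ellipsoid containing the vector sum
    of ellipsoids with the given shape matrices."""
    roots = [np.sqrt(np.trace(p)) for p in shapes]
    total = sum(roots)
    return sum((total / r) * p for p, r in zip(shapes, roots) if r > 0.0)

def predict_ellipsoid(f, jac_f, x_hat, p_hat, q_noise, e_remainder, b_remainder):
    """one set-membership prediction step; returns (center, shape matrix).
    q_noise is the shape matrix of the zero-centred process-noise ellipsoid and
    (e_remainder, b_remainder) is the remainder ellipsoid from the bounding step."""
    jac = jac_f(x_hat)
    center = f(x_hat) + e_remainder
    shape = minkowski_sum_trace([jac @ p_hat @ jac.T, q_noise, b_remainder])
    return center, shape
```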
if and is full - rank , then the optimization problem ( [ eqpre_49])-([eqpre_52 ] ) in lemma [ thm_2 ] is an sdp problem , the dimension of the constraint matrix ( [ eqpre_52 ] ) is and the number of decision variables is , where and are the dimensions of the state , the measurement and the number of sensors , respectively . moreover , if we use a general - purpose primal - dual interior - point algorithm to solve it , then the computation complexity of the problem is , see .therefore , in our case , the computation complexity is if , otherwise , it is . as described in , we can use a path - following interior - point method to solve ( [ eqpre_57])-([eqpre_59 ] ) in theorem [ cor_2 ] .a tedious but straightforward computation shows the practical complexity can be assumed to be , which implies an dependence on the size of the state , and dependence on the number of the sensor .therefore , for the number of sensors , the complexity of the decoupled problem ( [ eqpre_57 ] ) improves upon that of the coupled one ( [ eqpre_49 ] ) by a factor of .the centralized set membership information fusion algorithm can be summarized as follows .[ alg_1 ] + * step 1 : ( initialization step ) set and initial values such that .* step 2 : ( bounding step ) take samples from the sphere , and then determine two bounding ellipsoids to cover the remainders by ( [ eqpre_5])-([eqpre_6 ] ) . *step 3 : ( prediction step ) optimize the center and shape matrix of the state prediction ellipsoid such that by ( [ eqpre_18])-([eqpre_21 ] ) or ( [ eqpre_24])-([eqpre_25 ] ) .* step 4 : ( bounding step ) take samples from the sphere , and then determine one bounding ellipsoid to cover the remainder , , by ( [ eqpre_7])-([eqpre_8 ] ) . *step 5 : ( fusion update step ) optimize the center and shape matrix of the state estimation ellipsoid such that by solving the optimization problem ( [ eqpre_49])-([eqpre_52 ] ) or ( [ eqpre_57])-([eqpre_59 ] ) .* step 6 : set and go to step 2 .in this section , in order to reduce the computation burden of the fusion center and improve the reliability , robustness , and survivability of the fusion system , the distributed set - membership estimation fusion method is derived by fusing the state bounding ellipsoids , which are sent from the local sensors and using the character of the nonlinear state function .since the state prediction step of the distributed fusion is completely same as that of the centralized fusion , we only discuss the fusion update step of the distributed fusion .in addition , the distributed set - membership information fusion formula can also be achieved by the decoupling technique .the main results are summarized to lemma [ thm_3 ] and theorem [ cor_3 ] .the proofs are also given in appendix .[ thm_3 ] at time , based on the prediction bounding ellipsoids and the estimation bounding ellipsoids of single sensors , , the distributed state bounding ellipsoid can be obtained by solving the optimization problem in the variables , , nonnegative scalars , , \label{eqpre_82}&&~~\mbox{subject to}~~ -\tau^u\leq0,~-\tau_i^y\leq0\\[5 mm ] \label{eqpre_83 } & & -{{\matrixfont p}}_{k+1}^d\prec0,\\[5 mm ] \label{eqpre_84}&&\left[\begin{array}{cc } -{{\matrixfont p}}_{k+1}^d&\phi_{k+1}^d\\[3 mm ] ( \phi_{k+1}^d)^t & ~~-\xi-\pi\\ \end{array}\right]\preceq0,\end{aligned}\ ] ] where ,\\[3 mm ] \label{eqpre_86 } \phi_{k+1}^i&=&[\hat{{{\vectorfont x}}}_{k+1|k}^d-\hat{{{\vectorfont x}}}_{k+1}^i,{{\matrixfont e}}_{k+1|k}^d ] , \\[3 mm ] \label{eqpre_87}\pi&= & 
\sum_{i=1}^l\tau_i^y(\phi_{k+1}^i)^t({{\matrixfont p}}_{k+1}^i)^{-1}\phi_{k+1}^i,\\ \label{eqpre_88}\xi & = & \diag(1-\tau^u-\sum_{i=1}^l\tau_i^y,\tau^ui),\end{aligned}\ ] ] is the cholesky factorization of , i.e , .* proof : * see appendix . compared with the centralized fusion in lemma [ thm_2 ], it can be seen that the dimension of the constraint matrix ( [ eqpre_84 ] ) is independent of the number of the sensors and the number of decision variables is .however , the dimension of the constraint matrix ( [ eqpre_52 ] ) is , and the number of the decision variables is .therefore , the distributed fusion can decrease much more computation burden of the fusion center .note that ( [ eqpre_84 ] ) can be rewritten to \label{eqpre_0106 } ( \hat{{{\vectorfont x}}}_{k+1|k}^d-\hat{{{\vectorfont x}}}_{k+1}^d)^t&\upsilon_{11 } & \upsilon_{12}\\[3 mm ] ( { { \matrixfont e}}_{k+1|k}^d)^t&\upsilon_{12}^t & \upsilon_{22}\\ \end{array}\right]\succeq0,\end{aligned}\ ] ] where moreover , we can derive an analytical formula for the shape matrix and the center of the bounding ellipsoid as follows .the proof is similar to theorem [ cor_2 ] .we call the equations ( [ eqpre_110])([eqpre_111 ] ) _ distributed set - membership information filter_. in ( [ eqpre_110])([eqpre_111 ] ) , and can be taken as the update information matrix and the gain matrix provided by the -th sensor for the estimator , respectively , and are the fusion weights .the distributed set membership information fusion algorithm can be summarized as follows .[ alg_2 ] + * step 1 : ( initialization step ) set and initial values such that .* step 2 : ( bounding step ) take samples from the sphere , and then determine a bounding ellipsoid to cover the remainders by ( [ eqpre_5])-([eqpre_6 ] ) . *step 3 : ( prediction step ) optimize the center and shape matrix of the state prediction ellipsoid such that by solving the optimization problem ( [ eqpre_18])-([eqpre_21 ] ) or ( [ eqpre_24])-([eqpre_25 ] ) .* step 4 : ( fusion update step ) optimize the center and shape matrix of the state estimation ellipsoid such that by solving the optimization problem ( [ eqpre_81])-([eqpre_84 ] ) or ( [ eqpre_107])-([eqpre_109 ] ) based on the state prediction bounding ellipsoids and bounding ellipsoids of single sensors , .* step 5 : set and go to step 2 . 
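the fusion update of algorithm [alg_2] combines the prediction ellipsoid with the local estimate ellipsoids through their intersection. the sketch below does not reproduce the paper's optimal information-filter formulae; it implements the standard weighted-intersection outer bound, which has the same structure: the fused inverse shape matrix is a weighted combination of the local inverse shape matrices (the "information" of each sensor), and the fused center is the corresponding weighted combination of the local centers. the weights are inputs here; choosing them to minimize the trace, as in the paper, would require a small search or an sdp.

```python
# weighted-intersection outer bound: an ellipsoid containing the intersection
# of ellipsoids {x : (x - c_i)^T P_i^{-1} (x - c_i) <= 1}, for nonnegative
# weights summing to one. this is a standard bound, not the optimized fusion
# rule of the paper.
import numpy as np

def intersect_ellipsoids(centers, shapes, weights):
    infos = [np.linalg.inv(p) for p in shapes]              # local information matrices
    x_info = sum(w * m for w, m in zip(weights, infos))     # fused information matrix
    rhs = sum(w * m @ np.asarray(c, float) for w, m, c in zip(weights, infos, centers))
    c_fused = np.linalg.solve(x_info, rhs)
    delta = sum(w * np.asarray(c, float) @ m @ np.asarray(c, float)
                for w, m, c in zip(weights, infos, centers)) - c_fused @ x_info @ c_fused
    scale = max(1.0 - delta, 1e-12)                         # guards against empty intersections
    return c_fused, scale * np.linalg.inv(x_info)

# usage: fuse the prediction ellipsoid with two local estimates using equal weights
#   c_fused, p_fused = intersect_ellipsoids([c_pred, c_1, c_2],
#                                           [p_pred, p_1, p_2],
#                                           [1.0 / 3, 1.0 / 3, 1.0 / 3])
```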
in target tracking ,whether it is the distributed fusion or the centralized fusion , if the measurement only contain range and angle , the boundary sampling method can be used to drive the bounding ellipsoid of the remainders with less computation complexity .therefore , the bounding steps of algorithm [ alg_1 ] and algorithm [ alg_2 ] can be computed efficiently .finally , the set - membership information fusion formulae are summarized in table [ tab_1 ] ..set - membership information fusion formulae [ cols="^,^,^",options="header " , ] [ tab_1 ] as far as multi - algorithm fusion for nonlinear dynamic systems is concerned , the multiple bounding ellipsoids can be constructed to minimize the size of the state bounding ellipsoid by complementary advantages of multiple parallel algorithms .specifically , one can use multiple parallel algorithm [ alg_1 ] or [ alg_2 ] with differently weighted objectives in ( [ eqpre_170 ] ) , where the larger emphasizes the entry of the estimated state vector , then the intersection of these bounding ellipsoids can achieve a tighter bounding ellipsoid that containing the true state in fusion center .in this section , we provide an example to compare the performance of the centralized fusion with that of the distributed fusion .moreover , we also use the multi - algorithm fusion to further reduce the estimation error bound based on the different weighted objective ( [ eqpre_170 ] ) . consider a common tracking system with bounding noise and there are two sensors track a same target in different position .the state contain position and velocity of and directions . here, the dynamic system equations is as follows : {{\vectorfont x}}_k+{{\vectorfont w}}_k,\\[3 mm ] \label{eqpre_115}{{\vectorfont y}}_k^i&=&\left [ \begin{array}{c } \sqrt{({{\vectorfont x}}_k(1)-{{\vectorfont z}}_k^i(1))^2+({{\vectorfont x}}_k(2)-{{\vectorfont z}}_k^i(2))^2 } \\[3 mm ] arctan\left(\frac{{{\vectorfont x}}_k(2)-{{\vectorfont z}}_k^i(2)}{{{\vectorfont x}}_k(1)-{{\vectorfont z}}_k^i(1)}\right ) \\\end{array } \right]+{{\vectorfont v}}_k^i,\\ \nonumber & & ~for~ i=1,2.\end{aligned}\ ] ] where is the time sampling interval with .^t ] and ^t ] , ,\end{aligned}\ ] ] respectively . in order to simulate the performance of the center fusion and distributed fusion, we assume the process noise measurement noise are truncated gaussian with zeros mean and covariance and on the ellipsoidal sets , respectively . from the description of the above, we can use sensor 1 ( smf1 ) , sensor 2 ( smf2 ) , the centralized fusion ( csmf ) and distributed fusion ( dsmf ) to calculate the error bound with ] , ] , $ ] , where the * error bound * of the entry of the state can be calculated by projecting the ellipsoid along the output direction .the following simulation results are under matlab r2012a with yalmip. figs .[ fig_01]-[fig_04 ] present a comparison of the error bounds along position and velocity direction for sensors 1 , 2 using algorithm [ alg_1 ] ( l=1 ) and for the fusion center using the centralized fusion algorithm [ alg_1 ] ( l=2 ) and the distributed fusion algorithm [ alg_2 ] ( l=2 ) and the multi - algorithm fusion , respectively . from figs .[ fig_01]-[fig_04 ] , we can observe the following phenomenon : * the performance of the centralized fusion and the distributed fusion is better than that of sensors . 
*the performance of the centralized fusion is better than that of the distributed fusion along and position direction in figs .[ fig_01]-[fig_02 ] , but the distributed fusion performs slightly better than centralized fusion along and velocity direction in figs .[ fig_03]-[fig_04 ] .the reasons may be that the optimal bounding ellipsoid can not be obtained for the nonlinear dynamic system , and the error bound of the state vector is calculated by minimizing trace of the shape matrix of the bounding state ellipsoid rather than minimizing the error bounds along position and velocity directions , respectively . *the performance of the multi - algorithm fusion is significantly better than that of the other methods along position and velocity direction . since it extract the useful information of each entry of the state vector by the differently weighted objectives .then the intersection fusion of these estimation ellipsoids can sufficiently take advantage of the information of each sensor , which yields a tighter state bounding ellipsoidal . to 6cmto to 6cmto to 6cmto to 6cmtothis paper has derived the centralized and distributed set - membership information fusion algorithms for multisensor nonlinear dynamic system via minimizing state bounding ellipsoid .firstly , both of them can be converted into an sdp problem which can be efficiently computed , respectively .secondly , their analytical solutions can be derived surprisingly by using decoupling technique .it is very interesting that they are quite similar in form to the classic information filter in mse sense . in the two analytical fusion formulae, the information of each sensor can be clearly characterized , and the knowledge of the correlation among measurement noises across sensors are not required .finally , multi - algorithm fusion has been used to minimize the size of the state bounding ellipsoid by complementary advantages of multiple parallel algorithms . a typical example in target trackinghas showed that multi - algorithm fusion performs better than both the centralized and distributed fusion .future work will include , in multisensor nonlinear dynamic system setting , multiple target tracking , sensor management and heterogeneous sensor fusion .[ lem_1] let , be quadratic functions in variable with .then the implication holds if there exist such that [ lem_2 ] schur complements : given constant matrices , , , where and , then if and only if \preceq0\end{aligned}\ ] ] or equivalently \preceq0\end{aligned}\ ] ] [ lem_3 ] decoupling : let be matrices of appropriate size , with square and symmetric . the problem ( in variable ) \succeq 0\end{aligned}\ ] ] is feasible if and only if \succeq 0.\end{aligned}\ ] ] in this case , problem ( [ eqpre_30])is equivalent to the problem ( in variable only ) \succeq 0.\end{aligned}\ ] ] moreover , if the problem ( [ eqpre_32 ] ) is feasible , which means that suppose the objective function is either the trace function or log - det function , then whenever .thus , ( [ eqpre_30 ] ) admits a unique optimal variable given by , where is the pseudo - inverse of . :note that is equivalent to , , where is a cholesky factorization of . by the nonlinear state equations( [ eqpre_1 ] ) and ( [ eqpre_3 ] ) , if we denote by ^t,\end{aligned}\ ] ] then ( [ eqpre_33 ] ) can be rewritten as where is denoted by ( [ eqpre_22 ] ) . 
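The proofs in this appendix repeatedly invoke the Schur complement lemma to convert quadratic matrix conditions into linear matrix inequalities. As a quick numerical sanity check, separate from the paper's derivations, the following sketch verifies on random instances that the block matrix [[A, B],[B^T, C]] is positive semidefinite exactly when C is positive definite and A - B C^{-1} B^T is positive semidefinite. The matrix sizes and the random-instance construction are illustrative choices.

```python
# Numerical check of the Schur complement criterion used in the appendix:
# [[A, B], [B^T, C]] >= 0  iff  C > 0 and A - B C^{-1} B^T >= 0.
import numpy as np

rng = np.random.default_rng(2)

def is_psd(M, tol=1e-9):
    return np.min(np.linalg.eigvalsh((M + M.T) / 2)) >= -tol

for trial in range(1000):
    n, m = 3, 2
    B = rng.normal(size=(n, m))
    C = rng.normal(size=(m, m))
    C = C @ C.T + 0.1 * np.eye(m)          # C positive definite
    A = rng.normal(size=(n, n))
    A = (A + A.T) / 2
    block = np.block([[A, B], [B.T, C]])
    schur = A - B @ np.linalg.solve(C, B.T)
    assert is_psd(block) == is_psd(schur), "Schur complement criterion violated"
print("Schur complement criterion confirmed on 1000 random instances")
```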
moreover , the condition that , whenever , i ) , ii ) the process noise , iii ) the high - order remainders of state function , which are equivalent to whenever the equations ( [ eqpre_37])([eqpre_39 ] ) are equivalent to where and are matrices with compatible dimensions . from lemma [ lem_1 ] , a sufficient condition such that the inequalities ( [ eqpre_40])-([eqpre_42 ] ) imply ( [ eqpre_36 ] ) to holdis that there exist nonnegative scalars , such that furthermore , ( [ eqpre_43 ] ) is written in the following compact form : where is denoted by ( [ eqpre_23 ] ) . applying lemma [ lem_2 ] , ( [ eqpre_44 ] ) is equivalent to \succeq0\\ \label{eqpre_46}&&{{\matrixfont p}}_{k+1|k}^c\succ0.\end{aligned}\ ] ] therefore , if , satisfy ( [ eqpre_45 ] ) , then the state belongs to , whenever , i ) is in , ii ) the process noise , iii ) the high - order remainders of state function . : if we partition the left side of ( [ eqpre_21 ] ) by appropriate block , then it can be rewritten as \succeq 0,\end{aligned}\ ] ] where ,\\ \nonumber{{\matrixfont x}}_{11}&=&1-\tau^u-\tau^w-\tau^f,\\ \nonumber{{\matrixfont x}}_{22}&=&\diag(\tau^u{{\matrixfont i}},\tau^w{{\matrixfont q}}_k^{-1},\tau^f{{\matrixfont i}}),\\ \nonumber{{\matrixfont x}}_{12}&=&0.\end{aligned}\ ] ] based on the decoupling technique in lemma [ lem_3 ] , the above matrix inequality is feasible if and only if \succeq 0.\end{aligned}\ ] ] from the expression of , it is also equivalent to thus , the optimization problem of lemma [ thm_1 ] which , by lemma [ lem_3 ] , is equivalent to it is easy to see that is nonsingular according to ( [ eqpre_48 ] ) , then , the above optimization problem is equivalent to where .therefore , based on lagrange dual function , the analytically optimal solution can be obtained in ( [ eqpre_24])-([eqpre_28 ] ) . :note that we have get in prediction step , which is equivalent to , , where is a cholesky factorization of , then , and by the nonlinear measurement equations ( [ eqpre_2 ] ) and ( [ eqpre_4 ] ) if we denote by ^t,\end{aligned}\ ] ] then ( [ eqpre_63 ] ) and ( [ eqpre_64 ] ) can be rewritten as where and are denoted by ( [ eqpre_53 ] ) and ( [ eqpre_54 ] ) , respectively .moreover , the condition that whenever i ) is in ii ) measurement noises are bounded in ellipsoidal sets , i.e. , , iii ) the high - order remainders of measurement function , , , which are equivalent to whenever the equations ( [ eqpre_69])([eqpre_71 ] ) are equivalent to where and are matrices with compatible dimensions . by -procedure lemma [ lem_1 ] and ( [ eqpre_67 ] ) ,a sufficient condition such that the inequalities ( [ eqpre_72])-([eqpre_74 ] ) imply ( [ eqpre_68 ] ) to hold is that there exist scalars and nonnegative scalars , such that furthermore , ( [ eqpre_75 ] ) is written in the following compact form : where and are denoted by ( [ eqpre_56 ] ) and ( [ eqpre_55 ] ) , respectively .if we denote is the orthogonal complement of , then ( [ eqpre_76 ] ) is equivalent to using schur complements lemma [ lem_2 ] , ( [ eqpre_77 ] ) is equivalent to \preceq0.\\ \label{eqpre_79}&&-{{\matrixfont p}}_{k+1}^c\prec0.\end{aligned}\ ] ] therefore , if , satisfy ( [ eqpre_78])-([eqpre_79 ] ) , then the state belongs to , whenever i ) is in ii ) measurement noises are bounded in ellipsoidal sets , i.e. , , iii ) the high - order remainders of measurement function , , . 
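Several of the proofs above and below rely on the device that membership x in E(c, P) is equivalent to x = c + E u with ||u|| <= 1, where E is a Cholesky factor of the shape matrix P. A short sketch of this parameterization, with an arbitrary example ellipsoid assumed only for illustration:

```python
# The ellipsoid parameterization used throughout the proofs:
# x in E(c, P) = {x : (x - c)^T P^{-1} (x - c) <= 1}  corresponds to
# x = c + E u with ||u|| <= 1, where P = E E^T is a Cholesky factorization.
import numpy as np

rng = np.random.default_rng(3)
P = np.array([[4.0, 1.0], [1.0, 2.0]])   # assumed shape matrix (positive definite)
c = np.array([1.0, -1.0])                # assumed center
E = np.linalg.cholesky(P)                # P = E E^T

for _ in range(10000):
    u = rng.normal(size=2)
    u *= rng.uniform() ** 0.5 / np.linalg.norm(u)   # random direction, ||u|| <= 1
    x = c + E @ u
    assert (x - c) @ np.linalg.solve(P, x - c) <= 1 + 1e-12
print("every x = c + E u with ||u|| <= 1 lies in E(c, P)")
```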
: in view of the optimization problem in lemma [ thm_2 ] , we can apply lemma [ lem_3 ] to the linear matrix inequalities ( [ eqpre_80 ] ) , with , and the rest of matrices defined appropriately .thus , the problem which is equivalent to where , . if one of , , is zero , then the feasible sets of and become smaller from ( [ eqpre_76 ] ) , and the objective value becomes larger .thus , the optimal , should be greater than zero , and be nonsingular .if is the optimal value of the above optimization problem , then , by using lemma [ lem_3 ] again , the optimal ellipsoid is given by based on ( [ eqpre_201 ] ) and , we retrieve the center of the ellipsoid as by the definition of and in ( [ eqpre_118 ] ) and ( [ eqpre_120 ] ) , \ ] ] then (\psi_{22}^t\xi_{22}\psi_{22})^{-1}[{{\matrixfont i}}~\underbrace{0,\ldots , 0}_{l~ blocks}]^t\\ \nonumber & = & \left(\tau^u{{\matrixfont p}}_{k+1|k}^{c^{-1}}+\sum_{i=1}^l{{{\matrixfont j}}_{h_{k+1|k}^i}^t}(\frac{{{\matrixfont r}}_{k+1}^{i}}{\tau_i^v}+\frac{{{\matrixfont p}}_{h_{k+1}^i}}{\tau_i^h})^{-1 } { { \matrixfont j}}_{h_{k+1|k}^i}\right)^{-1}.\end{aligned}\ ] ] thus , ( [ eqpre_62 ] ) can be obtained by ( [ eqpre_200 ] ) . moreover , substituting ( [ eqpre_122 ] ) , ( [ eqpre_118 ] ) and ( [ eqpre_120 ] ) into ( [ eqpre_202 ] ) , then ( [ eqpre_180 ] ) can be achieved . :note that is equivalent to , , where is a cholesky factorization of , then if we denote by ^t,\end{aligned}\ ] ] then ( [ eqpre_89 ] ) can be rewritten as where is denoted by ( [ eqpre_85 ] ) .similarly , we have where is denoted by ( [ eqpre_86 ] ) .moreover , the condition that , whenever , i ) is in , ii) , for , is equivalent to whenever , for , the equations ( [ eqpre_94])([eqpre_97 ] ) are equivalent to \xi&\leq & 0,\end{aligned}\ ] ] where and are matrices with compatible dimensions . by -procedure lemma [ lem_1 ] , a sufficient condition such that the inequalities ( [ eqpre_98])-([eqpre_101 ] ) imply ( [ eqpre_93 ] ) to holdis that there exist nonnegative scalars , , such that \preceq0\end{aligned}\ ] ] furthermore , ( [ eqpre_102 ] ) is written in the following compact form : where and are denoted by ( [ eqpre_87 ] ) and ( [ eqpre_88 ] ) , respectively .therefore , if , satisfy ( [ eqpre_104])-([eqpre_105 ] ) , then the state belongs to , whenever , i ) is in , ii ) belongs to , for .m. liggins , c. y. chong , i. kadar , m. g. alford , v. vannicola , and s. thomopoulos , `` distributed fusion architectures and algorithms for target tracking , '' _ proceeding of ieee _ , vol .85 , pp . 95107 , january 1997 .a. vempaty , y. s. han , and p. k. varshney , `` target localization in wireless sensor networks using error correcting codes , '' _ ieee transaction on information theory _ ,vol . 60 , pp . 697712 , january 2014 .x. shen , y. zhu , e. song , and y. luo , `` minimizing euclidian state estimation error for linear uncertain dynamic systems based on multisensor and multi - algorithm fusion , '' _ ieee transactions on information theroy _ , vol .57 , pp .71317146 , october 2011 .
The set-membership information fusion problem is investigated for general multisensor nonlinear dynamic systems. Compared with linear dynamic systems and with point estimation fusion in the mean squared error sense, it is a more challenging nonconvex optimization problem. Usually, to solve this problem, one tries to find an efficient or heuristic fusion algorithm. There is no doubt that an analytical fusion formula would be of great significance for raising accuracy and reducing the computational burden. However, since the problem is more complicated than the convex quadratic optimization problem for linear point estimation fusion, the analytical fusion formula is not easy to obtain. To overcome this difficulty, two popular fusion architectures are considered: centralized and distributed set-membership information fusion. Firstly, both of them can be converted into semidefinite programming problems which can be efficiently computed. Secondly, their analytical solutions can, surprisingly, be derived by using a decoupling technique. It is very interesting that they are quite similar in form to the classic information filter. In the two analytical fusion formulae, the information of each sensor can be clearly characterized, and knowledge of the correlation among measurement noises across sensors is not required. Finally, multi-algorithm fusion is used to minimize the size of the state bounding ellipsoid by exploiting the complementary advantages of multiple parallel algorithms. A typical numerical example in target tracking demonstrates the effectiveness of the centralized, distributed, and multi-algorithm set-membership fusion algorithms. In particular, it shows that multi-algorithm fusion performs better than the centralized and distributed fusion. *Keywords:* nonlinear dynamic systems, multisensor fusion, target tracking, unknown but bounded noise, set-membership filter
the genomes of a number of organisms are already known and more are being completed at a rate of perhaps one a month .once a genome has been sequenced the amino - acid sequences of _ all _ its proteins are known .the complete set of proteins of an organism is generally referred to as its proteome .here we use genome data for _ e. coli _ to systematically estimate the change in the interactions of all the proteins of this bacterium when the salt concentration is varied ._ e. coli _ can grow in environments with a very wide range of salt concentrations , and so its proteins must function _ in vivo _ over a wide range of salt concentrations .the potassium ion concentration inside the cell can vary from approximately to mm ; potassium is the predominant cation in living cells .clearly , the proteins must remain soluble over this range , and they should bind to the other proteins which they are required to bind to in order to function , but they should not interact strongly with other proteins .the study of a proteome is often called proteomics . here , as the physical properties of proteins are studied ( as opposed to their chemical properties such as catalytic function )we are doing what may be called physical proteomics .there has been extensive theoretical work on the salt dependence of the interactions in individual proteins , particularly for the protein lysozyme .see refs . for corresponding experimental work. however , as far as the author is aware , this is the first attempt to characterise the interactions of _ all _ the proteins of an organism .we will consider the proteins separately , i.e. , as single component solutions .of course inside a bacterium the proteins exist as a mixture of thousands of components .future work will consider multi - component mixtures of the proteins .we have chosen _e. coli _ as it is a bacterium , and therefore a relatively simple organism , and as it has been extensively studied .however , the distribution of charges on the proteins of almost all organisms is very similar and so our results apply to almost all organisms , including _ h.sapiens_. the only exceptions are some extremophiles . in the next section we use genome data to estimate the charges on the proteins of _e. coli_. this data is used in the third section where we calculate the variation in their second virial coefficients as the salt concentration is varied .the last section is a conclusion ._ e. coli k-12 _ has a proteome of 4358 proteins .the amino acid sequences of all of them are known from the sequencing of its genome ._ k-12 _ is the name of a strain of _e. coli_. runcong and mitaku have analysed the charge distributions of a number of organisms using a simple approximate method of estimating the charge on a protein at neutral ph from its amino - acid sequence .we will follow their analysis but use a slightly different approximation for the charge on a protein with a given amino - acid sequence . 
of the 20 amino acids , 5 have pk values suchthat they should be at least partially charged at neutral ph .these are two highly acidic amino acids , aspartic acid and glutamic acid , two highly basic amino acids , lysine and arginine and one somewhat basic amino acid , histidine .aspartic and glutamic acids have pk s far below 7 and lysine and arginine have pk s far above 7 and so we assume that all 4 of these amino acids are fully charged at neutral ph .aspartic and glutamic acids then each contribute to the charge on a protein , and lysine and arginine each contribute .histidine has a pk of around 6 - 6.5 ( this will depend on the environment of the amino acid ) .the equation for the fraction of a basic group such as histidine that is charged at a given ph is where pk is the pk value for the basic group .this equation is just the henderson - hasselbalch equation rearranged .taking pk and at ph=7 we have that the fraction of histidines charged is . as this is smallwe assume for simplicity that all the histidine amino acids are uncharged .thus , with these assumptions for the charges on these 5 amino acids , our estimate for the net charge on a protein is simply given by where , , and are the the protein s total numbers of lysines , arginines , aspartic acids and glutamic acids , respectively .the subscripts , etc .correspond to the standard single letter codes for the amino acids .the charge is in units of where is the elementary charge .note that runcong and mitaku assume that the histidine amino acids contribute to the charge , that is the only difference between our analysis and that of runcong and mitaku .as the histidine amino acid is quite a rare amino acid , approximately 1 in 50 amino acids is a histidine , the difference between the results we obtain and those of runcong and mitaku is not large but our charges are shifted to more negative values . using runcong and mitaku s approximation the mean charge on a protein is units more positive than the mean charge we find here . as a check on our algorithm , we can compare the prediction of equation ( [ qdef ] ) for chicken lysozyme to that of a titration experiment to determine the charge .equation ( [ qdef ] ) predicts that chicken lysozyme has a net charge of 8 at neutral ph .titration experiments on lysozyme give a titratable charge of close to at ph=7 .using equation ( [ qdef ] ) we can obtain estimates for the charges of all 4358 proteins of _ e. coli _ .the results are shown in fig .[ qcoli ] , where we have plotted the number of proteins as a function of net charge .the distribution is centered almost at a net charge , and for not - too - large the distribution is roughly symmetric and gaussian .the mean charge is .given the approximate nature of our equation for the charge on a protein , equation ( [ qdef ] ) , the data is probably consistent with a mean charge of 0 .the approximation scheme of mitaku and runcong yields a mean charge of .also , although when is not too large the distribution can be seen to be reasonably symmetric , _ e. coli _ has 12 proteins with charges but none with charges . excluding proteins with very large charges , , the root mean square charge equals .a number of other organisms , both other bacteria and eukaryotes such as yeast , have had the charge distribution on their proteomes determined by runcong and mitaku and by the author .almost all of them have a roughly gaussian distribution centered approximately at zero , like the distribution in fig . 
[ qcoli ] .the exceptions are some extremophiles .extremophiles are organisms that live in extreme environments , for example _ halobacterium sp ._ lives in environments with very high levels of salt .the cytosol of _ halobacterium sp . _ contains much higher levels of potassium ions than do other organisms so perhaps it is not a surprise that the distribution of charges on its proteins is different .we have fitted the gaussian function to the data for the number of proteins as a function of their charge .it is drawn as the solid curve in fig .[ qcoli ] .the fit parameters are mean charge and standard deviation .1739 is and so the distribution is normalised so that its integral gives the total number of proteins . within a couple of standard deviations of the mean the gaussian function fits the data well but it underestimates the numbers of proteins with charges such that is several times the standard deviation .we also note that there is a correlation between the net charge on a protein and its size , measured by the number of amino acids .figure [ scat_coli ] is a scatter plot of charge and number of amino acids for the proteins of _e. coli_. although at any particular size there is a wide distribution of charges , on average the more highly charged proteins are larger than average .we expect the volume of a protein to scale with .consider a dilute solution of a single one of the proteins of _e. coli_. apart from water , the only other constituents are a 1:1 salt at a concentration and a buffer which controls the ph while making a negligible contribution to the ionic strength . herewe will always assume the ph=7 but other ph s can be considered if the net charges on the proteins can be calculated .also , the counterions of the protein are assumed the same as either the anions or cations of the salt , depending on the sign of .the interactions between the protein molecules in the salt solution can be characterised by means of the protein s second virial coefficient : a function of temperature , ph and salt concentration .proteins are complex molecules and we are unable to calculate from first principles the absolute value of for any of the 4358 proteins possessed by _e. coli_. however , predicting the change in the second virial coefficient when the salt concentration varies is a much easier problem , _ if _ we assume that changing the salt concentration changes only the direct electrostatic interaction between the net charges of a protein .this is a strong assumption but studies of the simple protein lysozyme have shown that the variation of its second virial coefficient can be described using a simple model which only includes its net charge .here we will follow warren and apply his analysis of lysozyme to the complete set of proteins of _e. coli_. we will discuss which proteins are likely to be less well described by this theory than is lysozyme . a protein molecule of charge surrounded by its counterions and as the concentration of the protein is increased so is the counterion density .this increase in the counterion density decreases the translational entropy of the counterions and this contributes a positive amount to the second virial coefficient .see warren and references therein for details . 
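The charge estimate described above depends only on counting four amino-acid types in the sequence, and the charged fraction of a basic residue such as histidine follows from the Henderson-Hasselbalch relation. A small sketch; the example sequence below is a made-up illustration, not a real E. coli protein, and the histidine pK is an assumed value within the range quoted in the text.

```python
# Estimate the net charge of a protein at pH 7 from its amino-acid sequence:
# Q = (n_K + n_R) - (n_D + n_E), in units of the elementary charge.
# Histidine is treated as uncharged; its charged fraction at a given pH
# follows the Henderson-Hasselbalch form f = 1 / (1 + 10^(pH - pK)).
def net_charge(seq: str) -> int:
    seq = seq.upper()
    return (seq.count("K") + seq.count("R")) - (seq.count("D") + seq.count("E"))

def basic_charged_fraction(pH: float, pK: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (pH - pK))

# Illustrative (made-up) sequence, not a real protein:
example = "MKRDEHAKKLEDRGST"
print("net charge:", net_charge(example))   # (3 K + 2 R) - (2 D + 2 E) = 1
print("His charged fraction at pH 7, pK 6.3:",
      round(basic_charged_fraction(7.0, 6.3), 3))
```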
has the form where is an assumed constant term due to excluded volume interactions and other interactions which are insensitive to salt concentration .the second term is from the counterions and the salt .it is quadratic in the charge and so of course is zero for uncharged proteins and is independent of the sign of the net charge on a protein .as stated above we are unable to calculate the absolute value of ; the values of of the proteins are unknown .however , we can calculate the difference in when the salt concentration is changed from to .it is this is easy to calculate for any protein and in fig .[ db2 ] we have plotted the number of proteins as a function of the change in their second virial coefficient , , when the salt concentration is decreased from 1 m to m .the results are given in units of nm .for comparison the volume of a typical bacterial protein is about 60nm and so if a protein were to interact solely via a hard repulsion it would have a second virial coefficient of about 4 times its volume or about 240nm .results for proteins with are shown .proteins with larger titratable charges are likely to have an effective charge lower than , see refs . and references therein . from the linear poisson - boltzmann equation , the potential ( divided by ) at the surface of a spherical particle with charge andradius is . is the bjerrum length , and and is the debye screening length , given by . for the dielectric constant of water 80 times that in vacuum and at room temperature , nm . globular proteins are approximately spherical and typically have radii around 2 to 4 nm .taking a protein with a radius of 3 nm , in salt at a concentration m , we have that for , the potential at the surface is about .larger charges correspond to larger surface potentials and these large potentials bind oppositely charged ions to the surface reducing the effective charge . on average, this effect will be diminished to a certain extent by the fact that the most highly charged proteins are larger than average .see fig .[ scat_coli ] , where it is clear that the charge and size of a protein are correlated .recent simulations by lobaskin __ of spheres with radius 2 nm and charge in the absence of salt found an effective charge of a little under .thus we restrict ourselves to proteins with charges of magnitude less than or equal to 30 .4300 of the 4358 proteins , or almost 99% , have charges in this range .the mean change in of these 4300 proteins when the salt concentration is decreased from 1 to m is 139nm and the standard deviation is 218nm . a couple of caveats .the first is that the effect of salt on protein solutions is known to depend not only on whether the salt is a 1:1 salt , a 1:2 salt etc . 
but also to the nature of ions , whether it is mg or ca for example .our generic theory applies only where there are no specific interactions between the salt and the protein .there is good agreement between experiment and theory for lysozyme plus nacl and so we may hope that it applies to nacl and many proteins but it clearly misses potentially important effects for other salts where there are specific protein - salt interactions .the second is that proteins are not simple charged spheres , for example some have large dipole moments .dipoles exert net attractions which are screened and hence weakened by added salt .thus proteins with a small charges but large dipole moments are poorly described by the current theory : if the dipole interactions are dominant then the second virial coefficient may even increase when the salt concentration is increased . discuss this point . note that although we can estimate the charge on a protein from its amino - acid sequence we can not estimate its dipole moment without knowing its three - dimensional structure , and so the sequence data from genomics is not adequate to determine dipole moments .here we have shown how data from genomics can be used to estimate the charges on the proteins of an organism .we then used these charges to estimate the changes in the second virial coefficients of 4300 ( 99% ) of the proteins of _ e. coli _ when the salt concentration is changed . note that _e. coli _ can survive and multiply in external environments with a very wide range of salt concentrations ; cayley _ et al . _ studied the growth of _e. coli _ in environments with salt concentrations ranging from very low to molar , corresponding to potassium ion concentrations inside the cell of to mm .thus , studying the change in interactions of proteins with salt concentration is of direct relevance to the _ in vivo _ behaviour of proteins . within molecular biologythere is a clear shift of emphasis away from studying the proteins of an organism one or a few at a time , and towards determining the structure and function of large sets of proteins , in particular proteomes .the systematic study of these large sets of proteins is often called proteomics .this work is a first attempt to keep up with this shift by performing a simple theoretical calculation of a solution phase physical property for a complete proteome , rather than for one or a handful of proteins as is usually done .it may be termed physical proteomics ..the charged macromolecular species in the cytosol of _ e. coli_. the data is from neidhardt . is the charge on a macromolecule . for proteinsthe range given is the mean plus and minus twice the standard deviation . is the total number of molecules of a species per cell . the volume of an _e. coli _ cell is about so 1 molecule per cell corresponds to a concentration of about molar .a prokaryote ribosome consists of about 4500 bases of rna plus protein .the ribosomal proteins are mostly quite stongly positively charged and so will decrease the net negative charge . as a rough estimate we settle on a net charge of .the charge on a trna molecule is around , from each of its bases .the charge on a mrna molecule is on average around .the charge on dna is equal to twice the number of base pairs , 4,639,221 for _ e. 
coli k-12 _ .the charge density from the proteins assumes that the proteins have the mean charge of that they would have if their density and charge were uncorrelated .[ t2 ] [ cols="^,^,^,^",options="header " , ] future work could consider mixtures of proteins , ultimately aiming to understand the cytosol of a living cell , which is a mixture of of order different types of proteins as well as dna , rna , ions like atp and potassium , etc .. this is of course very complex but _ if _ in the cytosol the proteins of _ e. coli _ are present in amounts which are uncorrelated with their net charge , the mean charge of the proteins will be close to the mean of the distribution of fig .[ qcoli ] .this is quite small .neidhardt has taken an inventory of the species inside _ e. coli _ , and the results for charged macromolecules are shown in table [ t2 ] .the charged macromolecules in a cell are protein , dna and the various forms of rna : transfer rna ( trna ) , messenger rna ( mrna ) and the rna in ribosomes ( rrna ) .see refs . for an introduction to the proteins , dna and rna .although for every molecule of trna molecule there are 10 of protein , for every ribosome there are 100 molecules of protein , and for every mrna there are 1000 molecules of protein , the contributions of the trna , ribosomes , dna and proteins , to the overall charge density of the macromolecules are very roughly comparable . the ribosomes contribute the largest amount . the macromoleculesare negatively charged and this negative charge is balanced by potassium ions .thus the cytosol resembles a solution of a negatively charged polyelectrolyte , except that there is not one , relatively simple , macromolecular species , but thousands of rather complex and diverse species of macromolecules .it is a pleasure to acknowledge discussions with j. cuesta , d. frenkel and p. warren .a brief introduction to the biological nomenclature : an organism s dna contains many genes , each of which codes for a protein .the complete set of genes is called the organism s genome and we will refer to the complete set of proteins as its proteome .some authors use the word proteome somewhat differently , they use it to denote the set of proteins present in the cytosol of an organism at a particular time . see for example for a more detailed definition of a genome .the complete proteome of _e. coli k-12 _ ,i.e. , the amino - acid sequences of all its proteins , can be downloaded from databases such as that at the european bioinformatics institute ( http://www.ebi.ac.uk/proteome ) ._ e. coli k-12 _ was sequenced by blattner __ .f. c. neidhardt , chemical composition of _e. coli _ , in _ _ e .coli _ and _ s. typhimurium _ : cellular and molecular biology _ edited by f. .c .neidhardt j. l. ingraham , k. b. low , b. magasanik and h. e. umbarger ( american society for microbiology , washington d. c. , 1987 ) .
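Returning to the salt-dependence estimates at the heart of this article: the explicit expressions in the text did not survive extraction, so the sketch below uses standard forms consistent with the description rather than formulas read off this document. These assumptions are the Donnan-type counterion/salt term B2 = b2 + q^2/(4 rho_s) for a 1:1 salt of number density rho_s, the screening parameter kappa^2 = 8 pi lambda_B rho_s, and the linearized surface potential q lambda_B / (a (1 + kappa a)) in units of k_B T / e. The Bjerrum length, protein charge and radius, and salt concentrations are illustrative values.

```python
# Salt dependence of protein-protein interactions: Debye screening length,
# linearized surface potential, and the Donnan-type change in the second
# virial coefficient when the 1:1 salt concentration is lowered.
# The formulas below are standard forms assumed to match the (garbled)
# expressions in the text; all numerical inputs are illustrative.
import numpy as np

N_A = 6.022e23
lambda_B = 0.71          # Bjerrum length in water at room temperature [nm] (assumed)

def number_density(c_molar):
    """1:1 salt concentration in mol/L -> number density in nm^-3."""
    return c_molar * N_A * 1e-24

def debye_length(c_molar):
    rho = number_density(c_molar)
    return 1.0 / np.sqrt(8 * np.pi * lambda_B * rho)          # [nm]

def surface_potential(q, radius, c_molar):
    """Linearized PB surface potential in units of kT/e for charge q, radius [nm]."""
    kappa = 1.0 / debye_length(c_molar)
    return q * lambda_B / (radius * (1 + kappa * radius))

def delta_B2(q, c_low, c_high):
    """Donnan-type change in B2 [nm^3] when salt drops from c_high to c_low [M]."""
    return (q ** 2 / 4.0) * (1.0 / number_density(c_low) - 1.0 / number_density(c_high))

q, radius = 10, 3.0                      # assumed protein charge and radius [nm]
print("Debye length at 0.15 M:", round(debye_length(0.15), 2), "nm")
print("surface potential (kT/e):", round(surface_potential(q, radius, 0.15), 2))
print("Delta B2 (1 M -> 0.15 M):", round(delta_B2(q, 0.15, 1.0), 1), "nm^3")
```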
Bacteria typically have a few thousand different proteins. The number of proteins with a given charge is a roughly Gaussian function of charge, centered near zero and with a width of around ten (in units of the charge on the proton). We have used the charges on E. coli's proteins to estimate the changes in the second virial coefficients of all its proteins as the concentration of a 1:1 salt is increased. The second virial coefficient has dimensions of volume, and we find that on average it decreases by about twice the average volume of a protein when the salt concentration is increased from to 1 molar. The standard deviation of the decrease is of the same order. The consequences of this for the complex mixture of proteins inside an E. coli cell are briefly discussed.
varying coefficient models have been extensively studied in the literature , and they are useful for characterizing nonconstancy relationship between predictors and responses in regression models ; see , for example , . in this paperwe consider the time - varying coefficient model where is the response , is the predictor , is the transpose operator , for some smooth function \to\mathbb r^p ] with .for example , it can be the epanechnikov kernel or the bartlett kernel . observe that ( [ eqnllmineqn ] ) has the closed form solution where for , with the convention that , and to establish an asymptotic theory for , we need to impose appropriate regularity conditions on the covariates and errors .for testing the hypothesis ( [ eqnh0 ] ) , assumed that is -mixing and stationary .to allow nonstationary predictor and error processes that can be nonstrong mixing , we assume that where is a shift process of independent and identically distributed ( i.i.d . )random variables , and and are measurable functions such that and are well defined for each ] .then , under condition ( a2 ) below , ( [ eqnformulation ] ) defines locally stationary processes .let be an i.i.d .copy of and be the coupled shift process .we define the functional dependence measure } \bigl\|\mathbf{j}(t ; \bolds { \mathcal f}_k ) - \mathbf{j } ( t ; \bolds{\mathcal f}_{k,\{0\ } } ) \bigr\|_q \quad\mathrm{and}\quad \theta_{m , q}(\mathbf{j } ) = \sum _ { j = m}^{\infty } \delta_{j , q}(\mathbf{j}).\ ] ] let be the long - run covariance matrix , and . under the short - range dependence condition , both of them are uniformly bounded over ] ; local stationarity : ; short - range dependence : for some ; the smallest eigenvalue of is bounded away from zero on ] .although can be consistently estimated by for any , the smoothed estimate can have a better rate of convergence .[ thmcltahat ] assume and for some .let and .if for some , then in theorem [ thmcltahat ] , the term can be interpreted as the bias due to nonparametric estimation , and it vanishes under the null hypothesis ( [ eqnh0 ] ) .hence the parametric component in the semi - parametric model ( [ eqnregsemi ] ) can have a -consistent estimate . for testing the null hypothesis ( [ eqnh0 ] ) ,let be a continuous mapping from ] and is symmetric .let \\[-8pt ] \nonumber \xi_{\mathbf{a},\mathbf{w},l } & = & \operatorname{tr } \biggl\{\int_0 ^ 1 \bolds\xi_{\mathbf{a},\mathbf{w}}(t)^l \,dt \biggr\}.\end{aligned}\ ] ] theorem [ thmclttn ] provides asymptotic normality for .[ thmclttn ] assume , and for some .if for some , then if in addition , then ( [ eqnclttn ] ) holds for .let be the cumulative standard normal distribution function and be the corresponding quantile .we reject the null hypothesis ( [ eqnh0 ] ) at level if let \to\mathbb{r}^{s} ] . the problem of estimatingcovariance matrices has been extensively studied ; see among others .let , and be bandwidth sequences satisfying , , and .let ] and for ] . for all with uniformly in ] , be a sequence of real matrix functions , and define .then : ^{1/q ' } \theta_{0,q}(\mathbf{j}) ] , where } [ \overline{\rho}\{\mathbf{a}_{1,n}(t)\ } + \sum_{k=1}^{n-1 } \overline{\rho}\{\mathbf{a}_{k+1,n}(t ) - \mathbf{a}_{k , n}(t)\}] ] , entailing ( ii ) .[ lembndquad ] assume , .let be real matrices and .then where .let be the -dependent approximated process and be the corresponding quadratic form .if , .hence where by lemma [ lembndlin ] and the arguments of proposition 1 in ,we have . 
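The local linear estimator described above solves a kernel-weighted least squares problem with regressors x_i and x_i (i/n - t), the second block supplying the local slope. A hedged numerical sketch on simulated data; the Epanechnikov kernel, bandwidth, sample size and the true coefficient functions below are illustrative choices rather than the paper's settings.

```python
# Local linear estimation of a time-varying coefficient function beta(t)
# in y_i = x_i^T beta(i/n) + e_i, using an Epanechnikov kernel.
# Bandwidth, sample size, and the true beta(t) are illustrative choices.
import numpy as np

rng = np.random.default_rng(5)
n, p, b_n = 500, 2, 0.15
t_grid = np.arange(1, n + 1) / n
beta_true = np.column_stack([1.0 + 0.5 * np.sin(2 * np.pi * t_grid),
                             0.5 * t_grid])
X = rng.normal(size=(n, p))
y = np.sum(X * beta_true, axis=1) + 0.3 * rng.normal(size=n)

def epanechnikov(u):
    return 0.75 * np.clip(1 - u ** 2, 0, None)

def local_linear(t):
    w = epanechnikov((t_grid - t) / b_n)
    Z = np.hstack([X, X * (t_grid - t)[:, None]])   # level and slope regressors
    WZ = Z * w[:, None]
    theta = np.linalg.lstsq(WZ.T @ Z, WZ.T @ y, rcond=None)[0]
    return theta[:p]                                # beta_hat(t); theta[p:] estimates beta'(t)

est = np.array([local_linear(t) for t in np.linspace(0.1, 0.9, 9)])
print(np.round(est, 2))
```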
so lemma [ lembndquad ] follows .[ lemsupbnd ] assume } \|\mathbf{j}(t;\bolds{\mathcal f}_0)\|_\iota < \infty ] . using the summation by parts formula , since has support ] and \\[-8pt ] \nonumber & & \qquad = o_p(\varphi_n \rho_n).\end{aligned}\ ] ] therefore , under our bandwidth conditions , . so theorem [ thmcltahat ] follows .let .lemma [ lemlrcov ] provides continuity properties of long - run covariance matrices for stochastically lipschitz continuous processes .[ lemlrcov ] assume and .then : for any nonnegative sequence , ; if , in addition , for some , then ; and if } \underline\rho\{\bolds{\lambda}(\mathbf{j } , t)\ } > 0 ] to symmetric matrices in . for , define ] .[ lemclttndiamond ] assume and . if and , then \rightarrow n \bigl(0 , 4 k^*_2 \lambda_{\mathbf{w}_0,2}\bigr),\ ] ] and let and be its -dependent counterpart and be the corresponding long - run covariance matrix .then uniformly as .let , , and . the central limit theorem ( [ eqnclttndiamond ] ) is a multivariate generalization of theorem a1 in by using propositions [ propmatrixnorm ] and [ propinversenorm ]. we shall only detail steps that require special attention on the dimensionality .essentially , we need to show that \\ & & \qquad = k_2^ * \lambda_{\mathbf{w}_0,2},\end{aligned}\ ] ] where . since , by lemma [ lemlrcov ] , we have \biggl(\int_0 ^ 1 w_{i , n}(t ) w_{j , n}(t ) \,dt \biggr)^2 \\ & & \qquad\quad { } + o\bigl\{n^2b_n \bigl(n^2b_n \bigr)^{-2}\bigr\ } + o\bigl\{\ell(m ) n^2b_n \bigl(n^2b_n\bigr)^{-2}\bigr\ } + o\bigl\{mn \bigl(n^2b_n\bigr)^{-2}\bigr\}\end{aligned}\ ] ] for some function as . then ( [ eqnclttndiamond ] ) follows . for ( [ eqntndiamondmean ] ) , by the proof of theorem 1 in , we have where since , ( [ eqntndiamondmean ] ) follows . proof of theorem [ thmclttn ] let \cup[1-b_n,1] ] and , by ( [ eqnbahadurrep ] ) and lemmas [ lembndlin ] and [ lemsupbnd ] , since , by lemma [ lembndquad ] , we have . since , ( i ) follows . for ( ii ) , since , we have where , by ( i ) , ^ 2 - \sum_{i=1}^n e_i^2 = o_p\bigl\{n\varphi_n ( \varphi_n + \rho_n)\bigr\}\ ] ] and , by lemma [ lembndlin ] and the argument on the quantity in ( i ) , \mathbf{x}_{\bar d , i}^\top\bolds{\beta}_{\bar d}(i /n ) \cr & = & o_p\bigl(n^{1/2 } + b_n^{-1 } + n\varphi_n\rho_n + n^{1/2}b_n^2 \bigr).\end{aligned}\ ] ] in addition , by lemma [ lembndlin ] , lemma [ lemrss ] follows .proof of theorem [ thmmodelselect ] by lemma [ lembndlin ] , . lemma [ lemrss ] implies for , and + o_p(1)\ ] ] for . since and , theorem [ thmmodelselect ] follows .proof of proposition [ thmcovest ] by lemma [ lembndlin ] , } \bigl\|\hat{\mathbf{m}}(\mathbf{g } , t ) - e\bigl\{\hat { \mathbf{m}}(\mathbf{g } , t)\bigr\}\bigr\| = o\bigl\{(n\varpi_n)^{-1/2 } \bigr\},\ ] ] and , by lemma [ lembndquad ] , } \bigl\|\hat{\bolds{\lambda}}(\mathbf{l } , t ) - e\bigl\{\hat { \bolds{\lambda}}(\mathbf{l } , t)\bigr\}\bigr\| = o\bigl(\varrho_n^{1/2 } \bigr).\ ] ] by ( [ eqnbndt1t2 ] ) , we have proposition [ thmcovest ] follows by properties of local linear estimates . proof of proposition [ proptvar ] consider the process that satisfies the recursion then , for each ] .hence , by condition ( t2 ) and induction , we have \\[-8pt ] \nonumber & & { } + c \sum_{j=1}^{k-1 } \frac{j \rho_{\mathbf{a}}^j } { n } , \qquad k \geq 2.\end{aligned}\ ] ] since and , ( [ eqntvarmaxdiff ] ) follows by letting .it suffices to show that . 
for this , by a similar argument of ( [ eqntvarmaxdiffk ] ) , we have for any , } \|\mathbf{z}_{t_1,i } - \mathbf{z}_{t_2,i}\| \leq\rho_{\mathbf{a}}^k \sup_{t_1,t_2 \in[0,1 ] } \| \mathbf{z}_{t_1,i - k } - \mathbf{z}_{t_2,i - k}\| + c |t_1-t_2| \sum_{j=0}^{k-1 } \rho_{\mathbf{a}}^j.\ ] ] since and , proposition [ proptvar ] follows by letting .we are grateful to the editor , an associate editor , and two anonymous referees for their helpful comments and suggestions .
We consider parameter estimation, hypothesis testing and variable selection for partially time-varying coefficient models. Our asymptotic theory has the useful feature that it allows dependent, nonstationary error and covariate processes. With a two-stage method, the parametric component can be estimated with a root-n convergence rate. A simulation-assisted hypothesis testing procedure is proposed for testing significance and parameter constancy. We further propose an information criterion that can consistently select the true set of significant predictors. Our method is applied to autoregressive models with time-varying coefficients. Simulation results and a real data application are provided.
radial velocity ( rv ) surveys of nearby stars have been employed in the search for extra - solar planets for nearly two decades ( see marcy , cochran & mayor 2000 ) . as these efforts continue into the next decade , they will be supplemented by precision astrometric searches , e.g. , by fame , keck interferometer , and sim . in previous papers , we examined the rv and astrometric techniques in detail , paying particular attention to the regime where the time - baseline of the observations is shorter than the orbital period of the extra - solar companion ( eisner & kulkarni 2001a , b ; hereafter ek2001a , b ) .this regime is interesting because one expects giant planets to form in the colder regions of the proto - planetary nebula , and thus one expects such objects to possess periods of many years to centuries . in ek2001a , b we demonstrated that one can achieve a significant improvement in sensitivity ( over current techniques ) if the orbital amplitude _ and phase _ are included in the analysis . here, we examine the benefits of combining simultaneous astrometric and rv observations .specifically , we examine the sensitivity of a combined astrometric and rv detection technique applied to an edge - on orbit , where the full rv signature and one dimension of the astrometric signature can be observed .the plan for the paper is straightforward .first , we simulate large numbers of hypothetical data sets containing ( 1 ) noise only , and ( 2 ) signal and noise , and determine the frequentist type i and ii errors . as in ek2001a, b we acknowledge that a frequentist approach is not as rigorous as a full bayesian analysis . however , this approach is simple enough that it is amenable to deriving ( semi-)analytical estimates of the sensitivity a principal goal of the paper .we conclude by discussing the parameter space opened up by combining fame , keck interferometer , or sim astrometric surveys with ongoing precision rv studies .we will assume edge - on circular orbits throughout this discussion .the astrometric signature of an edge - on circular orbit is given by where and are the proper motion and parallax of the planetary system , respectively , and here , is the distance to the system , is the mass of the star , and is the mass of the planet .we ignore the annual parallax .however , annual parallax should be included in modeling of planets with periods around one year .the rv signature of this orbit is given by the derivative of the orbital position along the line of sight : here is the radial velocity of the planetary system , and thus , we can express the sensitivity ( defined as the minimum - mass planet that can be detected ) of the rv and astrometric techniques in terms of : however , it is more difficult to identify planets with long periods than equation [ eq : sensitivity ] might suggest . in the so - called `` long - period regime '' , defined as where is the duration of the survey , we observe a fraction of the orbit . as a result , in this regime, the sensitivity is expected to depend critically on the orbital phase .the reflex velocity is covariant with and thus the rv technique is most sensitive when ( ek2001a ) .in contrast , the astrometric signal of an edge - on orbit is covariant with and , and thus the astrometric technique is sensitive when ( ek2001b ) . 
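The reflex amplitudes introduced at the start of this section follow from Kepler's third law for an edge-on circular orbit: the astrometric signature is alpha = (M_p / M_*) a / d and the RV semi-amplitude is K = 2 pi a M_p / (P M_*). The explicit symbols in the text were stripped in extraction, so the sketch below uses these standard relations, with a Jupiter analogue at 10 pc as an illustrative system.

```python
# Reflex amplitudes of a star due to an edge-on, circular planetary orbit:
# astrometric signature alpha = (M_p / M_*) * a / d, and RV semi-amplitude
# K = 2 pi a M_p / (P M_*).  Illustrative case: a Jupiter analogue at 10 pc.
import numpy as np

G = 6.674e-11            # m^3 kg^-1 s^-2
M_sun = 1.989e30         # kg
M_jup = 1.898e27         # kg
pc = 3.086e16            # m
yr = 3.156e7             # s

def amplitudes(m_star, m_planet, period_yr, dist_pc):
    P = period_yr * yr
    a = (G * m_star * P ** 2 / (4 * np.pi ** 2)) ** (1 / 3)    # orbital radius [m]
    K = 2 * np.pi * a * m_planet / (P * m_star)                # RV semi-amplitude [m/s]
    alpha_rad = (m_planet / m_star) * a / (dist_pc * pc)       # astrometric signature [rad]
    alpha_uas = alpha_rad * 180 / np.pi * 3600e6               # micro-arcseconds
    return K, alpha_uas

K, alpha = amplitudes(M_sun, M_jup, 11.86, 10.0)
print(f"Jupiter analogue at 10 pc: K ~ {K:.1f} m/s, alpha ~ {alpha:.0f} micro-arcsec")
```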
thus the rv and astrometric techniques achieve their maximal sensitivities for different orbital phases , and we expect , on general grounds , that combining the two techniques should yield a substantial benefit in the long - period regime .the signal analysis for the astrometric and rv techniques consists of fitting the observations to the models specified in equations [ eq : astrom - orbit ] and [ eq : rv - orbit ] . as noted by several authors ( e.g. * ? ? ?* ; * ? ? ?* ek2001a ) the most optimal fitting is obtained by using the technique of least squares .first , we convert the physical model specified by equations [ eq : astrom - orbit ] and [ eq : rv - orbit ] to equations linear in the unknowns : here , , , , , and . in ek2001a , b we discuss the importance of the , and terms .these three variables are not directly relevant in detecting or characterizing a companion planet but they are unknown and in the long - period regime are covariant with some of the orbital parameters .thus the three variables must be solved for in order to correctly model the observations .using equations [ eq : lsq - model1 ] and [ eq : lsq - model2 ] as our physical model , we perform the following analysis .first , we simulate a large number of data - sets containing only gaussian nose ( i.e. no signal ) . for each of these data sets, we perform a least squares fit to three models : a model using only astrometric measurements ( equation [ eq : lsq - model1 ] ) , a model using only rv measurements ( equation [ eq : lsq - model2 ] ) , and a model that utilizes both astrometric and rv measurements . in each case , for each simulated data set we fit for amplitude and phase .we note here that for the rv+astrometry model , since the two measurements have different variances we minimize the ( where is the difference between the model and the rms - weighted measurements ) . specifically , we simulate data - sets , sampled at one month intervals for years ( with no loss of generality , we take the time interval to go from to ) , and we explore periods from 5 to 100 years .we assume that the measurement noise in both the rv and astrometric surveys is characterized by gaussian noise with rms of and respectively .the best achieved m s . the anticipated astrometric precision of fame is between 50 and 100 , that of the keck interferometer ( narrow angle ) between and 50 , and that of sim between 1 and 10 .we note that yields approximately equivalent sensitivity to rv technique with 3 m s rms for a planet orbiting a star located at distance pc with years ( equation [ eq : rv - amplitude ] ) .next , for each of three models , we determine the ellipse ( in space ) within which 99% of the fitted amplitudes and phases lie .this ellipse , denoted by , describes the `` type i '' errors of the detection technique .thus the inferred and have a 1% chance of being outside the ellipse ( in the absence of a signal ) . as discussed earlier ( [ sec : equations ] ) we expect rv and astrometric models to show orthogonal sensitivity .indeed , as can be seen from figure [ fig : ellipses ] , the and are out of phase in the long period regime . 
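The type I statistics described above come from fitting the linearized model to noise-only data and recording the distribution of the fitted in-phase and quadrature amplitudes, i.e. (A cos phi, A sin phi). A hedged sketch for the RV-only case; monthly sampling over 10 years, 3 m/s noise, and a 20-year trial period are assumptions chosen for illustration.

```python
# Monte Carlo of noise-only least-squares fits for the RV model
# v(t) = v0 + a_c cos(2 pi t / P) + a_s sin(2 pi t / P),
# giving the empirical distribution of (a_c, a_s) used for type I errors.
import numpy as np

rng = np.random.default_rng(7)
T_obs, dt, sigma_v, P = 10.0, 1.0 / 12.0, 3.0, 20.0     # years, years, m/s, years
t = np.arange(0.0, T_obs, dt)
omega = 2 * np.pi / P
design = np.column_stack([np.ones_like(t), np.cos(omega * t), np.sin(omega * t)])

fits = []
for _ in range(5000):
    v = sigma_v * rng.normal(size=t.size)               # noise only
    coeffs, *_ = np.linalg.lstsq(design, v, rcond=None)
    fits.append(coeffs[1:])                             # (a_c, a_s)
fits = np.array(fits)

cov = np.cov(fits.T)
print("std of fitted a_c, a_s [m/s]:", np.round(np.sqrt(np.diag(cov)), 2))
print("correlation:", round(cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1]), 2))
# A 99% region in the (a_c, a_s) plane follows from this empirical
# distribution; signals whose fitted amplitudes fall outside it are
# flagged as detections.
```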
on a basic level , we can understand the benefit of combining rv and astrometric observations by noting that the intersection of and is much smaller than either of the individual ellipses , and thus it is easier to detect signals over the level of the noise .in fact , this is verified by the combined analysis : the ellipse for the combined analysis , , lies entirely within _ both _ and ( figure [ fig : ellipses ] ) .analytic expressions for the type i errors for rv or astrometric techniques are given in ek2001a and ek2001b , respectively .given these expressions , it is not difficult to infer an analytic expression for the type i errors in the case of combined astrometric+rv technique .as noted earlier for m s and , the semi - minor axes for and are approximately equal ( pc ) , and thus for the combined analysis will be a circle whose radius is given by here , , , and the factor of reflects the fact that in the short - period regime , there are essentially twice as many measurements . as illustrated in figure [ fig : ac ] , this analytic function provides an excellent fit to the data .next , we evaluate `` type ii '' errors for the three models .type ii errors describe the probability of failing to detect a genuine signal due to contamination by noise . to understand the type ii statistics , we simulate a large number of data sets consisting of a simulated signal and noise ( see ek2001a , b for further details ) .the signal is a sinusoidal wave with an amplitude ( astrometry ) , and the corresponding velocity amplitude is ( we set pc ) ; the phase , is randomly chosen from the interval $ ] ( uniform distribution ) . for each model , we increment the amplitude[s ] until 99% of the fitted orbital parameters lie outside of the appropriate ellipse ; this amplitude is denoted by ( for each method ) .the benefit of combining rv and astrometric analysis accrues mainly from the fact that the error ellipses for the two techniques in parameter space are perpendicular to each other ( figure [ fig : ellipses ] ) .rv+astrometry analysis will be most useful in cases where the error ellipses for the two techniques are roughly the same size ( otherwise , one error ellipse might lie entirely within the other , and no additional benefit would arise from combining the two techniques ) . 
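The type II procedure can be sketched in the same spirit: inject a sinusoid of growing amplitude and random phase into noise-only data, refit, and record how often the fitted amplitudes remain inside the noise-only acceptance region. The sketch below simplifies the region to a circle (the text shows it is actually an elongated ellipse in the long-period regime), and the sampling, noise level and trial period are the same illustrative assumptions as above.

```python
# Type II (missed detection) sketch for the RV-only case: inject a signal,
# refit by least squares, and count fits that escape the noise-only 99% region.
import numpy as np

rng = np.random.default_rng(9)
t = np.arange(0.0, 10.0, 1.0 / 12.0)        # 10 yr of monthly sampling
sigma_v, P = 3.0, 20.0                      # m/s noise, 20 yr trial period
omega = 2 * np.pi / P
design = np.column_stack([np.ones_like(t), np.cos(omega * t), np.sin(omega * t)])

def fit_amp(v):
    c = np.linalg.lstsq(design, v, rcond=None)[0]
    return c[1:]                            # (a_c, a_s)

# Noise-only Monte Carlo -> 99% radius (a circle is used for simplicity;
# the true region is an elongated ellipse in the long-period regime).
noise_fits = np.array([fit_amp(sigma_v * rng.normal(size=t.size)) for _ in range(3000)])
r99 = np.quantile(np.hypot(noise_fits[:, 0], noise_fits[:, 1]), 0.99)

for K in (1.0, 2.0, 4.0, 8.0):              # injected RV amplitudes [m/s]
    detected = 0
    for _ in range(1000):
        phi = rng.uniform(0, 2 * np.pi)
        v = K * np.cos(omega * t + phi) + sigma_v * rng.normal(size=t.size)
        a = fit_amp(v)
        detected += np.hypot(a[0], a[1]) > r99
    print(f"K = {K:.0f} m/s: detected in {detected / 10:.1f}% of trials")
```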
as mentioned above ,the current precision of rv techniques is m s , which means that for a 10 year survey , we must use astrometric measurements with precision ( for a system at pc ) to reap the maximal benefit from rv+astrometry technique .this is approximately the sensitivity that will be obtained by future instruments like keck interferometer and fame .as illustrated by figure [ fig : m99]a , rv+astrometry analysis ( with comparable rv and astrometric measurement accuracies ) applied to edge - on orbits attains approximately the same sensitivity as an astrometric analysis applied to face - on orbits .this similarity stems from the fact that in both cases , the highly elliptical 1-d is circularized through the addition of a second dimension .another way of thinking about this is that no matter what part of the orbit , when we observe both dimensions we can always see the full orbital curvature .thus , combining astrometric and rv techniques in a large survey ensures good sensitivity for all orbital inclination angles .it is also worth noting that rv+astrometry yields valuable gains in the short - period regime ( ) .when is comparable to , the sensitivity of rv+astrometry is better by over rv or astrometry alone .furthermore , noting that the sensitivity of astrometry to face - on orbits is ( ek2001b ) , we see that the short - period sensitivity of rv+astrometry is approximately independent of orbital inclination .we have also examined the sensitivity when astrometric data is combined with rv data of a longer time - baseline , specifically for several upcoming missions ( recall that rv surveys will have been underway for 1520 years by the time astrometric surveys commence ) .we investigate fame ( years , years ) , keck interferometer ( years , years ) , and sim ( years , years ) .we find that in the cases of fame and keck interferometer , the addition of longer time - baseline rv measurements has a significant impact ( figures [ fig : m99]b[fig : m99]c ) .in fact , rv+astrometry analysis can easily detect saturns when astrometric analysis alone does nt come close ( figure [ fig : m99]b ) . in the case of sim ,the main benefit of rv+astrometry is that one can achieve optimal sensitivity over a wider range of inclination angles ( figure [ fig : m99]d ) .we have also examined the sensitivities of the rv , astrometric , and combined techniques applied to shorter duration surveys , in order to compare the various sensitivities for short - period companions .specifically , we investigate the prospect of finding companions around m dwarfs with a 2-year survey combining rv measurements with astrometric measurements from ao systems on large telescopes like keck or the palomar .previous authors have successfully searched for companions around m - dwarfs using rv measurements and astrometric measurements on ao systems ( e.g. , * ? ? ?* ) although they have not used the combined rv+astrometry analysis described here ( i.e. , they analysed the rv data and the astrometric data separately ) .as illustrated by figure [ fig : bd ] , the rv technique gains significantly over astrometry for short periods ( because ; equation [ eq : rv - amplitude ] ) . if astrometry is to contribute meaningfully then mas .there is some expectation that such a precision can be obtained for binary stars .if so , combined rv+astrometry surveys of nearby m dwarfs can measure masses of jupiter and saturn - mass companions .99 danner , r. , unwin , s. 
, & allen , r.j .1999 , space interferometry mission : taking measure of the universe ( washington , dc : nasa ) dekany , r. , angel , r. , hege , k. , & wittman , d. 1994 , ap&ss , 212 , 299 delfosse et al . 1999delfosse+99delfosse , x. , forveille , t. , beuzit , j .- l . ,udry , s. , mayor , m. , & perrier , c. 1999344897 marcy , g.d . ,cochran , w.d . , & mayor , m. 2000 , protostars and planets iv , eds v. mannings , a.p . boss , & s.s .russell , university of arizona press , p. 1285
The astrometric and radial velocity techniques of extra-solar planet detection attempt to detect the periodic reflex motion of the parent star by extracting this periodic signal from a time-sampled set of observations. The extraction is generally accomplished using periodogram analysis or the functionally equivalent technique of least squares fitting of sinusoids. In this paper, we use a frequentist approach to examine the sensitivity of the least squares technique when applied to a combination of radial velocity and astrometric observations. We derive a semi-analytical expression for the sensitivity and show that the combined approach yields significantly better sensitivity than either technique on its own. We discuss the ramifications of this result for upcoming astrometric surveys with FAME, the Keck Interferometer, and SIM.
in recent years , three dimensional ( 3d ) displays using computer - generated holograms ( cghs ) have been well studied , since cghs can faithfully reconstruct the light waves of 3d images .such 3d display is referred as to `` electroholography '' .electroholography can reconstruct 3d images by displaying cghs generated by diffraction calculations on a spatial light modulator ( slm ) such as amplitude or phase - modulated lcd panels . in electroholography ,several important issues must be considered : calculation cost for cgh , narrow viewing angle , small reconstructed images and color reconstruction . as color reconstruction is an important issues ,many color reconstruction methods have been studied .for instance , color reconstruction methods using three slms to display three cghs corresponding to the red , green and blue components of a 3d image have been proposed .color reconstruction methods using a single slm have also been proposed : space - division and depth - division methods .another single slm method is the time - division method , which temporally switches rgb lights by synchronizing signals .regardless of the color reconstruction methods as mentioned above , we need to compute three diffraction calculations corresponding to the rgb components of a color image .we developed a calculation reduction method for color cghs using color space conversion .color cghs have been calculated on rgb space . in this paper, we calculate color cghs in other color spaces : for example , ycbcr color space , which is widely used in digital image processing like jpeg and mpeg formats . in ycbcr color space , a rgb image is converted to the luminance component ( y ) , blue - difference chroma ( cb ) and red - difference chroma ( cr ) components . in terms of the human eye , although the negligible difference of the luminance component is well - recognized , the difference of the other components is not . in this method ,the luminance component is normal sampled and the chroma components are down - sampled .the down - sampling allows us to accelerate the calculation of the color cghs .we calculate diffraction from the components , and then convert the diffracted result in ycbcr color space to rgb color space . in section 2 ,we explain the calculation reduction method for color cgh using color space conversion . in section 3, we verify the proposed method on computer simulation .section 4 concludes this work .for simple notation we introduce diffraction operator }} ] is fourier transform , and is the propagation distance between the source and destination planes .if we calculate color cghs in rgb color space , we need to calculate three diffracted fields and from the corresponding components of the rgb images , and as follows : } } \label{eqn : rgb_cgh1},\\ u_g(\bm x_2)={{{\rm d}^{z}_{\lambda_g } [ g(\bm x_1 ) ] } } \label{eqn : rgb_cgh2},\\ u_b(\bm x_2)={{{\rm d}^{z}_{\lambda_b } [ b(\bm x_1 ) ] } } \label{eqn : rgb_cgh3},\end{aligned}\ ] ] where and are the wavelengths of rgb lights , respectively . in this paper, we call this the `` direct color cgh calculation '' .if there are spatial light modulators that are capable of displaying complex amplitudes , we can reconstruct a clear rgb image without direct and conjugated lights from eqs.([eqn : rgb_cgh1])-([eqn : rgb_cgh3 ] ) by the three slm method , time - division method and space - division method and so forth . 
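The diffraction operator D can be realized numerically in several ways; the text does not commit to a particular discretization at this point, so the following hedged sketch uses the Fresnel transfer-function (FFT-based) method as one common choice. The 10 micron pixel pitch and 633 nm wavelength follow the later simulation section, while the grid size, aperture and 10 cm distance are assumed here for illustration.

```python
# One common realization of the diffraction operator D^z_lambda: Fresnel
# propagation by the transfer-function (FFT) method.  The 10 um pitch and
# 633 nm wavelength follow the simulation section; grid size, aperture and
# the 10 cm distance are assumed for illustration.
import numpy as np

def fresnel_propagate(u1, wavelength, z, pitch):
    """Propagate the complex field u1 (N x N) over distance z [m]."""
    n = u1.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(1j * 2 * np.pi * z / wavelength) * \
        np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(u1) * H)

n, pitch = 512, 10e-6
x = (np.arange(n) - n // 2) * pitch
X, Y = np.meshgrid(x, x)
aperture = (np.abs(X) < 0.5e-3) & (np.abs(Y) < 0.5e-3)     # 1 mm square aperture
u2 = fresnel_propagate(aperture.astype(complex), 633e-9, 0.10, pitch)
print("peak intensity after 10 cm:", round(np.abs(u2).max() ** 2, 3))
```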
when using the amplitude or phase - modulated slms , we need to convert the complex amplitudes of eqs.([eqn : rgb_cgh1])-([eqn : rgb_cgh3 ] ) to the amplitude cghs by taking the real part or phase - only cgh by taking the argument . from here ,let us consider the calculation of color cghs in the ycbcr color space .converting a rgb image to a ycbcr image is as follows , where , and are the luminance , blue - difference chroma and red - difference chroma , respectively .we convert the ycbcr image to the rgb image by , from eq.([eqn : y_r ] ) , the diffracted fields of rgb images passing through the ycbcr color space are expressed as , }}=b_{11 } { { { \rm d}^{z}_{\lambda_r } [ y(\bm x_1 ) ] } } + b_{12 } { { { \rm d}^{z}_{\lambda_r } [ c_b(\bm x_1 ) ] } } + b_{13 } { { { \rm d}^{z}_{\lambda_r } [ c_r(\bm x_1 ) ] } } \label{eqn : diff_y_r1 } , \\{ { { \rm d}^{z}_{\lambda_g } [ g(\bm x_1)]}}=b_{21 } { { { \rm d}^{z}_{\lambda_g } [ y(\bm x_1 ) ] } } + b_{22 } { { { \rm d}^{z}_{\lambda_g } [ c_b(\bm x_1 ) ] } } + b_{23 } { { { \rm d}^{z}_{\lambda_g } [ c_r(\bm x_1 ) ] } } \label{eqn : diff_y_r2 } , \\ { { { \rm d}^{z}_{\lambda_b } [ r(\bm x_1)]}}=b_{31 } { { { \rm d}^{z}_{\lambda_b } [ y(\bm x_1 ) ] } } + b_{32 } { { { \rm d}^{z}_{\lambda_b } [ c_b(\bm x_1 ) ] } } + b_{33 } { { { \rm d}^{z}_{\lambda_b } [ c_r(\bm x_1 ) ] } } \label{eqn : diff_y_r3}.\end{aligned}\ ] ] even though the direct color cgh calculation of eqs.([eqn : rgb_cgh1])-([eqn : rgb_cgh3 ] ) requires only three diffraction calculations , we need to calculate nine diffraction calculations in eqs .( [ eqn : diff_y_r1])-([eqn : diff_y_r3 ] ) .to improve this problem we use the following important relation of the diffraction operator : } } = { { { \rm d}^{z\lambda_2/\lambda_1}_{\lambda_2 } [ u(\bm x ) ] } } .\label{eqn : ope}\ ] ] applying the relation of } } = { { { \rm d}^{z}_{\lambda_r } [ u(\bm x)]}} ] to eq.([eqn : diff_y_r3 ] ) , the following equation is derived , }}&= & b_{31 } { { { \rm d}^{z \lambda_b/\lambda_r}_{\lambda_b } [ y(\bm x_1 ) ] } } + b_{32 } { { { \rm d}^{z \lambda_b/\lambda_r}_{\lambda_b } [ c_b(\bm x_1 ) ] } } + b_{33 } { { { \rm d}^{z \lambda_b/\lambda_r}_{\lambda_b } [ c_r(\bm x_1 ) ] } } , \nonumber \\ & = & b_{31 } { { { \rm d}^{z}_{\lambda_r } [ y(\bm x_1 ) ] } } + b_{32 } { { { \rm d}^{z}_{\lambda_r } [ c_b(\bm x_1 ) ] } } + b_{33 } { { { \rm d}^{z}_{\lambda_r } [ c_r(\bm x_1)]}}. \label{eqn : diff_y_5}\end{aligned}\ ] ] finally , we can obtain a color cgh passing through the ycbcr color space in matrix notation as follows : } } \\ { { { \rm d}^{z\lambda_g/\lambda_r}_{\lambda_g } [ g(\bm x_1 ) ] } } \\ { { { \rm d}^{z\lambda_b/\lambda_r}_{\lambda_b } [ b(\bm x_1 ) ] } } \end{pmatrix}= \begin{pmatrix } b_{11 } & b_{12 } & b_{13 } \\b_{21 } & b_{22 } & b_{23 } \\b_{31 } & b_{32 } & b_{33 } \\\end{pmatrix } \begin{pmatrix } { { { \rm d}^{z}_{\lambda_r } [ y(\bm x_1 ) ] } } \\ { { { \rm d}^{z}_{\lambda_r } [ c_b({\bm x_1 } ) ] } } \\ { { { \rm d}^{z}_{\lambda_r } [ c_r({\bm x_1 } ) ] } } \end{pmatrix}. \label{eqn : diff_y_r_mat}\ ] ] as you can see , this conversation includes only three diffraction operators .unfortunately , the propagation distance of the diffracted results of green and blue components change from to and .when placing cghs corresponding to each component at same the location , the reconstructed rgb images from the cghs are reconstructed out of position .we can compensate for the out of location by placing the green and blue cghs at and , respectively . 
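The entries of the RGB-to-YCbCr matrices in the equations above did not survive extraction. The full-range ITU-R BT.601 coefficients used by JPEG are one common choice and are assumed in the sketch below, which performs the forward and inverse color-space conversions that wrap around the three diffraction calculations.

```python
# RGB <-> YCbCr conversion (full-range ITU-R BT.601 coefficients assumed,
# since the matrix entries in the text are garbled).  RGB values in [0, 1].
import numpy as np

A = np.array([[ 0.299,     0.587,     0.114    ],
              [-0.168736, -0.331264,  0.5      ],
              [ 0.5,      -0.418688, -0.081312 ]])
B = np.linalg.inv(A)          # the b_ij matrix of the back-conversion

def rgb_to_ycbcr(img):
    return np.einsum('ij,hwj->hwi', A, img)

def ycbcr_to_rgb(ycc):
    return np.einsum('ij,hwj->hwi', B, ycc)

rng = np.random.default_rng(8)
rgb = rng.uniform(size=(4, 4, 3))
ycc = rgb_to_ycbcr(rgb)
rgb_back = ycbcr_to_rgb(ycc)
print("max round-trip error:", np.abs(rgb - rgb_back).max())
```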
since eq. ([eqn:diff_y_r_mat]) still contains three diffraction operators, its calculation cost is the same as that of the direct color cgh calculation of eqs. ([eqn:rgb_cgh1])-([eqn:rgb_cgh3]). however, the human eye readily perceives small differences in the luminance component y, whereas it is far less sensitive to differences in the chroma components cb and cr. using this property, the luminance component y is kept at its normal sampling and the chroma components are down-sampled. the down-sampling allows us to accelerate the calculation of the color cghs. note that we need to change the sampling pitches used by the diffraction operators of the down-sampled cb and cr components, because their areas are smaller than that of the y component. for example, when down-sampling cb and cr to 1/4, we change their sampling pitches to $4p$, where $p$ is the sampling pitch of the y component.

let us summarize the color cgh calculation passing through the ycbcr color space. first, we convert an rgb image to a ycbcr image using eq. ([eqn:r_y]). second, we down-sample the cb and cr components. third, we compute the diffraction calculations from the components, and then up-sample the diffracted chroma fields ${\rm D}^{z}_{\lambda_r}[c_b(\bm x_1)]$ and ${\rm D}^{z}_{\lambda_r}[c_r(\bm x_1)]$ back to the original resolution. next, we calculate the rgb cghs using eq. ([eqn:diff_y_r_mat]). last, we reconstruct the color images by placing the green and blue cghs at $z\lambda_g/\lambda_r$ and $z\lambda_b/\lambda_r$, respectively.

[figure 1: reconstructed images of "fruits" from cghs calculated by the direct color cgh calculation (eqs. ([eqn:rgb_cgh1])-([eqn:rgb_cgh3])) and the proposed method (eq. ([eqn:diff_y_r_mat])), respectively]

we verify the proposed method by computer simulation. we use three rgb color images. for the calculation conditions, the propagation distance is cm, the sampling pitch is 10 , the wavelengths of the rgb lights are 633 nm, 532 nm and 450 nm, respectively, and the resolution of all of the images is pixels. we assume that we use a complex-amplitude slm. we use nearest-neighbor interpolation for the down-sampling and up-sampling in the color space conversion; nearest-neighbor interpolation is simple, so it is faster than other interpolation methods such as linear and cubic interpolation.

figure [fig:img1] shows the reconstructed images of "fruits" from cghs, which are calculated by the direct color cgh calculation (eqs. ([eqn:rgb_cgh1])-([eqn:rgb_cgh3])) and the proposed method (eq. ([eqn:diff_y_r_mat])), respectively. the notations "r:g:b=1:1:1" and "r:g:b=4:1:1" mean that a color image is not down-sampled and that its green and blue components are down-sampled to 1/4, respectively. likewise, the notation "y:cb:cr=4:1:1" means that the cb and cr components of a color image are down-sampled to 1/4. the insets show magnified images of a part of the reconstructed images. the reconstructed images of "r:g:b=4:1:1" and "r:g:b=8:1:1" are blurred because the green and blue components affect the luminance of the image. in contrast, the reconstructed images of "y:cb:cr=4:1:1" and "y:cb:cr=8:1:1" maintain the sharpness of the texture because the cb and cr components do not strongly affect the luminance; however, the brightness is slightly decreased compared with the other reconstructed images.
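before moving on to the remaining test images, the procedure summarized above can be sketched at the data-flow level as follows. this is a rough illustration that reuses the `diffract` operator and the bt.601 matrices `A`/`B` assumed in the earlier snippets; the down-sampling factor, the nearest-neighbor resampling and the parameter values are illustrative, and this is not the authors' implementation.

```python
import numpy as np

def downsample(img, factor):
    """nearest-neighbor down-sampling by an integer factor."""
    return img[::factor, ::factor]

def upsample(img, factor):
    """nearest-neighbor up-sampling by an integer factor."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def proposed_color_cgh(rgb, z, pitch, wavelengths, factor=4):
    """color cgh via ycbcr: one full-resolution and two reduced diffractions.

    reuses rgb_to_ycbcr / B (bt.601 assumption) and diffract() from the
    earlier sketches; wavelengths = (lambda_r, lambda_g, lambda_b).
    """
    lr, lg, lb = wavelengths
    ycbcr = rgb_to_ycbcr(rgb)
    y, cb, cr = ycbcr[..., 0], ycbcr[..., 1], ycbcr[..., 2]

    # full-resolution diffraction of the luminance component at wavelength lambda_r
    dy = diffract(y.astype(complex), lr, z, pitch)
    # down-sampled chroma: the sampling pitch grows by the same factor
    dcb = upsample(diffract(downsample(cb, factor).astype(complex),
                            lr, z, pitch * factor), factor)
    dcr = upsample(diffract(downsample(cr, factor).astype(complex),
                            lr, z, pitch * factor), factor)

    # mix the three diffracted fields with the ycbcr->rgb matrix, eq. (diff_y_r_mat)
    fields = np.stack([dy, dcb, dcr], axis=-1) @ B.T
    # fields[..., 0] is the red cgh at distance z; the green and blue cghs
    # correspond to distances z*lg/lr and z*lb/lr and are repositioned at
    # reconstruction time, as described in the text.
    return fields
```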
[figure 2: reconstructed images of "mandrill" from cghs calculated by the direct color cgh calculation (eqs. ([eqn:rgb_cgh1])-([eqn:rgb_cgh3])) and the proposed method (eq. ([eqn:diff_y_r_mat])), respectively]

[figure 3: reconstructed images of "tiffany" from cghs calculated by the direct color cgh calculation (eqs. ([eqn:rgb_cgh1])-([eqn:rgb_cgh3])) and the proposed method (eq. ([eqn:diff_y_r_mat])), respectively]

figures [fig:img2] and [fig:img3] show the reconstructed images of "mandrill" and "tiffany" from cghs, which are calculated by the direct color cgh calculation (eqs. ([eqn:rgb_cgh1])-([eqn:rgb_cgh3])) and the proposed method (eq. ([eqn:diff_y_r_mat])), respectively. the reconstructed images of "r:g:b=4:1:1" and "r:g:b=8:1:1" are blurred. in contrast, the reconstructed images of "y:cb:cr=4:1:1" and "y:cb:cr=8:1:1" maintain the sharpness of the texture. the calculation times of "r:g:b=1:1:1", "r:g:b=4:1:1" and "y:cb:cr=4:1:1" are about 660 ms, 270 ms and 396 ms, respectively. the calculation times of "r:g:b=8:1:1" and "y:cb:cr=8:1:1" are about 260 ms and 400 ms, respectively. therefore, the proposed method can maintain the sharpness of the reconstructed image while still accelerating the calculation.

we proposed a calculation reduction method for color cghs using color space conversion. the proposed method succeeded in maintaining the sharpness of the reconstructed image and accelerating the calculation speed. the proposed method is applicable not only to ycbcr color space but also to other color spaces such as yiq, yuv and so forth. in addition, the proposed method will be useful for accelerating the numerical reconstruction in color digital holography. the proposed method applies only to a pure-amplitude image or a weak-phase image, so in future study we will attempt to develop a calculation reduction method for color cghs with full complex amplitude.

this work is supported by the japan society for the promotion of science (jsps) kakenhi (grant-in-aid for scientific research (c) 25330125) 2013, and kakenhi (grant-in-aid for scientific research (a) 25240015) 2013.

h. yoshikawa, t. yamaguchi, and r. kitayama, "real-time generation of full color image hologram with compact distance look-up table," osa topical meeting on digital holography and three-dimensional imaging 2009, dwc4 (2009).
h. nakayama, n. takada, y. ichihashi, s. awazu, t. shimobaba, n. masuda and t. ito, "real-time color electroholography using multiple graphics processing units and multiple high-definition liquid-crystal display panels," appl. opt. 49, 5993-5996 (2010).
t. ito, t. shimobaba, h. godo, and m. horiuchi, "holographic reconstruction with a 10-μm pixel-pitch reflective liquid-crystal display by use of a light-emitting diode reference light," opt. lett. 27, 1406-1408 (2002).
m. makowski, m. sypek, i. ducin, a. fajst, a. siemion, j. suszek, and a. kolodziejczyk, "experimental evaluation of a full-color compact lensless holographic display," opt. express 17, 20840-20846 (2009).
t. shimobaba, a. shiraki, y. ichihashi, n. masuda and t. ito, "interactive color electroholography using the fpga technology and time division switching method," ieice electron. express 5, 271-277 (2008).
m. oikawa, t. shimobaba, t. yoda, h. nakayama, a. shiraki, n. masuda, and t. ito, "time-division color electroholography using one-chip rgb led and synchronizing controller," opt. express 19, 12008-12013 (2011).
we report a calculation reduction method for color computer-generated holograms (cghs) using color space conversion. color cghs are generally calculated in rgb color space. in this paper, we calculate color cghs in other color spaces: for example, ycbcr color space. in ycbcr color space, an rgb image is converted to a luminance component (y), a blue-difference chroma component (cb) and a red-difference chroma component (cr). the human eye readily perceives small differences in the luminance component, whereas it is much less sensitive to differences in the other components. in this method, the luminance component is kept at its normal sampling and the chroma components are down-sampled. the down-sampling allows us to accelerate the calculation of the color cghs. we compute the diffraction calculations from the components, and then we convert the diffracted results in ycbcr color space to rgb color space.

keywords: computer-generated hologram, digital holography, color holography, color space conversion, ycbcr color space
in the past, insurance companies did not have a large amount of information about their customers, so everyone paid similar prices despite large variations in driving habits. recently, technological advances have created ways to observe human driving behavior. taking advantage of this technology, some us-based insurance companies now offer consumers vehicle insurance policies with the option of installing a tracking device in their vehicle. this allows the insurance companies to charge less to safe drivers, in an attempt to attract those people as customers. however, unlike a truck driver who is monitored while doing a job, these consumers will be monitored during all of their daily activities, public and private. although some of these monitoring devices are based upon gps information and offer no privacy protections (such as onstar), many other devices and insurance programs are advertised as being privacy-preserving. for instance, as an alternative to gps data, some of these devices log speed information instead. in this paper, we demonstrate that logging this timestamped speed data is also not privacy-preserving, despite the insurance companies' claims.

the privacy problem introduced in this paper is significant because the data collection by insurance companies is an "always on" activity. data may be logged forever; thus, deducing location data from speed traces represents a huge breach of privacy for drivers using these programs. even if insurance companies are not obtaining location traces from their data today, this does not guarantee it will not present problems in the future. because the data is not considered location data, it is probably not being treated as such, and it may eventually be obtained by any number of antagonists who do know how to process the data traces to obtain location information. for example, law enforcement agencies may request the information as they have done with other data sets, e.g. electronic toll records.

breaching user privacy with just a starting point and speed information is a difficult task; otherwise insurance companies would not claim that the information being gathered is privacy preserving. indeed, matching a single speed trace to all of the roads in a country or state is extremely difficult, but having the home address of a person, as the insurance companies do, provides a starting location for some paths, simplifying the problem.

[figure: number of path choices within a given distance of a starting location. the pattern of growth on a log-linear scale shows that the number of paths increases exponentially, making exploration of all possible paths infeasible. locations central to major transportation routes, like a grocery store, see a higher increase in paths compared to a residential location; residential roads are also more likely to dead-end, so the number of paths from the residence actually falls at the beginning because only one path does not dead-end immediately.]

reproducing an exact driving path from just speed data and a starting location is challenging. speed data does not indicate whether a person is turning at an intersection or merely stopping at a stop sign or red light, so multiple alternative pathways exist that match some of the speed data.
of course multiple routes can be explored , but even within just a few minute s drive of a person s home in our data set there may be thousands of unique paths that the person could have taken , so exploring every single possible path is infeasible .the growth of possible paths within a distance of just one mile of a starting location is illustrated by figure [ fig : path_growth ] .there may only be a few tens or hundreds of roads within a mile of a starting location , but there are many unique ways to travel along those roads . at one mile , there are over 100,000 possible paths the driver could have taken when the trip starts from a grocery store next to several highways .in addition to the explosion of possible paths the driver could have taken , the problem is made hard by the fact that speed data is ambiguous ; that is to say it does not uniquely match just a single path .for example , the speed of a vehicle making a right turn can also match the speed of a vehicle making a left , or the speed of a vehicle that decelerates because of a pot hole .there is no certain way to tell which way a car goes at an intersection with just speed data , meaning that a segment of speed data may match multiple paths .since there are so many possible paths and speed data is ambiguous we must score paths according to some error metric and seek out the final path with the least error .however , finding the optimal path from an exponentially growing search space is not easy .simple solutions , such as greedily following a single path , may end in failure without even finding any solution ; for instance when the path chosen by a greedy algorithm suddenly dead ends . later in the paperwe discuss several other approaches that do not work .this paper makes two major contributions . 1 .we present a novel algorithm _ elastic pathing _ to extract location traces from speedometer data and starting locations .applying this algorithm and to real - world traces , we show how the data collected by many insurance companies is not privacy preserving despite their claims ._ to best of our knowledge , our work is the first to demonstrate how people s locations can be discovered based only on timestamped speed data and knowledge of a single starting location . _ we will also show that even without predicting path endpoints with 100% accuracy long - term data collection allows an antagonist to eventually identify physical regions that are frequently visited .the combination of our algorithm with analysis of long - term driving traces can obtain private information from what is currently considered privacy - preserving data . as minor contributions ,we have collected real - world traces over several months that serve as a testing set for our approach , we offer suggestions for future work that may improve path prediction further , and provide a discussion of privacy - preserving alternatives to the collection of speedometer data that do not require cryptography to implement .in several countries , including countries in north america , europe and japan , some insurance companies have introduced `` usage - based '' insurance programs . 
in addition to basing insurance premiums on the characteristics of the car and history and knowledge about the drivers , usage - based insurance programs also take into account how and when the car is driven .the incentive to participate by insurance policy holders is a potential discount to their premiums for good driving behavior .in addition to providing potential premium discounts , some companies also provide their customers statistical data , such as overviews of driving habits and information about how much they drive per day or per week . in the pastmost of these programs used to track drivers locations with gps , but currently there are several programs that claim to be privacy preserving because they use other kinds of data . in this paper, we focus on a more recent type of a program , which is enabled by collecting speedometer among other data directly from the car .the idea is that by not collecting gps data and instead collecting other driving information , such as speedometer data , these approaches are more privacy - preserving since the insurance companies are not collecting drivers locations .there are several insurance companies in the us that use this method .for example , the snapshot device provided by progressive and the drivewise device provided by allstate will record vehicle speed data while not collecting gps locations . the data in these programs is collected by connecting a device ( provided by the insurance company ) to the car diagnostic port obd - ii . since 1996 , obd - ii has been made mandatory for all cars sold in the united states .the european union has also made the european equivalent eobd mandatory for both petrol and diesel engined cars in europe .this standard interface allows for collecting comprehensive amounts of diagnostic information .the data that is collected varies with each insurance company , but in the cases discussed above , it includes time and speedometer data .the devices will regularly upload this data to the insurance company s servers via a mobile data link .the driving habits that some insurance companies state that they are interested in include `` how often you make hard brakes , how many miles you drive each day and how often you drive between midnight and 4 a.m. '' there is also a very important subtlety here . for example , progressive claims they do not collect the data about when people are speeding ( which some people in the public interpret as meaning that speed data is not collected at all ) .instead , they _ do _ collect speed data , but it follows from the logic that since they do not know your location , they do not know if are you speeding or not . for our analysis of the privacy concerns of these programs we need to know the sampling rate of the data collection devices , but the companies do not disclose on their web sites how often do they sample e.g. speedometer data .however , e.g. allstate includes the following information `` hard braking events are recorded when your vehicle decelerates more than 8 mph in one second ( 11.7 ) [ and ] extreme braking events are recorded when your vehicle decelerates more than 10 mph in one second ( 14.6 ) . ''we also know that companies will sample data much faster than they need to . 
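as a concrete illustration of the events the quoted thresholds describe, the sketch below scans a timestamped speed trace for hard-braking (a drop of more than 8 mph within about one second) and extreme-braking (more than 10 mph) events. this is our own illustrative code, not any insurer's implementation, and the trace format of (seconds, mph) samples is an assumption.

```python
def braking_events(trace, hard_mph=8.0, extreme_mph=10.0, window_s=1.0):
    """find hard/extreme braking events in a trace of (time_s, speed_mph) samples.

    an event is flagged when the speed drops by more than the threshold over
    a window of roughly one second, mirroring the thresholds quoted above.
    """
    events = []
    j = 0
    for t_i, v_i in trace:
        # advance j until trace[j] is at least window_s seconds after t_i
        while j < len(trace) and trace[j][0] - t_i < window_s:
            j += 1
        if j >= len(trace):
            break
        drop = v_i - trace[j][1]
        if drop > extreme_mph:
            events.append((t_i, drop, "extreme"))
        elif drop > hard_mph:
            events.append((t_i, drop, "hard"))
    return events

# a 2 hz trace of a stop from 30 mph; the sharp deceleration is flagged
trace = [(0.0, 30.0), (0.5, 28.0), (1.0, 22.0), (1.5, 12.0), (2.0, 2.0), (2.5, 0.0)]
print(braking_events(trace))
```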
in the united states federal vehicular safety regulations mandate that vehicles traveling at 20 mph must be able to brake to a full stop in 20 feet at a deceleration rate of 21 feet / second .since we know what kind of events the companies are interested in and the maximum performance of vehicles on the road , we can simply estimate the bounds on sampling rates based upon the amount of information required to detect these events .the events that are to be detected occur over a time interval of just one second ( ) . using the nyquist sampling theorem , we know that we must sample at twice this rate to detect these features , a rate of two samples per second .this is a lower bound on any feature detection , and companies may sample at significantly higher rates to detect other features .consider that a driver that tailgates very closely will often tap their own brakes for very short time durations . if insurance companies wished to detect these events they would need to sample at a rate that was twice the speed of these brief taps . to summarize the privacy issues ,the insurance companies using speedometer based measurements are claiming that they can not find people s locations based on their collected data .our work shows this is not true. we will show in further sections that a lot private location information is indeed obtainable , contrary to the claim of 100% privacy protection .further , by finding out correct paths on the roads , the insurance companies could also deduce subtle issues such as were the drivers speeding , despite their claims to the contrary .we will show that our elastic pathing algorithm is able to do more than just identify the endpoint of a trip it also eliminates a large number of locations where the trip could not have ended .this is also a serious privacy concern because it allows to identify changes in regular routines very easily .for example , if a person usually drives for groceries after work but the predicted path goes in a different direction , then it is highly likely the driver is breaking their routine .indeed , previous research has shown that location ( derived from gps data ) can be automatically analyzed to produce profiles of driver s behavior , social activities and work activities . a comprehensive discussion on the the consequences of breaches to location privacy are given by blumberg and eckersley .they discuss how location databases can be used by others to ask questions such as `` did you go to an anti - war rally on tuesday ? '' , `` did you see an aids counselor ? '' , `` have you been checking into a motel at lunchtimes ? '' , `` why was your secretary with you ? '' , or `` which church do you attend ?which mosque ? which gay bars ? ''a lot of work has concentrated on anonymizing or obfuscating location traces because location traces contain a great deal of behavioral information that people consider private .krumm has written an overview of computational location privacy techniques , and zang and bolot have recently questioned the possibility of releasing privacy - preserving cell phone records while still maintaining research utility in those records . accordingly , predicting future mobility patterns and paths from human mobility traces is a well - explored topic , including the following results .based on analysis of location data , such as mobile phone cell tower locations , researchers have shown 1 ) the nature of individual mobility patterns is bounded , and that humans visit only a few locations most of the time ( e.g. 
, just two ) ; 2 ) that there is high level of predictability for future and current locations ( e.g. most trivially , `` 70% of the time the most visited location coincides with the user s actual location '' ) ; and 3 ) that mobility patterns can also be used to predict social network ties ( e.g , ) . in a more applied domain , gps traces have been used to learn transportation modes ( e.g. walking , in a bus , or driving ) , for predicting turns , route , and destination , for predicting family routines ( e.g. picking up or dropping of children to activities ) , probabilistic models describing when people are home or away , and recommending friends and locations . however, none of the above work can be used to discover locations based solely on speedometer measurements and a user s starting location .related work in the field of dead reckoning does suggest that speedometer traces should have some level of information that can be extracted .dead reckoning works by using speed or movement data and a known location to deduce a movement path .dead reckoning has typically been used for map building with mobile robots or as an addition to global navigation satellite systems ( gnss ) such as gps . when supplementing gnss data , odometer data for dead reckoning might only be used when gnss information is unavailable , such as when a vehicle passes through a tunnel or an area with many tall buildings . however , this kind of dead reckoning can not work without frequent location ground truths as there is no perfect method to match speed data to turns in a road .map building with robots is interesting because there are often no exact ground truths , only location estimates .golfarelli et al . describe a method that can assemble maps from arbitrary distance estimates to and from landmarks .the problem of deducing a traveled path from only speed data and a starting point is , in some ways , the reverse of this map building problem : we already have the map , but have difficulty identifying landmarks ( in this case turns ) from just speed data .ubiquitous computing devices help people during their daily lives in different contexts , for instance by monitoring our activity and fitness levels or interesting events social that we have attended .however , the ubiquitous nature of these monitoring devices has security and privacy ramifications that are not always considered .for example , a lack of sufficient protection in the nike+ipod sport kit allowed other people to wirelessly monitor and track users during their daily activities .although these kinds of devices clearly appeal to consumers , the potential loss of privacy is a growing concern . 
in the specific area of usage - based insurance ,the privacy concerns of these insurance programs have been studied before by troncoso et al .however , this work dates back to schemes which would send raw location ( gps ) coordinates to either insurance providers or brokers , and troncoso et al .proposed a cryptographic scheme pripayd to address the problem .our work shows that speedometer - based solutions , which were not considered by troncoso et al ., are not privacy - preserving either .finally , most closely related to our work are side - channel attacks that use accelerometer data from smartphones .projects such as accomplice and autowitness have used accelerometers and gyroscopes for localization of drivers .however , the information from the smartphone can be used to actually detect when turns occur .in contrast , we have only a time series of speed data available .while this speed data might indicate when a vehicle stops or slows down at an intersection , unlike accelerometer data , it does not indicate if any turn is taken .to test the hypothesis that a driving route can be reconstructed from a known starting point and a speed trace we needed to collect a large body of testing data .we require both a ground truth of gps data and a sample set of speed data .this data can be collected with the approach that the insurance companies use , by connecting a device to the onboard diagnostics standard obd - ii connector .devices connected to the car this way will receive accurate information about the car s speed directly from the vehicle s computer and can pair these speed values with gps information to provide an accurate ground truth . to log this information we used a cell phone with gps and a bluetooth enabled odb - ii diagnostic device .we have also logged a much higher volume of gps - only data , from which we can reconstruct the speed trace . using the speedometer data from the odb - ii deviceis straightforward because raw speed data is immediately available .however , collecting data using the odb - ii device required obtaining the devices and was more cumbersome ; in the rush of daily activities it was easier to use just gps data gathered from a smartphone .this data requires a small amount of processing to obtain speed values from latitude / longitude pairs .this was a two step process , as outlined below .first , we used the haversine formula ( see appendix ) to approximate distances from the raw gps coordinates .next we simply divide this value by the time interval ( in fractions of an hour ) to get an instantaneous speed value for each time interval . 
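a minimal version of this gps-to-speed preprocessing step might look like the following; the earth-radius constant, the (time, lat, lon) input format and the mph output unit are assumptions made for illustration.

```python
import math

EARTH_RADIUS_MILES = 3958.8  # mean earth radius; an assumed constant

def haversine_miles(lat1, lon1, lat2, lon2):
    """great-circle distance between two lat/lon points, in miles."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * math.asin(math.sqrt(a))

def gps_to_speed_trace(points):
    """convert a list of (time_s, lat, lon) samples to (time_s, speed_mph) tuples.

    the speed for each interval is the haversine distance divided by the
    elapsed time expressed as a fraction of an hour.
    """
    trace = []
    for (t0, lat0, lon0), (t1, lat1, lon1) in zip(points, points[1:]):
        miles = haversine_miles(lat0, lon0, lat1, lon1)
        hours = (t1 - t0) / 3600.0
        if hours > 0:
            trace.append((t1, miles / hours))
    return trace
```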
each speed valueis then given a time stamp and converted into the same format as the speedometer traces .we note that we did not see differences in the accuracy of our algorithm between the two approaches of data collection .we had seven volunteers collecting data .five volunteers collected gps - only data over the period of several months , and two volunteers gathered both gps and speedometer data with an odb - ii device over a period of one month .there were 240 data traces that we used for pathing , totaling nearly 1200 miles ( nearly 2000 km ) of driving data collected , with a median trip length of 4.65 miles ( 7.5 km ) , minimum trip distance of 0.38 miles ( 0.62 km ) , and maximum trip length of 9.96 miles ( 16.0 km ) .these traces comprise 46 unique destinations , with more than half of these destinations visited multiple times by individual drivers .we believe this represents a diverse testing set ; if our algorithm works well on this data , with all of its variety ( e.g. the differences between driver s habits ) , then it should perform similarly in many other driving scenarios .in addition to the differences between driving habits of participating drivers , there are also differences in their vehicles. examples of vehicles we used in the study are small sedans , sports utility vehicles , and a pickup truck .we feel that this represents a good sample of driving habits and vehicle types , so a pathing algorithm that works properly on all of these traces will work correctly across the driving public as a whole . additionally , the insurance companies know the make and model of the vehicle they are insuring and might be able to use additional information , such as the turning radius of particular vehicles , that we do not use. map data was collected from the openstreetmap ( osm ) project using their open rest api .we parsed this data into node adjacency lists , turn restrictions , and other road characteristics ( e.g. speed , directionality ) and stored the data in a local sql database for rapid querying .the open street map data is divisible into two main types : _ nodes _ and _ ways_. nodes are numbered latitude / longitude pairs .they may document significant map features like bends in roads or intersections .nodes may consist of multiple attributes but they always include a latitude , longitude and a unique node i d .ways are collections of node ids and some context tags that identify features of the road that the group of nodes describe .when ways cross the nodes at the intersection belong to both ways , indicating that a turn is possible .turn restrictions are indicated by the way - ness of the road ( one - way or two - way ) and additional meta - information , such as `` no u - turn '' or `` only right turn '' .in this section , we describe the algorithm we use to recreate a person s driving path given data trace with speed / time tuples .our approach relies upon two features of vehicular travel .first , there are physical limitations to a vehicle s turning radius at high speeds , so a vehicle that travels a particular path must travel at speed at or below the maximum speed possible .second , people will only stop when they need to , e.g. 
at traffic lights and stop signs , until they reach their final destination .this second assumption is not always true ; a person may need to stop because of actions of other vehicles , road construction , accidents , etc .however , when we continually observe a driver only a small number of traces will have extraordinary events .we use data from open street maps ( osm ) to identify possible routes and locations of turns and we find the route that best fits the given speed data .osm data is in the form of nodes that are associated with different roads so a path will be a sequence of nodes .each node has a physical location , given by a latitude and longitude , allowing us to determine distances between segments of road and the turning angle of a path of nodes .each sample from the data trace corresponds to some amount of distance traveled from one node to the next along a path .the osm data provides adjacency data for each node , so once a node is reached , we must choose which node to begin moving towards .for instance , a four - way intersection will have a single node at the intersection and four adjacent nodes .since we reach the intersection from one direction we have three possible next nodes from which to choose .any pathing algorithm must follow some basic steps .first , as discussed previously and illustrated in figure [ fig : path_growth ] , the growth of paths is too great to explore all possible paths .therefore , any algorithm must have some methods of comparing paths and choosing `` better '' paths that more closely match the given speed trace .this also means that the algorithm must keep a list of possible paths , while also either limiting growth of the number of paths or removing paths as they become too plentiful .another problem to overcome is that no path perfectly matches the speed data , because drivers swerve around objects in the road , or take turns more widely or sharply than we expect .increasingly the estimation of a vehicle s progress along a path will become incorrect and mismatches might make even the correct path seem impossible .for example , if the progress along a segment of road is ahead of the vehicle s actual position then the vehicle might be going too quickly to make a turn , but if we somehow corrected the distance traveled to account for some of these distance estimation errors then we would find that the speed traces lines up perfectly with the turn .thus , any algorithm must also correct paths as it explores them , trying to take into account these variations in the travel distance. one could think of many ways to attempt this pathing , and we thought of and explored several approaches that do not work before finding an approach that gave surprisingly good results to be harmful for people s privacy . before describing our novel approach, we will first document the failed approaches to provide more background information to the community .if deducing a driving route from speed data and a starting location was easy then insurance companies would not claim that their data collection was privacy preserving .although such path reconstruction may seem simple in concept , it is actually very difficult .the first major difficulty is detecting turns with only speed data .when someone breaks and then accelerates , did they make a turn or did they slow down because the person in front of them was turning ? if a person comes to a complete stop and then accelerates , did they go straight or turn? 
when a person stops at an intersection are they the car closest to the stop light , or are they behind ten other cars ?the speed data alone does not give enough information , and a single incorrectly identified turn will result in an error possibly as large as the entire trip distance .even if a turn is identified from the speed data , if the predicted position of the car is off by a few meters then the wrong intersecting road may be chosen as the destination of a turn . in principle , turns could be predicted probabilistically based upon past driver behavior . however , we assume that we do not already know information about an individual driver , so we can not build a model of their specific driving. we could consider using a model developed using many other drivers in the same area , but that has the disadvantage of predicting the most common turns and paths rather than the unique path of an individual .estimating that a person probably shops at a supermarket near them or that they probably work at the largest employer in their town is not necessarily a considerable breach of their privacy .instead , we identify specific locations that a particular individual visits , and are especially interested in those places that are unique to the individual and not common to many people .since the heart of this pathing problem is the lack of turn information one approach is to treat this as a classification problem . here, we examine the speed traces to extract features : stop , right turn , left turn , or straight road .if classification would work well , we can assign probabilities to different choices at each intersection and eventually find the highest probability path . however , road conditions and driving habits are very diverse . with our dataset we were unable to classify turn features with high enough accuracy to build correct paths .the sheer volume of paths to explore meant that unless the correct path was extremely different from all of the incorrect paths , the algorithm would find some incorrect paths that looked just as good , or better , than the correct path . another approach we tried was to switch from a probabilistic model to a model where we identified time windows where turns could occur and did not follow any paths that would make turns outside of these windows .we expected this to reduce the number of paths we needed to explore , perhaps leaving just a few paths at completion . however , because of the accumulation of distance estimation errors , turns in the speed data do not align perfectly with the map data so we need to widen the time intervals where turns were allowed . as the time windows widen the number of accepted paths also increased , until we had too many possible paths to distinguish a single best fit .we also tried a dynamic programming approach , keeping track of the score of the best possible path at each node of the ways in the osm data . 
however , there is a subtle flaw in this approach .whatever error metric is used for each path , that error will slowly increase as the path grows in size due to the slight discrepancies in estimated and real travel distance .if there are two possible paths to reach a node then this dynamic programming approach will tend to favor the shorter path that reaches that node .however , if we explored the path further we might find that the longer path with a worse score at the previous node better matches the road going forward but with this dynamic programming approach we would have already dropped that path because it had the worse score .this error would be unrecoverable .the approach that works must keep track of the best path in terms of time ; if two paths have the same error score but one path has traveled through more of the speed samples then that path is better than the other ( up to that point ) because it is accumulating less error per unit time .next , we describe our successful approach we call_ elastic pathing_. this algorithm is based around the observation that as we attempt to match the speed data to a path we stretch or compress the predicted distance moved .however , after reconciling differences between a section of road and the speed trace we must `` pin '' the path at that point ( which we call a landmark ) because any movement would break apart the newly reconciled speed / time trace and distance data . for instance , if the speed data goes to 0 indicating a stop where there is no intersection we might pull the path forward by some distance to reach an intersection .after this has been done the path from this point and earlier can not be moved since it aligns with a feature in the road .we call this approach _ elastic pathing _ because of the stretching and compressing of the speed trace to fit the road is conceptually similar to stretching a piece of elastic along a path while pinning it into place at different points . to help understanding the algorithm, we introduce a small set of definitions .* _ calculated distance _ the distance a vehicle traveled calculated from the speed and time values in the speed trace . * _ predicted distance _ the distance along a possible route on the road at a certain time in the speed trace . * _ error _ the difference between calculated distance and predicted distance of a possible route . * _ feature _ a vehicle stop in the speed trace or an intersection in the road . * _ landmark _ a place where the speed trace and road data match , but would become unmatched if any stretching or compression of the predicted distance moved occurs .this includes the previously mentioned feature , but also any other place where a mismatch between speed and road data can occur .we can compute the fit of a path by the amount of stretching and compressing that needed to be done to make the speed data match the path . the more stretching and compressing along a path the worse its score . at each iteration of the algorithmwe sort all of the partial paths by their current scores and then explore the path with the best score . 
that path advances until it reaches a feature that requires a "pin" operation. at this point there may be multiple ways to advance the path, so several new paths may be created. each new path's score is adjusted to reflect the stretching or compression of the distance traveled from the last pinned landmark, and the algorithm proceeds to the next iteration. the algorithm always follows the path with the best score at each iteration, so the first path to finish cannot be worse than any other path. the pseudo-code for this algorithm is given in algorithm [alg:pathing].

the most complicated operation in the _elasticpathing_ algorithm is the _gotobranch_ function, whose pseudo-code appears in algorithm [alg:gotobranch]. the _gotobranch_ function advances a path until it comes to a feature on the map and returns every possible branching from that feature. there are two features: an intersection in the road of the path and a zero-speed section of the speed samples. when a path reaches an intersection it explores every possible direction. if the path is going at a speed where it can move along the curve in the road, then a landmark is set at that location with the _pin_ function. if the curve in the road is too great for the current speed, then the distance moved from the last landmark is compressed with the _compressa_ function and stretched with the _stretcha_ function. thus, when there is a mismatch between the speed samples and the road data, there are two ways to resolve it and thus two new possible paths. a similar situation occurs when the speed data indicates that the vehicle has come to a stop. if the path is already at an intersection, then the landmark is set with the _pin_ function. otherwise the two solutions are found with the _compressb_ and _stretchb_ functions. rather than compressing or stretching the speed trace as in the previous compress and stretch functions, these functions compress or stretch the route data from the last landmark until an intersection is found to match the zero-speed segment. after any landmarks are set, the ratio of stretching or compression from the last landmark increases the error of each path. this means that a preexisting path may have less error than the current paths and should be explored instead, so all possible branches are returned to the elastic pathing function, the paths are placed in sorted order by their error, and pathing continues with the new best path.

there are several details that we do not show in the pseudo-code. for instance, when we check whether we are at an intersection, we allow space for the number of lanes in the intersecting road and offset for another car in front of our vehicle. determining the maximum speed of a turn is done by assuming the maximum allowed incline in the road (7%), the turn radius of the road based upon the number of lanes in the road and typical lane widths, and the turn angle equation.

[figure: the best possible case, a path from a shopping store to a home with no error. the ground truth is indicated with a solid line while the nodes along the predicted path are shown as dots; in this case the predicted and actual paths match perfectly.]
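the overall control flow described above, a best-first search over partial paths scored by how much the speed trace has to be stretched or compressed, can be sketched as follows. this is a simplified reconstruction from the prose, not the authors' ruby implementation: `advance_to_next_feature` stands in for the gotobranch/pin/compress/stretch logic, and the cornering-speed helper uses assumed superelevation and side-friction values rather than the paper's exact turn-angle equation.

```python
import heapq
import itertools
import math

G = 9.81  # m/s^2

def max_turn_speed(radius_m, superelevation=0.07, side_friction=0.15):
    """rough upper bound on cornering speed (m/s) for a turn of a given radius,
    using the banked-curve relation v^2 = g * r * (e + f); e and f are assumptions."""
    return math.sqrt(G * radius_m * (superelevation + side_friction))

def elastic_pathing(start_node, speed_trace, advance_to_next_feature):
    """best-first search over partial paths.

    advance_to_next_feature(path, speed_trace) plays the role of gotobranch:
    it moves the path forward to the next intersection or zero-speed feature
    and returns the list of branched paths, each with an updated
    stretch/compress error and an updated index into the speed trace.
    """
    counter = itertools.count()          # tie-breaker so heapq never compares path dicts
    initial = {"nodes": [start_node], "error": 0.0, "trace_index": 0}
    frontier = [(0.0, next(counter), initial)]

    while frontier:
        _, _, path = heapq.heappop(frontier)
        if path["trace_index"] >= len(speed_trace):
            return path                   # the first completed path has the best error
        for branch in advance_to_next_feature(path, speed_trace):
            heapq.heappush(frontier, (branch["error"], next(counter), branch))
    return None                           # no path consumed the whole trace
```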
we generated results for 240 traces comprising 46 unique destinations of 7 different individuals. of these destinations, 28 were visited more than once and 10 locations were visited more than 5 times. the median trip length was 4.65 miles (7.5 km), with a minimum trip distance of 0.38 miles (0.62 km) and a maximum trip length of 9.96 miles (16.0 km). the 2009 national household travel survey (nhts) found the average drive distance is 9.62 miles on weekdays and 10.03 miles on weekends, so our drivers had short, but not unreasonable, commutes. traces were taken at many different times of day and in varying road conditions over several months, and we believe that these traces are representative of common driving behavior.

a ruby implementation of the elastic-pathing algorithm processed all of the traces in under 30 minutes on a 2-core 2.2 ghz machine, with an average running time of less than ten seconds per trace. even without an implementation in a high-performance language, the algorithm is able to process traces far faster than they are generated. since data analysis could essentially be done in real time as a vehicle's speed trace is being collected, the limiting factor on the time delay between data recording and path prediction is probably the overhead of collecting, transmitting, and preprocessing (e.g. matching a trace to a specific driver and their known locations) speed traces. this means that it may be entirely feasible to do near real-time tracking of drivers with just speed data, although the tracking will not be 100% accurate.

[figure: the cdf of the distance from the pathing algorithm's endpoint compared to the ground truth's endpoint. in 10% of traces the algorithm found an endpoint less than 250 meters from its actual location, and 30% of the traces were within 800 meters of the actual end point. with thousands of driving traces gathered per individual per year, this gives an insurance company a very detailed view of an individual's lifestyle and habits. results did vary with the individual; the top 30% of one driver's traces were all within 500 meters, while the same percentage were only within 1300 meters for another driver.]

the cdf of errors is shown in figure [fig:cdf]. in 10% of the traces, the pathing algorithm found an endpoint that was within 250 meters of the actual endpoint, and 30% of the traces were within 800 meters of the actual endpoint. this distance is easily accurate enough to tell when a person goes home and what areas a person drives through, and can identify many destinations of a driver. since the size of a parking lot at a movie theater or shopping center is comparable to these error values, this level of detail is enough to gain an in-depth view of an individual's lifestyle and habits. when we consider that individuals often take multiple trips to the same locations, it is clear that, over time, every location that is commonly visited will be identified.

[figure: the average error of endpoints relative to trip distance. this figure shows that there is no noticeable decrease in the performance of the elastic pathing algorithm with trip distance. manual analysis of our algorithm's performance shows that prediction accuracy is more strongly influenced by the route taken and traffic encountered than by the trip distance. the expected error from guessing is simply the distance from the starting point to a random location along the circumference of a circle centered at the starting point and with radius equal to the trip distance.]
the relative inaccuracy of our approach does not go up with trip distance, as shown in figure [fig:relative_error]. the percentage of predicted endpoints within 250 meters of the actual endpoint also does not decrease with distance in our data set, with trips as long as 10.5 miles still having endpoints correctly predicted to within 250 meters. our approach clearly does much better than random guessing and almost always correctly identifies at least the general direction of travel. the direction of travel is not a very serious privacy concern on its own, though, so we use another filtering step to screen out "noise" from incorrectly predicted points.

we can further take advantage of the multiple traces available for each driver to screen out points that may be noise. we note that erroneous predictions are unlikely to repeatedly find the same wrong location and instead usually make mistakes in different locations. thus, we can use the frequency with which locations are predicted as a way to remove poor predictions and focus only on common destinations, as shown in figure [fig:endpoint_circles]. the first row of the figure, figure [fig:endpoint_circles](a,b), shows predicted endpoints while the second row, figure [fig:endpoint_circles](c,d), shows ground truths. the first column, figure [fig:endpoint_circles](a,c), shows all endpoints, while the second column, figure [fig:endpoint_circles](b,d), shows only the endpoints that occurred with high frequency in the traces. although the initial endpoint predictions (figure [fig:endpoint_circles]a) are very noisy, filtering the endpoints by frequency in figure [fig:endpoint_circles]b creates a collection of endpoints that is fairly close to the ground truth in figure [fig:endpoint_circles]d.

we also noted that some individuals were more easily tracked than others. although the top 10% of traces from all individuals had a maximum error of 250 meters, for one individual the top 10% had a maximum error of only 164 meters, and the top 30% of that individual's traces still had errors of less than 500 meters. these individuals actually commuted along the same major road and lived in similar areas, so the differing performance of the pathing algorithm on these two individuals more likely stems from differences in their personal driving characteristics and the driving characteristics of their vehicles than from the areas where they drove. the practical meaning of this is that learning the common locations and habits of some individuals might be much easier than for others.

the pathing algorithm is able to do more than just identify the endpoint of a trip: it also eliminates a large number of locations where the trip could not have ended. this is also a serious privacy concern because it allows an antagonist to identify changes in regular routines very easily. for example, if a person usually goes home after work but the predicted path goes in a different direction, then it is highly likely the person is breaking their routine. when the algorithm has enough information and a route is very distinctive, for instance when the spacing between intersections on the road travelled does not match the spacing on any other road, the algorithm can do very well. this is illustrated in figure [fig:best_case]. however, this case is not normal, and at the very end, when the algorithm chooses the correct left turn, it may just as well have chosen to turn right or gone straight.
however, even if the algorithm made the wrong choice, it would still be close to 100 meters from the actual destination. we believe the probability of error is closely tied to a few factors:

- homogeneity of roads: if every road seems the same (same speed, similar intersection intervals, etc.), especially in areas built on a grid, then the roads cannot be distinguished.
- traces with only slow speeds: without high speeds to rule out turns and constrain paths to a few major roads, the correct path is indistinguishable from any other.
- unpredictable stops from traffic, construction, etc.

the largest barrier to improved performance is distinguishing one road from another. if two roads have similar features (e.g. the same speed and similarly spaced intersections), then there is little way for the algorithm to distinguish between the two. this is also affected by the traffic pattern of the trip; if a vehicle stops at every intersection because of red lights, then there is a great deal of information about the spacing of those intersections. if, however, the vehicle stops at no lights, then there is no information about the spacing of intersections on the current road. with more prior information we may be able to do better; for instance, if we knew the average waiting time at different lights, then the waiting time at an intersection might distinguish one road from another even if a vehicle only stops at one or two lights. this additional information, such as the average wait time at certain lights or the average traffic speeds of different roads during different times of day, may already be available to insurance companies, and it is not difficult to gather. if the algorithm had this information available, it could score different turning choices with higher accuracy, leading to improved results.

the implications of our results are that, given just timestamped speed traces and a starting location, the privacy of a driver will be eroded as more of their speed traces are collected. this holds even though our approach does not yet predict traces and trip endpoints with 100% accuracy. driving habits are often regular and locations are visited multiple times, so an antagonist gets many chances to correctly identify traveled paths. as a person's driving traces are collected and analyzed, a set of possible destinations will be built for that person, and a set of thousands of possible destinations will be reduced to a handful. incorrect endpoint predictions are noisy and unlikely to repeat, so over time the endpoints that are predicted the most are increasingly likely to be correct predictions of a person's common destinations. thus, we believe that the large body of data already collected and still being collected would allow an antagonist to deduce many details about a person's regular routine and irregular events.
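the frequency-based screening described above can be made concrete with a small sketch: bin predicted trip endpoints onto a coarse grid and keep only the bins that recur across many trips. the grid-cell size and the count threshold below are our illustrative choices, not values from the paper.

```python
from collections import Counter

def frequent_endpoints(predicted_endpoints, cell_deg=0.005, min_count=3):
    """keep only endpoint predictions that recur across many trips.

    predicted_endpoints : list of (lat, lon) predictions, one per trip.
    cell_deg            : grid-cell size in degrees (0.005 deg is roughly
                          half a kilometre of latitude).
    min_count           : how many trips must land in a cell before we keep it.
    """
    bins = Counter((round(lat / cell_deg), round(lon / cell_deg))
                   for lat, lon in predicted_endpoints)
    keep = {cell for cell, n in bins.items() if n >= min_count}
    return [(lat, lon) for lat, lon in predicted_endpoints
            if (round(lat / cell_deg), round(lon / cell_deg)) in keep]
```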
starting from a person s home addressan antagonist could identify work locations , commonly visited stores , locations associated with regular hobbies , etc .additionally , any breaks in routines would be very obvious because the details that could be obtained from a speed trace , such as distance traveled , duration of trip , and predicted endpoint of the trip , would all change .it is possible for companies to adjust their sampling rates to reduce the potential for privacy breaches , however , there is no guarantee that someone could not improve upon our algorithm and obtain results better than ours even with a lower sampling rate .perhaps the best way to improve the privacy protection of these systems is to do data processing in the car before sending it to a remote server , so that only the end results are ever transmitted and stored .although it is tempting for a company to capture as much information as possible in case that data becomes useful later , this also presents unnecessary privacy risks since . in the case of auto insurance companies, a speed trace is not vital to this system , as the real goal is the detection of unsafe driving habits , which can be detected with a more privacy preserving set of driving features .there are alternatives to collecting speedometer readings which should have much better privacy protection .in fact , some usage - based insurance programs are already offering more privacy - preserving products .companies such as gmac insurance , milemeter and corona direct offer programs that only require the mileage information from vehicles .payd coverage provided by hollard insurance company collects mileage information and driver style information .companies like paygo systems collect driving zone information instead of fine - grained location , besides basic information including mileage and minutes of use .the best practice for such monitoring systems is to only record relevant data .for example , devices could check speed data but only send a notification when an even of interest was detected . in general , this approach could be applied to other data collection methods , including gps .an example path where the algorithm correctly follows most of a path but predicts the wrong turn when the speed trace runs out of features . ]our main limitation is the difficulty distinguishing between similar roads . when a vehicle stops at an intersection and then accelerates slowly a turn or straight path are both possible , so our algorithm relies upon later features in the speed trace to determine which of the possible routes was correct .at the ends of paths there are simply too few features to make this distinction . 
at the beginning of pathsthis may simply cause there to be too many possible paths to sort through .these problems are exacerbated in residential areas , where repeated , similar intersections are common .we notice that travel through residential areas tends to occur at either ends of paths rather than during the middle , because traveling through a residential area is unlikely to be the fastest route to a destination .fixing mistakes in residential areas by incorporating additional knowledge about the area may greatly improve our algorithm s success since we had little difficulty reconstructing paths along major roads , but tend to make mistakes once the path leaves major roads .an example of this is in figure [ fig : follow_highway ] .there are two ways these errors could be addressed .first , the street addresses of customers are available so the algorithm could check for paths to a person s known home address at the end of predicted paths .second , at the beginnings of paths drivers move from local roads to larger arteries , so the algorithm could also attempt to simply take the shortest paths to major roads if there are too many ambiguous turn choices at the beginning of a path . when accepted for publication , we will open source our work , which contains tools to get the needed map data from open street maps , and implementations of our algorithms .we will also include a subset of the collected traces that has been explicitly authorized to be made publicly available on crawdad or similar archival sites .we have demonstrated an attack against user location privacy that uses speed data from driving traces and an initial starting location .this problem seems simple in concept , but is difficult in execution largely because turn detection with speed data is often ambiguous .for instance , if someone breaks and then accelerates they could have turned or they could have slowed down because the person in front of them was turning .we have presented the design and implementation of a novel elastic pathing algorithm and used a large corpus of real - world traces to show its accuracy .our trials showed that the algorithm is fast , and could also be used for real - time tracking of people just based on their speed data .our results show that with a large corpus of data an entity can errode individuals privacy and gain information about typical routes and destinations . in 20% of individual traceswe predicted the correct trip end point to within 500 meters .this means that a location visited daily can be identified in about a week ; locations visited on a weekly basis could be identified with slightly more than a month of data .variations in daily routines are easier to identify than endpoints .even when the algorithm does not correctly find the proper end point it does correctly follow much of the actual travelled path . if that predicted path breaks from a normal routine , such as going home after work , then a malicious entity may be able to either deduce where the person could have gone with some foreknowledge of the person s habits or take advantage of their altered routine . 
in time, we expect that improvements will be made to our initial approach. the data, once collected, does not go away, and a continually improving path prediction algorithm may plumb more private information from these traces. these traces should be treated as any other private information instead of being handled as already anonymized data. our work is not directed against any company or organization; experiences have shown that even well-intentioned uses of data can result in losses of privacy, so we wish to highlight potential dangers of this type of data collection. usage-based insurance has been introduced in at least 39 states of the united states, and more are approving the approach because it has been claimed to be privacy-preserving. our work shows that speed-based data collection erodes the privacy of policy holders over time. we do not claim that insurance companies are violating policy holder privacy in this way. however, the general principle of any privacy-preserving data collection is to collect only the data that is necessary for a particular application, and no more. this material is based upon work supported by the national science foundation under grant numbers 1228777 and 1211079. any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the national science foundation. m. golfarelli, d. maio, and s. rizzi. elastic correction of dead-reckoning errors in map building. in _ intelligent robots and systems, 1998. proceedings., 1998 ieee / rsj international conference on _, volume 2, pages 905911 vol.2, 1998. e. krakiwsky, c. harris, and r. wong. a kalman filter for integrating dead reckoning, map matching and gps positioning. in _ position location and navigation symposium, 1988. navigation into the 21st century. ieee plans 88., ieee _, pages 3946, 1988. l. zhao, w. y. ochieng, m. a. quddus, and r. b. noland. an extended kalman filter algorithm for integrating gps and low cost dead reckoning system data for vehicle performance and emissions monitoring.
today people increasingly have the opportunity to opt - in to `` usage - based '' automotive insurance programs for reducing insurance premiums . in these programs , participants install devices in their vehicles that monitor their driving behavior , which raises some privacy concerns . some devices collect fine - grained speed data to monitor driving habits . companies that use these devices claim that their approach is privacy - preserving because speedometer measurements do not have physical locations . however , we show that with knowledge of the user s home location , as the insurance companies have , speed data is sufficient to discover driving routes and destinations when trip data is collected over a period of weeks . to demonstrate the real - world applicability of our approach we applied our algorithm , _ elastic pathing _ , to data collected over hundreds of driving trips occurring over several months . with this data and our approach , we were able to predict trip destinations to within 250 meters of ground truth in 10% of the traces and within 500 meters in 20% of the traces . this result , combined with the amount of speed data that is being collected by insurance companies , constitutes a substantial breach of privacy because a person s regular driving pattern can be deduced with repeated examples of the same paths with just a few weeks of monitoring . [ privacy ]
the deployment of large groups of autonomous vehicles is rapidly becoming possible because of technological advances in networking and in miniaturization of electro - mechanical systems . in the near future large numbers of robotswill coordinate their actions through ad - hoc communication networks and will perform challenging tasks including search and recovery operations , manipulation in hazardous environments , exploration , surveillance , and environmental monitoring for pollution detection and estimation .the potential advantages of employing teams of agents are numerous .for instance , certain tasks are difficult , if not impossible , when performed by a single vehicle agent .further , a group of vehicles inherently provides robustness to failures of single agents or communication links .working prototypes of active sensing networks have already been developed ; see . in , launchable miniature mobile robots communicate through a wireless network . the vehicles are equipped with sensors for vibrations , acoustic , magnetic , and ir signals as well as an active video module ( i.e. , the camera or micro - radar is controlled via a pan - tilt unit ) .a second system is suggested in under the name of autonomous oceanographic sampling network ; see also . in this case ,underwater vehicles are envisioned measuring temperature , currents , and other distributed oceanographic signals .the vehicles communicate via an acoustic local area network and coordinate their motion in response to local sensing information and to evolving global data . this mobile sensing network is meant to provide the ability to sample the environment adaptively in space and time . by identifying evolving temperature and current gradients with higher accuracy and resolution than current static sensors ,this technology could lead to the development and validation of improved oceanographic models .a fundamental prototype problem in this paper is that of characterizing and optimizing notions of quality - of - service provided by an adaptive sensor network in a dynamic environment . to this goal ,we introduce a notion of _ sensor coverage _ that formalizes an optimal sensor placement problem .this spatial resource allocation problem is the subject of a discipline called locational optimization .locational optimization problems pervade a broad spectrum of scientific disciplines .biologists rely on locational optimization tools to study how animals share territory and to characterize the behavior of animal groups obeying the following interaction rule : each animal establishes a region of dominance and moves toward its center .locational optimization problems are spatial resource allocation problems ( where to place mailboxes in a city or cache servers on the internet ) and play a central role in quantization and information theory ( the design of a minimum - distortion fixed - rate vector quantizer is a locational problem ) .other technologies affected by locational optimization include mesh and grid optimization methods , clustering analysis , data compression , and statistical pattern recognition .because locational optimization problems are so widely studied , it is not surprising that methods are indeed available to tackle coverage problems ; see . 
however, most currently-available algorithms are not applicable to mobile sensing networks because they inherently assume a centralized computation for a limited-size problem in a known static environment. this is not the case in multi-vehicle networks which, instead, rely on a distributed communication and computation architecture. although an ad-hoc wireless network provides the ability to share some information, no global omniscient leader might be present to coordinate the group. the inherent spatially-distributed nature and limited communication capabilities of a mobile network invalidate classic approaches to algorithm design. in this paper we design coordination algorithms implementable by a multi-vehicle network with limited sensing and communication capabilities. our approach is related to the classic lloyd algorithm from quantization theory; see for a reprint of the original report and for a historical overview. we present lloyd descent algorithms that take into careful consideration all constraints on the mobile sensing network. in particular, we design coverage algorithms that are adaptive, distributed, asynchronous, and verifiably asymptotically correct: adaptive: our coverage algorithms provide the network with the ability to address changing environments, sensing tasks, and network topology (due to agents' departures, arrivals, or failures). distributed: our coverage algorithms are distributed in the sense that the behavior of each vehicle depends only on the location of its neighbors. also, our algorithms do not require a fixed-topology communication graph, i.e., the neighborhood relationships do change as the network evolves. the advantages of distributed algorithms are scalability and robustness. asynchronous: our coverage algorithms are amenable to asynchronous implementation. this means that the algorithms can be implemented in a network composed of agents evolving at different speeds, with different computation and communication capabilities. furthermore, our algorithms do not require global synchronization, and convergence properties are preserved even if information about neighboring vehicles propagates with some delay. an advantage of asynchronism is a minimized communication overhead. verifiably asymptotically correct: our algorithms guarantee monotonic descent of the cost function encoding the sensing task. asymptotically, the evolution of the mobile sensing network is guaranteed to converge to so-called centroidal voronoi configurations that are critical points of the optimal sensor coverage problem. let us describe in some detail the contributions of this paper. section [ sec : review ] reviews certain locational optimization problems and their solutions as centroidal voronoi partitions. section [ sec : coverage - control ] provides a continuous-time version of the classic lloyd algorithm from vector quantization and applies it to the setting of multi-vehicle networks.
in discrete - time, we propose a family of lloyd algorithms .we carefully characterize convergence properties for both continuous and discrete - time versions ( appendix [ sec : appendix ] collects some relevant facts on descent flows ) .we discuss a worst - case optimization problem , we investigate a simple uniform planar setting , and we present numerical results .section [ sec : distributed - asynchronous ] presents two asynchronous distributed implementations of lloyd algorithm for ad - hoc networks with communication and sensing capabilities .our treatment carefully accounts for the constraints imposed by the distributed nature of the vehicle network .we present two asynchronous implementations , one based on classic results on distributed gradient flows , the other based on the structure of the coverage problem .section [ sec : vehicle - dynamics ] considers vehicle models with more realistic dynamics .we present two formal results on passive vehicle dynamics and on vehicles equipped with individual local controllers .we present numerical simulations of passive vehicle models and of unicycle mobile vehicles .next , section [ sec : geometric - patterns ] describes density functions that lead the multi - vehicle network to predetermined geometric patterns .recent years have witnessed a large research effort focused on motion planning and coordination problems for multi - vehicle systems .issues include geometric patterns , formation control , and conflict avoidance .algorithms for robotic sensing tasks are presented for example in .it is only recently , however , that truly distributed coordination laws for dynamic networks are being proposed ; e.g. , see and the conference versions of this work .heuristic approaches to the design of interaction rules and emerging behaviors have been throughly investigated within the literature on behavior - based robotics ; see .an example of coverage control is discussed in . along this line of research ,algorithms have been designed for sophisticated cooperative tasks .however , no formal results are currently available on how to design reactive control laws , ensure their correctness , and guarantee their optimality with respect to an aggregate objective .the study of distributed algorithms is concerned with providing mathematical models , devising precise specifications for their behavior , and formally proving their correctness and complexity . via an automata - theoretic approach , the references treat distributed consensus , resource allocation , communication , and data consistency problems . from a numerical optimization viewpoint , the works in distributed asynchronous algorithms as networking algorithms , rate and flow control , and gradient descent flows .typically , both these sets of references consider networks with fixed topology , and do not address algorithms over ad - hoc dynamically changing networks .another common assumption is that , any time an agent communicates its location , it broadcasts it to every other agent in the network . in our setting, this would require a non - distributed communication set - up .in this section we describe a collection of known facts about a meaningful optimization problem .references include the theory and applications of centroidal voronoi partitions , see , and the discipline of facility location , see . 
along the paper, we interchangeably refer to the elements of the network as sensors , agents , vehicles , or robots .let be a convex polytope in and let denote the euclidean distance function .we call a map a _ distribution density function _ if it represents a measure of information or probability that some event take place over . in equivalent words, we can consider to be the bounded support of the function .let be the _ location of sensors _ , each moving in the space .because of noise and loss of resolution , the _ sensing performance _ at point taken from sensor at the position degrades with the distance between and ; we describe this degradation with a non - decreasing differentiable function .accordingly , provides a quantitative assessment of how poor the sensing performance is . as an example , consider mobile robots equipped with microphones attempting to detect , identify , and localize a sound - source ._ how should we plan to robots motion in order to maximize the detection probability ? _ assuming the source emits a known signal , the optimal detection algorithm is a matched filter ( i.e. , convolve the known waveform with the received signal and threshold ) .the source is detected depending on the signal - to - noise - ratio , which is inversely proportional to the distance between the microphone and the source .various electromagnetic and sound sensors have signal - to - noise ratios inversely proportional to distance . within the context of this paper ,a _ partition _ of is a collection of polytopes with disjoint interiors whose union is .we say that two partitions and are equal if and only differ by a set of -measure zero , for all .we consider the task of minimizing the locational optimization function where we assume that the sensor is responsible for measurements over its `` dominance region '' .note that the function is to be minimized with respect to both ( 1 ) the sensors location , and ( 2 ) the assignment of the dominance regions .this problem is referred to as a facility location problem and in particular as a continuous -median problem in .note that if we interchange the positions of any two agents , along with their associated regions of dominance , the value of the locational optimization function is not affected . to eliminate this discrete redundancy, one could take the discrete group of permutations with the natural action on , and consider as the configuration space for the position of the vehicles .one can easily see that , at fixed sensors location , the optimal partition of is the _ voronoi partition _ generated by the points : we refer to for a comprehensive treatment on voronoi diagrams , and briefly present some relevant concepts .the set of regions is called the voronoi diagram for the generators .when the two voronoi regions and are adjacent , is called a _ ( voronoi ) neighbor _ of ( and vice - versa ) .the set of indexes of the voronoi neighbors of is denoted by . clearly , if and only if .we also define the -face as .voronoi diagrams can be defined with respect to various distance functions , e.g. , the - , - , - , and -norm over , and voronoi diagrams can be defined over riemannian manifolds ; see .some useful facts about the euclidean setting are the following : if is a convex polytope in a -dimensional euclidean space , the boundary of each is the union of -dimensional convex polytopes . 
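the locational optimization function introduced above can be evaluated numerically by discretizing the environment and assigning each grid cell to its nearest sensor, which is exactly the voronoi-optimal partition. the sketch below is our illustration under stated assumptions: a rectangular environment, the quadratic degradation f(x) = x^2 (the case analyzed later in the text), and an arbitrary density passed in as a callable.

```python
import numpy as np

def coverage_cost(positions, phi, xlim=(0, 1), ylim=(0, 1), res=200):
    """approximate sum_i int_{V_i} ||q - p_i||^2 phi(q) dq on a grid.

    positions: (n, 2) array of sensor locations.
    phi:       callable density, phi(x, y) -> weight (need not be normalized).
    """
    xs = np.linspace(*xlim, res)
    ys = np.linspace(*ylim, res)
    X, Y = np.meshgrid(xs, ys)
    q = np.stack([X.ravel(), Y.ravel()], axis=1)                  # grid points
    d2 = ((q[:, None, :] - positions[None, :, :]) ** 2).sum(-1)   # squared distances to each sensor
    nearest = d2.min(axis=1)                                      # voronoi assignment: min_i ||q - p_i||^2
    w = phi(q[:, 0], q[:, 1])
    cell_area = (xs[1] - xs[0]) * (ys[1] - ys[0])
    return float((nearest * w).sum() * cell_area)

if __name__ == "__main__":
    gauss = lambda x, y: np.exp(-((x - 0.7) ** 2 + (y - 0.7) ** 2) / 0.05)
    P = np.array([[0.2, 0.2], [0.8, 0.8]])
    print(coverage_cost(P, gauss))
```

because the minimum over sensors is taken point by point, the grid assignment implements the voronoi partition described above without ever building the cells explicitly.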
inwhat follows , we shall write note that , \nonumber\end{aligned}\ ] ] that is , the locational optimization function can be interpreted as an expected value composed with a min operation . this is the usual way in which the problem is presented in the facility location and operations research literature .remarkably , one can show that and deduce some smoothness properties of .since the voronoi partition depends at least continuously on , the function is at least continuously differentiable .let us recall some basic quantities associated to a region and a mass density function .the ( generalized ) mass , centroid ( or center of mass ) , and polar moment of inertia are defined as additionally , by the parallel axis theorem , one can write , where is defined as the polar moment of inertia of the region about its centroid .let us consider again the locational optimization problem , and suppose now we are strictly interested in the setting that is , we assume . applying the parallel axis theorem leads to simplifications for both the function and its partial derivative : it is convenient to define and . therefore , the ( not necessarily unique ) local minimum points for the location optimization function are _ centroids _ of their voronoi cells , i.e. , each location satisfies two properties simultaneously : it is the generator for the voronoi cell and it is its centroid accordingly , the critical partitions and points for are called _ centroidal voronoi partitions_. we will refer to a sensors configuration as a _ centroidal voronoi configuration _ if it gives rise to a centroidal voronoi partition .this discussion provides a proof alternative to the one given in for the necessity of centroidal voronoi partitions as solutions to the continuous -median location problem .in this section , we describe algorithms to compute the location of sensors that minimize the cost , both in continuous and in discrete - time . in section [ se : continuous - lloyd ] , we propose a continuous - time version of the classic lloyd algorithm . here , both the positions and partitions evolve in continuous time , whereas lloyd algorithm for vector quantization is designed in discrete time . in section [ se : discrete - lloyd ] , we develop a family of variations of lloyd algorithm in discrete time . in both setting , we prove that the proposed algorithms are _ gradient descent flows_. assume the sensors location obeys a first order dynamical behavior described by consider a cost function to be minimized and impose that the location follows a gradient descent . in equivalent control theoretical terms , consider a lyapunov function and stabilize the multi - vehicle system to one of its local minima via dissipative control .formally , we set where is a positive gain , and where we assume that the partition is continuously updated . for the closed - loop system induced by equation , the sensors location converges asymptotically to the set of critical points of , i.e. , the set of centroidal voronoi configurations on . assuming this set is finite , the sensors location converges to a centroidal voronoi configuration . under the control law, we have by lasalle s principle , the sensors location converges to the largest invariant set contained in , which is precisely the set of centroidal voronoi configurations .since this set is clearly invariant for , we get the stated result .if consists of a finite collection of points , then converges to one of them , see corollary [ corollary : lasalle - finite ] . 
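the continuous-time lloyd flow described here, in which each sensor moves toward the centroid of its voronoi cell at a rate set by a positive gain, can be prototyped in a few lines. the sketch below is ours and not the authors' mathematica implementation: it approximates the weighted centroids on a grid over a unit-square environment and integrates the flow with a forward euler step; the density, gain, and step size are illustrative choices.

```python
import numpy as np

def weighted_centroids(positions, phi_grid, X, Y):
    """centroid of each sensor's voronoi cell, weighted by the density phi_grid."""
    q = np.stack([X.ravel(), Y.ravel()], axis=1)
    w = phi_grid.ravel()
    owner = ((q[:, None, :] - positions[None, :, :]) ** 2).sum(-1).argmin(axis=1)
    cents = positions.copy()
    for i in range(len(positions)):
        m = owner == i
        if w[m].sum() > 0:
            cents[i] = (q[m] * w[m, None]).sum(0) / w[m].sum()
    return cents

def lloyd_flow(positions, phi, steps=200, dt=0.05, k=1.0, res=100):
    xs = ys = np.linspace(0.0, 1.0, res)
    X, Y = np.meshgrid(xs, ys)
    phi_grid = phi(X, Y)
    p = positions.astype(float).copy()
    for _ in range(steps):
        C = weighted_centroids(p, phi_grid, X, Y)
        p += dt * k * (C - p)        # euler step of: p_i moves toward its cell centroid
    return p

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    phi = lambda x, y: np.exp(-((x - 0.7) ** 2 + (y - 0.7) ** 2) / 0.05)
    print(lloyd_flow(rng.random((5, 2)), phi))
```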
if is finite , and , then a sufficient condition that guarantees exponential convergence is that the hessian of be positive definite at .this property is known to be an open problem , see .note that this gradient descent is not guaranteed to find the global minimum .for example , in the vector quantization and signal processing literature , it is known that for bimodal distribution density functions , the solution to the gradient flow reaches local minima where the number of generators allocated to the two region of maxima are not optimally partitioned .let us consider the following class of variations of lloyd algorithm .let be a continuous mapping verifying the following two properties : \(a ) for all , , where denotes the component of , \(b ) if is not centroidal , then there exists a such that .property ( a ) guarantees that , if moving , the agents of the network do not increase their distance to its corresponding centroid .property ( b ) ensures that at least one robot moves at each iteration and strictly approaches the centroid of its voronoi region .because of this property , the fixed points of are the set of centroidal voronoi configurations . [ prop : discrete - lloyd ] let denote the initial sensors location .then , the sequence converges to the set of centroidal voronoi configurations . if this set if finite , then converges to a centroidal voronoi configuration .consider as an objective function for the algorithm .note that with strict inequality if .moreover , the parallel axis theorem guarantees as long as for all , with strict inequality if for any , .in particular , , with strict inequality if , where denotes the set of centroids of the partition .now , we have because of .in addition , because of property ( a ) of , inequality yields and the inequality is strict if is not centroidal by property ( b ) of .hence , is a descent function for the algorithm .the result now follows from the global convergence theorem [ lemma : discrete - lasalle ] and proposition [ prop : discrete - lasalle - surprising ] .lloyd algorithm in quantization theory is usually presented as follows : given the location of agents , , ( i ) construct the voronoi partition corresponding to ; ( ii ) compute the mass centroids of the voronoi regions found in step ( i ) .set the new location of the agents to these centroids ; and return to step ( i ) .lloyd algorithm can also be seen as a fixed point iteration .consider the mappings for let be defined by .clearly , is continuous ( indeed , ) , and corresponds to lloyd algorithm .now , , for all . moreover , if is not centroidal , then the inequality is strict for all .therefore , verifies properties ( a ) and ( b ) .different sensor performance functions in equation correspond to different optimization problems .provided one uses the euclidean distance in the definition of , the standard voronoi partition computed with respect to the euclidean metric remains the optimal partition . 
for arbitrary ,it is not possible anymore to decompose into the sum of terms similar to and .nevertheless , it is still possible to implement the gradient flow via the expression for the partial derivative .assume the sensors location obeys a first order dynamical behavior , .then , for the closed - loop system induced by the gradient law , , the sensors location converges asymptotically to the set of critical points of .assuming this set is finite , the sensors location converges to a critical point .more generally , various distance notions can be used to define locational optimization functions .different performance function gives rise to corresponding notions of `` center of a region '' ( any notion of geometric center , mean , or average is an interesting candidate ) .these can then be adopted in designing coverage algorithms .we refer to for a discussion on voronoi partitions based on non - euclidean distance functions and to for a discussion on the corresponding locational optimization problems .next , let us discuss an interesting variation of the original problem . in , minimizing the expected minimum distance function in equationis referred to as the _ continuous -median problem_. it is instructive to consider the worst - case minimum distance function , corresponding to the scenario where no information is available on the distribution density function . in other words ,the network seeks to minimize the largest possible distance from any point in to any of the sensor locations , i.e. , to minimize the function = \max_{i\in\{1,\ldots , n\ } } \left [ \max_{q\in v_i } \|q - p_i\| \right ] \ , .\end{aligned}\ ] ] this optimization is referred to as the _ -center problem _ in .one can design a strategy for the -center problem analog to the lloyd algorithm for the -median problem : each vehicle moves , in continuous or discrete - time , toward the center of the minimum - radius sphere enclosing the polytope . to the best of our knowledge, no convergence proof is available in the literature for this algorithm ; e.g. , see .we refer to for a convergence analysis of the continuous and discrete time algorithms . in what follows , we shall restrict our attention to the -median problem and to centroidal voronoi partitions . in this section, we investigate closed - form expression for the control laws introduced above. assume the voronoi region is a convex polygon ( i.e. , a polytope in ) with vertexes labeled such as in figure [ fig : polygon ] .it is convenient to define .furthermore , we assume that the density function is . 
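before returning to the closed-form centroid computations below, the worst-case (p-center) variant discussed in this passage can also be sketched: each vehicle heads toward the center of the smallest ball enclosing its cell. the snippet approximates that center by brute force over the cell's sample points; it is a rough illustration we added, not the algorithm analyzed in the cited work.

```python
import numpy as np

def approx_ball_center(cell_points, candidates=None):
    """approximate center of the minimum-radius ball enclosing cell_points.

    cell_points: (m, 2) array of points belonging to one voronoi cell.
    candidates:  optional (c, 2) array of trial centers; defaults to the cell
                 points themselves, which is adequate for a rough sketch.
    """
    if candidates is None:
        candidates = cell_points
    # for each candidate center, the worst-case distance to the cell
    worst = np.linalg.norm(cell_points[None, :, :] - candidates[:, None, :], axis=-1).max(axis=1)
    return candidates[worst.argmin()]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cell = rng.random((400, 2))               # stand-in for a voronoi cell sampled on a grid
    c = approx_ball_center(cell)
    print(c, np.linalg.norm(cell - c, axis=1).max())   # center and its covering radius
```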
by evaluating the corresponding integrals , one can obtain the following closed - form expressions to present a simple formula for the polar moment of inertia , let and , for .then , the polar moment of inertia of a polygon about its centroid , becomes the proof of these formulas is based on decomposing the polygon into the union of disjoint triangles .we refer to for analog expressions over .a second observation is that the voronoi polygon s vertexes can be expressed as a function of the neighboring vehicles .the vertexes of the voronoi polygon which lie in the interior of are the circumcenters of the triangles formed by and any two neighbors adjacent to .the circumcenter of the triangle determined by , , and is where is the area of the triangle , and .equation for a polygon s centroid and equation for the voronoi cell s vertexes lead to a closed - form _ algebraic _ expression for the control law in equation as a function of the neighboring vehicles location .to illustrate the performance of the continuous - time lloyd algorithm , we include some simulation results .the algorithm is implemented in ` mathematica ` as a single centralized program . for the setting, the code computes the bounded voronoi diagram using the ` mathematica ` package ` computationalgeometry ` , and computes mass , centroid , and polar moment of inertia of polygons via the numerical integration routine ` nintegrate ` .careful attention was paid to numerical accuracy issues in the computation of the voronoi diagram and in the integration .we illustrate the performance of the closed - loop system in figure [ fig : coverage-1 ] .in this section we show how the lloyd gradient algorithm can be implemented in an asynchronous distributed fashion . in section [ subsec : modeling ]we describe our model for a distributed asynchronous network of robotic agents .next , we provide two distributed algorithms for the local computation and maintenance of the voronoi cells . finally , in section [ subsec : asynchronous - lloyd ] we propose two distributed asynchronous implementations of lloyd algorithm : the first one is based on the gradient optimization algorithms as described in and the second one relies on the special structure of the coverage problem .we start by modeling a robotic agent that performs sensing , communication , computation , and control actions .we are interested in the behavior of the asynchronous network resulting from the interaction of finitely many robotic agents . a theoretical framework to formalize the following conceptsis that developed in the theory of distributed algorithms ; see .let us here introduce the notion of _ robotic agent with computation , communication , and control capabilities _ as the element of a network .the agent has a processor with the ability of allocating continuous and discrete states and performing operations on them .each vehicle has access to its unique identifier .the agent occupies a location and it is capable of moving in space , at any time for any period of time , according to a first order dynamics of the form : .\ ] ] the processor has access to the agent s location and determines the control pair .the processor of the agent has access to a local clock , and a _ scheduling sequence _ ,i.e. 
, an increasing sequence of times such that and .the processor of the agent is capable of transmitting information to any other agent within a closed disk of radius .we assume the communication radius to be a quantity controllable by the processor and the corresponding communication bandwidth to be limited .we shall alternatively consider networks of _ robotic agents with computation , sensing , and control capabilities_. in this case , the processor of the agent has the same computation and control capabilities as before .furthermore , we assume the processor can detect any other agent within a closed disk of radius .we assume the sensing radius to be a quantity controllable by the processor .a key requirement of the lloyd algorithms presented in section [ sec : coverage - control ] is that each agent must be able to compute its own voronoi cell .to do so , each agent needs to know the relative location ( distance and bearing ) of each voronoi neighbor .the ability of locating neighbors plays a central role in numerous algorithms for localization , media access , routing , and power control in ad - hoc wireless communication networks ; e.g. , see and references therein. therefore , any motion control scheme might be able to obtain this information from the underlying communication layer . inwhat follows , we set out to provide a distributed asynchronous algorithm for the local computation and maintenance of voronoi cells .the algorithm is related to the synchronous scheme in and is based on basic properties of voronoi diagrams .we present the algorithm for a robotic agent with sensing capabilities ( as well as computation and control ) .the processor of the agent allocates the information it has on the position of the other agents in the state variable .the objective is to determine the smallest distance for vehicle which provides sufficient information to compute the voronoi cell .we start by noting that is a subset of the convex set where and the half planes are provided is twice as large as the maximum distance between and the vertexes of , all voronoi neighbors of are within distance from and the equality holds . the minimum adequate sensing radius is therefore .we are now ready to state the following algorithm .a similar algorithm can be designed for a robotic agent with communication capabilities .the specifications go as in the previous algorithm , except for the fact that steps ` 2 : ` and ` 7 : ` are substituted by send within radius receive from all agents within radius further , we have to require each agent to perform the following event - driven task : if the agent receives at any time a `` request to reply '' message from the agent located at position , it executes send within radius we call this algorithm adjust communication radius algorithm .next , we present an algorithm whose objective is to maintain the information about the voronoi cell of the agent , and detect the presence of certain events .we consider only robotic agents with sensing capabilities .we call an agent active if it is moving and we assume that the agent can determine if any agent within radius is active or not .two events are of interest : ( i ) a voronoi neighbor of the agent becomes active and ( ii ) a new active agent becomes a voronoi neighbor of the agent . in both cases , we require a trigger message `` request recomputation '' to an appropriate control algorithm that we shall present in the next section . 
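the stopping rule of the radius-adjustment procedure, grow the sensing radius until it is at least twice the distance from the agent to the farthest point of its tentative cell, can be sketched as follows. this is our simplified grid-based approximation added for illustration; the actual algorithm works with half-plane intersections and vertex distances, and the initial radius and growth factor here are arbitrary.

```python
import numpy as np

def adequate_radius(p_i, all_positions, xlim=(0, 1), ylim=(0, 1), res=80, growth=1.5):
    """grow agent i's sensing radius until it is >= 2x the distance from p_i
    to the farthest point of its (approximate) voronoi cell.
    assumes at least one other agent is present."""
    xs = np.linspace(*xlim, res)
    ys = np.linspace(*ylim, res)
    X, Y = np.meshgrid(xs, ys)
    q = np.stack([X.ravel(), Y.ravel()], axis=1)
    others = np.array([p for p in all_positions if not np.allclose(p, p_i)])
    span = max(xlim[1] - xlim[0], ylim[1] - ylim[0])
    R = 0.05 * span                                   # small initial radius
    while True:
        visible = others[np.linalg.norm(others - p_i, axis=1) <= R]
        if len(visible) == 0:
            R *= growth
            continue
        d_i = np.linalg.norm(q - p_i, axis=1)
        d_v = np.linalg.norm(q[:, None, :] - visible[None, :, :], axis=-1).min(axis=1)
        mask = d_i <= d_v          # tentative cell: points closer to p_i than to any visible neighbor
        reach = d_i[mask].max()
        if R >= 2 * reach or R >= 2 * span:           # adequate radius reached (or whole domain covered)
            return R
        R *= growth

if __name__ == "__main__":
    pts = np.array([[0.2, 0.2], [0.8, 0.3], [0.5, 0.8]])
    print(adequate_radius(pts[0], pts))
```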
before presenting the algorithm ,let us introduce the map that assigns to the state vector a tuple according to the algorithm is designed to run for times $ ] .let us now present two versions of lloyd algorithm for the solution of the optimization problem that can be implemented by an asynchronous distributed network of robotic agents . for simplicity ,we assume that at time all clocks are synchronized ( although they later can run at different speeds ) and that each agent knows at the exact location of every other agent .the first algorithm is designed for robotic agents with communication capabilities , and requires the adjust communication radius algorithm ( while it does not require the monitoring algorithm ) . as a consequence of theorem 3.1 and corollary 3.1 in , we have the following result .let denote the initial sensors location .let be the sequence in increased order of all the scheduling sequences of the agents of the network .assume .then , there exists a sufficiently small such that if , the coverage behavior algorithm i converges to the set of critical points of , that is , the set of centroidal voronoi configurations .next , we focus on distributed asynchronous implementations of lloyd algorithm that take advantage of the special structure of the coverage problem .the following algorithm is designed for robotic agents with sensing capabilities , it requires the monitoring and the adjust sensing radius algorithms .two advantages of this algorithm over the previous one are that there is no need for each agent to exactly go toward the centroid of its voronoi cell nor to take a small step at each stage . the control law in step ` 7 : ` can be defined via a saturation function .for instance , then set .resorting to the discussion in section [ se : discrete - lloyd ] on the convergence of the discrete lloyd algorithms , one can prove that the coverage behavior algorithm ii verifies properties ( a ) and ( b ) . as a consequence of proposition [ prop : discrete - lloyd ], we then have the following result .let denote the initial sensors location .the coverage behavior algorithm ii converges to the set of critical points of , that is , the set of centroidal voronoi configurations .in this section we investigate various extensions and applications of the algorithms proposed in the previous sections .we extend the treatment to vehicles with passive dynamics and we also consider discrete - time implementations of the algorithms for vehicles endowed with a local motion planner .finally , we describe interesting ways of designing density functions to solve problems apparently unrelated to coverage . here ,we consider vehicles systems described by more general linear and nonlinear dynamical models . _ coordination of vehicles with passive dynamics_. we start by considering the extension of the control design to nonlinear control systems whose dynamics is passive .relevant examples include networks of vehicles and robots with general lagrangian dynamics , as well as spatially invariant passive linear systems .specifically , assume that for each , the vehicle state includes the spatial variable , and that the vehicle s dynamics is passive with input , output and storage function .furthermore , assume that the input preserving the zero dynamics manifold is . 
for such systems, we devise a proportional derivative (pd) control via, where and are scalar positive gains. the closed-loop system induced by this control law can be analyzed with the lyapunov function, yielding the following result. [ le : second - order ] for passive systems, the control law achieves asymptotic convergence of the sensors location to the set of centroidal voronoi configurations. if this set is finite, then the sensors location converges to a centroidal voronoi configuration. consider the evolution of the function; by lasalle's principle, the sensors location converges to the largest invariant set contained in. given the assumption on the zero dynamics, we conclude that for, i.e., the largest invariant set corresponds to the set of centroidal voronoi configurations. if this set is finite, lasalle's principle also guarantees convergence to a specific centroidal voronoi configuration. in figure [ fig : second - order ] we illustrate the performance of the control law for vehicles with second-order dynamics. [ figure : vehicles with second-order dynamics ; the environment and gaussian density function are as in figure [ fig : coverage-1 ] . ] _ coordination of vehicles with local controllers_. next, consider the setting where each vehicle has an arbitrary dynamics and is endowed with a local feedback and feedforward controller. the controller is capable of strictly decreasing the distance to any specified position in the environment in a specified period of time. assume the dynamics of the vehicle is described by, where denotes its state, and is such that. assume also that for any and any, there exists such that the solution of verifies. [ prop : local - feedback+feedforward ] consider the following coordination algorithm. at time, each vehicle computes and; then, for time, the vehicle executes. for this closed-loop system, the sensors location converges to the set of centroidal voronoi configurations. if this set is finite, then the sensors location converges to a centroidal voronoi configuration. the proof of this result readily follows from proposition [ prop : discrete - lloyd ], since the algorithm verifies properties (a) and (b) of section [ se : discrete - lloyd ]. as an example, we consider a classic model of mobile wheeled dynamics, the _ unicycle model_. assume the vehicle has configuration evolving according to, where are the control inputs for vehicle. note that the definition of is unique up to the discrete action. given a target point, we use this symmetry to require the equality for all time. should the equality be violated at some time, we shall redefine and as from time onwards. following the approach in, consider the control law, where is a positive gain. this feedback law differs from the original stabilizing strategy only in the fact that no final angular position is preferred. one can prove that the vehicle is guaranteed to monotonically approach the target position when the law is run over an infinite time horizon. we illustrate the performance of the proposed algorithm in figure [ fig : coverage - mobile ].
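the pd law for passive vehicles introduced at the start of this passage can be illustrated with a double-integrator toy model. the sketch below is our own simplified version: the input is a proportional pull toward the cell centroid plus velocity damping, integrated with forward euler, with grid-based centroids and an assumed gaussian density; gains and step size are illustrative, not the values used in the paper's figures.

```python
import numpy as np

def centroids(p, X, Y, phi_grid):
    q = np.stack([X.ravel(), Y.ravel()], axis=1)
    w = phi_grid.ravel()
    owner = ((q[:, None, :] - p[None, :, :]) ** 2).sum(-1).argmin(axis=1)
    C = p.copy()
    for i in range(len(p)):
        m = owner == i
        if w[m].sum() > 0:
            C[i] = (q[m] * w[m, None]).sum(0) / w[m].sum()
    return C

def pd_coverage(p0, steps=400, dt=0.02, kp=4.0, kd=3.0, res=60):
    """double-integrator vehicles driven by u = -kp*(p - C_V) - kd*v."""
    xs = ys = np.linspace(0.0, 1.0, res)
    X, Y = np.meshgrid(xs, ys)
    phi_grid = np.exp(-((X - 0.7) ** 2 + (Y - 0.7) ** 2) / 0.05)   # assumed gaussian density
    p = p0.astype(float).copy()
    v = np.zeros_like(p)
    for _ in range(steps):
        u = -kp * (p - centroids(p, X, Y, phi_grid)) - kd * v
        v += dt * u
        p += dt * v
    return p

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    print(pd_coverage(rng.random((4, 2))))
```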
here we suggest the use of decentralized coverage algorithms as formation control algorithms, and we present various density functions that lead the multi-vehicle network to predetermined geometric patterns. in particular, we present simple density functions that lead to segments, ellipses, polygons, or uniform distributions inside convex environments. consider a planar environment, let be a large positive gain, and denote. let be real numbers, consider the line, and define the density function. similarly, let be a reference point in, let be positive scalars, consider the ellipse, and define the density function. we illustrate this density function in figure [ fig : coverage - circle ]. during the simulations, we observed that the convergence to the desired pattern was rather slow. finally, define the smooth ramp function, and the density function. this density function leads the multi-vehicle network to obtain a uniform distribution inside the ellipsoidal disk. we illustrate this density function in figure [ fig : coverage - disk ]. [ figure : vehicles converging to an ellipsoidal disk ; the density function parameters are the same as in figure [ fig : coverage - circle ] . ] it appears straightforward to generalize these types of density functions to the setting of arbitrary curves or shapes. the proposed algorithms are to be contrasted with the classic approach to formation control based on rigidly encoding the desired geometric pattern. one disadvantage of the proposed approach is the requirement for a careful numerical computation of voronoi diagrams and centroids. we refer to for previous work on algorithms for geometric patterns, and to for formation control algorithms. we have presented a novel approach to coordination algorithms for multi-vehicle networks. the scheme can be thought of as an interaction law between agents and as such it is implementable in a distributed asynchronous fashion. numerous extensions appear worth pursuing. we plan to investigate the setting of non-convex environments and non-isotropic sensors. we are currently implementing these algorithms on a network of all-terrain vehicles. furthermore, we plan to extend the algorithms to provide collision avoidance guarantees and to vehicle dynamics which are not locally controllable. this work was supported by nsf grant cms-0100162, aro grant daad 190110716, and darpa / afosr muri award f49620-02-1-0325. c. r. weisbin, j. blitch, d. lavery, e. krotkov, c. shoemaker, l. matthies, and g. rodriguez, `` miniature robots for space and military missions, '' , vol. 6, no. 3, pp. 918, 1999. e. krotkov and j. blitch, `` the defense advanced research projects agency ( darpa ) tactical mobile robotics program, '' , vol. 18, no. 7, pp. 76976, 1999. e. rybski, n. p. papanikolopoulos, s. a. stoeter, d. g. krantz, k. b. yesin, m. gini, r. voyles, d. f. hougen, b. nelson, and m. d. erickson, `` enlisting rangers and scouts for reconnaissance and surveillance, '' , vol. 7, no. 1424, 2000. t. b. curtin, j. g. bellingham, j. catipovic, and d. webb, `` autonomous oceanographic sampling networks, '' , vol. 3, pp. 8694, 1993. r. m. turner and e. h.
turner , `` organization and reorganization of autonomous oceanographic sampling networks , '' in _ ieee int .conf . on robotics and automation _ , leuven , belgium , may 1998 , pp .20607 .e. eberbach and s. phoha , `` samon : communication , cooperation and learning of mobile autonomous robotic agents , '' in _proceedings 11th international conf . on tools with artificial intelligence ( tai )_ , chicago , il , nov . 1999 , pp. 22936 .a. okabe , b. boots , and k. sugihara , `` nearest neighbourhood operations with generalized voronoi diagrams : a review , '' , vol .1 , pp . 4371 , 1994 .z. drezner , ed ., , springer series in operations research .springer verlag , new york , ny , 1995 .a. suzuki and a. okabe , `` using voronoi diagrams , '' in drezner , pp .103118 . a.okabe and a. suzuki , `` locational optimization problems solved through voronoi diagrams , '' , vol .3 , pp . 44556 , 1997 .a. okabe , b. boots , k. sugihara , and s. n. chiu , , wiley series in probability and statistics .john wiley & sons , new york , ny , second edition , 2000 .q. du , v. faber , and m. gunzburger , `` centroidal voronoi tessellations : applications and algorithms , '' , vol .4 , pp . 637676 , 1999 .s. p. lloyd , `` least squares quantization in pcm , '' , vol .2 , pp . 129137 , 1982 , presented as bell laboratory technical memorandum at a 1957 institute for mathematical statistics meeting .r. m. gray and d. l. neuhoff , `` quantization , '' , vol .6 , pp . 23252383 , 1998 , commemorative issue 1948 - 1998 . h. yamaguchi and t. arai , `` distributed and autonomous control method for generating shape of multiple mobile robot group , '' in _ieee / rsj int . conf . on intelligent robots & systems _ ,munich , germany , sept .1994 , pp .800807 .k. sugihara and i. suzuki , `` distributed algorithms for formation of geometric patterns with many mobile robots , '' , vol .12739 , 1996 .t. balch and r. arkin , `` behavior - based formation control for multirobot systems , '' , vol .92639 , 1998 .m. egerstedt and x. hu , `` formation constrained multi - agent control , '' , vol .17 , no . 6 , pp . 94751 , 2001 .j. p. desai , j. p. ostrowski , and v. kumar , `` modeling and control of formations of nonholonomic mobile robots , '' , vol .17 , no . 6 , pp . 9058 , 2001 .tabuada , g. pappas , and p. lima , `` feasible formations of multi - agent systems , '' , 2002 , submitted. r. olfati - saber and r. m. murray , `` graph rigidity and distributed formation stabilization of multi - vehicle systems , '' in _ ieee conf .on decision and control _ , las vegas , nv , 2002 , to appear .c. tomlin , g. j. pappas , and s. s. sastry , `` conflict resolution for air traffic management : a study in multiagent hybrid systems , '' , vol .4 , pp . 50921 , 1998 .e. frazzoli , z. h. mao , j. h. oh , and e. feron , `` aircraft conflict resolution via semi - definite programming , '' , vol . 24 , no .1 , pp . 7986 , 2001 .h. choset , `` coverage for robotics - a survey of recent results , '' , vol .31 , pp . 113126 , 2001 .r. bachmayer and n. ehrich leonard , `` vehicle networks for gradient descent in a sampled environment , '' in _ ieee conf . on decision and control2002 , to appear .a. jadbabaie , j. lin , and a. s. morse , `` coordination of groups of mobile autonomous agents using nearest neighbor rules , '' , july 2002 , to appear .e. klavins , `` communication complexity of multi - robot systems , '' in _ workshop on algorithmic foundations of robotics _ , nice , france , dec .2002 , submitted .j. corts , s. martnez , t. 
karatas , and f. bullo , `` coverage control for mobile sensing networks , '' in _ ieee int .conf . on robotics and automation _ ,arlington , va , may 2002 , pp . 13271332 .j. corts , s. martnez , t. karatas , and f. bullo , `` coverage control for mobile sensing networks : variations on a theme , '' in _mediterranean conference on control and automation _ ,lisbon , portugal , july 2002 , electronic proceedings .r. a. brooks , `` a robust layered control - system for a mobile robot , '' , vol .1 , pp . 1423 , 1986 . c. w. reynolds , `` flocks , herds , and schools : a distributed behavioral model , '' , vol . 21 , no .4 , pp . 2534 , 1987 .r. c. arkin , , cambridge university press , new york , ny , 1998 .m. s. fontan and m. j. mataric , `` territorial multi - robot task division , '' , vol .5 , pp . 815822 , 1998 .a. c. schultz and l. e. parker , eds ., , washington , dc , june 2002 .kluwer academic publishers , proceedings from the 2002 nrl workshop on multi - robot systems .t. balch and l. e. parker , eds ., , a. k. peters ltd . , 2002 .l. e. parker , `` distributed algorithms for multi - robot observation of multiple moving targets , '' , vol .3 , pp . 23155 , 2002 .a. howard , maja j. mataric , and g. s. sukhatme , `` mobile sensor network deployment using potential fields : a distributed scalable solution to the area coverage problem , '' in _ proceedings of the 6th international conference on distributed autonomous robotic systems ( dars02 ) _ , fukuoka , japan , 2002 , pp .299308 .n. a. lynch , , morgan kaufmann publishers , san mateo , ca , 1997 .g. tel , , cambridge university press , new york , ny , second edition , 2001. j. n. tsitsiklis , d. p. bertsekas , and m. athans , `` distributed asynchronous deterministic and stochastic gradient optimization algorithms , '' , vol .9 , pp . 80312 , 1986 .d. p. bertsekas and j. n. tsitsiklis , , athena scientific , 1997 .s. h. low and d. e. lapsey , `` optimization flow control i : basic algorithm and convergence , '' , vol . 7 , no . 6 , pp . 86174 , 1999 .g. leibon and d. letscher , `` delaunay triangulations and voronoi diagrams for riemannian manifolds , '' in _ proceedings of the sixteenth annual symposium on computational geometry ( hong kong , 2000 ) _ , new york , 2000 , pp .341349 , acm .r. klein , , vol .400 of _ lecture notes in computer science _ ,springer verlag , new york , ny , 1989 .a. suzuki and z. drezner , `` the p - center location problem in an area , '' , vol .1/2 , pp . 6982 , 1996 .j. corts and f. bullo , `` distributed lloyd flows for disk covering and sphere packing problems , '' 2002 , preprint . c. cattani and a. paoluzzi , `` boundary integration over linear polyhedra , '' , vol .2 , pp . 1305 , 1990 .j. gao , l. j. guibas , j. hershberger , li zhang , and an zhu , `` geometric spanner for routing in mobile networks , '' in _ acm international symposium on mobile ad - hoc networking & computing _ , long beach , ca , oct .2001 , pp . 4555 .x .- y . li andwan , `` constructing minimum energy mobile wireless networks , '' , vol .5 , no . 4 , 2001 .s. meguerdichian , s. slijepcevic , v. karayan , and m. potkinjak , `` localized algorithms in wireless ad - hoc networks : location discovery and sensor exposure , '' in _ acm international symposium on mobile ad - hoc networking & computing _ , long beach , ca , oct . 2001 .m. cao and c. hadjicostis , `` distributed algorithms for voronoi diagrams and application in ad - hoc networks , '' preprint , oct .a. 
astolfi , `` exponential stabilization of a wheeled mobile robot via discontinuous control , '' , vol . 121 , no .1 , pp . 1217 , 1999 . h. k. khalil , , prentice hall , englewood cliffs , nj , second edition , 1995 . d. g. luenberger , , addison - wesley , reading , massachusetts , second edition , 1984 .in this section we collect some relevant facts on descent flows both in the continuous and in the discrete - time settings .we do this following and , respectively .we include proposition [ prop : discrete - lasalle - surprising ] as we are unable to locate it in the linear and nonlinear programming literature .consider the differential equation , where is locally lipschitz and is an open connected set .a set is said to be ( positively ) invariant with respect to if implies , for all ( resp . ) . a descent function for on , , is a continuously differentiable function such that on .we denote by the set of points in where and by be the largest invariant set contained in . finally , the distance from a point to a set is defined as .[ lemma : continuous - lasalle ] let be a compact set that it is positively invariant with respect to .let and be an accumulation point of . then and as .let be a subset of .an algorithm is a continuous mapping from to .a set is said to be positively invariant with respect to if implies .a point is said to be a fixed point of if .we denote the set of fixed points of by . a descent function for on , ,is any nonnegative real - valued continuous function satisfying for , where the inequality is strict if .typically , is the objective function to be minimized , and reflects this goal by yielding a point that reduces ( or at least does not increase ) .[ lemma : discrete - lasalle ] let be a compact set that it is positively invariant with respect to .let and denote , .let be an accumulation point of the sequence .then , and as .let be an accumulation point of and assume the whole sequence does not converge to it .then , there exists an such that for all , there is a such that . let be the minimum of all the distances between the points in .fix .since is continuous and is finite , there exists such that , with , implies ( that is , for each , there exists such , and we take the minimum over ) .now , since , there exists such that for all , . also , we know that there is a subsequence of which converges to , let us denote it by . for , there exists such that for all , we have .let .take such that then , now we are going to prove that . if , then this claim is straightforward , since .if , suppose that . since , then , there exists such that .necessarily , .now , by the triangle inequality , .then , which contradicts. therefore , .this argument can be iterated to prove that for all , we have .let us take now such that . since , we have , and therefore which is a contradiction .therefore , converges to .
this paper presents control and coordination algorithms for groups of vehicles . the focus is on autonomous vehicle networks performing distributed sensing tasks where each vehicle plays the role of a mobile tunable sensor . the paper proposes gradient descent algorithms for a class of utility functions which encode optimal coverage and sensing policies . the resulting closed - loop behavior is adaptive , distributed , asynchronous , and verifiably correct . coverage control , distributed and asynchronous algorithms , centroidal voronoi partitions
the flourish of cloud computing technology nowadays is largely due to its outstanding features , on - demand self - service , resource pooling , rapid elasticity , etc.[1 ] all users in cloud environment share a public pool of configurable and virtualized computing resources , such as cpus , disks or network . users can easily scale - up or scale - down their cloud resources according to their real time demands .for example , before the landing of nasa s curiosity rover , it engineers are allowed to deploy as many servers running on the aws ( amazon web services ) cloud as they need . then , when they are done , they may shut down additional servers to avoid paying for those resources [ 2 ] .besides , the centralization of servers makes cloud computing technology more environment friendly and energy saving .compared to setting up their own data center , individuals and enterprises are now becoming more favorable to deploy their businesses on cloud [ 3 ] [ 4 ] .cloud computing has leveraged users from hardware requirements , while reducing overall client side requirements and complexity [ 5 ] . as the fast growth of cloud computing , security issuesare considered as the obstacles on the highway , which largely hinder the big enterprises wills of porting their business from traditional data center to cloud .apparently , the security of cloud computing seems to be improved due to the centralization of data and increased security - focused resources [ 6 ] .the fact , however , is that the security of cloud computing now is considered still in infancy [ 7 ] , especially the network security which faces many new challenges . generally , to protect an enterprise network against cyber - attack , we traditionally adopt network security devices such as firewalls , dmz hosts or intrusion detection systems ( ids ) [ 8 ] .these traditional network defense strategies , however , can not be applied to cloud computing environment adaptively due to not only the attacks can rise internally but also the dynamic and elastic features of cloud computing [ 9 ] . to settle such problems , new methods are proposed continually in the past years , such as distributed cloud intrusion detection model proposed by irfan gul and m. hussain[10 ] , integrating an ids into cloud computing environment proposed by claudio mazzariello , roberto bifulco and roberto canonico[11 ] or control the inter - communication among virtual machines method proposed by hanqian wu , yi ding[12 ] , etc .these novel methods , however , merely try to reinforce cloud computing s internal network via porting traditional network defense means .such methods are not only unsystematic but also impossible to implement when the scale of physical hosts reaches at least half million [ 13 ] . besides, once the cyber - attack causes some vms overloaded which in turn causes physical hosts which they reside overloaded , all the services on this overloaded physical hosts will be affected or even be corrupted . moreover , due to the logical coupling between vms in a common virtual sub - network , for example , the coupling relationship between load - balancers and servers or between servers and databases , disasters will spread dramatically and then quickly collapse a large part of cloud network . 
in this paper, we first propose a new two-layer model to describe the cloud's complex internal network, with full consideration of the interactions between physical and virtual networks. based on this model and complex network theory, a novel solution is introduced to systematically and globally address the problem that traditional network defense strategies are no longer suitable for the cloud. this solution can make the whole network in a cloud computing environment more robust, able to resist malicious cyber attacks and to keep the infrastructure as operational as possible, even on the verge of collapse. different from traditional networks, the network in a cloud computing environment can be divided into two layers: the virtual layer and the physical layer. the physical layer contains the physical network facilities, such as switches, routers, servers, and other common network devices. the virtual layer, however, is built on the physical layer and is implemented via various virtualization technologies, such as container technologies, virtual machine technologies, or software-defined network technologies [ 14 ]. all these virtual resources, such as vm instances, distributed databases, or distributed storage, run on physical hosts and are inter-connected via virtual networks which also run on some physical hosts [ 15 ]. fig. 1 shows the relationship between virtual and physical layers. due to the sharing of the physical resource pool, the crash of one vm instance can cause other vm instances on the same physical host to collapse. furthermore, because of the logical coupling of different components in a virtual sub-network, such collapsing may spread along different paths on both the physical layer and the virtual layer. this process is the avalanche effect of a cyber attack in cloud computing. fig. 2 shows the avalanche process when only one vm instance in the network is attacked by a malicious hacker. according to complex network theory, such an avalanche effect can ruin a network rapidly, even when the scale of the network is very large [ 16 ]. in a real cloud environment, vm instances and other virtual components compose a virtual sub-network which represents a fully functional application, such as a web application or a scientific computing platform. the whole virtual layer consists of various virtual sub-networks that have different scales. in this article, we use a scale-free network to model such a virtual sub-network, due to the fact that many kinds of practical computer networks, including the internet and local area networks, are scale-free [ 17 ]. here, we suppose that the distribution of virtual sub-networks of different scales obeys a power law, i.e., the larger a virtual sub-network is, the less frequently it appears [ 18 ]. after this, we can deploy these virtual components onto physical machines randomly. the modeling process can be divided into two steps: 1) generate various scale-free networks whose scale distribution obeys the power law. each vertex in such a network represents a virtual component, such as a vm instance on which different services run. 2) create the two-layer model by adding physical vertices to the network and randomly adding edges between these physical vertices and the vertices in the virtual layer. to apply complex network methods to the model, we need to simplify the two-layer model into a single layer.
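a minimal construction of this two-layer model can be sketched with networkx. the snippet below is our illustration: sub-network sizes are drawn from a discrete power law (zipf) and capped at 500 vms, matching the simulation set-up described later, each sub-network is generated with the barabási–albert preferential-attachment model as a stand-in for a scale-free topology, and every virtual node is attached to a random physical host; the specific generators and parameters are our assumptions, not the paper's.

```python
import networkx as nx
import numpy as np

def build_two_layer_model(n_hosts=100, n_subnets=40, zipf_a=2.0, ba_m=2, seed=0):
    rng = np.random.default_rng(seed)
    g = nx.Graph()
    hosts = [f"host{i}" for i in range(n_hosts)]
    g.add_nodes_from(hosts, layer="physical")
    for s in range(n_subnets):
        size = min(int(rng.zipf(zipf_a)) + ba_m + 1, 500)      # power-law sizes, capped at 500 vms
        sub = nx.barabasi_albert_graph(size, ba_m, seed=int(rng.integers(1 << 30)))
        sub = nx.relabel_nodes(sub, {v: f"s{s}_vm{v}" for v in sub.nodes})
        g.add_nodes_from(sub.nodes, layer="virtual", subnet=s)
        g.add_edges_from(sub.edges)
        for vm in sub.nodes:                                   # pin each vm to a random physical host
            g.add_edge(vm, hosts[rng.integers(n_hosts)])
    return g

if __name__ == "__main__":
    g = build_two_layer_model()
    n_virtual = sum(1 for _, d in g.nodes(data=True) if d["layer"] == "virtual")
    print(g.number_of_nodes(), g.number_of_edges(), n_virtual)
```

flattening to a single layer, as described above, amounts to ignoring the node attributes and keeping only the resulting graph.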
ignoring the specific functions of facilities in the two layers , all virtual or physical components in cloud can be treated as a vertex in network , and inter - connections between verticescan be abstracted as edges . due to that virtual networkis built on the physical network , edges between vertices on physical layer can be omitted .so , this two - layer network can be further simplified to a single layer network as we can see from fig .this simplification reduces the complexity of analysis and makes it possible to use the mature complex network theory and tools .then , we come to analyze the robustness of this system .we usually use the size of giant component after initially removing a fraction of nodes to measure the robustness of a network .first , we consider the situation in which no immune nodes are set up to guarantee the function of the whole network .bond percolation process can be a great tool to model the dynamic process in the system .edges are occupied only when the end nodes of the edges are not initially removed and both the end nodes are not infected ( node are infected with probability , we will discuss it later ) .we define as the probability that node belongs to a small clusters of exactly nodes .since the network is sparse enough , we can assume that the network topology is locally tree - like .this means that in the limit of large network size an arbitrarily large neighborhood around any nodes takes the form of a tree , then the calculation using message - passing algorithms can give a good approximation of the clusters . assuming that the networks to be locally tree - like , according to brian karrer and m. e. j. newman s recent theory [ 22 ] , can be write as : \delta \big(s-1 , \sum_{j \in n_{j } } s_{j}\big)\ ] ] where is the kronecker delta which is defined as follows : we can now introduce a probability generating function , whose value is given by [ 22 ] : \delta(s-1,\sum_{j \in n_{i}}s_{j})\\ & = z\prod_{j \in n_{i}}\sum_{s_{j}=0}^{\infty}\pi_{i \gets j}(s_{j})z^{s_j } \end{split}\ ] ] we can simplify the equation as [ 22 ] : where . to calculate , we note that is zero if the edge between and is unoccupied ( with probability ) and nonzero otherwise ( ) , which means that in which : where stands for the fraction of nodes initially removed . and for : \delta(s-1 , \sum_{k \in n_{j\backslash i}}s_k ) \end{split}\ ] ] where the denotes that the set of neighbors of without . substituting this equation into the definition of above, we then find that : then the expected fraction of the network occupied by the entire percolating cluster is given by the average over all nodes : =1-\frac{1}{n}\sum_{i=1}^n\prod_{j\in n_i}h_{i\gets j}(1)\ ] ] setting in equation ( 7 ) we have : we can calculate the size of the remaining greatest connected component of the networks , i.e. the percolating cluster by solving this equation .some nodes in this network may have the immune ability against malicious attacks due to that they are well protected by some virtual network security equipments which are deployed by professional network administrators .usually , large corporations have enough money and awareness to employ professional security counselors and managers to protect their it facilities ( physical or virtual ) from cyber - attacks . 
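the message-passing calculation outlined above can be made concrete. the snippet below is a stand-alone sketch of the karrer-newman recursion for bond percolation on a locally tree-like network [22]; folding the fraction of initially removed nodes into the messages as a uniform node-presence factor, and the particular values of the occupation probability and removal fraction, are simplifying assumptions made only for illustration.

```python
import networkx as nx

def giant_component_fraction(G, p_occ=1.0, phi=0.05, n_iter=200, tol=1e-10):
    """Message-passing estimate of the expected giant-cluster fraction.

    msgs[(i, j)] = probability that node i is NOT joined to the giant
    cluster through its edge to neighbour j.  Assumptions: every node is
    independently removed with probability phi, every surviving edge is
    occupied with probability p_occ, and the network is locally tree-like.
    """
    msgs = {(i, j): 0.0 for i in G for j in G[i]}   # start fully "connected"
    for _ in range(n_iter):
        new, delta = {}, 0.0
        for (i, j), old in msgs.items():
            prod = 1.0
            for k in G[j]:
                if k != i:
                    prod *= msgs[(j, k)]
            val = 1.0 - p_occ * (1.0 - phi) * (1.0 - prod)
            new[(i, j)] = val
            delta = max(delta, abs(val - old))
        msgs = new
        if delta < tol:
            break

    total = 0.0
    for i in G:
        prod = 1.0
        for j in G[i]:
            prod *= msgs[(i, j)]
        total += (1.0 - phi) * (1.0 - prod)
    return total / G.number_of_nodes()

# example on a small scale-free graph (the cloud graph from the previous
# sketch can be passed in directly as well)
demo = nx.barabasi_albert_graph(2000, 2, seed=0)
print("expected giant-cluster fraction:", giant_component_fraction(demo, phi=0.1))
```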
according to this common sense , in our model, vertices in a large virtual sub - network will have a high probability to avoid crash when they are attacked by hackers .the immunity probability of a specific node v can be calculated as : where is the number of vertices in a virtual sub - network which node v belongs to and is the number of vertices in the whole virtual layer .coefficient stands for that even a virtual sub - network is well protected , it is also possible to be ruined inevitably by some cases . to enhance the robustness of cloud computing network, we may place some key vertices behind virtual network security components , such virtual firewall , virtual ids etc . [21 ] in our virtual layer model , we do nt take account of virtual network security components due to that they are transparent to the application users and ca nt be attacked directly .virtual network security components can be deployed rapidly and conveniently without much more consumptions .the key vertices which are selected to protect are that have the highest degrees in the network . usually , vertex has high degree somehow means that they are important or even crucial . .in this figure , vm2 is protected by a virtual firewall . in our model, we can place such key nodes behind the security components.,width=302 ] to simulate the crashing process , initially , we randomly remove some vertices from the network to simulate that some vms are ruined . then all the vertices which are the neighbors of crashed nodes are affected . due tothat each node in the network has its own immune coefficient which we have mentioned before , the neighboring nodes may survive and avoid crashing during the process .these new crashed nodes in turn affected their own neighbors .this process will continue until the system reaches a stable state that no more vertices are affected . in each spreading step , we use the number of nodes in the largest connected cluster to represents the current state of network . based on the modelwe have discussed before , we now consider the situation with immune nodes which are totally immune to the infections and will never collapse with some protection . in this paper, we select the nodes with greatest degrees as the immune nodes . as the introduce of immune nodes into the system, there will be some changes for .(1-p_{j\in b})\bigg)\ ] ] where stands for the probability that is in the selected group of immune nodes .if the immune nodes are randomly selected , for all and is the fraction of protected nodes .it is obvious that , thus the expectation of size of giant component will be greater . in this paper , we selected the nodes with greatest degrees as immune nodes . thus will be a function of its degree and . andif is in the group with greatest degrees , the effect of protecting this node will be greater .therefore the network will be more robust .from what we have discussed in the above sections , we can conclude that as long as the immune nodes are added into the network , the probability of existing larger cluster is improved as well . to verify the robustness improvement after applying our novel method to cloud computing s network, we have simulated this avalanche process with different ratio of initial immune nodes and initial attacked nodes .we use 5000 physical hosts with 10 vm instances running on each of them to simulate the attack process . here, we assume that the largest scale of virtual sub - network contains at most 500 vm instances . 
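as a worked example of the crashing process just described, the sketch below simulates the avalanche on the single-layer graph built earlier. the per-node immunity rule is assumed here to be alpha times the relative size of the node's virtual sub-network (with alpha < 1 reflecting that even a well-protected sub-network can still be ruined in some cases), the protected high-degree vertices never crash, and the state of the network after each spreading step is tracked by the size of the largest connected cluster of surviving nodes.

```python
import random
import networkx as nx

def immunity_probabilities(G, subnet_of, alpha=0.8):
    """Assumed rule: p_immune(v) = alpha * n_sub(v) / N_virtual, where
    n_sub(v) is the size of v's virtual sub-network; physical hosts get 0."""
    n_virtual = len(subnet_of)
    sizes = {}
    for v, s in subnet_of.items():
        sizes[s] = sizes.get(s, 0) + 1
    return {v: (alpha * sizes[subnet_of[v]] / n_virtual if v in subnet_of else 0.0)
            for v in G}

def simulate_avalanche(G, p_immune, protect_frac=0.05, attack_frac=0.01, seed=0):
    rnd = random.Random(seed)
    # the protect_frac highest-degree vertices sit behind virtual security
    # components and never crash
    by_degree = sorted(G.degree, key=lambda kv: kv[1], reverse=True)
    protected = {v for v, _ in by_degree[:int(protect_frac * G.number_of_nodes())]}

    attackable = [v for v in G if v not in protected]
    crashed = set(rnd.sample(attackable, int(attack_frac * G.number_of_nodes())))
    frontier, history = set(crashed), []
    while frontier:
        nxt = set()
        for v in frontier:
            for w in G[v]:
                if w in crashed or w in protected or w in nxt:
                    continue
                if rnd.random() > p_immune.get(w, 0.0):   # w fails to resist
                    nxt.add(w)
        crashed |= nxt
        frontier = nxt
        survivors = G.subgraph(set(G) - crashed)
        giant = max((len(c) for c in nx.connected_components(survivors)), default=0)
        history.append(giant)
    return history

# intended usage with the graph from the construction sketch above:
# p = immunity_probabilities(G, subnet_of)
# print(simulate_avalanche(G, p, protect_frac=0.05, attack_frac=0.01))
```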
the results in fig .5 show that the number of key nodes that have the ability to resist the cyber - attack will finally affect the robustness of the whole network , and the initial number of attacked nodes also affects the network s robustness . as the ratio of initial attacked nodes increase , the number of survived nodes in the largest connected cluster decrease accordingly . also , with different ratio of protected nodes , the robustness ( measured by the number of nodes in the largest connected cluster ) of network varies significantly .the more the nodes are protected , the higher the robustness is .5 demonstrates that if we only select 5% ( 2500 vms , 250 physical hosts ) key nodes to give the ability to resist the cyber - attack , the ratio of final survived nodes to the total nodes can increased over 40% or even 70% .also , if we protect 20% key nodes , this ratio will stably over 60% and in some optimistic cases it will over 90% ( 0.5% nodes initially be attacked ) . in practice , benefited from the elastic and dynamic features of cloud computing , nodes can by rapidly protected by virtual network security devices on demand .besides , the sdn technology has the ability to detect the real time topology and to re - calculate the degree of all nodes in network rapidly .so that , when we detect the change of network , no matter physical or virtual , we can re - select the key nodes ( nodes have the highest degree ) and protect those new key nodes by the virtual network components to obtain the immunity in a short period .in summary , we have introduced a novel method based on complex network theory that can significantly improve the robustness of cloud computing s network to defense malicious attacks with low costs .our approach shows that with a reasonable protection of some key nodes in the network , significant gains can be achieved for the robustness while the network s functional topology keep unchanged .this result reveals the fact that instead of deploying security equipment on each rack , protecting the key nodes with virtual network security components is more efficient , economic and energy - saving .the applications of our results are imminent on one hand to guide the improvement of the existing cloud computing networks but also serve on the other hand to design future cloud infrastructures with improved robustness .mell , peter , and tim grance .`` the nist definition of cloud computing . '' national institute of standards and technology 53.6 ( 2009 ) : 50 .cloud insights .( 2014 , aug ) .case study : big data cloud computing helps nasa rover curiosity land on mars . [ online ] .available : https://www.cloudinsights.com/case-study-nasa-jpl-666995202.html rightscale .2015 state of the cloud report .[ online ] .available : http://assets.rightscale.com/uploads/pdfs/rightscale-2015-state-of-the-cloud-report.pdf amazon .amazon.com announces first quarter sales up 15% to $ 22.72 billion .[ online ] .available : http://phx.corporate-ir.net/phoenix.zhtml?c=97664&p=irol-newsarticle&id=2039598 zissis , dimitrios , and dimitrios lekkas .`` addressing cloud computing security issues . ''future generation computer systems 28.3 ( 2012 ) : 583 - 592 .so , kuyoro .`` cloud computing security issues and challenges . ''international journal of computer networks 3.5 ( 2011 ) .carlin , sean , and kevin curran .`` cloud computing security . ''( 2011 ) . 
sans institute infosec reading room .building a secure internet data center network .[ online ] .available : http://www.sans.org/reading-room/whitepapers/modeling/building-secure-internet-data-center-network-infrastructure-73 shin , seungwon , and guofei gu .`` cloudwatcher : network security monitoring using openflow in dynamic cloud networks ( or : how to provide security monitoring as a service in clouds ? ) . '' network protocols ( icnp ) , 2012 20th ieee international conference on .ieee , 2012 .gul , irfan , and m. hussain .`` distributed cloud intrusion detection model . ''international journal of advanced science and technology 34 ( 2011 ) : 71 - 82 .mazzariello , claudio , roberto bifulco , and roberto canonico .`` integrating a network ids into an open source cloud computing environment . ''information assurance and security ( ias ) , 2010 sixth international conference on .ieee , 2010 .wu , hanqian , et al .`` network security for virtual machine in cloud computing . ''computer sciences and convergence information technology ( iccit ) , 2010 5th international conference on .ieee , 2010 .amazon data center size .13/amazon- data- center- size/. malhotra , lakshay , devyani agarwal , and arunima jaiswal .`` virtualization in cloud computing . ''j inform tech softw eng 4.136 ( 2014 ) : 2 .openstack , openstack installation guide for red hat enterprise linux 7 , centos 7 , and fedora 21 .[ online ] .available : http://docs.openstack.org/kilo/install-guide/install/yum/content/ch_overview.html .barrat , alain , marc barthelemy , and alessandro vespignani .dynamical processes on complex networks .cambridge university press , 2008 .mathematics and the internet : a source of enormous confusion and great potential .defense technical information center , 2009 .power law .[ online ] .available : https://en.wikipedia.org/wiki/power_law .kivel , mikko , et al .`` multilayer networks . ''journal of complex networks 2.3 ( 2014 ) : 203 - 271 .openstack , firewall - as - a - service ( fwaas ) overview .[ online ] .available : http://docs.openstack.org/admin-guide-cloud/content/install_neutron-fwaas-agent.html openstack , openstack networking ( `` neutron '' ) .[ online ] .available : https://wiki.openstack.org/wiki/neutron#why_neutron.3f karrer , brian , m. e. j. newman , and lenka zdeborov .`` percolation on sparse networks . '' physical review letters 113.20 ( 2014 ) : 208702 .
as a novel technology, cloud computing attracts more and more users, including both technology enthusiasts and malicious actors. unlike the classical network architecture, the cloud environment has many features of its own that render traditional defense mechanisms ineffective. to make the network more robust against malicious attacks, we introduce a new method that mitigates this risk efficiently and systematically. in this paper we first propose a coupled-networks model that adequately captures the interactions between the physical layer and the virtual layer of a practical cloud computing environment. based on this model and our systematic method, we show that by protecting a small set of carefully chosen nodes the robustness of the cloud network can be significantly improved while its functionality remains unchanged. our results demonstrate that the proposed method can effectively address these pressing problems that cloud computing now faces, and at low cost.
in this paper we consider the following system of equations:{c}\frac{\partial u\left ( x , t\right ) } { \partial t}=-u\left ( x , t\right ) + \int_{-\infty}^{\infty}j\left ( x - y\right ) q\left ( y , t\right ) s\left ( u\left ( y , t\right ) \right ) dy\\ \frac{1}{\varepsilon}\frac{\partial q\left ( x , t\right ) } { \partial t}=1-q\left ( x , t\right ) -\beta q\left ( x , t\right ) s\left ( u\left ( x , t\right ) \right ) \end{array } \right . , \ \label{-1}\ ] ] where is a normalized exponential and the firing rate function is given by for certain positive parameters and the variable is the synaptic input current for a neural network with synaptic depression , the effect of which is represented by the scaling factor these equations were proposed and studied by g. faye in .the faye model is a simplified version of one first introduced by kilpatrick and bressloff in .these authors included a variable and equation to allow for spike frequency adaptation .however they show by numerical computation that adaptation has little effect on the resulting waves .faye dropped the adaptation equation and variable in to get his system ( [ -1 ] ) .see and for further information on the physical background of ( [ -1 ] ) . in author proves two interesting results about the system ( [ -1 ] ) , namely the existence of a travelling pulse solution and the stability of this solution .a travelling pulse solution of ( [ -1 ] ) is a non - constant solution of the form such that both and exist and these limits are equal . in this paperwe are interested in the existence of values of for which ( [ -1 ] ) has such a solution . as we describe briefly below ,using ( [ -2 ] ) leads to a set of four ode s in which is a parameter . to show that a travelling pulse exists for some , faye uses the theory of geometric singular perturbation initiated by fenichel in and extended by jones and kopell in .the blowup method is also employed . herewe extend the existence result in in several ways .we show that for sufficiently small there are at least two travelling pulses , hence a fast pulse and a slow pulse , for speeds . also , we remove an important hypothesis used in , one which can only be verified by numerical integration of a related ode system .( this hypothesis is stated and discussed in section [ discussion ] . )our proof is for a general class of firing functions which includes the specific for which faye states his theorem .further , we use a method which allows , in some sense , a larger range of than seems possible with geometric perturbation .this will be made precise in the statements of our theorems .we believe , based on our past experience with a similar problem , that it is feasible to check existence rigorously for particular positive values of using precise numerical analysis based on interval arithmetic , but we have not carried out such a check . this will be explained further in section [ discussion ] .we now mention two well - known predecessors of the kilpatrick - bressloff and faye models . in 1992 ,ermentrout and mcleod studied the equation as above , is positive , bounded , and increasing . since there is no feedback in the equation , ( [ emc ] )supports only traveling fronts , where is monotone . 
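before turning to the travelling-wave reduction discussed below, it can be instructive to integrate the system ([-1]) directly in time. the following sketch does this on a one-dimensional grid with an explicit euler step; the normalized exponential kernel is taken as j(x) = 0.5 exp(-|x|), the firing rate as a smooth sigmoid, and all parameter values (threshold, gain, beta, epsilon) are assumed for illustration only and are not the values used in the papers cited here.

```python
import numpy as np

# assumed illustrative parameters (not taken from the cited papers)
theta, gain = 0.3, 20.0          # firing-rate threshold and steepness
beta, eps = 4.0, 0.05            # depression strength and time scale

def S(u):
    """Smooth sigmoidal firing rate."""
    return 1.0 / (1.0 + np.exp(-gain * (u - theta)))

# spatial grid and normalized exponential kernel J(x) = 0.5*exp(-|x|)
L, n = 100.0, 1024
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
xk = np.arange(-15.0, 15.0 + dx, dx)           # truncated kernel support
J = 0.5 * np.exp(-np.abs(xk))

def convolve(f):
    """Discrete approximation of the integral of J(x-y) f(y) dy."""
    return np.convolve(f, J, mode="same") * dx

# initial data: a localized bump of activity, no depression yet
u = 0.8 * np.exp(-x**2)
q = np.ones_like(x)

dt, n_steps = 0.05, 4000
for _ in range(n_steps):
    s = S(u)
    u = u + dt * (-u + convolve(q * s))
    q = q + dt * eps * (1.0 - q - beta * q * s)

# depending on the parameters one may (or may not) observe activity
# travelling away from the initial bump; the wave speed can be read off
# by tracking the location of the leading edge over time.
print("max activity:", float(u.max()), "min depression:", float(q.min()))
```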
in the landmark paper ermentrout and mcleod proved the existence of fronts for a wide variety of symmetric positive weight functions and firing rates (their work applied to a more general equation ) subsequently , in , pinto and ermentrout introduced the needed negative feedback in order to get pulses .their system is{c}\frac{\partial u\left ( x , t\right ) } { \partial t}=-u - v+\int_{-\infty}^{\infty } j\left ( x - y\right ) s\left ( u\left ( t , y\right ) \right ) dy\\ \frac{1}{\varepsilon}\frac{\partial q\left ( x , t\right ) } { \partial t}=u-\gamma v \end{array } \right . .\label{pe}\ ] ] they analyzed this system primarily for the case where is the heaviside function and is a constant representing a firing threshold . while some partial results have been obtained recently by scheel and faye ( see section [ discussion ] ) , we are not aware of any existence proof for pulses which covers all reasonable smooth functions we discuss what we mean by reasonable in section [ discussion ] , where we also indicate why our method does not appear to apply to this model , and why we expect that ( [ pe ] ) supports a richer family of bounded traveling waves than exist for ( [ -1 ] ) .travelling pulse solutions of ( [ -1 ] ) with ( [ -2 ] ) are shown to satisfy a system of ode s by letting and computing and .we find that {c}u^{\prime}=\frac{v - u}{c}\\ v^{\prime}=w\\ w^{\prime}=b^{2}\left ( v - qs\left ( u\right ) \right ) \\ q^{\prime}=\frac{\varepsilon}{c}\left ( 1-q-\beta qs\left ( u\right ) \right ) .\end{array } \right . \label{1}\ ] ] we will denote solutions of this system by and we look for values of for which there is a non - constant solution such that and both exist and are equal .the orbit of such a solution of ( [ 1 ] ) is called homoclinic . in the language of dynamical systems, is a pulse solution of ( [ -1 ] ) if and only if the orbit of is homoclinic .we make the following assumptions on .[ c0 ] the function is positive , increasing , bounded , and has a continuous first derivative [ c0a]the function has one local maximum followed by one local minimum , and no other critical points .[ c1] is such that the system ( [ 1 ] ) has exactly one equilibrium point , say .[ c2]the function is also such that the fast system{c}u^{\prime}=\frac{v - u}{c}\\ v^{\prime}=w\\ w^{\prime}=b^{2}\left ( v - q_{0}s\left ( u\right ) \right ) \end{array } \right . \label{2}\ ] ] has three equilibrium points , , and with [ c3] for convenience we will assume that on then conditions [ c0]-[c2 ] imply that , and we will denote solutions of ( [ 2 ] ) by the local minimum of will be denoted by in specific ranges of and are given so that these conditions are satisfied by the function given in ( [ -3 ] ) . in figure [ figurea ]we show the graphs of , ( the nullcline ) , and , when is given by ( [ -3 ] ) .we use the same parameter values as were chosen for illustration in . 
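to make the reduction to the system ([1]) concrete, the sketch below locates the equilibrium numerically, computes the unstable direction of the linearization written out in the appendix, and integrates one trajectory on the unstable manifold for a trial speed c. the firing-rate parameters, b, beta and epsilon are assumed illustrative values (chosen so that the full system appears to have a single equilibrium, as in condition [c1]); the code illustrates the phase-space setting only and proves nothing.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import solve_ivp

# assumed illustrative parameters (same as in the previous sketch)
theta, gain, b, beta, eps = 0.3, 20.0, 1.0, 4.0, 0.05

def S(u):
    return 1.0 / (1.0 + np.exp(-gain * (u - theta)))

def dS(u):
    s = S(u)
    return gain * s * (1.0 - s)

# equilibrium of (1): v0 = u0, w0 = 0, q0 = 1/(1+beta*S(u0)), u0 = q0*S(u0)
u0 = brentq(lambda u: u - S(u) / (1.0 + beta * S(u)), 1e-6, theta)
q0 = 1.0 / (1.0 + beta * S(u0))
equilibrium = np.array([u0, u0, 0.0, q0])

def rhs(t, y, c):
    u, v, w, q = y
    return [(v - u) / c,
            w,
            b**2 * (v - q * S(u)),
            (eps / c) * (1.0 - q - beta * q * S(u))]

def unstable_direction(c):
    """Eigenvector of the positive eigenvalue of the linearization at the
    equilibrium (the Jacobian written out in the appendix)."""
    A = np.array([
        [-1.0 / c, 1.0 / c, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [-b**2 * q0 * dS(u0), b**2, 0.0, -b**2 * S(u0)],
        [-(eps / c) * beta * q0 * dS(u0), 0.0, 0.0,
         -(eps / c) * (1.0 + beta * S(u0))]])
    vals, vecs = np.linalg.eig(A)
    vec = np.real(vecs[:, np.argmax(vals.real)])
    return vec if vec[0] > 0 else -vec          # branch entering u > u0

c_trial = 1.0
stop = lambda t, y, c: 10.0 - abs(y[1])         # stop if the orbit blows up
stop.terminal = True
y0 = equilibrium + 1e-6 * unstable_direction(c_trial)
sol = solve_ivp(rhs, (0.0, 200.0), y0, args=(c_trial,), events=stop,
                rtol=1e-9, atol=1e-12)
print("u along the unstable-manifold trajectory:",
      float(sol.y[0].min()), "to", float(sol.y[0].max()))
```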
]we can now state our first main result .[ thm1a]if conditions [ c0]- [ c3 ] are satisfied , and is positive and sufficiently small , then there are at least two positive values of say such that ( [ 1 ] ) has a non - constant solution satisfying in order to state our remaining theorems it is convenient first to give some basic information about the fast system , ( [ 2 ] ) .we state this information as a pair of lemmas , which will be used in proving our theorems .their proofs are given in the appendix .[ lem1]if conditions [ c2 ] and [ c3 ] are satisfied , then for each the equilibrium point of ( [ 2 ] ) is a saddle point , with a one dimensional unstable manifold and a two dimensional stable manifold there is , for each a unique solution of ( [ 2 ] ) with for all and satisfying the conditions {c}u_{0,c}\left ( 0\right ) = u_{m}\\ w_{0,c}>0\text { on } ( -\infty,0 ] .\end{array } \right .\label{3}\ ] ] further , there is a unique such that on and in other words , the branch of pointing into the positive octant is a heteroclinic orbit connecting to also , on , which implies that and .this solution is called a front for ( [ 2 ] ) .a front for ( [ 2 ] ) can be characterized as a solution of this equation which exists on is nonconstant and bounded , and satisfies on [ lem2]if then on , and if then is initially positive and has a unique zero .also , has a unique zero , and if and is the zero of , where is a maximum , then and suppose finally that for some on an interval and then for any , [ rem1]we conjecture that the condition would imply the same conclusion , but we have not been able to prove this .the positive number defined in lemma [ lem1 ] plays an important role throughout this paper .[ thm1]suppose that conditions [ c0]- [ c3 ] are satisfied .suppose also that there is a , such that if is the unique zero of ( which exists by lemma [ lem2 ] ) , then assume as well that for some there is a solution of ( [ 1 ] ) with which has the following properties:{c}u_{\varepsilon , c_{1}}^{\prime}>0\text { on some interval \thinspace}(-\infty , t_{1})\text { and } u_{\varepsilon , c_{1}}^{\prime\prime}\left ( t_{1}\right ) < 0\\ u_{\varepsilon , c_{1}}^{\prime}<0\text { on some interval } ( t_{1},t_{3}]\text { and } u_{\varepsilon , c_{1}}\left ( t_{3}\right ) = 0 \end{array } \right . \tag{ii}\label{ii}\ ] ] then for the given there are two values of say and such that ( [ 1 ] ) has a homoclinic orbit .figure [ fig2a ] below includes a graph of the orbit of a solution satisfying ( [ i ] ) and ( [ ii ] ) projected onto the plane , with the points and marked ( as well as an additional point which is explained later ) . the other solution shown in that figure satisfies ( [ i ] ) but not ( [ ii ] ) .theorem [ thm1a ] is implied by theorem [ thm1 ] and the following result .[ thm2]if conditions [ c0]- [ c3 ] are satisfied then there is a satisfying the conditions in the second sentence of theorem [ thm1 ] .further , with this if is sufficiently small , then the solution of ( [ 1 ] ) with satisfies ( [ i ] ) and ( [ ii ] ) of theorem [ thm1 ]. 
it will follow from the proofs of these results that as .the following result is all we have proved about the asymptotic behavior of [ thm3] however there is an independent of such that if there is a homoclinic orbit for then for a given pair , the hypotheses of theorem [ thm1 ] can be verified by checking one solution of ( [ 2 ] ) at and one solution of ( [ 1 ] ) , with the given and for the specific model considered in , standard numerical analysis ( non - rigorous ) easily finds specific values of where these hypotheses are apparently satisfied satisfies the conditions in theorem [ thm1 ] .if the conjecture in remark [ rem1 ] is true then it appears that would work . ] . in the discussion sectionwe describe how this could , in principle , be checked rigorously using uniform asymptotic analysis near the equilibrium points of ( [ 2 ] ) and [ 1 ] ) , and then a rigorous numerical ode solver ( using interval arithmetic ) over two compact intervals .we cite a paper where a similar procedure was followed successfully , but we have not attempted it here . in only one homoclinic solution is found , and there is an extra hypothesis about the system ( [ 2 ] ) .( hypotheses 3.1 ) as far as we know , this hypothesis can only be checked by numerically solving the system ( [ 2 ] ) . we discuss this further in section [ discussion ] .we need two simple preliminary results about the behavior of solutions . [ prop1]for any the regions and are positively invariant open sets for the system ( [ 1 ] ) .we are assuming that for all hence , if and if therefore is positively invariant .further , if then if and if the result follows .note as well that because is bounded , all solutions of ( [ 1 ] ) exist on [ prop0 ] if is a solution of ( [ 1 ] ) , and for some then either or this follows from condition [ c1 ] , which implies that the graph of the decreasing function in the plane , where , passes under the point .( see figure [ figurea ] . ) in the first , and longest , part of the proof of theorem [ thm1 ] we show that there is a fast pulse , with speed which tends to as tends to zero . in the second partwe look for a slow pulse , with a speed which tends to zero as tends to zero .we will show that for any possible homoclinic orbit , .we look for homoclinic orbits such that , as well , in . in searching for the fast solution we will consider for each a certain uniquely defined solution such that .we will show that there is a nonempty bounded set of positive values of , called , such that , among other properties of , either exceeds at some point , or becomes negative .we then examine the behavior of where .the goal is to show that .this is done be eliminating all the other possible behaviors of , often by showing that a particular behavior implies that all values of close to are not in .the following result is basic to our analysis of the full system ( [ 1 ] ) .the proof is routine and again left to the appendix .[ lem2b ] suppose that conditions [ c0]- [ c3 ] hold , and let be the unique equilibrium point of ( [ 1 ] ) .then for any and the system ( [ 1 ] ) has a one dimensional unstable manifold at , say with branch starting in the region if is a solution lying on this manifold , then for large negative and also , while if then for large negative the invariant manifold depends continuously on in ( the meaning of continuity here is made clear in the text below . 
) finally , if is the positive eigenvalue of the linearization of ( [ 1 ] ) around then for each and the following proposition follows trivially from ( [ 1 ] ) and will be used a number of times , often without specific mention .[ prop2 ] we use the fourth item in this list to prove [ lem7]for any and if is a solution on and on an interval ] hence but then this again implies that on some interval to the left of contradicting the definition of this completes the proof of lemma [ lem7 ] .[ lem2c ] if is a solution on then on an interval ] , then on this interval . ) since as long as , lemma [ lem7 ] implies that as long as , proving lemma [ lem2c ] .hence the conditions and on ] where since in such an interval and also , as long as we can consider , , and as functions of say that , , and .then{c}u^{\prime}(v)=\frac{v - u(v)}{cw(v)}\\ w^{\prime}(v)=\frac{b^{2}(v - q(v)s(u(v))}{w(v)}\end{array } \right .\label{a1}\ ] ] we compare with the solution when let then we can write , , and .the equations become {c}u_{1}^{\prime}(v)=\frac{v - u_{1}(v)}{cw_{1}(v)}\\ w_{1}^{\prime}(v)=\frac{b^{2}(v - q_{0}s(u_{1}(v))}{w_{1}(v)}\end{array } \right .\label{a2}\ ] ] since ( lemma [ lem2b ] ) , it is seen by considering eigenvectors of the linearization of ( [ 1 ] ) around is given in appendix b. ] that for sufficiently close to ( i.e. for large negative ),{c}u\left ( v\right ) < u_{1}\left ( v\right ) \\w\left ( v\right ) > w_{1}\left ( v\right ) \end{array } \right . .\label{a3}\ ] ] if , at some first one of these inequalities should fail while the other still holds , then a contradiction results from comparing ( [ a1 ] ) and ( [ a2 ] ) , because and is increasing .for example , if and , then ( [ a1 ] ) and ( [ a2 ] ) imply that , a contradiction because on . also ,if and then , since as long as .this is also a contradiction of the definition of .if both inequalities fail at the same then there is still a contradiction because hence , if for then ( [ a3 ] ) holds in this interval .this implies that for any if has a first zero at then in the proof of lemma [ lem2 ] it is shown that and combining these shows that if then this contradiction completes the proof of lemma [ lem3a ] .[ lem3]if then , and as this follows from lemma [ lem2 ] and the comparison used to prove lemma [ lem3a ] .we are now ready to apply a shooting argument to obtain the fast pulse . still with as in theorem [ thm1 ] , for each let and set \text { and } u < u_{0}\text { on } ( t_{2},t_{3}].\right\}\end{aligned}\ ] ] ( see figures [ fig2a ] . 
) [ lem5 ] is an open subset of the half line .suppose that and choose as in the definition of note from ( [ 1 ] ) that if then there is a such that also , if hence and also , ( [ 1 ] ) implies that if and then .since is a smooth function of , uniformly for in , say , , ] this lemma eliminates the graph in figure [ figcc]-a .suppose that is not defined .then on since is the only equilibrium point of ( [ 1 ] ) , this implies that for some , and then these inequalities hold at for nearby and by proposition [ prop1 ] , for hence on and so contradicting the definition of therefore is defined .we now show that again assume that and suppose that if then is a local maximum of which is not possible because is the first zero of hence at , if then at by lemma [ lem7 ] .this implies that changes sign from negative to positive at again a contradiction of the definition of hence at or , since also , in some interval however is bounded by and does not tend to a limit above therefore changes sign at some .since on , ] but .then first consider the case on . ] with we claim that on any such half - closed interval in which this follows because , by proposition [ prop2 ] , at any point where and we next show that on any interval ] then if then in some interval in this case , for close enough to changes sign after but before or else crosses and back again , and such can not lie in a contradiction . hence , but then , because in the region where and hence , and again in an interval to the right of but before a contradiction as before .the only other possibility contradicting lemma [ lem9 ] is that there is a first where and we consider two cases : ( a ) and and ( b ) first consider ( a ) . in an interval , and and so at if then also , but is impossible because it means that even for nearby after but before therefore at , and then but on the nullcline with , and this implies that has a local minimum at , whereas we know that in this contradicts the definition of turning to case ( b ) , we now have that at thus if then to the right of as before , if is close to then either crosses twice , or does nt reach the region before both of which mean that .if then is again an equilibrium point .the third possibility , implies that ( ii ) of lemma [ lem5a ] is satisfied , and thus again gives a contradiction .this completes the proof of lemma [ lem9 ] .if on then and on , and is homoclinic .this proves lemma [ lem6 ] .thus , for if is not homoclinic then exists with and on . ] and either or , for otherwise and this has already been ruled out . therefore if then and for otherwise nearby values of are once again not in if is not homoclinic and , then there must be a first with , and suppose that this is the case and also ( this is pictured in figure [ figcc]-c . ) then if then ( iii ) of lemma ( [ lem5a ] ) applies and gives a contradiction .hence then at and ( this is pictured in figure [ figcc]-d . butthis is case ( ii ) of lemma [ lem5a ] and so also impossible .we have established that if is not homoclinic ( with on ) then for large and this is only possible if is homoclinic ( with and for large ) .this proves lemma [ lem4 ] . 
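the shooting argument used above to produce the fast pulse can be mimicked numerically. the sketch below classifies the unstable-manifold trajectory for a trial speed by whether u eventually falls back below u0 or runs far above the upper fast equilibrium, and then bisects on c between two speeds with different behaviour. this crude dichotomy is an assumed stand-in for the precise definition of the shooting set used in the proof, and the parameters are the same illustrative ones as in the previous sketch, so the value printed is only indicative of how a candidate speed could be located in practice.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import solve_ivp

theta, gain, b, beta, eps = 0.3, 20.0, 1.0, 4.0, 0.05
S = lambda u: 1.0 / (1.0 + np.exp(-gain * (u - theta)))
dS = lambda u: gain * S(u) * (1.0 - S(u))
u0 = brentq(lambda u: u - S(u) / (1.0 + beta * S(u)), 1e-6, theta)
q0 = 1.0 / (1.0 + beta * S(u0))

def rhs(t, y, c):
    u, v, w, q = y
    return [(v - u) / c, w, b**2 * (v - q * S(u)),
            (eps / c) * (1.0 - q - beta * q * S(u))]

def start_point(c, delta=1e-6):
    A = np.array([[-1 / c, 1 / c, 0, 0], [0, 0, 1, 0],
                  [-b**2 * q0 * dS(u0), b**2, 0, -b**2 * S(u0)],
                  [-(eps / c) * beta * q0 * dS(u0), 0, 0,
                   -(eps / c) * (1 + beta * S(u0))]])
    vals, vecs = np.linalg.eig(A)
    vec = np.real(vecs[:, np.argmax(vals.real)])
    return np.array([u0, u0, 0.0, q0]) + delta * (vec if vec[0] > 0 else -vec)

def classify(c, u_high=2.0, t_max=400.0):
    """'escapes' if u runs far above the upper fast equilibrium,
    'returns' if u falls back below u0 (a heuristic dichotomy)."""
    hi = lambda t, y, c: y[0] - u_high
    lo = lambda t, y, c: y[0] - (u0 - 1e-3)
    hi.terminal = lo.terminal = True
    lo.direction = -1
    sol = solve_ivp(rhs, (0.0, t_max), start_point(c), args=(c,),
                    events=[hi, lo], rtol=1e-8, atol=1e-10)
    if sol.t_events[0].size:
        return "escapes"
    if sol.t_events[1].size:
        return "returns"
    return "undecided"

# scan c, bracket a change of behaviour, then bisect towards the pulse speed
cs = np.linspace(0.2, 3.0, 15)
labels = [classify(c) for c in cs]
for c_lo, c_hi, lab_lo, lab_hi in zip(cs[:-1], cs[1:], labels[:-1], labels[1:]):
    if lab_lo != lab_hi and "undecided" not in (lab_lo, lab_hi):
        for _ in range(40):
            cm = 0.5 * (c_lo + c_hi)
            c_lo, c_hi = (cm, c_hi) if classify(cm) == lab_lo else (c_lo, cm)
        print("candidate pulse speed near c =", 0.5 * (c_lo + c_hi))
        break
else:
    print("no change of behaviour found in the scanned range of c")
```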
to complete the proof of theorem [ thm1 ] we look for a second homoclinic orbit , with againwe adapt the method in .it is stated so as to be useful in the proofs of theorems [ thm2 ] and [ thm3 ] , as well as theorem [ thm1 ] .[ lem10]there are and both independent of such that if and then the solution remains in the region on and crosses since the proof uses some of the easier parts of the proof of lemma [ lem2 ] , it is included in the appendix . from here to the end of this section the parameters and remain as in the previous subsection .lemma [ lem10 ] implies that if we extend to with otherwise the same definition as above , then this suggests that corresponds to a homoclinic orbit .the problem with this argument is that the concept of front in the sense used in the method of geometric perturbation , breaks down for small the slow homoclinic orbit is not close , even up to the first zero of to the front found when more precisely , our proof of the first sentence of lemma [ lem6 ] is no longer valid , because we can not assert that the first zero of occurs with hence we must modify our shooting set on the axis .this requires several steps .our argument from here no longer refers to a point where changes sign , but instead considers solutions such that changes sign .let ~|~\text{there is a } \tau_{1}>0\text { such that } q_{c}^{\prime}<0\text { on } \left ( -\infty,\tau_{1}\right ) , \right .\\ & \left .\text { } q_{c}^{\prime}\left ( \tau_{1}\right ) = 0\text { , and } u_{c}^{\prime}\left ( \tau_{1}\right ) < 0\right\ } .\end{aligned}\ ] ] our argument does not require that if then has only one zero in though numerically this appears to be the case . recall that and were chosen so that has a unique zero . as in the proof of lemma [ lem9] , this implies that has a unique zero , say and hence .also , lemma [ lem10 ] shows that there is an interval which contains no points of let [ lem11]there is a such that on and if on then there is a such that and from the continuity of with respect to the same is true for if is sufficiently close to in particular , again on .but then contradicting the definition of .therefore a first is defined such that .then also , by proposition [ prop2 ] , if then by the implicit function theorem , is defined for nearby as the first zero of with and on contradicting the definition of hence . if then is a local minimum of contradicting the definition of if then and since and and this implies that on an interval again a contradiction .hence completing the proof of lemma [ lem11 ] .thus , in some interval this result implies that however the interval \subset\sigma. 
] near to are not in since the corresponding solutions on must have a change of sign of from positive to negative after let we claim that is a homoclinic orbit .the proof uses techniques very similar to those above .first observe that and therefore is defined as in the definition of then use the following result .[ lemslow2]if then on any interval ] further , on .let lemma [ lemslow2 ] implies the existence of suppose there is a first with from the definitions of and on since is decreasing and if can not increase indefinitely .hence there is a with a contradiction then results from ( iv ) of lemma [ lem5a ] .now apply the technique of lemma [ lem5 ] , including use of proposition [ prop2 ] and lemma [ lem5a ] , to show that and on .in particular , lemma [ lem5a ] is used to show that there is no ( in fact , no at all ) with it follows that on and so indeed , is homoclinic .this completes the proof of theorem [ thm1 ] .as mentioned above , theorem [ thm1a ] follows from theorems [ thm1 ] and [ thm2 ] . in theorem [ thm2] is not fixed . also , is any number in which however is fixed at this stage for the rest of this section .let ] then in some neighborhood of exists for and is a continuous function of from the third sentence of lemma [ lem2 ] it follows that can be chosen in such that if then where is the unique zero of this proves the first assertion of theorem [ thm2 ] .we now choose in this way .[ lem2a]there is an and a such that if and ] ( hence , satisfies the the first condition of theorem [ thm1 ] . )further , can be chosen so that satisfies conditions ( [ i ] ) and ( [ ii ] ) of theorem [ thm1 ] . from the choice of lemma [ lem2 ]implies that for any there is a such that if then in the interval ] with then implies the first conclusion of the lemma .the remaining assertion of lemma [ lem2a ] follows by similar arguments .we have now proved theorems [ thm1 ] , [ thm2 ] , and [ thm1a ] , in that order . to prove theorem [ thm3 ] apply a continuity argument similar to that just used to show that for any there is an such that if and the pair satisfy the hypotheses on in theorem [ thm1 ] .it follows that pulses exist for some and some but lemma [ lem10 ] implies that theorem [ thm3 ] follows .as stated earlier , there is an additional hypothesis in the existence result given in , namely hypothesis 3.1 in that paper . this hypothesis is interesting in a broader context , and we will include some comments on its relation to the well - known pde model of fitzhugh and nagumo . to state this hypothesis we need to introduce a basic tool in the method of geometric perturbation , the so - called singular solution . the singular solution of ( [ 1 ] ) is a continuous piecewise smooth curve in consisting of four smooth pieces .the first piece is the front with speed found in lemma [ lem1 ] .( recall that fronts were defined just after the statement of this lemma . ) in figure [ figfront ] the green line segment is the projection of the graph of the front onto the plane .the second piece of the singular solution is a segment of the nullcline ( with ) as shown in figure [ figslow ] .it is obtained from ( [ 1 ] ) by letting formally setting in the resulting system of ode s for and solving the resulting set of one differential equation and three algebraic equations , one of which is for more information on this segment , and the singular solution in general , see .we do nt need to say more about this segment here . 
to define the third part of the singular solution ( crucial in hypothesis 3 of ), we consider the fast system ( [ 2 ] ) , but with replaced by as a parameter ranging between and {c}u^{\prime}=\frac{v - u}{c}\\ v^{\prime}=w\\ w^{\prime}=b^{2}\left ( v - qs\left ( u\right ) \right ) \end{array } \right .\label{fast}\ ] ] for each ] all tend to zero as we suspect that a similar picture holds for the model of faye , but it is not clear that our analysis is able to prove this much . the fitzhugh - nagumo condition that is small would be replaced here by requiring that is small .we said in the introduction that there was only a partial existence result for the model in .we were referring there to a recent paper by faye and scheel , which uses an interesting extension of the geometric perturbation method to infinite dimensional spaces to handle this kind of problem .the method is powerful because it allows an extension beyond the sorts of kernel which reduce the problem to an ode .faye and scheel remark that their paper appears to apply to the pinto - ermentrout model .this is true , but with a limitation .a key hypothesis in their paper is that for the singular solution , as described above , the jump down occurs above the knee . in a private communication professor ermentrouthas observed that while this is true for the pinto - ermentrout model in some parameter ranges , it is also common for the down - jump to occur at the knee .this is why we characterized their result as " partial for pinto - ermentrout .unfortunately , we have not been able to make our approach work for pinto - ermentrout .the reason may be related to an important difference between ( [ 1 ] ) and the equivalent set of ode s obtained from ( [ pe ] ) .the linearization of ( [ 1 ] ) around its equilibrium point has only real eigenvalues , for any while the equivalent linearization for ( [ pe ] ) has complex eigenvalues for a range of positive .thus , a homoclinic orbit would oscillate around equilibrium .a few oscillations could occur even for the very small values of where the eigenvalues are real .the final steps in our proof above clearly do not allow such oscillations .in [ ] we showed that there was co - existence of complex roots and a homoclinic orbit for fitzhugh - nagumo , and observed that work of evans , fenichel and feroe then implied the existence of many periodic solutions and a form of chaos .this leads to a conjecture that the pinto - ermentrout model supports a richer variety of bounded solutions than the model of faye .the new solutions are probably unstable , however , so their physical importance is unclear .a local stability result for the fast solution was proved by faye .his proof depends crucially on analysis of both the front and back of his solution , as described above .the analysis of the back is less standard because of the assumption that the jump down is at the knee .here he relies on previous work on similar problems .presumably if the jump were above the knee ( where , however his existence proof is not claimed to apply ) , the stability analysis would be easier .condition ( [ i ] ) says that the solution is on the unstable manifold of ( [ 1 ] ) at the equilibrium point to check condition ( [ ii ] ) we must follow until a point where our proposal for doing this is based on , where a similar procedure was followed for the well known equations of lorenz . 
using a standard ode solver we can arrive at a conjectured value for to begin analyzing we would expand the solutions around a high order expansion of results in algebraic expressions which are then evaluated using rigorous numerical analysis based on interval arithmetic . with this techniqueone hopes to show that enters a very small box near . for the example in this box had a diameter of about this gives us an initial estimate accurate to ( say ) 68 significant digits .from there , a rigorous ode solver , as described for example in , would be used to continue until whether this can be done can not be determined ahead of time .one has to run the solver .the number of guaranteed accurate digits decreases as the integration proceeds .we then hope that some significant digits would be maintained long enough to reach it must be checked along the way that based on the great sensitivity of the lorenz equations to the initial conditions , we expect that this would be easier for the faye model than it was in . we are not aware of any proposal to try to estimate for the method of geometric perturbation . 99 alefeld , g. and mayer , g. , interval analysis , theory and applications , _ j. comp. appld . math _* 121 * ( 2000 ) , 421 - 464 .dumortier , f. , llibre , c. , and arts , c. _ qualitative behavior of planar systems _ , springer , 2006 .ermentrout , g. b. and mcleod , j. b. , existence and uniqueness of waves for a neural network , _ proc .edinburgh sect . a _ * 123 * , 451 - 478 . faye , g , existence and stability of travelling pulses in a neural field equation with synaptic depression , _ siam j. of dynamical systems _ * 10*(2013 ) , 147 - 160 .faye , g. and scheel , a. , existence of pulses in excitable media with nonlocal coupling , _ advances in mathematics _ * 270 * 2015 , 400 - 456 .fenichel , n. , geometric perturbation theory for ordinary differential equations , _* 31 * ( 1979 ) , 53 - 98 .hassard , b. , hastings , s. p. , troy , w. c. , zhang , j. , a computer proof that the lorenz equations have chaotic solutions .math . letters _ * 7 * 1994 , 79 - 63 .hastings , s. p. , on wave solutions of the hodgkin - huxley equations , _ arch .anal _ * 60 * 1972 , 229 - 257 .hastings , s. p. , single and multiple pulse waves for the fitzhugh - nagumo equations , _siam j. applied math _ * 42 * 1982 , 247 - 260 .hastings , s. p. and mcleod , j. b. , _ classical methods in ordinary differential equations , _ amer .hartman , p. , _ ordinary differential equations _ , classics in applied mathematics , siam , 2002 .hodgkin , a.l . ,huxley , a.f ., a quantitative description of membrane current and its application to conduction and excitation in nerve , _ journal of physiology _ * 117 * 1952 , 500 - 544 .jones , c. k. r. t. , kopell , n. , langer , r. , construction of the fitzhugh - nagumo pulse using differential forms , _ patterns and dynamics in reactive media _ , i m a volumes in mathematics and its applications * 37 * ( 1991 ) , 101 - 116 .kilpatrick , z. and bressloff , p. , effects of synaptic depression and adaptation on spatio - temporal dynamics of an excitatory neural network ._ physica d. _ * 239 * 2010 , 547 - 560 .krupa , m. , sandstede , b. , and szmolyan , p. , fast and slow waves in the fitzhugh - nagumo equations , _* 133 * 1997 , 49 - 97 .pinto , d. , ermentrout g. b. , spatially structured activity in synaptically coupled neuronal networks : i traveling fronts and pulses , _siam j. applied math _ * 62 * 2001 , 206 - 225 .rauch , j. , smoller , j. 
, qualitative theory of the fitzhugh - nagumo equations , _ advances in mathematics _ * 27 * 1978 , 12 - 44 .we prove these results together .the linearization of ( [ 2 ] ) around is the system with {ccc}-\frac{1}{c } & \frac{1}{c } & 0\\ 0 & 0 & 1\\ -b^{2}q_{0}s^{\prime}\left ( u_{0}\right ) & b^{2 } & 0 \end{array } \right ) .\ ] ] the characteristic polynomial of is recall that .condition [ c2 ] implies that the equation has three solutions , and by condition [ c0a ] , it follows that therefore also , and both and are positive for hence has one real positive eigenvalue . also , which implies that has two real negative eigenvalues .further , it is easily seen that there is an eigenvector corresponding to the positive eigenvalue of which points into the positive octant .if is a solution lying on the branch of the unstable manifold of ( [ 2 ] ) at , then initially , and are positive .it follows from the first two equations of ( [ 2 ] ) that as long as .also , for and so while is in . ] and suppose that on a maximal interval where possibly then in we can consider and as functions of , letting and this defines the functions and on the interval and for in this interval , [ lem12 ] if then in the interval , {c}u_{d_{2}}<u_{d_{1}}\\ w_{d_{2}}>w_{d_{1}}\end{array } \right . .\label{a1a}\ ] ] the first sentence follows by proving ( [ a1a ] ) on the smaller of the two intervals .we first show that these inequalities hold on some initial interval .this is seen by comparing unit eigenvectors corresponding to the positive eigenvalues and of the linearizations of ( [ 2 ] ) around .suppose that for a particular the eigenvector corresponding to is then inequalities ( [ a1a ] ) follow near if for this we turn to the characteristic polynomial of given in ( [ 17 ] ) but now denoted by .it is easier to work with noting that the positive eigenvalue of is determined by the equation and the condition then since and for . , it follows that if but so indeed , therefore ( [ a1a ] ) holds on some interval suppose that the first inequality fails at a first while the second holds over . ] a similar argument eliminates the other possibilities , using the fact that is increasing , and this completes the proof of the lemma [ lem12 ] .[ cor1]if then [ lem12b]if , and then either or suppose that and lemma [ lem12 ] implies that and so a contradiction because is the first zero of next we must show the existence of [ lem13a ] for sufficiently large from ( [ a01 ] ) , in the interval where is increasing , recall that in as long as and this implies that for large , in the interval where grows rapidly and so , in turn , does . in particular , before and this implies that now we wish to show that for small before it is in this step that condition [ c3 ] is used .[ lem14 ] there is a such that for any if and then for and leaves the interval . if then crosses , while if then crosses .let .since if then for from which follows that must leave before [ lem15]if on ] then on this interval . with , so if then also , if in then since as , the first sentence of the lemma follows and the second is similar. 
based on this lemma , we consider , in addition to ( [ 2 ] ) , the system{c}v^{\prime}=w\\ w^{\prime}=b^{2}\left ( v - q_{0}s\left ( v\right ) \right ) \end{array } \right ., \label{10}\ ] ] this system has equilibrium points at and and a standard phase plane analysis , assuming condition [ c3 ] , shows that the positive branch of unstable manifold of ( [ 10 ] ) at is homoclinic .also we consider the system {c}v^{\prime}=w\\ w^{\prime}=b^{2}\left ( v - q_{0}s\left ( v-\hat{c}\right ) \right ) \end{array } \right . ,\label{11}\ ] ] for small choose so small that this system also has three equilibrium points , and a homoclinic orbit based at the left most of these .this orbit entirely encloses the homoclinic orbit of ( [ 10 ] ) .finally we consider the system {c}v^{\prime}=w\\ w^{\prime}=b^{2}\left ( v - q_{0}s\left ( v+\hat{c}\right ) \right ) \end{array } \right . , \label{12a}\ ] ] for sufficiently small this system also has a homoclinic orbit .this orbit lies entirely inside the homoclinic orbit of ( [ 10 ] ) .however , the lower left branch of the unstable manifold of this system crosses the homoclinic orbits of ( [ 10 ] ) and ( [ 11 ] ) , and this branch will play a role below .( see figure 12 . )[ figure77 ] from now on , and will denote the unique solutions of the systems ( [ 10 ] ) , ( [ 11 ] ) , and ( [ 12a ] ) respectively which lie on the homoclinic orbits of those systems and satisfy in each of these cases , if is homoclinic then is bounded by .this follows from the definition of in lemma [ lem14 ] , the results of which also apply to ( [ 11 ] ) and ( [ 12a ] ) , with the same proofs .if exceeds then is not bounded .recall that in lemmas [ lem1 ] and [ lem2 ] , denoted the unique solution on the unstable manifold such that and on . ] [ lem13]condition [ 13 ] implies that if on , ] .this follows because is a continuous function of lemmas [ lem13a ] and [ lem13 ] imply that these sets are nonempty , and their definitions and proposition [ prop1 ] imply that they are disjoint . since the interval is connected , the existence of some positive which is not in either set .its uniqueness follows from the corollary to lemma 12 . from the definition of on that there is an with and on then and because on giving this contradiction completes the proof of lemma [ lem1 ] . to complete lemma [ lem2 ] we must prove the assertions in the third and fourth sentences . in the third sentence , when and at the first zero of so the implicit function theorem and the comparison ( [ a1a ] ) imply the limit statement .for the last sentence of lemma [ lem2 ] , it suffices to prove that suppose instead that since lemma [ lem12 ] and the hypotheses of lemma [ lem2 ] imply that hence which contradicts the definition of .this completes the proof of lemma [ lem2 ] .this result is about system ( [ 1 ] ) .however the argument in lemma [ lem14 ] , initially about system ( [ 2 ] ) , applies equally well to ( [ 1 ] ) , so if increases monotonically to above then crosses followed by hence we can assume that if is the first zero , if any , of , then on .as earlier in obtaining ( [ 13 ] ) , it follows that if , then on .therefore , uniformly on ] and for .this is proved by the same argument which lead to ( [ x1 ] ) , .now consider the equation obtained from ( [ 1 ] ) by formally setting in ( [ 1 ] ) , namely where because ( [ 1 ] ) has only one equilibrium point , if hence there is a such that if for some and , then ( here is independent of the particular solution involved . 
)lemma [ lem10 ] then follows from ( [ x1 ] ) and ( [ x2 ] ) .suppose that the linearization of ( [ 1 ] ) around is then {rrrr}-\frac{1}{c } & \frac{1}{c } & 0 & 0\\ 0 & 0 & 1 & 0\\ -b^{2}q_{0}s^{\prime}\left ( u_{0}\right ) & b^{2 } & 0 & -b^{2}s\left ( u_{0}\right ) \\-\frac{\varepsilon}{c}\beta q_{0}s^{\prime}\left ( u_{0}\right ) & 0 & 0 & -\frac{\varepsilon}{c}\left ( 1+\beta s\left ( u_{0}\right ) \right ) \end{array } \right ) .\label{16}\ ] ] the characteristic polynomial of is while proving lemma [ lem1 ] we showed that if then one of the non - zero eigenvalues of is positive and two are real and negative .we also saw that and therefore , if since the trace of is also negative , if then has either one or three eigenvalues with negative real part , and for sufficiently small it has three , all of which are real . in fact , since , , and if has exactly one real positive eigenvalue for every in the positive quadrant for each as increases the other roots of remain in the left hand plane unless , for some two of them are pure imaginary .consideration of the characteristic polynomial in this case ( one negative , one positive , and two pure imaginary roots ) shows that the coefficients of and have the same sign .this is not the case with because the coefficient of is positive and the coefficient of is negative .hence , as asserted in lemma [ lem2b ] , the unstable manifold of ( [ 1 ] ) at is one dimensional .further , because it follows from ( [ 16 ] ) that if is the unit eigenvector of with then and also ( [ 16 ] ) implies that if then and if then the claimed behavior for large negative of solutions on follows .the continuity of for follows from theorem 6.1 in chapter 6 in the text of hartman , . in this casethe unstable manifold all we need , is the set of all solutions which tend to at an exponential rate as to obtain the desired continuity of with respect to and apply hartman s theorem to ( [ 2 ] ) augmented with equations and this is the closest we come to center manifolds in our approach . ]the final assertion of the lemma , that if and follows by writing the characteristic polynomial of in the form we see that since , hence completing the proof of lemma [ lem2b ] .
in 1992 g. b. ermentrout and j. b. mcleod published a landmark study of travelling wavefronts for an integro-differential equation model of a neural network. since then a number of authors have extended the model by adding an equation for a recovery variable, thus allowing the possibility of travelling-pulse solutions. in a recent paper g. faye gave perhaps the first rigorous proof of the existence (and stability) of a travelling pulse solution for a model of this type, treating a simplified version of equations originally developed by kilpatrick and bressloff. the excitatory weight function used in that work allowed the system to be reduced to a set of four coupled odes, and a specific firing rate function, with parameters, was considered. the method of geometric singular perturbation was employed, together with blow-ups. in this paper we extend faye's existence results by dropping one of his key hypotheses, proving the existence of pulses with at least two different speeds and, in a sense, allowing a wider range of the small parameter in the problem. the proofs are classical, and self-contained aside from standard ode material.
the internal antarctic plateau is, at present, a site of potentially great interest for astronomical applications. the extremely low temperatures, the dryness and the typically high altitude of the internal antarctic plateau (more than 2500 m), joined to the fact that the optical turbulence seems to be concentrated in a thin surface layer whose thickness is of the order of a few tens of meters, make this site a place in which, potentially, one could achieve astronomical observations otherwise possible only from space. in spite of the exciting first results, the effective gain that astronomers might achieve from ground-based observations at this location still suffers from serious uncertainties and doubts that have been pointed out in previous work. a better estimate of the properties of the optical turbulence above the internal antarctic plateau can be achieved with both dedicated measurements performed simultaneously with different instruments and simulations provided by atmospheric models. simulations offer the advantage of providing volumetric maps of the optical turbulence extended over the whole internal plateau and, ideally, of retrieving comparative estimates for different places on the plateau in a relatively short time and in a homogeneous way. in a previous paper our group performed a detailed analysis of the meteorological parameters on which the optical turbulence depends, provided by the general circulation model (gcm) of the european center for medium-range weather forecasts (ecmwf). in that work we quantified the accuracy of the ecmwf estimates of all the major meteorological parameters and, at the same time, we pointed out the limitations of the gcms. in contexts in which the gcms fail, mesoscale models can supply more accurate information because they are conceived to reconstruct phenomena that develop on spatial and temporal scales too small to be described by a gcm. in spite of the fact that mesoscale models can attain higher resolution than the gcms, some parameters, such as the optical turbulence, are not explicitly resolved but are parameterized, i.e. the fluctuations of the microscopic physical quantities are expressed as a function of the corresponding macroscopic quantities averaged over a larger spatial scale (a cell of the model). for classical meteorological parameters the use of a mesoscale model would be pointless if a gcm such as that of the ecmwf could provide estimates with an equivalent level of accuracy. for this reason the hagelin et al. paper (2008) was a first step towards the exploitation of the mesoscale meso-nh model above the internal antarctic plateau. in that study we retrieved an exhaustive characterization of all the meteorological parameters from the ecmwf analyses (wind field, potential temperature, absolute temperature, ...) and, at the same time, we defined the limitations of the ecmwf analyses: we concluded that in the first 10-20 m the ecmwf analyses show a discrepancy with respect to measurements of the order of 2-3 m/s for the wind speed and of 4-5 k for the temperature. the meso-nh model has been proven to be reliable in reproducing 3d maps of the optical turbulence and it has been statistically validated above mid-latitude astronomical sites. preliminary tests concerning the optimization of the model configuration and the sensitivity to the horizontal and vertical resolution have already been conducted by our team for the internal antarctic plateau.
in this paperwe intend to quantify the performances of the model above this peculiar environment .more precisely , our goals are : * to compare the performances of the mesoscale meso - nh model and the ecmwf gcm in reconstructing wind speed and absolute temperature ( main meteorological parameters from which the optical turbulence depends on ) with respect to the measurements .this analysis will quantify the performances of the meso - nh model with respect to the gcm from the ecmwf . * to perform simulations of the optical turbulence above dome c ( 75``s , 123''e ) employing different model configurations and compare the typical simulated thickness of the surface layers well as the seeing in the free atmosphere with the one measured by ( hereafter tr2008 ) . in this way we aim to establish which configurationis necessary to reconstruct correctly the . in summarywe aim to validate the meso - nh model on the antarctic site . the two issues : ( 1 ) the surface layer thickness and ( 2 ) the typical seeing in the free atmosphere are certainly the two main features that might get this place on the earth extremely appealing for astronomers and it might be extremely useful to have an independent confirmation from models of the typical values measured on the site . this study is focused on the winter season . in section 2we present the meso - nh model and the different configurations that were used to perform numerical weather simulations above the internal antarctic plateau .section 3 is devoted to a statistical comparison of standard meteorological parameters ( wind speed and temperature ) deduced from meso - nh simulations , ecmwf analyses and radiosoundings . in section 4we present the results of the computation with meso - nh of the surface layer thickness for 15 nights in winter time and a comparison with the observed surface layer thickness from tr2008 .finally conclusions are drawn in section 5 .meso - nh is the non - hydrostatic mesoscale research model developed jointly by mto - france and laboratoire darologie .it can simulate the temporal evolution of the three - dimensional atmospheric flow over any part the globe .the prognostic variables forecasted by this model are the three cartesian components of the wind , , , the dry potential temperature , the pressure , the turbulent kinetic energy .the system of equation is based upon an anelastic formulation allowing for an effective filtering of acoustic waves . a gal - chen and sommerville on the vertical and a c - grid in the formulation of arakawa and messinger for the spatial digitalization is used .the temporal scheme is an explicit three - time - level leap - frog scheme with a time filter .the turbulent scheme is a one - dimensional 1.5 closure scheme with the bougeault and lacarrre mixing length .the surface exchanges are computed in an externalized surface scheme ( surfex ) including different physical packages , among which isba for vegetation .masciadri et al .( 1999a , b ) implemented the optical turbulence package to be able to forecast also the optical turbulence ( 3d maps ) and all the astroclimatic parameters deduced from the .we will refer to the astro - meso - nh code to indicate this package . 
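the temporal scheme named above (an explicit three-time-level leap-frog with a time filter) can be illustrated with a generic single-variable sketch; this is not meso-nh code, and the function names, the filter coefficient and the toy tendency are illustrative assumptions only.

```python
import numpy as np

def leapfrog_asselin(tendency, u_prev, u_curr, dt, gamma=0.1, n_steps=100):
    """Explicit three-time-level leapfrog with an Asselin time filter.

    tendency : callable returning du/dt for a state vector
    u_prev   : state at time level n-1
    u_curr   : state at time level n
    gamma    : filter coefficient (illustrative value, not the Meso-NH setting)
    """
    for _ in range(n_steps):
        u_next = u_prev + 2.0 * dt * tendency(u_curr)          # leapfrog step
        # the filter damps the computational mode by smoothing the middle level
        u_filtered = u_curr + gamma * (u_prev - 2.0 * u_curr + u_next)
        u_prev, u_curr = u_filtered, u_next
    return u_curr

# toy usage: a harmonic oscillator written as a first-order system
f = lambda u: np.array([u[1], -u[0]])
u0 = np.array([1.0, 0.0])
u1 = u0 + 0.01 * f(u0)          # first step bootstrapped with forward Euler
print(leapfrog_asselin(f, u0, u1, dt=0.01))
```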
to compare simulations with measurementsthe integrated astroclimatic parameters are calculated integrating the with respect to the zenith in the astro - meso - nh code .the parameterization of the optical turbulence and the reliability of the astro - meso - nh model have been proved in successive studies in which simulations have been compared to measurements provided by different instruments .this has been achieved thank to a dedicated calibration procedure that has been proposed and validated by the same authors .the atmospheric meso - nh model is conceived for research development and for this reason is in constant evolution .one of the major advantages of meso - nh that was not available at the time of the masciadri s studies is that it allows now for the use of the interactive grid - nesting technique .this technique consists in using different imbricated domains with increasing horizontal resolutions with mesh - sizes that can reach 10 meters .we use in this study the astro - meso - nh package , implemented in the most recent version of the atmospheric meso - nh model . to facilitate the put in the context of this work , the differences that have been implemented in the model configuration with respect to the previous masciadri s studies are listed here : * a higher vertical resolution near the ground has been selected .we still work with a logarithmic stretching near the ground up to 3.5 km but we start with a first grid point of 2 m ( instead of 50 m ) with 12 points in the first hundred meters . this configuration has been allowed thanks to the extremely smooth orography of this region of the earth .it is obviously preferable because it permits to better quantify the turbulence contribution that typically develops in the thin vertical slabs in the first hundred of meters above the internal antarctic plateau .above 3.5 km the vertical resolution is constant and equal to =600 m as well as in masciadri s previous work .the maximum altitude is 22 kilometers . *the grid - nesting ( see table [ tab1 ] ) is implemented with 3 imbricated domains allowing a maximum horizontal resolution of 1 km in a region around the concordia station ( 80 km 80 km ) . *the simulations are forced at synoptic times ( every 6 hours ) by analyses from the ecmwf .this permits to perform a real forecast of the optical turbulence . to avoid misunderstandings , we highlight indeed that , as it has been extensively explained in previous studies ( masciadri et al .2004 , masciadri & egner 2006 ) , the meso - nh model has been used so far for simulations of the optical turbulence in a configuration permitting a quantification of the mean optical turbulence during a night and not a forecast of the optical turbulence .we perform therefore a step ahead with respect to results obtained so far with the astro - meso - nh code . in spite of the fact that the orographic morphology is almost flat above antarctica, it is known that even a weak slope can be an important factor to induce a change in the wind speed at the surface in these regions .the physics of the optical turbulence strongly depend on a delicate balance between the wind speed and temperature gradients . in order to study the sensitivity of the model to the horizontal resolution and to identifywhich configuration provides more reliable estimates we performed two sets of simulations with different model configurations . 
in the first configuration ( that we will call _ monomodel _ ) we used an horizontal resolution =100 km covering the whole antarctic continent ( figure [ fig : oro]a , b and table [ tab1 ] ) .we selected this configuration because it permits us to discuss , where it is possible , our results with respect to those obtained by ( hereafter sg2006 ) with the regional atmospheric model mar above the antarctic plateau . in that case , indeed , the authors used this extremely low horizontal resolution that has the advantage to be cheap from a computational point of view .this model configuration permits fast simulations but it is certainly necessary to verify that it is high enough to correctly resolve the most important features of the optical turbulence near the ground and in the high part of the atmosphere . .meso - nh model configuration . in the second columnthe horizontal resolution , in the third column the number of grid points and in the fourth column the horizontal surface covered by the model domain . [ cols="^,^,^,^",options="header " , ] [ see_fa ] again we observed that results are weakly dependent on the temporal range on which the means values are calculated and for this reason we report just the 12 - 16 utc case .[ fig : corr_see ] shows the correlation between the observed and simulated values for the seeing in the free atmosphere and in the whole atmosphere .the median of the observed seeing in the free atmosphere for the 15 nights is 0.3.2 arcsec ; the median seeing in the free atmosphere simulated by meso - nh with the high horizontal resolution is 0.35.24 arcsec and with the low horizontal resolution is 0.42.28 arcsec .both median simulated values ( with low and high resolution ) match the median value obtained with observation within the statistical error even if the high resolution is much better correlated ( relative error of 16 , .05 " ) .if we look at the total seeing developed on the whole atmosphere it is well visible ( table [ see_fa ] and fig.[fig : corr_see ] ) that the model overestimates the measurements with both resolutions .we have a simulated median 3.58.42 arcsec and 2.29.38 arcsec versus an observed median 1.6.2 arcsec .even if we take into account the more accurate estimates ( high resolution ) we obtain a dispersion simulations / observations 0.7 arcsec . the excess of optical turbulence reconstructed by the meso - nh model is clearly concentrated in the surface layer .we can not exclude an underestimate from measurements but there are , at present time , no major elements that lead to this assumption .we are working , on the contrary , on a paper to explain this discrepancy and overcome this limitation . considering that we proved that the meteorological parameters are well reconstructed by the meso - nh model near the surface ( section 3 ) and that the surface numerical scheme ( interaction soil biosphere atmosphere isba ) responsible of the control of the budget of the turbulent ground / air fluxes has been recently optimized for antarctic applications ( ) in the context of our project, we concentrated our attention on the dynamical and optical numerical turbulence schemes . 
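the free-atmosphere and total seeing values compared above are vertical integrals of the simulated turbulence profile; a minimal sketch of how such integrals translate into seeing at 0.5 μm, using the standard relations r0 = [0.423 (2π/λ)² ∫ C_n² dh]^{-3/5} and ε = 0.98 λ/r0 (zenith viewing), is given below. the profile values and the surface-layer height used here are made-up placeholders, not the tr2008 data.

```python
import numpy as np

def seeing_arcsec(cn2, dh, wavelength=0.5e-6):
    """Seeing (FWHM, arcsec) from a discretized Cn^2 profile [m^-2/3] over layers dh [m]."""
    J = np.sum(cn2 * dh)                                     # integrated turbulence
    r0 = (0.423 * (2.0 * np.pi / wavelength) ** 2 * J) ** (-3.0 / 5.0)
    return np.degrees(0.98 * wavelength / r0) * 3600.0

# hypothetical profile: strong surface layer below h_sl, weak free atmosphere above
h = np.arange(0.0, 20000.0, 10.0)                            # heights above ground [m]
dh = np.full_like(h, 10.0)
cn2 = np.where(h < 35.0, 1e-14, 1e-17)                       # placeholder values only
h_sl = 35.0
print("total seeing    :", seeing_arcsec(cn2, dh))
print("free-atm. seeing:", seeing_arcsec(cn2[h >= h_sl], dh[h >= h_sl]))
```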
in terms of comparison with the sg2006 study we note that the latter study indicates a typical underestimated total seeing of 1.16 arcsec with respect to the observed one ( 1.6 arcsec ) .the discrepancy is smaller from a quantitative point of view ( .45 arcsec ) with respect to what we find and it is in the opposite direction .the questionable issue in the sg2006 study is that the turbulence kinetic energy provided by sg2006 in the first levels of the mar model is often of the order of 10 m ( ) .this values is extremely low and it basically indicates no turbulent kinetic energy on the first level of the model and such a condition is contrary to what observed with measurements .this is consistent with the fact that the mar model underestimates the seeing in the surface layer .we conclude that the meso - nh model , in the present configuration , reconstructs with good statistical reliability the and the seeing in the free atmosphere while shows a tendency in overestimating the strength of the seeing in the surface layer .the interesting result of this paper is therefore the fact that the most important features for astronomical interest ( the surface layer thickness and the typical seeing in the free atmosphere ) observed with measurements are confirmed with mesoscale atmospherical model .we note that the this is the first confirmation made by a mesoscale model of the typical seeing in the free atmosphere . besides it is worth to highlight that these are the first simulations ever done above the internal antarctic plateau and extended all along the whole atmosphere .figure [ see_free ] shows the temporal evolution of the profile in the free atmosphere ( more precisely in the ( 1,12 ) km vertical slab ) related to three selected nights in the sample of the 15 simulated nights . in all of the three nightsit is well visible that , even at such high altitudes , the model is active and the vertical distribution of the optical turbulence changes in time with a not negligible dynamic from a quantitative point of view .the values extend , indeed , on the logarithmic scale ( -18,-16.5 ) . in all the 3 cases it appears clearly that the high - horizontal resolution provides a better temporal variability as expected .these results are therefore very promising in terms of predictions of the 3d maps on long time scales .[ see_free ]in this paper we study the performances of the meso - nh mesoscale meteorological model in reconstructing meteorological parameters ( wind speed and temperatures ) as well as the optical turbulence above concordia station in the dome c area , a site in the internal antarctic plateau .this is , at our knowledge , the first study concerning the optical turbulence reconstructed with an atmospherical mesoscale model above antarctica on the whole atmosphere .this study is concentrated on the winter season i.e. 
the most interesting for stellar astronomical applications .the validation of the model for the meteorological parameters has been done comparing measurements ( radiosoundings ) and simulations on a sample of 47 nights .the validation of the model for the optical turbulence has been done comparing simulations with measurements on a sample of 15 nights .two different model configurations were tested : monomodel simulations using a low horizontal resolution ( =100 km ) and grid - nesting simulations with high horizontal resolution ( =1 km for the innermost domain ) .the low resolution model permitted us to discuss the results obtained previously in the literature .the observations used for the validation are , for the meteorological parameters , the analyses from the ecmwf global circulation model and radiosoundings ( 47 nights ) and , for the optical turbulence , the and seeing values ( 15 nights ) measured in situ ( trinquet et al . 2008 ) .the main conclusions of this study are : * we showed that near the surface , meso - nh retrieved better wind speed vertical gradient ( wind shear ) than the ecmwf analyses from a qualitative as well as quantitative point of view , thanks to the use of a highest vertical resolution .we expect therefore a better reconstruction of the katabatic winds typical of these regions by the meso - nh model than the gcm models .also meso - nh better reconstructs the thermal stability near the surface than the gcms .the analysis of the first vertical grid point permits us to conclude that the meso - nh model surface temperature is closest to the observations .60 k than the ecmwf general circulation model ( .74 k ) which is too warm .the improvement for the estimate of the wind speed is even more evident ( 0.04 m.s versus 2.49 m.s ) .+ * for what concerns the parameters concerning the optical turbulence , again the results are resolution dependent .the simulations with low resolution provides a too thick surface layer ( almost double of the observed one ) while those with high resolution provide a mean h.9.6 m versus an equivalent observed h.3.1 m. taking into account the statistical error we observe that the high horizontal mode provides a surface layer thickness that is statistically just 6 m higher than the observed one but within the dispersion of the observations . +* the integral of the above the h i.e. the seeing in the free atmosphere 0.3.20 arcsec is reconstructed with an excellent level of reliability ( .05 arcsec ) by the model used with the high resolution configuration 0.35.24 arcsec .the low resolution provides a worse estimate even if within the of the observations . +* the model still shows a tendency in overestimating the turbulence in the surface layer .for an observed 1.6 arcsec we have a simulated 2.29 arcsec with the model in high horizontal resolution mode .this is the subject of an on - going study conceived to answer to this open question . 
+ * the results concerning the computation of the mean thickness of the surface layer as well as the seeing in different vertical slabs are not very dependent of the time interval used to average it .this widely simplifies the analysis of simulations .+ * estimates obtained with the grid - nested simulations are closer to the observations than those obtained with monomodel simulations .this study highlighted the necessity of the use of high horizontal resolution to reconstruct a good meteorological field as well as the parameters characterizing the optical turbulence in antarctica , even if the orography is almost flat over the internal antarctic plateau .the employment of the low resolution ( 100 km ) alone can hardly be used to identify _ the best site on the antarctic plateau_. however , it can be used to identify rapidly , on the whole antarctic plateau , the most interesting regions in which to focus , successively , simulations at high horizontal resolutions on smaller surfaces domains . with `` the most interesting regions '' we mean those with the lowest surface layer thickness for example .+ * the meso - nh model is able to reconstruct a mean profile well fitting the vertical optical turbulence distribution measured in the first 20 km from the ground .the model also shows a not negligible temporal variability in the whole 20 km from the ground in a very small dynamic range .the latter is to be considered a very interesting feature because it is known that this is a region of the atmosphere in which in general the mesoscale models are less sensible than near the ground .it is therefore a further indication that the meso - nh model is well placed to forecast the turbulence evolution at these time scales .+ once the tendency in overestimating the strength of the turbulence in the surface layer will be solved ( forthcoming paper ) we plan to run the meso - nh model in other regions of the internal antarctic plateau to identify the best locations for astronomical observations i.e. the places with the best turbulence characteristics from an astronomical point of view .this study has been funded by the marie curie excellence grant ( forot ) - mext - ct-2005 - 023878 .ecmwf analyses are extracted from the catalog mars , .radiosoundings come from the progetto di ricerca `` osservatorio meteo climatologico '' of the programma nazionale di ricerche in antartide ( pnra ) , _http://www.climantartide.it_. agabi , a. , aristidi , e. , azaouit , m. , fossat , e. , martin , f. , sadibekova , t. , vernin , j. , ziad , a. , 2006 , pasp , 118 , 344 .arakawa , a. , messinger , f. , 1976 , garp tech ., 17 , wmo / icsu , geneva , switzerland aristidi , e. , agabi , k. , azouit , m. , fossat , e. , vernin , j. , travouillon , t. , lawrence , j. s. , meyer , c. , storey , j. w. v. , halter , b. , roth , w. l. , walden , v. , 2005 , a&a , 430 , 739 asselin , r. , 1972 , mon . weather, 100 , 487 bougeault , p. , lacarrre , p. , 1989, mon . weather ., 117 , 1872 cuxart , j. , bougeault , p. , redelsperger , j .- l . , 2000 , q. j. r. meteorol .soc . , 126 , 1 gal - chen , t. , sommerville , c. j. , 1975 , j. comput . phys . , 17 , 209 galle , h. , arena site testing workshop , june 2007 , _ http://www.concordiastation.org/domec/pdf/20060426 /programma.pdf _ geissler , k. , masciadri , e. , 2006 , pasp , 118 , 1048 hagelin , s. , masciadri , e. , lascaux , f. , stoesz , j. , 2008 , mnras , 387 , 1499 lafore , j .-p . , stein , j. , asencio , n. , bougeault , p. , ducrocq , v. , duron , j. , fischer , c. 
, hereil , p. , mascart , p. , masson , v. , pinty , j .-p . , redelsperger , j .- l . , richard , e. , vil - guerau de arellano , j. , 1998 , annales geophysicae , 16 , 90 lascaux , f. , masciadri , e. , stoesz j. , hagelin , s. , 20 - 22 march 2007 , symposium on seeing , kona - hawaii , proceeding available at _ http://weather.hawaii.edu/symposium/publications _ lawrence , j. , ashley , m. , tokovinin , a. , travouillon , t. , 2004 , nature , 431 , 278 le moigne , p. , noilhan , j. , masciadri , e. , lascaux , f. , pietroni , i. , 2008 , optical turbulence - astronomy meets meteorology , 15 - 18 september 2008 , alghero , sardegna , italy , , in press masciadri , e. , vernin , j. , bougeault , p. , 1999a, a&ass , 137 , 185 masciadri , e. , vernin , j. , bougeault , p. , 1999b, a&ass , 137 , 203 masciadri , e. , vernin , j. and bougeault , p. 2001, a&a , 365 , 699 masciadri , e. , jabouille , p. , 2001, a&a , 376 , 727 masciadri , e. , avila , r. , sanchez , l. j. , 2004 , rmxaa , 40 , 3 masciadri , e. , egner , s. , 2006 , pasp , 118 , 849 , 1604 noilhan , j. , planton , s. , 1999 , mon ., 117 , 536 stein , j. , richard , e. , lafore , j .-p . , pinty , j .-, asencio , n. , cosma , s. , 2000 , meteorol .phys . , 72 , 203 stoesz , j. , masciadri , e. , lascaux , f. , hagelin , s. , spie , 7012 , 70124c , 2008 swain , m. , galle , h. , 2006 , pasp , 118 , 1190 trinquet , h. , agabi , k. , vernin , j. , azouit , m. , aristidi .e. , fossat , e. , 2008 , pasp , 120,864 , 203
these last years ground - based astronomy has been looking towards antarctica , especially its summits and the internal continental plateau where the optical turbulence appears to be confined in a shallow layer close to the icy surface . preliminary measurements have so far indicated pretty good value for the seeing above 30 - 35 m : 0.36 `` , 0.27 '' and 0.3 " at dome c. site testing campaigns are however extremely expensive , instruments provide only local measurements and atmospheric modeling might represent a step ahead towards the search and selection of astronomical sites thanks to the possibility to reconstruct 3d maps over a surface of several kilometers . the antarctic plateau represents therefore an important benchmark test to evaluate the possibility to discriminate sites on the same plateau . our group has proven that the analyses from the ecmwf global model do not describe with the required accuracy the antarctic boundary and surface layers in the plateau . a better description could be obtained with a mesoscale meteorological model . the mesoscale model meso - nh has proven to be reliable in reproducing 3d maps of optical turbulence above mid - latitude astronomical sites . in this paper we study the ability of the meso - nh model in reconstructing the meteorological parameters as well as the optical turbulence above dome c with different model configurations ( monomodel and grid - nesting ) . we concentrate our attention on the model abilities in reproducing the optical turbulence surface layer thickness ( ) and the integral of the in the free atmosphere and in the surface layer . it is worth to highlight that these are the first estimates ever done so far with a mesoscale model of the optical turbulence above the internal antarctic plateau . [ firstpage ] site testing atmospheric effects turbulence
one from the most important physical processes is electrodiffusion .it describes both diffusional motion of mass and charge flow due to applied electric field .the electric potential distribution is govern by the poisson equation and total transport of particles is given in terms of the continuity equation .the significance of this equation is broadly described in existing physical , chemical and biological literature and lots of scientific articles , particularly , those which concern properties of nano and micro transport .+ mathematically , equations of electrodiffusion constitute a set of coupled nonlinear equations where the laplace operator appears together with the first order partial time derivative .the laplace operator is the basic operator met in many physical situations .thus the first step to deal with the electrodiffusional problem is to approximate the solution of the laplace equation with help of the finite elements method .practically , it means that an appropriate mesh should be designed for a prescribed 3d domain .the mesh must fit well to the physical conditions like e. g. symmetry of the problem .therefore , different mesh shapes could be desired ( spherical , cylindrical , conical or cubic ) up to problem . after having accurate basic spatial solutions on appropriate meshes ,the problem should be extended to the time dependent case of the diffusion equation by finding discrete approximation in time .it could be done by means of truncated taylor series or other single step procedures like the crank nicolson scheme or the gurtin s approach to finite element approximation in terms of variational principle . from now , further extension of above - presented computations involving non - linear terms could be easily implemented and numerically solved using the newton s method .the equation of electrodiffusion has the form where is a number of -th ions , - electric potential , - the diffusion coefficient of -th particles , - boltzmann constant , - temperature , - valence of the -th kind of ions , - electric charge . to find the electric potential the poisson equationmust be solved where , .thus equations ( [ electrodiffusion ] ) and ( [ poisson ] ) both constitute the system of coupled equations .next , they can be solved numerically using the finite element method where the problem is represented as \omega = 0 \\ \displaystyle \int_\gamma \mathbf{\tilde{v}}^t \mathbf{b(u ) } d\gamma \equiv \int_\gamma [ \tilde{v}_1 b_1(\mathbf{u } ) + \tilde{v}_2 b_2(\mathbf{u } ) + \tilde{v}_3 b_3(\mathbf{u } ) ] d\gamma = 0 \end{array}\ ] ] where \ ] ] and and are sets of arbitrary functions equal in number to the number of equations ( or components of ) involved . 
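before the component operators of the weak form are written out below, it may help to record the standard poisson–nernst–planck system that the stripped inline symbols above presumably denote; this is a hedged reconstruction in generic notation, not necessarily the authors' exact conventions:

\begin{aligned}
\frac{\partial n_{\pm}}{\partial t} &= \nabla\!\cdot\!\left( D_{\pm}\,\nabla n_{\pm} \;\pm\; \frac{D_{\pm}\, z e}{k_{B} T}\, n_{\pm}\,\nabla\varphi \right),\\
-\,\epsilon_{0}\epsilon\,\nabla^{2}\varphi &= z e\,\big(n_{+}-n_{-}\big),
\end{aligned}

where n_± are the ion number densities, φ the electric potential, D_± the diffusion coefficients, z the valence, e the elementary charge, k_B the boltzmann constant and T the temperature.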
and are given by the following formulas , respectively \nabla \mathbf{u}^t\left [ \begin{array}{c } % \vspace{-10pt } 0\\ % \vspace{-10pt } 0\\ 1 \end{array } \right ] - \tilde{d_+ } \nabla\mathbf{u}^t \left [ \begin{array}{c } % \vspace{-10pt } 1\\ % \vspace{-10pt } 0\\ 0 \end{array } \right ] \right ) + [ 1 , ~0 , ~0 ] \displaystyle\frac{\partial}{\partial t}\mathbf{u}\\ \vspace{10pt } a_2(\mathbf{u } ) = \nabla^t \left(-k_-\mathbf{u}^t \left [ \begin{array}{c } % \vspace{-10pt } 0 \\ % \vspace{-10pt } 1 \\ 0 \end{array } \right ]\nabla \mathbf{u}^t\left [ \begin{array}{c } % \vspace{-10pt } 0\\ % \vspace{-10pt } 0\\ 1 \end{array } \right ] - \tilde{d}_- \nabla\mathbf{u}^t \left [ \begin{array}{c } % \vspace{-10pt } 0\\ % \vspace{-10pt } 1\\ 0 \end{array } \right ] \right ) + [ 0,~1,~0 ] \displaystyle\frac{\partial}{\partial t}\mathbf{u}\\ a_3(\mathbf{u } ) = \epsilon_0\epsilon\nabla^t\nabla\mathbf{u}^t \left [ \begin{array}{c } % \vspace{-10pt } 0\\ % \vspace{-10pt } 0\\ 1 \end{array } \right ] + [ 1 , ~-1 , ~0]ze\mathbf{u}. \end{array}\ ] ] where , .an expression gives the boundary conditions on , however , we choose a _ forced _ type of boundary conditions on i. e. let us substitute = \mathbf{n}\mathbf{\tilde{u } } ] in the taylor series we obtain where takes values from ]let us take as a boundary condition .now we seek a solution of the equation ( [ diffusion ] ) which satisfies this boundary condition and prescribed initial condition at the time .the solution of the equation is approximated by the triple sum where are unknown coefficients that must be determinated from the initial condition : in the case of the domain being \times[0~ \pi]\times[0~ \pi] ] , and ] . and for the function equals 0 everything on apart from one can approximate the exact solution by for where the laplace equation has the solution defined in for where denotes volume of b(0,1 ) in and equals .below are listed a few technical remarks referring to the mesh generation routine applied to obtain a designed 3d mesh .an initial mesh is built on the basis of main surface nodes ( _ outer _ nodes ) which define a figure s shape .the whole figure is considered as divided into perpendicular to -axis layers .thus the _ outer _ nodes are distributed on the edges of layers . in the center of each layer and also in the middle between two layers are located inner nodes .they are connected with _outer _ nodes creating in this way the main figure s construction .initial mesh elements obtained in such a manner are of tetrahedral shape .the boundary of the figure is defined by set of _ surface _equations for vertical and horizontal segment lines linking _ outer _ nodes . after each mesh iterationnew nodes are created and labeled as _ outer _ or _ones according to surface equations .moreover , the location of each node ( i. e. on which exactly vertical , horizontal line or surface patch the node is lying ) is also stored .new elements are created by a division of already existing elements . 
at the beginning of the routine, the surface of division mainly connects a new node born on the longest element edge with two other nodes belonging to that mesh element and one node from the divided edge. the procedure constitutes a 3d extension of the 2d mesh generation routine described already in . however, during the routine the number of small elements increases, and the division of the longest edge is no longer the optimal way of proceeding. that is why, before choosing an edge for division, the volumes of the elements common to it are checked. the edge that will not produce new elements with volumes smaller than an assumed *critical volume* is chosen to be cut. the optimization is done with the help of the metropolis algorithm. the system energy is calculated as a sum of discrepancies between the element volumes and the assumed element volume , where denotes a prescribed length of the edge; thus, the smaller the departure from the designed volume distribution, the more optimal the state. the metropolis routine starts from the nodal configuration given by the procedure described above. the main point is to reach the optimal global configuration by ascertaining locally optimal states. they arise from such a configuration of the -th node and its neighboring nodes which gives a smaller energy. this partial energy is calculated from the sum in eq. ([energy]) taken over the elements containing the node of interest. to compute new positions for each node (giving a new configuration) the following expression is put forward , where denotes a shifting strength and is the length of the edge. the value of determines the strength of a nodal shift and varies from 0 to 1. it is also worth considering choosing its value as a random number from a uniform distribution. (figure [length]: edge-length distributions for meshes in the cubic domain [0, π]×[0, π]×[0, π], with a) unique elements and b) meshes optimized with the metropolis algorithm.) within the metropolis routine the transition probability is calculated by the formula , where is the boltzmann constant (here set to 1), the temperature and the difference between the energies of the two states. if the value of is greater than a random number drawn from , the new state is accepted; otherwise the old one is preserved. + all the above-described local metropolis steps can lead to different global configurations. therefore, for each division number, the distribution of nodes which gives a lower energy of the total system should be kept. to find it, the metropolis rule is employed again, but this time changes in the total energy of the whole system are examined. to estimate the maximal temperature (in expression ([tran_prob])), the range of changes in potential energy corresponding to the current number of elements must be found. moreover, in each global metropolis step the temperature may decrease according to , where the parameter . (figure: volume distributions for the same domain as in fig. [length], for a) unique elements and b) elements optimized with the metropolis algorithm.) to improve mesh quality the following transformations are applied. * three elements common to an edge are transformed to two elements, when one of the elements fails to satisfy the delaunay criterion. * four elements common to an edge are transformed to a new configuration of four elements, when one of the elements does not meet the delaunay criterion. the new pattern is chosen from two different possibilities. additionally, boundary elements that are too small can be destroyed by projecting their internal node onto the center of the outer patch of the element that is opposite to it. such an approach is justified in the case of boundary elements; in the case of internal ones, however, it leads to the creation of so-called irregular nodes. the accuracy of the fem approximation of the laplace equation on different meshes was examined. numerical results vs. analytical ones for the cubic and spherical domains are presented in fig. ([laplace_cube]). the relative difference between the analytical and numerical solutions has been calculated as . the laplace equation has been solved for the cubic domain [0, π]×[0, π]×[0, π] . the exact solutions for both considered cases are evaluated precisely earlier in the paper. the fem approximation has been computed for the "linear" order of the tetrahedron. however, a comparison between both orders of approximation, i.e. "linear" and "quadratic", has been performed for the uniform mesh with . the formulas for higher orders of approximation, i.e. quadratic and cubic, can be found in . the results show that the mean discrepancy between numerical and analytical solutions, calculated according to eq. ([difference_eq]) for the laplace equation, equals in the linear case and in the quadratic approximation, respectively. thus, in further studies the linear approximation will be used as being sufficiently accurate. in the case of the cubic domain a mesh of unique element volume (non-optimized) has been applied, in contrast to the spherical domain, where the mesh has been used after its enhancement with both the metropolis algorithm and the delaunay routine. (figure: a) fem approximation of the laplace equation in the cubic domain (as described in sec. 4) vs. the analytical result, together with b) a distribution of differences between numerical and analytical solutions obtained for each node in the domain; c) fem approximation (linear) vs. the exact solution of the laplace equation for the spherical domain, with the boundary values defined by putting an elementary charge outside the sphere; also shown, the volume distribution in the spherical domain optimized with the metropolis algorithm with elements of a prescribed volume.) to test the accuracy of the discrete approximation in time, the equation of diffusion (see sec. 3) has been examined. to find the particle distribution at a given moment in time, the taylor expansion (see eq. [discrete_in_time]) with has been applied. the diffusion equation has been solved with the following initial condition for the cubic domain and for the cylindrical domain, where denotes the radius of the cylindrical domain. the boundary value is set to 0.
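as a reference for the time-accuracy test just described, the separable series solution on the cubic domain with homogeneous dirichlet data can be evaluated directly; the sketch below uses an illustrative single-mode initial condition and diffusion coefficient, not the ones used in the paper.

```python
import numpy as np

def diffusion_series(x, y, z, t, coeffs, D=1.0):
    """Triple sine series u = sum A_lmn sin(lx) sin(my) sin(nz) exp(-D (l^2+m^2+n^2) t)
    on [0, pi]^3 with u = 0 on the boundary; coeffs is a dict {(l, m, n): A_lmn}."""
    u = 0.0
    for (l, m, n), A in coeffs.items():
        u += (A * np.sin(l * x) * np.sin(m * y) * np.sin(n * z)
                * np.exp(-D * (l**2 + m**2 + n**2) * t))
    return u

# placeholder initial condition u0 = sin(x) sin(y) sin(z): a single mode with A_111 = 1
coeffs = {(1, 1, 1): 1.0}
print(diffusion_series(np.pi / 2, np.pi / 2, np.pi / 2, t=0.1, coeffs=coeffs))
```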
results for both domains, cubic (with uniform elements, see fig. [volume]a) and cylindrical (see fig. [regular]), the latter tuned to the designed element volume with the metropolis recipe, are shown in fig. [diffusion_pic]. in the case of the cylindrical domain the mean values of the ratio , calculated at each point of the domain for times , have the average value . (figure [diffusion_pic]: a) fem approximation of the diffusion equation in the cubic domain at time vs. the analytical result, together with b) a distribution of differences (eq. [difference_eq]) between numerical and analytical solutions obtained for each node in the domain; c) fem approximation (linear) of the diffusion equation for the cylindrical domain at the times (blue), (black) and (red); d) volume profile of the elements within the cylindrical domain; mesh quality was enhanced with the help of the metropolis algorithm.) the system of coupled equations describing the process of electrodiffusion ([electrodiffusion]), written in terms of the fem method (see eq. ([pnp_final])), with the following values of the constants and a time step equal to , has been numerically solved using newton's algorithm. the boundary values of are set to 1 at , , , , and , and equal to 2 at . an initial guess of the and distributions has been chosen as 0 everywhere in the domain apart from its boundaries. the system of equations has been computed up to the final time . fig. [electrodiffusion_pic] presents the obtained profiles of the cations and the potential at the center of the domain, i.e. at . there is no visible difference between the cation and anion distributions, so the latter is not shown. the maximum of decays from 0.023 for ( ) to 0.0093 for ( ). the maximum of the difference between the and distributions computed at each node equals . in physical terms this means that the system of charged particles is electroneutral. moreover, the system of particles tends towards its stationary state.
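as stated above, the nonlinear algebraic system produced by the fem discretization at each time level is solved with newton's method; the following is a minimal sketch of a damped newton loop in which assemble_residual and assemble_jacobian are hypothetical callables standing in for the element-by-element fem assembly, not the authors' routines.

```python
import numpy as np

def newton_solve(assemble_residual, assemble_jacobian, u0,
                 tol=1e-10, max_iter=25, damping=1.0):
    """Damped Newton iteration for F(u) = 0 arising from a nonlinear FEM discretization.

    assemble_residual(u) -> F (1d array) and assemble_jacobian(u) -> J (2d array)
    are placeholders for the assembly routines of the discretized system.
    """
    u = u0.copy()
    for k in range(max_iter):
        F = assemble_residual(u)
        if np.linalg.norm(F) < tol:
            return u, k
        J = assemble_jacobian(u)
        du = np.linalg.solve(J, -F)      # a sparse factorization would be used in practice
        u += damping * du
    raise RuntimeError("Newton iteration did not converge")
```

in practice the jacobian is sparse, and the same loop would be repeated at every time level with the solution of the previous step taken as the initial guess.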
andat the time with the values of parameters [ units ] , , [ units ] a ) the distribution of cations ; b ) the profile of the potential ; computations have been performed on the uniform mesh with the volume of tetrahedron and the element size .,title="fig : " ] and at the time with the values of parameters [ units ] , , [ units ] a ) the distribution of cations ; b ) the profile of the potential ; computations have been performed on the uniform mesh with the volume of tetrahedron and the element size .,title="fig : " ] additionally , components and of the total flux of particles flowing through the domain have been computed .they are shown in fig .[ electrodiffusion_current ] .presence of a difference in an amount of particles at the both sides of in the i. e. at and causes a non zero flow along axis whereas a lack of such a difference in the two other directions i. e. and leads to the vanishing flows and in the center of the domain . and at the time .computations have been done with the following values of parameters : the time step [ units ] , nonlinear term multipliers [ units ] , diffusion coefficients [ units ] .this figure shows approximated solutions for flux components for cations in the direction a ) and in the direction b ) .the computations have been performed on the uniform mesh with the volume of tetrahedron and the element size .,title="fig : " ] and at the time .computations have been done with the following values of parameters : the time step [ units ] , nonlinear term multipliers [ units ] , diffusion coefficients [ units ] .this figure shows approximated solutions for flux components for cations in the direction a ) and in the direction b ) .the computations have been performed on the uniform mesh with the volume of tetrahedron and the element size .,title="fig : " ]the presented software offers a 3d mesh generation routine as well as its further application to the 3d electrodiffusional problem . + the proposed mesh generator offers a confident way to creature a quite uniform mesh built with elements having desired volume .mesh elements have been adjusted to assumed sizes by making use of both the metropolis algorithm and the delaunay criterion .mesh quality depicted in histograms occurs to be fairly satisfactory .moreover , goodness of obtained meshes together with robustness of their applications to the finite element method have been also tested by solving the 3d laplace problem and the 3d diffusion equation on them .comparison between these numerical solutions and analytical results shows very good agreement .+ to find solutions to a nonlinear problem defined by a system of coupled equations describing electrodiffusion the fem approach and the newton method have been jointly applied .analysis of obtained results confirms usefulness of the presented solver to deal with nonlinear differential problems . c. t. kelley , _ solving nonlinear equations with newton s method in fundamentals of algorithms _ siam , 2003 ; j. brzzka , l. dorobczyski , _ matlab .rodowisko oblicze naukowo - technicznych ._ ( pwn , warszawa , 2008 )
the presented article contains a 3d mesh generation routine optimized with the metropolis algorithm. the procedure makes it possible to produce meshes with a prescribed element volume. the finite volume meshes are used with the finite element approach. the fem analysis makes it possible to deal with a set of coupled nonlinear differential equations that describes the electrodiffusional problem. mesh quality and the accuracy of the fem solutions are also examined. the high quality of the fem-type space-dependent approximation and the correctness of the discrete approximation in time are verified by finding solutions to the 3d laplace problem and to the 3d diffusion equation, respectively. their comparison with analytical solutions confirms the accuracy of the obtained approximations.
complex networks , evolved from the erds - rnyi random graph , are powerful models for describing many complex systems in biology , sociology , and technology . in the past decade , the explosion of the general interest in the structure and the evolution of most real - world networks is mainly reflected in two striking characteristics .one is the small - world property , which suggests that a network has a highly degree of clustering like regular networks and a small average distance among any two nodes similar to random networks .the small - world phenomenon has been successfully described by network models with some degree of randomness .the other is the scale - free behavior , which means a power - law distribution of connectivity , , where is the probability that a node in the network has connections to other nodes and is a positive real number determined by the given network . the origin of the scale - free behavior has been traced back to two mechanisms that are observed in many systems , growing and preferential attachment .recently , with the progress of research in networks , many other statistical characteristics of networks appeared on the stage .of particular renown is the so - called community(or modularity ) .that is to say , a network is composed of many clusters of nodes , where the nodes in the same cluster are highly connected , while there are few links among the nodes belonging to different clusters .for instance , groups are formed in scientific collaboration networks .also , it has been found that dynamical processes on networks are affected by community structures , such as tendencies spread well within communities and diffusion between different communities is slow . in the study of community networks ,most research has been directed in two distinct directions . on the one hand ,attention has been paid to designing algorithms for detecting community structures in real networks .a pioneering method was made by girvan and newman , who introduced a quantitative measure for the quality of a partition of a network into communities .later , a number of algorithms have been proposed in order to find a good optimization with the least computational cost .the fastest available procedures use greedy techniques and extremal optimization , which are capable of detecting communities in large networks . on the other hand ,research has focused on modeling of networks with community structures . in ref . , a static social network was introduced where individuals belong to groups that in turn belong to groups of groups and so on . in ref . , a networked seceder model was suggested to illustrate group formation in social networks . in ref . , a growing bipartite network for social communities with group structures was proposed .each of those models is constructed based on one aspect of reality . in this paper, we introduce a network model with communities that gives a realistic description of local events .the model incorporates three processes , the addition of new nodes intra - community and new links intra- or inter - community . using growing and preferential attachment mechanisms, we generate the community network with a good right - skewed distribution of nodes degrees , which has been observed in many social systems .the barabsi - albert network only describes a particular type of evolving networks , the addition of new nodes preferential connecting to the nodes already present in the network .systems in the real world , however , are much richer . 
for example , in scientific collaboration networks , a multidisciplinary scientist is not only collaborate with scientists in his research fields but also has a stronger desire to collaborate with scientists in other fields . in friendship networks, a person usually makes friends with people belonging to different communities besides the community he belongs to . to give a realistic description of the network construction like that ,we introduce a growing model of community networks based on local events , the addition of new nodes intra - community and new links intra- or inter - community .the proposed model is defined as follows .we start with ( ) isolated communities and each community consists of a small number of isolated nodes . at each time step , we perform one of the following three operations. \(i ) with probability we add a new node in a randomly chosen community . herethe randomly chosen means that the community is selected according to the uniform distribution .the new node is only connected to one node that already present in the selected community .we denote it as the commnuity .the probability that node in community will be selected is proportional to its intra - community degree where the sum runs over nodes in community and is the intra - community degree of node in community .\(ii ) with probability we add a new link in a randomly chosen community . for thiswe randomly select a node in a randomly chosen community as the starting point of the new link .the other end of the link is selected in the same community with the probability given by eq .( [ nodeprob ] ) .\(iii ) with probability we add a new link between two communities . for thiswe randomly select a node in a randomly chosen community as the starting point of the new link .the other end of the link selected in the other community is proportional to its inter - community degree where the sum runs over nodes in all communities except for community and is the inter - community degree of node in community .after time steps , this scheme generates a network of nodes and links .the parameters , , and control the network structure .in the case of small , the generated network will have a strong community structure .notice that whatever process is chosen in the network growth , only one link is added to the system at each time step ( duplicate and self - connected edges are forbidden ) , however , this is not essential .we choose link probabilities and to be proportional to and , respectively , such that there is a nonzero probability of isolated nodes acquiring new links .in our community network , the degree of a node consists of two parts , the intra - community degree and the inter - community degree .increase in the node s connectivity can be divided into two processes , the increases of the intra - community degree and the inter - community degree . in each process , we assume that and change continuously , and the probabilities and can be interpreted as the rates at which and change , respectively . 
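a minimal simulation sketch of growth rules (i)-(iii) as just described is given below, ahead of the continuum treatment that follows; the number of communities, the initial community sizes, the probabilities and the +1 offset in the preferential selection are placeholder assumptions (the exact offset ensuring that isolated nodes can acquire links is stripped from the text), and a duplicate draw is simply skipped rather than redrawn.

```python
import random

def grow_community_network(n_comm=5, n0=4, steps=20000, p=0.6, q=0.25, seed=0):
    """Sketch of growth rules (i)-(iii); n_comm, n0, p, q are placeholder values.
    Each node is a dict storing its community label and its intra-/inter-community degrees."""
    rng = random.Random(seed)
    nodes = [{"comm": c, "k_in": 0, "k_out": 0} for c in range(n_comm) for _ in range(n0)]
    edges = set()

    def pick(candidates, key):
        # preferential choice; the +1 offset keeps isolated nodes reachable
        weights = [nodes[i][key] + 1 for i in candidates]
        return rng.choices(candidates, weights=weights, k=1)[0]

    for _ in range(steps):
        r = rng.random()
        c = rng.randrange(n_comm)
        members = [i for i, nd in enumerate(nodes) if nd["comm"] == c]
        if r < p:                                  # (i) new node in community c
            j = pick(members, "k_in")
            nodes.append({"comm": c, "k_in": 1, "k_out": 0})
            nodes[j]["k_in"] += 1
            edges.add((j, len(nodes) - 1))
        elif r < p + q:                            # (ii) new intra-community link
            i = rng.choice(members)
            j = pick([m for m in members if m != i], "k_in")
            if (min(i, j), max(i, j)) not in edges:
                edges.add((min(i, j), max(i, j)))
                nodes[i]["k_in"] += 1
                nodes[j]["k_in"] += 1
        else:                                      # (iii) new inter-community link
            i = rng.choice(members)
            outsiders = [m for m, nd in enumerate(nodes) if nd["comm"] != c]
            j = pick(outsiders, "k_out")
            if (min(i, j), max(i, j)) not in edges:
                edges.add((min(i, j), max(i, j)))
                nodes[i]["k_out"] += 1
                nodes[j]["k_out"] += 1
    return nodes, edges
```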
thus , the operations ( i)-(iii ) all contribute to , each being incorporated in the continuum theory as follows .\(i ) addition of a new node in a randomly chosen community with probability : \(ii ) addition of a new link in a randomly chosen community with probability : ,\ ] ] where is the number of total nodes .the first term on the right - hand side ( rhs ) corresponds to the random selection of one end of the new link , while the second term on the rhs reflects the preferential attachment ( eq .( [ nodeprob ] ) ) used to select the other end of the link .\(iii ) addition of a new links between two communities with probability : .\ ] ] the first term on the rhs represents the random selection of one end of the new link , while the second term on the rhs considers the preferential attachment ( eq .( [ linkprob ] ) ) used to select the other end of the link in the other community . combing the contribution of above processes, we have with we can simplify eqs .( [ evolve1 ] ) and ( [ evolve2 ] ) for large the boundary conditions of the intra - community degree and the inter - community degree at initial time can be estimated in the sense of mathematical expectations , and , respectively .so we write the solutions of eqs .( [ eqintra ] ) and ( [ eqinter ] ) in random networks , the degree distribution can be calculated by which gives ^{-(3+\frac{p}{p+q } ) } , \label{degreeintra}\\ p(k^{\text{inter } } ) & = & \frac{2p-2pq - p^2}{2-p-4q-2p^2 + 2q^2 + 2p^2q+pq^2+p^3 } \notag\\ & \times & \left[\frac{2 - 2q+pk^{\text{inter}}}{2+p-2q - pq - p^2}\right]^{-(3+\frac{p}{1-p - q})}. \label{degreeinter}\end{aligned}\ ] ] thus , the degree distribution of our network obeys a generalized power - law form ^{-\gamma(p , q)}. \label{distribution}\ ] ] ) and ( [ degreeinter ] ) , respectively .the solid line in ( c ) is guide to the eye with power - law decay exponent .the experiment network has a total number of nodes with parameters , , , and , respectively.,width=529 ] , , , and , respectively.,width=377 ] in fig .[ fig1 ] we present numerical results of distributions of the intra - community degree , the inter - community degree , and the total degree of nodes in log - log scale .the experimental network is generated by the proposed scheme with , , , , and , respectively .the distributions of the intra - community degree and the inter - community degree , shown in figs . [ fig1](a ) and [ fig1](b ) , agree with analytical results of eqs .( [ degreeintra ] ) and ( [ degreeinter ] ) , respectively .the small deviations between computer simulations and analytical solutions at both ends of the distributions appears to be the mathematical approximation of the boundary conditions and the finite size effect due to the relatively small network sizes used in the simulations . according to the evolving rule of our network ,nodes with larger intra- ( or inter- ) degree have higher probability to gain new links , then the usual degree preferential attachment is reasonably kept .this means that the right - skewed character of the network , such as the node s total degree , will retain . as shown in fig .[ fig1](c ) , the total degree distribution of nodes is well expected showing a good right - skewed character , which is reasonably in agreement with the condition of many realistic systems . to illustrate the predictive power, we also compare the numerical result of our network with the statistics of an econophysics collaboration network . 
in the econophysics collaboration network , each node represents one scientist .if two scientists have collaborated one or more papers , they would be connected by an edge ._ took the largest connected component of this network , which includes nodes and edges , and provided the best division , i.e. , . in fig .[ fig2 ] we plot the degree distribution of econophysicists of the econophysics collaboration network which is fitted by computer simulations of our network starting with communities . to gain and , we fit the connectivity distribution obtained from this collaboration network with eq .( [ distribution ] ) , obtaining a good overlap for and ( fig .[ fig2 ] ) .networks with community structures underlie many natural and artificial systems .it is becoming essential to model and study this kind topological feature .we presented a simplified mechanism for networks organized in communities , which corresponds to local events during the system s growth . the generated network is highly clustered and has a good rightskewed distribution of connectivity , which have been found very common in most realistic systems .the present paper only suggests a simple way for generating community networks .the shape of the resulting network is deterministic in some extent .it is more interesting to model the evolution of communities , especially the self organization ( or emergence ) of communities in the natural world , e.g. , expansion and shrinkage , which is left to future work .the authors acknowledge financial support from nsfc/10805033 , socialnets/217141 , stcsm/08zr1408000 , ptdc / fis/71551/2006 , and fct / sfrh / bpd/30425/2006 .
the study of community networks has attracted considerable attention recently. in this paper, we propose an evolving community network model based on local processes, namely the addition of new nodes intra-community and of new links intra- or inter-community. employing growth and preferential attachment mechanisms, we generate networks with a generalized power-law distribution of node degrees. keywords: complex networks; community networks
surface plasmon effects in gold nanoparticles are a physical phenomenon that has been observed in colored glass objects since ancient times. the most fascinating and useful features of the plasmonic resonances in metal nanoparticles are, first of all, the mere existence of these resonances, which may occur at free-space wavelengths that are many orders of magnitude larger than the structure itself, and secondly (and contrary to intuition) the fact that the corresponding resonance frequencies are virtually independent of the size of the particles (if they are sufficiently small), but do depend on their shape and orientation, see _e.g._ , . today, new theory and applications of plasmonics are constantly being explored in technology, biology and medicine. the topic includes the study of surface plasmonic resonances in small structures of various shapes, possibly embedded in different media, see _e.g._ , . the present study is restricted to passive surrounding materials, but future applications of plasmonics may even include amplifying (active) media as described in _e.g._ , . the classical theories as well as most of the recent studies on plasmonic resonance effects are concerned with metal nanoparticles and photonics where the exterior domain is lossless, see _e.g._ , . there are very few results developed for absorption and plasmonic resonance effects in particles or structures surrounded by lossy media. for example, in , geometry-independent absorption bounds are given for the plasmonic resonances in metal nanoparticles in vacuum, and an indication is given of how those results can be extended to lossy surrounding media. there exists a general mie theory for the electromagnetic power absorption in small spherical particles surrounded by lossy media, with explicit expressions and asymptotic formulas for the corresponding absorption cross section, see _e.g._ , . even though these formulas are derived for spherical geometry, they are in general quite complicated and difficult to interpret. however, as will be demonstrated in this paper, a new simplified formula for the absorption cross section can be derived which is valid for small ellipsoidal particles embedded in lossy media, and which facilitates a definition of the corresponding optimal plasmonic resonance. a new, potentially interesting application area for the plasmonic resonance phenomena is the electrophoretic heating of gold nanoparticle suspensions as a radiotherapeutic, hyperthermia-based method to treat cancer. in particular, gold nanoparticles (gnps) can be coated with ligands (nutrients) that target specific cancer cells as well as provide a net electronic charge on the gnps. the hypothesis is that when a localized, charged gnp suspension has been taken up by the cancer cells, it will facilitate an electrophoretic current and a heating that can destroy the cancer under radio or microwave radiation, and this without causing damage to the normal surrounding tissues. hence, the potential medical application at radio or microwave frequencies provides a motivation for studying optimal plasmonic resonances in lossy media. however, it is also important to consider the complexity of this clinical application with many possible physical and biophysical phenomena to take into account, including cellular properties and their influence on the dielectric spectrum, as well as thermodynamics and heat transfer, see _ e.g.
_ , .it is also interesting to note that several authors have questioned whether metal nanoparticles can be heated in radio frequency at all , see _e.g. _ , . based on the above mentioned results as well as our own pre - studies in , we are proposing that straightforward physical modeling can be used to show that the most basic electromagnetic heating mechanisms , such as standard joule heating and inductive heating , most likely can be disregarded for this medical application , whereas the potential application remains with the feasibility of achieving plasmonic ( electrophoretic ) resonances .recently , an optimal plasmonic resonance for the sphere has been defined as the optimal conjugate match with respect to the surrounding medium , _i.e. _ , the optimal permittivity of the spherical suspension that maximizes the absorption at any given frequency .it has been demonstrated in that for a surrounding medium consisting of a weak electrolyte solution ( relevant for human tissue in the ghz range ) , a significant radio or microwave heating can be achieved inside a small spherical gnp suspension , provided that an electrophoretic particle acceleration ( drude ) mechanism is valid and can be tuned into resonance at the desired frequency . in this paper , we generalize the results in to include small structures of ellipsoidal shapes embedded in lossy media , and we provide explicit expressions for the corresponding absorption cross section and optimal conjugate match ( optimal plasmonic resonance ) .we investigate the necessary and sufficient condition regarding the feasibility of tuning a drude model to optimal conjugate match at a single frequency , and we discuss the relation between the optimal conjugate match and the classical frlich resonance condition .a relative absorption ratio is defined to facilitate a quantitative and unitless indicator for the achievable local heating , and some general expressions are finally given regarding the orientation of the ellipsoid in the polarizing field .numerical examples are included to illustrate the theory based on simple spheroidal geometries , and which at the same time are relevant for the potential medical application with electrophoretic heating of gnp suspensions in the microwave regime .the following notation and conventions will be used below .classical electromagnetic theory is considered based on si - units and with time convention for time harmonic fields .hence , a passive dielectric material with relative permittivity has positive imaginary part .let , , and denote the permeability , the permittivity , the wave impedance and the speed of light in vacuum , respectively , and where and .the wavenumber of vacuum is given by , where is the angular frequency and the frequency .the cartesian unit vectors are denoted and the radius vector is where is the radial unit vector in spherical coordinates .finally , the real and imaginary part and the complex conjugate of a complex number are denoted , and , respectively .consider a small spherical region of radius ( ) consisting of a dielectric material with relative permittivity and which is suspended inside a lossy dielectric background medium having relative permittivity .both media are assumed to be homogeneous and isotropic .the absorption cross section of the small sphere is given by where the scattering cross section , the extinction cross section and the absorption cross section with respect to the ambient material , are given by , \label{eq : cext } \\c_{\rm amb}=\frac{8\pi}{3}k_0r_1 ^ 
3{\operatorname{im}}\{\sqrt{\epsilon}\},\label{eq : cinc}\end{aligned}\ ] ] see _ e.g. _ , . by algebraic manipulation of through , exploiting relations such as , and , it can be shown that the absorption cross section can also be expressed in the simplified form see also .in particular , from it can be shown that the optimal conjugate match is the maximizer of for , and which defines the optimal plasmonic resonance for the sphere in a lossy surrounding medium .the polarizability of the sphere is given by where is the volume of the spherical particle , see _e.g. _ , . by inserting into through, the following expression can be obtained alternatively , can be rewritten as and inserted into to yield at this point , it is emphasized that both expressions and have been derived based on the spherical assumption via .when is real valued the expression reduces to the well known expression for the absorption cross section of small particles of arbitrary shape that are surrounded by lossless media , _ i.e. _ , , see . on the other hand , the expression is in a more simple form which is well suited for the derivation of the optimal plasmonic resonance in connection with .it should be noted that the denominator in does not represent a pole of at , its significance is instead to cancel the corresponding zero that is present in the polarizability given by . to derive the polarizability of a small homogeneous structure or a particle, it is assumed that the excitation is given by a constant static electric field , with the polarization defined by the direction of the cartesian axis .the fundamental equations to be solved are given by where and are the electric field intensity and the electric flux density ( electric displacement ) , respectively , and where denotes the complex valued relative permittivity which is assigned the appropriate constant values inside and outside the structure .the equations in are solved by introducing the scalar potential where , and where satisfies the laplace equation , together with the continuity of as well as the continuity of the normal derivative at the boundary of the structure .finally , the scalar field must satisfy the asymptotic requirement .the resulting dipole moment relative the background is then given by where denotes the electric field inside the structure and the letter is used to denote the domain of the structure as well as its volume .consider now a small ellipsoidal region consisting of a dielectric material with relative permittivity and volume , and which is suspended inside a lossy dielectric background medium having relative permittivity , see figure [ fig : rfsetup4pdf ] .both media are assumed to be homogeneous and isotropic .let the largest spatial dimension of the ellipsoid be denoted and assume that .a solution to the electrostatic problem for the ellipsoid is provided by , and it is shown that when the applied field is aligned along one of the axes of the ellipsoid the resulting electric field is constant inside the particle and parallel to the applied field . from the analytical solution of this problem ,the polarizability of the ellipsoid is then finally obtained from the definition the resulting formula for the polarizability of the ellipsoid with semiaxes parallel to the cartesian axes , , and excitation is given by where for and where and , see . 
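since the displayed equations above were lost in extraction , the following is only a sketch of the standard quasi - static sphere treatment that the text describes : the clausius - mossotti polarizability , an interior - field estimate of the absorption cross section in a lossy background , and the conjugate - type permittivity that maximizes it . the exact normalizations of the paper may differ , and the background permittivity , radius and frequency used below are purely illustrative . the geometrical factors for the ellipsoid are described next in the text .

```python
import numpy as np

def sphere_polarizability(eps_p, eps_b, radius):
    """Quasi-static (Clausius-Mossotti) polarizability of a small sphere of
    relative permittivity eps_p in a background eps_b (volume normalization)."""
    return 4.0*np.pi*radius**3 * eps_b*(eps_p - eps_b)/(eps_p + 2.0*eps_b)

def sphere_absorption_cs(eps_p, eps_b, radius, k0):
    """Quasi-static absorption estimate
    C_abs = k0*V*Im(eps_p)*|E_in/E_0|^2 / Re(sqrt(eps_b));
    for a real (lossless) eps_b this reduces to the textbook small-sphere result."""
    V = 4.0*np.pi*radius**3/3.0
    field_ratio_sq = np.abs(3.0*eps_b/(eps_p + 2.0*eps_b))**2   # |E_inside/E_0|^2
    return k0*V*np.imag(eps_p)*field_ratio_sq/np.real(np.sqrt(eps_b))

eps_b = 77.0 + 12.0j                      # hypothetical lossy background permittivity
eps_opt = -2.0*np.conj(eps_b)             # maximizer of the expression above
k0 = 2*np.pi*1.0e9/2.998e8                # free-space wavenumber at 1 GHz (illustrative)
a = 50e-9                                 # illustrative particle radius
print(np.imag(sphere_polarizability(eps_opt, eps_b, a)))
print(sphere_absorption_cs(eps_opt, eps_b, a, k0))
print(sphere_absorption_cs(1.05*eps_opt, eps_b, a, k0))   # any detuning lowers absorption
```

the stationary - point argument quoted in the text corresponds , in this sketch , to maximizing Im(eps_p)/|eps_p + 2*eps_b|^2 over the complex eps_p , which is what singles out the conjugate - type value used above .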
here , , and are geometrical factors satisfying .( 50,150 ) ( 40,10)(150,120 ) and volume , surrounded by a lossy background material with permittivity .the figure also illustrates some typical dimensions of coated gold nanoparticles constituting the ellipsoidal suspension with spatial dimension , see also .,title="fig:",width=226 ] ( 152,94 ) ( 140,75 ) ( 95,90 ) ( 132,25 ) ( 189,25 ) ( 175,70) note that is the additional dipole moment added to the background polarization .this is obvious from the expression implying that when .note also that is the additional permittivity inside the particle with respect to the background .hence , the total polarization of the medium inside the particle can be written and the additional polarization yields the additional dipole moment relative the background where is a constant vector . by comparison of and , and exploiting that and parallel , it is found that the interior field of the particle is given by the poynting s theorem gives the total power loss inside the particle as where has been used .the power density of a plane wave in a lossy medium is given by where and .hence , the absorption cross section is finally obtained as which is identical to the formula given in .consider the real valued function where is a complex variable with and a constant with .take the complex derivative of with respect to to yield showing that is a stationary point .it has furthermore been shown in that is a strictly concave function with a local maximum at , and hence we refer to as the optimal conjugate match .the absorption cross section for the ellipsoid with polarizability , is given by by comparison of and , it is immediately seen that the optimal conjugate match for the ellipsoid is given by and which hence defines the optimal plasmonic resonance for the ellipsoid in a lossy surrounding medium .the sphere is a special case of the ellipsoid with yielding , and which reproduces the corresponding result given in .the notion of the optimal resonance defined in as being `` plasmonic '' is motivated by the fact that a `` normal '' lossy background medium would have and hence , which is a typical feature of plasmonic resonances and which can be achieved _e.g. _ , by tuning a drude model .if we consider the optimal conjugate match in as a function of frequency , then it represents a metamaterial in the sense that it has a negative real part ( a dielectric medium with inductive properties ) , and which can not in general be implemented as a passive material over a fixed bandwidth , see also .however , as will be shown below , in many cases a drude model can be tuned to optimal plasmonic resonance at any desired center frequency . a generalized drude model for the permittivity of the ellipsoidal particleis given by where corresponds to the background material and where the static conductivity and the relaxation time are the parameters of the additional drude model .it is assumed that the background material is a `` normal '' material with and over the bandwidth of interest .the drude parameters may correspond to _e.g. _ , an electrophoretic mechanism where and , where is the number of charged particles per unit volume , the particle charge , the friction constant of the host medium and the mass of the particle , see _e.g. 
_ , .the drude parameters can be tuned to the optimal conjugate match by solving the equation where is given by and is the desired resonance frequency .this means that the following two equations corresponding to the real and imaginary parts of must be satisfied to find a solution to , it is necessary and sufficient that both equations have a right - hand side that is positive . for a `` normal '' surrounding material with , it is readily seen from that and hence that .for the imaginary part , the requirement that together with leads directly to the condition when the condition is fulfilled , the system can be solved to yield the following tuned drude parameters see also .consider the interpretation of the condition in the case with spheroidal shapes .choose for example the axis as the direction of the applied electric field , and let where . the ellipsoid is then a prolate spheroid when , a sphere when and an oblate spheroid when .the interpretation of is that the sphere and the prolate spheriod can always be tuned by a drude model to match the optimal value at any desired center frequency for which .an oblate spheroid , however , can only be tuned into optimal plasmonic resonance using the drude model , when the shape is not too flat and .this result agrees well with intuition , since polarizability ( and hence resonance ) is enhanced by prolongation of the particle shape in the direction of the polarizing field .the result generalizes the classical frhlich condition in the sense that gives the condition for an optimal plasmonic resonance of a small homogeneous ellipsoid , which is not an approximation and which is valid for a surrounding lossy medium .hence , the frhlich condition for the ellipsoid can be obtained from in a sequence of approximations as follows .first , the criterion is approximated as assuming that the imaginary parts of both and are small . using the following form of the drude model where is the plasma frequency given by , the equation can be solved to yield the following frhlich resonance frequency where the last approximation is valid when . for a lossless surrounding medium with real valued , the frhlich resonance frequency for a sphere consisting of a drude metalis given by , see .the absorption cross section of a small volume with respect to the ambient material is given by , and which is valid for volumes of arbitrary shape , see also .a unitless relative absorption ratio for the ellipsoid can now be defined as where has been used , as well as the relationship . 
by inserting the optimal conjugate match into , the following optimal relative absorption ratio is obtained for excitation along the axis of the ellipsoid the relative absorption ratio given by and can be useful as a quantitative unitless measure showing how much more heating that potentially can be obtained in a small resonant region in comparison to the ambient local heating .it is important to note , however , that a complete system analysis would take into account not only the local heating capabilities , but also the significance of the frequency dependent penetration ( skin ) depth of the bulk material , see also .finally , a general expression is given for the absorption cross section of a small homogeneous ellipsoidal particle with arbitrary orientation with respect to the applied field .consider a small ellipsoidal region with its semiaxes aligned along the cartesian unit vectors , , and an applied electric field given by .due to the linearity of the fundamental equations , it is straightforward to generalize the expressions on absorption cross section given in sections [ sect : abshomellipsoids ] and [ sect : optplasmresellipsoid ] above .the polarizability can now be expressed in terms of the diagonal polarizability dyadic where , and the interior field is given by instead of .the total power loss inside the particle is now given by and the corresponding absorption cross section where . by using ,the absorption cross section and the relative absorption ratio finally becomes and it is immediately seen that the two expressions in and are strictly concave functions in terms of the complex variable for ( a positive combination of concave functions is a concave function , etc ) and the corresponding optimal plasmonic resonance is therefore well - defined and unique .however , it is no longer possible to obtain a simple closed form expression for the optimal conjugate match as in .to illustrate the theory , a numerical example is considered with parameter choices relevant for the application with microwave absorption in gold nanoparticle suspensions , see _e.g. _ , .hence , the resonant frequency is chosen here as to mimic a typical system operating in the microwave regime , see _e.g. _ , . as a lossy ambient mediumis taken the typical characteristics of human tissue .information about the dielectric properties of biological tissues can be found in _e.g. _ , giving measurement results of most organs including brain ( grey matter ) , heart muscle , kidney , liver , inflated lung , spleen , muscle , etc . from these measurement resultswe conclude that human tissue can be realistically modelled by using a conductivity in the order of and a permittivity similar to water at a frequency of .hence , a simple conductivity - debye model for saline water is considered here where the surrounding medium is a weak electrolyte solution with relative permittivity where , and are the high frequency permittivity , the static permittivity and the dipole relaxation time in the corresponding debye model for water , respectively , and the conductivity of the saline solution . in the numerical examples below , these parameters are chosen as , , and . 
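before turning to the figures , the following sketch strings the pieces of the preceding sections together numerically : a conductivity - debye background , the depolarization factor of a prolate spheroid , the optimal conjugate match , a drude model tuned to it , and the resulting relative absorption ratio . the background parameters , spheroid dimensions and design frequency are stand - ins for the stripped values , the time convention assumed is exp(-i*omega*t) , and the quasi - static expressions are the same reconstructed estimates used in the sphere sketch above , not the paper's exact formulas .

```python
import numpy as np

EPS0 = 8.854187817e-12

def saline_debye_eps(f, eps_inf=5.3, eps_s=80.0, tau=8.3e-12, sigma=1.0):
    """Conductivity-Debye background (losses carry a positive imaginary part).
    Parameter values are illustrative stand-ins for the stripped ones."""
    w = 2*np.pi*f
    return eps_inf + (eps_s - eps_inf)/(1.0 - 1j*w*tau) + 1j*sigma/(EPS0*w)

def prolate_Lz(ax, az):
    """Depolarization factor along the symmetry axis of a prolate spheroid (az > ax)."""
    e = np.sqrt(1.0 - (ax/az)**2)
    return ((1.0 - e**2)/e**2)*(-1.0 + np.log((1.0 + e)/(1.0 - e))/(2.0*e))

def eps_opt(eps_b, L):
    """Conjugate-type permittivity maximizing Im(eps)/|eps_b + L*(eps - eps_b)|^2."""
    return -((1.0 - L)/L)*np.conj(eps_b)

def tune_drude(eps_b, L, w0):
    """Parameters (sigma, tau) of an additional Drude term
    i*sigma/(EPS0*w*(1 - i*w*tau)) such that eps_b + Drude = eps_opt at w0.
    Feasible only if the required increment has Re < 0 and Im > 0 (roughly L < 1/2)."""
    d = eps_opt(eps_b, L) - eps_b
    if not (d.real < 0.0 and d.imag > 0.0):
        raise ValueError("optimal match not reachable with a passive Drude term")
    tau = -d.real/(w0*d.imag)
    sigma = EPS0*w0*d.imag*(1.0 + (w0*tau)**2)
    return sigma, tau

def rel_absorption(eps_p, eps_b, L):
    """Particle absorption divided by the absorption of the same volume of
    background material, in the quasi-static sketch used here."""
    c_abs = np.imag(eps_p)*np.abs(eps_b)**2 / (
        np.abs(eps_b + L*(eps_p - eps_b))**2 * np.real(np.sqrt(eps_b)))
    return c_abs/(2.0*np.imag(np.sqrt(eps_b)))

f0 = 1.0e9                               # hypothetical design frequency, 1 GHz
w0 = 2*np.pi*f0
L = prolate_Lz(20e-9, 60e-9)             # hypothetical 3:1 prolate spheroid, field along z
sigma, tau = tune_drude(saline_debye_eps(f0), L, w0)

f = np.linspace(0.2e9, 3.0e9, 200)
eps_b = saline_debye_eps(f)
eps_tuned = eps_b + 1j*sigma/(EPS0*2*np.pi*f*(1.0 - 1j*2*np.pi*f*tau))
Q_opt = rel_absorption(eps_opt(eps_b, L), eps_b, L)     # best possible at each frequency
Q_drude = rel_absorption(eps_tuned, eps_b, L)           # tuned Drude particle
print(L, Q_opt[np.argmin(np.abs(f - f0))], Q_drude.max())
```

as in the text , the tuned drude curve touches the optimal curve only at the design frequency , and the tuning fails with an error once the depolarization factor of a flattened oblate spheroid exceeds one half .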
in figures [ fig : matfig503 ] through [ fig : matfig504b ] are shown the calculated relative absorption ratios for the ellipsoid with optimal , tuned drude and mismatched drude parameters , respectively .the optimal parameter is given by , the tuned drude parameter is given by and , and the mismatched drude parameter is again the drude parameter given by and , but which is constantly mismatched to the sphere using .a spheroidal shape is considered with the geometrical factors and , and where the applied electric field is aligned along the axis of the spheroid and hence is given by .the relative absorption ratios in the three cases described above are denoted , and corresponding to the parameters , and respectively .the parameter choices in figures [ fig : matfig503 ] through [ fig : matfig504b ] , are ( prolate spheroid ) , ( sphere ) and ( oblate spheroid ) which is close to the limiting case expressed in . from these examples , it is seen how the increased conductivity and losses ( figures [ fig : matfig503b ] and [ fig : matfig504b ] ) limits the usefulness of the local heating .but even in the latter example , where , the potential of local heating amounts to a relative absorption ratio of about 10:1 . in the case with the mismatched drude model , it is interesting to see how a prolongation of the spheroid lowers the resonance frequency , and a flattening of the spheroid yields a higher resonance frequency .( 50,140 ) ( 100,0)(50,130 ) with corresponding to the optimal parameter and corresponding to the tuned drude parameter . here, the surrounding medium is a saline solution with .,title="fig:",width=321 ] ( 50,140 ) ( 100,0)(50,130 ) with corresponding to the optimal parameter and corresponding to the mismatched drude parameter tuned to a sphere . here, the surrounding medium is a saline solution with .,title="fig:",width=321 ] ( 50,140 ) ( 100,0)(50,130 ) with corresponding to the optimal parameter and corresponding to the tuned drude parameter . here, the surrounding medium is a saline solution with .,title="fig:",width=321 ] ( 50,140 ) ( 100,0)(50,130 ) with corresponding to the optimal parameter and corresponding to the mismatched drude parameter tuned to a sphere . here, the surrounding medium is a saline solution with .,title="fig:",width=321 ]a new general formula has been derived for the absorption cross section of small ellipsoidal particles that are surrounded by lossy media .the new formula is expressed explicitly in terms of the polarizability of the particle and can be used to define an optimal plasmonic resonance for a given surrounding medium .the new formula can be derived from general mie scattering theory for a spherical particle in a lossy medium which generalizes to particles of ellipsoidal shape in the limiting case with small particles .the formula can furthermore be derived directly from the knowledge about the static solution to the ellipsoidal polarizability problem .a canonical example is presented based on the polarizability of a homogeneous spheroid .the example shows how an optimal plasmonic resonance can be designed based on a tuned drude model and illustrates the typical shape dependent resonance frequency of the surface plasmon .the numerical example is furthermore motivated by the medical application with radiotherapeutic hyperthermia based on electrophoretic heating of gold nanoparticle suspensions using microwave radiation .this work was supported by the swedish foundation for strategic research ( ssf ) .s. link and m. a. 
el - sayed , `` shape and size dependence of radiative , non - radiative and photothermal properties of gold nanocrystals , '' _ int . reviews in physical chemistry _ , vol . 19 , no . 3 , pp .409453 , 2000 .o. d. miller , a. g. polimeridis , m. t. h. reid , c. w. hsu , b. g. delacy , j. d. joannopoulos , m. soljacic , and s. g. johnson , `` fundamental limits to optical response in absorptive systems , '' _ optics express _ , vol .24 , no . 4 , pp . 33293364 , 2016 .t. lund , m. f. callaghan , p. williams , m. turmaine , c. bachmann , t. rademacher , i. m. roitt , and r. bayford , `` the influence of ligand organization on the rate of uptake of gold nanoparticles by colorectal cancer cells , '' _ biomaterials _ , vol .32 , pp . 97769784 , 2011 .g. w. hanson , r. c. monreal , and s. p. apell , `` electromagnetic absorption mechanisms in metal nanospheres : bulk and surface effects in radiofrequency - terahertz heating of nanoparticles , '' _ j. appl ._ , vol . 109 , 2011 , 124306 .s. j. corr , m. raoof , y. mackeyev , s. phounsavath , m. a. cheney , b. t. cisneros , m. shur , m. gozin , p. j. mcnally , l. j. wilson , and s. a. curley , `` citrate - capped gold nanoparticle electrophoretic heat production in response to a time - varying radio - frequency electric field , '' _ j. phys . chem .116 , no .45 , pp . 2438024389 , 2012 .s. nordebo , m. dalarsson , y. ivanenko , d. sjberg , and r. bayford , `` on the physical limitations for radio frequency absorption in gold nanoparticle suspensions , '' __ , vol .50 , no .15 , pp . 112 , 2017 .m. marquez , e. garcia , and m. camacho , `` hyperthermia devices and their uses with nanoparticles , '' jun .11 2013 , us patent 8463397 , https://www.google.com/patents/us8463397 .[ online ] .available : https://www.google.com/patents/us8463397 mariana dalarsson received the m.s .degree in microelectronics in 2010 , licentiate degree in electromagnetic theory in 2013 and ph.d .degree in electromagnetic theory in 2016 from the royal institute of technology , stockholm , sweden .since 2016 , she is a postdoctoral researcher at the department of physics and electrical engineering , linnus university .her research interests are in mathematical physics , metamaterials , electromagnetic wave propagation and absorption , inverse problems and imaging .sven nordebo received the m.s .degree in electrical engineering from the royal institute of technology , stockholm , sweden , in 1989 , and the ph.d .degree in signal processing from lule university of technology , lule , sweden , in 1995 .he was appointed docent in signal processing at blekinge institute of technology , in 1999 . since 2002he is a professor of signal processing at the department of physics and electrical engineering , linnus university .his research interests are in statistical signal processing , optimization , electromagnetics , wave propagation , inverse problems and imaging .daniel sjberg received the m.sc .degree in engineering physics and ph.d .degree in engineering , electromagnetic theory from lund university , lund , sweden , in 1996 and 2001 , respectively . 
in 2001, he joined the electromagnetic theory group , lund university , where , in 2005 , he became a docent in electromagnetic theory .he is currently a professor and the head of the department of electrical and information technology , lund university .he serves as the chair of swedish ursi commission b fields and waves since 2015 .his research interests are in electromagnetic properties of materials , composite materials , homogenization , periodic structures , numerical methods , radar cross section , wave propagation in complex and nonlinear media , and inverse scattering problems .richard bayford received the m.sc .degree in engineering from cranfield institute of technology , uk , in 1981 and ph.d .degree in engineering , from middlesex university , uk , in 1994 .he is currently professor of bio - modelling , head of biophysics at the middlesex university centre for investigative oncology , head of biophysics and engineering group , and visiting professor at ucl , department of electrical and electronic engineering , uk .his expertise is in biomedical imaging , bio - modelling , nanotechnology , deep brain stimulation , tele - medical systems , instrumentation and biosensors .he has had long collaborations with research groups both in the uk and overseas .he has published over 270 scientific papers and served as editor - in - chief for physiological measurements .
a new simplified formula is derived for the absorption cross section of small dielectric ellipsoidal particles embedded in lossy media . the new expression leads directly to a closed - form solution for the optimal conjugate match with respect to the surrounding medium , _ i.e. _ , the optimal permittivity of the ellipsoidal particle that maximizes the absorption at any given frequency . this defines the optimal plasmonic resonance for the ellipsoid . the optimal conjugate match represents a metamaterial in the sense that the corresponding optimal permittivity function may have a negative real part ( inductive properties ) , and it cannot in general be implemented as a passive material over a given bandwidth . a necessary and sufficient condition is derived for the feasibility of tuning a drude model to the optimal conjugate match at a single frequency , and it is found that all prolate spheroids and some of the ( not too flat ) oblate spheroids can be tuned into optimal plasmonic resonance at any desired center frequency . numerical examples are given to illustrate the analysis . beyond the general understanding of plasmonic resonances in lossy media , it is anticipated that the new results can be useful for feasibility studies of , _ e.g. _ , radiotherapeutic hyperthermia methods that treat cancer by electrophoretic heating of gold nanoparticle suspensions under microwave radiation . keywords : particle absorption , plasmonic resonances .
in previous publications , an objective description of a quantum system in the time interval between two complete measurements has been proposed in terms of _ two _ state vectors , together with a new type of physical quantity , the weak value " of a quantum mechanical observable .specifically , for a system drawn from an ensemble preselected in the state and postselected in the state , the weak value for the observable is defined as where the real part is the quantity of primary physical interest ( and to which the term weak value " shall henceforth apply unless otherwise noted ) .the suggestion was motivated operationally by the fact that both real and imaginary parts of weak values can be linked to conditional measurement statistics predicted by standard quantum mechanics for the general class of weak measurements " , defined so as to minimize the disturbance to the system as a result of a diminished interaction with the measuring instrument . under these conditions ,joint weak measurements of two non - commuting observables can be made with negligible mutual interference , thus ensuring that the simultaneous assignment of weak values to all elements of the observable algebra is operationally consistent .the usefulness of this description has been demonstrated , both theoretically and experimentally , in a number of applications in which novel aspects of quantum processes have been uncovered when analyzed in terms of weak values .these include photon polarization interference , barrier tunnelling times , photon arrival times , anomalous pulse propagation , correlations in cavity qed experiments , complementarity in which - way " experiments , non - classical aspects of light , communication protocols and retrodiction paradoxes " of quantum entanglement . a certain amount of skepticism has nevertheless prevailed regarding the physical status of weak values , particularly in the light of the unconventional range of values that is possible according to ( [ weakvdef ] ) .indeed , the real part of , describing the pointer variable " response in a weak measurement , may lie outside the bounds of the spectrum of .manifestly eccentric " weak values , as are negative kinetic energies or negative particle numbers , are not easily reconciled with the physical interpretation that is traditionally attached to the respective observables .less intuitive yet is when stands for a projection operator , in which case the weak value suggests weak probabilities " taking generally non - positive values .such bizarre interpretations call for a sharper clarification of what physical meaning should be attached to the weak value of an observable .another item of skepticism surrounding the physical significance of weak values has to do with their general domain of applicability .it seems reasonable to demand of any new physical concept that it be applicable to a wide variety of situations outside the restricted context in which it is defined operationally .although progress has been made in this direction , convincing evidence of the general validity of the concept of the weak value is still lacking . 
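as a small numerical illustration of the definition quoted above , the weak value of a pauli observable can land far outside its eigenvalue range when the pre - and post - selected states are nearly orthogonal ; the particular states and observable below are chosen only for illustration .

```python
import numpy as np

def weak_value(pre, post, A):
    """<post|A|pre>/<post|pre>; its real part is the quantity of primary interest."""
    return np.vdot(post, A @ pre)/np.vdot(post, pre)

sz = np.array([[1, 0], [0, -1]], dtype=complex)          # eigenvalues are +1 and -1
pre  = np.array([1.0, 1.0], dtype=complex)/np.sqrt(2)    # pre-selection along +x
beta = 3*np.pi/4 - 0.05                                  # post-selection nearly orthogonal
post = np.array([np.cos(beta), np.sin(beta)], dtype=complex)

print(weak_value(pre, post, sz).real)    # about -20, far outside the spectrum [-1, +1]
```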
with these questions in mind , the aim of this paper is two - fold : first , we address the physical meaning of weak values by showing that there exists an unambiguous interpretation of the real part of the weak value as a _definite _ mechanical effect of the system on a measuring probe that is specifically designed to minimize the dispersion in the back - reaction on the system .second , based on this interpretation , we present a new framework for the analysis of general von neumann measurements , in which the measurement statistics are interpreted as _ quantum averages of weak values _ ( qawv ) .we believe this framework is physically intuitive and provides compelling evidence for the ubiquity of weak values in more general measurement contexts .in particular , we show that for arbitrary system ensembles , the expectation value of the reading of any von neumann - type measurement is an average of weak values over a suitable posterior probability distribution .we furthermore show how qawv framework has a natural correspondence in the classical limit with the posterior analysis of measurement data according to the classical inference framework .thus , we can establish a correspondence between weak values and what in the macroscopic domain are regarded as objective classical dynamical variables .the paper is structured as follows : in sec .[ eigvvswv ] , we motivate the idea of averaging weak values by discussing the connection between pre - selected and pre- and postselected statistics in arbitrary measurements von - neumann type measurements . in sec .[ mechint ] we present the operational definition of the weak value as a definite mechanical effect associated with infinitesimally uncertain unitary transformations .the qawv framework is then introduced in sec . [ qaves ] for arbitrary strength measurements .we provide an illustration in section [ likecc ] , where we discuss a number of measurement situations in which the framework gives a simple characterization of the outcome statistics .finally , we establish in sec .[ clas ] the classical correspondence of the qawv framework .some conclusions are given in sec .[ concl ] .the conventional interpretation of a quantum mechanical expectation value , such as for an observable , is as an average of the eigenvalues of over a probability distribution that is realized in the context of a complete strong measurement of .our main suggestion in this paper is that for a wide class of generalized conditions on the von neumann measurement of , the statistics of measurement outcomes can alternatively be related to an underlying statistics of a different quantity , the weak value of , which is to be regarded as a definite physical property of an unperturbed quantum system in the time interval between two complete measurements .we shall therefore begin by discussing in this preliminary section the connection between pre- and pre - and post - selected measurement statistics of arbitrary strength von neumann measurements , and from this discussion show an instance in which averages of weak values more aptly describe the posterior break - up of the measurement outcome distribution . 
in the von neumann measurement scheme ,the device is some external system , described by canonical variables and , with =i ] , which from ( [ likeapp ] ) is spproximately the sum of two equally - shaped narrow gaussians at .thus , conditions are achieved for the superposition of two weak measurements at , both sampling in this case the least eccentric weak value on the orbit , .the two peaks in these cases are in fact quite similar to the single peak from the gaussian profile of fig .[ fig : likeffex]e at ; thus , even while the prior pointer distributions in [ fig : likeffex]e through [ fig : likeffex]f differ substantially in their shapes , the resulting ppme pointer distributions for all three cases share essentially the same envelope , with the last two cases showing interference fringes from the superposition of the two weak measurement sampling points .this interference pattern can then be connected to the spectral distribution expected from a strong measurement : given weak value and likelihood curves symmetric about , and a posterior distribution in with two similarly - shaped narrow peaks at locations , the resulting ppme pointer distribution will be the ppme pointer distibution for the single peak weak measurement at , but modulated by the term describing the interference pattern .for the situation depicted in fig .[ fig : likeffex ] , the phase shift is easily obtained from eq .( [ wvalj ] ) and is given by for , we have ; hence , interference patterns similar to those of figs [ fig : likeffex]f and [ fig : likeffex]g will show maxima at integer values of ( corresponding to integer values of ) , or at half - integer values of for half - integer , consistently with spectrum of .the foregoing suggests a fairly general picture underlying the transition from weak to strong measurement conditions for fixed initial and final conditions , as the width of the prior distribution in is varied . illustrating this passage with a gaussian prior of variable width centered at ( fig .[ fig : trans ] ) for the same measurement , we find the onset of a transitional behavior at a critical value of ( fig .[ fig : trans]b ) where the gaussian approximation fails . beyond this critical value ,the exponential rise of the likelihood factor dominates the prior on both sides , thus producing two symmetrically opposed peaks , the locations of which gradually move towards as is increased . this transitional behavior is reflected in the resulting pointer distribution by the emergence of an interference pattern with increasingly closer fringes , modulated by an envelope that gradually shifts with the sampled weak value from the eccentric to the normal region of expectation .the pattern eventually settles at the characteristic shape of the strong measurement distribution when the location of the two peaks reaches , only becoming sharper with increasing when the tails of the gaussian prior activate " the next likelihood peaks at , etc .the connection between macroscopic classical " properties and weak values has already been suggested in the literature . 
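a sketch of the post - selected von neumann pointer statistics discussed above : after the impulsive coupling , the ( unnormalized ) final pointer wavefunction is a superposition of copies of the initial pointer profile , each shifted by an eigenvalue and weighted by the corresponding transition amplitude . a broad gaussian pointer then reproduces the weak value as the mean reading , while a narrow pointer gives a mean inside the spectrum ; the gaussian profile , the coupling strength and the qubit states are illustrative choices , not the paper's .

```python
import numpy as np

def pointer_mean(width, g, pre, post, A, q):
    """Mean reading of the post-selected pointer after an exp(-i*g*A*p) coupling,
    for an initial Gaussian pointer of the given width (sketch; hbar = 1)."""
    eigvals, eigvecs = np.linalg.eigh(A)
    psi_f = np.zeros_like(q, dtype=complex)
    for a, v in zip(eigvals, eigvecs.T):
        amp = np.vdot(post, v)*np.vdot(v, pre)           # <post|a><a|pre>
        psi_f += amp*np.exp(-(q - g*a)**2/(4.0*width**2))
    prob = np.abs(psi_f)**2
    return np.sum(q*prob)/np.sum(prob)

sz = np.array([[1, 0], [0, -1]], dtype=complex)
pre  = np.array([1.0, 1.0], dtype=complex)/np.sqrt(2)
beta = 3*np.pi/4 - 0.05
post = np.array([np.cos(beta), np.sin(beta)], dtype=complex)
weak_value = (np.vdot(post, sz @ pre)/np.vdot(post, pre)).real

q = np.linspace(-1500.0, 1500.0, 60001)
print(pointer_mean(200.0, 1.0, pre, post, sz, q))   # broad pointer: ~ g*Re(A_w) ~ -20
print(pointer_mean(0.2,   1.0, pre, post, sz, q))   # narrow pointer: lies inside [-1, +1]
print(weak_value)
```

shrinking the width from the first value to the second traces exactly the weak - to - strong transition described above , including the appearance of separate peaks at the eigenvalue shifts .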
in this sectionwe give further evidence of this connection by showing the correspondence of the qawv framework in the classical limit .in particular , we show that in the semi - classical limit , the necessary conditions for a precise measurement of a classical dynamical quantity according to classical mechanics are at the same time the conditions that in the quantum description guarantee a weak measurement of the corresponding observable yielding the same numerical outcome .let be the configuration variable of a classical system , with free dynamics described by the lagrangian .for simplicity , we concentrate on a measurement of a function of the configuration variable alone , with a measurement lagrangian of the form coupling the system and an external classical apparatus with pointer variable and canonical conjugate . to connect with the results of section [ qaves ] , we interpret the pre- and post- selection as the fixing of initial and final boundary conditions on the system trajectory : and with .let us assume for simplicity throughout that only one solution is possible for the euler - lagrange equations . for non - zero ,the trajectory of the system will differ from its free trajectory due to a modification of the equations of motion by an additional -dependent impulsive force arising from the back reaction of the apparatus on the system . then , since the actual trajectory will be some function of the boundary conditions and , the quantity will generally depend on as well . in analogy with our previous notation ,define the function as one can show from the equations of motion , the classical action for the total lagrangian evaluated on the trajectory , \ , , \ ] ] serves as a generating function for , i.e. , .thus , from the equations of motion for the apparatus , we find that the pointer variable suffers the impulse at the time in direct correspondence with eq .( [ canshift ] ) .we now turn to the probabilistic aspects of the measurement . allowing for uncertainties in the initial state ( i.e. , the point in phase space ) of the apparatus, we describe our knowledge with a prior p.d.f . for the state of the apparatus before the measurement , where denotes all available prior information .we also assume that initial conditions on the system are irrelevant for this prior assessment of probabilities so that .since the variable enters the equations of motion of the system , knowledge of the final condition becomes relevant for inferences about at the time of the measuring interaction , and will therefore determine a re - assessment of prior probabilities .we must therefore compute the posterior p.d.f . for the apparatus , conditioned on the endpoints of the system trajectory , at the time _ before _ the interaction .the dynamics of the measurement can then be described by the liouville evolution generated by , i.e. , using bayes theorem , we find that where we have used the fact that is the only relevant apparatus variable entering the dynamics of the system , thus yielding a likelihood factor analogous to in the quantum case . finally , evolving to the time after the measurement through eq .( [ classtatetarns ] ) and marginalizing , we obtain for the pointer variable distribution after the measurement : where the dummy variables and are averaged over the reassessed initial phase space p.d.f . 
for the apparatus .this distribution is in complete analogy with eq .( [ quantpdf ] ) if averages over are identified with averages over the reassessed state and if is identified with the -dependent weak value . with this identification , eq .( [ finalmoms ] ) for the associated moments can be used for both the classical or quantum descriptions .furthermore , the terms and in ( [ finalmoms ] ) can also be eliminated in the classical case by requiring that the prior phase space distribution factors as with the expectation value of vanishing over . we can now show that under appropriate semi - classical conditions on a corresponding quantum system , the above analogy is not only formal but rather constitutes a true numerical correspondence between classical and quantum averages .for this , we need to calculate the so - far unspecified likelihood factor in eq .( [ clasbayes ] ) , which plays the role of in the state reassessment of eq .( [ statereassess ] ) . in the classical description ,the probability of being at at the time is proportional to the integral over all possible initial momenta of the system , yielding where is the value of the initial momentum as determined from the boundary conditions .this initial momentum can be obtained from a variation of the classical action , , so that ( known as van vleck determinant from its extension to higher dimensions ) .correspondence with the quantum description can now be established by calculating the quantum mechanical propagator for the corresponding quantum system , with being the time evolution operator associated with the classical lagrangian . as is easily verified , this is the relevant amplitude for the von neumann measurement of at the time with the given boundary conditions .under appropriate semiclassical conditions ( e.g. , small times , large masses , slowly varying potentials , etc . ), the propagator reduces to the semiclassical or wkb form where is classical action of eq .( [ clasact ] ) . consequently, under semiclassical conditions , the weak value of at the time coincides with the classical ; similarly , the likelihood factor in the re - assessment of the initial state of the apparatus ( eq .[ statereassess ] ) is the square root of the the likelihood factor involved in the re - assessment probabilities in the classical description .thus , assuming the conditions ensuring =0 , the final posterior mean value of will be given both in the classical and quantum descriptions by the average value over the respective posterior distributions in , which can be made to coincide .this allows us to claim a stronger correspondence between the classical and quantum descriptions when the system satisfies semiclassical conditions : for the same prior distributions in , the classical and quantum expectation values and variances of _ are numerically equal _ and hence , in particular , the final pointer expectation values are equal .it follows that the minimum dispersion conditions on the variable that in a classical description are required for a precise measurement of ( i.e. 
, ) , are at the same time the conditions that in the quantum description will guarantee a _weak _ measurement of yielding the same numerical value .this correspondence strongly suggests that indeed , what we call macroscopic classical " properties , are in fact weak values .let us elaborate on this assertion : the use of classical mechanics to describe macroscopic systems or other quantum systems exhibiting classical behavior relies on the fact that individual measurements may be devised so that : a ) the effect on the measurement device accurately reflects the numerical value of the classical observable being measured , b ) no appreciable disturbance is produced on the system as a result of the measurement interaction ; and c ) the effect on the measurement device is statistically distinguishable ( i.e. , the signal to noise ratio is large ) .the three conditions can be stated as follows : a ) , b ) , and c ) . in the quantum description , conditions a ) andb ) are weak measurement conditions and can be attained asymptotically by making the posterior uncertainty tend to zero , with the posterior average fixed at ; however , condition c ) can not be upheld in the limit since due to the uncertainty principle .equivalently , conditions a ) and b ) can not be fulfilled if condition c ) is to be satisfied by demanding as in the case of an ideally strong measurement .while it is therefore impossible to satisfy the three conditions either in the absolute strong or weak limits , relatively weak measurement conditions can nevertheless be found as a compromise in the uncertainty relations so that conditions a ) , b ) and c ) are simultaneously satisfied for all practical purposes " when classical - like physical quantities are involved .indeed , for such quantities one expects to be in a sense large " relative to atomic scales , or more precisely , to scale extensively with some scale parameter growing with the size or classicality " of the system ( such the mass , or the number of atoms ) .one can then choose a scaling relation for , i.e. , so that in which case conditions a ) b ) and c ) can be satisfied in the limit .assuming that scales as , then with the aid of the uncertainty relation and , we find that this is possible in the quantum description if can be made to scale as in which case as was recently shown , this is precisely the scaling relation of the optimal compromise for measurements of classical " collective properties ( such as center of mass position or total momentum ) of a large number of independent atomic constituents .in this paper we have advanced the claim that weak values of quantum mechanical observables constitute legitimate physical concepts providing an objective description of the properties of a quantum system known to belong to a completely pre- and postselected ensemble .this we have done by addressing two aspects , namely the physical interpretation of weak values , and their applicability as a physical concept outside the weak measurement context . 
regarding the physical meaning of weak values, we have shown that the weak value corresponds to a definite mechanical response of an ideal measuring probe the effect of which , from the point of the system , can be described as an infinitesimally uncertain unitary transformation .we have stressed how from this operational definition the weak value of an observable is tied to the role of as a generator of infinitesimal unitary transformations .we believe that this sharper operational formulation of weak values in terms of well - defined mechanical effects clarifies the sense in which weak values describe new and surprising features of the quantum domain . regarding the applicability of the concept of weak values in more general contexts, we have shown that arbitrary - strength von neumann measurements can be analyzed in the framework of quantum averages of weak values , in which dispersion in the apparatus variable driving the back - reaction on the system entails a quantum sampling of weak values .the framework has been shown to merge naturally into the classical inferential framework in the semi - classical limit .it is our hope that the framework introduced in the present paper may serve as a motivation for a refreshed analysis of the measurement process in quantum mechanics .y. a. acknowledges support from the basic research foundation of the israeli academy of sciences and humanities and the national science foundation .a.b . acknowledges support from colciencias ( contract no .245 - 2003 ) .this paper is based in part on the latter s doctoral dissertation , the completion of which owes much to prof .yuval neeman and financial support from colciencias - bid ii and a one - year scholarship from icsc - world laboratory .
we re - examine the status of the weak value of a quantum mechanical observable as an objective physical concept , addressing its physical interpretation and general domain of applicability . we show that the weak value can be regarded as a _ definite _ mechanical effect on a measuring probe specifically designed to minimize the back - reaction on the measured system . we then present a new framework for general measurement conditions ( where the back - reaction on the system may not be negligible ) in which the measurement outcomes can still be interpreted as _ quantum averages of weak values_. we show that in the classical limit , there is a direct correspondence between quantum averages of weak values and posterior expectation values of classical dynamical properties according to the classical inference framework .
mechanical models for tumor growth have been used extensively in recent years for the prediction of cancer evolution based on imaging analysis . such models rest on the assumption that the growth of the tumor is mainly limited by the competition for space . mathematical modeling , analysis and numerical simulations , together with experimental and clinical observations , are essential components in the effort to enhance our understanding of cancer development . the goal of this article is to make a further step in the investigation of such models by presenting a convergent explicit finite difference scheme for the numerical approximation of a hele - shaw - type model for tumor growth and by providing its detailed mathematical analysis . even though the main focus of the present work is on the evolution of the proliferating cells , it provides a mathematical framework that can potentially accommodate more complex systems accounting for the presence of nutrients and drug application ; this will be the subject of future investigation . in the present context the tissue is considered as a multi - phase fluid , and the ability of the tumor to expand into a host tissue is then primarily driven by the cell division rate , which depends on the local cell density and the mechanical pressure in the tumor . the dynamics of the cell population density under pressure forces and cell multiplication is described by a transport equation where represents the number density of tumor cells , the velocity field and the pressure of the _ tumor _ . is a bounded domain in , . the pressure law is given by where . following , we assume that growth is directly related to the pressure through a function which satisfies ; the pressure is usually called the _ homeostatic pressure _ . here , and in what follows , for simplicity we let for some . the continuous motion of cells within the tumor region , typically due to proliferation , is represented by the velocity field given by an alternative to darcy s equation known as _ brinkman s equation _ , where is a positive constant describing the viscous - like properties of tumor cells and is the pressure given by . relation consists of two terms . the first term is the usual darcy s law , which in the present setting describes the tendency of cells to move down pressure gradients and results from the friction of the tumor cells with the extracellular matrix . the second term , on the other hand , is a dissipative force density ( analogous to the laplacian term that appears in the navier - stokes equation ) and results from the internal cell friction due to cell volume changes , i.e. , the friction among the tumor cells themselves .
the resulting model , governed by the transport equation for the population density of cells , the elliptic equation for the velocity field and a state equation for the pressure law , now reads at a first look , appears as an over damped force balance .a second interpretation of this relation states that the tumor tissue is fluid like " and that the tumor cells flow through the fixed extracellular matrix like a flow through a porous medium , obeying brinkman s law . we complete the system with a family of initial data satisfying ( for some constant ) the objective of this work is to establish the global existence of weak solutions to the nonlinear model for tumor growth by designing an efficient numerical scheme for its approximation and by showing that this scheme converges when the mesh is refined .the main ingredients of our approach and contribution to the existing theory include : 1 . the introduction of a suitable notion of solutions to the nonlinear system consisting of the transport equation and the brinkman regularization .2 . the construction of an approximating procedure which relies on an artificial vanishing viscosity approximation and the establishment of the suitable compactness in order to pass into the limit and to conclude convergence to the original system ( cf .section [ s3 ] , lemma 3.7 ) .3 . the design of an efficient numerical scheme for the numerical approximation of the nonlinear system - .the proof of the convergence of the numerical scheme . in the center of the analysis lies the proof of the strong convergence of the cell densities .this is achieved by establishing the weak continuity of the _ effective viscous pressure _ in the spirit of lions ( cf . section [ s4 ] , lemma 4.8 ) .the design of numerical experiments in order to establish that the finite difference scheme is effective in computing approximate solutions to the nonlinear system ( cf .section 5 ) . 
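the equations of the velocity closure were stripped in extraction ; the sketch below uses the form commonly employed for this kind of brinkman regularization in the tumor - growth literature ( a velocity potential satisfying -nu*Laplace(W) + W = p with u = -grad(W) and p = n**gamma ) , solved spectrally on a periodic square grid . the constants , the pressure exponent and the boundary conditions are assumptions and may differ from the paper's .

```python
import numpy as np

def brinkman_velocity(n, gamma, nu, h):
    """Sketch of the Brinkman closure: p = n**gamma, -nu*Laplace(W) + W = p,
    u = -grad(W), computed with FFTs on a periodic square grid of spacing h."""
    p = n**gamma
    N = n.shape[0]
    k = 2.0*np.pi*np.fft.fftfreq(N, d=h)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    W_hat = np.fft.fft2(p)/(1.0 + nu*(kx**2 + ky**2))
    W = np.real(np.fft.ifft2(W_hat))
    # centered differences for the velocity components u = -W_x, v = -W_y
    u = -(np.roll(W, -1, axis=0) - np.roll(W, 1, axis=0))/(2.0*h)
    v = -(np.roll(W, -1, axis=1) - np.roll(W, 1, axis=1))/(2.0*h)
    return p, W, u, v

N, h = 128, 1.0/128
x = (np.arange(N) + 0.5)*h
X, Y = np.meshgrid(x, x, indexing="ij")
n0 = np.exp(-200.0*((X - 0.5)**2 + (Y - 0.5)**2))      # small initial tumor blob
p, W, u, v = brinkman_velocity(n0, gamma=2.0, nu=1e-2, h=h)
print(W.max(), np.abs(u).max())    # potential peaks at the blob, velocity points outward
```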
for relevant results on the analysis and the numerical approximation of a two - phase flow model in porous media we refer the reader to . related results on the numerical approximation of compressible fluids , employing the weak compactness tools developed by lions in the discrete setting , have been established by karper and by gallout _ et al . _ . relevant work on the mathematical analysis of mechanical models of hele - shaw type has been presented by perthame _ et al . _ ; the analysis in establishes the existence of traveling wave solutions of the hele - shaw model of tumor growth with nutrient and presents numerical observations in two space dimensions . the present article is , to our knowledge , the first presenting rigorous analytical results on the global existence of general weak solutions to hele - shaw - type systems . a different approach , yielding the global existence of weak solutions to a nonlinear model for tumor growth in a general moving domain without any symmetry assumption and for finite large initial data , is presented in the work by donatelli and trivisa ; the approach introduced there relies on the _ penalization _ of the boundary behavior , diffusion and viscosity . in contrast to the present nonlinear system , however , the transport equation for the evolution of cancerous cells in has a source term which is linear with respect to the cell density . relevant results on nonlinear models for tumor growth governed by darcy s law for the evolution of the velocity field are presented by zhao , based on the framework introduced by friedman . the paper is organized as follows : section [ s1 ] presents the motivation and modeling and introduces the necessary preliminary material . section [ s2 ] provides a weak formulation of the problem and states the main result . section [ s3 ] is devoted to the global existence of solutions via a vanishing viscosity approximation . in section [ s4 ] we present an efficient finite difference scheme for the approximation of the weak solution to system on rectangular domains , and section [ s5 ] is devoted to numerical experiments .
a discretized aubin - lions lemma and some technical lemmas are presented in appendices a and b respectively .for , , we will denote by and the gradient and divergence in the spatial direction in .[ d2.1 ] let a bounded domain in , , which is either rectangular or has a smooth boundary and a finite time horizon .we say that is a weak solution of problem - supplemented with initial data satisfying ( [ ic ] ) provided that the following hold : represents a weak solution of - on , i.e. , for any test function \times{\mathbb{r}}^d ) , t>0 ] , and almost everywhere .all quantities in are required to be integrable , and in particular , ;h^{2}({\omega})). ] . using a calderon - zygmund inequality ( e.g. ( * ? ? ?* thm . 9.11 . ) ) , we obtain ; w^{2,q}({\omega})) ] and ; w^{-1,2}({\omega})) ] . the second term on the left hand side is contained in ;w^{-1,2}({\omega})) ] and ; w^{-1,2}({\omega})) ] by the sobolev embedding ( ) .the preceeding lemma implies that the time derivative of the approximation of the pressure where is uniformly bounded in \times { \omega}) ] .hence where ;h^1({\omega})) ] , solves ( see ( * ? ? ?6.1 ) for a proof of the second statement ) .hence ;w^{1,r}({\omega})) ] .using aubin - lions lemma for and , we obtain strong convergence of a subsequence in \times{\omega}) ] .moreover , from the estimates in lemma [ lem : we ] we obtain that \times{\omega})\cap l^{\infty}([0,t];w^{2,q}({\omega})) ] and ;h^1({\omega})) ] satisfy in the sense of distributions .then for all continuously differentiable functions , {\operatorname{div}}{{\bm{u}}},\ ] ] in the sense of distributions .we let be a smooth , radially symmetric mollifier , i.e. and , with and denote for , .then we choose as a test function in , with is compactly supported in where includes all the points in which have distance and do a change of variables : integrating in , this becomes we define and and choose as a test function for a smooth compactly supported in ( which is possible since is smooth and bounded thanks to the convolution . ) .then we can rewrite the last identity using chain rule as {\operatorname{div}}{{\bm{u } } } + b'({n}_\delta ) r_\delta \right)\phi \, dxdt}.\end{gathered}\ ] ] where . by (* lemma 2.3 ) , we have that in and thanks to the properties of the convolution that almost everywhere as well as a.e . when .thus we obtain that in the limit , satisfies {\operatorname{div}}{{\bm{u } } } \right)\phi \ , dxdt}.\ ] ] which is exactly in the sense of distributions . applying lemma [ lem : renormalized ] for the weak limit in with , we obtain that satisfies for any test functions . on the other hand , from for we obtain after integrating in space and time passing to the limit in this inequality , we have where denotes the weak limit of and and are the weak limits of and respectively .letting in this inequality , we obtain , thanks to the boundedness of the integrand on the right hand side , on the other hand , since is convex , we have and hence .we now choose smooth test functions approximating }(t) ] , in inequality and then pass to the limit in the approximation to obtain the inequality subtracting from , we have now using the explicit expression of , , the first term on the right hand side can be estimated as follows : where we have used ( * ? ? 
?* lemma 3.35 ) , which implies , for the first inequality .to estimate the second term on the right hand side , we use that is bounded thanks to lemma [ lem : we ] and that by the convexity of .hence for the last term , we use the following lemma , [ lem : effve ] the weak limits of the sequences satisfy for smooth functions , where , , are the weak limits of , and respectively .applying this lemma to the second term in with , we can estimate it by using that ( cf .thus , hence grnwall s inequality implies by convexity of the function we also have almost everywhere and so almost everywhere in .therefore we conclude that the functions converge strongly to almost everywhere and in particular also which means that the limit is a weak solution of the equations .we multiply the equation for by and integrate over , passing to the limit , we obtain on the other hand , using the smooth function as a test function in the weak formulation of the limit equation and passing to the limit , we obtain combining the last identity with , we obtain .we consider the problem in two space dimensions in a rectangular domain , for simplicity we use ^ 2 ] with cell midpoints .in addition , we denote , , where for some final time . the approximation of a function at grid point and time will be denoted .we also introduce the finite differences , and define the discrete laplacian , divergence and gradient operators based on these , for ease of notation , we also let and denote the discrete velocities in the transport equation , specifically , given , we let given at time step , we define the quantities at the next time step by [ eq : expscheme1 ] where and the fluxes , are defined by we use homogeneous neumann or periodic boundary conditions for both variables : the initial condition we approximate taking averages over the cells , in the following , we will prove estimates on the discrete quantities obtained using the scheme . we therefore define the piecewise constant functions \times{\omega},\\ \ ] ] where .we first prove that stays nonnegative and uniformly bounded from above .[ lem : linfn_h ] if uniformly in and the timestep satisfies the cfl condition ( where ) , then for any , the functions are uniformly ( in ) bounded and nonnegative , specifically , defining , we have for all , the proof goes by induction on the timestep .clearly , by the assumptions , we have . for the induction stepwe therefore assume that this holds for timestep and show that it implies the nonnegativity and boundedness at timestep .we first show that the are bounded in terms of the .to do so , let us assume it has a local maximum in a cell , for some . 
then ( if or , then because of the neumann boundary conditions , the forward / backward difference in direction of the boundary is zero and thus the previous inequality is true as well ) .hence therefore , similarly , at a local minimum of , we have and hence which implies thus , now we rewrite the scheme as where \\ \alpha_{i , j}^{(2),m}&= { \delta t}\,{{{\bm{g}}}}(p_{i , j}^m)+\frac{{\delta t}}{h}\left[u_{i+1/2,j}^m - u_{i-1/2,j}^m+v_{i , j+1/2}^m - v_{i , j-1/2}^m\right]\\ \beta_{i , j}^m&=\frac{{\delta t}}{2h } \left ( u_{i+1/2,j}^m+| u_{i+1/2,j}^m|\right)\\ \zeta_{i , j}^m&=\frac{{\delta t}}{2h}\left(|u_{i-1/2,j}^m|-u_{i-1/2,j}^m\right)\\ \eta_{i , j}^m&=\frac{{\delta t}}{2h } \left ( v_{i , j+1/2}^m+| v_{i , j+1/2}^m|\right)\\ \theta_{i , j}^m&=\frac{{\delta t}}{2h}\left(|v_{i , j-1/2}^m|-v_{i , j-1/2}^m\right)\\ \end{split}\end{aligned}\ ] ] we note that , and that under the cfl - condition , also . hence , assuming that for all , we have we proceed to showing the boundedness of .thanks to the cfl - condition , we have moreover , . using the induction hypothesis that for all and the nonnegativity of which we have just proved , we can estimate : we can rewrite and bound using the equation for , , where we have used for the first inequality , that for some intermediate value ] uniformly in and therefore also uniformly bounded in any other -space , which implies together with the above identity , that \times { \omega}) ] for .to prove strong convergence of the approximating sequence , it will be useful to derive entropy inequalities for .to this end , the following lemma will be useful : [ lem : fn ] let be a smooth convex function and assume that satisfies the cfl - condition denote and a piecewise constant interpolation of it as in .then satisfies the following identity + \frac{h}{4 } d^-_2 \left[f'({n}_{i , j}^m)|v_{i , j+1/2}^m| d^+_2 { n}_{i , j}^m\right]\\ \label{eq : t3 } & \hphantom{=}+\frac{h}{4 } d^+_1 \left[f'({n}_{i , j}^m)|u_{i-1/2,j}^m| d^-_1 { n}_{i , j}^m\right ] + \frac{h}{4 } d^+_2 \left[f'({n}_{i , j}^m)|v_{i , j-1/2}^m| d^-_2 { n}_{i , j}^m\right]\\ \label{eq : t4 } & \hphantom{=}-\frac{h^2}{4}d^-_1\left [ f''(\widetilde{n}_{i+1/2,j}^m ) u_{i+1/2,j}^m |d^+_1 { n}_{i , j}^m|^2\right]\\\label{eq : t5 } & \hphantom{=}-\frac{h^2}{4}d^-_2\left [ f''(\widetilde{n}_{i , j+1/2}^m ) v_{i , j+1/2}^m |d^+_2 { n}_{i , j}^m|^2\right]\\\label{eq : t6 } & \hphantom{= } -\frac{h}{4}f''(\widehat{{n}}_{i-1/2,j}^m ) |u_{i-1/2,j}^m||d^-_1 { n}_{i , j}^m|^2-\frac{h}{4}f''(\widehat{{n}}_{i , j-1/2}^m ) |v_{i , j-1/2}^m||d^-_2 { n}_{i , j}^m|^2\\ \label{eq : t7 } & \hphantom{= } -\frac{h}{4}f''(\widehat{{n}}_{i+1/2,j}^m ) |u_{i+1/2,j}^m||d^+_1 { n}_{i , j}^m|^2-\frac{h}{4}f''(\widehat{{n}}_{i , j+1/2}^m ) |v_{i , j+1/2}^m||d^+_2 { n}_{i , j}^m|^2\\ \label{eq : t8 } & \hphantom{=}+(f'({n}_{i , j}^m){n}_{i , j}^m - f_{i , j}^m ) \delta_h { w}_{i , j}^m+f'({n}_{i , j}^m ) { n}_{i , j}^m { { { \bm{g}}}}(p_{i , j}^m)\\ \label{eq : t9 } & \hphantom{=}+\frac{{\delta t}}{2 } f''(\widetilde{{n}}^{m+1/2}_{i , j } ) |d_t^+ { n}_{i , j}^m|^2 , \ ] ] where ] and ] and ; w^{-1,q}({\omega})) ] , we can write where and are intermediate values . 
hence, multiplying equation by , it becomes -\frac{h}{4}f''(\widehat{{n}}_{i-1/2,j}^m ) |u_{i-1/2,j}^m||d^-_1 { n}_{i , j}^m|^2\\ & \hphantom{=}+\frac{h}{4 } d^-_2 \left[f'({n}_{i , j}^m)|v_{i , j+1/2}^m| d^+_2 { n}_{i , j}^m\right ] -\frac{h}{4}f''(\widehat{{n}}_{i , j-1/2}^m ) |v_{i , j-1/2}^m||d^-_2 { n}_{i , j}^m|^2\\ & \hphantom{=}+\frac{h}{4 } d^+_1 \left[f'({n}_{i , j}^m)|u_{i-1/2,j}^m| d^-_1 { n}_{i , j}^m\right ] -\frac{h}{4}f''(\widehat{{n}}_{i+1/2,j}^m ) |u_{i+1/2,j}^m||d^+_1 { n}_{i , j}^m|^2\\ & \hphantom{=}+\frac{h}{4 } d^+_2 \left[f'({n}_{i , j}^m)|v_{i , j-1/2}^m| d^-_2 { n}_{i , j}^m\right ] -\frac{h}{4}f''(\widehat{{n}}_{i , j+1/2}^m ) |v_{i , j+1/2}^m||d^+_2 { n}_{i , j}^m|^2\\ & \hphantom{=}+f'({n}_{i , j}^m){n}_{i , j}^m \delta_h { w}_{i , j}^m+f'({n}_{i , j}^m ) { n}_{i , j}^m { { { \bm{g}}}}(p_{i , j}^m)\\ & = \frac{{\delta t}}{2 } f''(\widetilde{{n}}^{m+1/2}_{i , j } ) |d_t^+ { n}_{i , j}^m|^2\\ & \hphantom{=}+ \frac{1}{2 } d^-_1\left ( u_{i+1/2,j}^m \left(f_{i , j}^m+f_{i+1,j}^m\right)\right ) + \frac{1}{2 } d_2 ^ -\left ( v_{i , j+1/2}^m \left(f_{i , j}^m + f_{i , j+1}^m\right)\right)\\ & \hphantom{=}+\frac{h}{4 } d^-_1 \left[f'({n}_{i , j}^m)|u_{i+1/2,j}^m| d^+_1 { n}_{i , j}^m\right ] + \frac{h}{4 } d^-_2 \left[f'({n}_{i , j}^m)|v_{i , j+1/2}^m| d^+_2 { n}_{i , j}^m\right]\\ & \hphantom{=}+\frac{h}{4 } d^+_1 \left[f'({n}_{i , j}^m)|u_{i-1/2,j}^m| d^-_1 { n}_{i , j}^m\right ] + \frac{h}{4 } d^+_2 \left[f'({n}_{i , j}^m)|v_{i , j-1/2}^m| d^-_2 { n}_{i , j}^m\right]\\ & \hphantom{=}-\frac{h^2}{4}d^-_1\left [ f''(\widetilde{n}_{i+1/2,j}^m ) u_{i+1/2,j}^m |d^+_1 { n}_{i , j}^m|^2\right]\\&\hphantom{=}-\frac{h^2}{4}d^-_2\left [ f''(\widetilde{n}_{i , j+1/2}^m ) v_{i , j+1/2}^m |d^+_2 { n}_{i , j}^m|^2\right]\\&\hphantom{= } -\frac{h}{4}f''(\widehat{{n}}_{i-1/2,j}^m ) |u_{i-1/2,j}^m||d^-_1 { n}_{i , j}^m|^2-\frac{h}{4}f''(\widehat{{n}}_{i , j-1/2}^m ) |v_{i , j-1/2}^m||d^-_2 { n}_{i , j}^m|^2\\ & \hphantom{= } -\frac{h}{4}f''(\widehat{{n}}_{i+1/2,j}^m ) |u_{i+1/2,j}^m||d^+_1 { n}_{i , j}^m|^2-\frac{h}{4}f''(\widehat{{n}}_{i , j+1/2}^m ) |v_{i , j+1/2}^m||d^+_2 { n}_{i , j}^m|^2\\ & \hphantom{=}+(f'({n}_{i , j}^m){n}_{i , j}^m - f_{i , j}^m ) \delta_h { w}_{i , j}^m+f'({n}_{i , j}^m ) { n}_{i , j}^m { { { \bm{g}}}}(p_{i , j}^m).\end{aligned}\ ] ] which implies . 
in particular , for , this becomes -\frac{h^2}{2}d^-_2\left [ v_{i , j+1/2}^m |d^+_2 { n}_{i , j}^m|^2\right]\\&\hphantom{=}+\frac{h}{2 } d^-_1 \left[{n}_{i , j}^m|u_{i+1/2,j}^m| d^+_1 { n}_{i , j}^m\right]+\frac{h}{2 } d^-_2 \left[{n}_{i , j}^m|v_{i , j+1/2}^m| d^+_2 { n}_{i , j}^m\right ] \\ & \hphantom{=}+\frac{h}{2 } d^+_1 \left[{n}_{i , j}^m|u_{i-1/2,j}^m| d^-_1 { n}_{i , j}^m\right ] + \frac{h}{2 } d^+_2 \left[{n}_{i , j}^m|v_{i , j-1/2}^m| d^-_2 { n}_{i , j}^m\right]\\ & \hphantom{=}-\frac{h}{2 } |u_{i-1/2,j}^m||d^-_1 { n}_{i , j}^m|^2-\frac{h}{2}|v_{i , j-1/2}^m||d^-_2 { n}_{i , j}^m|^2\\ & \hphantom{= } -\frac{h}{2 } |u_{i+1/2,j}^m||d^+_1 { n}_{i , j}^m|^2-\frac{h}{2}|v_{i , j+1/2}^m||d^+_2 { n}_{i , j}^m|^2\\ & \hphantom{=}+ f_{i , j}^m \delta_h { w}_{i , j}^m+2 f_{i , j}^m { { { \bm{g}}}}(p_{i , j}^m ) , \end{split}\end{aligned}\ ] ] we estimate the first term on the right hand side of the inequality inserting , + \frac{h}{2 } d^-_2 \left[|v_{i , j+1/2}^m| d^+_2 { n}_{i , j}^m\right]\biggr|^2 \\ & \hphantom{\leq } + 2\bigl|{n}_{i , j}^m \delta_h { w}_{i , j}^m+{n}_{i , j}^m { { { \bm{g}}}}(p_{i , j}^m)\bigr|^2\\ & \leq 4\biggl|\frac{1}{2 } u_{i+1/2,j}^m d^+_1 { n}_{i , j}^m+\frac{1}{2 } u_{i-1/2,j}^m d^-_1 { n}_{i , j}^m+\frac{h}{2 } d^-_1 \left[|u_{i+1/2,j}^m| d^+_1 { n}_{i , j}^m\right]\biggr|^2\\ & \hphantom{\leq } + 4\biggl|\frac{1}{2 } v_{i , j+1/2}^m d^+_2 { n}_{i , j}^m+\frac{1}{2 } v_{i , j-1/2}^m d^-_2 { n}_{i , j}^m+\frac{h}{2 } d^-_2 \left[|v_{i , j+1/2}^m| d^+_2 { n}_{i , j}^m\right]\biggr|^2\\ & \hphantom{\leq } + 2\bigl|{n}_{i , j}^m \delta_h { w}_{i , j}^m+{n}_{i , j}^m { { { \bm{g}}}}(p_{i , j}^m)\bigr|^2\\ & \leq 8 \big| u_{i+1/2,j}^m d^+_1 { n}_{i , j}^m\big|^2 + 8\big| u_{i-1/2,j}^m d^-_1 { n}_{i , j}^m\big|^2 + 8 \big| v_{i , j+1/2}^m d^+_2 { n}_{i , j}^m\big|^2\\ & \hphantom{\leq } + 8\big| u_{i , j-1/2}^m d^-_2 { n}_{i , j}^m\big|^2 + 2\bigl|{n}_{i , j}^m \delta_h { w}_{i , j}^m+{n}_{i , j}^m { { { \bm{g}}}}(p_{i , j}^m)\bigr|^2\\ & \leq 8\max_{i , j } |{\nabla}_h { w}_{i , j}^m|\bigl\ { | u_{i+1/2,j}^m|\ , |d^+_1 { n}_{i , j}^m\big|^2+| u_{i-1/2,j}^m|\ , |d^-_1 { n}_{i , j}^m\big|^2\\ & \hphantom{\leq 8\max_{i , j } |{\nabla}_h { w}_{i , j}^m|\bigl\{}+| v_{i , j+1/2}^m|\ , |d^+_2 { n}_{i , j}^m\big|^2+| v_{i , j-1/2}^m|\ , |d^-_2 { n}_{i , j}^m\big|^2\bigr\}\\ & \hphantom{\leq } + 2\bigl|{n}_{i , j}^m \delta_h { w}_{i , j}^m+{n}_{i , j}^m { { { \bm{g}}}}(p_{i , j}^m)\bigr|^2\end{aligned}\ ] ] thus if we assume that satisfies the cfl - condition , we have now summing over all , multiplying with and using the latter inequality , we obtain where is a constant independent of , thanks to the -bounds on and obtained in lemma [ lem : linfn_h ] and [ lem : wh ] .this implies that and therefore using hlder s inequality and the uniform -bounds on , . using summation by parts, we realize that the other terms , are in ;w^{-1,q}({\omega})) ] and ; w^{-1,q}({\omega})) ] for and by standard results , , ; l^2({\omega})) ] .the estimates from lemma [ lem : wh ] imply that the velocity ;l^{2^*}({\omega})) ] . using the `` discretized '' aubin - lions lemma [ lem : a - l - lem ] for and , we obtain strong convergence of a subsequence in \times{\omega}) ] .moreover , from the estimates in lemma [ lem : wh ] we obtain that \times{\omega})\cap l^{\infty}([0,t];h^2({\omega})) ] , where is the weak limit of . 
to conclude that the limit is a weak solution of, we proceed as in the previous section [ s4 ] and show that in fact converges strongly : first , we recall that the limit satisfies . on the other hand , from ,we obtain ( under the cfl - condtion ) -\frac{h^2}{2}d^-_2\left [ v_{i , j+1/2}^m |d^+_2 { n}_{i , j}^m|^2\right]\\ & \hphantom{=}+\frac{h}{2 } d^-_1 \left[{n}_{i , j}^m|u_{i+1/2,j}| d^+_1 { n}_{i , j}^m\right]+\frac{h}{2 } d^-_2 \left[{n}_{i , j}^m|v_{i , j+1/2}| d^+_2 { n}_{i , j}^m\right ] \\ & \hphantom{=}+\frac{h}{2 } d^+_1 \left[{n}_{i , j}^m|u_{i-1/2,j}| d^-_1 { n}_{i , j}^m\right ] + \frac{h}{2 } d^+_2 \left[{n}_{i , j}^m|v_{i , j-1/2}| d^-_2 { n}_{i , j}^m\right]\\ & \hphantom{=}+|{n}_{i , j}^m|^2 \delta_h { w}_{i , j}^m+2|{n}_{i , j}^m|^2 { { { \bm{g}}}}(p_{i , j}^m ) , \end{split}\end{aligned}\ ] ] considering this inequality in terms of the piecewise constant functions , and , multiplying it with a nonnegative -test function , integrating and then passing to the limit , we obtain ( using the bounds , the weak convergence of and and the strong convergence of and ) , where denotes the weak limit of and and are the weak limits of and respectively . adding and , we have we now choose smooth test functions approximating }(t) ] , in this inequality and then pass to the limit to obtain by convexity of , we have , on the other hand , the discrete -entropy inequality , , implies which gives , passing to the limit , letting , the second term on the right hand side vanishes ( as the integrand is bounded ) , and we obtain we deduce that almost everywhere and that therefore the second term on the left hand side of is zero .we have already estimated the first two terms on the right hand side of in and . to bound the other term , we use a discretized version of lemma [ lem : effve ] : [ lem : effv ] the weak limits of the sequences satisfy for any smooth function , where , , are the weak limits of , and respectively . applying this lemma to the last term in with , we can estimate it by using again that by exercise 3.37 in , .thus , grnwall s inequality thus implies by convexity of the function we also have almost everywhere and hence almost everywhere in .therefore we conclude that the functions converge strongly to almost everywhere , thus also and so the limit is a weak solution of the equations .we multiply the equation for by and integrate it over the spatial domain , passing to the limit in the last equation , we obtain on the other hand , using (x) ] and with pressure law and and . strictly speaking ,these are not homogeneous neumann boundary conditions , but since the gradient of near the boundary is very small , this works well in practice . [ cols= " < , > " , ] the approximations computed at times are shown in figure [ fig : eg2 ] .the interface between the area with maximum cell density and zero cell density seems to be sharper than in the previous example , this appears to be caused by the pressure law with the higher exponent .further tests with higher and lower exponents confirmed that assertion .[ lem : a - l - lem]let be a piecewise constant function defined on a grid on , a bounded rectangular domain , satisfying for some , uniformly with respect to and where is a first order linear finite difference operator , and are piecewise constant functions , satisfying uniformly in , for some .then in . 
denote a piecewise linear interpolation of in space piecewise constant in time and similarly , let , and piecewise linear interpolations of , and respectively in space and piecewise constant in time such that by ladyshenskaya s norm equivalences ( * ? ? ?* ff ) , we have where the right hand sides are bounded by assumptions and . since for , we have that ;w^{-1,s}({\omega})) ] .the piecewise constant function can be written as we also need the following auxilary result : [ lem : truncl2 ] let solve the difference equation under the assumptions of lemma [ lem : elll1 ] .then for some constant independent of .given , we multiply equation by and integrate over the domain . after changing variables in the integrals ,we obtain the right hand side can be bounded by using hlder s inequality .the left hand side , we can rewrite and estimate as follows \right)\right)\cdot { \nabla}_h s_k(u_h ) + c_h \left(u_h - s_k(u_h)\right ) s_k(u_h)\ , dx\\ & \quad \geq \eta\|{\nabla}_h s_k(u_h)\|^2_{l^2({\omega})}+\nu \|s_k(u_h)\|^2_{l^2({\omega } ) } \\ & \quad\quad+ \int_{{\omega } } \left(a_h \left({\nabla}_h \left[u_h - s_k(u_h)\right]\right)\right)\cdot { \nabla}_h s_k(u_h ) + c_h \left(u_h - s_k(u_h)\right ) s_k(u_h)\ , dx . \end{split}\ ]] is either zero or has the same sign as .therefore and in order to prove that the other term is positive as well , we will show that the proof of this fact consists of boring case distinctions and is exactly analoguous for , therefore we will do it only for and omit writing the tuple index .then we have the potential reader is welcome to check that these are all the possible cases and that each of the terms on the right hand side is nonnegative .thus we have that which implies together with the estimate on the right hand side of first , we note that by the discrete gagliardo - nirenberg - sobolev inequality , ( * ? ? ?3.4 ) , where if and any number with if , and where is a constant depending on but not on . by lemma [ lem : truncl2 ] , we can bound the right hand side and obtain therefore now we define the set by we have and therefore , using , for ( which is if ) since the choice of was arbitrary .now denote where . informally speaking ,the cells in have a neighbor cell which is contained in .we have by .now let , and decompose hence on and the cells bordering the set , we have and therefore .hence we can estimate the size of the second set in the above inequality , where we have used chebyshev inequality for the last step .now we can estimate the size of the set using once more , choosing , we obtain if , we have and so for .for , since is an arbitrary finite positive number , we can achieve the same . using the embedding of the marcinkiewicz spaces ,, we obtain the claim of the lemma .the work of k.t . was supported in part by the national science foundation under the grant dms-1211519 .the work of f.w . was supported by the research council of norway , project 214495 liqcry .f.w . gratefully acknowledges the support by the center for scientific computation and mathematical modeling at the university of maryland where part of this research was performed during her visit in fall 2014 . o. a. ladyzhenskaya . .second english edition , revised and enlarged . translated from the russian by richard a. silverman and john chu. mathematics and its applications , vol .2 . gordon and breach , science publishers , new york - london - paris , 1969 .l . lions . , volume 3 of _ oxford lecture series in mathematics and its applications_. 
The Clarendon Press, Oxford University Press, New York, 1996. Incompressible models, Oxford Science Publications.
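As an illustration of the type of explicit upwind finite-difference update analyzed in Section [s4], the following minimal Python sketch advances a cell density by one time step on a uniform grid under a CFL-type restriction. It is only a sketch under stated assumptions, not the paper's scheme: the velocity is taken as a given cell-centered field rather than being obtained from the Brinkman equation, the staggering of the face velocities is simplified, and the pressure law and growth term `pressure` and `G` are placeholders.

```python
import numpy as np

# Illustrative sketch (not the paper's exact scheme): one explicit upwind step
# for a 2D transport equation  n_t + div(n u) = n G(p(n))  on the unit square
# with homogeneous Neumann boundary conditions.  Velocity, pressure law and
# growth term are placeholder assumptions chosen only to make the code run.

def upwind_step(n, u, v, dt, h, G=lambda p: 1.0 - p, pressure=lambda n: n ** 2):
    """Advance the cell density n by one explicit upwind time step."""
    npad = np.pad(n, 1, mode="edge")   # edge padding mimics Neumann conditions

    def flux(left, right, vel):
        # Upwind flux through a face: take the upstream cell value.
        return np.where(vel >= 0.0, vel * left, vel * right)

    Fe = flux(npad[1:-1, 1:-1], npad[1:-1, 2:], u)    # east faces
    Fw = flux(npad[1:-1, :-2], npad[1:-1, 1:-1], u)   # west faces
    Fn = flux(npad[1:-1, 1:-1], npad[2:, 1:-1], v)    # north faces
    Fs = flux(npad[:-2, 1:-1], npad[1:-1, 1:-1], v)   # south faces

    div = (Fe - Fw) / h + (Fn - Fs) / h
    return n + dt * (-div + n * G(pressure(n)))

def cfl_dt(u, v, h):
    # CFL-type restriction; the safety constant 1/4 is an assumption.
    vmax = max(np.abs(u).max(), np.abs(v).max(), 1e-12)
    return 0.25 * h / vmax
```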
Mechanical models for tumor growth have been used extensively in recent years for the analysis of medical observations and for the prediction of cancer evolution based on imaging analysis. This work deals with the numerical approximation of a mechanical model for tumor growth and the analysis of its dynamics. The system under investigation is given by a multi-phase flow model: the densities of the different cells are governed by a transport equation for the evolution of tumor cells, whereas the velocity field is given by a Brinkman regularization of the classical Darcy's law. An efficient finite difference scheme is proposed and shown to converge to a weak solution of the system. Our approach relies on convergence and compactness arguments in the spirit of Lions.
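The Brinkman regularization of Darcy's law makes the velocity the solution of an elliptic problem. The sketch below solves a screened-Poisson model problem, -nu * Laplacian(w) + w = f with homogeneous Neumann conditions and u = -grad(w), on a uniform grid with a sparse direct solve. The specific equation, boundary treatment, right-hand side and parameter values are illustrative assumptions and do not reproduce the paper's Brinkman system.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def neumann_laplacian_1d(m, h):
    # Second-difference matrix with reflecting (Neumann) boundary rows.
    main = -2.0 * np.ones(m)
    main[0] = main[-1] = -1.0
    off = np.ones(m - 1)
    return sp.diags([off, main, off], [-1, 0, 1]) / h ** 2

def solve_potential(f, h, nu=1e-2):
    # Solve -nu * Laplacian(w) + w = f on a square grid (model problem).
    m = f.shape[0]
    L1 = neumann_laplacian_1d(m, h)
    I = sp.identity(m)
    lap = sp.kron(I, L1) + sp.kron(L1, I)          # 2D five-point Laplacian
    A = (-nu) * lap + sp.identity(m * m)
    return spsolve(A.tocsr(), f.ravel()).reshape(m, m)

# Example source term: a smooth bump standing in for the pressure.
m, h = 64, 1.0 / 64
x = (np.arange(m) + 0.5) * h
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.exp(-50 * ((X - 0.5) ** 2 + (Y - 0.5) ** 2))
w = solve_potential(f, h)
ux, uy = np.gradient(-w, h)                        # illustrative velocity u = -grad(w)
```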
the jaynes - cummings model ( jcm ) , proposed in 1963 , constitutes an excellent theoretical approach to describe analytically the interaction of a two level atom with a single mode of a quantized radiation field .the field frequency may belong either to the optical domain or to the microwave one . in the first case the researchers use common atoms whereas in second case they use ( highly excited ) rydberg atoms .the issue was also extended to other systems , as ( i ) in nanocircuits operating in microwave domain , either through the substitution of the atom by a copper - pair box ( cpb ) and the field by a nanomechanical resonator in nanocavities ; ( ii ) or the cpb inside a chip ; ( iii ) substituting the atom by quantum a dot embedded in a photonic - crystal ad ; ( iv ) using spin in quantum - dot arrays , etc . in spite of its simplicitythe jcm gives exact solutions of the schrdinger equation in many examples that occur in such physical systems .the jcm has been employed in the study of various fundamental quantum aspects involving the matter - radiation . to give some examples we mention : collapse and revival of the atomic inversion ; the rabi frequency of oscillation for a given atomic transition acted upon by a light field ; nonclassical statistical distributions of light fields , antibunching effect ; squeezed states , and others .an alternative model that maintains various characteristics of the jcm and offers advantages in certain situations was proposed by buck - sukumar in 1981 , abbreviated as bsm .it is called intensity - dependent jcm , since it substitutes the jcm interaction by another interaction that includes the number operator in this way : with and in the previous expressions ( ) stands for annihilation ( creation ) operator , ( ) is lowering ( raising ) operator , ( ) is the number operator , and stands for the atom - field coupling .this model also leads to analytical solution of the schrdinger equation .it has been argued that its physical simulation in laboratory could be implemented via matrices of waveguides ; optical analogies of quantum systems realized in waveguide arrays have recently impacted the field of integrated optical structures . in particular , susy photonic lattices can be used to provide phase matching conditions between large number of modes allowing the pairing of isospectral crystals . in spite of its apparent theoretical naturethe bsm has attracted the attention of various researchers in the quantum optical community .shanta , sivakumar , ahmed , rlm , yang , cardimona , yang1,yang2,dukelsky , reeta , buzek , valverde , sayed , cordeiro . in 1992 p. shanta ,s. chaturvedi , and v. srinivasana ( scs - model ) proposed an extension of the intensity - dependent jcm .this model interpolates between the jcm and the bsm . in this approachthe authors assumed the modified hamiltonian , where is the number operator and the operators , are quons operators satisfying the the commutation relation ; is a c - number restricted to the interval .$ ] accordingly , quons would stand for particles intermediate between bosons ( ) and fermions ( ) .the authors then use specific connections between the operator and and and prove that the scs model interpolates between the bsm and jcm in the limits and respectively , with playing the role of the interpolating parameter . however , although being a creative approach , here we will not take it forward because we are restricting ourselves to photonic field , not to quons .according to ref . 
there are other nonlinear models in this context , but they treats the coupled system only approximately vad another type of intensity - dependent jcm , was proposed in 2002 by s. sivakumar , named here as sivakumar model ( sm ) .this model also interpolates between the jcm and the bsm via the following hamiltonian , where and stand respectively for annihilation and creation operators .the change from to aims to get a convenient deformed algebra for various theoretical applications , as in group theory , field theory , and others .as established in , for one has the heisenberg - weyl algebra generated by and for one finds the algebra . for _ _ _ _ all values of the algebra is closed , =2\hat{k}_{0},\text { } [ \hat{k}_{0},\hat{k}^{\dagger } ] = k\hat{k}^{\dagger } , \text { } [ \hat{k}_{0},\hat{k}]=-k\hat{k } , \label{asm2}\ ] ] with we note some resemblance between the hamiltonian in eq .( [ asm1 ] ) and that given by the bsm for as pointed out by the authors , the bsm is only reached when the mean photon number of the field satisfies the condition leading the term to an approximate form of bsm a somewhat ` similar ' model , also intensity - dependent , was proposed in 2014 by rodrgues - lara , named here as rodrgues - lara model ( rlm ) , constituting a generalization of bsm since it substitutes the operator of the bsm by the operator .the rlm recovers the bsm in the limit but it includes the counter - rotating terms , due to the form of the interaction hamiltonian , where the decomposition explains the appearance of counter - rotating terms and as well known , separately they do not conserve energy . also , due to the inclusion of the counter rotating terms , this model puts a restriction on the average number of photons . in this reportwe present a generalized hamiltonian that provides a continuous and exact interpolation between various hamiltonian models , including the jcm , bsm , sm , and rlm .the plan of the paper is as follows . in sec .[ sec - model ] we briefly discuss this class of hamiltonian , showing its interpolating property . in sec .[ basic ] we obtain the solution of the schrdinger equation in this extended scenario . 
in sec .sec - apli we give some applications , in the sec .[ c1 ] we calculate mandel parameter .[ cc ] contains comments and the conclusion .the hamiltonian described by the jcm , widely referred to as the jcm in the rotating wave approximation , is given in the form , stands for the field frequency , is the atomic frequency , and stands for atom - field coupling .now , our mentioned class of interpolating hamiltonians is obtained substituting by given by where and for and here it is easily seen that the hamiltonian in eq.([a1a ] ) interpolates between the various interaction models of hamiltonians , as follows : * the jaynes - cummings model ( jcm ) for and * the buck - sukumar model ( bsm ) for and * the sivakumar model ( sm ) for and * the rodrgues - lara model ( rlm ) for and some basic properties involving these atomic and field operators are , & = & \pm 2\hat{\sigma}_{\pm } , \text{\ } [ \hat{\sigma}_{+},\hat{\sigma}_{-}]=\hat{\sigma}_{z } , \label{cc1 } \\ \lbrack \hat{a},\hat{a}^{\dagger } ] & = & 1,\text { } [ \hat{a},\hat{n}]=\hat{a},\text { } [ \hat{a}^{\dagger } , \hat{n}]=-\hat{a}^{\dagger } , \end{aligned}\ ] ] =\hat{r},\text { } [ \hat{r}^{\dagger } , \hat{n}]=-\hat{r}^{\dagger } , \text { } [ \hat{r},\hat{r}^{\dagger } ] = 2\hat{r}_{0}=\delta + \xi + 2\xi \hat{n } , \label{cc3}\ ] ] with thus we have a closed algebra in this scenario , =2\hat{r}_{0},\text { } [ \hat{r}_{0},\hat{r}^{\dagger } ] = \xi \hat{r}^{\dagger } , \text { } [ \hat{r}_{0},\hat{r}]=-\xi \hat{r}.\ ] ] the eq.([a1a ] ) can be rewritten in the form , where , with next we can use the eqs.(cc1 ) and ( [ cc3 ] ) to show that and are constant of motion , namely , =[\mathscr{h},\hat{\mathscr{h}}_{i}]=[\hat{\mathscr{h}}_{a},\hat{\mathscr{h}}_{i}]=0.\]]all essential dynamic properties contained in a state of the atom - field system described by any of the previous interpolating hamiltonians , can also be described by the interpolating hamiltonian proposed here , , considering that contributes only for general phase factors , usually not relevant .let us consider a simple example assuming the system in resonance , , to analyze the time evolution of the coupled atom - field system we solve the time dependent schrdinger equation using the hamiltonian in eq.(ac ) , we can write the formal solution of eq .( [ a3a ] ) as , where , is the ( unitary ) evolution operator .next , using the expression decomposing the above sum in their even and odd terms , plus the use of the two following relations and , , we get the evolution operator in a convenient form for systems involving a two - level atom, and , given above. we will assume the entire system initially decoupled , the atom in its ground state and the field in arbitrary state so , the wavefunction describing the atom - field system for arbitrary times is obtained from equation with given in eq .( [ alfa ] ) .after an algebraic procedure we find , paradoxical evolution of average number of photons[sec - apli ] ------------------------------------------------------------ the paradox concerned with the time evolution of the average number of photons , discussed by luis , used the jcm . here we treat this paradox for the various interpolating hamiltonians mentioned above .this is obtained directly from our hamiltonian by an appropriate choice of the pair and .the mean number of photons of the field is calculated as, _ _ __ is the density operator . 
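To make the quantities discussed here concrete, the following is a minimal numerical sketch (truncated Fock space, plain NumPy/SciPy) of resonant dynamics with a deformed coupling operator. The operator ordering K = a*sqrt(delta + xi*n), the truncation N, the coupling g and the coherent amplitude alpha are assumptions chosen for illustration; the choice (delta, xi) = (1, 0) reproduces the JCM coupling and (0, 1) an intensity-dependent (BSM-type) one.

```python
import numpy as np
from scipy.linalg import expm

N, g_cpl, alpha = 60, 1.0, 2.0
delta, xi = 1.0, 0.0          # (1,0): JCM; (0,1): BSM-type coupling

a = np.diag(np.sqrt(np.arange(1, N)), 1)                 # annihilation operator
D = np.diag(np.sqrt(delta + xi * np.arange(N)))          # sqrt(delta + xi*n)
K = a @ D                                                 # deformed lowering operator
n_op = np.diag(np.arange(N, dtype=float))

# Closed-algebra check away from the truncation edge: [K, K^dag] = delta + xi + 2*xi*n.
comm = K @ K.T - K.T @ K
assert np.allclose(np.diag(comm)[: N - 1], delta + xi + 2 * xi * np.arange(N - 1))

sm = np.array([[0.0, 0.0], [1.0, 0.0]])                  # |g><e| in the basis (e, g)
H = g_cpl * (np.kron(sm.T, K) + np.kron(sm, K.T))        # resonant interaction term

# Initial state |g> (x) |alpha>; coherent amplitudes built recursively.
c = np.zeros(N)
c[0] = np.exp(-abs(alpha) ** 2 / 2)
for k in range(1, N):
    c[k] = c[k - 1] * alpha / np.sqrt(k)
psi0 = np.kron(np.array([0.0, 1.0]), c)

Pg = np.kron(np.diag([0.0, 1.0]), np.eye(N))             # projector onto the ground state
Ntot = np.kron(np.eye(2), n_op)

for t in np.linspace(0.0, 6.0, 7):
    psi = expm(-1j * H * t) @ psi0
    n_mean = np.real(psi.conj() @ Ntot @ psi)
    pg = np.real(psi.conj() @ Pg @ psi)
    n_g = np.real(psi.conj() @ Pg @ Ntot @ psi) / pg     # <n> conditioned on the atom in g
    print(f"t={t:4.1f}  <n>={n_mean:6.3f}  <n>_g={n_g:6.3f}")
```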
in this sectionwe study the dynamic behavior of the average number of photons , and where , \label{ng}\ ] ] , \label{ne}\ ] ] where regardless of the types of interpolations , i.e. , jcm bsm sm rlm , and eventually others obtained by varying the pair and _ _ , _ _ the essential features of the paradox discussed in ref . remains for all these interpolation models .now , for small times , the following relation is valid , irrespective of the interpolating model . in the plots of fig .( [ figura_1 ] ) we have assumed the initial field in a coherent state , assuming the average number of photons here we have used mathematical expressions more general than those in eqs .( [ ng ] , [ ne ] and [ n ] ) , hence the following plots are not restricted to small times__. _ _ we observe in fig .( figura_1 ( a ) ) the occurrence of the mentioned paradox , which starts immediately and remains up to for the jcm ; in fig . ( [ figura_1 ] ( b ) ) for the bsm ; in fig .( figura_1 ( c ) ) for the sm ; and in fig .( figura_1 ( d ) ) for the rlm .( solid curve ) ( dashed curve ) and ( dotted curve ) , for an initial coherent state with a ) jcm , b ) bsm , c ) sm and d ) rlm.,width=302,height=264 ] hence , these results show that the paradox raised by _ _luis luis using the jcm happens no matter what kind of hamiltonian model used within the class considered here .a quantized photon field with sub - poissonian statistics is characterized when the variance is smaller than the average number of photons , namely : the opposite chacterizes a super - poissonian photon field and if the photon field exhibits poissonian statistics , characterizing all coherent states .the mandel s parameter tells us what kind of statistics the field displays ; it is given by the relation, so , when the field is super - poissonian ; when it is sub - poissonian ; and poissonian for . in fig .( [ figura_2 ] ) , we represent our hamiltonian in eq .( [ cv2 ] ) interpolating between the four hamiltonians : jcm , bsm , sm and rlm ., for an initial coherent state with a ) jcm , b ) bsm , c ) sm and d ) rlm.,width=302,height=264 ] fig.([figura_3 ] ) exhibits various plots of the mandel parameters in these different models of hamiltonian .the various plots show that , by conveniently adjusting the pair of parameters and in the present model hamiltonian we can interpolate continuously from the jcm to the bsm , the sm , and the rlm . in these interpolations we have observed in which way the mandel parameter modifies during the time evolutions , as shown in fig.([figura_3 ] ) ,plots ( a ) , ( b ) , and ( c ) ; also , this interpolation occurs in a softly way , from the jcm to bsm .the same happens for the interpolation from the jcm to the sm , shown in fig.([figura_3 ] ) , plots ( d ) , ( e ) , and ( f ) ; and also from the jcm to the rlm , fig.([figura_3 ] ) , plots ( g ) , ( h ) , and ( i ) . 
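A compact numerical illustration of the Mandel parameter used above, Q = (<n^2> - <n>^2 - <n>)/<n>, follows. The two test distributions are illustrative, not the states plotted in the figures.

```python
import numpy as np
from math import exp, factorial

# Q > 0: super-Poissonian, Q = 0: Poissonian (coherent state), Q < 0: sub-Poissonian.
def mandel_q(p):
    n = np.arange(len(p))
    mean = np.sum(n * p)
    var = np.sum(n ** 2 * p) - mean ** 2
    return (var - mean) / mean

# Coherent state with |alpha|^2 = 4 (Poissonian photon statistics) -> Q ~ 0.
pn_coh = np.array([exp(-4.0) * 4.0 ** k / factorial(k) for k in range(60)])
print(mandel_q(pn_coh))          # ~ 0 up to truncation error

# Fock state |n = 4> -> Q = -1 (maximally sub-Poissonian).
pn_fock = np.zeros(10)
pn_fock[4] = 1.0
print(mandel_q(pn_fock))         # -1.0
```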
for an initial coherent state with ; interpolating from jcm to bsm a ) for and b ) for and c ) for and ; interpolating from jcm to sm d ) for and e ) for and ; f ) for and ; interpolating from jcm to rlm g ) for and ; h ) for and i ) for and , for an initial coherent state ; a ) with para and b ) with para and c ) with para and d ) with para and e ) with para and f ) with para and ] we can note in fig.([figura_4 ] ) that , when we compare the case where the system state has a small average excitation with those having larger values of , the mandel parameters for different hamiltonians differ sensitively from each other for small values of , the region where the quantum nature of the system state is more evident .contrarily , for larger values the corresponding plots are very similar . in thesesexamples we are analizing the mandel parameter close to bsm , fig.(figura_4 a ) and d ) ) , with other close to sm , fig.([figura_4 ] b ) , and e ) ) , and another close to rlm , fig.([figura_4 ] c ) and f ) ) .this shows a great sensitivity of the system to the parameters and in the quantum regime of small numbers , as usually expected .in addition , for small values of the field state exhibts a greater sub - poissonian effect .we have proposed a ( two parameters ) interpolating hamiltonian .it allows one to extend from ( a ) the jcm , ( b ) the bsm , ( c ) the sm , and ( d ) the rlm .this new hamiltonian employs the basic operators , , and which form a closed algebra . as mentioned before, it contains all essential dynamic properties contained in a state of the atom - field system described by the previous interpolating hamiltonians . to give an example we have verified that , essentially , the results found in the paradox discussed by a. luis in the jcm remains in the scenario of this extended hamiltonian ( see fig .( [ figura_1 ] ) ) , no matter the chosen extension , say : from ( a ) to ( b ) , from ( a ) to ( b ) , from ( a ) to ( c ) , and from ( a ) to ( d ) .we have also calculated the mandel parameter to obtain the evolution of the statistical properties of the system state and their time evolution when we pass from our interpolating model to another after appropriate choices of the pair in these time evolutions we have highlighted the influence of the average excitation when large or small , upon the statistical properties of the system . from what we have learned in quantum optics , concerning the degradation caused by decoherence effects affecting quantum states , for practical purposes this result would lead us to give priority to states with smaller excitations , the quantum region of small numbers , where some types of interpolating hamiltonians have problems .we thank the brazilian funding agencies cnpq and fapeg for the partial supports .t. yoshie , a. scherer , j. hendrickson , g. khitrova , h. m. gibbs , g. rupper , c. ell , o. b. shchekin , and d. g. deppe ) , _ vacuum rabi splitting with a single quantum dot in a photonic crystal nanocavity , _ nature * 432 * , 200 - 203 ( 2004 ) m. brune , _ _schmidt - kaler , a. maali , j. dreyer , e. hagley , j. m. raimond , and s. haroche , _ quantum rabi oscillation : a direct test of field quantization in a cavity , phys _ , rev . lett . *76 * , 01800 ( 1996 ) .b. m. rodrguez - lara , f. soto - eguibar , a. z. crdenas , and h. m. moya - cessa , _ a classical simulation of nonlinear jaynes cummings and rabi models in photonic lattices _express * 21 * , 12888 ( 2013 ) .m. brune , e. hagley , j. dreyer , x. matre , a. maali , c. 
Wunderlich, J. M. Raimond, and S. Haroche, _Observing the progressive decoherence of the meter in a quantum measurement_, Phys. Rev. Lett. *77*, 4887 (1996).
We introduce a new Hamiltonian model which interpolates between the Jaynes-Cummings model and other types of such Hamiltonians. It works with two interpolating parameters, rather than one as is traditional. Taking advantage of this greater degree of freedom, we can perform continuous interpolation between the various types of these Hamiltonians. As applications we discuss a paradox raised in the literature and compare the time evolution of the photon statistics obtained in the various interpolating models. The role played by the average excitation in these comparisons is also highlighted.
cryptography is created to satisfy the people s desire of transmitting secret messages . with the development of the quantum computation , especially the proposal of shor s algorithm , the base of the most important classic cryptographic scheme was shocked .but at the same time , the principles of quantum mechanics have also shed new light on the field of cryptography as these fundamental laws guarantee the secrecy of quantum cryptosystems .any intervention of an eavesdropper , eve , must leave some trace which can be detected by the legal users of the communication channel .all kinds of quantum key distribution ( qkd ) schemes , such as bb84 protocol , b92 protocol , and the epr scheme have been proposed .recently , quantum cryptography with -state systems was also introduced. experimental research on qkd is also progressing fast , for instance , the optical - fiber experiment of bb84 and b92 protocols have been realized up to 48 km , and qkd in free space for b92 scheme has been achieved over 1 km distance . in paper , lior andlev presented a quantum cryptography based on orthogonal states firstly .then there is quantum cryptographic scheme involving truly two orthogonal states .the basic technic is to split the transfer of one - bit information into two steps , ensuring that only a fraction of the bit information is transmitted at a time .then the non - cloning theorem of orthogonal states guarantee its security .based on the impossibility of cloning nonorthogonal mixed states , the no - cloning theorem of orthogonal states says that the two ( or more ) orthogonal states of the system composed of and can not be cloned if the reduced density matrices of the subsystem which is available first ( say ) ] states in the horizontal dominoes . for the system, the same result can be reached .that is to say there is a limitation in the probability of the success eavesdropping when the hilbert space becomes large enough .and it is evident that in this strategy only particle may be demolished , and particle is not infected at all .the function of the operation to which is depended on the result of the measurement on is just to extract more information .eve may adopt the complementary eavesdropping strategy , in which eve try to eavesdrop some information by intercepting and operating only on the second particle , which may cause demolition to it .then for the set of states in systems , whose graphic depictions are -fold rotation symmetric , the probability to eavesdrop some information without being detected is equal to that of the first strategy , i.e. , . but for those states without such symmetry , it can be verified that one of the success probabilities for the complementary strategies is larger than .so we employ the symmetric states in the present scheme .of course , there are other strategies , for example , she can hold up the first particle and send out a substitute particle to bob .when comes , she makes a collective measurement under the two - particle orthogonal basis , then sends out a particle in the state of . 
In this strategy, Eve can eavesdrop the information entirely, but the probability for her to pass the checking process tends to zero, since the state of the particle is randomly chosen from a large Hilbert space. We have proposed the general conditions for orthogonal product states to be used in QKD, and then presented a QKD scheme with the orthogonal product states of the system considered, which has several distinct features such as high efficiency and great capacity. The generalization to multi-state systems is discussed, and eavesdropping is analyzed; a peculiar limitation on the success probability of an efficient eavesdropping strategy is found as the dimension of the Hilbert space becomes large enough.
The general conditions for the orthogonal product states of multi-state systems to be used in quantum key distribution (QKD) are proposed, and a novel QKD scheme with orthogonal product states in the Hilbert space is presented. We show that this protocol has many distinct features, such as great capacity and high efficiency. The generalization to higher-dimensional systems is also discussed, and a peculiar limitation for the eavesdropper's success probability is reached. PACS number(s): 03.67.Dd, 03.65.Bz
nowadays information is one of the most important resources . we can defeat the enemy in a war just manipulating his data .if we can guess the mechanism of the generation of ( pseudo-)random numbers used by a casino , then we can efficiently cheat in gambling . however , most of the so - called random number generators has a deterministic algorithm inside .it is very difficult to develop a reliable pseudo - random number generation ( prng ) method .although there are tests that allow to check whether a sequence of numbers conforms to a particular probability distribution , we can never be sure its security without the knowledge how the sequence was generated .one of the measures of randomness is so - called min - entropy . in particular , in the context of authentication , min - entropy is the probability of guessing the easiest key in a given distribution of keys .if we know the pseudo - random generating algorithm and the initial seed ( or some sequence of generated numbers ) , then the randomness of such a source is equal to zero .all classical prngs have this drawback . on the other hand, quantum physics confuses philosophers with randomness on its deepest level .this randomness is unavoidable . we knowthat if certain observables ( _ bell operators _ , which are linear functions of observed probabilities occurring in the experiment ) attain certain thresholds , then the process must be intrinsically random , or we would have to abandon some ideas that are fundamental to all physical theories .thus , values of these observables guarantee that the results of performed measurements are indeed random , no matter how does the measuring apparatus work .this way the idea of the quantum randomness certification emerged .if we want to be sure that the device we are using does really produce random numbers , we perform _ bell experiment _ , which is a kind of self testing .such an experiment involves at least two separated parties that perform subsequent measurements with different settings without any communication between them .after series of such measurements , the collected data is used to estimate the joint probabilities of the outcomes conditioned on the settings used .the most prominent example of bell operator is so - called clauser - horne - shimony - holt ( chsh) . because such self testing works independently of the internal workings of the device used ( in particular ,the exact form of the performed measurements is not important ) , if the bell inequality attains some value , we are sure that the generated results are indeed random , even if the device has been construed by a malevolent party . the amount of the obtained secure randomness is precisely quantified by means of min - entropy .this approach , in which we do not trust the vendor of our devices and draw conclusions only from the observed results , is called _ _ device - independent__ , referred further as _di_. still , bell experiments are very difficult to do .they require a high degree of precision and extremely high detection efficiencies . so far loophole - free bell experiment has not been successfully performed .but when we allow to send a state from one part of the device to another , then we do not have any non - locality , which is crucial for that way of certification .it was shown that , if we can bound the dimension of the communicated system , we still may use this _ prepare and measure scheme _ to certify the randomness . 
since we have to know something about the construction of the device ,this approach is called _ _ semi - device - independent__ ( denoted hereafter _ sdi _ ) .this offers a good compromise between security and experimental feasibility .currently all commercial quantum random generators are based on the prepare and measure scheme , _e.g. _ the _ i d quantique_s device _ quantis _ , or the _ qstream _ by _ _quintessence labs__ .these devices do not perform any self testing , so we are forced to trust their vendors . for this reason ,methods for certifying randomness in the prepare and measure scheme with the semi - device - independent approach should be investigated . in this framework analogs of bell inequalities , called _ dimension witnesses _ , are used .before we proceed we should stress that what we call random number generation is in fact randomness _ expansion _ , the process that starts with some amount of initial randomness and uses it to obtain more of it .the presented self testing procedure of the device also requires some amount of randomness ( in order to choose the measurement settings in rounds of testing experiments ) .strictly speaking , all quantum random number generators that use bell inequalities or dimension witnesses to certify the randomness are randomness expanders .after generation of a string of bits with a certain amount of min - entropy , it is possible to _ extract _ its randomness what means using a certain algorithm to produce a shorter string with a larger min - entropy per bit .in our previous paper we have investigated the relation between random number expansion protocols based on correlations occurring in the scenario where two parties share an entangled state , and on protocols relying on the prepare and measure scheme . in this paperwe develop these ideas .the organization of this paper is as follows . in the section [ sec : motivation ] we presentscenario in which we are working .next , in sections [ sec : bi ] and [ sec : dw ] , we give basic information about bell inequalities and dimension witnesses .then , in the section [ sec : bitodw ] , we recapitulate a heuristic method of obtaining a dimension witness from a bell inequality .this method was introduced in . in sections[ sec : dwtobi ] and [ sec : binarydw ] we precisely state the conditions when the randomness certified by the violation of a bell inequality lower - bounds the randomness certified by a certain value of dimension witness in the semi - device - independent scenario . in the section [ sec : symdw ] we investigate the properties of a certain class of dimension witnesses and introduce a procedure of dimension witness reduction , which can be used to obtain from an existing witness a new one with higher amount of certifiable randomness . in the section[ sec : explicitexamples ] we give examples of application of the presented methods .the aims of this paper are as follows .we clarify the methods from our previous paper and give a tighter lower bound on randomness . 
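Since min-entropy is the figure of merit throughout, here is a short sketch of how it is computed from an output distribution; the example distributions are illustrative.

```python
import numpy as np

# Min-entropy of an output distribution: H_min = -log2(max_k P(k)),
# i.e. the negative log of the probability of guessing the most likely output.
def min_entropy(probs):
    probs = np.asarray(probs, dtype=float)
    return -np.log2(probs.max())

print(min_entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0 bits: uniform over 2 bits
print(min_entropy([0.7, 0.1, 0.1, 0.1]))       # ~0.515 bits: biased source
```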
using these methods we obtain better dimension witnesses , in particular the one based on the braunstein - caves bell inequality .we also extend the applicability of the methods from to arbitrary dimensions .suppose we are a developer of a random number generating device .since consumers do not trust us , we are interested in finding a way of certification for our device .common method for the certification of quantum random number generators that are based on measurements on entangled particles is to estimate the value of a certain bell inequality that is attained in this device .still , it is too difficult to observe a loophole - free violation of bell inequality .thus we prefer prepare and measure protocols . both for prepare and measure protocols in the semi - device - independent approach , and for correlation protocols in the device - independent scheme, we would like to define a value that measures how reliable is its particular realization . as this valuewe take the expectation value of the relevant dimension witness or bell inequality , respectively , attained in the relevant protocol .this value is called a _ security parameter_. it is possible to consider several relations .one may ask whether , having a protocol of one type , we can relate it to some protocol of another type , in such a way that for the same value of their security parameters the min - entropy certified in one of them , is upper or lower bounded by min - entropy certified by the other one . one may start with a protocol based on a bell inequality and construct out of it a prepare and measure protocol certifying a reasonable amount of min - entropy .this is useful since there are many randomness expansion protocols based on bell inequalities and it is easy to obtain new ones .another situation is when we begin with some sdi protocol and want to lower bound the certified randomness using efficient numerical methods from , that works in the device - independent approach. we present a way to obtain a new bell inequality with the property that the di protocol using it certifies at most as much randomness as the sdi protocol .as mentioned above , sdi protocols are much easier to implement than the protocols based on entanglement .for this reason it is useful to have a method that allows to develop devices of the first kind with the help of the well established knowledge about the devices of the second type .we define for a di protocol : [ probdi ] let , , , and be sets .probability distribution in di scheme is a conditional probability distribution such that where and are sets of povms on a hilbert space , and is a density matrix on , and = 0 \text { if } x \neq y.\ ] ] we denote this probability by .\nonumber\ ] ] if , then is called binary .the set of all di probability distributions for given , , and is denoted by .let us take two sets , and , that label the measurement settings of alice and bob in di scheme , and two sets , and , that label their respective outcomes .a bell inequality is a linear function defined , in particular , for probability distributions .it is of the form \equiv \\ & = \sum_{a \in a } \sum_{b \in b } \sum_{x \in x } \sum_{y \in y } \alpha_{a ,b , x , y } p(a , b|x , y ) + c_i , \end{aligned}\ ] ] where .we omit if it is obvious which probability distribution is considered .the constant term in a bell inequality does not change its properties .still , we retain this general form , both for bell inequalities , and dimension witnesses in the next section . 
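A hedged sketch of evaluating a Bell expression of the general form just defined, I[p] = sum over a,b,x,y of alpha_{a,b,x,y} p(a,b|x,y) + c_I, given a coefficient table and a conditional-probability table. As an instance we use CHSH, whose coefficients can be written as (-1)^(a+b) s_{xy} with s_{xy} = +1 except s_{11} = -1 and c_I = 0; the probability table is filled with the ideal quantum statistics of a maximally entangled pair at standard CHSH angles, an illustrative assumption rather than measured data.

```python
import numpy as np

def bell_value(alpha, p, c_I=0.0):
    # I[p] = sum_k alpha_k * p_k + c_I, with k = (a, b, x, y).
    return sum(alpha[k] * p[k] for k in alpha) + c_I

sign = {(0, 0): 1.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): -1.0}
alpha_chsh = {(a, b, x, y): (-1) ** (a + b) * sign[(x, y)]
              for a in (0, 1) for b in (0, 1) for x in (0, 1) for y in (0, 1)}

theta = {0: 0.0, 1: np.pi / 2}        # Alice's measurement angles (illustrative)
phi = {0: np.pi / 4, 1: -np.pi / 4}   # Bob's measurement angles (illustrative)
p = {(a, b, x, y): (1 + (-1) ** (a + b) * np.cos(theta[x] - phi[y])) / 4
     for a in (0, 1) for b in (0, 1) for x in (0, 1) for y in (0, 1)}

print(bell_value(alpha_chsh, p))      # ~ 2*sqrt(2), above the local bound of 2
```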
in the following sections this allows to keep the same maximal expected value when performing a transformation leading from one expression to another .a particular form of bell inequality is the following correlation form \equiv \\ & = \sum_{x \in x } \sum_{y \in y } \hat{\alpha}_{x , y } c(x , y ) + \hat{c}_i , \end{aligned}\ ] ] with , and obviously , the form ( [ bi ] ) conforms the form ( [ bihat ] ) if , and only if , and is binary . for given , , , , , , bell inequality and we define the following terms : \geq s \end{aligned}\ ] ] the expression is called min - entropy , and is the min - entropy certified by the value of .for a sdi scheme , we have the following definition of the allowed probability distribution [ probsdi ] let , , and be sets , and be a hilbert space of a finite dimension .a probability distribution in sdi scheme is a conditional probability distribution such that for , and we have , where is a set of density matrices on , and are povms on for all .we say that is realized by sets and , and denote it .\nonumber\ ] ] if , then is called a binary probability distribution .the set of all sdi probability distributions for given , , and is denoted by .the set of all sdi probability distributions with restrictions that , and is denoted by .let and be sets labeling the settings of alice and bob , in the sdi scheme , and let be a set of the outcomes that bob can obtain .dimension witnesses are linear functions of probability distributions of the form \equiv \\ & = \sum_{b \in \bar{b } } \sum_{x \in \bar{x } } \sum_{y \in \bar{y } } \beta_{b , x , y } p(b|x , y ) + c_w , \end{aligned}\ ] ] where , and . if , then the dimension witness is called binary .if , then the dimension witness is called zero - summing . for given , , , , , dimension witness , and we define the following terms : \geq s \end{aligned}\ ] ] the expression called min - entropy , and is the min - entropy certified by the value of ( for the dimension ) .the following lemma summarizes some properties of dimension witnesses .[ dwlemma ]let be a hilbert space of a dimension , and let be a binary dimension witness defined by certain , , and .let be a set of states on , and be a set of binary povm on .let ] .2 . if , and , then for , which is a set of states on , = -s ] . 1 .let us take .let , , and .obviously + now , we prove that .there exist an orthonormal basis in that and where ] .+ let us define . denote by , and similarly by .+ we have , and if , then we take and , otherwise we take and .for it is easy to see that = \sum_y \max(s_{y,0},s_{y,1 } ) \geq \sum_y s_y = s. \nonumber\ ] ] the first statement in this lemma says that in the dimension the condition that all measurement operators have trace is not restrictive with regards to the set of values possible to attain .the second statement gives sufficient conditions under which an operation of negation of all states gives the same value of a dimension witness but with opposite sign .the third statement , which may be used to complement the first one , shows that under certain conditions it is not restrictive to use only projective measurements in case when the values possible to be attained are considered .consider the following bell experiment .suppose we are given a bell inequality of the form ( [ bi ] ) .alice and bob share an entangled state .alice chooses a measurement setting , and obtains an outcome . 
for each setting and result , we assign a conditional probability .alice s measurement prepares some state at bob s side .next , bob chooses a measurement setting , and obtains an outcome . the probability that bob gets , knowing both the setting and the result of alice , is .we rewrite the joint conditional probability of a given pair of results for a given pair of settings as .thus , defining , the initial bell inequality is transformed to the _ form _ of a dimension witness ( see the equation ( [ dw ] ) ) , with .we have , , and .the fact that it is possible to transform a bell inequality into the form of a dimension witness , leads us to some _heuristic _ method to achieve an sdi protocol that certifies a reasonable amount of randomness , once we have a di protocol .we get the sdi protocol if , instead of measuring on alice s side , she gets `` the outcome '' as a part of her input with the probability distribution .thus , we obtain a pair that we use as an index of the state to be send .this way , the device on the side of alice prepares one of states .bob still has measurement settings .in this section we construct a sequence of devices that shows that the randomness certified by an sdi protocol can be lower bounded by the randomness certified in a certain di protocol minus .we consider a device that we get from an untrusted vendor , and that consists of two black boxes .its only parameter that we can verify ( or trust ) , is the dimension of the message send from one part of it , to the another one .we assume , that the device can not communicate with the world outside the laboratory .the black box on alice s side has buttons with labels and emits one of the states of the dimension from the set of states .the states are unknown to us , and are of arbitrary , possibly mixed , form .the black box on bob s side has buttons with labels and , after receiving the qubit from alice s black box , it performs one of the measurements given by povms from the set .we do not know , how the measurements are performed .this description is semi - device independent , since we know only the dimension .suppose we are given a dimension witness w ( of the form ( [ dw ] ) ) that achieves in the experiments on the device the expected value .we denote the conditional probability of obtaining the outcome when the chosen settings are and , by . 
the device is not trusted , but it is possible to consider another device , , that consists of two parts , with buttons labeled by and on the alice s side and on the bob s side , respectively .the parts are sharing a maximally entangled state of the dimension .the part on the alice s side performs some measurement , depending on the chosen input .this measurement projects the alice s part of the singlet on the state that is the same as the relevant state from the device .if the projection succeeded , which happens with the probability , then the device returns and changes the state on the alice s side into the state , otherwise it returns .since the shared state is a singlet , this measurement prepares the same -dimensional state on the bob s side .then he performs the same povm as the device , and returns the outcome .the probability that alice gets the outcome with the setting , and simultaneously bob gets the outcome with the setting is denoted by .it is easy to see , that .now let us consider another device , .it has the same interface like , but the conditions on the internal working are relaxed , _ viz ._ we do not assume anything about the performed measurements , and alice s and bob s parts are allowed to share any , possibly entangled , state of an arbitrary dimension .the probability of obtaining the outcomes and with given pair of settings and for alice and bob , respectively , are denoted by .we apply a constraint , where is the probability of getting the outcome by alice with the setting with the device .obviously , all the conditional probability distributions that are possible to be obtained by the device ( and thus also by the device ) , are also possible to be obtained by this device .note that this description is fully device - independent , and that there are semi - definite programs in the npa hierarchy that efficiently approximate the probability distributions of the device .since the device is a relaxed version of the initial device , if both of them have the same value of the relevant security parameters , then the certified amount of min - entropy generated by the device gives a lower bound of the min - entropy certified to be generated by the device .we recapitulate the above results in the following theorem [ dwtobitheorem ] let , and be sets .let us take , , a bell inequality of the form ( [ bi ] ) , and a dimension witness of the form ( [ dw ] ) , satisfying .let be a subset of with ( see the definition [ probsdi ] ) that satisfies .let be a set of all probability distribution defined by , where is a device - independent probability distribution such that = s ] .thus , since the potential adversary is interested in increasing the probability of a particular outcome of the measurement as much as possible , the form of these measurements that maximizes his guessing probability is the following : by the lemma [ dwlemma].1 and [ dwlemma].3 it is not restrictive for the vendor to use only projectors of trace for the measurements different than .the strategy of using a measurement of the form ( [ optmeasy0 ] ) for the setting , and projectors of trace for all remaining measurements is equivalent to using the following mixed strategy . in cases , a projective measurement of trace used for the measurement ( we call this strategy : p ) , and in cases the outcome is deterministic - this is referred hereafter as a deterministic strategy , or simply : d. 
for the remaining measurements the same projective measurements of trace are used in both cases .the guessing probability for the strategy d is , and for the strategy p is , thus the average guessing probability is in the case of a zero - summing dimension witness with the deterministic strategy , measurements with the setting give no contribution to the value of the witness .thus the certification of the randomness with the dimension witness when the vendor of the device uses the mixed strategy is , after applying certain affine transformation ( see equation ( [ deltaaffine ] ) ) , equivalent to the certification with a dimension witness with defined in the equation ( [ deltabeta ] ) , and the strategy p , where the guessing probability of eve is given by the equation ( [ deltaaffine ] ) .since the vendor may choose any ] .let be a set probability distributions defined by , where is a device - independent probability distribution that satisfies = s ] .the figure [ fig : t3_npa_p ] shows the min - entropies certified with the bell inequality for different additional conditions . in the figure [ fig : t3_cert_p ] lower - bounds on the certified min - entropy obtained by theorem [ dwtobitheorem ] from the npa hierarchy with additional condition are plotted .these values assume that the untrusted vendor uses the strategy p ( see the section [ sec : binarydw ] ) .figures [ fig : t3_npa_delta ] and [ fig : t3_cert ] contains the relevant data for the mixed strategy .[ [ sub : t2 ] ] t2 ~~ a simple bell inequality is obtained from the symmetric dimension witness of the to qrac used in .it has the following form where is defined in the equation ( [ ctowfull ] ) and .the reduced form of this dimension witness is where is defined by the equation ( [ ctow ] ) and .robustness of the reduced version has been already investigated in , in the figure .the randomness certified by these two dimension witnesses is lower - bounded by the values obtained with the following two bell inequalities . for the dimension witness defined in the equation ( [ t2dwfull ] ), we use a bell inequality and for the dimension witness from the equation ( [ t2dw ] ) , the operator defined in the equation ( [ t2bi ] ) is exactly the chsh bell operator .lower bounds for this case are shown in figs [ fig : t2_npa_p ] , [ fig : t2_cert_p ] , [ fig : t2_npa_delta ] and [ fig : t2_cert ] .the reduced witness ( [ t2dw ] ) has recently been experimentally realized .the values obtained in this experiment refer to ( 5.51 in the scaling used there ) and ( 5.56 ) , concluded therein to certify and bits of randomness , respectively . 
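for orientation , the fully device - independent analogue of such statements is the analytic chsh bound of pironio et al . , h_min >= 1 - log2( 1 + sqrt( 2 - s^2 / 4 ) ) for a chsh value s ; the sketch below ( ours , with arbitrary example values of s rather than the witness values quoted above , which are expressed in a different scaling ) simply evaluates that curve :

import math

def chsh_min_entropy(s):
    # analytic di lower bound on the min-entropy of one output bit,
    # valid for a chsh value s between 2 and 2*sqrt(2)
    if not 2.0 <= s <= 2.0 * math.sqrt(2.0) + 1e-12:
        raise ValueError("s outside the quantum violation range")
    return 1.0 - math.log2(1.0 + math.sqrt(max(0.0, 2.0 - s * s / 4.0)))

for s in (2.0, 2.3, 2.6, 2.0 * math.sqrt(2.0)):
    print(f"s = {s:.3f}  ->  h_min >= {chsh_min_entropy(s):.3f} bits")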
if the reduction had not been performed , then only and would have been certified .in the following bell operator is investigated : this bell operator is similar in the form to the dimension witness introduced in .since the relevant bell inequality is very robust in certifying the randomness , the dimension witness with randomness lower - bounded by it , may also be expected to be robust .assuming , we turn it into the following dimension witness since this dimension witness is symmetric , we follow the steps which lead from the expression ( [ ctowfull ] ) , to the expression ( [ ctow ] ) , to obtain the following reduced dimension witness if we start with the dimension witness defined in the equation ( [ modchshdwfull ] ) , and do not use the symmetry , we get the following lower - bounding bell inequality the dimension witness from the equation ( [ modchshdwfull ] ) lower - bounds the dimension witness from the equation ( [ modchshdw ] ) , and thus both are lower - bounded ( in the sense of the theorem [ dwtobitheorem ] and the conjecture below it ) by the bell inequality from the equation ( [ modchshbifull ] ) , but only the second dimension witness is proved to be lower - bounded by ( see the equation ( [ modchshbi ] ) ) .lower - bounds for this set of di and sdi protocols are shown in figs [ fig : modchsh_npa_p ] , [ fig : modchsh_cert_p ] , [ fig : modchsh_npa_delta ] , and [ fig : modchsh_cert ] .in this paper we explained in more details the ideas from our previous paper .in particular all steps of the proof of the theorem [ dwtobitheorem ] were provided .a tighter bound , using condition in di scheme , has been introduced .we have presented a new method of dimension witness reduction and a clear distinction between reduced and full dimension witnesses has been made .reduced dimension witnesses have been shown to be able to certify more randomness .min - entropies of several protocols , that had not been considered previously in , were evaluated .recently a new method that allows to lower - bound the randomness obtained in a sdi scheme directly , using semi - definite programming , has been introduced in . however , the complexity of the algorithm from increases significantly with the dimension of hilbert space while in our case the same computation provides a bound for all dimensions .it remains an open question , what are the conditions on a dimension witness under that the adversary has no gain in using the mixed strategy rather than p.sdp was implemented in octave using sedumi toolbox .this work is supported by ideas plus ( idp2011 000361 ) , ncn grant 2013/08/m / st2/00626 , fnp team and the national natural science foundation of china ( grant no.11304397 ) .the major part of this work has been written in the forests of sopot .a. rukhin , j. soto , j. nechvatal , m. smid , e. barker , s. leigh , m. levenson , m. vangel , d. banks , a. heckert , j. dray , s. vo , _ special publication 800 - 22 revision 1a _ , national institute of standards and technology , u.s .department of commerce , available at http://csrc.nist.gov/publications/pubssps.html r. koenig , r. renner , c. schaffner , ieee trans ., vol . * 55 * , no . 9( 2009 ) .w. e. burr , d. f. dodson , e. m. newton , r. a. perlner , w. t. polk , s. gupta , e. a. nabbu , _ nist special publication 800 - 63 - 2 _ , national institute of standards and technology , u.s .department of commerce , available at http://csrc.nist.gov/publications/pubssps.html s. pironio , a. acin , s. massar , a. boyer de la giroday , d. n. 
matsukevich , p. maunz , s. olmschenk , d. hayes , l. luo , t. a. manning , c. monroe , nature * 464 * , 1021 ( 2010 ) .j. f. clauser , m.a .horne , a. shimony , r. a. holt , phys .* 23 * , 880 ( 1969 ) .r. colbeck , a. kent , j. phys . a : math .theor . , * 44*(9 ) 095305 ( 2011 ) .d. mayers and a. yao , in focs 98 : _ proceedings of the 39th annual symposium on foundations of . computer science _ ( ieee computer society , washington , dc , usa ) , 503 ( 1998 ) .li , z .- q .yin , y .- c .wu , x .- b .zou , s. wang , w. chen , g .- c .guo , z .- f .han , phys .a * 84 * , 034301 ( 2011 ) .c . liang , t. vertesi , n. brunner , phys .a * 83 * , 022108 ( 2011 ) .n. brunner , s. pironio , a. acin , n. gisin , a. a. methot , v. scarani , phys .lett . * 100 * , 210503 ( 2008 ) .r. gallego , n. brunner , c. hadley , a. acin , phys .lett . * 105 * , 230501 ( 2010 ) .m. pawowski , n. brunner , phys .a * 84 * , 010302(r ) ( 2011 ) .m. dallarno , e. passaro , r. gallego , a. acin , phys .a * 86 * , 042312 ( 2012 ) .l. trevisan , _ journal of the acm _ * 48 * , 860 ( 2001 ) . a. de , c. portmann , t. vidick , r. renner , siam journal on computing * 41*(4 ) , 915 ( 2012 ) . m. tomamichel , c. schaffner , a. smith , r. renner , ieee trans .57*(8 ) , ( 2011 ) .li , p. mironowicz , m. pawowski , z .- q .yin , y .- c .wu , s. wang , w. chen , h .-hu , g .- c .guo , z .- f .a * 87 * , 020302(r ) ( 2013 ) .braunstein , c.m .caves , phys .lett . * 61 * , 662 ( 1988 ) .m. navascues , s. pironio , a. acin , phys .. lett . * 98 * , 010401 ( 2007 ) .m. navascues , s. pironio , a. acin , new j. phys .* 10 * , 073013 ( 2008 ) .a. ambainis , a. nayak , a. ta - shma , and u. vazirani , _ dense quantum coding and a lower bound for 1-way quantum automata _ , in proceedings of 31st acm symposium on theory of computing , 376 ( 1999 ) .li , m. pawowski , z .- q .yin , g .- c .guo , z .- f .han , phys .a * 85 * , 052308 ( 2012 ) .d. collins , n. gisin , n. linden , s. massar , s. popescu , phys .lett . * 88 * , 040404 ( 2002 ) .sturm , optimization methods and software * 11 * , 625 ( 1999 ) .
in this paper we develop a method for investigating semi - device - independent randomness expansion protocols that was introduced in [ li _ et al . _ phys . rev . a * 87 * , 020302(r ) ( 2013 ) ] . this method allows one to lower - bound , with semi - definite programming , the randomness obtained from random number generators based on dimension witnesses . we also investigate the robustness of some randomness expanders using this method . we show the role of an assumption about the trace of the measurement operators and a way to avoid it . the method is also generalized to systems of arbitrary dimension , and to a more general form of dimension witnesses than in the previous paper . finally , we introduce a procedure of dimension witness reduction , which can be used to obtain from an existing witness a new one with a higher amount of certifiable randomness . the presented method finds an application in the experiment of [ ahrens _ et al . _ phys . rev . lett . * 112 * , 140401 ( 2014 ) ] .
computers have become an integral part of modern life , and are essential for most academic research . since the middle of last century , researchers have invented new techniques to boost their calculation rate , e.g. by engineering superior hardware , designing more effective algorithms , and introducing increased parallelism .due to a range of physical limitations which constrain the performance of single processing units , recent computer science research is frequently geared towards enabling increased parallelism for existing applications . by definition, parallelism is obtained by concurrently using the calculation power of multiple processing units . from small to large spatial scales , this is respectively done by : facilitating concurrent operation of instruction threads within a single core , of cores within a single processor , of processors within a node , of nodes within a cluster or supercomputer , and of supercomputers within a distributed supercomputing environment .the vision of aggregating existing computers to form a global unified computing platform , and to focus that power for a single purpose , has been very popular both in popular fiction ( e.g. , the borg collective mind in star trek or big brother in orwell s 1984 ) and in scientific research ( e.g. , amazon ec2 , projects such as teragrid / xsede and egi , and numerous distributed computing projects ) .although many have tried , none have yet succeeded to link up more than a handful of major computers in the world to solve a major high - performance computing problem .very few research endeavors aim to do distributed computing at such a scale to obtain more performance .although it requires a rather large labour investment across several time zones , accompanied with political complexities , it is technically possible to combine supercomputers to form an intercontinental grid .we consider that combining supercomputers in such a way is probably worth the effort if many machines are involved , rather than a few .combining a small number of machines is hardly worth the effort of doubling the performance of a single machine , but combining hundreds or maybe even thousands of computers together could increase performance by orders of magnitude . 
herewe share our experiences , and lessons learned , in performing a large cosmological simulation using an intercontinental infrastructure of multiple supercomputers .our work was part of the cosmogrid project , an effort that was eventually successful but which suffered from a range of difficulties and set - backs .the issues we faced have impacted on our personal research ambitions , and have led to insights which could benefit researchers in any large - scale computing community .we provide a short overview of the cosmogrid project , and describe our initial assumptions in section [ sec : vision ] .we summarize the challenges we faced , ascending the hierarchy from thread to transcontinental computer , in section [ sec : parallel ] and we summarize how our insights affected our ensuing research agenda in section [ sec : after ] .we discuss the long - term implications of cosmogrid in section [ sec : future ] and conclude the paper with some reflections in section [ sec : discuss ] .the aim of cosmogrid was to interconnect four supercomputers ( one in japan , and three across europe ) using light paths and 10 gigabit wide area networks , and to use them concurrently to run a very large cosmological simulation .we performed the project in two stages : first by running simulations across two supercomputers , and then by extending our implementation to use four supercomputers concurrently .the project started as a collaboration between researchers in the netherlands , japan and the united states in october 2007 , and received support from several major supercomputing centres ( sara in amsterdam , epcc in edinburgh , csc in espoo and naoj in tokyo ) .cosmogrid mainly served a two - fold purpose : to predict the statistical properties of small dark matter halos from an astrophysics perspective , and to enable production simulations using an intercontinental network of supercomputers from a computer science perspective . for cosmogrid , we required a code to model the formation of dark matter structures ( using particles in total ) over a period of over 13 billion years .we adopted a hybrid tree / particle - mesh ( treepm ) n - body code named _ greem _ , which is highly scalable and straightforward to install on supercomputers .greem uses a barnes - hut tree algorithm to calculate force interactions between dark matter particles over short distances , and a particle - mesh algorithm to calculate force interactions over long distances . later in the project, we realized that further code changes were required to enable execution across supercomputers .as a result , we created a separate version of greem solely for this purpose .this modified code is named _ sushi _ , which stands for simulating universe structure formation on heterogeneous infrastructures .our case for a distributed computing approach was focused on a classic argument used to justify parallelism : multiple resources can do more work than a single one .even the world s largest supercomputer is about an order of magnitude less powerful than the top 500 supercomputers in the world combined . 
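the tree / particle - mesh split used by codes of this family is usually realized by damping the short - range ( tree ) force beyond a splitting scale r_s and letting the mesh supply the remainder ; the sketch below uses the common gaussian splitting kernel ( gadget - 2 convention ) as an illustration - greem s exact kernel and cutoff radius may differ :

import math

def short_range_fraction(r, r_s):
    # fraction of the newtonian pair force assigned to the tree part
    # for the gaussian force-splitting with scale r_s
    x = r / (2.0 * r_s)
    return math.erfc(x) + (r / (r_s * math.sqrt(math.pi))) * math.exp(-x * x)

def tree_pair_force(m1, m2, r, r_s, g=1.0):
    # magnitude of the short-range (tree) contribution for a particle pair
    return g * m1 * m2 / r**2 * short_range_fraction(r, r_s)

# beyond a few splitting scales the tree force becomes negligible, so the tree
# walk stays local to each domain while the mesh handles the long-range coupling
for r in (0.5, 1.0, 2.0, 4.0, 6.0):
    print(f"r = {r:3.1f} r_s : short-range fraction = {short_range_fraction(r, 1.0):.2e}")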
in terms of interconnectivitythe case was also clear .our performance models predicted that a 1 gbps wide area network would already result in good simulation performance ( we had 10 gbps links at our disposal ) , and that the round - trip time of about 0.27 s between the netherlands and japan would only impose a limited overhead on a simulation that would require approximately 100 s per integration time step .world - leading performance of our cosmological n - body integrator was essential to make our simulations possible , and our japanese colleagues optimized the code for single - machine performance as part of the project .snapshots / checkpoints would then be written distributed across sites , and gathered at run - time . at the start of cosmogrid , we anticipated to run across two supercomputers by the summer of 2008 , and across four supercomputers by the summer of 2009 .we assumed a number of political benefits : the simulation we proposed required a large number of core hours and produce an exceptionally large amount of data .these requirements would have been a very heavy burden for a single machine , and by executing a distributed setup we could mitigate the computational , storage and data i / o load imposed on individual machines .we also were aware of the varying loads of machines at different times , and could accommodate for that by rebalancing the core distribution whenever we would restart the distributed simulation from a checkpoint .overall , we mainly expected technical problems , particularly in establishing a parallelization platform which works across supercomputers . installinghomogeneous software across heterogeneous ( and frequently evolving ) supercomputer platforms appeared difficult to accomplish , particularly since we did not possess administrative rights on any of the machines .in addition , the greem code had not been tested in a distributed environment prior to the project .although we finalized the production simulations about a year later than anticipated , cosmogrid was successful in a number of fundamental areas .we managed to successfully execute cosmological test simulations across up to four supercomputers , and full - size production simulations across up to three supercomputers .in addition , our astrophysical results have led to new insights on the mass distribution of satellite halos around milky - way sized galaxies , on the existence of small groups of galaxies in dark - matter deprived voids , the structure of voids and the evolution of barionic - dominated star clusters in a dark matter halo . however , these results , though valuable in their own right , do not capture some of the most important and disturbing lessons we have learned from cosmogrid about distributed supercomputing . 
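the sizing argument behind these numbers is easy to reproduce ; in the sketch below ( ours ) the data volume exchanged per step and the number of message exchanges are placeholder parameters , while the 100 s step time and the 0.27 s round - trip time are the values quoted above :

def comm_overhead(volume_bytes, bandwidth_bps, rtt_s, step_time_s, exchanges=1):
    transfer = 8.0 * volume_bytes / bandwidth_bps   # time to push the data through
    latency = exchanges * rtt_s                     # round trips per integration step
    return (transfer + latency) / step_time_s

step_time = 100.0                                    # seconds per integration step
rtt = 0.27                                           # amsterdam - tokyo round trip
for gbps in (1.0, 10.0):
    for volume_gb in (0.5, 1.0, 2.0):                # placeholder volumes per step
        ov = comm_overhead(volume_gb * 1e9, gbps * 1e9, rtt, step_time, exchanges=4)
        print(f"{gbps:4.0f} gbps link, {volume_gb:3.1f} GB/step -> overhead {100 * ov:5.1f} %")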
herewe summarize our experiences on engineering a code from the level of threads to that of a transcontinental machine , establishing a linked infrastructure to deploy the code , reserving the required resources to execute the code , and the software engineering and sustainability aspects surrounding distributed supercomputing codes .we are not aware of previous publications of practical experiences on the subject , and for that reason this paper may help achieve a more successful outcome for existing research efforts in distributed hpc .the greem code was greatly re - engineered during cosmogrid .this was necessary to achieve a complete production simulation within the core hour allocations that we obtained from ncf and deisa .our infrastructure consisted of three cray xt4 machines with little endian intel chips and one ibm machine with big endian power7 chips . at the start of cosmogrid ,the power7 architecture was not yet in place .greem had been optimized for the use of sse and avx instruction sets , executing 10 times faster when these instructions are supported .sse was available in intel chips and avx was expected to be available in power7 chips .however , the support for avx in power7 never materialized , forcing us to find alternative optimization approaches .an initial 2-month effort by ibm engineers , leading to a 10% performance improvement , did not speed up the code sufficiently .we then resorted to manual optimization without the use specialized instruction sets , e.g. , by reordering data structures and unrolling loops .this effort resulted in a performance increase , which was within a factor of 3 of our original target .much of the parellelization work on greem was highly successful , as evidenced by the gordon bell prize awarded in 2012 to ishiyama et al .however , one unanticipated problem arose while scaling up the simulation to larger problem sizes .greem applies a particle - mesh ( pm ) algorithm to resolve the interactions beyond a preconfigured cutoff limit .the implementation of is algorithm was initially serial , as the overhead was a negligible component ( % ) of the total execution time for smaller problems .however , the overhead became much larger we scaled up to mesh sizes beyond mesh cells , forcing us to move from a serial implementation to a parallel implementation .initially we considered executing greem as - is , and using a middleware solution to enable execution across supercomputers . in the years leading up to cosmogrid , a large number of libraries emerged for distributed hpc ( e.g. 
, mpich - g2 , openmpi , mpig and pacx - mpi ) .many of these were strongly recommended by colleagues in the field , and provided mpi layers that allowed applications to pass messages across different computational resources .although these were well - suited for distributed hpc across self - administered clusters , we quickly found that a distributed supercomputing environment was substantially different .first , supercomputers are both more expensive and less common than clusters , and the centres managing them are reluctant to install libraries that require administrative privileges , due to risks of security and ease of maintenance ( mpi distributions tend to require such privileges ) .second , the networks interconnecting the supercomputers are managed by separate organizational entities , and the default configurations at each network endpoint are almost always different and frequently conflicting .this is not the case in more traditional ( national ) grid infrastructures , where uniform configurations can be imposed by the overarching project team .the heterogeneity in network configurations resulted in severe performance and reliability penalties when using standard tcp - based middleware ( such as mpi and scp , unless we could find some way to either ( a ) customize the network configuration for individual paths or ( b ) adopt a different protocol ( e.g. , udp ) which ignores these preset configurations . in either case , we realized that using standard tcp - based mpi libraries for the communication between supercomputers was no longer a viable option .using any other library had the inevitable consequence of modifying the main code , and eventually we chose to customize greem ( the customized version is named sushi ) and establish a seperate communication library ( mpwide ) for distributed supercomputing . having a code to run , and computer time to run it on is insufficient to do distributed concurrent supercomputing .the amount of data to be transported is per integration time step and should not become the limiting factor in measuring performance .each integration step would take about 100s . when allowing a 10% overhead we would have to require a network speed of / s .our collaboration with cees de laat ( university of amsterdam ) and kei hiraki ( tokyo university ) enables us to have two network and data transport specialists at each side of the light path . at the time, russia had planned to make their military 10gbps dark fiber available for scientific experiments , but due to their enhanced military use we were unable to secure access to this cable .the eventual route of the optical cable is presented in fig.[fig : cgnetworktopology ] .we had a backup network between the ntt - com router at jgn2plus and the starlight interconnect to guarantee that our data stream remained stable throughout our calculations .one of the interesting final quirks in our network topology was the absence of an optical network interface in the edinburgh machine ( which was installed later ) , and the fact that the optical cable at the japanese side reached the computer science building on the mitaka campus in tokyo next to where the supercomputer at naoj was located .a person had to go physically to dig a hole and install a connecting cable between the two buildings . from a software perspective , we present our design considerations on mpwide fully in groen et al . . 
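a communication library for such long fat paths typically uses many parallel tcp streams ( as mpwide does , see below ) , and the rationale is simple : to keep the path busy the data in flight must cover the bandwidth - delay product of the link , and with n streams each one only needs roughly 1/n of that window , so a single lost packet stalls one small stream instead of the whole transfer . a back - of - the - envelope sketch ( ours ) with the link parameters quoted in this paper :

def bandwidth_delay_product(bandwidth_bps, rtt_s):
    # bytes that must be in flight to keep the path fully utilised
    return bandwidth_bps * rtt_s / 8.0

bdp = bandwidth_delay_product(10e9, 0.27)            # 10 gbps, ~0.27 s round trip
print(f"aggregate bandwidth-delay product: {bdp / 1e6:.0f} MB")
for n_streams in (1, 16, 64, 256):
    per_stream = bdp / n_streams
    print(f"{n_streams:4d} tcp streams -> ~{per_stream / 1e6:7.2f} MB buffer per stream")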
here we will summarize the main experiences and lessons that we learned from cosmogrid , as well as our experiences in readying the network in terms of software configuration . we initially attempted to homogenize the network configuration settings between the different supercomputers . this effort failed , as it was complicated by the presence of over a dozen stakeholder organizations , and further undermined by the lack of diagnostic information available to us . for example , it was not always possible for us to pinpoint specific configuration errors to individual routers , and to their respective owners . we also assessed the performance of udp - based solutions such as quanta and udt , which operate outside of the tcp - specific configuration settings of network devices . however , we were not able to universally adopt such a solution , as some types of routers filter or restrict udp traffic . we eventually converged on basing mpwide on multi - stream tcp , and combined this with mechanisms to customize tcp settings on each of the communication nodes . our initial tests were marred by network glitches , particularly on the path between amsterdam and tokyo ( see fig . [ fig : cgglitch ] for an example where packets were periodically stalled ) . however , later runs resulted in more stable performance once we used a different path and adjusted the mpwide configuration to use very small tcp buffer sizes per stream . [ figure caption : a test run with particles , run in parallel across the supercomputers in amsterdam and tokyo ( 32 cores per site ) . we present the total time spent ( blue dotted line ) , time spent on calculation ( green dashed line ) , and the time spent on communication with mpwide ( red solid line ) . stalls in the communication due to dropped packets resulted in visible peaks in the communication performance measurements . reproduced from groen et al . note : the communication time was relatively high ( of total runtime ) in this test run due to the small problem size ; in we also present results from a test run with particles , which had a communication overhead of of total runtime . ] the ability to have concurrent access to multiple supercomputers is an essential requirement for distributed supercomputing . within cosmogrid , we initially agreed that the four institutions involved would provide so - called `` phone - based '' advance reservation for the purpose of this project . however , due to delays in the commissioning of the light path , and due to political resistance regarding advance reservation within some of the supercomputing centres ( in part caused by the increasing demand on the machines ) , it was no longer possible to use this means of advance reservation on all sites . eventually , we ended up with a different `` reservation '' mechanism for each of the supercomputers . the original approach of calling up was still supported for the huygens machine in the netherlands .
for the louhi machine in finland ,calling up was also possible , but the reservation was established through an exclusively dedicated queue , as no direct reservation system was in place .this approach had the side effect of locking other users out , and therefore we only opted to use it as a last resort .for the hector machine in the uk , it was possible to request higher priority , but not to actually reserve resources in advance .support for advance reservation was provided there shortly after cosmogrid concluded , but at the time the best method to `` reserve '' the machine was to submit a very large high priority job right before the machine maintenance time slot .we then need to align all other reservations to the end of that maintenance time slot , presumed usually to be 6 hours after the start of the maintenance .for the cray machine in japan , reservation was no longer possible due to the high work load .however , some mechanisms of augmented priority could be established indirectly , e.g. by chaining jobs , which allowed for a job to be kept running at all times .the combination of these strategies made it impossible to perform a large production run using all four sites .we did perform smaller tests using the full infrastructure ( using regular scheduling queues and hoping for the best ) , but we were only able to do the largest runs using either the three european machines , or huygens combined with the cray in tokyo .the task of engineering a code for distributed supercomputing is accompanied with a number of unusual challenges , particularly in the areas of software development and testing . at the time, we had to ensure that greem and sushi remained fully compatible with all four supercomputer platforms . with no infrastructure in place to do continuous integration testing on supercomputer nodes ( even today this is still a rarity ) , we performed this testing periodically by hand . in general , testing on a single site is straightforward , but testing across sites less so .we were able to arrange proof - of - concept tests across 4 sites using very small jobs ( e.g. , 16 cores per site ) by putting these jobs with long runtimes in the queue on each machine and waiting with starting the run until the jobs are running simultaneously . for slightly larger jobs ( e.g. , 64 cores per site )this became difficult during peak usage hours as the queuing times of each job became longer and less predictable .however , we have been able to perform a number of tests during more quiet periods of the week ( e.g. , at 2 am ) , without using advance reservation . for yet larger test runswe were required to use advance reservation , reduce the number of supercomputers involved , or both .software testing is instrumental to overall development , particularly when developing software in a challenging environment like an intercontental network of supercomputers . 
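the poor man s co - scheduling described above ( queueing long - running jobs everywhere and starting the run only once all of them happen to be executing ) can be automated with a small polling loop . the sketch below is purely hypothetical : job_is_running and trigger_run stand for whatever site - specific mechanisms ( batch - system queries , files on a shared location , and so on ) are actually available :

import time

def job_is_running(site, job_id):
    # hypothetical placeholder: query the batch system of `site` for `job_id`
    raise NotImplementedError

def trigger_run(sites):
    # hypothetical placeholder: tell the waiting jobs to start the coupled run
    raise NotImplementedError

def wait_for_coschedule(jobs, poll_seconds=60, timeout_hours=24):
    # jobs: dict mapping site name -> queued job id
    deadline = time.time() + 3600 * timeout_hours
    while time.time() < deadline:
        states = {site: job_is_running(site, jid) for site, jid in jobs.items()}
        if all(states.values()):
            trigger_run(list(jobs))
            return True
        time.sleep(poll_seconds)
    return False   # give up and resubmit, e.g. outside peak hours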
here, the lack of facilities for advance reservation and continuous integration made testing large systems prohibitively difficult , and had an adverse effect on the development progress of sushi .we eventually managed to get an efficient production calculation running for 12 hours across three sites and with the full system size of particles , but with better facilities for testing across sites we could well have tackled larger problems using higher core counts .more emphasis and investment in testing facilities at the supercomputer centres would have boosted the cosmogrid project , and such support would arguably be of great benefit to increase the user uptake of supercomputers in general .our experience with cosmogrid changed how we approached our computational research in two important ways .first , due to the political hardships and the lack of facilities for advance reservation and testing , we changed our emphasis from distributed concurrent supercomputing towards improving code performance on single sites .second , our expertise enabled us the enter the relatively young field of distributed multiscale computing with a head start .multiscale computing involves the combination of multiple solvers to solve a problem that encompasses several different length and time scales .each of these solvers may have different resource requirements , and as a result such simulations are frequently well - suited to be run across multiple resources .drost et al .applied some of our expertise , combining it with their experience using ibis , to enable the amuse environment to run different coupled solvers on different resources concurrently .our experiences with cosmogrid were an important argument towards redeveloping the muscle coupling environment for the mapper project . in this eu - funded consortium ,borgdorff et al . developed a successor ( muscle 2 ) , which is optimized for easier installation on supercomputers , and automates the startup of solvers that run concurrently on different sites ( a task that was done manually in cosmogrid ) .in addition , we integrated mpwide in muscle2 and used mpwide directly to enable concurrently running coupled simulations across sites .distributed high - performance supercomputing is not for the faint at heart .it requires excellent programming and pluralization skills , stamina , determination , politics and hard labour .one can wonder if it is worth the effort , but the answer depends on the available resources and the success of proposal writing . about 2030% of the proposals submitted to incite ( http://www.doeleadershipcomputing.org/faqs/ ) or prace ( http://www.prace-ri.eu/prace-kpi/ ) are successful , butthese success rates are generally lower for the very largest infrastructures ( e.g. , ornl titan ) . in addition , some of the largest supercomputers ( such as tianhe-2 and the k computer ) provide access only to closely connected research groups or to projects that have a native principal investigator .acquiring compute time on these largest architectures can be sufficiently challenging that running your calculations on a number of less powerful but earlier accessible machines may be easier to accomplish . 
accessing several of such machines through one project is even harder , and probably not very realistic .similarly , for the different architectures , it would be very curious to develop a code that works optimal on k computer and ornl titan concurrently .achieving 25 pflops on titan alone is already a major undertaking , and combining such an optimized code with a tofu - type network architecture ( which is present on the k computer ) would make optimization a challenge of a different order .we therefore do not think that distributed architectures will be used to beef - up the world s fastest computers , nor to connect a number of top-10 to top-100 supercomputers to out - compute the number 1 .the type of distributed hpc as discussed in this article is probably best applied to large industrial or small academic computer clusters .these architectures are found in many academic settings or small countries , and are relatively easily accessible , by peer review proposals or via academic license agreements . in this context, we think it is more feasible to connect 10 to 100 of such machines to outperform a top 1 to 10 computer .we have presented our experiences from the cosmogrid project in high - performance distributed supercomputing .plainly put , distributed high - performance supercomputing is a tough undertaking , and most of our initial assumptions were proven wrong .much of the hardware and software infrastructure was constructed with very specific use - cases in mind , and was simply not fit for purpose to do distributed high - performance supercomputing .a major reason why we have been able to establish distributed simulations at all is due to the tremendous effort of all the people involved , from research groups , networking departments and supercomputer centres .it was due to their efforts to navigate the project around the numerous technical and political obstacles that distributed supercomputing became even possible .cosmogrid was unsuccessful in establishing high - performance distributed supercomputing as the future paradigm for using very large supercomputers .however , the project did provide a substantial knowledge boost to our subsequent research efforts , which is reflected by the success of projects such as mapper and amuse . the somewhat different approach taken in these projects ( aiming for more local resource infrastructures , and with a focus on coupling different solvers instead of parallelizing a single one ) resulted in tens of publications which relied on distributed ( super-)computing .the hpc community has recently received criticism for its conservative approaches and resistance to change ( e.g. , ) . through cosmogrid, it became obvious to us that resource providers can be subject to tricky dilemmas , where the benefits of supporting one research project need to be weighed against the possible reduced service ( or support ) incurred by other users . in light of that, we do understand the conservative approaches followed in hpc to some extent . 
in cosmogrid, we tried to work around that by ensuring that our software was installable without any administrative privileges , and we recommend that new researchers who wish to do distributed supercomputing do so as well ( or , perhaps , adopt very robust , flexible and well - performing tools for virtualization ) .in addition , we believe that cosmogrid would have been greatly helped if innovations such as automated advance reservation systems for resources and network links , facilities for systematic software testing and continuous integration , and streamlined procedures for obtaining access to multiple sites had been in place . even today , such facilities make hpc infrastructures more convenient for new types of users and applications , and strengthen the position of the hpc community in an increasingly cloud - dominated computing landscape .both spz and dg contributed equally to this work .we are grateful to tomoaki ishiyama , keigo nitadori , jun makino , steven rieder , stefan harfst , cees de laat , paola grosso , steve macmillan , mary inaba , hans blom , jeroen bdorf , juha fagerholm , tomoaki ishiyama , esko kernen , walter lioen , jun makino , petri nikunen , gavin pringle and joni virtanen for their contributions to this work .this research is supported by the netherlands organization for scientific research ( nwo ) grant # 639.073.803 , # 643.200.503 and # 643.000.803 and the stichting nationale computerfaciliteiten ( project # sh-095 - 08 ) .we thank the deisa consortium ( eu fp6 project ri-031513 and fp7 project ri-222919 ) for support within the deisa extreme computing initiative ( gbbp project ) .this paper has been made possible with funding from the uk engineering and physical sciences research council under grant number ep / i017909/1 ( http://www.science.net ) .s. hettrick , m. antonioletti , l. carr , n. chue hong , s. crouch , d. de roure , i. emsley , c. goble , a. hay , d. inupakutika , m. jackson , a. nenadic , t. parkinson , m. i. parsons , a. pawlik , g. peru , a. proeme , j. robinson , and s. sufi , `` uk research software survey 2014 , '' dec . 2014 .[ online ] .available : http://dx.doi.org/10.5281/zenodo.14809 a. gualandris , s. portegies zwart , and a. tirado - ramos , `` performance analysis of direct n - body algorithms for astrophysical simulations on distributed systems . ''_ parallel computing _ , vol .33 , no . 3 , pp .159173 , 2007 .d. groen , s. portegies zwart , t. ishiyama , and j. makino , `` high performance gravitational n - body simulations on a planet - wide distributed supercomputer , '' _ computational science and discovery _, vol . 4 , no .015001 , jan .e. agullo , c. coti , t. herault , j. langou , s. peyronnet , a. rezmerita , f. cappello , and j. dongarra , `` qcg - ompi : \{mpi } applications on grids , '' _ future generation computer systems _ , vol .27 , no . 4 , pp . 357 369 , 2011 .[ online ] .available : http://www.sciencedirect.com/science/article/pii/s0167739x10002359 f. j. seinstra , j. maassen , r. v. van nieuwpoort , n. drost , t. van kessel , b. van werkhoven , j. urbani , c. jacobs , t. kielmann , and h. e. bal , `` , '' in _ _ , ser .computer communications and networks , m. cafaro and g. aloisio , eds .1em plus 0.5em minus 0.4emspringer london , 2011 , pp .167197 .m. ben belgacem , b. chopard , j. borgdorff , m. mamonski , k. rycerz , and d. harezlak , `` distributed multiscale computations using the mapper framework , '' _ procedia computer science _ ,18 , no . 0 , pp . 
1106 1115 , 2013 , 2013 international conference on computational science .[ online ] .available : http://www.sciencedirect.com/science/article/pii/s1877050913004195 j. borgdorff , m. ben belgacem , c. bona - casas , l. fazendeiro , d. groen , o. hoenen , a. mizeranschi , j. l. suter , d. coster , p. v. coveney , w. dubitzky , a. g. hoekstra , p. strand , and b. chopard , `` performance of distributed multiscale simulations , '' _ philosophical transactions of the royal society a : mathematical , physical and engineering sciences _ , vol .372 , no . 2021 , 2014 .g. hoekstra , a. , s. portegies zwart , m. bubak , and p. sloot , _ towards distributed petascale computing_.1em plus 0.5em minus 0.4empetascale computing : algorithms and applications , by david a. bader ( ed . ) .chapman & hall / crc computational science series 565pp .( isbn : 9781584889090 , isbn 10 : 1584889098 ) , 2008 .s. portegies zwart , t. ishiyama , d. groen , k. nitadori , j. makino , c. de laat , s. mcmillan , k. hiraki , s. harfst , and p. grosso , `` simulating the universe on an intercontinental grid , '' _ computer _ , vol .43 , pp . 6370 , 2010 .t. ishiyama , t. fukushige , and j. makino , `` greem : massively parallel treepm code for large cosmological n - body simulations , '' _ publications of the astronomical society of japan _ ,61 , no . 6 , pp .13191330 , 2009 .d. groen , s. rieder , and s. portegies zwart , `` high performance cosmological simulations on a grid of supercomputers , '' in _ proceedings of infocomp 2011_.1em plus 0.5em minus 0.4emthinkmind.org , sep . 2011 .t. ishiyama , s. rieder , j. makino , s. portegies zwart , d. groen , k. nitadori , c. de laat , s. mcmillan , k. hiraki , and s. harfst , `` the cosmogrid simulation : statistical properties of small dark matter halos , '' _ the astrophysical journal _ , vol .767 , no . 2 , p. 146[ online ] .available : http://stacks.iop.org/0004-637x/767/i=2/a=146 t. ishiyama , k. nitadori , and j. makino , `` 4.45 pflops astrophysical n - body simulation on k computer : the gravitational trillion - body problem , '' in _ proceedings of the international conference on high performance computing , networking , storage and analysis _sc 12.1em plus 0.5em minus 0.4emlos alamitos , ca , usa : ieee computer society press , 2012 , pp .5:15:10 .n. karonis , b. toonen , and i. foster , `` mpich - g2 : a grid - enabled implementation of the message passing interface , '' _ journal of parallel and distributed computing _ , vol .63 , no . 5 , pp . 551 563 , 2003 , special issue on computational grids . c. coti , t. herault , and f. cappello ,`` , '' in _ _ , ser .lecture notes in computer science , h. sips , d. epema , and h .- x .lin , eds.1em plus 0.5em minus 0.4emspringer berlin heidelberg , 2009 , vol .5704 , pp .466477 .s. manos , m. mazzeo , o. kenway , p. v. coveney , n. t. karonis , and b. toonen , `` distributed mpi cross - site run performance using mpig , '' in _ proceedings of the 17th international symposium on high performance distributed computing _hpdc 08.1em plus 0.5em minus 0.4em new york , ny , usa : acm , 2008 , pp .229230 .m. muller , m. hess , and e. gabriel , `` grid enabled mpi solutions for clusters , '' in _cluster computing and the grid , 2003 .ccgrid 2003 .3rd ieee / acm international symposium on_.1em plus 0.5em minus 0.4emieee , 2003 , pp .d. groen , s. rieder , p. grosso , c. de laat , and s. portegies zwart , `` a light - weight communication library for distributed computing , '' _ computational science and discovery _ , vol . 
3 , no .015002 , aug . 2010 .e. he , j. alimohideen , j. eliason , n. krishnaprasad , j. leigh , o. yu , and t. defanti , `` quanta : a toolkit for high performance data delivery over photonic networks , '' _ future generation computer systems _ , vol .19 , no . 6 , pp . 919 933 , 2003. t. j. hacker , b. d. athey , and b. noble , `` the end - to - end performance effects of parallel tcp sockets on a lossy wide - area network , '' in _ vehicle navigation and information systems conference , 1993 . ,proceedings of the ieee - iee _ , oct 1993 .j. bdorf , e. gaburov , m. s. fujii , k. nitadori , t. ishiyama , and s. portegies zwart , `` 24.77 pflops on a gravitational tree - code to simulate the milky way galaxy with 18600 gpus , '' in _ proceedings of the international conference for high performance computing , networking , storage and analysis _ , ser .sc 14.1em plus 0.5em minus 0.4em piscataway , nj , usa : ieee press , 2014 , pp .[ online ] .available : http://dx.doi.org/10.1109/sc.2014.10 n. drost , j. maassen , m. van meersbergen , h. bal , f. pelupessy , s. portegies zwart , m. kliphuis , h. dijkstra , and f. seinstra , `` high - performance distributed multi - model / multi - kernel simulations : a case - study in jungle computing , '' in _ parallel and distributed processing symposium workshops phd forum ( ipdpsw ) , 2012 ieee 26th international _ , may 2012 , pp .150162 .j. borgdorff , m. mamonski , b. bosak , k. kurowski , m. ben belgacem , b. chopard , d. groen , p. v. coveney , and a. g. hoekstra , `` distributed multiscale computing with \{muscle } 2 , the multiscale coupling library and environment , '' _ journal of computational science _, vol . 5 , no . 5 , pp .719 731 , 2014 .[ online ] .available : http://www.sciencedirect.com/science/article/pii/s1877750314000465 d. groen , j. borgdorff , c. bona - casas , j. hetherington , r. nash , s. zasada , i. saverchenko , m. mamonski , k. kurowski , m. bernabeu , a. hoekstra , and p. coveney , `` flexible composition and execution of high performance , high fidelity multiscale biomedical simulations , '' _ interface focus _, vol . 3 , no . 2 , p. 20120087 , 2013 .s. f. portegies zwart , s. l. w. mcmillan , a. van elteren , f. i. pelupessy , and n. de vries , `` multi - physics simulations using a hierarchical interchangeable software interface , '' _ computer physics communications _184 , no . 3 , pp .456 468 , 2013 .
we describe the political and technical complications encountered during the astronomical cosmogrid project . cosmogrid is a numerical study on the formation of large scale structure in the universe . the simulations are challenging due to the enormous dynamic range in spatial and temporal coordinates , as well as the enormous computer resources required . in cosmogrid we dealt with the computational requirements by connecting up to four supercomputers via an optical network and making them operate as a single machine . this was challenging , if only for the fact that the supercomputers of our choice were separated by half the planet : three of them are scattered across europe and the fourth is in tokyo . the co - scheduling of multiple computers and the gridification of the code enabled us to achieve an efficiency of up to for this distributed intercontinental supercomputer . in this work , we find that high - performance computing on a grid can be done much more effectively if the sites involved are willing to be flexible about their user policies , and that having facilities to provide such flexibility could be key to strengthening the position of the hpc community in an increasingly cloud - dominated computing landscape . given that smaller computer clusters owned by research groups or university departments usually have flexible user policies , we argue that it could be easier to instead realize distributed supercomputing by combining tens , hundreds or even thousands of these resources .
many scientists have contributed to our knowledge and understanding of the accelerating universe and the properties of the dark energy . in keeping with the request of the conference organizers, this paper only addresses my contributions to this field .the years 1997 and 1998 were exciting times for cosmology !i had been studying powerful classical double radio galaxies since the early 1990s , and had proposed a new method of using radio galaxies as a modified standard yardstick in 1994 .this work continued with princeton university phd thesis students eddie guerra , lin wan , and greg wellman , and some of this work included cosmological studies ( , , , , ) , and studies of outflows from the supermassive black holes that power the radio sources ( , , , , ) .the cosmological studies were done in the context of two cosmological world models : one that included non - relativistic matter , a cosmological constant , and space curvature , and another that included `` quintessence '' with constant equation of state , non - relativistic matter , and zero space curvature .later , radio galaxies were studied in the context of a cosmological model that included a rolling scalar field , non - relativistic matter , and zero space curvature , .the cosmological results eventually published by were presented on january 9 , 1998 at the aas meeting in washington , d. c. ; the sample of twenty radio galaxies studied is briefly mentioned in aas bulletin abstract 95.04 .these results indicated that a cosmological constant provided a good fit to the radio galaxy data . in late 1997 , steve maran from the aas press office invited me to prepare a press release on cosmological studies with radio galaxies to be presented on january 8 , 1998 , and i accepted this invitation .the press release explains how distant radio galaxies can be used to study the expansion history of the universe . fora given observed angular size of the radio source , a large intrinsic size meant that the coordinate distance to the source was large , and that the universe was accelerating in its expansion , or , in the words of the release , `` the expanding universe will continue to expand forever , and will expand more and more rapidly as time goes by . ''this is explained again later in the release where it states `` the universe will continue to expand forever and will expand at a faster and faster rate as time goes by . ''the press release session included supernova results presented by adam riess and saul perlmutter , and i had an opportunity to discuss my conclusions with adam and saul in detail .they were surprised to hear that a cosmological constant provided a good fit to the radio galaxy data , which implied the universe would expand at an ever increasing rate .each expressed a similar concern , this concern being the dependence of the result on the cosmological model or world view under consideration .the result that i was reporting on was obtained in the context of a cosmological model that allowed for non - relativistic matter , a cosmological constant , and space curvature .if different components were present in the universe , would the data still imply that the universe was accelerating ?in fact , addressing this concern was part of the motivation for developing a completely model - independent approach to the analysis and interpretation of radio galaxy and supernova data ( described below ) .two supernova groups showed that a cosmological constant provides a good description of the supernova data ( e.g. 
, ) .as time passed and more data were analyzed , it became clear that these two completely independent methods , friib radio galaxies and type ia supernovae , based on totally different types of sources and source physics , yield very similar results , , , , , , .this was important because it suggested that neither method was plagued by unknown systematic errors .one of the reasons the radio galaxy method provides interesting results with a relatively small number of sources is that many of the radio sources are at relatively high redshift .for example , the highest redshift source in either the radio galaxy or supernovae samples is the radio galaxy 3c 239 at a redshift of 1.79 , which has been included in the radio galaxy studies since 1998 .differences between predictions of various cosmological models become large at high redshift , so high redshift data points can have a strong impact on results .the methods of using type ia supernova and type iib radio galaxies for cosmological studies are empirically based .it could be empirically demonstrated that the methods worked well , but the underlying physical processes were not understood well enough to explain why the methods worked so well .this changed in 2002 for the radio galaxies , when the reason that the radio galaxy method works so well began to become clear .the radio galaxy method is applied to very powerful classical double radio galaxies , such as the radio source cygnus a ( 3c 405 ) .these friib radio galaxies are powered by very energetic , highly collimated outflows from regions very close to a supermassive black hole located at the center of a galaxy .when the collimated outflow impacts the ambient gas , a strong shock wave forms , and a shock front separates the radio emitting material from the ambient gas .the physics of strong shocks is fairly simple and straight - forward , and makes these systems ideal for cosmological studies .large - scale outflows from supermassive black holes are thought to be powered by the spin energy of the hole ( e.g. , , ) .when cast the radio galaxy method in the language of the blandford - znajek model to extract the spin energy from a rotating black hole , it became clear that the outflow from the hole occurs when the strength of the magnetic field near the hole reaches a maximum or limiting value .this value can be written as a function of the black hole mass , spin , and the radio galaxy model parameter , . 
when the radio galaxy model parameter has one particular value , , the relationship between the magnetic field strength and the properties of the rotating hole is greatly simplified , and the field strength depends only upon the black hole spin .empirical studies by and found that the value of is very close to 1.5 , .thus , the reason the radio galaxy model works so well is that the outflow from the supermassive black hole is triggered when the magnetic field strength reaches a maximum or limiting value that depends only upon the black hole spin .interestingly , other models , such as that by , have the same functional form as the blandford - znajek model but with a different constant of proportionality , and the results of apply to any model with the same functional form as the blandford - znajek model .from 1998 to 2002 the study of the acceleration of the universe was done in the context of particular cosmological world models , and the question of whether the acceleration of the universe could be studied independent of a particular cosmological model and independent of a theory of gravity captivated my interest . to address this question, i worked to develop an assumption - free , or model - independent , method of analyzing supernova , radio galaxy , or other data sets that provide coordinate distances .the method was proposed in 2002 , and , in collaboration with george djorgovski , was developed and applied to supernova and radio galaxy data sets , , , , . assuming only that the friedmann - lematre - robertson - walker ( flrw ) line element is valid , coordinate distance measurements can be used to obtain the expansion and acceleration rates of the universe as functions of redshift .coordinate distance measurements are easily obtained from luminosity distances or angular size distances to any type of source ( e.g. supernovae or radio galaxies ) .the flrw line element is the most general metric describing a homogeneous and isotropic four - dimensional space - time .these determinations of the expansion and acceleration rates of the universe are independent of a theory of gravity , and independent of the contents of the universe ( , ) .it was shown by that the zero redshift value of the dimensionless acceleration rate of the universe is independent of space curvature , and that very similar results are obtained for the dimensionless acceleration rate and the expansion rate of the universe for zero and reasonable non - zero values of space curvature .thus , the model - independent method can be applied without requiring that space curvature be set equal to zero .it was shown by , , , , and that the universe is accelerating today and was most likely decelerating in the recent past , and this result is independent of a theory of gravity , of the contents of the universe , and of whether space curvature is non - zero ( for reasonable non - zero values ) .recent determinations of and obtained using the model - independent method are compared with predictions in a standard lambda cold dark matter ( lcdm ) model in fig .1 ( the thin solid line shows the lcdm prediction ) .these results indicate that the lcdm model provides a good description of the data to a redshift of about one ( e.g. 
, , , , ) .the lcdm model assumes that general relativity ( gr ) is the correct theory of gravity , space curvature is equal to zero , and two components contribute to the current mass - energy density of the universe , a cosmological constant and non - relativistic matter with 70 % and 30 % , respectively , of the normalized mean mass - energy density of the universe at the current epoch . as discussed by , a comparison of model - independent determinations of and with predictions in the lcdm and other models provides a large - scale test of gr .current observations suggest that gr provides an accurate description of the data over look back times of about ten billion years .there is a hint of a deviation of the data from predictions in the lcdm model at redshifts of about one ( , ) .the model - independent approach can be extended to solve for the properties of the dark energy as a function of redshift , where the `` dark energy '' is the name given to whatever is causing the universe to accelerate .assuming that gr is valid on very large length scales , and that space curvature is zero , supernova and radio galaxy data can be used to solve for the pressure , energy density , equation of state , and potential and kinetic energy densities of the dark energy as functions of redshift ( , ) , as shown in fig .2 . results obtained using the model - independent approach can provide valuable information to theorists developing new ideas to explain the acceleration history of the universe and the properties of the dark energy .this is complementary to the commonly adopted approach of assuming a particular dark energy model and cosmological world model and solving for best fit model parameters ( e.g. , , , , , , ) . in studies of the properties of the dark energy ,the equation of state of the dark energy has surfaced as an important parameter . a cosmologicalconstant has an equation of state that is always equal to . to study the equation of state of the dark energy in a model - independent manner, defined a new model - independent function , called the dark energy indicator .the dark energy indicator provides a measure of deviations of the equation of state from as a function of redshift .current data suggest that a value of provides a good description of data at redshift less than 1 ( see fig .the radio galaxies described above are powered by large - scale outflows from the vicinity of supermassive black holes .studies of the properties of a radio galaxy allow the energy per unit time , known as the `` beam power , '' that is being channeled from the vicinity of the supermassive black hole to the large - scale outflow to be determined .studies of the beam power and other source properties provide important insights and information on these black hole systems ( e.g. , , ) .for example , the beam power can be combined with the radio galaxy model parameter to solve for the total energy that will be channeled away from the vicinity of the supermassive black hole over the full lifetime of the outflow ( e.g. 
, , ). the total energy of the outflow can be combined with the black hole mass to obtain a lower bound on the spin of the supermassive black hole, assuming only that the highly collimated outflow is powered by the spin energy of the supermassive black hole. this is one of the very few direct indications of the spin of supermassive black holes known at present. the ratio of the total outflow energy to the black hole mass appears to be constant for these black hole systems. this ratio provides an important diagnostic of the physical state of the black hole system at the time the outflow is generated, and the results indicate that each system is in a similar physical state when the outflow is triggered. thus, these studies provide insights into the physical conditions of supermassive black hole systems and their state at the time powerful outflows are generated.

[ fig. 1 caption: dimensionless coordinate distances (filled symbols indicate radio galaxies and open symbols indicate supernovae), for supernovae (middle left panel) and radio galaxies (bottom left panel), and for supernovae (top right panel) and radio galaxies (bottom right panel); from daly et al. (2008). ]

[ fig. 2 caption: dark energy pressure (top left panel), energy density (middle left panel), and equation of state (bottom left panel), and the dark energy indicator (right panel) as functions of redshift for a combined sample of supernovae and radio galaxies; from daly et al. (2008). ]

it is a pleasure to thank david cline for encouraging me to give this presentation, and my collaborators in these endeavors, especially george djorgovski, chris odea, preeti kharb, and stefi baum. this work was supported in part by u.s. nsf grants ast-0096077, ast-0206002, and ast-0507465.
the years 1998 to 2008 were very exciting years for cosmology. it was a pleasure to accept this invitation to describe my contributions to the development of our knowledge and understanding of the universe over the course of the past decade. here, i begin by describing some of my work on radio galaxies as a modified standard yardstick and go on to describe model-independent studies of the accelerating universe and the properties of the dark energy. during the course of these studies, i came upon interesting ways to study the spin and other properties of supermassive black holes, some of which are briefly mentioned.

address: department of physics, penn state university, berks campus, p.o. box 7009, reading, pa 19610
the welded pipelines subjected to high pressure and temperature are widely used in different branches of industry . under such conditions , the creep and damage effects should be taken into account for accurate assurance of long - term reliability .application of computational continuum creep damage mechanics ( see , for example , ) coupled with increasing power of computers can accomplish this task . in recent years the finite element method has become the widely accepted tool for the structural analysis in the creep range .a user defined creep material subroutine with appropriate constitutive and evolution equations can be developed and incorporated into the commercial finite element code to perform a numerical time step solution of creep and long term strength problems .on the other hand , in addition to more and more sophisticated numerical analysis , simplified models of creep response are required .these models should provide a better intuitive insight into the problem and give a quantitative description of the solution .the assessment of reliability of user - defined creep material subroutines and the choice of suitable numerical parameters like the element type , the mesh density , and time step control are complicated problems , particulary if studying creep of multi - material structures .therefore , it is important to have reference solutions of benchmark problems .such solutions should be obtained by use of alternative analytical or semi - analytical methods which do not require the spatial discretization techniques and allow for studying the behavior of stress and deformation gradients .the objective of this paper is to develop an alternative semi - analytical solutions to creep problems for multi - material pipe structures .particularly we address the analysis of stress gradients in the local zones of material connections . to obtain a semi - analytical solution we shall make the following simplifications .we assume the idealized material behavior having the secondary creep stage only . in this casethe steady state solution of creep in the pipe exists , for which the stresses do not depend on time .we assume that the difference between the material properties of constituents is not great .particularly the difference between the minimum creep rates for the same stress level should not exceed the value of 2 .the lifetime of a welded pipe under creep conditions is less than that of homogeneous one .the effect of reliability reduction is of big interest , therefore large numbers of model problems were proposed .the most commonly used approach simulates weldment as a region with non - uniformly distributed material properties ( , , , , , ) . within the framework of this approachit is often necessary to consider a number of parameter distribution cases .the amount of problems to be solved increases with the number of changing material parameters and parametric analysis becomes very complicated .drawing an analogy with some simple systems , for which an analytical solution is available ( , ) is useful for understanding how the parameter change can affect the solution .however this is not enough for a proper estimation of stress distribution .two main types of constitutive equations are often used in weld modelling , namely , norton s steady - state creep law and continuum damage mechanics equations for tertiary creep . below only the constitutive equations of norton s laware considered . 
even by this assumptionthe structural response may be captured very well .some special techniques are used to predict the failure life more precisely using steady - state solution ( , , , ) . as shown in , for the particular pipe welds investigated ,the steady - state analysis underestimates the failure time by about 20 - 40 percent , but predicts failure position quite well . to simplify the parametric analysis, we study a family of boundary - value problems for multi - component pipe creep depending on a small parameter . when , the problem is reduced to the case of homogeneous pipe creep .the corresponding solution for the steady - state stress distribution is well known ( , , ) and considered to be the basic solution . in order to get the common solution from this basic one, correction terms should be added .the equations for the correction terms are formally obtained as perturbations of a boundary - value problem with respect to .these equations formulate a problem of linear elasticity with respect to linear - elastic solid with anisotropic elastic properties .the utility of used technique is guaranteed especially because the theory of linear elasticity is in a very satisfactory state of completion ; every complicated case of parameter distribution can be treated in a routine manner as a combination of simple ones .correction terms are obtained numerically for some problems with the help of the ritz method . in some casesthe simplicity of geometry enables us to construct an approximate analytical solution .an exact expression for stress jumps at the interface is obtained .numerical solutions of the nonlinear problem are obtained with the help of ansys finite element code for comparison . throughout light - face letterswe denote scalars , the bald - face letters stand for tensors .the notation is used in vector - matrix form of constitutive equations to designate vectors and of stress and strain components .in this section we consider a two - material model only . it will be shown later that the solution for some multi - material models can be reduced to this case .the configuration analyzed is shown in fig .[ fig1 ] .[ m][][1][0]parent [ m][][1][0]material [ m][][1][0] [ m][][1][0]weldment [ m][][1][0] [ m][][1][0](a ) [ m][][1][0](b ) [ m][][1][0] [ m][][1][0] [ m][][1][0] [ m][][1][0] [ m][][1][0] [ m][][1][0] [ m][][1][0] [ m][][1][0] [ m][][1][0] [ m][][1][0] assume that , , describe the volume occupied by the solid , by the weld metal , and by the parent material , respectively .here \times[0,h) ] , .the basic equations of the problem are given below .equations of equilibrium where is the stress tensor . in the volumetric forces are ignored .strain - displacement relations where is the linearized strain tensor and is the displacement vector .the governing equations can be summarized as norton s creep law ( see , for example , ) where is the time derivative , is the stress deviator , is the von mises equivalent stress , is the second rank unit tensor , and , are material constants .note that norton s creep law is often written as .it means in our notation . 
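as a reference for the constitutive equations above, here is a minimal sketch of the tensorial form of norton's law; the odqvist form (creep strain rate proportional to the stress deviator times the von mises stress to the power n-1) is assumed, and the material constants a and n below are placeholder values rather than fitted data.

```python
import numpy as np

def norton_creep_rate(sigma, A=1.0e-20, n=5.0):
    """Creep strain-rate tensor for Norton's law in Odqvist (deviatoric) form.

    sigma : (3, 3) stress tensor
    A, n  : material constants (placeholder values, not fitted data)
    """
    s = sigma - np.trace(sigma) / 3.0 * np.eye(3)   # stress deviator
    sigma_vm = np.sqrt(1.5 * np.sum(s * s))         # von Mises equivalent stress
    return 1.5 * A * sigma_vm ** (n - 1.0) * s      # eps_dot = 3/2 * A * sigma_vm^(n-1) * s
```

any consistent unit system can be used; the deviator and the von mises stress follow the standard definitions.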
if we take into account that the problem is axisymmetric , we have in cylindrical coordinates the following equilibrium equations strain - displacement relations and boundary conditions finally we define the distribution of the parameter in norton s creep law as a piecewise constant function in the stress field , defined by , does not change if instead of we use , where is any nonzero constant .consequently , without loss of generality it can be assumed that is not necessary a small parameter . in practice , and might differ essentially . ] if we eliminate displacements from the strain - displacement relations , we obtain compatibility equations we consider the weak form of compatibility equations expressed by the equation of complementary virtual power principle here we use the brackets to enclose the argument of a linear operator ; is the strain rate defined by and ; ; is a virtual stress field that satisfy the equations of equilibrium and homogeneous boundary conditions in what follows we search function , such that suppose that , i.e. . the problem is reduced to a one - dimensional , and the solution of this problem is well known ( , , ) let be a small parameter .assume that here is the unknown derivative which must satisfy the equations of equilibrium and the homogeneous boundary conditions ; is the little - o landau symbol .from it follows that with , . substituting and in , we get since and as , we have an equation for we will analyze more closely the linear operator in the next section .in the previous section it was shown that the correction term can be found from the linear equation .it is clear that where let us introduce a vector notation as substituting in and differentiating we obtain here we have introduced the compliance matrix as follows \,.\ ] ] thus , the left - hand side of can be treated as an internal complementary virtual work with respect to linear elastic solid with constitutive law the eigenvalues of the compliance operator are the problem of steady - state creep is reduced to the elasticity problem for orthotropic , incompressible , and inhomogeneous solid .we now seek to convert the right - hand side of into a surface integral through gauss theorem .it can be proved that let us show that prescribes a jump of displacements at the interface ( see fig .[ fig2 ] ) .[ m][][1][0] [ m][][1][0] [ m][][1][0] [ m][][1][0] [ m][][1][0] [ m][][1][0] [ m][][1][0] [ m][][1][0] consider the principle of virtual complementary work for linear elastic solids and .one gets suppose latexmath:[\[\label{dispjump } u^+_z - u^-_z=0 , \quad u^+_r - u^-_r=3^{\frac{n-1}{2}}a_r that the solution of has the form we define a notation for jumps of field variables at the interface =v^+ - v^-.\ ] ] substituting for in , we get =-c / r^2 , \quad [ \varepsilon_\theta]=c / r^2 , \quad c=3^{\frac{n-1}{2}}a_r |a_r|^{n-1}\frac{1}{n^n}.\ ] ] if we combine this with , , and , we obtain =-\frac{\sqrt{3 } a_r |a_r|}{n^3 } r^{-4/n } , \quad [ \sigma^1_{\theta}]=\frac{\sqrt{3 } a_r |a_r|}{n^3 } r^{-4/n}.\ ] ] in this subsection we consider -material structure ( ) \times [ z_j , z_{j+1 } ] , \z_1=0 , \ z_{m+1}=h,\ ] ] there exist unique and such that ( fig .[ fig3 ] ) [ m][][1][0] [ m][][1][0] [ m][][1][0] [ m][][1][0] [ m][][1][0] [ m][][1][0] [ m][][1][0] [ m][][1][0] here is the heaviside function .if we use the small parameter method to solve this problem , then we get an approximation in the form ; is given by and is obtained from . 
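before turning to the correction terms, it may help to spell out the well-known unperturbed solution mentioned above, i.e. the steady-state creep stresses in a homogeneous thick-walled pipe under internal pressure. the sketch below assumes internal pressure p at the inner radius r_i, a traction-free outer surface at r_o, and norton exponent n; the numerical values in the usage lines are illustrative only.

```python
import numpy as np

def steady_creep_stresses(r, r_i, r_o, p, n):
    """Classical steady-state creep stresses in a homogeneous thick-walled
    pipe: internal pressure p at r_i, zero pressure at r_o, Norton exponent n."""
    D = (r_o / r_i) ** (2.0 / n) - 1.0
    sig_r = -p * ((r_o / r) ** (2.0 / n) - 1.0) / D                   # radial stress
    sig_t = p * ((2.0 / n - 1.0) * (r_o / r) ** (2.0 / n) + 1.0) / D  # hoop stress
    return sig_r, sig_t

# illustrative geometry (metres) and pressure (Pa)
r = np.linspace(0.05, 0.06, 50)
sig_r, sig_t = steady_creep_stresses(r, r_i=0.05, r_o=0.06, p=20.0e6, n=5.0)
```

the boundary conditions sig_r(r_i) = -p and sig_r(r_o) = 0 are satisfied by construction, which is a convenient check when comparing against finite element results later on.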
since is valid , it follows that the right - hand side of has the form owing to the fact that is linear , the perturbation is a linear combination of solutions of type .in this section we construct an analytical solution for the first perturbation term . consider a pair of stress functions , such that condition is satisfied these functions were constructed from compatibility equations , using the standard technique .this choice of stress functions simplifies the application of variational methods .we use the kantorovich method ( see , for example , ) to reduce the 2-d variational problem to 1-d variational problem .suppose that for , that define the solution of , the following static hypothesis is valid where are given kantorovich trial functions , such that we use the kantorovich method to obtain a system of differential equations and boundary conditions for .this method gives a projection of on the subspace , defined by .it is more convenient to solve in the form . for the sake of brevitywe describe only the application of the kantorovich method for equation . for equationthis procedure can be arranged in a similar manner .since and are fixed , the variation of gives let us rewrite , in terms of here and are defined by , , and .using gauss theorem and the fundamental lemma of the calculus of variation we obtain compatibility equations and boundary conditions here and are defined by , .after integration , system has the form elimination of from gives after integration in , , we obtain dealing with , in a similar fashion we obtain compatibility equations and boundary conditions for in . finally , taking into account , , we have eight boundary conditions =0 , \quad [ \frac{d \psi_2}{dz}]=0,\ ] ] =0,\ ] ] =\int^{r_o}_{r_i } \frac{c}{r^2 } \psi_1(r ) dr.\ ] ] solution is uniquely defined by these equations combined with two ordinary differential equations ( ode ) of fourth order ( one equation in and another in ) . in this sectionwe solve problem numerically with the help of the ritz method .let be a system of stresses , satisfying and .suppose then the unknown constants are defined from a system of linear algebraic equations we use the system , produced by means of .the complete system of is given by combinations of trigonometric functions in order to approximate the stress - jump at the interface it could be useful to consider additionally discontinuous functions here is the heaviside function .if we put , then the solution tends to the approximate analytical solution from the previous section .the specimen dimensions , applied loads , and material constants are given by , , , , , .first we compare the approximate analytical solution of ( section 5.1 ) with the numerical solution which was obtained using the ritz method ( section 5.2 ) .we used in to construct the analytical solution .values of constants are given in appendix .the numerical solution by the ritz method is performed witn in .we use to illustrate on fig .[ fig4 ] the distribution of stress components along the axis . 
along , the stress is negligibly small for both solutions; therefore we do not plot this component. the hypothesis imposes essential restrictions on the class of solutions; nevertheless, the value of the shear stress is captured very well. the error for the hoop stress and the radial stress is significant in the vicinity of the interface.

the approximate analytical solution gives only the smooth part of the stress field. this solution can be more useful in the case of smoothly changing material properties. we investigate the error of the perturbation method. numerical solutions of the system are obtained with the help of the ansys finite element code for a set of . the geometry of the half of the pipe was represented by 3200 axisymmetric plane183 finite elements (fig. [fig5]). we used a uniform mesh with 160 elements along the axial direction z and 20 elements along the radial direction r. this type of element can model creep behavior, but norton's constitutive law is available only as a secondary creep equation. that is, the total strain is a sum of elastic strain and creep strain . if the applied load remains the same with time, then we have and we obtain the steady-creep solution as . in the calculations we used for the young modulus and for the poisson ratio to simulate the elastic material. we consider the solution to be close enough to the asymptotic solution at the moment of time . the solution of the auxiliary problem is given by the ritz method (see section 5.2); here we used series with . the leading term in the asymptotic series is a good approximation even if the ``small'' parameter equals 0.5 (fig. [fig6]). with a subsequent increase of , the error grows dramatically.

application of the perturbation method to the steady-state creep problem was investigated. the good performance of this method in predicting the creep response was validated. the perturbation method allows one to reduce the initial nonlinear problem to a sequence of simpler ones. this technique is especially attractive if the unperturbed solution is given in closed form, as it was in this paper. another example is the creep response of the thick-walled homogeneous pipe under plane stress conditions.
such solution could be used for perturbation analysis of creep in open - ended pipes .the error of the perturbation method becomes substantial when the creep properties differ from one another by one order of magnitude .nevertheless we note that asymptotic expansion gives a good simplified model of structure response .this model treats changes in parameter distribution as jumps of displacements in linear elastic material . in that way, the solution for every complicated case of parameter distribution is represented as a combination of simple solutions .a.v . shutov is grateful for the support provided by the german academic exchange service .assume , , .after integration in , , we have the following values of constants the ode has four linearly independent solutions where is one of solutions of characteristic equation to be definite , we use h. altenbach , v. kushnevsky , k. naumenko . 2001 . on the use of solid- and shell - type finite elements in creep - damage predictions of thin - walled structures , archive of applied mechanics , vol .71 , 164 - 181 .browne , b. cane , j.d .parker and d. walters .creep failure analysis of butt welded tubes . in proceedings of the conference on creep and fracture of engeneering materials and structures , swansea , 645 - 649 .hayhurst . 2001a .computational continuum damage mechanics : its use in the prediction of creep in structures - past , present and future .iutam symposium on creep in structures , kluwer academic publishers , 175 - 188 .hayhurst , m.t .wong , f. vakili - tahami .2001b . the use of cdm analysis techniques in high temperature creep failure of welded structures , proceedings of creep 7 , japan society of mechanical engineering ( tsukuba , june 3 - 8 , 2001 ) , sa-10 - 6(106 ) 519 - 526 .hyde , a. yaghi , m. proctor .use of the reference stress method in estimating the life of pipe bends under creep conditions , international journal of pressure vessels and piping , vol .75 , 161 - 169 .nikitenko , v.a .design calculation of structural elements with vulnerability of the material in creep taken into account , strength of materials ( historical archive ) , vol .11(4 ) , 354 - 360 . i.j .perrin , d.r .continuum damage mechanic analyses of type iv crepp failure in ferritic steel crossweld specimens , international journal of pressure vessels and piping , vol .76 , 599 - 617 .roche , c.h.a .townley , v. regis , h. hubel .1992 . structural analysis and available knowledge , in : larson lh(ed . ) , high temperature structural design .mechanical engineering publications , london , 161 - 180 .
the stress analysis of pressurized circumferential pipe weldments under steady-state creep is considered. the creep response of the material is governed by norton's law. numerical and analytical solutions are obtained by means of a perturbation method; the unperturbed solution corresponds to the stress field in a homogeneous pipe. the correction terms are treated as stresses defined with the help of an auxiliary linear elastic problem. exact expressions for the jumps of the hoop and radial stresses at the interface are obtained. the proposed technique essentially simplifies the parametric analysis of multi-material components.

key words: creep, circumferential pipe weldments, stress analysis, parametric analysis, perturbation method

_ams subject classification_: 74g10, 74d10, 74g70, 74s05.
the recent proliferation of smartphones and tablets is seen as a key enabler for anywhere , anytime wireless communications .the rise of websites , such as facebook and youtube , significantly increases the frequency of users online activities . due to this continuously increasing demand for wireless access, a tremendous amount of data is circulating over today s wireless networks .this increase in demand is straining current cellular systems , thus requiring novel approaches for network design . in order to cope with this wireless capacity crunch ,device - to - device ( d2d ) communications underlaid on cellular systems has recently emerged as a promising technique that can significantly boost the performance of wireless networks . in d2d communication , user equipments ( ues )transmit data signals to each other over a direct link instead of through the wireless infrastructure , i.e. , the cellular network s evolved node bs ( enbs ) .the key idea is to allow direct d2d communications over the licensed band and under the control of the cellular system s operator .furthermore , as d2d communication often occurs over shorter distances , it is expected to yield higher data rates for the ues than infrastructure - based communications .d2d communication is regarded as a promising technology for improving the spectral utilization of wireless systems .recent studies have shown that the majority of the traffic in cellular systems consists of the download of content such as videos or mobile applications .if we can cut off the traffic of this part , a large amount of capacity can be freed out .usually , popular contents , such as certain youtube videos , are requested by much more frequently than others . as a result ,the enbs often end up serving different mobile users with the same contents for multiple times .as the enb has already sent the contents to mobile users , the contents are now locally accessible to other users in the same area , if cellular ues resource blocks ( rbs ) can be shared with others .upcoming users who are within the transmission distance can request the contents from those users through d2d communication . in this case , the enb will have to only serve those users who request new " content , which has never been downloaded before . through this d2d communication , we can reduce considerable redundant requests to the enb , so that the traffic burden of enb can be released .the main contribution of this paper is to propose a novel approach to d2d communications that allows to exploit the social network characteristics so as to improve the performance and reduce the load on the wireless cellular system . to achieve this goal , there are some key points we are going to discuss .the first of all is to establish a stable d2d subnetwork to maintain the data transmission successfully .a stable connection should guarantee that once the d2d communication is set up , the link should not be dropped easily . 
as a d2d subnetworkis formulated by individual users , the connectivity among users sometimes is intermittent .if the connection is too sensitive to users movement and easy to interrupt , it can neither offload the traffic of enb nor meet users satisfaction .it is difficult to employ such dynamic information to make reasonable decisions .however , the social relations in real world tend to be stable over time .such social ties can be utilized to achieve efficient data transmission in the d2d subnetwork .we name this social relation assisted data transmission network by offline social network ( offsn ) .second , we assess the amount of traffic that can be offloaded to d2d communication , i.e. , what probability the requested contents can be served locally . to analyze this problem, we study the probability that a certain content is selected .this probability is affected by two different aspects : the external ( external influence from media , friends , etc . ) , and internal ( user s own interests ) aspects . while users interests are hard to know , the external influence is more easier to estimate .the choices of the users are mutually dependent .consequently , the network operator ( e.g. , via the enb ) can generate an online social network ( onsn ) to keep track of users access to online websites , and maintain the distribution over offsn users selection .if we can estimate people s selection based on external influences , we can get the probability of contents being served locally . in this paper, we will adopt practical metrics to establish offsn , and involve the novel learning process - indian buffet process to model the influence in onsn .then we can get solutions for the previous two problems .latter we will integrate offsn and onsn to carry out the traffic offloading algorithm for the cellular network .our simulations also proved our analysis for the traffic offloading performance .consider a cellular network with one enb and users .the ues can receive signals from the enb through cellular network , or from other ues through d2d pairs using licensed spectrum resources . in the system ,two network layers exist over which information is disseminated .the first layer is onsn .the links of contents spread out on popular websites , users access the links to contents . hence , the onsn is the platform over which users acquire the links of contents .once a link is accessed , the data package of contents must be transmitted to the ues through the actual physical network .taking advantage of the social ties , offsn is the physical layer network for contents behind the links spread out .information dissemination in both onsn and offsn.,scaledwidth=28.0% ] an illustration of this proposed model is shown in fig .[ fig : onsn and offsn ] .each user active in the onsn corresponds to an ue in the offsn .users access the link of a content in an increasing order of their labels . in onsn, the link of a content is spread out according their popularity from frequent users to regular users .in particular , a group of users , which we refer to as , have a high online activity , and , thus , are the main source of influence and information dissemination . in this respect ,the choices of the , who access the onsn less frequently , are usually influenced by the frequent users . in the offsn , the first request of the contentis served by the enb .subsequent coming users can thus be served by previous users who hold the content , if they are within the d2d communication distance . 
in this section, we will study the properties of offsn and onsn , and quantify the relation between these two networks , before developing the proposed offloading algorithm . in the area covered by an enb , the distribution of the users can often be properly modeled .for instance , in public areas such as office buildings and commercial sites , the density of users is much higher than in other locations such as sideways and open fields .in addition , users are less likely to browse the web when they are walking or driving .indeed , the majority of the data transmissions occurs in those fixed places . in such high density locations , forming d2d network as offsn becomes a natural process .thus , we can distinguish two types of areas : highly dense areas such as office buildings , and white " areas such as open fields . in the former, we assume that d2d networks are formed based on users social relations . while in the latter , due to the low density , the users are served directly by the enb .d2d communications carry out in those subnetworks can effectively improve the cellular system throughput . if proper resource management is adopted , the interference among each subnetwork can be restricted .thus , each offsn can be mutually independent from others to some extent .optimizing the performance of each offsn can improve the overall performance of the cellular network .the offsn is a reflection of local users social ties .proper metrics need to be adopted to depict the degree of the connections among users in the offsn .for example , users who are within the d2d communication range and more regularly meet can be seen as more robust connection . dueto mobility , grouping users by using only their last known location , can lead to dropped connections .indeed , in public areas such as airports and train stations , most of the users have high mobility patterns and it is difficult to predict their future locations .however , if we define users connection degree in offsn according to their encounter history or daily routes , stable d2d connections can be formed .for example , officemates , classmates and family members always meet one another more regularly and frequently than others .thus , those social ties lead to higher probabilities to transmit data among users. if any two ues are within the d2d communication distance , the enb can detect and mark them as encountered .but the specific way to collect data is not our main focus in this paper .the contact duration distribution between two users is assumed to be a continuous distribution , which has a positive value for all real values greater than zero . for users who regularly meet ,their encounter duration usually centers around a mean value .so we can adopt a distribution to model the encounter duration between two users .this distribution is widely used in modeling the call durations and has been shown to have a high accuracy .to find the value for the two parameter and , we need to derive the mean and variance of the contact duration . 
as shown in fig .[ fig : encounter history ] , given the contact duration and the number of encounters between user and user , an estimate of the expected contact duration length : encounter history between user i and j.,scaledwidth=13.0% ] the variance represents the fluctuation in the contact period .if two cases have the same average contact period , the one with larger fluctuations would be less preferable since the contact length would be more uncertain .thus , we measure the variance of the contact period distribution to reflect the fluctuation using an irregularity metric defined by : given the mean and variance of the encounter period , we can derive the encounter duration distribution : .the probability density distribution ( pdf ) of encounter duration is : where .then , we can calculate the probability of the contact durations that are qualified for data transmission .if the contact duration is not enough to complete a data package transmission , the communication session can not be carried out successfully .we adopt a closeness metric , to represent the probability of establishing a successful communication period between two ues and , which ranges from to .the qualified contact duration is the complementary of the disqualified communication duration probability , so can be represented as : where is the minimal contact duration required to successfully transmit one content data package . is a random variable that depends on the channel conditions between the two ues ( e.g. , the higher the signal strength between the ues , the smaller ) .moreover , should depend on the content size . is the lower incomplete gamma function .larger closeness indicates a better future contact opportunity between user and user .an offsn can be seen as a d2d network that is constructed by a group of users with stable connections .hence , we can use the closeness metric to describe the communication probability between two users , which can also be seen as the weight of the link between user and user .then , a threshold can be defined to filter the boundary between different offsns and white " areas .if we have lower , more users will be added in , and the covered area of the offsn will increase . even though the probability that a user will be served locally will increase , it is likely that the group members who own the content are located far away . in this case , the prospective gain would not compensate the cost on associated power consumption for relaying and transmitting . also , offsns may overlap with each other , and cause inter - offsn interference which can further complicate the analysis . on the other hand ,if is too high , only a small number of users can be grouped which makes it difficult to perform d2d communication .therefore , one important problem is to find a proper that can balance the tradeoff between the cost and gain , and thus to get the best performance for the system .first , we define the number of users in onsn as which is corresponding to ues in offsn .the total number of available content in onsn is denoted by . given the large volume of data available online , we can assume that , . represents the set of contents that have viewing histories and is the set of contents that do not have any viewing history . in onsn, users select contents partially based on external influence .it is possible to predict current users selections by analysing the onsn activities of previous users within the same offsn . 
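before moving on to the online side, here is a brief sketch of how the closeness metric defined above might be evaluated from an encounter log. the gamma shape and scale parameters are obtained by moment matching from the recorded contact durations, and the threshold tau_min, which in the text depends on the channel conditions and the content size, is fixed to a toy constant here.

```python
import numpy as np
from scipy import stats

def closeness(durations, tau_min):
    """Closeness between two users: probability that a contact lasts at least
    tau_min, with the contact duration modelled as a Gamma random variable
    whose parameters are fitted by moment matching."""
    mean = np.mean(durations)
    var = np.var(durations)
    k = mean ** 2 / var            # shape parameter from the mean and variance
    theta = var / mean             # scale parameter
    # survival function = 1 - regularized lower incomplete gamma
    return stats.gamma.sf(tau_min, a=k, scale=theta)

# toy encounter log (seconds) and a fixed minimal transmission time
w_ij = closeness([40.0, 55.0, 60.0, 35.0, 70.0], tau_min=30.0)
```

a user pair with a large and stable contact duration yields a closeness near one, while irregular short contacts push it towards zero, which is exactly the filtering role the threshold epsilon plays when the offsn is formed.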
to draw the probability of current user s selection , we adopt the indian buffet process ( ibp ) which serves as a powerful analytical tool for predicting users behaviors . in an ibp, there are infinite dishes for customers to choose . the first customer will select its preferred dishes according to a distribution with parameter .since all dishes are new to this customer , no reference or external information exists so as to influence this customer s selection .however , once the first customer completed the selection , the following customers will have some prior information about those dishes based on the first customer s feedback .therefore , the decisions of subsequent customers are influenced by the previous customers feedback .customers learn from the previous selections to update their beliefs on the dishes and the probabilities with which they will choose the dishes .the behavior of content selection in onsn is analogous to the behavior of dish selection in an ibp .if we view onsn as an indian buffet , the online content as the infinite number of dishes , and the users as customers , we can interpret the contents spreading process online by an ibp .users enter onsn sequentially to download their desired content .when a user downloads its content , the recorded downloading times of contents changed .this action will affect the probability that this content to be requested .popular contents will be requested more frequently . while those contents that are only favored by a few number of people , or those new produced content will be requested less frequently .so the probability distribution can be implemented from the ibp directly .indian buffet process . ] in fig .[ fig : ibp ] , we show one realization of ibp .customers are labeled by ascending numbers in a sequence .the shaded block represent the user selected dish . in ibp, the first customer selects each dishes with equal probability of , and ends up with the number of dishes follows distribution . for subsequent customers , the probability of also having dish already belonging to previous customers is , where is the number of customers prior to with dish . repeating the same argument as the first customer, customer will also have new dishes not tasted by the previous customers follows a distribution .the probabilities of selecting certain dishes act as the prior information . for old " dishes which have been tasted before , . for new" dishes which have not been sampled before , .after user completes its selection , the prior will be updated to .this learning process is also illustrated in fig .[ fig : ibp ] . is the number of dishes that have not been sampled before user s selection session .we can see the selection probability for dish updated every time after each customer s selection .as the inter - offsn interference of d2d communication can be restricted by methods such as power control . for simplicity , we place an emphasis on the intra - offsn interference due to resource sharing between d2d and cellular communication . in the offsn that acts as the subnetwork of cellular network, the d2d transmissions and the enb transmissions will interfere . during the downlink period of d2d communication, ues will experience interference from other cellular and d2d communications as they share the same subchannels . 
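a minimal sketch of the indian buffet process just described, which the enb could use to maintain the content-request distribution; the parameter alpha and the number of users are toy values (the interference model continues below).

```python
import numpy as np

def indian_buffet(num_users, alpha, rng=np.random.default_rng(0)):
    """One realization of the Indian buffet process: row i marks the contents
    ('dishes') requested by user i."""
    counts = []        # m_k: how many previous users requested content k
    choices = []
    for i in range(1, num_users + 1):
        row = [rng.random() < m / i for m in counts]   # old content k taken w.p. m_k / i
        n_new = rng.poisson(alpha / i)                 # number of new contents ~ Poisson(alpha / i)
        row += [True] * n_new
        counts = [m + int(t) for m, t in zip(counts, row)] + [1] * n_new
        choices.append(row)
    return choices, counts

selections, popularity = indian_buffet(num_users=20, alpha=3.0)
```

after each user's session the counts m_k are updated, so the list "popularity" plays the role of the prior information the enb keeps for the next request.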
in this respect ,the received power of link to can be expressed as : where is the transmit power , is path loss exponent which ranges from , is the channel response of the link to , is the complex gaussian channel coefficient that follows the complex normal distribution .thus , we can define the transmission rate of users served by the enb and by d2d communication with co - channel interference : where , and are the transmit power of enb , d2d transmitter and , respectively , is the additive white gaussian noise ( awgn ) at the receivers , and represents the presence of interference from d2d to cellular communication , satisfying , otherwise . here, , so represents the interference from the other d2d pairs that share spectrum resources with pair .the transmission rate of the users that are only serviced by the enb and that do not experience co - channel interference in the offsn is given by : in the studied model , the enb maintains the contents distribution of each onsn .when the user starts to surf online , this user will sample the content based on the prior information . to this end , this user will access old content with probability , and access new content with a distribution .after content selection , is updated to the posterior probability .the total amount of content each user selected can be draw from a distribution in the proposed model , the users aim at maximizing their data rate while minimizing the cost . without loss of generality , we assume that every content has the same size . consequently , we propose the following utility function for user : ( r_d - c_t)+m_n^0 ( r_c- c_m),\ ] ] where is the cost related to the d2d transmission power in ( 6 ) . is the cost paid by the user to the enb for the data flow .the utility function consists of two parts .the first part is the utility of receiving old content via d2d communication .the second part is the utility of downloading new content from the enb . from the enb perspective, the goal is to maximize the overall data rate while also offloading as much traffic as possible . even though enb can offload traffic by d2d communication , controlling the switching over cellular and d2d communication causes extra data transmission . therethus exists some cost such as control signals transmission and information feedback during the access process .therefore , for the enb that is servicing a certain user , we propose the following utility function : +m_n^0 r_c - m_n c_c,\ ] ] where is the cost for controlling the resource allocation process .the total traffic enb offloaded by d2d communication is . *offsn generation * enb collects encounter information in cellular network locate the frequent users in high user density areas find the closeness between two users * 2 .user activity detection * * 3 . service based on onsn activities * we propose a novel algorithm form which consists of multiple stages . in the first stage ,the enb focuses on high user density areas , and collects the encounter history between users .the enb locates one frequent user and its neighboring users via well - known algorithms in order to compute their closeness . by checking if , i.e. , if user and user satisfy the predefined closeness threshold , the enb can decide on whether to add this user into the offsn or not . 
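the grouping stage of the algorithm sketched above might be realized as a simple greedy procedure; in the sketch below, closeness(u, v) stands for the metric of the previous section, epsilon is the predefined threshold, and the rule that a candidate may attach to any current member (rather than only to the frequent user) is an assumption.

```python
def build_offsn(frequent_user, neighbours, closeness, epsilon):
    """Greedy OFFSN construction around one frequent user: keep adding
    neighbours whose closeness to some current member reaches the threshold."""
    members = {frequent_user}
    added = True
    while added:
        added = False
        for u in neighbours:
            if u not in members and any(closeness(u, v) >= epsilon for v in members):
                members.add(u)
                added = True
    return members
```

lowering epsilon grows the group and raises the chance of local service, while a high epsilon keeps only the most reliable d2d links, which is the trade-off discussed above.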
by choosing a proper and power control ,the interference among different offsns can be avoided .this process will continue until no more user can be further added to the enb s list .then , the users in the established offsn can construct a communication session with only intra - offsn interference . for websites that provide a portal to access content , such as facebook and youtube , the enb will assign a special tag .once a user visits such tagged websites , the enb will inspect whether the user is located in an offsn or a white " area .if the user is in white " area , user s any requests will be served by the enb directly .if the user is located in an offsn , the enb will wait until the user requests contents . by serving previous user s requests ,the enb has already built up a history file including the prior information of the content distribution in the onsn .as soon as receiving user requests data , the enb detects if there are any resources in the offsn , and then choose to set up a d2d communication or not based on the feedback . for old content , the enb will send control signal to the ue with the highest closeness with user .then ue and ue establish the d2d communication .even if the d2d communication is setup successfully , the enb still wait until the process finishes successfully .if the d2d communication fails , the enb will revert back to serving the user directly . for new content, the enb serves the user directly .after the selection is complete , the prior information updates to the posterior probability .the proposed d2d communication algorithm is summarized in algorithm .in this section , we give the simulation results to show traffic offloading performances of the system from different aspects and provide numerical results of how different parameters affect the system s performance .consider active users than are randomly distributed in an offsn .the size of the content library is unbounded .we assume that the content selection process has been carried on for a period of times .so the enb can obtain the prior information of the content distribution . in our simulation, we have proved that , once the parameters are specified , the order in which the users perform their selection does not affect the performance of the system .the main physical layer parameters are listed as follows .the radius of an offsn is set up as m .noise spectral density is / hz . noise figure at device is .antenna gains and transmit power of enb is and . for device, it is and , respectively . the impact of the parameter on offloading traffic.,scaledwidth=27.5% ] in fig .[ fig : traffic and alpha ] , we investigate whether the user online activity degree will affect the amount of traffic that can be offloaded . as we can see from fig .[ fig : traffic and alpha ] , with the increase of parameter , the amount of traffic that is offloaded from the enb decreases , but traffic still can be released compared to the enb serving only system .the data rate is nearly when is small .this result is due to the fact that , when is low , users are more likely to choose old content . here, the requests are served by d2d communication in most of the cases , and cause nearly no traffic on the enb . as increases, the data rate begins increase . 
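the enb and d2d data rates examined in the simulations follow the sinr-based expressions given earlier; below is a toy sketch of how they might be evaluated for one cellular user and one d2d pair sharing a sub-channel (all transmit powers, distances, the path-loss exponent and the noise level are illustrative).

```python
import numpy as np

rng = np.random.default_rng(2)

def rx_power(p_tx, d, alpha=3.5):
    """Received power: transmit power, path loss d^-alpha and a complex
    Gaussian (Rayleigh) channel coefficient."""
    h = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    return p_tx * d ** (-alpha) * abs(h) ** 2

def rate(signal, interference, noise=1e-13):
    """Shannon rate per unit bandwidth under co-channel interference."""
    return np.log2(1.0 + signal / (interference + noise))

# one cellular user and one D2D pair sharing the same sub-channel (toy values)
s_cell = rx_power(p_tx=1.0, d=200.0)   # eNB -> cellular UE
i_cell = rx_power(p_tx=0.1, d=150.0)   # D2D transmitter -> cellular UE
s_d2d = rx_power(p_tx=0.1, d=20.0)     # D2D transmitter -> D2D receiver
i_d2d = rx_power(p_tx=1.0, d=180.0)    # eNB -> D2D receiver
r_c, r_d = rate(s_cell, i_cell), rate(s_d2d, i_d2d)
```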
indeed ,when the users tend to make more selections , they may choose more new content .the offloaded traffic amount generally decreases with the increase of online activities , which is coincidence with the common sense that , more contents downloading will cause more traffic to enb .the traffic includes not only the contents data , but also the control signals the enb needs to send for the d2d communication arrangement .the relationship between the offloading traffic and maximum d2d communication distance.,scaledwidth=27.5% ] as the d2d communication distance increases , the enb will have more possibilities for detecting available contents providers . as a result, the performance of traffic offloading will be better with larger maximum distance .this assumption is shown in fig .[ fig : traffic and distance ] . in this figure, we can see that , with setting to , increasing the maximum communication distance , yields a decrease in the enb s data rate and an increase in the amount of offloaded traffic . however , we note that , with the increase of the transmission distance , the associated ue costs ( e.g. , power consumption ) will also increase .thus , the increase of d2d communication distance will provides additional benefits to the enb , but not for users .average sum - rate at the enb , as the cost for control signaling varies.,scaledwidth=27.5% ] in fig .[ fig : traffic and enb cost ] , we show the variation of the sum - rate at the enb as the cost for control signal varies . as the enb has to arrange the inter change process between cellular and d2d communication , necessary control information are needed . moreover ,additional feedback signals are required for monitoring the d2d communication and checking its status .those costs will affect the traffic offloading performance of the system . in our simulation, we define the cost as the counteract to the gain in data rate from 5% to 50% . as we can see from fig .[ fig : traffic and enb cost ] , increasing the cost on control signal , the offload traffic amount is decreased .in this paper , we have proposed a novel approach for improving the performance of d2d communication underlaid over a cellular system , by exploiting the social ties and influence among individuals .we formed the offsn to divide the cellular network into several subnetworks for carrying out d2d communication with only intra - offsn interference .also we established the onsn to analyse the offsn users online activities . by modeling the influence among users on contents selection online by indian buffet process, we obtain the distribution of contents requests , and thus can get the probabilities of each contents to be requested . with the algorithm we proposed ,the traffic of enb has been released .simulation results have shown that different parameters for enb and users will lead to different traffic offloading performances .users with more online activities will increase and fluctuate the data rate of the enb .enable larger maximum d2d communication distance will release more traffic burden of the enb . while if the cost on d2d communication arrangement is high , the traffic offloading performance will decrease . c. xu , l. song , z. han , d. li , and b. jiao , resource allocation using a reverse iterative combinatorial auction for device - to - device underlay cellular networks , " _ ieee globe communication conference ( globecom ) _ , anaheim , ca , dec .3 - 7 , 2012 .n. golrezaei , a. f. molisch , and a. g. 
dimakis , base - station assisted device - to - device communication for high - throughput wireless video networks , " _ ieee international conference on communications ( icc ) _ , pp . 7077 - 7081 , ottawa , canada , jun .10 - 15 , 2012 .j. guo , f. liu , and z. zhu estimate the call duration distribution parameters in gsm system based on k - l divergence method , " _ ieee international conference on wireless communications , networking and mobile computing ( wicom ) _ , pp . 2988 - 2991 , shanghai , china , sep .21 - 25 , 2007 .f. li and j. wu , localcom : a community - based epidemic forwarding scheme in disruption - tolerant networks , " _ proc .ieee conference sensor , mesh and ad hoc communications and networks ( secon ) _ , pp .574 - 582 , rome , italy , jun .22 - 26 , 2009 .
device-to-device (d2d) communication is seen as a major technology to overcome the imminent wireless capacity crunch and to enable novel application services. in this paper, we propose a novel, social-aware approach for optimizing d2d communications by exploiting two network layers: the social network and the physical, wireless network. first, we form the physical-layer d2d network according to the users' encounter histories. subsequently, we propose a novel approach, based on the so-called indian buffet process, to model the distribution of contents in the users' online social networks. given the online and offline social relations collected by the evolved node b, we jointly optimize the traffic offloading process in d2d communication. simulation results show that the proposed approach successfully offloads traffic from the evolved node b.
in everyday experience , even when we do not realize it , we use our intuition to draw inferences . for examplewhen we hear doorbell we immediately ask ourselves `` who has come ? '' . on the way to the door we consider many different possibilities .maybe it is our friend , who said some days before that he visit us .maybe it is our neighbor , who came to say that we should lower the music or maybe somebody came to inform us that we have won in the lottery and so on and so on .all of these possibilities are more or less probable , with some outcomes which we can not figure out _ex ante_. but every bit of information influences our expectations and we stick on outcome which seems to be most probable ( subjectively ) to us .so we look through the window and see a car on the street , that looks like our friend s , we are almost sure that it is he in fact . or if we really play music very loudly we can expect that sooner or later somebody will be angry . when we do not expect somebody we know, we assume that it may be the postman or someone who got the wrong address .in fact we always choose the simplest case .it means that we never assume that superman is standing behind the door , even when there is a reason why he should visit us .this intuitive feeling to choose the simplest solution is called occam s razor and it states formally _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ `` accept the simplest explanation that fits the data '' . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ in our example the data means in fact what we already know .this example is very easy and our faculties manage very well to solve problems of this kind .however there is a vast variety of problems for which there are uncountable possibilities and such problems can be treated only in an approximated manner . on the other handthere are problems , which are too tedious to be solved by humans in reasonable time and we should employ to do job .so , can we enclose this intuitive knowledge in the form of mathematically defined theory and use it instead of mind ?the answer is yes , this exciting idea is embodied in the form of bayesian inference .below we introduce bayesian inference and show how it works in practice .we start from general considerations which lead us to the connection with thermodynamics .subsequently we show how to write down problems on computer with use of the monte - carlo approach ( the metropolis algorithm ) .we describe possible applications in the modern cosmology .there is a huge number of places in cosmology where bayesian inference can often be applied .we have residual observations and a lot of theories .some of them are easy , some are pure models , some are brilliant new ideas .this is as with our example with the door bell .we hear the bell and we must predict who is at the door . without any information we can not predict who is ringing , because the sound of the bell is always the same. 
however some people ring only once and some of them more times .therefore we must listen very carefully when something is ringing in cosmology .in the last section we try to put bayesian inference into a larger perspective : containing epistemological aspects of the method as well as suggested limitations .our goal it to present a way to choose among alternative theories taking into account their conformity with data .of course these theories can have different basis .they can be connected with everyday experience , data analysis , biology , physics etc .because we want to apply finally bayesian inference in physics we can restrict ourselves now , without loss of generality , to physical theories .so let us consider some unknown physical phenomenon and let us say we possess theories that can potentially describe it .all these theories differ from one another .information about phenomenon investigated is contained in the collection of experimental data . both theories and experimental data we may consider as elements of the same set space so .the set together with measure and -algebra build probability space .in such a well defined theory of probability , a natural concept of conditional probability occurs .so the probability of a given theory , when we have data , is defined as this probability tells us which theory describes experimental data better and is called posterior probability . on the other hand we can ask about probability of outcomes when theory is the true one this probability tells us about different predictions from the theory and is a marginal likelihood ,commonly called evidence . because we can combine equations ( [ posterior ] ) and ( [ evidence ] ) what give us this equation is the famous bayes theorem . the probability in this equation is called prior probability and it is in fact hard to describe this number .it describes our initial beliefs about a given theory .it is a human factor to choose this number and can be non - objective .sometimes one theory is chosen because of its mathematical beauty although a more probable alternative exists .the other factor is that some theory can well describe variety of others similar phenomena .however , if we do not have strong motivation to introduce some initial selection of the theories , then the most natural choice is to assume a homogeneous distribution of the prior probability then none of theories is favored .the probability is a simple normalization constant , what we can calculate thanks to the normalization condition which together with the bayes theorem ( [ bayes ] ) give us each of the theories contains a number of parameters described by the vector for a particular theory .the simple theories ( simple mathematically ) contain in general a small number of parameters .the main increase of number of parameters enlarges the complexity of theories .this complexity can be in some cases accepted due to intrinsic beauty of a mathematical structure of theory .nevertheless the theory which has one factor to explain a phenomenon is preferable over the theory which employ many factors for description of it .effective theories belong to the type of simple theories . 
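a minimal sketch of how the posterior probabilities of a set of theories follow from their evidences and priors through the bayes theorem and the normalization condition above; the log-evidence numbers in the usage line are purely illustrative.

```python
import numpy as np

def posterior_probabilities(log_evidences, priors=None):
    """Posterior probability of each theory given its log-evidence and prior;
    a homogeneous (uniform) prior is used when none is supplied."""
    log_evidences = np.asarray(log_evidences, dtype=float)
    if priors is None:
        priors = np.ones_like(log_evidences) / len(log_evidences)
    w = np.exp(log_evidences - log_evidences.max()) * priors   # shift for numerical stability
    return w / w.sum()

p = posterior_probabilities([-12.3, -10.1, -15.8])   # toy log-evidence values
```

the ratio of any two of these posteriors, for equal priors, is just the ratio of evidences, i.e. the bayes factor used below to grade the strength of the inference.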
parameters of the model are elements of the set space .the values of these parameters may be fixed or significantly bounded by a theory .but when no limits are put on these parameters ( there is no prior knowledge ) the evidence is calculated as the marginal probability integrated over all the allowed range of values of the parameters of the model .now we can go back to the bayes theorem and explain the idea of bayesian inference . considering the bayes theorems for two models and and dividing the respective equations ( [ bayes ] ) by sides we obtain which is called the bayes factor .if the priors for all are equal then the bayes factor reduces to the ratio of evidences ( ) .the values of can be interpreted as follows : if then inference is inconclusive , if we have weak , if we have moderate and if we have strong evidence in favor of a model indexed by over the model indexed by .so the main problem is now to calculate the evidence .the direct calculation is generally impossible .that is the reason is to use the monte carlo methods to do it .first we introduce the parameter and redefine evidence to the form where the is in fact likelihood and we denote it by .so and = \exp \int_0 ^ 1 d\lambda \langle \log l\rangle_{\lambda}. \label{eq12}\ ] ] now we can show a connection between our approach and thermodynamics . introducing obtain and equation ( [ eq11 ] ) takes a known form this is the energy of the system in the temperature . when we calculate it for different temperatures we can directly evaluate the integral in the expression ( [ eq12 ] ) and hence the evidence . as we see , to perform the bayesian inference we have to calculate the thermodynamical integral ( [ int ] ) .this kind of integrals can be solved analytically only in case of very simple systems .numerical methods to solve this kind of problems are known as the monte carlo .it is not the subject of this paper to describe how they work in detail .however to make this paper self - contained we add a short appendix a introducing basics of the monte carlo methods .we also present experimental demonstration of property of ergodicity which is important in the context of monte carlo simulation ( see appendix b ) .an interested reader can find more on monte carlo simulations e.g. in .now we have all theoretical equipment to show this approach in action . in this examplewe show how to perform the bayesian inference in a very simple case .we consider a very simple kind of theories and a small sample of data - points to make computer computation short .we also design it for clarity and better understanding .however generalizations to more advanced problems are straightforward . in the next sectionwe will mention how to apply bayesian methods to more complicated problems .let us consider some experiment in which we perform measurements of some physical variable for six different values of parameter . in the experiment we also measure standard error of the outcomes .in fact we one can repeat many times measurements of for a given value .then one can obtain the mean values of parameter together with its dispersion .these data points we present in table [ tab:1 ] ..in the table we collect the exemplary pairs together with the uncertainty of .the uncertainty can be the result of the instrumental resolution . 
[cols="^,^,^",options="header " , ] the phenomenon which we instigate is still not undetermined , but we possess three polynomial models to describe them .we list these models below models 1 and 3 look more simple because each is described by two parameters when model 2 contains three unknown parameters .the first step of bayesian inference is to fit these models to experimental data .we can use for example method of least squares .we obtain the important ingredient in the bayesian inference is a choice of priors .it is the choice of the intervals and probability distribution for parameters .the parameter intervals should be specified , because we must perform the integration ( look for the solution ) in a finite parameter space .standard errors of the parameters give us intervals necessary for bayesian inference .of course , the different choices of the parameter intervals can lead to the different values of the posterior probabilities . in our case , we choose the intervals as the - confidence interval for the parameters .namely , ] the inductive generalization , which has the simple pattern : extrapolation from particular data do general conclusions , suffers several problems called paradoxes of confirmation .goodman s paradox , know in the literature as problem of `` grue '' , is particularly interesting .especially the question of its counterpart in the field of cosmology . in a traditional version : * we have two hypotheses : ( 1 ) all emeralds are green and ( 2 ) all emeralds are grue ( green if examined until some time and blue otherwise ) . * evidence : _ found emerald is green _ confirms both : ( 1 ) and ( 2 ) .any satisfying resolutions to the paradox propose additional assumptions ; for example pointing out on `` green '' as a natural kind term instead of `` grue '' . in a search for possible cosmological version of the paradox we can compare for example two related models the cold dark matter cosmological model ( cdm model ) and the lambda cold dark matter cosmological model with the positive cosmological constant term ( lcdm model ) .the latter seems to be the simplest candidate for the dark energy description .the bayesian method of confirmation dedicated to select between these two models reveals a quite opposite verdict while used in the 90s and currently . using the sample of perlmutter et al . there is not enough information to distinguish these models .the extended sample with additional 42 high snia gives a weak evidence to favor the lcdm model over the cdm one .however , in our opinion it is a misunderstanding to treat this study case as a paradox in goodman s sense .it becomes obvious , because when new observational data confirm better the lcdm model in comparison with the cdm model , the latter simply disappears out the stage .the paradox of confirmation would occur when related to a certain family of models there will be the same degree of confirmation ( the same time and evidence ) assigned to hypotheses differing from each other for example with regard to foreseeable future scenarios of universe evolution .to illustrate this situation let us consider two hypotheses 1 .the universe decelerates .the universe decelerates until some time and accelerates afterward .the cdm model is valid with the first hypothesis and the lcdm model is in agreement with the second one . 
from the 60 sit was known that the universe is expanding with the decelerating rate .so we have a paradox here .however the evidences of accelerating universe due to snia data falsified the first model . andthe paradox is naturally solved .this example teaches us that paradoxes of goodman s type ( in the logic of induction ) are common in evolutionary sciences but they are not dangers because we hope that new evidences ( which appear due to science development ) we discriminate between two hypotheses . in goodman s paradoxthere is only one kind of evidence ; we need to draw an emerald and check its color . in the case of cosmological hypotheseswe are not left with only one evidence .a new evidence appears and resolves the paradox in favor of one of the hypothesis .it comes from new observations .we know that this evidence will appear eventually because we , scientists look for it .the reason that there is no paradox after a new evidence appears , is that one hypotheses is falsified ( the cdm model ) and only one hypothesis ( the lcdm model ) becomes in agreement with this new evidence .it is often said that a scientific theoretical research means achieving two specific goals : ( 1 ) finding a model which approximates a phenomenon best and ( 2 ) constructing a hypothesis that offers best prediction .it is a good example to show how in this context two criteria of model selection are being compared : the akaike information criterion ( aic ) and bayesian information criterion ( bic ) . although these model comparison methods are put together as competitors , they in fact try to ask different questions .the aic estimates predictive power of an elaborated hypothesis , while the bic goodness - of - fitting .m. forster and e. sober have explained this nuance with respect to the fitting problem : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ even though a hypothesis with more adjustable parameters would fit the data better , scientists seem to be willing to sacrifice goodness - of - fit if there is a compensating gain in simplicity .( ) + since we assume that observation is subject to error , it is overwhelmingly probable that the data we obtain will not fall exactly on that true curve .( ) since the data points do not fall exactly on the true curve , such a best - fitting curve will be _false_. 
if we think of the true curve as the `signal' and the deviation from the true curve generated by errors of observation as `noise', then fitting the data perfectly involves confusing the noise with the signal. it is overwhelmingly probable that any curve that fits the data perfectly is false. _

the general comments of this section can be summed up by the statement that bayesian inference is a method dedicated to specific goals in scientific practice. with respect to cosmology, the lcdm cdm model comparison mentioned above reveals, in the bayesian inference context, another problem. it concerns the currently changing concept of the model in physics. at present there is a special emphasis placed on the effectiveness and mediating function of models in physics. this status of scientific models is determined by the way they are designed: they are not simply derived from the underlying theory, nor fixed by the evidence only. their ``nature'' is determined by a mediating role (between a theory and phenomena). morrison states:

_ although they are designed for a specific purpose these models have an autonomous role to play in supplying information, information that goes beyond what we are able to derive from the data / theory combination alone.
_

in cosmology built on general relativity, the solutions of the einstein equations can be treated as geometrical models of the universe. the construction of a model starts from assuming specific idealizations (symmetries, etc.). in practice this means that we reduce the degrees of freedom (all apart from the gravitational ones are neglected). for example, the assumption of spatial homogeneity means that the einstein equations, which constitute a system of non-linear partial differential equations, reduce to a system of ordinary differential equations in the cosmological time. it is said that such formulations of scientific laws are certain approximations of the investigated phenomena. there has recently been quite an important and interesting discussion about the validity of applying bayesian inference to idealization itself. the problem concerns idealized hypotheses and the question of assigning probability to them, since they can be treated as counterfactuals. what is the posterior probability of the ideal gas law or of the law of motion for a simple pendulum? jones showed that the solution lies exactly in the understanding of the procedure of elaborating a model. if we treat the model idealizations not as a result of abstraction but as a distortion, the methodological consequences may be problematic for bayesian inference:

_ given that most scientific hypotheses are idealized in some way, bayesianism seems to entail that most scientific hypotheses can not be confirmed. + bayesians thus confront an apparent trilemma: either develop a coherent proposal for how to assign prior probabilities to counterfactuals; or embrace the counterintuitive result that idealized hypotheses can not be confirmed; or reject bayesianism. _

the general bayesian conception of empirical evidence can be put into three main statements / consequences: * less probable evidence delivers the best confirmation to a hypothesis; * evidence better confirms those hypotheses in the context of which it is more probable.
*if the hypothesis probability is very little , it can be confirmed only by very strong evidence .in this paper we have presented basics of bayesian inference and showed how to use it in practice . we have introduced some mathematical background and formulated a problem in the similarity with thermodynamics . as a case studywe choose three simple models .then the known monte carlo methods and metropolis algorithm were used to select the best model in the light of data .the general remark which can be derived from these considerations is to be careful in evaluation of the models in the light of the data and in using the complementary indicators .the bayesian methods started to be popular due to new discoveries in cosmology at the beginning of xxi century .we presented the areas of cosmology where bayesian inference has been applied , namely problem of dark energy , dark matter , and testing quantum effects by astronomical data .subsequently we have studied epistemological aspects of the bayesian confirmation theory in the context of problems of modern cosmology where the bayesian approach offers not only the estimation of model parameters from the observational data but also methods of the comparison of models ( selection ) .we have demonstrated that the bayesian inference is based on some assumptions of philosophical character .the philosophical issues of inference in context of cosmological models on the example of models without and with the dark energy component ( the cosmological constant ) are discussed .we pointed out that goodman s famous paradox does not appear in the cosmology reconstructed using bayesian methodology .the reason for this we are looking for new evidences which falsify one hypothesis such that only one hypothesis becomes in agreement with observational data .note that the bayesian framework enable us to test and select between competing hypotheses so one can construct the ranking of cosmological models explaining acceleration of the current universe .therefore , we obtain the best model favored by data .in this appendix we show how to compute thermodynamical integral ( [ int ] ) with use of the monte carlo simulations .our short introduction to this subject is based partially on this made in .let us consider state of the system labeled by and corresponding energy .our task is to compute integral where integration is performed over all available states . since in numerical computationswe always discretize the system , integration is replaced by the summation .our task now is to write a program which generates states from the canonical ensemble given with the probability .the crucial observation is that we do not have to generate all possible states to calculate ( [ intapp ] ) .the main contribution to their value comes from the equilibrium states .therefore the idea is to find these equilibrium states and average over them .starting from some arbitrary initial state we create a sequence of states finally finding ensemble of equilibrium states. then one can calculate in order to find equilibrium states the markov chain method can be applied .we consider sequence of transitions with probability .moreover we assume practical realization of the above conditions is given by _metropolis algorithm_. namely it states : * take initial state . *make some move to neighboring state . *if , accept the change . 
*if \Delta E > 0, accept the change conditionally with the probability \exp[-\Delta E / T].

[figure omitted in extraction: four panels, source files 1l01.eps, 1l1.eps, 1l10.eps, 1l100.eps]

a very important question related to monte carlo simulations is the ``ergodicity'' of the algorithm. it means that in a finite number of steps (finite time) the system must be able to come arbitrarily close to any point in the phase space. this prevents the system from being trapped in a subset of states. in monte carlo simulations it ensures that we can always find the proper energy minimum, even for very low temperatures when fluctuations are small. to check this we ran markov chains in the low temperature regime. in such a system, when the algorithm is not ergodic, a markov chain can not always reach the proper minimum. in fig. [ erg ] we show markov chains in the parameter space for the third model considered in sec. [ simpleexample ]. starting from different points in the parameter space, the system always goes to the same region where the proper minimum is located. this is a visual indication of ergodicity for the kind of function considered here. it is possible that this does not hold for more complicated functions.
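the following python sketch (ours, with a made-up one-parameter toy likelihood and a flat prior box standing in for the models of the main text) puts together the two ingredients described above: the metropolis rule of appendix a is used to sample the tempered posterior at a fixed value of the coupling parameter, and the thermodynamic integral of eq. ([eq12]) is then evaluated with the trapezoid rule to obtain the log-evidence.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_mean_loglike(log_like, bounds, lam, n_steps=20000, step=0.05):
    """average of log L over the tempered posterior p_lam(theta) ~ L(theta)^lam on a flat prior box."""
    lo, hi = np.asarray(bounds, dtype=float).T
    theta = lo + (hi - lo) * rng.random(len(lo))          # arbitrary initial state
    cur = log_like(theta)
    trace = []
    for _ in range(n_steps):
        prop = theta + step * (hi - lo) * rng.standard_normal(len(lo))
        if np.all(prop >= lo) and np.all(prop <= hi):     # flat prior: reject moves outside the box
            new = log_like(prop)
            # metropolis rule: accept with probability min(1, exp(lam * (new - cur)))
            if np.log(rng.random()) < lam * (new - cur):
                theta, cur = prop, new
        trace.append(cur)
    return float(np.mean(trace[n_steps // 2:]))           # discard the first half as burn-in

def log_evidence(log_like, bounds, n_lambda=11):
    """log E = integral_0^1 <log L>_lambda d lambda, evaluated with the trapezoid rule."""
    lambdas = np.linspace(0.0, 1.0, n_lambda)
    means = np.array([metropolis_mean_loglike(log_like, bounds, lam) for lam in lambdas])
    return float(np.sum(0.5 * (means[1:] + means[:-1]) * np.diff(lambdas)))

# toy usage: one-parameter model y = a * x with gaussian errors and made-up data
# (the gaussian normalization constant of the likelihood is omitted for brevity)
x = np.array([1.0, 2.0, 3.0]); y = np.array([2.1, 3.9, 6.2]); sigma = 0.3
log_like = lambda th: -0.5 * np.sum(((y - th[0] * x) / sigma) ** 2)
print(log_evidence(log_like, bounds=[(0.0, 4.0)]))
```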
we discuss epistemological and methodological aspects of the bayesian approach in astrophysics and cosmology. an introduction to the bayesian framework is given as a basis for the subsequent discussion of bayesian inference in physics. the interplay between modern cosmology, bayesian statistics, and the philosophy of science is presented. we consider paradoxes of confirmation, such as goodman's paradox, that appear in the bayesian theory of confirmation. as goodman's paradox shows, bayesian inference is susceptible to some epistemic limitations in the logic of induction. however, goodman's paradox applied to cosmological hypotheses seems to be resolved owing to the evolutionary character of cosmology and the accumulation of new empirical evidence. we argue that the bayesian framework is useful in the context of the falsifiability of quantum cosmological models, as well as for the contemporary dark energy and dark matter problems.
to explain our intended approach to integrating high and low level planning , we introduce the high level motion plan random _ trajectory _ variable that is governed by the distribution and conditioned on symbolic data .we treat the high level plan as a random variable because of the following : the high level planner must be able to accommodate local disturbances returned by the low level motion planner . in turn, high level motion plans must be able to adjust to online goal changes ; these high level changes must then trickle down to low level behavior .conceptually then , the high level plan and the low level plan are _ coupled _ variables ; if either is restricted to a single hypothesis ( as is typical in conventional approaches to hierarchical planning ) , then the high and low level plans are unable to influence each other .similarly , we represent the low level motion plan with a random trajectory variable that is governed by the joint distribution over the platform and environmental agents , where ] .thus , converges to in the situation of figure [ fig : gotog ] . recovering the ros navigation stack with this approach is trivial : at each time step , sample local paths , and weight each sample according to first factor encodes global compatibility , while the second factor encodes kinematic feasibility .choose the sample with the highest weight as the inputs to the actuators .the probabilistic formulation allows us to approach the dwa ros navigation stack in a more general manner : in the ros navigation stack , sampling from and then weighting amounts to straightforward importance sampling .however , the distribution can be approximately inferred using a host of methods : markov chain monte carlo , laplace approximation , hybrid monte carlo , etc any approximate inference technique is at our disposal .in contrast , the ros navigation stack ( http://wiki.ros.org/base_local_planner ) , does not pose the high level to low level path planning problem as a probability distribution , so it is not immediately clear how to employ approximate inference techniques to find more accurate solutions in a more efficient manner . [ [ single - global - operator - instruction - multiple - static - and - dynamic - obstacles ] ] single global operator instruction , multiple static and dynamic obstacles ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ in figure [ fig : gotog_multiple ] we introduce the notion of multiple global plans , each of which have nontrivial value .in particular , global plans have values in the static map of .the global plan distribution thus takes the form in figure [ fig : gotog_multiple_dynamic ] , we introduce a local crowd disturbance in the bottom right of the map .we assume that the crowd enters into the robot s field of view near the center corridor ; thus , the robot has to make a planning decision according to when the crowd is not in the robot s field of view , , and the low level planner stays close to the optimal global plan .however , when , it is no longer obvious which global plan to follow . with our probabilistic approach , which global path to followis determined by balancing the capabilities of the low level planner in the crowd ( effectively , how much probability is in near the global plan ) against how much more efficient is than ( or , how compares to ) .heuristically , one can think of the distribution as having three modes , each ( roughly ) centered around the global plans . 
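a minimal python sketch of the sampling-and-weighting step just described is given below. the unicycle rollout, the exponential ``global compatibility'' score and the crude ``kinematic feasibility'' penalty are our own illustrative stand-ins for the two factors in the weight, not the actual ros navigation stack code; the waypoint in the usage line is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def rollout(state, v, w, horizon=1.5, dt=0.1):
    """forward-simulate a unicycle for `horizon` seconds; returns the end position (x, y)."""
    x, y, th = state
    for _ in range(int(horizon / dt)):
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        th += w * dt
    return np.array([x, y])

def choose_command(state, waypoint, v_max=1.0, w_max=1.5, n_samples=300):
    """sample local motions, weight by global compatibility x kinematic feasibility, keep the best."""
    best, best_weight = None, -np.inf
    for _ in range(n_samples):
        v = rng.uniform(0.0, v_max)            # proposal draw over admissible velocities
        w = rng.uniform(-w_max, w_max)
        end = rollout(state, v, w)
        global_compat = np.exp(-np.linalg.norm(end - waypoint))   # closeness to the active global plan
        feasibility = np.exp(-0.1 * abs(w) / max(v, 1e-3))        # crude smoothness / kinematic penalty
        weight = global_compat * feasibility
        if weight > best_weight:
            best, best_weight = (v, w), weight
    return best

# toy usage: robot at the origin facing +x, next global-plan waypoint at (1.5, 1.0)
print(choose_command(state=(0.0, 0.0, 0.0), waypoint=np.array([1.5, 1.0])))
```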
the relative probability mass in each mode ends up determining the map value of the distribution .thus , the global plan s fitness represented by balanced against the challenge of the local situation , which is represented by . now, suppose that the operator has provided a global goal ( and thus high level plans are generated ) , but intervenes via a joystick at random times according to and , as in figure [ fig : intervene ] ( the difference between this scenario and the scenario in figure [ fig : gotog_multiple_dynamic ] is the presence of the joystick data ) .now , not only do we have to balance global considerations ( the weights of the global plans ) against local disturbances , but also the online desires of the operator .in particular , the robot will move through the environment in the same manner as in figure [ fig : gotog_multiple_dynamic ] , until the operator intervenes with the joystick at . at this point ,the global plan distribution will become , and thus influence local decision making by `` pulling '' towards the more peaked regions of are able to simultaneously represent high level operator desires with online refinements .our full joint distribution now becomes .importantly , in the absence of a global goal , the formulation reduces to .this is the case of fully assistive shared control , where the absence of a global map or corrupted localization data renders the global goal meaningless . in this case, the global plan is revealed incrementally via local user input data .this capability becomes important when , for instance , the robot enters a crowd , and standard localization techniques start to fail at this point the robot must `` share awareness '' with the operator by inferring global destinations from local operator input data .while the success of previous experiments and the utility of the markov random field factorization lend credence to our model above , we point out that results from formal methods ( and thus provably correct constructions ) can guide how we model our joint distribution ( courtesy of ufuk topcu ) . to see how, we refer to figure [ fig : pgm - to - hier ] , where we have plotted the state of the art formal methods decision stack next to its corresponding graphical model decomposition .note that the results from formal methods suggests that a `` tactical variable '' , which we call , is used to mediate information between the high level and the low level ( we assume that some form of tactical data , informs the governing distribution ) .this graphical model in turn represents the factorization one of our ongoing research objectives is to fully understand how formal methods can guide our probabilistic decompositions while the probabilistic approach is well suited to capture dependencies between variables and flexible enough to capture the vagaries of human behavior ( or online manipulation of robot objectives ) , balancing tractability and fidelity in the factorization of the joint distribution is more of an anecdotal art than a science .results from formal analysis , however , can provide guidance on our decomposition and potentially insight into the form of our `` cooperation functions '' and that link the mission , tactical , and trajectory levels .furthermore , it is not immediately clear how to relate data coming in at various levels to planning level variables ( e.g. 
, high level symbolic data clearly relates to the high level plan variable; however, the introduction of other planning levels necessitates understanding how lower level data, such as joystick commands, should be interpreted in the form of lower level planning variables). the hierarchical decision stack illustrated in figure [ fig : pgm - to - hier ] was tied to a specific application, and so is not in general the correct hierarchical decomposition. however, the approach of finding provably correct hierarchical decompositions for arbitrary scenarios, and then reading off the corresponding graphical model (and thus probabilistic decomposition), is fully general; we depict this approach in figure [ fig : arbitrary - stacks ]. in combination with human-learning and symbolic planner approaches (which guide how we model and adapt distributions at specific levels of the planning stack), our approach has the potential to be flexible enough to accommodate a wide variety of online manipulations of global robot objectives while maintaining the rigor of formal analysis.
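to make the idea of reading a probabilistic decomposition off a hierarchical decision stack concrete, the toy python sketch below enumerates a hypothetical discrete mission - tactical - trajectory chain with made-up conditional probability tables and returns the map assignment; it is only meant to illustrate the factorization, not any particular application.

```python
import itertools

# hypothetical discrete values for each planning level
missions = ["goal_A", "goal_B"]
tactics = ["corridor", "open_area"]
trajectories = ["left", "straight", "right"]

# made-up conditional probability tables read off the mission -> tactical -> trajectory chain
p_mission = {"goal_A": 0.7, "goal_B": 0.3}
p_tactic = {("corridor", "goal_A"): 0.8, ("open_area", "goal_A"): 0.2,
            ("corridor", "goal_B"): 0.4, ("open_area", "goal_B"): 0.6}
p_traj = {("left", "corridor"): 0.1, ("straight", "corridor"): 0.8, ("right", "corridor"): 0.1,
          ("left", "open_area"): 0.4, ("straight", "open_area"): 0.2, ("right", "open_area"): 0.4}

def joint(m, tac, traj):
    # p(m, tac, traj) = p(m) * p(tac | m) * p(traj | tac)
    return p_mission[m] * p_tactic[(tac, m)] * p_traj[(traj, tac)]

# map assignment over the whole stack, obtained by brute-force enumeration
best = max(itertools.product(missions, tactics, trajectories), key=lambda z: joint(*z))
print(best, joint(*best))   # -> ('goal_A', 'corridor', 'straight') 0.448
```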
we present a possible method for integrating high level and low level planning. to do so, we introduce the global plan random _trajectory_ variable, taking values in $\mathbb{r}^2$, measured by goals and governed by the distribution. this distribution is combined with the low level robot - crowd planner (from) in the joint distribution. we explore this _integrated planning_ formulation in three case studies, and in the process find that this formulation 1) generalizes the ros navigation stack in a practically useful way, 2) arbitrates between high and low level decision making in a statistically sound manner when unanticipated local disturbances arise, and 3) enables the integration of an onboard operator providing real time input at either the global (e.g., waypoint designation) or local (e.g., joystick) level. importantly, the integrated planning formulation highlights failure modes of the ros navigation stack (and thus of standard hierarchical planning architectures); these failure modes are resolved by the integrated formulation. finally, we conclude with a discussion of how results from formal methods can guide our factorization of the joint distribution.
networked storage systems have gained prominence in recent years . these include various genres , including decentralized peer - to - peer storage systems , as well as dedicated infrastructure based data - centers and storage area networks .because of storage node failures , or user attrition in a peer - to - peer system , redundancy is essential in networked storage systems .this redundancy can be achieved using either replication , or ( erasure ) coding techniques , or a mix of the two .erasure codes require an object to be split into parts , and mapped into encoded fragments , such that any encoded fragments are adequate to reconstruct the original object .such coding techniques play a prominent role in providing storage efficient redundancy , and are particularly effective for storing large data objects and for archival and data back - up applications ( for example , cleversafe , wuala ) .redundancy is lost over time because of various reasons such as node failures or attrition , and mechanisms to maintain redundancy are essential .it was observed in that while erasure codes are efficient in terms of storage overhead , maintenance of lost redundancy entail relatively huge overheads .a naive approach to replace a single missing fragment will require that encoded fragments are first fetched in order to create the original object , from which the missing fragment is recreated and replenished .this essentially means , for every lost fragment , -fold more network traffic is incurred when applying such a naive strategy .several engineering solutions can partly mitigate the high maintenance overheads .one approach is to use a ` hybrid ' strategy , where a full replica of the object is additionally maintained .this ensures that the amount of network traffic equals the amount of lost data .a spate of recent works argue that the hybrid strategy adds storage inefficiency and system complexity .another possibility is to apply lazy maintenance , whereby maintenance is delayed in order to amortize the maintenance of several missing fragments .lazy strategies additionally avoid maintenance due to temporary failures .procrastinating repairs however may lead to a situation where the system becomes vulnerable , and thus may require a much larger amount of redundancy to start with .furthermore , the maintenance operations may lead to spikes in network resource usage .it is worth highlighting at this juncture that erasure codes had originally been designed in order to make communication robust , such that loss of some packets over a communication channel may be tolerated .network storage has thus benefitted from the research done in coding over communication channels by using erasure codes as black boxes that provide efficient distribution and reconstruction of the stored objects .networked storage however involves different challenges but also opportunities not addressed by classical erasure codes .recently , there has thus been a renewed interest in designing codes that are optimized to deal with the vagaries of networked storage , particularly focusing on the maintenance issue . in a volatile network where nodes may fail , or come online and go offline frequently, new nodes must be provided with fragments of the stored data to compensate for the departure of nodes from the system , and replenish the level of redundancy ( in order to tolerate further faults in future ) . 
in this paper, we propose a new family of codes called _ self - repairing codes _( src ) , which are tailored to fit well typical networked storage environments . in ,dimakis et al .propose regenerating codes ( rgc ) by exposing the need of being able to reconstruct an erased encoded block from a smaller amount of data than would be needed to first reconstruct the whole object .they however do not address the problem of building new codes that would solve the issue , but instead use classical erasure codes as a black box over a network which implements random linear network coding and propose leveraging the properties of network coding to improve the maintenance of the stored data .network information flow based analysis shows the possibility to replace missing fragment using network traffic equalling the volume of lost data .unfortunately , it is possible to achieve this optimal limit only by communicating with all the remaining blocks .consequently , to the best of our knowledge , regenerating codes literature generally does not discuss how it compares with engineering solutions like lazy repair , which amortizes the repair cost by initiating repairs only when several fragments are lost .furthermore , for rgcs to work , even sub - optimally , it is essential to communicate with at least other nodes to reconstruct any missing fragment .thus , while the volume of data - transfer for maintenance is lowered , rgcs are expected to have higher protocol overheads , implementation and computational complexity .for instance , it is noted in that a randomized linear coding based realization of rgcs takes an order of magnitude more computation time than standard erasure codes for both encoding and decoding .the work of improves on the original rgc papers in that instead of arguing the existence of regenerating codes via deterministic network coding algorithms , they provide explicit network code constructions . in ,the authors make the simple observation that encoding two bits into three by xoring the two information bits has the property that any two encoded bits can be used to recover the third one .they then propose an iterative construction where , starting from small erasure codes , a bigger code , called hierarchical code ( hc ) , is built by xoring subblocks made by erasure codes or combinations of them .thus a subset of encoded blocks is typically enough to regenerate a missing one .however , the size of this subset can vary , from the minimal to the maximal number of encoded subblocks , determined by not only the number of lost blocks , but also the specific lost blocks .so given some lost encoded blocks , this strategy may need an arbitrary number of other encoded blocks to repair . 
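the xor observation behind hierarchical codes can be stated in a few lines of python; the sketch below (ours) simply checks that any two of the three encoded bits recover the third.

```python
def encode(b1, b2):
    return (b1, b2, b1 ^ b2)      # the third encoded bit is the xor of the two information bits

c = encode(1, 0)                  # -> (1, 0, 1)
assert c[0] == c[1] ^ c[2]        # recover bit 0 from bits 1 and 2
assert c[1] == c[0] ^ c[2]        # recover bit 1 from bits 0 and 2
assert c[2] == c[0] ^ c[1]        # recover bit 2 from bits 0 and 1
```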
while motivated by the same problem as rgcs and hcs , that of efficient maintenance of lost redundancy in coding based distributed storage systems , the approach of self - repairing codes ( src ) tries to do so at a somewhat different point of the design space .we try to minimize the number of nodes necessary to reduce the reconstruction of a missing block , which automatically translates into lower bandwidth consumption , but also lower computational complexity of maintenance , as well as the possibility for faster and parallel replenishment of lost redundancy .we define the _ concept of self - repairing codes _ as codes designed to suit networked storage systems , that encode fragments of an object into encoded fragments to be stored at nodes , with the properties that : + ( a ) _ encoded fragments can be repaired directly from other subsets of encoded fragments without having to reconstruct first the original data_. + more precisely , based on the analogy with the error correction capability of erasure codes , which is of any losses independently of which losses , + ( b ) _ a fragment can be repaired from a fixed number of encoded fragments , the number depending only on how many encoded blocks are missing and independent of which specific blocks are missing ._ to do so , srcs naturally require more redundancy than erasure codes .we will see more precisely later on that there is a tradeoff between the repair ability and this extra redundancy .consequently , srcs can recreate the whole object with fragments , though unlike for erasure codes , these are not arbitrary fragments , though many such combinations can be found ( see section [ sec : static ] for more details ) .note that even for traditional erasure codes , the property ( a ) may coincidentally be satisfied , but in absence of a systematic mechanism this serendipity can not be leveraged . in that respect , hcs be viewed as a way to do so , and are thus the closest example of construction we have found in the literature , though they do not give any guarantee on the number of blocks needed to repair given the number of losses , i.e. , property ( b ) is not satisfied , and has no deterministic guarantee for achieving property ( a ) either .we may say that in spirit , src is closest to hierarchical codes - at a very high level , src design features mitigate the drawbacks of hcs . in this work , we make the following _ contributions _ : + ( i ) we propose a new family of codes , self - repairing codes ( src ) , designed specifically as an alternative to erasure codes ( ec ) for providing redundancy in networked storage systems , which allow repair of individual encoded blocks using only few other encoded blocks .like ecs , srcs also allow recovery of the whole object using encoded fragments , but unlike in ecs , these are not any arbitrary fragments. 
however , numerous specific suitable combinations exist .+ ( ii ) we provide a deterministic code construction called _ homomorphic self - repairing code _ ( hsrc ) , showcasing that src codes can indeed be realized .+ ( iii ) hsrc self - repair operations are computationally efficient .it is done by xoring encoded blocks , each of them containing information about all fragments of the object , though the encoding itself is done through polynomial evaluation , not by xoring .+ ( iv ) we show that for equivalent static resilience , marginally more storage is needed than traditional erasure codes to achieve self - repairing property .+ ( v ) the need of few blocks to reconstruct a lost block naturally translates to low overall bandwidth consumption for repair operations .srcs allow for both eager as well as lazy repair strategies for equivalent overall bandwidth consumption for a wide range of practical system parameter choices .they also outperform lazy repair with the use of traditional erasure codes for many practical parameter choices .+ ( vi ) we show that by allowing parallel and independent repair of different encoded blocks , srcs facilitate fast replenishment of lost redundancy , allowing a much quicker system recovery from a vulnerable state than is possible with traditional codes .since this work aims at designing specifically tailored codes for networked storage systems , we first briefly recall the mechanisms behind erasure codes design . in what follows , we denote by the finite field with elements , and by the finite field without the zero element . if , an element can be represented by an -dimensional vector where , , coming from fixing a basis , namely where forms a -basis of , and is a root of an irreducible monic polynomial of degree over .the finite field is nothing else than the two bits 0 and 1 , with addition and multiplication modulo 2 .a linear erasure code over a -ary alphabet is formally a linear map which maps a -dimensional vector to an -dimensional vector .the set of codewords , , forms the code ( or codebook ) .the third parameter refers to the minimum distance of the code : where the hamming distance counts the number of positions at which the coefficients of and differ .the minimum distance describes how many erasures can be tolerated , which is known to be at most , achieved by maximum distance separable ( mds ) codes .mds codes thus allow to recover any codeword out of coefficients .let be an object of size bits , that is , and let be a positive integer such that divides .we can write which requires the use of a code over , that maps to an -dimensional binary vector , or equivalently , an -dimensional vector since the work of reed and solomon , it is known that linear coding can be done via polynomial evaluation . in short , take an object of size , with each in , and create the polynomial .\ ] ] now evaluate in elements , to get the codeword [ ex : exrs23 ] suppose the object has 4 bits , and we want to make fragments : , .we use a reed - solomon code over , to store the file in 3 nodes . 
recall that where .thus we can alternatively represent each fragment as : , .the encoding is done by first mapping the two fragments into a polynomial $ ] : and then evaluating into the three non - zero elements of , to get a codeword of length 3 : where , , , so that each node gets two bits to store : at node 1 , at node 2 , at node 3 .encoding linearly data as explained in section [ sec : lincod ] can be done with arbitrary polynomials .we now first describe a particular class of polynomials that will play a key role in the construction of homomorphic codes , a class of self - repairing codes presented in subsection [ subsec : src ] .since we work over finite fields that contains , recall that all operations are done in characteristic 2 , that is , modulo 2 .let , for some .then we have that and consequently recall the definition of a linearized polynomial .a _ linearized polynomial _ over , , has the form we now define a _ weakly linearized polynomial _ as a _ weakly linearized polynomial _ over , , has the form we will see below why we chose this name .we use the notation since later on it will indeed correspond to the number of data symbols that can be encoded with the proposed scheme .we start with a useful property of such polynomials .[ lem : sr ] let and let be a weakly linearized polynomial given by .we have note that if we evaluate in an element , we get , using ( [ eq : frob ] ) , that we can strengthen the above lemma by considering instead a polynomial over , , of the form : where , ( makes a linearized polynomial ) .we now get : let and let be the polynomial given by , , .we have if we evaluate in , we get again by ( [ eq : frob ] ) , and using the property that for .we now mimic the way encoding works for reed - solomon codes ( see subsection [ subsec : encodpoly ] ) for weakly linearized polynomials . note that neither the encoding nor the decoding process described below are actual efficient algorithms .implementations of these processes is a separate issue to be dealt with . 1 .take an object of length , with a positive integer that divides .decompose into fragments of length : 2 .take a linearized polynomial with coefficients in and encode the fragments as coefficients , namely take , .3 . evaluate in non - zero values of to get a -dimensional codeword and each is given to node for storage .in particular , we need 1 . given linearly independent fragments , the node that wants to reconstruct the file computes linear combinations of the fragments , which gives points in which is evaluated .lagrange interpolation guarantees that it is enough to have points ( which we have since for ) to reconstruct uniquely the polynomial and thus the data file .this requires a codeword constructed with the above procedure is of the form , where each coefficient is in and .we will denote by the maximum value that can take , namely .we know that contains a -basis with linearly independent elements .if , the , , can be expressed as -linear combinations of the basis elements , and we have from lemma [ lem : sr ] that in words , that means that an encoded fragment can be obtained as a linear combination of other encoded fragments . in terms of computational complexity , this further implies that the cost of a block reconstruction is that of some xors ( one in the most favorable case , when two terms are enough to reconstruct a block , up to in the worst case ) . on the other hand ,if are contained in , then the code has no self - repairing property . 
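the python sketch below (our own illustration, with an arbitrary choice of irreducible polynomial and of fragment values) spells out the construction for q = 2, m = 4 and k = 3: the fragments are the coefficients of the weakly linearized polynomial, encoded blocks are its evaluations at nonzero field elements, and the additivity of lemma [ lem : sr ] is what makes a lost block the xor of two suitably chosen stored blocks.

```python
def gf16_mul(a, b, modulus=0b10011):
    """multiply in GF(2^4) represented on 4 bits, reduced modulo x^4 + x + 1 (illustrative choice)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= modulus
    return r

def encode_block(coeffs, x):
    """evaluate p(X) = sum_i coeffs[i] * X^(2^i) at x; field addition is xor."""
    result, power = 0, x
    for c in coeffs:
        result ^= gf16_mul(c, power)
        power = gf16_mul(power, power)        # x -> x^2 -> x^4 ...
    return result

coeffs = [0b0110, 0b1011, 0b0001]             # three illustrative 4-bit fragments

# additivity (the lemma above): p(a + b) = p(a) + p(b) for all field elements
assert all(encode_block(coeffs, a ^ b) == encode_block(coeffs, a) ^ encode_block(coeffs, b)
           for a in range(16) for b in range(16))

# self-repair: the block stored at x = a + b is the xor of the blocks stored at a and b,
# e.g. the node holding p(0b0011) can be rebuilt from the nodes holding p(0b0001) and p(0b0010)
assert encode_block(coeffs, 0b0011) == encode_block(coeffs, 0b0001) ^ encode_block(coeffs, 0b0010)
```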
for any choice of a positive integer that divides , we work in the finite field . to do explicit computations in this finite field , it is convenient to use the generator of the multiplicative group , that we will denote by .a generator has the property that , and there is no smaller positive power of for which this is true .[ ex : complete]take a data file of bits , and choose fragments .we have that , which satisfies ( [ eq : boundk ] ) , that is .the file is cut into 3 fragments , , .let be a generator of the multiplicative group of , such that .the polynomial used for the encoding is the -dimensional codeword is obtained by evaluating in elements of , by ( [ eq : boundn ] ) . for , if we evaluate in , , then the 4 encoded fragments are linearly independent and there is no self - repair possible .now for , and say , , we get : note that suppose node 5 which stores goes offline .a new comer can get by asking for and , since table [ tab : enumerate ] shows other examples of missing fragments and which pairs can reconstruct them , depending on if 1 , 2 , or 3 fragments are missing at the same time ..ways of reconstructing missing fragment(s ) in example [ ex : complete ] [ cols="^,^ " , ] a potential schedule to download the available blocks at different nodes to recreate the missing fragments is as follows : in first time slot , , , , nothing , , and are downloaded separately by seven nodes trying to recreate each of respectively . in second timeslot , , , , , and are downloaded .note that , besides , all the other missing blocks can now already be recreated . in third time slot , can be downloaded to recreate it .thus , in this example , six out of the seven missing blocks could be recreated within the time taken to download two fragments , while the last block could be recreated in the next time round , subject to the constraints that any node could download or upload only one block in unit time .even if a full copy of the object ( hybrid strategy ) were to be maintained in the system , with which to replenish the seven missing blocks , it would have taken seven time units . 
while , if no full copy was maintained , using traditional erasure codes would have taken at least nine time units .this example demonstrates that src allows for fast reconstruction of missing blocks .orchestration of such distributed reconstruction to fully utilize this potential in itself poses interesting algorithmic and systems research challenges which we intend to pursue as part of future work .we propose a new family of codes , called self - repairing codes , which are designed by taking into account specifically the characteristics of distributed networked storage systems .self - repairing codes achieve excellent properties in terms of maintenance of lost redundancy in the storage system , most importantly : ( i ) low - bandwidth consumption for repairs ( with flexible / somewhat independent choice of whether an eager or lazy repair strategy is employed ) , ( ii ) parallel and independent ( thus very fast ) replenishment of lost redundancy .when compared to erasure codes , the self - repairing property is achieved by marginally compromising on static resilience for same storage overhead , or conversely , utilizing marginally more storage space to achieve equivalent static resilience .this paper provides the theoretical foundations for srcs , and shows its potential benefits for distributed storage .there are several algorithmic and systems research challenges in harnessing srcs in distributed storage systems , e.g. , design of efficient decoding algorithms , or placement of encoded fragments to leverage on network topology to carry out parallel repairs , which are part of our ongoing and future work .99 r. bhagwan , k. tati , y. cheng , s. savage , g. voelker , `` total recall : system support for automated availability management '' , _ networked systems design and implementation ( nsdi ) _ , 2004 .a. g. dimakis , p. brighten godfrey , m. j. wainwright , k. ramchandran , `` the benefits of network coding for peer - to - peer storage systems '' , _workshop on network coding , theory , and applications ( netcod ) _ , 2007 .a. g. dimakis , p. brighten godfrey , y. wu , m. o. wainwright , k. ramchandran , `` network coding for distributed storage systems '' , available online at_ http://arxiv.org / abs/0803.0632_. a. datta , k. aberer , `` internet - scale storage systems under churn a study of the steady - state using markov models '' , _ peer - to - peer computing ( p2p ) _ , 2006 .a. duminuco , e. biersack , `` hierarchical codes : how to make erasure codes attractive for peer - to - peer storage systems '' , _ peer - to - peer computing ( p2p ) _ , 2008 .a. duminuco , e.w .biersack , `` a practical study of regenerating codes for peer - to - peer backup systems '' , _ intl .conference on distributed computing systems ( icdcs ) _ , 2009 .x. liu , a. datta , `` redundancy maintenance and garbage collection strategies in peer - to - peer storage systems '' , _ intl .symposium on stabilization , safety , and security of distributed systems ( sss ) _ 2009 .d. grolimund , `` wuala - a distributed file system '' , google tech talk _k. v. rashmi , n. b. shah , p. v. kumar and k. ramchandran , `` explicit construction of optimal exact regenerating codes for distributed storage '' , _ allerton conf . on control , computing and comm .i. s. reed and g. solomon , `` polynomial codes over certain finite fields '' , _ journal of the society for industrial and appl . mathematics _ , no 2 , vol . 8 , siam , 1960 .r. rodrigues and b. liskov , `` high availability in dhts : erasure coding vs. 
replication '' , _ workshop on peer - to - peer systems ( iptps ) _ 2005 .
erasure codes provide a storage efficient alternative to replication based redundancy in ( networked ) storage systems . they however entail high communication overhead for maintenance , when some of the encoded fragments are lost and need to be replenished . such overheads arise from the fundamental need to recreate ( or keep separately ) first a copy of the whole object before any individual encoded fragment can be generated and replenished . there has been recently intense interest to explore alternatives , most prominent ones being regenerating codes ( rgc ) and hierarchical codes ( hc ) . we propose as an alternative a new family of codes to improve the maintenance process , which we call _ self - repairing codes _ ( src ) , with the following salient features : ( a ) encoded fragments can be repaired directly from other subsets of encoded fragments without having to reconstruct first the original data , ensuring that ( b ) a fragment is repaired from a fixed number of encoded fragments , the number depending only on how many encoded blocks are missing and independent of which specific blocks are missing . these properties allow for not only low communication overhead to recreate a missing fragment , but also independent reconstruction of different missing fragments in parallel , possibly in different parts of the network . the fundamental difference between srcs and hcs is that different encoded fragments in hcs do not have symmetric roles ( equal importance ) . consequently the number of fragments required to replenish a specific fragment in hcs depends on which specific fragments are missing , and not solely on how many . likewise , object reconstruction may need different number of fragments depending on which fragments are missing . rgcs apply network coding over erasure codes , and provide network information flow based limits on the minimal maintenance overheads . rgcs need to communicate with at least other nodes to recreate any fragment , and the minimal overhead is achieved if only one fragment is missing , and information is downloaded from all the other nodes . we analyze the _ static resilience _ of srcs with respect to traditional erasure codes , and observe that srcs incur marginally larger storage overhead in order to achieve the aforementioned properties . the salient src properties naturally translate to _ low communication overheads _ for reconstruction of lost fragments , and allow reconstruction with lower latency by facilitating _ repairs in parallel_. these desirable properties make self - repairing codes a good and practical candidate for networked distributed storage systems . * keywords : * coding , networked storage , self - repair
weak gravitational lensing of the microwave background anisotropies offers a unique opportunity to study the dark matter and energy distribution at intermediate redshifts and large scales .in addition to producing modifications in the cmb temperature and polarization power spectra , lensing of the cmb fields produces higher - order correlations between the multipole moments .quadratic combinations of the cmb fields can be used to form estimators of the projected gravitational potential , and therefore of the projected mass .the minimum variance quadratic estimator can in principle map the projected mass on large angular scales out to multipole moments of and contains nearly all of the information in the higher moments of the lensed temperature field . substantially more information lies in the lensed polarization fields allowing high signal - to - noise lensing reconstruction and extending the angular resolution out to .lensing reconstruction techniques involving the polarization fields have previously only been developed for small surveys where the sky can be taken to be approximately flat . since lensing is intrinsically most sensitive to the projected potential at or several degrees on the sky , a treatment incorporating the curvature of the sky is desirable .in fact it is necessary for its application in removing the lensing contaminant to gravitational wave polarization across large regions of the sky .we present a concise treatment of the effect of gravitational lensing on cmb temperature and polarization harmonics in sect .[ sect : cmblensingmultipoles ] .we construct the full sky quadratic estimators of the lensing potential and compare their noise properties to that for the flat sky expressions in sect .[ sect : quadraticestimators ] .we provide an efficient algorithm for the construction of all estimators in sect .[ sect : angularspaceestimators ] .we summarize some useful properties of spin - weighted functions in appendix [ appendix : spinsfunctions ] .finally , we derive the flat sky limits of the estimators and draw the connection to results in in sect . [appendix : flatsky ] .in this section , we give a pedagogical but concise derivation of the lensing effect on the cmb temperature and polarization fields on the sphere .we emphasize the connections between the formalism using spin - weighted spherical harmonics and a tensorial approach which will be useful for the lensing reconstruction in the following sections .the temperature perturbation is characterized by a scalar function , whose harmonic transform is given by the polarization anisotropy of the microwave background is characterized by a traceless , symmetric rank 2 tensor , which can be represented as ( e.g. ) where we have defined the complex stokes parameters according to the spin projection vectors are given with respect to the measurement basis by , \label{eqn : vm}\\ { \bar{{\ensuremath{\bm{m}}}}}&=&\frac{1}{\sqrt{2}}\left [ { \ensuremath{\hat{\bm{e}}}}_1 - i { \ensuremath{\hat{\bm{e}}}}_2\right ] , \label{eqn : vp}\end{aligned}\ ] ] and form an eigenbasis under local rotations of basis vectors ( see appendix [ appendix : spinsfunctions ] ) . in spherical polar coordinates , and . 
under a local , right - handed rotation of the basis by an angle , the complex stokes parameters acquire a phase .they act as spin-2 functions , with a corresponding harmonic transform in terms of spin - weighted spherical harmonics given by a lens with a projected potential maps the temperature and polarization anisotropies according to where tildes denote the unlensed fields . in the case of a weak gravitational field under consideration , lensing potential is obtained by a line - of - sight projection of the gravitational potential , where is the conformal time , is the epoch of last scattering and is the angular diameter distance in comoving coordinates . taking the harmonic transform of eqn .( [ eqn : tlensmap ] ) , one readily shows that the the change to the temperature moments are given by with denoting the integral the integral can be performed analytically using the relation ^{1/2 } { \left ( \begin{array}{ccc}l_1&l_2&l_3\\ -s_1&-s_2&-s_3\end{array } \right ) } { \left ( \begin{array}{ccc}l_1&l_2&l_3\\ m_1&m_2&m_3\end{array } \right ) } \label{eqn : threejtosylm}\ ] ] to yield with the definition \sqrt{\frac{(2l+1)(2l+1)(2l'+1)}{16\pi}}{\left ( \begin{array}{ccc}l&l&l'\\ \pm s&0&\mp s\end{array } \right ) } .\label{eqn:2fdefinition}\ ] ] the multipole expansion for the polarization fields proceeds by noting that are the spin components of the polarization tensor .since the contraction with the spin projection vectors projects out the spin 2 piece of a symmetric tensor , the change in the complex stokes parameters is given by & = & { { \ensuremath{\bm{m}}}}^i{{\ensuremath{\bm{m}}}}^j \delta{\bm{\mathcal p}}_{ij}\nonumber \\& \approx & { { \ensuremath{\bm{m}}}}^i{{\ensuremath{\bm{m}}}}^j\left [ \nabla_k \tilde { \bm{\mathcal p}}_{ij}({\ensuremath{\hat{\bm{n}}}})\right ] \left [ \nabla^k\phi({\ensuremath{\hat{\bm{n}}}})\right ] .\label{eqn : lensingcontribution}\end{aligned}\ ] ] the expression for the contribution to is obtained by replacing by in the above .we denote the product using a spin - gradient derivative ( see eq .[ eqn : gradientop ] ) , and write the lensing contribution as \approx d^i\phi({\ensuremath{\hat{\bm{n } } } } ) d_i[{}_{\pm 2}\tilde{a}({\ensuremath{\hat{\bm{n } } } } ) ] .\label{eqn : lensingcontrib2}\ ] ] this relationship was given in with the shorthand convention corresponding to the action of covariant derivatives on the spin components of symmetric trace free tensors given in eqn .( [ eqn : covderivfinal ] ) . expanding and inspin - weighted spherical harmonics and evaluating the inner product of their gradients using eqn .( [ eqn : gradientdot ] ) , we obtain the lensing corrections & \approx & \sum_{lm}\sum_{l'm ' } \phi_l^m{}_{\pm 2}\tilde{a}_l^m{}_{\pm 2}i_{lll'}^{mmm ' } , \label{eqn : lensedpmalm}\end{aligned}\ ] ] where we define we will be interested in the lensing expressions for the rotationally invariant combinations , \label{eqn : emultipole } \\b_l^m & = & \frac{1}{2i}\left [ { } _ { + 2}a_l^m - { } _ { -2}a_l^m\right ] \label{eqn : bmultipole},\end{aligned}\ ] ] which are the curl - free ( `` e - mode '' ) and gradient - free ( `` b - mode '' ) components of the polarization field .> from the expressions ( [ eqn : lensedtlm ] ) and ( [ eqn : lensedpmalm ] ) , we find the general expression for a lensed multipole moment to be , \label{eqn : lensedxlm}\end{aligned}\ ] ] where may be multipole moments of , , or , and ensure that the associated terms are nonzero only when is even or odd , respectively . denotes the parity complement of , i.e. 
, , .lensing of the cmb fields mixes different multipoles through the convolution ( [ eqn : lensedxlm ] ) , and therefore correlates modes across a band determined by the power in the deflection angles .the unlensed cmb multipoles are assumed to be gaussian and statistically isotropic , so that the statistical properties are characterized by diagonal covariances or power spectra the assumption of parity invariance implies that the lensing potential is also assumed to be statistically isotropic so that where we have multiplied through by to reflect the weighting of deflection angles .it follows then that the lensed multipoles are also statistically isotropic with power spectra since denotes the measured multipoles , the power spectra contain all sources to the variance , including detector noise .detector noise will be taken to be homogeneous , with power spectra given by where and characterize detector noise , and is the fwhm of the beam .we employ the specifications of a nearly ideal reference experiment , with , , and ( see for an exploration of noise properties ) . if we instead consider an ensemble of cmb fields lensed by a _deflection field , the multipole covariance acquires off - diagonal terms and becomes where the subscript on the average indicates that we consider a fixed lensing field . are weights for the different quadratic pairs denoted by , given by +{}_{s_b}f_{l_2ll_1 } \left [ { \ensuremath{\epsilon}}_{l_1l_2l}\tilde{c}_{l_1}^{ab } -{\ensuremath{\beta}}_{l_1l_2l}\tilde{c}_{l_1}^{a \bar b } \right ] , \label{eqn : fdefinition}\ ] ] where and are the spins of the and fields respectively .specific forms for the six quadratic pairs are given in table [ table : fforms ] ..functional forms for .`` even '' and `` odd '' indicate that the functions are non - zero only when is even or odd , respectively . [ cols="^,^",options="header " , ] because has a zero mean , the off - diagonal terms of the two - point correlations taken over a statistical ensemble would vanish .however , in a given realization , we can construct an estimator for the deflections as a weighted sum over multipole pairs , and find weights that minimize the variance of the estimator .we write a general weighted sum of multipole pairs as where and are the observed cmb multipoles , denotes the specific choice of and , and the sum includes the diagonal ( , ) pieces .> from expression ( [ eqn : lensaverage ] ) for the average over a fixed lens realization , .\label{eqn : estimatorlensreduced}\end{aligned}\ ] ] where we have used the relations the diagonal terms in eq .( [ eqn : estimatordefinition ] ) only contribute to the unobservable monopole piece and we hereafter implicitly consider only .the normalization is set by the condition to be we derive the minimum variance estimator by minimizing the gaussian variance with respect to and find that note that for , and for ( e.g. , for or ) , the gaussian noise covariance \ ] ] is given by \right \ } , \label{eqn : noisecovariance}\ ] ] with , . for ,the above reduces simply to . 
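for concreteness , the homogeneous detector noise entering the observed power spectra above can be evaluated with the standard knox - type expression ; the explicit formula and the experiment numbers below are assumptions for illustration , since the reference - experiment values are not reproduced in the text .

```python
import numpy as np

# minimal sketch of a homogeneous detector-noise power spectrum (assumed form):
#   N_l = Delta^2 * exp[ l(l+1) theta_fwhm^2 / (8 ln 2) ]
# with Delta the white-noise level converted from uK-arcmin to uK-radian and
# theta_fwhm the Gaussian beam FWHM in radians.
arcmin = np.pi / (180.0 * 60.0)               # radians per arcminute

def noise_cl(ell, delta_uk_arcmin, fwhm_arcmin):
    delta = delta_uk_arcmin * arcmin          # uK-radian
    theta = fwhm_arcmin * arcmin
    return delta**2 * np.exp(ell * (ell + 1) * theta**2 / (8.0 * np.log(2.0)))

ell = np.arange(2, 3001)
n_tt = noise_cl(ell, delta_uk_arcmin=1.0, fwhm_arcmin=4.0)   # hypothetical temperature noise
n_pp = noise_cl(ell, delta_uk_arcmin=1.4, fwhm_arcmin=4.0)   # polarization noise, ~sqrt(2) larger
```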
following the treatment of the flat sky case in , we combine the measured quadratic estimators to further improve the signal to noise by a forming minimum variance estimator with weights and variance given by we will hereafter ignore contributions from the estimator , since the primordial contributions to the -mode power spectrum is expected to be small on scales where the lensed multipoles are employed .we plot the noise power spectra for the five estimators , as well as the minimum variance estimator , in fig .[ fig : fullskynoise ] , assuming the noise properties of the reference experiment .the quadratic estimators involve both filtering and convolution in harmonic space .it is useful in practice to express the convolution as a product of the fields in angular space .the estimators can then be constructed using fast harmonic transform algorithms . to simplify the construction of the estimatorswe will assume as is appropriate for the standard cosmology .aside from the estimator , derived in under the flat sky approximation , the angular space estimators involving polarization are new to this work .generalizing the construction in for the estimator , consider the fact that lensing correlates the ( lensed ) temperature and polarization fields to the their ( unlensed ) angular gradients .we show in appendix [ appendix : spinsfunctions ] that the all - sky analog to the gradient operation on a spin- field is .the quadratic estimator is then built out of the general operation on two fields and \equiv - d^i [ x({\ensuremath{\hat{\bm{n } } } } ) d_i y({\ensuremath{\hat{\bm{n } } } } ) ] \ , .\label{eqn : gradientcorr}\ ] ] the properly normalized estimators then take the form where \,,\nonumber\\ e^{\theta e } ( { \ensuremath{\hat{\bm{n } } } } ) & = \frac{1}{2 } \left ( p[{}_{+2 } a_e , { } _ { -2 } a_{\theta e } ] + p[{\rm cc } ] \right ) + p[{}_0 a_\theta , { } _ 0 a_{e\theta}]\ , , \nonumber\\ e^{\theta b } ( { \ensuremath{\hat{\bm{n } } } } ) & = \frac{1}{2 } \left ( p [ { } _ { + 2 } a_{ib } , { } _ { -2 } a_{\theta e } ] + p[{\rm cc } ] \right ) \,,\nonumber\\ e^{e e } ( { \ensuremath{\hat{\bm{n } } } } ) & = \frac{1}{2 } \left ( p [ { } _ { + 2 } a_e , { } _ { -2 } a_{ee } ] + p[{\rm cc } ] \right ) \,,\nonumber\\ e^{e b } ( { \ensuremath{\hat{\bm{n } } } } ) & = \frac{1}{2 } \left ( p [ { } _ { + 2 } a_{ib},{}_{-2 } a_{ee } ] + p[{\rm cc } ] \right ) \,,\end{aligned}\ ] ] where cc denotes the operation with the complex conjugates of the fields and the filtered fields themselves are given by the general prescription we omit the estimator under the assumption that the unlensed -power is small at high multipoles .it is straightforward to verify that all of the estimators are the same as the harmonic space ones with except for . herethe weights on the multipole combination are and are slightly non - optimal compared with the minimum variance weighting .furthermore and they must be calculated separately . 
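the inverse - covariance weighting that defines the minimum variance combination introduced at the beginning of this section can be sketched as follows ; this is a minimal illustration of the standard prescription at a single multipole , and the toy noise matrix is made up .

```python
import numpy as np

# combine the individual quadratic estimators at one multipole L.
# noise_cov is the (n_est x n_est) Gaussian noise covariance between estimator
# pairs (e.g. TT, TE, TB, EE, EB); weights are normalised to sum to unity.
def combine_estimators(noise_cov):
    inv = np.linalg.inv(noise_cov)
    n_mv = 1.0 / inv.sum()                 # variance of the combined estimator
    weights = n_mv * inv.sum(axis=1)       # inverse-covariance weights
    return weights, n_mv

cov = np.array([[1.0, 0.2, 0.0],           # made-up noise levels for three estimators
                [0.2, 0.5, 0.0],
                [0.0, 0.0, 2.0]])
w, n_mv = combine_estimators(cov)
print(w, n_mv)   # n_mv is never larger than the smallest diagonal entry of cov
```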
however a direct calculation of the noise spectrum through eqn .( [ eqn : noisecovariance ] ) shows that the differences are less than , and essentially indistinguishable from the minimum variance estimator ( see fig .[ fig : tenoise ] ) .these estimators may therefore be used in place of a direct multipole summation for efficient lens reconstruction .the gradient operations in eqn .( [ eqn : gradientcorr ] ) are efficiently evaluated in harmonic space since their action on spin harmonics simply raises and lowers the spin index in accordance with eqn .( [ eqn : spingradient ] ) .counterintuitively , the gravitational lensing of the cmb temperature and polarization fields is a small scale manifestation of the very large scale properties of the intervening mass distribution .it therefore requires very challenging , high angular resolution ( ) but wide - field surveys ( few degrees ) to exploit .we have provided expressions for quadratic estimators of the lensing potential valid on the entire sky , as well as the expected noise covariances for the estimators . as expected , on small angular scales ( ) ,the flat sky approximations differ from the full sky expressions by less than , indicating that the flat sky approximations is adequate .this regime is however not where the signal - to - noise peaks .we have also provided a practical means of implementing these estimators using fast harmonic transforms , either with spherical harmonics or fourier harmonics , to perform the required harmonic convolutions and filtering .we have shown that even the approximate estimator has a noise performance that is essentially indistinguishable from the minimum variance estimator .these techniques should provide a means to study the impact of real world issues such as finite - field , inhomogeneous noise , and foregrounds on the science of cmb lensing .this work was supported by nasa nag5 - 10840 and the doe oji program .we clarify the relation between spin- functions and tensor quantities on the sphere , and derive the relation between spin raising and lowering operators and covariant derivatives on the sphere .suppose we construct an orthonormal basis at each point on the sphere , with denoting the outward - facing normal vector .we define a local rotation as a right - handed rotation of the basis vectors by an angle around the vector , so that the new basis vectors are related to the original vectors by the transformation a function is said to carry a spin - weight if , under the rotation ( [ eqn : basisrotation ] ) , the function transforms as .this convention conforms to , and defines rotations in a sense opposite to that in .we define vectors and with respect to the basis according to ,\\ { \bar{{\ensuremath{\bm{m}}}}}&= & \frac{1}{\sqrt{2}}\left [ { \ensuremath{\hat{\bm{e}}}}_1 - i { \ensuremath{\hat{\bm{e}}}}_2\right ] , \label{eqn : projectionvectors}\end{aligned}\ ] ] which have the property that given a vector field , it can easily be shown that the quantities and transform as spin and objects , respectively , so that and act as spin projection vectors .more generally , given a rank- tensor , the quantity transforms as a spin- object , since under the rotation ( [ eqn : basisrotation ] ) , each factor of contributes a phase .the spin- functions therefore also provide a complete basis for the totally symmetric trace - free portion of a rank- tensor where the trace - free condition refers to the vanishing under contraction of any two indices in the tensor .for example , the polarization tensor can be 
written as covariant differentiation of such a tensor is related to the raising and lowering of the spin weight : { \bar{{\ensuremath{\bm{m}}}}}_{i_1}\cdots{\bar{{\ensuremath{\bm{m}}}}}_{i_s } + { } _ sf({\ensuremath{\hat{\bm{n}}}})\nabla_{(k}{\bar{{\ensuremath{\bm{m}}}}}_{i_1}\cdots{\bar{{\ensuremath{\bm{m}}}}}_{i_s ) } \nonumber \\ & + & \left [ \partial_k{}_{-s}f({\ensuremath{\hat{\bm{n}}}})\right ] { { \ensuremath{\bm{m}}}}_{i_1}\cdots{{\ensuremath{\bm{m}}}}_{i_s } + { } _ { -s}f({\ensuremath{\hat{\bm{n}}}})\nabla_{(k}{{\ensuremath{\bm{m}}}}_{i_1}\cdots { { \ensuremath{\bm{m}}}}_{i_s)}. \label{eqn : covariantderivative}\end{aligned}\ ] ] we evaluate the covariant derivatives etc . explicitly in the spherical basis with coordinates , yielding with those for given as complex conjugates of the above . using these, it can be shown that where in spherical coordinates .the covariant derivative of is therefore given by {\bar{{\ensuremath{\bm{m}}}}}_{i_1}\cdots{\bar{{\ensuremath{\bm{m}}}}}_{i_s}+ \left [ d_k \ , { } _ { -s}f({\ensuremath{\hat{\bm{n}}}})\right ] { { \ensuremath{\bm{m}}}}_{i_1}\cdots{{\ensuremath{\bm{m}}}}_{i_s } , \label{eqn : covderivfinal}\ ] ] where we define the spin - dependent gradient operator as a covariant derivative operating on the spin- piece of a tensor is equivalent to a gradient operation on its spin- weighted representation . as an example , the components of the covariant derivative of the polarization tensor ,\nonumber \\ { \bar{{\ensuremath{\bm{m}}}}}^i{\bar{{\ensuremath{\bm{m}}}}}^j\nabla_k { \mathcal p}_{ij}&=&d_k[{}_{-2 } a({\ensuremath{\hat{\bm{n } } } } ) ] .\label{eqn : dkexample}\end{aligned}\ ] ] the gradient operator is related to spin raising and lowering operators . using the expressions ( [ eqn : jevaluation ] ) and expressing the operator in the basis , we obtain the desired relations & = & -\frac{1}{\sqrt{2}}\left \{\left [ \edth { } _ sf({\ensuremath{\hat{\bm{n}}}})\right ] { \bar{{\ensuremath{\bm{m}}}}}_i + \left [ \baredth { } _ sf({\ensuremath{\hat{\bm{n}}}})\right ] { { \ensuremath{\bm{m}}}}_i \right \}.\label{eqn : derivtospinladder}\end{aligned}\ ] ] by virtue of the rotational properties of ( , ) , the ladder operators and , defined by \sin^{-s}\theta{}_sf(\theta,\varphi ) , \label{eqn : lowering}\\ \baredth{}_sf(\theta,\varphi)&=&-\sin^{-s}\theta\left [ \frac{\partial}{\partial\theta}-i\csc\theta\frac{\partial}{\partial\varphi } \right ] \sin^s\theta{}_sf(\theta,\varphi),\label{eqn : raising}\end{aligned}\ ] ] raise and lower the spin weight by .for example , the gradient operation on the spin- spherical harmonic yields = -\frac{1}{\sqrt{2 } } \left ( [ ( l - s)(l+s+1)]^{1/2 } { } _ { s+1}y_l^m { \bar{{\ensuremath{\bm{m}}}}}_i - [ ( l+s)(l - s+1)]^{1/2 } { } _ { s-1}y_l^m { { \ensuremath{\bm{m}}}}_i\right ) .\label{eqn : spingradient}\ ] ] note that the inner product of two gradients [ d_i\ , { } _ { s_2 } f_2({\ensuremath{\hat{\bm{n } } } } ) ] = \frac{1}{2}\left \ { [ \baredth\ , { } _ { s_1 } f_1({\ensuremath{\hat{\bm{n}}}})]\ , [ \edth{}_{s_2}f_2 ( { \ensuremath{\hat{\bm{n } } } } ) ] + [ \edth { } _ { s_1 } f_1({\ensuremath{\hat{\bm{n}}}})]\ , [ \baredth \ , { } _ { s_2}f_2({\ensuremath{\hat{\bm{n } } } } ) ] \right \ } \label{eqn : gradientdot}\end{aligned}\ ] ] leaves the total spin - weight of the product unchanged . 
inverting the relation ( [ eqn : derivtospinladder ] ) , we obtain the ladder operators in the tensor representation for , and with replaced by for .this relationship was first proven in , albeit with a different sign convention .the all - sky estimators derived in sect .[ sect : quadraticestimators ] reduce to the flat sky estimators , based on fourier harmonics of the fields in the small angle limit . herewe explicitly show this correspondence .the full sky harmonics in multipole space are related to the flat sky harmonics in fourier space by rewriting eq .( [ eqn : estimatordefinition ] ) using the above , to go further , we can utilize the approximation with the trigonometric functions defined through the cosine and sine rules , and the 3-j symbol on the rhs for the odd case represents a continuation of the analytic expression for the even case ^{1/2 } , \label{eqn : threejexpression}\ ] ] where .the approximations for and can likewise be written as where and are defined as the unbarred quantities with replaced by .we will also utilize the relation between plane waves and spherical harmonics , using eq .( [ eqn : barquantities ] ) to rewrite eq .( [ eqn : flatestimator1 ] ) , and applying relations ( [ eqn : threejtosylm ] ) and ( [ eqn : planewave ] ) , the estimator becomes taking , the above reduces to with corresponding to the filters in .the normalization reduces to the flat sky expression in in a similar fashion , by using the approximations ( [ eqn : barquantities ] ) to relate the full sky quantities to trigonometric functions on the flat sky .it is simple to show that the efficient all - sky estimator in eqn .( [ eqn : angularestimator ] ) reduces to efficient flat sky estimators with the replacements in eqn .( [ eqn : gradientcorr ] ) and the spherical harmonic transform in eqn .( [ eqn : angularestimator ] ) with a fourier transform . under the assumption that , they again reproduce the properties of the minimum variance quadratic estimators in eqn .( [ eqn : flatestimatorfinal ] ) and allow fast fourier transform techniques to be employed in their construction . fig .[ fig : comparison ] shows fractional differences between the noise in flat sky estimators derived in and the noise in full sky estimators , defined as . because most of the information comes from multipole pairs at high multipole moments , the flat sky expressions deviate at less than for , mainly in the direction of overestimating the noise .u. seljak , * 463 * , 1 ( 1996 ) .m. zaldarriaga and u. seljak , * 58 * , 023003 ( 1998 ) .f. bernardeau , astron .& astrophys .* 324 * , 15 ( 1997 ) .f. bernardeau , astron . & astrophys .* 338 * , 375 ( 1998 ) .m. zaldarriaga and u. seljak , * 59 * , 123507 ( 1999 ) .j. guzik , u. seljak , and m. zaldarriaga , * 62 * , 043517 ( 2000 ) .w. hu , * 64 * , 083005 ( 2001 ) .w. hu , lett * 557 * , l79 ( 2001 ) .hirata and u. seljak , * in press * , astro - ph/0209489 ( 2002 ) .w. hu and t. okamoto , * 574 * , 566 ( 2002 ) . l. knox and y. s. song , * 89 * , 011303 ( 2002 ) .m. kesden , a. cooray , and m. kamionkowski , * 89 * , 011304 ( 2002 ) .w. hu , * 62 * , 043007 ( 2000 ) .a. challinor and g. chon , * in press * , ( 2000 ) .m. zaldarriaga and u. seljak , * 55 * , 1830 ( 1997 ) .m. kamionkowski , a. kosowsky , and a. stebbins , * 55 * , 7368 ( 1997 ). j. n. goldberg _ et al ._ , j. math .* 8 * , 2155 ( 1967 ) .a. blanchard and j. schneider , astron .& astrophys .* 184 * , 1 ( 1987 ) .goldberg and d.n .spergel , * 59 * , 103002 ( 1999 ) .d. a. varshalovich , a. n. 
moskalev , and v. k. kersonskii , _ quantum theory of angular momentum_. ( world scientific , singapore , 1989 ) .l. knox , * 52 * , 4307 ( 1995 ) . k.m .gorski et al . , preprint * astro - ph/9905275 * , ( 1999 ) .muciaccia , p. natoli , and n. vittorio , * 488 * , 63 ( 1997 ) .e. newman and r. penrose , j. math .* 7 * , 863 ( 1966 ) . m. white , j.e .carlstrom , m. dragovan , and w.l .holzapfel , * 514 * , 12 ( 1999 ) .
gravitational lensing of the microwave background by the intervening dark matter mainly arises from large - angle fluctuations in the projected gravitational potential and hence offers a unique opportunity to study the physics of the dark sector at large scales . studies with surveys that cover greater than a percent of the sky will require techniques that incorporate the curvature of the sky . we lay the groundwork for these studies by deriving the full sky minimum variance quadratic estimators of the lensing potential from the cmb temperature and polarization fields . we also present a general technique for constructing these estimators , with harmonic space convolutions replaced by real space products , that is appropriate for both the full sky limit and the flat sky approximation . this also extends previous treatments to include estimators involving the temperature - polarization cross - correlation and should be useful for next generation experiments in which most of the additional information from polarization comes from this channel due to sensitivity limitations .
the hard - disk system is a fundamental model of statistical and computational physics . during more than a century, the model and its generalization to -dimensional spheres have been central to many advances in physics .the virial expansion is an example : boltzmann s early calculations of the fourth virial coefficient ultimately led to lebowitz and onsager s proof of the convergence of the virial expansion up to finite densities for all and to the general and systematic study of virial coefficients .the theory of phase transitions provides another example for the lasting influence of the hard - disk model and its generalizations .kirkwood and monroe first hinted at the possibility of a liquid solid transition in three - dimensional hard spheres .this prediction was surprising because of the absence of attractive interactions in this system .the depletion mechanism responsible for the effective - medium attraction was also first studied in hard spheres , by asakura and oosawa . in two dimensions , the liquid solid phase transition was first evidenced by alder and wainwright .it lead to far - reaching theoretical , computational and experimental work towards the understanding of 2d melting . in mathematics , hard disks and hard sphereshave also been at the center of attention . a rigorous existence proof of the melting transition in hard spheres is still lacking , but the ergodicity of the molecular dynamics evolution of this system has now been established rigorously .arguably the most important role for the hard - disk model has been in the development of numerical simulation methods .molecular dynamics and markov - chain monte carlo were first formulated for hard disks .the early algorithms have continued to be refined : within the molecular dynamics framework , this has lead to highly efficient event - scheduling strategies and , for monte carlo , to the development of cluster algorithms .even the modern simulation algorithm remain slow , however , and revolutions like the cluster algorithms for spin systems have failed to appear .moreover , rigorous mathematical bounds for the correlation time ( mixing time ) of monte carlo algorithms were obtained in the thermodynamic limit only for small densities , which are far inside the liquid phase . at higher densities , close to the liquid solid transition , many numerical calculations have suffered from insufficient simulation times until recently . in the present article, we discuss computational aspects of the hard - disk model , starting with an introduction ( section [ s : algos ] ) .in particular , we reinterpret hard - sphere monte carlo in terms of the sampling of points from high - dimensional polytopes ( section [ s : polytope ] ) .local monte carlo amounts to random walks in a sequence of such polytopes , while event - chain monte carlo is equivalent to molecular dynamics evolutions with particular initial conditions for the velocities .we analyze the convergence properties of the algorithms in these polytopes for the hard - disk case .parallel event - chain algorithms emerge naturally as molecular dynamics with more general initial conditions ( section [ s : parallel ] ) .we describe several parallelization strategies and report on implementations .[ s : algos ] we consider equal hard disks of unit radius in a square box of size . 
in the following ,we assume without mentioning periodic boundary conditions for positions and pair distances .the statistical weights are equal to unity for configurations without overlaps ( all pair distances larger than ) and zero for illegal configurations ( with overlaps ) .the phase diagram of the system depends only on the packing fraction . in the following , the letters , , , , label hard - disk configurations of disks , given by the coordinates of the disk centers .the letters , , number disks . between configurations and ( each arrow stands for a probability flow of same magnitude ) ._ left _ : global balance , as required for markov - chain monte carlo . the total flow into the configuration equals the flow out of it ._ center _ : detailed balance : the net flow between any two configurations is zero , . _ right _ : another special case of global balance : maximal global balance at ( ) .[ f : flows_detailed_global ] ] [ s : balances ] markov - chain monte carlo algorithms are governed by balance conditions for the flows from configuration to configuration ( see fig . [f : flows_detailed_global ] ) ; is the conditional probability to move from to , given that the system is in . to converge towards the stationary distribution ,the _ global _ balance condition must be satisfied : the total flow onto configuration must equal the total flow out of , the local monte carlo algorithm , introduced by metropolis et al . in 1953 ( see fig . [f : event_chain_move ] ) , uses the more restrictive _ detailed _ balance condition for which the net flow between each pair of configurations and is zero . moving from configuration to involves sampling the disk to be displaced and the displacement . for detailed balance , the probability to sample at must equal the probability to sample at position . in order to be ergodic , the displacements chosen such that each disk can eventually reach any position in the system . .with periodic boundary conditions , the event - chain move is rejection - free ., width=529 ] unlike the local monte carlo algorithm , a single move of the event - chain algorithm may displace several disks .an event - chain move is parametrized by a total displacement and a direction , which together form a vector .the move starts by sampling a disk and `` sliding '' it in the direction until it hits another disk , or at most for the distance .the disk is then displaced in its turn , also in the direction , see fig .[ f : event_chain_move ] .this process continues until the displacements of the individual disks sum up to .after this , a new disk and possibly a new direction are sampled for the next move . with periodic boundary conditions ,no rejections occur in this algorithm . for a given displacement vector , any disk configuration can reach other configurations , using each of the disks to start an event chain .likewise , can be reached from other configurations which may be reconstructed by event chains with displacement vector .this implies that the event - chain satisfies the global balance condition , eq .( [ e : globalbal ] ) .if the vectors are equally likely , it also satisfies detailed balance . 
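a minimal sketch of the local metropolis move described above , for unit - radius disks in a periodic box , might look as follows ; a naive o(n) overlap check is used here instead of the cell lists of a production code .

```python
import numpy as np

# one local Monte Carlo sweep: N attempted single-disk moves with a symmetric
# displacement distribution (detailed balance); a move is rejected if it would
# create any overlap (pair distance below two radii).
def local_mc_sweep(pos, box, delta=0.1, rng=np.random.default_rng()):
    n = len(pos)
    for _ in range(n):
        i = rng.integers(n)
        trial = (pos[i] + rng.uniform(-delta, delta, size=2)) % box
        diff = (pos - trial + 0.5 * box) % box - 0.5 * box   # minimum-image separations
        diff[i] = box                                        # exclude the disk itself
        dist2 = np.einsum('ij,ij->i', diff, diff)
        if np.all(dist2 >= 4.0):                             # contact distance = 2 radii
            pos[i] = trial                                   # accept, otherwise reject
    return pos
```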
in order to be ergodic ,the displacements must span space : by choosing , the event - chain algorithm realizes the maximal global balance ( see fig .[ f : flows_detailed_global ] ) , where flow between two configurations is possible only in one direction .this version is more efficient than detailed balance versions ( for example , and ) .it is again possible to alternate repeated moves in the direction with repeated moves in without destroying the correctness of the algorithm . for displacements smaller than the mean free path , the event - chain algorithm is roughly equivalent to the local monte carlo algorithm .it accelerates for increasing , and for much larger than the mean free path , it is about two orders of magnitude faster than the local monte carlo method , and about ten times faster than the best current implementations of event - driven molecular dynamics ( see ref . ) .disks in a square box with periodic boundary conditions at packing fraction ._ left _ : disk configurations and their local orientational field for one simulation run .frame are separated by iterations ( sweeps ) of local monte carlo .the slow decorrelation of the orientation is manifest ._ right _ : evolution of the global orientational order parameter , eq .( [ e : global_psi ] ) , in the complex plane , for the same simulation run .[ f : orientation_movie ] ] the characteristic challenge of numerical simulations for the hard - disk model resides in the extremely long correlation time .this is illustrated in fig .[ f : orientation_movie ] using snapshots of configurations obtained during a long simulation run .the system is quite small and not extremely dense , yet correlations in the orientation of the system persist over millions of monte carlo moves . to quantify the orientations and their correlations, we consider the local orientational field where is the number of voronoi neighbors of disk .the ( with ) are normalized weights according to the length of the voronoi interface between disks and , and is the angle of the vector between the disk centers . the average of eq .( [ e : local_psi ] ) over all disks yields the global orientational order parameter , in a square box , the mean value of is zero because of the symmetry , and its correlation function decays to zero for infinite times .we conjecture that is the slowest observable in the system . for large times , global orientational correlations decay exponentially , , and we obtain the empirical correlation time from an exponential fit to .[ s : polytope ] ) are and ._ center _ : molecular dynamics evolution in the polytope corresponding to two event chains with moves of disk 1 ( blue segments ) and disk 2 ( red segments ) .the trajectory begins with disk , and it depends on the choice of the starting disk ( or ) for the second chain .periodic boundary conditions are ignored for clarity .snapshots of the configuration are sketched along the trajectory ._ right _ : hard - disk configuration with its constraint graph for motion along the axis .each node has at most three forward and three backward links .this graph is invariant under event - chain moves in the direction . ]event - chain moves along a single direction sample a restricted configuration space . 
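a straight event chain in the positive x direction , in its maximal global balance version , can be sketched as below ; the collision search is deliberately naive , and self - collisions of a disk with its own periodic image are ignored in this illustration .

```python
import numpy as np

# one event chain of total displacement ell along +x: the active disk slides
# until it hits another disk or the remaining budget is used up, then the hit
# disk becomes the active one (lifting).  Periodic boundaries via modulo box.
def event_chain_x(pos, box, ell, start, radius=1.0):
    i = start
    remaining = ell
    while remaining > 0.0:
        dx = (pos[:, 0] - pos[i, 0]) % box                   # forward x separation
        dy = (pos[:, 1] - pos[i, 1] + 0.5 * box) % box - 0.5 * box
        gap = 4.0 * radius**2 - dy**2                        # positive only for possible partners
        mask = (gap > 0.0) & (np.arange(len(pos)) != i)
        free = np.full(len(pos), np.inf)
        free[mask] = dx[mask] - np.sqrt(gap[mask])           # sliding distance to contact
        free = np.maximum(free, 0.0)                         # guard against roundoff at contact
        j = int(np.argmin(free))
        step = min(remaining, free[j])
        pos[i, 0] = (pos[i, 0] + step) % box
        remaining -= step
        i = j                                                # hit disk becomes active
    return pos
```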
for the remainder of this section , we take the chains to move in the positive direction , unless specified otherwise , to simplify the notation .since all coordinates are fixed , two disks whose coordinates differ by less than radii can not slide across each other , and their relative order is fixed . furthermore , while in collision mode , any disk can collide with not more than six other disks , at most three in the forward direction , and at most three in backward direction ( see fig .[ f : polytope_ballistics_plus_graph ] ) .the collision partners of a disk may include itself , because of boundary conditions .the relations among disks constitute a _ constraint graph _ , which expresses the partial order between them ( see fig . [f : polytope_ballistics_plus_graph ] ) .this graph remains invariant while performing event - chain moves in the direction .each directed edge from to corresponds to a linear inequality for the coordinates of the disks and : with .the constant can be adjusted to also account for periodic boundary conditions in the direction .the inequalities eq .( [ e : polytopeequations ] ) imply that no more than three forward collision partners can be present collides forward with , and collides forward with , we have ; if now , the disks and can never come into contact ; disk _ covers _ disk . applying this rule iteratively ,the disks in the forward direction can be reduced to at most three : at most one each with , with and with ] yield legal hard - sphere configurations . ] .the invariant constraint graph allows for fast lookup of possible collision partners , and may even replace the customary cell grids ( see , for example , section 2.4 of ref .while computation of the actual constraint graph requires depth search , a superset sufficient for practical computations can be computed efficiently , see the footnote on page .hard - disk system at packing fraction ._ left _ : slowly decaying modes .configurations are shown with red disks moving in the direction and green disks in .the modes shown are the eigenvectors of the largest eigenvalues of ._ center _ : remaining correlation after of event - chain moves in the horizontal direction .these are the largest eigenvectors of .correlations in the horizontal direction have all but disappeared . _right _ : decay of the slowest modes , for the event - chain simulations with various total displacements , and for both the global ( gb ) and the detailed ( db ) balance version.[f : eigenmodes ] ] although the sampling problem from the invariant polytope concerns a convex body , it is notoriously nontrivial .the inequalities eq .( [ e : polytopeequations ] ) essentially amount to a system of coupled one - dimensional hard - disk problems . 
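a superset of the forward constraint graph described above , together with the constants entering the polytope inequalities , can be collected as follows ; the reduction to at most three forward partners per disk and the treatment of self - collisions across the boundary are omitted , and the simple "ahead within half a box" criterion is an assumption of this sketch .

```python
import numpy as np

# superset of forward edges for motion along +x: disk j can block disk i if
# |dy| < 2 radii; the edge carries the constant c_ij of the linear inequality
# x_j - x_i >= c_ij (mod box) defining the invariant polytope.
def forward_constraints(pos, box, radius=1.0):
    n = len(pos)
    edges = []
    for i in range(n):
        dy = (pos[:, 1] - pos[i, 1] + 0.5 * box) % box - 0.5 * box
        dx = (pos[:, 0] - pos[i, 0]) % box
        candidates = np.nonzero((np.abs(dy) < 2.0 * radius) & (np.arange(n) != i))[0]
        for j in candidates:
            if dx[j] < 0.5 * box:                     # crude "j lies ahead of i" test
                c_ij = np.sqrt(4.0 * radius**2 - dy[j]**2)
                edges.append((i, int(j), float(c_ij)))
    return edges
```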
to study the relaxation behavior effected by the event - chain algorithm in the polytope, we consider the cross - covariance of the disk coordinates , where is the coordinate of the disk , compensated for the overall translation of the system due to the event - chain moves , here , is for the global balance version of the event - chain algorithm ( chains only in direction ) , and for the detailed balance version ( ) .the eigenvectors of are the polytope s normal modes , , in the sense of principal component analysis .the nature of the modes depends on the structure of the invariant polytope and captures the relative order of colliding disks and their frozen - in coordinates .the normal modes to the largest eigenvalues are large - scale cooperative rearrangements of the disks ( see fig .[ f : eigenmodes ] ) .they are the slowest modes to decay under both local and event - chain monte carlo and govern the global decorrelation of the disk configuration .in particular , two modes dominated by antiparallel flow bands are very slow to decay ( mode 1 and 2 in fig .[ f : eigenmodes ] ) . at delay times , the cross - covariance captures residual correlations among the disk coordinates .the event - chain moves couple more efficiently to the longitudinal modes of the system , and we find that after , the event - chain algorithm has virtually erased longitudinal correlations .the most prominent residual correlations carry a transverse band structure ( see fig . [f : eigenmodes ] ) .the result is a substantial decrease in efficiency of the algorithm for simulated duration in a single direction larger than . to estimate the convergence time , we study the projection of the system s evolution onto a single mode , .the autocorrelation function is , for short chain lengths , monotonously decaying .larger chain lengths accelerate the decay , as the coupling to large - scale modes is improved ( fig .[ f : eigenmodes ] ) . for chains spanning several times the box ,however , the autocorrelation functions develop oscillations with very weak damping , offsetting the benefits of longer chains .the detailed balance version of event - chain monte carlo is generally slower and less prone to oscillations . for optimal performance , the global balance versionshould thus be used with larger , but on the order of , and for times ( see fig . [f : eigenmodes ] ) . for disk configurations larger than the correlation length , can be reduced appropriately .[ s : fullharddisk ] the invariant polytope representation allows us to interpret the convergence of the full hard disk sampling problem .the conceptually simplest monte carlo algorithm for hard disks consists entirely in polytope sampling : one iteration amounts to direct sampling a new configuration from the invariant polytope of the starting configuration , and exchanging the and coordinates of all the disks .this markov - chain algorithm satisfies detailed balance . in our experiments ,the timescale , measured in iterations , for relaxation to equilibrium increases only as for large systems , implying that most of the complexity of the hard - disk sampling problem resides in the polytope sampling . 
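the principal - component analysis of the compensated coordinates used earlier in this section to extract the slow polytope modes amounts to the following few lines ; subtracting the per - sample mean over all disks is one possible way of compensating the overall translation .

```python
import numpy as np

# x_series has shape (n_samples, n_disks): x-coordinates recorded along the run.
# The eigenvectors of the cross-covariance, ordered by decreasing eigenvalue,
# are the normal modes in the sense of principal component analysis.
def polytope_modes(x_series):
    x = x_series - x_series.mean(axis=1, keepdims=True)   # compensate global translation
    x = x - x.mean(axis=0)                                 # remove per-disk average
    cov = x.T @ x / len(x)                                 # cross-covariance matrix
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]
    return evals[order], evecs[:, order]                   # slowest (largest) modes first
```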
sincedirect sampling is a hard problem for high - dimensional polytopes ( see section [ s : generalpolytope ] ) , we replace it by markov chains of a fixed number of event - chain moves , in effect performing molecular dynamics in the invariant polytopes for fixed duration : this algorithm satisfies detailed or global balance depending on the version of the event - chain algorithm that is used for polytope sampling .autocorrelation function for several switching intervals , as a function of the simulated md time ( _ bottom axis _ ) , or alternatively , the number of collisions per disk ( _ top axis _ ) . _ right _ : decay of as a function of the number of / switches .as approaches , the curves approach the limit of direct sampling from the polytope , with a mixing time of cycles .all curves were averaged from systems of disks at packing fraction ; the chain length was ._ inset _ : the mixing time first increases rapidly with system size , but only grows as for larger systems ( also ) ., title="fig : " ] autocorrelation function for several switching intervals , as a function of the simulated md time ( _ bottom axis _ ) , or alternatively , the number of collisions per disk ( _ top axis _ ) ._ right _ : decay of as a function of the number of / switches . as approaches ,the curves approach the limit of direct sampling from the polytope , with a mixing time of cycles .all curves were averaged from systems of disks at packing fraction ; the chain length was ._ inset _ : the mixing time first increases rapidly with system size , but only grows as for larger systems ( also ) ., title="fig : " ] we study the influence of the switching interval on convergence properties . in fig .[ f : switchingandmixing ] , the autocorrelation function of the complex order parameter is plotted vs. cumulative molecular dynamics time . decays most quickly when the switching interval is small , but the decay speed deteriorates very slowly with . only at ( corresponding to about - collisions per disk at these densities ) ,the algorithm becomes notably less efficient .the efficiency drop thus follows the decay of longitudinal ( in direction ) correlations in the invariant polytope , and is to be expected from the results in section [ s : polytope ] . in the limit ,the event - chain algorithm realizes direct sampling in the invariant polytope .the approach to this limit is illustrated in fig .[ f : switchingandmixing ] by plotting against the number of x / y switching cycles .as the switching interval increases , the autocorrelation functions approach an asymptotic curve , where is the correlation time of the direct sampling algorithm .we find that for practical purposes , event - chain monte carlo reaches the asymptotic regime for , and thus samples an approximately independent point in the invariant polytope in operations .importantly , the correlation time increases rapidly only for small system size .after the system size surpasses the correlation length , grows only as .[ s : generalpolytope ] the invariant polytope is bounded by hyperplanes which are normal to coordinate axes and have unit derivative along the remaining axes . by choice of the ,the molecular dynamics evolution is aligned with the coordinate axes at all times , and computations of intersections are of complexity . 
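a driver loop combining blocks of event chains with the exchange of the x and y coordinates , as used in this section , could be organized as follows ; it relies on the event_chain_x sketch given earlier , and the number of chains per block plays the role of the switching interval .

```python
import numpy as np

# alternate blocks of +x event chains with an x<->y coordinate exchange, which
# plays the role of switching the chain direction while the single-direction
# polytope analysis remains applicable within each block.
def run(pos, box, ell, chains_per_block, n_blocks, rng=np.random.default_rng()):
    for _ in range(n_blocks):
        for _ in range(chains_per_block):
            start = int(rng.integers(len(pos)))
            pos = event_chain_x(pos, box, ell, start)
        pos = pos[:, ::-1].copy()    # exchange x and y: the next block moves along y
    return pos
```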
as shown in section [ s : fullharddisk ] , the event - chain algorithm seems to achieve an effective mixing time of collision events , so that the cost of sampling the hard - disk polytope appears as .the event - chain algorithm also allows to sample general polytopes .direct sampling from polytopes is straightforward only in low dimensions , especially in : a two - dimensional polytope with edges ( a convex -sided polygon ) , can be decomposed into triangles , using an interior point .triangles may then be sampled according to their areas , and a random point may be sampled inside the sampled triangle ( see , e.g. chap .6.2 of ) . in higher dimensions ,triangulation by simplices generalizes this decomposition . since polytopes such as the invariant hard - disk polytope have an exponential number of facets , direct sampling algorithms are no longer practical .markov - chain sampling achieves mixing times of steps , where is the number of bounding hyperplanes ( for hard disks ) , and where each move may be implemented in steps. it will be interesting to see how event - chain polytope sampling compares with existing polytope sampling methods , in particular the ` hit - and - run ' algorithms .[ s : parallel ] in view of the long running times of monte carlo simulations and of the current standstill in computer clock speeds , it is essential to develop _ parallel monte carlo methods _ which distribute the work load among several _ threads _ performing independent computation with as few communication as possible .such methods will allow to study not only the standard hard disk ensemble , but also related systems such as soft disks and polydisperse disk packings .however , parallel monte carlo algorithms for continuum systems pose many more problems than for lattice models , for example the ising spins , where straightforward parallel application of local metropolis updates converges to the boltzmann distribution .a massively parallel implementation of the local monte carlo algorithm was applied recently to the hard - disk melting problem .it sets up square cells according to a four - color checkerboard pattern .disks in same - color cells can be updated simultaneously , but moves across cell boundaries are rejected . to ensure ergodicity ,a new cell grid must be sampled periodically .massive parallelism of threads on a graphics card offsets the slowness of local monte carlo compared to event - chain algorithm monte carlo . these calculations confirmed the first - order liquid - hexatic phase transition in hard disks .disk packing at .we plot the number of collisions in accepted chains per hour of computation . on the same machine ,the serial version has a performance of about collisions per hour .the event - chain routine is the same in both programs ._ right _ : the acceptance ratio for chains depends on the thickness of the active layers ( which is decreases as more threads are added ) and the total displacement of the chains .[ f : stripescheme ] ] for parallel implementations of event - chain monte carlo , we consider only parallel threads that run chains in the same direction .this minimizes the chance that two chains cross each other and move the same disks .it also allows us to apply the invariant polytope framework of section [ s : polytope ] .it is instructive to realize that the effects of an event - chain move can be summarized in the difference vector of the new and old coordinates : , with . 
moreover ,if two chains are _ independent _ , meaning their sets of disks touched are disjoint , the net effect of running both chains is the sum of their individual difference vectors , . if , however , any disk is touched by both chains , the chain reaching this disk earlier in md time has precedence , the later chain sees a modified environment , and consequently takes a different evolution .thus , interdependent chains can not be added arithmetically , is always admissible . ] .the primary obstacle in parallelizing event - chain monte carlo consists in preserving the correct causal relations between subsequent chains , as required for the convergence to correct equilibrium distribution . in the following ,we discuss three strategies to parallelize event - chain monte carlo .predict / execute algorithm _distributes work among threads for a model of chains that follow each other chronologically .the effects of several chains are predicted in advance from the current disk configuration .the effects of the chains are then applied to the system state in the chronological order in which the starting disks were sampled . to detect conflicts, it is sufficient to compute the intersection of the set of disks touched by the current chain and of the chains that ran since the beginning of planning ; if this intersection is not empty , the chain has to be recomputed from the updated state of the disk configuration .planning and execution of chains can proceed in parallel on a shared - memory machine . using lock - free data structures ,we attain collision rates in excess of per hour in collision mode on a four - processor machine . due to its serial nature, this algorithm does not scale well beyond a few threads , however ; with too many chains predicted in advance , the probability for recomputations rises . moreover , switching between and collision modes requires reinitialization of the data structures and is rather expensive .a variation of the four - color scheme adapted to the event - chain algorithm partitions the system in horizontal stripes , separated by _ frozen isolation layers _ of thickness disk radii ( see fig .[ f : stripescheme ] ) .disks with their centers in the isolation layers are kept fixed , and thus guarantee the independence of chains running in neighboring stripes . to preserve the isolation layers , chains colliding with a frozen diskare rejected . as there are rejected moves ,the global balance condition is no longer guaranteed : the number of accepted forward chains can be different from the number of accepted backward chains ( see section [ s : balances ] ) . when allowing chains in both the directions , however , the isolation layer algorithm satisfies detailed balance .furthermore , in order to limit the rejection rate , the per - chain total displacement has to be kept lower than in the serial algorithm . in view of the discussion of section [ s : polytope ] , these necessities reduce somewhat the efficiency of the method . due to the isolation layers ,the accessible configuration space is restricted , and for ergodicity , the layer boundaries have to be resampled periodically , as in the four - color version of local monte carlo .we have implemented the isolation layer algorithm in parallel on a shared - memory machine .using several cores in parallel , it is possible to achieve effective collision rates which are 1030 times the single - core performance ( see fig .[ f : stripescheme ] ) , for systems of sufficient size . 
for systems too small , less threads can be used without shrinking the active strips to a point where the acceptance ratio becomes a limiting factor .systems of physical interest , however , are on the order of , and allow to use 1020 cores with moderate . at this time, the algorithm is not bound by rejection rates , but by communication between threads . finally, the event - chain scheme is not fundamentally limited to a single moving disk at any time .we may indeed launch multiple _ concurrent chains _ , which run at the same simulated md time , and interact with each other .this is different from the parallel simulation of chains which interact in sequential manner . in the invariant polytope picture, multiple concurrent chains correspond to choosing more general initial conditions , where more than one disk is given an initial velocity of .after time , multiple chains have executed , and possibly interacted with each other ; there is no rejection in this algorithm .the problem has some resemblance with event - driven molecular dynamics , because the scheduling of collisions must be foreseen , but there are several simplifications : all velocities are in the same direction and of magnitude or . as a consequence , two moving diskscan not collide with one other ; however , the faithful simulation of chains close by and possibly interacting requires careful synchronization among threads . in our experiments , this limits the speedup by parallelization .our most efficient method at this point is the isolation layer algorithm .we have reached in this paper a better understanding of the event - chain monte carlo algorithm for the hard - disk sampling problem . by restricting the algorithm to chains in a single direction, a connection appears to the well - known problem of sampling random points from a polytope : a move of the event - chain algorithm consists in performing a finite - time molecular dynamics simulation in the invariant polytope of the disk configuration .this connection offers new strategies to solve the hard - disk sampling problem in terms of polytope sampling ; it also suggests to investigate the utility of event - chain methods for the sampling of general polytopes .finally , it will be interesting to study the combinatorial structure of the typical invariant polytope , and its dependence on thermodynamical parameters . by the study of correlation functions, we have shown that the monte carlo relaxation process in the invariant polytope separates into two phases : a rapid longitudinal relaxation , followed by a much slower relaxation of the transverse degrees of freedom .we have given recommendations for the parameters of the algorithm based on these results .finally , we have discussed several strategies for parallelizing monte carlo algorithms for hard disks , alleviating the problem of the long simulation times in hard disk monte carlo .the parallelization of the hard - disk ensemble remains challenging due to its unique combination of very little actual computation and long correlation times .efficient methods to tackle the hard - disk ensemble are , however , crucial in order to treat related systems such as soft disks with the same level of success as the hard disks .new concepts such as the link to polytope sampling will be essential in this effort .we thank p. diaconis , e. p. bernard , s. leitmann and m. hoffmann for fruitful discussions .99 l. boltzmann , sitzber .wien , math .naturw . kl .2a ) ( 1896 ) . j. l. lebowitz and o. penrose , _phys . 
_ * 5 * 841 ( 1964 ) .j. g. kirkwood , e. monroe , _* 8 * 845 ( 1940 ) .s. asakura , f. oosawa , _ j. chem. phys . _ * 22 * 1255 ( 1954 ) .b. j. alder , t. e. wainwright , _ phys . rev . _ * 127 * , 359 ( 1962 ) .j. m. kosterlitz , d. j. thouless , _ j. phys .c : solid state phys . _ * 6 * 1181 ( 1973 ) ; b. o. halperin , d. r. nelson _ phys .* 41 * 121 ( 1978 ) . c. h. mak , _ phys .e _ * 73 * 065104(r ) ( 2006 ) .e. p. bernard , w. krauth , _ phys .* 107 * , 155704 ( 2011 ) .k. zahn , r. lenke , g. maret , _ phys .lett . _ * 82 * 2721 ( 1999 ) .p. diaconis , _ j. stat ._ * 144 * 445 ( 2011 ) . y. g. sinai , _ russian mathematical surveys _ * 25 * , 137 ( 1970 ). n. simanyi , _ inventiones mathematicae _ * 154 * , 123 ( 2003 ) . b. j. alder , t. e. wainwright , _ j. chem* 27 * , 1208 ( 1957 ) .d. c. rapaport , _phys . _ * 34 * , 184 ( 1980 ) .n. metropolis , a. w. rosenbluth , m. n. rosenbluth , a. h. teller , e. teller , _ j. chem .* 21 * 1087 ( 1953 ) .m. isobe , _ int . j. modc _ * 10 * , 1281 ( 1999 ) . c. dress , w. krauth ,_ j. phys . a , math ._ * 28 * l597 ( 1995 ) .a. jaster , _ phys .e _ * 59 * , 2594 ( 1999 ) .e. p. bernard , w. krauth , d. b. wilson , _ phys .e _ * 80 * , 056704 ( 2009 ) .r. h. swendsen , j. s. wang , _ phys .lett . _ * 58 * 86 ( 1987 ) .u. wolff , _ phys .lett . _ * 62 * 361 ( 1989 ) .r. kannan , m. w. mahoney , r. montenegro , in t. ibaraki , n. katoh , h. ono ( eds . ) : algorithms and computation , 14th international symposium , isaac 2003 . proceedings .lecture notes in computer science 2906 , springer ( 2003 ) .d. b. wilson , _ random struct. algorithms _ * 16 * , 85 ( 2000 ) . c. chanal , w. krauth , _ phys .e _ * 81 * 016705 ( 2010 ) .j. a. anderson , m. engel , s. c. glotzer , m. isobe , e. p. bernard , w. krauth , arxiv:1211.1645 . w. mickel , s. c. kapfer , g. e. schrder - turk , k. mecke , to appear in_ j. chem .( 2013 ) ; arxiv:1209.6180 .m. e. dyer , a. m. frieze , _siam j. comput . _* 17 * 967 ( 1988 ) ; v. kaibel , m. e. pfetsch , in _ algebra , geometry and software systems _ , edited by m. joswig , n. takayama ( springer , 2003 ) ; arxiv : math/0202204 .w. krauth , _ statistical mechanics : algorithms and computations _ , oxford university press ( 2006 ) .r. l. smith , _ operations research _ * 32 * , 1296 ( 1984 ) .p. a. rubin , _ communications in statistics - simulation and computation _ * 13 * , 375 ( 1984 ) .m. dyer , a. frieze , r. kannan , _j. acm _ * 38 * , 1 ( 1991 ) .r. kannan and h. narayaran , _ mathematics of operations research _ * 37 * , 1 ( 2012 ) .b. a. berg , _ markov chain monte carlo simulations and their statistical analysis _ , world scientific , ( 2004 ) .j. a. anderson , e. jankowski , t. l. grubb , m. engel , s. c. glotzer , arxiv:1211.1646 .
the hard - disk problem , the statics and the dynamics of equal two - dimensional hard spheres in a periodic box , has had a profound influence on statistical and computational physics . markov - chain monte carlo and molecular dynamics were first discussed for this model . here we reformulate hard - disk monte carlo algorithms in terms of another classic problem , namely the sampling from a polytope . local markov - chain monte carlo , as proposed by metropolis et al . in 1953 , appears as a sequence of random walks in high - dimensional polytopes , while the moves of the more powerful event - chain algorithm correspond to molecular dynamics evolution . we determine the convergence properties of monte carlo methods in a special invariant polytope associated with hard - disk configurations , and the implications for convergence of hard - disk sampling . finally , we discuss parallelization strategies for event - chain monte carlo and present results for a multicore implementation .
entangled states are used as a resource for quantum teleportation . the success of quantum teleportation is quantified by the optimal teleportation fidelity . the strength of entanglement is proportional to the success of teleportation for pure entangled resources . maximally entangled pure states give the maximum optimal teleportation fidelity of unity . on the other hand , for mixed states , the success of teleportation depends , in addition to the strength of entanglement , on the mixedness of the state . it was shown in that , for mixed entangled states , there is an upper bound on mixedness above which the state is useless for quantum teleportation . upper bounds on measures of mixedness , namely , von neumann entropy and linear entropy , were obtained for a general bipartite state . mixed entangled states can be classified according to their rank , where the rank varies between and for a bipartite system . in the present work , we obtain rank dependent upper bounds on measures of mixedness , above which the states of the respective ranks become useless for quantum teleportation . in our previous work , we observed that mixedness and entanglement of a mixed state resource independently influence teleportation . for a state with a fixed value of mixedness , the state being entangled is only necessary for it to be a resource for quantum teleportation , but not sufficient . states with low mixedness and high entanglement turn out to be ideal resources for quantum teleportation . we argued , based on the numerical work on a class of maximally entangled mixed states , that there exists a rank dependent lower bound on a measure of entanglement such as concurrence , below which the states are useless for quantum teleportation . in this article , we derive rank dependent lower bounds on concurrence for bipartite systems . the werner state , defined as a probabilistic mixture of a maximally entangled pure state and the maximally mixed separable state , exhibits the highest mixedness for a given optimum teleportation fidelity among all the two qubit mixed entangled states known in the literature . the werner state is a rank 4 state . in the present study , we generalize the construction and obtain second and third rank werner states . we show that the rank dependent werner states exhibit the respective rank dependent bounds obtained on measures of mixedness and entanglement for a given value of teleportation fidelity . we consider two measures of mixedness of a state , namely , von neumann entropy defined as and linear entropy given as . further , the maximum achievable teleportation fidelity of a bipartite system in the standard teleportation scheme is , where is the singlet fraction of given by . here the maximum is over all maximally entangled states , and the maximum fidelity achieved classically is . this shows that the state is useful for quantum teleportation for ( ) . let denote a bipartite mixed state of the system whose rank is , . firstly , we prove a theorem that gives rank dependent upper bounds on von neumann entropy , above which the state is useless for quantum teleportation . _ theorem _ : if the entropy of a given state of rank of the system exceeds , then the state is not useful for quantum teleportation . _ proof _ : we know that for a given state of systems , implies the state is useful for quantum teleportation . we have a state of rank of systems as , where is the basis formed from maximally entangled states . we have from the definition of the singlet fraction that the largest element , say , of is greater than or equal to .
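anticipating the maximization carried out below , the rank dependent entropy bound that results from this construction can be evaluated directly ; the sketch assumes the usual singlet - fraction threshold of 1/2 for the largest eigenvalue , which is an assumption taken from the standard teleportation criterion .

```python
import numpy as np

# Shannon entropy of the spectrum that saturates the bound in the proof:
# largest eigenvalue pinned at 1/2 (the assumed singlet-fraction threshold),
# remaining weight spread equally over the other rank-1 eigenvalues.
def entropy_bound(rank):
    spectrum = np.array([0.5] + [0.5 / (rank - 1)] * (rank - 1))
    return float(-np.sum(spectrum * np.log2(spectrum)))

for r in (2, 3, 4):
    print(r, entropy_bound(r))   # 1.0, 1.5, ~1.79 ebits, i.e. 1 + log2(r-1)/2
```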
andit is known that von neumann entropy of a given state is less than or equal to shannon entropy . _ shannon entropy in eq.([shannon ] ) is maximum for and rest of elements are equal , subjected to the constraint , we get the upper bound as this implies , this shows that for satisfying eq.([von ] ) , the singlet fraction is greater than . thus , we prove if , state is useless for quantum teleportation .we also obtain an analytical expression for the rank dependant upper bound on linear entropy as a measure of mixedness as for , we have where is the rank of the state , which varies from 2 to 4 . in our previous work , based on the analysis of a class of maximally entangled mixed states of bipartite system given in , we observed that for a given value of linear entropy , there exists a rank dependent upper bound on the optimal teleportation fidelity and the upper bound increases with increase in the rank .this is equivalent to stating that for a given value of optimal telportation fidelity , there exists a rank dependent upper bound on linear entropy and the upper bound increases with rank .the above result allows us to calculate the upper bounds explicitly .if the optimal teleportation fidelity is fixed as , the classical fidelity , the upper bound on linear entropy for states of second , third and fourth ranks are , and respectively .states with linear entropy above the respective rank dependent upper bounds are useless for teleportation .a class of maximally entangled mixed states ( mems ) is constructed by ishizaka et .al and the construction is as follows . where are the eigenvalues of the state and .the werner state , which is a convex sum of maximally entangled pure state and maximally mixed separable state , given by werner state corresponds to a choice of eigenvalues given by and .werner state is a state of rank 4 .the singlet fraction for werner state is and linear entropy is estimated as .the value of linear entropy at which fidelity is equal to classical limit can be found as , which is same as the upper bound obtained above for states of rank 4 .this correspondence motivated us to construct werner like states of ranks 2 and 3 .a rank werner state can be constructed by using eigenvalues as , and .we have and linear entropy is equal to .thus it is clear that for a value of linear entropy greater than , which is the obtained upper bound for rank 3 states . to construct rank werner state we take , and .we get singlet fraction and linear entropy as and respectively .it clearly shows that rank werner state is useless for teleportation when linear entropy is greater than , coinciding with the theoretical upper bound shown above .thus , we illustrate that the constructed rank dependent werner states exhibit the theoretical upper bounds on the linear entropy for a given value of optimal teleportaion fidelity .the relationship between the concurrence as a measure of entanglement of state and it s purity is well studied , the degree of entanglement decreases as purity decreases .the maximum possible value of concurrence of a state for a spectrum of eigenvalues in descending order is given as in our previous work , we also observed that there exist a rank dependent lower bound on concurrence for a fixed value of optimal teleportation fidelity and the lower bound decreases with increase in the rank of the state . 
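as a numerical illustration of the quantities used above , the sketch below builds bell - diagonal two - qubit states , including the rank 4 werner state , and evaluates their singlet fraction , teleportation fidelity , von neumann entropy and linear entropy . it is a sketch under stated conventions : the fidelity formula ( 2f + 1)/3 , the base - 2 logarithm and the 4/3 normalisation of the linear entropy are standard choices assumed here , and the eigenvalue spectra of the lower rank werner - like states discussed in the text can be substituted into bell_diagonal once specified .
....
import numpy as np
from numpy.linalg import eigvalsh

# the four bell ( maximally entangled ) basis vectors for two qubits
bell = np.array([[1, 0, 0, 1],
                 [1, 0, 0, -1],
                 [0, 1, 1, 0],
                 [0, 1, -1, 0]]) / np.sqrt(2.0)

def bell_diagonal(lams):
    # state that is diagonal in the bell basis with the given eigenvalues
    return sum(l * np.outer(v, v) for l, v in zip(lams, bell))

def singlet_fraction(rho):
    # maximal overlap with a maximally entangled state ; for bell - diagonal
    # states the maximum is attained on one of the four bell vectors
    return max(float(v @ rho @ v) for v in bell)

def teleportation_fidelity(rho):
    return (2.0 * singlet_fraction(rho) + 1.0) / 3.0

def von_neumann_entropy(rho):
    lam = eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log2(lam)).sum())

def linear_entropy(rho):
    return 4.0 / 3.0 * (1.0 - float(np.trace(rho @ rho)))

# rank 4 werner state p |phi+><phi+| + (1 - p) i/4 , written in the bell basis
p = 0.5
rho_w = bell_diagonal([(1 + 3 * p) / 4] + [(1 - p) / 4] * 3)
print(np.linalg.matrix_rank(rho_w), singlet_fraction(rho_w),
      teleportation_fidelity(rho_w), von_neumann_entropy(rho_w), linear_entropy(rho_w))
....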
in the present work ,we obtain the exact rank dependent lower bounds on the concurrence for a bipartite states .we find that the eigenvalues and equal values of rest of the minimize among all eigenvalue spectra . by substituting the values for ,that is , corresponding to rank , and rest of elements are equal , we get the lower bound on the concurrence of state of rank below which the state fails to be a source for quantum teleportation .we obtain the values of lower bound on concurrence for the failure of teleportation for rank 4,3 and 2 states as follows , and respectively . from thiswe clearly show that there exist a lower bound on concurrence for a fixed value of fidelity , this lower bound decreases as rank increases for two qubit systems .the lower bound on concurrence for failure of quantum teleportation for rank dependent werners states coincide with the bounds derived in this work .lower and upper bounds on teleportation fidelity as a function of concurrence are obtained in . the upper and lower bounds on fidelity for a concurrence are given by and respectively .the upper bound on fidelity as a function of concurrence coincides with werner state of rank 4 . based on the properties of lower rank werner states constructed in this work, we conjecture that there exist rank dependant bounds on fidelity as a function of concurrence and these bounds which also coincide with werner states of lower ranks .the maximum amount of fidelity of a rank three state as a function of concurrence is given as , in the same way the amount of teleportation fidelity for rank two state is given by werner state of rank 2 as .teleportation fidelity as a function of concurrence for rank dependent werner states are presented in fig .[ fig:1 ] .werner states of different ranks exhibit respective rank dependent lower bounds on concurrence below which the state is useless for quantum teleportation .it can be seen that , in the fidelity - concurrence plane , the allowed values of teleportation fidelity for a fixed value of concurrence of states is bounded below and above by curves corresponding to second and fourth rank werner states respectively .fidelity of second and fourth rank werner state also coincide respectively with lower and upper bounds on fidelity of states obtained in .the curve corresponding to werner state of rank 3 , which lies between the curves of upper and lower bounds , serves as an upper bound on teleportation fidelity of rank 3 mixed states .this implies , quantum teleportation can be achieved with a resource of low value of concurrence , then it has to be a high rank state .for example , if we have to use a state with concurrence 0.1 as a quantum teleportation resource , it has to be necessarily a fourth rank state .in it is shown that local environment can enhance fidelity of quantum teleportation .it is shown that for a class of density matrices , interaction with environment enhance the singlet fraction above , and thus making the state useful for quantum teleportation .there are other methods like entanglement purification , local filtering , entanglement concentration for single and multiple number of qubits can be made use to improve the teleportation fidelity of quantum channels . 
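the claim that the upper bound on fidelity as a function of concurrence is realised by the rank 4 werner family can be checked numerically using the standard wootters concurrence . the sketch below does this for a few mixing parameters ; the closed forms quoted in the comments , ( 3p - 1)/2 for the concurrence and ( 2 + c)/3 for the fidelity , are standard results assumed here rather than expressions taken from the text .
....
import numpy as np

def concurrence(rho):
    # wootters concurrence of a two - qubit density matrix ( 4x4 , unit trace )
    sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy                    # spin - flipped state
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def fidelity(rho, bell):
    f = max(float(np.real(v.conj() @ rho @ v)) for v in bell)   # singlet fraction
    return (2.0 * f + 1.0) / 3.0

bell = np.array([[1, 0, 0, 1], [1, 0, 0, -1],
                 [0, 1, 1, 0], [0, 1, -1, 0]]) / np.sqrt(2.0)
phi_plus = bell[0]
for p in (0.2, 0.4, 0.6, 0.8, 1.0):
    rho = p * np.outer(phi_plus, phi_plus) + (1 - p) * np.eye(4) / 4.0
    c, fid = concurrence(rho), fidelity(rho, bell)
    # along the rank 4 werner family c = max(0, (3p - 1)/2) and fid = (2 + c)/3 once p >= 1/3
    print(p, c, fid, (2 + c) / 3)
....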
in this contextit is important to understand whether the rank dependant bounds on fidelity as a function of linear entropy and concurrence are preserved under operations that enhance fidelity .we understand that the operations involved in enhancing fidelity need not preserve the rank and hence rank dependant bounds are valid as long as the state is of the respective ranks .it can be stated that for fixed values of concurrence and linear entropy of a state of fixed rank , the maximum achievable teleportation fidelity is that of werner state of rank .we proved the existence of rank dependent upper bound on the von neumann entropy as well as linear entropy as measures of mixedness of a general mixed state of a bipartite system for failure of the state to be resource for quantum teleportation .we constructed rank 3 and rank 2 werner states .rank dependent werner states exhibit the theoretical upper bounds obtained on von neumann entropy and linear entropy .further , we proved the existence of rank dependent lower bounds on the concurrence of a mixed states for the failure of states as resource for quantum teleportation and showed that the lower bound on concurrence for rank dependent werner states coincide with the theoretical lower bounds .we also showed the rank dependant werner states give upper and lower bounds on fidelity as a function concurrence , which are consistent with the results in the literature .100 c. h. bennett , g. brassard , c. crepeau , r. jozsa , asher peres and w. k. wooters , phys .* 70 * , 1895 ( 1993 ) .sandu popescu , phys ., * 72 * , 797 ( 1994 ) .r. horodecki , m. horodecki and p. horodecki , phys .a. , * 222 * , 21 ( 1996 ) .michael a. nielsen and issac l. chuang , _ quantum computation and quantum information _ , ( cambridge university press , cambridge , england , 2000 ) .m. horodecki , p .horodecki and r. horodecki , phys .a. , * 60 * , 1888 ( 1999 ) . s.bose and v. vedral , phys .a. , * 61*,040101 ( 2000 ) .k.g paulson and s.v.m satyanarayana , * 14 * , 1227 - 1237 ( 2014 ) .reinhard f. werner , phys .a. , * 40 * , 4277 ( 1989 ) .j. von neumann , mathematical foundation of quantum mechanics ( princeton univeristy press,1995 ) satoshi isizaka and tohya hiroshima , phys .a. , * 62 * , 022310 ( 2000 ) .frank verstraete , koenraad audenaert , tijl de bie , bart de moor , phys .a. , * 64 * , 012316 ( 2001 ) .satoshi isizaka and tohya hiroshima ( 2000 ) , _ maximally entangled mixed states under nonlocal unitary operations in two qubits _ , phys .a. , * 62 * , 02231 frank verstraeteab and henri verschelde , phys .rev . a 66 : 022307 ( 2002 ) , w.k wootters , quantum information and computation,*1*,27 - 44 p.badziag , m. horodecki , p . horodecki and r. horodecki ( 2000 ) ,_ local enviornment can enhance fidelity of quantum teleportation _a. , * 62 * , 012311 c.h bennet et al ., `` purification of noisy entanglement and faithful teleportation via noisy channels '' , phys . rev .76 : 722 - 725,c .h. bennett et al . , phys .a 53 , 2046 ( 1996 ) frank verstraete , jeroen dehane and bart demoor , phys .a. , * 64 * , 010101(r),p .kwiat et al . ,nature 409 , 1014 ( 2003 )
entanglement and mixedness of a bipartite mixed state resource are crucial for the success of quantum teleportation . upper bounds on measures of mixedness , namely , von neumann entropy and linear entropy , beyond which the bipartite state ceases to be useful for quantum teleportation are known in the literature . in this work , we generalize these bounds and obtain rank dependent upper bounds on von neumann entropy and linear entropy for an arbitrary bipartite mixed state resource . we observe that the upper bounds on measures of mixedness increase with the rank . for two qubit mixed states , we obtain rank dependent lower bounds on the concurrence , a measure of entanglement , below which the state is useless for quantum teleportation . the werner state , which is a fourth rank state , exhibits the theoretical upper bound on mixedness among two qubit mixed states . we construct werner like states of lower ranks and show that these states possess the theoretical rank dependent upper bounds obtained on the measures of mixedness and the theoretical rank dependent lower bounds on the concurrence .
it is fair to say that tetrad calculations are generally considered superior to classical coordinate methods for the calculation of curvature in spacetime .experiments by campbell and wainwright now dating back many years showed that tetrad methods are faster than coordinate methods by factors of .even larger factors have been obtained by maccallum in a euclidean context . within the well known system sheep , for example, the advice to the beginner is to always use frame versions of the metric ( e.g. maccallum and skea p. 23 ) . on the commercial side , within the system macsyma2 demonstration ctensor4 begins with an explanation that `` frame fields '' ( orthonormal bases ) allow the computations to run much more quickly .the demonstration calculates the bases components of the ricci tensor for the kerr - newman spacetime in boyer - lindquist coordinates , and is a good place to begin our discussion .+ in table 1 we have reproduced this demonstration within the system grtensorii running under maplev release 3 , and have included the calculation of the weyl tensor .the table demonstrates some interesting properties .the theoretical advantage of the frame approach is clearly demonstrated in the boyer - lindquist coordinates ( column bkn ). however , under the elementary coordinate transformation this advantage fails to deliver superior performance ( column bknu ) .the importance of strategic application of simplification at intermediate steps is illustrated in column bkns .for this test simplification of components has been carried out only after the components of the ricci and weyl tensors are calculated .it is worth noting that without some optimization in the simplification strategy ( e.g. post - calculation simplification only ) this calculation can not be executed in maplev on a 32 bit machine . [cols="<,>,>,>",options="header " , ]the following sections list tetrads and metrics used as inputs for the tests listed in tables 13 .this list has been produced directly from the input files which were used in the tests ( listed in appendix [ app : a ] ) and converted to latex using maplev s latex output facility with only minor modifications to improve readability .for this set of tests ( whose output is given in table 1 ) , the kerr - newman spacetime is described by a frame consisting of four independent covariant vector fields whose inner product is the constant matrix .the basis vectors and corresponding line element for each case are given below .+ ' '' '' + spacetime : kerr - newman ( boyer - lindquist coordinates ) ( bkn and bkns ) + ' '' '' + input file = frame + ' '' '' + bkn.mpl , = ] + ] + corresponding line element : + + + + + + ' '' '' + spacetime : kerr - newman ( boyer - lindquist coordinates , ) ( bknu ) + ' '' '' + input file = frame + ' '' '' + bknu.mpl ] + ] + corresponding line element : + + + + ' '' '' + this section lists the set of null tetrads for the test cases used to generate table 2 . for each spacetime , four forms of input were used .the times listed in column of table 2 are obtained using a contravariant null tetrad as input . in columns and the metric is calculated from this tetrad and used for subsequent calculation . 
in column a covariant tetrad is loaded , and in and , its corresponding metric is used . though the metric of column is , of course , equivalent to that of , there are often differences in representation which can in principle alter calculation times ( though in practice we have found this effect to be minimal ) . the tetrads and corresponding line elements used as inputs for each test are specified in the following input files , where the first file of each pair holds the contravariant null tetrad and the second the covariant null tetrad : griffiths ( * grif * ) npupgrif.mpl , npdngrif.mpl ; lewis - papapetrou ( * l - p * ) npuplew.mpl , npdnlew.mpl ; bondi ( * bondi1 * , * bondi2 * ) npupbon.mpl , npdnbon.mpl ; debever ( * deb * ) npupdeb.mpl , npdndeb.mpl ; debever - mclenaghan - tariq ( * dmt1 * ) npupsdmt.mpl , npdnsdmt.mpl ; debever - mclenaghan - tariq ( modified ) ( * dmt2 * ) npupdmt.mpl , npdndmt.mpl ; kerr - newman ( * kn - euc1 * ) npupkn1.mpl , npdnkn1.mpl ; kerr - newman ( * kn - euc2 * ) npupkn2.mpl , npdnkn2.mpl ; kerr - newman ( boyer - lindquist coordinates ) ( * kn - bl1 * ) npupkn3.mpl , npdnkn3.mpl ; kerr - newman ( eddington - finkelstein coordinates ) ( * kn - ef1 * ) npupkn4.mpl , npdnkn4.mpl ; kerr - newman ( boyer - lindquist coordinates ) ( * kn - bl2 * ) npupkn5.mpl , npdnkn5.mpl ; kerr - newman ( eddington - finkelstein coordinates ) ( * kn - ef2 * ) npupkn6.mpl , npdnkn6.mpl . for the final set of tests , not only are the basis vectors varied , but also their inner product ; the frames used are mixmaster ( * mix * ) mix.mpl , mixmaster ( * mix1 * ) mix1.mpl , mixmaster ( * mix2 * ) mix2.mpl and mixmaster ( * mix3 * ) mix2.mpl . the following is the input file :
....
ndim _ : = 4 :
x1 _ : = t : x2 _ : = r : x3 _ : = u : x4 _ : = phi :
eta12 _ : = 1 : eta34 _ : = -1 :
b11 _ : = ( r^2+a^2)/(r^2 - 2*m*r+a^2+q^2 ) :
b12 _ : = 1 :
b14 _ : = a/(r^2 - 2*m*r+a^2+q^2 ) :
b21 _ : = 1/2*(r^2+a^2)/(r^2+u^2 ) :
b22 _ : = -1/2*(r^2 - 2*m*r+a^2+q^2)/(r^2+u^2 ) :
b24 _ : = 1/2*a/(r^2+u^2 ) :
b31 _ : = 1/2*(i*(a^2-u^2)^(1/2)*r+(a^2-u^2)^(1/2)*u)*2^(1/2)/(r^2+u^2 ) :
b33 _ : = 1/2*(-(a^2-u^2)^(1/2))*(r - i*u)*2^(1/2)/(r^2+u^2 ) :
b34 _ : = 1/2*i*(r - i*u)*a/(a^2-u^2)^(1/2)*2^(1/2)/(r^2+u^2 ) :
b41 _ : = 1/2*(-i*(a^2-u^2)^(1/2)*r+(a^2-u^2)^(1/2)*u)*2^(1/2)/(r^2+u^2 ) :
b43 _ : = 1/2*(-(a^2-u^2)^(1/2))*(r+i*u)*2^(1/2)/(r^2+u^2 ) :
b44 _ : = -1/2*i*a*(r+i*u)/(a^2-u^2)^(1/2)*2^(1/2)/(r^2+u^2 ) :
info_:=`contravariant nptetrad for kerr - newman metric ( u = a*cos(theta ) to boyer - lindquist coordinates ) ` :
....
unfortunately , the output obtained from each of these methods is not in exactly the same form . since p2 is smaller as measured in maple _ words _ , it must be considered the fully simplified form . in fact , by performing the maplev operation factor on the denominators ( an operation requiring a negligible amount of cpu time ) , it can be reduced to the required fully simplified form . in all tests listed in appendix [ app : a ] the final results of the calculations are either presented in exactly equivalent forms , or are equivalent within some simplification operation requiring a negligible amount of time .
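for readers without access to grtensorii or macsyma , the coordinate - component algorithm being timed in these tests can be sketched with a general purpose computer algebra system . the example below uses sympy and the schwarzschild metric purely for illustration ( it is not one of the test metrics above , and no claim is made about its timing ) , computing the christoffel symbols and the ricci tensor directly from the metric .
....
import sympy as sp

t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, th, ph]
f = 1 - 2 * M / r
g = sp.diag(-f, 1 / f, r ** 2, r ** 2 * sp.sin(th) ** 2)   # schwarzschild line element
ginv = g.inv()
n = len(x)

# christoffel symbols of the second kind
Gam = [[[sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, b], x[c])
                                       + sp.diff(g[d, c], x[b])
                                       - sp.diff(g[b, c], x[d]))
                         for d in range(n)) / 2)
         for c in range(n)] for b in range(n)] for a in range(n)]

# ricci tensor r_bc = d_a gam^a_bc - d_c gam^a_ba + gam^a_ad gam^d_bc - gam^a_cd gam^d_ba
ric = sp.zeros(n, n)
for b in range(n):
    for c in range(n):
        ric[b, c] = sp.simplify(sum(sp.diff(Gam[a][b][c], x[a]) - sp.diff(Gam[a][b][a], x[c])
                                    + sum(Gam[a][a][d] * Gam[d][b][c] - Gam[a][c][d] * Gam[d][b][a]
                                          for d in range(n))
                                    for a in range(n)))

print(ric)   # the zero matrix : schwarzschild is a vacuum solution
....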
we examine the relative performance of algorithms for the calculation of curvature in spacetime . the classical coordinate component method is compared to two distinct versions of the newman - penrose tetrad approach for a variety of spacetimes , and for distinct coordinates and tetrads of a given spacetime . within the system grtensorii , we find that there is no single preferred approach on the basis of speed . rather , we find that the fastest algorithm is the one that minimizes the amount of time spent on simplification . this means that arguments concerning the theoretical superiority of an algorithm need not translate into superior performance when applied to a specific spacetime calculation . in all cases it is the global simplification strategy which is of paramount importance . an appropriate simplification strategy can change an intractable problem into one which can be solved essentially instantaneously .
last paper of the opera collaboration generates a lot of reactions and comments about the observation of the advance of 60.7 ns of the neutrino beam coming from the cern compared to the time of travel given by the speed of light .one of these comments was given by contaldi , explaining the advance by an effect of general relativity for travelling atomic clocks .three errors and misunderstanding by the author of this article are detailed here : one is an angular definition , the second is a missing term and third a misunderstanding of the method used by opera and cern to continuously resynchronize their atomic clocks with gps devices .thus the supposed correction of is shown to be irrelevant .c. r. contaldi in his paper arxiv:1109.6160 uses a polynomial expansion up to the quadrupolar term of the geoid model .this type of geoid model is quite usual except the fact an error was introduced in the angular definition .contaldi states : such a model describes a prolate spheroid ( rugby balloon shape ) , with an equatorial radius smaller than the polar radius ( see fig [ geoidcontaldi ] ) .unfortunately , we knew for few centuries that the surface of the earth does not have this shape ( cf newton 1687 , huyghens 1690 , la condamine , bouguer and godin 1736 , clairaut 1736 , maupertuis 1742 , maclaurin 1742 ) .the equatorial radius of the earth is greater than the radii at the poles .the famous equatorial flatenning of the earth gives at this order of polynomial expansion an oblate shape to the earth .in wgs-84 referential , the adopted value of the flattening of the earth is 1:298.257223563 .thus a correct description of the earth would be : effective gravitational potential of the earth including a centripetal contribution and a quadrupolar contribution ( see eq .2 ) is described .contaldi states : +{{1 \over 2}(\omega_e r)^2 } \end{aligned}\ ] ] a mistake was done in the second term of this expression , as the radius used here is not the distance to the center of the earth but to the axis of rotation of the earth .the centripetal correction to the effective potential should be null at the poles and is maximal at the equator of the earth .this can be corrected inserting a cosine of the latitude ( or ) in the second term of the potential as follows : +{{1 \over 2}(\omega_e r sin(\theta))^2 } \end{aligned}\ ] ] thus , assuming the following latitude for the cern : and for the gran sasso laboratory ( lngs ) : , we find : so while the author of found a variation of the effective potential correction term between cern and lngs , taking into account theses two errors we have computed here a value : , which is twice the value previously found by contaldi .and last but not least , a profound conceptual mistake was done believing that an atomic clock or any timekeeper apparatus was carried in a journey by car or plane between cern to gran sasso as it was switched on and keeping the time during the travel . 
as explained by the opera collaboration the atomic clocks used either at cern or at gran sasso are continuously resynchronized through two gps devices at each location with the satellites .moreover these two gps devices were independently calibrated by metas and ptb for common view mode time transfert .consequently , the variation of potential term applies only for the time correction of the travel of the neutrino itself .as the cern - lngs distance is about 730 km , this time corrections applies to 2.43 millisecond instead of 12 hours + 4 days at rest as mentioned in contaldi s scenario .thus instead of a correction claimed by contaldi in a travel of 12 hours plus 4 days at rest for an atomic clock , we have found a time correction to be applied only on the neutrino travel itself .of course , such a time correction is far away from the sensitivity of the opera experiment , and it can not explain any advance of the detected neutrinos of 60.7 ns as observed by opera .three important errors are found in the paper of c.r .contaldi trying to explain some time corrections in the synchronization of clocks for the neutrino beam travelling from cern to lngs as observed by the opera experiment .one first error is due to a wrong definition of a polar angle related to the latitude in the calculation of the geoid model of the surface of the earth , leading to rugby ball shape instead of an oblate shape with an equatorial flattening .a second error in the paper arxiv:1109.6160 , is due to the missing of a sine term of the polar angle in the centripetal correction term . andthird , a conceptual error appeared in believing that an atomic clock has been carried at the surface of the earth by car or plane during several hours or days between cern and lngs in order to synchronize the atomic clocks . instead of that , the atomic clocks are continuously synchronized thanks to gps satellites at a high level of precision ( see ) .thus the time correction of claimed by contaldi , is reduced to .so the mystery of the neutrino faster the speed of light remains complete .i must thanks j. rich , b. vallage and g. vasseur for fruitful discussions about the measurement of the speed of the neutrino from cern observed at lngs by the opera collaboration .
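the order of magnitude of the correction quoted above can be reproduced with a few lines of python . the constants ( gm , mean radius , j2 , rotation rate ) and the two site latitudes used below are approximate reference values introduced only for illustration , and theta denotes the colatitude so that the centripetal term carries the sin ( theta ) factor discussed above .
....
import numpy as np

GM    = 3.986004418e14      # m^3 s^-2 , earth gravitational parameter ( approximate )
R_E   = 6.371e6             # m , mean earth radius ( approximate )
J2    = 1.0826e-3           # quadrupole ( oblateness ) coefficient ( approximate )
OMEGA = 7.2921e-5           # rad / s , earth rotation rate
C     = 2.99792458e8        # m / s

def eff_potential(colat, r=R_E):
    # effective potential per unit mass : monopole + j2 quadrupole + centripetal term ,
    # with the sin ( colatitude ) factor in the rotational part
    p2 = 0.5 * (3.0 * np.cos(colat) ** 2 - 1.0)
    grav = -GM / r * (1.0 - J2 * (R_E / r) ** 2 * p2)
    rot = 0.5 * (OMEGA * r * np.sin(colat)) ** 2
    return grav + rot

lat_cern, lat_lngs = np.radians(46.23), np.radians(42.45)   # approximate site latitudes
dphi = eff_potential(np.pi / 2 - lat_cern) - eff_potential(np.pi / 2 - lat_lngs)
frac = dphi / C ** 2            # fractional clock - rate difference between the two sites
t_flight = 2.43e-3              # s , cern - lngs neutrino time of flight
print(frac, frac * t_flight)    # the correction applies to the flight time only ,
                                # many orders of magnitude below the 60.7 ns anomaly
....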
we found three mistakes in the article `` the opera neutrino velocity result and the synchronisation of clocks '' by contaldi . first , the definition of the angle of the latitude in the geoid description leads to a prolate spheroid ( rugby ball shape ) instead of an oblate spheroid with the usual equatorial flattening . second , contaldi forgot a cosine of the latitude in the centripetal contribution term . and last but not least , a profound conceptual mistake was made in believing that an atomic clock or any timekeeper apparatus was carried in a journey by car or plane between cern and gran sasso ; instead , the atomic clocks are continuously resynchronized through gps devices , and the variation of the potential term applies only to the neutrino travel itself . thus , instead of the correction claimed by the author for a travel of 12 hours plus 4 days at rest for an atomic clock , we find a time correction that applies only to the neutrino travel itself ! this means that the paper does not give the right explanation of why the neutrino is seen travelling faster than the speed of light in the opera neutrino experiment .
in many realistic systems , dynamics undergo a pronounced slow - down , a feature characteristic of `` complex '' landscapes .such phenomena arise for instance in physical systems ( thermal relaxation ) , in optimisation problems ( diminishing returns on search efforts ) , and in evolutionary dynamics ( punctuated equilibria , i.e. , long periods of stasis in the evolutionary record ) . in this work , inspired by the theoretical framework of statistical physics for glassy systems , we reconsider a simple model of evolutionary dynamics , namely rna secondary structure evolution .we find : ( i ) slow evolutionary dynamics , whereby the time to find an advantageous phenotypic change has a distribution with a fat ( power - law ) tail ; ( ii ) non - self - averaging behavior , _i.e. _ , even for long rna molecules , directional selection for some targets will lead to significantly slower dynamics than for other targets ; ( iii ) only weak out - of - equilibrium effects : the genotypes visited by an evolutionary trajectory are quite similar to those arising in equilibrium under stabilizing selection , so except near the ends of periods of stasis , the genotypes produced during an evolutionary trajectory do not have anomalously high or low mutational robustness .the paper is organised as follows . in sect .[ sect : rnaevolution ] we introduce the evolutionary model . in sect .[ sect : slow ] we exhibit the non - exponential nature of the relaxation process ; empirically , relaxation seems to follow an inverse power law .we also show that the relaxation curves remain sensitive to the target used for directional selection even in the limit of very long molecules . finally in sect .[ sect : innovation ] we give evidence that at nearly all times the evolutionary trajectory is in quasi - equilibrium ; more precisely , the `` innovative '' genotypes produced by a transition to a new period of stasis are only a bit different from random genotypes as measured by the phenotypic effects of mutations ; furthermore , during the periods of stasis , the initial genotype seems to be quickly forgotten , so no significant trace of innovation seems to be maintained .concluding remarks are given in the final section .for the purpose of this study , rna molecules will be thought of as chains or strings of nucleotides , taken from an alphabet of four possible nucleotides ( a , c , g and u ) .chemically , the bases can pair via hydrogen bonds .in addition to watson - crick pairings ( a - u and g - c ) , the u - g pairing is also possible , though it is weaker .the pairings between bases give rise to an rna secondary structure , as illustrated in fig.[fig : structure ] .apart from this graphical representation , the secondary structure of an rna molecule can be specified by the more convenient dot - bracket notation where a non - paired base of the sequence is denoted by a dot `` . '' and a paired base is denoted by a left or right parenthesis .this representation allows one to reconstruct the pairings as long as the secondary structure is `` planar '' .( planarity means that if the bases are positioned on a line and the pairings are represented by arcs between the bases , these arcs can be drawn in the plane without any crossings . )an rna molecule will spontaneously ( _ i.e. _ , via thermodynamic forces only ) fold into the structures with lowest free energies . 
to simulate this folding in silico, we use the `` vienna rna package '' to find , for any given sequence , the pairings ( secondary structure ) which lead to the _ minimum _ free energy . in effect, this procedure produces a map from sequences ( genotypes ) to secondary structures ( phenotypes ) .the cpu time needed to determine a minimum free energy structure is for a chain of bases ., width=226,height=302 ] all genotypes that have the same phenotype form a _neutral network _ .more precisely , given a genotype to phenotype map , the neutral network associated with the phenotype is a graph whose nodes are all the genotypes having that phenotype ; the edges of that graph connect genotypes only if they are nearest neighbors .it is thus necessary to introduce a notion of neighborhood in genotype space . by convention , two rna genotypes ( sequences )will be considered as nearest neighbors if and only if they differ by a single base .it is also possible to define a distance between two _ phenotypes_. in our case , we take the distance between two rna secondary structures and to be times the hamming distance between their dot - bracket string representations . from this , one can introduce a fitness landscape , where genotype and phenotype spaces both have a distance metric , and where each phenotype can be assigned `` fitness '' .we take the fitness of a phenotype to be a monotonically decreasing function of its distance to some `` target '' phenotype . for our purpose, we will extend the notion of a neutral network to that of a _neutral ensemble _ : we call neutral ensemble the union of all neutral networks whose corresponding phenotype has a given fitness , regardless of the specific phenotype .this definition reflects that fact that secondary structures are relevant in the evolutionary search mainly via their distance to .rna sequences with their associated fitness form a fitness landscape with hills , valleys and passes ( saddles ) .analogous landscapes also arise in other systems ; for example spin glasses are physical materials in which energy is often identified with fitness ; the landscape s ruggedness and many valley structure are important for the associated dynamical properties .we take the unit of time to be the expected waiting time between two mutations .thus if is the mutation rate per rna molecule and per generation , our unit of time is . for our evolutionary dynamics, we simulate the process of rna evolution towards a target structure by allowing a genotype to change by a point mutation at each unit of time . the directional selection associated with this evolutionary search then proceeds as follows . at each step , the genotype is mutated at one base taken at random ; the corresponding phenotype ( secondary structure for that genotype ) is determined , and selection is applied : if the distance to the target structure has increased , the mutation is refused and the previous genotype is reinstated , otherwise the new genotype is accepted .this process is usually referred to as `` blind ant '' dynamics or as an adaptive walk .( if one were to keep track only of the accepted moves that are neutral , _i.e. _ , that do not change the distance to the target phenotype , one would have a random `` neutral '' walk performing myopic ant dynamics . ) in practice , we continue the random walk until the target structure is reached , or until a maximum number of trial mutations is reached . 
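the adaptive walk described above can be sketched directly on top of the python bindings of the vienna rna package , where rna.fold returns the minimum free energy dot - bracket structure of a sequence . the sequence length , target structure and bookkeeping of stasis periods below are illustrative choices , and the phenotypic distance is taken as the raw hamming distance between dot - bracket strings , i.e. the distance of the text up to normalisation .
....
import random
import RNA   # python bindings of the vienna rna package

BASES = "ACGU"

def phenotype(seq):
    structure, _mfe = RNA.fold(seq)      # minimum free energy secondary structure
    return structure

def distance(s1, s2):
    # phenotypic distance : hamming distance between dot - bracket strings
    return sum(a != b for a, b in zip(s1, s2))

def blind_ant_walk(target, max_steps=200000, seed=0):
    rng = random.Random(seed)
    n = len(target)
    seq = "".join(rng.choice(BASES) for _ in range(n))   # random initial genotype
    d = distance(phenotype(seq), target)
    waiting_times, last_improvement = [], 0              # lengths of the stasis periods
    for t in range(1, max_steps + 1):
        i = rng.randrange(n)
        trial = seq[:i] + rng.choice(BASES.replace(seq[i], "")) + seq[i + 1:]
        d_new = distance(phenotype(trial), target)
        if d_new <= d:                                   # accept neutral or improving mutations
            if d_new < d:
                waiting_times.append(t - last_improvement)
                last_improvement = t
            seq, d = trial, d_new
        if d == 0:
            break
    return seq, d, waiting_times

# illustrative run towards a short hairpin target
print(blind_ant_walk("((((....))))....", max_steps=5000)[1:])
....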
in our work , we allow only neutral or improving moves in the landscape , which corresponds to zero temperature dynamics in physical systems , or hill climbing in optimisation theory .the trajectories are stochastic and are influenced both by the initial genotype chosen ( e.g. an arbitrary rna _ sequence _ ) , and by the target ( an arbitrary _ secondary structure _ ) .we thus need to average over many trajectories , and consider the dependence of our results on these choices . in a more general context, one can consider the evolution of a _ population _ in the fitness landscape .if is the mutation rate and the population size , then the effective number of genotypes in a population scales approximately linearly with .when , the population remains essentially monomorphic , and this is the case we focus on in this work .if instead were large , the evolutionary dynamics would occur in a polymorphic population with many different genotypes . to test whether the out - of - equilibrium evolutionary dynamics generates atypical genotypes, one must define the `` null hypothesis '' ; clearly we want to compare to _ random _ genotypes in the fitness landscape .this means that we need to sample _ uniformly _ the fitness landscape for each given distance to the target phenotype ; this means we focus on the genotypes that have a given fitness .the algorithmic procedure to do so is to start with any genotype in the fitness landscape with the specified fitness and then produce a long random walk with importance sampling using the metropolis monte carlo algorithm . in our rna neutral ensemble context , this simply corresponds to using the blind ant dynamics , accepting only the mutations that do not change the distance to the target ; this is then identical to the dynamics under stabilizing selection . from this sampling, we shall obtain equilibrium averages in this space .we will be particularly interested in the mean mutational robustness , where the mutational robustness of a genotype is defined as the fraction of the single - base mutations that do not change the fitness ; this is the same thing as the number of neighbors of this genotypye that belong to the neutral ensemble divided by its total number of neighbors .consider a typical evolutionary trajectory starting from a random initial genotype . at the beginning, there is a high frequency of advantageous mutations , so the phenotypic distance to the target structure initially decreases fast .but at long times , as first realized in , the frequency of favorable mutations becomes small and long periods of `` stasis '' appear where the fitness remains constant .this is illustrated for two typical evolutionary trajectories in fig .[ fig : trajectory ] .successive plateaus in fitness are separated by small changes in the hamming distance to the target : the steps decrease this distance by 1 or 2 units typically , rarely more than that .also , the time of a stasis period typically increases as the distance to the target decreases : one can speak of diminishing returns for the effects of mutations in approaching the target . to claim that the convergence to the target is particularly slow ,it is appropriate to have a comparison benchmark . 
for this , consider the situation where directional selection is for a target genotype instead of a target phenotype .just as for phenotypic distances , we define the genotypic distance of two genotypes and as where is the hamming distance bewteen the two strings defining the sequences for and .if is then the distance between the _ current _ genotype and the target genotype , a mutation has a probability to produce a strictly better genotype .it follows that the convergence to the target is exponential in time : consider now the times when a favorable mutation arises ; a transition between one stasis period and the next generates such an event which diminishes by .if is the density of these times , we have we then obtain .this shows that selection for a target genotype leads to fast ( exponential ) relaxation .the situation is completely different in our system where selection is instead for a target phenotype .guided by the above analysis , we have determined the corresponding distribution in this case .[ fig : fat_tails ] shows the result when averaging over target and initial genotypes at .this distribution has a very clear fat tail which seems compatible with a power law .this same behavior is observed for the other values of tested ( data not shown ) .one can thus say that convergence to the target under phenotypic selection is `` slow '' , much slower than when compared to genotypic selection .this difference between the selection types can also be seen by looking at the average distance to the target as a function of evolutionary time . in landscapes associated with disordered systems such as glasses or spin glasses ,the relaxation processes encountered typically have non - exponential dynamics .empirically , two families of functions have been used to perform fits : the stretched exponential family which for our purposes corresponds to $ ] , and the shifted power law family for which for small , both types of fits lead to satisfactory results , but at larger , the data favor a shifted power law behavior . in fig .[ fig : loglogaveragetrajectory ] , we show that decreases rather slowly , roughly as an inverse power of time .this leads to a nearly straight line on a log - log plot ; also shown in that figure are the fits to shifted power laws .this overall behavior should be contrasted with the law ( cf .[ eq : dg_exp ] ) found when using selection for a target genotype : there the approach to the target was much faster ., width=415,height=302 ] to further analyse the slow approach towards a target structure , we investigated how the evolutionary dynamics changes with the target . first , we ran simulations for genotypes of length 100 for different targets . in these simulations , for each target , we averaged the relaxation curves for different randomly generated initial genotypes .[ fig : diff60tar ] illustrates how the relaxation is different for different target structures .we conclude that there is slow dynamics whose speed depends on the target structure , a conclusion that holds for all the values of the chain length we have investigated .although we can not deal with arbitrary because of computational limitations , it is nevertheless relevant to ask whether this dependence on targets survives for arbitrarily large .it seems possible that the relaxation behavior is not self - averaging , _i.e. _ , that fluctuations associated with different targets do not become negligible when grows . 
to test this , we carried out simulations with multiple targets for different lengths of rna chains , to determine whether the relaxation curves have smaller dispersion for the different targets when the chain length increases .we carried out simulations for thirty different targets , and averaged the relaxation curve for each target over 1000 evolutionary trajectories with random initial genotypes .we then measured the standard deviation of the relative distance to the target phenotype at times where the _ mean _ distance ( averaged over all curves ) to the target was percent of the chain length .data are summarised in the following table . [ cols="<,<,<",options="header " , ] here denotes the mean time at which the average distance to the target phenotype reaches the value , being the chain length .the simulations were quite time - consuming which prevented us from testing lengths larger than 120 .however , from the table we see that the dispersion decreases initially , but then remains practically unchanged .the relaxation process towards different targets therefore is compatible with a non self - averaging behavior .[ fig : rmu_two ] the transition from one period of stasis to the next is due to an evolutionary `` innovation '' .it is appropriate to ask whether the associated transition genotypes can be considered to be atypical according to some measurable quantity .here we shall study the _ short term evolvability _ of the entry genotype in a period of stasis and compare it to that of _ random _ genotypes at the same distance to the target . for a given genotypewe define its short - term evolvability as the fraction of single base mutations that lead to a phenotype with higher fitness .we found that the mean fraction of such beneficial mutations is approximately 40% _ lower _ for the entry genotype than for genotypes randomly sampled from the neutral ensemble .thus with our definition , the entry genotype has an atypically low evolvability .since it is close to a neutral ensemble of less fit phenotype , this result is expected at a qualitative level .however , we also find that the entry genotype has a smaller fraction of deleterious mutations than the neutral ensemble average .( these mutations produce a phenotype with increased distance to the target . )this result should be contrasted with what is naively expected .indeed , by construction the entry genotype has one particular mutation which is known to be deleterious ( taking it back to the previous plateau ) . neglecting all other effects, one would predict that on average the entry genotype would have its fraction of deleterious mutations be above that of genotypes in the neutral ensemble .instead , the effect is four times larger and in the opposite direction . during evolution on a plateau ,one goes from the entry genotype ( which we just saw is atypical according to some objective measure ) to more random genotypes : to some extent , one looses the memory of the entry genotype through successive mutations .is it a slow change of genotypes that is responsible for the stasis periods ? to address this question , we monitored the distance between a mutating genotype and the entry genotype of the corresponding period of stasis . because is moderately large in our simulations ,the initial growth in distance is linear in the number of accepted mutations . at larger times , we see a saturation effect caused by multiple substitutions , as displayed in fig . 
[fig : meangeno ] .overall , the evolutionary dynamics on the neutral ensemble shows that even in the absence of phenotypic change , genotypes diffuse on a neutral ensemble at a rate not much slower than if their diffusion was not constrained by this set , until they discover a new phenotype closer to the target . to understand the expected dynamics in the absence of selection consider that a given site mutates with probability at each step , so the relaxation time scales as . at long timesone approaches the average distance .this is the behavior , translated mathematically in the jukes - cantor formula , that the uppermost curve in fig.[fig : meangeno ] represents . in summary ,because of the intricate relation between genotype and phenotype , there can be slow dynamics in the approach to the target phenotype despite rapid evolutionary change of genotypes .in fact , as displayed in fig . [fig : meangeno ] , the rate of change of genotypes does not seem to be significantly different when comparing easy and difficult targets for the directional selection .mappings from genotypes to phenotypes play a central role in biology , from the molecular scale up to whole organisms . working at the level of rna allowed us to use a framework for such mappings that is not biologically arbitrary , even though it is clearly idealised .one of this mapping s main advantages is that it is computationally tractable . within this mapping, we showed a rich phenomenology of the evolutionary dynamics towards an optimum phenotype : ( 1 ) the `` relaxation '' towards the target undergoes severe slowing down as the target is approached ; ( 2 ) this slowing down gives rise to stasis periods with fat tails , typical of what is expected in complex fitness landscapes ; ( 3 ) the relaxation curves remain sensitive to the choice of the target , even in the limit of long rna sequences ; ( 4 ) the diffusion in genotype space during the periods of stasis is not slow , in obvious contrast with what happens at the level of phenotypes .we observed that the probability of generating a favorable mutation decreases severely as one approaches the optimum , a property of the fitness landscape itself , _i.e. _ , the fraction beneficial mutations goes down in this limit , much faster than in systems undergoing exponential relaxation . why this leads to inverse power laws remains open , just as in many other landscape problems coming from other fields .furthermore , because the stasis periods are long , the few innovative genotypes appearing in an evolutionary trajectory represent a tiny fraction of the whole , and thus most genotypes visited have little trace of the out - of - equilibrium dynamics .99 s. gavrilets , _ fitness landscapes and the origin of species _ , princeton university press , 2004 a. p.young , _ spin glasses and random fields _ , world scientific publishing , 1998 h. h. hoos , t. sttzle , _ stochastic local search : foundations and applications _ , elsevier , 2005 n. eldredge , j. n. thompson , p. m. brakefield , s. gavrilets , d. jablonski , j. b. c. jackson , r. e. lenski , b. s. lieberman , m. a. mcpeek , w .miller iii , _ the dynamics of evolutionary stasis _ , paleobiology v. 31 ; no .2_suppl , june 2005 m. huynen , p. stadler w. fontana : _ smoothness within ruggendess : the role of neutrality _ , proc .usa , vol .93 , 397 - 401 ( 1996 ) .w. fontana , p. schuster : _ slow evolutionary dynamics of rna structures and evolvability _ , science 29 , vol .5368 , pp .1451 - 1455 ( 1998 ) w. fontana , p. 
schuster , _ shaping space : the possible and the attainable in rna genotype - phenotype mapping _, j. theor .biology , vol .194 , issue 4 , 491 - 515 ( 1998 ) gesteland , r.f ., atkins , j.f .( ed . ) : _ the rna world _ , cold spring harbor laboratory press , 1993 http://www.tbi.univie.ac.at/~ivo/rna/ [ http://www.tbi.univie.ac.at/~ivo/rna/ ] w. fontana p. f. stadler , e. g. bornberg - bauer , t. griesmacher , i. l. hofacker , m. tacker , p. tarazona , e. .d weinberger , p .schuster , _ rna folding and combinatory landscapes _ , phys .e 47 , 1993 a. wagner , _ robustness and evolvability : a paradox resolved _ , proc ., vol 275 , 91 - 100 , 2008 p. f. stadler , _ landscapes and their correlation functions _ j.math.chem .20 : 1 - 45 , 1996 c. monthus , j .- p .bouchaud , _ models of traps and glass phenomenology _ , j. phys .29 , 3847 - 3869 , 1996 j .- p .bouchaud , _ weak ergodicity breaking and aging in disordered systems _ , j. phys .i ( france ) vol . 2 , 1705 - 1713 , 1992 b. d. hughes , _ random walks and random environments _ , oxford : clarendon , 1996 s. kauffman , s. levin , _ toward a general theory of adaptive walks on rugged landscapes _, j. theor . biol .1987 , 128 , 11 - 45 e. van nimwegen , j. p. crutchfield , m. huynen , _ neutral evolution of mutational robustness _ ,usa , vol .96 , 9716 - 9720 , 1999 k. binder , d. w. heermann,_monte carlo simulation in statistical physics : an introduction _ , springer , 2002 m. zuker , p. stiegler , _ optimal computer folding of large rna sequences using thermodynamics and auxiliary information _ ,nucleic acids res . , jan 10;9(1):133 - 48 , 1981 m. kimura , t. ohta , _ on the stochastic model for estimation of mutational distance between homologous proteins _, j. mol .evol . , 87 - 90 , 1972 m. a. huynen , p. stadler , w. fontana , _ smoothness within ruggedness : the role of neutrality in adaptation _ , proc .usa , vol . 93 , 1996 m. kirschner , j. gerhart , proc . natl. acad .evolvability _ , vol .95 , 8420 - 8427 , 1998 l. a. meyers , f. d. ancel , m. lachmann , _ evolution of genetic potential _ , plos computational biology , vol . 1 , 2005 m. c. cowperthwaite , l. a. meyers , _ how mutational networks shape evolution : lessons from rna models _ , annu .
we re - examine the evolutionary dynamics of rna secondary structures under directional selection towards an optimum rna structure . we find that the punctuated equilibria lead to a very slow approach to the optimum , following on average an inverse power of the evolutionary time . in addition , our study of the trajectories shows that the out - of - equilibrium effects due to the evolutionary process are very weak . in particular , the distribution of genotypes is close to that arising during equilibrium stabilizing selection . as a consequence , the evolutionary dynamics leave almost no measurable out - of - equilibrium trace , only the transition genotypes ( close to the border between different periods of stasis ) have atypical mutational properties .
few bodies problems have been studied for long time in celestial mechanics , either as simplified models of more complex planetary systems or as benchmark models where new mathematical theories can be tested .the three body problem has been a source of inspiration and study in celestial mechanics since newton and euler , in particular the restricted three body problem ( r3bp ) has demonstrated to be a good model of several systems in our solar system such as the sun jupiter asteroid system , and with less accuracy the sun earth moon system , in these systems the r3bp was used to know preliminary orbits in some space missions . in analogy with the r3bp , in this paperwe study a restricted problem of four bodies consisting of three primaries moving in circular orbits keeping an equilateral triangle configuration and a massless particle moving under the gravitational attraction of the primaries .it is known that in our solar system we can find such configurations , the so called trojan asteroids of jupiter , mars and neptune form approximately an equilateral configuration with their respective planet and the sun , saturn tethys telesto , saturn tethys calypso or saturn dione helen are good examples of such configuration .several authors , , have considered the restricted four body body problem to model the dynamics of a spacecraft in the sun - jupiter - asteroid - spacecraft system .+ + g. w. hill developed his famous lunar theory as an alternative approach for the study of the motion of the moon .as a first approximation , this approach consider a kepler problem ( earth - moon ) with a gravitational perturbation produced by a far away massive body ( sun ) , some orbital elements such as the eccentricities of the orbits of the moon and the earth and the inclination of the moon are supposed to be zero .previously to the hill s work , the approach to study the dynamics of the moon consisted on considering two kepler problems , one for the motion of the earth and the moon around their center of mass and other for the motion of the sun and such center of mass .however , this approach had several difficulties because of the solutions were given in terms of formal power series of orbitals elements , the principal inconvenience was due to the poor convergency of these series in terms of the ratios of the mean motions of the earth and the moon , the so called critical parameter .the success of the hill s approach was given by using his model to obtain a periodic orbit of the trajectory of the moon and then he included the orbital elements to correct it , in such a way , he avoided the computation of expansions in terms of the critical critical parameter . in a four body problem context , the smallness of one primary creates complicated equations of motion where an analytical study is extremely difficult to make and even there are technical inconveniences in the accuracy of numerical simulations . in the next sections we develop a model as a first approximation of the dynamics of a masses particle in a sun - planet - asteroid system , as possible applications of this model we can consider the massless body like a spacecraft or a small satellite like the moon of the trojan asteroid 624 hektor . 
in future works we may include relevant effects produced by inclinations and librations of the asteroids or perturbations due to other bodies for example .consider three point masses , called , moving in circular periodic orbits around their center of mass under their mutual newtonian gravitational attraction , forming an equilateral triangle configuration .a fourth massless particle is moving under the gravitational attraction of the primaries , this problem is known as the equilateral restricted four body problem or simply as the restricted four body problem ( r4bp ) .the equations of motion in the usual dimensionless coordinates of the massless particle referred to a synodic frame of reference , where the primaries remain fixed , are : , width=240 ] where and , for .the general expressions of the coordinates of the primaries in terms of the masses of the three primaries are given by }{2k\sqrt{m_{2}^{2}+m_{2}m_{3}+m_{3}^{2}}},\ ] ] where and the three masses satisfy the relation .it can be proved that the equations of motion have a first integral where is a constant .it is worth noting that when we make and we recover the coordinates of the restricted three body problem ( r3bp ) , and , now the `` phantom '' mass is located in the so called equilibrium point of the r3bp . in the following, it will be necessary to consider the hamiltonian of the system this section we will discuss the how to compute the limit when for the r4bp .we use a similar procedure as shown in by considering a symplectic scaling of the hamiltonian and expansions in taylor series in a neighborhood of the small mass .the resulting hamiltonian will be a three degrees of freedom system depending on a parameter which is the mass of the primary .the limit of the hamiltonian ( [ originalhamiltonian ] ) restricted to a neighborhood of exists and gives rise to a new hamiltonian where and . _proof_. we consider the hamiltonian of the restricted four body problem ( r4bp ) in the center of mass coordinates where and denotes the position of the primary for .we make the change of coordinates , , , , , , therefore in these new coordinates the hamiltonian ( [ originalhamiltonian ] ) becomes where now we have for .we expand the terms and in taylor series around the new origin of coordinates , if we ignore the constant terms we obtain the following expressions where is a homogenous polynomial of degree for in order to take the limit as , we perform the following symplectic scaling , , , , with multiplier , therefore a straightforward computation shows where it is important to note that the the first partial derivative is given by for therefore we obtain and now if we recall that the three masses are in equilateral configuration and we use the relation we obtain \ ] ] ,\ ] ] in terms of the coordinates of the primaries ( [ coordinatesprimaries ] ) , we can write =-m_{3}^{2/3}m_{2}s_{1}(m_{1},m_{2},m_{3})+m_{3}^{2/3}y_{3},\ ] ] where and a similar computation shows that the coefficient ] , the case where corresponds to the equal massive bodies case . 
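to make the rotating - frame equations of motion and the first integral concrete , the sketch below integrates the equilateral restricted four body problem numerically , assuming the standard dimensionless formulation ( total mass one , unit triangle side , unit angular velocity ) in which x'' - 2y' = omega_x and y'' + 2x' = omega_y with omega = ( x^2 + y^2)/2 + sum_i m_i / r_i , and checks conservation of the jacobi - type integral c = 2 omega - ( x'^2 + y'^2 ) . the masses , initial condition and integration span are illustrative choices , not values from the text .
....
import numpy as np
from scipy.integrate import solve_ivp

def primaries(m1, m2, m3):
    # vertices of a unit - side equilateral triangle , shifted so that the
    # centre of mass of the three primaries sits at the origin
    verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3.0) / 2.0]])
    m = np.array([m1, m2, m3])
    return verts - (m[:, None] * verts).sum(axis=0) / m.sum()

def rhs(t, s, m, P):
    x, y, vx, vy = s
    ax, ay = x + 2.0 * vy, y - 2.0 * vx          # centrifugal and coriolis terms
    for mi, (xi, yi) in zip(m, P):
        r3 = ((x - xi) ** 2 + (y - yi) ** 2) ** 1.5
        ax -= mi * (x - xi) / r3
        ay -= mi * (y - yi) / r3
    return [vx, vy, ax, ay]

def jacobi(s, m, P):
    x, y, vx, vy = s
    omega = 0.5 * (x * x + y * y) + sum(mi / np.hypot(x - xi, y - yi)
                                        for mi, (xi, yi) in zip(m, P))
    return 2.0 * omega - (vx * vx + vy * vy)

m = np.array([0.95, 0.04, 0.01])        # illustrative masses with m1 + m2 + m3 = 1
P = primaries(*m)
s0 = [0.8, 0.6, 0.0, 0.0]
sol = solve_ivp(rhs, (0.0, 50.0), s0, args=(m, P), rtol=1e-10, atol=1e-12)
print(jacobi(sol.y[:, 0], m, P), jacobi(sol.y[:, -1], m, P))   # should agree closely
....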
*it will proved in the next section that this system have 4 equilibrium points in a neighborhood of and such equilibrium points will posses the same stability properties as in the full r4bp when is small but non zero .now the gravitational and effective potential are respectively .the equations of motion can be written as in the full problem but is given by the equation ( [ hillefectivepotential ] ) .in this section we prove that the system has 4 equilibrium points and we will be able to compute them explicitly in terms of the mass parameter .so , in order to find the equilibrium points of the limit case , as usual , we need to find the critical points of the effective potential ( [ hillefectivepotential ] ) , an easy computation shows that the equation implies that so the equilibrium points of the system are coplanar .therefore , it is enough to study the critical points of the planar effective potential in matrix notation . where and is the matrix the above matrix has eigenvalues with respective eigenvectors and where .the eigenvectors have been chosen such that .the equation to be solved is , or explicitly we can use the invertible matrix to solve the above equation , if we consider the linear change of variables , substitute in the equation ( [ gradient ] ) and multiply by , we obtain or equivalently where is given by the diagonal matrix in terms of coordinates the equation ( [ gradientdiagonal ] ) is equivalent to the system it is clear that in the above equations the case corresponds to a singularity and the case , gives rise a contradiction , therefore when we have or equivalently {\lambda_{1}}\vert v_{1}\vert},\ ] ] on the other hand , when we have or equivalently {\lambda_{2}}\vert v_{2}\vert},\ ] ] but , therefore we obtain four equilibrium points given by {\lambda_{2 } } } ) , l'_{2}=(0,-\frac{1}{\sqrt[3]{\lambda_{2 } } } ) , l'_{3}=(\frac{1}{\sqrt[3]{\lambda_{1}}},0 ) , l'_{4}=(-\frac{1}{\sqrt[3]{\lambda_{1}}},0),\end{aligned}\ ] ] or in the original coordinates we have for .it is easy to see that {\lambda_{2}}}v_{2 } , l_{3}=\frac{1}{\sqrt[3]{\lambda_{1}}}v_{1},\end{aligned}\ ] ] and , . in the previous subsection we obtained explicit expression of the four equilibrium points in terms of the parameter so, we can analyze the stability in the whole range ] so the equilibrium point is unstable for this range of values of the mass parameter , in fact , the eigenvalues are given by and with and .there exists a value such that , as a consequence , this equilibrium point has the following properties : for the eigenvalues are and , for we have a pair of the eigenvalues of multiplicity 2 , finally when $ ] the eigenvalues are with and . in the figure [ limithillregions ]we show the so called hill s regions for the planar case and for that corresponds to mass ratio of the sun - jupiter system , in the first two figures of the first row we show the hill s regions for the limit problem and for the full r4bp when , the mass ratio of the asteroid 624 hektor , the lines in the second figure are imaginary lines that connect with the remaining masses .we have marked the position of the fixed mass with a black dot and the positions of the four equilibrium points with red dots .99 baltagiannis , a.n . ,papadakis , k.e . ; periodic solutions in the sun - jupiter - trojan asteroid - spacecraft system .planetary and space science . * 75 * , 148157 ( 2013 ) .ceccaroni m. , biggs j. 
; extension of low-thrust propulsion to the autonomous coplanar circular restricted four body problem with application to future trojan asteroid missions. in: 61st int. congress iac 2010, prague, czech republic (2010). hill, g.w.; researches in the lunar theory. american journal of mathematics * 1 *, 5-26 (1878). marchis, f., et al.; the puzzling mutual orbit of the binary trojan asteroid (624) hektor. astrophysical journal letters, apj * 783 *, l37. meyer, k.; introduction to hamiltonian dynamical systems and the n-body problem. springer verlag.
the restricted four body problem studies the dynamics of a massless particle under the gravitational force produced by three masses (primaries) in an equilateral configuration. one of the primaries is assumed to be much smaller than the other two. in a similar way as in the classical hill's problem, we study the corresponding limit case in the hamiltonian of the r4bp. in this paper we prove that this limit exists and that the resulting limit problem produces a new hamiltonian which inherits some basic features of the restricted three and four body problems. we analyze some dynamical aspects of this new system, which can be considered a generalization of hill's problem. * keywords: * four body problem, hill's problem, equilibrium points, stability, trojan asteroids. * ams classification: * 70f10, 70f15
the double pendulum has been studied as an example of chaotic motion in physics. if we consider only the first cycle of the pendulum, the double pendulum model finds application in sports such as golf, baseball, and tennis. since the double pendulum is not a simple linear system, the motion of the pendulum cannot be optimized in a simple analytic form. the swing pattern used to maximize the angular velocity of the hitting rod, such as a racket, bat, or club, has been analyzed on the assumption that the angular velocity is the dominant factor for the speed of the rebound ball. for a tennis stroke, the two rods of the double pendulum are the arm and forearm for the first rod and the racket for the second rod. in our model, we added the collision process between the ball and the racket. without the collision process, there is no criterion for attaining a high speed of the rebound ball other than the angular speed of the second rod, the racket. if we fix the impact angle of the first rod at which the ball hits the racket, the speed of the rebound ball depends mainly on the angular velocity of the second rod. on the other hand, if we release the impact angle of the first rod, the speed of the rebound ball is no longer a simple function of the angular speed of the hitting rod. we showed that the speed of the rebound ball can differ even when the angular velocities of the second rod at the contact time are the same. furthermore, considering the whole stroke, the maximum angular velocity for the lower speed of the rebound ball can be greater than the maximum angular velocity for the higher speed of the rebound ball. therefore, to obtain the maximum speed of the rebound ball, it is not sufficient to set the conditions that generate a high angular velocity of the hitting racket. the collision between the racket and the ball has been studied in various ways. in our simple collision model, we assumed that the racket is a simple one dimensional rod without any nodal motion and that the collision occurs in one dimension. this assumption is valid if the racket and the ball move along the same line for the short collision time. although our model does not give detailed information about the effect of the string tension or the mass distribution of the racket on the collision, it provides some insight into the proper swinging pattern for obtaining the maximum speed of the rebound ball. in this article, we also analyzed the time lagged torque effect for the double pendulum system. by applying time independent constant torques on the first rod and the second rod, the speed of the rebound ball can be calculated for given initial conditions. for the same conditions, if we simply hold the racket for a short time without applying a torque and only then apply it, naive intuition suggests a decrease in the speed of the rebound ball, since less energy is supplied to the double pendulum system. however, the speed of the rebound ball increases by choosing the proper delay time. this is mainly because the double pendulum system is not a simple linear system: adding energy to the double pendulum system does not directly increase the speed of the rebound ball. we also analyzed the movement of the elbow to which the first rod of the double pendulum is attached.
at a first glance ,if we add extra movement towards the rebound ball s direction at contact time , the speed of the ball increases ; however , with varying results .apparently , it becomes clear that , the double pendulum system is really a nonlinear system .the present paper is organized as follows : in section ii , we introduce a double pendulum system including the collision process . in section iii ,the differential equations obtained in section ii are solved , we numerically showed that the speed of the rebound ball does not simply depend on the angular speed of the racket .for some cases , the higher angular velocity gives lower speed of the rebound ball . in section iv, we analyzed the dependence of the racket mass and length of the first rod .the general properties of swing system has been demonstrated .when we applied a time dependent torque on the racket , we could increase the speed of the rebound ball .the time lagged rotation of the racket was analyzed in section v. in section vi , the elbow movement is analyzed to add additional speed to the ball . in section vii ,we summarize the main results and discuss the application of our results .the geometry of the double pendulum model for the swing of a racket is shown in fig [ figracket]-(a ) .though , this geometry is for the left - handed player if we see from the direction , it s originally related to the real double pendulum problem in the gravitational field . our basic model and some notationsare closely related with those in work reported by rod cross .the elbow moves in the plane and the arm and the racket also moves in plane .we also modeled the arm and the forearm as a simple uniform rod with mass and length .the racket including the hands is also treated as a uniform rod with mass and length . the arm and the racket rotate in a clockwise direction in a plane at angular velocities and , respectively .if we assume that the velocity of the elbow is then the velocities of the center of the first rod ( arm and forearm system ) and the second rod ( hand and racket system ) becomes , where , are the velocity of the center of the mass of the first rod and the second rod , respectively .the center of masses are located in the middle of the rod , and , since we assumed uniform rods . ,and the length of the racket is .the angles and are defined from the direction .( b ) the force on two rods . is the force acted on the first rod at the joint between elbow and the first rod . is the force acted on the second rod at the joint between the two rods . 
is the force on the second rod by a ball ., width=188 ] let the force from the elbow to the first rod on joint point between the elbow and the first rod be , and the force from the first rod on joint point between two rods be , then the equations of the motion for two center of masses become the two forces are reaction force of from the second rod .we added the force from the ball to the second rod at the point of from the center of the mass .this force from the collision between the racket and the ball changes the torque equation in and the two torque equations becomes , where is the torque on joint point between the elbow and the first rod and is the torque on the joint point between the two rods .from the eqs .[ eqf1 ] - [ torque1 ] , we obtain two equations for the time derivative of two angular velocities as follows where , , and \nonumber \\ q & = & c_2 + m_2 h_2 l_1 \omega_1 ^2 \sin ( \phi-\theta ) + m_2 h_2 [ a_x \cos \phi + a_y \sin \phi ] \label{eqpq}\end{aligned}\ ] ] and are similar to the results in with setting , and the collision force from the ball add following two terms \nonumber \\ s_2 & = & f_{col } [ ( 2 h_h ( i_{1,cm } + m_1 h_1 ^2 + m_2 l_1 ^2 ) \nonumber \\ & + & h_2 ( 2 i_{1,cm } + 2 m_1 h_1 ^2 + m_2 l_1 ^2 ) ) \cos \phi - h_2 m_2 l_1 ^2 \cos ( 2 \theta - \phi ) ] \label{eqs12}\end{aligned}\ ] ] the collision force defines the motion of the ball as follow , , where is the component of the hitting part of the racket which locates from the bottom of the racket , and is the component of the joint between the elbow and arm . the force between the racket and the ball should be repulsive , so that the force form becomes if we consider this harmonic force between the ball and the second rod , the period of the oscillation is since the half of this period is the collision duration between the ball and the racket , we controlled the value from to .subsequently , the collision time varies from to .however , if the contraction length is large , the hook s model is not valid for the ball and the racket system . then the force can be rewritten as . or in order to limit the maximum contraction length .however , in our numerical calculation we restricted our system in order to follow the hook s rule . in this article, we assumed a simple model for the ball and racket collision . in our model ,the ball is assumed to hit the racket when the racket is parallel to axis ( ) .in addition to this , the ball is assumed to be moving in axis . in this case, we numerically calculated and till the time , when the racket is parallel to the axis . with these numerical results ,we set new initial conditions just before the collision .we assumed that the collision occurs at , where is the period of the harmonic oscillator system between the racket and the ball . at this time, we numerically solved the ball and two rod system attached to the moving elbow numerically till the ball is rebound from the racket .we determined the speed of the ball at that moment . 
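The full model couples the two-rod dynamics, the prescribed elbow motion, the applied torques, and the spring-like ball contact. The following is a stripped-down, self-contained sketch of that pipeline, not the authors' code: the elbow is held fixed, gravity is ignored, the torques are constant, and the ball-racket contact is a one-dimensional linear spring acting along x only, with the impact taken when the racket is parallel to the y-axis, as in the procedure described above. The sign convention for the joint torque (it acts on the racket and reacts on the forearm) and all parameter values are assumptions for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not the paper's exact numbers).
m1, L1 = 2.8, 0.75            # arm + forearm rod: mass [kg], length [m]
m2, L2 = 1.0, 0.75            # hand + racket rod
m_b    = 0.057                # ball mass [kg]
k      = 5.0e4                # contact stiffness [N/m]; contact lasts ~ pi*sqrt(m_b/k) ~ 3 ms
b_hit  = 0.20                 # hitting point, measured from the racket centre of mass [m]
tau1_c, tau2_c = 40.0, 15.0   # constant torques [N m], counterclockwise positive
v_in   = 10.0                 # incoming ball speed [m/s], moving in +x toward the player

h1, h2 = L1 / 2, L2 / 2
A  = m1 * L1**2 / 3 + m2 * L1**2      # I1_cm + m1*h1^2 + m2*L1^2
B  = m2 * L1 * h2
C2 = m2 * L2**2 / 3                   # I2_cm + m2*h2^2

def hit_x(th, ph):
    """x-coordinate of the hitting point on the racket (elbow fixed at the origin)."""
    return L1 * np.cos(th) + (h2 + b_hit) * np.cos(ph)

def accel(th, ph, w1, w2, tq1, tq2, F):
    """Angular accelerations of the two rods.  tq2 is the couple on the racket at the
    joint (reacting with -tq2 on the forearm); F >= 0 is the contact-force magnitude,
    pushing the ball toward -x and the racket toward +x."""
    d = th - ph
    M = np.array([[A, B * np.cos(d)], [B * np.cos(d), C2]])
    rhs = np.array([tq1 - tq2 - B * np.sin(d) * w2**2 - F * L1 * np.sin(th),
                    tq2        + B * np.sin(d) * w1**2 - F * (h2 + b_hit) * np.sin(ph)])
    return np.linalg.solve(M, rhs)

def simulate_stroke(tau2_of_t):
    """Swing from rest until the racket is parallel to the y-axis, then resolve a 1-D
    linear-spring collision with the incoming ball; returns the rebound ball speed."""
    def swing(t, s):
        th, ph, w1, w2 = s
        a1, a2 = accel(th, ph, w1, w2, tau1_c, tau2_of_t(t), 0.0)
        return [w1, w2, a1, a2]
    def vertical(t, s):                       # event: racket parallel to the y-axis
        return s[1] - np.pi / 2
    vertical.terminal, vertical.direction = True, 1

    s0 = [-np.pi / 4, -np.pi / 2, 0.0, 0.0]   # arm forward-down, racket laid back, at rest
    sw = solve_ivp(swing, (0.0, 2.0), s0, events=vertical, max_step=1e-3)
    th, ph, w1, w2 = sw.y[:, -1]
    t_c = sw.t[-1]

    def contact(t, s):
        th, ph, w1, w2, xb, vb = s
        F = k * max(xb - hit_x(th, ph), 0.0)  # spring force only while compressed
        a1, a2 = accel(th, ph, w1, w2, tau1_c, tau2_of_t(t), F)
        return [w1, w2, a1, a2, vb, -F / m_b]
    def separated(t, s):                      # event: ball leaves the racket face
        return s[4] - hit_x(s[0], s[1])
    separated.terminal, separated.direction = True, -1

    s1 = [th, ph, w1, w2, hit_x(th, ph) - 1e-4, v_in]
    hit = solve_ivp(contact, (t_c, t_c + 0.05), s1, events=separated, max_step=1e-5)
    return abs(hit.y[5, -1])                  # rebound is along -x in this geometry

print("rebound speed with constant torques:", simulate_stroke(lambda t: tau2_c))
```

The same routine is reused in a later sketch when the delayed application of the racket torque is discussed.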
based on our simple model ,the coupling constant determines the collision duration , but the speed of the rebound ball is not altered very much .the reason for such a behavior is consideration of one dimensional collision .however , the purpose of this article was to find the optimum path for the swing of the racket , we did not extend our model to two dimension .we only restricted our numerical conditions to render presence of our setups in the valid region .we assumed that an arm and a forearm forms a simple rod and the moment of inertia about the center of mass is .the moment of inertia about the center of mass of the hand - racket system is also assumed as . although this assumptions are not enough to study the tennis stroke in detail , we focused our attention on the double pendulum model , which has its application in the tennis stroke . in this section , the torques applied to the first forearm system and to the racket system were set by and as described in rod cross work .the velocity ( ) of the elbow is assumed as follows , in fig .[ fthetadeltalp3 ] , we plotted the speed of the rebound ball from the racket as a function of the initial angles .the angle is defined as the angle between the first rod and the racket . is the initial angle at .we assumed that the length of the first rod ( an arm and a forearm system ) as and the mass of the first rod as . and the length of the racket system as and the mass of the racket system as .we assumed that the initial velocity of the ball just before the contact is to the positive direction .the , a hitting position from the center of mass assumed to be as .the length of the first rod ( an arm and forearm system ) assumed to be as .this length is projection length in planes .we plotted initial conditions which gives maximum speed of the rebound ball .as changes from to , the becomes smaller and reaches . in fig .[ fthetadeltalp4 ] , we also plotted the speed of the rebound ball from the racket as a function of the initial angles .we only changed the length of the first rod , in other words we extended the distance between the elbow and the hand . at this timethe is smaller as compared to the case when .when the initial angle is , the racket and the first rod should be in the same line so as to get the maximum speed of the ball .unit as a function of the initial angle of the racket and arm with respect to the axis . : initial angle of the arm . : initial angle between the racket and the first rod .the length of the first rod is .red line indicates two initial angles which gives the maximum speed ., width=188 ] unit as a function of the initial angle of the racket and arm with respect to the axis . : initial angle of the arm . : initial angle between the racket and the first rod .the length of the first rod is .red line indicates two initial angles which gives the maximum speed , width=188 ] in our model , the speed of the rebound ball is not a simple function of the angular velocity . in fig .[ fw1nw2 ] , we plotted the angular velocities and for two cases . for the casea ) the initial angle is and the contact time and the angular velocity is . 
for the caseb ) , the initial angle is and the contact time and the angular velocity .however , the angular velocities at the contact times have the similar value as .furthermore , the angular velocity increases even after the collision time .the interesting thing is that the speed of the rebound ball for the case b ) is smaller than the speed of the ball for the case a ) .the speeds are and , respectively .if we only check the angular velocity , we may conclude that the case b ) gives higher speed of the rebound ball .but the angular velocity is not the entire factor to determine the speed of the ball .this can be explained if we examine the angle of the first rod when the racket contacts the ball . s are and for the cases a ) and b ) , respectively .the speed of the ball ( ) is also a functions of as well as and . in order to get the maximum speed of the rebound ball, the double pendulum system should be examined as a whole system . and for two cases .the initial angle is for the case a ) and for the case b ) . and are the contact time.,width=188 ]the mass of the racket - hand system is assumed to be , but the mass of the racket may be changed .all the movements are assumed to be in the plane in our model , the projected length between the elbow and the hand in plane can be changed by controlling the angle between the forearm and arm in actual tennis stroke . in fig .[ flxmyvout ] , we plotted the speed of the rebound ball from the racket as a function of the racket mass and the projected length in plane between the elbow and the hand .if the mass of the racket is about , the speed of the rebound ball is decreased as the projected length between the elbow and the hand is increased . however, if the racket mass is getting heavier , the speed of the rebound ball increases . since an actual mass of the tennis racket is around , and limited , the speed of the ball is restricted . in fig .[ flxmytheta ] , we plotted the angle ( ) of the forearm system at the impact time as a function of the racket mass and the projected length in plane between the elbow and the hand regardless of the racket mass ( ) , the forearm angle decrease to as the length increase .this results shows that for the player with folded arm whose effective projected length of the forearm system is small , the contact angle ( ) should be about .if the length and the contact angle are reduced the ball should be hit relatively close to the body .this tendency may explain the impact point of the ball in tennis stroke and golf . and the projected length in plane between the elbow and the hand .the unit of is , width=188 ] and the projected length in plane between the elbow and the hand .,width=188 ]in actual tennis stroke , most of the players use time lagged racket movement .they intentionally keep the racket back as forearm rotates then they start to move the racket to get a high angular velocity .the main difference of double pendulum when compared to the single pendulum , is the separate movement of the first rod ( arm and forearm system ) and the second rod ( the racket system ) . in fig .[ fcbtau ] , we plotted the speed of the rebound ball as a function of increase in the torque and the delay time .we set the projected length in plane between the elbow and the hand as .the torque applied to the racket at the joint of two rods , varies from to .the time delay is the starting time at which we applied the torque to the racket . 
after waiting for second, the torque suddenly changes from zero to a certain value till in our numerical calculation .this time is far behind the contact time ( around ) . at first glance ,if we start to apply the torque early , the angular velocity of the racket may accelerate little bit more .it s simply because the acceleration of the angular velocity depends on the torque .if the torque is less than , the maximum speed of the rebound ball is obtained when the delay time is zero as we expected .however , for the higher torque ,the numerical results in fig .[ fcbtau ] are different from our simple intuition . for a given torque value , there always exist a certain time delay at which the speed of the rebound ball is maximum and the is not zero . in other words ,the important thing to get high speed of the rebound is not the total amount of impulse ( torque ( applied time ) ) , but the timing when the torque starts . and the time lag .the initial angle is , the which angle gives the maximum speed for the angle . , width=188 ] if the torque is lagged , the initial angle to get a maximum speed of the rebound ball for the fixed initial angle is no more the optimum condition .in order to see the time lag effect clearly , we set the torque . in fig .[ ftimedelta ] , we plotted the speed of the rebound ball as a function of the time lag and initial angle with an initial angle . the new optimum condition to get maximum speed of the rebound ball is the time lag and the initial angle . and initial angle with an initial angle ,width=188 ] in fig .[ ftimedelay ] , we plotted the speed of the rebound ball as a simple function of time delay . if the time lag of the torque is , the rebound ball speed is .and has its maximum ( ) at .the speed increases by about by adjusting the time delay with the same magnitude of the torque .the angular velocity of the forearm system and the racket system is shown in fig .[ ftimedelayw1w2 ] . for two time delay and ,the difference in the contact time is less then . the dashed line in fig .[ ftimedelayw1w2 ] indicates the time at which the ball collides with the racket .we note that the racket hit the ball before the racket has its maximum angular velocity . in earlier work ,the authors have analyzed the double pendulum system in order to get the maximum angular velocity .however , this may mislead the double pendulum system .considering fig .[ fw1nw2 ] , it is clear that higher angular velocity gives lower speed of the rebound ball . in fig .[ ftimedelayw1w2 ] , when the angular velocity has its maximum , the angular velocity is almost zero .this is explained for the double pendulum model of tennis stroke .the angular momentum and energy of the first rod ( arm and forearm system ) were totally transferred to the racket system in order to get maximum angular velocity of the racket . .at , the has it s maximum .,width=188 ] and the racket system ( ) . indicates the angular velocities when the time delay is , and indicates the angular velocities when the time delay is ,width=188 ] we plotted the first rod ( forearm and arm system ) and the racket system for the time delays represented as and in fig .[ timelaggedstroke ] and [ constantstroke ] , respectively .when the first rod starts to rotate , the racket stayed back in fig .[ timelaggedstroke ] till time . after applying the torque ,the racket suddenly starts to rotate , and hits the ball with the angle .the racket and the first rod are shown in red at the contact time . 
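The delay-time experiment described here can be mimicked qualitatively by reusing the earlier sketch: the racket torque is held at zero for a delay t_d and then switched on at its constant value. The snippet below assumes `simulate_stroke` and `tau2_c` from that sketch are in scope, and the delay range is illustrative.

```python
import numpy as np

# Assumes simulate_stroke and tau2_c from the earlier double pendulum sketch are in scope.
for t_d in np.linspace(0.0, 0.20, 11):
    profile = lambda t, t_d=t_d: tau2_c if t >= t_d else 0.0   # hold the racket, then apply torque
    print(f"delay {t_d:.2f} s -> rebound speed {simulate_stroke(profile):.2f} m/s")
```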
comparing the stroke with a constant torque ( fig .[ constantstroke ] ) , the racket rotates more rapidly at the contact time . in double pendulum model for the stroke , this phenomena was expected qualitatively . in our modelwe quantitatively demonstrated that why time lagged stroke is needed and the extent to which the velocity can be increased .the motions are captured from to evenly .the red one is the racket position at the contact time ., width=188 ] .the motions are captured from to evenly .the red one is the racket position at the contact time.,width=188 ]in the previous section , we noted that the important thing is not the total impulse to accelerate the racket , but the timing in order to increase the speed of the rebound ball .now , we analyze the influence of the elbow movement on the speed of the rebound ball .we made a simple model to add an additional movement of the elbow . the added force around the time as follows , where is the angle measured from the axis counterclockwise , we also set , and the width . since the ball is rebound to axis , the proper direction of the force seems to be in the direction .we plotted the speed of the rebound ball when the extra force is applied in the direction of and with extra time delay in fig .[ fforcem ] .numerical results show that we can obtain the maximum speed of the rebound ball when the angle is .when the direction of the force , the speed of the rebound ball is almost same to the speed of the ball without the additional force .in other words , the external movement towards direction does not give any additional speed to the rebound ball . in fig .[ farmm ] , we plotted the trajectory of the elbow from the time till the contact time . is the path when no additional force is added . are elbow s trajectory when the direction of the force are and , , respectively .table [ tb1 ] demonstrates the and the angular velocity for 4-cases .the movement with is slightly backward and mainly perpendicular to the direction of the rebound ball . from this result , we conclude that the main factor for the increased speed of the rebound ball is not the linear momentum added to the racket but the angular momentum added to the racket system . and with a extra time delay , width=188 ] till the contact time . is the path when no additional force is added . are elbow s trajectory when the direction of the force s are , respectively.,width=188 ] .additional force dependent output [ cols="^,^,^,^,>",options="header " , ]the double pendulum model is applied to the baseball , tennis , and golf .it analyzes the swing pattern to maximize the angular velocity of the hitting rod such as racket , bat , and club on the assumption that the angular velocity is the dominant factor to attain speed of the rebound ball .if we set , , the angle of the first rod at the impact time , it seems to be obviously reasonable . on the other hands , if we release the angle , the speed of the rebound ball is not a simple function of the angular speed of the hitting rod .the speed of the rebound ball is different even though the angular velocities at the contact time are same in fig .[ fw1nw2 ] , . furthermore, considering the whole stroke , the maximum angular velocity for the lower speed of the rebound ball is greater than the maximum angular velocity for the higher speed of the rebound ball .therefore , to attain maximum speed of the rebound ball , it s not sufficient to set the condition to generate high angular velocity of the hitting rod . 
in the double pendulum ,the efficient way to generate high angular velocity of the hitting rod is the energy transfer from the first rod to the second hitting rod . in other words , when the angular velocity of the forearm system is zero , the angular velocity of the hitting racket has maximum value as shown in fig .[ ftimedelayw1w2 ] .we analyzed the time lagged torque effect for the double pendulum system . with applying constant torques on the forearm system and on the racket , respectively, the speed of the rebound ball can be calculated . for the same condition , if we simply hold the racket for a short time without enforcing a torque , then applying the torque at the proper time ( ) , the speed of the rebound ball increases by as can be seen in fig .[ ftimedelay ] .the reason is mainly because the double pendulum system is not a simple linear system . adding the energy to the double pendulum systemdoes not directly increase the speed of the rebound ball .we also analyzed the elbow movement effect .in addition to the velocity of the elbow for the medium pace forehand , we added extra movement of the elbow . at a first glance ,if we add extra movement towards the rebound ball s direction , the speed of the ball is increased .but the double pendulum system is not a simple linear system .when the direction of the elbow movement is perpendicular to the ball s direction , the speed of the rebound increases .actually the direction is towards the center of the elbow s circular movement . in other words ,the added centripetal force does not add not the linear momentum of the racket , but the angular velocity of the racket .although our collision model is applied in one dimension , this collision process allows us to analyze the double pendulum system in a more realistic manner .we showed that the speed of the rebound ball does not simply depend on the angular velocity of the racket .the increase in the ball speed by the proper time lagged racket rotation was numerically studied . the elbow movement for addingthe ball s speed was counter intuitive .the addition of simple linear momentum to the elbow is not important ; however , the elbow should move in order to add angular velocity to the racket . in actual tennis stroke ,the motion occurs in three dimensions and the magnitudes of the forces and torques are dependent on the muscle shape and movement .we did not include any bio - mechanical information such as pronation and we did not include the spin of the ball in any way .the numerical data may also be not suitable for some players .however , our study on the double pendulum system for tennis stroke provides some insights to attain an efficient way to stroke a tennis ball .this study was supported by the basic science research program through the national research foundation of korea ( nrf ) funded by the ministry of education , science and technology ( nrf-2014r1a1a2055454 ) r.b .levien , and s. m. tan , `` double pendulum : an experiment in chaos '' am . j. phys .* 61 * , 1038 - 1044 ( 1993 ) 1 t .t. shinbrot , c. grebogi , j. wisdom , and j. a. yorke , `` chaos in a double pendulum '' , am .* 60 * , 491 - 499 ( 1992 ) g. vadai , z. gingl and j. mellar , `` real - time demonstration of the main characteristics of chaos in the motion of a real double pendulum '' , eur .* 33 * , 907 ( 2012 ) d. williams , `` the dynamics of the golf swing . '' , q. j. mech .appl . math . * 20 * , 247 - 264 ( 1967 ) c. b. daish , _ the physics of ball games " _ .( english university press , london , 1972 ) t. 
jorgensen , `` on the dynamics of the swing of a golf club , '' am . j. phys .* 38 * , 644 - 651 ( 1970 ) t. jorgensen , _ the physics of golf _ ( springer - verlag , new york , 1999 ) , 2nd ed .r. cross , `` a double pendulum swing experiment : in search of a better bat , '' am .* 73 * , 330 - 339 ( 2005 ). r. cross , `` a double pendulum model of tennis strokes '' , am .* 79 * ( 5 ) , 470 ( 2011 ) .r. cross,``the coefficient of restitution for collisions of happy balls , unhappy balls , and tennis balls '' , am . j. phys .* 68 * , 1025 - 1031 ( 2000 ) .r. cross , `` impact of a ball with a bat or racket , '' am .67 , 692 - 702 ( 1999 ) r. cross , `` oblique impact of a tennis ball on the strings of a tennis racket '' , sports engineering , * 6 * , 235 - 254 ( 2003 )
by adding a collision process between the ball and the racket to the double pendulum model, we analyzed the tennis stroke. the speed of the rebound ball does not simply depend on the angular velocity of the racket, and a higher angular velocity sometimes gives a lower ball speed. we numerically showed that a properly time lagged racket rotation increases the speed of the rebound ball. we also showed that the elbow should move so as to add angular velocity to the racket.
the floor layout problem ( flp ) , also known as the ( unequal areas ) facility layout problem , is central to the design of objects such as factory floors and very - large - scale integration ( vlsi ) computer - chips .the designer is given a fixed rectangular floor and rectangular boxes to place onto the floor .each box must sit completely on the floor , and they can not overlap .each box has a fixed area , but the widths and heights can be varied to change the shape , subject to constraints on the area and aspect ratio of the components .the objective is to minimize the weighted sum of the manhattan norm distances between each pair of boxes .the flp can be naturally described as a disjunctive programming problem , which are often reformulated as mixed - integer programming ( mip ) problem such as to take advantage of state - of - the - art mip solvers . however , the flp and its various mip formulations have proven extremely difficult to solve to optimality . in this work , we take a systematic approach to generating mip formulations for the flp that unifies existing mip formulations from the literature and leads to new formulations and valid inequalities .we also computationally compare the range of formulations , and show that the new approaches can be used to solve previously unsolved instances .the main contributions of this work include : 1 .* case study on systematic construction of effective mip formulations : * the number , heterogeneity , and complexity of existing mip formulations for the flp and the fact that it remains computationally challenging make it an excellent candidate for such a study . through the use of the _ embedding formulation _ approaches of and and through a systematic treatment of alternative disjunctive descriptions of the flp, we are able to recover and unify all existing , seemingly ad - hoc , mip formulations . in addition, we are able to derive new formulations that can provide a significant computational advantage and solve previously unsolved instances .while the study concentrates on specific characteristics of the flp , it exemplifies generic formulation techniques and practices that should be useful for a wide range of problems . 2 . * valid inequalities for alternative formulations of the flp : * using the embedding approach , we are able to construct a variety of new valid inequalities for flp .one key of the embedding formulation approach of this work is the flexible use of variables to model disjunctive constraints or unions of polyhedra .however , such flexibility can cloud the `` interprebility '' of the variables , which is often needed to construct valid inequalities to strengthen formulations . in this work ,we show how ideas from can be used to translate valid inequalities between formulations of the flp that allows us to state a broad class of valid inequalities in a generic form .* comprehensive computational study of flp : * while the flp has been extensively studied , most existing works compare only a small subset of formulations and valid inequalities when making comparison . in this workwe collect several instances from the literature to construct a publicly available library and present a comprehensive computational study of existing and new formulations on this library .furthermore , our systematic approach allows us to compare a host of formulations and a wide range of common valid inequalities when making our comparison . 
in particular , while no single formulation seems to be dominant , we may offer a small collection of techniques which prove particularly effective for the flp .furthermore , we also study various theoretical and practical aspects of these approaches that help explain their success .the remainder of this work is organized as follows . in section[ sec : lit ] we present a literature review of the existing solution techniques for the flp . in section [ sec : def ] we formally define the flp and show how it can be cast as a disjunctive programming problem . in section [ sec : formulations ] we review the formulation techniques we use to transform the flp into a mip , and in section [ sec : twobox ] we use the techniques to construct formulations that are based on the interaction of two boxes at a time .then in section [ sec : inequalities ] we develop valid inequalities that can be used to strengthen formulations and show how they can be translated from one formulation to the other . in section [ sec : inequalities ] we also restrict attention to the interaction of two boxes at a time , so in section [ sec : multibox ] we develop formulations and inequalities that are based on the interaction of larger collections of boxes .finally , in section [ sec : computations ] we present results of our computational experiments , and in section [ sec : conclusions ] we present a brief summary of this work . complementary material andomitted proofs are included in the appendix .the floor layout problem can be viewed as a specific version of a general layout problem that consists of orthogonally packing rectangular pieces onto a rectangular floor ; offer a taxonomy of variations of the flp and its relatives .originally studied primarily in the context of factory design , the emergence of the field of very - large scale integration ( vlsi ) computer - chip design saw renewed interest in layout problems such as the flp . broadly , algorithmic approaches to these layout problemscan be grouped into two classes : exact and heuristic .exact algorithms were predominant in the earlier literature , although the boom of applications in computer - chip design require solving large scale instances beyond the reach of existing exact approaches . as a result , a bevy of work has appeared over the past three decades , proposing heuristic approaches to produce good solutions for large - scale instances .much of the work applies existing metaheuristic frameworks to the flp , for example and .contrastingly , many of the novel heuristics for the flp take advantage of ideas and machinery from mathematical programming : e.g. , and , albeit in a way that can not prove optimality .we note in particular the surveys of and , which collect pointers to much of the heuristic literature . in keeping with the mip approach taken in this paper , we will survey the existing exact methods for the flp in detail .early work can be traced back to , who studies a discretized version of the flp . introduced a natural mip model for the flp , along with a collection of valid inequalities and techniques to help reduce solution time . introduces novel formulations for a single pair of boxes , as well as useful computational techniques such as symmetry breaking constraints and branching priorities . presents a new mip formulation for the flp with fewer binary variables , alongside a number of additional formulations and approaches inspired by nonlinear and mixed - integer nonlinear optimization . 
presents another formulation inspired by a technique from that reduces redundancy in the solution set . as detailed in the following section , the inclusion of certain non - linear area constraints in the flp result on its formulations being second - order - cone mip ( soc - mip ) problemsgiven that early formulations were developed before the availability of efficient soc - mip solvers , careful attention has been paid on constructing and proving desirable properties for specific linear approximations for the nonlinear area constraints in and .the flp has a natural one - dimensional analogue in the single - row floor ( facility ) layout problem , which asks for an optimal layout of boxes of fixed length in a straight line .this problem is already np - hard , and strong formulations and cutting planes have been developed for the problem by and .an intriguing line of research has investigated the flp from the dual perspective , attempting to construct tight lower bounds .this is of particular interest for the flp , where relaxations typically give poor bounds , even with strengthening valid inequalities . presents a lower bounding technique for the single - row flp .another line of work investigates using semidefinite programming formulations to construct bounds for the flp by and the single - row flp by . leverage the semidefinite approach to produce optimal solutions for the single - row flp using a cutting - plane approach , and to produce high - quality solutions for larger instances in . present a combinatorial dual bounding scheme for the flp and compare it against existing techniques .consider a rectangular floor \times [ 0,l^y] ] . with each pair of boxes , there is an associated nonnegative unit communication cost .the floor layout problem then is to optimally lay out each box completely on the floor , such that the area and aspect ratio constraints are satisfied , and such that no two boxes overlap .natural decision variables for each box are the position of its center and the lengths in each direction . the objective function used is based on the so - called `` manhattan '' norm : most of the constraints described are simple to describe with linear or conic inequalities .for instance , lies completely on the floor iff the area constraints take the form which is second - order - cone - representable .the aspect ratio constraints take the form this can be represented with two linear constraints per box , but it can also be enforced on the flp merely through bounds on the widths of the boxes . 
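For reference, the garbled formulas in this passage correspond to the following standard forms; this is a reconstruction consistent with the surrounding description, so the exact symbols and normalizations may differ from the authors' notation. It shows the Manhattan-norm objective, the floor-containment bounds, the area constraint with one of its second-order-cone representations, and the two linear aspect-ratio constraints per box.

```latex
\begin{aligned}
\min\ & \sum_{(i,j)\in\mathscr{P}} p_{i,j}\bigl(\lvert c^x_i - c^x_j\rvert + \lvert c^y_i - c^y_j\rvert\bigr) \\
\text{s.t. }\ & \tfrac{1}{2}\,\ell^s_i \;\le\; c^s_i \;\le\; L^s - \tfrac{1}{2}\,\ell^s_i
      && \forall\, s\in\{x,y\},\ i, \\
& \ell^x_i\,\ell^y_i \;\ge\; \alpha_i
      \;\Longleftrightarrow\;
      \bigl\lVert\bigl(\ell^x_i-\ell^y_i,\; 2\sqrt{\alpha_i}\bigr)\bigr\rVert_2 \;\le\; \ell^x_i+\ell^y_i
      && \forall\, i, \\
& \ell^x_i \;\le\; \beta_i\,\ell^y_i, \qquad \ell^y_i \;\le\; \beta_i\,\ell^x_i
      && \forall\, i, \\
& (c_i,c_j,\ell_i,\ell_j)\ \text{satisfy the non-overlap disjunction}
      && \forall\,(i,j)\in\mathscr{P}.
\end{aligned}
```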
along with the area constraints , imposing the following bounds on the box widths is sufficient to impose the aspect ratio constraints : [ eqn : bounds ] {\mbox{\normalfont\tiny\sffamily def}}}{=}}}\min\left\ { \sqrt{\alpha\beta } , l^s \right\ } \quad & \forall s \in \{x , y\ } , i \in \llbracket n \rrbracket \label{eqn : upperbound } \\\ell^s_i & \geq lb^s_i { \mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}\frac{\beta}{ub^s_i } \quad & \forall s \in \{x , y\ } , i \in \llbracket n \rrbracket\label{eqn : lowerbound}\end{aligned}\ ] ] we note that , since , we have that for each and .the last remaining constraint for the flp requires that the boxes can not overlap on the floor .one natural way to formulate this is by requiring each pair and to be separated in either the direction or the direction ( or both ) .[ defn : precede ] we say that _ precedes _ in direction ( denoted by ) if therefore , we can enforce the constraint that and do not overlap with the disjunctive constraint {\mbox{\normalfont\tiny\sffamily def}}}{=}}}\bigvee_{k=1}^4 d^k_{i , j} ] if we omit the nonlinear area constraints we obtain the set {\mbox{\normalfont\tiny\sffamily def}}}{=}}}\left\ { ( c_i , c_j,\ell_i,\ell_j ) \in { \mathbb{r}}^8 : \eqref{eqn : sitb},\;\eqref{eqn : bounds},\ ; d^4 \right\} ] be a polyhedron , ] for bethe family of polyhedra obtained by combining and , and be an encoding composed of _ pairwise distinct _ -vectors .if the recession cones for all , then a non - extended ( linear ) mip formulation for , or , equivalently for , is any ( linear ) mip formulation for the embedding {\mbox{\normalfont\tiny\sffamily def}}}{=}}}\bigcup_{k=1}^k p^k \times \{h^k\} ] , which we denote the _ unary encoding _ , as it uses one bit per branch of the disjunction .for example , we have that is a formulation for with .however , the real flexibility comes from the possibility of non - unary encodings , as the specific assignment of codes to branches of the disjunctions does not change the structure of the formulation .for instance , to obtain a valid formulation for with we simply need to interchange and in .in contrast , for other types of encodings the specific assignment can be significant in terms of the complexity of the resulting embedding object and formulations ( e.g. see section [ sec : binaryform ] and ) .deriving ideal non - extended formulations for embeddings with any encoding can be done using a geometric construction introduced in .however , such construction can be hard to analyze , and many choices of encodings may naturally have very large ideal formulations ( i.e. many inequalities ) .fortunately , non - extended formulations can also be constructed using ad - hoc approaches or through simple constructions such as the generalization of the big- approach to arbitrary encodings introduced in . in the coming sectionswe will see how this generic approach can be used to construct a range of formulations for our disjunctive set .in particular , varying the ingredients , , and lead to different embedding objects , which in turn will necessitate different formulations . for examples beyond the flp where varying the encoding choice results in different formulations we refer the reader to and . 
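As a concrete illustration of the big-M version of this non-overlap disjunction (the unary formulation discussed below), the following sketch builds a tiny instance in PuLP with the bundled CBC solver. This is not the paper's computational setup: to keep the model linear the box widths are held fixed, so the nonlinear area constraints drop out, and only the Manhattan objective, the floor bounds, and the "precedes" big-M constraints remain; the instance data are made up.

```python
import itertools
import pulp

L = {"x": 10.0, "y": 10.0}                      # floor dimensions
boxes = {1: {"x": 3.0, "y": 2.0},               # fixed widths l_i^s (simplification)
         2: {"x": 2.0, "y": 4.0},
         3: {"x": 4.0, "y": 3.0}}
p = {(1, 2): 1.0, (1, 3): 2.0, (2, 3): 1.0}     # pairwise connectivity costs

prob = pulp.LpProblem("flp_unary", pulp.LpMinimize)
c = {(i, s): pulp.LpVariable(f"c_{i}_{s}", boxes[i][s] / 2, L[s] - boxes[i][s] / 2)
     for i in boxes for s in ("x", "y")}
d = {(i, j, s): pulp.LpVariable(f"d_{i}_{j}_{s}", lowBound=0)
     for (i, j) in p for s in ("x", "y")}
z = {(i, j, s): pulp.LpVariable(f"z_{i}_{j}_{s}", cat="Binary")
     for (i, j) in itertools.permutations(boxes, 2) for s in ("x", "y")}

# Manhattan-norm objective via the auxiliary distance variables d.
prob += pulp.lpSum(p[i, j] * (d[i, j, "x"] + d[i, j, "y"]) for (i, j) in p)

for (i, j) in p:
    for s in ("x", "y"):
        prob += d[i, j, s] >= c[i, s] - c[j, s]            # |c_i - c_j| linearization
        prob += d[i, j, s] >= c[j, s] - c[i, s]
        # big-M "i precedes j in direction s" (and the reverse), M = L^s
        prob += (c[i, s] + boxes[i][s] / 2
                 <= c[j, s] - boxes[j][s] / 2 + L[s] * (1 - z[i, j, s]))
        prob += (c[j, s] + boxes[j][s] / 2
                 <= c[i, s] - boxes[i][s] / 2 + L[s] * (1 - z[j, i, s]))
    # each pair must be separated in at least one direction
    prob += pulp.lpSum(z[q] for q in ((i, j, "x"), (j, i, "x"), (i, j, "y"), (j, i, "y"))) >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for i in boxes:
    print(i, c[i, "x"].value(), c[i, "y"].value())
```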
in the following subsections , we provide such examples for varying inputs and .the new formulation for the flp proposed in section [ sec : refined - disjunction ] hinges on a logical refinement of the disjunction that removes many redundant solutions from the resulting formulation .to illustrate this idea , we provide a simple example , which is independent of the flp . consider the disjunctive constraint {\mbox{\normalfont\tiny\sffamily def}}}{=}}}{\left[x_1+x_2\leq 1\right ] }\vee { \left [ x_2\leq x_1\right]} ] , for which {\mbox{\normalfont\tiny\sffamily def}}}{=}}}\{x \in q_1 : d_1^a\} ] and the set of linear inequalities [ alllinearineq ] suppose we want to construct a mip formulation for {\mbox{\normalfont\tiny\sffamily def}}}{=}}}{\left\{x\in { \mathbb r}^2\,:\,\eqref{alllinearineq } , d_2\right\}} ] we have that both and are bounded and hence satisfy the conditions of definition [ embeddingdef ]. then , we can at first ignore linear inequality and construct a formulation for ( depicted by the and light shaded region in figure [ fig3 ] ) and then impose on the resulting formulation . for instance , an ideal formulation of is given by [ partialform ] a formulation of is then given by and . however , a second option is to include all inequalities into {\mbox{\normalfont\tiny\sffamily def}}}{=}}}{\left\{x\in { \mathbb r}^2\,:\ , \eqref{linearineq1 } \text{--}\eqref{linearineq}\right\}} ] , {\mbox{\normalfont\tiny\sffamily def}}}{=}}}{\left\{x_1\in { \mathbb r}\,:\ , 0\leq x_1\leq 4\right\}} ] , and suppose we want to solve .an ideal formulation for is given by which together with a standard lp modeling trick to linearize the absolute value in the objective leads to the mip formulation of the complete problem given by alternatively , we could instead include the linearization trick in the common constraints to obtain {\mbox{\normalfont\tiny\sffamily def}}}{=}}}{\left\{{\left(x_1,y_1\right)}\in { \mathbb r}^2\,:\ , 0\leq x_1\leq 4,\quad x -2 \leq y_1,\quad -x_1 + 2 \leq y_1 \right\}} ] , depicted in figure [ fig4 ] . an integral formulation for is given by plus which leads to the mip formulation of the complete problem given by we can check that the optimal value of the lp relaxation of is equal to one . in contrast, we can also check that the optimal value of the lp relaxation of is zero .that is , we have constructed a stronger mip formulation for minimizing a nonlinear objective over a union of polyhedra by directly including the linearization of the objective in our construction procedure . given that incorporating additional structure in the ground set can allow us to construct stronger formulations , it seems at first that the optimal approach will be to simply add all constraints .however , this can quickly lead to embedding objects that are very complex or difficult to study ; if is restricted to some minimal `` interesting '' substructure , we will see that we are better equipped to study and construct strong formulations .we start by analyzing a simple , yet nontrivial , substructure for which we are able to construct a strong ( i.e. 
ideal ) formulation .take {\mbox{\normalfont\tiny\sffamily def}}}{=}}}\left\{(c_i ,c_j,\ell_i,\ell_j ) \in { \mathbb{r}}^8 : \eqref{eqn : sitb},\;\eqref{eqn : lowerbound } \right\} ] , and the second corresponds to the codes {\mbox{\normalfont\tiny\sffamily def}}}{=}}}\left\ { ( 0,0 ) , ( 1,1 ) , ( 1,0 ) , ( 0,1 ) \right\} ] where we have taken a refinement of by splitting the regions satisfying two branches at once into the new branches , and , and shrinking the other branches to exclude these new regions .see figure [ fig:8-configurations ] for an illustration . with 8 branches in the disjunctionwe need codes of length at least .however , in lieu of chasing the formulation with the smallest number of variables ( i.e. a binary formulation ) , we instead take the encoding {\mbox{\normalfont\tiny\sffamily def}}}{=}}}\left\ { { { \bf e}}^1 , { { \bf e}}^1+{{\bf e}}^2 , { { \bf e}}^2 , { { \bf e}}^2+{{\bf e}}^3 , { { \bf e}}^3 , { { \bf e}}^3+{{\bf e}}^4 , { { \bf e}}^4 , { { \bf e}}^4+{{\bfe}}^1 \right\ } \subseteq \{0,1\}^4 ] and consider an inequality with that is valid for .then * is valid for , and * is valid for , where is the affine mapping that identifies with and {\mbox{\normalfont\tiny\sffamily def}}}{=}}}\begin{pmatrix } -1 & -1 \\ 1 & -1 \\ 1 & 1 \\ -1 & 1 \end{pmatrix }w + \begin{pmatrix } 1 \\ 0 \\ -1 \\ 0 \end{pmatrix}.\ ] ] is the affine mapping that identifies with . we prove the second , as the first follows in the same way . consider a feasible layout and take the corresponding feasible codes {\mbox{\normalfont\tiny\sffamily def}}}{=}}}\left\{w \in gb^4 : ( \hat{c},\hat{\ell},w ) \in { \operatorname{em}}(q^{flp},d^4,gb^4 ) \right\} ] and {\mbox{\normalfont\tiny\sffamily def}}}{=}}}\left\{z \in c^8 : ( \hat{c},\hat{\ell},z ) \in { \operatorname{em}}(q^{flp},d^8,c^8 ) \right\} ] .[ prop : ub - cuts ] for any assignments and , then is a valid inequality for . if , is valid for both and .see appendix [ app : ub - cuts ] .the objective is nonlinear but is straightforward to linearize in the usual fashion with auxiliary variables and the constraints even though this type of linearization is a very common mip formulation technique , it is often not incorporated into polyhedral studies explicitly . to do this for the pairwise flp , consider the augmented base set .the resulting encoding leads to a collection of inequalities that serve to lower bound the auxiliary objective variables .[ prop : obj - cuts ] choose and some assignment . then the following are valid inequalities for : see appendix [ app : obj - cuts ] .note that we are now adding both constraints and variables to our ground set .these inequalities are especially significant , since they explicitly incorporate the objective function , and the mip relaxation lower bounds for the flp are quite poor ( see section [ sec : lowerbounds ] ) .thus far we have only considered representations for , the relationships between a single pair of boxes . 
in this sectionwe address how to use the results derived for the pairwise formulations to construct strong formulations for the original -box floor layout problem .since all the constraints for the flp involve at most two boxes , it suffices to consider each pair of boxes separately , construct a pairwise formulation , and identify all repeated variables across these pairwise formulations as follows .[ prop : pairwise - to - nbox ] consider pairwise formulations for each pair of boxes over the variables .if {\mbox{\normalfont\tiny\sffamily def}}}{=}}}\sum_{(i , j ) \in { \mathscr{p } } } m_{i , j} ] .our function enjoys the following property : this will function as an ( underestimator for the ) indicator function for when we have a particular chain of boxes along direction .we can use this to extend the logic of the pairwise inequalities we have developed . for a simple example , if , then we know that and are separated in direction by at least the smallest width can take along that direction , and so .this tightening can be exploited in the inequalities derived previously , leading a host of new valid inequalities for the multi - box flp .[ prop : multibox - cuts ] consider the pair and an arbitrary path , where and and .choose assignments and and define {\mbox{\normalfont\tiny\sffamily def}}}{=}}}\sum_{\xi=1}^m lb^s_{t^\xi} ] , and distinct vectors .take any mip formulation for the set , and some affine functions such that where {\mbox{\normalfont\tiny\sffamily def}}}{=}}}\{x \in q : a^sx \leq b^s\}$ ] .then is a valid formulation for ( and , hence , for ) .it is clear from the definition of that for any , there is some branch such that , and so is feasible for the mip formulation by the construction of the . to show that any feasible solution for the mip formulation lies in ,consider some feasible for the mip formulation .then and implies that for some .then implying that satisfies the corresponding branch of the disjunction .apply theorem [ thm : bigm ] with , , and the general proof technique is as follows .first , we will construct the components needed to apply theorem [ thm : bigm ] : namely , a ground set describing shared constraints across all feasible layouts , a disjunction we are interested in modeling , a valid formulation for the codes , and some big- functions that encapsulate the logic between the codes and the branches of the disjunction .this will leave us with a valid formulation for our set .we will then do ad - hoc tightening of some of the resulting constraints , giving the system described in .first , we choose the ground set and the disjunction .we see that {\mbox{\normalfont\tiny\sffamily def}}}{=}}}\left\{(z^y_{i , j},z^x_{i , j},z^y_{j , i},z^x_{j , i } ) \in \{0,1\}^4 : z^x_{i , j}+z^x_{j , i}+z^y_{i , j}+z^y_{j , i } \geq 1 , \ : z^x_{i , j}+z^x_{j , i } \leq 1 , \ : z^y_{i , j}+z^y_{j , i } \leq 1\right\}\ ] ] is a valid formulation for .choose big- functions based on the disjunction in the following way .take as the -th clause defining in section [ sec : refined - disjunction ] ( recall that ) .for example , . then take note that , when is a statement of the form `` precedes in direction '' , we get the same big- functions as appeared in the unary formulation for the same logic. 
now apply theorem [ thm : bigm ] and recover the valid formulation , where note that , since the branches of the disjunction share constraints ( and corresponding big- functions ) , many of the resulting constraints will be equivalent and duplicates can be removed .we now wish to tighten some of these constraints by lifting them in an ad - hoc manner . by the same argument as in the proof for theorem [ thm : unary ] , we may tighten to . to tighten the new constraints, we can do a case analysis .consider , , and ; the others follow analogously .* reduces to the linear constraint , which follows from .* we have that in this case and adding to this gives the desired inequality * in this case , we have that we can add to this to get the desired inequality . *not feasible by .when using the alternative codes , we may construct a formulation that is quite similar to the one presented in , but with slightly different big- terms : this formulation , with the addition of the area constraints , forms the basis for the bldp1 formulation from .the sequence - pair formulation flp - sp from may be constructed from with the addition of global constraints on the 0/1 variables , based on observations made by . in particular , consider an box instance of the flp and the corresponding formulation derived from proposition [ prop : pairwise - to - nbox ] where each pairwise formulation is given by the gray binary formulation .then the addition of the following constraints yields the flp - sp formulation : where notationally {\mbox{\normalfont\tiny\sffamily def}}}{=}}}\begin{cases } w^{p , q}_k & p < q \\ 1-w^{p , q}_k & \text{o.w . }\end{cases } \quad \forall k \in \{1,2\ } , p , q \in \llbracket n \rrbracket : p \neq q.\ ] ]we prove by enumerating the possible values for the components of having support over the constraint , noting in particular that is always infeasible .recall also that * sum the constraints and to get the constraint . * using and rearranging gives adding the constraint gives the desired result * want .since , we must have that either or ; w.l.o.g .choose the second .then rearranging from gives * want that , which follows immediately from the upper bounds on .we prove by enumerating the possible values for the components of having support over the constraint , noting in particular that is always infeasible .* want to show that but since necessarily ( sum ) it suffices to show that , which follows immediately from summing constraints with to get that . applying thisalso for and summing the resulting inequalities gives the result .* want to show that we have from that ; adding the appropriate constraint in to this gives the result . *same argument as the previous case .* want to show that which follows immediately from and the fact that for any feasible solution . * want to show that rearranging gives now summing the relation from with one of gives summing these two derived inequalities with gives the desired result . *want to show that take the sum of one of and the inequality from to derive furthermore , using our big- value , we have summing the two derived inequalities along with the lower bounds on gives the result .* want to show that , which is immediate from . *want to show that from an argument above , for this particular setting , and so we are done by summing this with implied by and using the lower bounds on . * want to show that which just follows from the fact that and . 
*want to show that which follows immediately from the inequality ( valid for this particular setting for ) derived previously . *same argument as the previous case .we now present the b2 and v2 inequalities from in the notation used in the present work .the b2 inequalities for the unary formulation are of the form the v2 inequalities for the unary formulation are which is equivalent with a potentially tightened coefficient . using proposition [ prop : map - to - refined ] and proposition [ prop : map - from - refined ] , these inequalities may be applied to all the formulations discussed in this work .the symmetry - breaking described in works by restricting the possible relative layout between a single pair of components in a modification of the so - called _position method _ from .the scheme chooses a single pair ; in this work , we follow and choose .we then may add the following constraints to the refined unary formulation : using proposition [ prop : map - from - refined ] , these inequalities may be applied to all the formulations discussed in this work ..relative gap of the relaxation lower bound , with respect to the best known feasible solution .group # 1 includes ` u ` , ` bldp1 ` , ` bldp1 + ` , ` sp ` , ` sp+ ` , ` sp+vi ` , ` sp+vi3 ` , and ` ru ` .group # includes ` u+ ` , ` ru+vi ` , and ` ru+vi3 ` .symmetry breaking from is added to group # 1 for comparison ( # w/ sb ) ; it does not affect the values for group # 2 . [ cols=">,^,^,^",options="header " , ]
this material is based upon work supported by the national science foundation graduate research fellowship under grant no . 1122374 and grant cmmi-1351619 .
inspiraling binaries are one of the most promising sources for the first detection of gravitational waves ( gw ) .the post newtonian approximation methods accurately describe the phasing of the waveform - about a cycle in a wave train cycles long .this makes it amenable for matched filtering analysis .the best available estimates suggest that the expected number of neutron star ( ns)-ns binary coalescence seen per year by ground based interferometers is for initial detectors and for advanced detectors . in recent years , a number of ground based detectors are producing sufficiently interesting sensitive data and analysis of network data is highly advisable .the advantages of multi - detector search for the binary inspiral is that , not only does it improve the confidence of detection , it also provides directional and polarisational information about the gw source .two strategies currently exist in searching for inspiraling binary sources with a network of detectors : the coherent and the coincident .the coherent strategy involves combining data from different detectors phase coherently , appropriately correcting for time - delays and polarization phases and obtaining a single statistic for the full network , that is optimized in the maximum likelihood sense . on the other hand, the coincident strategy matches the candidate event lists of individual detectors for consistency of the estimated parameters of the gw signal .however , the phase information is ignored and as also the detectors are considered in isolation .the question arises as to which strategy performs better . on simple waveformsthe analysis has been performed by finn and arnaud et al .both these works have shown that the coherent strategy performs better than the coincident strategy .we consider here the astrophysically important source , namely , the inspiraling binary and report mainly the results of our work which has been described in detail in the papers .we compare the strategies by plotting the _ receiver operating characteristic _( roc ) curves , which is the plot of detection efficiency versus the false alarm rate .we broadly consider the two cases of co - located aligned detectors and geographically separated misaligned detectors . in the co - located case we further consider two subcases of ( i ) uncorrelated noise , and ( ii ) correlated noise .for the inspiraling binary , in the fourier domain we assume the spinless restricted post - newtonian ( pn ) waveform at detector : ^i ( f ) = e^i f^-7/6 i [ ( f ; t_c , _ c , _ 0 , _ 3 ) + 2 f t^i ] , [ eq : stilde ] where we take the phase given by the 3 pn formula .the extended beam pattern functions encode the orientation and direction parameters of the source and detectors ; the parameters , are respectively , the time of coalescence , phase of coalescence ; the quantities are the chirp time parameters which are independent functions of the two masses of the stars comprising the binary , the denote time - delays at the detector with respect to a fiducial detector and is the amplitude depending on the masses and the distance to the source . in matched filtering , it is natural to define the scalar product of two real functions by , ( a , b ) = 2 _ f_l^i^f_u^i df , [ eq : scalar ] where , we use the hermitian property of fourier transforms of real functions . 
is the one sided power spectral density ( psd ) of the noise which is assumed to be the same for all the detectors .then the normalised templates corresponding to the intrinsic parameters are defined via the equation , ^i(f;_i , t_c , _ c ) = ^i ( _ 0 ( f ; _ i , t_c ) _ c + _ /2 ( f ; _ i , t_c ) _ c ) .[ eq : quadrature ] we have the relation where represents a grid point in the intrinsic parameter space and the normalisation of templates requires that the scalar products .we then define the complex correlation where , the real correlations and are obtained by taking the scalar products of the data with and respectively ; that is , .then the single detector statistic for detector is just . to decide detection in a given single detector , is maximised over the template parameters and compared with a preassigned threshold .for coincidence detection , the procedure is as follows : * choose the same threshold for the two detectors .* prepare two candidate event lists such that .look for pairs of candidate events , each candidate event coming from a different list , such that the sets of estimated parameters match - where the denotes the difference in the measured parameter , where stands for any of the parameters .the allowed error box is denoted by . a quadratic sum of the noises is taken to determine the error box .we fix this box by performing simulations , so that the final probability of not losing an event is - on each parameter ,we atmost allow 1 % loss in events . for geographically separated detectors , for fixing the window size in , the light travel time between the detectorsis taken into account and added in quadratures to the errors due to noise in each detector .on the other hand , coherent detection involves combining data streams in a phase coherent manner so as to effectively construct a single , more sensitive detector .for the case of two misaligned detectors the network statistic is given by : = ||c||^2 = |c^1|^2 + |c^2|^2 = ( c^1_0)^2 + ( c^1_/2)^2 + ( c^2_0)^2 + ( c^2_/2)^2 , [ twoindep ] where is the complex correlation of the -th detector ( =1,2 ) .for two aligned colocated detectors the statistic is different and is given by : = |c^1 + c^2|^2 . for aligned colocated detectors we consider two subcases : ( i ) uncorrelated noise , ( ii ) correlated noisewe first consider the case of co - located aligned detectors .this case would be of significance to the two ligo detectors at hanford and other similar topologies envisaged elsewhere in the future such as lcgt .we compare the performances of the two strategies by plotting the roc curves for uncorrelated noise ( left ) and correlated noise ( right ) in fig .( [ fig1 ] ) . for plotting these curves the false alarm and the detection probabilities must be computed as a function of the threshold and then plotted versus each other parametrically by varying the parameter .the correlation parameter is taken as the weighted average of a frequency dependent correlation where , being the noise in the detector .details may be found in .while performing the simulations there are many subtleties such as estimating the number of independent templates , the error window size computation etc .the discussion of which we have omitted here although it is nonetheless important .cc it is clear from the figures that the coherent strategy is far superior in this case .we now consider the case of geographically separated detectors which are then also normally misaligned . 
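to make the quantities above concrete, the sketch below evaluates the noise-weighted scalar product, the quadrature correlations and the single-detector and network statistics on a toy frequency grid. the flat psd, the prefactor 4 in the scalar product (which depends on the one-sided-psd convention adopted), the stand-in amplitude profile and the second-detector correlation are all placeholder assumptions, not the waveforms or noise curves used in the actual analysis; in particular, the maximization over the template bank and arrival times is omitted.

```python
import numpy as np

def inner_product(a_f, b_f, psd, df):
    # noise-weighted scalar product on a one-sided frequency grid
    # (the prefactor depends on the PSD convention; 4 is assumed here)
    return 4.0 * np.real(np.sum(a_f * np.conj(b_f) / psd)) * df

f = np.arange(40.0, 400.0, 0.25)
df = f[1] - f[0]
psd = np.full_like(f, 1e-6)                    # placeholder detector noise curve

# toy 0-phase template with an f^(-7/6) amplitude profile, normalized to unity;
# the pi/2-phase quadrature is obtained by a 90-degree phase rotation
amp = f ** (-7.0 / 6.0)
h0 = amp * np.exp(2j * np.pi * f * 0.01)
h0 /= np.sqrt(inner_product(h0, h0, psd, df))
hq = 1j * h0

# noise-free toy data stream containing a signal of amplitude 8 and phase 0.7
data = 8.0 * np.exp(0.7j) * h0

c0 = inner_product(data, h0, psd, df)          # correlation with the 0-phase template
cq = inner_product(data, hq, psd, df)          # correlation with the pi/2-phase template
C1 = c0 + 1j * cq                              # complex correlation of detector 1
print(np.abs(C1) ** 2)                         # single-detector statistic, ~ 64

C2 = 0.8 * C1                                  # placeholder correlation of detector 2
print(np.abs(C1) ** 2 + np.abs(C2) ** 2)       # coherent statistic, misaligned pair
print(np.abs(C1 + C2) ** 2)                    # coherent statistic, co-located aligned pair
```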
when the detectors are misaligned , the sky coverage for the usual coincidence detection is very poor which leads to intolerable false dismissal .thus another coincidence strategy is devised which we call _ enhanced _ coincidence .the usual coincidence strategy we then call _ naive _ coincidence .enhanced coincidence strategy is formulated as follows : * choose a low threshold ( we choose ) , and prepare two candidate event lists such that . *look for a pair of candidate events , the events coming from separate lists , such that the sets of estimated parameters match within the error - window .the procedure is the same as the co - located case , except for the parameter in which the distance between the detectors enters .* choose the final ( high ) threshold and construct the final statistic and register detection if .cc note that although this statistic looks formally like the coherent case , the mass parameters for the templates in the two detectors do not have to be the same ( they however must be close enough so that they lie in a error window ) .thus this is not matched filtering while coherent detection is . in this strategythe sky coverage is better than naive coincidence . as seen from the fig .( [ fig2 ] ) , coincidence strategy performs far better than the naive coincidence strategy , but we see that the detection probability for the coherent strategy is still superior by around 5% for the same false alarm rate .although the coherent strategy is superior to coincident strategies , the difference between the coherent and enhanced coincident strategies is small. only a relative improvement of about 5% in the detection probability is obtained with the coherent strategy .one may ask whether there is any practical advantage in using the coherent strategy in the case of two misaligned detectors .note however that the coherent method is not so computationally expensive compared with two coincident methods , since we do not take cross correlation of two detectors data in the coherent strategy .thus overall , we conclude that the coherent strategy is a good detection method . however , the above results assume stationary gaussian noise .but we know that the current real data are neither stationary nor gaussian . in coincidence detection , the requirement of the consistency of estimated parameters in an error window acts as a powerful veto to veto out fake events generated from non - gaussian noise . on the ther hand in coherent detection as yet no such obvious veto has been developed .this however does not rule out the possibility that a powerful veto can not be constructed for the coherent strategy . in futurewe propose to work on this aspect of the problem .perhaps a judicious combination of the two methods might be an effective way of dealing with this problem .the authors would like to thank the dst , india and jsps , japan for the indo - japanese cooperative programme for scientists and engineers under which this work has been carried out .
we compare two multi - detector detection strategies , namely , the coincidence and the coherent , for the detection of spinless inspiraling compact binary gravitational wave ( gw ) signals . the coincident strategy treats the detectors as if they are isolated , comparing individual detector statistics with their respective thresholds , while the coherent strategy combines the detector network data _ phase coherently _ to obtain a single detection statistic which is then compared with a single threshold . in the case of geographically separated detectors , we also consider an _ enhanced _ coincidence strategy because the usual ( naive ) coincidence strategy yields poor results for misaligned detectors . for simplicity , we consider detector pairs having the same power spectral density of noise , as that of initial ligo , and also assume the noise to be stationary and gaussian . we compare the performances of the methods by plotting the _ receiver operating characteristic _ ( roc ) for the two strategies . a single astrophysical source as well as a distribution of sources is considered . we find that the coherent strategy performs better than the two coincident strategies under the assumption of stationary gaussian detector noise .
we will denote the set of density operators on by .we will denote the set of quantum channels by .we will put . for any linear map , we define it _choi - jamiokowski matrix _ as this isomorphism was first studied by choi and jamiokowski .note that some authors prefer to add a normalization factor of if front of the expression for .other authors use the other order for the tensor product factors , a choice resulting in an awkward order for the space in which lives .the rank of the matrix is called the _ choi rank _ of ; it is the minimum number such that the map can be written as for some operators . given two pairs of unitary operators , , respectively , define the `` rotated map '' it is an easy exercise to check that the choi - jamiokowski matrix of the rotated map is given by the transposition appearing in the equation above is due to the following key fact ( here is an arbitrary unitary operator ) : the _ diamond norm _ was introduced in quantum information theory by kitaev ( * ? ? ?* section 3.3 ) as a counterpart to the -norm in the task of distinguishing quantum channels .first , define the norm of a linear map as kitaev noticed that the norm is not stable under tensor products ( as it can easily be seen by looking at the transposition map ) , and considered the following `` regularization '' : in operator theory , the diamond norm was known before as the _ completely bounded trace norm _; indeed , the norm of an operator is the norm of its dual , hence the diamond norm of is equal to the completely bounded ( operator ) norm of ( see ( * ? ? ?* chapter 3 ) ) .we shall need two simple properties of the diamond norm .first , note that the supremum in the definition can be replaced by taking the value ( recall that is the dimension of the input hilbert space of the linear map ) ; actually , one could also take equal to the choi rank of the map , see ( * ? ? ?* theorem 3.3 ) or ( * ? ? ? * theorem 3.66 ) .second , using the fact that the extremal points of the unit ball of the -norm are unit rank matrices , we always have moreover , if the map is hermiticity - preserving ( e.g. is the difference of two quantum channels ) , one can optimize over in the formula above , see ( * ? ? ?* theorem 3.53 ) . given a map , it is in general difficult to compute its diamond norm .computationally , there is a semidefinite program for the diamond norm , , which has a simple form and which has been implemented in various places ( see , e.g. ) .we bound next the diamond norm in terms of the partial trace of the absolute value of the choi - jamiokowski matrix .the diamond norm has application in the problem of quantum channel discrimination .suppose we have an experiment in which our goal is to distinguish between two quantum channels and .each of the channels may appear with probability .then , celebrated result by helstrom gives us an upper bound on the probability of correct discrimination from the discussion in this section we readily arrive at the main goal of this work is to show the asymptotic behavior of the diamond norm . to achieve this , in section [ sec : sdp ] we find a new upper bound of on the diamond norm of a general map . 
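before moving on, the choi-jamiokowski construction and the trace-norm lower bound mentioned above can be made concrete with a small numerical sketch. it uses the unnormalized choi matrix in the ordering "output factor first" (one of the two conventions noted above), and the standard bound ||j(phi_1) - j(phi_2)||_1 / d_1 <= ||phi_1 - phi_2||_diamond for the difference of two channels; the prefactor would change with the normalized convention. the depolarizing and rotation channels are placeholders chosen only for the example.

```python
import numpy as np

def choi_from_kraus(kraus):
    # unnormalized Choi matrix  J(Phi) = sum_{ij} Phi(E_ij) (x) E_ij
    d_in, d_out = kraus[0].shape[1], kraus[0].shape[0]
    J = np.zeros((d_out * d_in, d_out * d_in), dtype=complex)
    for i in range(d_in):
        for j in range(d_in):
            E = np.zeros((d_in, d_in), dtype=complex)
            E[i, j] = 1.0
            J += np.kron(sum(K @ E @ K.conj().T for K in kraus), E)
    return J

def trace_norm(M):
    return np.sum(np.linalg.svd(M, compute_uv=False))

# completely depolarizing qubit channel: Kraus operators {sigma_k / 2}
paulis = [np.eye(2), np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
dep = [p / 2.0 for p in paulis]
print(np.round(np.linalg.eigvalsh(choi_from_kraus(dep)), 6))   # J = I/2 for this channel

# trace-norm lower bound on the diamond distance between two unitary channels
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
J_id, J_U = choi_from_kraus([np.eye(2)]), choi_from_kraus([U])
print(trace_norm(J_id - J_U) / 2.0)    # lower-bounds the diamond distance
```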
in the case of a hermiticity preserving map it has a nice form next , in section [ sec : lower - bound ]we prove that the well known lower bound on the diamond norm converges to a finite value for random independent quantum channels and in the limit .we obtain that for channel sampled from the flat hilbert - schmidt distribution the value of the lower bound is the general case is discussed in - depth in the aforementioned section .finally , in section [ sec : upper - bound ] we show that the upper bound also converges to the same value as the lower bound . from these results we have for channels sampled from the hilbert - schmidt distribution discuss in this section some bounds for the diamond norm . for a matrix ,we denote by and its right and left absolute values , i.e. when is the svd of . in the case where is self - adjoint, we obviously have . in the result below, the lower bound is well - known , while the upper bound appear in a weaker and less general form in ( * ? ? ?* theorem 2 ) .[ prop : bound - diamond ] for any linear map , we have consider the semidefinite programs for the diamond norm given in ( * ? ? ?* section 3.2 ) : \text{subject to:}\quad & \begin{bmatrix } \rho_0 \otimes i_{d_2 } & x\\ x^ * & \rho_1 \otimes i_{d_2 } \end{bmatrix } \geq 0\\ & \rho_0,\rho_1\in m_{d_1}^{1,+}(\mathbb c)\\ & x \in m_{d_1d_2}(\mathbb c ) \end{aligned}\ ] ] \text{subject to:}\quad & \begin{bmatrix } y_0 & -j(\phi)\\ -j(\phi)^ * & y_1 \end{bmatrix } \geq 0\\ & y_0 , y_1 \in m_{d_1d_2}^+(\mathbb c ) \end{aligned}\ ] ] the lower and upper bounds will follow from very simple feasible points for the primal , resp . the dual problems .let be a svd of the choi - jamiokowski state of the linear map .for the primal problem , consider the feasible point and .the value of the primal problem at this point is showing the lower bound . for the upper bound , set and , both psd matrices .the condition in the dual problem is satisfied : and the proof is complete . if the map is hermiticity - preserving ( i.e. the matrix is self - adjoint ) , the inequality in the statement reads simply the two bounds in are equal iff the psd matrices and are both scalar .indeed , the lower bound in can be rewritten as and the two bounds are equal exactly when the spectra of and ae flat .the upper bound in can be seen as a strengthening of the following inequality , which already appeared in the literature ( e.g. ( * ? ? ?* section 3.4 ) ) . indeed , again in terms of and , we have and .there are several ways to endow the convex body of quantum channels with probability distributions . in this section, we discuss several possibilities and the relations between them .recall that the choi - jamiokowski isomorphism puts into correspondence a quantum channel with a bipartite matrix having the following two properties * is positive semidefinite * .the above two properties correspond , respectively , to the fact that is complete positive and trace preserving .hence , it is natural to consider probability measures on quantum channels obtained as the image measures of probabilities on the set of bipartite matrices with the above properties .given some fixed dimensions and a parameter , let be a random matrix having i.i.d .standard complex gaussian entries ; such a matrix is called a _ginibre random matrix_. define then the random matrices and are called , respectively , _ wishart _ and _ partially normalized wishart_. 
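these random matrices are straightforward to sample, and doing so gives a quick consistency check that the partial normalization indeed produces a valid choi matrix of a channel. the sketch below is a minimal sampler; the tensor-factor ordering (first factor of dimension d1, second of dimension d2), the scaling of the ginibre entries and the parameter values are conventions assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def partial_trace_2(M, d1, d2):
    # trace over the second tensor factor, first factor has dimension d1
    return np.trace(M.reshape(d1, d2, d1, d2), axis1=1, axis2=3)

def inv_sqrt(H):
    vals, vecs = np.linalg.eigh(H)
    return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.conj().T

def random_channel_choi(d1, d2, s):
    # Ginibre matrix -> Wishart matrix -> partially normalized Wishart matrix
    G = (rng.standard_normal((d1 * d2, s)) +
         1j * rng.standard_normal((d1 * d2, s))) / np.sqrt(2.0)
    W = G @ G.conj().T
    N = np.kron(inv_sqrt(partial_trace_2(W, d1, d2)), np.eye(d2))
    return N @ W @ N

D = random_channel_choi(d1=3, d2=3, s=9)
print(np.allclose(partial_trace_2(D, 3, 3), np.eye(3)))   # trace preservation: True
print(np.min(np.linalg.eigvalsh(D)) >= -1e-10)            # complete positivity: True
```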
the inverse square root in the definition of uses the moore - penrose convention if is not invertible ; note however that this is almost never the case , since the wishart matrices with parameter larger than its size is invertible with unit probability .it is for this reason we do not consider here smaller integer parameters .note that the matrix satisfies the two conditions discussed above : it is positive semidefinite and its partial trace over the second tensor factor is the identity : \\ & = ( { { \mathrm{tr}}}_2 w)^{-1/2 } \left ( \operatorname{tr}_2 w \right ) ( { { \mathrm{tr}}}_2 w)^{-1/2 } = i_{d_1}.\end{aligned}\ ] ] hence , there exists a quantum channel , such that ( note that , and thus are functions of the original ginibre random matrix ) .[ def : measure - partially - normalized - wishart ] the image measure of the gaussian standard measure through the map defined in , and the equation is called the _ partially normalized wishart measure _ and is denoted by .another way of introducing a probability distribution on the set of quantum channels is via the stinespring dilation theorem : for any channel , there exists , for some given , an isometry such that [ def : measure - isometries ] for any integer parameter , let be the image measure of the haar distribution on isometries through the map in . finally , one can consider the lebesgue measure on the convex body of quantum channels , . in this work, we shall however be concerned only with the measure coming from normalized wishart matrices . in this sectionwe introduce and study the basic properties of a two - parameter family of probability measures which will appear later in the paper .this family generalizes the symmetrized marcenko - pastur distributions from , see also for other occurrences of some special cases .before we start , recall that the marcenko - pastur ( of free poisson ) distribution of parameter has density given by ( * ? ? ?* proposition 12.11 ) }(u)\ , du,\ ] ] where and .[ def : smp - xy ] let be two free random variables having marcenko - pastur distributions with respective parameters and . the distribution of the random variable is called the _ subtracted marcenko - pastur distribution _ with parameters and is denoted by .in other words , we have the following result . [prop : smp - wishart ] let ( resp . ) be two wishart matrices of parameters ( resp ) . assuming that and for some constants , then , almost surely as , we have the proof follows from standard arguments in random matrix theory , and from the fact that the schatten -norm is the sum of the singular values , which are the absolute values of the eigenvalues in the case of self - adjoint matrices. we gather next some properties of the probability measure . examples of this distribution are shown in fig .[ fig : smp - xy ] .let .then , 1 .if , then the probability measure has exactly one atom , located at 0 , of mass .if , then is absolutely continuous with respect to the lebesgue measure on .2 . define ^ 2 - 4 \left [ t_{x , y}(u ) \right]^3}. \end{split}\ ] ] the support of the absolutely continuous part of is the set ^ 2 - 4 \left [ t_{x , y}(u ) \right]^3 \geq 0\}.\ ] ] this set is the union of two intervals if and it is connected when , with 3 . on its support , the density of is given by ^{\frac23}-2^{\frac23 } t_{x , y}(u)}{2^{\frac43 } \sqrt{3 } \pi u \left [ y_{x , y}(u ) \right]^{\frac13 } } \right|.\ ] ] the statement regarding the atoms follows from ( * ? ? 
?* theorem 7.4 ) .the formula for the density and equation comes from stieltjes inversion , see e.g. ( * ? ? ?* lecture 12 ) .indeed , since the -transform of the marcenko - pastur distribution reads , the -transform of the subtracted measure reads the cauchy transform of is the functional inverse of . to write down the explicit formula for , one has to solve a degree 3 polynomial equation , and we omit here the details . the statement regarding the number of intervals of the support follows from .the inequality is given by a polynomial of degree 6 which factorizes by , hence an effective degree 4 polynomial .the nature of roots of this polynomial is given by the sign of its discriminant , which , after some algebra , is the same as the sign of , see .in the case where , some of the formulas from the result above become simpler ( see also ) . the distribution is supported between . finally , in the case when , which corresponds to a flat hilbert - schmidt measure on the set of quantum channels , we get that .+ we state here the main result of the paper . for the proof , see the following two sections , each providing one of the bounds needed to conclude .[ thm : main ] let , resp . , be two _ independent _ random quantum channels from having with parameters , resp .. then , almost surely as in such a way that , ( for some positive constants ) , and , the proof follows from theorems [ thm : lower ] and [ thm : upper ] , which give the same asymptotic value . combining theorem [ thm : main ] with hellstrom s theorem for quantum channels , we get that the probability of distinguishing two quantum channels is equal to : additionally , any maximally entangled state may be used to achieve this value .in this section we compute the asymptotic value of the lower bound in theorem [ thm : main ] . given two random quantum channels , we are interested in the asymptotic value of the quantity .[ thm : lower ] let , resp . , be two _ independent _ random quantum channels from having with parameters , resp . . then , almost surely as in such a way that and for some positive constants , the proof of this result ( as well as the proof of theorem [ thm : lower ] ) uses in a crucial manner the approximation result for partially normalized wishart matrices .[ prop : approximation - partial - normalization ] let a random wishart matrix of parameters , and consider its `` partial normalization '' as in . then , almost surely as in such a way that for a fixed parameter , note that in the statement above , the matrix is not normalized ; we have the marchenko - pastur distribution of parameter . in other words , , where is random matrix of size , having i.i.d .standard complex gaussian entries .let us introduce the random matrices the first observation we make is that the random matrix is also a ( rescaled ) wishart matrix .indeed , the partial trace operation can be seen , via duality , as a matrix product , so we can write where is a complex gaussian matrix of size ; remember that scales like . since , in our model , both , grow to infinity , the behavior of the random matrix follows from . as , the random matrix converges in moments toward a standard semicircular distribution .moreover , almost surely , the limiting eigenvalues converge to the edges of the support of the limiting distribution : the proof is a direct application of ( * ? ? ?* corollary 2.5 and theorem 2.7 ) ; we just need to check the normalization factors . 
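as a purely numerical aside, the wishart-difference statement of proposition [ prop : smp - wishart ] is easy to probe by sampling: the eigenvalues of a rescaled difference of two independent wishart matrices approximate the subtracted marcenko-pastur law, and their mean absolute value approximates the kind of first absolute moment that enters the lower-bound computation. the 1/d rescaling and the parameter choices below are assumptions made for illustration, since the exact normalization is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def wishart(d, s):
    G = (rng.standard_normal((d, s)) + 1j * rng.standard_normal((d, s))) / np.sqrt(2.0)
    return G @ G.conj().T

d = 500
x, y = 1.0, 1.0                    # Marcenko-Pastur parameters (s = x d and s = y d columns)
W1 = wishart(d, int(x * d))
W2 = wishart(d, int(y * d))
eigs = np.linalg.eigvalsh((W1 - W2) / d)

# empirical approximation of the first absolute moment of SMP_{x,y}
print(np.mean(np.abs(eigs)))
```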
in the setting of (* section 2 ) , the wishart matrices are not normalized , so the convergence result deals with the random matrices ( here and ) we look now for a similar result for the matrix ; the result follows by functional calculus .[ lem : convergence - y ] almost surely as , the limiting eigenvalues of the random matrix converge respectively to : by functional calculus , we have ^{-1/2} ] converge almost surely to multiple of the identity matrix : where is the average of : in the proof , we shall drop the parameter , but the reader should remember that the matrix dimensions are functions of and that all the matrices appearing are indexed by . to conclude , it is enough to show that since the statement for the smallest eigenvalue follows in a similar manner .let us denote by ^ 2 = \operatorname{tr}_{(12)}(b ) - [ \operatorname{tr}_{(1)}(b)]^2\end{aligned}\ ] ] the average eigenvalue and , respectively , the variance of the eigenvalues of ; these are real random variables ( actually , sequences of random variables indexed by ) . by chebyshev s inequality, we have note that one could replace the factor in the inequality above by by using samuelson s inequality , but the weaker version is enough for us .we shall prove now that almost surely and later that almost surely , which is what we need to conclude .to do so , we shall use the weingarten formula . in the graphical formalism for the weingarten calculus introduced in ,the expectation value of an expression involving a random haar unitary matrix can be computed as a sum over diagrams indexed by permutation matrices ; we refer the reader to or for the details . using the unitary invariance of , we write , for a haar - distributed random unitary matrix , and some ( random ) eigenvalue vector . note that traces of powers of only on , so we shall write .we apply the weingarten formula to a general moment of , given by a permutation : where are the cycles of , and denotes the conditional expectation with respect to the haar random unitary matrix . from the graphical representation of the weingarten formula (* theorem 4.1 ) , we can compute the conditional expectation over ( note that below , the vector of eigenvalues is still random ) : above , is the weingarten function and is the moment of the diagonal matrix corresponding to the permutation .the combinatorial factors and come from the initial wirings of the boxes respective to the vector spaces of dimensions ( initial wiring given by ) and ( initial wiring given by the identity permutation ) , see figure [ fig : graphical - wg ] .the pre - factors contain the normalization from the ( partial ) traces . finally , the ( random ) factors are the normalized power sums of : where are the cycles of .recall that we have assumed almost sure convergence for the sequence ( and , thus , for ) : -th group in the diagram corresponding to .,scaledwidth=50.0% ] as a first application of the weingarten formula , let us find the distribution of the random variable .obviously , actually , does not depend on the random unitary matrix , since from the hypothesis ( with ) , we have that , almost surely as , the random variable converges to the scalar .let us now move on to the variance of the eigenvalues .first , we compute its expectation . we apply now the weingarten formula for ; the sum has terms , which we compute below : * : * , : * , : * : . 
combining the expressions above with, we get using the hypothesis , we have thus , as , let us now proceed and estimate the variance of ; more precisely , let us compute . asbefore , we shall compute the expectation in two steps : first with respect to the random haar unitary matrix , and then , using our assumption , with respect to , in the asymptotic limit . to perform the unitary integration , note that the weingarten sum is indexed by a couple ,so it contains terms , see . in appendix[ sec : variance ] we have computed the variance of with the usage of symmetry arguments .the result , to the first order reads ^ 2.\ ] ]taking the expectation over and the limit ( we are allowed to , by dominated convergence ) , we get ^ 2.\ ] ] we put now all the ingredients together : ^ 2 } \sim \frac{cd_1^{-2}d_2^{-4}}{[\varepsilon^2 d_1^{-1 } - ( 1+o(1))c'd_2^{-2}]^2},\ ] ] where non - negative constants depending on the limiting measure . using ,the dominating term in the denominator above is , and thus we have : since the series is summable , we obtain the announced almost sure convergence by the borel - cantelli lemma , finishing the proof .we remind here , that and ( a)$ ] . because we assume that has unitarly invariant distribution , we can write where is -th column of matrix and we denote and consider mixed moments computed in appendix [ sec : mixed - moments ] . and symmetric mixed moments let we have as , in the above . direct computations with the usage of symmetric moments give us \\ & = \frac{2 \left(d_1 ^ 2 - 1\right ) \left(d_2 ^ 2 - 1\right ) } { d_2 ^ 2 \left(d_1 ^ 2 d_2 ^ 2 - 1\right){}^2 \left(d_1 ^ 4 d_2 ^ 4 - 13 d_1 ^ 2 d_2 ^ 2 + 36\right ) } \big(d_1 ^ 4 d_2 ^ 4 \left(\mu _ 1 ^ 2-\mu _ 2\right){}^2 \\ & \ \ \ \ \ \ \ + d_1 ^ 2 d_2 ^ 2 \left(11 \mu _ 1 ^ 4 - 22 \mu _ 2 \mu _ 1 ^ 2 + 20 \mu _ 3 \mu _ 1 - 4 \mu _ 2 ^ 2 - 5 \mu _ 4\right)+5 \left(3 \mu _ 2 ^ 2 - 4 \mu _ 1 \mu _ 3+\mu _4\right)\big ) \\ & = \frac{2 ( \mu_1 ^ 2 - \mu_2)^2}{d_1 ^ 2 d_2 ^ 4}(1 + o(1 ) ) \end{split}\ ] ]we have the following formulas for mixed moments , which covers all possible cases ( because of the symmetry ) here we will consider expectations of the following kind note , that if we multiply matrix by a unitary matrix which does not change the first column we will not change the expectation , in fact we can integrate over the subgroup of matrices which does not change the first column of . now for a moment we fix matrix and consider where matrices are in the form the is an expectation with respect to the haar measure on embedded in , in the above way .note , that the vector represents a random orthogonal vector to the .first we calculate now , using standard integrals we obtain where and incorporates the condition that first element of vector is zero .now we obtain , after elementary calculations , using the fact , that is unitary where is a swap operation on two systems of dimensions , i.e. . so we get next we consider so we have obtained in the above formulas we used and fact , that .
in this work we analyze properties of generic quantum channels in the case of large system size . we use random matrix theory and free probability to show that the distance between two independent random channels tends to a constant value as the dimension of the system grows larger . as a measure of the distance we use the diamond norm . in the case of a flat hilbert - schmidt distribution on quantum channels , we obtain that the distance converges to . furthermore , we show that for a random state acting on a bipartite hilbert space , sampled from the hilbert - schmidt distribution , the reduced states and are arbitrarily close to the maximally mixed state . this implies that , for large dimensions , the state may be interpreted as a jamiokowski state of a unital map .
in spectral graph theory one uses spectral analysis to study the interplay between the topology of a graph and the dynamical processes modelled through the graph .in fact , various dynamical processes in disciplines ranging from physics , biology , information theory , and chemistry to technological and social sciences are modelled with graph theory .therefore , graph theory forms a unified framework for their study .in particular , one associates a certain matrix to a graph ( e.g. the adjacency matrix , the laplacian , the google matrix ) and studies the connection between the spectral properties of this matrix and the properties of the dynamical processes governed through them .we mention some studies in this context : the stability of synchronization processes , the robustness and the effective resistance of networks , error - correcting codes , etc .these examples illustrate how spectral graph theory relies to a large extent on the capability of determining spectra of sparse graphs .it is thus important to develop mathematical methods which allow to derive in a systematic way exact analytical as well as numerical results on spectra of large graphs . for an overview of analytical results on the spectra of infinite graphs we refer to the paper of mohar and woess .recently , the development of exact results for large sparse graphs has been reconsidered using ideas from statistical physics of disordered systems . in this approach oneformulates the spectral analysis of graphs in a statistical - mechanics language using a complex valued measure .the spectrum is given as the free energy density of this measure , which can be calculated using methods from disordered systems such as the replica method , the cavity method or the super - symmetric method .this approach is exact for infinitely large graphs that do not contain cycles of finite length . in recentworks the cavity method has been generalized to the study of spectra of graphs with a community structure and spectra of small - world graphs .the replica and cavity methods have also been used to derive the largest eigenvalue of sparse random matrix ensembles .although the cavity method is heuristic , it has been considered in a rigorous setting for undirected sparse graphs using the theory of local weak convergence . however , for directed sparse graphs the asymptotic convergence of the spectrum to the resultant cavity expressions has not been shown .nevertheless , recent studies have shown the asymptotic convergence of the spectrum of highly connected sparse matrices to the circular law and have proven the asymptotic convergence of the spectrum of markov generators on higly connected random graphs .these studies , however , do not concern finitely connected graphs . 
in this workwe extend the statistical - mechanics formulation , in particular the cavity method for the spectra of large sparse graphs which are locally tree - like , to large sparse graphs with many short cycles .such an extension is relevant because cycles do appear in many real - world systems such as the internet .we derive a set of resolvent equations for graph ensembles that contain many cycles of finite length .these equations are exact for infinitely large ( un)directed husimi graphs and solving them constitutes an algorithm for determining the spectral density of these graphs .first , we show how this algorithm determines the spectra of irregular husimi graphs up to a high accuracy , well corroborated by numerical simulations .then we derive novel analytical results not only for the spectra of undirected regular husimi graphs , but also for the spectra of directed regular husimi graphs .in particular , we show that the boundary of the spectrum of directed husimi graphs composed of cycles of length , is determined by a hypotrochoid with radii ( being the radius of the fixed circle and of the moving circle ) .not many analytical expressions for the spectra of directed sparse random graphs and non - hermitian random matrices are known ( besides some exceptions , see ) .a short account of some of our results on regular graphs has appeared in .networks appearing in nature are usually modelled with theoretical ensembles of graphs .these ensembles consist of randomly constructed graphs with certain topological constraints on their connectivity .model systems allow for a better understanding of the properties of the more complex real - world systems . in this work we consider simple graphs of size consisting of a discrete set of vertices and a set of edges .simple graphs are uniquely defined by their adjacency matrix , with elements {ij } = a_{ij}\in \left\{0,1\right\} ] for .we have when and zero otherwise .an undirected edge is present between and when , a directed edge is present from to when and , while indicates that there is no edge present between and .we define graph ensembles through a normalized distribution for the adjacency matrices . selecting a graph from the ensemble corresponds with drawing randomly an adjacency matrix from the distribution .we always consider ensembles of infinite graphs , for which .this is implicitly assumed throughout the whole paper .below , we first define random graphs which are locally tree - like .their spectral properties have been considered in several studies .second , we define ensembles of graphs with many cycles : the cacti or husimi graphs . ensembles of random graphs with only certain constraints on the vertex degrees are locally tree - like .some well - studied examples with these characteristics are the ensemble of -regular graphs ( also called bethe lattices ) and the ensemble of irregular erds - rnyi graphs . for undirected graphs ,these ensembles can be formally defined as follows : * -regular graphs with fixed connectivity : with the degree of the -th vertex .figure [ cavities1a ] presents a sketch of a bethe lattice . 
* erds - rnyi graphs with mean connectivity : in this ensemblethe degrees fluctuate from vertex to vertex within the graph .the distribution of vertex degrees converges to a poissonian distribution with mean in its asymptotic limit .ensembles of directed graphs can be defined similarly by leaving out the symmetry constraints in the definitions eqs ( [ eq : ensemble1 ] ) and ( [ eq : ensemble2 ] ) and by taking into consideration the difference between and .for instance , we define a -regular ensemble of directed graphs by the constraint for any , with the indegree defined as and the outdegree as .the erds - rnyi and the -regular ensemble have the local tree property , i.e. the probability to encounter a cycle of finite length in the local neighbourhood of a vertex becomes arbitrary small for .the local tree property of random graphs is an important characteristic which has allowed to determine the spectra of sparse graphs in previous works .this property is illustrated in figure [ cavities1a ] for the bethe lattice : cycles only appear at the leaves of the tree , where the vertex constraints have to be satisfied such that their typical length is of the order .next , we define graph ensembles which contain many short cycles and are , therefore , not locally tree - like .a cycle of length is defined by a set of nodes with an edge connecting each pair , for .the set of all -tuples is denoted by .we define the following ensembles which are very similar to the husimi graphs in : * -regular husimi graphs with fixed connectivity and fixed cycle length : where denotes now the number of -cycles of incident to the -th vertex . in figure [ cavities1 ] we show the typical local neighbourhood of an undirected ( 4,2)-regular husimi graph .* irregular husimi graphs with fixed cycle length and mean loop connectivity : \times \prod_{i < j}\delta(a_{ij } ; a_{ji } ) \nonumber \\ & & \times \prod_{(i , j)}\delta\left(a_{ij}\sum_{(i , k_1,k_2,\cdots , k_{\ell-2 } , j)\in v^{\ell}_{ij}}a_{ik_1}\left(\prod^{\ell-3}_{n=1 } a_{k_n k_{n+1}}\right ) a_{k_{\ell-2}j } ; a_{ij } \right).\nonumber \label{eq : poiss3}\end{aligned}\ ] ] the set consists of all -tuples .the first factor gives a weight to each randomly drawn cycle of length , the second factor is the constraint which makes the graph undirected and the last factor is a constraint which takes into consideration that each edge must belong to exactly one cycle of length .we remark that in our notation of husimi graphs the mean cycle connectivity is the average number of cycles of length connected to a certain vertex , while denotes the mean degree of each vertex .note that for the ensembles of husimi graphs introduced here reduce to the corresponding ensembles of random graphs which are locally tree - like ( discussed in the previous subsection ) .as before , we can define as well directed ensembles of husimi graphs by removing the symmetry constraint and taking into consideration the possible difference beween and .for instance , for a regular husimi graph we set the mean vertex indegree and the mean vertex outdegree equal to . in figure [ cavities1 ] we present the neighbourhood of a vertex in a directed ( 3,2)-regular husimi graph .husimi graphs are not locally tree - like since they are composed of cycles , but they do have an infinite - dimensional character . 
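the ensembles defined in this section can be sampled directly, which is useful for the comparisons with direct diagonalization made later in the text. the sketch below draws an erdos-renyi graph and a regular graph with networkx, builds a husimi-like graph by a configuration-model-style wiring of "cycle stubs", and evaluates the broadened spectral density from the eigenvalues of the sampled matrices. the husimi sampler is only approximate (stub tuples with a repeated vertex are simply skipped, a vanishing fraction for large n), and the notation in the comments, c cycles per vertex and cycles of length l, is an assumption about the stripped symbols.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
n = 3000

# locally tree-like ensembles
A_reg = nx.to_numpy_array(nx.random_regular_graph(4, n))        # 4-regular graph
A_er = nx.to_numpy_array(nx.erdos_renyi_graph(n, 4.0 / n))      # mean degree ~ 4

# configuration-model-style Husimi sampler: c cycle-stubs per vertex, grouped
# into l-tuples, each tuple wired into an undirected l-cycle
def husimi_adjacency(n, c, l):
    assert (n * c) % l == 0
    stubs = np.repeat(np.arange(n), c)
    rng.shuffle(stubs)
    A = np.zeros((n, n), dtype=float)
    for t in range(0, n * c, l):
        cyc = stubs[t:t + l]
        if len(set(cyc)) < l:          # degenerate tuple (repeated vertex): skip
            continue
        for k in range(l):
            i, j = cyc[k], cyc[(k + 1) % l]
            A[i, j] = A[j, i] = 1.0
    return A

A_husimi = husimi_adjacency(n, c=2, l=3)        # two triangles per vertex, degree ~ 4

# broadened spectral density from direct diagonalization,
# rho(lambda) ~ -(1/(pi N)) Im Tr G(lambda + i eps)
def spectral_density(A, lambdas, eps=0.05):
    evals = np.linalg.eigvalsh(A)
    z = lambdas[:, None] + 1j * eps
    return -np.imag(np.sum(1.0 / (z - evals[None, :]), axis=1)) / (np.pi * A.shape[0])

lambdas = np.linspace(-5.0, 5.0, 201)
for A in (A_reg, A_er, A_husimi):
    print(np.trapz(spectral_density(A, lambdas), lambdas))      # each close to 1
```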
indeed , the local neighbourhood of a vertex contains different branches which are only connected by cycles of order , see figures [ cavities1a ] and [ cavities1 ] .this property allows us to present an exact spectral analysis .in this section we show how to determine the spectral properties of undirected graphs with cycles . for regular graphs , our approach , briefly presented in , consists of four steps .first , we write the spectrum in statistical mechanics terms using a complex measure. then we use the infinite - dimensional nature of the graphs to derive a closed expression in the submatrices of the resolvent .next , we simplify the resolvent equations to the specific case of regular undirected husimi graphs . finally , we solve the resultant algebraic expressions to find analytical expressions for the spectra of regular husimi graphs .we define the resolvent of a real symmetric matrix as where is the identity matrix and , with and . from the resolventwe can determine the asymptotic density of the real eigenvalues of an ensemble of matrices : .\label{eq : specres}\end{aligned}\ ] ] we have left out ensemble averages since we assume that the spectrum self - averages in the limit to a deterministic value .a common method in statistical physics is to write the resolvent elements and the spectrum as averages over a certain complex - valued measure . indeed , when defining the normalized gaussian function the diagonal elements {ii } = g_{i} ] , with .the quantity is a diagonal matrix with elements {kk } = \sum_{\beta\in \partial^{(\ell-1)}_{j_k}\setminus ( i , j_1,\ldots , j_{k-1 } , j_{k+1 } , \ldots , j_{\ell-1 } ) } \ba^t_{j_k , \beta } g^{(j_k)}_{\beta } \ba_{j_k , \beta } \ , , \nonumber \end{aligned}\ ] ] with .we have also defined the -dimensional vector .the diagonal elements of the resolvent are given by in appendix [ app : cactusstruct ] we elaborate on the precise derivation of these equations .the resolvent equations ( [ eq : reshus1 ] ) determine the spectra of graphs with many short cycles of fixed length . note that they can be straightforwardly generalized to graphs with variable cycle lengths and , even more general , to graphs composed from arbitrary figures , by replacing by the corresponding adjacency matrix of the figure in the absence of a node , and by the submatrix of connecting the figure and the node .in our case , the figures of the graph are the -tuples .the set of resolvent equations ( [ eq : reshus1])-([eq : reshus2 ] ) can also be seen as a message - passing algorithm between regions of the graph , which is similar to the generalized belief propagation algorithm in information theory . herethe messages of the algorithm are the matrices sent from the -tuples to a vertex , with whom the tuple forms a cycle of length in the graph .we have verified the exactness of the resolvent equations ( [ eq : reshus1])-([eq : reshus2 ] ) on irregular husimi graphs with through a comparison with direct diagonalization results .this is presented in figure [ fig : c6 ] , where we also compare the spectrum of irregular husimi graphs with mean cycle connectivity with the spectrum of erds - rnyi random graphs with the same mean vertex degree . from these resultsit appears that the spectrum of erds - rnyi graphs converges faster to the wigner semicircle for .finally , we point out that the ensemble definitions in section 2 are global by specifying . 
in the derivation of the resolvent equations ( [ eq : reshus1])-([eq : reshus2 ] )these global definitions are not explicitely used .instead , our analysis uses the typical local neighbourhoods of the vertices in the graph .therefore , our results are valid for all graphs which have a local neighbourhood similar to the one given by the resolvent equations .the connection between the global definitions and the resolvent equations can be made explicit through the distribution of local vertex neighbourhoods , see for instance .this is how we have derived the results in figure [ fig : c6 ] . in the next two sections we solve the resolvent equations for the case of undirected regular husimi graphs for which they simplify considerably . in this sectionwe consider graphs with a simple topology that allows to extract exact analytical solutions from the resolvent equations ( [ eq : reshus1])-([eq : reshus2 ] ) : regular husimi graphs with cycles .for this ensemble of random graphs and for all connected pairs . in figure [ cavities1 ] the square undirected husimi graph is illustrated .since there is no local disorder every node has the same local neighbourhood .for the graph becomes transitive and we can set in the resolvent equations ( [ eq : reshus1 ] ) .we get the following closed equations in the -dimensional matrices : and the resolvent follows from where .it is convenient to introduce the scalar variable and rewrite ( [ reg ] ) as the matrix in ( [ reg1 ] ) has a simple structure .we have calculated the inverse analytically using specific methods for tridiagonal matrices .this leads to our final formula for when the coefficients are complex numbers given by the recurrence relation with initial values and . equations( [ gs ] ) and ( [ alpha ] ) are a set of very simple equations that determine the resolvent elements of undirected regular husimi graphs .it is one of the main results of our previous work . from the expression of we find straightforwardly the green function and the spectrum , according to eq .( [ eq : spectrumrec ] ) and ( [ greensimp ] ) , we solve the algebraic equations ( [ gs])-([alpha ] ) to find the spectrum of -regular undirected husimi graphs . due to their simple linear structure , these recurrence relations can be solved excactly and allow to obtain the analytical form of the polynomials in the variable for any value of : * corresponds with a quadratic equation : - g_s = 2 .\nonumber \end{aligned}\ ] ] * gives a cubic equation : } { \left [ z - ( c-1 ) g_s \right]^2 -2}. \nonumber \end{aligned}\ ] ] * is determined by a quintic equation : ^{3 } - 4\left [ z - ( c-1 ) g_s \right ] + 2 } { \left [ z - ( c-1 ) g_s \right]^{4 } - 3 \left [ z - ( c-1 ) g_s \right]^{2 } + 1}. \nonumber \end{aligned}\ ] ] * follows from a quartic equation : ^{2 } - 4 } { \left [ z - ( c-1 ) g_s \right]^{3 } - 3\left [ z - ( c-1 ) g_s \right]}. \nonumber \end{aligned}\ ] ] * for , the polynomials have a degree larger than four . the root of these polynomial equations which gives the expression for the spectrum is a stable fixed point of eq .( [ reg1 ] ) . in general , one finds the roots of polynomials in terms of radicals up to degree four . 
for larger degrees , algebraic solutions in terms of radicals no longer exist , apart from some particular situations .the roots of general polynomials with degree larger than four are given in terms of elliptic functions .we have solved the above equations for and have found the analytical expression for , generalizing the kesten - mckay law to regular graphs with cycles . for other values of , eqs .( [ gs])-([alpha ] ) are solved numerically in a straightforward way , allowing to study the spectrum as a function of with high accuracy . from the solution of the resolvent equations ,we have found the expressions for the spectra of -regular undirected husimi graphs .we point out that for we recover a sparse regular graph without short cycles . in this casewe have the kesten - mckay eigenvalue - density distribution for ] . for find the spectrum of a square husimi graph ^{2 } + 3 \ , \pi \ , c^{2 } \ , q_{-}^{2}(\lambda ) , } \nonumber\ ] ] for and otherwise , where and the edges of are determined by finding the roots of .for , converges to the wigner semicircle law . for ,the spectrum contains a power - law divergence as .we have also found the analytical expression for : ^ 2 + \pi c^2 f(\lambda ) } , \nonumber \end{aligned}\ ] ] for and otherwise , with }{3(c-1)^2 } , \nonumber \\r(\lambda ) & = & \sqrt{u_{r}(\lambda ) - \frac{3 \lambda^2}{4 ( c-1)^2 } - \frac{(2c-5)}{(c-1)^2 } } , \nonumber \\f(\lambda ) & = & -\frac{|\lambda|^3}{(c-1)^3 } + \frac{4(2c+1)|\lambda|}{(c-1)^3 } + 4 r(\lambda ) \left [ \frac{(2c-5)}{(c-1)^2 } - \frac{3 \lambda^2}{2 ( c-1)^2 } + u_r(\lambda ) \right],\nonumber \end{aligned}\ ] ] and \lambda^2 \nonumber \\ & & + \frac{16(2c-5)}{3}+ \frac{(2c-5)^3}{27(c-1 ) }. \nonumber\end{aligned}\ ] ] the function is the discriminant of the cubic polynomial associated to the quartic polynomial for .the edges of are obtained from the roots of .the spectrum converges to the wigner semicircle law when . for and obtain the spectrum by solving numerically the set of eqs .( [ gs])-([alpha ] ) .one can show that converges to the wigner law in the limit , and converges to the kesten - mckay law in the limit ( this limit corresponds to a graph without short cycles ) .in figure [ fig:4ell ] we present the evolution of the spectrum as a function of the connectivity for the case .we notice indeed the fast convergence to the wigner semi - circle law for . 
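as a cross-check of the no-short-cycle limit quoted above, the scalar cavity recursion for a regular graph without short cycles can be iterated numerically and compared with the closed-form kesten-mckay density. the recursion below is the standard tree (bethe-lattice) one, written with the vertex degree called k, namely g_cav = 1/(z - (k-1) g_cav) with g = 1/(z - k g_cav); it is not the general recursion for cycles of length l derived in the text.

```python
import numpy as np

def kesten_mckay_numeric(k, lambdas, eps=0.01, n_iter=5000):
    # fixed-point iteration of the tree cavity recursion at z = lambda + i eps
    z = lambdas + 1j * eps
    g_cav = np.zeros_like(z)
    for _ in range(n_iter):
        g_cav = 1.0 / (z - (k - 1) * g_cav)
    g = 1.0 / (z - k * g_cav)
    return -np.imag(g) / np.pi

def kesten_mckay_exact(k, lambdas):
    lam = np.asarray(lambdas, dtype=float)
    rho = np.zeros_like(lam)
    supp = np.abs(lam) <= 2.0 * np.sqrt(k - 1.0)
    rho[supp] = (k * np.sqrt(4.0 * (k - 1.0) - lam[supp] ** 2)
                 / (2.0 * np.pi * (k ** 2 - lam[supp] ** 2)))
    return rho

lambdas = np.linspace(-3.4, 3.4, 9)
diff = kesten_mckay_numeric(3, lambdas) - kesten_mckay_exact(3, lambdas)
print(np.max(np.abs(diff)))     # small, limited by the broadening eps
```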
in figure [ oddell ]we present the evolution of the spectrum as a function of the cycle length , for fixed .we see how the spectrum converges rapidly to the kesten - mckay law for .in our study of spectral properties of regular directed graphs with cycles we follow again four steps .first , we write the spectrum in statistical mechanics terms .this is done by mapping the resolvent calculation of a directed graph on a resolvent calculation of a related undirected graph , using the hermitization procedure .the resolvent of this undirected graph is then analyzed using the methodology presented in section 3 .thereby , the infinite - dimensional nature of the resultant graphs again allows to derive a closed expression in submatrices of the resolvent matrix .thirdly , we show how this closed set of equations determines the spectrum of directed regular husimi graphs and we derive their explicit form in this case .finally , the resultant algebraic expressions are solved to arrive at explicit analytical expressions in the support of the spectra .the eigenvalues of the adjacency matrix of directed husimi graphs are distributed in the complex plane , contrary to the real eigenvalues for undirected husimi graphs . by introducing and using the relation , the density of states at a certain point be written formally as , where is the resolvent .the operation denotes complex conjugation .the resolvent is the central object of interest and its non - analytic behavior at the eigenvalues of poses difficulties in applying various techniques well - developed for hermitian matrices .an elegant way to avoid this problem is the hermitization method .we define a block matrix where is a regularizer and is the -dimensional identity matrix. the lower - left block of is precisely the matrix .thus , the problem reduces to calculating the diagonal matrix elements {j+n , j} ] , with ( ) denoting pauli matrices .a graphical representation using an induced graph is again useful in these calculations . for a non - hermitian matrix with real entriesthe induced graph is directed , contrary to an undirected graph for real symmetric matrices . graphically the matrix elements correspond then with a directed edge from node to node . combining the representation of {ij} ] , where .the matrix is a diagonal matrix formed by the following block elements {kk } = \sum_{\beta \in \partial^{(\ell-1)}_{j_k}\setminus ( i , j_1,\ldots , j_{k-1 } , j_{k+1 } , \ldots , j_{\ell-1 } ) } \mathbb { a}^{t}_{j_{k } \beta } \mathcal { d}_{\beta}^{(j_k ) } \mathbb{a}_{j_k \beta } , \label{matrb}\ ] ] with and .once eqs .( [ cavd ] ) have been solved , the spectrum follows from eqs .( [ gj ] ) and ( [ spectrgj ] ) .the cavity equations have an interpretation in terms of a message - passing algorithm : the matrix is seen as the message sent by the nodes of cycle to node of the same cycle .this completes the general solution of the problem .we determine now the resolvent equations for the spectrum of infinitely large -regular directed husimi graphs .these graphs have for , i.e. , each vertex is incident to cycles of length .we set and when there is a directed edge from node to , such that the corresponding matrix assumes the form . 
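a quick numeric check of the structure exploited by the hermitization step may be useful here: with the regularizer switched off, the hermitian block matrix built from the shifted matrix has eigenvalues equal to plus/minus the singular values of that shifted matrix, which is why the non-hermitian problem can be traded for a hermitian one. the block layout below (off-diagonal blocks a - z and its adjoint, zero diagonal blocks) is one common form and may differ from the exact convention of the text.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 50
A = rng.standard_normal((N, N))            # a sample non-symmetric matrix
z = 0.3 + 0.2j

M = A - z * np.eye(N)
B = np.block([[np.zeros((N, N)), M],
              [M.conj().T, np.zeros((N, N))]])   # hermitized matrix at eta = 0

eig_B = np.sort(np.linalg.eigvalsh(B))
sv_M = np.linalg.svd(M, compute_uv=False)
print(np.allclose(eig_B, np.sort(np.concatenate([-sv_M, sv_M]))))   # True
```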
as a consequence , the matrices independent of the indices ( ) .it is convenient to define the two - dimensional matrix , where .we write in terms of as follows {21}^{-1}\ , .\label{spectrcav}\ ] ] from eqs .( [ cavd ] ) and ( [ matrb ] ) one obtains that , for , the two - dimensional matrix solves the equation ^{-1 } \mathbb{a } \ , , \label{eqd } \ ] ] where is a -dimensional matrix with elements {i j } = \delta_{i , j+1} ] , and substituting this form in eqs .( [ ell3b ] ) and ( [ ell4b ] ) , we find the following expressions for the boundary of the support of triangular and square regular directed husimi graphs : * : * : these are the parametric equations which describe , for each corresponding , an hypotrochoid in the complex plane .the parameter as a function of the cycle degree for and is given by , respectively , eqs .( [ rell3 ] ) and ( [ rell4 ] ) .a hypotrochoid is a cyclic function in the complex plane which is drawn by rotating a small circle of radius in a larger circle of radius .the support of triangular and square husimi graphs is therefore given by hypotrochoids with , respectively , and .these analytical results for the support of the spectra of directed husimi graphs for and are shown in the lower graphs of figure [ mozaic ] .the agreement with direct diagonalization results for is excellent , confirming the exactness of our analytical results .-regular husimi graphs for several values of the cycle length and the following values of the number of cycles incident to each vertex : ( solid line ) , ( dashed line ) , ( dotted line ) and ( dot - dashed line ) .the hypotrochoids have a rotational symmetry by the angle , from which one obtains the value of the cycle length .direct diagonalization results ( dots ) for matrices with and are shown . ]based on the form of eqs .( [ ell3b]-[ell4polyn ] ) , we conjecture that , for a given and , the following equations are fulfilled at the boundary of the support of ^{\ell-1}}{s^{2(\ell-1 ) } } \ , , \label{eqbanyell } \\s^{2(\ell-1 ) } & = & ( c-1 ) \sum_{n=0}^{\ell-2 } s^{2n } \ , . \label{eqranyell}\end{aligned}\ ] ] substituting ( $ ] ) in eq .( [ eqbanyell ] ) reads } \ , \label{hypogen}\ ] ] for the boundary of the support of a directed regular husimi graph .remarkably , eq . ( [ hypogen ] ) is a hypotrochoid with a fraction .the parameter is determined from the roots of a polynomial of degree in the variable , see eq .( [ eqranyell ] ) .equation ( [ eqranyell ] ) can also be written as we have found an analytical expression for the roots for and . for larger values of , we have solved eq .( [ simple ] ) numerically and , by choosing the stable solution , we have derived accurate values for the parameters of the hypotrochoids .we present explicit results for and in the upper graphs of figure [ mozaic ] .direct diagonalization results exhibit once more an excellent agreement with the theoretical results , strongly supporting our conjecture that the support of the spectrum of directed regular husimi graphs for general is given by eqs .( [ hypogen]-[simple ] ) .the support of regular husimi graphs converges to the circle in the limit , corresponding with the expression ( [ spectruml2 ] ) for a graph without short cycles .in this work we have obtained the spectra of ( un)directed husimi graphs .the main result is a set of exact equations which determines a belief - propagation like algorithm in the resolvent elements of the matrix . 
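to visualize the statement, the sketch below draws a curve of the form z(t) = a e^{it} + b e^{-i(l-1)t}, which is a hypotrochoid-type curve with the l-fold rotational symmetry described above, and samples the eigenvalue cloud of a directed husimi graph for comparison. the radii a and b are placeholders (the actual values follow from the polynomial equation for s, eq. ( [ eqranyell ] ), and are not reproduced here), and the directed sampler reuses the stub construction from the undirected sketch, wiring each l-tuple as a directed cycle.

```python
import numpy as np

rng = np.random.default_rng(5)

def directed_husimi_adjacency(n, c, l):
    stubs = np.repeat(np.arange(n), c)
    rng.shuffle(stubs)
    A = np.zeros((n, n))
    for t in range(0, n * c, l):
        cyc = stubs[t:t + l]
        if len(set(cyc)) < l:
            continue
        for k in range(l):
            A[cyc[k], cyc[(k + 1) % l]] = 1.0    # one directed edge around the cycle
    return A

l = 3
theta = np.linspace(0.0, 2.0 * np.pi, 400)
a, b = 1.0, 0.4                                  # placeholder radii, for illustration only
curve = a * np.exp(1j * theta) + b * np.exp(-1j * (l - 1) * theta)

# l-fold rotational symmetry: shifting theta by 2 pi / l rotates the curve by 2 pi / l
rotated = (a * np.exp(1j * (theta + 2 * np.pi / l))
           + b * np.exp(-1j * (l - 1) * (theta + 2 * np.pi / l)))
print(np.allclose(curve * np.exp(2j * np.pi / l), rotated))     # True

# eigenvalues of a sampled directed Husimi graph populate an l-fold symmetric region
evals = np.linalg.eigvals(directed_husimi_adjacency(1500, c=2, l=l))
print(np.max(np.abs(evals)))                                    # spectral radius of the sample
```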
for irregular graphswe have shown a very good correspondence between direct diagonalization results and our approach . for regular graphswe have derived several novel analytical expressions for the spectrum of undirected husimi graphs and the boundaries of the spectrum of directed husimi graphs . remarkably , the boundaries of directed regular husimi graphs consist of hypotrochoid functions in the complex plane .our results indicate that , at high connectivities , the spectrum of undirected random graphs converges to the wigner semicircle law , while the spectrum of directed random graphs converges to girko s circular law .this convergence seems to be rather universal and independent of the specific graph topology .it would be interesting to better understand the conditions under which finitely connected graphs converge to these limiting laws .finally , we point out that the eigenvalues of random unistochastic matrices are distributed over hypocycloids in the complex plane .this close similarity with our results suggests an interesting connection between the spectra of unistochastic matrices and regular directed husimi graphs with cycles of length .this paper is dedicated to fritz gesztesy , on the occasion of his 60th birthday .db wants to thank fritz not only for many years of stimulating and fruitful collaborations but especially for a lifetime friendship !flm is indebted to karol yczkowski for illuminating discussions .we present the essential steps to determine the resolvent equations using the cavity method .this method is based on the introduction of cavities in a graph forming subgraphs , where the node and all of its incident edges have been removed , see figure [ fig : cavities ] . in analogyone can also remove the i - th column and the i - th row from a matrix to obtain the submatrix .the cavity method is based on the consideration that a probability distribution defined on a locally tree - like graph has the factorization property : the quantity is the -th marginal of on the cavity subgraph of , and is the marginal of with respect to the set of variables . in the language of spin models , the factorization property follows from the locally tree - like structure of a typical neighbourhood in the graph , see figure [ fig : cavities ] . following the derivationas presented in , we find a set of closed equations in the marginals from which the marginals follow : finally , we use the fact that the are gaussian functions to recover the resolvent equations ( [ eq : resolv2 ] ) , after substitution of ( [ eq : ansatz ] ) in ( [ eq : marg ] ) and ( [ eq : marg2 ] ) .we apply now a similar logic to graphs with many cycles which have an infinite - dimensional structure . 
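The tree-like recursion just recovered can be summarized as a short message-passing loop before turning to the cyclic case. The sketch below, assuming numpy, iterates the scalar cavity resolvents G_{i->j}(z) = [z - sum_{k in di\j} A_ik^2 G_{k->i}(z)]^{-1} and assembles the spectral density from the diagonal resolvent elements; the iteration count, the broadening eta and the test matrix are arbitrary choices. The Husimi-graph generalization discussed in the main text replaces these scalar messages by the matrix-valued messages of eqs. (cavd)-(matrb).

import numpy as np

def cavity_spectral_density(A, lambdas, eta=5e-3, n_iter=100):
    """cavity / belief-propagation estimate of the spectral density of a sparse symmetric
    matrix A; exact on trees, a good approximation on locally tree-like graphs."""
    N = A.shape[0]
    neigh = [np.flatnonzero(A[i]) for i in range(N)]
    edges = [(i, j) for i in range(N) for j in neigh[i]]        # directed edges i -> j
    idx = {e: k for k, e in enumerate(edges)}
    rho = []
    for lam in lambdas:
        z = lam - 1j * eta
        g = np.full(len(edges), 1.0 / z, dtype=complex)         # cavity resolvents G_{i->j}
        for _ in range(n_iter):
            g = np.array([1.0 / (z - sum(A[i, k] ** 2 * g[idx[(k, i)]]
                                         for k in neigh[i] if k != j))
                          for (i, j) in edges])
        diag = [1.0 / (z - sum(A[i, k] ** 2 * g[idx[(k, i)]] for k in neigh[i]))
                for i in range(N)]
        rho.append(np.mean(np.imag(diag)) / np.pi)
    return np.array(rho)

# toy example: sparse symmetric +-1 matrix with mean degree 3
rng = np.random.default_rng(1)
N = 100
A = np.triu((rng.random((N, N)) < 3.0 / N) * rng.choice([-1.0, 1.0], (N, N)), k=1)
A = A + A.T
rho = cavity_spectral_density(A, np.linspace(-3.0, 3.0, 41))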
in this casewe use the factorization property on the marginals of the distribution .the factorization property follows from the fact that the average distance between different branches connected to is of the order after removal of , see figure [ fig : cavities ] .note that is the marginal of for a tuple forming a cycle with the -th vertex .we have generalized the derivation for regular husimi graphs presented in to the case of arbitrary husimi graphs .we find a set of closed equations in the marginals : \nonumber \\ & & \times \prod_{j\in \alpha } \left[\prod_{\beta \in \partial^{(\ell-1)}_j\setminus \alpha } \exp\left(i\sum_{j\in \alpha}\sum_{\beta\in \partial^{(\ell)}_j\setminus \alpha}\left ( a_{j j^{(\beta)}_1}\:\ : x_{j^{(\beta)}_1}x_{j } + a_{j^{(\beta)}_{\ell-1}j}\:\ : x_{j^{(\beta)}_{\ell-1}}x_{j } \right)\right)\right ] , \nonumber \end{aligned}\ ] ] with the -tuple .the marginals are given as a function of the marginals : we now use the gaussian ansatz for the marginals : with the submatrix of the resolvent of the cavity matrix : we also use a gaussian ansatz of the type ( [ eq : ansatz ] ) for the marginal .substitution of these anstze in ( [ eq : marginalst ] ) and ( [ eq : marginalstt ] ) gives the resolvent equations ( [ eq : reshus1 ] ) and ( [ eq : reshus2 ] ) .a b. mohar , _ some applications of laplace eigenvalues of graphs _ , graph symmetry : algebraic methods and applications , vol .497 kluwer ( 1997 ) , 227 - 275 .f. r. k. chung , _ spectral graph theory _, american mathematical society ( 1997 ) a. e. brouwer , w. h. haemers , spectra of graphs , springer ( 2010 ) .b. mohar , w. woess , _ a survey on spectra of infinite graphs _london math .* 21 * ( 1989 ) , 209 - 234 .a. barrat , m. barthlemy , a. vespignani , _ dynamical processes on networks _ , cambridge university press ( 2008 ) m. e. j. newman , _ networks , an introduction _ , oxford university press ( 2010 ) .m. baharona , l. m. pecora , _ synchronization in small - world systems _ , phys . rev* 89 * ( 2002 ) , 054101 .s. f. edwards , d. r. wilkinson , _ the surface statistics of granular aggregate _london , ser .a * 381 * , 17 ( 1982 ) ; b. kozma , m. b. hastings , g. korniss , _roughness scaling for edwards - anderson relaxation in small - world networks _ , phys .lett . * 92 * , 108701 ( 2004 ) .d. j. klein , r. randi , _ resistance distance _ , j. math .* 12 * ( 1993 ) , 81 - 95 .s. hoory , n. linial , a. wigderson , _ expander graphs and their applications _ , bull .* 43 * ( 2006 ) , 439 - 561 s. f. edwards , r. c. jones , _ the eigenvalue spectrum of a large symmetric random matrix _ ,a * 9 * ( 1976 ) , 1595 - 1603 .r. khn , _ spectra of sparse random matrices _ ,a : math . theor .* 41 * ( 2008 ) , 295002 .t. rogers , k. takeda , i. p. castillo and r. khn , _ cavity approach to the spectral density of sparse symmetric random matrices _ , physe. * 78 * ( 2008 ) , 031116 .t. rogers , i. p. castillo , _ cavity approach to the spectral density of non - hermitian sparse matrices _ ,e * 79 * ( 2009 ) , 012101 .y. v. fyodorov , a. d. mirlin , _ on the density of states of sparse random matrices _ ,a * 24 * ( 1991 ) , 2219 - 2223 .t. rogers , c. p. vicente , k. takeda , i. p. castillo , _ spectral density of random graphs with topological constraints _ , j. phys .a : math . theor .* 43 * ( 2010 ) , 195002 .r. khn , j. m. van mourik , _ spectra of modular and small - world matrices _ ,j. phys . a : math ., * 44 * ( 2011 ) , 165205 y. kabashima , h. takahashi , o. 
watanabe , _cavity approach to the first eigenvalue problem in a family of symmetric random sparse matrices _, j.phys . :* 233 * ( 2010 ) , 012001 ; y. kabashima , h. takahashi , _ first eigenvalue / eigenvector in sparse random symmetric matrices : influences of degree fluctuation _ ,j. phys . a : math* 45 * ( 2012 ) , 325001 c. bordenave , m. lelarge , _ resolvent of large random graphs _ , random structures and algorithms * 37 * ( 2010 ) , 332 .c. bordernave , d. chafai , _ around the circular law , probability surveys _ , * 9 * ( 2012 ) , 1 - 89 .t. tao , v. vu ., _ random matrices : the circular law _ , commun . contemp .math.,*10 * , 261 - 307 ( 2008 ) .t. tao , v. vu , m. krishnapur , _ universality of esds and the circular law _ , ann .probab . * 38 * , 2023- 2065 ( 2010 ) .f. gtze , a. tikhomirov , _ the circular law for random matrices _ ,probab . , * 38 * , 1444- 1491 ( 2010 ) .p. m. wood , _ universality and the circular law for sparse random matrices _ ,* 22 * , 1266 ( 2012 ) . c. bordenave , p. caputo , d. chafai , _ spectrum of markov generators on sparse random graphs _ , arxiv:1202.0644 ( 2012 ) .p. m. gleiss , p. f. stadler , a. wagner , d. a. fell , _ small cycles in small worlds _ , arxiv : cond - mat/0009124 ( 2000 ) . g. bianconi , g. caldarelli , a. capocci , _ loops structure of the internet at the autonomous system level _ ,e * 71 * ( 2005 ) , 066116 .g. bianconi , n. gulbahce , a. e. motter , _ local structure of directed networks _ ,* 100 * ( 2008 ) , 118701 .k. husimi , _ note on mayers theory of cluster integrals _ , j. chem .* 18 * ( 1950 ) , 682 - 684 .f. harary , g. uhlenbeck , _ on the number of husimi trees i _ ,* 39 * ( 1953 ) , 315 - 322 .i. neri , f. l. metz , _ spectra of sparse non - hermitian random matrices : an analytical solution _rev . lett .* 109 * ( 2012 ) , 030602 f. l. metz , i. neri , d. boll , _ spectra of sparse regular graphs with loops _, phys . rev .e * 84 * ( 2011 ) , 055101(r ) .b. d. mckay , _ the expected eigenvalue distribution of a large regular graph _, linear algebra appl . * 40 * ( 1981 ) , 203 - 216 .o. khorunzhy , m. shcherbina , v. vengerovsky , _ eigenvalue distribution of large weighted random graphs _ , j. math* 45 * ( 2004 ) , 1648 - 1672 .b. bollobs , _ random graphs _ , cambridge university press ( 2001 ) . f. l. metz , i. neri , d. boll , _ localization transition in symmetric random matrices _ ,e * 82 * ( 2010 ) , 03115 .m. mzard , a. montanari , _ information , physics and computation _ , oxford university press ( 2009 ) .h. a. bethe , _ statistical theory of superlattices _ , proc .london ser a * 150 * ( 1935 ) , 552 - 575 .r. abou - chacra , d. j. thouless , p. w. anderson , _ a selfconsistent theory of localization _ , j. phys .c : solid state phys .* 6 * ( 1973 ) , 1734 p. cizeau , j. p. bouchaud,_theory of lvy matrices _ , phys .e * 50 * ( 1994 ) , 1810 - 1822 .p. judea , _ probabilistic reasoning in intelligent systems : networks of plausible inference _ , morgan kaufmann ( 1988 ) , san francisco , ca .r. kikuchi , _ a theory of cooperative phenomena _ , phys .* 81 * ( 1951 ) , 988 - 1003 w. t. yedidia , j. s. freeman , y. weiss , _ bethe free energy , kikuchi approximations , and belief propagation algorithms _ , technical report tr-2001 - 16 , mitsubishi electric reseach , 2001 . y. huang , w. f. mccoll , _ analytic inversion of general tridiagonal matrices _ ,a : math . gen .* 30 * ( 1997 ) , 7917 .r. b. king , _ beyond the quartic equation _, birkhuser boston , 1996 . h. 
kesten , _ symmetric random walks on groups _ , trans .soc . * 92 * , ( 1959 ) 336354 .m. eckstein , m. kollar , k. byczuk , d. vollhardt , _ hopping on the bethe lattice : exact results for density of states and dynamical mean - field theory _ ,b * 71 * , 235119 ( 2005 ) ; m. galiceanu , a. blumen , _ spectra of husimi cacti : exact results and applications _ , j. chem .phys.*127 * , 134904 ( 2007 ) .j. feinberg , a. zee , _ non - hermitian random matrix theory : method of hermitian reduction _b * 504 * , 579 ( 1997 ) .j. t. chalker , z. j. wang , _ diffusion in a random velocity field : spectral properties of a non - hermitian fokker - planck operator _* 79 * , 1797 ( 1997 ) .j. t. chalker , b. mehlig , _ eigenvector statistics in non - hermitian random matrix ensembles _ , phys .81 * , 3367 ( 1998 ) .r. a. janik , w. nrenberg , m. a. nowak , g.papp , i. zahed , _ correlation of eigenvectors for non - hermitian random matrix models _ , phys .e * 60 * , 2699 ( 1999 ) .t.rogers , _ new results on the spectral density of random matrices _ , thesis ( 2010 ). j. d. lawrence , _ a catalog of special plane curves _, new york : dover , pp .165 - 168 , ( 1972 ) .i. dumitriu , s. pal , _ sparse regular random graphs : spectral density and eigenvectors _ , arxiv:0910.5306 .l. tran , v. vu , k. wang , _ sparse random graphs : eigenvalues and eigenvectors _ , arxiv:1011.6646 k. yczkowski , m. kus , w. somczynski and h .- j .sommers , _ random unistochastic matrices _, j. phys a : math .gen . * 36 * , 3425 ( 2003 ) .m. mzard , g. parisi , m.virasoro , _ spin glass theory and beyond _ , world scientific ( 1986 ) .
We present a general method for obtaining the spectra of large graphs with short cycles, using ideas from the statistical mechanics of disordered systems. This approach leads to an algorithm that determines the spectra of graphs to high accuracy. In particular, for (un)directed regular graphs with cycles of arbitrary length we derive exact and simple equations for the resolvent of the associated adjacency matrix. Solving these equations, we obtain analytical formulas for the spectra and the boundaries of their support.
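(placeholder removed)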
large hadron collider ( lhc ) collider at cern is expected to start data taking in 2009 with the luminosity ramp up to the design goal of 10//s over the next few years .after 2016 a proposal to extend the physics potential of the lhc with a major luminosity upgrade to the super lhc ( slhc ) has been endorsed by the cern council strategy group . the goal is a factor ten increase in luminosity .this imposes severe requirements on the silicon tracking devices , which need to be completely redesigned in order to cope with the order of magnitude increase in occupancy and radiation damage .the increase in occupancy requires a corresponding increase in granularity , while the radiation hardness can be improved by a different sensor design combined with cooling to lower temperatures in order to reduce the leakage current .radiation damage to silicon sensors creates defects with energies between the valence and conduction band .this increases both the leakage current at a given temperature and the depletion voltage .if one uses n - strips on p - type wafers the depletion will start from the n - strips , so there will be no undepleted layer between the sensor and the strips .furthermore , it was shown that even after 10 n/ , which is the fluence expected at the slhc at the innermost layers during its lifetime , the collected charge is still around 7000 electrons , which is roughly one quarter of the non - irradiated sensor . if one reduces the noise both , by cooling the sensor and by reducing the sensor capacitance ( by reducing the strip length ) it seems feasible to get a similar signal / noise ratio as for non - irradiated samples .shorter strips ( `` strixels '' ) are anyway required to reduce the channel occupancy for the high luminosity of the slhc .another requirement is the reduction of the material budget in order to reduce the interactions in the tracker , which lead to a distortion of the momentum measurements and an increase of the multiplicity by photon conversions .one expects up to 400 interactions per bunch crossing at the slhc with typically 1000 charged tracks per bunch crossing .if most photons convert into electron - positron pairs , as they will do with the present material budget , the charged multiplicity will be doubled to tripled .have been indicated as well . , width=336 ] originates from the service connections for cooling and power to the inner barrel and disk detectors , which pass in front of the endcap detectors .this can be avoided by having a long barrel ( see fig .[ f3 ] ) with all service connections outside the tracking volume .this leads to a material budget indicated by the light shaded ( yellow ) area.,width=336,height=336 ] here we present a tracker design based on co two - phase cooling .co allows for long cooling pipes with still negligible temperature gradients , because of the small pressure drop ( owing to the large latent heat and correspondingly small mass flow and the low viscosity ) .co cooling is efficient between 20 and -50 , so the leakage currents can be made negligible , while the long cooling pipes allow the use of 3 m long ladder type detectors , thus paving the way for having all service connections outside the tracking volume . 
by combining the functions of cooling pipe , mechanical support andcurrent leads the material budget can be additionally minimized .first tests with a simple co blow system prove the feasibility of the approach .a traditional layout of a silicon tracker has a barrel with endcaps and a pixelated detector near the beam pipe , as shown in fig .[ f1 ] using the 200 m silicon tracker of cms as an example .the main reason for the endcaps is that in the forward direction the particles traverse horizontal detectors under a small angle , thus increasing the material thickness by a factor . therefore vertical detectors are much better with respect to the material budget for small angles , since in this case the traversed material is . however , in this case the cooling pipes , current leads and endflanges of the horizontal detectors are just in front of the endcaps , which increases the material budget in the forward region by as much as 0.8 , as shown in fig .[ f2 ] for the present cms silicon tracker . to avoidthis would require long barrel detectors , which seemed difficult to cool .however , with co two - phase cooling this is possible and therefore it seems better to use only horizontal detectors , as shown in fig .[ f3 ] . in this caseall service connections and interconnect boards for the read out can be at the end of the barrel , i.e. outside the tracking volume . also optocouplers on the interconnect boards would be further away from the center , which is important , since these are known to suffer severe radiation damage after a fluence of .furthermore , they consume a significant amount of power , which need not to be entered into the tracking volume .all signals from the hybrids have then to be transferred to the interconnect board , e.g. by lvds signals via aluminized kapton cables .an additional reduction of the material budget can be obtained by using the cooling pipes as power lines , as sketched in fig .details will be discussed in the next section . to get enough tracking points for tracks in the forward directionrequires then to have disks at small radii . a disk layer for radii between 5 and 35 cmis shown in fig .the rings can be constructed in the same way as the ladders of the barrel , i.e. with cooling pipes at each side which act at the same time as power lines .the only difference is that the cooling pipes are now bent in half circles with the connections on the inside , i.e. outside the tracking volume .half circular disks are are needed to be able to install or replace the detectors with the beampipe installed .the four inner horizontal layers and four vertical disks are pixelated , most likely similar to the present pixel detectors .note that the pixel detectors as well as the strixel disks can be exchanged without moving the major part of the detector , the barrel detectors .this is important , since for these detectors the radiation damage is much higher than for the barrel , so they may need replacement .having all service connections outside the tracking volume reduces the material budget by 40% or more , as shown by the light shaded ( yellow ) area in fig .note that in the center the total material budget is similar , around 0.4 x , as expected , since in this area there are no service connections and the number of layers is similar .two - phase cooling system ., width=336 ] the luminosity at the slhc will be an order of magnitude higher than at the lhc . 
requiring the same occupancy implies that the number of channels increases also by a factor between 5 to 10 , e.g. by having pixel detectors or shorter strips ( `` strixels '' ) .the power will not increase by an order of magnitude , since the smaller feature size of today s and future electronics is expected to reduce the power per channel by a factor 5 or more . so the total power for the cms upgrade is expected to be of the order of 35 kw , similar to the present cms tracker .if one opts for a long barrel of identical ladders , one needs only one type of sensors with eventually different pitches and strixel lengths for the inner and outer ladders in order to reduce the number of channels .this simplifies the detector construction enormously in comparison with the present detector .long ladders require efficient cooling .we first discuss the module design integrated on a long ladder with cooling pipes as power lines integrated in the same mechanical structure and then the cooling system .as mentioned in the introduction , the material budget can be reduced by long ladders having all service connections outside the tracking volume as well as combining the mechanical support , the cooling pipe , which acts simultaneously as power line , into a single mechanical structure .a possible design is sketched in fig .[ f4 ] . the pair of cooling pipes on each side of the detector ( inlet and outlet ) are most easily realized by extruded al tubes .pure al , which is easy to extrude , has the additional advantage that the electrical resistivity drops by about 30% , if one cools the tube from + 40 to -40 . for a 2.5 m long pair of cooling pipes withan i d of 1.5 mm and outer od of 3 mm the total resistance is about 5 m , so they form excellent power lines . in order to avoid a contact resistance to the hybrid , the al tubes have to be chromatized , which is a standard process .the sensors are glued to a piece of al , shown in red , which serves simultaneously as power connector , the mechanical support and the cooling point .the hybrids and the sensors are both screwed on the cooling pipes , but there is no further mechanical or thermal contact between them , so the hybrid and sensor are cooled independently and thus can operate at different temperatures .thermal stresses are reduced , since the hybrid and sensor move together with the cooling pipes during cooldown . for the tests the hybrids consisted of a simple kapton printed circuit with a resistive line between the power pads and a pt100 thermometer , which can be read out via an usb interface .the power pads of copper on the kapton are pressed against the cooling block by the mounting screw , which forms simultaneously the power connector . a picture is shown in fig .[ f6 ] . in total 56 hybridsare screwed between the cooling pipes .of course , in such a scheme all modules are powered in parallel , which raises several questions .first , what is the voltage drop along the power line .the current needed for a single ladder is estimated as follows .each sensor of 9x10 cm is assumed to have strixels of 2.2 cm with bond pads at the outer edge , as indicated in fig .[ f7 ] . assuming a pitch of 130 leads to a total of 12x256 channels , i.e. 6 front end chips with 256 channels bonded to each side of the detector .if we assume new front ends need 0.5 mw per channel , if the 0.13 or even smaller feature sizes will be used and add 25% for the control power , one needs 1.6 a at 1.2 v per sensor . 
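As a quick back-of-the-envelope reproduction of the per-sensor estimate above (the channel count, per-channel power, control-power overhead and supply voltage are the figures quoted in the text):

channels = 12 * 256                     # strixel channels per 9x10 cm^2 sensor
p_channel = 0.5e-3                      # W per channel assumed for 0.13 um (or smaller) front ends
p_sensor = channels * p_channel * 1.25  # +25% for control power  -> ~1.9 W
i_sensor = p_sensor / 1.2               # 1.2 V front-end supply   -> ~1.6 A
print(channels, round(p_sensor, 2), round(i_sensor, 2))   # 3072, 1.92, 1.6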
for 56 hybrids this corresponds to about 50 a and 60 w per ladder . to get all connections at large z requires that the current is returned , e.g. via the cooling pipes of the neighboring ladder , as sketched in fig .[ f8 ] . in this casethe current is used twice , thus reducing the total amount of current in the power lines between the power supplies and the detector .the total current needed for such a design is of the order of 12 ka ( 33 kw ) for a total of 64 million channels .this is to be compared with 15 ka and 33 kw in the present cms detector .if the input current flows in both power lines in the same direction by returning the current in the neighboring ladder , one has the same voltage drop on each power line , so one naively expects no influence of the voltage drop on the hybrids , since each hybrid would see the same voltage difference between the power lines .however , at the beginning of the ladder the positive power line carries 50 a and the negative only the current from the first hybrid , i.e. around 1 a , so the voltage drops are not equal in both power lines .this can be improved by increasing the thickness of the positive power line at the beginning , as shown at the bottom of fig .[ f6 ] by the additional piece of al between the power connectors ( = cooling points ) . in the middle of the ladderboth power lines carry a similar current , so here nothing needs to be done , but at the end the returning power line needs to have additional material , since here the full current flows . by this voltage compensation in the power lines all hybrids can have the same voltage with a precision better than 20 mv .the hybrids have not exactly a common ground because of the voltage drop of about 0.1 v along the power lines , so the control and signal lines better be differential lvds type lines with a standard ac coupling to the interconnect boards at the end of the ladder .alternatively , a differential lvds receiver with a somewhat higher source voltage can easily cope with common mode offsets of 100 mv .the detectors have been mounted on a 5 mm thick support plate made from rohacell foam , a strong lightweight material , which has been used before as support in silicon detectors . with the attached cooling structure this forms a stable double layer sandwich structure which can easily take up the magnetic forces of the power lines , if the ladder is inside a solenoid .the radiation length of rohacell is above 5 m , so it does not contribute significantly to the material budget .however , it is hygroscopic .water absorption can be prevented by a parylene coating .alternatively , the support structure can be made out of carbon fiber .+ .,width=336 ]the leakage current of heavily irradiated sensor can be reduced sufficiently by using sensor temperatures below -25 .the high cooling power of co makes it a natural choice as refrigerant , especially since co becomes increasingly popular in replacing the climate unfriendly freons .this leads to many off - the - shelf components , like co liquid pumps .the temperature range for co 2-phase cooling can be read off from the pressure - temperature diagram in fig .[ f9 ] : -57 to + 31 for pressures between 5.2 and 73 bar , respectively .this implies that during normal operations the pressure is below 20 bar , but after warm up to room temperature the pressure increases to around 50 bar at 20 . 
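Returning to the power distribution, the quoted figures can be cross-checked with a few lines. The sketch below assumes that the inlet and outlet tube on one side of a ladder share the current in parallel (this reproduces the quoted ~5 mOhm; a series interpretation would not), that one hybrid reads out one side of a sensor (6 chips of 256 channels), and that the total power is drawn at 1.2 V with the current re-used once through the neighboring ladder. All input numbers are taken from the text; the interpretations just listed are assumptions made here.

import numpy as np

rho_al = 2.7e-8 * 0.7                        # ohm*m: pure Al with the ~30% resistivity drop at -40 C
r_out, r_in, length = 1.5e-3, 0.75e-3, 2.5   # 3 mm OD, 1.5 mm ID, 2.5 m long tube
area = np.pi * (r_out**2 - r_in**2)
r_tube = rho_al * length / area              # ~9 mOhm for a single tube
r_pair = r_tube / 2                          # inlet + outlet in parallel -> ~4.5 mOhm ("about 5 mOhm")

i_hybrid = 6 * 256 * 0.5e-3 * 1.25 / 1.2     # one sensor side -> ~0.8 A
i_ladder = 56 * i_hybrid                     # ~45 A ("about 50 A")
v_drop = 0.5 * i_ladder * r_pair             # average current ~ i/2 along the line -> ~0.1 V

p_total = 64e6 * 0.5e-3                      # ~32 kW for 64 million channels (quoted as 33 kW)
i_total = p_total / 1.2 / 2                  # current used twice -> ~13 kA ("of the order of 12 kA")

The ~0.1 V drop obtained this way is the common-mode offset that motivates the differential LVDS signalling discussed above.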
showing that a two - phase liquid - gas system exists between -57 and 31 , so this is the range of cooling liquid temperatures with vapor pressures between 5 and 73 bar ., width=336 ] what makes co so interesting for low mass cooling systems is its large enthalpy , which allows to cool close to 300 j for each g of co evaporated .this is to be compared with non - evaporative systems , like cms , where only 5 j / g are obtained , if a temperature increase of 5 k is accepted. therefore the flow rate of co liquid can be one to two orders lower compared to non - evaporative systems , thus reducing the size of tubing , pumps etc . by a similar amount .in addition , the viscosity of co is low , so the pressure drop and the corresponding temperature drop in the cooling tubes is small . for evaporative systems one can either have a cooling plant with a gas compressor to liquify the evaporated gas orpump the liquid around with a liquid pump instead of using the compressor to build up pressure .both schemes have been used at the lhc , as shown in fig .[ f10 ] for the atlas and lhcb cooling systems .the atlas system is running with a low vapor pressure c3f8 coolant , while lhcb has opted for a high pressure co2 cooling system , in which the liquid is pumped around .great features of the co cooling system are : i ) no active elements like heaters or electric valves inside detector ii ) standard liquid co pumps iii ) standard primary chiller iv ) the temperature of the whole system is controlled by only one parameter , namely the vapor pressure in the accumulator , which can be increased by the heater and decreased by the chiller . blow system .the co bottle is precooled to -40 in a household freezer .the co liquid is sent through 2 mm i d al cooling pipes of a 2.5 m ladder of hybrids , consisting of kapton with resistors and pt100 temperature sensors ( see fig .the cooling pipes act at the same time as power lines and mechanical support .bottom : photographs of the system .the freezer is shown on the left .the temperatures of different sensors as function of time are shown at the right .the blue curve shows the temperature of a sensor just be cooled by the arriving liquid.,title="fig:",width=336 ] blow system .the co bottle is precooled to -40 in a household freezer .the co liquid is sent through 2 mm i d al cooling pipes of a 2.5 m ladder of hybrids , consisting of kapton with resistors and pt100 temperature sensors ( see fig .the cooling pipes act at the same time as power lines and mechanical support . bottom : photographs of the system .the freezer is shown on the left .the temperatures of different sensors as function of time are shown at the right .the blue curve shows the temperature of a sensor just be cooled by the arriving liquid.,title="fig:",width=336 ] another advantage of having a liquid pump is that the system works very well without heat load , e.g. when the detector is switched off .liquid co membrane pumps are widely used in the food industry and elsewhere , so they are cheap and easy to obtain including sensors in case the membrane would be leaking .large liquid pumps with capacities of around 100 m/h are available , which are e.g. used on off - shore gas platforms to pressurize the natural gas with co . 
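The quoted enthalpy figures translate directly into the required coolant flow. A rough comparison follows; the per-ladder power is taken from the estimate above, and the liquid CO2 density used for the volume conversion is an assumed round value.

p_ladder = 60.0                  # W per ladder, from the estimate above
h_evap = 300.0                   # J/g usable with evaporating CO2
h_single = 5.0                   # J/g for a single-phase liquid allowing a 5 K temperature rise

flow_ladder = p_ladder / h_evap        # ~0.2 g/s of CO2 per ladder
flow_single = p_ladder / h_single      # ~12 g/s for a comparable single-phase system (~60x more)

p_tracker = 50e3                        # W, full SLHC tracker
flow_tracker = p_tracker / h_evap       # ~170 g/s ~ 0.6 t/h
vol_flow = flow_tracker / 1.1 * 3.6e-3  # ~0.5 m^3/h at an assumed liquid density of ~1.1 g/cm^3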
for the cooling power of an slhc tracker of about 50kw pumps with a capacity of about 1 m/h are enough .of course , pumping the liquid requires thermally isolated outlets , but this is easily accomplished , especially since the temperature gradients are not large .since co is non - toxic and non - flammable one can easily design small `` blow''-cooling - systems , in which the co is blown to the air .the main precautions , which have to be taken , is that co gas has a higher density than air , so it will collect at the bottom .therefore it has to be blown outside the window , if one wants to prevent the risk of not having enough oxygen in the lab .a simple blow - system is shown in fig .the co bottle is precooled in a commercial house - hold freezer to -40 , which reduces the pressure to around 10 bar . in this casenylon tubing with standard swagelok connectors can be used to transport the liquid to and from the detector .the nylon tubing is already a good isolator , so only little additional isolation is required . after the detector a simple ball flow meter , consisting of a little ball inside a flow tube with a needle valve at the entrance for regulating the flow rate , is installed .the environmental heat into the nylon tube is enough to evaporate the rest of liquid from the detector before entering the flow meter .the whole system is regulated by just two knobs : i ) the pressure reducer regulates the temperature of the liquid - gas mixture coming out of the bottle , which has a riser tube sticking into the liquid at the bottom ; ii ) the cooling power is regulated by the needle valve in the ball flow meter . the long ladder , shown in fig . [ f6 ] , with two pairs of 2x2.5 m long cooling tubes with an inner diameter ( i d ) of 2 mm and outer diameter ( od ) of 3 mm has been tested .the temperature was set by the pressure reducer on the bottle , which kept the temperature on the whole ladder very well constant , as shown in fig .[ f12 ] . increasingthe power in the hybrid resulted in evaporation of the liquid .if the liquid had evaporated before the end of the ladder , the temperature at the end of the ladder increased , i.e. dry - out occurred .increasing the flow rate restored the cooling everywhere .[ f13 ] shows the power against the flow rate required to be just above the dry - out condition .the minimum power estimated for a 2.5 m long ladder is 50 w , but clearly much higher powers can be sustained with 2 mm i d cooling tubes .the maximum power was not limited by the detector system , but by the maximum throughput of the flow meters .bottle.,width=336 ] at a temperature of -31 as function of the flow rate of co needed to prevent dry - out .the total power is proportional to the flow rate , so the negative offset for represents environmental heat load of approximately 15 w. 
this is rather large , since the ladder was only inside a box with a transparent cover to prevent condensation from the air , but this yields minimal isolation.,width=336 ]the successful operation of a co cooling system in lhcb proves its feasibility .this paves the way for a rather different design of a large radiation hard silicon tracker , since co cooling has two advantages : it allows to cool easily to temperatures below -30 , thus eliminating the leakage currents in a harse radiation environment and secondly , the high cooling power allows for tiny and long cooling tubes , which in turn allows to have long ladder type detectors with all service connections and interconnect boards outside the tracking volume. the number of layers for tracks at small angles with respect to the beampipe can be increased by a larger number of pixel layers and strixel disks at small radii . for pitches between 100 and 200 and strixels around 2 cmthe strixels can have their bonding pads all at the outside of a 9x10 cm sensor , so a similar module construction can be used as in present lhc detectors , except for having hybrids on each side of the detector .although this increases the number of channels by a factor 5 to 10 , the total power is expected to be similar because of the expected power reduction in future front end electronics with a smaller feature size .serial powering of two ladders is proposed , which would keep the total current similar to the ones in present detectors , thus allowing to reuse the existing services and power supplies without the need for dc - dc converters .it is shown that the material budget can be reduced by 40% or more in the forward region of the cms detector , if one adopts the strawman design of long barrel detectors with inner disks at small radii ( instead of the traditional endcaps ) , especially if one combines the functions of cooling pipes , power lines and mechanical support in a single structure .a simple co blow system has been designed and first tests regarding powering via cooling pipes are encouraging .a final pair of ladders with real sensors and all interconnect boards at the end of a long ladder needs to be tested in order to check the stability of the proposed powering via cooling pipes and the transfer of signals to and from the interconnect board at the end of the long ladder .special thanks go to bart verlaat for helpful discussions on co cooling .the project was supported by the bundesministerium fr bildung und forschung under grant 05 hs6vk1. 99 slhc strategy , see http://council-strategygroup.web.cern.ch/council-strategygroup/ g. casse , a. affolder and p. p. allport , ieee trans .* 54 * ( 2007 ) 1695 .r. adolphi _ et al . _[ cms collaboration ] , jinst * 3 * ( 2008 ) s08004 m. raymond , http://indico.cern.ch/conferencedisplay.py?confid=22827 d. attree _ et al ._ , jinst * 3 * ( 2008 ) p07003 .b. verlaat , proceedings 22nd iir international congress of refrigeration , beijing , china , 21 - 26 aug .icr07-b2 see e.g. lewa herbert ott gmbh , leonberg , germany , http://www.lewa.com/main/de/2_2_proc0/ e. bosze , j. simon - gillo , j. boissevain , j. chang and r. seto , nucl . instrum .a * 400 * ( 1997 ) 224 , http://p25ext.lanl.gov / phenix / mvd/.
Silicon trackers at the SLHC will suffer high radiation damage from particles produced during the collisions, which leads to high leakage currents. Reducing these currents in the sensors requires efficient cooling to -30 °C. The large heat of evaporation of CO2 and its low viscosity allow for a two-phase cooling system with thin and long cooling pipes, because the small flow of liquid needed leads to negligible temperature drops. In order to reduce the material budget, a system is proposed in which a large-scale tracker requiring ca. 50 kW of power is powered via 1-2 mm diameter aluminum cooling pipes with a length of several m. These long cooling pipes allow all service connections to be placed outside the tracking volume, thus reducing the material budget significantly. The whole system is designed to have negligible thermal stresses. A CO2 blow system has been designed, and first tests show the feasibility of a barrel detector with long ladders and disks at small radii, leading to an optimized design with respect to material budget and simplicity in construction.
the exponential function is often appeared in every scientific field . among many properties of the exponential function ,the linear differential function is the most important characterization of the exponential function .a slightly nonlinear generalization of this linear differential equation is given by(see the equation ( 17 ) at page 5 of and the equations ( 22)-(23 ) at page 8 of . )this nonlinear differential equation is equivalent to we define the so - called -logarithm _ _ a generalization of . applying the property : to ( [ diffequ1 ] ), we obtain is _ any _ constant .then we define the so - called -exponential _ _ as the inverse function of as follows: ^{\frac{1}{1-q } } & \text{if } 1+\left ( 1-q\right ) x>0 , \\ 0 & \text{otherwise.}% \end{array}% \right .\label{q - exponential}\]]note that the -logarithm and -exponential recover the usual logarithm and exponential when , respectively ( see the pages 84 - 87 of for the detail properties of these generalized functions and ) .thus , the general solution to the nonlinear differential equation ( [ nonlinear differential equation ] ) becomes is _ any _ constant satisfying dividing the both sides by of the above solution , we obtain the following scaling: obtain means that the solution of the nonlinear differential equation ( nonlinear differential equation ) obtained above is _ scale - invariant _ under the above scaling ( [ scaling ] ) .moreover , we can choose _ any _ constant satisfying because is an integration constant of ( [ diffequ1 ] ) .note that the above scaling ( [ scaling ] ) with respect to both variables and can be observed only when and .in fact , when i.e. , ( [ scaling ] ) reduces to the scaling with respect to only i.e. , , and when both scalings in ( [ scaling ] ) disappears .the above scaling property of the nonlinear differential equation ( nonlinear differential equation ) is very significant in the fundamental formulations for every generalization based on ( [ nonlinear differential equation ] ) .we summarize the important points in the above fundamental result . 1. in the scaling ( [ scaling ] ) is _ any _ constant satisfying because is an integration constant of ( diffequ1 ) .this means that the scaling ( [ scaling ] ) is _ arbitrary _ for _ any _ and if and satisfies .2 . in general studies of differential equation , is determined by the _initial condition _ of the nonlinear differential equation ( nonlinear differential equation ) .this means , when an observable in a dynamics grows according to the nonlinear differential equation ( nonlinear differential equation ) , _ the initial condition determines the scaling of the dynamics ._ this is applicable to the analysis of the chaotic dynamics .3 . in general , for a mapping , is called a restriction of a mapping to , which is denoted by .let be a -exponential function ( [ q - exponential ] ) from .for the restricted domain defined by restriction of a mapping to is denoted by becomes a power function: in the above formulations the only case is discussed . as shown in the section iv , the -generalizations along the line of ( [ nonlinear differential equation ] )has a symmetry , i.e. , .therefore , the above can be replaced by a restriction of a mapping to in accordance with the symmetry , which implies that the case can be discussed . 
in this way ,the _ restriction _ of the -exponential function to the domain or coincide with a power function , which has been often appeared and discussed in science .in general , the restricted domain or is called _ scaling domain _ and its corresponding range is called _ scaling range_. 4 .a power function is known to be characterized by the following functional equation , i.e. , there exists a function such that for any .the above functional equation uniquely determines a power function choosing .see some references such as for the proof . on the other hand ,a -exponential function is characterized by the nonlinear differential equation ( [ nonlinear differential equation ] ) as similarly as a exponential function . moreover ,the solutions of ( [ nonlinear differential equation ] ) are scale - invariant under the scaling ( [ scaling ] ) and reduce to power functions when the domain is restricted to or as shown above .in fact , by restricting the domain of ( [ scaling-1 ] ) to the general solution ( [ scaling-1 ] ) of the nonlinear differential equation ( [ nonlinear differential equation ] ) reduces to the following power function according to ( [ f_q]). is, is equivalent to ( [ power - func ] ) if , many discussions on exponential versus power - law , i.e. , versus should be replaced by exponential versus -exponential , i.e. , versus , which is more natural from mathematical point of view . as shown in these discussions , the fundamental nonlinear differential equation ( [ nonlinear differential equation ] ) provides us with not only the characterization of the -exponential function but also the scaling property in its solution .as similarly as the relation between the exponential function and shannon entropy , we expect the corresponding information measure to the -exponential function .there exist some candidates such as rnyi entropy , tsallis entropy and so on .but the algebra derived from the -exponential function uniquely determines tsallis entropy as the corresponding information measure . in the following sections of this paper , we present the two mathematical results to uniquely determine tsallis entropy by means of the already established formulations such as the -exponential law , the -multinomial coefficient and -stirling s formula .the exponential law plays an important role in mathematics , so this law is also expected to be generalized based on the -exponential function . for this purpose ,the new multiplication operation is introduced in and for satisfying the following identities: concrete form of the -logarithm or -exponential has been already given in the previous section , so that the above requirements as -exponential law leads us to the definition of between two positive numbers .for two positive numbers and , the -_product _ is defined by ^{\frac{1}{1-q } } , & \text{if } % x>0,\,y>0,\,x^{1-q}+y^{1-q}-1>0 , \\ 0 , & \text{otherwise.}% \end{array}% \right .\label{def of q - product}\ ] ] the -_product _ recovers the usual product such that .the fundamental properties of the -product are almost the same as the usual product , but other properties of the -_product _ are available in nmw03 and . in order to see one of the validities of the -product ,we recall the well known expression of the exponential function given by the power on the right side of ( [ def of expx ] ) by the times of the -product is obtained . 
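The q-product and its defining identities can be checked numerically with a few lines. The sketch below, assuming numpy, implements ln_q, exp_q and the q-product, and verifies that ln_q turns the q-product into an ordinary sum, that exp_q(a) ⊗_q exp_q(b) = exp_q(a+b), that the ordinary product is recovered as q -> 1, and that repeated q-products of (1 + x/n) approach exp_q(x), which is the representation just quoted. The value q = 1.3 and the test arguments are arbitrary choices.

import numpy as np

def ln_q(x, q):
    return np.log(x) if q == 1 else (x**(1 - q) - 1) / (1 - q)

def exp_q(x, q):
    if q == 1:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base**(1.0 / (1.0 - q)) if base > 0 else 0.0

def q_product(x, y, q):
    """x ⊗_q y, defined so that ln_q(x ⊗_q y) = ln_q(x) + ln_q(y)."""
    if q == 1:
        return x * y
    base = x**(1 - q) + y**(1 - q) - 1.0
    return base**(1.0 / (1.0 - q)) if base > 0 else 0.0

q = 1.3
assert np.isclose(ln_q(exp_q(0.3, q), q), 0.3)                                   # ln_q inverts exp_q
assert np.isclose(ln_q(q_product(2.0, 3.0, q), q), ln_q(2.0, q) + ln_q(3.0, q))  # q-exponential law
assert np.isclose(q_product(exp_q(0.2, q), exp_q(0.5, q), q), exp_q(0.7, q))
assert np.isclose(q_product(2.0, 3.0, 1.0 + 1e-9), 6.0)                          # q -> 1: ordinary product

# representation of exp_q(x) as an n-fold q-product of (1 + x/n)
x0, n = 0.4, 2000
p = 1.0
for _ in range(n):
    p = q_product(p, 1.0 + x0 / n, q)
assert np.isclose(p, exp_q(x0, q), rtol=1e-3)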
in other words , coincides with proof of ( [ repre of q - exp ] ) is given in the appendix of .this coincidence ( [ repre of q - exp ] ) indicates a validity of the -product .in fact , the present results in the following sections reinforce it .we briefly review the -multinomial coefficient and the -stirling s formula by means of the -product .as similarly as for the -product , -ratio _ _ is introduced as follows : for two positive numbers and , the inverse operation to the -product is defined by ^{\frac{1}{1-q } } , & \text{if } % x>0,\,y>0,\,x^{1-q}-y^{1-q}+1>0 , \\ 0 , & \text{otherwise}% \end{array}% \right.\]]which is called -ratio _ _ in . is also derived from the following satisfactions , similarly as for . -product and -ratio are applied to the definition of the -multinomial coefficient . for and the -multinomial coefficientis defined by _{ q}:=\left ( n!_{q}\right ) \oslash _ { q}\left [ \left ( n_{1}!_{q}\right ) \otimes _ { q}\cdots \otimes _ { q}\left ( n_{k}!_{q}\right ) % \right ] .\label{def of q - multinomial coefficient}\ ] ] from the definition ( [ def of q - multinomial coefficient ] ) , it is clear that _ { q}=\left [ \begin{array}{ccc } & n & \\n_{1 } & \cdots & n_{k}% \end{array}% \right ] = \frac{n!}{n_{1}!\cdots n_{k}!}.\ ] ] in addition to the -multinomial coefficient , the -stirling s formula is useful for many applications such as our main results . by means of the -product ( [ def of q - product ] ) , the -factorial naturally defined as follows . for a natural number and ,the -factorial is defined by using the definition of the -product ( [ def of q - product ] ) , is explicitly expressed by an approximation of is not needed , this explicit form should be directly used for its computation . however , in order to clarify the correspondence between the studies and , the approximation of is useful .in fact , using the following -stirling s formula , we obtain the unique generalized entropy corresponding to the -exponential function , shown in the following sections .let be the -factorial defined by ( [ def of q - kaijyo ] ) .the rough -stirling s formula is computed as follows: the proof of the above formulas ( [ rough q - stirling ] ) is given in su04b .in this section we show that tsallis entropy is uniquely and naturally derived from the fundamental formulations presented in the previous section . 
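Before doing so, the q-factorial and the rough q-Stirling approximation just reviewed can also be illustrated numerically. The sketch below uses the fact that ln_q(n!_q) = sum_{k=1}^{n} ln_q(k), which is equivalent to the explicit product form as long as the argument stays on the positive branch (guaranteed for 0 < q < 1, the case used here). Since the explicit rough q-Stirling formula is garbled above, the comparison uses the form ln_q(n!_q) ≈ (n ln_q n − n)/(2 − q), which reduces to Stirling's n ln n − n at q = 1; this particular form is an assumption and should be checked against the cited reference.

import math
import numpy as np

def ln_q(x, q):
    return np.log(x) if q == 1 else (x**(1 - q) - 1) / (1 - q)

def ln_q_factorial(n, q):
    """ln_q(n!_q) = ln_q(1) + ln_q(2) + ... + ln_q(n)."""
    return ln_q(np.arange(1, n + 1, dtype=float), q).sum()

q, n = 0.8, 200
exact = ln_q_factorial(n, q)
rough = (n * ln_q(float(n), q) - n) / (2 - q)      # assumed rough q-Stirling form
assert np.isclose(exact, rough, rtol=0.02)         # agree to well below 1% already at n = 200
assert np.isclose(ln_q_factorial(6, 1.0), math.log(math.factorial(6)))   # q -> 1 recovers ln(n!)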
in order to avoid separate discussions on the positivity of the argument in ( [ def of q - multinomial coefficient ] ) , we consider the -logarithm of the -multinomial coefficient to be given by _ { q}=\ln _ { q}\left ( n!_{q}\right ) -\ln _ { q}\left ( n_{1}!_{q}\right ) \cdots -\ln _ { q}\left ( n_{k}!_{q}\right ) .\label{lnq of multinomial}\]]the unique generalized entropy corresponding to the -exponential is derived from the -multinomial coefficient using the -stirling s formula as follows .when is sufficiently large , the -logarithm of the -multinomial coefficient coincides with tsallis entropy ( [ d - tsallis entropy ] ) as follows: _{ q}\simeq \left\ { \begin{array}{ll } \dfrac{n^{2-q}}{2-q}\cdot s_{2-q}\left ( \dfrac{n_{1}}{n},\cdots , \dfrac{n_{k}% } { n}\right ) & \text{if}\quad q>0,\,\,q\neq 2 \\ -s_{1}\left ( n\right ) + \sum\limits_{i=1}^{k}s_{1}\left ( n_{i}\right ) & \text{% if}\quad q=2% \end{array}% \right .\label{important0 - 2}\]]where is tsallis entropy : is given by the proof of this theorem is given in .note that the above relation ( [ important0 - 2 ] ) reveals a surprising _ symmetry _ : ( [ important0 - 2 ] ) is equivalent to _{ 1-\left ( 1-q\right ) } \simeq \frac{n^{1+\left ( 1-q\right ) } } { % 1+\left ( 1-q\right ) } \cdot s_{1+\left ( 1-q\right ) } \left ( \frac{n_{1}}{n}% , \cdots , \frac{n_{k}}{n}\right ) \label{symmetry}\]]for and .this expression represents that there exists a _ symmetry _ with a factor around in the algebra of the -product .substitution of some concrete values of into ( important0 - 2 ) or ( [ symmetry ] ) helps us understand the meaning of this symmetry .remark that the above correspondence ( [ important0 - 2 ] ) and the symmetry ( [ symmetry ] ) reveals that the -exponential function ( q - exponential ) derived from([nonlinear differential equation ] ) is consistent with tsallis entropy only as information measure .this section shows another way to uniquely determine the generalized entropy .more precisely , the identity derived from the -multinomial coefficient coincides with the generalized shannon additivity which is the most important axiom for tsallis entropy .consider a partition of a given natural number into groups such as .in addition , each natural number is divided into groups such as where .[ ptbh ] fig1.eps then , the following identity holds for the -multinomial coefficient. _{ q}=\left [ \begin{array}{ccc } & n & \\ n_{1 } & \cdots & n_{k}% \end{array}% \right ] _ { q}\otimes _ { q}\left [ \begin{array}{ccc } & n_{1 } & \\n_{11 } & \cdots & n_{1m_{1}}% \end{array}% \right ] _ { q}\otimes _ { q}\cdots \otimes _ { q}\left [ \begin{array}{ccc } & n_{k } & \\n_{k1 } & \cdots & n_{km_{k}}% \end{array}% \right ] _ { q } \label{q - identity}\]]it is very easy to prove the above relation ( [ q - identity ] ) by taking the -logarithm of the both sides and using ( [ lnq of multinomial ] ) . on the other hand , the above identity ( [ q - identity ] )is reformed to the generalized shannon additivity in the following way . 
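The theorem can be illustrated numerically for moderate n. The sketch below (self-contained, assuming numpy) evaluates the q-logarithm of a q-multinomial coefficient through ln_q(n!_q) = sum_k ln_q(k) and compares it with (n^{2-q}/(2-q)) · S_{2-q}(n_1/n, ..., n_k/n), using the standard form S_Q(p) = (1 − sum_i p_i^Q)/(Q − 1) for Tsallis entropy. The choice q = 0.8 and the occupation numbers are arbitrary; q < 1 is used so that all q-factorials stay on the positive branch.

import numpy as np

def ln_q(x, q):
    return np.log(x) if q == 1 else (x**(1 - q) - 1) / (1 - q)

def ln_q_factorial(n, q):
    return ln_q(np.arange(1, n + 1, dtype=float), q).sum()

def tsallis(p, Q):
    p = np.asarray(p, dtype=float)
    return (1.0 - (p**Q).sum()) / (Q - 1.0)

q = 0.8
counts = np.array([300, 700])
n = counts.sum()
lhs = ln_q_factorial(n, q) - sum(ln_q_factorial(m, q) for m in counts)   # ln_q of the q-multinomial
rhs = n**(2 - q) / (2 - q) * tsallis(counts / n, 2 - q)
assert np.isclose(lhs, rhs, rtol=0.02)      # ~0.3% agreement already at n = 1000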
taking the -logarithm of the both sides of the above relation ( [ q - identity ] ), we have _ { q}=\ln _ { q}\left [ \begin{array}{ccc } & n & \\ n_{1 } & \cdots & n_{k}% \end{array}% \right ] _ { q}+\sum_{i=1}^{k}\ln _ { q}\left [ \begin{array}{ccc } & n_{i } & \\ n_{i1 } & \cdots & n_{im_{i}}% \end{array}% \right ] _ { q}.\]]from the relation ( [ important0 - 2 ] ) , we obtain , by means of the following probabilities defined by identity ( [ n - gene - shanaddi ] ) becomes formula ( [ generalizedshannon ] ) obtained from the -multinomial coefficient is exactly same as the generalized shannon additivity ( see [ gsk3 ] given below ) which is the most important axiom for tsallis entropy .in fact , the generalized shannon - khinchin axioms and the uniqueness theorem for the nonextensive entropy are already given and rigorously proved in su04d . the present result ( [ generalizedshannon ] ) and the already established axiom [ gsk3 ]perfectly coincide with each other .let be defined by the -dimensional simplex : following axioms [ gsk1][gsk4 ] determine the function such that satisfies properties ( i ) ( iv ) : 1 . is continuous and has the same sign as .e ., 2 . 3 .there exists an interval such that and is differentiable on the interval 4 .there exists a constant such that [ gsk1 ] _ continuity _ : is continuous in and , [ gsk2 ] _ maximality _ : for any , any and any , [ gsk3 ] _ generalized shannon additivity _ : if the following equality holds: [ gsk4 ] _ expandability _ : note that , in order to uniquely determine the tsallis entropy ( d - tsallis entropy ) in the above set of the axioms , should be removed from ( [ constrant4 ] ) , that is , ( i.e. , ) should be used instead of ( [ constrant4 ] ) .the general form perfectly corresponds to tsallis original introduction of the so - called tsallis entropy in 1988 . see his original characterization shown in page 9 of for the detail ( corresponds to in his notation .his simplest choice of coincides with the simplest form of i.e. , ) .when one of the authors ( h.s . ) submitted the paper in 2002 , nobody presented the idea of the -product .however , as shown above , the identity on the -multinomial coefficient which was formulated based on the -product coincides with one of the axioms ( _ _ _ _ [ gsk3 ] : generalized shannon additivity ) in .this means that the whole theory based on the -product is self - consistent .moreover , other fundamental applications of the -product , such as law of error and the derivation of the unique non self - referential -canonical distribution , are also based on the -product .starting from a fundamental nonlinear equation , we present the scaling property and the algebraic structure of its solution .moreover , we prove that the algebra determined by its solutions is mathematically consistent with tsallis entropy only as the corresponding unique information measure based on the following 2 mathematical reasons : ( 1 ) derivation of tsallis entropy from the -multinomial coefficient and -stirling s formula , ( 2 ) coincidence of the identity derived from the -multinomial coefficient with the generalized shannon additivity which is the most important axiom for tsallis entropy . 1. axioms and the uniqueness theorem for the nonextensive entropy su04d 2 .law of error in tsallis statistics 3 .-stirling s formula in tsallis statistics 4 .-multinomial coefficient in tsallis statistics 5 .central limit theorem in tsallis statistics ( numerical evidence only ) 6 .-pascal s triangle in tsallis statistics 7 . 
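The generalized Shannon additivity [GSK3] obtained here is an exact identity for Tsallis entropy and can be checked directly; the joint distribution and the value q = 1.7 below are arbitrary test choices.

import numpy as np

def tsallis(p, q):
    p = np.asarray(p, dtype=float)
    return (1.0 - (p**q).sum()) / (q - 1.0)

q = 1.7
p_ij = np.array([[0.10, 0.25, 0.05],
                 [0.20, 0.15, 0.25]])      # joint probabilities; rows are the coarse groups
p_i = p_ij.sum(axis=1)                     # group marginals
lhs = tsallis(p_ij.ravel(), q)
rhs = tsallis(p_i, q) + sum(p_i[i]**q * tsallis(p_ij[i] / p_i[i], q) for i in range(len(p_i)))
assert np.isclose(lhs, rhs)                # holds exactly (up to rounding) for any q != 1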
the unique non self - referential -canonical distribution in tsallis statistics 8 .scaling property characterized by the fundamental nonlinear differential equation [ the present paper ] .all of the above fundamental results are derived from the algebra of the -product and mathematically consistent with each other .this means that the -product is indispensable to the formalism in tsallis statistics .more important point is that the -product originates from the fundamental nonlinear differential equation ( [ nonlinear differential equation ] ) with _ scale - invariant _ solutions . in any manuscripts and papers up to now ,this constant is set to be , i.e. , ( see the equation ( 17 ) at page 5 of and the equations ( 22)-(23 ) at page 8 of ) .but the arbitrariness of this constant plays a very important role in scaling ( [ scaling ] ) .h. suyari , the unique non self - referential -canonical distribution and the physical temperature derived from the maximum entropy principle in tsallis statistics , prog .* 162 * , pp.79 - 86 ( 2006 ) .( see the author s website ( http://www.ne.jp/asahi/hiroki/suyari/publications.htm ) for the detail derivation . )
We derive a scaling property from a fundamental nonlinear differential equation whose solution is the so-called q-exponential function. A scaling property has usually been believed to be given by a power function only, but a more general expression for the scaling property is in fact found as a solution of the above fundamental nonlinear differential equation. Indeed, any power function is obtained by restricting the domain of the q-exponential function appropriately. In analogy with the correspondence between the exponential function and Shannon entropy, an appropriate generalization of Shannon entropy is expected for the scaling property. Although the q-exponential function often appears in the optimal distributions of some one-parameter generalized entropies such as Rényi entropy, only Tsallis entropy is uniquely derived from the algebra of the q-exponential function, whose uniqueness is shown in two ways in this paper.
has been a large body of work involving fiber bragg grating sensors over the past two decades .early demonstrations were based on changes in the gross bragg wavelength as the gratings were perturbed due to strain and temperature . as interrogation techniques became more sophisticated , various signal processing and active fringe side locking schemes were employed , which dramatically improved their resolution .this was further enhanced by refinement of grating design , enabling fabry - perot resonators to be fabricated , which effectively multiply the phase change due to fiber optical path displacements . with careful control of the grating writing process and appropriate choice of glass material , a bragg grating fiber fabry - perot ( ffp )can now have a finesse of well over 1000 and a linewidth of a few mhz .a fiber distributed feedback ( dfb ) laser can be fabricated when the ffp is written in a fiber amplifier and pumped optically .these lasers have attracted significant interest for use in various schemes as active sensing elements , where changes in lasing wavelength due to environmental perturbations are used as the sensor signal . the past decade has also seen intense international effort in attaining direct gravitational wave detection , which demands unprecedented interferometric sensitivity to measure the strain of space - time . towards achieving this ultra resolution ,the pound - drever - hall ( pdh ) laser frequency locking scheme is widely used .it is adopted for laser frequency stabilization , interferometer longitudinal control , as well as gravitational wave signal extraction ..,width=288 ] while the pdh frequency locking technique is well - established with free - space bulk - optical resonators and solid - state lasers within the gravitational wave community , it can readily be extended to diode laser stabilization , and guided - wave optics .it has previously been utilized in a fiber laser stabilization scheme , where an erbium doped fiber laser was referenced to a coated micro resonator .the pdh locking scheme can be adapted for both low frequency ( ) quasi - static strain sensing , and dynamic measurements at higher frequencies .in this paper we will discuss this technique in some detail , and demonstrate pdh locking for signal extraction in a fiber sensor , with a bragg grating ffp as the sensing resonator .remote interrogation of passive resonators has significant advantages over active dfb lasers as sensing devices .the problems relating to dopant clustering and relaxation oscillations in erbium doped dfb lasers are completely avoided , and the undesirable effects of optical pump noise is eliminated .in addition , the interrogating laser can be housed in a controlled environment , and any residual introduction of phase noise due to laser intensity fluctuations are secondary effects . when the chosen interrogating wavelength is 1550 nm , telecoms grade smf-28 fiber can be used for both laser delivery as well as grating sensor fabrication , with the dual benefit of low cost and low loss .this removes the need for more exotic fibers which requires cutoff below the pump wavelength for single - mode pump delivery . at 0.1 - 1mw output power ,erbium doped fiber dfb lasers are inherently inefficient compared with commercially available extra - cavity diode lasers , used in this work , with of output power .this higher laser power improves the signal to shot noise ratio by up to an order of magnitude . 
while this output power can potentially be matched by er / yb codoped dfb lasers , they require complex fiber geometry to achieve sufficient photosensitivity .in addition , frequency instability due to thermal pump absorption continues to limit their sensing performance .remote interrogation , therefore , presents itself as an elegant and superior sensing solution .for the purpose of this discussion , we will simplify our treatment of the ffp as similar to that of a free space resonant cavity , ie , within the bandwidth of concern , the bragg reflectors are broadband , and both the reflectors and resonator refractive index are non - dispersive . at the optical carrier frequency , the complex reflection response of a lossless ffp formed by two matched reflectors separated by distance ,both with amplitude reflection coefficient , can be expressed as , \end{aligned}\ ] ] where and are the reflected and incident electric fields ; is the round - trip phase in a material of refractive index n ; and are , respectively , the amplitude and phase response .the ffp has a full - width half - maximum ( fwhm ) bandwidth of .the pdh locking scheme involves interrogating the ffp with the laser carrier phase modulated at , while measuring the reflected power with a photodetector , as illustrated in figure [ schematic ] .after electronic demodulation and low - pass filtering , this signal can be reduced to \cos(\psi ) \nonumber \\ & & + \im[\tilde{f}(\nu)\tilde{f}^{*}(\nu_{+ } ) -\tilde{f}^{*}(\nu)\tilde{f}(\nu_{-})]\sin(\psi)\ } , \label{err_v}\end{aligned}\ ] ] where the cross term \ } \nonumber \\ & & -a(\nu)a(\nu_{-})\exp\{i[\phi(\nu_{-})-\phi(\nu)]\ } ; \label{cross_term } \end{aligned}\ ] ] and ; is the power in the carrier while is the power in each sideband .the phase shift is set to optimize the demodulated error signal . in generalthis is achieved when \big]/d\nu } { d\big[\re[\tilde{c}(\nu_{\pm})]\big]/d\nu}\bigg\}_{\theta(\nu)=m2\pi},\end{aligned}\ ] ] where m is an integer .the round - trip phase when the carrier is resonant with the ffp . from equation [ cross_term ], we can deduce that in the case of , and are both very small , and so the expression is dominated by its real part .conversely , when , the sidebands are well outside of the ffp linewidth when the carrier is near resonance . in this casethese phase difference terms approach and the expression is dominated by its imaginary part. if the ffp lineshape is symmetric and the carrier is at resonance , and for both cases , implying that equation [ cross_term ] , and hence equation [ err_v ] , become zero .this is the usual lock point of the frequency servo . from equation [ err_v ] , it is clear that when the cross term equals 0 ( locked to resonance ) , the output is equal to zero and independent of and .hence , when locked , the pdh system is immune to variations in laser intensity noise to the first order . in comparison , a fringe - side locking technique shows no implicit immunity to intensity noise , and requires an additional intensity monitor and subtraction electronics .figure [ theo_errsig]a illustrates the theoretical error signal for the case of , while figure [ theo_errsig]b is for the case of , when is scanned across the resonance of a ffp . 
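a minimal numerical sketch of the quantities discussed above is given below . it assumes the usual lossless, matched two-mirror reflection response f(ν) = r ( e^{iθ} − 1 ) / ( 1 − r² e^{iθ} ) with round-trip phase θ = 4πνnl/c (one common sign convention) , and forms the demodulated error signal from the cross term f(ν) f*(ν+ω) − f*(ν) f(ν−ω) with demodulation phase ψ , as in the text ; the carrier and sideband power prefactors are set to unity , so the overall scale is arbitrary . the mirror reflectivity, index and length are illustrative values chosen to roughly mimic a resonance of about 145 mhz linewidth .

```python
import numpy as np

c = 2.998e8          # speed of light [m/s]
n = 1.45             # effective refractive index (assumed)
L = 11.5e-3          # resonator length [m] (illustrative)
r = 0.975            # amplitude reflection coefficient of each grating (illustrative)

def F(nu):
    """complex reflection response of a lossless, matched two-mirror resonator;
    nu is the detuning from resonance (one common sign convention)."""
    theta = 4.0 * np.pi * nu * n * L / c            # round-trip phase
    return r * (np.exp(1j * theta) - 1.0) / (1.0 - r**2 * np.exp(1j * theta))

def error_signal(nu, f_mod, psi):
    """demodulated pdh error signal; carrier and sideband powers set to one."""
    cross = F(nu) * np.conj(F(nu + f_mod)) - np.conj(F(nu)) * F(nu - f_mod)
    return np.real(cross) * np.cos(psi) + np.imag(cross) * np.sin(psi)

fsr  = c / (2.0 * n * L)                            # free spectral range (~9 GHz here)
fwhm = fsr * (1.0 - r**2) / (np.pi * r)             # approximate linewidth (~145 MHz here)
nu   = np.linspace(-3.0 * fwhm, 3.0 * fwhm, 4001)

for f_mod in (0.1 * fwhm, 2.0 * fwhm):              # modulation below and above the linewidth
    psis = np.linspace(0.0, np.pi, 181)
    best = max(psis, key=lambda p: np.ptp(error_signal(nu, f_mod, p)))
    err  = error_signal(nu, f_mod, best)
    print("f_mod/fwhm = %4.1f  optimal psi = %.2f rad  peak-to-peak = %.3f"
          % (f_mod / fwhm, best, np.ptp(err)))
```

the optimal demodulation phase found by the small search is close to 0 for modulation well inside the linewidth and close to π/2 for modulation well outside it , which is the behaviour described in the text for the two limiting regimes .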
figure [ theo_errsig]c shows the intermediate case where .the two satellite error signals in figure [ theo_errsig]b are due to the sidebands undergoing the ffp resonance , whereas in figure [ theo_errsig]c , the error signals due to the carrier and sidebands merge to form a single and almost square error signal .the plots assume a resonance linewidth of 150mhz , and it is interrogated by phase modulation frequencies 15mhz , 1500mhz and 300mhz respectively ..,width=288 ] the case where describes the classic pdh locking regime , involving high finesse fabry - perot cavities .the principle of operation behind both extremes are similar and , for the sake of brevity , we shall refer to both as pdh locking in this treatment .subsequent experimental results for our ffp to be presented in this paper will show that we were operating nearer the regime . .inset : normalized experimental operating regimes for two resonances , overlaid with expanded theoretical plot.,width=288 ] for a given resonance fwhm , , the frequency separation between the turning points of a pdh error signal is dependent on . it approaches asymptotic values for both cases of and , as illustrated by the theoretical plot in figure [ theo_errsig_bw ] .the plot is calculated with optimized for each .on the other hand , for a given modulation frequency , the size and , therefore , slope of the error signal is dependent on the fwhm bandwidth .figure [ theo_errsig_size ] shows the theoretical plot of peak - to - peak normalized error signal size vs normalized fwhm bandwidth .the error signal size approaches zero when , but reaches an asymptotic value when .the topology of our pdh interrogation of an ffp is shown in figure [ schematic ] .the laser carrier is provided by a new focus vortex 6029 , which was an extra - cavity diode laser with a factory - estimated linewidth of 1mhz , and an intrinsic linewidth of .its optical wavelength was centered around 1550.15 nm , with about 0.40 nm tuning range , which corresponds to a frequency range of .the frequency tuning of the laser was actuated by applying a voltage to the piezo - electric transducer ( pzt ) , which changed the laser cavity length .the factory calibration specified that its laser pzt actuator had a gain of 12.5ghz / v . 
after passing through the optical isolator, the laser polarization was adjusted to vertical by a half - wave plate before being modulated at 15mhz by the resonant phase modulator ( new focus 4003 ) .the phase modulator was driven by a radio - frequency ( rf ) signal generator , which also provided the local oscillator for the demodulation electronics .the modulated laser beam was coupled with an aspheric lens into a fiber - pigtailed polarization - independent optical circulator , which was spliced to the ffp .the ffp was held between a pair of magnetic clamps , with one of the clamps in turn mounted on a translation stage , so that the bragg wavelength could be stretch - tuned to within the laser frequency range .our ffp consisted of a pair of nominally matched 13.5db bragg gratings ( r ) each 15 mm long , spaced 10 mm apart , fabricated in a single phase - coherent writing process .the schematic for the uv exposure along the length of the fiber is illustrated in figure [ ffp_schematic ] .they were written in hydrogenated smf-28 fiber with no apodization .both the transmitted and reflected light were collimated back into free space with ashperic lenses and then focussed onto photodetectors tx and rx , respectively , each with electronic bandwidth of .the optical isolator in the transmitted port eliminated any parasitic etalon effects due to residual back reflections from the collimating asphere .the rf local oscillator was phase shifted before being used to mix down the electronic signal from the reflected port .the mixed signal was low - pass filtered to provide the pdh error signal .the local oscillator phase shift was optimized experimentally by maximizing the error signal .a 95hz voltage ramp of 2vp - p and 50:50 symmetry was applied to the laser pzt input to sweep the laser carrier frequency , which equates to a slope of 380v / s .the intensities transmitted and reflected by the ffp , as measured by the photodetectors , and the corresponding mixed down experimental error signal were recorded using a digital oscilloscope while the laser frequency was scanned .they are displayed in figures [ exp_scan_wide]a , [ exp_scan_wide]b and [ exp_scan_wide]c , respectively .there were two ffp resonances within the bragg grating bandwidth with differing peak heights and s .these differences were mainly due to the frequency dependent reflectivity of the bragg grating pair , thus resulting in differing finesse at the two resonances .since the gratings were not apodized during the fabrication process , we can expect higher reflectivity near the center of their bandwidth , which is confirmed by the higher finesse and thus narrower fwhm of the first resonator mode .further , by comparing the heights of the two peaks in figure [ exp_scan_wide]a , we can see that the lower finesse resonance is closer to impedance matching . at this mode ,nearly all of the laser light was transmitted and the reflection approached zero .this difference in transmitted intensity , compared with the under - coupled high finesse mode , can be explained by uv induced loss in the resonator , particularly in the 10 mm spacing between the grating pair .the higher finesse resonance transmitted less intensity due to its greater resonator round - trip number , or total storage time , which resulted in greater total loss while circulating within the resonator . 
to reduce this loss ,the uv laser can be easily controlled to avoid fiber exposure between the grating pair during the resonator fabrication process .the transmission scan and the reflected error signal for the narrower resonance is enlarged in figure [ exp_scan_zoom]a and [ exp_scan_zoom]b .the fwhm time for the pzt scan in figure [ exp_scan_zoom]a was , which corresponds to 11.4mv on the pzt . recalling that the factory calibration specified that its laser pzt input provided 12.5ghz / v of tuning , the fwhm bandwidth of this mode can be determined to be 143mhz . for comparison ,the broader resonance had a fwhm time of 66 , which implies a bandwidth of 314mhz .the separation between the two peaks can be seen to be in figure [ exp_scan_wide ] , which infers a free spectral range of 9ghz .hence , the narrower mode had a finesse of 63 , while the broader resonance had a finesse of 29 .the ratio for the higher finesse mode was .the corresponding peak - to - peak time for its error signal in figure [ exp_scan_zoom]b was , which yields an error signal turning point frequency separation to ratio of .on the other hand , the lower finesse resonance had an error signal peak - to - peak time of 38 , which corresponds to of , and an error signal turning point separation to ratio of .the error signal turning point separation to ratios for the two modes are close to each other , and agree with the values as predicted in figure [ theo_errsig_bw ] . at these linewidths, is small enough relative to to approach the asymptotic value of the lower limit .the peak - to - peak error signal size for the higher finesse mode was larger than that of the lower one , as seen in figure [ exp_scan_wide]c , since the for the higher finesse mode was twice that of the lower finesse mode .this was predicted by the theoretical plot in figure [ theo_errsig_size ] .the error signal peak - to - peak voltage for the high finesse mode was measured to be 1.4v , while that for the lower finesse resonance was 0.63v .these two points , for of 0.1 and 0.05 , are normalized and overlaid with the theoretical plot in the inset of figure [ theo_errsig_size ] , to illustrate the region where these two modes were operated . assuming an effective refractive index of 1.45 , a free spectral range of 9ghz yields a resonator length of 11.5 mm , implying that the effective reflection point of the gratings was mm inside each grating .we tested for polarization dependence of the ffp response with a second half - wave plate before the laser was coupled into the fiber . 
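several of the numbers quoted above can be reproduced from the scan parameters with a few lines of arithmetic . the sketch below is only a consistency check : the scan time of the narrow resonance is inferred back from the quoted 11.4 mv , and the peak separation voltage is inferred from the quoted 9 ghz free spectral range , since the raw numbers themselves are not restated here .

```python
# consistency check of the scan-calibration arithmetic quoted in the text
c = 2.998e8                        # m/s
pzt_gain = 12.5e9                  # Hz per volt (factory calibration of the laser pzt)
ramp = 2.0 * 2 * 95.0              # V/s: 2 Vp-p, 50:50 triangle ramp at 95 Hz -> 380 V/s

fwhm_voltage = 11.4e-3             # V, quoted for the narrow resonance
fwhm_narrow = fwhm_voltage * pzt_gain          # ~143 MHz
fwhm_broad  = 66e-6 * ramp * pzt_gain          # 66 us fwhm scan time -> ~314 MHz

fsr = 9e9                          # Hz, quoted free spectral range
print("ramp slope              :", ramp, "V/s")
print("fwhm scan time (narrow) : %.0f us" % (fwhm_voltage / ramp * 1e6))
print("fwhm narrow / broad     : %.0f / %.0f MHz" % (fwhm_narrow / 1e6, fwhm_broad / 1e6))
print("finesse narrow / broad  : %.0f / %.0f" % (fsr / fwhm_narrow, fsr / fwhm_broad))
print("peak separation voltage : %.2f V" % (fsr / pzt_gain))
print("resonator length (n=1.45): %.1f mm" % (c / (2 * 1.45 * fsr) * 1e3))
```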
no visible shift in resonance frequencieswere observed as the waveplate was rotated .this implies that for the intent and purpose of this application , the uv illumination of the grating core during the fabrication process can be regarded as isotropic .any degeneracy due to parasitic birefringence was beyond the linewidth resolution of the ffp resonance , as the two modes provided well behaved error signals free from input polarization wander effects .it is evident from figures [ exp_scan_wide ] and [ exp_scan_zoom ] that pzt scanning to sweep the frequency of the laser , as demonstrated in this experiment , is a simple alternative to the single - sideband modulation technique for high resolution spectral characterization of fiber gratings .the slope of the error signal through resonance was / hz for the higher finesse mode , and / hz for the lower finesse mode .hence the higher finesse resonance was our preferred mode for pdh locking , as it provided a larger signal as a frequency and displacement discriminator in sensing applications .one should note , however , that while higher ffp finesse is preferred for superior sensitivity , the free running laser frequency noise sets a limit to interferometer sensitivity . to lock the laser , the voltage ramp from the signal generatorwas turned off , and the pzt dc offset voltage tuned slowly while the transmitted and reflected laser intensities were monitored with an oscilloscope . when the laser is nearly resonant with the chosen ffp peak , the transmitted intensity approaches its maximum , and the feedback loop was then engaged to acquire lock .this process was recorded by the digital oscilloscope traces shown in figure [ exp_lock_acq ] .the servo amplifier used in this experiment had a single real pole response with a corner frequency of 0.03hz .the total feedback loop had a dc gain of and a unity gain bandwidth of around 40hz .lock acquisition was straight forward and once it was acquired , the system stayed locked for several hours even when subjected to large environmental noise events .lock termination occured when the grating drifted outside the laser tuning range , and this typically happened after over 3 hours of locked operation .5.5sec.,width=288 ] in a pdh locking scheme , the sensor signal is extracted by either monitoring the pzt feedback voltage required to maintain lock within the servo unity gain bandwidth , or by monitoring the mixer output at frequencies above the unity gain bandwidth .environmental stimulations , such as temperature drift as well as stress and strain due to mechanical or acoustic perturbation , change the resonance condition of the ffp .this results in both dc and ac voltage change in the mixer output and the pzt feedback voltage . when the mixer output is read with a dynamic signal analyzer , information about these perturbations can be extracted .the signal analyzer performs a fast fourier transform of the mixer output voltage , and provides a trace with units in volts . 
the quotient of this trace by the slope of the error signal ( 19nv / hz ) yields the callibrated measurement in hz .the low frequency measurement of this mixer output is shown in figure [ exp_sig_analyser_lowfreq ] .there was a large component of ambient noise at low frequencies as the ffp was not isolated from laboratory acoustic and thermal noise .we were able to identify fiber violin modes , broadband acoustic noise , and pzt resonances in this frequency regime .for example , the large feature at khz seen in figure [ exp_sig_analyser_lowfreq ] is due to closed loop excitation of a laser pzt mode .figure [ exp_sig_analyser ] shows a wider frequency scan of the ambient frequency noise .it is overlaid with the calculated shot noise and measured electronic noise . at frequencies above ambient excitation ,the free running frequency noise of the laser limits this measurement to .assuming the laser has a lorentzian lineshape with white spectral density of frequency noise , the 3db linewidth of the laser can be estimated by where has units of hz .thus , the broadband frequency noise of corresponds to an intrinsic laser linewidth of , which is consistent with the manufacturer s estimate of 300khz .according to the empirical model determined by kersey et al . , bragg grating responsivity where is the strain perturbation , and is the bragg wavelength , 1 pm of induced grating wavelength shift corresponds to a strain of . at nm , equation ( [ kerseymodel ] )can be rearranged to arrive at the conversion factor where is the equivalent induced grating frequency shift . since 1 pm is equivalent to 125mhz at 1550 nm , we can infer from the high frequency noise floor that the ffp sensor has a broadband strain sensitivity of .the shot noise in figure [ exp_sig_analyser ] was calculate as follows : where is the equivalent shot noise voltage ; c is the electronic charge ; is the output voltage of the photodetector when the system is locked ; g is the transimpedance gain of the photodetector : and is the mixer conversion gain .the quotient of by the error signal slope then gives the shot noise in units of hz .this was calculated to be 16 hz , which corresponds to a limiting shot - noise sensitivity of ( 16 hz ) 100 f .the electronic noise is the dark noise measured at the mixer output . within the unitygain bandwidth of the feedback system , the sensor dynamic range depends on the laser optical frequency tuning range .since our laser had a pzt tuning range of 50 ghz , the low frequency dynamic range of this system is limited to ( ) 330 . assuming a breaking stress of 100 kpsi , and a young s modulus of kpsi for fused silica , the breaking strain is 9800 .this means that typically , the breaking strain is well beyond the limited tuning range of the laser used in this experiment . above the unity gain bandwidth ,the sensor dynamic range is limited by the fwhm bandwidth of the resonator to ( ) 0.9 .hence , for large dynamic range applications , the preferred operating approach is to expand the unity gain bandwidth out to a maximum , and perform in - loop measurements at the laser pzt actuator input .we have presented a passive fiber sensor interrogation technique which was adapted from pdh locking , used in gravitational wave detection .we demonstrated the robust and stable operation of the simple locking system in fiber . 
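the unit conversions used in this section can be collected into a short script , sketched below . the effective responsivity coefficient of about 0.78 in the kersey et al. model (i.e. δλ_b/λ_b ≈ 0.78 ε) and the broadband frequency-noise floor of 300 hz/√hz are assumptions made here for the illustration ; the latter is the value consistent with the quoted 300 khz intrinsic linewidth and with the 2 p-strain broadband sensitivity stated in the conclusions .

```python
import numpy as np

c = 2.998e8
lam = 1550e-9                     # bragg / laser wavelength [m]
k_strain = 0.78                   # assumed effective responsivity: d(lambda)/lambda = k * strain

hz_per_pm = c / lam**2 * 1e-12    # frequency shift per pm of wavelength shift (~125 MHz)
strain_per_hz = lam / (k_strain * c)   # strain per Hz of grating frequency shift

freq_noise = 300.0                # Hz/sqrt(Hz), assumed broadband laser frequency-noise floor
shot_noise = 16.0                 # Hz/sqrt(Hz), quoted shot-noise-limited value

print("1 pm at 1550 nm            = %.0f MHz" % (hz_per_pm / 1e6))
print("laser 3 dB linewidth       = %.0f kHz" % (np.pi * freq_noise**2 / 1e3))  # pi*S^2, white noise
print("broadband strain floor     = %.1f p-strain/rtHz" % (freq_noise * strain_per_hz * 1e12))
print("shot-noise strain limit    = %.0f f-strain/rtHz" % (shot_noise * strain_per_hz * 1e15))

# dynamic range estimates
print("low-freq range (50 GHz)    = %.0f micro-strain" % (50e9 * strain_per_hz * 1e6))
print("high-freq range (143 MHz)  = %.1f micro-strain" % (143e6 * strain_per_hz * 1e6))
```

under these assumptions the script returns roughly 125 mhz per pm , a 280 khz linewidth , a 2 p-strain broadband floor , a 100 f-strain shot-noise limit , and dynamic ranges of about 330 and 0.9 micro-strain , in line with the figures quoted in the text .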
in many applications, we believe this passive technique is superior to active methods using fiber lasers , due to its better efficiency , improved signal to shot - noise ratio , lower cost , and its suitability for remote sensing .it has an implied broadband strain sensitivity limit of 2 p , which is due to the free - running frequency noise of our laser . with appropriate laser stabilization prior to ffp interrogation , however ,it has the potential to surpass the pico - strain regime and approach the fundamental shot noise limit .june 29 , 2004the authors would like to thank adrian l. g. carter , of nufern , for useful discussions .alan d. kersey , michael a. davis , heather j. patrick , michel leblanc , k. p. koo , c. g. askins , m. a. putnam , and e. joseph friebele , `` fiber grating sensors , '' _ j. lightwave technol ._ , vol . 15 , pp . 1442 - 1463 , 1997 .anthony dandridge , alan b. tveten , and thomas g. giallorenzi , `` homodyne demodulation scheme for fiber optic sensors using phase generated carrier , '' _ ieee j. quantum electron ._ , qe-18 , pp . 1647 - 1653 , 1982 .n. e. fisher , d. j. webb , c. n. pannell , d. a. jackson , l. r. gavrilov , j. w. hand , l. zhang , and i. bennion , `` ultrasonic hydrophone based on short in - fiber bragg gratings , '' _ appl . opt .8120 - 8128 , 1998 .michel leblanc , alan d. kersey , and tsung - ein tsai , `` sub - nanostrain strain measurements using a pi - phase shifted grating , '' in _ proc .fiber sensors ofs 97 _ , williamsburg va , 1967 pp .28 - 30 , 1997 .y. j. rao , m. r. cooper , d. a. jackson , c. n. pannell , and l. reekie , `` absolute strain measurement using an in - fibre - bragg - grating - based fabry - perot sensor , '' _ electron .708 - 709 , 2000 .sigurd weidemann lvseth , jon thomas kringlebotn , erlend rnnekleiv , and kjell bltekjr , `` fiber distributed - feedback lasers used as acoustic sensors in air , '' _ appl .4821 - 4830 , 1999 .a. frank , k. bohnert , k. haroud , h. brndle , c. v. poulsen , j. e. pedersen , and j. patscheider , `` distributed feedback fiber laser sensor for hydrostatic pressure , '' _ ieee photon .15 , pp . 1758 - 1760 , 2003 .r. w. p. drever , j. l. hall , f. v. kowalski , j. hough , g. m. ford , a. j. munley , and h. ward , `` laser phase and frequency stabilization using an optical resonator , '' _ appl .b _ , vol . 31 , pp .97 - 105 , 1983 .bram j. j. slagmolen , malcolm b. gray , karl g. baigent , and david e. mcclelland , `` phase - sensitive reflection technique for characterization of a fabry - perot interferometer , '' _ appl .3638 - 3643 , 2000 .timothy day , eric k. gustafson , and robert l. byer , `` sub - hertz relative frequency stabilization of two - diode laser - pumped nd : yag lasers locked to a fabry - perot interferometer , '' _ ieee j. quantum electron .1106 - 1117 , 1992 .kenneth a. strain , guido mller , tom delker , david h. reitze , david b. tanner , james e. mason , phil a. willems , daniel a. shaddock , malcolm b. gray , conor mow - lowry , and david e. mcclelland , `` sensing and control in dual - recycling laser interferometer gravitational - wave detectors , '' _ appl .1244 - 1256 , 2003 .daniel a. shaddock , malcolm b. gray , conor mow - lowry , and david e. mcclelland , `` power - recycled michelson interferometer with resonant sideband extraction , '' _ appl . opt .42 , pp . 1283 - 1295 , 2003 .a. schoof , j. grnert , s. ritter , and a. 
hemmerich , `` reducing the linewidth of a diode laser below 30 hz by stabilization to a reference cavity with a finesse above , '' _ opt .1562 - 1564 , 2001 .t - c zhang , j - ph poizat , p. grelu , j - f roch , p. grangier , f. marin , a. bramati , v. jost , m. d. levenson , and e. giacobino , `` quantum noise of free - running and externally - stabilized laser diodes , '' _ quantum semiclass ._ , vol . 7 , pp . 601 - 613 , 1995
we discuss a phase-sensitive technique for remote interrogation of passive bragg grating fabry-perot resonators . it is based on pound-drever-hall laser frequency locking , using radio-frequency phase modulation sidebands to derive an error signal from the complex optical response , near resonance , of a fabry-perot interferometer . we examine how the modulation frequency and the resonance bandwidth affect this error signal . experimental results demonstrate that , when the laser is locked , this method detects differential phase shifts of the optical carrier relative to its sidebands caused by minute fiber optical path displacements . keywords : fiber fabry-perot , fiber resonator , bragg grating resonator , fiber sensor , strain sensor , nanostrain , picostrain , frequency locking , bragg grating interrogation .
jnosi and gallas analyzed statistics of the alpine river danube daily water level collected , over the period 1901 - 97 , at nagymaros , hungary .the authors found , in the one day logarithmic rate of change of the river water level , similar characteristics to those of company growth ( see stanley et al . ) which shows that the properties seen in company data are present in a wider class of complex systems .bramwell et al . defined a daily water level mean and variance and computed the daily river water level fluctuations .they have shown a data collapse of the danube daily water level fluctuations histogram to the ( reversed ) bramwell - holdsworth - pinton ( bhp ) probability density function ( pdf ) .dahlstedt and jensen described the statistical properties of several river systems .they did a careful study of the size of basin areas influence in the data collapse of the rivers water level and runoff , in particular of river negro at manaus , to the reversed bhp and to the gaussian pdf showing that not all rivers have the same statistical behavior . in this paper , we study , again , the south american river negro daily water level at manaus ( 104 years ) . we compute and present a cyclic fit for the negro daily water level period and the negro daily water level standard deviation . we show that the histogram of the danube water level fluctuations is on top of the reversed bhp pdf , which does not happen for the negro daily water level .we define the _ danube daily water level period _ by where is the number of observed years and is the danube daily water level time series .of february were eliminated . ] in figure [ fig1 ] , we show a fit to the danube daily water level period .the mean period fit , using the first harmonic of the fourier series , is given by the percentage of variance explained by the fit is .we define the _ negro daily water level period _ by where is the number of observed years and is the negro daily water level time series .of february were eliminated . ] in figure [ fig2 ] , we show a fit of the negro daily water level period .the mean period fit , using the first four sub - harmonics of the fourier series , is given by where and are given in table [ tab1 ] .the percentage of variance explained by the fit is .we define the _ danube daily water level standard deviation _ by in figure [ fig3 ] , we show the chronogram of the danube daily water level standard deviation .we define the _ negro daily water level standard deviation _ given by in figure [ fig4 ] , we show a fit of the negro daily water level standard deviation .the fit , using the first ten sub - harmonics of the fourier series , is given by where and are given in table [ tab2 ] .the percentage of variance explained by the fit is . following bramwell et .al , we define the _ danube daily water level fluctuations _ by in figure [ fig5 ] , we show the danube daily water level fluctuations . as shown by bramwell et al . , the ( reversed ) bhp pdf falls on top of the histogram of the danube daily water level fluctuations , in the semi - log scale ( see figure [ fig7 ] ) .we define the _ negro daily water level fluctuations _ by in figure [ fig6 ] , we show the negro daily water level fluctuations . in figure[ fig8 ] , we show the histogram of the negro daily water level fluctuations . 
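a minimal sketch of the quantities defined above is given below : the day-of-year mean (the "period") , a least-squares fit with the mean plus the first few (sub-)harmonics of the annual cycle , the day-of-year standard deviation , and the standardized fluctuations whose histogram is compared with the bhp density . the synthetic stand-in data , the number of harmonics and the normalization convention (zero mean and unit variance , as required for the data collapse) are assumptions made for the demonstration .

```python
import numpy as np

n_years, n_days = 104, 365
rng = np.random.default_rng(0)

# synthetic stand-in for a daily water level record, shape (years, days)
day = np.arange(n_days)
annual = 3.0 * np.sin(2 * np.pi * day / 365.25) + 1.0 * np.sin(4 * np.pi * day / 365.25)
levels = annual + rng.normal(0.0, 0.8, size=(n_years, n_days))

period = levels.mean(axis=0)          # day-of-year mean over all years
sigma  = levels.std(axis=0)           # day-of-year standard deviation

def harmonic_fit(y, n_harmonics):
    """least-squares fit of mean + first n_harmonics (sub-)harmonics of the annual cycle."""
    t = np.arange(len(y))
    cols = [np.ones(len(y))]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(2 * np.pi * k * t / 365.25), np.sin(2 * np.pi * k * t / 365.25)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    fit = A @ coef
    explained = 1.0 - np.var(y - fit) / np.var(y)
    return fit, explained

fit, r2 = harmonic_fit(period, n_harmonics=4)
print("variance explained by the 4-harmonic fit: %.1f%%" % (100 * r2))

# standardized daily fluctuations, as used for the bhp data collapse
fluct = (levels - period) / sigma
fluct = (fluct - fluct.mean()) / fluct.std()
hist, edges = np.histogram(fluct.ravel(), bins=60, density=True)
print("fluctuation histogram: mean %.3f, std %.3f" % (fluct.mean(), fluct.std()))
```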
in figure [ fig9 ] , we show the data collapse of the histogram in the semi - log scale to the bhp pdf .we computed and presented a cyclic fit for the negro daily water level period and for the negro daily water level standard deviation using the first four and ten sub - harmonics , respectively .we computed the histogram of the negro daily water level fluctuations at manaus , and we compared it with the bhp pdf .the histogram of the negro daily water level fluctuations is close to the bhp pdf .we have shown that the histogram of the danube water level fluctuations is on top of the reversed bhp pdf , which does not happen for the river negro daily water level fluctuations .we thank imre jnosi for providing the river danube data and mrs .andrelina santos of the agncia nacional de guas of brazil for providing the river negro data .bramwell , s.t . ,christensen , k. , fortin , j.y ., holdsworth , p.c.w . ,jensen , h.j . ,lise , s. , lpez , j.m . ,nicodemi , m. & sellitto , m .( 2000 ) universal fluctuations in correlated systems , _ phys .lett . _ * 84 * , 37443747 .bramwell , s.t . ,fortin , j.y ., holdsworth , p.c.w . ,peysson , s. , pinton , j.f . ,portelli , b. & sellitto , m .( 2001 ) magnetic fluctuations in the classical xy model : the origin of an exponential tail in a complex system , _ phys . rev _ *e 63 * , 041106 .bramwell , s.t . ,fennell , t. , holdsworth , p.c.w . , & portelli , b. ( 2002 ) universal fluctuations of the danube water level : a link with turbulence , criticality and company growth , _ europhysics letters _ * 57 * , 310 .bramwell , s.t ., holdsworth , p.c.w . , & pinton , j.f .( 1998 ) universality of rare fluctuations in turbulence and critical phenomena , _ nature _ * 396 * , 552554 .stanley , m.h.r . ,amaral , l. a. n. , buildrev , s. v. , havlin , s. , leschhorn , h. maass , p. , salinger , m.a .& stanley , e. ( 1996 ) scaling behaviour in the growth of companies ._ letters to nature _ * 379 * , 29 , 804 - 806 .
we study the daily water levels of the european river danube and the south american river negro . we present a cyclic fit for the negro daily water level period and standard deviation . unexpectedly , we find that the rivers negro and danube are mirror rivers , in the sense that their daily water level fluctuation histograms are close to the bhp and reversed bhp probability density functions , respectively . keywords : river systems , hydrological statistics , data analysis . pacs : 92.40.qh , 07.05.kf
language models play an important role in many applications like speech recognition , machine translation , information retrieval and nature language understanding .traditionally , the back - off n - gram models are the standard approach to language modeling .recently , neural networks have been successfully applied to language modeling and have achieved the state - of - the - art performance in many tasks . in neural network language models ( nnlm ) , the feedforward neural networks ( fnn ) and recurrent neural networks ( rnn ) are two popular architectures .the basic idea of nnlms is to use a projection layer to project discrete words into a continuous space and estimate word conditional probabilities in this space , which may be smoother to better generalize to unseen contexts .fnn language models ( fnn - lm ) usually use a limited history within a fixed - size context window to predict the next word .rnn language models ( rnn - lm ) adopt a time - delayed recursive architecture for the hidden layers to memorize the long - term dependency in language .therefore , it is widely reported that rnn - lms usually outperform fnn - lms in language modeling . while rnns are theoretically powerful , the learning of rnns needs to use the so - called back - propagation through time ( bptt ) due to the internal recurrent feedback cycles .the bptt significantly increases the computational complexity of the learning algorithms and it may cause many problems in learning , such as gradient vanishing and exploding .more recently , some new architectures have been proposed to solve these problems .for example , the long short term memory ( lstm ) rnn is an enhanced architecture to implement the recurrent feedbacks using various learnable gates , and it has obtained promising results on handwriting recognition and sequence modeling . moreover ,the so - called temporal - kernel recurrent neural networks ( tkrnn ) have been proposed to handle the gradient vanishing problem .the main idea of tkrnn is to add direct connections between units in all time steps and every unit is implemented as an efficient leaky integrator , which makes it easier to learn the long - term dependency . along this line, a temporal - kernel model has been successfully used for language modeling in . comparing with rnn - lms , fnn - lmscan be learned in a simpler and more efficient way .however , fnn - lms can not model the long - term dependency in language due to the fixed - size input window . in this paper, we propose a novel encoding method for discrete sequences , named _ fixed - size ordinally - forgetting encoding _ ( fofe ) , which can almost uniquely encode any variable - length word sequence into a fixed - size code . relying on a constantforgetting factor , fofe can model the word order in a sequence based on a simple ordinally - forgetting mechanism , which uses the position of each word in the sequence .both the theoretical analysis and the experimental simulation have shown that fofe can provide _ almost _ unique codes for variable - length word sequences as long as the forgetting factor is properly selected . 
in this work ,we apply fofe to neural network language models , where the fixed - size fofe codes are fed to fnns as input to predict next word , enabling fnn - lms to model long - term dependency in language .experiments on two benchmark tasks , penn treebank corpus ( ptb ) and large text compression benchmark ( ltcb ) , have shown that fofe - based fnn - lms can not only significantly outperform the standard fixed - input fnn - lms but also achieve better performance than the popular rnn - lms with or without using lstm . moreover, our implementation also shows that fofe based fnn - lms can be learned very efficiently on gpus without the complex bptt procedure .assume vocabulary size is , nnlms adopt the 1-of - k encoding vectors as input . in this case , each word in vocabulary is represented as a one - hot vector .the 1-of - k representation is a context independent encoding method .when the 1-of - k representation is used to model a word in a sequence , it can not model its history or context .we propose a simple context - dependent encoding method for any sequence consisting of discrete symbols , namely _ fixed - size ordinally - forgetting encoding _ ( fofe ) . given a sequence of words ( or any discrete symbols ) , , each word is first represented by a 1-of - k representation , from the first word to the end of the sequence , fofe encodes each partial sequence ( history ) based on a simple recursive formula ( with ) as : where denotes the fofe code for the partial sequence up to , and ( ) is a constant forgetting factor to control the influence of the history on the current position .let s take a simple example here , assume we have three symbols in vocabulary , e.g. , _ a _ , _ b _ , _ c _ , whose 1-of - k codes are ] and ] , and that of _ \{abcbc } _ is ] ( incremental ) .the architecture of a fofe based neural network language model ( fofe - fnnlm ) is as shown in figure [ fig : fofe_bigram ] .it is similar to standard bigram fnn - lms except that it uses a fofe code to feed into neural network lm at each time instance .moreover , the fofe can be easily scaled to other n - gram based neural network lms . for example , figure [ fig : fofe_trigram ] is an illustration of fixed - size ordinally forgetting encoding based tri - gram neural network language model .fofe is a simple recursive encoding method but a direct sequential implementation may not be efficient for the parallel computation platform like gpus .here , we will show that the fofe computation can be efficiently implemented as sentence - by - sentence matrix multiplications , which are particularly suitable for the mini - batch based stochastic gradient descent ( sgd ) method running on gpus . given a sentence , , where each word is represented by a 1-of - k code as .the fofe codes for all partial sequences in can be computed based on the following matrix multiplication : \left [ \begin{gathered } { \bf e}_1 \hfill \\ { \bf e}_2 \hfill \\ { \bf e}_3 \hfill \\\hspace{0.1 cm } \vdots \hfill \\ { \bf e}_t \hfill \\\end{gathered } \right ] = { \bf m } { \bf v}\ ] ] where is a matrix arranging all 1-of - k codes of the words in the sentence row by row , and is a -th order lower triangular matrix . 
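before turning to the matrix formulation described next , the recursion above and the worked {a, b, c} example can be reproduced with a few lines of code . a minimal sketch is given below ; the value α = 0.7 is an arbitrary illustrative choice (and a value such as α ≈ 0.618 , a root of α² + α = 1 , is an example of the isolated ambiguous values excluded by the uniqueness argument later in the paper) .

```python
import numpy as np

def fofe(sequence, vocab, alpha):
    """ordinally-forgetting encoding: z_t = alpha * z_{t-1} + e_t, with z_0 = 0."""
    z = np.zeros(len(vocab))
    codes = []
    for w in sequence:
        e = np.zeros(len(vocab))
        e[vocab.index(w)] = 1.0          # 1-of-k vector of the current word
        z = alpha * z + e
        codes.append(z.copy())
    return np.array(codes)               # row t is the fofe code of the partial sequence up to t

vocab, alpha = ['a', 'b', 'c'], 0.7
print("abc   ->", fofe("abc", vocab, alpha)[-1])     # [alpha^2, alpha, 1]
print("abcbc ->", fofe("abcbc", vocab, alpha)[-1])   # [alpha^4, alpha^3 + alpha, alpha^2 + 1]
```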
each row vector of represents a fofe code of the partial sequence up to each position in the sentence .this matrix formulation can be easily extended to a mini - batch consisting of several sentences .assume that a mini - batch is composed of n sequences , , we can compute the fofe codes for all sentences in the mini - batch as follows : \left [ \begin{gathered } { \bf v}_1 \hfill \\ { \bf v}_2\hfill \\ \quad \vdots \hfill \\ { \bf v}_n \hfill \\\end{gathered } \right ] = { \bf \bar{m } } { \bf \bar{v}}\ ] ] when feeding the fofe codes to fnn as shown in figure [ fig : fofe_bigram ] , we can compute the activation signals ( assume is the activation function ) in the first hidden layer for all histories in as follows : where denotes the word embedding matrix that projects the word indices onto a continuous low - dimensional continuous space . asabove , can be done efficiently by looking up the embedding matrix .therefore , for the computational efficiency purpose , we may apply fofe to the word embedding vectors instead of the original high - dimensional one - hot vectors . in the backward pass, we can calculate the gradients with the standard back - propagation ( bp ) algorithm rather than bptt . as a result ,fofe based fnn - lms are the same as the standard fnn - lms in terms of computational complexity in training , which is much more efficient than rnn - lms .we have evaluated the fofe method for nnlms on two benchmark tasks : i ) the penn treebank ( ptb ) corpus of about 1 m words , following the same setup as .the vocabulary size is limited to 10k .the preprocessing method and the way to split data into training / validation / test sets are the same as .ii ) the large text compression benchmark ( ltcb ) . in ltcb, we use the _ enwik9 _dataset , which is composed of the first bytes of enwiki-20060303-pages-articles.xml .we split it into three parts : training ( 153 m ) , validation ( 8.9 m ) and testing ( 8.9 m ) sets .we limit the vocabulary size to 80k for ltcb and replace all out - of - vocabulary words by a token .details of the two datasets can be found in table [ tab : datasets ] ..the size of ptb and ltcb corpora in words . [ cols="^,^,^,^",options="header " , ] [ tab : wiki_summary ] we have further examined the fofe based fnn - lms on a much larger text corpus , i.e. ltcb , which contains articles from wikipedia .we have trained several baseline systems : i ) two n - gram lms ( 3-gram and 5-gram ) using the modified kneser - ney smoothing without count cutoffs ; ii ) several traditional fnn - lms with different model sizes and input context windows ( bigram , trigram , 4-gram and 5-gram ones ) ; iii ) an rnn - lm with one hidden layer of 600 nodes using the toolkit in , in which we have further used a spliced sentence bunch in to speed up the training on gpus .moreover , we have examined four fofe based fnn - lms with various model sizes and input window sizes ( two 1st - order fofe models and two 2nd - order ones ) . for all nnlms , we have used an output layer of the full vocabulary ( 80k words ) . 
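the sentence-wise matrix formulation above , and the remark that fofe can equivalently be applied to the low-dimensional word embeddings instead of the one-hot vectors , both follow from the linearity of the encoding . a minimal check is sketched below : the lower-triangular encoding matrix m holds α^{t−j} in row t and column j (for j ≤ t) , and the vocabulary size , embedding dimension and α are arbitrary illustrative values .

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embed_dim, alpha, T = 50, 8, 0.7, 12
E = rng.normal(size=(vocab_size, embed_dim))      # word embedding matrix (random stand-in)
words = rng.integers(0, vocab_size, size=T)       # a toy sentence of word indices

# lower-triangular fofe matrix: row t holds ..., alpha^2, alpha, 1 on and below the diagonal
M = np.tril(alpha ** np.subtract.outer(np.arange(T), np.arange(T)))
V = np.eye(vocab_size)[words]                     # stacked 1-of-k rows of the sentence

# fofe on the one-hot vectors, then projection through the embedding matrix ...
path_a = (M @ V) @ E
# ... is identical to fofe applied directly to the looked-up embeddings (the cheap formulation)
path_b = M @ E[words]
print("fofe commutes with the embedding lookup:", np.allclose(path_a, path_b))
```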
in these experiments ,we have used an initial learning rate of 0.01 , and a bigger mini - batch of 500 for fnn - lmms and of 256 sentences for the rnn and fofe models .experimental results in table [ tab : wiki_summary ] have shown that the fofe - based fnn - lms can significantly outperform the baseline fnn - lms ( including some larger higher - order models ) and also slightly overtake the popular rnn - based lm , yielding the best result ( perplexity of 107 ) on the test set .in this paper , we propose the fixed - size ordinally - forgetting encoding ( fofe ) method to _ almost _ uniquely encode any variable - length sequence into a fixed - size code . in this work, fofe has been successfully applied to neural network language modeling .next , fofe may be combined with neural networks for other nlp tasks , such as sentence modeling / matching , paraphrase detection , machine translation , question and answer and etc .this work was supported in part by the science and technology development of anhui province , china ( grants no .2014z02006 ) and the fundamental research funds for the central universities from china , as well as an nserc discovery grant from canadian federal govenment .we appreciate dr .barlas oguz at microsoft for his insightful comments and constructive suggestions on theorem [ theorem - fofe - alpha - less - one ] .* theorem 2 : * _ for , given any finite values of and , _ fofe _ is almost unique everywhere for , except only a finite set of countable choices of . _* proof : * when we decode a given fofe code of an unknown sequence ( assume the length of is not more than ) , for any single value in the -th position of the fofe code , there are only two possible cases that may lead to ambiguity in decoding : ( i ) word appears in the current location of ; or ( ii ) word appears multiple times in the history of and the total contribution of them happens to be .for case ( ii ) to happen , the forgetting factor needs to satisfy at least one of the following polynomial equations : where the above coefficients , , are equal to either or .if the word appears in the -th location ahead in the history , we have . otherwise , . we know , each equation in eq.([eq - fofe - unique - formula ] ) is a -th ( or lower ) order polynomial equation .it can have at most real roots for .moreover , since , we can only have a finite set of equations in eq.([eq - fofe - unique - formula ] ) .the total number is not more than .therefore , in total , we can only have a finite number of values that may satisfy at least one equation in eq.([eq - fofe - unique - formula ] ) , i.e. , at most possible roots . among them , only a fraction of these roots lies between . except these countable choices of values , eq.([eq - fofe - unique - formula ] ) never holds for any other values between . as a result ,case ( ii ) never happens in decoding except some isolated points of .this proves that the resultant fofe code is _ almost _ unique between . slava katz .estimation of probabilities from sparse data for the language model component of a speech recognizer ._ ieee transactions on acoustics , speech and signal processing ( assp ) _ , volume 35 , no 3 , pages 400 - 401 .tomas mikolov , stefan kombrink , lukas burget , jan cernocky and sanjeev khudanpur .extensions of recurrent neural network language model . in _ proc . 
of international conference on acoustics , speech and signal processing ( icassp ) _ , pages 5528 - 5531 .yong - zhe shi , wei - qiang zhang , meng cai and jia liu .temporal kernel neural network language model . in _ proc . of international conference on acoustics , speech and signal processing ( icassp)_. pages 8247 - 8251 .
in this paper , we propose a new fixed-size ordinally-forgetting encoding ( fofe ) method , which can almost uniquely encode any variable-length sequence of words into a fixed-size representation . fofe models the word order in a sequence through a simple ordinally-forgetting mechanism based on the positions of the words . in this work , we apply fofe to feedforward neural network language models ( fnn-lms ) . experimental results show that , without using any recurrent feedback , fofe-based fnn-lms can significantly outperform not only the standard fixed-input fnn-lms but also the popular recurrent neural network ( rnn ) lms .
the majority of the information that we have obtained during the last years about the magnetism of the sun and other astrophysical objects is based on the analysis of the polarization of the light .the polarization state of a light beam is typically described , using the stokes formalism , by the so - called stokes vector : where refers to the total intensity of the beam , describes its circular polarization properties while and are used to described linear polarization .when the light beam passes through optical elements or , in general , any medium that modifies its polarization properties , the emergent stokes vector can be related to the input one by the following linear relation : the matrix , , is the so - called mueller matrix and it can be used to unambiguously represent any optically passive medium .it is important to note that these matrices have to obey certain properties so that they have physical meaning .the analysis of polarization of a given beam is usually carried out with the aid of modulation schemes .this is a requisite in the short - wavelength ( optical ) domain because no detectors that present the required polarization sensitivity are still available .furthermore , the majority of the detectors that are used in the optical are only sensitive to the total amount of light or , in other words , to the total intensity given by stokes .this is not the case in the long - wavelength ( microwave ) domain , where it is relatively easy to build detectors that analyze directly the polarization properties of the light beam .modulation schemes have been built to overcome the difficulties when measuring the polarization state of light beams in the short - wavelength domain .any modulation scheme consists of a train of optical devices that produce a known modification of the input beam so that the observed intensity in the detector is a linear combination of the elements of the input stokes vector .carrying out several of these measurements , each one with a different combination of optical devices , it is possible to _ infer _ the input stokes vector using a demodulation procedure . assuming that the mueller matrix of the -th combination of optical devices is given by , the detected intensity in each case is given by : {00}s^\mathrm{in}_0+[m_j]_{01}s^\mathrm{in}_1+[m_j]_{02}s^\mathrm{in}_2+[m_j]_{03}s^\mathrm{in}_3,\ ] ] where we have used the standard notation for the stokes vector , which will be used extensively in the rest of the paper . by putting several measurements together , the modulation scheme can be expressed as the following linear system : where each row of the matrix is built with the first rows of the different mueller matrices of the different combinations of optical elements used in the modulation scheme .therefore , the matrix has dimensions , with the number of measurements .note that , neither the vector is a stokes vector nor the matrix is a mueller matrix .the vector consists of different intensity measurements , while the matrix does not fulfill , in general , the conditions described by and to be considered a mueller matrix .several different modulation techniques have been developed .one of the most widespread methods is to use a temporal modulation with a quarter - wave plate and a linear polarizer , with the fast axis of the quarter - wave plate being set to preselected angles .this approach has the advantage that the different polarizations states are observed on the same pixel . 
however , modulation has to be carried out very fast in order not to be limited by atmospheric fluctuations .another possibility is to use spatial modulation in which the measurements are carried out simultaneously by using a beamsplitter instead of a linear polarizer .the advantage is that atmospheric fluctuations are avoided but potential problems appear because the optical path and the detector response are not the same for all the polarization states .the most advanced polarimeters now use a combination of both approaches in the so - called spatio - temporal modulation . starting from eq .( [ eq : modulation ] ) , the input stokes vector can be obtained by solving the previous linear system of equations , so that : where the matrix is the inverse ( or moore - penrose pseudoinverse in the general case that more than four measurements are carried out to infer the input stokes vector ) of the matrix and has dimensions . in order to measure the four stokes parameters ,the minimum number of measurements is 4 .in such a case , the matrix is unique and can be calculated as the standard inverse of . however , it is also possible to carry out more measurements than stokes parameters . in this case , the linear system of eq .( [ eq : modulation ] ) is overdetermined and , in general , no solution that satisfies simultaneously all the equations exist .recently , it has been shown that it is still possible to choose a solution if we seek for the one that maximizes the efficiency of the modulation - demodulation scheme .such efficiency for each stokes parameter , represented with the vector , is obtained from the demodulation matrix : in this case , it can be shown that the optimal demodulation matrix is given by the moore - penrose pseudoinverse : the pseudoinverse can also be found with the singular value decomposition of the modulation matrix . note that, while in the general case , is only valid when ( is the identity matrix ) .we present a theoretical calculation of how the error propagates in the demodulation process .this is an important issue that has been partially treated in the past , either considering how the uncertainty in the measurement of propagates to the demodulated stokes vector when the modulation matrix is perfectly known or taking into account uncertainties in the knowledge of the modulation matrix , though the full covariance matrix is not obtained .many papers also deal with how error is propagated in non - ideal mueller matrix polarimeters while others consider the optimization of such polarimeters .in practical situations , the modulation matrix has to be measured once the optics and the modulation scheme have been defined . as a consequence ,one expects the elements of the modulation matrix to have some uncertainty produced by the measurement procedure .additionally , uncertainties can also be found when the rotation of the fast axis of the retarders or of the linear polarizers has some repeatability problems . in this case , although the average modulation matrix can be known with great precision , random deviations from the average modulation matrix can occur .these random deviations induce a modification on the modulation scheme that directly affects the inferred polarization properties of the input light beam . for these reasons ,it is fundamental to characterize the error propagation through the demodulation process and how they affect the measured stokes profiles . 
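as a concrete illustration of the modulation/demodulation formalism , the sketch below builds a modulation matrix o for a simple temporal scheme (a quarter-wave plate stepped through a set of angles , followed by a linear polarizer , using the standard mueller matrices of ideal elements in one common sign convention) , computes the optimal demodulation matrix d = (o^t o)^{-1} o^t , and evaluates the polarimetric efficiencies . the efficiency expression used here , ε_i = (n Σ_j d_ij²)^{-1/2} , is the standard one from the polarimetric-efficiency literature ; the retarder angles and the input stokes vector are arbitrary illustrative choices , not those of any particular instrument .

```python
import numpy as np

def retarder(theta, delta):
    """mueller matrix of an ideal linear retarder, fast axis at angle theta, retardance delta."""
    c, s, cd, sd = np.cos(2 * theta), np.sin(2 * theta), np.cos(delta), np.sin(delta)
    return np.array([[1, 0, 0, 0],
                     [0, c**2 + s**2 * cd, c * s * (1 - cd), -s * sd],
                     [0, c * s * (1 - cd), s**2 + c**2 * cd,  c * sd],
                     [0, s * sd, -c * sd, cd]])

polarizer = 0.5 * np.array([[1, 1, 0, 0],
                            [1, 1, 0, 0],
                            [0, 0, 0, 0],
                            [0, 0, 0, 0]])        # ideal linear polarizer, axis at 0 degrees

# temporal modulation: quarter-wave plate stepped through a set of angles, then the polarizer
angles = np.deg2rad([0.0, 22.5, 45.0, 67.5, 112.5, 157.5])
O = np.array([(polarizer @ retarder(a, np.pi / 2))[0] for a in angles])   # first rows -> n x 4

D = np.linalg.inv(O.T @ O) @ O.T          # optimal demodulation (moore-penrose pseudoinverse)
n = O.shape[0]
eff = 1.0 / np.sqrt(n * np.sum(D**2, axis=1))   # polarimetric efficiency of each stokes parameter

s_in = np.array([1.0, 0.01, 0.01, 0.1])   # example input stokes vector (weak polarization)
intensities = O @ s_in                    # what the detector records
print("recovered stokes vector   :", np.round(D @ intensities, 6))
print("efficiencies (i, q, u, v) :", np.round(eff, 3))
```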
in general , since the matrix inversion ( or pseudoinverse ) is a nonlinear process , one should expect to find a full covariance matrix for the elements of the matrix , with non - zero non - diagonal elements .these non - diagonal elements account for the statistical correlations that appear between elements of the demodulation matrix after the inversion process .more importantly , a full covariance matrix has to be expected even in the case that the elements of the modulation matrix are not correlated ( i.e. , , with the kronecker delta ) .note that a diagonal covariance matrix for the matrix comes out when the measurements of the elements of the modulation matrix are statistically independent , something that can be assumed in some cases if the measurements of the elements of the matrix are carried out with proper calibration .however , this is an issue that has to be carefully analyzed for each case .the full covariance matrix for the demodulation matrix can be obtained analytically following the standard error propagation formulae .the starting point is to consider that elements of the demodulation matrix are known nonlinear functions of the elements of the modulation matrix : the most general error propagation formula for such operation reads : this covariance matrix gives information about the variances of the elements of the demodulation matrix ( diagonal elements ) , as well as covariances between those elements ( non - diagonal elements ) .note that , for a matrix , the covariance matrix is . if the covariance matrix of the modulation matrix is diagonal ( i.e. , , with the kronecker delta ) , the previous equation simplifies to : as already presented above , this turns out to be a good approximation if the elements of the modulation matrix are carefully measured .however , our approach is general and can cope with non - diagonal covariance matrices as will be shown below .the derivatives that appear in eqs .( [ eq : covarianced_general ] ) and ( [ eq : covarianced_diagonal ] ) can be calculated , in the most general case , starting from eq .( [ eq : optimal_demodulation ] ) .however , we distinguish the cases of a square modulation matrix from the general case , because the analytical procedure can be largely simplified . in any case, we will show that the general expressions are equivalent to those of the particular case of a square matrix .although it is possible to calculate directly the derivatives from eq .( [ eq : optimal_demodulation ] ) , it is more advantageous to use the fact that the matrix product between and commute , so that . 
in this case , it is easy to show that : therefore , the full covariance matrix of the matrix elements of the demodulation matrix can be calculated by substituting eq .( [ eq : derivative_square ] ) into eq .( [ eq : covarianced_general ] ) : in the particular case of a diagonal covariance matrix for , the previous equation simplifies to : when is larger than four , it is not possible to follow the previous approach because and do not commute .however , inserting the intermediate matrix , using the fact that and that the matrix is always invertible , the derivative can be expressed , after some algebra , as : the derivative of the elements of the matrix with respect to the elements of can be calculated using the chain rule : the definition of the matrix allows us to obtain the following derivative easily : the derivative of the elements of with respect to the elements of can also be calculated easily from eq .( [ eq : derivative_square ] ) because is a square matrix , that commutes with its inverse .therefore : substituting eq .( [ eq : der_ainvfinal ] ) into eq .( [ eq : deriv_d_o ] ) and after some algebra , we end up with the following expression for the derivative : this expression reduces to eq .( [ eq : derivative_square ] ) if is square because the product of and commutes , so that : after substitution in eq .( [ eq : derivative_nonsquare ] ) , we recover eq .( [ eq : derivative_square ] ) .the final expression for the covariance of the demodulation matrix is obtained by plugging eq .( [ eq : derivative_nonsquare ] ) into eq .( [ eq : covarianced_general ] ) [ or eq . ( [ eq : covarianced_diagonal ] ) ] : \nonumber \\ & \times & \left [ a_{ak}^{-1 } \delta_{bk } - a_{ak}^{-1 } \sum_m d_{mb } o_{km } - d_{ak } d_{lb } \right].\end{aligned}\ ] ] when both the modulation matrix and the measured intensities present uncertainties , the resulting uncertainty in the demodulated stokes vector has contribution coming from both origins . using standard error propagation formulae applied to eq .( [ eq : demodulation ] ) , the covariance matrix can be written as : the first contribution takes into account the uncertainty in the knowledge of the modulation matrix while the second contribution takes into account the uncertainty in the measurement of the intensities arriving to the detector . in the field of optimization of polarimeters , a suitable norm of this very last term ( often neglecting the non - diagonal covariances in the vector that we include here ) is the chosen one to measure the efficiency of a polarimeter .particularizing to our problem and calculating the partial derivatives , we end up with : the covariance matrix is typically assumed to be diagonal , since no statistical dependence is assumed between consecutive intensity measurements in the detector , so that .since this is not generally the case for the quantities , a non - zero ( in general ) covariance between different stokes profiles appears as a consequence of some degree of correlation in the demodulation process . in principle , it is possible to diagonalize the covariance matrix given by eq .( [ eq : covariance_intensity ] ) to obtain its principal components .such principal components define the directions along which the correlation between different stokes parameters is minimized . 
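the second contribution in the covariance expression above , the one coming from photon and detector noise in the measured intensities , reduces to the familiar form d σ_i d^t when the intensity covariance is diagonal . the sketch below propagates an assumed uniform intensity variance through a demodulation matrix and diagonalizes the resulting stokes covariance to obtain the principal components mentioned in the text ; the modulation matrix is the same hypothetical rotating-waveplate scheme used earlier , not that of any particular instrument , and the modulation-matrix uncertainty term is deliberately left out here .

```python
import numpy as np

def demodulation(O):
    return np.linalg.inv(O.T @ O) @ O.T

def qwp_polarimeter(angles_deg):
    """hypothetical modulation matrix: quarter-wave plate at given angles + linear polarizer."""
    rows = []
    for a in np.deg2rad(angles_deg):
        c, s = np.cos(2 * a), np.sin(2 * a)
        rows.append(0.5 * np.array([1.0, c**2, c * s, -s]))   # first row of polarizer @ qwp
    return np.array(rows)

O = qwp_polarimeter([0.0, 22.5, 45.0, 67.5, 112.5, 157.5])
D = demodulation(O)

sigma_I = 1e-3                              # assumed rms noise on each intensity measurement
C_I = sigma_I**2 * np.eye(O.shape[0])       # diagonal intensity covariance
C_S = D @ C_I @ D.T                         # covariance of the demodulated stokes vector

print("stokes covariance (x 1e-6):")
print(np.round(C_S * 1e6, 3))

# principal components: directions in stokes space with statistically uncorrelated errors
w, v = np.linalg.eigh(C_S)
print("principal variances (x 1e-6):", np.round(w * 1e6, 3))
print("principal directions (rows):")
print(np.round(v.T, 3))
```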
since the principal components define a new reference system , it is possible to rotate the original reference system in which the four - dimensional stokes vector is defined .this rotation minimizes in a statistical sense the cross - talk .this could be of interest when synthetic calculations have to be compared with polarimetric observations . in this case , in order to minimize the effect of cross - talk , it could be advantageous to compare the observations with projections along the eigenvectors of the synthetic calculations .a quantity that is also affected by uncertainties in the knowledge of is the efficiency defined in eq .( [ eq : efficiency ] ) . applying the error propagation formula ,we get : if we assume that the covariance matrix of the elements of the modulation matrix is diagonal , we get : the derivatives can be calculated using the definition of the efficiency and the chain rule : where plugging this expression into eq .( [ eq : derivative_epsilon ] ) , we can simplify it to read : the derivative is obtained from eq .( [ eq : derivative_nonsquare ] ) in the general case and from eq .( [ eq : derivative_square ] ) in the case of a square modulation matrix .according to the previous results , when the covariance matrix of the modulation matrix is diagonal and the variance is the same for all the elements , all the covariance matrices calculated in this section are proportional to the variance of the elements of . as a consequence ,reducing an order of magnitude the uncertainty in the elements of the modulation matrix reduces an order of magnitude the uncertainty in the demodulation matrix and in the inferred stokes vector .the previous analytical approach assumes that the error propagates normally .this is typically a good approximation when the modulation matrix is far from singular . in other words ,the rows of the modulation matrix have to be as linearly independent as possible .however , since the inversion is a nonlinear process , it would be possible to obtain distributions in the demodulation matrix ( and as a consequence , in the inferred stokes parameters ) that are far from gaussians .in order to verify this issue and also to test that the previous analytical approach gives the correct answer , we have carried out a monte carlo experiment . for simplicity , we assume that the measurement uncertainty is the same for all the matrix elements of the modulation matrix and that it is characterized by the variance , so that the full variance matrix is given by : this selection makes the presentation of the results easier , but the previous approach is general and can cope with non - diagonal covariance matrices .the monte carlo experiment , given an initial modulation matrix , generates instances of such a matrix with added uncertainties : where is a normally distributed random constant with zero mean and unit variance . for each modulation matrix, we obtain the optimal demodulation matrix following eq .( [ eq : optimal_demodulation ] ) .finally , the statistical properties of the demodulation matrix are characterized by the full covariance matrix , given by : - \mathrm{e}[d_{ij } ] \mathrm{e}[d_{kl}],\ ] ] where ] .the previous analytical approach is applied to several modulation matrices obtained from the literature and that represent different typical modulation schemes .we analyze the uncertainties in the demodulation matrix and in the inferred stokes parameters using the error propagation formulation presented in the previous section . 
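the monte carlo experiment described above can be written in a few lines : every element of the modulation matrix is perturbed with independent gaussian noise of standard deviation 10^-3 , the demodulation matrix is recomputed for each realization , and the sample covariance of its elements , and of the inferred stokes vector for a fixed input beam , is accumulated . the sketch below uses the same hypothetical rotating-waveplate modulation matrix as before , an input stokes vector (1, 0.01, 0.01, 0.1) mirroring the weak-polarization example used later in the text , and an arbitrary number of trials .

```python
import numpy as np

rng = np.random.default_rng(1)

def demodulation(O):
    return np.linalg.inv(O.T @ O) @ O.T

# hypothetical 6 x 4 modulation matrix (rotating quarter-wave plate + polarizer)
angles = np.deg2rad([0.0, 22.5, 45.0, 67.5, 112.5, 157.5])
O0 = np.array([0.5 * np.array([1.0, np.cos(2*a)**2, np.cos(2*a)*np.sin(2*a), -np.sin(2*a)])
               for a in angles])

sigma_o = 1e-3                  # rms uncertainty of each modulation-matrix element
s_in = np.array([1.0, 0.01, 0.01, 0.1])
I_obs = O0 @ s_in               # noiseless intensities measured with the true modulation

n_trials = 100000               # arbitrary; large enough for stable sample statistics
d_samples = np.empty((n_trials,) + demodulation(O0).shape)
s_samples = np.empty((n_trials, 4))
for t in range(n_trials):
    O = O0 + sigma_o * rng.standard_normal(O0.shape)
    D = demodulation(O)
    d_samples[t] = D
    s_samples[t] = D @ I_obs    # stokes vector inferred with the (slightly wrong) demodulation

print("std of demodulation-matrix elements:")
print(np.round(d_samples.std(axis=0), 5))
print("covariance of inferred stokes vector (x 1e-6):")
print(np.round(np.cov(s_samples, rowvar=False) * 1e6, 3))
print("mean inferred stokes vector :", np.round(s_samples.mean(axis=0), 5))
```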
for illustration purposes ,the results are also accompanied in some cases with plots obtained using the monte carlo approach .one of the reasons for that is to demonstrate whether the resulting distributions can be correctly assumed to be gaussians .the monte carlo results have been obtained using .we assume that the variance that characterizes the uncertainty in the measurement of the modulation matrix is ( the standard deviation is equal to ) , which is a reasonable value for a standard calibration of the modulation .the first example is representative of a square modulation matrix in which four measurements are carried out to infer the four stokes parameters .the matrix is that used by the tenerife infrared polarimeter ( tip ; ) as presented in .although this matrix is not the experimental one for tip , its presents the desired structure and can be used to gain some insight on the properties of the error propagation .the chosen modulation matrix is : \ ] ] the optimal demodulation matrix , obtained from the application of eq .( [ eq : optimal_demodulation ] ) , is just the inverse of : \ ] ] as shown in fig .[ fig : d00_tip ] , the distribution of the elements of the demodulation matrix are gaussians whose standard deviation can be obtained from the diagonal elements of the matrix defined in eq .( [ eq : covariance_square ] ) : \times 10^{-3}.\ ] ] although we only present the diagonal elements of the covariance matrix , note that the covariance matrix is a full matrix in which all the elements are different from zero . as a direct consequence of the uncertainties in the modulation matrix, the efficiency of the proposed scheme presents uncertainties .they can be calculated using eq .( [ eq : covariance_efficiency_diagonal ] ) , so that the average value and their standard deviations are : where non - diagonal elements in are non - zero , but typically much smaller than the elements in the diagonal . now we investigate the propagation of uncertainties to the inferred stokes parameters . to this end , we choose an initial light beam defined by .such stokes vector is representative of what one would observe in relatively low magnetic flux regions of the solar surface .the application of eqs .( [ eq : covariance_intensity ] ) and ( [ eq : covariance_square ] ) gives the following covariance matrix : \times 10^{-6 } \label{eq : cov_stokes_tip}\ ] ] this result has been obtained assuming that there is no uncertainty in the measurement of the intensity arriving to the detector , so that . in so doing , we isolate the effect of the modulation on the inferred stokes vectors .the results of the monte carlo simulation are shown in fig .[ fig : inferred_stokes_tip ] , demonstrating that the values are quasi - normally distributed around the original value with a dispersion that is given in each panel of the plot . 
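The propagation to the inferred Stokes vector can be reproduced in the same Monte Carlo spirit: the intensities are generated with the true matrix, while the demodulation uses the perturbed (calibrated) one, so the effect of the modulation uncertainty is isolated from photon noise. The TIP-like matrix and the input Stokes vector below are hypothetical stand-ins for the values quoted in the text.

```python
import numpy as np

def stokes_covariance_mc(O_true, S_true, sigma, n_trials=20_000, seed=1):
    """Monte Carlo covariance of the inferred Stokes vector when only the
    modulation matrix is uncertain (noise-free intensities)."""
    rng = np.random.default_rng(seed)
    I_meas = O_true @ S_true                        # exact intensities produced by the true matrix
    S_hat = np.empty((n_trials, S_true.size))
    for t in range(n_trials):
        O_cal = O_true + sigma * rng.standard_normal(O_true.shape)   # imperfectly calibrated matrix
        D = np.linalg.pinv(O_cal)                   # demodulation built from the calibrated matrix
        S_hat[t] = D @ I_meas
    return S_hat.mean(axis=0), np.cov(S_hat, rowvar=False)

if __name__ == "__main__":
    # hypothetical TIP-like square modulation matrix and a weak-polarization input beam
    O_tip = 0.5 * np.array([[1.0,  1.0,  1.0,  1.0],
                            [1.0,  1.0, -1.0, -1.0],
                            [1.0, -1.0,  1.0, -1.0],
                            [1.0, -1.0, -1.0,  1.0]])
    S_in = np.array([1.0, 1e-2, 1e-2, 1e-2])        # (I, Q, U, V), illustrative values
    mean_S, cov_S = stokes_covariance_mc(O_tip, S_in, sigma=1e-2)
    print(mean_S)
    print(cov_S)
```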
note that the standard deviations are large , almost of the order of the standard deviation of the precision in the measurement of the modulation matrix .note also that the largest value of are on the diagonal of the matrix , but sizable non - diagonal elements also appear , indicating a certain degree of correlation between the inferred stokes parameters .for instance , the value indicates that when the inferred stokes ( ) increases , the inferred stokes ( ) tends to systematically decrease .this can be understood as a cross - talk between all the stokes parameters induced by the demodulation process due to the special structure of the modulation matrix .the two - dimensional distribution of the inferred stokes and is shown in the left panel of fig .[ fig : correlationiv ] . in specific cases ,part of this cross - talk can be corrected for easily based on physical arguments .this happens , for instance , when observing the stokes profiles induced by the zeeman effect in magnetized regions of the solar surface with the aid of spectropolarimeters .in such a case , one can assume that stokes , and are zero away from the spectral line and carry out a correction of the cross - talk from stokes to stokes , and .the diagonalization of the matrix gives the following eigenvector matrix ordered in row format : .\label{eq : eigenvectors_tip}\ ] ] each row represents the linear combination of stokes parameters that one infers due to the induced cross - talk contamination . note that the weight of one of the stokes parameters is larger in each row , but the contamination from the other parameters is still large , a consequence of the chosen modulation matrix .we present here results for a typical non - square modulation matrix .we have chosen the one belonging to the advanced stokes polarimeter ( asp ; ) as presented in . as it happens for the tip matrix, this is probably not the exact matrix used in the polarimeter but is representative of what happens in a scheme in which more than four measurements are used to obtain the four stokes parameters .the modulation matrix we choose is : .\ ] ] after calculating the demodulation matrix and its covariance matrix , we can also calculate the diagonal of the covariance matrix for the efficiency which , transformed into standard deviations , give : and the full covariance matrix for the inferred stokes parameters using as the input stokes vector : \times 10^{-6}. \label{eq : cov_stokes_asp}\ ] ] note that the elements of the diagonal of this covariance matrix are smaller than for the case of only four measurements ( except for the case of ) , probably induced by the larger number of measurements carried out .furthermore , it is important to point out that this modulation scheme induces no correlations between ( stokes ) and ( stokes ) and any other stokes parameter . 
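The minimal-cross-talk basis mentioned above is obtained by diagonalizing the covariance matrix of the inferred Stokes parameters. A short helper for this step, applicable to any of the covariance matrices computed earlier, might look as follows.

```python
import numpy as np

def crosstalk_eigenbasis(cov_S):
    """Diagonalize the (symmetric) covariance matrix of the inferred Stokes parameters.

    The returned rows are the orthonormal combinations of (I, Q, U, V) along which
    the demodulation-induced correlations vanish; comparing synthetic profiles with
    observations along these directions minimizes the statistical cross-talk in the
    sense discussed in the text.
    """
    eigval, eigvec = np.linalg.eigh(cov_S)
    order = np.argsort(eigval)[::-1]          # sort by decreasing variance
    return eigval[order], eigvec[:, order].T  # one eigenvector per row

# usage with the Monte Carlo covariance from the previous sketch:
#   variances, rows = crosstalk_eigenbasis(cov_S)
```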
on the contrary, there is a large correlation between stokes and stokes , something that can be clearly seen in the right panel of fig .[ fig : correlationiv ] .the diagonalization of the previous matrix gives the following eigenvector matrix ( in row order ) : .\label{eq : eigenvectors_asp}\ ] ] the demodulation process has induced a cross - talk between stokes and which can be represented by a rotation of .it is instructive to present results for the following ideal modulation matrix : .\ ] ] the efficiency in this case is the maximum that can be reached , while the diagonal of the covariance matrix for the efficiency given as standard deviations is : and the full covariance matrix for the inferred stokes parameters using as the input stokes vector is : \times 10^{-6}. \label{eq : cov_stokes_ideal}\ ] ] the covariance matrix is diagonal with equal uncertainties in stokes , and .these values are smaller than for the asp example except for stokes , where the uncertainty for the ideal modulation matrix is slightly larger .more important is the fact that , since the covariance matrix is diagonal , no correlation is found between the inferred stokes parameters , so that no residual cross - talk induced by the modulation is induced in the demodulation .we have presented analytical expressions for the calculation of the propagation of errors in the demodulation process when the modulation matrix is not known with infinite precision .this can happen when the modulation system has not been calibrated with enough precision or when the repeatability of the modulation system induces uncertainties in the modulation scheme .the formulae that we have presented allows polarimeter designers to calculate the errors in the demodulation matrix , the efficiency of the modulation scheme and the inferred stokes parameters .they are simple to calculate and require only the knowledge of the modulation matrix , together with its covariance matrix .we have pointed out the fact that , in general , since matrix inversion ( or moore - penrose pseudoinversion ) is a nonlinear operation , non - zero non - diagonal covariances have to be expected in the demodulation matrix even if such non - diagonal correlations are not present in the modulation matrix .this has the important consequence of generating spurious correlations ( cross - talk ) between the inferred stokes parameters .we calculate the induced cross - talk by diagonalizing the covariance matrix .the matrix of eigenvectors represent the reference system in which the cross - talk is minimized .we have illustrated these points with three different modulation matrices representing three different ways of measuring the polarization state of light beams .each method has its own advantages and disadvantages .it is up to the polarimeter designer to choose the modulation scheme depending on the desired precision .we hope that the formulae present in this paper are of interest for improving the quality of modulation based polarimeters .routines in fortran 90 and idl for the calculation of the formulae presented in the paper can be obtained after an e - mail request to the authors of this paper .finantial support by the spanish ministry of education and science through project aya2007 - 63881 is gratefully acknowledged .v. martnez pillet , m. collados , j. snchez almeida , v. gonzlez , a. cruz - lopez , a. manescau , e. joven , e. paez , j. diaz , o. feeney , v. snchez , g. scharmer , and d. 
soltau , `` lpsp & tip : full stokes polarimeters for the canary islands observatories , '' in `` high resolution solar physics : theory , observations , and techniques , '' , vol .183 of _ astronomical society of the pacific conference series _ , t. r. rimmele , k. s. balasubramaniam , and r. r. radick , eds . ( 1999 ) ,183 of _ astronomical society of the pacific conference series _ , p. 264 .m. collados , `` high resolution spectropolarimetry and magnetography , '' in `` third advances in solar physics euroconference : magnetic fields and oscillations , '' , vol .184 of _ astronomical society of the pacific conference series _ ,b. schmieder , a. hofmann , and j. staude , eds .( 1999 ) , vol .184 of _ astronomical society of the pacific conference series _ , p.3 . e. l. dereniak , d. s. sabatke , m. r. locke , m. r. descour , w. c. sweatt , j. p. garca , d. sass , t. hamilton , s. a. kemme , and g. s. phipps , `` design and optimization of a complete stokes polarimeter for the mwir , '' osti ( 2000 ) .d. f. elmore , b. w. lites , s. tomczyk , a. p. skumanich , r. b. dunn , j. a. schuenke , k. v. streander , t. w. leach , c. w. chambellan , and h. k. hull , `` the advanced stokes polarimeter - a new instrument for solar magnetic field research , '' in `` polarization analysis and measurement , '' , d. h. goldstein and r. a. chipman , eds .( 1992 ) , p. 22 .
the polarization analysis of light is typically carried out using modulation schemes. light of unknown polarization state is passed through a set of known modulation optics and a detector measures the total intensity transmitted by the system. the modulation optics is modified several times and, with the aid of these measurements, the unknown polarization state of the light can be inferred. how to find the optimal demodulation process has been investigated in the past. however, since the modulation matrix has to be measured for a given instrument and the optical elements can present repeatability problems, some uncertainty is present in the elements of the modulation matrix and/or covariances between these elements. we analyze this issue in detail, presenting analytical formulae for the covariance matrix produced by the propagation of such uncertainties onto the demodulation matrix, the inferred stokes parameters and the efficiency of the modulation process. we demonstrate that, even if the covariance matrix of the modulation matrix is diagonal, the covariance matrix of the demodulation matrix is, in general, non-diagonal because matrix inversion is a nonlinear operation. this propagates through the demodulation process and induces correlations in the inferred stokes parameters.
the physical boundary between a human being and his or her environment is the skin , but in the space of behaviours the same boundary is less strict . taking decisions , we are not completely selfish ; we are to some extent bound by the social norms .the way of enforcing norms varies between a direct control and a deep internalization ; in the latter case we treat our conformity to norms in the same way as our payoff .norms create the society , where we are formed , and norms are modified by the society members .as it was formulated by a leading polish psychiatrist antoni kpiski , to decide where to put limits of our own rebel is one of most difficult problem in human life . solving this problem in our individual scale emerges in the social scale as a time evolution of norms . to search for laws which rulethis process is a worthwhile challenge for the agent - based simulations .+ a serious advance in this path was done by robert axelrod who formulated the norm game : an algorithm to simulate the conditions of persistence and fall of a social norm .the simulations done by axelrod have been questioned , but his paper has been cited hundreds times and it triggered a cascade of research ; for a recent review of simulations of norms see .further , the subject of norms overlaps with the theory of cooperation ; to cooperate is an example of a social norm .an overview on the latter might provide insight into current trends ; still the research in this field seems to be at its intensively rising stage .as norms are beliefs , there is also some overlap with the simulations of opinion dynamics ; for a review of sociophysical simulations on this matter see .+ direct motivation of this research comes from statistical data on norm breaking .perhaps most striking change we have seen deals with the data on divorces in portugal during the carnival revolution .there , the number of divorces increased from 777 in 1974 to 7773 in 1977 .the plots on crime in countries in central europe are more conventional . in fig .1 we show the data on germany , poland , czech republic , austria and hungary . in accordance with warnings by eurostat ,our aim is not to compare the amount of crimes in these countries , but rather to show the changes in some of them .note that opinion shifts were classified into continuous and abrupt by michard and bouchaud in 2005 within a theory of imitation . in the axelrod model ,the driving social mechanism is punishment ; the interaction inhibits the change rather than releases it . + in ng agents defect a norm with a given probability .once an agent defects , other agents punish the defector , also with some probability .the defection is gratified with some payoff , but those who are punished lose . also , those who punish incur some cost .axelrod considered also a metagame : the possibility of punishment those who do not punish .the overall success of an agent was measured by his income , but the agent himself - represented by his strategy - was not modified . after some number of games , the genetic algoritm was used to select strategies which yielded the best income .as indicated in , this kind of modeling has an advantage to deal with the dynamics of the process .we should add that neumann criticizes the approach of axelrod for disregarding the functional character of norms . 
+ recently we developed a new realization of the axelrod model with two new ingredients .first is that the model is freed from the payoff parameters ; what is left is just the influence of agents decisions on decisions of other agents .further , the probabilities of decisions of individual agents ( to defect or not , to punish or not ) are not constant , but they are dynamically modified in each game they play .second modification is less technical : once an agent decides to defect , his ability to punish in his future games vanishes , and his probability to defect ( boldness ) in future games is kept one until he is punished ; then it is multiplied by .then , the constant describes the severity of the punishment . on the contrary ,once an agent punishes , he will never defect the norm , and his probability to punish ( vengeance ) is set to , where is due to the punishment cost .this vengeance can be further reduced if the agent punishes also in his future games . in this way, a kind of social labeling takes place : first decision is irreversible , and the whole process can be seen as a social contagion . as a consequence , a sharp transition of the final boldness as dependent of the initial boldness is found .the threshold value varies with the model parameters .these results depend only quantitatively on the assumed topology of the social network , which determines the probability distribution of the number of punishers , i.e. of the node degree .+ the aim of the present paper is twofold .first , we are going to investigate the above mentioned sharp transition in the space of parameters . on the contrary to , here we are going to assume a given probability distribution of the initial boldness between agents .this modification makes the calculation closer to a social reality , where different agents present different willingnesses to break the norm and to punish .we should precise that in our model the parameters represent not the boldness and the vengeance , but rather the modifications of agents boldness and vengeance due to the decisions of other agents . then, our model parameters describe the interactions between agents and not their actual states .our goal here is to calculate the critical line on the plane between the final state all defect and nobody is punished and the final state nobody defects. this critical line is a kind of generalization of the critical concentration , calculated in the problems of directed percolation .our second aim is to investigate the character of the transition when some biased modifications of the structure of the social network .we use two kinds of these modifications : _i ) _ agents ( guards ) at some amount of nodes always punish and never defect , _ ii ) _ agents ( sinners ) at some amount of nodes always defects and never punish . + in our model , agents are placed at nodes of the directed erds - renyi network .this network is selected for its generic topology .there is much work on the structures of social networks , mostly done by sociologists ; for an early list of references see . with the outburst of scale - free networksa common opinion appeared that social networks are scale - free .some of them can indeed be classified as scale - free , in particular those where direct face - to - face contact is not needed ; as citation networks or telephone - calls networks .still , the actual structure varies from one social network to another , and often the network is simply too small to be classified to any type . 
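Independently of the particular network topology, a minimal agent-based sketch of the game step described above (and detailed in the next section) is given below. The values of alpha and beta, the network size and the mean in-degree are free parameters of the sketch; the choice of the initial vengeance as one minus the initial boldness is an assumption, since the exact relation is not reproduced in this extract.

```python
import numpy as np

def norm_game(n_agents=1000, mean_in_degree=5, alpha=0.8, beta=0.5,
              n_steps=200_000, seed=0):
    """One possible implementation of the modified norm game sketched above.

    alpha multiplies the vengeance of an agent each time he punishes (punishment cost),
    beta multiplies the boldness of an agent each time he is punished (punishment severity).
    The directed Erdos-Renyi network is encoded as a list of potential punishers
    (in-neighbours) of every agent.
    """
    rng = np.random.default_rng(seed)
    p_link = mean_in_degree / (n_agents - 1)
    adj = rng.random((n_agents, n_agents)) < p_link        # adj[j, i]: j can punish i
    np.fill_diagonal(adj, False)
    punishers_of = [np.flatnonzero(adj[:, i]) for i in range(n_agents)]

    boldness = rng.random(n_agents)                        # homogeneous initial boldness
    vengeance = 1.0 - boldness                             # assumed initial vengeance

    for _ in range(n_steps):
        i = rng.integers(n_agents)
        if rng.random() >= boldness[i]:
            continue                                       # agent i obeys the norm this time
        boldness[i], vengeance[i] = 1.0, 0.0               # i defects and is labelled as a defector
        for j in rng.permutation(punishers_of[i]):
            if rng.random() < vengeance[j]:
                boldness[i] *= beta                        # i is punished (by one neighbour only)
                vengeance[j] *= alpha                      # punishing is costly
                boldness[j] = 0.0                          # a punisher never defects afterwards
                break
            else:
                boldness[j], vengeance[j] = 1.0, 0.0       # refusing to punish also labels j

    return boldness.mean()                                 # final mean boldness

# scanning alpha and beta with this routine traces the critical line between the
# "all defect" and "nobody defects" phases discussed below
```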
to end ,we have checked that the investigated threshold appears also in the scale - free growing networks , except the case when the direction of links is determined by the sequence of attaching new nodes .last but not least , here we are not going to discuss the role of hubs , which could complicate the results .+ the paper is organized as follows . in the next sectionthe model assumptions and the details of the calculations are listed . in section 3we describe the numerical results .these are two : the critical line on the plane of the parameters , and the transition dependence of the amount of guards / sinners .section 4 is devoted to discussion of the results in the context of a recent classification of contagion processes , and of some statistical data on dynamics of crime , presented in fig .the network size is nodes , the mean number of in - going links ( punishers ) is , and their distribution is poissonian . on the contrary to our former calculations , here we assume a homogeneous distribution of the initial probability to defect the norm , i.e. the initial boldness , where is the node index .as a rule , the initial vengeance , i.e. the initial probability of punishing , is .this condition is not maintained during the simulation ; however , .third option is to obey the norm and not punish , with the probability .+ at each time step , an agent is selected and he breaks the norm with the probability equal to his boldness . if actually he does , his boldness is set to 1 and his vengeance , i.e. the probability of punishing - is set to 0. then his neighbours are asked , one by one , if they punish .if one of them punishes , the boldness of is multiplied by a factor , and the vengeance of the punisher is multiplied by .the defector can be punished only by one neighbour . on the contrary , if a neighbour does not punish , his boldness is set to one and his vengeance is set to zero . in this way, the process is accompanied by kind of social labeling : those who break the norm and those who refrain from punishing can not punish in their future games , and those who punish can not break the norm . + as a rule , we calculate the values of the parameters where the threshold appears , i.e. the final state changes from the bold state all defect to the vengeant state all punish. additionally , as a new variant of the game , some amount of sites of the network is selected randomly .agents at these nodes got special roles of guards or sinners. if an agent is a guard , he always punishes and never defects ; sinners do the opposite , i.e. always defect and never punish .the value and character of the threshold is observed against the ratio of the number of those special nodes to the whole population .all these special nodes are either all guards or all sinners.calculating the final boldness as dependent on and we observe a sharp change of the result , as in fig .3 a , at some threshold values of and .this means that we got a critical line in the plane of the parameters - see fig .4 . this line divides the plane into two areas ; the plane can be treated as a phase diagram , then we can talk about two phases .abobe the line , we have a bold phase , where is large ; there punishment costs too much and is not effective . 
below the line, we have a vengeant phase where the cost of punishment is low .then , everybody punishes and there is no interest in defection .+ having added a small amount of sinners or guards to the system we observe that the threshold changes differently in these two cases .basically the character of the threshold remains the same , just the threshold value is more susceptible to the admixture of guards than to the one of sinners. this can be seen at the positions of two additional curves in fig .while adding of five percent of sinners apparently produces no effect , twice smaller admixture of guards shifts the critical line upwards , reducing the area of the bold phase .when the number of modified nodes increases more , the character of the plot gradually changes from an abrupt to a more continuous one .examples of these curves are shown in fig .3 b. this change prevents us to investigate the critical line for higher concentration of special nodes ; the transition becomes fuzzy .the crossover from the sharp to the fuzzy character of the transition between the bold phase and the vengeant phase , observed at the plane , fits into a recent classification of contagion processes . according to that scheme, models of contagion processes can be divided into three classes : independent interaction models , stochastic threshold models , and deterministic threshold models .sharp transitions , like those found in our results , are characteristic for class .models in class give fuzzy curves , similar to those obtained here for larger amounts of admixtured guards or sinners. these models are also called critical mass models. in our terms , a critical amount of sinners should not be punished to get the transition to the bold phase ; this could mean that local concentration of guards should be small .if this is so , our results do fit into the classification proposed in . as we noted in the introduction, a similar classification was developed in .note that the basis of was the random - field ising model , which is far from the picture of contagion .+ coming back to the sociological reality , let us add a few words on the data presented in fig . 1 in terms of punishment and its cost .we stress that the parameters and do not mean directly the amount of units which a punished and punishing agent should pay , but just the measure of the decrease of the probability that he will defect and punish again . in these terms ,large numbers on the statistics of crimes in a country can be interpreted as an indication that the punishment cost are large or the punishment is weak .accordingly , an increase of the data can mean that the punishment decreased below some critical value or the punishment cost increased .then , new generations faced with an issue , to break a given norm or to preserve it , decide in a collective way , and these decisions are visible in the statistical data . if this point of view is accepted , we should admit in particular that law in germany , hungary and poland is broken much more frequently after 1993 .however , the data presented in fig . 1 can be seen also as a demonstration , that police in these countries is less punished for detecting crimes after 1993 , than before .obviously , this punishment is not open and intended ; still it can appear as a consequence of burdensome bureaucratic procedures , faulty organization , unclear rights and shifts of political aims .99 r. i. m. 
dunbar , _ the social brain : mind , language , and society in evolutionary perspective _ , annual review of anthropology * 32 * ( 2003 ) 163 - 181 .a. kpiski , _ self - portrait of a man _( in polish ) , wyd .literackie , krakw 2003 .r. axelrod , _ an evolutionary aproach to norms _ , amer .political sci .80 ( 1986 ) 1095 - 1111 .r. axelrod , _ complexity of cooperation .player - based models of competition and collaboration _ , princeton up , princeton 1997 .m. galan and l. izquierdo , ( 2005 ) _ appearances can be deceiving : lessons learned re - implementing axelrod s evolutionary approach to norms_. jasss * 8 * ( 3 ) 2 ( 2005 ) .m. neumann , _ homo socionicus : a case study of simulation models of norms _ , jasss * 11 * , no .4/6 ( 2008 ) .r. axelrod , _ on six advances in cooperation theory _ , analyse und kritik , * 22 * ( 2000 ) 130 - 151 .g. deffuant , s. moss and w. jager , _ dialogues concerning a ( possibly ) new science_. jasss 9 ( 1 ) 1 ( 2006 ) . c. castellano , s. fortunato and v. loreto , _ statistical physics of social dynamics, in press ( arxiv:0710.3256 ). data of eurostat ( http://epp.eurostat.ec.europa.eu/ ) q. michard and j .- p .bouchaud , _ theory of collective opinion shifts : from smooth trends to abrupt swings _ , eur .j. b * 47 * ( 2005 ) 151 - 159 .k. kuakowski , _ cops or robbers - a bistable society _c * 19 * ( 2008 ) 1105 - 1111 .k. kuakowski , a. dydejczyk and m. rybak , _ the norm game : how a norm fails _ , lncs * 5545 * ( 2009 ) 835 - 844 .d. stauffer and a. aharony , _ introduction to percolation theory _ ,routledge , london 2003 . b. bollobs and o. riordan , _ percolation _ , cambridge up , cambridge 2006 . s. wasserman and k. faust , _ social network analysis : methods and applications _ , cambridge up , cambridge 1994 .. j. scott , _ social network analysis : a handbook _ , sage publications , newbury park 1991 .r. a. hanneman and m. riddle , _ introduction to social network methods_. riverside , ca : university of california , riverside 2005 ( published in digital form at http://faculty.ucr.edu/ hanneman/ ) m. e. j. newman , _ the structure of scientific collaboration networks _ , pnas * 98 * ( 2001 ) 404 - 409 .m. c. gonzlez , h.j .herrmann , j. kertsz , and t. vicsek , _ community structure and ethnic preferences in school friendship networks _ , physica a * 379 * ( 2007 ) 307 - 316 . j .-onnela , j. saramki , j. hyvonen , g. szab , m. argollo de menezes , k. kaski , a .-barabsi and j. kertsz , _ analysis of a large - scale weighted network of one - to - one human communication _ , new j. phys .* 9 * ( 2007 ) 179 .m. e. j. newman , d. j. watts and s. h. strogatz , _ random graph models of social networks _ , pnas * 99 * ( 2002 ) 2566 - 2572 . l. a. n. amaral , a. scala , m. barthlmy and h. e. stanley , _ classes of small - world networks _ ,pnas * 97 * ( 2000 ) 11149 - 11152 .m. schnegg , _ reciprocity and the emergence of power laws in social networks _c * 17 * ( 2006 ) 1067 - 1076 .p. s. dodds and d. j. watts , _ universal behavior in a generalized model of contagion _ , phys .lett . * 92 * ( 2004 ) 218701 . g. marshall ( ed . ) , _ the concise oxford dictionary of sociology _ , oxford up , oxford 1994 .
the norm game (ng) introduced by robert axelrod is a convenient framework to discuss the time evolution of the level of norm preservation in social systems. recently the ng was formulated in terms of a social contagion on a model social network with two stable states: defectors or punishers. here we calculate the critical line between these states on the plane of parameters which measure the severities of punishing and of being punished. we also show that the position of this line is more susceptible to the number of agents who always punish and never defect than to the number of those who always defect and never punish. the process is discussed in the context of statistical data on crimes in some european countries close to wrocław - the place of this conference - around 1990. * the norm game on a model network: a critical line * + m. rybak, a. dydejczyk and k. kułakowski + _ faculty of physics and applied computer science, agh university of science and technology, al. mickiewicza 30, pl-30059 kraków, poland .pl , dydejczyk.agh.edu.pl , kulakowski.ftj.agh.edu.pl _ keywords: _ social networks; multiagent systems
in traditional cellular networks , the base stations ( bss ) in different cells independently control the transmission with their associated users .the inter - cell interference is avoided or minimized by adopting different frequency reuse patterns , which only allow non - adjacent cells to reuse the same frequency band .the frequency reuse factor is assigned to specify the rate at which the same frequency band can be used in the network . due to emerging high - rate wireless multimedia applications , traditional cellular systems have been pushed towards their throughput limits . as a result , it has been proposed to increase the frequency reuse factor such that each cell can be assigned with more frequency bands to increase the attainable throughput . in the special case where all cells can share the same frequency band for simultaneous transmission, this corresponds to the factor - one or _universal _ frequency reuse .however , with more flexible frequency reuse , the inter - cell interference control becomes an essential problem in cellular systems , which has recently drawn significant research attentions ( see , e.g. , ) . for multicell systems with a universal frequency reuse, two promising approaches have been proposed to resolve the inter - cell interference problem ( see , e.g. , and the references therein ) : _ interference coordination _ and _ network mimo ( multiple - input multiple - output)_. in the former approach , the performance of a multicell system is optimized via joint resource allocation among all cells , based on their shared channel state information ( csi ) of all direct and interfering links across different cells .furthermore , if the baseband signal synchronization among the bss of different cells is available and the transmit messages of different cells are shared by their bss , a more powerful cooperation can be achieved in the downlink via jointly encoding the transmit messages of all bss . in this so - called network mimo approach, the combined use of antennas at different bss for joint signal transmission resembles the conventional single - cell multiantenna broadcast channel ( bc ) . in this paper , the former interference coordination approach is adopted due to its relatively easier implementation in practical systems .more specifically , we study the inter - cell interference coordination for a two - cell ofdma downlink system with universal frequency reuse . all bss and user terminalsare assumed to be each equipped with a single antenna , and thus the system of interest can be modeled as a _parallel interfering siso ( single - input single - output ) bc_. promising applications of this two - cell system model are illustrated in fig .[ fig.1 ] , which shows a geographically symmetric setup with two adjacent macrocells , as well as a non - symmetric setup with one macrocell and one inside femtocell .this paper investigates the joint power and subcarrier allocation over the two cells to maximize their sum throughput , for both centralized and decentralized implementations .specifically , for the centralized allocation , with the assumption of a global knowledge of all channels in the network , we propose a scheme to jointly optimize power and subcarrier allocation over the two cells by applying the _ lagrange duality _ method from convex optimization .this centralized scheme provides a performance benchmark for the decentralized schemes studied subsequently . 
for the decentralized resource allocation ,this paper proposes a new _ cooperative interference control _ approach , whereby the two cells independently optimize resource allocation to maximize individual throughput subject to a set of preassigned mutual interference power constraints , in an iterative manner until the resource allocation in both cells converges .two types of interference power constraints are further examined : one is to constrain the total interference power across all subcarriers from each cell to the active users in its adjacent cell , termed _joint subcarrier protection _ ( jsp ) ; and the other is to limit the interference power over each individual subcarrier , termed _ individual subcarrier protection _ ( isp ) .also , the optimal resource allocation rules for each cell to maximize individual throughput with jsp or isp are derived .the rest of this paper is organized as follows .section ii introduces the two - cell downlink ofdma system , and formulates the optimization problem for resource allocation .section iii presents the centralized resource allocation scheme .section iv proposes two decentralized schemes via the cooperative interference control approach with jsp and isp , respectively .section v presents simulation results and pertinent discussions .finally , section vi concludes the paper .+ as shown in fig .[ fig.1 ] , we consider a two - cell system sharing the same frequency band with each cell having a downlink ofdma transmission .we use to denote each of the two cells , which are referred to as the 1st and 2nd cells in this paper , respectively . for convenience ,let the 1st cell refer to the macrocell and the 2nd cell refer to either the macrocell or the femtocell in fig .[ fig.1 ] . the total system bandwidth shared by the two cellsis assumed to be hz , which is equally divided into subcarriers ( scs ) indexed by .each sc is assumed to be used by at most one user inside each cell and could be shared between two users individually selected from the two cells .in addition , the users in the network are indexed by in the 1st cell and in the 2nd cell , where and are the total numbers of users in each corresponding cell .furthermore , we denote the channel power gains ( amplitude squares ) from the two bss to their respective users , saying users , in each cell as and , respectively . the inter - cell interference channel gain from to denoted by , while that from to is by .we assume that the noise at each user s receiver has independent circularly symmetric complex gaussian ( cscg ) distribution over scs with zero mean and variance , denoted by , where is the noise power spectral density .in addition , the transmit power allocated to user at sc is denoted by .thus , over all users and scs in the 1st cell , we can define a power allocation matrix ( -by- ) with the non - negative elements denoted by . is assumed to satisfy an ofdma - based power allocation ( opa ) , in which there exists at most one element in each column being larger than zero and all the other elements are equal to zero .this opa constraint can be expressed as .similarly , we can define the power allocation matrix for the 2nd cell as ( -by- matrix ) under a similar opa constraint .the columns of and are denoted by two vectors and , where ( ) is drawn from the column of ( ) . 
with the above system model and assuming that the inter - cell interference is treated as additional gaussian noise at each user s receiver , the signal - to - interference - plus - noise - ratio ( sinr ) of user at sc in the 1st cell is given by similarly , the sinr of user at sc in the 2nd cell is denoted by .thus , the achievable sum - rate of user is given by we consider the weighted - sum - rate ( wsr ) in each cell i.e. , where is the ( non - negative ) rate weight of user in the cell .with individual transmission power constraint at each bs , the following optimization problem can be formulated to maximize the _ system throughput _ defined as where is the given power constraint at , and is the opa constraint for the cell .in this section , we study the centralized optimization for jointly allocating resources in the two cells so as to maximize the system throughput , which corresponds to solving problem ( [ equ.6 ] ) globally with constraints ( [ equ.7 ] ) and ( [ equ.8 ] ) . for the centralized allocation , it is assumed that all channel gains in the network are collected by a central controller , which is capable of performing a centralized resource allocation and informing the allocation results to each cell for data transmission . due to the non - convex opa constraint and the non - concave objective function over and ,the optimization problem in ( [ equ.6 ] ) is _ non - convex _ and thus can not be solved efficiently for the global optimum .nevertheless , the lagrange duality method can be applied to this problem to obtain a suboptimal solution .interestingly , according to , it has been shown that a so - called `` time - sharing '' condition usually holds for resource allocation problems in ofdma , and the duality gap for such problems solved by the lagrange duality method becomes asymptotically zero as the number of subcarriers in the system becomes large . accordingly , in the sequel , we apply the lagrange duality method to solve problem ( [ equ.6 ] ) .first , we express the _ partial _ lagrangian of problem ( [ equ.6 ] ) as where , for each sc , and are non - negative dual variables associated with the power constraints in ( [ equ.7 ] ) with and , respectively .the lagrange dual function is then given by hence , the dual problem can be defined as for a given pairs of and , we have where , is obtained by solving the following per - sc maximization problem the maximization problem in ( [ equ.11 ] ) is thus decoupled into per - sc resource allocation problems given by ( [ equ.14 ] ) . due to the opa constraints , for one particular sc , it can be simultaneously assigned to one pair of users from the two cells when the resultant in ( [ equ.10 ] ) attains its maximum value ( with the optimized and ) .this user pair can be obtained by searching over all possible combinations from users .thus , the optimal sc and power allocation that solves the problem in ( [ equ.14 ] ) is where is the selected user pair to share sc , and is obtained from ( [ equ.10 ] ) . for a given pair of , the optimal and to maximize in ( [ equ.16 ] )have no closed - form solutions due to the non - convexity of this problem .however , an iterative search based on , e.g. , newton s method can be utilized to find a pair of local optimal solutions for and .then , we can check all possible user combinations to determine the optimal sc allocation according to ( [ equ.15 ] ) with optimized power allocation . after solving the per - sc problems in ( [ equ.14 ] ) for all s ,a subgradient - based method , e.g. 
, the ellipsoid method , can be adopted to solve the dual problem in ( [ equ.12 ] ) so that the power constraints in ( [ equ.7 ] ) at both bss are satisfied .the details are thus omitted for brevity .note that problem ( [ equ.6 ] ) can be solved in polynomial time with an overall complexity with order in its dual domain .specifically , for one particular sc , we search for combinations of user pairs and determine the power allocation for each user pair with iterations .in addition , is the number of iterations for solving the dual problem in ( [ equ.12 ] ) .however , this centralized allocation needs a system level coordination with all channel conditions in the two cells , which is a demanding requirement for practical applications . in the next section ,we propose decentralized schemes for resource allocation , which can be implemented by each cell independently .in this section , a new cooperative interference control approach is applied to design decentralized resource allocation schemes for the two - cell ofdma downlink system . in this approach, each cell independently optimizes its resource allocation to maximize individual wsr under its bs s own transmit power constraint , as well as a set of newly imposed constraints to regulate the leakage interference power levels to the active users in its adjacent cell .the above operation iterates between the two cells , until both cells obtain a converged resource allocation under their mutual inter - cell interferences .specifically , two decentralized allocation schemes are studied in this section corresponding to two different types of interference power constraints , namely jsp and isp . in this subsection, we solve the optimal resource allocation problem of maximizing the 1st cell s wsr subject to its bs s power constraint and a given jsp constraint to the active users in the 2nd cell .similar problem formulation and solution apply to the resource allocation in the 2nd cell and are thus omitted .consider the resource allocation problem in the 1st cell subject to the leakage interference constraint for the 2nd cell . in order to characterize the leakage interference to the 2nd cell, needs to know the interference channel gains from it to all active users over different scs in the 2nd cell .let denote the active user at sc in the 2nd cell , with the corresponding interference channel gain from to being .it is then assumed that has been perfectly estimated by user in the 2nd cell and fed back to . after collecting for all s from its active users, sends these channel gain values to ( via a backhaul link connecting these two bss ) . note that if a particular sc is not used by any user in the 2nd cell , the corresponding interference channel gain sent from to is set to be zero regardless of its actual value , so that this sc can be used by the 1st cell without any interference constraint . 
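A compact sketch of the per-subcarrier step of the centralized scheme is given below. For fixed dual prices it enumerates all user pairs and, instead of the Newton-type search mentioned above, optimizes the two transmit powers over a coarse grid, which is used here purely for simplicity; the channel arrays, rate weights and dual prices are assumed to be given.

```python
import numpy as np

def per_subcarrier_allocation(h1, h2, g12, g21, w1, w2, lam1, lam2,
                              noise=1.0, p_grid=np.linspace(0.0, 1.0, 51)):
    """Per-subcarrier user-pair selection and power allocation for fixed dual prices.

    h1[k, n]  : direct gain, BS1 -> user k of cell 1, subcarrier n
    g12[k, n] : interference gain, BS2 -> user k of cell 1, subcarrier n
    (and symmetrically h2, g21 for cell 2).  Returns the selected user pair and
    the corresponding transmit powers for every subcarrier.
    """
    K1, N = h1.shape
    K2, _ = h2.shape
    P1, P2 = np.meshgrid(p_grid, p_grid, indexing='ij')   # candidate power pairs
    pairs, powers = [], []
    for n in range(N):
        best = (-np.inf, (0, 0), (0.0, 0.0))
        for k1 in range(K1):
            for k2 in range(K2):
                r1 = w1[k1] * np.log2(1.0 + P1 * h1[k1, n] / (noise + P2 * g12[k1, n]))
                r2 = w2[k2] * np.log2(1.0 + P2 * h2[k2, n] / (noise + P1 * g21[k2, n]))
                L = r1 + r2 - lam1 * P1 - lam2 * P2        # per-subcarrier Lagrangian on the grid
                idx = np.unravel_index(np.argmax(L), L.shape)
                if L[idx] > best[0]:
                    best = (L[idx], (k1, k2), (p_grid[idx[0]], p_grid[idx[1]]))
        pairs.append(best[1])
        powers.append(best[2])
    return pairs, powers

# an outer loop (for instance the ellipsoid or a subgradient method) then adjusts
# (lam1, lam2) until the two per-BS power constraints are met
```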
to maximize the wsr of the 1st cell , the following problem is formulated as subject to where is the given jsp power constraint for protecting all the active users in the 2nd cell .note that limits the interference power averaged over all the scs ; thus , the corresponding resource allocation scheme is refereed to as the * average * scheme for convenience .we assume the non - negative dual variables associated with ( [ equ.21 ] ) and ( [ equ.22 ] ) are .similarly as in the case of centralized allocation , for a given pair of , problem ( [ equ.20 ] ) can be decoupled into per - sc problems in its dual domain , and the optimal allocation for sc in the 1st cell is derived as where with being the interference - plus - noise power at sc .( [ equ.27 ] ) means sc should be assigned to the user , denoted by , giving the highest value of with the optimized . by letting be zero and considering non - negative power allocation , the power allocation in ( [ equ.27 ] ) should be optimized according to thus , ( [ equ.27 ] ) and ( [ equ.28 ] ) together provide the optimal resource allocation rules at all scs with fixed and .then , the ellipsoid method can be adopted to iteratively search over and so that the constraints in ( [ equ.21 ] ) and ( [ equ.22 ] ) are simultaneously satisfied .this algorithm also bears a linear complexity order i.e. , , where denotes the number of iterations for updating and .similarly as ( [ equ.20 ] ) , the resource allocation problem for the 2nd cell can be formulated to maximize subject to the transmit power constraint of and the jsp constraint to protect the active users in the 1st cell . for a given pair of and , and be found in the journal version of this paper . ]the per - cell resource allocation described above can be iteratively implemented between and until the sc and power allocation in both cells converges . in this subsection, we study the decentralized resource allocation with isp . similarly to the previous case of jsp , we merely present the solution to the optimization problem for the 1st cel . with the same objective functionas ( [ equ.20 ] ) and bs transmit power constraint as ( [ equ.21 ] ) , we formulate the current problem via replacing the jsp constraint in ( [ equ.22 ] ) by the following isp constraint over each individual sc : where is the interference power constraint for protecting the active user at sc in the 2nd cell .again , we apply the lagrange duality to solve the per - cell resource allocation problem with isp . following a similar procedure as in jsp, we can derive the following optimal sc and power allocation rules : where , with being the non - negative dual variable associated with ( [ equ.30 ] ) . according to ( [ equ.35 ] ) and ( [ equ.37 ] ) ,the optimal sc and power allocation can be determined for all s with any given .then , the bisection method can be used to adjust so that the bs transmit power constraint ( [ equ.21 ] ) is satisfied . nevertheless , it is not computationally efficient to individually optimize ( ) for each sc , thus two special schemes are further identified .one scheme is to set , which means that each cell is not aware of its interference to the adjacent cell , named as the * no protection * scheme .the other scheme is to set uniform peak interference power constraints over all scs , i.e. 
, , named as the * peak * scheme .in this section , simulation results are presented to evaluate the performance of the proposed schemes for the two - cell downlink ofdma system .it is assumed that mhz and .in addition , all users rate weights are assumed to be one , and the noise power spectral density is set to be dbm / hz . assuming independent ( time - domain ) rayleigh fading with six independent , equal - energy multipath taps , the frequency - domain channel gains \{ } , \{ } , \{ } , and \{ } are modeled as independent cscg random variables distributed as , , and , respectively . for convenience , we normalize , and adjust and to generate different channel models .[ fig.3 ] and [ fig.6 ] show the results for two macrocells with and fig .[ fig.8 ] for the case with one macrocell and one femtocell with and ( cf .1 ) . fig .[ fig.3 ] shows the system throughput , , achieved by different interference power constraints and using the proposed decentralized scheme with jsp ( i.e. , the average scheme ) in section iv.a for one particular channel realization .the channel gains are obtained by setting and , while the transmit power limits at two bss are set equally to be watt . in this figure , we have marked one local maximum point obtained by the iterative search method in .also , we have marked the system throughput obtained by the centralized scheme proposed in section iii ( * optimal * scheme ) .it is observed that the system throughput achieved by the decentralized average scheme is suboptimal as compared to that by the centralized optimal scheme .[ fig.6 ] shows the system throughput against the average inter - cell interference channel gain for various schemes .the channels are generated via and , with being the average interference channel gain ranging from to 1 .the proposed decentralized average scheme achieves the system throughput close to that by the centralized optimal scheme for all values of , when the searched optimized values of and are applied .if instead the preassigned values for and are applied , throughput degradations are observed to be negligible in the case of for the low inter - cell interference regime with small values of , and in the case of for the high inter - cell interference regime with large values of .in addition , the * half * scheme ( each cell orthogonally uses half of the overall frequency band ) and the no protection scheme are observed to perform poorly for small and large values of .moreover , the average scheme with jsp performs superior over the peak scheme with isp , especially when becomes large .+ finally , fig .[ fig.8 ] shows the system throughput for a macrocell with a femtocell inside it .the channel gains are , , and .the transmit power constraint at the macrocell s bs is assumed to be 1 watt , while that at the femtocell s bs is changed from 0.02 to 2 watts.it is observed that all proposed centralized and decentralized resource allocation schemes outperform the no protection scheme in the achievable system throughput , which eventually becomes saturated with the increased inter - cell interference . at low femtocell snr ,there exists a noticeable throughput gap between the average and peak schemes , which is due to the fact that when the femtocell suffers detrimental interference from the macrocell , the average scheme can opportunistically allocate the femtocell transmit power to a small portion of scs with best channel conditions . 
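Before turning to the high-SNR behaviour, the two per-cell allocation rules just compared can be sketched as follows, with the subcarrier-to-user assignment taken as given. The dual prices are adjusted here by nested bisection rather than by the ellipsoid method of the text, which is enough to illustrate the structure of the two solutions: a water-filling level distorted by the interference price in the average (JSP) case, and a per-subcarrier cap in the peak (ISP) case.

```python
import numpy as np

def jsp_power_allocation(h, g_int, inoise, p_max, gamma_avg, w=1.0, iters=60):
    """Per-cell allocation under the BS power budget and the average (JSP) interference constraint.

    h, g_int, inoise are per-subcarrier arrays: direct gain of the scheduled user,
    interference gain towards the active user of the other cell, and
    interference-plus-noise power seen on that subcarrier.
    """
    def powers(lam, mu):
        return np.maximum(w / (np.log(2.0) * (lam + mu * g_int)) - inoise / h, 0.0)

    def price_for_power(mu):
        lo, hi = 1e-9, 1e6
        for _ in range(iters):                                   # bisection on the power price
            lam = 0.5 * (lo + hi)
            lo, hi = (lam, hi) if powers(lam, mu).sum() > p_max else (lo, lam)
        return hi

    lo, hi = 0.0, 1e6
    for _ in range(iters):                                       # bisection on the interference price
        mu = 0.5 * (lo + hi)
        p = powers(price_for_power(mu), mu)
        lo, hi = (mu, hi) if (p * g_int).sum() > gamma_avg else (lo, mu)
    return powers(price_for_power(hi), hi)

def isp_power_allocation(h, g_int, inoise, p_max, gamma_peak, w=1.0, iters=60):
    """Peak (ISP) protection: capped water-filling, p_n never exceeding gamma_peak / g_int_n."""
    cap = gamma_peak / np.maximum(g_int, 1e-30)
    def powers(lam):
        return np.minimum(np.maximum(w / (np.log(2.0) * lam) - inoise / h, 0.0), cap)
    lo, hi = 1e-9, 1e6
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        lo, hi = (lam, hi) if powers(lam).sum() > p_max else (lo, lam)
    return powers(hi)
```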
on the other hand , at high femtocell snr ,both average and peak schemes tend to perform close to the optimal scheme .in this paper , the downlink cooperative interference control in a two - cell ofdma system is investigated with centralized and decentralized implementations for joint power and subcarrier allocation to maximize the system throughput .it is shown that the proposed decentralized recourse allocation schemes via the new approach of inter - cell interference power protection achieve a performance close to that of the centralized scheme in various system settings .in addition , the joint subcarrier protection ( jsp ) with average interference power constraint is shown to achieve a larger system throughput than the more stringent individual subcarrier protection ( isp ) counterpart with peak interference power constraint .d. gesbert , s. hanly , h. huang , s. shamai shitz , o. simeone , and w. yu , `` multi - cell mimo cooperative networks : a new look at interference , '' _ ieee j. select .areas communications _ , vol .9 , pp . 1380 - 1408 , dec .l. venturino , n. prasad , and x. wang , `` coordinated scheduling and power allocation in downlink multicell ofdma networks , '' _ ieee trans .vehicular technology _58 , no . 6 , pp . 2835 - 2848 , july 2009 . w.yu , t. kwon , and c. shin , `` joint scheduling and dynamic power spectrum optimization for wireless multicell networks , '' in _ proc .44th conf .science sys ._ ( _ ciss _ ) , princeton , nj , march 2010 , pp . 1 - 6 .
this paper studies cooperative schemes for inter-cell interference control in orthogonal-frequency-division-multiple-access (ofdma) cellular systems. the downlink transmission in a simplified two-cell system is examined, where both cells simultaneously access the same frequency band using ofdma. the joint power and subcarrier allocation over the two cells is investigated for maximizing their sum throughput, with both centralized and decentralized implementations. in particular, the decentralized allocation is achieved via a new _ cooperative interference control _ approach, whereby the two cells independently implement resource allocation to maximize individual throughput in an iterative manner, subject to a set of mutual interference power constraints. simulation results show that the proposed decentralized resource allocation schemes achieve a system throughput close to that of the centralized scheme, and provide substantial throughput gains over existing schemes.
in this paper we consider the applications of a new numerical - analytical technique which is based on the methods of local nonlinear harmonic analysis or wavelet analysis to the nonlinear root - mean - square ( rms ) envelope dynamics [ 1 ] .such approach may be useful in all models in which it is possible and reasonable to reduce all complicated problems related with statistical distributions to the problems described by systems of nonlinear ordinary / partial differential equations . in this paperwe consider an approach based on the second moments of the distribution functions for the calculation of evolution of rms envelope of a beam .the rms envelope equations are the most useful for analysis of the beam self forces ( space charge ) effects and also allow to consider both transverse and longitudinal dynamics of space - charge - dominated relativistic high brightness axisymmetric / asymmetric beams , which under short laser pulse driven radio - frequency photoinjectors have fast transition from nonrelativistic to relativistic regime [ 1 ] .analysis of halo growth in beams , appeared as result of bunch oscillations in the particle - core model , also are based on three - dimensional envelope equations [ 2 ] . from the formal point of viewwe may consider rms envelope equations after straightforward transformations to standard cauchy form as a system of nonlinear differential equations which are not more than rational ( in dynamical variables ) .because of rational type of nonlinearities we need to consider some extension of our results from [ 3]-[10 ] , which are based on application of wavelet analysis technique to variational formulation of initial nonlinear problems .wavelet analysis is a relatively novel set of mathematical methods , which gives us a possibility to work with well - localized bases in functional spaces and give for the general type of operators ( differential , integral , pseudodifferential ) in such bases the maximum sparse forms .our approach in this paper is based on the generalization [ 11 ] of variational - wavelet approach from [ 3]-[10 ] , which allows us to consider not only polynomial but rational type of nonlinearities .our representation for solution has the following form which corresponds to the full multiresolution expansion in all time scales .formula ( [ eq : z ] ) gives us expansion into a slow part and fast oscillating parts for arbitrary n. so , we may move from coarse scales of resolution to the finest one for obtaining more detailed information about our dynamical process .the first term in the rhs of equation ( 1 ) corresponds on the global level of function space decomposition to resolution space and the second one to detail space . in this way we give contribution to our full solution from each scale of resolution or each time scale . the same is correct for the contribution to power spectral density ( energy spectrum ) : we can take into account contributions from each level / scale of resolution . 
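The structure of expansion (1), one slow component plus detail contributions from successively finer time scales, can be illustrated with the simplest compactly supported system, the Haar basis. The sketch below is only an illustration of the decomposition; the paper itself works with smoother compactly supported wavelets.

```python
import numpy as np

def haar_multiresolution(signal, levels):
    """Split a sampled function into a coarse ("slow") part plus the detail
    contribution of each finer scale, using the Haar system for transparency.
    The returned pieces sum back to the input, mirroring expansion (1)."""
    x = np.asarray(signal, dtype=float)
    assert x.size % (2 ** levels) == 0, "signal length must be divisible by 2**levels"

    def block_mean(v, width):
        # projection onto piecewise-constant functions on blocks of `width` samples
        return np.repeat(v.reshape(-1, width).mean(axis=1), width)

    projections = [x] + [block_mean(x, 2 ** j) for j in range(1, levels + 1)]
    slow_part = projections[-1]
    details = [projections[j - 1] - projections[j] for j in range(levels, 0, -1)]
    return slow_part, details      # details[0] is the coarsest scale, details[-1] the finest

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 512, endpoint=False)
    f = np.sin(2 * np.pi * t) + 0.2 * np.sin(2 * np.pi * 37 * t)   # slow part + fast oscillation
    slow, details = haar_multiresolution(f, levels=5)
    print(np.allclose(slow + sum(details), f))                      # reconstruction check
```

The per-scale pieces also give the per-level contributions to the power spectral density mentioned above.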
in part 2we describe the different forms of rms equations .in part 3 we present explicit analytical construction for solutions of rms equations from part 2 , which are based on our variational formulation of initial dynamical problems and on multiresolution representation [ 11 ] .we give explicit representation for all dynamical variables in the base of compactly supported wavelets .our solutions are parametrized by solutions of a number of reduced algebraical problems from which one is nonlinear with the same degree of nonlinearity and the rest are the linear problems which correspond to particular method of calculation of scalar products of functions from wavelet bases and their derivatives .below we consider a number of different forms of rms envelope equations , which are from the formal point of view not more than nonlinear differential equations with rational nonlinearities and variable coefficients .let be the distribution function which gives full information about noninteracting ensemble of beam particles regarding to trace space or transverse phase coordinates .then we may extract the first nontrivial bit of ` dynamical information ' from the second moments rms emittance ellipse is given by .expressions for twiss parameters are also based on the second moments .we will consider the following particular cases of rms envelope equations , which described evolution of the moments ( 1 ) ( [ 1],[2 ] for full designation ) : for asymmetric beams we have the system of two envelope equations of the second order for and : the envelope equation for an axisymmetric beam is a particular case of preceding equations .also we have related lawson s equation for evolution of the rms envelope in the paraxial limit , which governs evolution of cylindrical symmetric envelope under external linear focusing channel of strenghts : where according [ 2 ] we have the following form for envelope equations in the model of halo formation by bunch oscillations : where x(s ) , y(s ) , z(s ) are bunch envelopes , , .after transformations to cauchy form we can see that all this equations from the formal point of view are not more than ordinary differential equations with rational nonlinearities and variable coefficients ( also , b we may consider regimes in which , are not fixed functions / constants but satisfy some additional differential constraint / equations , but this case does not change our general approach ) .our problems may be formulated as the systems of ordinary differential equations with fixed initial conditions , where are not more than polynomial functions of dynamical variables and have arbitrary dependence of time . because of time dilation we can consider only next time interval : .let us consider a set of functions and a set of functionals where are dual ( variational ) variables .it is obvious that the initial system and the system are equivalent .of course , we consider such which do not lead to the singular problem with , when or , i.e. .now we consider formal expansions for : where are useful basis functions of some functional space ( , sobolev , etc ) corresponding to concrete problem and because of initial conditions we need only , where the lower index i corresponds to expansion of dynamical variable with index i , i.e. 
and the upper index corresponds to the numbers of terms in the expansion of dynamical variables in the formal series .then we put ( [ eq : pol1 ] ) into the functional equations ( [ eq : veq ] ) and as result we have the following reduced algebraical system of equations on the set of unknown coefficients of expansions ( [ eq : pol1 ] ) : where operators l and m are algebraization of rhs and lhs of initial problem ( [ eq : pol0 ] ) , where ( [ eq : lambda ] ) are unknowns of reduced system of algebraical equations ( rsae)([eq : pol2 ] ) . are coefficients ( with possible time dependence ) of rhs of initial system of differential equations ( [ eq : pol0 ] ) and as consequence are coefficients of rsae . , are multiindexes , by which are labelled and other coefficients of rsae ( [ eq : pol2 ] ) : where p is the degree of polinomial operator p ( [ eq : pol0 ] ) where q is the degree of polynomial operator q ( [ eq : pol0 ] ) , , .now , when we solve rsae ( [ eq : pol2 ] ) and determine unknown coefficients from formal expansion ( [ eq : pol1 ] ) we therefore obtain the solution of our initial problem .it should be noted if we consider only truncated expansion ( [ eq : pol1 ] ) with n terms then we have from ( [ eq : pol2 ] ) the system of algebraical equations with degree and the degree of this algebraical system coincides with degree of initial differential system .so , we have the solution of the initial nonlinear ( rational ) problem in the form where coefficients are roots of the corresponding reduced algebraical ( polynomial ) problem rsae ( [ eq : pol2 ] ) .consequently , we have a parametrization of solution of initial problem by solution of reduced algebraical problem ( [ eq : pol2 ] ) .the first main problem is a problem of computations of coefficients ( [ eq : alpha ] ) , ( [ eq : beta ] ) of reduced algebraical system .these problems may be explicitly solved in wavelet approach .the obtained solutions are given in the form ( [ eq : pol3 ] ) , where are basis functions and are roots of reduced system of equations . in our case obtained via multiresolution expansions and represented by compactly supported wavelets and are the roots of corresponding general polynomial system ( [ eq : pol2 ] ) .our constructions are based on multiresolution approach .because affine group of translation and dilations is inside the approach , this method resembles the action of a microscope .we have contribution to final result from each scale of resolution from the whole infinite scale of spaces .more exactly , the closed subspace corresponds to level j of resolution , or to scale j. we consider a multiresolution analysis of ( of course , we may consider any different functional space ) which is a sequence of increasing closed subspaces : satisfying the following properties : so , on fig.1 we present contributions to bunch oscillations from first 5 scales or levels of resolution . it should be noted that such representations ( 1 ) , ( 15 ) for solutions of equations ( 3)-(5 ) give the best possible localization properties in corresponding phase space .this is especially important because our dynamical variables corresponds to moments of ensemble of beam particles .in contrast with different approaches formulae ( 1 ) , ( 15 ) do not use perturbation technique or linearization procedures and represent bunch oscillations via generalized nonlinear localized eigenmodes expansion .rosenzweig , fundamentals of beam physics , e - ver- + sion : http://www.physics.ucla.edu/class/99f/250rosenzweig/notes/ l. 
serafini and j.b .rosenzweig , _ phys .e _ * 55 * , 7565 , 1997 . c. allen , t. wangler , papers in ucla icfa proc ., nov . , 1999 , world sci . , 2000 .a.n . fedorova and m.g .zeitlin , wavelet approach to mechanical problems .symplectic group , symplectic topology and symplectic scales , _ new applications of nonlinear and chaotic dynamics in mechanics _ , 31,101 ( kluwer , 1998 ) .fedorova , m.g .zeitlin and z. parsa , variational approach in wavelet framework to polynomial approximations of nonlinear accelerator problems .* cp468 * , 48 ( american institute of physics , 1999 ) .+ los alamos preprint , physics/990262 a.n . fedorova and m.g .zeitlin , nonlinear accelerator problems via wavelets , parts 1 - 8 , proc .pac99 , 1614 , 1617 , 1620 , 2900 , 2903 , 2906 , 2909 , 2912 ( ieee / aps , new york , 1999 ) .+ los alamos preprints : physics/9904039 , physics/9904040 , physics/9904041 , physics/9904042 , physics/9904043 , physics/9904045 , physics/9904046 , physics/9904047 .
we present applications of a variational wavelet approach to different forms of nonlinear ( rational ) rms envelope equations . we obtain a representation of beam bunch oscillations as a multiresolution ( multiscale ) expansion in a basis of compactly supported wavelets .
the field of non - linear dynamics introduced the fascinating idea that an apparently random behavior of a time series might have been generated by a low dimensional deterministic system .based on the notions of chaos theory , different algorithms have been invented to infer if an observed time series is a realization of a chaotic system , e.g. the estimation of the largest lyapunov - exponent , the correlation dimension and nonlinear prediction .there is hope to gain deeper insights in complex systems like those from biology and physiology by applying these methods .however , the application of these methods to a finite , often noisy set of measured data is not straightforward , see e.g. and references therein .for example , in order to claim a finite , fractal correlation dimension , a scaling region of sufficient length has to be established .determining this scaling region by eye or some algorithm may lead to an erroneous evidence of chaotic behavior . in order to evaluate the analysis ,it has become popular to apply the method of surrogate data .therefore , data are generated which have the same linear statistical properties as the original data but not the possible nonlinear ones .for many realizations of these data , the same algorithm as to the original data is applied . a significant difference between the distribution of the nonlinear feature for the surrogate data andthe original data is taken as an indication that the process underlying the original data is deterministic , nonlinear or even chaotic .the explicit null hypotheses of surrogate data testing for linearity is that the data were generated by a linear , stochastic , gaussian stationary process , including a possible invertible nonlinear observation function .thus , a rejection of this hypothesis does not necessarily mean that the data come from a chaotic , i.e. some kind of stationary , nonlinear deterministic , process .they might also originate from a nonlinear stochastic or even simply from a linear , stochastic , non - stationary process . in this paper , we investigate the power of surrogate data testing against non - stationarity . as nonlinear featurewe use the correlation dimension .the behavior of correlation dimension estimates has been investigated for the , type of linear non - stationarity . for physiological data ,such behavior has been observed in heart rate .often , physiological data are characterized by some kind of oscillatory behavior like eeg , hormone secretion , breathing or tremor . for such data , types of non - stationarity introducing some time dependency of the oscillating dynamics ,e.g a modulation of frequency or amplitude , seems to be a natural violation of the null hypothesis . if the process is linear and the time dependency of the parameters , and thus , the autocovariance function is periodically in time , these processes are called cyclostationary . many other types of non - stationarity in oscillatory processes are imaginable .we choose cyclostationary processes because they allow in simple way for a parametric violation of the null hypothesis .formally , these processes can be expressed as higher dimensional autonomous non - linear stochastic processes .a special version of surrogate data testing acting on segments of the data has been suggested to analyze such data . 
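the following sketch gives a generic implementation of the standard surrogate - data procedure referred to above ( phase - randomized fft surrogates and a z - score of a nonlinear feature against the surrogate distribution ) ; it is a hedged illustration rather than the authors' code , and the stand - in feature used at the end ( the third moment ) is chosen only to make the example self - contained .

    import numpy as np

    def fft_surrogate(x, rng):
        # phase-randomized surrogate: same periodogram (linear properties), random fourier phases
        n = len(x)
        spec = np.fft.rfft(x)
        phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
        new_spec = np.abs(spec) * np.exp(1j * phases)
        new_spec[0] = spec[0]                     # preserve the mean
        if n % 2 == 0:
            new_spec[-1] = np.abs(spec[-1])       # keep the nyquist bin real
        return np.fft.irfft(new_spec, n)

    def surrogate_test(x, feature, n_surr=50, seed=0):
        # z-score of a nonlinear feature of x against its distribution over surrogate realizations
        rng = np.random.default_rng(seed)
        d_orig = feature(x)
        d_surr = np.array([feature(fft_surrogate(x, rng)) for _ in range(n_surr)])
        return abs(d_orig - d_surr.mean()) / d_surr.std(ddof=1)

    # example with a stand-in nonlinear feature (the third moment), for illustration only
    x = np.random.default_rng(1).standard_normal(2048)
    print("z =", surrogate_test(x, lambda y: np.mean((y - y.mean()) ** 3)))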
in the next section, we informally discuss the class of cyclostationary processes and introduce the two specific examples we use in section iii to investigate the power of surrogate data testing with respect to these types of non - stationarity .the parameters and of a linear stochastic autoregressive ( ar ) process : [ ar_process ] x(t ) = _i=1^p a_1 x(t - p ) + ( t ) , ( t ) ~(0,^2 ) determine the autocovariance function : r ( ) = < x(t ) x(t+ ) > .the spectrum is given as fourier transform of the autocovariance function : s ( ) = e^-i r ( ) . a possible first step to non - stationarity is to define a time dependent spectrum and , correspondingly , a time dependent autocovariance function : r(t , ) = < x(t ) x(t+ ) > .a cyclostationary process of periodicity is defined by : [ cyclo_def ] r(t , ) = r(t+l , ) . for the ar process of eq .( [ ar_process ] ) this means that the parameters and may change periodically . as process satisfying the null hypothesis of surrogate data testing for linearity , we chose an autoregressive ( ar ) process of order two : x_t = a_1 x_t-1 + a_2 x_t-2 + _ t , _t ~(0,^2 ) . in terms of physics ,ar processes can be interpreted as a combination of linear relaxators and linear damped oscillators driven by noise . for an ar process of order two which describes a damped oscillator ,the parameters are related to the relaxation time and period by : the variance of the process var( ) is given by : [ var_ar2 ] ( x_t)= .we choose an ar2 process with , and as process that satisfies the null hypothesis .[ ar2_dat]a displays a realization of this process .= 8.4 cm = 8.4 cm = 8.4 cm the oscillatory behavior with a mean period of 10 time steps is clearly visible as well as the natural variability of period and amplitude .[ ar2_spe ] ( solid line ) shows the estimated spectrum of the process .the spectrum was estimated by averaging 100 periodograms , i.e. the squared absolute value of the fourier transform of the data .= 8.4 cm a broad peak , typical for a stochastically driven linear damped oscillator can be seen . based on eqs .( [ a1],[a2],[var_ar2 ] ) we now introduce two parameterized violations of this stationary , linear , stochastic process in order to investigate the power of surrogate data testing with respect to non - stationarity . for the first violation of stationarity in the frame of cyclostationary processes , we choose a simple amplitude modulation , corresponding by eq.([var_ar2 ] ) to a periodicity of the variance of the driving noise . based on the stationary ar2 process ,the amplitude modulated process is given by : [ process1 ] x_amp(t ) = ( 1 + _ ( 2/t_mod t ) ) x_0(t ) . , the modulation depth , parameterizes the violation of the null hypothesis . determines the modulation period .[ ar2_dat]b displays a realization of this process with and for three periods of the modulation .compared to fig .[ ar2_dat]a , the non - stationarity is hardly visible . due to the long modulation period compared to the period of the process ,its spectrum is not distinguishable from that of the stationary process in fig .[ ar2_spe ] . 
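a minimal simulation of the stationary ar(2) oscillator and its amplitude - modulated ( cyclostationary ) variant is sketched below ; the mean period of 10 time steps follows the text , while the relaxation time , noise level , modulation depth and modulation period are illustrative assumptions , since not all values are legible in this excerpt .

    import numpy as np

    rng = np.random.default_rng(0)

    # damped-oscillator ar(2) parameters from the mean period T and a relaxation time tau
    # (tau, sigma and the modulation parameters below are illustrative assumptions)
    T, tau, sigma = 10.0, 50.0, 1.0
    a1 = 2.0 * np.cos(2.0 * np.pi / T) * np.exp(-1.0 / tau)
    a2 = -np.exp(-2.0 / tau)

    def ar2(n, a1, a2, sigma, rng):
        # realization of x(t) = a1 x(t-1) + a2 x(t-2) + eps(t), eps ~ N(0, sigma^2)
        x = np.zeros(n)
        for t in range(2, n):
            x[t] = a1 * x[t - 1] + a2 * x[t - 2] + sigma * rng.standard_normal()
        return x

    n, t_mod, depth = 8192, 2048, 0.3
    x0 = ar2(n, a1, a2, sigma, rng)                                          # null hypothesis
    x_amp = (1.0 + depth * np.sin(2.0 * np.pi * np.arange(n) / t_mod)) * x0  # amplitude modulation
    print("variance ratio (modulated / stationary):", x_amp.var() / x0.var())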
for the second violation of stationarity, we chose a modulation of the period of the ar2 process with period and amplitude around the mean period .this leads to a time dependency of the parameter of the ar2 process : parameterizes the violation of the null hypothesis .according to eq .( [ var_ar2 ] ) , the time dependency of causes a time dependency of the variance of the process .the effect of a changing variance is already covered by the first process , eq .( [ process1 ] ) . to investigate only the effect of a changing period of the process here , we use eq .( [ var_ar2 ] ) to adjust the variance of the driving noise such that the variance of the process is constant : where and denote the parameters of the process satisfying the null hypothesis .[ ar2_dat]c displays a realization of this process with and .again , compared to fig .[ ar2_dat]a , the non - stationarity is hardly visible .[ ar2_spe ] ( dashed line ) shows the estimated spectrum of the process .the spectrum shows two peaks at the corresponding frequencies due to the specific type of modulation chosen .as nonlinear feature to investigate the power of surrogate data testing against the two violations of stationarity we use the correlation dimension .the phase space is reconstructed by delay embedding .the delay is chosen equal to the lag at which the autocorrelation function first crosses zero .the correlation dimension is defined by : where c(r ) , the correlation integral , is given by : including the theiler correction which we chose equal to the mean period , i.e. 10 time steps .the canonical procedure to establish a finite correlation dimension is to show the existence of a scaling region for small where eq .( [ d2 ] ) holds and stays constant for a high enough embedding dimensions . for all processes investigated here , the true correlation dimension is infinity .following the idea of surrogate data testing , we fix an algorithm to obtain a finite value from the correlation integral and look for differences to the original data .therefore , we apply theiler and lookman s rule of five chord estimator and chose their equal to the standard deviation of the data . for such a large do not examine the small scale behavior of eq .( [ d2 ] ) anymore .we are aware that we should not call this quantity correlation dimension anymore .it has been termed dimensional complexity .the surrogate data are produced by the fft algorithm .for each degree of violation of the null hypothesis 50 independent surrogate data sets of length 8192 were generated . denoting the `` correlation dimension '' of the original data by , the mean of the distribution of this feature for the surrogate data by and its the variance by , the resultis displayed as : [ zet ] z= .it was confirmed that the distribution of the feature is sufficiently well described by a gaussian distribution .thus , can be related to a confidence interval , since for 50 realizations the -distribution of is well approximated by a gaussian distribution and corresponds to the 5 % level of significance . in general , in power of the test investigations a procedure different from that outlined above is chosen . for a certain significance level , e.g. 5% , and different degrees of violation of the null hypothesis ,numerous realizations , e.g. 
1000 , of the process are generated and the fraction of rejected null hypotheses is reported .due to the high computational burden for calculating the correlation integral , this procedure is not feasible here .the above procedure has the drawback that the results depend on the single realization that is used as basis for the surrogates .we repeated the analysis reported below for independent realizations and found no qualitative differences for different realizations .for the first violation of the null hypothesis , we increase in eq.([process1 ] ) from zero , i.e. no violation , to 0.5 in steps of 0.1 .the distribution of these data are not gaussian for .thus , the amplitude adjusted surrogate data algorithm was applied .the deviation from gaussianity is weak for the range of violations chosen .we also applied the algorithm without amplitude adjustment and did not found significant different results .[ ar2amp_res ] displays the result of the simulation study . in dependence on the embedding dimension, is displayed for different degrees of violation of the null hypothesis .= 8.4 cm as expected , without any violation , the values stay within the 2 region given by .a modulation depth of and leads to results at the border of 5% significance .starting from , see fig .[ ar2_dat]b , the null hypothesis is clearly rejected at the 5% level of significance whenever the embedding dimension is large enough to reconstruct the second order process appropriately .to investigates the effect of a variation in the period of the linear stochastic process , we increase in eq.([process2_1],[process2_2 ] ) from zero to three.the distribution of these data are gaussian independent from the value of .thus , no amplitude adjustment was necessary .again , the distribution of the feature is sufficiently well described by a gaussian distribution .[ ar2fre_res ] displays the result of the simulation study .= 8.4 cm for all degrees of violation , the violation is not detected when the embedding dimension is too small to unfold the dynamics in phase space .otherwise , a modulation of the period of 15 % , see fig .[ ar2_dat]c , leads to a clear rejection of the null hypothesis at the 5 % level of confidence .the simulation studies reported in this paper indicate that surrogate data testing for linear , stochastic , gaussian stationary processes is powerful against a violation of the assumption of stationarity .thus , a significant result of the test does not necessarily indicate a non - linear or even chaotic process underlying the data .it might have simply be caused by a non - stationarity of the process .
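for completeness , the sketch below illustrates the kind of dimensional - complexity estimate used in the study : delay embedding , a correlation integral with a theiler window , and a crude two - point chord slope of log c(r) ; the embedding parameters and radii are illustrative , and the simple chord slope only stands in for the rule - of - five chord estimator mentioned above .

    import numpy as np

    def delay_embed(x, dim, delay):
        n = len(x) - (dim - 1) * delay
        return np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])

    def correlation_integral(emb, r, theiler):
        # fraction of point pairs closer than r, excluding pairs within the theiler window
        count, total = 0, 0
        for i in range(len(emb)):
            rest = emb[i + theiler + 1:]
            if len(rest) == 0:
                continue
            dist = np.max(np.abs(rest - emb[i]), axis=1)    # maximum-norm distances
            count += int(np.sum(dist < r))
            total += len(rest)
        return count / total

    def dimensional_complexity(x, dim=5, delay=3, theiler=10):
        # crude two-point chord slope of log C(r); stands in for the chord estimator of the text
        emb = delay_embed(x, dim, delay)
        r1 = np.std(x)          # large radius, of the order of the data's standard deviation
        r2 = r1 / 4.0
        c1 = correlation_integral(emb, r1, theiler)
        c2 = correlation_integral(emb, r2, theiler)
        return (np.log(c1) - np.log(c2)) / (np.log(r1) - np.log(r2))

    x = np.random.default_rng(2).standard_normal(2000)
    print("dimensional complexity (white noise, dim=5):", dimensional_complexity(x))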
surrogate data testing is a method frequently applied to evaluate the results of nonlinear time series analysis . since the null hypothesis tested against is a linear , gaussian , stationary stochastic process , a positive outcome may result not only from an underlying nonlinear or even chaotic system , but also from , e.g. , a non - stationary linear one . we investigate the power of the test against non - stationarity .
machine - type communications ( mtc ) are typically characterized by a massive number of machine - type devices that connect to the network to transmit small data payloads . those featurespresent a significant challenge to cellular networks , whose radio access part is traditionally designed to deal with a rather low number of connections with high data requirements .specifically , current cellular networks , such as lte - a , are connection - oriented , requiring a connection establishment between the device and the base station ( bs ) before the device can transmit its data packet . as an example, the connection establishment in lte - a involves a high amount of signaling overhead , which is particularly emphasized when the data payload is small , e.g. , less than 1000 bytes .therefore , in 3gpp it was proposed an approach to optimize the connection establishment by reducing the signaling overhead .the resulting simplified connection establishment protocol starts with the contention - based access reservation protocol ( arp ) , depicted in the first four steps in fig .[ fig : arpcomparison](a ) , followed by a fifth message where the signaling and a small data payload are concatenated .the signaling exchanges related to the security mechanisms are omitted in the optimized version of the lte - a connection establishment , by reusing an a - priori established security context .the throughput and blocking probability of the arp are rather sensitive to the number of contending devices .specifically , the devices contend for access by sending their preambles in a designated and periodically occurring uplink sub - frame , here termed as random access opportunity ( rao ) .when the number of contending devices is high , multiple devices activate the same preamble in a rao , which leads to collisions of their rrc connection requests , see fig .[ fig : arpcomparison](a ) .consequently , most devices are unable to establish a connection in the first attempt and perform subsequent attempts that , due to the high load , are also likely to result in collisions .a solution put forward to cope with congestion , was the extended access class barring ( eab ) , where certain classes of devices are temporally blocked from participating in the arp , but at the cost of an increased access latency to those same devices .another drawback of the arp is that the network learns the devices identities and connection establishment causes only after the rrc connection request is successfully received , as the contention is performed via randomly chosen preambles that do not carry information .a solution that allows the network to learn the identities and connection establishment causes of the contending devices already at the beginning of the arp , could enable their differentiated treatment in later phases of the connection establishment and even skip some of the steps in the lte - a random access protocol , as indicated in fig .[ fig : arpcomparison ] .[ fig : arpcomparison ] in this paper we propose a new access method based on signatures and bloom filtering .the method is demonstrated in the context of the lte - a arp , however , we note that it can be employed in the next generation arps following similar principles . in the proposed method , instead of contending with a single preamble in a rao , the devices contend by transmitting a predefined sequence of preambles in a frame composed of several raos , the transmitted sequence of preambles is denoted as the _ device signature_. 
the presented ideas are a conceptual extension of the work , where the devices contend for access by selecting a random signature , generated by combining random preambles over consecutive raos .in contrast , in the method described here , each device contends with a unique signature generated using the international mobile subscriber identity ( imsi ) of the device and its connection establishment cause , in further text referred to as the device s identification .specifically , we apply the bloom - filter principles for signature generation , where the device s identification is hashed over multiple independent hash functions and the resulting output used to select which preamble in which rao to activate . we introduce an analytical framework through which we tune the signature properties ,i.e. , its length and the number of activated preambles , based on the number of expected arrivals and the target efficiency of the use of system resources , denoted as the goodput .we also investigate the expected latency and signature detection probability of the proposed method .finally , we show that , when the arrivals are synchronous , the proposed method outperforms the lte - a connection establishment procedure in terms of goodput , while achieving similar or lower average latency .the rest of the paper is organized as follows .section [ sec : lte_arp ] summarizes the standard arp in lte - a .section [ sec : proposed_contention_modifications ] describes the proposed access method and section [ sub : analytical_performance_model ] presents the corresponding analysis .section [ sec : system_performance_evaluation ] evaluates the performance of the proposed method , comparing it with the reference lte - a procedure for mtc traffic .section [ sec : conclusions ] concludes the paper .a successful lte - a access reservation entails the exchange of four messages , as depicted in fig . [fig : arpcomparison](a ) .initially , a device randomly chooses a preamble to be transmitted in a rao from a set of available preambles generated using zadoff - chu sequences .the preambles are orthogonal and can be simultaneously detected by the bs .we also note that the bs is able to detect a preamble even when it is transmitted by multiple devices , i.e. , a collision in the `` preamble space '' is still interpreted as an activated preamble .this represents a logical or operation , since the preamble is detected as activated if there is _ at least _one device that transmits the preamble .this observation motivates the use of bloom filter , a data structure based on or operation for testing set membership .the devices whose preambles are detected are notified via a random access response ( rar ) in the downlink and assigned a temporary network identifier .the reception of the rar triggers the transmission of the rrc connection request in the allocated uplink sub - frame . at this point ,the bs is able to detect the collision of the multiple connection requests , sent by the devices that originally sent the same preamble .the successfully received connection requests are acknowledged , marking the start of the data transmission phase .on the other hand , the devices whose connection requests collided , do not receive the feedback and either contend again by sending a new preamble or end up in outage when the number of connection attempts reaches the predefined limit . in the rrc connection request ,the device informs the network of its temporary identifier , imsi , and the connection establishment cause . 
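the or - like behaviour of preamble contention within a single rao can be illustrated with a few lines of code ; in the sketch below ( illustrative values only ) each device activates one of m preambles and the bs observes only the logical or of the activations , so collided preambles are indistinguishable from singly activated ones at this stage .

    import numpy as np

    rng = np.random.default_rng(0)

    M = 54        # number of contention preambles per rao (illustrative value)
    N = 30        # number of devices contending in the same rao

    choices = rng.integers(0, M, size=N)              # each device activates one random preamble
    activated = np.zeros(M, dtype=bool)
    activated[choices] = True                         # the bs observes the logical or only

    singletons = int(np.sum(np.bincount(choices, minlength=M) == 1))
    print("preambles detected as active:", int(activated.sum()))
    print("preambles chosen by exactly one device (resolvable later):", singletons)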
from these, the network can confirm if the device is authorized for access , track the device s subscribed services and reestablish the preexisting security context .as already mentioned , the channel over which the devices contend can be modeled as an or multiple access channel ( or - mac ) . by , denote the set of available preambles , where the absence of preamble activation is denoted by the idle preamble .assume that there are devices in total .we model the contention by assuming that the device , , transmits a binary word ,\end{aligned}\ ] ] where bit indicates if the device transmitted preamble .note that only a single entry , , can be set to 1 since a device can only transmit a single preamble in a single rao .the bs observes where denotes a bit - wise or operator and is the detected binary word of device .in particular , the bs detects a transmitted preamble with probability and with probability falsely detects a non - transmitted preamble , which may cause that . in practice, the preamble detection at the bs should ensure that and requirement in corresponds to the single activation of a preamble .when a preamble is activated by multiple devices it is expected that the effective will be higher . ] .finally , every non - zero entry in implies a detection of the corresponding preamble .obviously , in the best - case scenario , the bs can detect up to different devices in a rao .[ fig : lteormac ] the essence of the proposed method lies in the idea of devices contending with combinations of preambles transmitted over raos , denoted as signatures .each preamble of a signature is sent in a separate rao , while raos define a signature frame , see fig .[ fig : lteormac ] . extending the model introduced in section [ sec : lte_arp ] ,the device contends by transmitting its signature , \end{aligned}\ ] ] where the binary words , , follow the structure introduced in .obviously , the number of available signatures is , potentially allowing for the detection of exponentially more contenders compared to the case in which the preambles sent in each of the raos are treated independently and where the maximal number of detected contenders is .similarly to , the bs observes where is the detected version of .the bs decodes all signatures for which the following holds where is the bit - wise and . at this point, we turn to a phenomenon intrinsically related to the proposed contention method .namely , even in the case of perfect preamble detection ( ) and no false detections ( ) , the bs may also decode signatures that have _ not _ been transmitted but for which also holds . in other words , the bs may decode _false positives_. an example of this is shown in fig .[ fig : ormacsignaturetransmissiondetectionexample ] .signatures when and and ( b ) erroneous decoding of a signature which was not present in the original transmission ( and ) . ][ fig : ormacsignaturetransmissiondetectionexample ] the performance of the random signature construction in terms of probability of decoding false positives was first analyzed in , where they are referred to as phantom sequences .on the other hand , there is an extensive work on the construction of or - mac signatures based on the following criterion : if up to -out - of- signatures are active , then there are no false positives .however , these constructions are not directly applicable to the lte - a access , as they would ( 1 ) require that a device sends multiple preambles in the same rao , and ( 2 ) imply rather long signature lengths , i.e. 
, , which implies an increased access latency . inspired by bloom filters , we propose a novel signature construction that uses much lower signature lengths , at the expense of introducing false positives in a controlled manner . in the proposed method , the device signature is constructed in such a way that it provides a representation of the device s identification , which is assumed to be a - priori known to the network . to illustrate how a signature is constructed , we first consider the case where a single preamble is available at each of the raos dedicated to the signature transmission , i.e. , .taking the view of the device , we start with the binary array of length , indexed from to , where all the bits are initially set to .we then activate index positions in this array , i.e. , we set them to ; note that is a predefined constant valid for all devices .this is done by using independent hash functions , , , whose output is an integer value between 1 and , corresponding to an index position in the array , and where is representation of the device identity .the resulting binary array becomes the device signature .this construction follows the same steps as the object insertion operation in a bloom filter .when , the signature construction occurs in two stages .the first stage corresponds to the selection of the active raos using hash functions , , as described previously . in the second stage , for each of the activated raos , a contending device selects and transmits randomly one of preambles .this is performed by hashing the device identity using another set of independent hash functions , , i.e. , a separate hash function for each rao , whose output is an integer between and that corresponds to one of the available preambles .the signature - based access reservation protocol is depicted in fig .[ fig : arpcomparison](b ) , which starts by the devices transmitting their signatures . upon the successful decoding of a signature ,the bs transmits the _ rrc connection setup _ message .in contrast with the lte - a arp depicted in fig . 
[fig : arpcomparison](a ) , the messages 2 and 3 are not required in the signature based access , since the bs is able to determine from the signature the imsi of the device and the connection establishment cause .the protocol concludes with the transmission of the small data payload together with the completion of the rrc connection message .the described signature generation raises two important issues : ( i ) out of hash functions , , there is a probability of that at least two of these functions generating the same output , leading to less than distinct raos active in a signature ; ( ii ) there is a non - zero probability that two or more devices share the same signature , given by ^{-1}\ ] ] and as the total number of devices .the above probabilities can be minimized by increasing the signature length , which is the reason why these issues are commonly ignored within the bloom filter related literature , where is of the order of .although we do not use such large ranges for , we note that for values of and that are used in the performance evaluation in section [ sec : system_performance_evaluation ] , the second probability can be neglected , as in this case .[ alg : bloomfilterinsertion ] * input * : , , , ; + * initialize * : , , output ; + the first issue can be addressed by a signature construction that enforces distinct active raos per signature .we provide in alg .[ alg : bloomfilterinsertion ] a description of a practical signature construction that uses the modulus operation as basis for the hashing .this construction ensures that distinct raos are active per signature , by removing the raos selected in previous iterations from the set of available raos .further , the preambles activated in previously selected raos are removed from the set preambles available for the next iteration .this operation limits the generation of signatures to active raos ; however , this is within the operating range of interest where and allows us to apply probabilistic tools , as presented in the analysis in section [ sub : analytical_performance_model ] , to design the signatures length and number of active raos . 
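a hedged sketch of the signature construction is given below : k distinct raos are selected by hashing the device identification , and one of m preambles is chosen per active rao with a second hash , mirroring the two - stage construction above . a salted sha-256 hash stands in for the unspecified hash functions , the device identifications are hypothetical strings , and the refinement of also removing previously used preambles is omitted for brevity .

    import hashlib
    import numpy as np

    def h(identity, salt, modulus):
        # salted sha-256 hash onto {0, ..., modulus-1}; a stand-in for the paper's hash functions
        digest = hashlib.sha256(("%s:%s" % (salt, identity)).encode()).digest()
        return int.from_bytes(digest[:8], "big") % modulus

    def build_signature(identity, L, M, k):
        # signature as an L x M boolean array: k distinct active raos, one preamble per active rao
        sig = np.zeros((L, M), dtype=bool)
        available_raos = list(range(L))
        for j in range(k):
            rao = available_raos.pop(h(identity, "rao%d" % j, len(available_raos)))
            sig[rao, h(identity, "pre%d" % j, M)] = True
        return sig

    def observe(signatures):
        # bs-side observation: element-wise or of all transmitted signatures
        obs = np.zeros_like(signatures[0])
        for s in signatures:
            obs |= s
        return obs

    L, M, k = 20, 54, 3
    active_ids = ["imsi-%d|cause-data" % i for i in range(40)]      # hypothetical identifications
    sigs = [build_signature(i, L, M, k) for i in active_ids]
    y = observe(sigs)

    # a candidate signature is decoded if all of its active cells appear in the observation
    decoded = [i for i, s in zip(active_ids, sigs) if np.all(y[s])]
    inactive_ids = ["imsi-%d|cause-data" % i for i in range(40, 2000)]
    false_pos = sum(bool(np.all(y[build_signature(i, L, M, k)])) for i in inactive_ids)
    print("true signatures decoded:", len(decoded), "of", len(active_ids))
    print("false positives among", len(inactive_ids), "inactive identities:", false_pos)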
as it will be shown in section [ sec : system_performance_evaluation ] , the proposed signature generation algorithm matches well the derived analytical model .finally , we note that an essential prerequisite for the proposed signature access scheme is that the signature generation algorithm and all the hash functions are known to all devices , including the bs .this can be accomplished via the existing periodic broadcasts that include the network configuration ; an alternative would be to include this information already in the device s subscriber identity module .we analyze a single instance of the contention process , assuming a synchronous batch arrival of devices .we assume that the probability of an arrival of a device is /t ] .the parameters of the proposed scheme are the signature frame size , denoted by , the number of active raos in the signature , denoted by , and the number of preambles per rao that are available for signature construction , denoted by .the first two parameters are subject to design , and we analyze their dimensioning when on average -out - of- signatures are active , such that the false positive rate is below a threshold .in contrast , is assumed to be fixed , which corresponds to the typical scenario in lte - a systems .we start by establishing the relationship between the correctly detected signatures and all detected signatures , which also includes the false positives , after all the contenders have completed step of the proposed method , see fig . [fig : arpcomparison](b ) .we denote this metric as the goodput .in essence , the goodput reflects the efficiency of the subsequent small data transmission , as the bs will also attempt to serve the falsely detected signatures .the expected goodput is = { \mathrm{e}}\left [ \frac{n_\text{a } } { n_\text{a } + p } \right ] \approx \frac{{\mathrm{e}}[n_\text{a } ] } { { \mathrm{e}}[n_\text{a } ] + { \mathrm{e}}[p ] } = \frac{n}{n + { \mathrm{e}}[p]}.\ ] ] where is the number of false positives . fromit follows \leq 1,\end{aligned}\ ] ] as there can be no more than detected signatures .the mean number of false positives ] .the mean number of arrivals is assumed to be known , and the signature based scheme is dimensioned for it .can be estimated , e.g. , using techniques that take advantage of the lte - a arp , such as the one proposed in . ]the probability of preamble detection by the bs is set to and the probability of false detection of a preamble is set to . in the baseline ,i.e. , 3gpp scheme , we assume the typical values for the backoff window of 20 ms and the maximum number of connection attempts . the devices upon becoming active contend for access by activating randomly one preamble in one of the available raos within the backoff interval , i.e. , the batch arrival is spread with the backoff interval . in casethat a device is the only one that selected a given preamble in a given rao and that this preamble has been detected , then the access procedure , as depicted in fig [ fig : arpcomparison](a ) , proceeds until completion . 
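since the exact analytical expressions are not reproduced in this excerpt , the sketch below uses a standard bloom - filter - style approximation ( an assumption , which may differ in detail from the paper's own analysis ) to estimate the false - positive probability and the expected goodput , and to pick the smallest signature frame length that meets a goodput target for given n , t , m and k .

    def false_positive_prob(N, L, M, k):
        # probability that an inactive identity's signature is covered by the or of N active
        # signatures; a standard bloom-filter-style approximation (assumed, may differ in
        # detail from the paper's exact analysis)
        p_cell_active = 1.0 - (1.0 - k / (L * M)) ** N
        return p_cell_active ** k

    def expected_goodput(N, T, L, M, k):
        # goodput ~ N / (N + E[P]), with E[P] ~ (T - N) * p_fp over a candidate population T
        return N / (N + (T - N) * false_positive_prob(N, L, M, k))

    # dimensioning sweep: smallest frame length L that meets a goodput target (illustrative values)
    N, T, M, k, target = 100, 1000, 54, 3, 0.9
    for L in range(k, 80):
        if expected_goodput(N, T, L, M, k) >= target:
            print("smallest frame length:", L,
                  "goodput:", round(expected_goodput(N, T, L, M, k), 3))
            break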
otherwise , the device will reattempt the access within the back - off window after the timer to receive the rar as elapsed .when multiple devices select the same preamble within a rao , the resources assigned by the bs corresponding to the step 3 in the protocol are wasted due to the collided devices ; and the collided devices re - attempt access later by selecting a random rao within the backoff interval .the devices re - attempt access until either successful or until exceeding the allowed number of retransmissions . in the proposed method ,the devices contend by transmitting their signatures , where the signature frame length is obtained from .for the sake of comparison , we also evaluate the performance of the random signature construction , where . each device upon its signature being decoded , even in the case of false positive , receives the feedback rrc connection setup message and is assigned uplink data resources for the transmission of the third and final message , see fig [ fig : arpcomparison](b ) .the performance is evaluated in terms of : ( i ) the average goodput ] is evaluated as the ratio between the successfully used resources and the total resources spent in the third step of both access protocols .it directly relates to the efficient use of resources , since the bs is only able to discern if there is a correctly detected device upon successful completion of the third step . in the baseline scheme ,the system resources are wasted whenever two or more devices select the same preamble within a rao ; the goodput in this case is given as the ratio between the total number of messages that are exchanged successfully and the total number of exchanged messages at the third step , including the failed ones due to collisions .in the case of the signature based access , the wasted resources in the third step occur whenever a false positive signature occurs , and the goodput is given by .$ ] observed with increasing , for the 3gpp scheme , random signature construction and the proposed signature construction .( ) ] [ fig : goodput ] the expected goodput is depicted in fig .[ fig : goodput ] , where for the goodput target for the proposed method is set to .we observe that the proposed method meets the actual goodput meets the design target at higher access loads . on the other hand , at lower , the performance deviates from the target value .this is due to the assumption that the false positive signatures are independently and uniformly generated from the idle signatures , which is the basis of the approximation in .we can also observe that the goodput performance of the proposed method is always superior to the 3gpp scheme . specifically , in the 3gpp scheme the devices re - attempt retransmission upon colliding and until they are either successful or the number of retransmissions is exceeded .each subsequent failed retransmission results in additional wasted system resources , which results in the observed degradation of the baseline goodput with increasing number of active devices .finally , the goodput achieved with the random signature construction is quite low , due to the high number of false positives . 
and minimum computed from , at different stages of the access procedures .( ) ] [ fig : latency ] in fig .[ fig : latency ] we depict the mean latency at step 1 in all schemes , as well as in steps 3 and 5 in the signature and 3gpp schemes , respectively .an important observation is that the latency of the proposed method is always lower than the 3gpp scheme ; and the gap between these two schemes increases for higher .this is a consequence of the more efficient detection of active users , as can be seen when comparing the latency of these two schemes at step 1 .furthermore , the random signature construction has the worst performance , the reason being that a signature can not be decoded before all raos of the signature frame have been received . finally , in tab . [ table : probdetectiontable ]we show the probability of a device being successfully detected at end of the access protocol . herethe proposed method has a slight performance degradation compared to the 3gpp scheme , but this degradation diminishes higher access loads .the 3gpp scheme achieves higher detection performance due to only requiring one transmission out of all preamble retransmissions to be successful , making it more robust but at the cost of lower goodput and higher latency . on the other hand, the random signature construction leads to a very low detection performance , as it requires the successful detection of all the active preambles .following the insights provided by bloom filters , we have introduced the concept of signatures with probabilistic guarantees and applied it to a system model derived from the lte - a access reservation protocol .the most important feature of the proposed method is in allowing the device to be identified already at the access stage .moreover , the method is very efficient in terms of use of the system resources and has a favorable performance in terms of decoding latency . in the paper we assumed that the base station serves the successfully connected devices without preferences .nevertheless , it is straightforward to modify the proposed solution to scenarios in which the bs serves devices based on the identifications inferred from the decoded signatures , i.e. , imsis and/or connection establishment causes . in such cases ,the proposed access method enables differentiated treatment by the bs from the very beginning .finally , we note that in the paper we assessed a simplified scenario of a synchronous bath arrival in order to present the key concepts and the related analysis . tuning the proposed scheme for the other typical models , like the beta arrival model for synchronous arrivals or the poisson arrival model for asynchronous arrivals, is left for further work ..probability of successfully detecting a device [ % ] .( t = 1000 ) [ cols="^,^,^,^,^,^",options="header " , ] [ table : probdetectiontable ]this work was performed partly in the framework of h2020 project fantastic-5 g ( ict-671660 ) , partly supported by the danish council for independent research grant no .dff-4005 - 00281 `` evolving wireless cellular systems for smart grid communications '' and by the european research council ( erc consolidator grant nr .648382 willow ) within the horizon 2020 program .the authors acknowledge the contributions of the colleagues in fantastic-5 g .d. t. wiriaatmadja and k. w. choi , `` hybrid random access and data transmission protocol for machine - to - machine communications in cellular networks , '' _ ieee trans . wireless commun ._ , vol . 14 , no . 1 , p. 3346, jan . 
2015 .g. c. madueno , j. j. nielsen , d. m. kim , n. k. pratas , c. stefanovic , and p. popovski , `` assessment of lte wireless access for monitoring of energy distribution in the smart grid , '' _ ieee journal on selected areas in communications _ , vol .34 , no . 3 , pp .675688 , march 2016 .h. thomsen , n. k. pratas , c. stefanovic , and p. popovski , `` code - expanded radio access protocol for machine - to - machine communications , '' _ transactions on emerging telecommunications technologies _24 , no . 4 , pp .355365 , 2013 .
we present a random access method inspired by bloom filters that is suited for machine - type communications ( mtc ) . each accessing device sends a _ signature _ during the contention process . a signature is constructed using the bloom filtering method and contains information on the device identity and the connection establishment cause . we instantiate the proposed method over the current lte - a access protocol ; however , the method is applicable to a more general class of random access protocols that use preambles or other reservation sequences , as is expected to be the case in 5 g systems . we show that our method utilizes the system resources more efficiently and achieves significantly lower connection establishment latency in the case of synchronous arrivals , compared to the variant of the lte - a access protocol that is optimized for mtc traffic . a further benefit of the proposed method is that it allows the base station ( bs ) to acquire the device identity and the connection establishment cause already in the initial phase of the connection establishment , thereby enabling their differentiated treatment by the bs .
micro - electro - mechanical systems ( mems ) are increasingly finding practical application .friction from solid - solid contacts in bearings and hinges can lead to device failure or breakage , so practical devices have primarily used flexing components , such as cantilevers , rather than rotating shafts and gears , which are common in macro - scale devices .frictional concerns are one of the primary obstacles to using rotating components for mems. in macroscopic devices , air bearings with an actively injected lubricating gas layer or journal bearings with a dynamically maintained lubricant film are used in high - performance applications , but engineering such intricate structures is challenging in mems. the use of surface tension to maintain a liquid lubricant layer is an alternative that is most effective in mems , since surface tension is very significant in microscopic devices . furthermore , surface tension and fluid pressure at these scales can actually be used to passively support a mems bearing , eliminating all solid - solid contact . in such bearings, a liquid , usually water or an aqueous solution , is used between the two solid surfaces .variations in surface wetting ( e.g. hydrophobicity ) are used to pin the liquid in position .the liquid - solid contact is low friction compared to solid - solid contacts , allowing parts to slide with less stiction and lower wear. a variety of geometries for surface tension supported liquid bearings have been investigated . in the simplest case , a single drop of water trapped between identical hydrophilic pads in the center of the bearing ( fig .[ fig : bearing_types]a ) supports the rotating part ( rotor ) above the stationary substrate ( stator). wear is reduced due to the lack of solid contact and concentricity of the pads is well maintained by surface tension , but the tilt stability is poor , as there is little energetic cost to tipping the rotor even to the point of a collision between the rotor and stator . using a ring of water reduces the hydrodynamic drag on the surfaces somewhat by reducing the wetted surface area ( fig .[ fig : bearing_types]b ) , and has increased tilt stiffness. the tilt stiffness can be greatly increased by breaking up the wetting ring into discrete drops and making the rotor superhydrophobic , as in fig .[ fig : bearing_types]c. a superhydophobic surface is a structured surface with a water contact angle greater than 150. since the drops do not wet to the rotor surface , a central drop , which wets to both sides , is added to the center to maintain the relative positions of the surfaces .the decreased contact area between the water and solid can reduce the hydrodynamic drag , and the superhydrophobic contacts provide high lifting forces and reduce titling , but energy loss due to the remaining contact angle hysteresis of the superhydrophobic surface may dominate the sources of drag. the ideal surface tension supported liquid bearing would provide high stiffness in all directions while minimizing all sources of drag . 
in this paper , we investigate a new ring and axle bearing design ( fig .[ fig : bearing_types]d and fig .[ fig : bearing ] ) , which combines several of the best features of other designs .the design uses a central drop , which wets hydrophilic areas on both the rotor and the stator , as an axle .the axle holds the rotor down and keeps the rotor concentric with the stator .a ring of water is wetted to a hydrophilic annulus on the stator , while the surface of the rotor in contact with the ring is superhydrophobic .the ring provides a vertical force to balance that of the axle , and stiffens the bearing with respect to tilting .since the rotor is superhydrophobic where it contacts the ring of liquid , there may be less hydrodynamic drag than in the ring bearing , where the rotor surface in contact with the liquid is hydrophilic. furthermore , since there is no wetting and dewetting in the system as the rotor spins , there are no hysteretic losses and thus the drag is expected to be reduced compared to the drop and axle bearing design .this new ring and axle liquid bearing design is not unconditionally stable , e.g. the ring will actually push the rotor to tilt if it is overfilled with water . here , we examine this system and determine the parametric region of stability . the axle and ring each generally provide two contributions to the forces acting on the rotor : the force due to laplace pressure , which is the result of the pressure difference across the liquid surface , and the force directly due to surface tension of the water acting on the rotor . even in the simplest cases ,calculating the stability and stiffness of the bearing analytically is not trivial , and numerical solution is more practical .we examine the bearings via numerical modeling using energy minimization. performed simulations of our bearings using surface evolver , an open source finite element analysis system .surface evolver minimizes the surface energy of the liquid subject to constraints , which include the size and contact angles of the wetting and non - wetting surfaces , and the volume of liquid in each region .the effect of gravity is also included , using experimental values for the mass of the rotor . only the liquid surface was directly modeled ; the rotor and stator were represented by boundary conditions and constraints .typically , the distance between the rotor and stator is allowed to vary , but the angle of the rotor is fixed in any one calculation .the stiffness of the bearing and the equilibrium tilt angle were calculated by displacing the rotor vertically or tilting the rotor while adjusting the liquid - air interface to minimize energy .the tilt with the minimum energy was recorded as the equilibrium tilt , while the curvature of the energy versus height or tilt was used to calculate the stiffness .an example of a generated liquid surface mesh can be seen in fig .[ fig : compare ] . not all combinations of constraints resulted in physically realizable bearing configurations . in the model , a configurationwas considered unstable if either the center drop separated or the contact of the ring of water on the upper surface vanished at any point during the calculation . 
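the post - processing step described above can be sketched as follows : given the minimized energy at a set of imposed tilt angles ( here synthetic data standing in for surface evolver output ) , a quadratic fit yields the equilibrium tilt as the location of the minimum and the tilt stiffness as the curvature ; the numerical values below are illustrative only .

    import numpy as np

    # synthetic stand-in for surface evolver output: minimized energy (nJ) versus imposed tilt (deg)
    tilt = np.linspace(-1.0, 1.0, 11)
    rng = np.random.default_rng(0)
    energy = 0.5 * 2.55 * tilt ** 2 + 1e-3 * rng.standard_normal(tilt.size)

    # fit E(theta) ~ E0 + 0.5 * kappa * (theta - theta0)**2 with a quadratic polynomial
    c2, c1, c0 = np.polyfit(tilt, energy, 2)
    kappa = 2.0 * c2                      # tilt stiffness (curvature of the energy minimum)
    theta0 = -c1 / (2.0 * c2)             # equilibrium tilt angle
    print("tilt stiffness :", round(float(kappa), 3), "nJ per squared degree")
    print("equilibrium tilt:", round(float(theta0), 4), "degrees")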
for a range of axle and ring volumes ,the model was used to determine if the configuration would be stable and , if stable , the equilibrium rotor height , tilt , and tilt stiffness .to verify the validity of the model , the bearing design was also experimentally tested .we fabricated bearings with the same parameters as the model starting with % pure aluminum and using single point diamond turning ( spdt ) to cut the aluminum to a flat surface ( nm rms roughness ) .where needed , surfaces were made superhydrophobic by a modified porous aluminum oxide ( paa ) growth technique and fluoropolymeric coating composed of spin coated hexamethlydisalizane ( hmds) and solvey plastics hyflon ad-60 , as reported previously. the stator ring and axle hydrophilic areas were patterned by coating a superhydrophobic surface with photoresist and sputter - depositing ti , then performing lift off , to form a mm radius circular hydrophilic region for the axle and an annular hydrophilic region with an average mm radius and a width of mm .spdt was then used to cut out cm diameter rotors from mm thick discs with a superhydrophobic surface produced by the same method .sptd was also used to remove the superhydrophobic structure in a mm radius circle at the center of the rotor to create a matching hydrophilic circle for the axle water to wet .the rotor hydrophilic area was defined by spdt instead of photolithography to guarantee concentricity of the rotor edge and the axle to within the accuracy of the ultra - precision lathe used for spdt ( nm ) .a rotor and stator pair are shown in fig .[ fig : bearing ] .experimentally , the bearings were photographed in profile over a range of center and ring liquid volumes ( examples shown in fig . [ fig : failure ] ) .the parts were allowed to completely dry between tests to minimize volume uncertainties , and the ring and axle hydrophilic regions were filled with water just before placement to minimize evaporation . to achieve full wetting of both axle padsreliably with such small volumes of water , half of the volume had to be placed on each side .rotors were handled by vacuum tweezers from the back side to ensure that the superhydrophobic surfaces were not damaged .after placement of the rotor , the entire bearing was carefully rotated to ensure that the image was taken in profile perpendicular to the tilt .image analysis was then used to find the height and angle of tilt . in both experiments and models, we find that there are several possible sources of instability .[ fig : compare ] shows the results of experiments and models with the same range of axle and ring volumes .if the axle has insufficient liquid relative to the ring , the liquid on the rotor and stator will not join to form an axle or will re - separate due to the lift from the ring with most of the water on one side , and the rotor will not remain centered ( fig . [fig : failure]a ) ; this leads to the loss of stability seen in the lower right area in each plot in fig .[ fig : compare ] .if the liquid in the ring is insufficient relative to the axle , then the rotor is lifted partially off of the ring and tips to one side , since balancing on the center drop alone is unstable ( fig .[ fig : failure]b ) ; this leads to the loss of stability seen in the upper left area in each graph in fig .[ fig : compare ] . 
finally , if the ring has too much water in it , it bulges asymmetrically , again resulting in a tilted ( and often off - center ) rotor ( fig .[ fig : failure]c ) ; this requires a volume in the rings such that they have more than a semi - circular crossection , which is larger than the maximum ring volume shown in fig .[ fig : compare ] .there is an exact match between the predicted and measured regions of stability .this shows that a simple model can predict the stable range of parameters for the bearing .the experimentally measured and numerically modeled rotor heights show reasonable agreement .both show increasing height of the rotor with increasing volume in either the axle or the ring .quantitatively , the results agree to within % with no free parameters in the model .the results for the rotor tilt show less agreement between the experiments and model .the model predicts effectively no tilt in the stable region , while experimentally measurable tilt is observed , particularly near the high ring volume and low axle volume boundary of the stable region .this experimental tilt seems to fluctuate without any clear trend , and is not highly repeatable .it is likely that experimentally this results from the assembly process of the bearing , where the bearing is not guaranteed to start off with zero tilt ; i.e. near the edge of the stable region , there may be some hysteresis in the tilt .with some evidence that the model is accurate , we can then predict the stiffness of the bearing with respect to tilt ( vertical stiffness is not generally difficult to achieve , and thus is not analyzed ) .the results , calculated from the dependence of the energy of the system on tilt , are shown in fig .[ fig : compare ] .using the same calculation method , for bearings with the same thickness ( 0.38 mm ) , a model drop bearing ( fig [ fig : bearing_types]a ) has a spring constant of 0.32 nj / degree ( drop covers the same area as the full ring and axle bearing ) , a ring bearing ( fig .[ fig : bearing_types]b ) has a spring constant of 0.34 nj / degree ( compared to the full ring and axle bearing , the simulated ring bearing has an inner radius equal to the radius of the axle and an outer radius equal to the outer radius of the ring ) , and the new ring and axle bearing ( fig .[ fig : bearing_types]d ) has a spring constant of 2.55 nj / degree , which represents a significant improvement .the tilt stiffness of ring and axle bearings is calculated to generally increase with decreasing volume of liquid in either the axle or ring .it is likely that having less liquid in the ring directly increases the stiffness , while less liquid in the axle pulls the rotor closer to the stator and compresses the ring , also increasing the bearing stiffness .the surface tension supported ring and axle bearings reported have significant potential for use in mems due to their low frictional losses and complete lack of solid on solid contact .these results demonstrate that these devices are stable over a large range of ring and axle volumes , and that this region can be modeled using energy minimization , simplifying design and experimental work with these devices .it also shows that these devices have a higher tilt stiffness for a given size than alternative designs , which is critical since the the tilt stiffness is typically low in surface tension supported bearings .furthermore , due to the superhydrophobic rotor surface and absence of contact angle hysteresis , ring and axle bearings could have lower drag than 
previous designs .a common objection to water - based liquid bearings is that , while promising from a mechanics standpoint , the evaporation of the liquid water makes them of limited practical use .although the testing presented here was performed with pure water , we have also tested longer term use of a saturated water - cacl solution on superhydrophobic surfaces with patterned hydrophilic regions .this solution has similar surface tension to pure water and since cacl is deliquescent , it can actually pull moisture from the air , rather than allowing the water to evaporate . in practice , this solution was stable over a period of at least 2 years under ambient conditions , and shows no signs of evaporation or corrosion of the underlying surface .we would like to thank dr .kenneth brakke at the mathematics department of susquehanna university , the creator of surface evolver , for his assistance with understanding and optimizing our simulation .m. l. chana , b. yoxalla , h. parka , z. kangb , i. izyuminb , j. choub , m. m. megensb , m. c. wub , b. e. boserb , and d. a. horsleya , `` design and characterization of mems micromotor supported on low friction liquid bearing , '' _ sensors and actuators a : physical _ , vol .177 , pp . 19 , 2012 .s. chu , k. wada , s. inoue , m. isogai , y. katsuta , and a. yaumori , `` large - scale fabrication of ordered nanoporous alumina films with arbitrary pore intervals by critical - potential anodization , '' _ journal of the electrochemical society _ , vol .b384b391 , 2006 .n. tasaltin , d. sanli , a. jon , a. kiraz , and c. erkey , `` preparation and characterization of superhydrophobic surfaces based on hexamethyldisilazane - modified nanoporous alumina , '' _ nanoscale research letters _ , vol . 6 , no . 487 , 2011 .
friction between contacting solid surfaces is a dominant force on the micro - scale and a major consideration in the design of mems . non - contact fluid bearings have been investigated as a way to mitigate this issue . here we discuss a new design for surface tension - supported thrust bearings utilizing patterned superhydrophobic surfaces to achieve improved drag reduction . we examine sources of instability in the design , and demonstrate that it can be simply modeled and has superior stiffness as compared to other designs . superhydrophobicity , porous anodized aluminum , bearings , non - contact
shape is a fundamental property of an object that influences its interaction with the environment and often determines the object s functional capabilities .understanding how to generate and control shape by modifying the environmental conditions is of primary importance in designing systems that respond to external clues .we show here that electrostatic interactions can be used to change the equilibrium shape of soft , nanometer - sized shells .we find that a uniformly - charged , spherical shell undergoes shape changes , transforming into ellipsoids , discs , and bowls , as the electrolyte concentration in the environment is decreased .this electrostatics - based shape design mechanism , regulated by varying properties external to the shell , can be used to build efficient nanocontainers for various medical and technological applications .+ iological matter in cells is often compartmentalized by elastic membranes that take various shapes such as blood cell membranes , organelles and viral capsids .these biomembranes are highly optimized to perform specific functions .a key focus of current biomedical technologies is to engineer synthetic materials that can match the performance and structural sophistication displayed by natural entities .mimicking key physical features of biomembranes , including shape , size , and flexibility , is a crucial step towards the design of such synthetic biomaterials .recent findings also indicate that the shape of a drug - carrier nanoparticle directly influences the amount and efficiency of drug delivery .the shape and deformability of soft materials such as colloids , emulsions , hydrogels or micelles play an important role in determining their usefulness in various technological applications as well .for example , colloidal self - assembly is governed to a large extent by the shape of individual colloids .similarly , controlling the shape and size of reverse micelles is of key importance in their use as solvent extraction systems for removing rare - earth metals from aqueous solutions or as templates for nanoparticle synthesis .shape transformations in materials are engineered via chemically - induced modifications or using techniques such as photoswitching of membrane properties and controlled evaporation of the enclosed solvent . however , generating desired material shapes with precision and manipulating them with relative ease at the nanoscale has been a challenge . from the theoretical standpoint ,much attention has been focused on finding the low - energy conformations of flexible materials , modeled often as soft elastic membranes , in the hope of suggesting superior experimental systems that can enable the design of nanostructures .examples include the exploration of shape transitions driven by topological defects or compression , and the study of low - energy conformations of multicomponent shells .changing the shape of an elastic shell entails bending and stretching it and the associated energy costs form the components of the elastic free energy of the shell .however , when the shell is charged , it is possible to compensate for the increase in elastic energy associated with the shape deformation if the latter is accompanied with a significant lowering of the electrostatic free energy .previous studies on charged , soft membranes mainly focused on mapping a charged elastic shell to an uncharged elastic shell with charge - renormalized elastic parameters . 
in the case of charged nanoshells , the electrostatic screening length is comparable to the shell dimensions and the surface charge density can assume high values . as a result , shell models where coulomb interactions are included explicitly are needed . using such models , it has been shown that an ionic shell , where positive and negative charges populate the surface , lowers its energy by taking an icosahedral shape with the same surface area . in this work , we find that a uniformly - charged , spherical elastic shell , when constrained to maintain the enclosed volume , can lower its free energy by deforming into smooth structures such as ellipsoids , disks , and bowls ( see fig . [ fig1 ] ) . we show that the transition to these nonspherical shapes can be driven by varying environmental properties such as the electrolyte concentration in the surrounding solvent . in order to include the non - linear coupling between the shape of the shell and its electrostatic response self - consistently , we study the soft , charged nanoshells numerically . we model the charged shell by a set of discrete points placed on a spherical membrane , forming a mesh consisting of vertices , edges and faces , recognizing that in the limit of a large number of vertices , the discretized elastic membrane recovers the physics of the associated continuum model ( see _ materials and methods _ for details ) . the uniform surface charge density is simulated by assigning the same charge to every vertex . we work with elastic parameters such that the uncharged shell assumes a spherical shape at equilibrium . we allow only the deformations that preserve the shell s total volume , the latter being chosen to be that of the uncharged conformation . our model is applicable to monolayers , such as emulsions or reverse micelles where nanodroplets of oil or water are surrounded by properly polymerized charged surfactant molecules , and also to incompressible bilayer systems and nanocontainers that do not exchange material with their environment . in the following sections , we provide evidence that this minimal model reproduces various shapes observed experimentally . furthermore , we test the validity of this electrostatic model and associated simulation results by providing analytical solutions in limiting cases , namely by computing the electrostatic energy of oblate spheroidal shells and comparing it to that of a sphere of the same volume in salt - free conditions . effects of ion condensation are then included via a two - state model to derive the renormalized charge on the spherical and spheroidal shells in order to test the robustness of our results . using the discretization of the continuum expression for the elastic energy introduced in ref . , we write the free energy associated with the discretized shell as
$$
\frac{\kappa}{2} \sum_{l\in \mathrm{E}} \left|\mathbf{n}_{l,1} - \mathbf{n}_{l,2}\right|^{2}
+ \frac{k}{2r^{2}} \sum_{l\in \mathrm{E}} \left(\left|\mathbf{r}_{l,1} - \mathbf{r}_{l,2}\right| - a_{l}\right)^{2}
+ \frac{l_{\mathrm{B}}\, z^{2}}{2} \sum_{i,j\in \mathrm{V}} \frac{e^{-|\mathbf{r}_{i} - \mathbf{r}_{j}|/\lambda_{\mathrm{D}}}}{|\mathbf{r}_{i} - \mathbf{r}_{j}|} ,
$$
where the free energy is measured in units of $k_{\rm B}T$ .
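to make the three terms above concrete , here is a minimal numpy sketch of their evaluation on a small triangulated mesh ( the symbols are spelled out in the next paragraph ) . the octahedron , parameter values and function name are purely illustrative and are not the mesh or parameters used in the simulations ; faces are assumed to be consistently oriented with outward normals .

```python
import numpy as np
from itertools import combinations

def shell_free_energy(verts, faces, kappa, k_spring, radius, rest_len, lB, z, lambda_D):
    """Bending, stretching and screened-Coulomb terms of the discretized shell energy (in k_B T)."""
    def unit_normal(f):                       # outward unit normal of a triangular face
        a, b, c = verts[f[0]], verts[f[1]], verts[f[2]]
        n = np.cross(b - a, c - a)
        return n / np.linalg.norm(n)

    # map each edge (i < j) to the indices of the two faces that share it
    edge_faces = {}
    for fi, f in enumerate(faces):
        for e in combinations(sorted(f), 2):
            edge_faces.setdefault(e, []).append(fi)

    bend = stretch = 0.0
    for (i, j), fs in edge_faces.items():
        n1, n2 = unit_normal(faces[fs[0]]), unit_normal(faces[fs[1]])
        bend += 0.5 * kappa * np.dot(n1 - n2, n1 - n2)                # kappa/2 |n1 - n2|^2 per edge
        d = np.linalg.norm(verts[i] - verts[j])
        stretch += 0.5 * k_spring / radius**2 * (d - rest_len[(i, j)])**2

    coulomb = 0.0                             # (l_B z^2 / 2) double sum == sum over unique pairs
    for i, j in combinations(range(len(verts)), 2):
        r = np.linalg.norm(verts[i] - verts[j])
        coulomb += lB * z**2 * np.exp(-r / lambda_D) / r
    return bend, stretch, coulomb

# toy usage: unit octahedron with rest lengths equal to the initial edge lengths
verts = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)
faces = [(0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4), (2, 0, 5), (1, 2, 5), (3, 1, 5), (0, 3, 5)]
rest = {tuple(sorted(e)): np.linalg.norm(verts[e[0]] - verts[e[1]])
        for f in faces for e in combinations(f, 2)}
print(shell_free_energy(verts, faces, kappa=5.0, k_spring=100.0, radius=1.0,
                        rest_len=rest, lB=0.7, z=1.0, lambda_D=3.0))
```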
in the expression above , $T$ is the room temperature and $k_{\rm B}$ is the boltzmann constant . we make the free energy dimensionless by expressing it in units of $k_{\rm B}T$ and by introducing a dimensionless bending rigidity , proportional to the bending rigidity of the continuum model , and a dimensionless spring constant , proportional to the 2d young s modulus of the continuous elastic membrane , with the spherical shell radius setting the length scale . we employ the dimensionless bending rigidity $\kappa$ and the spring constant $k$ as the scale for bending and stretching energies respectively . in the equation , $\mathrm{E}$ and $\mathrm{V}$ denote the set of all edges and vertices respectively , and $\mathbf{r}_{i}$ is the position vector of the $i$-th vertex . the first term on the right - hand side is the bending energy , with $\mathbf{n}_{l,1}$ and $\mathbf{n}_{l,2}$ being the normal vectors to the faces adjacent to edge $l$ . the second term is the stretching energy , with $\mathbf{r}_{l,1}$ and $\mathbf{r}_{l,2}$ being the position vectors of the vertices corresponding to edge $l$ , and $a_{l}$ the rest length of edge $l$ . the last term is the ( dimensionless ) electrostatic energy of the model membrane . we consider an aqueous environment containing electrolyte whose presence is taken into account implicitly , leading to screened coulomb interactions between each vertex pair . here , $l_{\rm B}$ denotes the bjerrum length in water , $\lambda_{\rm D}$ is the debye length and $z$ is a dimensionless charge associated with each vertex . we assume a uniform dielectric in order to simplify the computations , thus ignoring any induced charge effects . as is evident from the equation , the free energy is a function of the set of vertex position vectors , which also parametrizes the shape of the shell . the equilibrium shape of the shell is the one that corresponds to the minimum of the free energy subject to the constraint of fixed enclosed volume . we perform this constrained free - energy minimization using a molecular dynamics ( md ) based simulated annealing procedure , details of which are provided in _ materials and methods _ . the uncharged elastic shell conformation in all our simulations is a sphere of radius nm . we discretize the sphere with points , generating a nearly - uniform distribution with an average edge length of nm . we fix the elastic spring constant in all simulations . this value corresponds to per which , for the bending rigidities under investigation , leads to shells characterized by a föppl - von kármán number ( ) in the range . we consider a monovalent electrolyte with concentration . the debye length is known via the relation nm . the concentration thus parametrizes the spatial range of coulomb interactions . in our simulations , we tune the concentration such that this range varies from , in which case the shell mimics the behavior of an uncharged elastic shell , to , which corresponds to the case where most charges feel each other . [ figure 1 caption : snapshots of minimum - energy conformations of charged elastic nanoshells for three different bending rigidities 1 , 5 , 10 ( columns , left to right ) ; in each column the electrolyte concentration ( m ) decreases from top to bottom , with red the highest concentration under study and purple the lowest . as the concentration is lowered , the range of electrostatic interactions increases , and softer shells tend to form bowl - like structures while more rigid vesicles form ellipsoidal and disk - like shapes ; all nanostructures have the same total surface charge and volume , fixed to the values associated with the spherical conformation . ] in fig .
[ fig1 ], we show the change in the shape corresponding to the minimum of the shell free energy as is varied . here , we set , which is equivalent to a shell surface potential of mv .each column represents the shapes obtained for a fixed value of , with the latter increasing from left to right assuming the values 1 , 5 , 10 . within each column, decreases from top to bottom as 1 , 0.1 , 0.05 , 0.005 m. this range of concentration covers most biological and synthetic conditions .we see that the top row ( m ) is comprised of spherical shapes . at m, the screening length is very small ( ) and hence the electrostatic forces only come into effect at extremely short distances , resulting in a nearly vanishing contribution to the overall free energy .this leads to conformations that resemble the shape of the uncharged elastic shell which is spherical .however , as is lowered , transitions to a variety of nonspherical shapes are observed . in case of the most flexible charged shell ( fig .[ fig1 ] , left column ) , increasing the range of the electrostatic interactions leads to the formation of concave structures , hereafter referred to as bowls .the opening of the bowl widens with decreasing . for a shell with a higher bending rigidity ( middle column ) , as is lowered ,the shell first assumes a convex , ellipsoidal shape , then a bi - concave , disc - like structure , and finally the shell deforms into a bowl .we note the similarity between the bi - concave discs we obtain and the shape of synthetic red blood cells , despite the differences in their respective physical origins and sizes .the rightmost column shows the results for the most rigid membrane under study . due to the high energy penalty associated with bending ,the shell remains spherical even at m. however , upon further lowering of , we first witness an ellipsoidal shape and then a flattened disc - like structure at m. it is worth noting that the discs and bowls we obtain , closely resemble the shapes of elastic structures in ref . that are synthesized using light as a tool to engineer shape .shell shapes that minimize free energy for fixed and m as a function of increasing 0.3 , 0.6 , 1 ( from left to right ) . as increases , the strength of the electrostatic interactions increase and the shell transforms from a convex , ellipsoidal form to a dimpled disk and finally to a concave bowl - like structure .all shapes correspond to the same total volume .see text for the meaning of symbols . ]next , we study the effects of modulating the strength of coulomb interactions on shell shape . in fig .[ fig2 ] we show snapshots of minimum - energy shell conformations when we vary the parameter keeping the flexibility of the shell and the salt concentration in the environment constant ( and m ). changing corresponds to simulating shells with different total charge on the surface .the shapes from left to right correspond to the values of 0.3 , 0.6 , 1 .we find that at , the shell assumes a convex ellipsoidal shape .as is increased to 0.6 , the ellipsoid deforms into a dimpled disk and finally at , the bowl structure is obtained . the transition to nonspherical shapes is accompanied by a decrease in the electrostatic energy . in fig .[ fig3 ] ( top half ) , we plot , the total coulomb energy of the final structure relative to that of the spherical shell with identical parameters .the data for is shown as a function of for various values of 0.3 , 0.6 , 1 and 1 , 10 . in all cases, is negative . 
for convex shapes ( spheres and ellipsoids ) , represented by black symbols, is small . on the other hand , for discs and bowls , represented by blue and red symbolsrespectively , the reduction in electrostatic energy is more pronounced . in general , as the concentration is lowered , the behavior of suggests that the spherical shell deforms to an ellipsoid , then to a disk and finally to a bowl .we find that the nonspherical shapes have a larger surface area relative to the spherical conformation ( see fig .[ fig3 ] , bottom half ) .we expect this to be the case as for a given fixed volume , sphere has the lowest surface area .we find in some cases , the minimum - energy structure has twice the surface area of a sphere with same volume .though a more general model of the elastic shell would include an energy penalty associated with increasing the surface area , we expect the shape changes to occur in situations where the surface energy increase due to the rise in area is compensated by the adsorption of molecules ( such as neutral surfactants ) to the membrane , thereby reducing its surface tension . using the data in fig .[ fig3 ] we estimate that the shell surface tension should be low , dyne , for the aforementioned predicted shapes to be realized . in fig .[ fig4 ] , we show the distribution of local electrostatic and elastic energies on the disc ( top two rows ) and bowl ( bottom two rows ) .the disc corresponds to the case of , , m and the bowl shape is characterized by , , m. the electrostatic energy at a vertex is computed by summing over the screened coulomb interactions of the charge at that vertex with all other charges on the shell . as the scalebars on the right point out , the electrostatic energy is the dominant of the two energies and drives the shape formation , with the elastic energy adapting locally to conform to the new shape . for both disc and bowl, the local elastic energy ( second and fourth row ) has large spatial variations and tends to be higher on the more bent regions of the nanoshell . for the disc shape ,the coulomb energy ( first row ) is higher near the center .this is , in part , due to the enhanced repulsion resulting from the proximity of the opposite faces which are at a distance less than the debye length associated with this system .electrostatic contribution to the energy of the shell ( top plot ) and shell s surface area ( bottom plot ) vs salt concentration for different lowest - energy structures .we plot the electrostatic energy , , which is measured relative to that of a spherical shell with identical parameters .similarly , the area of the shell is normalized by the area of a sphere with the same volume .black symbols are spheres or ellipsoids ; blue symbols are discs ; red symbols are bowl - shaped structures .the inset shows the legend for the symbols used in the plot .the large ( negative ) changes in coulomb energy help drive the shape transitions . ]increasing the range or strength of electrostatic interactions enhances the coulomb repulsion between any two charged vertices , making them move apart .however , the resulting extension in edge lengths is penalized by the rise in the stretching energy . 
in addition, the bending energy term penalizes any sharp changes in curvature , thus favoring transitions to smooth shapes .this competition between the electrostatic and elastic energies sets an effective area for the nanomembrane which in conjunction with the fixed - volume constraint determines the eventual shape of the nanoshell .varying the screening length or the total charge on the shell changes this effective area , leading to variations in the shell shape .spatial distribution of electrostatic and elastic energies ( in units of , where is the room temperature ) on the surface of the disc ( top two rows ) and bowl ( bottom two rows ) . left column : front view , middle column : angle view , right column : side view .for either shapes , the elastic energy ( second and fourth row ) is concentrated in the edges .the electrostatic energy on the disc ( first row ) is higher in the center where the opposite faces are nearby ( first row ) .the five - coordinated vertices , which are visible as spots in the electrostatic energy distribution , lead to small fluctuations in the energy . ] to substantiate the above explanation , we focus on the sphere - to - ellipsoid - to - disc part of the observed shape transitions and perform analytical calculations .judging by the simulation snapshots ( see the images in fig .[ fig1 ] , rightmost column ) , these shapes can be approximated as oblate spheroids with different degrees of eccentricity and major semiaxis lengths .since the volume of the shell is fixed , the oblate spheroidal shell can be characterized by a single parameter . for ,one obtains sphere - like shapes and leads to disc - like conformations .the competition between elastic and electrostatic energies can now be considered as determining the eccentricity for the oblate spheroid .the concentration is seen as the control over such that the lowering of can be understood as an increase in .thus , we can verify the order of shape transitions observed in our simulations by examining the change in the electrostatic energy of a uniformly - charged shell as its eccentricity is increased . for simplicity, we consider unscreened coulomb interactions in the following calculations .we evaluate the electrostatic energy of a uniformly - charged oblate spheroidal shell with total surface charge and with volume constrained to ( derivation in _ si text _ ) .we obtain : where is an even integer , and are legendre polynomials of first and second kind , , and is evaluated relative to the thermal energy at room temperature .we define as the electrostatic energy of the oblate spheroid relative to the electrostatic energy of the sphere with identical parameters .we examine the variation of vs for the parameter set associated with the transition recorded in the open blue circles of fig . 3 and find that the coulomb energy of an oblate spheroidal shell subject to the constraint of constant volume decreases with increasing its eccentricity ( see fig . 
s2 ) .in other words , a disc - shaped shell has lower coulomb energy than a sphere of the same volume .the order of shape transitions observed in our simulations is thus backed by the above analytical result .next , we examine the spatial distribution of the local electrostatic energy on the surface of the shell ( see _ si text _ for details ) .we find that for a spherical shell , is constant everywhere .however , as the eccentricity increases , the surface distribution of electrostatic energy becomes increasingly inhomogeneous .in particular , for , which corresponds to a disc - like shape , we find varies significantly on the disc surface , assuming higher values near the disc center and low magnitude near the edge of the disc ( see fig .it is evident from the top row of fig .[ fig4 ] that we observe this trend in our simulation results as well .we obtain more insight into our results by exploring the low - energy conformations of a very flexible uniformly - charged shell where the elastic energy can be neglected in comparison with the coulomb energy .equilibrium shapes of such a shell will correspond to the minimum of the total electrostatic energy . in eq ., taking the limit gives , which is the lowest possible value for the coulomb energy of a uniformly - charged shell .this limit corresponds to a disc - like spheroidal shell whose area approaches infinity .further , we check that when the enclosed volume is held fixed , the coulomb energy of a prolate spheroidal shell vanishes as well when the shell is stretched into a long and thin wire - like shape .thus , we obtain ( at least ) two distinct shell shapes that correspond to the state of lowest electrostatic energy .this result suggests that in our original model system , electrostatic interactions drive the transformation in the shell shape by favoring the deformation of sphere towards disc - like shapes , while the elastic energies compete with the coulomb energy to generate oblate - shaped ( ellipsoidal , disc - like ) structures of various eccentricities .it also appears that the elastic energy component of the free energy favors the formation of oblate shapes to prolate ones .the constraint of fixed enclosed volume is critical to the low - energy shell conformations obtained in our simulations . if instead of the volume, the shell surface area is fixed , we expect the gallery of lowest free - energy conformations to look different than fig .we check that under the constraint of fixed area , the coulomb energy of an oblate spheroidal shell is higher when its eccentricity increases , and the spherical shape corresponds to the conformation with the lowest coulomb energy among all oblate shapes . however , sphere is _ not _ the configuration that minimizes the shell electrostatic energy when prolate - shaped deformations are considered .we find that prolate spheroids of high eccentricities have lower coulomb energy than the sphere and the lowest - energy conformation for the area - constrained system is a prolate spheroidal shell with its major - axis length stretched to infinity .hence , for the area - constrained problem , we expect the competition between coulomb and elastic energies to give rise to different nonspherical shapes as ground - state solutions . 
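as a rough numerical cross - check of the statement that , at fixed volume , flattening a uniformly - charged shell lowers its unscreened coulomb energy , one can monte - carlo sample the surface charge distribution and average the inverse pair distance ( gaussian units , unit total charge ) . the sampling scheme , point count and aspect ratios below are illustrative and are not the analytical calculation of the _ si text _ .

```python
import numpy as np

def coulomb_energy_mc(a, c, q_total=1.0, n=1500, seed=1):
    """MC estimate of the unscreened Coulomb self-energy of a uniform surface
    charge on an oblate spheroid with semi-axes (a, a, c), in Gaussian units."""
    rng = np.random.default_rng(seed)
    pts = []
    # surface area element: dA ~ sin(t) * sqrt(a^2 cos^2 t + c^2 sin^2 t) dt dphi
    while len(pts) < n:
        u = rng.uniform(-1.0, 1.0)                                  # u = cos(theta)
        accept = np.sqrt(a**2 * u**2 + c**2 * (1 - u**2)) / max(a, c)
        if rng.uniform() < accept:
            phi = rng.uniform(0.0, 2.0 * np.pi)
            s = np.sqrt(1.0 - u**2)
            pts.append([a * s * np.cos(phi), a * s * np.sin(phi), c * u])
    pts = np.asarray(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    inv_r = 1.0 / d[np.triu_indices(n, k=1)]                        # distinct pairs only
    return 0.5 * q_total**2 * inv_r.mean()                          # U = (Q^2/2) <1/r>

R = 1.0                                       # exact value for the sphere: Q^2/(2R) = 0.5
for ca in (1.0, 0.6, 0.3):                    # aspect ratio c/a at fixed volume a^2 c = R^3
    a, c = R * ca ** (-1.0 / 3.0), R * ca ** (2.0 / 3.0)
    print(f"c/a = {ca:.1f}  ->  U ~ {coulomb_energy_mc(a, c):.4f}")
```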
in our charged shell model , we assume that the counterions remain in the bulk and do not condense on the shell surface .however , in an experiment it is possible that a fraction of the counterions do condense and in that event it becomes important to analyze their effect on the observed shape transitions .we measure this effect qualitatively in the salt - free limit for the sphere - disc transition by employing the expression for the electrostatic energy of a uniformly - charged oblate shell in a two - state model of free and condensed counterions .we consider a spherically - shaped wigner - seitz ( ws ) cell of volume with a single shell of volume and surface charge placed at its center .we define the quantity as the shell volume fraction .the cell also contains counterions , each of charge to neutralize the shell charge .we separate the counterions into two distinct groups : free ions and condensed ions .free ions occupy the available space in the ws cell which in the dilute limit becomes the volume of the cell .the condensed counterions are restricted to have translational motion in a thin layer of volume surrounding the shell , where is the surface area of the oblate shell and is the gouy - chapman length that is chosen as the condensed - layer width . here, is the unrenormalized surface charge density .when a shell has a higher or the system is characterized by a longer , we expect the condensed - layer width to shrink owing to the enhanced counterion - shell attraction . our choice of as the layer thickness correctly reflects this behavior .as is a characteristic of the charged planar surface , our analysis is limited to the regime where is much smaller than the lengths of the major and minor semiaxis of the shell .we write the free energy ( in units of ) associated with the shell in the event of ion condensation as : where is the fraction of counterions that condense and is the thermal de broglie wavelength . here, the first term is the electrostatic energy of the shell obtained from eq . by replacing with the reduced charge , the next two terms stem from the entropic contribution of the condensed ions , and the last two terms correspond to the entropy of free counterions . can be considered as a function of two variables : eccentricity , which characterizes the shape of the shell , and condensate fraction , which measures the renormalized charge on the shell . for a given , we find the condensate fraction that extremizes the above free energy .using , we evaluate the equilibrium free energy difference , , between the free energy of the oblate shell and that of the sphere of the same volume ( see _ si text _ for details ) .we compute for the parameters associated with the transition recorded in the open blue circles of fig .[ fig3 ] and find that for all values of the volume fraction , becomes increasingly more negative as the eccentricity is raised , implying that the shape transitions from sphere to oblate spheroids are favored ( see fig .s4 ) . additionally , we find that the condensate fraction decreases with increasing for all values of . for low , we obtain while for large , we find the condensate fraction to be .regardless of the amount of condensation , we find the shell with higher eccentricity is preferred energetically .we next examine the variation of the renormalized electrostatic energy with for different values . 
for low and high values of the volume fraction , we find that is negative and decreases , just like , upon the increase of the eccentricity ( see fig .s5 ) . however , for some intermediate values , we observe that , that is the electrostatic energy increases as is raised , in sharp contrast to the free energy associated with the shell .this suggests that for some values of shell volume fractions , the shape transitions are expected to occur despite an increase in the electrostatic energy .we attribute the feasibility of such transitions to the gain in entropy by the ions as less number of ions condense when the shape is deformed from a sphere to an oblate .the main conclusions reached above remain unchanged when we repeat the two - state model analysis assuming that the shell is an equipotential surface . for all values of , increasingly becomes more negative as the eccentricity is raised , implying that the shape transitions from sphere to oblate spheroids are favored ( see fig .thus , judging by the variation of determined by the above two - state model analysis , we conclude that the shape transitions from sphere to oblates of increasing eccentricity should be feasible in the event of ion condensation .however , due to the renormalization of the charge on the shell surface , it is likely that the specific parameter values ( for example , concentration strength , bending rigidity ) for which the shape transitions occur , will change .quantitative results can be obtained by including counterions explicitly in the simulations and taking into account the induced polarization charges on the shell surface in analyzing changes in shape .we note that our md - based simulation algorithm provides an ideal platform to include these effects via its coupling with recently introduced energy - functional - based approaches of treating dielectric inhomogeneities .we investigate the prospects of electrostatics - based generation and control of shapes in materials at the nanoscale . we find that by increasing the strength or the range of coulomb interacting potential , a uniformly - charged spherical shell , constrained to maintain its volume , deforms to structures of lower symmetry , resulting in ellipsoids , discs , and bowls .this symmetry breaking is accompanied with a reduction in the overall electrostatic energy of the shell and a significant spatial variation in the local elastic energy on the shell surface . to support our simulation findings ,we show analytically that a uniformly - charged disc - like spheroidal shell has a lower coulomb energy than a spherical shell of the same volume . in order to evaluate the renormalization of shell charge due to non - linear effects, we use a two - state model of free and condensed ions .we find that the shape transitions are feasible in the event of ion condensation for a wide range of shell volume fractions .shape changes in our model membrane are triggered by changing the attributes of the environment external to the membrane , such as the electrolyte concentration in the surrounding solvent .this is in contrast with transitions brought about by patterning the shell surface with defects or by introducing elastic inhomogeneities on the shell surface . 
in comparison with ionic shells , where the primary experimental challenge is to synthesize membranes with desired stoichiometric ratios ,our base shell surface is uniformly charged and elastically homogeneous , which is relatively simple to design .we envision that the electrostatics - driven shell design mechanism proposed here can function as a useful template for synthesizing nanoparticle - based drug delivery carriers of desired shapes .our results can also prove useful in the analysis of shape changes in charged emulsions or reverse micelle systems that form during the metal - extraction processes involved in the recovery of scarce rare - earth elements or cleaning of nuclear waste .in addition , our findings can aid in the development of theories explaining the properties of stretchable electronic materials such as dielectric elastomers where electrostatic field and deformation are intimately coupled .we generate the triangulation on the shell via the caspar and klug construction , which produces a lattice where each point has six neighbors with the exception of 12 five - coordinated vertices ( defects ) . due to the presence of these defects, the lattice has a non - vanishing initial stretching energy .we remove this residual strain by appropriately choosing rest lengths of the edges , leading to a vanishing stretching energy for the initial mesh .the defects , however , lead to slight variations in the surface charge density and local elastic and electrostatic energies . by choosing a large number of lattice points , the effect of these small deviations on the resulting shape transformationsis minimized . in order to make sure that our results are independent of the particular triangulation, we perform simulations employing sufficient number of lattice points generated via different choices of caspar and klug constructions , obtaining similar results for all runs . our lattice maintains the initial connectivity throughout the shape evolution .since each vertex carries a charge of the same sign , our discretized membrane is characterized with an inbuilt self - avoidance due to the mutual electrostatic repulsion between any pair of vertices .however , to ensure complete stability , we include an additional short - range , purely repulsive lennard - jones potential between two vertices , where each vertex is modeled as a hard sphere with radius chosen to be a fraction of the average edge length associated with the triangular lattice .we use molecular dynamics ( md ) method to minimize the free energy which requires the analytical expressions for the gradients of with respect to the vertex positions . evaluating the gradient of the bending energy termis relatively difficult and we show this calculation in the _si text_. our simulations start from a spherical shell with a nearly homogeneous surface distribution of local elastic and electrostatic energies .slight deviations in the energies arise from the presence of five - coordinated vertices which are the result of using the aforementioned triangulation of the shell surface .we assign the vertices a kinetic energy , and direct their motion according to the forces derived from , where the latter plays the role of the potential energy .we thus obtain the lagrangian , from which we derive the equations of motion for the vertices : . 
here , is a mass term associated with the vertices which determines the choice of the simulation timestep .these equations of motion , which form the basis of the md simulation of the vertices , are appropriately augmented to preserve the constraint of fixed total volume .we achieve this via the shake - rattle routine of implementing constraints that guarantees the conservation of shell volume at each simulation step . finally , in order to arrive at the shape that corresponds to the minimum of the energy landscape we couple the md scheme with simulated annealing .we associate a ( fictitious ) temperature with the kinetic energy of the vertices and employ a nose - hoover thermostat to set it .this temperature is not the physical temperature , it is merely a parameter we employ to control the annealing process .we reduce this temperature at periodic intervals so as to arrive at the lowest point of the potential energy associated with the md lagrangian , thus reaching the minimum of the free energy .we thank r. sknepnek , z. yao , g. vernizzi , and j. zwanikken for many insightful discussions .the model was developed with the financial support of the office of basic energy sciences within the department of energy ( doe ) grant number de - fg02 - 08er46539 .the computational work was funded by the office of the director of defense research and engineering ( ddr ) and the air force office of scientific research ( afosr ) under award no .fa9550 - 10 - 1 - 0167 .shrestha , l. k , sato , t , & aramaki , k. ( 2009 ) intrinsic parameters for structural variation of reverse micelles in nonionic surfactant ( glycerol [ small alpha]-monolaurate)/oil systems : a saxs study . , 42514259 .vander hoogerstraete , t , wellens , s , verachtert , k , & binnemans , k. ( 2013 ) removal of transition metals from rare earths by solvent extraction with an undiluted phosphonium ionic liquid : separations relevant to rare - earth magnet recycling ., 919927 .hamada , t , sugimoto , r , vestergaard , m. c , nagasaki , t , & takagi , m. ( 2010 ) membrane disk and sphere : controllable mesoscopic structures for the capture and release of a targeted object ., 1052810532 .
manipulating the shape of nanoscale objects in a controllable fashion is at the heart of designing materials that act as building blocks for self - assembly or serve as targeted drug delivery carriers . inducing shape deformations by controlling external parameters is also an important way of designing biomimetic membranes . in this article , we demonstrate that electrostatics can be used as a tool to manipulate the shape of soft , closed membranes by tuning environmental conditions such as the electrolyte concentration in the medium . using a molecular - dynamics - based simulated annealing procedure , we investigate charged elastic shells that do not exchange material with their environment , such as elastic membranes formed in emulsions or synthetic nanocontainers . we find that by decreasing the salt concentration or increasing the total charge on the shell s surface , the spherical symmetry is broken , leading to the formation of ellipsoids , discs and bowls . shape changes are accompanied with a significant lowering of the electrostatic energy and a rise in the surface area of the shell . to substantiate our simulation findings , we show analytically that a uniformly - charged disc has a lower coulomb energy than a sphere of the same volume . further , we test the robustness of our results by including the effects of charge renormalization in the analysis of the shape transitions and find the latter to be feasible for a wide range of shell volume fractions .
the measurement of the angular power spectrum of the cmb anisotropies , , has become one of the most important tools in modern cosmology . as long as they remain in the linear regime , the fluctuations predicted by most inflationary scenarii lead to gaussian anisotropies on the cmb .thus the angular power spectra in temperature and polarization contain all the cosmological information on the cmb sky .cosmological parameters and other physical quantities of interest in the early universe can be directly derived from them . in parallel to the explosion of cmb datasets both in size and quality ( wmap , archeops , boomerang , maxima , dasi , vsa , cbi , acbar ) , fast codes have been developed to estimate the cmb angular power spectrum ( cmbfast , camb ) allowing us to compare fast and efficiently theory and observations using powerful statistical tests ( cmbeasy , cosmomc , ) .furthermore , huge efforts are undertaken to ease the estimation of the angular power spectrum from input cmb maps in order to cope with larger , deeper and more complex sky surveys in a reasonable amount of computing time .excluding very specific methods for example those which are under study for the planck satellite mission and which take advantage of the planck ring scanning strategy most cmb power spectrum estimators can be grouped into two categories : maximum likelihood and ` pseudo'- estimators . a complete review and comparison between the two methodscan be found in , here we just discuss the key points of each of them .maximum likelihood methods for temperature anisotropies are based on the maximization of the quadratic likelihood .the method estimates the sky angular power spectrum from the angular correlation function of the data .error bars for the power spectrum are generally computed directly from the likelihood function which is either fully sampled in the range of interest or approximated by a quadratic form .dealing with an inhomogeneous coverage of the sky involves a great computational complexity where is the number of pixels of the input map .therefore , these methods are very cpu time consuming for current large datasets like wmap and probably not well adapted for future satellite missions like planck which will produce maps of the sky of more than pixels . a generalization of these methods to the analysis of cmb polarization is discussed in . alternatively, the so - called pseudo- s estimators compute directly the _ ` pseudo ' _ angular power spectrum from the data .then , they correct it for the sky coverage , beam smoothing , data filtering , pixel weighting and noise biases .a comprehensive description of this method was first given by and an application to the angular clustering of galaxies can be found in .more recently , several approaches to this method have been developed . 
among them , spice and its extension to polarization compute in the real space first the correlation function to correct for the sky coverage bias and then the power spectrum from the latter .a pseudo- estimator in the spherical harmonic space applied to cmb experiments is given in and ( master ) .they computes directly the power spectrum before correct it for the different biases .an approach applied to apodised regions of the sky is presented in and extended to polarization in .these estimators can be evaluated using fast spherical harmonic transforms and therefore provide fast and accurate estimates of the .however , they require an accurate knowledge of the instrumental setup and noise in order to correct them for the biases discussed previously .in fact , they use an estimation of the power spectrum of the noise in the map , generally computed via monte - carlo simulations , which is subtracted from the original power spectrum .this is also used to estimate the error bars in the power spectrum by calculating the variance of the over the set of simulations . in this paper , we describe a method to estimate the by computing the cross - power spectra between a collection of input maps coming either from multiple detectors of the same experiment or from different instruments .the ` pseudo ' cross - power spectra are explicitly corrected for incomplete sky coverage , beam smoothing , filtering and pixelization . assuming no correlation between the noise contribution from two different maps , each of the corrected cross - power spectra is an unbiased estimate of the .analytical error bars are derived for each of them .the cross - power spectra , that do not include the classical _ auto_-power spectra , are then combined using a gaussian approximation of the likelihood function . in the same way, we can also compute the estimate of the common angular power spectrum , , of sky maps from different experiments . +a similar method also based on the combination of a set of cross - power spectra has been first used to obtain recent results from the first year wmap data .the main difference between the method presented is this paper and the wmap one is the determination of the cross - correlation matrix ( see sect.[correlation ] ) of the corrected cross - power spectra used both for the combination of these into a single power spectra and for the estimation of the error bars on the latter .the wmap team estimates the cross - correlation matrix from a model of the data .this includes specific terms related to the wmap data such as the contribution from point sources and the uncertainties on the beam window functions as well as a term related to the cmb anisotropies which is estimated from a fiducial model .the wmap cross - correlation matrix , used for the combination of the cross - power spectra , does not incorporate the effects of mode coupling . in a further step , they account for the mode coupling and the dependence on the fiducial cmb model for the computation of the uncertainties on the final power spectrum .by contrast , the method presented here computes the cross - correlation matrix directly from the cross - power spectra estimated from the data .this allows us to include naturally the mode coupling in this matrix .further , this permits the computation of analytical error bars ( as described in sect .[ correlation ] ) which are very compatible with those obtained from simulations ( see sect . 
[ archeops ] ) .because of the above , this method can be applied without modification to the estimation of the power spectrum of the correlated signal between a set of maps of the sky coming from multiple instruments with potentially different sky coverages .for example , we have used this method on archeops data for the estimation of the cmb angular power spectrum and the contribution from foregrounds to this one .we have also used it for the estimation of the foreground emission at the sub - millimeter and millimeter wavelength by cross - correlating the archeops data with foreground dust templates . in sect .[ cross ] , we remind to the reader the computation of the cross - power spectra from ` pseudo ' cross - power spectra . in sect .[ correlation ] , we specify the correlation between cross - power spectra and between multipoles . analytical expressions for the error bars and the covariance matrix for each cross - power spectra are derived .section [ combination ] discusses the combination of the cross - spectra from either a single full data set ( sect .[ lincomb ] ) or several independent experiments ( sect .[ common ] ) .finally , xspect is applied to simulations of the archeops balloon - borne experiment in sect .[ archeops ] .under the assumption of uncorrelated noise between detectors , the cross - power spectrum computed from sky maps is a non noise - biased estimate of the angular power spectrum . in general , when computing the , other instrumental effects as beam smoothing , incomplete sky coverage and time ordered data filtering need to be taken into account .these effects and the way to correct them have been deeply covered in the literature for classical ` pseudo- ' estimators which are noise biased as they use directly the auto power spectrum of each detector map .as shown in the following , these corrections can be extended to the case of cross - power spectra .the cmb temperature anisotropies , , over the full - sky can be decomposed into spherical harmonics as follows , where the coefficient are given by the cmb temperature field predicted by most inflationary models is in general gaussian distributed so that the ensemble average of the coefficients are an unbiased estimate of the cmb temperature power spectrum is therefore given by ground - based and balloon - borne cmb experiments present an inhomogeneous sky coverage .on the other hand , satellite experiments like cobe , wmap and planck , although they provide full - sky maps , residuals of foreground contamination in the galactic plane and point sources contamination make impossible to use the complete maps when computing the cmb angular power spectrum .furthermore , for most cmb experiments , the noise properties vary considerably due to different redundancies between pixels of the same map . 
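for reference , the ideal full - sky estimator described above ( decompose the map into $a_{\ell m}$ and average $|a_{\ell m}|^{2}$ over $m$ ) can be evaluated in a few lines with the healpix python bindings ( healpy ) ; the power - law spectrum below is only a toy stand - in for a real cmb model , and beam and pixel smoothing are ignored .

```python
import numpy as np
import healpy as hp

nside, lmax = 128, 256
ell = np.arange(lmax + 1)
cl_theory = np.zeros(lmax + 1)
cl_theory[2:] = 1.0 / ell[2:] ** 2                   # toy spectrum standing in for a CMB model

cmb_map = hp.synfast(cl_theory, nside, lmax=lmax)    # Gaussian full-sky realization of the a_lm
cl_hat = hp.anafast(cmb_map, lmax=lmax)              # C_l = 1/(2l+1) * sum_m |a_lm|^2
```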
so obtaining an estimate of the power spectrumrequires a differential weighting of pixels within the same map which translates into an effective inhomogeneous sky coverage .the decomposition in spherical harmonics of the observed temperature anisotropies including weighting can be written for a single detector as follows where for equal pixels ( as in healpix pixelization , ) and is the mask applied to the input sky temperature .this temperature can be decomposed into signal and noise , which are assumed to be uncorrelated , _ the effect of a non - homogeneous coverage of the sky can be described in spherical harmonics by a mode - mode coupling matrix which depends only on the angular power spectrum of the weighting scheme ( hereafter weighting mask ) applied to the sky to account for the incomplete coverage , the removal of the galactic plane and the inhomogeneous noise properties of the detector .thus , the ` pseudo- ' estimator is defined as follows the relation that links the ` pseudo ' power spectrum , directly measured on the sky , and the power spectrum of the cmb anisotropies is given by where is the beam transfer function describing the beam smoothing effect ; is the transfer function of the pixelization scheme of the map describing the effect of smoothing due to the finite pixel size and geometry ; is an effective function that represents any filtering applied to the time ordered data ; and is the noise power spectrum .an unbiased estimate of the cross - power spectrum between the full sky maps of two independent and perfect detectors and can be obtained from where and are the coefficients of the spherical harmonic decomposition of maps and respectively . in the same way, we can compute the ` pseudo ' cross - power spectrum between any two detectors and by generalizing eq .[ pseudo_cl ] each of the terms in eq .[ pseudo_cross ] is described in more details in the following subsections .the main advantage of using cross - power spectra is that the noise is generally uncorrelated between different detectors this assumption will be maintained throughout this paper .thus , the cross - power spectra are straightforward estimates of the angular power spectrum on the sky and for two different detectors ( _ i.e. 
_ ) , the ` pseudo ' cross - power spectrum reads the coupling kernel matrix , introduced in eq .[ pseudo_cl ] and described in details in , reads where is the power spectrum of the mask .it takes into account the mask applied to the data where mask represents both the sky coverage and the weighting scheme .equation [ mll_master ] can be easily generalized to the case of two different masks applied respectively to each map of the two detectors involved in the cross - power spectrum calculation .replacing the quadratic terms by in the computation of eq .[ mll_master ] leads to where , the cross - power spectrum of the masks .this property allows us to deal with independent masks representing different sky coverages and to apply an appropriate specific weighting scheme to each detector map .note that the correction in the multipole space discussed here is fully equivalent to an appropriate normalization of the cross correlation between the two sky masked maps in real space .this analogy is important as it helps to understand why no fully overlapping masks for the input maps can be considered .the filter function accounts for the filtering of the time ordered data which is generally needed in most cmb experiments either to avoid systematic effects or to reduce correlated low frequency noise . the time domain filtering is performed along a preferred direction onthe sky ( scanning direction ) and so leads commonly to an anisotropic sky even if the assumption of initial isotropic temperature fluctuations holds .in this case , the estimates of the angular power spectrum provided by eq . [ pseudo_cl ] and eq .[ pseudo ] are not exact any more and should be corrected for a function both in and , . obtaining accurate estimates of sucha correction is particularly complex and for most cases , as proposed by , the correction for an effective is good enough for the accuracy required in the reconstruction of the cmb power spectrum. the function can be , for example , computed via monte - carlo simulations of the sky from which mock time ordered data are produced for each of the detectors involved and then filtered . from an initial theoretical cmb power spectrum , we compute a large number of realizations of the sky using the healpix software _ synfast _ and compute mock time ordered data from the scanning strategy of each detector .maps are then computed with and without filtering before re - projection .the function is obtained from the mean ratio of the ` pseudo ' power spectra of the filtered and not filtered maps .the latter are obtained using the healpix software _anafast_. in the case of the cross - power spectra and considering the previous approximation , an effective filter function will be considered in the following .as defined , the effective filtering function allows us to consider detectors for which the time domain filtering is different .note that , for nearly white noise and all - sky surveys such as wmap or planck missions , filtering may not be required and thus .the beam window function describes the smoothing effect of the main instrumental beam under the hypothesis of circularity .the latter does not hold in general as for most experiments the main beam pattern is asymmetric . 
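stepping back to the mask coupling described above : the kernel built from the ( cross - ) power spectrum of the masks can be written down explicitly with wigner 3j symbols . the sketch below assumes the standard master - type expression ; it is exact but far too slow for realistic multipole ranges ( real implementations use 3j recursions ) , and the toy half - sky mask is illustrative only .

```python
import numpy as np
import healpy as hp
from sympy.physics.wigner import wigner_3j

def coupling_kernel(mask_a, mask_b, lmax):
    """Mode-coupling kernel built from the cross-spectrum of two (possibly different) masks."""
    w_l = hp.anafast(mask_a, mask_b, lmax=2 * lmax)      # cross-power spectrum of the masks
    m = np.zeros((lmax + 1, lmax + 1))
    for l1 in range(lmax + 1):
        for l2 in range(lmax + 1):
            s = 0.0
            for l3 in range(abs(l1 - l2), l1 + l2 + 1):
                w3j = float(wigner_3j(l1, l2, l3, 0, 0, 0))
                s += (2 * l3 + 1) * w_l[l3] * w3j ** 2
            m[l1, l2] = (2 * l2 + 1) / (4.0 * np.pi) * s
    return m

nside = 32
mask = np.zeros(hp.nside2npix(nside))
mask[: mask.size // 2] = 1.0                             # crude half-sky mask for illustration
m_kernel = coupling_kernel(mask, mask, lmax=20)
```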
as beam uncertainties have become the most important source of systematic errors , taking into account the asymmetry of the beam pattern is necessary .several solutions have been proposed either circularizing the beam or assuming an elliptical gaussian beam .the work of presents how to convolve _ exactly _ two band limited but otherwise arbitrary functions on the sphere - which can be the 4- beam pattern and the sky descriptions . an analytic framework for studying the leading order effects of a non - circular beam on the cmb power spectrum estimationis proposed in .the authors of this paper present _ asymfast _ , a general method to estimate an effective function taking into account the asymmetry of the main beam and the scanning strategy ._ asymfast _ is based on a decomposition of the main beam pattern into a linear combination of gaussians which permits fast convolution in the spherical harmonic space along the scanning strategy .the cross - power spectrum can be obtained from the ` pseudo ' cross - power spectrum by resolving eq .[ pseudo_cross ] which leads to invert the coupling kernel matrix . in general for complex sky coverage and weighting schemes ,this matrix is singular and can not be inverted directly . to avoid this problem , proposed the binning of the coupling kernel matrix which reduces considerably the complex correlation pattern in the introduced by the applied mask .the binning is obtained by applying the operators and as follows the solution of eq .[ pseudo ] in the new base reads with as the correlation between multipoles depends very much on the instrumental setup , the sky coverage and the weighting scheme , the binning has to be defined for each experiment .it is a compromise between a good multipole sampling and low correlations between adjacent bins .hereafter to avoid confusion and makes the notation simpler all equations will be written in instead of .from input maps we can obtain cross - power spectra ( ) which are unbiased estimates of the angular power spectrum but which are obviously not independent . in this section, we describe the estimation of the cross - correlation matrix between cross - spectra and between multipoles .we show how the error bars and the covariance matrix in multipole space can be deduced for each cross - power spectra . given a sky map from a detector ,it can be combined to each of the other detector maps to form cross - power spectra which will therefore be highly correlated .furthermore , due to the masking we also expect that each cross - power spectra will be correlated for adjacent multipoles and thus , correlations between adjacent multipoles will be also present between different cross - power spectra . to describe this complexity we define the cross - correlation matrix of the cross - power spectra ( ) and ( ) which can be fully computed as shown in the following . 
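before turning to the computation of this correlation matrix , the binning and inversion steps described earlier in this section can be sketched as follows . flat binning operators and a plain matrix solve are assumptions made here for illustration ( the kernel can be the one from the previous sketch ) ; they are not necessarily the operators used in the actual analysis .

```python
import numpy as np

def correct_pseudo_cl(pseudo_cl, m_kernel, b_l, p_l, f_l, bin_edges):
    """Bin the pseudo cross-spectrum and solve the binned coupling relation for C_b.

    pseudo_cl : pseudo cross-spectrum (no noise bias for two different detectors)
    m_kernel  : mode-coupling kernel M_{ll'}
    b_l, p_l, f_l : beam, pixel and filter transfer functions
    bin_edges : multipole bin boundaries, e.g. [2, 25, 50, ...]
    """
    nbin, lmax = len(bin_edges) - 1, len(pseudo_cl) - 1
    P = np.zeros((nbin, lmax + 1))            # binning operator (flat average inside each bin)
    Q = np.zeros((lmax + 1, nbin))            # re-expansion operator
    for b in range(nbin):
        lo, hi = bin_edges[b], bin_edges[b + 1]
        P[b, lo:hi] = 1.0 / (hi - lo)
        Q[lo:hi, b] = 1.0
    k_ll = m_kernel * (f_l * b_l ** 2 * p_l ** 2)[None, :]   # coupling times transfer functions
    k_bb = P @ k_ll @ Q                                       # binned coupling matrix
    return np.linalg.solve(k_bb, P @ pseudo_cl)               # binned estimate of C_b
```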
from eq . [ pseudo ] we can express the ` pseudo ' cross - power spectrum between two detectors $a$ and $b$ in terms of the underlying power spectrum , the coupling kernel and the transfer functions ; the corrected cross - power spectrum for detectors $a$ and $b$ is then given by inverting this relation . using the above expressions and following the previous definition , a formal expression for the cross - correlation matrix can be written . as it stands , this expression can not be used in practice at high multipoles , as the calculation described in appendix [ appendix_xi ] is numerically unstable . however , for high multipoles and a sufficiently large sky coverage ( as is the case for satellite missions such as wmap and planck ) it can be simplified by replacing some of its terms by their large - coverage counterparts and then applying the completeness relation for spherical harmonics . this is because the coupling kernel matrix , which is diagonal for full sky coverage , is quasi - diagonal for large sky coverage . + under the above hypothesis and following appendix [ appendix_xi ] , the cross - correlation matrix takes the form of eq . [ xi ] , which involves the inverse coupling kernel matrices ( e.g. $({\cal m}^{cd}_{\ell'\ell_{2}})^{-1}$ ) and a quadratic coupling kernel matrix associated to the cross - power spectrum of the product of the masks ; the latter is built from the spherical harmonic coefficients of the product of the masks associated to each pair of detectors . + equation [ xi ] can be further simplified by assuming uniform weighting and the same sky coverage for all detectors , as well as a diagonally dominant coupling kernel matrix ( eq . [ xi_approx ] ) . in this case , the effect of a non - homogeneous sky coverage is represented by a simple function which can be associated to the effective number of degrees of freedom in the distribution of the estimator over the sky ( see ) , and whose definition involves $w_{i}$ , the $i$-th moment of the mask . in eq . [ xi ] and [ xi_approx ] , the spectra entering the expressions can be either cross - power spectra or auto - power spectra depending on the combination of detectors $a$ , $b$ , $c$ and $d$ ( the only condition is $a \neq b$ and $c \neq d$ ) . on the one hand , noise terms , as included in the auto - power spectra , appear in the analytical form of the correlation matrix . on the other hand , for a set of 4 independent detectors ( i.e. all four detectors different ) , the correlation matrix is the variance of the signal . it is important to notice that the cross - correlation matrix contains all the information needed to compute the error bars and the covariance matrix for each single cross - power spectrum . from eq . [ xi ] ( or eq . [ xi_approx ] for the approximated form ) , we obtain the covariance matrix providing the correlation between adjacent multipoles ( eq . [ covariance ] ) . in the same way , we can also write the variance of each cross - power spectrum , which corresponds to the error bars associated to it ( eq . [ error_bar ] ) . in these cases , instrumental noise appears in the auto - power spectra ( for which the two detector indices are equal ) . as it is computed directly from the data , no estimation of the noise is needed .
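a common closed - form version of such analytic error bars uses the mask moments through an effective number of degrees of freedom per multipole . the sketch below assumes that standard approximation ( diagonal in $\ell$ , auto - spectra including noise ) rather than the exact expressions of the appendix , so it should be read as illustrative only .

```python
import numpy as np

def analytic_errorbars(cl_aa, cl_bb, cl_ab, weight_map):
    """Approximate error bars on a cross-spectrum C_l^{ab} from the mask moments.

    Assumes the standard effective-degrees-of-freedom approximation:
        nu_l = (2l+1) * f_sky * w2^2 / w4
        var(C_l^{ab}) ~ (C_l^{aa} * C_l^{bb} + (C_l^{ab})^2) / nu_l
    where the auto-spectra C_l^{aa}, C_l^{bb} include the noise contributions.
    """
    fsky = weight_map.mean()
    w2 = (weight_map ** 2).mean() / fsky      # 2nd moment of the mask
    w4 = (weight_map ** 4).mean() / fsky      # 4th moment of the mask
    ell = np.arange(len(cl_ab))
    nu_l = (2 * ell + 1) * fsky * w2 ** 2 / w4
    return np.sqrt((cl_aa * cl_bb + cl_ab ** 2) / nu_l)
```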
for this paperwe have considered a very general form for the likelihood function limited to a simple cmb plus noise model .however , the likelihood function can be adapted more specifically to the data , for example to include point sources as it is the case for the wmap results presented in .more generally , it could be extended to algorithms of spectral matching decomposition ( _ e.g. _ ) . in the following ,we present how this method can be used to provide both an estimate of the from a full data set ( sect .[ lincomb ] ) and of the angular power spectrum of common sky fluctuations between maps coming from two or more independent surveys ( sect .[ common ] ) .once all possible cross - power spectra have been computed from a data set , we dispose of different but not independent measurements of the angular power spectrum , . to combine them and obtain the best estimate of the power spectrum , we maximize the gaussian approximated likelihood function \label{likelihood}\ ] ] where is the cross - correlation matrix of the cross - power spectra described before ( and ) . the auto - power spectra are not considered . from this and neglecting the correlation between adjacent multipoles ,it is straightforward to show that the estimate of the angular power spectrum is } { \sum_{ij } |\xi^{-1}|_{\ell\ell}^{ij } } \label{bestcell}\ ] ] the final covariance matrix can be obtained from eq . [ likelihood ] , and the final error bars are given by depending on the cross - correlation between cross - power spectra , the instrumental noise variance is reduced by a factor comprised between n , the number of independant detectors , and , the number of cross - power spectra . for the noise dominated case ,the correlation between different cross - power spectra can be neglected and the cross - correlation matrix becomes diagonal ( see eq . [ xi ] and [ xi_approx ] ) .the values in the diagonal are the products of 2 auto - power spectra ( including noise ) .the variance of final power spectrum is then proportional to . in any case , when combining cross - power spectra , the upper limit for the variance comes from the combination of independent detectors ( proportional to ) .the xspect formalism allows us to compare two or more different sets of sky maps coming from two or more independent experiments . in this respect ,the quantity of interest is the power spectrum of the common fluctuations on the sky with the same physical origin .we will call the latter common angular power spectrum for simplicity .in fact , if we compare for example two sets of cmb maps at low and high frequencies , the foreground contamination will be different and we can expect to obtain a better estimation of the cmb power spectrum . in the same way , for two different experiments ,systematic effects will be decorrelated and will not contribute to the common angular power spectrum calculated from them .in addition , template maps of foreground emission can be correlated to the cmb experiments maps to monitor and to subtract foreground residuals .+ to simplify the notation we will only consider two sets of sky maps and , corresponding to and detectors respectively . 
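A minimal numpy sketch of this combination, assuming the per-bin cross-correlation matrices have already been computed and neglecting correlations between multipole bins as stated above (names and array shapes are ours):

```python
import numpy as np

def combine_cross_spectra(cross_cls, xi):
    """Gaussian (quadratic) maximum-likelihood combination, bin by bin.

    cross_cls : array (n_bins, n_cross), corrected cross-spectra C_b^{(ij)}
    xi        : array (n_bins, n_cross, n_cross), cross-correlation matrix of
                the cross-spectra at each bin
    Returns the combined spectrum and its analytic 1-sigma error bar.
    """
    n_bins = cross_cls.shape[0]
    cl = np.empty(n_bins)
    sigma = np.empty(n_bins)
    for b in range(n_bins):
        w = np.linalg.inv(xi[b]).sum(axis=0)     # column sums of Xi^{-1}
        cl[b] = w @ cross_cls[b] / w.sum()       # eq. [bestcell]
        sigma[b] = 1.0 / np.sqrt(w.sum())        # eq. [errorbarbestcell]
    return cl, sigma
```

The same routine applies unchanged to the common power spectrum discussed below: only the list of cross-spectra and their correlation matrix change.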
following the previous paragraph considerations , by cross - correlating each detector map from which each detector map from and correcting the ` pseudo ' cross - power spectra as described before, we can form cross - power spectra which are unbiased estimates of the common power spectrum but not fully independent .the cross - power spectra formed in this way can be noted and their cross - correlation matrix reads which can be computed using previous approximations .+ finally , equations [ bestcell ] and [ errorbarbestcell ] are used to obtain the estimate of the common angular power spectrum and the error bars associated to it .this method allows us to deal naturally with very different experimental configurations such as for example completely different scanning strategies , different timeline filtering and different beam patterns for each of the detectors involved .furthermore , it can be easily generalized to work with non cmb data such as maps of the sunyaev - zeldovich effect in clusters of galaxies or maps of mass fluctuations from weak lensing observations .xspect was mainly developed to measure the cmb temperature angular power spectrum from the archeops data as well as to compare these data to other observations of the sky as for example those from the wmap satellite .archeops is a balloon - borne experiment conceived as a precursor of the planck high frequency instrument .it consists of a 1.5 m off - axis gregorian telescope and of an array of 21 photometers cooled down to mk and which operates in 4 frequency bands : 143 and 217 ghz ( cmb channels ) and 353 and 545 ghz ( galactic channels ) .for the latest flight campaign , the entire archeops data cover about % of the sky , including the galactic plane . as a first step in the application of xspect to the archeops data set , we have produced 500 simulations of the sky observed by archeops including both the cmb signal and the detector noise for the six most sensitive archeops detectors at 143 and 217 ghz .these simulations are computed from realizations of the cmb sky at ( corresponding to a pixel size of arcmin ) for the archeops best - fit cmb model presented in .the sky maps are convolved by the main beam pattern of each of the archeops detectors using the beam transfer function and then deprojected following the archeops scanning strategy to produce mock archeops timelines .the beam transfer functions were computed individually for each detector from jupiter s crossings in the data using the _ asymfast _ method presented in .simulations of archeops observations of the cmb sky ( see text for details ) .we use the archeops best - fit model to produce cmb maps which are smeared out using the beam pattern of each of the six most sensitive archeops detectors and deprojected into mock archeops timelines .noise is then added up for each detector and the resulting timelines projected into sky maps are analyzed using xspect .the error bars shown here are computed analytically as described in previous sections .the xspect estimate of the angular power spectrum ( in red ) is an unbiased estimate of the input cmb model ( in yellow ) . ]the detector noise is computed from the time power spectrum of each of the detectors and added to the mock signal timelines .the method used for the estimation of the time noise power spectra is described in details in . 
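Returning to the construction of the cross-spectra themselves, a healpy-based sketch of forming every 'pseudo' cross-spectrum between two detector sets (as used both for the common-spectrum estimation above and for the simulated Archeops maps) might look as follows; the weighting is deliberately simplified to a single common mask, and the routine names are ours.

```python
import numpy as np
import healpy as hp

def all_pseudo_cross_spectra(maps_A, maps_B, mask, lmax):
    """'Pseudo' cross-spectra between every detector of set A and set B.

    maps_A, maps_B : lists of HEALPix maps, one per detector
    mask           : weighting map applied to each map before the transform
    The detector noises of the two sets are assumed uncorrelated, so each
    cross-spectrum is free of noise bias; the mask/beam/filter correction of
    eq. [pseudo] must still be applied afterwards.
    """
    spectra = []
    for m_a in maps_A:
        for m_b in maps_B:
            cl = hp.anafast(m_a * mask, map2=m_b * mask, lmax=lmax)
            spectra.append(cl)
    return np.array(spectra)        # shape (n_A * n_B, lmax + 1)
```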
to suppress -noise , remaining atmospheric and galactic contamination as well as non - stationary high frequency noise ,the archeops time streams are fourier filtered using a bandpass filter between 0.1 and 38 hz .we apply the same filtering to the simulated timelines before projection .the corresponding multipole filtering function was independently calculated by specific simulations .the filtered timelines are then reprojected using the archeops scanning strategy to produce simulated co - added maps for each simulation run .a galactic mask deduced from a sfd iras map , extrapolated to 353 ghz , using a cut in amplitude ( greater than 0.5 mjy.str ) is applied to the simulated maps .this reduces the total archeops coverage to of the sky .we have used two different weighting schemes on the maps .the first one is an uniform weighting and the second one , a weighting per pixel and per detector , where is the noise variance for the given pixel .then we apply the xspect method to each simulations - run .the cross - power spectra are computed with their associated cross - correlation matrices and combined as described in sect .[ lincomb ] using the gaussian approximation of the likelihood function .in addition to the above simulations , we produced , in the same way , 500 non noisy simulations from which we can extract for example the sample variance for the archeops coverage .figure [ cl_simuarcheops ] shows the mean angular power spectrum computed from the angular power spectrum estimates obtained for each of the 500-simulations runs .the error bars are analytically computed .we observe that the gaussian approximated combination of cross - power spectra is an unbiased estimate of the input cmb model for the angular power spectrum .this mean angular power spectrum is a combination of two weighting schemes .up to , we use uniform weighting whereas , for larger , a mask inversely proportional to the noise in each of the pixels has been applied .this allows us to choose the smaller error bars in each region .error bars for the archeops data computed analytically , from eq .[ xi_approx ] ( red solid line ) and from eq .[ xi ] ( yellow solid line ) .they are compared to the standard deviation of simulations ( black dashed line ) .differences essentially originate from the extremely inhomogeneous archeops sky coverage ( see text for details ) . ]figure [ error_simuarcheops ] shows analytic estimates of the error bars in the final angular power spectrum compared to the dispersion for each multipole bin over the 500 monte - carlo simulations .the archeops sky coverage is very inhomogeneous due to the particular choice of the scanning strategy which tries to maximize the area of the sky observed ( % ) in a very reduced amount of observation time ( less than 12 hours ) .archeops performs large circles on the sky that leads to a quasi ring - like coverage with a large uncovered region at the center of the ring as shown on fig .[ archeops_cover ] .this implies that the approximation used to obtain eq .[ xi_approx ] is not valid , especially at low multipoles , and therefore the error bars obtained analytically ( red solid line ) are underestimated up to at very low with a mean of . by computing as given by eq .[ xi ] , the analytic error bars ( yellow solid line ) increase as expected and fit much better the dispersion in the simulations ( black dashed line ) .the agreement is then within over the full range of multipoles with a mean of .equivalent results are obtained using non noisy simulations . 
sky coverage andweighting scheme applied to the sky map ( ) obtained from the archeops most sensitive detector at 143 ghz .we observe a ring - like structure with a large uncovered area in the center and highly redundant areas on the edges of the ring . ]xspect has been applied to the data from the last archeops flight campaign using the six most sensitive bolometers as in the simulations presented above .the results of this analysis is presented in a collaboration paper .cross - correlation with wmap maps is also under study in order to assess the electromagnetic spectrum of the cmb anisotropies from 40 to 217 ghz .in this paper , we have presented a method , xspect , for the obtention of the cmb angular power spectrum with analytical error bars based on four main steps : 1 .given independent input sky maps from different detectors either from a single experiment or from multiple ones , we estimate ` pseudo ' cross - power spectra as well as ` pseudo ' auto - power spectra using the healpix package .cross - power spectra are obtained by correcting the ` pseudo ' power spectra for weighting scheme , beam smoothing and filtering as discussed in section [ cross ] . in the same way, we compute auto - power spectra with noise .xspect can deal with different and complex weighting schemes for each of the detectors involved .the coupling matrix , beam and filtering transfer functions are precomputed for each pair of detectors .we compute the full cross - correlation matrix , , between cross - power spectra ( ) and ( ) ( section [ correlation_matrix ] ) from which we can extract the covariance matrix and the error bars for each cross - power spectra ( sect [ covpercross ] ) .4 . finally, the corrected cross - power spectra are combined into a single angular power spectrum using their cross - correlation matrices which are assumed to be diagonal in multipole space .the gaussian approximation of the likelihood function is fully justified in the large multipole range . in addition ,analytical estimates for the error bars and for the covariance matrix are computed ( section [ combination ] ) .the covariance matrix can be used to check the degree of correlation between multipoles . asthis method estimates the angular power spectrum using ` pseudo ' cross - power spectra , we can obtain a non noise - biased power spectrum avoiding the estimation of the noise power spectra which requires heavy monte - carlo simulations .equally , this permits the estimation of analytical error bars which are compatible to those computed from simulations .furthermore , xspect allows us to obtain the angular power spectrum of common sky fluctuations between two or more experiments .associated to other surveys used as templates , it can provide estimations of systematic residuals or astrophysical contaminations .nevertheless , xspect computes only the common structures between maps , assuming a single physical component .this means that , for cmb purposes , foregrounds and systematics must be subtracted beforehand .xspect has been successfully applied to simulations of the archeops experiment which presents an inhomogeneous sky coverage and detectors with unequal sensitivities . 
even for a balloon - borne experiment that can be compared to satellite missions neither in noise level nor in sky coverage , the analytical estimates of the errors computed with xspect are accurate within few percents .this property is of great interest when producing monte carlo simulations in the process of testing and improving the estimation of the or when checking the robustness of the data analysis with respect to the various choices of mask , filtering or binning .the application of xspect to the archeops data set will be published by the archeops collaboration shortly . for the wmap satellite , which covers the full sky in a very homogeneous way , it was also interesting to check the analytical estimates of the error bars .xspect analytical errors are equivalent to those provided by the wmap team within 15% with a mean of 1.8% .this agreement is satisfactory as our analysis was more basic than the one used to derive the first year wmap results presented in .in particular all maps at all multipoles were used without any point - source specific treatment and only two weighting schemes were applied .the extension of xspect to cmb polarization maps is under development . as for the temperature power spectrum ,the and power spectra obtained directly from a single set of i , q and u maps via a pseudo power spectrum estimator are noise biased .a pseudo cross - power spectrum estimator adapted to polarization can solve this problem by using independent sets of i , q and u maps .authors are grateful to j - ch . hamilton for discussions in the initial stage of this work in our team .we also want to thank f.x dsert for useful discussions on the method and for corrections of this paper .the healpix package was used extensively .amblard a. , hamilton j. c. , 2004 , , * 417 * , 1189 ansari r. _ et al ._ , 2003 , , * 343 * , 552 bennett c. l. _ et al . _ , 2003 , , * 148 * , 97 benot a. _ et al . _ , 2003 , , * 399 * , l19 benot a. _ et al . _ , 2003 , , * 399 * , l25 bond j.r . ,jaffe a.h . , knox l. , 1998 , , * 57 * , 2117 .borrill j. , 1999a , , * 59 * , 27302 .borrill j. , 1999b , in ec - tmr conf .476 , 3k cosmology , eds .l. maiani , f. melchiorri and n. vittorio , woodbury aip , 277 .challinor a.d ._ et al . _ , 2002 , , * 331 * , 994. chon g. _ et al ._ , 2004 , , , * 350 * , 914 delabrouille j. , cardoso j .- f . , patanchon g. , 2003 , , * 346 * , 1089 doran m. , 2003 , arxiv : astro - ph/0302138 . douspis m. , bartlett j.g . , blanchard a. and le dour m. , 2001 , , * 368 * , 1 efstathiou g. , 2004 , , * 349 * , 603 fosalba p._ et al . _ , 2002 , , * 65 * , 63003 gorski k.m ., hivon e. & wandelt b. d. , 1998 , proceed . of the mpa / eso conf . on evolution of large - scale structure : from recombination to garching , 2 - 7 august 1998 , eds .banday , r.k .sheth and l. da costa , astro - ph/9812350 , + http://www.eso.org/science/healpix grainge k. _ et al ._ , 2003 , , * 341 * , l23 halverson n.w . _ et al ._ , 2002 , , * 568 * , 38 hansen f. , grski k. m. , hivon e. , 2002 , , * 336 * , 1304 hansen f. , grski k. m. , 2003 , , * 343 * , 559 hinshaw g. _ et al . _ , 2003 , , * 148 * , 135 hivon e. _ et al . _ , 2002 , , * 567 * , 2 hu w. , spergel d.n . , white m. , 1997 , , * 55 * , 3288 jaffe a.h_ et al . _ , 2003 , new astronomy review ,* 47 * , 727 kuo c.l ._ et al . _ , 2004 , ,* 600 * , 32 lewis a. , bridle s. , 2002 , , * 66 * , 103511 lewis a. , challinor a. , lasenby a. , 2000 , , * 538 * , 473 liddle a.r . ,lyth d.h . 
, 2000 , _cosmological inflation and large - scale structure _ , cambridge university press , cambridge .linde a. , sasaki m. , tanaka t. , 1999 , , * 59 * , 123522 mitra s. , sengupta a. s. , souradeep t. , 2004 , , accepted , astro - ph/0405406 .page l. _ et al ._ , 2003 , apjs , * 148 * , 39 peebles p.j.e . , 1973 , , * 185 * , 431 peebles p.j.e . ,hauser m.g . , 1974 , , * 28 * , 19 . ruhl j.e ._ et al . _ , 2003 , , * 599 * , 786 schlegel d. , finkbeiner d. , davis m. , 1998 , , * 500 * , 525 seljak u. , zaldarriaga m. , 1996 , , * 469 * , 437 sievers j.l ._ et al . _ , 2003 , , * 591 * , 599 souradeep t. , ratra b. , 2001 , , * 560 * , 28 szapudi i. _ et al ._ , 2001 , , * 548 * , l115 tegmark m. , 1997 , , * 55 * , 5895 . tegmark m. , de oliveira - costa a. , 2001 , , * 64 * , 63001 . tristram m. , hamilton j - ch . , macas - perz j.f. , renault c. , 2004 , , 69 , 123008 tristram m. , patanchon g. , macas - perz j.f ._ et al . _ , 2005 , , submitted , astro - ph/0411633 van leeuwen f. _ et al . _ , 2002 , , * 331 * , 975 .varshalovich d.a ., moskalev a.n ., khersonoskii v.k . , 1988 , _ quantum theory of angular momentum _ , world scientific , singapore .verde l. _ et al ._ , 2003 , , * 148 * , 195 wandelt b. d. , hivon e. , gorski k. m. , 2001 , , * 64 * , 083003 wandelt b. d. , gorski k. m. , 2001 , , * 63 * , 123002 wu j.h.p ._ et al . _ , 2000 ,apjs , * 132 * , 1 zaldarriaga m. , seljak u. , bertschinger , e. , 1998 , , * 494 * , 491 zaldarriaga m. , seljak u. , 2000 , , * 129 * , 431in this appendix we describe in details the calculation of the cross - correlation matrix under the hypothesis of large sky coverage which leads to where are the coefficients of the spherical harmonic decomposition of the masked sky map such that with is the area of pixel . represents the mask applied to the sky map and the coefficients of the spherical harmonic decomposition of the mask where is the quadratic coupling kernel matrix associated to the cross - power spectrum of the products of the masks for each of the cross - power spectra et ,
we present xspect , a method to obtain estimates of the angular power spectrum of the cosmic microwave background ( cmb ) temperature anisotropies including analytical error bars developed for the archeops experiment . cross - power spectra are computed from a set of maps and each of them is in itself an unbiased estimate of the power spectrum as long as the detector noises are uncorrelated . then , the cross - power spectra are combined into a final temperature power spectrum with error bars analytically derived from the cross - correlation matrix . this method presents three main useful properties : ( 1 ) no estimation of the noise power spectrum is needed , ( 2 ) complex weighting schemes including sky covering and map noise properties can be easily taken into account , and corrected for , for each input map , ( 3 ) error bars are quickly computed analytically from the data themselves with no monte - carlo simulations involved . xspect also permits the study of common fluctuations between maps from different sky surveys such as cmb , sunyaev - zeldovich effect or mass fluctuations from weak lensing observations . cosmic microwave background cosmology : observations methods : data analysis
in this report i will be concerned mainly with the frequentist ( classical ) theory of statistical inference , but i think that it is interesting and useful that i express my opinion on the war between frequentists and bayesians . to the question`` are you frequentist or bayesian '' ?i answer `` i like statistics . ''i think that if one likes statistics , one can appreciate the beauty of both frequentist and bayesian theories and the subtleties involved in their formulation and application .i think that both approaches are valid from a statistical as well as physical point of view .their difference arises from different definitions of probability and their results answer different statistical questions .one can like more one of the two theories , but i think that it is unreasonable to claim that only one of them is correct , as some partisans of that theory claim .these partisans often produce examples in which the other approach is shown to yield misleading or paradoxical results .i think that each theory should be appreciated and used in its limited range of validity , in order to answer the appropriate questions .finding some example in which one approach fails does not disprove its correctness in many other cases that lie in its range of validity .my impression is that the bayesian theory ( see , for example , ) has a wider range of validity because it can be applied to cases in which the experiment can be done only once or a few times ( for example , our thoughts in everyday decisions and judgments seem to follow an approximate bayesian method ) . in these casesthe bayesian definition of probability as _ degree of believe _ seems to me the only one that makes sense and is able to provide meaningful results .let me remind that since galileo an accepted basis of scientific research is the _ repeatability of experiments_. this assumption justifies the frequentist definition of probability as ratio of the number of positive cases and total number of trials in a large ensemble .the concept of _ coverage _ follows immediately : a _ confidence interval _ for a physical quantity is an interval that contains ( covers ) the unknown true value of that quantity with a frequentist probability . in other words ,a confidence interval for belongs to a set of confidence intervals that can be obtained with a large ensemble of experiments , of which contain the true value of .i think that in order to fully appreciate the meaning and usefulness of frequentist confidence intervals obtained with neyman s method , it is important to understand that the experiments in the ensemble do not need to be identical , as often stated , or even similar , but can be real , different experiments .one can understand this property in a simple way by considering , for example , two different experiments that measure the same physical quantity .the classical confidence interval obtained from the results of each experiment belongs by construction to a set of confidence intervals which can be obtained with an ensemble of identical experiments and contain the true value of with probability .it is clear that the sum of these two sets of confidence intervals , containing the two confidence intervals obtained in the two different experiments , is still a set of confidence intervals that contain the true value of with probability .moreover , for the same reasons it is clear that _ the results of different experiments can also be analyzed with different frequentist methods _ , _ i.e. 
_ methods with correct coverage but different prescriptions for the construction of the confidence belt. this for me is amazing and beautiful: _ whatever method you choose you get a result that can be compared meaningfully with the results obtained by different experiments using different methods _! it is important to realize, however, that the choice of the frequentist method must be done independently of the knowledge of the data (before looking at the data), otherwise the property of coverage is lost, as in the `` flip-flop '' example in ref. . this property allows us to solve an apparent paradox that follows from the recent proliferation of proposed frequentist methods. this proliferation seems to introduce a large degree of subjectivity in the frequentist approach, supposed to be objective, due to the need to choose one specific prescription for the construction of the confidence belt, among several available with similar properties. from the property above, we see that whatever frequentist method one chooses, if implemented correctly, the resulting confidence interval can be compared statistically with the confidence intervals of other experiments obtained with other frequentist methods. therefore, _ the subjective choice of a specific frequentist method does not have any effect from a statistical point of view _! then you should ask me: why are you proposing a specific frequentist method? the answer lies in _ physics _, not statistics. it is well known that the statistical analysis of the same data with different frequentist methods produces different confidence intervals. this difference is sometimes crucial for the physical interpretation of the result of the experiment (see, for example, ). hence, the physical significance of the confidence intervals obtained with different frequentist methods is sometimes crucially different. in other words, _ the frequentist method suffers from a degree of subjectivity from a physical, not statistical, point of view _. the possibility to apply frequentist statistics successfully to problematic cases in frontier research has received a fundamental contribution with the proposal of the unified approach by feldman and cousins. the unified approach consists in a clever prescription for the construction of a classical confidence belt which `` unifies the treatment of upper confidence limits for null results and two-sided confidence intervals for non-null results ''. in the following i will consider the case of a poisson process with signal $\mu$ and known background $b$. the probability to observe $n$ events is $p(n|\mu+b) = e^{-(\mu+b)} (\mu+b)^{n} / n!$. the unified approach is based on the construction of acceptance intervals $[n_1(\mu), n_2(\mu)]$, obtained by including the values of $n$ in decreasing order of the likelihood ratio ([lr]) $r = p(n|\mu+b) / p(n|\mu_{\rm best}+b)$, with $\mu_{\rm best} = \max(0, n-b)$, until the desired coverage is reached. the resulting confidence intervals are two-sided ($\mu_1 > 0$) for large $n$, whereas for small $n$ they are upper limits (_i.e._ $\mu_1 = 0$). the fact that the confidence intervals are two-sided for large $n$ can be understood by considering $n \gg b$, which gives $\mu_{\rm best} = n - b$. in this case the likelihood ratio ([lr]) is given by $r = \exp\left\{ n \ln\frac{\mu+b}{n} + n - (\mu+b) \right\} \stackrel{n\to\infty}{\longrightarrow} 0$ (eq. [lr-2]). this implies that the rank of high values of $n$ is very low and they are excluded from the confidence belt. therefore, the acceptance intervals are limited from above, and a large observed $n$ leads to a two-sided confidence interval. in the bayesian theory with a flat prior for $\mu$, the probability (credibility) of an interval $[\mu_1, \mu_2]$ is given by $p(\mu_1 \le \mu \le \mu_2 | n, b) = \left( e^{-\mu_1} \sum_{k=0}^{n} \frac{(b+\mu_1)^k}{k!} - e^{-\mu_2} \sum_{k=0}^{n} \frac{(b+\mu_2)^k}{k!} \right) \left( \sum_{k=0}^{n} \frac{b^k}{k!} \right)^{-1}$ (eq. [integral-probability]). the shortest credibility intervals $[\mu_1, \mu_2]$ are chosen with equal posterior density at the two ends if possible (with $\mu_1 > 0$), or with $\mu_1 = 0$ otherwise.
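For readers who want to reproduce the unified-approach construction numerically in this Poisson-plus-known-background case, here is a compact sketch; the grids and truncations are illustrative choices and not part of the prescription itself.

```python
import numpy as np
from scipy.stats import poisson

def fc_interval(n_obs, b, cl=0.90, mu_max=50.0, dmu=0.005, n_max=200):
    """Unified-approach (Feldman-Cousins) interval for a Poisson signal mu
    with known background b, given n_obs observed events.

    For each mu on a grid, values of n are added to the acceptance interval in
    decreasing order of the likelihood ratio R = P(n|mu+b)/P(n|mu_best+b) with
    mu_best = max(0, n-b), until the summed probability reaches cl.  The
    confidence interval is the set of mu whose acceptance region contains n_obs.
    """
    n = np.arange(n_max + 1)
    accepted_mu = []
    for mu in np.arange(0.0, mu_max, dmu):
        p = poisson.pmf(n, mu + b)
        mu_best = np.maximum(0.0, n - b)
        rank = p / poisson.pmf(n, mu_best + b)
        coverage, accept = 0.0, set()
        for k in np.argsort(rank)[::-1]:
            accept.add(int(k))
            coverage += p[k]
            if coverage >= cl:
                break
        if n_obs in accept:
            accepted_mu.append(mu)
    return min(accepted_mu), max(accepted_mu)
```

For instance, comparing `fc_interval(0, b=0.5)` and `fc_interval(0, b=3.0)` reproduces the behaviour discussed below: the upper limit obtained for n = 0 still depends on, and decreases with, the expected background.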
] .one can see that the behavior of obtained with the bayesian ordering method is intermediate between those in the unified approach and in the bayesian theory . although one must always remember that the statistical meaning of is different in the two frequentist methods ( unified approach and bayesian ordering ) and in the bayesian theory , for scientists using these upper limits it is often irrelevant how they have been obtained .hence , i think that an approximate agreement between frequentist and bayesian results is desirable . from eq .( [ bev ] ) one can see that therefore , for the confidence belt obtained with the bayesian ordering method is similar to that obtained with the unified approach .the difference between the two methods show up only for .this is illustrated in figs .[ bolim ] and [ bobelt ] , that must be confronted with the corresponding figures [ ualim ] and [ uabelt ] in the unified approach .notice that , as shown in fig .[ bobelt ] , contrary to the unified approach , the bayesian ordering method gives physically significant ( non - zero - width ) confidence intervals even for low values of the confidence level .* criticism : * _ bayesian ordering is a mixture of frequentism and bayesianism .the uncompromising frequentist can not accept it ._ no ! it is a frequentist method .bayesian theory is only used for the _ choice of ordering _ in the construction of the acceptance intervals , that in any case is subjective and beyond frequentism ( as , for example , the central interval prescription or the unified approach method ) .the bayesian method for such a subjective choice is quite natural .if you belong to the frequentist orthodoxy ( sort of religion ! ) and the word `` bayesian '' gives you the creeps , you can change the name `` bayesian ordering '' into whatever you like and use its prescription for the construction of the acceptance intervals as a successful recipe .* criticism : * _ in the unified approach ( and maybe bayesian ordering ? )the upper limit on goes to zero for every as goes to infinity , so that a low fluctuation of the background entitles to claim a very stringent limit on the signal ._ this is not true !one can see it is given by the expression in eq .( [ lr-1 ] ) , that tends to for and small . for , and all rank close to maximum . for likelihood ratio is given by the expression in eq .( [ lr-2 ] ) . for large values of , taking into account that , we have and , which imply that .so the rank drops rapidly for .therefore , for small values of the s much smaller than have highest rank . since they have also very small probability , they all lie comfortably in the confidence belt , if the confidence level is sufficiently large ( ) . ] doing a calculation of the upper limit for as a function of for large values of .the result of such a calculation in the unified approach is shown in fig .[ higback - asy]a , where the 90% cl upper limit is plotted as a function of in the interval for ( solid line ) , ( dashed line ) and ( dotted line ) .one can see that initially decreases with increasing , but it stabilizes to about 0.8 for , with fluctuations due to the discreteness of . figure [ higback - asy]b shows the same plot obtained with the bayesian ordering .one can see that initially decreases with increasing , but less steeply than in the unified approach , and it stabilizes to about 1.8 . 
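The Bayesian upper limits entering this comparison can be sketched directly from the flat-prior credibility formula quoted above (eq. [integral-probability]); here the upper limit is defined by giving the interval [0, mu_up] a posterior probability equal to the confidence level, which coincides with the shortest credible interval whenever the posterior is monotonically decreasing. This is a sketch of that convention, not of the paper's exact numerical implementation.

```python
import numpy as np
from scipy.optimize import brentq

def bayes_upper_limit(n, b, cl=0.90):
    """Flat-prior Bayesian upper limit for a Poisson signal with background b.

    Solves P(mu > mu_up | n, b) = 1 - cl, with the posterior tail probability
    P(mu > x | n, b) = exp(-x) * sum_k (b+x)^k/k! / sum_k b^k/k!  (k = 0..n),
    i.e. the mu_2 -> infinity limit of eq. [integral-probability].
    """
    k = np.arange(n + 1)
    kfact = np.cumprod(np.concatenate(([1.0], np.arange(1.0, n + 1))))  # k!
    norm = np.sum(b ** k / kfact)

    def tail(x):
        return np.exp(-x) * np.sum((b + x) ** k / kfact) / norm

    return brentq(lambda x: tail(x) - (1.0 - cl), 0.0, 100.0)
```

For example, `bayes_upper_limit(0, b=0.0)` gives the familiar value of about 2.3 at 90% credibility, and with this prior the limit for n = 0 does not depend on b, in line with the comparison made in the text.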
for comparison, in fig. [higback-asy]c i plotted the upper limit as a function of the background in the bayesian theory with a flat prior and shortest credibility intervals. one can see that the behavior of the upper limit in the three methods considered in fig. [higback-asy] is rather similar. * criticism: * _ for $n = 0$ the upper limit should be independent of the background. _ but for $n = 0$ the upper limit always decreases with increasing background! it is true that for $n = 0$ one is sure that no background event as well as no signal has been observed. but this is just the effect of a low fluctuation of the background that _ is present _! should we build a special theory for $n = 0$? i think that this is not interesting in the frequentist framework, because i guess that it leads necessarily to a violation of coverage (which could be tolerated, but not welcomed, only if it is overcoverage). i think that if one is so interested in having an upper limit independent of the background for $n = 0$, one had better embrace the bayesian theory (see fig. [higback]c, fig. [higback-asy]c and ref. ), which, by the way, may present many other attractive qualities (see, for example, ). * criticism: * _ a (worse) experiment with larger background should not give a smaller upper limit for the same number of observed events. _ but, as shown in fig. [higback], this always happens! notice that it happens both for (dotted part of lines) and for (solid part of lines), in frequentist methods as well as in the bayesian theory (for ). as far as i know, nobody questions the decrease of the upper limit as the background is increased if $n \geq 1$. so why should we question the same behavior when $n = 0$? the reason for this behavior is simple: the observation of a given number of events has the same probability if the background is small and the signal is large, or if the background is large and the signal is small. i think that it is physically desirable that an experiment with a larger background does not give a _ much smaller _ upper limit for the same number of observed events, but a _ smaller _ upper limit is allowed by _ statistical fluctuations _. indeed, upper limits (as confidence intervals, etc.) are statistical quantities that _ must fluctuate! _ i think that the current race of experiments to find the most stringent upper limit is bad, because it induces people to think that limits are fixed and certain.
instead, everybody should understand that _ a better experiment can sometimes give a worse upper limit because of statistical fluctuations, and there is nothing wrong about it! _ in this report i have shown that the necessity to choose a specific frequentist method, among several available, does not introduce any degree of subjectivity from a statistical point of view (section [significance]). in other words, all frequentist methods are statistically equivalent. however, the physical significance of confidence intervals obtained with different methods is different, and scientists interested in obtaining reliable and useful information on the characteristics of the real world must worry about this problem. obtaining empty or very small confidence intervals for a physical quantity as a result of a statistical procedure is useless. sometimes it is even dangerous to present such results, which lead non-experts in statistics (and sometimes experts too) to false beliefs. in section [beauty] i have discussed some virtues and shortcomings of the unified approach. these shortcomings are ameliorated in the bayesian ordering method, discussed in section [ordering], which is natural, relatively easy, and leads to more reliable upper limits. in conclusion, i would like to emphasize the following considerations: * one must always remember that, in order to have coverage, the choice of a specific frequentist method must be done independently of the knowledge of the data. * finding some examples in which a method fails does not imply that it should not be adopted in the cases in which it performs well. * since all frequentist methods are statistically equivalent, there is no need for a general frequentist method! in each case one can choose the method that works better (basing the judgment on simplicity, meaningfulness of limits, etc.). complicated methods with a wider range of applicability are theoretically interesting, but not attractive in practice. * some argue that the physics community should agree on a standard statistical method (see, for example, ).
in that case, it is clear that this method must always be applicable. but this is not the case, for example, for the unified approach, as shown in . although the bayesian ordering method has not been submitted to a similarly thorough examination, i doubt that it is generally applicable. i do not see why experiments that explore different physics and use different experimental techniques should all use the same statistical method (except for a possible ignorance of statistics and blind faith in `` authorities ''). i would recommend that, instead of wasting time on useless characteristics such as generality, _ the physics community should worry about the usefulness and credibility of experimental results _. i would like to thank marco laveder for fruitful collaboration and many stimulating discussions.
it is shown that all the frequentist methods are equivalent from a statistical point of view , but the physical significance of the confidence intervals depends on the method . the bayesian ordering method is presented and confronted with the unified approach in the case of a poisson process with background . some criticisms to both methods are answered . it is also argued that a general frequentist method is not needed .
lattice monte carlo ( lmc ) computer simulations are often used to study diffusion problems when it is not possible to solve the diffusion equation .if the lattice mesh size is small enough , lmc simulations provide results that are in principle arbitrarily close to the numerical solution of the diffusion equation . in lmc simulations, a particle is essentially making an unbiased random - walk on connected lattice sites , and those moves that collide with obstacles are rejected .the allowed monte carlo moves are usually displacements by one lattice site along one of the spatial directions . in the presence of an external field, one must bias the possible lattice jumps in order to also reproduce the net velocity of the particle .however , this is not as easy as it looks because one must also make sure that the diffusion coefficient is correctly modelled along each of the spatial directions . using a metropolis weighting factor does not work because in the limit of large driving fields , all the jumps along the field axis are in the same direction and hence the the velocity saturates and the diffusion coefficient in this direction vanishesthis approach is thus limited to weak fields , at best .a better approach is to solve the local diffusion problem ( i.e. , inside each lattice cell ) using a first - passage problem ( fpp ) approach , and to use the corresponding probabilities and mean jumping times for the coarser grained lmc moves . in this case, the mean jumping times are shorter along the field axis , but one can easily renormalize the jumping probabilities to use a single time step . in a recent paper , we demonstrated that although this method does give the correct drift velocity for arbitrary values of the driving field , it fails to give the correct diffusion coefficient .the problem is due to the often neglected fact that the variance of the jumping time affects the diffusion process in the presence of a net drift .lmc models do not generally include these temporal fluctuations of the jumping time , at least not in an explicit way . in the same article , we showed how to modify a one - dimensional lmc algorithm with the addition of a stochastic jumping time , where the appropriate value of the standard - deviation was again obtained from the resolution of the local fpp . for simulations in higher spatial dimensions ,it is possible to use our one - dimensional algorithm with the proper method to alternate between the dimensions as long as the monte carlo clock advances only when the particle moves along the field direction .lmc simulations of diffusion processes actually use stochastic methods to resolve a discrete problem that can be written in terms of coupled linear equations .several years ago , we proposed a way to compute the exact solution of the lmc simulations via matrix methods , thus bypassing the need for actual simulations .this alternative method is valid only in the limit of vanishingly weak driving fields , but it produces numerical results with arbitrarily high precision . the crucial requirement of the method is a set of lmc moves that have a common jumping time .dorfman suggested a slightly different but still exact numerical method , and the two agree perfectly at zero - field .more recently , we extended our numerical method to cases with driving fields of arbitrary magnitudes ; in order to do that , we used lmc moves that possess a single jumping time for all spatial directions , but this forced us to neglect the temporal fluctuations discussed above . 
as a consequence , our numerical method generates exact velocities but fails to provide reliable diffusion coefficients . again , dorfman s alternate method also give the same velocities , but because the lmc moves do not include the proper temporal fluctuations , neither method can be used to compute the diffusion coefficient along the field axis . in summary , a fixed - time lmc algorithm can be used with exact numerical methods to compute the net velocity , but temporal fluctuations ( and hence computer simulations ) must be used to compute the diffusion coefficient .we recently solved the problem of defining a lmc algorithm with both a fixed time step and the proper temporal fluctuations .this required the addition of a probability to stay put on the current lattice site during a given time step ( of course , this change also implies a renormalization of the jumping probabilities ) .this probability of non - motion has a direct effect on the real time elapsed between two displacements of the brownian walker , and this effect can be adjusted in order to reproduce the exact temporal fluctuations of the local fpp .we showed that this new lmc algorithm can be used with dorfman s exact numerical method to compute the exact field - dependence of both the velocity and the diffusion coefficient of a particle on a lattice in the presence of obstacles .as far as we know , this is the first biased lattice random - walk model that gives the right diffusion coefficient for arbitrary values of the external field .other models , such as the repton model , are restricted to weak fields .several other articles ( see , e.g. ) report simulations of diffusive processes , but all of them appear to be limited to small biases .unfortunately , our lmc algorithm has a fatal flaw : for dimensions , some of the jumping probabilities turn out to be negative .this failure suggests that there is a fundamental problem with this class of models , or more precisely with standard lmc moves ( however , note that it is still possible to use computer simulations and fluctuating jumping times , as explained above ) . in other words , it is impossible to get both the right velocity and the right diffusion coefficient in all spatial directions ( if ) when the lmc jumps are made along a single axis at each step . in this article, we examine an alternative to the standard lmc moves in order to derive a valid lmc algorithm with a common time step for spatial dimensions .we suggest that a valid set of lmc moves should respect the fact that motion along the different spatial directions is actually simultaneous and not sequential .as we will show , this resolves the problem and allows us to design a powerful new lmc algorithm that can be used both with exact numerical methods and stochastic computer simulations .as mentioned above , metropolis - like algorithms are not reliable if one wants to study diffusion via the dynamics of biased random - walkers on a lattice .the discretization of such continuous diffusion processes should be done by first solving the fpp of a particle between two absorbing walls ( the distance between these arbitrary walls is the step size of the lattice ) . indeed, completion of a lmc jump is identical to the the first passage at a distance from the origin . 
in one dimension , this fpp has an exact algebraic solution , and the resulting transition probabilities ( noted for parallel and antiparallel to the external force ) are : where is the ( scaled ) external field intensity , is boltzmann s constant and is the temperature .the time duration of these fpp jumps is : where , the time duration of a jump when no external field is applied , is called the brownian time .although eqs .[ e : p1d ] and [ e : tau1d ] can be used to simulate one - dimensional drift problems ( the net velocity is then correct ) , they erroneously generate a field - dependent diffusion coefficient for a free particle , which is wrong .this failure is due to the lack of temporal fluctuations in such a lmc algorithm ( at each step , the particle would jump either forward ( ) or backward ( ) , and all jumps would take the same time ) .as mentioned above , it is possible to fix this problem with a stochastic time step like where can also be calculated exactly within the framework of fpp s : however , the resulting algorithm can only be used in monte carlo computer simulations because exact resolution methods require a common time step for all jumps .alternatively , temporal fluctuations can be introduced using a probability to remain on the same lattice site during the duration of a fixed time step .not moving has for effect to create a dispersion of the time elapsed between two actual jumps . in order to obtain the right free - solution diffusion coefficient, we must have : this modification also forces us to renormalize the other elements of the lmc algorithm : equations [ e : ss ] to [ e : tt ] define a lmc algorithm that can be used with monte carlo simulations ( or exact numerical methods ) to study one - dimensional drift and diffusion problems .one can easily verify that it leads to the proper free - solution velocity ( ) and diffusion coefficient ( ) , while satisfying the nernst - einstein relation .these equations will thus be the starting point of our new multidimensional lmc algorithm .in principle , we can build a simple model for dimensions using the elements of a one - dimensional biased random walk for the field axis and those of an unbiased random - walk for each of the transverse axes .indeed , it is possible to fully decouple the motion along the different spatial directions if the field is along a cartesian axis .such an algorithm is divided into three steps : 1 .first , we must select the jump axis , keeping in mind that the particle should share its walking time equally between the spatial directions . the probability to choose a given axisshould thus be inversely proportional to the mean time duration of a jump in this direction ( note that the time duration of a jump is shorter in the field direction ) .2 . secondly , the direction ( ) of the jump must be selected .3 . finally , the time duration of the jump must be computed and the monte carlo clock must be advanced .there are several ways to implement these steps .the easiest way is to use eqs .[ e : p1d ] to [ e : dtau ] ; in this case , the lmc clock must advance by a stochastic increment each time a jump is made along the field axis ( in order to obtain the proper temporal fluctuations , the clock does not advance otherwise ) .a slightly more complicated way would be to use eqs .[ e : ss ] to [ e : tt ] ; again , the clock advances only when the jump is along the field axis , but this choice has the advantage of not needing a stochastic time increment . 
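As an illustration of how the stay-put probability enters a fixed-time-step simulation, here is a minimal one-dimensional sketch; it takes the renormalized probabilities of eqs. [e:ss]-[e:tt] as inputs rather than re-deriving them, since their explicit field dependence is given there, and the ensemble sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def lmc_walk_1d(p_plus, p_minus, s, n_steps, a=1.0):
    """Fixed-time-step 1D lattice walk with probability s of staying put.

    p_plus, p_minus and s are the renormalized probabilities of eqs.
    [e:ss]-[e:tt] (they must sum to one); every step advances the clock by the
    same dt, and the stay-put events supply the temporal fluctuations.
    """
    steps = rng.choice([1, -1, 0], size=n_steps, p=[p_plus, p_minus, s])
    return a * np.cumsum(steps)

def measure_v_and_D(p_plus, p_minus, s, dt, n_steps=5000, n_walkers=2000):
    """Estimate the drift velocity and diffusion coefficient from an ensemble."""
    x_final = np.array([lmc_walk_1d(p_plus, p_minus, s, n_steps)[-1]
                        for _ in range(n_walkers)])
    t = n_steps * dt
    return x_final.mean() / t, x_final.var() / (2.0 * t)
```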
although both of these implementations can easily be used with computer simulations , they would not function with exact numerical methods because of the way the clock is handled . for exact numerical methods ,an algorithm with a common time step and a common clock for all spatial directions is required .we showed that it is indeed possible to do this if we renormalize eqs .[ e : p1d ] and [ e : tau1d ] properly ; this approach works for any dimension , but it can only be used to compute the exact velocity of the particle since it neglects the temporal fluctuations . in order to also include these fluctuations , one must start from eqs .[ e : ss ] to [ e : tt ] instead .unfortunately , this can be done only in two dimensions since the renormalization process gives negative probabilities when .clearly , in order to derive a multi - dimensional lmc algorithm with a fixed time - step , a common clock and the proper temporal fluctuations , we need a major change to the basic assumptions of the lmc methodology . in the next section ,we propose to allow simultaneous jumps in all spatial directions .this is a natural choice since lmc methods do indeed assume that the motion of the particle is made of entirely decoupled random - walks .current lmc methods assume this decoupling to be valid , but force the jumps to be sequential and not simultaneous .in our multi - dimensional algorithm , the lmc moves were the standard unit jumps along one of the cartesian axes , and a probability to stay put was used to generate temporal fluctuations . since moving along a given axis actually contributes to temporal fluctuations along all the other axes , the method fails for because the transverse axes then provide an excess of temporal fluctuations .this strongly suggests that the traditional sequential lmc moves are the culprit .sequential lmc moves are used solely for the sake of simplicity , but they are a poor representation of the fact that real particles move in all spatial directions at the same time .this weakness is insignificant for unbiased diffusion , but it becomes a roadblock in the presence of strong driving fields . in order to resolve this problem , we thus suggest to employ a set of moves that respect the simultaneous nature of the dynamics along each of the axes . to generate a lmc algorithm for this new set of moves, we will use our exact solution of the one - dimensional problem for each of the directions .our new lmc moves will include one jump attempt along each of spatial directions .the list will thus consist of different moves since we must allow for all possible permutations of the three fundamental jumps ( of length and ) used by the exact one - dimensional model that we will be using for each axis .note that the external field must be parallel to one the cartesian axes ( we choose the -axis here ) .the dynamics is governed by , and in the -direction ( eqs .[ e : ss ] to [ e : tt ] ) , whereas we can in principle use and for the transverse directions because there is no need to model the temporal fluctuations when there is no net drift in the given direction .the optimal time step for our new moves is , the duration of the fastest unit process .we thus have to rescale the transverse probability accordingly : this generates an arbitrary probability to stay put in the transverse directions : in the zero - field limit , this probability gives : therefore , the probability to stay put is the same in all the directions in this limit , as it should . 
in the opposite limit , we have : and the jumps in the transverse directions become extremely rare , as expected .equations [ e : ss ] to [ e : sy ] are sufficient to build the table of multi - dimensional moves and their different probabilities since the directions are independent .figure [ f : schema ] illustrates the new lmc moves for the and cases in the absence of obstacles .the moves , all of duration , combine simultaneous one - dimensional processes and include net displacements along lattice diagonals .the paths are further defined in table [ t : free]a ; such a description of the trajectories will be essential later to determine the dynamics in the presence of obstacles .it is straightforward to extend this approach to higher dimensions ( ) .we can easily verify that this new set of lmc moves gives the right free - solution velocity and diffusion coefficients for all dimensions . if the field is pointing along the -axis ,the average displacement per time step is , while the average square displacement is . using these results, we can compute the free - solution velocity and diffusion coefficient : and one can also verify that and .these are precisely the results that we expect .therefore , the model introduced here does work for all values of the external field and all dimensions in the absence of obstacles .the problems faced in ref . have been resolved by making the directions truly independent from each other and choosing as the fundamental time step of the new lmc moves .since this new model works fine in free - solution , the next step is to define how to deal with the presence of obstacles .the rule that we follow in those cases where a move leads to a collision with an obstacle is the same as before , i.e. , such a jump is rejected and the particle remains on the same site . in our algorithm , though , this means that one ( or more ) of the sub - components of a -dimensional move is rejected .therefore , the list of transition probabilities must take into account all of the possible paths that the particle can follow given the local geometry .a two - dimensional example is illustrated in table [ t : free]b .we see that the two different trajectories that previously lead to the upper right corner ( site ) now lead to different final positions due to the rejection of one of the two unit jumps that are involved .the final transition probabilities for this particular case are listed in table [ t : free]b .of course , all local distributions of obstacles can be studied using the same systematic approach .in order to test our new set of lmc moves for systems with obstacles , we will compare its predictions to those of our previous two - dimensional algorithm since we know that both can properly reproduce the velocity and the diffusion coefficient of a particle in the case of an obstacle - free system .however , the different moves used by these two algorithms means that a true comparison can only be made in the limit of the continuum since the choice of moves always affects the result of a coarse - grained approach if there are obstacles .the exact numerical method that we developed in collaboration with dorfman is not limited to the previous set of lmc moves .it can easily be modified to include other lmc moves , including diagonal moves . 
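A sketch of how such simultaneous moves can be generated, together with a simplified version of the component-wise collision rule, is given below. The obstacle handling applies the sub-jumps in a random order, which mimics, but does not exactly reproduce, the equal-weight enumeration of paths of table [t:free]b; the probabilities are again taken as inputs (eqs. [e:ss]-[e:sy]).

```python
import numpy as np

rng = np.random.default_rng(2)

def simultaneous_move(p_plus, p_minus, s_field, p_trans, s_trans, dim=3):
    """Draw one fixed-time-step move of the d-dimensional walker.

    Each axis is drawn independently (axis 0 is the field axis), so diagonal
    displacements occur naturally within a single time step dt = tau_x.
    """
    move = np.empty(dim, dtype=int)
    move[0] = rng.choice([1, -1, 0], p=[p_plus, p_minus, s_field])
    for axis in range(1, dim):
        move[axis] = rng.choice([1, -1, 0], p=[p_trans, p_trans, s_trans])
    return move

def apply_move_with_obstacles(pos, move, is_free):
    """Apply a simultaneous move, rejecting the components that hit obstacles.

    The sub-jumps are attempted in random order; each accepted component
    updates the position, each rejected one is simply dropped (a simplified
    stand-in for the path enumeration of table [t:free]b).
    """
    new = np.array(pos, dtype=int)
    for axis in rng.permutation(len(move)):
        trial = new.copy()
        trial[axis] += move[axis]
        if is_free(trial):
            new = trial
    return new
```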
combining dorfman s method and our new lmc moves, we now have a way to compute the exact velocity and the exact diffusion coefficient of a particle in the presence of arbitrary driving field for any dimension .we thus studied the system shown in fig .[ f : continuum]b using both algorithms , and we repeated the calculation for different lattice parameters ( with ) while the obstacle size ( ) remained constant ( the surface concentration of obstacles is thus kept constant at ) .the limit of the continuum corresponds to .we compared the velocities and diffusion coefficients along the field - axis obtained with both algorithms over a wide range of .note that the value of the external scaled field , which is proportional to the lattice parameter ( ) , has to be rescaled by the factor .figure [ f : continuum]a presents the data for both algorithms for a nominal field intensity .we clearly see that the two approaches converge perfectly in the limit .interestingly , the new algorithm converges slightly faster towards the asymptotic continuum value .this is explained by the fact that the diagonal transitions reduce the number of successive collisions made by a random - walker when it is trapped behind an obstacle at high field .conventional three - dimensional lmc algorithms can not be used to study both the mean velocity and the diffusion coefficient of a brownian particle if the time step has to be constant ( as required by exact numerical methods ) .this limitation is due to the fact that these algorithms only allow jumps to be made along one axis at each time step .such unit jumps make it impossible to obtain the proper temporal fluctuations that are key to getting the right diffusion coefficient .we propose that lmc moves should actually respect the fact that all of the spatial dimensions are fully independent .this means that each move should include a component along each of these dimensions .this complete dimensional decoupling allows us to conserve the proper temporal fluctuations and hence to reproduce the correct diffusion process even in the presence of an external field of arbitrary amplitude .this approach leads to a slightly more complicated analysis of particle - obstacle collisions , but this is still compatible with the exact numerical methods developed elsewhere .the new lmc algorithm presented in this paper opens the door to numerous coarse - grained simulation and numerical studies that were not possible before because previous algorithms were restricted to low field intensities .this work was supported by a discovery grant from the natural science and engineering research council ( _ nserc _ ) of canada to gws .mgg was supported by a _scholarship , an excellence scholarship from the university of ottawa and a strategic areas of development ( sad ) scholarship from the university of ottawa .( squares ) and diffusion coefficient ( circles ) vs the mesh size for .these calculations were done using the algorithm presented in ref . ( filled symbols ) and the one proposed in this paper ( empty symbols ) .( b ) the obstacle is of size , the lattice is of size ( with periodic boundary conditions ) , and the particle ( not shown ) is of size . the system is shown for three different values of the mesh size parameter . , width=528 ]
we recently demonstrated that standard fixed - time lattice random - walk models can not be modified to properly represent biased diffusion processes in more than two dimensions . the origin of this fundamental limitation appears to be the fact that traditional monte carlo moves do not allow for simultaneous jumps along each spatial direction . we thus propose a new algorithm to transform biased diffusion problems into lattice random walks such that we recover the proper dynamics for any number of spatial dimensions and for arbitrary values of the external field . using a hypercubic lattice , we redefine the basic monte carlo moves , including the transition probabilities and the corresponding time durations , in order to allow for simultaneous jumps along all cartesian axes . we show that our new algorithm can be used both with computer simulations and with exact numerical methods to obtain the mean velocity and the diffusion coefficient of point - like particles in any dimensions and in the presence of obstacles . , , diffusion coefficient , biased random - walk , monte carlo algorithm .
in recent years , there has been increasing interest of the possibility that evolution of species in an ecosystem may be a self organized critical phenomena . there are interactions among species in that ecosystem .the most common such interactions are predation , competition for resources and mutualism . as a result of these interactionsthe evolutionary adaptation of one species must affect its nearest neighbors .these interactions can give rise to large evolutionary disturbances , termed _most of these evolution models , like bs model [ 1 ] , considered only one fitness for each species .bak and sneppen proposed a self organized model to explain the punctuated equilibrium of biological evolution .they considered a 1- dimensional model with periodic boundary conditions , topologically a circle .assign a fitness to each site , , where , is the number of species in the ecosystem . at each time steplook for the site with lowest fitness then replace its fitness together with the fitnesses of its nearest neighbors , by new ones which are uniformly distributed random variables . after running the system for sufficiently long time most of the fitness are above certain threshold . also , the distribution of the distance between subsequent mutations and the avalanche sizes exhibit power laws .several modification can be done to the bs model .the first possible modification is to use extremal dynamics [ 2 , 3 ] which depends on the following idea : in real biological systems not only the lowest one who is updated but some of the low fitness species .this number that changes is not fixed but random .so , we will study this random version of bs model in section 2 .also , in biology almost every optimization problem is multiobjective ( mob ) e.g. objective of foraging and of minimizing predation risk . in section 3, we will apply the concept of mob to bs model .here , we will study the first modification that can be done to the bs model . instead of finding exactly the site with lowest fitness , one may use the extremal dynamics . in this case a uniformly distributed random number is picked and all the sites with fitness less than this number has its fitness updated .this dynamics has been used to explain the long term memory for the immune system [ 3 ] .it has been also used to solve some optimization problems e.g. spin glass , graph coloring and graph partitioning [ 4 , 5 ] .we run a system consisting of species for different sufficiently long time ( up to ) .we find that most of the fitnesses are above a certain threshold value , as shown in figure 2 . in figure 1we plotted the standard bak - sneppen model for reference .in most evolution models e.g. bs model , only one fitness is considered i.e. single objective optimization .almost every real life problem is multiobjective ( mob ) one [ 6 ] .therefore it is important to generalize the standard single goal oligopoly studies to multiobjective ones .methods for mob optimization are mostly intuitive .the * first method * is lexicographic method . 
in this method objectivesare ordered according to their importance .then the first objective is satisfied fully .the second one is satisfied as much as possible given that the first objective has already been satisfied and so on .a famous application is in university admittance where students with highest grades are allowed in any college they choose .the second best group are allowed only the remaining places and so on .this method is useful but in some cases it is not applicable .the * second method * is the method of weights [ 7 ] .assume that it is required to minimize the objectives , .the problem of maximization is obtained via replacing by .define where then the problem becomes to minimize .this method is easy to implement but it has several weaknesses .the first is that it may give a pareto dominated solution .a solution is pareto dominated if there is another solution such that for all with at least one such that .the second difficulty of this method is that it is difficult to apply for large .the * third method * is to minimize only one objective while setting the other objectives as constraints e.g. minimize subject to , where are parameters to be updated .the problem with this method is the choice of the thresholds . in the case of equalityi.e. this method is guaranteed to give a pareto optimal solution . the * fourth method * using fuzzy logic is to study each objective individually and find its maximum and minimum say , respectively . then determine a membership thus . thenapply . againthis method is guaranteed to give a pareto optimal solution .this method is a bit difficult to apply for large number of objectives .the bs model can be generalized to the multiobjective . assigning two fitnesses , , to each site instead of one .the updating rule is if where , then update both , and , . in the updating rulewe have used the simple and widely used method , weighting method in mob .multiobjective optimization is much more realistic than single objective ones . after running a system consisting of species for different sufficiently long time ( up to ) .the distribution of the distance between subsequent mutations are shown in figure 3 .we find that most of the fitnesses are above certain threshold value , as shown in figure 4 .the size of avalanches are shown in figure 5 .bak p. and sneppen k. ( 1993 ) , punctuated equilibrium and criticality in a simple model of evolution , phys .71 , 4083 .head d. ( 2000 ) , extremal driving as a mechanism for generating long term , j. phys .a : math . gen .33 , 387 . ahmed e. and hashish a.h . ( 2003 ) , on modeling of immune memory mechanisms , theor .biosci.122 , 349 .boettcher s. and percus a. ( 2001 ) , on extremal optimization , phys .86 , 5211 . ahmed and el - alem m.(2002 ) , immune motivated optimization , int .41 , 985 . zeleny m. ( 1982 ) ,multiple criteria decision making , mcgraw hill , newyork .zadeh l. ( 1963 ) , optimality and nonscalar performance criteria , ieee transactions .automatic control , ac-8 , 59 . 1. the results of bs model under our simulations in a system of size and , iteration .( a ) the distribution of distances d(x ) between subsequent mutations x. ( b ) distribution of fitness in the critical state ( right curve ) d(f ) with the distribution of minimum fitness ( left curve ) . ( c ) distribution of avalanche sizes d(s ) in critical state .( d ) mutation activity vs time measured as the total number of mutations ( and ) .2 . 
the results of random bs model in a system of size and , iteration .( a ) distribution of fitness in the critical state d(f ) .( b ) distribution of minimum fitness in the critical state .( c ) distribution of avalanche sizes d(s ) in critical state .3 . the distribution of distances d(x ) between subsequent mutations in a system of size and , iteration with three different weights 0.3 , 0.5 , 0.9 . 4 . the distribution of fitness in the critical state ( right curve ) d(f ) with the distribution of minimum fitness ( left curve ) in a system of size and , iteration with three different weights 0.3 , 0.5 , 0.9 . 5 .the distribution of avalanche sizes d(s ) in critical state in a system of size and , iteration with three different weights 0.3 , 0.5 , 0.9 .
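for concreteness , the following python sketch gathers the three update rules used in this paper : the standard bs step , the extremal - dynamics ( random bs ) step and the weighted multiobjective step , all on a ring of species . since the corresponding formulas are not reproduced above , the uniform threshold of the random variant , the exact form of the weighted score and the choice of refreshing only the below - threshold sites ( rather than also their neighbours ) are assumptions of this sketch .

```python
import random

def bs_step(fitness):
    """standard bak-sneppen update: replace the lowest-fitness species and
    its two nearest neighbours (ring topology) with fresh random fitnesses."""
    n = len(fitness)
    i = min(range(n), key=fitness.__getitem__)
    for j in (i - 1, i, (i + 1) % n):        # python's -1 index wraps the left edge
        fitness[j] = random.random()
    return i                                  # mutation site, for the statistics

def random_bs_step(fitness):
    """extremal-dynamics variant: draw a uniform threshold and refresh every
    species whose fitness lies below it, so the number of updated species is
    itself random (refreshing neighbours as well is an optional extra here)."""
    threshold = random.random()
    updated = [i for i, f in enumerate(fitness) if f < threshold]
    for i in updated:
        fitness[i] = random.random()
    return updated

def mob_bs_step(fit1, fit2, w=0.5):
    """multiobjective variant via the weighting method: the species with the
    smallest combined score w*f1 + (1-w)*f2 and its two neighbours have both
    of their fitnesses replaced."""
    n = len(fit1)
    i = min(range(n), key=lambda k: w * fit1[k] + (1.0 - w) * fit2[k])
    for j in (i - 1, i, (i + 1) % n):
        fit1[j] = random.random()
        fit2[j] = random.random()
    return i

# example run: a ring of 200 species, weight 0.3 as in the figures
f1 = [random.random() for _ in range(200)]
f2 = [random.random() for _ in range(200)]
for _ in range(100_000):
    mob_bs_step(f1, f2, w=0.3)
```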
self - organized criticality ( soc ) phenomena could have a significant effect on the dynamics of ecosystems . the bak - sneppen ( bs ) model is a simple and robust model of biological evolution that exhibits punctuated equilibrium behavior . here we introduce a random version of the bs model . we also generalize the single - objective bs model to a multiobjective one . * keywords * : self - organized criticality , evolution and extinction , bs model , multiobjective optimization .
correlation functions are some of the most widely used statistics within astrophysics ( see peebles 1980 for an extensive review ) . they are often used to quantify the clustering of objects in the universe ( _ e.g. _ galaxies , quasars _ etc . _ ) compared to a pure poisson process . more recently , they have also been used to measure fluctuations in the cosmic microwave background ( see szapudi et al . 2000 ) . on large scales , the higher order correlation functions ( 3-point and above ) can be used to test several fundamental assumptions about the universe ; for example , our hierarchical scenario for structure formation , the gaussianity of the initial conditions as well as testing various models for the biasing between the luminous and dark matter . the reader is referred to szapudi ( 2000 ) , szapudi et al . ( 1999a , b ) and scoccimarro ( 2000 ; and references therein ) for an overview of the usefulness of correlation functions in constraining cosmological models . over the coming decade , several new , massive cosmological surveys will become available to the astronomical community . in this new era , the quality and quantity of data will warrant a more sophisticated analysis of the higher order correlation functions of galaxies ( and other objects ) over the largest range of scales possible . our ability to perform such studies will be severely limited by the computational time needed to compute such functions and no longer by the amount of data available . in this paper , we address this computational `` bottleneck '' by outlining a new algorithm that uses innovative computer science to accelerate the computation of correlation functions far beyond the naive scaling law ( where is the number of objects in the dataset and is the order of the correlation function desired ) . the algorithm presented here was developed as part of the `` computational astrostatistics '' collaboration ( see nichol et al . 2000 ) and is a member of a family of algorithms for a very general class of statistical computations , including nearest - neighbor methods , kernel density estimation , and clustering . the work presented here was initially presented by gray & moore ( 2001 ) and will soon be discussed in a more substantial paper by connolly et al . ( 2001 ) .
in this conferenceproceeding , we provide a brief review of _ k_d - trees ( section [ kdtr ] ) , a discussion of the use of _ k_d - trees in range searches ( section [ range ] ) , an overview of the development of a fast correlation function code ( section [ npt ] ) as well as presenting the concept of controlled approximations in the calculation of the correlation function ( section [ apprx ] ) .in section 6 , we provide preliminary results on the computation speed - up achieved with this algorithm and discuss future prospects for further advances in this field through the use of other tree structures .our fast correlation function algorithm is built upon the _ k_d - tree data structure which was introduced by friedman et al .k_d - tree is a way of organizing a set of datapoints in -dimensional space in such a way that once built , whenever a query arrives requesting a list all points in a neighborhood , the query can be answered quickly without needing to scan every single point .the root node of the _ k_d - tree owns all the data points .each non - leaf - node has two children , defined by a splitting dimension and a splitting value .the two children divide their parent s data points between them , with the left child owning those data points that are strictly less than the splitting value in the splitting dimension , and the right child owning the remainder of the parent s data points : < { { { { { n}}.{{\mbox{\scriptsize \sc splitvalue}}}}}}\mbox { and } { { { { { \mbox{\bf x}}}}_{i}}}\in { n}\\ { { { { { \mbox{\bf x}}}}_{i}}}\in { { { { { n}}.{{\mbox{\scriptsize \sc right } } } } } } & \leftrightarrow & { { { { { \mbox{\bf x}}}}_{i}}}[{{{{{n}}.{{\mbox{\scriptsize \sc splitdim } } } } } } ] \geq { { { { { n}}.{{\mbox{\scriptsize \sc splitvalue}}}}}}\mbox { and } { { { { { \mbox{\bf x}}}}_{i}}}\in { n}\end{aligned}\ ] ] as an example , some of the nodes of a _ k_d - tree are illustrated in figures 1 . _k_d - trees are usually constructed top - down , beginning with the full set of points and then splitting in the center of the widest dimension .this produces two child nodes , each with a distinct set of points .this procedure is then repeated recursively on each of the two child nodes .a node is declared to be a leaf , and is left unsplit , if the widest dimension of its bounding box is some threshold , .a node is also left unsplit if it denotes fewer than some threshold number of points , .a leaf node has no children , but instead contains a list of -dimensional vectors : the actual datapoints contained in that leaf .the values and would cause the largest _ k_d - tree structure because all leaf nodes would denote singleton or coincident points . in practice ,we set to of the range of the data point components and to around 10 .the tree size and construction thus cost considerably less than these bounds because in dense regions , tiny leaf nodes are able to summarize dozens of data points .the operations needed in tree - building are computationally trivial and therefore , the overhead in constructing the tree is negligible . also , once a tree is built it can be re - used for many different analysis operations . 
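the construction just described can be sketched as follows ( python , points given as equal - length tuples of floats ) ; each node caches its bounding box and its point count , which the search and counting sketches further below rely on , and the two leaf thresholds play the role of the parameters discussed above .

```python
class KDNode:
    def __init__(self, points):
        self.points = points                              # kept only at leaves
        self.count = len(points)                          # cached sufficient statistic
        dims = range(len(points[0]))
        self.lo = [min(p[d] for p in points) for d in dims]   # bounding box
        self.hi = [max(p[d] for p in points) for d in dims]
        self.split_dim = self.split_value = None
        self.left = self.right = None

def build_kdtree(points, min_width=0.0, min_points=10):
    """top-down k-d tree: split at the centre of the widest dimension of the
    bounding box; a node is left as a leaf if its widest side is at most
    min_width or if it owns fewer than min_points points."""
    node = KDNode(points)
    widths = [h - l for l, h in zip(node.lo, node.hi)]
    if len(points) < min_points or max(widths) <= min_width:
        return node
    d = widths.index(max(widths))
    value = 0.5 * (node.lo[d] + node.hi[d])
    left_pts = [p for p in points if p[d] < value]
    right_pts = [p for p in points if p[d] >= value]
    if not left_pts or not right_pts:                     # degenerate split: keep as a leaf
        return node
    node.split_dim, node.split_value = d, value
    node.left = build_kdtree(left_pts, min_width, min_points)
    node.right = build_kdtree(right_pts, min_width, min_points)
    node.points = None                                    # interior node: points live in the children
    return node
```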
since the introduction of _ k_d - trees , many variations of them have been proposed and used with great success in areas such as databases and computational geometry ( preparata & shamos 1985 ) .r - trees ( guttman 1984 ) are designed for disk resident data sets and efficient incremental addition of data .metric trees ( see uhlmann 1991 ) place hyperspheres around tree nodes , instead of axis - aligned splitting planes . in all cases ,the algorithms we discuss in this paper could be applied equally effectively with these other structures .for example , moore ( 2000 ) shows the use of metric trees for accelerating several clustering and pairwise comparision algorithms .before proceeding to fast calculations , we will begin with a very standard _ k_d - tree search algorithm that could be used as a building block for fast 2-point computations . for simplicity of exposition we will assume the every node of the _ k_d - tree contains one extra piece of information : the bounding box of all the points it contains . call this box .the implication of this is that every node must contain two new dimensional vectors to represent the lower and upper limits of each dimension of the bounding box .the range search operation takes two inputs .the first is a -dimensional vector called the _ query point_. the second is a separation distance .the operation returns the complete set of points in the _ k_d - tree that lie within distance of .* + returns a set of points such that : = the closest distance from to . *if then it is impossible that any point in can be within range of the query .so simply return the empty set of points without doing any further work .* else , if is a leaf node , we must iterate through all the datapoints in its leaf list . for each point , find if it is within distance of . if so , add it to the set of results .* else , is not a leaf node .then : * * let * * let * * return .figure 2a shows the result of running this algorithm in two dimensions .many large nodes are pruned from the search .117 distance calculations were needed for performing this range search , compared with 499 that would have been needed by a naive method .note that it is not essential that _ k_d - tree nodes have bounding boxes explicitly stored .instead a hyper - rectangle can be passed to each recursive call of the above function and dynamically updated as the tree is searched .range searching with a _ k_d - tree can be much faster than without if the range is small , containing only a small fraction of the total number of datapoints .but what if the range is large ?figure 2b shows an example in which _ k_d - trees provide little computational saving because almost all the points match the query and thus need to be visited . 
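a sketch of this pruned range search , reusing the kdnode / build_kdtree sketch above : a whole subtree is skipped whenever the query ball cannot reach its bounding box .

```python
import math

def min_dist_to_box(q, lo, hi):
    """closest possible distance from the query point q to a bounding box."""
    s = 0.0
    for x, l, h in zip(q, lo, hi):
        d = max(l - x, 0.0, x - h)
        s += d * d
    return math.sqrt(s)

def range_search(node, q, r):
    """return every point stored in the subtree lying within distance r of q."""
    if min_dist_to_box(q, node.lo, node.hi) > r:
        return []                             # prune: the whole node is out of range
    if node.left is None:                     # leaf: scan its point list
        return [p for p in node.points if math.dist(p, q) <= r]
    return range_search(node.left, q, r) + range_search(node.right, q, r)
```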
in generalthis problem is unavoidable .but in one special case it _ can _ be avoided if we merely want to count the number of datapoints in a range instead of explicitly find them all .we will add the following field to a _ k_d - tree node .let be the number of points contained in node .this is the first and simplest of a set of _ k_d - tree decorations we refer to as _ cached sufficient statistics _( see moore & lee 1998 ) .in general , we frequently stored the centroid of all points in a node and their covariance matrix .once we have it is trivial to write an operation that counts the number of datapoints within some range without explicitly visiting them .* + returns an integer : the number of points that are both inside the and also within distance of .* let : = the closest distance from to . * if then it is impossible that any point in can be within range of the query .so simply return 0 .* let : = the furthest distance from to . * if then every point in must be within range of the query .so simply return .* else , if is a leaf node , we must iterate through all the datapoints in its leaf list .start a counter at zero .for each point , find if it is within distance of .if so , increment the counter by one .return the count once the full list has been scanned .* else , is not a leaf node .then : * * let * * let * * return .the same query that gave the poor range search performance in figure 2b gives good performance in figure 3 .the difference is that a second type of pruning of the search is possible : if the hyperrectangle surrounding the n is either entirely outside _ or inside _ the range then we prune . ]it is easy to see that the 2-point correlation function is simply a repeated set of range counts .for example , given a minimum and maximum separation and we run the following algorithm : * + input is a dataset , represented as a matrix in which the row corresponds to the datapoint . has rows and columns .input is the root of a kdtree built from the data in .output integer : the number of pairs of points such that .* c : = 0 * for between and do : * * note that in practice we do not use two range counts at each iteration , but one slightly more complex rangecount operation _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ that directly counts the number of points whose distance from is between and .the previous algorithm iterates over all datapoints , issuing a range count operation for each .we can save further time by turning that outer iteration into an additional kd - tree search .the new search will be a recursive procedure that takes two nodes , and , as arguments .the goal will be to compute the number of pairs of points such that , , and .* + returns an integer : the number of pairs of points such that , , and .* let : = the closest distance between and . * if then it is impossible that any pair of points can match .so simply return 0 .* let : = the furthest distance between and . * if then it is again impossible that any pair of points can match .so simply return 0 . 
*if then all pairs of points must match .use and to compute the number of resulting pairs , and return that value .* else , if and are both leaf nodes , we must iterate through all pairs of datapoints in their leaf lists .return the resulting ( slowly computed ) count .* else at least one of the two nodes is a non - leaf .pick the non - leaf with the largest number of points ( breaking ties arbitrarily ) , and call it . call the other node . then : * * let * * let * * return . computing a 2-point function on a dataset simply consists of computing the value , where is a kd - tree built from , for a range of bins with minimum and maximum boundaries of and .we note here that the 2-point correlation function , the quanity of interest is not simply , but ( the number of unique pairs of objects ) .a further speed up can be obtained by simultaneously computing the over a series of bins .we will discuss this in further detail in connolly et al .( 2001 ) .so far , we have discussed two operations exclusion and subsumption which remove the need to traverse the whole tree thus speeding up the computation of the correlation function .another form of pruning is to eliminate node - node comparisons which have been performed already in the reverse order .this can be done simply by ( virtually ) ranking the datapoints according to their position in a depth - first traversal of the tree , then recording for each node the minimum and maximum ranks of the points it owns , and pruning whenever s maximum rank is less than s minimum rank .this is useful for all - pairs problems , but will later be seen to be _ essential _ for all - k - tuples problems .this kind of pruning is not practical for single - tree search .the advantages of dual - tree over single - tree are so far two fold .first , dual - tree can be faster , and second it can exploit redundancy elimination .but two more advantages remain .first , we can extend the `` 2-tree for 2-point '' method up to `` n - trees for n - point '' .second ( discussed in section [ se : approx ] ) , we can perform effective approximation with dual - trees ( or n - trees ) .we now discuss the first of these advantages .the computation is parameterized by two symmetric matrices : and .we wish to compute where is zero unless the following conditions hold ( in which case it takes the value 1 ) : we will achieve this by calling a recursive function on an -tuple of kdtree nodes .this recursive function much return * + * let * for to do * * for to do * * * let : = the closest distance between and . * * * if then it is impossible that any -tuple of points can match because the distance between the and points in any such -tuple must be out of range .so simply return 0 . ** * let : = the furthest distance between and .* * * if then similarly return 0 . * * * if then every has the property the the member and member match .we are interested in whether this is true for all pairs and so the first time we are disappointed ( by discovering the above expression does not hold ) then we will update the flag .thus the actual computation at this step is : + _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ if or then + . 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ * if has remained true throughout the above double loop , we can be sure that every derived from the nodes in the recursive call must match , and so we can simply return * else , if all of are leaf nodes we must iterate through all of datapoints in their leaf lists .return the resulting ( slowly computed ) count .* else at least one of the nodes is a non leaf .pick the non - leaf with the largest number of points ( breaking ties arbitrarily ) , and assume it has index . then : * * let * * let * * return . the full computation is achieved by calling with arguments consisting of an of copies of the root node .we should note once again it is possible to save considerable amounts of computation by eliminating redundancy .for example , in the 4-point statistic , the above implementation will recount each matching 4-tuple of points in 24 different ways : once for each of the permutations of .again , this excess cost can be avoided by ordering the datapoints via a depth - first tree indexing scheme and then pruning any of nodes violating that order .but the reader should be aware of an extremely messy problem regarding how much to award to the count in the case that a subsume type of pruning can take place .if all nodes own independent sets of points the answer is simple : the product of the node counts .if all nodes are the same then the answer is again simple : , where is the number of points in the node .somewhat more subtle combinatorics are needed in the case where some nodes in the -tuple are identical and others are not . 
and fearsome computation is needed in the various cases in which some nodes are descendants of some other nodes .in general , when the final answer comes back from , the majority of the quantity in the count will be the sum of components arising from large subsume prunes .but the majority of the computational effort will have been spent on accounting for the vast number of small but unprunable combinations of nodes .we can improve the running time of the algorithm by demanding that it also prunes it search in cases in which only a tiny count of is at stake .this is achieved by adding a parameter , , to the algorithm , and adding the following lines at the start : * let * if then quit this recursive call .this will clearly cause an inaccurate result , but fortunately it is not hard to maintain tight lower and upper bounds on what the true answer would have been if the approximation had not been made .thus now returns a pair of counts where we can guarantee that the true count lies in the range .suppose the true value of the function is but that we are prepared to accept a fractional error of : we will be happy with any value such that latexmath:[\[\label{awareapprox } possible to adapt the n - tree algorithm using a best - first iterative deepening search strategy to guarantee this result while exploiting permission to approximate effectively by building the count as much as possible from `` easy - win '' node pairs while doing approximation at hard deep node - pairs .this is simply achieved by repeatedly calling the previous approximate algorithm with diminishing values of until a value is discovered that satisfies equation [ awareapprox ] .is shown and agrees well with the observed data .the naive law is also plotted for comparison [ fig4 ] ]we plan to present a more detailed discussion of the techniques presented here in a forthcoming paper ( connolly et al .2001 ) . that paper will also include a full analysis of the computational speed and overhead of our correlation function algorithm and compare those with existing software for computing the higher order correlation functions _e.g. _ szapudi et al .however , in figure [ fig4 ] , we present preliminary results on the scaling of computational timing needed for a 2-point correlation function as a function of the number of objects in the data set . for these tests , we computed all the data data pairs for random data sets and real , projected 2-dimensional galaxy data .these data show that our 2-point correlation function algorithm scales as ( for projected 2-dimensional data ) compared to the naive all - pairs scaling of where here is the size of the dataset under consideration . to emphasis the speed up obtained by our algorithm ( figure [ fig4 ] ) , an all pairs count for a database of objects would take only 10 hours ( on our dec alpha workstation ) using our methodology compared to hours ( year ) using the naive method .clearly , binning the data would also drastically increase the speed of analyses over the naive all pairs scaling but at the price of lossing of resolution .similar spectacular speed ups will be achieved for the 3 and 4point functions and we will report these results elsewhere ( connolly et al .furthermore , controlled approximations can further accelerate the computations by several orders of magnitude .such speed ups are vital to allow monte carlo estimates of the errors on these measurements . 
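to make the counting machinery of the preceding sections concrete , here is a sketch of the single - tree range count ( exclusion and subsumption prunes via the cached node counts ) and of the dual - tree pair count over a separation bin , reusing the kdnode and min_dist_to_box sketches above . calling the dual count with the root node twice counts each unordered pair twice , so the result must be halved as noted in the text ; redundancy elimination and the controlled approximation are omitted here .

```python
import math

def max_dist_to_box(q, lo, hi):
    """farthest possible distance from q to any point inside the bounding box."""
    return math.sqrt(sum(max(abs(x - l), abs(x - h)) ** 2
                         for x, l, h in zip(q, lo, hi)))

def range_count(node, q, r):
    """count points within distance r of q, pruning a whole node whenever its
    bounding box lies entirely outside (exclusion) or inside (subsumption) the ball."""
    if min_dist_to_box(q, node.lo, node.hi) > r:      # helper from the range-search sketch
        return 0
    if max_dist_to_box(q, node.lo, node.hi) <= r:
        return node.count
    if node.left is None:
        return sum(1 for p in node.points if math.dist(p, q) <= r)
    return range_count(node.left, q, r) + range_count(node.right, q, r)

def node_dist_bounds(a, b):
    """closest and farthest possible distances between points of nodes a and b."""
    lo2 = hi2 = 0.0
    for al, ah, bl, bh in zip(a.lo, a.hi, b.lo, b.hi):
        gap = max(bl - ah, al - bh, 0.0)
        span = max(ah - bl, bh - al)
        lo2 += gap * gap
        hi2 += span * span
    return math.sqrt(lo2), math.sqrt(hi2)

def dual_count(a, b, r_lo, r_hi):
    """number of ordered pairs (x in a, y in b) with r_lo <= |x - y| <= r_hi."""
    dmin, dmax = node_dist_bounds(a, b)
    if dmin > r_hi or dmax < r_lo:
        return 0                                      # exclusion: no pair can match
    if r_lo <= dmin and dmax <= r_hi:
        return a.count * b.count                      # subsumption: every pair matches
    if a.left is None and b.left is None:             # two leaves: brute force
        return sum(1 for p in a.points for q in b.points
                   if r_lo <= math.dist(p, q) <= r_hi)
    # recurse on the non-leaf node owning more points
    if a.left is not None and (b.left is None or a.count >= b.count):
        return dual_count(a.left, b, r_lo, r_hi) + dual_count(a.right, b, r_lo, r_hi)
    return dual_count(a, b.left, r_lo, r_hi) + dual_count(a, b.right, r_lo, r_hi)
```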
in summary , our algorithm now makes it possible to compute an exact , all - pairs measurement of the 2 , 3 and 4-point correlation functions for data sets like the sloan digital sky survey ( sdss ) . these algorithms will also help in the speed - up of cosmic microwave background analyses as outlined in szapudi et al . ( 2000 ) . finally , we note here that we have only touched upon one aspect of how tree data structures ( and other computer science techniques ) can help in the analysis of large astrophysical data sets . moreover , there are other tree structures beyond _ k_d - trees such as ball trees which could be used to optimize our correlation function codes for higher - dimensional data . we will explore these issues in future papers .
we present here a new algorithm for the fast computation of correlation functions in large astronomical data sets . the algorithm is based on _ k_d - trees which are decorated with cached sufficient statistics thus allowing for orders of magnitude speed ups over the naive non - tree - based implementation of correlation functions . we further discuss the use of controlled approximations within the computation which allows for further acceleration . in summary , our algorithm now makes it possible to compute exact , all pairs , measurements of the 2 , 3 and 4point correlation functions for cosmological data sets like the sloan digital sky survey ( sdss ; york et al . 2000 ) and the next generation of cosmic microwave background experiments ( see szapudi et al . 2000 ) .
since its introduction at the 2009 prague stringology conference , the problem of indexed binary jumbled pattern matching has been discussed in many top conferences and journals .it asks us to preprocess a binary string such that later , given a number of 0s and a number of 1s , we can quickly report whether there exists a substring with those numbers of 0s and 1s and , optionally , return the position of one such substring or possibly even all of them .the nave preprocessing algorithm takes quadratic time but researchers have reduced that bound to , , and finally with randomization or without .researchers have also looked at indexing for approximate matching , indexed jumbled pattern matching over larger alphabets , indexing labelled trees and other structures , and how to index faster when the ( binary ) input string is compressible .gagie et al . gave an algorithm that runs in when the input is represented as a straight - line program with rules , and badkobeh et al . gave one that runs in time when the input consists of runs , i.e. , maximal unary substrings ( we will denote later as the number of maximal substrings of 1s , for convenience ) . giaquinta and grabowski gave two algorithms : one runs in time , where is a parameter , and produces an index that uses extra space and answers queries in time ; the other runs in time , where is the size of a machine word .amir et al . gave an algorithm that runs in time when the input is a run - length encoded binary string , or time when it is a plain binary string ; it builds an index that takes words and answers queries in time , however .very recently , sugimoto et al . considered the related problems of finding abelian squares , abelian periods and longest common abelian factors , also on run - length encoded strings .we first review some preliminary notions in section [ sec : pre ] .we present our main result in section [ sec : alg ] : a new and very simple indexing algorithm that runs in time , which matches giaquinta and grabowski s algorithm with the parameter and is thus tied as the fastest known when and the smallest straight - line program for the input has rules . for an input string of up to ten million bits , for example , if the average run - length is three or more then .our algorithm takes only 17 lines of pseudocode , making it a promising starting point for investigating other possible algorithmic features . in section [ sec : witness ] , for example , we show how to extend our algorithm to store information that lets us report the position of a match ( if there is one ). finally , in section [ sec : workspace ] , we show how we can alternatively adapt it to use only bits of space .consider a string .we denote by ] .cicalese et al . observed that , if we slide a window of length over , the number of 1s in the window can change by at most 1 at each step .it follows that if ] contains copies of 1 with then , for between and ( notice could be smaller than , larger than , or equal to ) , there is a substring of length in ] ; see , e.g. , for more details of rank queries on bitvectors . ) for example , suppose we want to find a substring of length 5 with exactly 3 copies of 1 in our example string .we have stored that there are substrings of length 5 with 2 and 4 copies of 1 starting at positions 1 and 4 , respectively , so we know there is a substring of length 5 with exactly 3 copies of 1 starting in ] via two rank queries . 
in this case, the answer is 3 , so we have found a witness in one step ; otherwise , we would know there is a witness starting in ] and , \ldots , o [ \rho] ] , we can build a table ] , where denotes the maximum number of 1s in a substring of of length . the complete pseudo - code of our algorithm only 17 lines is shown as algorithm [ alg : index ] .the starting point of our explanation and proof of correctness is the observation that , if the bit immediately to the left of a substring is a 1 , we can shift the substring one bit left without decreasing the number of 1s ; if the first bit of the substring is a 0 , then we can shift the substring one bit right ( shortening it on the right if necessary ) without decreasing the number of 1s .it follows that , for , there is a substring of length at most containing copies of 1 and starting at the beginning of a run of 1s .since we can remove any trailing 0s from such a substring also without changing the number of 1s , there is such a substring that also ends in a run of 1s .therefore we have the following lemma : [ lem : starting ] for , there is a substring of length at most containing copies of 1 , starting at the beginning of a run of 1s and ending in a run of 1s .applying lemma [ lem : starting ] immediately yields an -time algorithm : set ] ; finally , because is non - decreasing , make a pass over from ] setting each = \max ( t [ i ] , t [ i - 1]) ] of 1s in a substring ] of length starting at the beginning of a run of 1s , ending within a run of 1s and containing copies of 1 :let be the length of the substring starting at ] , so .[ lem : ending ] if is the length of a substring starting at the beginning of a run of 1s , ending in a run of 1s and containing copies of 1 , and is the length of a substring starting at the beginning of a run of 1s and ending at the end of a run of 1s , then .furthermore , for some such , we have . with lemma [ lem : ending ], we can compute the number + \cdots + s[j] ] starting at the beginning of a run of 1s and ending in a run of 1s , in a total of time : again , set ] ; make a pass over from ] setting each = \max ( t [ i ] , t [ i + 1 ] - 1) ] of 1s in a substring ] to all 0s ; for each position at the beginning of a run of 1s and each position at the end of a run of 1s , set = \max ( t [ j - 1 + 1 ] , s [ i ] + \cdots + s [ j]) ] to ] ( which sets ] to ] ( which sets every entry in correctly ) .once we have , we can convert it into a bitvector in time .summarizing our results so far , we have the following theorem , which we adapt in later sections : [ thm : main ] given a binary string of length containing runs of 1s , we can build an -bit index for constant - time jumbled pattern matching in time .now we examine how our algorithm works on our example .first we set all entries of to 0 , then we loop through the runs of 1s and , for each , loop through the runs of 1s not earlier , computing distance from the start of the first to the end of the second and the number of 1s between those positions . 
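the construction can be sketched as follows ( python , 0 - based indices ) : t [ l ] is seeded from every pair of runs of 1s and the two passes fill in the remaining lengths . the minimum table needed to answer a query by the interval property is obtained here by running the same procedure on the complemented string ; this symmetric construction is an assumption of the sketch rather than a statement from the text , which concentrates on the maximum table .

```python
def runs_of_ones(s):
    """(start, end, ones_before) for every maximal run of 1s, where
    ones_before counts the 1s strictly before the run."""
    runs, i, ones = [], 0, 0
    while i < len(s):
        if s[i] == '1':
            j = i
            while j < len(s) and s[j] == '1':
                j += 1
            runs.append((i, j - 1, ones))
            ones += j - i
            i = j
        else:
            i += 1
    return runs

def build_max_ones(s):
    """t[l] = maximum number of 1s over all substrings of length l,
    built from the runs of 1s in O(n + rho^2) time."""
    n = len(s)
    t = [0] * (n + 1)
    runs = runs_of_ones(s)
    for a in range(len(runs)):
        start_a, _, before_a = runs[a]
        for b in range(a, len(runs)):
            start_b, end_b, before_b = runs[b]
            length = end_b - start_a + 1
            ones = before_b + (end_b - start_b + 1) - before_a
            t[length] = max(t[length], ones)
    for l in range(n - 1, 0, -1):          # right-to-left pass
        t[l] = max(t[l], t[l + 1] - 1)
    for l in range(2, n + 1):              # left-to-right pass
        t[l] = max(t[l], t[l - 1])
    return t

def build_tables(s):
    """precompute the per-length extremes (min_ones, max_ones)."""
    max_ones = build_max_ones(s)
    max_zeros = build_max_ones(s.translate(str.maketrans('01', '10')))
    min_ones = [l - z for l, z in enumerate(max_zeros)]
    return min_ones, max_ones

def jumbled_query(min_ones, max_ones, zeros, ones):
    """a substring with `zeros` 0s and `ones` 1s exists iff `ones` lies in the
    per-length interval (the interval property from the preliminaries)."""
    length = zeros + ones
    return 1 <= length < len(max_ones) and min_ones[length] <= ones <= max_ones[length]

# example on an arbitrary binary string
lo, hi = build_tables("01101000110")
print(jumbled_query(lo, hi, 2, 3))   # is there a substring with two 0s and three 1s?
```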
while doing this , we set = 1 ] , the number of 1s from the start of the first run of 1s to the end of the second run of 1s ; = 5 ] , the number of 1s from the start of the first run of 1s to the end of the fourth run of 1s ; = 4 ] .we then make a pass over from right to left , setting each = \max ( t [ i ] , t [ i + 1 ] - 1) ] .finally , we make a pass over from left to right , setting each = \max ( t [ i ] , t [ i - 1]) ] and leaves correctly computed as ] such that ] , we have found a substring of length containing copies of 1 , so we can set ] for .when we start this stage , for every positive entry in we have set the corresponding entry in .therefore , by induction , whenever we set = t [ i + 1 ] - 1 ] set to the starting position of a substring of length containing ] contains at least - 1 ] . in the last stage of the algorithm ,in which we make a left - to - right pass over , we can almost use the same kind of argument and simply copy values when we copy values , except that we must ensure the starting positions we copy are far enough to the left of the end of the string ( i.e. , that the substrings have the correct lengths ) .our modified algorithm is shown as algorithm [ alg : witness ] still only 25 lines and we now have the following theorem : [ thm : witness ] given a binary string of length containing runs of 1s , we can build an -word index for constant - time jumbled pattern matching _ with witnessing _ in time . running our modified algorithm on our example , in the first stage we set ] , where 0 indicates an unset value in . in the second stage, we set ] .finally , in the third stage , we fill in = t [ 11] ] because ]. ] , we ensure that each value ] and each value ] .since we would eventually set each such ] and ] . we can thus store each block by storing its first value and a binary string of length whose bits indicate where the values in the block increase .therefore , we need a total of only bits to store all the blocks .notice that , if we increase a value ] of the block to be - i + h ] , and set the later bits of the block to 0s to indicate that the values remain equal to ] , ] stored explicitly and represent the other entries of implicitly with three 3-bit binary strings , and .initially we set = t [ 5 ] = t [ 9 ] = 0 ] , the number of 1s from the start to the end of the first run of 1s . at this point , we do not need to change .we then set = 2 ] , this encodes = t [ 1 ] + 0 = 1 ] and = t [ 1 ] + 0 + 1 + 0 = 2 ] the number of 1s from the start of the first run of 1s to the end of the third run of 1s by setting = 3 ] , this encodes = t [ 5 ] + 1 = 4 ] and = t [ 5 ] + 1 + 1 + 0 = 5 ] by setting = 5 ] and ; etc .when we are finished this stage , = 1 ] and = 6 ] . in this casethe final right - to - left and left - to - right passes have no effect , but there are cases ( e.g. , when we do not set any values in a certain block ) when they are still necessary .amihood amir , alberto apostolico , tirza hirst , gad m landau , noa lewenstein , and liat rozenberg .algorithms for jumbled indexing , jumbled border and jumbled square on run - length encoded strings . , 656:146159 , 2016 .amihood amir , timothy m chan , moshe lewenstein , and noa lewenstein . on hardness of jumbled indexing . 
in _ proceedings of the international colloquium on automata , languages , and programming ( icalp ) _ , pages 114 - 125 . springer , 2014 . david bremner , timothy m chan , erik d demaine , jeff erickson , ferran hurtado , john iacono , stefan langerman , mihai pătrașcu , and perouz taslakian . necklaces , convolutions , and x + y . , 69(2):294 - 314 , 2014 . péter burcsi , ferdinando cicalese , gabriele fici , and zsuzsanna lipták . on table arrangements , scrabble freaks , and jumbled pattern matching . in _ proceedings of the international conference on fun with algorithms ( fun ) _ , pages 89 - 101 . springer , 2010 . timothy m chan and moshe lewenstein . clustered integer 3sum via additive combinatorics . in _ proceedings of the forty - seventh annual acm symposium on theory of computing ( stoc ) _ , pages 31 - 40 . acm , 2015 . ferdinando cicalese , travis gagie , emanuele giaquinta , eduardo sany laber , zsuzsanna lipták , romeo rizzi , and alexandru i tomescu . indexes for jumbled pattern matching in strings , trees and graphs . in _ proceedings of the international symposium on string processing and information retrieval ( spire ) _ , pages 56 - 63 . springer , 2013 . ferdinando cicalese , eduardo laber , oren weimann , and raphael yuster . near linear time construction of an approximate index for all maximum consecutive sub - sums of a sequence . in _ proceedings of the annual symposium on combinatorial pattern matching ( cpm ) _ , pages 149 - 158 . springer , 2012 . stéphane durocher , robert fraser , travis gagie , debajyoti mondal , matthew skala , and sharma v thankachan . indexed geometric jumbled pattern matching . in _ proceedings of the annual symposium on combinatorial pattern matching ( cpm ) _ , pages 110 - 119 . springer , 2014 . tomasz kociumaka , jakub radoszewski , and wojciech rytter . efficient indexes for jumbled pattern matching with constant - sized alphabet . in _ proceedings of the european symposium on algorithms ( esa ) _ , pages 625 - 636 . springer , 2013 .
important papers have appeared recently on the problem of indexing binary strings for jumbled pattern matching , and further lowering the time bounds in terms of the input size would now be a breakthrough with broad implications . we can still make progress on the problem , however , by considering other natural parameters . badkobeh et al . ( ipl , 2013 ) and amir et al . ( tcs , 2016 ) gave algorithms that index a binary string in time , where is the length and is the number of runs , and giaquinta and grabowski ( ipl , 2013 ) gave one that runs in time . in this paper we propose a new and very simple algorithm that also runs in time and can be extended either so that the index returns the position of a match ( if there is one ) , or so that the algorithm uses only bits of space .
a sequence formed as the concatenation of non - overlapping segments is given , where .each segment is generated by some unknown stochastic process distribution .the process distributions that generate every pair of consecutive segments are different . the index where one segment ends and another starts is called a _change point_. the parameters specifying the change points are unknown and have to be estimated .change point analysis is one of the core problems in classical mathematical statistics . in a typical formulation of the problem ,the samples within each segment are assumed to be i.i.d . and the change refers to a change in the mean . in the literature on nonparametric methods for dependent data , the form of the change and/or the nature of dependenceare usually restricted , for example , strong mixing conditions are imposed . moreover , even for dependent time series , the finite - dimensional marginals are almost exclusively assumed different .however , such strong assumptions do not necessarily hold in most of such real - world applications as bioinformatics , network traffic , market analysis , audio / video segmentation , fraud detection etc .methods used in these applications are thus usually model - based or employ application - specific ad hoc algorithms .more specifically , a theoretical framework to allow for the understanding of what is possible and under which assumptions is entirely lacking . in this paper, we consider highly dependent time series , making as little assumptions as possible on how the data are generated .each segment is generated by an ( unknown ) stationary ergodic process distribution .the joint distribution over the samples can be otherwise arbitrary .we make no such assumptions as independence , finite memory or mixing ; the samples can be arbitrarily dependent .the marginal distributions of any given fixed size before and after the change points may be the same : the change refers to that in the time - series distribution .we aim to construct an asymptotically consistent algorithm that simultaneously estimates all parameters consistently .an estimate of a change point parameter is _ asymptotically consistent _ if it becomes arbitrarily close to in the limit as the length of the sequence approaches infinity .the asymptotic regime means that the error is arbitrarily small if the sequence is sufficiently long , i.e. the problem is offline " and does not grow with time .note that , in general , for stationary ergodic processes , rates of convergence are provably impossible to obtain ( see , for example , ) .therefore , the asymptotic results of this work can not be strengthened . as follows from an impossibility result by , it is impossible to estimate the number of change points in the general setting that we consider .thus , we assume that is known .the case of has been addressed in where a simple consistent algorithm to estimate one change point is provided .the general case of turns out to be much more complex . with the sequence containing multiple change points ,the algorithm is required to simultaneously analyze multiple segments of the input sequence , with no a - priori lower bound on their lengths . in this casethe main challenge is to ensure that the algorithm is robust with respect to segments of arbitrarily small lengths .usually in statistics this is done using methods based on the speed of convergence of sample averages to expectations . 
in the context of stationary ergodic processes ,such tools are unavailable as no guarantees on the speed of convergence exist .hence , the simultaneous analysis of segments of arbitrarily small lengths is conceptually much more difficult .the problem is considerably simplified if additionally a lower bound on the minimum separation of the change points is provided . under this assumption, an algorithm is proposed in that gives a list of possibly more than candidate estimates , whose first elements are asymptotically consistent , but it makes no attempt to estimate .we use empirical estimates of the so - called distributional distance , which have proved useful in various statistical learning problems involving stationary ergodic time series . our method has a computational complexity that is at most quadratic in each argument .we evaluate it on synthetic data generated by processes that , while being stationary ergodic , do not belong to any simpler " class , and can not be modeled as hidden markov processes with countable state spaces . moreover , the single - dimensional marginals before and after each change point are the same . to the best of our knowledge , none of the existing change point estimation algorithms work in this scenario .the remainder of this paper is organized as follows . in section [ sec : pre ] we introduce preliminary notations and definitions . in section [ sec : protocol ]we formalize the problem and describe the general framework considered . in section [ sec : results ] we present our algorithm , state the main consistency result , and informally describe how the algorithm works . in section [ sec : exp ] we provide some experimental results .we prove the consistency of the proposed method in section [ sec : proofs ] .let be a measurable space ( the domain ) ; in this work we let but extensions to more general spaces are straightforward . for a sequence we use the abbreviation .consider the borel -algebra on generated by the cylinders , where the sets are obtained via the partitioning of into cubes of dimension and volume ( starting at the origin ) .let also .process distributions are probability measures on the space . for and denote the _ frequency _ with which falls in , i.e. a process is _ stationary _ if for any and , we have a stationary process is called _ stationary ergodic _if for all with probability 1 we have by virtue of the ergodic theorem ( see , e.g. , ) , this definition can be shown to be equivalent to the standard definition for the stationary ergodic processes ( see , e.g. , ) . for a given can define distributions on the space where the borel sigma - algebra is generated by the distributional distance between a pair of process distributions is defined as follows we let , but any summable sequence of positive weights may be used . 
in words , we partition the sets , into cubes of decreasing volume ( indexed by ) and take a weighted sum over the differences in probabilities of all the cubes in these partitions .the differences in probabilities are weighted : smaller weights are given to larger and finer partitions .we use _ empirical estimates _ of this distance defined as follows .[ emd ] the empirical estimate of between and a process is given by and that between a pair of sequences .is defined as where and are any sequences of integers that go to infinity with .[ thm : constd ] let a pair of sequences and be generated by a distribution whose marginals are stationary and ergodic .then the triangle inequality holds for the distributional distance and its empirical estimates , so that for all distributions and all sequences we have , the distributional distance and its empirical estimates are convex functions : for every for all distributions and all sequences with we have [ rem1 ] consider a pair of sequences with .the computational complexity of is of order , where .let correspond to the partition where each cell contains at most one element i.e. indeed , in all summands corresponding to are equal to 0 ; moreover , all summands corresponding to are equal .thus , we already see that the number of required calculations is finite .note that in practice is bounded by the length of the binary precision in approximating real numbers ( i.e. , the length of the mantissa ) . for a fixed for every sequence the frequencies for all may be calculated using suffix trees with worst case construction and search complexity ( see , e.g. , ) .this brings the overall computational complexity of to .furthermore , the practically meaningful choices of are of order . to see this , observe that for a fixed the frequencies of cells in corresponding to higher values of are not consistent estimates of their probabilities ( and thus only add to the error of the estimate ) .indeed , for a pattern with of length the probability is asymptotically of the order where denotes the entropy rate of . by the above argument , one can set and in , bringing the overall complexity of calculating to .we formalize the problem as follows . the sequence , generated by an unknown arbitrary process distribution , is formed as the concatenation of of sequences where , and is assumed known .each of the sequences , is generated by an_ unknown stationary ergodic _ process distribution .formally , consider the matrix of random variables generated by some ( unknown ) stochastic process distribution such that , 1 .the marginal distribution over every one of its rows is an unknown stationary ergodic process distribution ; 2 .the marginal distributions over the consecutive rows are different , so that every two consecutive rows are generated by different process distributions .note that the requirements are only on the marginal distributions over the rows ; the distribution is otherwise completely arbitrary .the process distributions are unknown and may be dependent .moreover , the means , variances , or , more generally , the finite - dimensional marginal distributions of any fixed size before and after the change points are not required to be different .we consider the most general scenario where the _ process distributions are different_. the sequence is obtained by first fixing a length and then concatenating the segments that is , where for each , the _ segment _ is the sequence obtained as the first elements of the row of with . 
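as a concrete illustration , the following sketch computes the empirical estimate for real - valued sequences with values in [ 0 , 1 ) , assuming dyadic cell partitions , geometric weights and cut - offs of order log n as suggested by the remark above ; none of these particular choices is prescribed by the text beyond the summability of the weights .

```python
import math
from collections import Counter

def window_freqs(x, m, l):
    """empirical frequencies of discretised length-m patterns in x, using
    2**l equal cells over [0, 1) along each coordinate."""
    if len(x) < m:
        return {}
    cells = 2 ** l
    patterns = [tuple(min(int(v * cells), cells - 1) for v in x[i:i + m])
                for i in range(len(x) - m + 1)]
    total = len(patterns)
    return {p: c / total for p, c in Counter(patterns).items()}

def empirical_distance(x, y, m_max=None, l_max=None):
    """empirical distributional distance between two sequences: a weighted sum,
    over pattern lengths m and cell resolutions l, of the total-variation
    difference between window-pattern frequencies."""
    n = max(2, min(len(x), len(y)))
    m_max = m_max or max(1, int(math.log2(n)))   # cut-offs of order log n
    l_max = l_max or max(1, int(math.log2(n)))
    d = 0.0
    for m in range(1, m_max + 1):
        for l in range(1, l_max + 1):
            fx, fy = window_freqs(x, m, l), window_freqs(y, m, l)
            diff = sum(abs(fx.get(p, 0.0) - fy.get(p, 0.0))
                       for p in set(fx) | set(fy))
            d += 2.0 ** (-m) * 2.0 ** (-l) * diff
    return d
```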
the parameters the _ change points _ which separate consecutive segments generated by different process distributions .the change points are _ unknown _ and to be estimated .let the minimum separation of the change point parameters be defined as since the consistency properties we are after are asymptotic in , we require that . note that this condition is standard in the change point literature , although it may be unnecessary when simpler formulations of the problem are considered , for example when the samples are i.i.d .however , conditions of this kind are inevitable in the general setting that we consider , where the segments and the samples within each segment are allowed to be arbitrarily dependent : if the length of one of the sequences is constant or sub - linear in then asymptotic consistency is not possible in this setting .however , is assumed unknown , and no ( lower ) bounds on it are available .we also make no assumptions on the distance between the process distributions : they can be arbitrarily close .our goal is to devise an algorithm that provides estimates for the parameters .the algorithm must be _ asymptotically consistent _ so that this section we present our method given by algorithm [ alg : kk ] , which as we show in theorem [ thm : kk ] , is asymptotically consistent under the general assumptions stated in section [ sec : protocol ] .the proof of the consistency result is deferred to section [ sec : proofs ] . in this sectionwe give describe the algorithm , and intuitively explain why it works .the following two operators namely , the score function denoted and the single - change point - estimator denoted are used in our method .let be a sequence and consider a subsequence of with . 1 .define the score function as the intra - subsequence distance of , i.e. 2 .define the single - change point estimator of as let us start by giving an overview of what algorithm [ alg : kk ] aims to do .the algorithm attempts to simultaneously estimate all change points using the single - change point - estimator given by applied to appropriate segments of the sequence . for to produce asymptotically consistent estimates in this setting , each change point must be isolated within a segment of ,whose length is a linear function of .moreover , each segment containing a change point must be `` sufficiently far '' from the rest of the change points , w where `` sufficiently far '' means within a distance linear in .this may be obtained by dividing into consecutive non - overlapping segments , each of length with for some ] .3 . for each obtain ; draw from .set if is irrational by a long double with a long mantissa . ]this produces a real - valued stationary ergodic time - series .similar families are commonly used as examples in this framework , ( e.g. ) . for the purpose of our experiment, we fixed four parameters , and ( with long mantissae ) to correspond to different process distributions ; we used uniform distributions and over ] respectively , ( deliberately chosen to overlap ) . to produce we randomly generated change point parameters at least apart .every segment of length with was generated with , and using and . 
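the two operators defined earlier in this section can be sketched as follows , using the empirical_distance sketch above ( or any consistent estimate ) as the distance . the exact expressions are not reproduced in the text , so the split - maximising form of the estimator and the small margin keeping both sides non - degenerate are assumptions of this sketch .

```python
def score(x, a, b, dist):
    """intra-subsequence score of x[a:b]: the empirical distance between the
    two halves of the segment (a sketch of the score operator above)."""
    mid = (a + b) // 2
    return dist(x[a:mid], x[mid:b])

def single_change_point(x, a, b, dist, margin=None):
    """sketch of the single-change-point estimator: the split point t in (a, b)
    maximising the empirical distance between x[a:t] and x[t:b]."""
    margin = margin or max(1, (b - a) // 20)
    best_t, best_val = a + margin, float("-inf")
    for t in range(a + margin, b - margin):
        val = dist(x[a:t], x[t:b])
        if val > best_val:
            best_t, best_val = t, val
    return best_t, best_val

# usage inside one grid segment, with the distance sketched above:
#   t_hat, phi = single_change_point(x, a, b, empirical_distance)
```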
by this procedure ,the single - dimensional marginals are the same throughout .figure [ fig : synth ] demonstrates the average estimation error - rate of algorithm [ alg : kk ] as a function of the sequence length .we calculate the error rate as this section , we prove the main consistency result .the proof depends upon some technical lemmas stated below .[ prelem : nochpt : i ] let be generated by a stationary ergodic process . for all following statements hold with -probability 1 : 1 .[ prelem : nochpt : i1 ] for every .[ prelem : nochpt : i1d ] .[ prelem : nochpt : ii ] to prove part we proceed as follows .assume by way of contradiction that the statement is not true .therefore , there exists and some , and sequences and with , such that with probability we have using the definition of it is easy to see that the following inequalities hold for every and all .fix . for each can find a finite subset of such that for every , there exists some such that for all with probability one we have define and let ; observe that .let consider the sequence . 1 . for every we have 2 . on the other hand , by all we have increase if necessary to have for all and all we obtain where follows from ; follows from and ; and follows from , , , summing over the probabilities , and observing that for all . observe that holds for any , and it particular it holds for .therefore , we have contradicting .part follows .fix , and .we can find some such that by part of lemma [ prelem : nochpt : i ] , there exists some such that for all we have + from and , for all we have and part of the lemma follows .fix , . without loss of generality assume that .observe that for every we have .therefore , by there exists some such that for all we have it remains to use the definition of ( [ defn : delta ] ) and the triangle inequality to observe that for all , and follows .[ prelem : chpt : dist ] assume that a sequence has a change point for some so that the segments , are generated by two different process distributions , respectively .if , are both stationary ergodic then with probability one , for every we have 1 .[ prelem : chpt : dist : i ] 2 .[ prelem : chpt : dist : iii ] fix , , .there exists some such that to prove part we proceed as follows . by the definition of given by , for all and all we have therefore , for all and all we obtain where the first inequality follows from the fact that , the second inequality follows from the definition of given by and the third inequality follows from .observe that for all .therefore , by part of lemma [ prelem : nochpt : i ] , there exists some such that for all we have similarly , for all .therefore , by part of lemma [ prelem : nochpt : i ] , there exists some such that for all we have note that for all .therefore , we have for all , and we have let . by , , , and , for all we have finally , by and for all we obtain and part of lemma[ prelem : chpt : dist ] follows .the proof of the second part is analogous .[ lem2 ] consider a sequence with change points .let , be a sequence of indices with for some , such that for some . 1 .[ lem2:i ] with probability one we have where denotes the minimum distance between the distinct distributions that generate .[ lem2:ii ] assume that we additionally have \subseteq [ \theta_{k-1},\theta_{k+1}]\ ] ] where denote the elements of that appear immediately to the left and to the right of respectively . with probabilityone we obtain ) .fix some .define . 
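for reference , here is a sketch of the kind of generator used in the experiment above . the precise parameter values are not reproduced in the text , so the threshold 0.5 , the two overlapping uniform ranges and the way the deterministic rotation selects between them are illustrative assumptions , as is the independent restart of the hidden state in each segment .

```python
import random

def rotation_process(n, alpha, ranges=((0.0, 0.7), (0.3, 1.0)), threshold=0.5):
    """stationary ergodic real-valued series: r_0 ~ U[0,1], r_{i+1} = r_i + alpha
    (mod 1) with alpha irrational (approximated by a long float), and x_i drawn
    from one of two overlapping uniform distributions according to whether the
    hidden state r_i falls below the threshold."""
    r = random.random()
    xs = []
    for _ in range(n):
        a, b = ranges[0] if r < threshold else ranges[1]
        xs.append(random.uniform(a, b))
        r = (r + alpha) % 1.0
    return xs

def concatenate_segments(lengths, alphas):
    """build the full sequence x by concatenating segments generated with
    different parameters alpha_k, so that consecutive segments come from
    different process distributions with identical single-dimensional marginals."""
    x = []
    for n, alpha in zip(lengths, alphas):
        x.extend(rotation_process(n, alpha))
    return x
```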
following the definition of given by we have to prove part of lemma [ lem2 ] ,we show that for large enough , with probability we have let .to prove for the case where we proceed as follows .by assumption of the lemma , we have hence , it is easy to see that fix .observe that as follows from the definition of and , and our assumption that , the segment is fully generated by . by ,the condition of part of lemma [ prelem : nochpt : i ] hold for .therefore , there exists some such that for all we have similarly , from and we have by and , the conditions of part of lemma [ prelem : chpt : dist ] hold for .therefore , there exists some such that for all we have by we have moreover , we obtain where the inequality follows from ( [ lem2:fract ] ) and the definition of as the minimum distance between the distributions . let . for all obtain where and follow from applying the triangle inequality on , follows from and , and follows from .since holds for every , this proves ( [ objective1 ] ) in the case where . the proof for the case where is analogous .since holds for every , part of lemma [ lem2 ] follows .+ ( [ lem2:ii ] ) .fix some .following the definition of given by ( [ defn : phi ] ) we have to prove part of the lemma , it suffices to show that for every with probability , for large enough , we have for all . to prove ( [ objective ] ) for we proceed as follows .fix some and .first note that for all we have note that by the sequence is a subsequence of .consider the segment .observe that by the conditions of part of lemma [ prelem : nochpt : i ] are satisfied by all .therefore , there exists some such that for all we have similarly , consider .observe that by definition of we have ; moreover , by the segment is a subsequent of .therefore , by part of lemma [ prelem : nochpt : i ] , there exists some such that for all we have by , there is a single change point within .therefore , every has a linear distance from , i.e. for all . on the other hand , .therefore by part of lemma [ prelem : chpt : dist ] there exists some such that let . by ( [ lem2ii : firstdiff1 ] ) , ( [ lem2ii : firstdiff2 ] ) and the subsequent application of the triangle inequality on for all we obtain by applying the triangle inequality on , for all we obtain where follows from , and follows from .we also have where the inequality follows from and the definition of as the minimum distance between the distributions that generate the data .finally , from ( [ lem2ii : firsthalf ] ) , ( [ lem2ii:2ndhalf ] ) and ( [ lem2ii : dr ] ) for all we obtain , since ( [ lem2ii : final ] ) holds for every , this proves ( [ objective ] ) for . the proof for the case where is analogous .since ( [ objective ] ) holds for every , part follows . on each iteration the algorithm produces a set of estimated change points .we show that on some iterations these estimates are consistent , and that estimates produced on the rest of the iterations are negligible .we partition the set of iterations into three sets as described below .first recall that for every and the algorithm generates a grid of boundaries such that for all and we have therefore , the segments have lengths that are linear functions of .more specifically , for and define ( note that can also be zero . ) for all we have such that this first subset of the set of iterations corresponds to the higher iterations where is too small . 
in this casethe resulting grids are too fine , and the segments may not be long enough for the estimates to be consistent .these iterations are penalized by small weights , so that the corresponding candidate estimates become negligible .* the second subset corresponds to the iterations where * a. * $ ] _ and _ * b. * the segments are long enough for the candidate change point parameter estimates to be consistent .let where defined by specifies the minimum separation of the change points . for all we have .therefore , at every iteration on and , for every change point we have \subseteq [ \theta_{k-1},\theta_{k+1}]\ ] ] where and are defined in lemma [ lem2 ] .we further partition the set of iterations on into two subsets as follows .for every fixed we identify a subset of the iterations on at which the change point parameters are estimated consistently and the performance scores are bounded below by a nonzero constant .moreover , we show that if the set is nonempty , the performance scores for all and are arbitrarily small . 1 . to define we proceed as follows . for every we can uniquely define and so that .therefore , for any with , we have . observe that we can only have distinct residues . therefore , any subset of with elements , contains at least one element such that .it follows that for every there exists at least one such that . for every ,define let and define .note that . by , and hence part of lemma [ lem2 ] , for every there exists some such that for all we have where denotes the minimum distance between the distinct distributions that generate the data . recall that , as specified by algorithm [ alg : kk ] we have .hence by ( [ thm : constj1 ] ) for all we have by lemma [ lem2 ] there exists some such that for all we have 2 .define for .it may be possible for the set to be nonempty on some iterations on . without loss of generality ,define for all with .observe that by definition , for all such that , we have where is given by .this means that on each of these iterations , there exists some for some such that for some . since for all , we have and .therefore , by part of lemma [ prelem : nochpt : i ] , there exists some such that for all we have , thus , for every and all we have * step 3 . *consider the set of iterations , . recall that it is desired for a grid to be such that every three consecutive segments contain at most one change point .this property is not satisfied for , since by definition on these iterations we have .we show that for all these iterations , the performance score becomes arbitrarily small . for all and , define the set of intervals and consider its partitioning into . observe that , by construction for every fixed , every pair of indices specifies a segment of length and the elements of index non - overlapping segments of . since for all we have , at every iteration on and , there exists some such that the segment contains more than one change point . since there are exactly change points , in at least one of the partitions for some we have that within any set of segments indexed by a subset of elements of , there exists at least one segment that contains no change points .therefore , by ( [ thm : lem1cond ] ) , ( [ thm : lineardist ] ) and hence lemma [ prelem : nochpt : ii ] , for every there exists some such that for all we have let and .let . by , and that , for all we have recall that by definition we have which , as follows from is nonzero .therefore we have by and for all we have note that and that . 
therefore , by and for all we obtain similarly , from and we obtain let . by , ,and we have since the choice of is arbitrary , the statement of the theorem follows .we have presented an asymptotically consistent method to locate the changes in highly dependent time - series data .the considered framework is very general and as such is suitable for real - world applications .note that , in the considered setting , rates of convergence ( even of frequencies to respective probabilities ) are provably impossible to obtain .therefore , unlike in the traditional settings for change - point analysis , the algorithms developed for this framework are forced not to rely on any rates of convergence .we see this as an advantage of the framework as it means that the algorithms are applicable to a much wider range of situations . at the same time, it may be interesting to derive the rates of convergence of the proposed algorithm under stronger assumptions ( e.g. , i.i.d . data , or some mixing conditions ) .we conjecture that our method is optimal ( up to some constant factors ) in such settings ( although it is clearly suboptimal under parametric assumptions ) ; however , this is left as future work .
given a heterogeneous time-series sample, it is required to find the points in time (called change points) where the probability distribution generating the data has changed. the data are assumed to have been generated by arbitrary, unknown, stationary ergodic distributions. no modeling, independence or mixing assumptions are made. a novel, computationally efficient, nonparametric method is proposed and shown to be asymptotically consistent in this general framework; the theoretical results are complemented with experimental evaluations.
state determination for a physical system hinges on being able to non - invasively measure its relevant parameters . however , a measurement performed on a quantum system is by definition invasive , and hence quantum state estimation relies on measurements made on an ensemble of identically prepared systems .quantum state estimation process is thus intrinsically statistical in nature , with its accompanying ambiguities and uncertainties .there is no direct measurement possible for the quantum state of a single system and for the estimation of a state , we are required to determine the expectation values of a set of incompatible observables .the accuracy of such a determination depends upon the size of the ensemble and ideally we need an infinite size ensemble to obtain the precise values of these expectations . however , if we are given a small ensemble with a fixed number of identically prepared states , how can we effectively extract information from it ?the wave function collapse associated with projective measurements limits us from using each member of the ensemble more than once .the problem of state estimation has been investigated by physicists since the inception of quantum information theory .apart from the direct use of projective measurements , there exist other methods of state tomography which try to extract information from a system in different ways .some prescriptions use the information gained about the system from one measurement to decide on the next measurement .others employ maximum likelihood technique and numerical optimization , or use repeated weak measurements on a single system , to maximize the information gain . weak or unsharp measurements potentially hold promise for state estimation because of their non - invasiveness which allows state recycling . on the one handthe disturbance caused by a weak measurement is less , however on the other hand it also gives us limited information .the challenge therefore is to find a balance i.e. an intermediate regime of weakness , which leads to optimal information gain .the idea has been explored recently in the context of a qubit with a small ensemble size .weak measurements coupled with postselection have also been employed in the problem of state estimation where a complete characterization of the postselected quantum statistics or the direct measurement of the quantum wavefunction is used .state measurement schemes based on weak measurement tomography have also been recently proposed .there have also been critial analysis of these schemes . for continuous variable systems , quasi - probability distributions including the wigner distributionscan be tomographed by measuring the rotated quadrature components .there are homodyne and heterodyne schemes for estimation of squeezed gaussian states and squeezed thermal gaussian states .schemes using a single photon detector instead of a homodyne detector to characterize gaussian states have been proposed .the possible advantage offered by an entangled gaussian probe to estimate the displacement of a continuous variable state has also been explored .the importance of maximum likelihood methods has been emphasized and state reconstruction has been described . 
in a different direction , arthurs and kelly aimed to simultaneously measure position and momentum of a general wavefunction by coupling two different apparatuses with the system .since and are noncommuting observables , this leads to an unsharp measurement .symplectic tomography has been used to estimate the master equation parameters in an open setting for a single mode system .alternatively , one can reuse each member of the ensemble if the first measurement is done weakly enough such that the disturbance induced is very small .a similar idea has been utilized , albeit in a different direction , in the construction of loophole free hybrid bell - leggett- garg inequalities .weak or unsharp measurements are performed by weakly coupling the device to the quantum system .although the noise produced in such measurements is small , which should serve our purpose well , the information obtained is also very low .therefore , there is a tradeoff between the disturbance and information gain . to effectively use weak measurements for state estimation , we need to optimize the process . in this workwe restrict ourselves to the realm of a special class of states of one continuous variable quantum systems called gaussian states .these are states with gaussian - wigner quasiprobability distributions and include coherent states , squeezed states and thermal squeezed states .the gaussian states are determined by the first and second moments of position and momentum .we explore the advantage of the scheme involving weak measurements in estimating the gaussian states over projective measurements .we show that with an optimal strength of the weak measurement , our technique is more powerful in the determination of the wigner quasiprobability distribution when the ensemble size available is small .we have chosen the meter state in the form of a minimum uncertainty squeezed gaussian state and the tuning of the weakness of a measurement is achieved by changing the squeezing of the position quadrature .this scheme is tested for average performance over a large number of states and over a large number of runs to kill statistical fluctuations .we also take gaussian states at different temperatures and check whether the efficacy of our method depends in any way on temperature .the paper is arranged as follows : in section [ tools ] we collate all the background material necessary for the problem .we describe continuous variable states and wigner s quasiprobability distributions in brief .symplectic methods are described briefly .section [ scheme ] gives a description of weak measurement in quantum mechanics when applied to gaussian states . 
in section [ tomo ]we describe how to perform tomography of gaussian states using our method .we provide conclusions in section [ conc ] .let us consider a one - dimensional quantum system with the position and momentum operators and , satisfying the commutation relation =\iota \quad \rm{(}\hbar=1\rm { ) } \label{commutation}\ ] ] the corresponding eigenkets are and for a complete set , and are defined as an arbitrary mixed density operator in position and momentum bases can be represented by where is a function of real variables and while is a function of the real variables and .an alternative and equivalent way of representing the system state is via its wigner distribution given by the probability distributions corresponding to position and momentum can be obtained by computing the marginals and can be used to calculate expectation values of arbitrary observables via the symmetric ordering rule .the second order moments of position and momentum corresponding to a quantum state are given by and they obey the schrdinger uncertainty principle given by \rangle\vert^2\ ] ] a compact way to represent the second order moments is via the variance matrix given by and the uncertainty condition re - expressed in terms of the variance matrix takes the elegant form the subclass of states for which the wigner distribution is a gaussian function are called gaussian states and play an important role in quantum optics and quantum information .all states with gaussian wave functions are gaussian , however the class of gaussian states is a much bigger class and includes mixed states .the wigner representation corresponding to a general gaussian state centered at the origin of the phase space is given by , where the matrix is related to the variance matrix as if the center of the gaussian is located at a point , this can be achieved by action of a displacement operator which acts on canonical operators as and leads to a point transformation of the winger function . for the gaussian - wigner functionthis amounts to shifting the center to the location .the matrix can be written as : where is a diagonal symplectic matrix belonging to the group , is a rotation matrix and is proportional to identity .a total of three real parameters are involved in describing .we further restrict ourselves to a special class of gaussians where is described by two parameters , namely , the temperature and squeezing . setting with representing the temperature , being the boltzmann s constant and having the units of frequency .the corresponding variance matrix in this case is diagonal at or if we have , the gaussian state is a coherent state .a coherent state is represented by a circle of radius in phase space .now at , if and happen to be unequal , the state is a squeezed state .such a gaussian state is represented by an ellipse .the center of a general coherent or a general squeezed state may not be at the origin of phase space and in such cases it is said to be a displaced coherent or a displaced squeezed state .the displacement of a wigner function , centered at the origin is achieved by means of a displacement operator which takes the center to . for a non - zero temperature ,the value of the product is greater than ( for coherent states it is equal to half ) and it increases with a rise of temperature .pictorially , the class of states can be represented by an ellipse in phase space with center at and semimajor axis oriented along either or and is depicted in figure [ spread ] . 
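as a small illustration of this parametrization, the sketch below builds the diagonal variance matrix of a squeezed thermal state from a temperature-like parameter and a squeezing parameter and evaluates the corresponding gaussian wigner function. the particular form g = (1/2) coth(beta/2) and all function names are assumptions made for the sketch, since the paper's exact expressions were garbled in extraction, but they reproduce the stated limits: a coherent-state circle at zero temperature and unit squeezing, an ellipse otherwise, with det(V) growing above 1/4 as temperature rises.

```python
import numpy as np

def variance_matrix(beta, x):
    """Variance matrix of a squeezed thermal Gaussian state (hbar = 1).
    beta ~ (frequency)/(k_B * T); beta -> infinity recovers the T = 0 case.
    x is the squeezing: x = 1 gives a coherent/thermal circle, x != 1 an
    ellipse with axes along q and p.  The coth form is an assumption."""
    g = 0.5 / np.tanh(beta / 2.0)      # equals 1/2 at T = 0, grows with temperature
    return np.diag([g * x, g / x])

def wigner_gaussian(q, p, center, V):
    """Gaussian Wigner function W(q, p) with mean `center` and variance matrix V."""
    d = np.array([q, p]) - np.asarray(center)
    A = np.linalg.inv(V)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(V)))
    return norm * np.exp(-0.5 * d @ A @ d)

V = variance_matrix(beta=20.0, x=2.0)           # nearly pure state, squeezed in p
assert np.linalg.det(V) >= 0.25 - 1e-12          # uncertainty relation: det(V) >= 1/4
print(V, wigner_gaussian(0.3, -0.1, center=(0.5, 0.0), V=V))
```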
and with spreads and .the estimated state is represented by another ellipse , bounded by a broken line , centered at and has spreads and . [ spread ] ]consider a system represented by a displaced gaussian - wigner distribution function represented by and characterized by two real displacement parameters : one temperature parameter and one squeezing parameter .such a wigner function can be obtained by starting with the centered gaussian - wigner function given in equation ( [ gaucen ] ) with parameters chosen as per equations ( [ theg ] ) and ( [ squeezing ] ) and applying a displacement operator as described in the previous section .given that the system is in such a state , our goal is to estimate the state . for the purposes of measurement , consider a meter which is a macroscopic pointer with position and momentum variables and .the meter is chosen to be in a squeezed coherent state represented by a wigner distribution such that and . herewe take temperature to be zero .as we shall see , when we employ this meter to measure the position , the strength of the measurement can be varied by changing the squeezing of the meter along the position quadrature .the larger values of the squeezing parameter correspond to a stronger measurement while the smaller values of the squeezing parameter correspond to a weaker measurement .similarly , if we are measuring momentum one can tune the measurement strength by varying the value of squeezing of the variable .the system and the meter form a composite system and the joint wigner function representing this two degrees of freedom system can be obtained by multiplying the two individual wigner functions .for such a system it is natural to define a four dimensional column vector of phase space variables as the phase space displacement of the system variables given in equation ( [ displacement ] ) acts on this column vector to give us a displaced vector in terms of the above column vector the joint wigner function becomes the matrix is a diagonal matrix }}{4\pi^2 \delta q \delta p \delta q_{m_1 } \delta p_{m_1}}\ ] ] when we perform a measurement ( weak or strong ) of the position , we switch on the following interaction hamiltonian between the system and the meter degrees of freedom the corresponding unitary transformation on the composite system - meter is in the language of the wigner quasi - probability distribution a unitary operation is equivalent to a symplectic transformation satisfying the symplectic transformation corresponding to the interaction hamiltonian acts on the phase variables by multiplication leading to the computation of the transformed wigner function under this symplectic transformation } } { 4\pi^2 \delta q \delta p \delta q_{m_1 } \delta p_{m_1}}.\ ] ] the wigner function of the meter after the above interaction is obtained by integrating over the system variables and and is given by }}{2\pi\delta q_{m_1 } \delta p_{m_1}\delta q \sqrt{\frac{1}{(\delta q_{m_1})^2}+\frac{1}{(\delta q)^2}}}. \label{wigsymp1}\ ] ] we can see at this point that the state of the meter has become correlated with the state of the system .however , as the meter is a macroscopic entity , its very observation leads to the collapse of its wavefunction and gives us a definite value .thus the probability density for the meter to show a reading } } { \sqrt{2\pi } \delta q_{m_1 } \delta q\sqrt{\frac{1}{(\delta q_{m_1})^2 } + \frac{1}{(\delta q)^2}}}. 
\label{probweak1}\ ] ] on the other hand , the reduced state of the system after the measurement interaction represented by the symplectic transformation is obtained by integrating over the meter degrees of freedom leading to the wigner function for the system } } { 2 \pi \delta p_{m_1 } \delta p \delta q \sqrt{\frac{1}{(\delta p_{m_1})^2}+\frac{1}{(\delta p)^2 } } } \label{wigtrans}\ ] ] in the weak measurement limit is large ( i.e. the initial meter state is prepared in distributions wide in position ) .since we have chosen the meter to be in a squeezed coherent state , this corresponds to a high degree of squeezing in the momentum quadrature of the initial meter state . in this limitwe have hence , weak measurement causes controllable disturbance to the state and the the disturbance vanishes in the limit of extremely weak measurement .however , if we make the measurement too weak , the correlation between the meter state and the system state diminishes . in the limit of extremely weak measurement , where no disturbance is caused, we do not learn anything about the system from observing the meter . in our scheme ,the first measurement that we perform is a weak measurement of position with a tunable strength as described above .subsequently , we carry out a projective measurement of momentum is on this system , then the probability density for obtaining any momentum as obtained from the modified system wigner function given in equation ( [ wigtrans ] ) is given by , } } { 2 \pi \delta p_{m_1 } \delta p \sqrt{\frac{1}{(\delta p_{m_1})^2}+\frac{1}{(\delta p)^2 } } } \label{probstrong1}\ ] ] in the reverse scenario where we do a weak measurement of momentum followed by a projective measurement of position , the composite system - meter system wigner function after the measurement interaction given by the hamiltonian is given by }}{4 \pi ^2\delta p_{m_2 } \delta p \delta q_{m_2 } \delta q } \label{wigsymp2}\ ] ] where and denote the position and momentum coordinates of the meter measuring momentum of the system .the wigner of the meter alone is given by }}{2 \pi \delta p_{m_2 } \delta p \delta q_{m_2 } \sqrt{\frac{1}{(\delta p)^2}+\frac{1}{(\delta q_{m_2})^2}}}\ ] ] giving the probability density of the meter to show a reading being } } { \sqrt{2 \pi } \delta p \delta q_{m_2 } \sqrt{\frac{1}{(\delta p)^2}+\frac{1}{(\delta q_{m_2})^2 } } } \label{probweak2}\ ] ] the corresponding system wigner function becomes }}{2 \pi \delta p_{m_2 } \delta p \delta q \sqrt{\frac{1}{(\delta p_{m_2})^2}+\frac{1}{(\delta q)^2}}}\ ] ] as before , in the weak measurement limit the disturbance caused in the system is limited and we have on this state we perform a projective measurement of position giving us the probability density for getting a result }}{\sqrt{2 \pi } \delta p_{m_2 } \delta q \sqrt{\frac{1}{(\delta p_{m_2})^2}+\frac{1}{(\delta q)^2}}}. \label{probstrong2}\ ] ]into two parts . for one half of the ensemble we measure position ( the weakness being defined by the initial spread in position of the meter ) leading to a disturbed ensemble . on every member of the ensemble carry out a projective measurement of momentum . 
with the other half of the initial ensemble , momentum is measured weakly leading to a disturbed ensemble on which a projective measurement of is carried out .[ flowchart ] ] in order to perform complete state tomography of any gaussian state of the form discussed earlier , we are required to estimate the center of the gaussian wigner function , and the spreads and .hence it is necessary to measure both position and momentum of the system as accurately as possible . to this end, we divide the initial ensemble of identically prepared systems into two equal parts . on every member of one partwe perform a weak measurement of position .the strength of the measurement is governed by the initial squeezing of the position quadrature of the meter determining the initial variance of the meter state .the larger the value of , weaker is the measurement strength and vice versa .the meter reading is recorded in each case and the final states of all the members are collected to generate a second ensemble .the members of this ensemble are now used as the initial states of a second measurement , which is a projective measurement of momentum . as before the meter readings of this measurement are noted .now the process is repeated with the members of the second part of the initial ensemble where we first measure weakly , with the strength of the measurement determined through and then carry out a projective measurement of . in all further analysis and discussionswe take .a summary of the procedure is illustrated in figure [ flowchart ] .the entire algorithm is repeated over many runs to rule out statistical fluctuations .it is worth noting that the initial squeezing of the relevant quadrature which determines the strength of the measurement is a tunable parameter in our hand .although we call certain measurements `` weak '' , we actually mean that it is not too strong to be projective and not too feeble to induce large errors to the measurement outcomes .the main point is that the measurements are weak enough and do not cause the complete collapse of the state so that it can be used for subsequent measurements .the expectation values obtained from the and measurements are used to estimate the values of , and the spreads and .looking at the equations ( [ wigsymp1 ] ) and ( [ wigsymp2 ] ) reveals that the information about the system has flowed into the meter . in fact the meter is now centered over which is the center of the initial system state .we carry out simulations using the meter reading probabilities given in equations ( [ probweak1 ] ) , ( [ probstrong2 ] ) , ( [ probweak2 ] ) and ( [ probstrong2 ] ) .we take different ensemble sizes of member numbers , , and respectively with randomly generated gaussian states .each virtual experiment is repeated over 1000 runs .the quantities and for a state are estimated by taking the mean over the and measurements while and are estimated from the corresponding variances .the order of measurement of and is reversed for the second part of the ensemble to rule out the possibility of preferential treatment of any of the observables . in the scheme involving projective measurementsonly , we divide the original ensemble into two parts and perform and measurements independently on the individual members of these parts .no sequential measurements are possible here because of the wavefunction collapses after the measurement . 
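a hedged monte-carlo sketch of this split-ensemble procedure is given below. the noise model is reconstructed from the measurement-probability densities above under simplifying assumptions: a weak measurement with meter width dq_m returns the state's centre plus gaussian noise of variance dq^2 + dq_m^2 and kicks the conjugate quadrature by a zero-mean gaussian of width 1/(2 dq_m) (minimum-uncertainty meter); the other half of the ensemble is treated symmetrically with the same meter width. all variable names are illustrative, and the correlations between a member's weak reading and its disturbed state are ignored at the distributional level.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_state(q0, p0, dq, dp, n, dq_m, n_runs=1000):
    """Monte-Carlo sketch of the split-ensemble scheme (assumed noise model)."""
    dp_m = 1.0 / (2.0 * dq_m)          # back-action width of a minimum-uncertainty meter
    half = n // 2
    err_center = err_spread = 0.0
    for _ in range(n_runs):
        # half 1: weak q measurement, then projective p on the disturbed states
        qw = q0 + rng.normal(0.0, np.hypot(dq, dq_m), half)
        ps = p0 + rng.normal(0.0, np.hypot(dp, dp_m), half)
        # half 2: weak p measurement, then projective q (same meter width assumed)
        pw = p0 + rng.normal(0.0, np.hypot(dp, dq_m), half)
        qs = q0 + rng.normal(0.0, np.hypot(dq, dp_m), half)
        q_hat, p_hat = np.mean(np.r_[qw, qs]), np.mean(np.r_[pw, ps])
        # subtract the known meter / back-action contributions from the sample variances
        vq = max(np.var(qw) - dq_m**2, 1e-6)
        vp = max(np.var(pw) - dq_m**2, 1e-6)
        err_center += np.hypot(q_hat - q0, p_hat - p0)
        err_spread += np.hypot(np.sqrt(vq) - dq, np.sqrt(vp) - dp)
    return err_center / n_runs, err_spread / n_runs

for dq_m in (0.5, 2.0, 8.0):           # stronger -> weaker position measurement
    print(dq_m, estimate_state(q0=1.0, p0=-0.5, dq=0.8, dp=0.7, n=40, dq_m=dq_m))
```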
the accuracy of the state estimate is measured via the following distance measures where , , and are the estimated values of , , and , respectively .the parameter is a measure of how well our method is able to estimate the center of the gaussian and gives a measure of how well the spreads of the gaussian have been estimated .the two measures and represent closeness in position and width of the estimated wigner distribution from the original wigner distribution respectively .we can immediately see that the lower these distances , the better the estimates .for a perfect estimate the values should go to zero .to study the average performance of our scheme for squeezed displaced thermal states , we begin by numerically generating 100 gaussian states at a particular temperature , with randomly chosen values of displacement and squeezing . to generate these states ,the value of the parameter in equation ( [ squeezing ] ) is varied between and according to a uniform distribution .similarly , the centers of the gaussians are also chosen randomly using uniform distributions between and for both and . with each of these 100 random states, we numerically carry out the prescription given in subsection [ pres ] on a fixed number of identical copies of the state determining the ensemble size .the simulation is carried out with the help of the results obtained in section [ scheme ] . the distance measures and used to compare the efficacy of our method with projective measurements are computed .each experiment involving one gaussian state is repeated 1000 times to reduce statistical fluctuations .the process is carried out with ensembles of sizes , , and . for a given ensemble size, the results for each member are averaged over 1000 runs and then the distance measures are averaged over the 100 states .we show that there is a clear advantage of using our scheme when the ensemble size is small .the test is carried out for three different sets of gaussian states corresponding to three different temperatures given by , and , respectively .using averages over 100 random states further averaged over 1000 runs .the behaviors of and are plotted with for ensemble sizes and .the corresponding projective measurement results are plotted as dotted lines .while the method performs well in estimating the position of the gaussian states for all ensemble sizes , as represented by , it provides a clear advantage for estimating the spreads represented by over projective measurements in the case of a small ensemble of size .[ k_1 ] ] using averages over 100 random states further averaged over 1000 runs .the behaviors of and are plotted with for ensemble sizes and .the corresponding projective measurement results are plotted as dotted lines .while the method performs well in estimating the position of the gaussian states for all ensemble sizes , as represented by , it provides a clear advantage for estimating the spreads represented by over projective measurements in the case of a small ensemble of size .[ k_0.9 ] ] using averages over 100 random states further averaged over 1000 runs .the behaviors of and are plotted with for ensemble sizes and .the corresponding projective measurement results are plotted as dotted lines .while the method performs well in estimating the position of the gaussian states for all ensemble sizes , as represented by , it provides a clear advantage for estimating the spreads represented by over projective measurements in the case of a small ensemble of size .[ k_0.8 ] ] the performance of state 
estimation of gaussian states via our weak measurement protocol is compared to the corresponding performance of projective measurements .this is done via plots of the distance measures and vs weakness parameter defined by the inverse of squeezing of the meter state , averaged over 100 such random states .the process is carried out for four different small ensemble sizes and and three different absolute temperatures given by and .let us first look at figure [ k_1 ] ( a ) .in this case the distance measures and have been plotted with for an absolute temperature given by and ensemble size .a low value of indicates the meter prepared as a wide gaussian in the position space .this corresponds to the weak measurement limit .a very weak measurement introduces a large amount of error in the measurement and this leads to a low quality of state estimation .this can be seen from the fact that the values of both and , on the left hand side of the plot for the weak measurement method are much higher than those involving only projective measurements represented by the dotted line .similarly , on the right side of the plot , the meter is prepared as a narrow gaussian .the corresponding measurement limit for this side of the plot is that of strong projective measurements .projective measurements destroy the state of the system and hence using the state for the second time leads to a low quality of state estimation .only for an intermediate value of weakness , our method performs better than projective measurements .this is seen from the plot of going below the dotted line representing the same distance measure for the projective measurement .the plot of attains its minimum for the intermediate values of but remains above the dotted line .it indicates that though our method has worked in giving a better estimation of the position of the gaussian state , it does not perform as well to provide an estimation of the spreads of the gaussian , in this particular case .figure [ k_1 ] ( b ) shows the plot of the same parameters for the same absolute temperature but for a lower ensemble size of .we find that here our method proves to be more effective than the projective measurements both for the estimations of the position and the spread of the gaussian wigner function .moving on to figure [ k_1](c ) and ( d ) which are for the ensembles of sizes and we find that the relative efficacy of the estimation for position as well as the spread improves .we repeat the same exercise with gaussian states with finite temperatures with and as indicated in figures [ k_0.9 ] and [ k_0.8 ] , respectively .we observe the same trend as observed for the zero temperature in all these cases .our method is not too effective in the extremely weak or extremely strong regimes .it works in the intermediate regimes depending upon the size of the ensemble and its efficacy increases with the lowering of the ensemble size . in each of the plots, it is observed that the distance measures attain small values for an optimal value of squeezing .this is expected , as a very large value of squeezing ushers in too many errors into the `` weak measurement '' , while a small value causes a larger disturbance to the original state .we observe from figures [ k_1 ] , [ k_0.9 ] and [ k_0.8 ] that for an optimal range of values , the weak distance measure curves go below the projective measurement line ( represented by broken straight lines ) . in this regime of values ,our method is more effective than the projective measurement state estimation . 
the advantage is greater for smaller ensemble size .in fact for the ensemble size of and , the performances of the optimal weak measurement method and projective measurement are almost equal as can be seen in figure [ k_1 ] .however , as the ensemble size decreases , a clear advantage emerges for the proposed scheme .there is no particular change in the advantage of our scheme relative to projective measurements , with change of temperature as is evident from plots with different temperature parameter .in this paper , we have described our work on the estimation of gaussian states by a method employing weak or unsharp measurements .we use phase space methods and the language of wigner distributions for state estimation .we compare our results with state estimation based on projective measurements and show how one can do better in certain parameter regimes .recycling of states , where one makes more than one measurement on a single copy before discarding it and tenability of the strength of the weak measurement are the two main ingredients of our scheme .the strength of the measurement is directly related to the amount of squeezing in the initial pointer state and can be tuned at will and we optimize the performance of our scheme with respect to this weakness parameter .the efficacy of the scheme is tested over a randomly chosen subset of gaussian states .we demonstrate that the weak measurement based scheme produces a wigner distribution which is much closer to the original wigner distribution as compared to the scheme based on projective measurements , for small ensemble sizes . as the ensemble size increases , the relative advantage of our scheme decreases , as seen in the comparative results for varying ensemble sizes .the behavior is repeated over the range of temperatures we have considered .while in this work we have dealt with gaussian states with the maximum spread along the or axes it will be interesting to extend the scheme to general gaussian and non - gaussian states .another interesting direction that we are following up is to compare our results with schemes similar to the arthurs and kelly setup where position and momentum are measured together .39ifxundefined [ 1 ] ifx#1 ifnum [ 1 ] # 1firstoftwo secondoftwo ifx [ 1 ] # 1firstoftwo secondoftwo `` `` # 1'''' [ 0]secondoftwosanitize [ 0 ] + 12$12 & 12#1212_12%12[1][0] __ ( , ) _ _ ( , ) _ _ ( , ) pp .link:\doibase 10.1103/physreva.61.032306 [ * * , ( ) ] link:\doibase 10.1103/physreva.64.052312 [ * * , ( ) ] link:\doibase 10.1002/prop.200310009 [ * * , ( ) ] link:\doibase 10.1103/physreva.89.062121 [ * * , ( ) ] link:\doibase 10.18520/v109/i11/1939 - 1945 [ * * , ( ) ] link:\doibase 10.1103/physreva.81.012103 [ * * , ( ) ] link:\doibase 10.1038/nature10120 [ * * , ( ) ] link:\doibase 10.1103/physrevlett.108.070402 [ * * , ( ) ] ( ) , link:\doibase 10.1038/srep01193 [ * * , ( ) ] link:\doibase 10.1209/0295 - 5075/96/40002 [ * * , ( ) ] link:\doibase 10.1103/physreva.92.062133 [ * * , ( ) ] link:\doibase 10.1103/physreva.40.2847 [ * * , ( ) ] link:\doibase 10.1038/srep12289 [ * * , ( ) ] link:\doibase 10.1103/physreva.79.033834 [ * * , ( ) ] link:\doibase 10.1016/s0079 - 6638(08)70389 - 5 [ * * , ( ) ] link:\doibase 10.1103/physreva.70.053812 [ * * , ( ) ] link:\doibase 10.1103/physreva.87.012107 [ * * , ( ) ] link:\doibase 10.1103/physreva.55.r1561 [ * * , ( ) ] link:\doibase 10.1103/physreva.61.010304 [ * * , ( ) ] link:\doibase 10.1103/revmodphys.81.299 [ * * , ( ) ] link:\doibase 10.1103/physrevlett.106.220502 [ * * , ( ) ] 
we present a scheme to estimate gaussian states of one-dimensional continuous variable systems, based on weak (unsharp) quantum measurements. the estimation of a gaussian state requires us to find the position (q), the momentum (p) and their second-order moments. on half of the ensemble we measure q weakly and follow it up with a projective measurement of p, while on the other half we measure p weakly followed by a projective measurement of q. in each case we use the state twice before discarding it. we compare our results with projective measurements and demonstrate that under certain conditions such weak measurement-based estimation schemes, where recycling of the states is possible, can outperform projective measurement-based state estimation schemes.
most iterative decoders , e.g. turbo decoders , rely on knowledge of the signal - to - noise ratio ( snr ) or the channel reliability constant .the snr is also required for other functionalities in the receiver .many snr estimators have been proposed , both data - aided ( da ) that require pilot symbols or feedback from the decoders , and non - data - aided ( nda ) that are only based on the received observables .a comparison of both da and nda snr estimators was performed in and compared to the cramr - rao lower bound ( crlb ) for da estimators .the crlb for nda estimators was later derived in . the nda maximum likelihood ( ml )estimator based on the expectation maximization ( em ) algorithm was proposed in and also compared to other nda estimators .this nda ml estimator was found iteratively , but unfortunately requiring processing of all observables for each iteration , making it computationally complex .the contributions in this paper are as follows .to complement the nda crlb for snr in , we derive the nda crlb for the signal amplitude , the noise variance , the channel reliability constant , and the bit - error rate ( ber ) .it is also shown how to estimate the _ a priori _ probability of the transmitted symbols , in the case when they are not equally likely .furthermore , we provide a more direct , alternative derivation of the nda ml estimator and we propose a new , low complexity nda snr estimator .the performance of the new estimator is compared to previously suggested nda estimators and found to be similar to that of the nda ml estimator .this performance is achieved with significantly lower computational complexity than the ml estimator . only binary - phase - shift - keying ( bpsk ) transmission is considered here , but generalization to -psk is straightforward .let denote a binary random variable with equally likely symbols .further , let represent a zero - mean gaussian random variable with unit variance . define a new random variable according to with probability density function ( pdf ) expressed as let , , and denote samples from , , and , respectively . independent samples of is observed and collected in a column vector }{^t} ] the transmitted data , and }{^t}$ ] white gaussian noise . is the transmitted energy and is the double - sided noise power spectral density .define the snr as since all samples in are assumed independent , the logarithm of their joint pdf is given by the average ber can be expressed as where is the gaussian -function .the average mutual information ( mi ) between and in can be expressed as where is defined as the log - likelihood ratio ( llr ) for is defined as where is the channel reliability constant the instantaneous ber for a specific received symbol at position can be estimated by the corresponding instantaneous mi for a symbol at position can be estimated by with no knowledge of the transmitted symbols the average ber in or the average mi in depend solely on the snr , . also , in order to use an llr - based decoder ( basically all turbo - like decoders or soft decoders ) , to estimate the instantaneous ber in , or to estimate the instantaneous mi in , the channel reliability constant in needs to be known . however , as we show in section [ sec_estimators ] , the snr and the channel reliability constant are related through the second moment of the observables .we therefore only need to estimate one of the two . 
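for illustration, the sketch below shows how, once an snr estimate is available, the channel reliability constant, per-symbol llrs, instantaneous error probabilities and mutual information estimates all follow from the second moment of the observables; the conventions g = e_s/n_0 and r_i = mu a_i + sigma n_i with unit-variance noise are assumptions made for the sketch, since the exact definitions were lost in extraction.

```python
import numpy as np
from scipy.stats import norm

def channel_quantities(r, g_hat):
    """From observables r and an SNR estimate g_hat (assumed g = mu^2/(2*sigma^2)),
    recover mu, sigma, the channel reliability constant Lc = 2*mu/sigma^2,
    per-symbol LLRs, instantaneous BER / MI estimates and the average BER."""
    m2 = np.mean(r**2)                              # second moment = mu^2 + sigma^2
    sigma2 = m2 / (1.0 + 2.0 * g_hat)
    mu = np.sqrt(2.0 * g_hat * sigma2)
    Lc = 2.0 * mu / sigma2                          # channel reliability constant
    llr = Lc * r                                    # per-symbol log-likelihood ratios
    p_inst = 0.5 * (1.0 - np.tanh(np.abs(llr) / 2)) # instantaneous bit-error probability
    p = np.clip(p_inst, 1e-12, 0.5)
    mi_inst = 1.0 + p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p)   # 1 - Hb(p)
    ber_avg = norm.sf(np.sqrt(2.0 * g_hat))         # Q(sqrt(2 g)) for BPSK
    return Lc, llr, p_inst, mi_inst, ber_avg

rng = np.random.default_rng(0)
g = 10 ** (3 / 10)                                  # 3 dB true SNR
mu, sigma = 1.0, 1.0 / np.sqrt(2 * g)
r = mu * rng.choice([-1.0, 1.0], 500) + sigma * rng.standard_normal(500)
Lc, llr, p_i, mi_i, ber = channel_quantities(r, g_hat=g)
print(round(Lc, 3), round(ber, 4), round(mi_i.mean(), 3))
```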
here , we have chosen to estimate the snr , .the crlb , here denoted by , provides a lower bound on the variance of any unbiased estimator .let represent an arbitrary function of the parameters and , and define .the normalized crlb ( ncrlb ) for can then be calculated as { \textbf{j}^{-\!1}}\left [ \begin{array}{cc } \!\!\!{\frac{\partial { g(\mu,{\sigma})}}{\partial \mu } } & \!{\frac{\partial { g(\mu,{\sigma})}}{\partial { \sigma } } } \\\end{array } \!\!\!\right ] { ^t}\ ! , \label{eq_crlbd}\end{aligned}\ ] ] where is the fisher information matrix , defined as .\label{eq_fm}\end{aligned}\ ] ] here denotes the expectation over .a similar fisher information matrix as in has been derived in and the inverse can be expressed as , \label{eq_fminv}\end{aligned}\ ] ] where is a function of using , the crlb for can be calculated as reported in and .the ncrlbs for , , , , and are the ncrlb for can also be found in a similar way by replacing in with .unfortunately , there is no simple form to express .note that for da estimation is found by setting in .this implies that the ncrlbs for da estimation of , , , , and are easily found by letting in .define moment of as which can be approximated by its sample average .the second moment of is assume that an estimate of exists , denoted by . combining with and gives the following estimators for , , and the next sub - sections present different estimators for that can be used to estimate the above parameters .the absolute moment ( am ) of is defined as and can also be approximated by the sample average for large or small , the am will tend to in other words , for high values of , can be closely approximated by using and .this estimator was first introduced in and will here be referred to as the conventional method ( cm ) estimate . the ml estimator maximizes the joint pdf in . taking the partial derivatives of setting the derivatives in and to zero and solve for gives inserting in gives an expression depending only on , and , which can be solved iteratively by where denotes the estimate of after iterations .the iteration in is identical to the iteration in the em algorithm presented in , but here derived in a different way .a good starting point for the iterative estimator is the cm estimate of , .after iterations the snr can be estimated by which will be referred to as the ml estimator .the approach of estimating a parameter based on the moments of the observables is known as the method of moments ( mm ) .the fourth moment of is combining with gives the mm estimator , e.g. , where and are approximated by the sample average .if , is no longer real and is set to zero .this will be referred to as the mm estimator .an estimator based on the second moment and the am can be found by combining with .unfortunately , there is no closed - form analytical solution for and as for the mm estimator .however , dividing the square of by gives an expression that only depends on , an estimator for can therefore be stated as since there is no closed - form solution to , alternative methods must be explored . in , a table - lookup for is suggested .a different approach is to approximate with a simple closed - form function , which was done in as , the estimator in , using the approximation in is referred to as the second - order polynomial ( p2 ) estimator . 
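the following sketch collects the conventional (absolute-moment), method-of-moments (m2/m4) and iterative ml estimators in runnable form; the snr convention g = mu^2/(2 sigma^2) and the tanh form of the em update are the standard ones and are assumed here, since the paper's own formulas were garbled in extraction. the cm estimate is used as the starting point of the iteration, as suggested above.

```python
import numpy as np

def snr_cm(r):
    """Conventional (absolute-moment) estimator: E|r| ~ mu at high SNR."""
    mu = np.mean(np.abs(r))
    s2 = np.mean(r**2) - mu**2
    return mu**2 / (2 * s2) if s2 > 0 else np.inf

def snr_mm(r):
    """Method-of-moments (M2/M4) estimator for real BPSK: mu^4 = (3*M2^2 - M4)/2,
    set to zero when the root would be complex."""
    m2, m4 = np.mean(r**2), np.mean(r**4)
    s = 3 * m2**2 - m4
    if s <= 0:
        return 0.0
    mu2 = np.sqrt(s / 2.0)
    return mu2 / (2 * (m2 - mu2)) if m2 > mu2 else np.inf

def snr_ml(r, n_iter=10):
    """Iterative NDA ML (EM) estimator; each pass re-weights all N observables
    with tanh soft decisions, which is what makes it computationally heavy."""
    m2 = np.mean(r**2)
    mu = np.mean(np.abs(r))                 # CM starting point
    for _ in range(n_iter):
        s2 = max(m2 - mu**2, 1e-12)
        mu = np.mean(r * np.tanh(mu * r / s2))
    s2 = max(m2 - mu**2, 1e-12)
    return mu**2 / (2 * s2)

# quick check on synthetic data (assumed model r = mu*a + sigma*n)
rng = np.random.default_rng(0)
g_true, N = 10 ** (-2 / 10), 128            # -2 dB, small ensemble
mu, sigma = 1.0, 1.0 / np.sqrt(2 * g_true)
r = mu * rng.choice([-1.0, 1.0], N) + sigma * rng.standard_normal(N)
print([round(f(r), 3) for f in (snr_cm, snr_mm, snr_ml)])
```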
from it is easy to verify that therefore , we suggest the following approximation of and its inverse numerical optimization , using the nelder - mead simplex method to minimize the mean squared difference between and gives , , and . the estimator in , using the novel approximation in , with whenever , is a new approach we propose and is here referred to as the am estimator .( 94,84)(-1,-2 ) ( -3,0 ) .,title="fig:",width=321 ] ( -2,46)(0,0)[r ] ( 49,-1)(0,0)[t] fig .[ fig_h_approx ] shows the analytical expression from together with its indistinguishable approximation in .since the cm estimator in depends only on it is also shown in fig .[ fig_h_approx ] .it is clear that the function used by the cm estimator converges to the analytical one for large , but differs for small .the same figure also shows the approximation in .since this polynomial approximation was only optimized between to db ( ) it differs from the analytical expression outside this region .define to be the _ a priori _ probability of in the ml estimator is invariant to non - equiprobable symbols and gives the same results even if .it is straightforward to show that , , and are independent of .since the cm , the mm , the p2 , and the am estimators are based only on these quantities , they will give the same results independent of . however , when , the odd moments are non - zero , this means that the _ a priori _probability can be estimated using , by combining with and performance of the snr estimators is evaluated based on their normalized mean squared error ( nmse ) and their normalized bias ( nb ) where is estimated based on samples .the number of trials was chosen to .the best estimator is an unbiased estimator with minimum nmse .[ fig_nmse_g_gs ] shows the nmse and fig .[ fig_nb_g_gs ] shows the nb , both for observables .the nda and the da ncrlb are also included as a reference , even though they are only bounds for unbiased estimators . all the estimators presented here are biased when , even for high which is evident from fig .[ fig_nb_g_gs ] .different approaches to reduce the bias has been suggested , e.g. , .the cm estimator has a large nb ( and therefore also a large nmse ) for low .for large the cm estimator approaches the ml estimator , which was shown analytically in .in fact , figs . show that all estimators , except the p2 estimator converges to the same constant nmse and constant nb for high ( the nb is around 5% above the true ) .the p2 estimator only works well between -3 to 3 db , the interval for which it was optimized .the mm estimator has the second highest nmse for low .the ml estimator after iterations has the lowest nmse at -6 db , but fig .[ fig_nb_g_gs ] shows that it at the same time has the second highest nb .finally , the suggested am estimator has almost identical performance ( both in nmse and nb ) as the ml estimator for all , even though it has a computationally complexity that is less than the first ml iteration. figs .show the nmse and the nb for different at -2 db .this corresponds to an around 1 db for a half - rate code , e.g. 
the original turbo code .at this low snr , the cm estimator has bad performance .the nb saturates around 60% above the true value ( not shown here ) , which gives the high nmse in fig .[ fig_nmse_g_k ] .the p2 estimator has a negative nb for large at this snr .the ml estimator after iterations and the am estimator have a small positive nb ( around 1% ) for large .the ml estimator and the mm estimator are the only two estimators that are unbiased for large , but only the ml estimator approaches the ncrlb in fig .[ fig_nmse_g_k ] which makes it asymptotically optimal .the second best estimator , after the ml estimator , over the whole range of is the suggested am estimator .in this paper we have derived the nda ncrlb for the signal amplitude , the noise variance , the channel reliability constant , and the ber in an awgn channel with bpsk modulated transmission .it was also shown that these parameters , as well as the _ a priori _ probability of the transmitted symbols and the instantaneous mi can all be estimated based on the snr estimate .a novel snr estimator with low computationally complexity was introduced and shown to be surpassed in performance only by the iterative ml estimator among previously suggested estimators .the proposed estimator performs close to the performance of the iterative ml estimator at significantly lower computationally complexity . c. berrou , a. glavieux , and p. thitimajshima , `` near shannon limit error - correcting coding and decoding : turbo - codes , '' in _ proc .ieee int .( icc 93 ) _ , vol . 2 ,geneva , switzerland , may 1993 , pp .10641070 .p. hoeher , i. land , and u. sorger , `` log - likelihood values and monte carlo simulation some fundamental results , '' in _ proc. int .symp . on turbo codes and rel .topics _ , brest , france , sept . 2000 , pp . 4346 .
non-data-aided (nda) parameter estimation is considered for binary-phase-shift-keying transmission in an additive white gaussian noise channel. cramér-rao lower bounds (crlbs) for the signal amplitude, noise variance, channel reliability constant and bit-error rate are derived, and it is shown how these parameters relate to the signal-to-noise ratio (snr). an alternative derivation of the iterative maximum likelihood (ml) snr estimator is presented together with a novel, low-complexity nda snr estimator. the performance of the proposed estimator is compared to previously suggested estimators and the crlb. the results show that the proposed estimator performs close to the iterative ml estimator at significantly lower computational complexity.
recently , much attention has been focused on the topic of scale - free networks which characterize many social , information , technological and biological systems. the qualitative properties of many interesting real - world examples , such as the internet network , the power grid network and the protein interaction network , are as following : ( 1 ) : : the degree distribution has power - law tail ; ( 2 ) : : local clustering of edges : graph is not locally tree - like ; ( 3 ) : : small average distance .the networks can be visualized by nodes representing individuals , organizations , computers and by links between them representing their interactions . for the purpose of analyzing topology, we ignore the variation in the type of links .robustness of the network topology comes from the presence of alternate paths , which ensures the communication remains possible in spite of the damages to the network .designers of the networks must assume that networks have random failures or might be attacked , and some of these attacks can result in damage .the robust networks will continue functioning in spite of such damages .although many literatures have discussed what the optimal network topology would be , many real - world networks present the power - law degree distribution .when the scale - free networks are subjected to random breakdowns , with a fraction of the nodes and their connections are removed randomly , the network s integrity might be compromised : when the exponent of the power - law degree distribution , there exists a critical threshold , such that for , the network would break into smaller and disconnected parts , but the networks with are more resilient to random breakdowns .cohen _ et .al_ presented a criterion to calculate the percolation critical threshold to random failures to scale - free networks .if we attack the scale - free networks intentionally : the removal of sites is not random , but rather sites with the highest connectivity are targeted first .the numerical simulations suggest that scale - free networks are highly sensitive to this kind of attack. cohen _ et . al_ studied the exact value of the critical fraction needed for disruption .thus scale - free networks are highly robust against random failures of nodes and hypersensitive to intentional attacks against the system s largest nodes .so a randomly chosen node has low degree with high probability , but removal of a highly connected node might produce large effect to the network .this situation is often compared to that of the classical random graph presented by erd and rnyi. such graphs have a poisson degree distribution .this makes the random graphs less robust to random failures than comparable networks with power - law degree distribution , but much more robust against attacks on hubs . 
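as a concrete illustration of the criterion mentioned above, the sketch below computes the critical fraction of randomly removed nodes, p_c = 1 - 1/(kappa - 1) with kappa = <k^2>/<k>, for a poisson degree sequence and for a heavy-tailed power-law sequence; the sampling scheme and parameter values are illustrative only, and the two sequences are not matched in average degree here.

```python
import numpy as np

def critical_fraction(degrees):
    """Molloy-Reed / Cohen et al. criterion: the giant component survives random
    dilution as long as kappa = <k^2>/<k> > 2; the critical removed fraction is
    p_c = 1 - 1/(kappa - 1)."""
    k = np.asarray(degrees, dtype=float)
    kappa = np.mean(k**2) / np.mean(k)
    return 1.0 - 1.0 / (kappa - 1.0)

rng = np.random.default_rng(0)
n = 100_000

poisson_deg = rng.poisson(4.0, n)                     # Erdos-Renyi-like degrees
m, alpha = 2, 2.5                                     # minimal degree and exponent
u = rng.random(n)
powerlaw_deg = np.floor(m * (1 - u) ** (-1 / (alpha - 1))).astype(int)  # heavy tail

print("poisson   p_c ~", round(critical_fraction(poisson_deg), 3))
print("power law p_c ~", round(critical_fraction(powerlaw_deg), 3))
```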
in this paper , we specifically focus on the robustness of the network topology to random failures .we use the percolation theory and the optimization method to investigate the guideline which can maximize the robustness of the scale - free networks to random failures of nodes with the constrained condition that the average connectivity of per node in the network is constant .the percolation theory provides the measures of distribution which are possible ways for measuring robustness .we examine the relationship between the average connectivity per node and the network robustness to random failures .then , we investigate the trend of the network robustness to random failures with the network size .the work may provide the theoretical evidence that if the minimal connectivity and the exponent of the power - law degree distribution take in more optimal way , the robustness of the scale - free networks can be optimized .if we construct and maintain a network with a given number of nodes as being proportional to the average number of links per node in the network , our goal then becomes how to maximize the robustness of a network with nodes to random failures with the constraint that the number of links remains constant but the nodes are connected in a different and more optimal way .our goal is to maximize the threshold for random removal with the condition that the average degree per node is constant .we construct the following model . { \rm s.t .} & \langle k \rangle={\rm constant . }\end{array } \right.\ ] ] for any degree distribution , the threshold for random removal of nodes is where is calculated from the original connectivity distribution .a wide range of networks have the power - law degree distribution : where is the minimal connectivity and is an effective connectivity cutoff presented in finite networks . to the power - law degree distribution , the average can be given with the usual continuous approximation , this yields }{2-\alpha}. % \end{array}\ ] ] from ( [ f2.2 ] ) , can be calculated as }{[k^{(3-\alpha)}-m^{(3-\alpha)}]}.\ ] ] in a finite network , the largest connectivity can be estimated from where is the number of the network nodes .then we have that to the power - law degree distribution , we have & = & c[k^{(1-\alpha)}-m^{(1-\alpha)}]/(1-\alpha ) , \\[8pt ] \end{array}\ ] ] this yields }.\ ] ] in the real world , there always exists the relation , so we have combining ( [ f2.3 ] ) and ( [ f2.4 ] ) , we have that .\ ] ] from ( [ f2.5 ] ) , we have the following numerical results .it can been seen from table 1 that the distribution exponent increases when the minimal connectivity increases . combining ( [ f2.6 ] ) and ( [ f2.5 ] ), we have that from table 1 , we can get the following relationship : ( 1 ) : : when the average connectivity per node is constant , the exponent increases when the minimum connectivity increases ; ( 2 ) : : to the minimum connectivity , the exponent decreases when the average connectivity of the network increases . using the results obtained abovewe construct the following model . s.t . &\frac{(\alpha-1)}{(\alpha-2)}m[1-n^{-\frac{\alpha-2}{\alpha-1}}]=\langle k \rangle\\[10pt ] & m\in z^{+ } , \end{array } \right.\ ] ] where }{[k^{(3-\alpha)}-m^{(3-\alpha)}]}$ ] .the numerical results suggest that whether the network size is very large or not , reaches its maximum value when .the numerical results are presented in table 2 . 
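a numerical sketch of how such a table could be reproduced is given below, under the continuous approximation with cutoff k_max = m n^{1/(alpha-1)}: for each minimum connectivity m, the exponent alpha is solved from the mean-degree constraint stated above and the resulting breakdown threshold is evaluated. the kappa and threshold expressions are reconstructed from standard results, since symbols were lost in extraction, and the parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def mean_degree(alpha, m, N):
    """<k> for a truncated power law, continuous approximation."""
    return m * (alpha - 1) / (alpha - 2) * (1 - N ** (-(alpha - 2) / (alpha - 1)))

def kappa0(alpha, m, N):
    """<k^2>/<k> with cutoff K = m * N^(1/(alpha-1))."""
    K = m * N ** (1.0 / (alpha - 1))
    num = (2 - alpha) / (3 - alpha) * (K ** (3 - alpha) - m ** (3 - alpha))
    den = K ** (2 - alpha) - m ** (2 - alpha)
    return num / den

def critical_fraction(alpha, m, N):
    return 1.0 - 1.0 / (kappa0(alpha, m, N) - 1.0)

def alpha_for_mean_degree(k_avg, m, N):
    """Solve the constraint <k>(alpha) = k_avg for alpha, searched in (2, 3.5)."""
    return brentq(lambda a: mean_degree(a, m, N) - k_avg, 2.001, 3.5)

N, k_avg = 1_000_000, 8.0
for m in (1, 2, 3):
    a = alpha_for_mean_degree(k_avg, m, N)
    print(m, round(a, 3), round(critical_fraction(a, m, N), 4))
```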
from table 2 we can draw the following three conclusions:

(1) if the average connectivity per node and the exponent of the power-law degree distribution are held constant, the robustness of scale-free networks decreases as the network size becomes larger.

(2) if the network size is held constant, the robustness of scale-free networks increases as the average connectivity becomes larger.

(3) against random failures, increasing the robustness of a scale-free network by about 1% requires a several-fold increase in cost.

it is well known that networks with a power-law degree distribution are resilient to random failures. but this conclusion does not answer the following three questions: (i) for a fixed average connectivity, how should the distribution exponent of a scale-free network be chosen so that the network is more robust to random failures? (ii) for different network sizes, how many edges must be added to the network to reach a given robustness level against random failures? (iii) for an existing network with a power-law degree distribution, what should be done to improve its robustness?

in this paper, we used percolation theory and a mathematical programming method to optimize the robustness of scale-free networks against random failures and gave numerical results. finally, we gave the relationship between the threshold and the network size, the degree-distribution exponent and the average connectivity per node. from fig. 1 we conclude that as a scale-free network becomes larger, its robustness to random failures becomes weaker. for the internet and other growing scale-free networks, designers must therefore add more links to the network, raising the average connectivity per node, to maintain robustness against random failures.

subjects for further study include (i) an analysis of the robustness of scale-free networks to intentional attacks on the highest-connectivity nodes; (ii) the optimization of complex networks under both random failures and intentional attacks; (iii) topology strategies for improving the robustness of existing scale-free networks.

the authors are grateful to dr. qiang guo for her valuable comments and suggestions, which have led to a better presentation of this paper. this research was supported by chinese natural science foundation grant nos. 70431001 and 70271046.
it has been found that networks with a scale-free degree distribution are very resilient to random failures. the purpose of this work is to determine the network design guideline that maximizes the network robustness to random failures while keeping the average number of links per node constant. the optimal values of the distribution exponent and of the minimum connectivity for different network sizes are given in this paper. finally, an optimization strategy for improving the robustness of evolving networks is given.
in various medical studies, outcomes of interest include the time to death or the time to tumor recurrence. when observing survival times, censoring is common because information is only partially known. thus, survival data consist of a survival time as the outcome, a censoring status, and many covariates as risk factors. the relationship between survival time and the covariates has been studied extensively. in survival data analysis, customized statistical methodologies are employed because of non-normal distributions and censoring.

the surveillance, epidemiology and end results (seer) program, a premier source for cancer statistics in the united states, contains information on incidence, prevalence, and survival from specific geographic areas representing 28 percent of the us population. survival as an endpoint is one of the important outcomes of the seer database; hence, it is often of interest to determine the relationship between survival and covariates. when analysing seer data, it is important to screen for outlying observations, or outliers, that deviate significantly from other measurements, because they may distort the conclusions of the data analysis. therefore, the development of outlier detection methods is essential to obtain reliable results.

outlier detection has been studied for various types of data, including normal data, multivariate normal data, censored data, incomplete survey data, time series data, gene expression data, proteomics data, functional data, spatial data and circular data; for more details, see the corresponding references. an algorithm based on cox linear regression was developed to identify outlying observations in censored data. it can be more effective to utilize quantile regression because it is robust to outliers, and the use of quantile regression for outlier detection in proteomics data has been proposed. most algorithms focus on determining whether observations are outliers according to a threshold, which must be specified in advance. such dichotomous algorithms, which depend solely on a pre-specified cut-off, may often be unsatisfactory. thus, a function that provides scores and allows a threshold to be determined flexibly can be helpful.

in this paper, we present three outlier detection algorithms for censored data: the residual-based, boxplot, and scoring algorithms. the residual-based and boxplot algorithms were developed by modifying existing algorithms, while the scoring algorithm was developed to provide the outlying magnitude of each point relative to the distribution of observations and to enable the determination of a threshold by visualizing the scores. the presented algorithms are based on quantile regression, which is robust to outliers. the algorithms were investigated in a simulation study, and their characteristics are summarized at the end of that study. we implemented the three algorithms, customized for censored survival data, in an r package called outlierdc, which can be conveniently employed in the r environment and freely obtained from the comprehensive r archive network (cran) website (http://cran.r-project.org/web/packages/outlierdc/index.html). we demonstrate its use with real data from the seer database (http://seer.cancer.gov), which contains a number of data sets related to various cancers. the remainder of this paper is organized as follows.
in sections [theory] and [package], we describe three algorithms using censored quantile regression for identifying outlying observations in censored data and then implement them in the r package outlierdc. in section [simul], simulation studies are conducted to investigate the performance of the outlier detection algorithms. in section [ex], we illustrate the application of the algorithms using outlierdc with a real example. we present our conclusions in section [dis].

in this section, we describe three outlier detection algorithms based on censored quantile regression. we focus only on detecting observations that are too large, because observations that are too small can be generated by censoring. we first define the notation used to explain the algorithms. let the uncensored dependent variable of interest be the survival time, or some transformation of it, and let a censoring variable and a p-dimensional covariate vector be given for each observation. for each subject we observe the triple consisting of the observed response (the minimum of the survival and censoring times), the censoring indicator, and the covariates.

we consider the quantile regression model in which, for a given quantile level, the conditional quantile of the response is linear in the covariates, with a p-dimensional quantile coefficient vector and a random error whose conditional quantile equals zero. the conditional quantile function is defined as the smallest value at which the conditional cumulative distribution function of the survival time, given the covariates, reaches the quantile level.

several approaches, such as those of portnoy, peng and huang, and wang and wang, can be used to estimate the conditional quantile coefficients. for instance, let us consider locally weighted censored quantile regression as the basis of the outlier detection algorithms. previous methods have stringent assumptions, such as unconditional independence of the survival time and the censoring variable, or global linearity at all quantile levels. to alleviate these assumptions, wang and wang proposed locally weighted censored quantile regression based on the local kaplan-meier estimator with nadaraya-watson type weights and a biquadratic kernel function. the local kaplan-meier estimate of the conditional distribution function is obtained from the observed data using a sequence of non-negative local weights adding up to one; here, the nadaraya-watson type weight is employed, defined through a density kernel function and a bandwidth converging to zero as the sample size grows. by plugging this local estimator into the weight function, the estimated local weights are obtained. a weight is then assigned to each censored observation, based on the conditional cumulative distribution function of the censoring time given the covariates, and the regression coefficient estimates are obtained by minimizing a weighted check-function objective.

it is natural to consider the distance from each observation to the center in order to identify outliers. utilizing the residuals from fitting quantile regression, the quantreg procedure provides an outlier detection algorithm for uncensored data.
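the weighted check-function objective referred to above can be written down compactly. the sketch below is a schematic python version in the spirit of locally weighted censored quantile regression: the check ("pinball") loss is evaluated with observation weights, and censored observations redistribute their remaining mass to an arbitrarily large pseudo-response. the weights themselves are treated as given, the local kaplan-meier construction is not reproduced, and all names are ours.

import numpy as np

def check_loss(u, tau):
    # rho_tau(u) = u * (tau - 1{u < 0})
    u = np.asarray(u, dtype=float)
    return u * (tau - (u < 0.0).astype(float))

def weighted_objective(beta, y, X, w, tau, y_plus):
    # censored points (w_i < 1) split their mass between the observed value and y_plus
    r = y - X @ beta
    r_plus = y_plus - X @ beta
    return np.sum(w * check_loss(r, tau) + (1.0 - w) * check_loss(r_plus, tau))

# toy usage: uncensored data (all weights equal to one)
rng = np.random.default_rng(1)
n = 50
X = np.column_stack([np.ones(n), rng.random(n)])
y = X @ np.array([1.0, 0.5]) + rng.normal(0.0, 0.2, n)
w = np.ones(n)
print(weighted_objective(np.array([1.0, 0.5]), y, X, w, 0.5, y.max() + 10.0))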
the residual-based outlier detection algorithm was originally proposed on the basis of cox linear regression for censored data. it can be more effective to utilize quantile regression because it is robust to outliers. thus, we modify the residual-based algorithm for censored data by utilizing the residuals from fitting censored quantile regression, in the following manner. the residual of each observation is defined as the difference between its observed response and its conditional quantile estimated by censored quantile regression. the outlier indicator for an observation equals one when its residual exceeds a cut-off of the form k_r times a scale estimate, where k_r is a resistance parameter controlling the tightness of the cut-off and the scale estimate is the corrected median of the absolute residuals, that is, the median absolute residual divided by the inverse cumulative distribution function (cdf) of the gaussian density evaluated at the 0.75 quantile. as default values, we consider the median quantile level and the cut-off constant used in the quantreg procedure. an observation is declared an outlier if its indicator equals one. in our package outlierdc, this algorithm is implemented in the function odc with the argument method = "residual". the algorithm is summarized as follows:

algorithm 1: residual-based outlier detection
1. fit a censored quantile regression model (with the median quantile level) to the data.
2. calculate the residuals.
3. compute the scale parameter estimate from the residuals and the inverse cdf.
4. declare each observation an outlier if its residual is larger than k_r times the scale estimate.

a simple outlier detection approach based on a boxplot has been widely used for uncensored data, and boxplot algorithms based on quantile regression have been proposed for high-throughput, high-dimensional data. we modify the boxplot algorithm using quantile regression for censored data in the following manner. a censored quantile regression model is fitted to obtain the 25th and 75th conditional quantile estimates. the inter-quantile range (iqr) for each observation is the difference between its 75th and 25th conditional quantile estimates. the outlier indicator equals one when the observation lies above the upper fence, defined as the 75th conditional quantile plus k_b times the iqr, where k_b controls the tightness of the cut-off and has a default value of 1.5. if an observation is located above the fence, we declare it an outlier. the algorithm is powerful particularly when the variability of the data is heterogeneous. we implement the algorithm in the function odc with the argument method = "boxplot". it can be summarized as follows.

algorithm 2: boxplot outlier detection
1. fit censored quantile regression models at the 25th and 75th quantile levels to the data.
2. obtain the 25th and 75th conditional quantile estimates.
3. calculate the iqr for each observation.
4. construct the upper fence.
5. declare each observation an outlier if it is located above the fence.

the residual-based and boxplot algorithms described in the previous sections focus on determining whether observations are outliers according to a threshold, which must be specified in advance. these dichotomous algorithms, which depend solely on a pre-specified cut-off, may often be unsatisfactory. moreover, the boxplot algorithm is applicable only when a single covariate exists. thus, we developed the scoring algorithm, which provides an outlying degree that indicates the magnitude of deviation from the distribution of observations given the covariates. visualizing the scores enables the flexible determination of a threshold for outlier detection. the resulting scores are free from the levels of the covariates even when the variability of the data is heterogeneous. the outlying score is based on a relative measure of conditional quantiles; a python sketch of the two dichotomous algorithms just described is given below.
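a minimal python sketch of algorithms 1 and 2 follows, assuming that the conditional quantile estimates have already been obtained from a censored quantile regression fit; the planted toy data, the variable names and the particular quantiles used are ours.

import numpy as np
from scipy.stats import norm

def residual_flags(y, q50, k_r=1.5):
    # algorithm 1: flag observations whose residual exceeds k_r times the
    # corrected median of the absolute residuals
    r = y - q50
    sigma = np.median(np.abs(r)) / norm.ppf(0.75)
    return r > k_r * sigma

def boxplot_flags(y, q25, q75, k_b=1.5):
    # algorithm 2: flag observations above the upper fence q75 + k_b * iqr
    iqr = q75 - q25
    return y > q75 + k_b * iqr

# toy usage with conditional quantiles standing in for a censored quantile fit
rng = np.random.default_rng(2)
x = rng.random(200)
center = 3.0 - 0.1 * x
y = center + rng.normal(0.0, 0.3, 200)
y[:3] += 3.0                                   # plant three large outliers
q25, q50, q75 = center - 0.2, center, center + 0.2
print(residual_flags(y, q50).sum(), boxplot_flags(y, q25, q75).sum())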
the outlying score for an observation is defined as the difference between the relative distance of the observation from an upper conditional quantile and that from a lower conditional quantile, conditional on its corresponding covariates. larger scores indicate higher outlying possibilities. the normal qq plot of the scores enables the identification of outlying observations: when the scores are visualised, a threshold can be determined, and an observation is declared an outlier if its score exceeds that threshold. the algorithm is implemented in the function odc with the argument method = "score" and is summarised as follows.

algorithm 3: scoring outlier detection
1. fit censored quantile regression models at the required quantile levels to the data.
2. obtain the corresponding conditional quantile estimates.
3. calculate the outlying score for each observation.
4. generate the normal qq plot of the outlying scores.
5. determine a threshold to identify outlying observations that lie outside the distribution of the majority of observations.
6. declare each observation an outlier if its score is greater than the threshold.

we developed the r package outlierdc, which is designed to detect outliers in censored data within the r environment. the outlierdc package utilizes existing packages, including methods, formula, survival, and quantreg. the package methods is adopted to provide formal structures for object-oriented programming; formula is used to manipulate the design matrix of the model object; survival enables the handling of survival objects through the function surv(); and quantreg provides the standard censored quantile regressions. the function odc plays a pivotal role in outlier detection. its usage and input arguments are as follows:

odc <- function(formula, data, method = c("score", "boxplot", "residual"),
                rq.model = c("wang", "penghuang", "portnoy"),
                k_r = 1.5, k_b = 1.5, h = 0.05)

- formula [formula]: a model formula with a surv object on the left-hand side of the ~ operator and covariate terms on the right-hand side. the survival object, with its survival time and censoring status, is constructed by the function surv().
- data [data.frame]: a data frame with the variables used in the formula. it needs at least three variables, including the survival time, the censoring status, and covariates.
- method [character]: the outlier detection method to be used. the options "score", "boxplot", and "residual" implement the scoring, boxplot, and residual-based algorithms, respectively. the default is "score".
- rq.model [character]: the type of censored quantile regression to be used for fitting. the options "wang", "penghuang", and "portnoy" apply wang and wang's, peng and huang's, and portnoy's censored quantile regression approaches, respectively. the default is "wang".
- k_r [numeric]: a value controlling the tightness of the cut-off for the residual-based algorithm, with a default value of 1.5.
- k_b [numeric]: a value controlling the tightness of the cut-off for the boxplot algorithm, with a default value of 1.5.
- h [numeric]: the bandwidth for locally weighted censored quantile regression, with a default value of 0.05.

[table: output slots for the class]

in this section, we illustrate the use of outlierdc for the detection of outlying observations using real data from the us seer database system. the data for patients with extrahepatic cholangiocarcinoma can be obtained from the seer website (http://seer.cancer.gov). to load the package and the data set:

> library(outlierdc)
> data(ebd)
> dim(ebd)
[1] 402 6

the data consist of 402 observations with six variables.
to take a glance at the data , display the first six observations as follows : > head(ebd ) i d meta exam status time ratio 1787 55468952 0 12 1 26 0.0000000 1788 8883016 0 12 1 11 0.0000000 1789 10647194 0 12 0 134 0.0000000 1790 16033679 2 12 1 1 0.1666667 1791 19519884 0 12 0 111 0.0000000 1792 19574077 0 12 1 8 0.0000000 to illustrate the outlier detection algorithms , we utilized the number of metastatic lymph nodes ( called ) as a covariate .the response variable is the survival time in months ( ) , and its censoring status is denoted by , where 0 means censored .the outlier detection algorithm can be run by the function as follows : > fit < - odc(formula = surv(log(time ) , status ) meta , data = ebd ) this command with the essential arguments and runs the scoring outlier detection algorithm with wang and wang s censored quantile regression to create the object .the arguments method and can be omitted when the defaults are used . the argument for a threshold does not need to be specified in advance for the scoring algorithm .its full command is fit < - odc(formula = surv(log(time ) , status ) meta , data = ebd , + method = `` score '' , h = 0.05 ) that is , outlier detection is performed by the scoring algorithm ( ) based on the locally weighted censored quantile regression with the bandwidth ( ) for selecting outliers .the command provides a summary for the class . to use it ,type the object name on the command line : > fit outlier detection for censored data call : odc(formula = surv(log(time ) , status ) meta , data = ebd ) algorithm : scoring algorithm ( score ) model : locally weighted censored quantile regression ( wang ) value for cut - off k_s : # of outliers detected : 0 top 6 outlying scores : times delta ( intercept ) meta score outlier 346 4.48 0 1 9 4.59 327 2.71 1 1 13 4.54 326 2.08 1 1 14 2.52 296 4.86 1 1 4 2.35 354 3.09 1 1 10 2.11 233 5.29 0 1 1 1.95 the output via consists of two parts : basic model information and top outlying scores .the first part shows the overall information such as the formula used ( ) , the algorithm ( ) , the fitted quantile regression model ( ) , the threshold value to be applied ( ) , and the number of outliers detected ( ) . the command displays the model formula with input arguments and the used outlier detection algorithm .next , the top six scores are displayed with the original data in decreasing order .the number of outliers detected ( ) is zero because a threshold ( ) has not been provided thus far .the decision is postponed until the result is updated by the function .a threshold can be determined by visualizing the scores . 
to visualize the scores ,plot(fit ) the function draws a normal quantile - quantile ( qq ) plot of the outlying scores , as shown in figure [ fig : qqplot ] .the qq plot of outlying scores in figure [ fig : qqplot ] shows that the two points in the top right lie away from the line that passes through the first and third quartiles .a threshold is added by to this plot .thus , the result can be updated by >fit1 < - update(fit , k_s = 4 ) > plot(fit1 ) > fit1 outlier detection for censored data call : odc(formula = surv(log(time ) , status ) meta , data = ebd ) algorithm : scoring algorithm ( score ) model : locally weighted censored quantile regression ( wang ) value for cut - off k_s : 4 # of outliers detected : 2 top 6 outlying scores : times delta ( intercept ) meta score outlier 346 4.48 0 1 9 4.59 * 327 2.71 1 1 13 4.54 * 326 2.08 1 1 14 2.52 296 4.86 1 1 4 2.35 354 3.09 1 1 10 2.11 233 5.29 0 1 1 1.95 the two points with scores greater than the cut - off ( = 4 ) were the 346th and 327th observations , which are marked by an asterisk .the residual - based algorithm with a coefficient of 1.5 can be applied using with as follows : > fit2 < - odc(surv(log(time ) , status ) meta , data = ebd , method = `` residual '' , k_r = 1.5 ) > plot(fit2 ) > fit2 outlier detection for censored data call : odc(formula = surv(log(time ) , status ) meta , data = ebd , method = `` residual '' , k_r = 1.5 ) algorithm : residual - based algorithm ( residual ) model : locally weighted censored quantile regression ( wang ) value for cut - off k_r : 1.5 # of outliers detected : 9 outliers detected : times delta ( intercept ) meta residual sigma outlier 57 4.80 0 1 2 1.63 1.6 * 80 5.04 1 1 0 1.64 1.6 * 189 5.38 0 1 0 1.98 1.6 * 191 5.20 0 1 0 1.80 1.6 * 233 5.29 0 1 1 2.00 1.6 * 296 4.86 1 1 4 1.90 1.6 * 6 of all 9 outliers were displayed .nine observations by were selected as outliers , six of which are shown in the above output .all the outliers detected can be displayed by running .the boxplot algorithm with a coefficient of 1.5 can be applied using with , as follows : >fit3 < - odc(surv(log(time ) , status ) meta , data = ebd , method = `` boxplot '' , k_b = 1.5 ) > plot(fit3 ) > fit3 outlier detection for censored data call : odc(formula = surv(log(time ) , status ) meta , data = ebd , method = `` boxplot '' , k_b = 1.5 ) algorithm : boxplot algorithm ( boxplot ) model : locally weighted censored quantile regression ( wang ) value for cut - off k_b : 1.5 # of outliers detected : 1 outliers detected : times delta ( intercept ) meta ub outlier 346 4.48 0 1 9 4.32 * 1 of all 1 outliers were displayed .the boxplot algorithm with a coefficient of 1.5 detected only one outlying point .the 346th observation detected was also detected by both the scoring and residual - based algorithms .the boxplot algorithm with a coefficient of 1.0 yielded the same result as the scoring algorithm with a threshold of 4.0 ; that is , the 346th and 327th observations were detected .lastly , the function can be used to give the estimated 10th , 25th , 50th , 75th , and 90th quantile coefficients as follows : > coef(fit ) q10 q25 q50 q75 q90 ( intercept ) 1.609 2.549 3.3324.190 5.037 meta -0.039 -0.064 -0.091 -0.121 -0.138in this paper , we proposed three algorithms to detect outlying observations on the basis of censored quantile regression .the outlier detection algorithms were implemented for censored survival data : residual - based , boxplot , and scoring algorithms .the residual - based algorithm detects outlying observations 
using constant scale estimates , and therefore , it tends to select relatively many observations to achieve a high level of sensitivity in identifying outliers .thus , this algorithm is effective when high sensitivity is essential .the results of our simulation study imply that the boxplot and scoring algorithms with censored quantile regression are more effective than the residual - based algorithm when considering sensitivity and specificity together .the residual - based and boxplot algorithms require a pre - specified cut - off to determine whether observations are outliers .thus , these two algorithms are useful if a cut - off can be provided in advance .moreover , the boxplot algorithm can be applicable when a single covariate exists .the scoring algorithm is more practical in that it provides the outlying magnitude or deviation of each point from the distribution of observations and enables the determination of a threshold by visualizing the scores ; thus , this scoring algorithm is assigned as the default in our package .all the algorithms were implemented into our developed package outlierdc , which is freely available via comprehensive r archive network ( cran ) .the function yields the result of outlier detection by the residual - based , boxplot , or scoring algorithm .the resulting object can be used for generic functions such as , , , and .the help page for the function contains several examples for use in algorithms .these can be easily accessed by the command . in our package , there are several options that users need to choose . for convenience, the most effective and practical choice is assigned as the default for each option .thus , first - time users can run our package easily by following the illustration without a deep understanding of the presented algorithms .this research was supported by the basic science research program through the national research foundation of korea ( nrf ) funded by the ministry of education , science and technology ( 2010 - 0007936 ) .bguin c , hulliger b ( 2004 ) multivariate outlier detection in incomplete survey data : the epidemic algorithm and transformed rank correlations .journal of the royal statistical society : series a ( statistics in society ) 167(2):275294
outlying observations , which significantly deviate from other measurements , may distort the conclusions of data analysis . therefore , identifying outliers is one of the important problems that should be solved to obtain reliable results . while there are many statistical outlier detection algorithms and software programs for uncensored data , few are available for censored data . in this article , we propose three outlier detection algorithms based on censored quantile regression , two of which are modified versions of existing algorithms for uncensored or censored data , while the third is a newly developed algorithm to overcome the demerits of previous approaches . the performance of the three algorithms was investigated in simulation studies . in addition , real data from seer database , which contains a variety of data sets related to various cancers , is illustrated to show the usefulness of our methodology . the algorithms are implemented into an package outlierdc which can be conveniently employed in the environment and freely obtained from cran . keywords : outlier detection , quantile regression , censored data , survival data
plenty of tools in astrophysics are developed using system programming languages such as fortran , c or c++ .these languages are known to provide high performance and fast executions but they rely heavily on the developer for concurrency and memory control , which may lead to common errors as shown in fig.[fig1 ] : a ) access to invalid memory regions , b ) dangling pointers and attempts to free already freed memory , c ) memory leaks and , d ) race conditions .this can produce random behaviors and affect the scientific interpretation of the results .the recently created language rust prevents such problems and fields like bioinformatics have already started to take advantage of it .astroinformatics can benefit from it too .we first discuss the general principles behind this new language and what makes it attractive when compared to more traditional languages such as c or fortran .we then show that this language can reach the same performance as a fortran n - body simulator , mercury - t , designed for the study of the tidal evolution of multi - planet systems .mozilla research , motivated by the development of a new web browser engine , released in 2015 the first stable version of a new open source programming language named rust .it uses patterns coming from functional programming languages and it is designed not only for performance and concurrency , but also for safety .rust introduces concepts like ownership , borrowing and variable lifetime , which : 1 . facilitates the automatic control of the lifetime of objects during compilation time .there is no need for manually freeing resources or for an automated garbage collector like in java or go ; 2 . prevents the access to invalid memory regions ; 3 .enforces thread - safety ( race conditions can not occur ) .these zero - cost abstraction features make rust very attractive for the scientific community with high performance needs . in rust , variables are non - mutable by default ( unless the mutable keyword is used ) and they are bound to their content ( i.e , they own it or they have ownership of it ) .when you assign one variable to another ( case a in fig.[fig2 ] ) , you are not copying the content but transferring the ownership , so that the previous variable does not have any content ( like when we give a book to a friend , we stop having access to it ) .this transfer takes also place when we call functions ( case b in fig.[fig2 ] ) , and it is important to note that rust will free the bound resource when the variable binding goes out of scope ( at the end of the function call for case b ) .hence , we do not have to worry about freeing memory and the compiler will validate for us that we are not accessing a memory region that has already been freed ( errors are caught at compilation time , before execution time ) . additionally , apart from transferring ownership, we can borrow the content of a variable ( case c in fig.[fig2 ] ) . in this case ,two variables have the same content but none of them can be modified , thus protecting us from race conditions .alternatively , we can borrow in a more traditional way ( like when we borrow a book from a friend , he is expecting to get it back when we stop using it ) like in case d in fig.[fig2 ] , where the function borrows the content of a variable , operates with it ( in this case , it could modify its content ) and returns it to the original owner ( not destroying it as shown in case b ) . 
exceptionally , all these rules can be violated if we make use of unsafe blocks , which is strongly discouraged but necessary in certain situation ( e.g. , dealing with external libraries written in fortran , c or c++ ) .if present , unsafe blocks allow us to clearly identify parts of the code which should be carefully audited , keeping it isolated and not making the whole program unsafe by default like in fortran , c or c++ .we explored the advantages and drawbacks of rust for astrophysics by re - implementing the fundamental parts of mercury - t , a fortran code that simulates the dynamical and tidal evolution of multi - planet systems .we developed a simple n - body dynamical simulator ( without tidal effects ) based on a leapfrog integrator in rust , fortran , c and go ( which provide a garbage collector for memory management ) .the software design and implementation does not include any language - specific optimization that a developer with basic knowledge would not do .we compiled the four implementations with an optimization level 3 ( rustc / gfortran / gcc compiler ) and the standard compilation arguments in the case of go .we selected the best execution time out of five for an integration of 1 million years and the results are in table [ table ] .for this particular problem , rust is as efficient as fortran , and both surpass c and go implementations [ cols="^,^,^,^",options="header " , ] [ table ] time might be improved if language - specific optimizations were implemented .based on mercury - t , we implemented the additional acceleration produced by tidal forces between the star and its planets into our rust and fortran leapfrog integrators . to test the codes , we ran a simulation of 100 million years with the same initial conditions as the case 3 described in the mercury - t article , hence a single planet with a rotation period of 24 hours , orbiting a brown dwarf ( 0.08 ) at 0.018 au with an eccentricity of 0.1 .the results are shown in fig .[ fig3 ] , the rust and fortran code are practically identical and they reproduce a similar behavior to what is shown in the mercury - t article . nevertheless , leapfrog is a very simple integrator and not very accurate .this can be seen in the eccentricity evolution , which is slightly different from the mercury - t article and appears noisy .as an additional exercise , we implemented the whfast integrator in rust ( black line in fig .[ fig3 ] ) .this better integrator leads to a better agreement with mercury - t thus demonstrating that a high level of accuracy can also be achieved with rust .we have shown the reliability of rust as a programming language as opposed to fortran , c or even go .rust allows the user to avoid common mistakes such as the access to invalid memory regions and race conditions .we have also shown that it is a competitive language in terms of speed and accuracy .the main challenge we experienced was the initial learning curve , it was necessary to really understand and get used to the ownership and borrowing concepts .once the paradigm shift is done , the benefits are immediate .we therefore encourage the community to consider rust as a language that will help us produce good quality , memory safe , concurrent and high - performance scientific code .
the astrophysics community uses different tools for computational tasks such as complex systems simulations , radiative transfer calculations or big data . programming languages like fortran , c or c++ are commonly present in these tools and , generally , the language choice was made based on the need for performance . however , this comes at a cost : safety . for instance , a common source of error is the access to invalid memory regions , which produces random execution behaviors and affects the scientific interpretation of the results . in 2015 , mozilla research released the first stable version of a new programming language named rust . many features make this new language attractive for the scientific community , it is open source and it guarantees memory safety while offering zero - cost abstraction . we explore the advantages and drawbacks of rust for astrophysics by re - implementing the fundamental parts of mercury - t , a fortran code that simulates the dynamical and tidal evolution of multi - planet systems .
global and local, private and public institutions increasingly rely on numerical indexes for their decision-making processes. a single index may make the difference between approval and rejection. these indexes, and even the mathematical models behind them, are often made public to enhance transparency and accountability and to establish standards. the usefulness of an index is measured by the successes and failures of the decisions based on it. thus index creation, computation, selection, comparison, evaluation, and consolidation have become an industry of their own, and an integral part of institutionalized decision-making processes.

in this work we address the problem of finding suitable indexes for ranking countries according to their influence on the international trade market. in this case there is an obvious candidate: the influence of a country on the international trade market is proportional to the amount (counted in us dollars) of its international trade, i.e. the total amount of imports plus the total amount of exports. this primordial index is sound and should not be overlooked. nevertheless, we claim that it disregards some subtle but important issues.

suppose we have a couple of countries, a and b, both with high levels of international trade, so that both are highly ranked with the above index. suppose in addition that a and b trade essentially with each other, i.e. their trade with other countries is negligible in comparison with the trade between themselves. in this case our feeling is that a and b do not exert a strong influence on the international trade market: a disruption of a's economy will surely impact b's economy, but will have a negligible impact on global trade. countries a and b, although highly interdependent, are in fact quite isolated from the rest of the world, and should not be ranked as highly influential countries in the international trade market.

to address this sort of issue we take a network approach to our index creation problem. in section [stn], we regard the international trade market as a weighted network with nodes representing countries, edges representing trade between countries, and weights measuring the influence that a country exerts on another country through trade. indeed, in sections [dit] and [dio], we introduce a couple of different weights leading to a couple of rankings.

once we have established the network setting for approaching our index creation problem, we face the problem of ranking nodes in a network by their influence. this demands that we pay attention to the distinction between direct and indirect influences in a network, a distinction emphasized by godet and his collaborators, who stressed the power of studying indirect influences for uncovering hidden relations. direct influences in a network arise from directed edges. indirect influences in a network arise from chains of direct influences, i.e. from directed paths.

let us consider an extreme case that illustrates the importance of taking indirect influences into account. cuba and the united states (us) have a fairly weak amount of trade; indeed, since 1960 the us has placed a series of economic, financial, and commercial prohibitions on trade with cuba. thus a fairly low influence of the us on the cuban economy is to be expected. however, the us trades with cuba's main trading partners, and so we may expect that the us exerts a stronger indirect influence on the cuban economy than one might naively think.
simply put, a disruption of the us's economy will likely impact the spanish economy, and through spain it will also be felt by the cuban economy. the main challenge that we confront in this work is to give a quantitative account of the latter impact.

after building our mathematical models in sections [stn] through [hiitn], we proceed in sections [secactn] and [wtn2] to implement them using real-world data. out of the united nations member states we restrict our attention to 177 countries, namely those that have recent data available on the economic commission for latin america and the caribbean (eclac) web site. we collect the 2011 data of exports and imports between each pair of countries, as well as the gdp and the export and import totals for each country. using these data we build the "international trade network," where nodes are countries and a pair of nodes is connected by an edge if there were exports or imports between them in 2011. suppose there actually was trade between countries a and b; then we define a couple of weights on the edge from a to b as follows:

1. direct influences on trade. this weight computes the proportion of the international trade of b (exports + imports) that involves country a.

2. direct influences on offer. this weight computes the relative contribution of the trade between a and b to the offer of b (gdp + imports).

to take indirect influences into account we start from a trade network, with one of the weights just introduced, and apply one of the mathematical methods available for computing indirect influences in complex networks. with any of these methods one goes beyond computing direct trade between two countries, and also takes into account chains of trade in which one country trades with an intermediary that in turn trades with another.

in this work we weight chains of trade using the pwp method introduced by díaz. to place this method within a general context, and for the reader's convenience, we provide in section [mcii] a succinct description of four closely related methods for computing indirect influences, namely godet's micmac, google's pagerank, chung's heat kernel, and the pwp method. for more details on the similarities and differences among these methods the reader may consult the literature. our choice of the pwp method rests on the fact that with it any chain of direct influences, of any length, generates indirect influences, and reciprocally, all indirect influences are generated in this fashion.

thus we obtain several rankings among nations using the methodology outlined above: for each of the above weights we get the direct and indirect influence rankings. we compare these rankings among themselves and with the gdp ranking for the american continent trade network in section [secactn], and for the world trade network in section [wtn2]. we show that there are remarkable differences between the various rankings, and we analyse them in economic terms.

we remark once again that our economic data refer to the year 2011 and are measured in us dollars. when we compare different indexes, what is really at stake is to compare the rankings they induce.
in practice what we do is to compare the corresponding normalized indexes .numerical calculations in this work were made with scilab s module for computing indirect influences designed by catumba .for the economics terminology the reader may consult .in this section we introduce basic definitions concerning the construction of international trade networks and their adjacency matrices , called matrices of direct influences in this work . a trade network is a directed graph such that : * the set of vertices is a family of countries .* there is an edge in from vertex to vertex if and only if there is trade ( exports or imports ) between and . note that trade networks are actually symmetric graphsnevertheless , we regard them as directed graphs since we are soon going to introduced non - symmetric weights on them . for each edge , let be its source vertex , or equivalently , the country that exerts the influence , and let be its target vertex , or equivalently , the country that is influenced .a weight on a directed graph assigns weight to each edge of , i.e. a weight on is a map . in sections [ dit ] and [ dio ] we are going to introduce a couple of weights on trade networks .+ the bi - degree of a vertex in a weighted directed graph is such that is the sum of the weights of edges with target , and is the sum of the weights of edges with source , that is : we impose the alphabetic order on countries , so the set of countries is identified with the set , where is the number of countries in our trade network .the adjacency matrix or matrix of direct influences of a trade network is given for by : thus gives the direct dependence of on , or equivalently , the direct influence of on .the matrix of indirect influences is computed from the matrix of direct influences using one of the methods discussed in section [ mcii ] . in our applications , the matrix of indirect influences is computed applying the pwp method . [ d1 ]let be a trade graph , its matrix of direct influences , and its associated matrix of indirect influences .the indirect dependence , indirect influence , and indirect connectedness of vertex in are given , respectively , by : the ordered pair is the bi - degree of in the network of indirect influences .the indirect connectedness of vertices and is given by direct dependencies , influences , and connectedness are computed in a similar way using the matrix of direct influences instead of the matrix of indirect influences . following godet a -dimensional representation of vertices bi - degrees can be displayed trough the dependence - influence plane which comes naturally divided in four sectors , see figure [ p1 ] .the horizontal axis represents dependencies and the vertical axis represents influences .a country is represented in the dependence - influence plane by the ordered pair .the horizontal and vertical lines defining the four sectors of the plane are located at the mean dependence and at the mean influence , given respectively , by * * sector 1 : * influential independent countries . * * sector 2 : * influential dependent countries . * * sector 3 : * low influence independent countries . * * sector 4 : * low influence dependent countries .although our focus is on the pwp method , for the reader convenience we introduce four methods for computing indirect influences , and describe how each method computes the matrix of indirect influences . for more on the similarities and differences among these methodsthe reader may consult . 
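before turning to the individual methods, the direct-influence bookkeeping defined above can be sketched in a few lines of python. the convention used here (row i holds the influence of country i on every other country, so that influences are row sums and dependencies are column sums) and all names are ours; if the opposite convention is preferred, the matrix only needs to be transposed. the matrix of indirect influences produced by any of the methods described next can be fed to the same bookkeeping.

import numpy as np

def trade_weights(flow):
    # flow[i, j] = exports from country i to country j, in a common currency.
    # w[i, j] = share of j's total trade (exports + imports) that involves i,
    # following the "direct influences on trade" weight defined earlier.
    trade_ij = flow + flow.T
    np.fill_diagonal(trade_ij, 0.0)
    total = flow.sum(axis=1) + flow.sum(axis=0)
    return trade_ij / total[np.newaxis, :]

def bidegree(M):
    influence = M.sum(axis=1)      # sum over edges with the country as source
    dependence = M.sum(axis=0)     # sum over edges with the country as target
    return dependence, influence

flow = np.array([[0.0, 10.0, 1.0],
                 [8.0, 0.0, 2.0],
                 [1.0, 3.0, 0.0]])
D = trade_weights(flow)
dep, infl = bidegree(D)
# the four sectors of the dependence-influence plane are delimited by the mean values
print(dep, infl, dep.mean(), infl.mean())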
with the micmac method, introduced in 1992 by godet, the matrix of indirect influences is a power of the matrix of direct influences, the exponent being a parameter fixed in advance; the relevant paths in the network, with this method, are those whose length equals that exponent. pagerank, registered in 1999 by google, is quite well known thanks to its application to web searching. with pagerank, influences are normalized, and dependencies measure the relevance or relative importance of web pages. pagerank takes infinite powers of matrices, thus giving greater importance to infinitely long paths. the matrix of direct influences is required to have non-negative real entries, with the sum of each column equal to either zero or one, and the matrix of indirect influences is then obtained from the powers of this matrix.

the comparison between two rankings involves the ranking positions of each country according to the two indexes being compared, together with the total number of countries. furthermore, we also have the problem of consolidating the various indexes into just one index.

we applied our methods for computing indirect influences to trade networks at the level of countries. it should, however, be clear that these methods may also be applied at the business level, and even at the individual level. in the latter cases, finding reliable data at a global scale and managing such huge data are daunting problems. nevertheless, our methods can be readily applied if one focusses on specific business sectors, just as we made a focused study of indirect influences on trade and offer in the american continent.

the models presented in this work were static, in the sense that our data referred to just one year, namely 2011. nevertheless, our techniques may be extended to a multi-year dynamical model in which influences are time-dependent functions.
we address the problem of gauging the influence exerted by a given country on the global trade market from the viewpoint of complex networks. in particular, we apply the pwp method for computing indirect influences on the world trade network.
experiments searching for neutrinoless double beta ( ) decay require an extremely low background level in the region of interest around a few mev .compton scattered -particles , originating from radioactive decays in the proximity of the detectors , are an important background contribution at such energies . in high - purity germanium ( hpge ) experiments ,these interactions are often identified and removed from the signal data set through pulse - shape analysis ( psa ) . in order to extract a half - life limit ,the signal recognition efficiency has to be known . usually , experimentally obtained pulse - shape libraries with signal - like events are used to obtain the signal recognition efficiency . however , these evaluation libraries can have energy - deposition topologies and event - location distributions different to those of the signal searched for .efficiencies obtained like this can be systematically different from the recognition efficiency for the real signal . furthermore , the evaluation libraries used to derive the recognition efficiencies often contain events of the wrong type , making a direct determination of the efficiencies impossible .this paper presents investigations of the reproducibility and systematic uncertainties of the efficiencies of pulse - shape discrimination ( psd ) using artificial neural networks ( anns ) with libraries of simulated pulses .the general idea of psd using anns is introduced and the sources of possible systematic effects are discussed . the simulations and the libraries used for the analysis are described as well as the anns and the procedures used to train them .the stability of the method against initial conditions and ann topologies is investigated .the focus is on the differences obtained in recognition efficiencies using different evaluation libraries and the associated systematic uncertainties .the detection principle of semiconductor detectors is based on the creation and detection of electron hole pairs , i.e. charge carriers , when radiation interacts with the detector material .charge - sensitive preamplifiers are commonly used to detect the drifting charge carriers in large volume hpge detectors .the time structure of an event , the pulse - shape , is defined by the mirror charge signal induced on the electrodes as a function of time .the pulse length is given by the time needed to fully collect the charges on the electrodes .see e.g. for a detailed description of the pulse creation process . for photons in the mev range ,the dominant interaction process is compton scattering .a photon with an energy of one mev has a mean free path of cm in germanium .thus , photon - induced events with energies of about 2mev are mostly composed of several energy deposits within a hpge detector , separated by a few centimeters .these background - like events are referred to as multi - site events ( mse ) .in contrast , electrons with the same energy have a range of the order of millimeters and deposit their kinetic energy `` locally '' .signal - like events of this kind are referred to as single - site events ( sse ) .note that in reality there also exists `` signal - like '' background , i.e. background events that have an indistinguishable event topology , such as the irreducible background from decay .the two electrons emitted in 0 decay result predominantly in sses . 
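the induced-charge picture described above is often summarized by the shockley-ramo theorem: the charge induced on a read-out electrode equals the carrier charge times the change of that electrode's weighting potential along the drift path. the following one-dimensional python toy (our own simplification, not the field calculation used later in this work) illustrates why a fully collected electron-hole pair contributes exactly one unit of charge.

# schematic shockley-ramo bookkeeping in one dimension
def induced_charge(q, w_start, w_end):
    # charge induced on the electrode whose weighting potential is w
    return q * (w_end - w_start)

d = 1.0                      # toy planar gap
w = lambda x: x / d          # weighting potential of the collecting electrode
x0 = 0.3                     # creation point of an electron-hole pair
electron = induced_charge(-1.0, w(x0), w(0.0))   # electron drifts to x = 0
hole = induced_charge(+1.0, w(x0), w(d))         # hole drifts to x = d
print(electron, hole, electron + hole)           # 0.3 0.7 1.0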
due to bremsstrahlung ,a fraction of a few % of the 0-decay events become mses .events identified as mse in the energy region of interest are rejected as background .methods to distinguish between sses and mses in hpge detectors using anns were developed previously . in most previous works , events from double escape peaks ( dep ) and full energy peaks ( fep ) were used to create training libraries of signal - like and background - like events , respectively .these were obtained from calibration data for which sources such as or were used . the ann efficiencies to correctly identify eventsare also typically extracted using evaluation libraries from calibration measurements .the efficiencies of psd methods are not necessarily homogeneous throughout the detector volume . for a realistic evaluation, the spatial distribution of the events in a given test library has to be taken into account .especially , dep events will exhibit a non - uniformity in event location distribution due to the topology of the events .if pair production occurs in a coaxial hpge at high radii , , and height , , i.e. close to the extreme boundaries , the probability for the two 511kev -particles to escape is the highest . hence , libraries of dep events have a higher event location density in these parts of the detector ( see section [ sec : libraries ] and fig .[ fig : psa : event_distr ] ) . on the other hand ,signal events due to decay ( but also `` signal - like '' background events due to decay ) are expected to be homogeneously distributed . using a library with an event location distribution different from the one expected forthe signal can lead to systematic biases .the main scope of this work is to address this issue and estimate the uncertainties on the sse recognition efficiency evaluation arising from the use of different training and evaluation sets .in order to quantify the uncertainties on the ann event topology recognition efficiencies , simulations are used .the signal ( background ) recognition efficiency ( ) of any psd method is defined as the probability that the method correctly identifies an sse ( mse ) from an event - library containing only sses ( mses ) .realistic sse and mse pulse - shape libraries always contain events of both classes .hence , the ann method applied to a library of predominantly sse or mse pulses will result in a survival probability or rejection probability , defined as the fraction of pulses in the library that are classified as sse or mse , respectively : where and are the fraction of sses and mses in the sse - library , respectively , and and are the fraction of mses and sses in the mse - library , respectively . using simulated pulses idealized libraries with and can be created .these libraries can be used to determine and directly , as for this case and ( see equ .[ equ:1 ] ) . 
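equation (1) itself is not reproduced in this excerpt; a form consistent with the description above (our assumption) writes the measured survival and rejection probabilities as purity-weighted combinations of the true recognition efficiencies, which can then be inverted when the library compositions are known from simulation. the sketch below uses our own names and purely illustrative numbers.

import numpy as np

def survival_probability(eff_sse, eff_mse, f_sse, f_mse):
    # assumed form: sse-library events are kept either as correctly identified sses
    # or as mse contamination that escapes rejection
    return eff_sse * f_sse + (1.0 - eff_mse) * f_mse

def rejection_probability(eff_sse, eff_mse, f_sse, f_mse):
    return eff_mse * f_mse + (1.0 - eff_sse) * f_sse

def efficiencies_from(p, r, sse_lib, mse_lib):
    # invert the 2x2 linear system for (eff_sse, eff_mse)
    fs1, fm1 = sse_lib
    fs2, fm2 = mse_lib
    A = np.array([[fs1, -fm1], [-fs2, fm2]])
    b = np.array([p - fm1, r - fs2])
    return np.linalg.solve(A, b)

p = survival_probability(0.90, 0.85, 0.95, 0.05)
r = rejection_probability(0.90, 0.85, 0.02, 0.98)
print(efficiencies_from(p, r, (0.95, 0.05), (0.02, 0.98)))   # recovers [0.90, 0.85]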
in order to quantify the effect of non - homogeneous event - location distributions , for anns obtained with training libraries with inhomogeneous event - location distributionsare compared to those obtained from libraries with a homogeneous event - location distribution .the stability of the method is verified by training and evaluating a set of anns with different initial weights of the ann synapses and with training libraries of different sizes .finally , the influence of the number of hidden layers and the number of neurons in the ann on , is investigated .signal recognition efficiencies obtained using evaluation libraries with event location distributions as expected and different from the signal are then compared .true - coaxial hpge detectors are considered in this paper .they have a simple radial electric field and , thus , have relatively simple pulse shapes .consequently , pulse - shapes of this type of detectors have lower systematic uncertainty due to smaller uncertainties in the field calculations compared to detectors with more complex geometries .this makes this type of detectors interesting for this analysis .hpge detectors for low background experiments typically have a radius , , and a height of a few cm axis pointing upwards . in cartesiancoordinates , the - and - axes coincide with the crystallographic axes , while the axis coincides with the crystallographic axis . ] .the simulated n - type true - coaxial germanium detector has a height of 70 mm and =37.5 mm with the diameter of the borehole being 10 mm .the dead layer due to the n+ contact ( outer surface ) is less than 1 m , while the dead layer due to the p+ contact is 0.5 mm .the simulated geometry describes an existing true - coaxial 18-fold segmented n - type hpge developed as a prototype detector for the gerda experiment .photon and electron interactions for different libraries were simulated within the mage framework , based on geant4 .pulse shapes were simulated for the core electrode . whenever individual energy deposits within one event were separated by less than 0.1 mm they were combined .pulse shapes for the combined energy deposits were simulated using pre - calculated electric and weighting fields using the pulse - shape simulation package described in .the charge collection efficiency is either zero or one .charge cloud diffusion and self repulsion effects are not taken into account in the simulation .the drift path anisotropy originating from the axis effect , i.e. the dependence of mobilities on the axis orientation , is accounted for in all simulations .the number of grid points for the electric- and weighting - field calculations was .the electrically active impurities were assumed to be homogeneous within the detector , with a density of .the length of the simulated pulses is 1 .the step frequency of the simulation is 1125mhz , a multiple of 75mhz to which the pulses were resampled to take the effects of a typical daq into account . above 1ghz ,the step frequency is sufficient to correctly describe trajectories .the amplifier rc - integration constant was set to 20ns , corresponding to a bandwidth of about 10mhz , while the amplifier decay time was set to 50 .each individual pulse shape was convoluted with gaussian noise , .the results presented in this work do not change when simulated pulses with no noise are used , i.e. 
the efficiencies obtained are within the uncertainties quoted in the following .sse and mse pulses take on a wide variety of shapes as shown in fig .[ fig : sample_pulse ] .it is not trivial to interpret the pulse shapes and distinguish between sses and mses without an involved quantitative analysis .the pulse length , , is between 160 and 500ns , where is defined as the time in which the pulse increases from 10% to 90% of its amplitude .this part of the pulse contains the relevant information regarding the event topology .training and evaluation libraries with independent pulses were created .the simulated libraries are listed in table [ tab : libraries ] .the dep , 2 and 0 event - libraries were created with and without a realistic admixture of mses due to bremsstrahlung and compton - scattered -particles .all mse libraries were simulated for the 1620kev fep , corresponding to a source , typically used for calibration .the notation of _ no comp _ & _ brems _ is used for sse libraries in which all events with compton scattering or hard bremsstrahlung were removed .mse libraries that contain only events which have at least one energy deposition due to compton scattering or hard bremsstrahlung in the detector and thus have at least two distinct energy deposits are marked as _ comp _ & _ brems only_. in order to obtain a clean mse library it was required that r , the radius within which 90% of the deposited energy was contained , is larger than 2 mm .this ensures that all events have at least two energy deposits that are at least 2 mm apart . to indicate the origin of incoming photons , the last column in table [ tab : libraries ] lists either `` top '' , `` side '' or `` homog '' .this means that the photons were simulated to come from either the - or -plane for `` top '' and `` side '' , respectively .their origins are homogeneously distributed on these planes with their momentum perpendicular to the plane of origin .the planes are located 17.5 cm from the center of the detector and their area is sufficiently large to cover the detector .sse libraries with homogeneous event location distributions within the detector volume are listed as `` homog '' .for _ dep clean _ , 2.6mev photons were forced to make pair creation with the event vertices homogeneously distributed within the detector .each training and evaluation library contains between 7.000 and 20.000 simulated pulses .lcccc library & energy & processes & source location + + _ dep top _ & ( 1593)kev & _ no comp _ & _ brems _ & top + _ dep side _ & ( 1593)kev & _ no comp _ & _ brems _ & side + _ dep real _ & ( 1593)kev & all processes & side + _ dep clean _ & ( 1593)kev & _ no comp _ & _ brems _ & homog . + _ 0 _ & ( 2039)kev & all processes & homog .+ _ 0 clean _ & ( 2039)kev & _ no comp _ & _ brems _ & homog . + _2 real _ & 450kev540kev & all processes & homog . + _2 clean _ & 1000kev1450kev & _ no comp _ & _ brems _ & homog . 
+ + _ fep top _ &( 1620)kev & _ comp _ & _ brems only _ & top + _fep all _ & ( 1620)kev & all processes & top + _ fep side _ & ( 1620)kev & _ comp _ & _ brems only _ & side + _ fep clean _ & ( 1620)kev & r mm , _ comp _ & _ brems only _ & top + the radial distributions of the energy barycenters , defined as the energy - weighted mean radial position of the energy deposit , of individual events for sse libraries containing no mses are shown in fig .[ fig : psa : event_distr ] .top , middle and bottom refer to events contained in the upper , middle and lower third of the detector , respectively .these three volumes are equal .the barycenter of an individual event corresponds approximately to the position of the interaction / decay .for the _ dep clean _library , where _ clean _ is used here and below to identify libraries with no compton or bremsstrahlung interactions , it is flat as a function of and equivalent to the distribution of the _ 2 real _ library . _ real _ is used to indicate libraries which contain all processes , i.e. including compton scattering and bremsstrahlung .dep side _ and _ dep top _ libraries have inhomogeneous event - location distributions , events being located with a higher probability at high close to the bottom and top of the detector since for these parts of the detector it is more likely for the two back to back 511kev photons to escape the detector ._ side _ and _ top _ indicate the location of the source with respect to the detector .the libraries listed in table [ tab : libraries ] were used to create five different ann training and evaluation sets each . they are listed in table [ tab : training_sets ] , showing the combinations of sse and mse libraries .lcc & sse library & mse library + & _ dep side _ & _ fep side _ + _ set ii - real 2 _ & _ 2 real _ & _ fep top _ + _ set iii - hom dep _ & _ dep clean _ & _ fep top _ + _ set iv - top dep _& _ dep top _ & _ fep top _ + _ set v - clean 0 _ & _ 0 clean _ & _ fep top _+ the anns used in this analysis were built using the tmultilayerperceptron ( tmlp ) within the root framework . only the part of the pulse containing the relevant information on the event topology is used by the anns .the pulses in the considered detector are maximally around 500ns long . in total , 40 time steps , corresponding to 530ns , were used .the center of the resulting trace was chosen to be the point where the pulse reaches 50% of its amplitude .the amplitude of each pulse was normalized to unity .the anns are composed of 40 input neurons , one hidden layer with the same number of neurons and an output layer with only one neuron .the anns were trained using the broyden - fletcher - goldfarb - shanno learning method .background - like mses were assigned an ann output , , of 0 and signal - like sses were assigned an of 1 .libraries of the same size for mses and sses were used . for a trained network , should be close to 1 for sses and close to 0 for mses . for each individual ann eventsare classified as sse if , where is a parameter that has to be optimized . the rejection probability represents the fraction of events from an mse dominated library , fep in this case , rejected by the cut .the survival probability represents the fraction of events from a sse dominated library ( dep , or ) kept with .the cut value is chosen for each individual ann to maximize the quantity of the corresponding evaluation set used .this ensures that the highest e and r are obtained at the same time . 
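As an illustration of the input preparation and network topology described above, the following sketch builds the 40-sample input trace (centred on the 50%-amplitude crossing, normalized to unity), computes the 10%-90% pulse length used for characterization, and sets up a stand-in for the ANN. The actual analysis uses TMultiLayerPerceptron in ROOT trained with BFGS; here scikit-learn's MLPClassifier with the related quasi-Newton `lbfgs` solver is used purely as an illustrative substitute, and `X_train`/`y_train`/`X_eval` are placeholder names for the stacked SSE/MSE traces and their 1/0 labels.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def rise_time_10_90(pulse, dt):
    """10%-90% pulse length: time between the first samples above 10% and 90% of the amplitude."""
    amp = pulse.max()
    return (np.argmax(pulse >= 0.9 * amp) - np.argmax(pulse >= 0.1 * amp)) * dt

def extract_trace(pulse, n_samples=40):
    """Cut a 40-sample window centred on the 50%-amplitude crossing and normalize the amplitude."""
    pulse = pulse / pulse.max()
    centre = int(np.argmax(pulse >= 0.5))
    start = max(centre - n_samples // 2, 0)
    trace = pulse[start:start + n_samples]
    return np.pad(trace, (0, n_samples - trace.size))   # pad traces cut at the array edge

# 40 input neurons, one hidden layer with 40 neurons, one output neuron (SSE -> 1, MSE -> 0)
ann = MLPClassifier(hidden_layer_sizes=(40,), solver="lbfgs", max_iter=2000)
# ann.fit(X_train, y_train)                    # equally sized SSE and MSE training libraries
# nn_output = ann.predict_proba(X_eval)[:, 1]  # classify as SSE if nn_output > cut
```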
the solid histogram in fig .[ fig : sample_spectrum ] shows the simulated energy spectrum for events contained in the _ dep real _ and _ fep all _ libraries ( table [ tab : libraries ] ) . the fep is significantly reduced while the dep remains almost untouched .the survival probability for 0 and dep events is given by the ratio of the peak areas after and before the ann rejection .the areas are determined by fitting a gaussian plus constant background to the spectra . in figures [ fig : psa: training_a ] and [ fig : psa : training_b ] , the distribution for an ann trained with mses and sses from training _ sets i _ and _ ii _ are shown .a clear separation between the distributions of the mse and sse libraries is visible .figures [ fig : psa : training_c ] and [ fig : psa : training_d ] show , and ) for training _ sets i _ and _ ii _ , respectively .the vertical line represents .the , , and values are called , , and , respectively , in the following .these variables are summarized in table [ tab : variables ] . for libraries with purely sse or mse events , for which and , and coincide with and , respectively ( see eq .[ equ:1 ] ) .note that the anns with the optimized as described here are later used for efficiency and uncertainty evaluations .clcc survival probability & & + rejection probability & & + signal recognition efficiency & & + background recognition efficiency & & + background reduction power & & + statistical uncertainties quoted in the following are derived from the statistical fluctuations expected due to the limited number of simulated events and events surviving the selection .the reproducibility of was investigated by training five anns with the same ann topology .the same training samples were used but the initial weights of the individual synapses of the untrained ann were different in each case . also the order in which individual pulses from the training sets were chosen for the iterative training was different for each ann .the fluctuations of between the different anns are of the order of 1% of the value of , the rms of the distribution of is taken as its systematic uncertainty .this systematic uncertainty only describes within which precision efficiencies are reproducible .they are not to be confused with systematic uncertainties related to pulse shape simulation .two groups of five anns , each with a different number of neurons in the hidden layer , were trained using the same training set ( _ set ii _ ) .the value of for the default ann with 40 hidden neurons was ( stat . ) ( syst . ) , while an ann with 40 input neurons and one hidden layer with 10 neurons had a recognition efficiency ( stat . ) ( syst . ) .this is not significantly worse .five anns with three hidden layers with 40 neurons each were trained and have ( stat . ) ( syst . ) .this is not a significant improvement with respect to the default ann .the corresponding values are ( ) , ( ) and ( ) for the default network , the network with one hidden layer of 10 neurons and the network with three hidden layers , respectively . in summary , the variation of due to the choice of the topology of the network is . in the following ,the default network with one hidden layer with 40 neurons is used and the variation due to the topology is not considered in the following uncertainties .anns were trained with the training sets listed in table [ tab : training_sets ] .the trained anns were applied to the libraries and _ fep clean _ , containing purely sses and mses . 
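The peak areas entering the survival probabilities are obtained from a Gaussian-plus-constant fit, as described above. The sketch below shows one way such a fit could be carried out; the fit window around the 1593 keV DEP, the binning and the starting values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_plus_const(e, area, mu, sigma, bkg):
    return area / (np.sqrt(2 * np.pi) * sigma) * np.exp(-0.5 * ((e - mu) / sigma) ** 2) + bkg

def peak_area(energies, window=(1588.0, 1598.0), bin_width=0.5):
    """Fit a Gaussian plus a constant background in `window` and return the fitted peak area."""
    bins = np.arange(window[0], window[1] + bin_width, bin_width)
    counts, edges = np.histogram(energies, bins=bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    p0 = [counts.sum() * bin_width, np.mean(window), 1.5, max(counts.min(), 1)]
    popt, _ = curve_fit(gauss_plus_const, centres, counts / bin_width, p0=p0)
    return popt[0]

# survival probability of DEP events: ratio of the fitted peak areas after and before the ANN cut
# eps_dep = peak_area(energies_after_cut) / peak_area(energies_before_cut)
```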
in this case , and , respectively .hence , = and = for a clean library ( see eq . [ equ:1 ] ) .the resulting , and values are given in table [ tab : efficiencies ] .lccc training set & ( _ clean _ ) & ( _ fep clean _ ) & + _ set i inhom dep _ & 0.915.017 & 0.893.014 & 0.904 + _ set ii real 2 _ & 0.976.005 & 0.862.008 & 0.917 + _ set iii hom dep _ & 0.964.009 & 0.887.006 & 0.924 + _ set iv top dep _ & 0.921.012 & 0.888.006 & 0.904 + _ set v clean 0&0.958.008 & 0.888.008 & 0.922 + the highest values for were obtained with anns trained with sse samples with homogeneous event - location distributions .the values for anns trained with inhomogeneous samples are by approximately 0.02 lower .the variation on is up to 0.06 and hence more pronounced than on ( .03 ) .the variations due to sse libraries with different event - location distributions used for the ann training are significantly bigger than the fluctuations due to changes of the ann initial conditions .the survival probabilities obtained with the trained and optimized anns were evaluated on the sse libraries _ real _ , _ real _ and _ dep real _ according to the method explained in sec . [ sec : training ] .the results are listed in table [ tab : survival_prob ] for training , and .cccc ( -4,1)clean set bla bla & & & + _ 0 real _ & 0.867.018 & 0.937.005 & 0.916.009 + & 0.885.017 & 0.944.005 & 0.915.008 + _ dep real _ & 0.898.012 & 0.936.003 & 0.914.007 + & ( 2.1.6)% & ( 0.8.1)% & ( 0.3.4)% + & ( 3.5.1)% & ( -0.1.7)% & ( -0.2.6)% + for 0 events in the energy interval kev , values of ( 0.937.006 ) and ( 0.867.018 ) were obtained with the ann trained with _ set ii _ and _ set i _ , respectively , where the statistical and systematic uncertainties were added in quadrature . for the ann trained on _ set i _ , is lower than for _ sets ii _ and _ v _ , as expected from the lower for this training set ( see table [ tab : efficiencies ] ) .the realistic signal - like libraries also contain a significant amount of mses .this explains why the obtained values for 0 are significantly different from the listed in table [ tab : efficiencies ] .as the amount of wrong type of events in event libraries depends on geometry and energy this also implies that by itself is not a precise quantity to compare psd methods even if is also considered . the position distribution of the rejected events inside the detector , i.e. the position dependence of the signal recognition efficiency was studied .in fig . [fig : psa : xyaver ] , the location dependence of the mean value of the output inside the detector is depicted for the sses from the _0 clean _ library .regions where the average output is lower than are seen as blue areas . in these regions ,sses are systematically rejected .the fraction of the volume where the sses of the _0 clean _ library are more likely to be rejected than to be accepted as sse is ( 8.0.7)% for an ann trained with the sse sample with inhomogeneous event - location distribution _ set i_. for the anns trained with _ sets ii _ and _ v _ , the affected volume is reduced to ( 2.2.5)% and ( 3.7.8)% , respectively .using an ann training set with similar event - location distribution as for the evaluation set decreases the effect of the systematic volume cut , however , it does not completely remove it . 
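The quoted affected-volume fractions can be estimated from a map of the mean ANN output for SSEs as a function of position. A minimal sketch is given below; it assumes an azimuthally averaged (r, z) map with r-weighting of the cylindrical volume elements, whereas the patterns discussed next actually retain an azimuthal dependence tied to the crystal axes.

```python
import numpy as np

def rejected_volume_fraction(r_centres, z_centres, mean_output, cut):
    """Fraction of the active volume in which the mean ANN output for SSEs falls below the cut.

    mean_output : 2-D array of shape (len(r_centres), len(z_centres)), mean output per (r, z) cell.
    """
    r, _ = np.meshgrid(r_centres, z_centres, indexing="ij")
    weights = r                                   # dV proportional to r dr dz per azimuthal slice
    rejected = mean_output < cut
    return float((weights * rejected).sum() / weights.sum())
```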
the symmetry in the patterns observed in fig .[ fig : psa : xyaver ] seems to be connected to the crystallographic symmetry of the detector .the axis dependence of the effect might be due to the dependence of the electron to hole mobility ratio on the position of the charge carriers with respect to the crystal axes ( see fig . 2 in ) .affected zones appear close to the inner detector surface and in the middle of the bulk around mm .the mechanism of pattern formation is , however , not understood .the different event - location distributions for dep samples from calibration and 0 signal events ( see fig .[ fig : psa : event_distr ] ) was identified as the major source of systematic uncertainty for the approach of anns trained with dep sets . for 2 training samples, the different energy distribution leads to a different signal - to - noise ratio .the values obtained for different sse evaluation libraries with anns trained with different training sets are listed in table [ tab : efficiency_e ] .the signal recognition efficiencies of the different anns are within uncertainties the same for the different evaluation libraries with homogenous event - location distribution .this demonstrates that the normalization of the input to the ann makes the influence of the lower energy of events , down to 1mev , insignificant .however , when is derived using the _ dep side _ set with realistic event - location distribution it is systematically overestimated .there is a (dep\ side - 0\nu\beta\beta)=(4.2^{+0.6}_{-0.9}\pm0.8)\% ] and (dep\ side - 0\nu\beta\beta)=(1.4^{+0.4}_{-0.7}\pm0.9)\%$ ] .note that the efficiencies obtained when training the network with the set are higher than the ones obtained using the set , the reason for which is unclear .comparing the resulting with quoted in table [ tab : survival_prob ] shows that the additional admixture of mses to the evaluation libraries slightly reduces with respect to .cccc ( -4,1)clean set bla bla & & & + _ clean _ & 0.915.017 & 0.976.005 & 0.958.008 + _ clean _ &0.911.018 & 0.970.007 & 0.956.008 + _ dep clean _ & 0.917.018 & 0.976.005 & 0.960.009 + _ dep side _ & 0.954.011 & 0.987.004 & 0.971.006 + ) & ( -0.4.1)% & ( -0.6.3)% & ( -0.2.2)% + ) & ( 0.1.2)% & ( 0.0.2.03)% & ( 0.2.1)% + ) & ( 4.2.8)% & ( 1.1.7)% & ( 1.4.4)% +systematic effects on the determination of the signal recognition efficiency of pulse - shape - analysis using anns were investigated using pulse shape simulation .the most important effect was found to be due to the event - location distribution of the evaluation libraries .in contrast , the energy distribution of events in the training library was found to be irrelevant within reasonable limits .the use of evaluation libraries with homogeneous event location distribution lead to reduced systematic uncertainties on the signal recognition efficiencies of the order of 1% . 
on the contrary signal recognition efficiencies of anns determined from dep libraries with inhomogeneous event location distributionswere found to be up to 5% too high , consistent with the systematic uncertainties derived in .differences in the energy distribution of the events of the evaluation samples do not have a significant effect .the different event - location distributions resulting from different positions of the calibration sources may result in variations of the ann signal recognition efficiency by up to 6% and the background discrimination power by 2% .the signal detection efficiency of an ann depends on the location of the events inside a true - coaxial detector .the efficiency is above 90% in most parts of the detector .however , sses in the inner regions and in the center of the bulk are systematically misidentified . about 2% to 8% of the volumeis affected , depending on the homogeneity of the event - location distribution of the training set used .using training sets with homogeneous sse location distribution reduces the affected regions but does not eliminate them completely .the true - coaxial detectors assumed for these studies have particularly simple field configurations .the effects on detectors with more complex field configurations will have to be studied very carefully .pulse - shape discrimination with artificial neural networks is a useful tool to identify multi - site events .it potentially increases the sensitivity of 0 experiments like gerda . the usage of events for training and efficiency evaluation of the artificial neural networks is recommended .
a pulse-shape discrimination method based on artificial neural networks was applied to pulses simulated for different background, signal and signal-like interactions inside a germanium detector. the simulated pulses were used to investigate how the signal recognition efficiencies vary with the training set used. it is verified that neural networks are well suited to identify background pulses in true-coaxial high-purity germanium detectors. the systematic uncertainty on the signal recognition efficiency derived using signal-like evaluation samples from calibration measurements is estimated to be 5%. this uncertainty is due to differences between the signal and calibration samples.
wireless networks , the use of relay is attracting increasing attention [ 1 , 2 ] because of its many advantages . among the relay channels , two - way relay channel ( twrc , as shown in fig .1 ) is a especially interested due to the almost double spectral efficiency with the physical layer network coding ( pnc ) [ 3 ] transmission scheme .it was further proved in [ 4 , 5 ] that pnc can approach the capacity of twrc in high snr region . another spectral efficiency boosting technique is multiple input and multiple output ( mimo ) , which has been widely used in wireless systems .therefore , it is of great interest to combine pnc and mimo to further improve the wireless spectral efficiency .a straightforward way is to divide the mimo transmission into parallel siso streams by precoding , so that pnc can be implemented on each steam [ 6 ] .however , the precoding requires not only transmitter side channel state information ( csit ) but also strict time and carrier phase synchronization between the two end nodes .mimo nc scheme [ 7 ] is more practical , where only receiver side csi ( csir ) is needed . in mimo nc , the relay node detects each end node s packet with traditional mimo detection and then combines them with network coding .these schemes failed to exploit the fact that the relay does not need each end node s individual information .hence , the performance is limited by over - detection . in [ 8 ] ,we have proposed a novel mimo pnc scheme based on linear detection , which will be referred as linear mimo pnc in this paper .linear mimo pnc tries to detect the summation and the difference of the two end node s packets before transforming them to the network coding form .with similar complexity and csir requirement , it significantly outperforms mimo nc .however , due to the performance limit of linear detection , linear mimo pnc s performance is poor under bad channel conditions .besides the linear detection , another popular mimo detection method is vblast ( vertical bell laboratories layered space - time ) which can achieve much better performance with an acceptable increase in complexity .the vblast architecture was first proposed in [ 9 ] where a code block is de - multiplexed into different layers and each is transmitted through a particular antenna . at the receiver , these layers are successively detected , where the detected interference are canceled and the unknown interferences are nulled by linearly weighting the residual signal vector with a zf ( zero - forcing ) null vector ( zf vblast ) .a low complexity zf vblast scheme is proposed in [ 10 ] , where the channel matrix is rewritten in terms of the qr decomposition as .the inverse of unitary matrix was then multiplied to the received signal before estimating the transmit information . in order to find the optimal detection order , [ 10 ] further proposed the sorted qr decomposition algorithm , zf - sqrd . in this paper, we combine the basic idea of pnc and the qr vblast mimo detection scheme and propose vblast pnc .our scheme only requires receiver side channel state information and symbol level synchronization between the end nodes , as in general virtual mimo system .the basic idea of vblast pnc in a 2-by-2 mimo system is as follows .with qr decomposition , the relay first detects the second layer ( one end node ) signal . 
rather than canceling all the component of the second layer signal from the first layer as in traditional vblast detection, we only subtract a part of the second layer information and directly map the residual signal ( including both second and first layer information ) to the network coding form . with such partial interference cancellation , the error propagated from the incorrectly detected second layer is significantly decreased .thus , the system performance is improved .we then extend our vblast pnc to a detection scheme with an optimal order as in zf - sqrd , and even better performance is achieved .numerical simulation is done to compare the performance of vblast pnc with linear mimo pnc and mimo nc schemes .the results show that vblast pnc can achieve much better ber performance than linear mimo pnc and vblast mimo nc .this paper is mainly based on the twrc in fig .1 , where the relay is equipped with 2 antennas and the each end node is equipped with single antenna .the transmission consists of two phases . in the uplink phase , boththe end nodes transmit their packets to the relay node simultaneously using qpsk modulation .we assume that the two packets arrive at the relay node in a symbol level synchronization . in that way , the superimposed signal received by the relay is : where denotes the received signal at the -th antenna of the relay node ; is zero mean complex gaussian random variable , which denotes the channel coefficient from the end node to the -th antenna of the relay node ; the transmitted signal with qpsk modulation of the end node , and demotes the complex gaussian noise with zero mean and variance for each dimension . in this phase ,the full channel information is available at the relay node .rewriting the received signal in the vector form as then , the relay node tries to extract some useful information from and transforming it to the network coded form .the detailed processing will be illustrated later , which is also the focus of our paper . in the downlink phase ,the relay broadcasts the network coded packet to both end nodes . after receiving the packet from the relay ,the end nodes extract their target packets with the help of their own information .now , we present an example to illustrate the basic idea of vblast pnc and its superiority over vblast nc , with a simple channel realization . in the first phase , the transmission in ( 2 )can be regarded as a mimo system ( two transmit antennas and two received antennas ) .the relay node s goal is to acquire an estimate of . in the traditional mimonc scheme , the relay node decodes and explicitly before network encoding them .nevertheless , this scheme is suboptimal .consider an ill - conditioned channel matrix \ ] ] where is a small quantity .then eq.(1 ) can be rewritten as with qr zf vblast detection , is first detected as according to the processing of vblast , we cancel the interference in and obtain the estimate of as from ( [ eq5 ] ) and ( [ eq6 ] ) , we can find that the snr of tends to zero when is very small .it means that the vblast nc scheme ca nt accurately estimate the target signal in this case .based on the basic idea of pnc , the relay can estimate the target signal from .therefore , we can first estimate rather than estimate individual information of , from .the particular processing is as follows . 
after estimating , we subtract rather than from .then , we can obtain with the pnc mapping [ 3 ] , we can directly map in ( 7 ) to the target information .since the noise in ( 7 ) is small ( independent of , the performance of this scheme is much better than the mimo nc scheme .this example indicates that mimo pnc may significantly outperform the traditional mimo nc . in the following sections , we introduce the proposed vblast pnc scheme in detail .in this section , we first briefly review the mimo nc scheme based on qr vblast detection algorithm [ 9 , 10 ] for a comparison . after that, we elaborate the proposed vblast pnc . consider the 2-by-2 mimo system in ( 2 ) .the channel matrix can be decomposed with qr decomposition so that _ _ , where the matrix is a unitary matrix ( orthogonal columns with unit norm ) and is an upper triangular matrix . multiplying the received signal by ,we can obtain the calculated signal where denotes the matrix conjugate transpose and the new noise vector has the same distribution as .the scalar form of is owing to the upper triangular characteristic of , we can easily estimate the second layer signal ( here ) as : since the qpsk modulation is adopted , the above signal can be demodulated with hard decision as note that the hard decision in ( [ eq11 ] ) is performed for the real part and the imaginary part respectively . according to the basic idea of vblast , after the second layer signal is detected , we can then detect the first layer signal ( here ) by canceling the interference of : after obtaining the individual decisions of and , a straightforward way to calculating is to combine the estimates of and ( obtained in ( 11 ) and ( 12 ) respectively ) : hereafter , we refer to this scheme as vblast nc .vblast nc detects the signals separately with vblast algorithm , and then encodes them into the network - coded form.however , it may perform poorly as shown in our illustrating example . as in the vblast processing above , exchange of the detection order between x1 and x2 ( first detect andthen cancel it before detecting is also workable . as shown in [ 10 ] , the detection sequence is crucial to the performance of vblast because of the risk of error propagation . to obtain a better performance , we can permute the columns of the channel matrix before the qr decomposition . by carefully selecting the permutation pattern ,the post - permuted has a larger . as a result ,the snr of the second layer is increased and the ber is decreased , and the error propagation effect is also decreased .we can also adopt this sorted algorithm [ 10 ] in our vblast pnc , as elaborated in the following part . in vblast ,the diversity order of the first layer should be larger than the second layer in theory under the assumption of clean interference cancellation . in practice, however , the diversity order of both layers is the same [ 11 ] .the main reason is the error propagated from the erroneously detected second layer signal . 
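A compact sketch of the QR-based ZF-VBLAST detection with subsequent network coding (VBLAST NC) described above is given below for the 2-by-2 uplink with QPSK; the bit mapping to (±1 ± 1j)/sqrt(2) symbols is an assumed convention, not a detail taken from the paper.

```python
import numpy as np

def hard_qpsk(z):
    """Per-dimension hard decision onto QPSK symbols (+-1 +- 1j)/sqrt(2)."""
    return (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)

def to_bits(symbol):
    return (np.array([symbol.real, symbol.imag]) > 0).astype(int)

def vblast_nc(y, H):
    """QR ZF-VBLAST detection of both layers followed by bitwise network coding."""
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y
    x2_hat = hard_qpsk(z[1] / R[1, 1])                         # detect the second layer first
    x1_hat = hard_qpsk((z[0] - R[0, 1] * x2_hat) / R[0, 0])    # cancel it, then detect layer one
    return to_bits(x1_hat) ^ to_bits(x2_hat)                   # XOR per I/Q dimension
```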
to mitigate the error propagation and better the performance , we propose vblast pnc scheme which only cancels a part of the detected second layer information .after the cancellation , we require the remaining signal to be in the form of where is an integer .we can then directly map this signal to the target signal by applying pnc mapping , without explicitly detecting .since the pnc mapping has similar performances as the ordinary point - to - point transmission , our vblast pnc could achieve a better performance by mitigating the error propagation effects .the detail of vblast pnc is as follows .we rewrite in ( 9 ) as where is an integer to be determined later . in ( [ eq1 ] )we regard as the signal to be estimated and ( -kr__ as interference to be cancelled . in order to decrease the effect of error propagation, we must minimize ( -kr__ .for example , if , no interference needs to be cancelled and there will never exist error propagation during the detection of the signal in ( [ eq14 ] ) . taking the integer requirement of into account, we can determine the value of as where means the integer nearest to . in ( 15 ), is a complex variable and is a real variable , and we only take account the real part to calculate in this paper . after cancelling the interference with the hard estimate of in ( 11 ) ,we can obtain the estimate of as finally , the relay node estimates only from as long as does not equal to 0 . when k>0 , intuitively , is larger for , while it is smaller for . when k<0 , the situation reverse. then , for each dimension ( real part or imaginary part ) signal , the corresponding decision rule is where is the decision threshold and its optimal value can be calculated as in [ 3 ] . in high snr , we can simplify the calculation of and set it to . then we have to further improve the performance , we can extend our scheme to the sorted vblast pnc , where the optimal detection order is chosen as the tradition vblast .in particular , we exchange the columns of to obtain a larger .then , the vblast algorithm is performed on the new .in this section , we present the simulation results for the proposed vblast pnc . to compare its performance, we also show the simulation performance of linear mimo pnc , the performance of linear mimo nc and vblast nc .the simulation setting is mainly based on the system model in section ii . in particular, we use qpsk modulation and set the packet length to .the wireless channels are assumed to be block fading with each entry of the channel matrix to be independently complex gaussian distributed over .the noise is gaussian distribution with and the snr of the system is defined as .simulation results are measured in terms of bit error rate ( ber ) of at the relay node since the broadcast phase is straightforward . in figure 2 ,we plot the ber performance of different schemes .as shown in the figure , the proposed mimo pnc schemes always outperform their counterparts .specifically , our vblast pnc outperforms vblast nc by about 0.5 db at a ber of ; the sorted vblast pnc outperforms sorted vblast nc by about 1db . 
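The sketch below illustrates the VBLAST PNC detection with partial interference cancellation described in this section, reusing the QPSK conventions of the previous sketch. The decision threshold follows the high-SNR simplification mentioned above, and the handling of k = 0 and |k| > 1 are assumptions of this sketch rather than details taken from the paper.

```python
import numpy as np

def vblast_pnc(y, H, threshold=1.0 / np.sqrt(2)):
    """VBLAST PNC for the 2-by-2 case: cancel only (r12 - k*r11)*x2 and map x1 + k*x2 to XOR."""
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y
    z2 = z[1] / R[1, 1]
    x2_hat = (np.sign(z2.real) + 1j * np.sign(z2.imag)) / np.sqrt(2)   # second-layer hard decision
    k = int(np.round((R[0, 1] / R[0, 0]).real))     # integer minimizing the residual interference
    # partial cancellation: the residual still contains r11*(x1 + k*x2)
    s = (z[0] - (R[0, 1] - k * R[0, 0]) * x2_hat) / R[0, 0]
    to_bits = lambda c: (np.array([c.real, c.imag]) > 0).astype(int)
    if k == 0:                                      # no combined symbol: detect x1 and XOR explicitly
        return to_bits(s) ^ to_bits(x2_hat)
    mag = np.abs(np.array([s.real, s.imag]))        # per-dimension magnitude of x1 + k*x2 (+ noise)
    if k == 1:
        return (mag < threshold).astype(int)        # sum near zero  <=>  the two bits differ
    if k == -1:
        return (mag >= threshold).astype(int)       # difference away from zero  <=>  bits differ
    raise NotImplementedError("|k| > 1 requires the general multi-level PNC mapping of [3]")
```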
in sorted vblast, the more performance improvement mainly comes from the fact that the average value of is smaller and the interference to be cancelled , , is smaller .in this paper , a novel signal detection and network encoding scheme , vblast pnc , is proposed to extract at the relay node in mimo twrc .the basic idea is that the relay node first uses partial interference cancellation to obtain vblast detection process and then converts it to with pnc mapping . with partial interference cancellation, error propagation effect is mitigated and the performance is significantly improved .the simulation results verified the performance advantages of vblast pnc under the setting of random rayleigh fading channel coefficients .our scheme is of great interest in practice since only csir , symbol lever synchronization and low complexity are needed .this work was partially supported by nsfc ( no . 60902016 ) , nsf guangdong ( no . 10151806001000003 ) , and nsf shenzhen ( no . jc201005250034a ) .y. rong , x. tang and y. hua , `` a unified framework for optimizing linear nonregenerative multicarrier mimo relay communication systems '' , ieee transactions on signal processing , pp : 4837 - 4851 , dec .2009 s. zhang , s. c. liew , and p. p. lam , `` physical layer network coding , '' in proc .acm mobicom06 : the 12th annual international conference on mobile computing and networking , pages 358365 , new york , ny , usa , 2006 .
for the mimo two-way relay channel, this paper proposes a novel scheme, vblast-pnc, to transform the two superimposed packets received by the relay into their network-coded form. different from traditional schemes, which try to detect each packet before network coding them, vblast-pnc detects the summation of the two packets before network coding. in particular, after first detecting the second-layer signal of the 2-by-2 mimo system with vblast, we cancel only part of the detected signal from the first layer, rather than canceling all of its components. then we directly map the obtained signal, a combination of the first-layer and second-layer symbols, to its network-coded form. with such partial interference cancellation, the error propagation effect is mitigated and the performance is thus improved, as shown in simulations. index terms: multiple input multiple output, physical layer network coding, two-way relay channel, vblast.
fusion frames , as a generalization of frames , are valuable tools to subdividing a frame system into smaller subsystems and combine locally data vectors .the theory of fusion frames was systematically introduced in . since then, many useful results about the theory and application of fusion frames have been obtained rapidly . in the context of signal transmission , fusion frames and their alternative dualshave important roles in reconstructing signals in terms of the frame elements .the duals of fusion frames for experimental data transmission are investigated in .but the problem that occurs is that the duality properties of fusion frames are not like discrete frames , such as , the duality property of fusion frames is not alternative and fusion riesz bases have more than one dual .this paper deals with investigating such problems , which help us to obtain alternative dual fusion frames .let be a separable hilbert space .frame _ for is a sequence such that there are constants satisfying the constants and are called _ frame bounds_. if , we call a _ tight frame_. if the right - hand side of ( [ def frame ] ) holds , we say that is a _bessel sequence_. given a frame , the _ frame operator _ is defined by a direct calculation yields hence , the series defining converges unconditionally for all and is a bounded , invertible , and self - adjoint operator .hence , we obtain the possibility of representing every in this way is the main feature of a frame .a sequence is bessel sequence if and only if the operator , which is called the _ synthesis operator _ , is well - defined and bounded . when is a frame , the synthesis operator t is well - defined , bounded and onto .a sequence is called a _dual _ for bessel sequence if every frame at least has a dual .in fact , if is a frame , then ( [ frame decomposition ] ) implies that , which is a frame with bounds and , is a dual for ; it is called the _ canonical dual_. to see a general text in frame theory see .let and be bessel sequences with synthesis operators and , respectively . then from ( [ def dual ] ) follows immediately that and are dual of each other if and only if ; in particular , they are frames . for more studies in the duality properties of frameswe refer to .the following proposition describes a characterization of alternate dual frames . [ ar ] 1 .the dual frames of are precisely as , where is a bounded left inverse of and is the canonical orthonormal basis of 2 .there is a one to one correspondence between dual frames of and operators such that .we now review preliminary results about fusion frames . throughout this paper, denotes a countable index set and is the orthogonal projection onto a closed subspace of .let be a family of closed subspaces of and be a family of weights , i.e. , .then is a _ fusion frame _ for if there exist constants such that the constants and are called the _ fusion frame bounds_. if we only have the upper bound in ( [ def fusion ] ) we call a _ bessel fusion sequence_. a fusion frame is called _ tight _ , if and can be chosen to be equal , and _ parseval _ if . if for all , the collection is called _ -uniform_. a fusion frame is said to be an _orthonormal fusion basis _if , and it is a _ riesz decomposition _ of for every there is a unique choice of so that .recall that for each sequence of closed subspaces in , the space with the inner product is a hilbert space . 
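As a small numerical illustration of the decompositions above, the sketch below checks the frame reconstruction f = sum_k <f, S^{-1} f_k> f_k for a finite frame of R^2 and the analogous fusion-frame reconstruction f = sum_i v_i^2 S_W^{-1} P_{W_i} f, with S_W the standard fusion frame operator used below. The frame vectors, subspaces and weights are chosen purely for illustration and are not taken from the paper.

```python
import numpy as np

# --- discrete frame in R^2: three unit vectors at 120 degrees (rows of F) ---
F = np.array([[1.0, 0.0],
              [-0.5, np.sqrt(3) / 2],
              [-0.5, -np.sqrt(3) / 2]])
S = F.T @ F                                    # frame operator: S f = sum_k <f, f_k> f_k
f = np.array([0.3, -1.2])
recon = F.T @ (F @ np.linalg.solve(S, f))      # sum_k <f, S^{-1} f_k> f_k
assert np.allclose(recon, f)

# --- hypothetical fusion frame in R^3: two coordinate planes and a line, all weights 1 ---
def proj(basis):
    Q, _ = np.linalg.qr(basis)                 # orthogonal projection onto span(basis columns)
    return Q @ Q.T

subspaces = [np.array([[1., 0.], [0., 1.], [0., 0.]]),    # xy-plane
             np.array([[0., 0.], [1., 0.], [0., 1.]]),    # yz-plane
             np.array([[1.], [0.], [1.]])]                # the line spanned by (1, 0, 1)
weights = [1.0, 1.0, 1.0]
P = [proj(W) for W in subspaces]
S_W = sum(v**2 * Pi for v, Pi in zip(weights, P))         # fusion frame operator
g = np.array([0.7, -0.2, 1.1])
recon_g = sum(v**2 * np.linalg.solve(S_W, Pi @ g) for v, Pi in zip(weights, P))
assert np.allclose(recon_g, g)
```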
for a bessel fusion sequence for , the _ synthesis operator _ is defined by its adjoint operator which is called the _ analysis operator _ is given by recall that is a fusion frame if and only if the bounded operator is onto and its adjoint operator is ( possibly into ) isomorphism .if is a fusion frame , the _ fusion frame operator _ is defined by is a bounded , invertible and positive operator and we have the following reconstruction formula the family , which is also a fusion frame , is called the _ canonical dual _ of and satisfies the following reconstruction formula let be a fusion frame by the frame operator .a bessel fusion sequence is called a _ dual _ of if let be a family of closed subspaces of and be a family of weights , i.e. , .we say that is a _ fusion riesz basis _ for if and there existconstants such that for each finite subset some characterizations of fusion riesz bases are given in the following theorem .[ rie] let be a fusion frame for and be a basis for for each .then the following conditions are equivalent .+ ( 1 ) is a riesz decomposition of .+ ( 2 ) the synthesis operator is one - to - one .+ ( 3 ) the analysis operator is onto .+ ( 4 ) is a fusion riesz basis for .+ ( 5 ) is a riesz basis for .[ har] let and be a closed subspace .then we have this paper is organized as follows : in section 2 , we compare the duality properties of discrete and fusion frames and by presenting examples of fusion frames we show that some well - known results on discrete frames are not valid on fusion frames .also we investigate the cases that these properties can satisfy on fusion frames . in section 3, we investigate the relation between the duals of fusion frames , local frames and the associated discrete frames and we try to characterize the dual of fusion frames .for a fusion frame and a bessel fusion sequence , we define by it is easy to see that is a linear operator and , its adjoint can be given by , for all .now , the identity ( [ def : alt ] ) can be written in an operator form as follows .[ dual by preframes ] let be a fusion frame .a bessel fusion frame is a dual of if and only if where and are the synthesis operators of and , respectively .by lemma [ dual by preframes ] , we deduce that , unlike discrete frames , two fusion frames are not dual of each other in general . herewe present an example which confirms this statement .[ s ] let .consider and , for . also take and , for . then and are fusion frames for with frame operators and , respectively .the following calculation shows that is an alternative dual of .\\&=&(a , b , c),\ \ \ ( a , b , c)\in\mathbb{r}^{3}.\end{aligned}\ ] ] but is not an alternative dual of . in fact\neq ( a , b , c).\end{aligned}\ ] ] now , it is natural to ask when two bessel fusion frames are dual of each other . to answer this questionassume that is also a dual fusion frame for or equivalently ( by lemma [ dual by preframes ] ) where is given by let be a fusion frame with a dual .then the fusion frame is also a dual of if moreover the converse is hold if and are fusion riesz bases .let be a dual of , then by using ( [ u.phi.t * ] ) and ( [ dual each other ] ) we obtain i.e. 
fusion frame is also a dual for .+ for the proof of moreover part , since and are fusion riesz bases , by theorem [ rie ] , and are invertible .so we deduce the proof by ( [ u.phi.t * ] ) and ( [ khan ] ) .let and for each .suppose that is a tight fusion frame for .then is also a dual fusion frame of one of the important results in the duality of discrete frames is that every riesz basis has just a unique dual ( canonical dual ) and that dual is riesz basis as well .but the following example shows that this property is not confirmed in fusion riesz bases .[ dual riesz ] consider then is a fusion frame for with bounds and , and the frame operator it is not difficult to see that is a fusion riesz basis and its canonical dual can be given with to construct an alternate dual consider then is a fusion frame for .moreover , if then hence , the fusion riesz basis has more than one dual and the second dual is not a fusion riesz basis .let be a fusion frame for . by considering a frame for each subspace we can construct a discrete frame for .we begin with the following key theorem .[ relation fus.dis] for each let and let be a frame sequence in with the frame bounds and .define for all and assume that then is a frame for if and only if is a fusion frame for .our aim in this section is to study the relation between the duals of fusion frames , local frames and the associated discrete frames of . in particular , in the following theorem we investigate the relation between the duals of local frames of with the associated discrete frames of .suppose that is a fusion frame for and is the frame operator of local frames for each .now the question is whether the canonical dual of each frame is also a frame for the canonical dual of .the following example shows that the answer is not true in general .[ exam in r3 ] let consider and then by example 3.1 in , is a fusion frame for .let and .it is clear that is a frame for for each .suppose that is the frame operator of and is the frame operator of for each .a straightforward calculation shows that and the subspaces with the weights is the canonical dual of . moreover , if we take then is the canonical dual of for each .however , is not a frame for for each .the following example shows that there is no significant relation between the duals of fusion frames and their associated discrete frames , i.e. if is a dual of , then it is not necessary that their associated discrete frames be dual of each other .let and then is a fusion frame for with an alternate dual .+ consider and then and are frames for , but they are not dual of each other . in the rest of the paperwe try to characterize the duals of fusion frames .we first discuss the riesz case .let be a riesz decomposition of and be its dual .associated to the canonical dual we can consider the operator given by applying ( [ u.phi.t * ] ) and theorem [ rie ] we conclude that , where is the synthesis operator of .it follows easily that or equivalently following example shows that unfortunately , we can not characterize the duals of fusion frames by the duals of their associated discrete frames and the first part of proposition [ ar ] .consider the fusion frame introduced in example [ exam in r3 ] . by theorem [ relation fus.dis ]the sequence is frame for with the frame operator denote its canonical dual by . then consider and , then is a fusion frame for .but is not an alternative dual of and vise - versa .let be a fusion frame for and be a bessel sequence of normalized vectors such that . 
take where is the 1-dimensional subspace generated by .then is a dual for .first , it is not difficult to see that now by using 8.12 of and corollary 2.5 of we conclude that where and are the frame bounds of and , respectively .hence , is a bessel fusion frame .moreover , by lemma [ har ] we have the above theorem gives us a very simple method to construct duals of finite fusion frames . more precisely ,let be a finite fusion frame for .take where is a 1-dimensional subspace of . to illustrate this algorithm ,let us consider the fusion frame in example [ dual riesz ] .clearly therefore , we can introduce some duals :
fusion frames are valuable generalizations of discrete frames. most concepts of fusion frames are shared with discrete frames. however, the duality setting is considerably more complicated. in particular, unlike for discrete frames, two fusion frames are in general not duals of each other even when one of them is a dual of the other. in this paper, we investigate the structure of the duals of fusion frames and discuss the relation between the duals of fusion frames and their associated discrete frames.
cognitive radio ( cr ) is an emerging wireless paradigm that enables wireless devices , called secondary users ( sus ) , to share the available spectrum whenever the licensed primary users ( pus ) are idle .however , this dynamic spectrum sharing can introduce many security threats , one of which is the primary user emulation ( pue ) attack . in pue attacks ,a malicious attacker emulates the pu signals to force legitimate sus to vacate the spectrum even though no real pus are currently present .this type of attacks significantly decreases the total performance of legitimate sus and is detrimental to the overall cr system .many defense and detection strategies have been proposed to prevent pue attacks in cr systems .these strategies fall into the following main categories : authentication and encryption based such as in , localization - based , transmit power estimation techniques , and smart strategies based on supervised machine learning techniques such as in and or game theory . for pu authentication and encryption .the authors in proposed to add helper nodes to the system that authenticate pus through link signatures , and hence inform sus with available spectrum . within the scope of localization - based techniques, the authors in exploited the received signal strength ( rss ) distribution at different locations in the system for device fingerprinting .however , both the authentication and the localization - based techniques will require additional nodes for authentication or location reference , which can increase system complexity .localization - based techniques also assume the pus to be stationary with minimal channel variations , which is clearly not the case in practical crs in which users constantly change their locations . in ,the authors estimated the mean and the variance of the received power at both a licensed pu and a malicious user using fenton approximation .these estimations are compared with the probability ratio test on the received signals to detect pue attacks . however, these techniques assume that the transmission power levels of attackers are significantly different than the fixed known transmission power levels of the pus . 
using neural networks , the authors in developed an approach to distinguish between pu and pue attacker .the use of belief propagation was proposed in , in which each su calculates the belief based on the local and compatibility functions and the beliefs are exchanged between neighboring users and then used to detect pue attacks .the work in adopts a game - theoretic approach in which the pue defense problem is a dog fight game between defending sus and the attacker .previous supervised machine learning based techniques such as in require knowledge about users and authorized signals .however , this kind of information is not always available in real cr implementations .device fingerprinting is a process to extract device - specific information from the device transmitted signals .these fingerprints form signatures that uniquely identify each individual pu or su in the system .some of the features that can be used as fingerprints include the carrier frequency difference ( cfd ) , phase shift difference ( psd ) , and second - order cyclostationary feature ( socf ) .since device fingerprinting requires no prior knowledge about pus and sus , unsupervised machine learning techniques can be used to detect pue attacks .the authors in used an unsupervised bayesian mixture model to classify fingerprints into different groups .each group represents a unique device in the system .a pue attack is detected if two groups , physical devices , share the same i d , e.g. mac address .another benefit of the unsupervised techniques is that they are passive and require no additional hardware to be added to the system unlike localization based techniques .however , unsupervised fingerprinting techniques are applied at each time frame independently .therefore , sufficient amount of fingerprints is required at each time frame , and it is required to have fingerprints for both the malicious attacker and the victim pu at the same time frame . the main contribution of this paper is to propose a novel transfer learning based framework in which knowledge from past time frames is transferred to current time frame to improve the final pue attacks detection decisions . to our knowledge, this paper is the _ first to develop a transfer learning approach for device fingerprinting_. transferring knowledge is significant in fingerprinting scenarios in which an insufficient amount of fingerprints is available at the current time frame .our proposed framework extracts abstract information about all su and pu devices in the cr system .this abstract knowledge is used to boost the output of an unsupervised pue attacks detection algorithm .the transfer learning part in the proposed framework updates the abstract knowledge after each time frame .extensive simulations demonstrates that the transfer approach improves the efficiency of pue attacks detection especially in the case of insufficient availability of fingerprints .results show that using the proposed framework the performance is enhanced with an average of for only of relevant information between the past knowledge and the current environment signals .the rest of this paper is organized as follows .section [ sec : model ] describes our system model . 
in section [ sec : method ] we explain in detail the proposed transfer learning framework .simulation results and discussions are presented in section [ sec : results ] .finally , conclusions are drawn in section [ sec : end ] .consider a cr system composed of pus and sus , with the total number of pus and sus being .each user has a unique identifier ( i d ) , such as a mac address .secondary users ( sus ) can benefit from the spectrum only when all pus are idle .hence , sus observe the signals of active pus to avoid using the spectrum when it is used by a licensed pu . in this model , we consider a pue attack scenario in which a malicious attacker copies the i d of one of the licensed pus , and , in turn , forces the sus to vacate the spectrum due to the fake pu - like signals sent from the attacker .the pue attack causes a serious security threat and significantly degrades the total performance of the cr system . nevertheless , each device in the cr system , whether it is a pu or an su , transmits signals that include device - specific information , such information is the carrier frequency difference ( cfd ) , phase shift difference ( psd ) , and second - order cyclostationary feature ( socf ) .such device - specific information is known to be unique to each transmitter , and hence can not be replicated by attackers .the process to extract this device - specific information from the transmitted signals is called _ device fingerprinting _ . in device fingerprinting , the extracted device - specific information , or fingerprints , forms signatures that uniquely identify each device in the system .therefore , even though the pue attacker shares the same i d with a real pu , the fingerprint data for the attacker is different from the fingerprint data for the attacked pu . although our model is applicable to any device fingerprint features , we restrict our attention to the cfd , psd , and socf . to detect pue attacks ,a nonparametric bayesian model can be used , such as the one in . in this model, fingerprinting data is gathered from all the devices in the system and grouped into different clusters . since fingerprints are unique for each device , each cluster of similar fingerprints belongs to one device only .therefore , the number of clusters represents the real number of devices in the system .the number of clusters is a key factor to detect the existence of a pue attack .for example , if multiple clusters have the same i d , then a pue attack has occurred . to form the clusters, one can adopt the nonparametric approach proposed by wood and black , that uses the infinite gaussian mixture model ( igmm ) . in this model, the number of clusters , or mixtures , could be infinite .using an igmm , one can cluster a given set of data points , such as the pus and sus fingerprints , by estimating a number of gaussian distributions , where each gaussian distribution represent a cluster .the igmm therefore requires a mixture input and a set of hyper - parameters .these hyper - parameters include priors on the mean and variance of the multivariate gaussian distributions .to integrate the hyper - parameters out of the igmm , the mixture input needs to be generated from a distribution that is conjugate to the probability of the priors , with being the mean and being the variance .hence , the dirichlet distribution is used for the mixing probability with being the mixing weights and being the mixing parameter .the dirichlet distribution is given by for all where . 
is the beta function , which can be expressed in terms of the gamma function as follows : the gibbs sampling process is then used to iteratively assign cluster labels to input data vectors , which in our case represent assigning fingerprints to devices .gibbs sampler starts by initializing clusters for all the fingerprints .subsequently , the sampler removes the cluster label for one of the fingerprints and calculates its probability for new cluster assignment .the sampler iterates until convergence of all device assignments .convergence is guaranteed only for the case of infinite number of fingerprints . in the real case in which a finite number of fingerprints is available ,a maximum number of iterations for the gibbs sampler is chosen as a stopping criterion .such a nonparametric bayesian method can be applied to a given set of fingerprints to detect pue attacks .however , most such existing models , such as in , are static and they apply this method independently at every time period .however , such static approaches require the cr system to have a sufficient amount of unique fingerprints for each device in the cr system to construct correct clusters for both the pus and the sus .in such a nonparametric system , there might not be a sufficient amount of fingerprint data for each device in the system .therefore , there is a need to handle the case of having an insufficient amount of fingerprints to efficiently detect pue attackers . to address this problem , next ,we propose a transfer based framework in which fingerprinting results from past time frames are used to enhance the current pue attacks detection .to enhance the fingerprinting process in a cr network , we use the powerful mathematical tool of transfer learning .in particular , we propose a new transfer learning based framework for device fingerprinting , as illustrated in fig .[ fig : method ] .the proposed framework consists of two main components .the first is the abstract knowledge database ( akd ) , which is able to gather general information about devices in the cr system from all time frames .the second component is the transfer tool that updates the stored abstract knowledge database with the current time frame information .the proposed framework receives the current fingerprint data from the environment , and suggests a clustering for the received fingerprint data based on the knowledge in the akd .the clustering decisions from the proposed framework are merged with the clustering decisions of the static nonparametric bayesian model to form a final clustering decisions .these final clustering decisions represent the assignment of fingerprint data to devices in the cr system .the transfer tool updates the akd with the final clustering decisions for future pue attacks detection .next , both the abstract knowledge database and the transfer learning algorithm will be described in detail .the abstract knowledge database ( akd ) includes general information about the devices in the cr system starting from time , when the system was first deployed , and until the current time instant .the information stored in the akd consists of a set of fingerprints , with _db _ referring to the abstract knowledge database , grouped into multiple clusters , where each cluster represents a device in the cr system . 
the akd block in fig .[ fig : method ] shows an example of 3 clusters of fingerprints .hence , each fingerprint has a corresponding cluster label and a weight value .weights represent the amount of confidence in assigning fingerprint to cluster . to reduce the number of fingerprints in the akd , a similarity metricis introduced , which replaces two or more fingerprints , that are nearly identical with new fingerprint , where minimizes the distances between , : where is a distance measure and is the set of similar fingerprints : such that , .hence , the new set of fingerprints in the akd equals to : .for instance , if the distance measure is the l2-norm and we have two fingerprints with high similarity , then the new fingerprint using ( [ eq : simi ] ) will simply be the mean of both .the distance threshold defines how close fingerprints need to be in order to be considered similar .large values of allow more fingerprints to be merged and hence will result in a very generic abstract knowledge in the akd .on the other hand , small values of tighten the similarity condition and result in a highly specific information in the akd .when a new set of input fingerprint data arrives from the environment , the goal is to group these fingerprints into clusters . in order to groupthe input fingerprints , the model compares these newly arriving fingerprints with the fingerprints stored in the akd to find the set of fingerprints which is as close as possible to the input fingerprints .next , we use the clustering labels of as a suggested clustering labels for .the model also uses the weights associated with each fingerprint in the akd to provide a confidence level along with each suggested clustering label .a sigmoid logistic growth function is chosen to calculate the amount of confidence for each fingerprint in the database : where _cf _ is the confidence weight for the fingerprint , is the weight associated with the fingerprint in the akd , and are the parameters of the logistic function that control how steep the logistic function is which in turn affects the level of confidence assigned to each weight .the initial weights are chosen near the middle , based on the amount of confidence that we want to assign to the newly added fingerprints in the database .set of fingerprint data from the environment set of labels and confidence values for all fingerprints calculate similarity between and ( of the similar to ) labels of } ] is the mapping vector between final cluster labels and cluster labels in the akd such that , where is the number of labels , for any two elements and of , is an indicator function in which when and when . for example , in the case of 2 clusters only , can take two possible values : ] . after finding the right mapping of the labels from the final clustering output to the clustering labels in the akd , the transfer tool updates the existing fingerprints in the akd with the final output labels of the input fingerprints . in the update process, the input fingerprint data is divided into two categories : fingerprints with high similarity to the fingerprints in the akd , and fingerprints with low similarity to the fingerprints in the akd . in the high similarity case , we increase the weights for all the fingerprints in the akd that are of high similarity to the input fingerprints to give these fingerprints a higher confidence weight . 
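The sketch below illustrates the two ingredients of the AKD just described: merging nearly identical fingerprints, replacing each similar group by its mean (the minimizer of the summed squared L2 distances), and the logistic growth of the confidence weights. The choice of the L2 distance, the greedy grouping and the steepness/midpoint parameters a and b are assumptions of this sketch; the text leaves these choices open.

```python
import numpy as np

def merge_similar(fingerprints, delta):
    """Greedily merge rows of `fingerprints` whose L2 distance is below delta into their mean."""
    merged, used = [], np.zeros(len(fingerprints), dtype=bool)
    for i, fp in enumerate(fingerprints):
        if used[i]:
            continue
        group = [j for j in range(i, len(fingerprints))
                 if not used[j] and np.linalg.norm(fingerprints[j] - fp) < delta]
        used[group] = True
        merged.append(fingerprints[group].mean(axis=0))
    return np.array(merged)

def confidence(weight, a=1.0, b=5.0):
    """Logistic growth of the confidence assigned to an AKD weight (a and b are assumed values)."""
    return 1.0 / (1.0 + np.exp(-a * (weight - b)))
```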
as for the fingerprints with low similarity , the input fingerprints are considered new to the akd and are added to the akd with initial associated weights .the algorithm used to update fingerprints in the akd is shown in algorithm [ alg : alg2 ] .after specific time intervals , the weights for all the fingerprints in the database are decreased by to penalize for old unused fingerprints .all the fingerprints with weights below a certain threshold are removed from the akd database .fingerprint data from the environment , and similarity levels between fingerprints from the environment and fingerprints from the akd updated akd find the mapping of labels * _ m _ * ( between framework output labels and final output labels ) .increase the weight of by decrease the weight of by add the new label mapping of to the akd with initial weight add the new fingerprint with its label to the akd with initial weight decrease all weights for all fingerprints in the akd by our simulations , we consider the nonparametric bayesian method as our static pue attacks detection model , and we evaluate the benefits of using our proposed transfer based framework to enhance the detection of the pue attacks in the cr system . even though the hyper - parameters were integrated out of the iterative gibbs process as shown in section [ sec : model ] , an appropriate choice of hyper - parameters affects the speed of convergence of the gibbs sampler . the hyper - parameters consist of priors to the and of the multivariate gaussian distributions .for example , the mean prior is usually set to be close to the mean value of all the fingerprints .the hyper - parameter , which is proportional to prior mean for , determines how likely the fingerprints are to be variant from each others .therefore , for features like signal amplitude , a large value of is chosen since signal amplitude feature has large variance between fingerprints .on the other hand , frequency features tend to have small variance between fingerprints and hence a small value of is chosen for these features .the dirichlet distribution is used in the generation of the fingerprints for the simulations .random means and variances are used for each label ..33 .33 .32 in this section , we evaluate the performance of the nonparametric bayesian model with and without our proposed transfer learning framework .we use two main metrics for evaluation : ( i ) the hit rate of assigning the right label to each fingerprint , which is equals to the percentage of correctly assigned labels to fingerprints compared to the total number of fingerprints .( ii ) missed attack rate , which is an indicator that missclassifying some fingerprints led into missclassifying a user in the system .the second metric provides a better indicator on the performance enhancement due to using the transfer learning framework , this is due to the fact that even with high hit rate percentages , an attacker might not be detected if all the missclassified fingerprints belong to this attacker . 
on the other hand ,even with low hit rate , accurate detection of attackers can be achieved if the missclassified fingerprints are distributed on various users .the conducted experiments were repeated times and final results were averaged .the number of users chosen is with total number of fingerprints ranging from to fingerprints .an upper bound on the transfer learning approach is shown in the plots , which presents the maximum hit rate possible with perfect transferred knowledge .this upper bound curve gives us an insight on the maximum performance that our proposed transfer approach can achieve . in fig .[ fig : res1 ] , we show the hit rate resulting from the different learning approaches as the number of fingerprints varies . from this figure , we can see that our proposed framework achieved an average enhancement of in total hit rate ( for fingerprints ) with only of transferable knowledge .[ fig : res1 ] also shows that the transfer learning upper bound curve yields to highest output hit rate if all the transferable knowledge were completely used .the figure also shows that the maximum performance enhancement that can be reached through transfer learning is around making our model up to close to the optimal transfer case . as expected the more fingerprints we have that describes users the better the clusters are which yields to a higher total hit rate . fig .[ fig : res2 ] shows the efficiency of using transfer learning as the number of fingerprints increases . as more fingerprints are available, there is less need to transfer knowledge and , thus , the amount of performance improvement resulting from the proposed approach will naturally decrease .this is due to the fact that having a large number of fingerprints will yield in a better clustering of labels by the detection algorithm which leaves small room of enhancement for the transfer approach . in fig .[ fig : res3 ] , we show the number of times that the proposed transfer learning approach managed to correctly classify a pue attacker that was missclassified in the traditional detection approach .as the number of fingerprints increases , missclassifying an attacker is less likely to happen .this is due to the fact that even if some fingerprints were missclassified , having other correctly classified fingerprints insures that the attacker is correctly detected .[ fig : res3 ] shows that for fingerprints the proposed transfer approach detects of the times an attacker which was not detected in the traditional non - transfer approach .fig [ fig : res4 ] shows the effect of increasing the number of devices on the total hit rate for a fixed number of fingerprints for each user . in this experiment, we obtain fingerprints for each user . as expected the hit rate decreases as the number of devices increases . for a fixed information from past time frames , the figure shows that the effect of the proposed method increases as the number of devices increases , this is due to the fact that having 25 fingerprints for each user is sufficient in the 10 users case , and as the number of users increases , more fingerprints are needed to correctly cluster users ; and hence the more effective the transfer learning approach becomes .in this paper , we have proposed a novel approach for performing device fingerprinting in wireless networks . 
in particular , we have introduced a novel framework , based on the machine learning tool of transfer learning , that enables static detection algorithms to benefit from past detection results . we have applied the framework to fingerprint pus and sus in a cognitive radio network . our results have shown that the proposed framework enhanced the performance by an average of for only relevant information shared between the past knowledge and the current environment fingerprints .
primary user emulation ( pue ) attacks are an emerging threat to cognitive radio ( cr ) networks in which malicious users imitate the primary users ( pus ) signals to limit the access of secondary users ( sus ) . ascertaining the identity of the devices is a key technical challenge that must be overcome to thwart the threat of pue attacks . typically , detection of pue attacks is done by inspecting the signals coming from all the devices in the system , and then using these signals to form unique fingerprints for each device . current detection and fingerprinting approaches require certain conditions to hold in order to effectively detect attackers . such conditions include the need for a sufficient amount of fingerprint data for users or the existence of both the attacker and the victim pu within the same time frame . these conditions are necessary because current methods lack the ability to learn the behavior of both sus and pus with time . in this paper , a novel transfer learning ( tl ) approach is proposed , in which abstract knowledge about pus and sus is transferred from past time frames to improve the detection process at future time frames . the proposed approach extracts a high level representation for the environment at every time frame . this high level information is accumulated to form an abstract knowledge database . the cr system then utilizes this database to accurately detect pue attacks even if an insufficient amount of fingerprint data is available at the current time frame . the dynamic structure of the proposed approach uses the final detection decisions to update the abstract knowledge database for future runs . simulation results show that the proposed method can improve the performance with an average of for only relevant information between the past knowledge and the current environment signals .
* expected order parameter for two incoherent oscillators . * in fig.3 of the main text we showed that when approaches the order parameter for a pair of symmetric oscillators remains higher than the expected value for a pair of incoherent oscillators .we report here the derivation of for a pair of oscillators that are not synchronized .we assume that the phases and of the two oscillators are drawn uniformly in ] .we notice that the value of for the generic pair of phases reads : consequently , the expected value of is obtained as the average over all the possible choices of , namely : * phase dispersion and numerical estimate for incoherent oscillators . * in fig.2d of the main text we reported , as a function of , the dispersion of the phases for a system of seven oscillators coupled through graph in fig.1a .we noticed that the dispersion approaches the value when .given a set of unitary vectors having phases , consider the average vector : having polar coordinates .the dispersion of the phases of the set around the phase of the average vector is defined as : where the difference ) is computed and takes in ] . for the system of oscillatorscoupled through graph we averaged over samples , obtaining the estimate .* computation of the maximum lyapunov exponent . * the computation of the maximum lyapunov exponent for the system coupled through graph of fig .( 1 ) in the main text was performed using equally separated values of between and ( ) .then , for each value of , we considered 500 different initial configurations of the phases of the seven oscillators .for each initial condition , we let evolve the trajectory according to eq .( 1 ) in the main text , until it reached the attractor ( or the stationary state , for ) . to properly take into account the rotational symmetry of the system, we studied the evolution of in cartesian coordinates , i.e. , by looking at the set of variables .we considered a perturbation of of magnitude , i.e. , a trajectory such that . here denotes the euclidean distance in .then , we integrated both trajectories for one integration step ( using a standard fourth - order runge - kutta integration scheme ) , we measured the distance and we computed .the quantity is a one - step approximation of the largest lyapunov exponent of the system .then , we realigned the perturbed trajectory so that the distance between and the realigned perturbed trajectory was equal to in the same direction of , and we iterated the procedure .the value of for a set of initial conditions was obtained by averaging the values of computed at each iteration over subsequent integration steps .* brain data acquisition and pre - processing . 
* the anatomical connectivity network is based on the connectivity matrix obtained by diffusion magnetic resonance imaging ( dw - mri ) data from 20 healthy participants , as described in .the elements of this matrix represent the probabilities of connection between the 90 anatomical regions of interest ( nodes in the network ) of the tzourio - mazoyer brain atlas .these probabilities are proportional to the density of fibers between different areas , so each element of the matrix represents an approximation of the connection strength between the corresponding pair of brain regions .the functional brain connectivity was extracted from bold fmri resting state recordings obtained as described in .all fmri data sets ( segments of 5 minutes recorded from 15 healthy subjects ) were co - registered to the anatomical data set and normalized to the standard mni ( montreal neurological institute ) template image , to allow comparisons between subjects . as for dw - mri data , normalized and corrected functional scans were sub - sampled to the anatomical labeled template of the human brain .regional time series were estimated for each individual by averaging the fmri time series over all voxels in each region ( data were not spatially smoothed before regional parcellation ) . to eliminate low frequency noise ( e.g. slow scanner drifts ) and higher frequency artifacts from cardiac and respiratory oscillations ,time - series were digitally filtered with a finite impulse response ( fir ) filter with zero - phase distortion ( bandwidth hz ) as in . *functional synchrony . * a functional link between two time series and ( normalized to zero mean and unit variance ) was defined by means of the linear cross - correlation coefficient computed as , where denotes the temporal average . for the sake of simplicity, we only considered here correlations at lag zero . to determine the probability that correlation values are significantly higher than what is expected from independent time series , values ( denoted )were firstly transformed by the fisher s z transform under the hypothesis of independence , has a normal distribution with expected value 0 and variance , where is the effective number of degrees of freedom . if the time series consist of independent measurements , simply equals the sample size , , autocorrelated time series do not meet the assumption of independence required by the standard significance test , yielding a greater type i error .in presence of auto - correlated time series must be corrected by the following approximation : where is the autocorrelation of signal at lag . to estimate a threshold for statistically significant correlations , a correction for multipletesting was used .the false discovery rate ( fdr ) method was applied to each matrix of values . with this approach ,the threshold of significance was set such that the expected fraction of false positives is restricted to .* clustering of phase values . * to identify brain areas that could be related by a topological symmetry , we used the anatomical connectivity obtained from the dw - mri data as the connectivity matrix ( nodes ) in eq .( 1 ) of the main text .a standard hierarchical agglomerative clustering algorithm was then used to identify nodes with similar phases .the resulting dendrogram is depicted in fig .[ figure1suppmat ] .a problem is in the complexity class np if it can be solved in polynomial time by a nondeterministic turing machine .a problem is said to be np - complete if it is in np and it is np - hard , i.e. 
, at least as hard to solve as any other problem in np . see _ computational complexity _ ( addison wesley , reading , ma , 1994 ) for an in - depth discussion of complexity classes . each row and each column of a permutation matrix has exactly one entry equal to one and all others equal to zero . if p is a permutation matrix , the matrix product p a swaps pairs of rows of a , while a p swaps pairs of columns of a .
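since a global symmetry of a graph is precisely a node permutation whose permutation matrix p commutes with the adjacency matrix a (equivalently, p a p^t = a), a candidate symmetry can be checked directly; the small path graph in the sketch below is purely illustrative.

```python
import numpy as np

def permutation_matrix(perm):
    """Build the permutation matrix P with P[i, perm[i]] = 1."""
    n = len(perm)
    P = np.zeros((n, n), dtype=int)
    P[np.arange(n), perm] = 1
    return P

def is_automorphism(A, perm):
    """A node permutation is a graph symmetry iff P A P^T == A,
    i.e. relabelling the nodes leaves the adjacency matrix unchanged."""
    P = permutation_matrix(perm)
    return np.array_equal(P @ A @ P.T, A)

# toy example: a 4-node path 0-1-2-3; reversing the path is a symmetry,
# swapping only the two end nodes is not
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
print(is_automorphism(A, [3, 2, 1, 0]))   # True: mirror symmetry
print(is_automorphism(A, [3, 1, 2, 0]))   # False
```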
we study a kuramoto model in which the oscillators are associated with the nodes of a complex network and the interactions include a phase frustration , thus preventing full synchronization . the system organizes into a regime of remote synchronization where pairs of nodes with the same network symmetry are fully synchronized , despite their distance on the graph . we provide analytical arguments to explain this result and we show how the frustration parameter affects the distribution of phases . an application to brain networks suggests that anatomical symmetry plays a role in neural synchronization by determining correlated functional modules across distant locations . synchronization of coupled dynamical units is a ubiquitous phenomenon in nature . remarkable examples include phase locking in laser arrays , rhythms of flashing fireflies , wave propagation in the heart , and also normal and abnormal correlations in the activity of different regions of the human brain . in 1975 y. kuramoto proposed a simple microscopic model to study collective behaviors in large populations of interacting elements . in its original formulation the kuramoto model describes each unit of the system as an oscillator which continuously readjusts its frequency in order to minimize the difference between its phase and the phase of all the other oscillators . this model has shown very successful in understanding the spontaneous emergence of synchronization and , over the years , many variations have been considered . recently , the kuramoto model has been also extended to sets of oscillators coupled through complex networks , and it has been found that the topology of the interaction network has a fundamental role in the emergence and stability of synchronized states . in particular , the presence of communities groups of tightly connected nodes has a relevant impact on the path to synchronization , and units that are close to each other on the network , or belong to the same module or community , have a higher chance to exhibit similar dynamics . this implies that nodes in the same structural module share similar functions , which is a belief often supported by empirical findings . however , various examples are found in nature where functional similarity is instead associated with morphological symmetry . in these cases , units with similar roles , which could potentially swap their position without altering the overall functioning of the system , appear in remote locations of the network . some examples include cortical areas in brains , symmetric organs in plants and vertebrates , and even atoms in complex molecules . therefore , identifying the sets of symmetric units of a complex system might be helpful to understand its organization . finding the global symmetries in a graph , i.e. , constructing its automorphism group , is a classical problem in graph theory . however , it is still unknown if this problem is polynomial or np - complete , even if there exist polynomial - time algorithms for graphs with bounded maximum degree . recent works have focused instead on defining and detecting local symmetries in complex networks . nevertheless , the interplay between the structural symmetries of a network and the dynamics of processes occurring over the network has been studied only marginally , or for specific small network motifs . in this letter we show that network symmetries play a central role in the synchronization of a system . 
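to make the setting concrete, the sketch below integrates identical kuramoto oscillators coupled through an arbitrary network with a phase-frustration (sakaguchi) term, written here as d(theta_i)/dt = omega + sigma * sum_j a_ij * sin(theta_j - theta_i - alpha), and evaluates the global order parameter; the coupling strength, frustration value and the small example graph are arbitrary illustrative choices, not the parameters used in the letter.

```python
import numpy as np

def kuramoto_sakaguchi(A, theta0, omega=1.0, sigma=1.0, alpha=0.2,
                       dt=0.01, steps=20000):
    """RK4 integration of d(theta_i)/dt = omega + sigma * sum_j A_ij *
    sin(theta_j - theta_i - alpha) on the coupling graph A."""
    def rhs(theta):
        diff = theta[None, :] - theta[:, None] - alpha   # theta_j - theta_i - alpha
        return omega + sigma * np.sum(A * np.sin(diff), axis=1)

    theta = np.array(theta0, dtype=float)
    for _ in range(steps):
        k1 = rhs(theta)
        k2 = rhs(theta + 0.5 * dt * k1)
        k3 = rhs(theta + 0.5 * dt * k2)
        k4 = rhs(theta + dt * k3)
        theta += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return np.mod(theta, 2 * np.pi)

def order_parameter(theta):
    """Global Kuramoto order parameter r = |<exp(i*theta)>|."""
    return np.abs(np.mean(np.exp(1j * theta)))

# toy network: a 5-node path 0-1-2-3-4; the mirror symmetry pairs nodes
# (0,4) and (1,3), which are expected to end up with (nearly) identical
# phases despite not being adjacent on the graph
n = 5
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
rng = np.random.default_rng(0)
theta = kuramoto_sakaguchi(A, rng.uniform(0, 2 * np.pi, n))
print(np.round(theta, 3), round(order_parameter(theta), 3))
```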
we consider networks of identical kuramoto oscillators , in which a phase frustration parameter forces connected nodes to maintain a finite phase difference , thus hindering the attainment of full synchronization . we prove that the configuration of phases at the synchronized state reflects the symmetries of the underlying coupling network . in particular , two nodes with the same symmetry have identical phases , i.e. , are fully synchronized , despite the distance between the two nodes on the graph . such a remote synchronization behavior is here induced by the network symmetries and not by an initial _ ad hoc _ choice of different natural frequencies . let us consider identical oscillators associated to the nodes of a connected graph , with nodes and links . each node is characterized , at time , by a phase whose time evolution is governed by the equation here is the natural frequency , identical for all the oscillators , and is the adjacency matrix of the coupling graph . the model has two control parameters : accounting for the strength of the interaction , and , the phase frustration parameter ranging in ] at any time , or equivalently \label{kura_lin_equi}\ ] ] where is the average degree of the network . this corresponds to a synchronization frequency . in a connected graph the laplacian matrix has one null eigenvalue and the system of eqs . ( [ kura_lin_equi ] ) is singular . consequently , at each time we can solve the system by computing the phase difference between each node and a given node chosen as reference . for instance , if in we define , by solving eqs . ( [ kura_lin_equi ] ) we obtain ] , and ] and \alpha ] . since ( commutes with ) and ( symmetric nodes have the same degree ) then we have \label{eq : final}\ ] ] combining eq . ( [ kura_lin_equi ] ) and eq . ( [ eq : final ] ) we finally obtain the linear system which is singular , i.e. , has one free variable . again , it can be solved by leaving free one of the variables , setting and considering the new system . the matrix is obtained from by removing the row and the column corresponding to node . if does not permute node with another node , then is still a permutation matrix . similarly , is the reduced laplacian , i.e. , the matrix obtained from the laplacian by deleting the row and the column . by left - multiplying by , which is not singular , we obtain since is a permutation of the phases of symmetric nodes , eq.([eq : phase_perm ] ) implies that the phases of symmetric nodes will be equal at any time , whereas by solving eq . ( [ eq : final ] ) we can get the values of the corresponding phases . this argument is valid for small values of , since the linearization of eq . ( [ kura ] ) is possible only if , but as shown in fig . [ fig2](a)-(c ) we observe the formation of the same perfectly synchronized clusters of symmetric nodes for a wide range of . however , when becomes larger than a certain value , the assumption does not hold any more and the global synchronized state loses stability . by looking at fig . [ fig2](d)-(e ) we notice that for , with for the graph , the value of steadily decreases while the dispersion of phases increases , until it reaches the expected value for a system of seven incoherent oscillators ( see fig . [ fig2](d ) and appendix ) . moreover , for the maximal lyapunov exponent of the system becomes positive and the system enters a chaotic regime ( see fig . [ fig2](e ) ) . interestingly , the results reported in fig . 
[ fig3](a)-(d ) confirm that in this regime the coherence of symmetric nodes , measured by the pairwise order parameter , is higher than expected for incoherent oscillators ( refer to appendix for additional details ) . figure [ fig3](e ) shows that for the system exhibits metastable , partially synchronized states , in which pairs of symmetric nodes alternates intervals of perfect synchronization with intervals of complete incoherence . we point out that in this regime chimera states could potentially occur and could even coexist with remote synchronization for . qualitatively similar results are obtained for different coupling topologies , but the actual value of seems to depend on the structure of the coupling network in a nontrivial way . _ application to the brain. _ as an example , we investigate here the role of symmetry in the human brain by considering anatomical and functional brain connectivity graphs defined on the same set of cortical areas ( see details in appendix ) . we have first constructed a graph of anatomical brain connectivity as obtained from dw - mri data , where links represent axonal fibers , and we used this graph as a backbone network to integrate eqs . ( [ kura ] ) . between pairs of nodes as a function of their phase differences according to the simulated kuramoto dynamics . the black solid curve corresponds to the average value over all the subjects , while the gray area covers the and the percentiles of the distribution . the dashed horizontal line indicates the threshold for statistical significant correlations ( , corrected for multiple comparisons).,width=288 ] we identified candidate pairs of anatomically symmetric areas by means of agglomerative clustering , i.e. , grouping together nodes having close phases at the stationary state ( full dendrogram and details are provided in appendix ) . then , we considered the graph of functional brain connectivity , in which links represent statistically significant correlations between the bold fmri time - series of cortical areas ( see details in appendix ) . figure [ fig4 ] illustrates the results for ( we obtained qualitatively similar results in a wide range of ) . consider nodes 57 and 74 , corresponding respectively to the green and blue areas in panel ( a ) . not only the two areas are spatially separated , but there is no edge connecting the two corresponding nodes in the anatomical connectivity network . however , the two nodes are detected as a candidate symmetric pair since at the stationary state of the kuramoto dynamics in eq . ( [ kura ] ) the oscillators associated to these two nodes have very close phases ( see dendrogram in appendix ) . as shown in fig . [ fig4](b ) , also the bold fmri signals corresponding to nodes 57 and 74 also are strongly synchronized . we obtain remarkably different results when we consider node 74 and node 76 . these nodes correspond to two spatially adjacent areas of the brain ( the red and blue regions in fig . [ fig4](a ) and are directly connected in the anatomical connectivity network . however , at the stationary state of eq . ( [ kura ] ) the phase difference of the oscillators associated to node 74 and 76 is quite large . interestingly , in this case the fmri time - series associated to these nodes are much less similar to each other [ see the two bottom trajectories reported in fig . [ fig4](b ) ] . to quantify this effect , we plot in fig . 
[ fig4]c the average functional correlation between the fmri activity of pairs of brain areas as a function of the phase differences between the phases of the corresponding oscillators , obtained from the dynamics of eq . ( [ kura ] ) on the anatomical connectivity network . the fact that decreases with suggests that structural symmetry plays an important role in determining human brain functions . in fact , the functional activities of anatomically symmetric areas can be strongly correlated , even if the areas are distant in space . these results suggest that the study of anatomical symmetries in neural systems might provide meaningful insights about the functional organization of distant neural assemblies during diverse cognitive or pathological states . applied to other connectivity networks as a method to spot potential network symmetries , our study could provide new insights on the interplay between structure and dynamics in complex systems . the authors thank yasser iturria - medina for sharing the dti connectivity data used in the study , and simone severini for useful comments . m. v. acknowledges financial support from the spanish ministry of science and innovation , juan de la cierva programme ref . jci-2010 - 07876 . a. d - g . acknowledges support from the spanish dgicyt grant fis2009 - 13730 , from the generalitat de catalunya 2009sgr00838 . this work was supported by the eu - lasagne project , contract no.318132 ( strep ) .
pulsed radio - frequency ( rf ) discharges are becoming an increasingly important subset of rf plasmas , particularly for industrial applications where ion energy distribution control is important .pulsed plasmas are often used to mitigate charge build up on surface substrates .electrons are the species responsible for this mitigation , so time resolved electron density measurements are particularly important . obtaining these time resolved measurements of transients in rf pulsed discharges can be done through a number of diagnostic methods .however , highly time resolved measurements on the order of 100 s of nanoseconds can be considerably more difficult} ] , which is important for higher pressure plasmas where collisions will negatively influence the signal quality . after deciding this, one needs estimates for the pressure ranges , electron densities , and electron temperatures .the peak electron densities combined with the desired hairpin dimensions will yield the required maximum signal generator frequency required , as seen in equation 1 .the hairpin dimensions need to be such that the radii of the hairpin wires must be much less than the electron mean free path and also long enough to keep the resonance frequency low .further hairpin probe design considerations can be found in literature} ] .variation of the pressure revealed a change in slope , or a `` kink '' , when pressure was plotted against decay constant , shown in figure 4 .this kink was found to be near the same pressure the vacuum vessel transitions from the constant diffusion regime to the variable mobility regime .the slopes for both regions are consistent with the slopes expected from both regimes .the power was varied at a pressure of 30 mtorr , with a pulse frequency of 500 hz , duty cycle of 50% , and a flow rate of 10 sccm , as seen in figure 7 .the decay constant is relatively independent of power , except for the region around 15 - 20 .this power coincides with the e - h transition for this chamber at these settings .a plot of the e - h transition at the same settings is shown in figure 8 .the electron temperature at the center of an icp plasma increases dramatically when it transitions to h mode .higher electron temperatures suggest a larger diffusion coefficient} ] , which would be a larger enough timescale to have a noticeable effect on the diffusion in the period that electron density decay time constants are typically taken from ( )} ] .a time - averaged power measurement will have to be taken to decouple these two competing phenomena .argon and oxygen discharges are measured over a number of different parameters including pressure , power , pulse frequency , flow , duty cycle , and ar / o ratios .this work is supported by the nsf doe partnership on plasma science , the nsf goali program , and mks instruments . a special thanks to lynda larson at treasure isle jewelers for doing such an excellent job custom making our hairpin probe .
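as a worked example of how a measured resonance shift maps to an electron density (presumably the role of equation 1 above), the sketch below uses the standard hairpin relations f_0 = c / (4 l) and f_r^2 = f_0^2 + f_p^2, with f_p [hz] ~ 8980 * sqrt(n_e [cm^-3]); the hairpin length, densities and frequencies in the example are made up, not measurements from this experiment.

```python
import numpy as np

C = 2.998e8          # speed of light [m/s]

def vacuum_resonance(L):
    """Quarter-wavelength vacuum resonance f0 = c / (4 L) of a hairpin of length L [m]."""
    return C / (4.0 * L)

def required_generator_frequency(L, n_e_max):
    """Maximum microwave frequency needed to cover densities up to n_e_max [cm^-3],
    using f_r^2 = f_0^2 + f_p^2 with f_p [Hz] ~= 8980 * sqrt(n_e [cm^-3])."""
    f0 = vacuum_resonance(L)
    fp = 8980.0 * np.sqrt(n_e_max)
    return np.sqrt(f0**2 + fp**2)

def electron_density(f_r, f_0):
    """Invert the same relation: n_e [cm^-3] from measured resonance f_r and vacuum f_0 [Hz]."""
    return (f_r**2 - f_0**2) / 8980.0**2

# illustrative numbers only: a 3 cm hairpin and a peak density of 5e10 cm^-3
L = 0.03
print(vacuum_resonance(L) / 1e9, "GHz vacuum resonance")
print(required_generator_frequency(L, 5e10) / 1e9, "GHz generator ceiling")
print(electron_density(3.2e9, vacuum_resonance(L)), "cm^-3 inferred density")
```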
time resolved electron density measurements in pulsed rf discharges are demonstrated with a hairpin resonance probe built from low cost electronics , on par with normal langmuir probe boxcar mode operation . time resolution of less than one microsecond has been demonstrated . a signal generator produces the applied microwave frequency ; the reflected waveform is passed through a directional coupler and filtered to remove the rf component . the signal is heterodyned with a frequency mixer and read by an oscilloscope . at certain points during the pulse , the plasma density is such that the applied frequency matches the resonance frequency of the probe / plasma system , creating a dip in the reflected signal . the applied microwave frequency is shifted in small increments in a frequency boxcar routine to determine the density as a function of time . the system uses a grounded probe to produce low cost , high fidelity , and highly reproducible electron density measurements that can work in harsh chemical environments . measurements are made in an inductively coupled system , driven by a single frequency pulsed generator at 13.56 mhz , and are compared to results from the literature .
pace missions can be impaired by intentional or unintentional jamming .such a threat is particularly dangerous for telecommands ( tc ) , since the success of a mission may be compromised because of the denial of signal reception by the satellite .it is well known that to counter the jamming threat , error correcting codes can be used jointly with direct sequence spread spectrum .this topic has been investigated in previous literature , but rarely taking into account the peculiarities of the tc space link . as a consequence , in current standards or recommendations on tc space links , only weak countermeasuresare included , that do not appear adequate to face the increasing skill of malicious attacks .whilst for the spreading technique a relevant advance is brought by the introduction of long cryptographic pseudo - noise sequences , the discussion is quite open as regards possible usage of new coding techniques . as a matter of fact, the only error correcting code currently included in the standards and recommendations for tc applications , is the bch code with dimension and length exploiting hard - decision decoding .the performance of this code in the presence of jamming is generally not good and it is possible to verify that significant losses appear even when an interleaver ( not currently included in the standard ) is employed .actually , the performance of the bch code is unsatisfactory even when considered on the additive white gaussian noise ( awgn ) channel . for this reason a number of new proposals have been formulated with emphasis on binary and non - binary low - density parity - check ( ldpc ) codes , , . since good correction capability and short frames are needed , the most recent proposals consider codes with rate and , or .using non - binary ldpc codes allows to improve on their binary counterparts : as an example , a non - binary ldpc code at a codeword error rate ( cer ) of about gains roughly db over the binary ldpc code .the performance of these codes over jamming channels has not yet been investigated .in this paper we consider different types of jamming signals , namely : pulsed jamming , continuous wave ( cw ) jamming and pseudo - noise ( pn ) jamming .we also study the impact of jammer state information ( jsi ) , clipping and interleaving , under typical tc application constraints . in order to assesshow far the performance of the considered codes is from the theoretical limits , we extend the concept of shannon s sphere packing lower bound ( splb ) . through our analysis , we are able to identify which are the critical values of the signal - to - interference ratio ( sir ) for which the coding scheme is no more able to guarantee an acceptable level of protection . as further possible candidate schemes , we consider short parallel turbo codes ( ptc ) and extended bch ( ebch ) codes with soft - decision decoding . for decoding the ebch codes we consider the most reliable basis ( mrb ) algorithm , which has been successfully applied to these codes over the awgn channel .we extend its use also to the jamming channel .the organization of the paper is as follows . in section [ sec : two ] we introduce the types of jamming . in section [ sec : three ] we give examples of the performance of the bch code over the jamming channel . in section [ sec : four ] we describe the new coding schemes and in section [ sec : five ] the performance metrics adopted , including an extension of the splb . in section [ sec : six ] we provide some numerical examples . 
in section [ sec : seven ] we evaluate the impact of finite length interleavers for the system using ptc . finally , section [ sec : eight ] concludes the paper .the definition of the types of jamming we consider is shortly reminded next .we assume that the system adopts a direct sequence - spread spectrum ( ds - ss ) , and binary phase shift keying ( bspk ) modulation with carrier circular frequency .the ds - ss is characterized by bandwidth and processing gain . moreover ,the length ( period ) of the spreading sequence is denoted by .a pulsed jamming signal has the following characteristics : * white gaussian noise on the whole bandwidth ; * discontinuity , with pulse active time and period , which means that the pulse is active for a fraction of time ( also called duty cycle ) ; * power during the active time , and zero for the remaining time . during the active timethe jamming signal has a power spectral density which is constant over the band with value , where . for proper comparisonit is also useful to introduce an equivalent ( with the same energy ) gaussian continuous jamming signal .since the same energy is transmitted over instead of , it has a power .this equivalent jamming signal has a power spectral density constant over the band , with value , where .the error rate performance can be expressed in the terms of the ratio between the energy per bit and the equivalent one - side jamming spectral density .when an error correcting code is applied , the impact of pulsed jamming can be mitigated through interleaving .this is because most of the forward error correction schemes are designed for an awgn channel which exhibits no memory .they do not handle bursts of errors .an interleaver distributes a burst of errors among many consecutive codewords . by doing thisthe number of errors contained in each codeword is limited , the code is able to correct them and the burst is neutralized . for analysis purposes, we can initially refer to an ideal interleaver .this implies that if a burst of errors corresponds to a fraction of the symbols , its impact after de - interleaving is modeled as a probability , for each symbol , of having a higher noise variance .even if an ideal interleaver can not be implemented , it is very useful for analytically investigating the performance over jamming channels .then , the performance in the presence of a real interleaver can be determined through simulation .a cw jamming is a narrowband , continuous signal of type hence , it is a pure tone with : * circular frequency which , in general , may be different from the signal circular frequency ; * initial phase which , in general , may be different from the signal initial phase ( conventionally set to ) ; * power .the worst case occurs when . under the hypothesis of having a large , the jamming contribution on a generic symbolcan be modeled by a gaussian random variable with zero mean and variance , where is the bit time duration . 
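a minimal monte-carlo sketch of the pulsed jamming channel just described, in which each bpsk symbol is hit with probability equal to the duty cycle by an extra gaussian term whose variance is the equivalent continuous-jammer variance divided by the duty cycle, is given below; all numerical values are placeholders.

```python
import numpy as np

def pulsed_jamming_channel(bits, ebn0_db, ebnj_db, rho, rng):
    """BPSK over AWGN plus pulsed jamming (ideal interleaving assumed, so each
    symbol is jammed independently with probability rho). ebnj_db refers to the
    equivalent continuous jammer; the active-time variance is divided by rho."""
    x = 1.0 - 2.0 * bits                        # BPSK mapping 0 -> +1, 1 -> -1
    n_var = 1.0 / (2.0 * 10 ** (ebn0_db / 10))  # thermal noise variance (Es = Eb here)
    j_var = 1.0 / (2.0 * 10 ** (ebnj_db / 10))  # equivalent continuous jammer variance
    jammed = rng.random(len(bits)) < rho        # which symbols fall inside a pulse
    var = n_var + jammed * (j_var / rho)        # variances add; the pulse concentrates energy
    y = x + rng.normal(0.0, np.sqrt(var))
    return y, jammed

# uncoded hard-decision BER as a sanity check of the channel model
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 200_000)
y, _ = pulsed_jamming_channel(bits, ebn0_db=10.0, ebnj_db=3.0, rho=0.1, rng=rng)
ber = np.mean((y < 0).astype(int) != bits)
print(f"uncoded BER ~ {ber:.4f}")
```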
in the case ofcw jamming it is preferable to express the error rate performance in terms of the sir , , where is the signal power .let us denote by the spreading sequence used in the ds - ss system .a pn jamming is a signal of type hence , it is a ds - ss signal with : * circular frequency equal to ; * spreading sequence different from ( although it may have the same length ) ; * time delay on the spreading sequence ; * power .a common choice for the spreading sequence is to use a gold code .the novel cryptographic pn sequences proposed for tc applications are very long ( a suggested length is ) . once again , under the hypothesis of a large , the interfering contribution on a generic symbol can be modeled by a gaussian random variable with zero mean and variance .thus we can obtain the same expression as for cw jamming with .for all the types of jamming the channel is also impaired by thermal noise with signal - to - noise ratio per bit . as the two disturbances are independent one each other , when simultaneously present , their variances can be summed .the tc protocols for synchronization and channel coding are specified ( with some differences ) both in the recommendation issued by the consultative committee for space data systems ( ccsds ) and in the standard issued by the european cooperation for space standardization ( ecss ) .let us refer to the ccsds recommendation : it specifies the functions performed in the `` synchronization and channel coding sublayer '' in tc ground - to - space ( or space - to - space ) communication links . in short ,the sublayer takes transfer frames ( tfs ) produced by the upper sublayer ( `` data link protocol sublayer '' ) , elaborates them and outputs communications link transmission units ( cltus ) that are passed to the lower layer ( ` physical layer' ) where they are mapped into the transmitted waveform by adopting a proper modulation format .details on the structure of the tf and cltu can be found in , and are here omitted for the sake of brevity .the current ccsds recommendation and ecss standard use a bch code for error protection against noise and interference . at the receiverside a hard decision is taken on the received symbols .the performance against pulsed jamming of the hard - decision decoded bch code is rather poor .examples are shown in fig .[ fig : standard_no_int ] for the case without the interleaver ( as addressed by the current standard ) and in fig .[ fig : standard_with_int ] for the case with the ( ideal ) interleaver .performance is expressed in terms of the cer ; the value of has been set equal to db and the cer is plotted as a function of for some values of . 
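for reference, the kind of cer curve discussed here can be approximated analytically for a hard-decision bounded-distance decoder: with combined thermal and jamming contributions the raw bpsk symbol error probability follows from the q-function, and a codeword fails whenever more than t symbols are wrong. the sketch below leaves the correction radius t as a parameter, since it is only meant to illustrate the computation, not to reproduce the exact decoder of the standard.

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def cer_bounded_distance(n, t, ebn0_db, ebnj_db, rate):
    """Approximate codeword error rate of a length-n code correcting up to t
    symbol errors, BPSK over AWGN plus an equivalent continuous jammer whose
    one-sided spectral densities simply add (Es/N_tot = rate * Eb / (N0 + NJ))."""
    inv_snr = 10 ** (-ebn0_db / 10) + 10 ** (-ebnj_db / 10)
    p = q_func(math.sqrt(2.0 * rate / inv_snr))      # raw symbol error probability
    ok = sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1))
    return 1.0 - ok

# example with a length-63, rate 56/63 code; t = 1 is an illustrative assumption
print(cer_bounded_distance(n=63, t=1, ebn0_db=10.0, ebnj_db=6.0, rate=56 / 63))
```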
as expected , performance degrades for decreasing ; the use of the ( ideal ) interleaver introduces for an improvement that however remains unsatisfactory to the point that reaching cer = , that is a reference value for tc applications , is practically impossible for and .code over pulsed jamming channel , for db and no interleaver .[ fig : standard_no_int],width=340 ] code over pulsed jamming channel , for db and ideal interleaver .[ fig : standard_with_int],width=340 ] an example of the performance of the hard - decision decoded bch code against cw jamming is shown in fig .[ fig : standard_cw ] for , and ( worst case ) .the sir value is assumed as a parameter and we see that a sir in the order of db makes the system unpractical .a further reduction in the value of would require higher sir values ; for example , by assuming , the cer target of becomes practically unreachable just for sir db .code over cw jamming channel ( and ) , for .[ fig : standard_cw],width=340 ] for pn jamming it is possible to verify that under the gaussian approximation its impact is equivalent to that of cw jamming with and . in fig .[ fig : standard_pn ] we have plotted the cer curve as a function of the sir for different values of and db . so this curve also applies to the worst case cw jamming with the same sir .code over pn jamming channel , for fixed db . [ fig : standard_pn],width=340 ]the codes proposed for tc channel coding updating have length greater ( and ) and rate smaller ( ) than the standard code .also the code type has been changed , with the aim to introduce state - of - the - art codes .the mostly addressed candidates , in this sense , are ldpc codes , both binary and non - binary .recently , however , we have also shown that potential competitors can be ptc and even bch codes , if they are soft - decision decoded with maximum likelihood ( ml)-like algorithms characterized by limited complexity .the main features of these schemes are reminded next .a class of binary ldpc codes that is suitable for tc applications has been proposed by the national aeronautics and space administration ( nasa ) and is described in .it is based on the adoption of three systematic short binary ldpc codes designed using protographs with circulant matrices .soft - decision decoding can be realized by using the classic sum - product algorithm with log - likelihood ratios ( llr - spa ) .non - binary ldpc codes with the same lengths have been analyzed by nasa and independently by the deutsches zentrum fr luft- und raumfahrt ( dlr ) in a joint work with the university of bologna ( unibo ) . in this paperwe refer to the dlr - unibo implementation .decoding is realized by using iterative algorithms based on fast hadamard transforms .parallel turbo codes are one of the coding options of the ccsds recommendation for telemetry ( tm ) links . the ccsds turbo encoder is based on the parallel concatenation of two equal 16-state systematic convolutional encoders with polynomial description .the interleavers are based on an algorithmic rule proposed by berrou and described in .the ccsds turbo encoder has four possible information frame lengths : and bits .the nominal code rate can be and . however , higher rates are obtainable by puncturing . maintaining unchanged the encoder structure ,we have considered frame lengths shorter than those in the tm recommendation and fixed the nominal code rate to , in such a way as to comply with the nasa s choices discussed above . 
because of the shorter length , we can not use the interleavers in and we must design new smaller interleaving structures . among a wide number of different options , we have focused attention on : completely random , spread , quadratic permutation polynomial ( qpp ) and dithered relative prime ( drp ) interleavers . moreover , since the constituent ccsds convolutional codes have states , four extra - tail bits are needed for termination ; then , the turbo codeword length is . as an example , for the case of , this implies to have and an actual code rate . in order to achieve exactly the code rate ,as it is necessary for fair comparison , we have implemented a suitable puncturing strategy .the interleaver and the puncturing pattern have been jointly optimized , in such a way as to maximize the minimum distance and minimize the codewords multiplicity ( i.e. , the number of codewords with hamming weight ) ; these parameters , in fact dominate the code performance at low error rates .the results of the design optimization are shown in table [ tab : syst ] ; besides and , also the information multiplicity ( sum of the input weights over all the codewords with weight ) is provided since , together with , it determines the asymptotic bit error rate performance ..selected interleavers for the parallel turbo codes .[ cols="^,^,^ " , ] [ tab : syst ] the decoding of turbo codes is performed by iteratively applying the well - known bcjr algorithm to the constituent encoders .although soft - decision decoding of bch codes is generally complex , in the case of short bch codes with high rate , an exact ml soft - decision decoding is possible , through its trellis representation ( for example , based on the viterbi or the bcjr algorithms ) .however , having now decided to use codes with rate and the shortest length , these techniques are too involved and therefore can not be applied .alternative solutions can be found , at least for the case of .in fact , the ebch code can be efficiently decoded by using sub - optimal soft - decision decoding algorithms .several options are available for this purpose , also exploiting ldpc - like code representations . for this code, we have focused on the mrb algorithm , which has very good performance and acceptable complexity .it consists of the following steps : 1 .identify most reliable received bits and obtain from them a vector .2 . construct a systematic generator matrix corresponding to these bits .3 . encode by to obtain a candidate codeword .4 . choose the order of the algorithm .5 . consider all ( or a proper subset of ) test error patterns of length and weight .6 . for each of them :sum to , encode by , verify if the likelihood is higher than that of the previous candidate codeword and , if this is true , update the candidate .further details can be found in and the references therein .the mrb algorithm can be applied also to the other schemes ( to ldpc codes , in particular ) where , depending on the order , it can provide performance comparable to , or even better than , that offered by the iterative algorithms .the performances of the new coding schemes presented in section [ sec : four ] have been recently discussed and compared over the awgn channel .we have verified they can provide an advantage of more than db over the current bch code . in section [ sec : six ] we will show that similar improvements can be achieved against jamming . for soft - decision decoding , the knowledge of the jamming state can play a relevant role . 
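the most reliable basis procedure listed above is essentially order-limited ordered-statistics decoding; a compact (unoptimised) python sketch is shown below. it works for any binary linear code given its generator matrix, and the small (7,4) hamming code at the end is only a toy check, not one of the codes considered in the paper.

```python
import itertools
import numpy as np

def mrb_decode(G, y, order=1):
    """Most-reliable-basis (ordered-statistics) decoding of a binary linear
    code with k x n generator matrix G, received soft values y (BPSK, 0 -> +1),
    flipping up to `order` of the k most reliable independent bits."""
    k, n = G.shape
    hard = (y < 0).astype(int)
    perm = np.argsort(-np.abs(y))                 # positions sorted by reliability
    # Gaussian elimination so G is systematic on the first k independent
    # positions of the reliability ordering (the most reliable basis)
    Gp = G[:, perm].copy() % 2
    basis, row = [], 0
    for col in range(n):
        piv = next((r for r in range(row, k) if Gp[r, col]), None)
        if piv is None:
            continue
        Gp[[row, piv]] = Gp[[piv, row]]
        for r in range(k):
            if r != row and Gp[r, col]:
                Gp[r] ^= Gp[row]
        basis.append(col)
        row += 1
        if row == k:
            break
    info = hard[perm][basis]                      # hard decisions on the basis bits
    best_cw, best_metric = None, -np.inf
    for w in range(order + 1):
        for flips in itertools.combinations(range(k), w):
            t = info.copy()
            t[list(flips)] ^= 1
            cw_perm = t @ Gp % 2                  # candidate codeword (permuted order)
            cw = np.empty(n, dtype=int)
            cw[perm] = cw_perm
            metric = np.sum((1 - 2 * cw) * y)     # correlation with the soft input
            if metric > best_metric:
                best_cw, best_metric = cw, metric
    return best_cw

# toy check with the (7,4) Hamming code: one badly corrupted symbol is recovered
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
msg = np.array([1, 0, 1, 1])
x = 1 - 2 * (msg @ G % 2)                         # BPSK mapping
y = x + np.array([0.1, -0.2, 0.05, 0.3, -1.2, 0.2, 0.1])
print(mrb_decode(G, y, order=2))
```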
for the case of pulsed jamming , for example , to have jsi means to know the noise variance for each symbol .thus , in our simulations , we have considered the case of perfect jsi , which means the receiver is able to identify the fraction , , of symbols affected by jamming and the remaining fraction , , that is only affected by thermal noise , and properly estimate their noise variances , which are and , respectively . on the opposite side ,we have also considered the case when jsi is not available , and the variance used for llr calculation of the decoder input is always equal to the average value .the goodness of the proposed solutions can be measured through the distance of the cer curves from the splb . among the various approaches available to compute the splb, the most suitable one is the so - called sp59 .a modified version of this bound is also available ( called sp67 ) , that is able to take into account the constraint put by the signal constellation ( bpsk in the present analysis ) .more recent improvements , are significant only for high code rates or long codeword lengths , and these conditions are not satisfied by the codes here of interest .thus , in the present study , we consider the sp59 as the most significant splb .the splb reported in the literature refers to the awgn channel and needs generalization .this is easy to achieve for pulsed jamming on the condition that the pulse duration is a multiple of the codeword length and no interleaving is applied . for this purpose ,let us denote by splb the sp59 bound for the awgn channel . under the hypotheses above an extended sphere packing lower bound , esplb , for the case of pulsed jammingcan be defined as : by setting in , we obtain an expression which is valid also for cw and pn jamming channels , when the gaussian approximation is applied . the esplb given bywill be considered in section [ sec : six ] as a useful benchmark for the case with jsi and without interleaving .due to limited space , we focus on pulsed jamming and on codes with and . the analysis can be extended to the longer codes for which , however , the complexity issue for the decoding algorithms adopted can become more critical . the performances of the new coding schemes presented in section [ sec : four ] are compared in figs .[ fig : results_noint_nojsi]-[fig : results_withint_withjsi ] assuming , db and variable , with and without an ( ideal ) interleaver , with and without jsi . in the latter case , to limit the impact of the incorrect noise estimation a clipping threshold , equal to twice the amplitude , has been applied to the signal at the channel output . for the ptcwe have used the optimal drp interleaver reported in table [ tab : syst ] .the order used for the mrb algorithm applied to the ebch code is . 
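the role of jammer state information in the soft metric can be made explicit as follows: with jsi each symbol llr uses its own noise variance, without jsi a single average variance is used and the received samples are first clipped; the clipping level of twice the signal amplitude follows the description above, while the remaining numbers are placeholders.

```python
import numpy as np

def llr_with_jsi(y, jammed, n_var, j_var_active):
    """Per-symbol LLR when the receiver knows which symbols were jammed:
    LLR_i = 2 y_i / sigma_i^2, with sigma_i^2 = n_var (+ j_var_active if jammed)."""
    var = n_var + jammed * j_var_active
    return 2.0 * y / var

def llr_without_jsi(y, rho, n_var, j_var_active, clip=2.0):
    """Without JSI the decoder only knows the average variance; samples are
    first clipped to +-clip (twice the unit signal amplitude) to limit the
    damage of the mismatched variance on jammed symbols."""
    avg_var = n_var + rho * j_var_active
    return 2.0 * np.clip(y, -clip, clip) / avg_var

# quick comparison on a handful of symbols (values are arbitrary)
jammed = np.array([0, 0, 1, 0, 1], dtype=float)
y = np.array([0.9, -1.1, 3.4, 1.0, -2.8])
print(llr_with_jsi(y, jammed, n_var=0.5, j_var_active=5.0))
print(llr_without_jsi(y, rho=0.4, n_var=0.5, j_var_active=5.0))
```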
without interleaving and without jsi .[ fig : results_noint_nojsi],width=340 ] with interleaving and without jsi .[ fig : results_withint_nojsi],width=340 ] without interleaving and with jsi .[ fig : results_noint_withjsi],width=340 ] with interleaving and with jsi .[ fig : results_withint_withjsi],width=340 ] for three of the considered scenarios the relative behavior of the proposed schemes is very similar : the best performance is achieved by the ebch code , while ptc and non - binary ldpc codes are very close one each other and suffer a penalty with respect to the ebch code that depends on the simulation conditions .an interesting exception occurs for the case with interleaving and without jsi where , because of the ordering mechanism that is at the basis of mrb , the ebch code loses its leadership .the ebch code is also very close to the esplb , where applicable ( see fig .[ fig : results_noint_withjsi ] ) . on the contrary , the performance of the binary ldpc code is rather poor with a loss than can be larger than db with respect to the best solution .these gaps are even more pronounced than those found over the awgn channel .the results shown in the previous section for the systems using interleaving referred to the adoption of an ideal interleaver . in this section we discuss the effect of using real interleavers characterized by finite length .we consider the particular case of using ptc , but a similar analysis could be developed for the other schemes . with reference to the short description given in section [ sec :three ] ( but further details can be found in ) , we suppose that interleaving is applied at cltu level and also taking into account the possible presence of partitioning . this occurs when the tf has length that is not a multiple of : the tf is partitioned into input blocks and , if needed , zero filling is used to complete the last block .each block is then encoded producing cltu coded bits . for the sake of simplicity , in this first evaluation we neglect the presence of the preamble ( bits ) and the postamble ( 64 bits ) that are added for cltu synchronization .so , we assume that an interleaver is applied to the cltu coded bits to increase the protection against bursts ; it involves all the codewords of the cltu . as a simple example, we consider a square row - by - column interleaver where ; the cltu coded bits are written by row and read by column ( or vice versa ) . in this case , wishing to compare the impact of bursts of growing length it is preferable to refer directly to the power and the corresponding ratio .an example of the impact of a real interleaver on the transfer frame error rate ( fer ) in the considered scenario is shown in fig .[ fig : real_int ] , for db , and a burst length of bits . we observe that the cltu interleaver is very effective against bursts produced by pulsed jamming . for long tfs ( like the one here considered )the burst is practically neutralized by the interleaver . 
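the square row-by-column cltu interleaver amounts to writing the coded bits row-wise into an s x s array and reading them out column-wise, so that a burst of consecutive channel errors is spread over up to s different codewords; a minimal sketch with an artificial burst is shown below.

```python
import numpy as np

def interleave(bits, s):
    """Write s*s bits row by row, read them out column by column."""
    return np.asarray(bits).reshape(s, s).T.reshape(-1)

def deinterleave(bits, s):
    """Inverse operation; for a square array it is the same transpose."""
    return np.asarray(bits).reshape(s, s).T.reshape(-1)

# a burst of 8 consecutive flipped bits on the channel ...
s = 16
tx = interleave(np.zeros(s * s, dtype=int), s)
rx = tx.copy()
rx[np.arange(40, 48)] ^= 1            # contiguous positions hit by the jamming pulse
# ... is spread out after de-interleaving: the errors end up s positions apart
errors = deinterleave(rx, s)
print(np.nonzero(errors)[0])
```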
without interleaving and using a row - by - column interleaver , for a burst length of bits ; db and .[ fig : real_int],width=340 ]for the first time at our knowledge , the performance over jamming channels of new coding schemes that potentially replacing the bch code in the space tc channel coding standard has been investigated .the adoption of soft - decision decoding , combined with interleaving and jsi , allows to achieve significant improvements .whilst it is confirmed that non - binary ldpc codes , that are considered the most eligible candidates , are generally very good , we have shown that comparable performances can also be achieved by using ptc or ebch codes .we have studied the case of and , where ebch codes often offer the best results at an acceptable complexity .this fact , however , can not be used to draw general conclusions .first of all , extending the sub - optimal decoding algorithms to longer codes , while maintaining acceptable complexity , may be quite difficult .additionally , the sub - optimal algorithms generally define `` complete '' decoders .as it is well known , this may be a penalty for the undetected codeword ( or frame ) error rate that , in tc applications , is at least as important as the codeword ( or frame ) error rate .further work is progress to assess either the complexity and the completeness issues .g. liva , e. paolini , t. de cola , and m. chiani , `` codes on high - order fields for the ccsds next generation uplink , '' in _ proc .2012 6th advanced satellite multimedia systems conf .( asms ) and 12th signal proc .space commun .workshop ( spsc ) _ , wessling , germany , sep .l. costantini , b. matuz , g. liva , e. paolini , and m. chiani , `` non - binary protograph low - density parity - check codes for space communications , '' _ int .commun . and networking _ , vol .30 , pp . 4351 , 2012 .g. p. calzolari , e. vassallo , f. chiaraluce , and r. garello , `` turbo code applications on telemetry and deep space communications , '' in _ turbo code applications : a journey from a paper to realization _ , k. sripimanwat , ed.1em plus 0.5em minus 0.4emspringer , 2005 , ch . 13 , pp .321344 .m. baldi , g. cancellieri , and f. chiaraluce , `` iterative soft - decision decoding of binary cyclic codes based on spread parity - check matrices , '' in _ proc .softcom 2007 _ , split - dubrovnik , croatia , sep .2007 .m. baldi , m. bianchi , f. chiaraluce , r. garello , i. aguilar sanchez , and s. cioni , `` advanced channel coding for space mission telecommand links , '' in _ proc .ieee 78th vehicular technology conference ( vtc fall 2013 ) _ , las vegas , nv , sep . 2013 .
the aim of this paper is to study the performance , in the presence of jamming , of some coding schemes recently proposed for updating the tc channel coding standard for space applications . besides low - density parity - check codes , which appear to be the most eligible candidates , we also consider other solutions based on parallel turbo codes and extended bch codes . we show that all these schemes offer very good performance , approaching the achievable theoretical limits . index terms : error correcting codes , jamming , telecommands .
the search to directly image an extrasolar planet requires contrast levels of a few from the central star .scattered light in a telescope and the diffraction pattern of the telescope s aperture limit the contrast possible for direct detection of faint companions .the circular aperture of telescopes creates a sub - optimal diffraction pattern , the so - called airy pattern which is azimuthally symmetric .in addition , the intensity in the diffraction pattern of the circular aperture declines as , where . currently the best way to diminish the airy pattern is to use a coronagraph by using the combination of a stop in the focal plane that rejects a majority of the central bright object s light and a lyot stop in the pupil plane to reject high frequency light ( lyot 1939 ; malbet 1996 ; sivaramakrishnan et al . 2001 ) .several recent ideas explore the use of alternative `` apodized '' apertures for high contrast imaging in the optical or near - infrared ( nisenson & papaliolios 2001 ; spergel 2002 ; debes , ge , & chakraborty 2002 ) . these designs revisit concepts first experimented with in the field of optics ( jacquinot & roizen - dossier 1964 ) .other designs , such as the band limited mask , seek to null the light from a central star in much the same way that a nulling interfermoeter performs ( kuchner & traub 2002 ) . by placing a mask into the pupil plane with a gaussian aperture, one can transform a traditional circular aperture telescope into one with a diffraction pattern better suited for high contrast imaging . usinga mask represents a quick , efficient , and cheap way to test this emerging imaging method to determine its advantages and tradeoffs and compare them to the performance of other existing techniques .preliminary results of observign with a prototype gaussian pupil mask can be found in debes et al .( 2002 ) . in thisproceeding we report further lab tests of the performance of gapms compared to lyot coronagraphs with similar throughput as well as testing the new technique of combining a gapm with a coronagraphic image plane mask .we theoretically compare different techniques with the same throughput on the hubble space telescope to determine what may be useful in a real spacecraft .a fairer comparison between a lyot coronagraph and the gapm is to have equal throughput designs and compare their contrast levels . as part of lab experiments that we are performingwe compared two new gapm designs with a lyot coronograph that had a comparable throughput .two types of designs were tested , idealized apertures with no secondary structure ( 20% throughput ) and realistic masks that will be used for future observing ( 30% throughput ) , which have two gaussian aperture per quadrant , avoiding support structure .unsaturated images gapms were taken to predict the flux for the longer , saturated images .we took exposures that had on the order of 10 counts in order to image the psf .we found that this was insufficient to get high s / n on the fainter portions of the psf and we estimate that beyond 1 - 2 the read noise begins to dominate .longer integrations are planned . for the coronagraphic modes we used a gaussian transmission focal plane mask with a fwhm of 500 ( ) .short exposures were taken without the mask for estimating the peak flux of an unblocked point source for a given exposure time .the mask was then carefully aligned to within 1 pixel to block the point source .figure 1 shows the results of these lab tests . 
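the qualitative effect of apodizing the pupil can be previewed numerically: the monochromatic point spread function is, up to scaling, the squared modulus of the fourier transform of the pupil function, so comparing a clear circular aperture with a gaussian-apodized one shows how the apodization suppresses the airy wings; grid size, aperture radius and gaussian width below are arbitrary illustrative parameters, not the dimensions of the masks used in the experiments.

```python
import numpy as np

def psf(pupil, pad=4):
    """Monochromatic PSF: squared modulus of the Fourier transform of the
    pupil function, zero-padded for resolution and normalised to a unit peak
    so that contrast can be read off directly."""
    field = np.fft.fftshift(np.fft.fft2(pupil, s=(pad * pupil.shape[0],) * 2))
    p = np.abs(field) ** 2
    return p / p.max()

n = 256
coords = np.linspace(-1.0, 1.0, n)
x, y = np.meshgrid(coords, coords)
r = np.hypot(x, y)

circular = (r <= 0.9).astype(float)                 # clear circular aperture
gaussian = np.exp(-(r / 0.35) ** 2) * (r <= 0.9)    # gaussian-apodized aperture

offset = 120                                        # fixed off-axis position (pixels)
for name, pupil in (("circular", circular), ("gaussian", gaussian)):
    p = psf(pupil)
    cut = p[p.shape[0] // 2]                        # horizontal cut through the PSF core
    print(f"{name:9s} contrast at offset {offset}: {cut[cut.size // 2 + offset]:.1e}")
```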
in both cases the hybrid designs perform as well or better , both reaching . the flattening of the profile suggests that the observations were hitting the read noise limit of the picnic detector on piris . other experiments , in which the mask was misaligned by several pixels , gave dramatically worse results , underscoring the need for subpixel alignment and stability over an observation . this points to the utility of the gapms alone for quick surveys and of hybrid or lyot designs for deeper searches . given the large number of potential ideas for high contrast imaging , and the lack of many lab tests of these designs , a way to model some of the various contrast degradations present in a real space mission would be useful for determining which designs are better suited for future tpf type missions . a strong test would be to compare the performance of the different designs on the hst , where many of these different errors are well modeled . tinytim , the psf modeling software used by the space telescope science institute , has accurate wavefront error maps of the telescope ( krist 1995 ) . using these wavefront error maps as input to our models allows us to compare all of the different designs on an equal footing of throughput . figure 2 shows a horizontal cut along the axis of highest contrast for all of the designs . in this preliminary simulation the band limited mask performs slightly better than the other designs for the same amount of throughput . this is because the lyot stop for the band limited mask is undersized from its optimal shape by about 10 , which blocks some of the residual light that leaks through the focal plane mask . these simulations ignore mask errors in both the focal and pupil planes . possibly with psf subtraction techniques a factor of 10 - 100 deeper contrast could be achieved , allowing some bright extrasolar planets to be observed with hst . further simulations need to be done to better understand the feasibility of such observations . we have performed several simulations , lab tests , and telescope observations with gapms and lyot coronagraphs in order to better understand the interplay between theory and the reality of observations . gapms alone provide an improvement over a simple circular aperture for quick high contrast imaging . the combination of a gapm and a coronagraphic mask further suppresses the residual diffracted light and improves on the performance of the gapm alone . the masks are very sensitive to an accurate reproduction of their shape and thus need fabrication accuracies that may be as restrictive as sub - micron precision . this is possible with new nanofabrication techniques that have been perfected at the penn state nanofabrication facility , where future masks may be produced . precisely fabricating these masks can potentially improve performance to the ideal limit for a mask , provided it is above the scattered light limit of the telescope , bringing it in line with lyot coronagraphs of comparable throughput . simulations of different techniques on the hst provide an avenue to test new technologies for future nasa missions such as tpf , and show what can be possible from space . j.d . acknowledges funding by a nasa gsrp fellowship under grant ngt5 - 119 . this work was supported by nasa with grants nag5 - 10617 , nag5 - 11427 as well as by the penn state eberly college of science . j.g . also acknowledges funding through ball aerospace co .
gaussian aperture pupil masks ( gapms ) can in theory achieve the contrast requisite for directly imaging an extrasolar planet . we use lab tests and simulations to further study their possible place as a high contrast imaging technique . we present lab comparisons with traditional lyot coronagraphs and simulations of gapms and other high contrast imaging techniques on hst .
as a warning to the reader , we want to stress from the beginning that this paper is mostly a formal exercise : to understand how the bayesian analysis of a mixture model unravels and automatically exploits the missing data structure of the model is crucial for grasping the details of simulation methods ( not covered in this paper , see , e.g. , ) that take full advantage of the missing structures .it also allows for a comparison between exact and approximate techniques when the former are available . while the relevant references are pointed out in due time , we note here that our paper builds upon the foundational paper of .we thus assume that a sample from the mixture model is available , where denotes the scalar product between the vectors and .we are selecting on purpose the natural representation of an exponential family ( see , e.g. * ? ? ? * chapter 3 ) , in order to facilitate the subsequent derivation of the posterior distribution .when the components of the mixture are poisson distributions , if we define , the poisson distribution indeed is written as a natural exponential family : for a mixture of multinomial distributions , the natural representation is given by and the overall ( natural ) parameter is thus . in the normal case ,the derivation is more delicate when both parameters are unknown since in this particular setting , the natural parameterisation is in while the statistic is two - dimensional .the moment cumulant function is then .as described in the standard literature on mixture estimation , the missing variable decomposition of a mixture likelihood associates each observation in the sample with one of the components of the mixture , i.e. given the component allocations , we end up with a cluster of ( sub)samples from different distributions from the same exponential family .priors customarily used for the analysis of these exponential families can therefore be extended to the mixtures as well .while conjugate priors do not formally exist for mixtures of exponential families , we will define _ locally conjugate priors _ as priors that are conjugate for the completed distribution , that is , for the likelihood associated with both the observations and the missing data .this amounts to taking regular conjugate priors for the parameters of the different components and a conjugate dirichlet prior on the weights of the mixture , when we consider the complete likelihood \\ & = \prod_{j=1}^k p_j^{n_j } \exp\left [ \theta_j\cdot \sum_{z_i = j } r(x_i ) - n_j\psi(\theta_j ) \right ] \\ & = \prod_{j=1}^k p_j^{n_j } \exp\left [ \theta_j\cdot s_j - n_j\psi(\theta_j ) \right]\,,\end{aligned}\ ] ] it is easily seen that we remain within an exponential family since there exists a sufficient statistic with fixed dimension , .if we use a dirichlet prior , on the vector of the weights defined on the simplex of , and ( generic ) conjugate priors on the , \propto \exp\left [ \theta_j\cdot s_{0j } - \lambda_j\psi(\theta_j ) \right ] \,,\ ] ] the posterior associated with the complete likelihood is then of the same family as the prior : \times p_j^{n_j } \exp\left [ \theta_j\cdot s_j - n_j\psi(\theta_j ) \right ] \\ & = \prod_{j=1}^k p_j^{\alpha_j+n_j-1}\,\exp\left [ \theta_j\cdot ( s_{0j}+s_j ) - ( \lambda_j+n_j)\psi(\theta_j ) \right]\,;\end{aligned}\ ] ] the parameters of the prior are transformed from to , from to and from into .for instance , in the case of the poisson mixture , the conjugate priors are gamma , with corresponding posteriors ( for the complete likelihood ) , gamma 
distributions , in which denotes the sum of the observations in the group . for a mixture of multinomial distributions , ,the conjugate priors are dirichlet distributions , with corresponding posteriors , denoting the number of observations from component in group , with . in the normal mixture case ,the standard conjugate priors are products of normal and inverse gamma distributions , i.e. indeed , the corresponding posterior is and where is the sum of the observations allocated to component and is the sum of the squares of the differences from for the same group ( with the convention that when ) . these straightforward derivations do not correspond to the observed likelihood , but to the completed likelihood .while this may be enough for some simulation methods like gibbs sampling ( see , e.g. * ? ? ?* ; * ? ? ?* ) , we need further developments for obtaining the true posterior distribution . if we now consider the observed likelihood , it is natural to expand this likelihood as a sum of completed likelihoods over all possible configurations of the partition space of allocations , that is , a sum over terms . except in the very few cases that are processed below , including poisson and multinomial mixtures ( see section [ ex : fullp ] ) , this sum does not simplify into a smaller number of terms because there exists no summary statistics . from a bayesian point of view, the complexity of the model is therefore truly of magnitude .the observed likelihood is thus ( with the dependence of upon omitted for notational purposes ) and the associated posterior is , up to a constant , where is the normalising constant missing in i.e. if is the normalising constant of , i.e. the posterior is therefore a mixture of conjugate posteriors where the parameters of the components as well as the weights can be computed in closed form !the availability of the posterior does not mean that alternative estimates like map and mmap estimates can be computed easily. however , this is a useful closed form result in the sense that moments can be computed exactly : for instance , if there is no label switching problem and , if the posterior mean is producing meaningful estimates , we have that = \sum_{{\mathbf{z}}}\ , \omega({\mathbf{z}})\ , \frac{s_{0j}+s_j}{n_j+\lambda_j\,,}\ ] ] since , for each allocation vector , we are in an exponential family set - up where the posterior mean of the expectation of is available in closed form .( obviously , the posterior mean only makes sense as an estimate for very discriminative priors ; see . )similarly , estimates of the weights are given by = \sum_{{\mathbf{z}}}\ , \omega({\mathbf{z}})\ , \frac{n_j+\alpha_j}{n+\alpha_\cdot}\,,\ ] ] where .therefore , the only computational effort required is the summation over all partitions .this decomposition further allows for a closed form expression of the marginal distributions of the various parameters of the mixture .for instance , the ( marginal ) posterior distribution of is given by }{k(s_{0j}+s_j , n_j+\lambda_j)}\,.\ ] ] ( note that , when the hyperparameters , , and are independent of , this posterior distribution is independent of . ) similarly , the posterior distribution of the vector is equal to if is small and is large , and when all hyperparameters are equal , the posterior should then have spikes or peaks , due to the label switching / lack of identifiability phenomenon .we will now proceed through standard examples . in the case of a two component poisson mixture , us assume a uniform prior on ( i.e. 
) and exponential priors and on and , respectively . (the scales are chosen to be fairly different for the purpose of illustration . in a realistic setting, it would be sensible either to set those scales in terms of the scale of the problem , if known , or to estimate the global scale following the procedure of . )the normalising constant is then equal to { \,\text{d}\,}\theta\\ & = \int_0^\infty \lambda_j^{\xi-1}\,\exp ( -\delta\lambda_j)\,{\,\text{d}\,}\lambda_j \\ & = \delta^{-\xi}\,\gamma(\xi)\,,\end{aligned}\ ] ] with and , and the corresponding posterior is ( up to the normalisation of the weights ) with corresponding to a beta distribution on and to a gamma distribution on .an important feature of this example is that the sum does not need to involve all of the terms , simply because the individual terms in the previous sum factorise in , which then acts like a local sufficient statistic . since and , the posterior only requires as many distinct terms as there are distinct values of the pair in the completed sample .for instance , if the sample is , the distinct values of the pair are .there are therefore distinct terms in the posterior , rather than . the problem of computing the number ( or cardinality ) of terms in the sum with the same statistic has been tackled by in that he proposes a recursive formula for computing in an efficient way , as expressed below for a component mixture : * * if denotes the vector of length made up of zeros everywhere except at component where it is equal to one , if then therefore , once the are all computed , the posterior can be written as \ , \pi(\theta,\mathbf{p}|{\mathbf{x}},n_1,s_1)\,,\ ] ] up to a constant , since the complete likelihood posterior only depends on the sufficient statistic .now , the closed - form expression allows for a straightforward representation of the marginals .for instance , the marginal in is given by up to a constant , while the marginal in is again up to a constant , and the marginal in is still up to a constant , if denotes the sum of all observations .as pointed out above , another interesting outcome of this closed - form representation is that marginal likelihoods ( or evidences ) can also be computed in closed form .the marginal distribution of is directly related to the unormalised weights in that up to the product of factorials ( but this is irrelevant in the computation of the bayes factor ) . in practice, the derivation of the cardinalities can be done recursively as in : include each observation by updating all the in both and , and then check for duplicates . below is a nave r implementation ( for reasonable efficiency , the algorithm should be programmed in a faster language like c. ) , where ` ncomp ` denotes the number of components : .... 
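# the listing assumes that the following objects are already defined , for example :
#   dat   - integer vector of poisson observations , e.g. dat = rpois(10, 4)
#   ncomp - number of mixture components , e.g. ncomp = 2
# each row of cardin stores , for every component j , the allocation count n_j and
# the within - component sum s_j , followed by a final column counting how many
# allocation vectors z share that value of the sufficient statistic .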
# matrix of sufficient statistics , last column is number of occurrences cardin = matrix(0,ncol=2*ncomp+1,nrow = ncomp ) # initialisation for ( i in 1:ncomp ) cardin[i,((2*i)-1):(2*i)]=c(1,dat[1 ] ) cardin[,2*ncomp+1]=1 # update for ( i in 2:length(dat ) ) { ncard = dim(cardin)[1 ] update = matrix(t(cardin),ncol=2*ncomp+1,nrow = ncomp*ncard , byrow = t ) for ( j in 0:(ncomp-1 ) ) { update[j*ncard+(1:ncard),(2*j)+1]= update[j*ncard+(1:ncard),(2*j)+1]+1 update[j*ncard+(1:ncard),(2*j)+2]= update[j*ncard+(1:ncard),(2*j)+2]+dat[i ] } update = update[do.call(order , data.frame(update ) ) , ] nu = dim(update)[1 ] # changepoints jj = c(1,(2:nu)[apply(abs(update[2:nu,1:(2*ncomp)]- update[1:(nu-1),1:(2*ncomp)]),1,sum)>0 ] ) # duplicates or rather ncomplicates !duplicates=(1:nu)[-jj ] if ( length(duplicates)>0 ) { for ( dife in 1:(ncomp-1 ) ) { ji = jj[jj+dife<=nu ] ii = ji[apply(abs(update[ji+dife,1:(2*ncomp)]- update[ji,1:(2*ncomp)]),1,sum)==0 ] if ( length(ii)>0 ) update[ii,(2*ncomp)+1]=update[ii,(2*ncomp)+1]+ update[ii+dife,(2*ncomp)+1 ] } update = update[-duplicates , ] } cardin = update } .... at the end of this program , all non - empty realisations of the sufficient are available in the two first columns of ` cardin ` , while the corresponding is provided by the last column .once the s are available , the corresponding weights can be added as the last column of ` cardin ` , i.e. .... w = log(cardin[,2*ncomp+1])+apply(lfactorial(cardin[,2*(1:ncomp)-1]),1,sum)+ apply(lfactorial(cardin[,2*(1:ncomp)]),1,sum)- apply(log(xi[1:ncomp]+cardin[,2*(1:ncomp)-1 ] ) * ( cardin[,2*(1:ncomp)]+1),1,sum)- sum(lfactorial(dat ) ) w = exp(w - max(w ) ) cardin = cbind(cardin , w ) .... where ` xi[j ] ` denotes .the marginal posterior on can then be plotted via .... marlam = function(lam , comp=1 ) { sum(cardin[,2*(ncomp+1)]*dgamma(lam , shape = cardin[,2*comp]+1 , rate = cardin[,2*comp-1]+xi[comp]))/sum(cardin[,2*(ncomp+1 ) ] ) } lalam = seq(.01,1.2*max(dat),le=100 ) mamar = apply(as.matrix(lalam),1,marlam , comp=1 ) plot(lalam , mamar , type="l",xlab = expression(mu[1]),ylab="",lwd=2 ) .... while the marginal posterior on is given through .... marp = function(p , comp=1 ) { sum(cardin[,2*(ncomp+1)]*dbeta(p , shape1=cardin[,2*comp-1]+1 , shape2=length(dat)-cardin[,2*comp-1]+1))/sum(cardin[,2*(ncomp+1 ) ] ) } pepe = seq(.01,.99,le=99 ) papar = apply(as.matrix(pepe),1,marp ) plot(pepe , papar , type="l",xlab="p",ylab="",lwd=2 ) .... now , even with this considerable reduction in the complexity of the posterior distribution ( to be compared with ) , the number of terms in the posterior still grows very fast both with and with the number of components , as shown through a few simulated examples in table [ tab : explose ] .( the missing items in the table simply took too much time or too much memory on the local mainframe when using our ` r ` program . used a specific ` c ` program to overcome this difficulty with larger sample sizes . 
)the computational pressure also increases with the range of the data ; that is , for a given value of , the number of rows in ` cardin ` is much larger when the observations are larger , as shown for instance in the first three rows of table [ tab : explose ] : a simulated poisson sample of size is primarily made up of zeros when but mostly takes different values when .the impact on the number of sufficient statistics can be easily assessed when .( note that the simulated dataset corresponding to in table [ tab : explose ] corresponds to a sample only made up of zeros , which explains the values of the sufficient statistic when . )l ccc & & & + & 11 & 66 & 286 + & 52 & 885 & 8160 + & 166 & 7077 & 120,908 + & 57 & 231 & 1771 + & 260 & 20,607 & 566,512 + & 565 & 100,713 & + & 87 & 4060 & 81,000 + & 520 & 82,758 & + & 1413 & 637,020 & + & 216 & 13,986 & + & 789 & 271,296 & + & 2627 & & + an interesting comment one can make about this decomposition of the posterior distribution is that it may happen that , as already noted in , a small number of values of the local sufficient statistic carry most of the posterior weight .table [ tab : cumuweit ] provides some occurrences of this feature , as for instance in the case .l ccc & & & + & 20/44 & 209/675 & 1219/5760 + & 58/126 & 1292/4641 & 13,247/78,060 + & 38/40 & 346/630 & 1766/6160 + & 160/196 & 4533/12,819 & 80,925/419,824 + & 99/314 & 5597/28,206 & + & 21/625 & 13,981/117,579 & + & 50/829 & 62,144/211,197 & + & 1/580 & 259/103,998 & + & 198/466 & 20,854/70,194 & 30,052/44,950 + & 202/512 & 18,048/80,470 & + & 1/1079 & 58,820/366,684 & + we now turn to a minnow dataset made of observations , for which we need a minimal description .as seen in figure [ fig : topminnow1 ] , the datapoints take large values , which is a drawback from a computational point of view since the number of statistics to be registered is much larger than when all datapoints are small .for this reason , we can only process the mixture model with components . _( top right ) _ marginal posterior distribution of _ ( bottom left ) _ marginal posterior distribution of _ ( bottom right ) _ histogram of the minnow dataset .( the prior parameters are and to remain compatible with the data range . ) ] if we instead use a completely symmetric prior with identical hyperparameters for and , the output of the algorithm is then also symmetric in both components , as shown by figure [ fig : topminnow2 ] .the modes of the marginals of and remain the same , nonetheless . for a symmetric prior with hyperparameter . ]the case of a multinomial mixture can be dealt with similarly : if we have observations from the mixture where and , the conjugate priors on the are dirichlet distributions , and we use once again the uniform prior on .( a default choice for the s is . 
)note that the may differ from observation to observation , since they are irrelevant for the posterior distribution : given a partition of the sample , the complete posterior is indeed up to a normalising constant that does not depend on .more generally , if we consider a mixture with components , the complete posterior is also directly available , as once more up to a normalising constant .the corresponding normalising constant of the dirichlet distribution being it produces the overall weight of a given partition as where is the number of observations allocated to component , is the sum of the for the observations allocated to component and given that the posterior distribution only depends on those sufficient " statistics and , the same factorisation as in the poisson case applies , namely that we simply need to count the number of occurrences of a particular local sufficient statistic .the book - keeping algorithm of applies in this setting as well .what follows is a nave r program translating the above : .... em = dim(dat)[2 ] emp = em+1 empcomp = emp*ncomp # matrix of sufficient statistics : # last column is number of occurrences # each series of ( em+1 ) columns contains , first , number of allocations # and , last , sum of multinomial observations cardin = matrix(0,ncol = empcomp+1,nrow = ncomp ) .... therefore , the column of ` cardin ` contains the sum of the for the s allocated to the first component . .... # initialisation for ( i in 1:ncomp ) cardin[i , emp*(i-1)+(1:emp)]=c(1,dat[1 , ] ) cardin[,empcomp+1]=1 # update for ( i in 2:dim(dat)[1 ] ) { ncard = dim(cardin)[1 ] update = matrix(t(cardin),ncol = empcomp+1,nrow = ncomp*ncard , byrow = t ) for ( j in 0:(ncomp-1 ) ) { indi = j*ncard+(1:ncard ) empj = emp*j update[indi , empj+1]=update[indi , empj+1]+1 update[indi , empj+(2:emp)]=t(t(update[indi , empj+(2:emp)])+dat[i , ] ) } update = update[do.call(order , data.frame(update ) ) , ] nu = dim(update)[1 ] # changepoints jj = c(1,(2:nu)[apply(abs(update[2:nu,1:empcomp]-update[1:(nu-1 ) , 1:empcomp]),1,sum)>0 ] ) # duplicates or rather ncomplicates !duplicates=(1:nu)[-jj ] if ( length(duplicates)>0 ) { for ( dife in 1:(ncomp-1 ) ) { ji = jj[jj+dife<=nu ] ii = ji[apply(abs(update[ji+dife,1:empcomp]- update[ji,1:empcomp]),1,sum)==0 ] if ( length(ii)>0 ) update[ii , empcomp+1]=update[ii , empcomp+1]+ update[ii+dife , empcomp+1 ] } update = update[-duplicates , ] } cardin = update # print(sum(cardin[,2*ncomp+1])-ncomp^i ) } .... where ` dat ` is now a matrix with columns . the computation of the number of replicates of a given sufficient statistic , is then provided by the last column of the matrix ` cardin ` .the overall weight is then computed as the product of with the normalising constant : .... olsums = matrix(0,ncol = ncomp , nrow = dim(update)[1 ] ) for ( y in 1:ncomp ) colsums[,y]=apply(update[,(y-1)*emp+(2:emp)],1,sum ) w = log(cardin[,empcomp+1])+ apply(lfactorial(cardin[,emp*(0:(ncomp-1))+1]),1,sum)+ apply(lfactorial(cardin [ , ( 1:empcomp)[-1-emp*(0:(ncomp-1))]]-.5),1,sum)- apply(lfactorial(colsums)+em*.5 - 1,1,sum)- sum(lfactorial(dat ) ) w = exp(w - max(w ) ) cardin = cbind(cardin , w ) .... as shown in table [ tab : multi ] , once again , the reduction in the number of cases to be considered is enormous .l cccc & & & & + & & & & + & & & & + & & & & + & & & & + & & & & + & & & & + & & & & + & & & & + & & & & + & & & & + & & & & + _ ( missing terms are due to excessive computational or storage requirements . 
) _ for a normal mixture ,the number of truly different terms in the posterior distribution is much larger than in the previous ( discrete ) cases , in the sense that only permutations of the members of a given partition within each term of the partition provide the same local sufficient statistics .therefore , the number of observations that can be handled in an exact analysis is necessarily extremely limited .as mentioned in section [ sub : locconj ] , the locally conjugate priors for normal mixtures are products of normal by inverse gamma distributions .for instance , in the case of a two - component normal mixture , we can pick , , , if a difference of one between both means is considered likely ( meaning of course that the data are previously scaled ) and if is the prior assumption on the variance ( possibly deduced from the range of the sample ) . obviously , the choice of a gamma distribution with degrees of freedom is open to discussion , as it is not without consequences on the posterior distribution .the normalising constant of the prior distribution is ( up to a true constant ) indeed , the corresponding posterior is \ ] ] and \,.\ ] ] the number of different sufficient statistics is thus related to the number of different partitions of the dataset into at most groups .this is related to the bell number , which grows extremely fast .we therefore do not pursue the example of the normal mixture any further for lack of practical purpose .this paper is a chapter of the book _ mixtures : estimation and applications _ , edited by the authors jointly with mike titterington and following the icms workshop on the same topic that took place in edinburgh , march 03 - 05 , 2010 .the authors are deeply grateful to the staff at icms for the organisation of the workshop , to the funding bodies ( epsrc , lms , edinburgh mathematical society , glasgow mathematical journal trust , and royal statistical society ) for supporting this workshop , and to the participants in the workshop for their innovative and exciting contributions .lee k , marin jm , mengersen k and robert c 2009 bayesian inference on mixtures of distributions in _ perspectives in mathematical sciences i : probability and statistics _sastry nn , delampady m and rajeev b ) , pp .world scientific singapore .mengersen k and robert c 1996 testing for mixtures : a bayesian entropic approach ( with discussion ) in _ bayesian statistics 5 _ ( ed berger j , bernardo j , dawid a , lindley d and smith a ) , pp . 255276 .oxford university press .
in this paper , we show how a complete and exact bayesian analysis of a parametric mixture model is possible in some cases when components of the mixture are taken from exponential families and when conjugate priors are used . this restricted set - up allows us to show the relevance of the bayesian approach as well as to exhibit the limitations of a complete analysis , namely that it is impossible to conduct this analysis when the sample size is too large , when the data are not from an exponential family , or when priors that are more complex than conjugate priors are used . * keywords : * bayesian inference , conjugate prior , exponential family , poisson mixture , binomial mixture , normal mixture .
complex networks are often employed as models for large - scale systems like connectivity inside the brain , the linking structure of the internet or trust relations in social networks . even in cosmology , causality can be modeled based on a network as demonstrated in . such networks are of hyperbolic structure , in which older nodes are favorably connected compared to younger ones . in many settings the interconnections of a network contain uncertainties or represent the ( possibly unknown ) parametrized quantities of interest . parametrized models with high - dimensional state and parameter spaces are often infeasible to evaluate many times for different locations of the parameter space . this is due to two effects . first , the high - dimensional state space makes each integration of the dynamic system computationally costly . second , the high - dimensional parameter space may make many simulations necessary . in this situation model reduction will accelerate these otherwise costly experiments . particularly , the combined reduction of state and parameter space will be illustrated . this setting for model reduction was inspired by . the gramian - based ( state ) reduction approach originates in ( approximate ) balanced truncation , comprehensively described in . an alternative computational method for these gramians , based on proper orthogonal decomposition , was introduced in under the name * empirical gramians * . for the parameter identification and combined state and parameter reduction , the empirical joint gramian from is utilized . in the next section the construction of a hyperbolic network is described . in section [ sr ] the state reduction procedure is explained , then in section [ pr ] the parameter identification and combined state and parameter reduction . for an efficient assembly of the required gramians , the empirical cross gramian is presented in section [ eg ] . in section [ uq ] the usage of empirical gramians in the context of uncertainty quantification is outlined . finally , in section [ nr ] a sample network is reduced . generating a hyperbolic network is a dynamic process with a discrete time space . the following description is taken from and ( * ? ? ? * supplementary notes ii.c ) . at each time step a new node is born by drawing from a uniform random distribution on the circle , yielding a new node at an angle and a radius with network degree . the new node connects to all existing nodes that satisfy : this leads to a `` space - time '' representation in which the network is created in the shape of a hyperboloid , visualized in figure [ hy ] . in this work a maximum number of nodes will be set . such a hyperbolic network can be modeled as a matrix , by treating a matrix element as a connection from node to node , which leads to a dynamic system setting of a basic linear autonomous system : with state and a system matrix embodying the network structure . usually some external input or control is applied to the system and the quantities of interest are some subset or linear combination of the system s states . this leads to a linear control system with input , input matrix , outputs and output matrix . the matrices can for example be used to excite only certain nodes via and observe the dynamics in others via . in this setting it is assumed that the connections between the network s nodes , being the components of , are parametrized in each component by : as described above , in each time step a new node is born and connects to existing nodes .
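to fix ideas , the following sketch grows such an adjacency ( system ) matrix . the connection rule used here , with logarithmically growing radii and a hyperbolic - distance threshold equal to the radius of the newborn node , is an assumed stand - in taken from the popularity - similarity literature rather than the exact criterion of this section , and all parameters are arbitrary illustration values .
....
# minimal sketch of growing a hyperbolic network adjacency matrix .
# the radii and the connection rule below are assumptions for illustration .
grow_network = function(nmax) {
  theta = runif(nmax, 0, 2 * pi)          # angular coordinate of each node
  rad   = 2 * log(seq_len(nmax))          # radial coordinate grows with birth time
  a     = matrix(0, nmax, nmax)           # adjacency / system matrix
  for (tnew in 2:nmax) {
    for (s in 1:(tnew - 1)) {
      dth = pi - abs(pi - abs(theta[tnew] - theta[s]))          # angular separation
      d   = rad[tnew] + rad[s] + 2 * log(max(dth, 1e-12) / 2)   # approx . hyperbolic distance
      if (d <= rad[tnew]) {               # connect if within the newborn node s radius
        a[tnew, s] = 1
        a[s, tnew] = 1                    # reciprocal connection
      }
    }
  }
  a
}
a = grow_network(64)                      # 64 - node example network
....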
hence , the components of , which are components of , too , change over time and yield a parametrized * linear time - varying control system * : now , the hyperbolic network can be treated with the model reduction methods of control theory .a well known method for model order reduction of the state space in a control system setting is * balanced truncation*. in this approach a systems controllability and observability is balanced and the least controllable and observable states are truncated .a systems controllability is encoded in a gramian matrix which is a solution of the lyapunov equation .a systems observability is also encoded in a gramian matrix which is a solution of the lyapunov equation .then , by a balancing transformation , computed from the controllability and observability gramian , the control system is transformed in a manner such that the states are ordered from the most to the least important , for the systems dynamics .this ordering is based on the hankel singular values .after the sorting , the least controllable and observable can be truncated .the * cross gramian * encodes controllability and observability into one gramian matrix .a solution of the sylvester equation yields the cross gramian matrix as a solution , if the system is square . in case the system is symmetric , meaning the systems gain is symmetric , then the absolute value of the cross gramians eigenvalues equal the hankel singular values : given an asymptotically stable system , the cross gramian can also be computed as the time integral over the product of input - to - state and state - to - output map : which will be the basis for computing the empirical gramian variant . the state reduction is based on the singular values of the cross gramian . a singular value decomposition of , provides a projection of the states in which the states are sorted by their importance . without loss of generality , the singular values , composing the diagonal matrix , are assumed to be sorted in descending order .based on this projection , the matrices and the initial value can be partitioned and reduced , this direct truncation approximates closely the balanced truncation of controllability and observability gramians , but does not require an additional balancing transformation . in case the system is not square or not symmetric , following the approach from , the system can be embedded into a symmetric system . since for each square matrix there exists a symmetrizer such that and thus the embedding system is given by : if the system matrix is symmetric , and thus , the embedding of the system simplifies to : even though the number of inputs and outputs is increased the number of states remains the same as in the original system .the concept of controllability and especially observability extends to parametrized systems by treating the parameters as additional states .these parameter states are constant over time and are assigned the parameters value as initial states : this augmented system , used in , can now be subject to a similar method to the direct truncation of the cross gramian for state reduction . the cross gramian of the augmented system yields the * joint gramian * introduced in : with its upper left block ( ) being the usual cross gramian of the system . the identifiability information of the parameters is encoded in . 
the parameter related information is then extracted by the schur - complement of the symmetric part of the joint gramian , resulting in the * cross - identifiability gramian * : a singular value decomposition of , provides a projection of the parameters that are sorted by their importance : based on this projection the parameters can be partitioned and reduced , as described in , by employing a truncation of states based on the singular values of and parameters based on the singular values of enables the combined reduction .empirical gramians were introduced in and are solely based on simulations of the underlying control system .these simulations use perturbations in input and initial states which are averaged .the required perturbations are organized into sets allowing a systematic perturbation of input and initial states : now the empirical cross gramian can be defined as follows ( taken from ) : for sets , , , , , , input during steady state with output , the * empirical cross gramian * relating the states of input to output of , is given by : the joint gramian encapsulates the cross gramian , hence the * empirical joint gramian * is computed in the same manner as the empirical cross gramian , yet of the augmented system .as shown in , the empirical gramians extend to time - varying systems , and thus can be applied in this setting for the hyperbolic networks .the connections between the network nodes , which are modeled by the components of the system matrix , might contain uncertainties .due to the computation of empirical gramians based on simulations , potential uncertainties in initial state and external input can be incorporated by enlarging the corresponding set of perturbations respectively .hence , for an augmented system uncertainties in the parameters can also be included .this allows robust model reduction .additionally , the parameter reducing projection can also be used to reduce , for example in a gaussian setting , mean and covariance of a parameter distribution .to demonstrate the capabilities of this approach a synthetic hyperbolic network is utilized .as described in section [ hn ] , the time varying system is growing with each time step .this network with a maximum of nodes , thus a state dimension of , and inputs and outputs is selected .furthermore , it is assumed that each connection is reciprocal , hence and .all possible connections of all nodes are treated as ( time - varying ) parameters in this setting , yet input matrix and output matrix are random and notably , which requires an embedding into a symmetric system ( see section [ sr ] ) . 
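as a toy numerical illustration of the cross - gramian based truncation from section [ sr ] ( not the emgr computation used in the experiments ; the small symmetric system below and the reduced order are arbitrary choices ) , the cross gramian of a linear system can be obtained from its sylvester equation and truncated via a singular value decomposition :
....
# toy direct truncation with the cross gramian ( arbitrary small system ) .
n = 6
set.seed(1)
a = -2 * diag(n) + 0.1 * matrix(rnorm(n * n), n, n)
a = 0.5 * (a + t(a))                       # symmetric and diagonally dominant , hence stable
b = matrix(rnorm(n), n, 1)                 # single input
co = t(b)                                  # symmetric system : output map equals b transposed
# the cross gramian wx solves the sylvester equation  a wx + wx a = - b co ,
# rewritten with kronecker products as a linear system for vec(wx) .
lhs = kronecker(diag(n), a) + kronecker(t(a), diag(n))
wx  = matrix(solve(lhs, -as.vector(b %*% co)), n, n)
# direct truncation : project onto the r dominant singular vectors of wx .
sv = svd(wx)
r  = 2
u1 = sv$u[, 1:r]
ar = t(u1) %*% a %*% u1                    # reduced system matrices
br = t(u1) %*% b
cr = co %*% u1
abs(eigen(wx)$values)                      # for this symmetric system , the hankel singular values
....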
first , in an offline phase that has to be performed only once , a reduced order model is created . the reduction procedure uses the empirical joint gramian of section [ eg ] , which computes the cross gramian of the embedded augmented system : then , a reduction of states , based on the singular values of , and of parameters , based on the singular values of , is performed . second , in the online phase , the reduced model can be evaluated . the computations are performed using the empirical gramian framework * emgr * described in . source code for the following experiments can be found at . since the network evolves during its evaluation , the reduction has to be performed for unknown connectivity . the distributions of the singular values of the empirical cross gramian and of the empirical cross - identifiability gramian are given in figure [ sv0 ] and figure [ sv1 ] . the singular values of these two empirical gramians describe the energy contained in the states and parameters , respectively . a reduction of the parameter space from dimension to is suggested by the singular values of the cross - identifiability gramian . for the state space , a reduction from dimension to is performed , based on the singular values of the cross gramian . the reduced model can then be evaluated and compared to the full order model . figures [ hn0 ] and [ hn1 ] show the impulse response of the full order and reduced order network , while the relative error between them is shown in figure [ hn2 ] . between the two time series of the full and reduced order model impulse responses , the relative -error is . the sharp drop in the singular values of the cross - identifiability gramian determines the reduced order model dimension ; a truncation of more parameter space dimensions would introduce a significantly higher error . the gradual descent of the singular values of the cross gramian also allows a graduated increase of the error when truncating states . to scan various locations of the parameter space , only the low - dimensional reduced parameter space has to be scanned . comparing the reduced order model with the full order model , the combined reduction decreased the integration time by 21% and the memory requirements by 70% . the table reports original time , offline time , online time and relative l2-error , averaged over 100 simulations . the numerical experiments suggest that the empirical joint gramian , which is based on the empirical cross gramian , can be applied to reduce this type of control system with hyperbolic network structure . as shown , a linear time - varying control system in which the parameter values also vary over time can be handled by the empirical gramians . this concurrent reduction of state and parameter spaces enables , for example , scenarios in which the reduced model is used to scan the parameter space . using the ( empirical ) cross gramian of the embedded system is efficient here , since the number of inputs and outputs is small compared to the number of states and , because the system matrix has been chosen to be symmetric , no symmetrizer needs to be computed and inverted . yet , for networks with a non - symmetric system matrix , a possibly costly computation of the symmetrizer and its inverse is required . thus , a generalization of the cross gramian to non - symmetric systems should be explored .
* we recently introduced the joint gramian for combined state and parameter reduction [ c. himpe and m. ohlberger . cross - gramian based combined state and parameter reduction for large - scale control systems . arxiv:1302.0634 , 2013 ] , which is applied in this work to reduce a parametrized linear time - varying control system modeling a hyperbolic network . the reduction encompasses the dimensions of the nodes and of the parameters of the underlying control system . networks with a hyperbolic structure have many applications as models for large - scale systems . a prominent example is the brain , for which a network structure of the various regions is often assumed in order to model the propagation of information . networks with many nodes and with parametrized , uncertain or even unknown connectivity require many , individually computationally costly simulations . the presented model order reduction enables vast simulations of surrogate networks exhibiting almost the same dynamics , with a small error compared to the full order model . * + * keywords : * hyperbolic network , model reduction , combined reduction , cross gramian , joint gramian , empirical gramian
stochastic fluctuations are intrinsic to fluid dynamics because fluids are composed of molecules whose positions and velocities are random at thermodynamic scales . because they span the whole range of scales from the microscopic to the macroscopic , fluctuations need to be consistently included in all levels of description .stochastic effects are important for flows in new microfluidic , nanofluidic and microelectromechanical devices ; novel materials such as nanofluids ; biological systems such as lipid membranes , brownian molecular motors , nanopores ; as well as processes where the effect of fluctuations is amplified by strong non - equilibrium effects , such as ultra clean combustion , capillary dynamics , and hydrodynamic instabilities .one can capture thermal fluctuations using direct particle level calculations .but even coarse - grained particle methods are computationally expensive because the dynamics of individual particles has time scales significantly shorter than hydrodynamic time scales .alternatively , thermal fluctuations can be included in the navier - stokes equations through stochastic forcing terms , as proposed by landau and lifshitz and later extended to fluid mixtures .the basic idea of _ fluctuating hydrodynamics _ is to add a _stochastic flux _ corresponding to each dissipative ( irreversible , diffusive ) flux .this ensures that the microscopic conservation laws and thermodynamic principles are obeyed while also maintaining fluctuation - dissipation balance .specifically , the equilibrium thermal fluctuations have the gibbs - boltzmann distribution dictated by statistical mechanics .fluctuating hydrodynamics is a useful tool in understanding complex fluid flows far from equilibrium but theoretical calculations are often only feasible after ignoring nonlinearities , inhomogeneities in density , temperature , and transport properties , surface dynamics , gravity , unsteady flow patterns , and other important effects . in the past decade fluctuating hydrodynamics has been applied to study a number of nontrivial practical problems ; however , the numerical methods used are far from the comparable state - of - the - art for deterministic solvers .previous computational studies of the effect of thermal fluctuations in fluid mixtures have been based on the compressible fluid equations and thus require small time steps to resolve fast sound waves ( pressure fluctuations ) .recently , some of us developed finite - volume methods for the incompressible equations of fluctuating hydrodynamics , which eliminate the stiffness arising from the separation of scales between the acoustic and vortical modes .for inhomogeneous fluids with non - constant density , diffusive mass and heat fluxes create local expansion and contraction of the fluid and the incompressibility constraint should be replaced by a `` quasi - incompressibility '' constraint .the resulting _ low - mach number _ equations have been used for some time to model deterministic flows with thermo - chemical effects , and several conservative finite - volume techniques have been developed for solving equations of this type . to our knowledge, thermal fluctuations have not yet been incorporated in low mach number models . in this workwe extend the staggered - grid , finite - volume approach developed in ref . to isothermal mixtures of fluids with unequal densities .the imposition of the quasi - incompressibility constraint poses several nontrivial mathematical and computational challenges . 
at the mathematical level ,the traditional low mach number asymptotic expansions assume spatio - temporal smoothness of the flow and thus do not directly apply in the stochastic context . at the computational level, enforcing the quasi - incompressibility or equation of state ( eos ) constraint in a conservative and stable manner requires specialized spatio - temporal discretizations . by careful selection of the analytical form of the eos constraint and the spatial discretization of the advective fluxes we are able to maintain strict local conservation and enforce the eos to within numerical tolerances . in the present work ,we employ an explicit projection - based temporal discretizations because of the substantial complexity of designing and implementing semi - implicit discretizations of the momentum equation for spatially - inhomogeneous fluids .thermal fluctuations exhibit unusual features in systems out of thermodynamic equilibrium .notably , external gradients can lead to _ enhancement _ of thermal fluctuations and to _ long - range _ correlations between fluctuations .sharp concentration gradients present during diffusive mixing lead to the development of macroscopic or _ giant fluctuations _ in concentration , which have been observed using light scattering and shadowgraphy techniques .these experimental studies have found good but imperfect agreement between the predictions of a simplified fluctuating hydrodynamic theory and experiments .computer simulations are , in principle , an ideal tool for studying such complex time - dependent processes in the presence of nontrivial boundary conditions without making the sort of approximations necessary for analytical calculations , such as assuming spatially - constant density and transport coefficients and spatially - uniform gradients . on the other hand , the multiscale ( more precisely , _ many - scale _ ) nature of the equations of fluctuating hydrodynamicsposes many mathematical and computational challenges that are yet to be addressed .notably , it is necessary to develop temporal integrators that can accurately and robustly handle the large separation of time scales between different physical processes , such as mass and momentum diffusion .the computational techniques we develop here form the foundation for incorporating additional physics , such as heat transfer and internal energy fluctuations , phase separation and interfacial dynamics , and chemical reactions .we begin section [ sec : equations ] by formulating the fluctuating low mach number equations for an isothermal binary fluid mixture .we present both a traditional pressure ( constrained ) formulation and a gauge ( unconstrained ) formulation .we analyze the spatio - temporal spectrum of the thermal fluctuations in the linearized equations and demonstrate that the low mach equations eliminate the fast ( sonic ) pressure fluctuations but maintain the correct spectrum of the slow ( diffusive ) fluctuations . in section [ sec : temporalintegration ]we develop projected runge - kutta schemes for solving the spatially - discretized equations , including a midpoint and a trapezoidal second - order predictor - corrector scheme , and a third - order three - stage scheme . in section [ sec : spatialdiscretization ] we describe a spatial discretization of the equations that strictly maintains the equation of state constraint and also obeys a fluctuation - dissipation balance principle . 
in section [ sec : giantfluct ] we study the steady - state spectrum of giant concentration fluctuations in the presence of an applied concentration gradient in a mixture of two dissimilar fluids , and test the applicability of common approximations that neglect spatial inhomogeneities .in section [ sec : mixingmd ] we study the dynamical evolution of giant interface fluctuations during diffusive mixing of two dissimilar fluids , using both hard - disk molecular dynamics and low mach number fluctuating hydrodynamics .we find excellent agreement between the two , providing a strong support for the usefulness of the fluctuating low mach number equations as a coarse - grained model of complex fluid mixtures . in section[ sec : conclusions ] we offer some concluding remarks and point out several outstanding challenges for the future .several technical calculations and procedures are detailed in appendices .the compressible equations of fluctuating hydrodynamics were proposed some time ago and have since been studied and applied successfully to a variety of situations .the presence of rapid pressure fluctuations due to the propagation of sound waves leads to stiffness that makes it computationally expensive to solve the fully compressible equations numerically , especially for typical liquids .it is therefore important to develop fluctuating hydrodynamics equations that capture the essential physics in cases where acoustics can be neglected .it is important to note that the equations of fluctuating hydrodynamics are to be interpreted as a mesoscopic coarse - grained representation of the mass , momentum and energy transport which occurs at microscopic scales through molecular interactions ( collisions ) . as such, these equations implicitly contain a mesoscopic coarse - graining length and time scale that is larger than molecular scales .while a coarse - graining scale does not appear explicitly in the formal stochastic partial differential equations ( spdes ) written in this section ( but note that it can be if desired ) , it does explicitly enter in the spatio - temporal discretization described in section [ sec : spatialdiscretization ] through the grid spacing ( equivalently , the volume of the grid , or more precisely , the number of molecules per grid cell ) and time step size .this changes the appropriate interpretation of convergence of numerical methods to a continuum limit in the presence of fluctuations and nonlinearities . only for the linearized equations of fluctuating hydrodynamics the formal spdes be given a precise continuum meaning .developing coarse - grained models that only resolve the relevant spatio - temporal scales is a well - studied but still _ ad hoc _ procedure that requires substantial _ a priori _ physical insight .more precise mathematical mode - elimination procedures are technically involved and often purely formal , especially in the context of spdes . herewe follow a heuristic approach to constructing fluctuating low mach number equations , starting from the well - known deterministic low mach equations ( which can be obtained via asymptotic analysis ) and then adding fluctuations in a manner consistent with fluctuation - dissipation balance . 
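the role of fluctuation - dissipation balance can be seen already in a scalar toy model : pairing a dissipative rate with a matched noise amplitude pins the equilibrium variance , which is the same principle used when a stochastic flux is added to each dissipative flux . the following sketch is purely illustrative ( an ornstein - uhlenbeck process with arbitrary parameters , not the hydrodynamic equations themselves ) .
....
# toy fluctuation - dissipation check : ornstein - uhlenbeck process with
# dissipation rate gam and noise amplitude sqrt(2 * gam * veq) , so that
# the stationary variance equals veq ( all values are arbitrary ) .
set.seed(2)
gam = 1.0 ; veq = 0.5
dt = 1e-3 ; nsteps = 2e5
v = numeric(nsteps)
for (k in 2:nsteps) {
  v[k] = v[k - 1] - gam * v[k - 1] * dt + sqrt(2 * gam * veq * dt) * rnorm(1)
}
var(v[-(1:1e4)])                 # close to veq once transients are discarded
....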
alternatively ,our low mach number equations can be seen as a formal asymptotic limit in which the noise terms are formally treated as smooth forcing terms ; a more rigorous derivation is nontrivial and is deferred for future work .the starting point of our investigations is the system of isothermal compressible equations of fluctuating hydrodynamics for the density , velocity , and mass concentration for a mixture of two fluids in dimensions . in terms of mass and momentum densitiesthe equations can be written as conservation laws , +\rho\v g\nonumber \\v\right)= & \grad\cdot\left[\rho\chi\left(\grad c+k_{p}\grad p\right)+\m{\psi}\right],\label{llns_primitive}\end{aligned}\ ] ] where is the density of the first component , is the density of the second component , is the equation of state for the pressure at the reference temperature , and is the gravitational acceleration .temperature fluctuations are neglected in this study but can be accounted for using a similar approach .the shear viscosity , bulk viscosity , mass diffusion coefficient , and baro - diffusion coefficient , in general , depend on the state .the baro - diffusion coefficient above [ denoted with in ref . , see eq .( a.17 ) in that paper ] is not a transport coefficient but rather determined from thermodynamics , where is the chemical potential of the mixture at the reference temperature , , and is the isothermal speed of sound .the capital greek letters denote stochastic momentum and mass fluxes that are formally modeled as where is boltzmann s constant , and and are standard zero mean , unit variance random gaussian tensor and vector fields with uncorrelated components , and similarly for . at mesoscopic scales , in typical liquids ,sound waves are much faster than momentum diffusion and can usually be eliminated from the fluid dynamics description .formally , this corresponds to taking the zero mach number singular limit of the system ( [ llns_primitive ] ) by performing an asymptotic analysis as the mach number , where is a reference flow velocity .the limiting dynamics can be obtained by performing an asymptotic expansion in the mach number . in a deterministic settingthis analysis shows that the pressure can be written in the form where .the low mach number equations can then be obtained by making the anzatz that the thermodynamic behavior of the system is captured by the reference pressure , , and captures the mechanical behavior while not affecting the thermodynamics .we note that when the system is sufficiently large or the gravitational forcing is sufficiently strong , assuming a spatial constant reference pressure is not valid . in those cases ,the reference pressure represents a global hydrostatic balance , ( see for details of the construction of these types of models ) . here , however , we will restrict consideration to cases where gravity causes negligible changes in the thermodynamic state across the domain . 
in this case, the reference pressure constrains the system so that the evolution of and remains consistent with the thermodynamic equation of state this constraint means that any change in concentration ( equivalently , ) must be accompanied by a corresponding change in density , as would be observed in a system at thermodynamic equilibrium held at the fixed reference pressure and temperature .this implies that variations in density are coupled to variations in composition .note that we do not account for temperature variations in our isothermal model .the equation for can be written in primitive ( non - conservation ) form as the concentration equation where the non - advective ( diffusive and stochastic ) fluxes are denoted with note that there is no barodiffusion flux because barodiffusion is of thermodynamic origin ( as seen from ( [ eq : k_p ] ) ) and involves the gradient of the _ thermodynamic _ pressure . by differentiating the eos constraint along a lagrangian trajectorywe obtain where the solutal expansion coefficient is determined by the specific form of the eos .equation ( [ eq : drho_dt ] ) shows that the eos constraint can be re - written as a constraint on the divergence of velocity , note that the usual incompressibility constraint is obtained when the density is not affected by changes in concentration , .when changes in composition ( concentration ) due to diffusion cause local expansion and contraction of the fluid and thus a nonzero .it is important at this point to consider the boundary conditions . for a closed system , such as a periodic domain or a system with rigid boundaries, we must ensure that the integral of over the domain is zero .this is consistent with ( [ eq : div_v ] ) if is constant , so that we can rewrite ( [ eq : div_v ] ) in the form . in this case does not vary in time .if is not constant , then for a closed system the reference pressure must vary in time to enforce that the total fluid volume remains constant . herewe will assume that , and we will give a specific example of an eos that obeys this condition . the asymptotic low mach analysis of ( [ llns_primitive ] ) is standard and follows the procedure outlined in ref . , formally treating the stochastic forcing as smooth .this analysis leads to the _ isothermal low mach number _ equations for a binary mixture of fluids in conservation form , +\rho\v g\equiv\v f(\rho,\v v , c , t)\label{eq : momentum_eq}\\ \partial_{t}\left(\rho_{1}\right)=-\grad\cdot\left(\rho_{1}\v v\right)+ & \grad\cdot\v f\equiv h(\rho,\v v , c , t)\label{eq : rho1_eq}\\ \partial_{t}\left(\rho_{2}\right)=-\grad\cdot\left(\rho_{2}\v v\right ) & -\grad\cdot\v f\label{eq : rho2_eq}\\ \mbox{such that } \grad\cdot\v v= & -\left(\rho^{-1}\beta\right)\,\grad\cdot\v f\equiv s(\rho , c , t).\label{eq : div_v_constraint}\end{aligned}\ ] ] the gradient of the non - thermodynamic component of the pressure ( lagrange multiplier ) appears in the momentum equation as a driving force that ensures the eos constraint ( [ eq : div_v_constraint ] ) is obeyed .we note that the bulk viscosity term gives a gradient term that can be absorbed in and therefore does not explicitly need to appear in the equations . by adding the two density equations ( [ eq : rho1_eq],[eq : rho2_eq ] ) we get the usual continuity equation for the total density , our conservative numerical scheme is based on eqs .( [ eq : momentum_eq],[eq : rho1_eq],[eq : div_v_constraint],[eq : rho_eq ] ) . 
in appendix[ sec : linearizedanalysis ] , we apply the standard linearized fluctuating hydrodynamics analysis to the low mach number equations .this gives expressions for the equilibrium and nonequilibrium static and dynamic covariances ( spectra ) of the fluctuations in density and concentration as a function of wavenumber and wavefrequency .specifically , the dynamic structure factor in the low mach number approximation has the form the linearized analysis shows that the low mach number equations reproduce the slow fluctuations ( small ) in density and concentration ( central rayleigh peak in the dynamic structure factor ) as in the full compressible equations ( see section [ sub : compressiblespectra ] ) , while eliminating the fast isentropic pressure fluctuations ( side brillouin peaks ) from the dynamics . the fluctuations in velocity , however , are different between the compressible and low mach number equations . in the compressible equations ,the dynamic structure factor for the longitudinal component of velocity decays to zero as because it has two sound ( brillouin ) peaks centered around , in addition to the central diffusive ( rayleigh ) peak .the low mach number equations reproduce the central peak ( slow fluctuations ) correctly , replacing the side peaks with a flat spectrum for large , which is unphysical as it formally makes the velocity white in time .the low mach equations should therefore be used only for time scales larger than the sound propagation time .the fact that the velocity fluctuations are white in space and in time poses a further challenge in interpreting the nonlinear low mach number equations , and in particular , numerical schemes may not converge to a sensible limit as the time step goes to zero . in practice , just as the spatial discretization of the equations imposes a spatial smoothing or regularization of the fluctuations , the temporal discretization of the equations imposes a temporal smoothing and filters the problematic large frequencies . in the types of problems we study in this work the problem concentration fluctuations can be neglected , , because the concentration fluctuations are dominated by nonequilibrium effects . if the problematic white - in - time longitudinal component of velocity disappears . in general , the eos constraint ( [ eq : general_eos ] )is a non - linear constraint . in this workwe consider a specific linear eos , where and are the densities of the pure component fluids ( and , respectively ) , giving it is important that for this specific form of the eos is a material constant independent of the concentration .the density dependence ( [ eq : beta_simple ] ) on concentration arises if one assumes that the two fluids do not change volume upon mixing .this is a reasonable assumption for liquids that are not too dissimilar at the molecular level .surprisingly the eos ( [ eq : eos_quasi_incomp ] ) is also valid for a mixture of ideal gases , since where is molecular mass and is the number density .this is exactly of the form ( [ eq : eos_quasi_incomp ] ) with and .even if the specific eos ( [ eq : eos_quasi_incomp ] ) is not a very good approximation over the entire range of concentration , ( [ eq : eos_quasi_incomp ] ) may be a very good approximation over the range of concentrations of interest if and are adjusted accordingly . in this case and are not the densities of the pure component fluids but rather fitting parameters that approximate the true eos in the range of concentrations of interest . 
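as a quick numerical check of why this particular eos is convenient , the sketch below assumes the no - volume - change form implied above , 1 / rho = c / rhobar1 + ( 1 - c ) / rhobar2 , and verifies that the combination ( 1 / rho^2 ) d rho / d c entering the velocity - divergence constraint comes out independent of the concentration ; the pure - component densities are arbitrary illustration values , and treating this combination as the constant coefficient referred to in the text is an assumption .
....
# check , for the assumed no - volume - change eos , that rho^-2 * d rho / d c
# does not depend on the concentration ( pure - component densities arbitrary ) .
rhobar1 = 1.0
rhobar2 = 3.0
rho = function(conc) 1 / (conc / rhobar1 + (1 - conc) / rhobar2)
cc = seq(0.05, 0.95, by = 0.05)
dc = 1e-6
drho_dc = (rho(cc + dc) - rho(cc - dc)) / (2 * dc)   # centred finite difference
beta_over_rho = drho_dc / rho(cc)^2
range(beta_over_rho)        # essentially constant , equal to 1 / rhobar2 - 1 / rhobar1
....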
for small variations in concentration around some reference concentration and density one can approximate by a constant and determine appropriate values of and from ( [ eq : beta_simple ] ) and the eos ( [ eq : eos_quasi_incomp ] ) evaluated at the reference state .our specific form choice of the eos will aid significantly in the construction of simple conservative spatial discretizations that strictly maintain the eos without requiring complicated nonlinear iterative corrections .several different types of boundary conditions can be imposed for the low mach number equations , just as for the more familiar incompressible equations .the simplest case is when periodic boundary conditions are used for all of the variables .we briefly describe the different types of conditions that can be imposed at a physical boundary with normal direction . for the concentration ( equivalently , ) , either neumann ( zero mass flux ) or dirichlet ( fixed concentration ) boundary conditions can be imposed .physically , a neumann condition corresponds to a physical boundary that is impermeable to mass , while dirichlet conditions correspond to a permeable membrane that connects the system to a large reservoir held at a specified concentration . in the case of neumann conditions for concentration , both the normal component of the diffusive flux and the advective flux vanish at the boundary , implying that the normal component of velocity must vanish , . for dirichlet conditions on the concentration , however , there will , in general , be a nonzero normal diffusive flux through the boundary .this diffusive flux for concentration will induce a corresponding mass flux , as required to maintain the equation of state near the boundary . from the condition ( [ eq : div_v_constraint ] ) , we infer the proper boundary condition for the normal component of velocity to be this condition expresses the notion that there is no net volume change for the fluid in the domain . note that no additional boundary conditions can be specified for since its boundary conditions follow from those on via the eos constraint . for the tangential component of velocity , we either impose a no - slip condition , or a free slip boundary condition in which the tangential component of the normal viscous stress vanishes , in the case of zero normal mass flux , , the free slip condition simplifies to a neumann condition for the tangential velocity , .the low mach number system of equations ( [ eq : momentum_eq],[eq : rho1_eq],[eq : div_v_constraint],[eq : rho_eq ] ) is a _ constrained _ problem . for the purposes of analysis and in particular for constructing higher - order temporal integrators , it is useful to rewrite the constrained low mach number equations as an _ unconstrained _ initial value problem . in the incompressible case , , we can write the constrained navier - stokes equations as an unconstrained system by eliminating the pressure using a projection operator formalism . the constraint is a constant linear constraint and independent of the state and of time .however , in the low mach number equations the velocity - divergence constraint depends on concentration , and also on time when there are additional ( stochastic or deterministic ) forcing terms in the concentration equation . 
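Returning briefly to the reservoir (Dirichlet concentration) boundaries discussed above, one form of the normal-velocity condition consistent with the divergence constraint can be inferred as follows; this is a sketch, with the sign convention taken from the constraint as written above. Integrating the constraint over the domain and applying the divergence theorem,
\[
\int_\Omega \grad\cdot\v v \, dV = -\left(\rho^{-1}\beta\right)\int_\Omega \grad\cdot\v F \, dV
\quad\Longrightarrow\quad
\oint_{\partial\Omega} \v v\cdot\v n \, dA = -\left(\rho^{-1}\beta\right)\oint_{\partial\Omega} \v F\cdot\v n \, dA ,
\]
using that ρ⁻¹β is constant for the linear EOS. A pointwise condition consistent with this balance is
\[
\v v\cdot\v n = -\left(\rho^{-1}\beta\right)\v F\cdot\v n
\]
on the boundary, which reduces to v·n = 0 at impermeable (zero mass flux) boundaries, in agreement with the Neumann case described above.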
treating this type of system requires a more general vector field decomposition .this more general vector field decomposition provides the basis for a projection - based discretization of the constrained system .we also introduce a gauge formulation of the system that casts the evolution as a nonlocal unconstrained system that is analytically equivalent to the orignal constrained evolution .the gauge formulation allows us to develop higher - order method - of - lines temporal integration algorithms .the velocity in the low mach number equations can be split into two components , where is a divergence - free ( solenoidal or vortical ) component , and therefore this is a poisson problem for that is well - posed for appropriate boundary conditions on .specifically , periodic boundary conditions on imply periodic boundary conditions for and . at physical boundaries where a dirichlet condition ( [ eq : v_n_bc ] ) is specified for the normal component of the velocity , we set and use neumann conditions for the poisson solve , . we can now define a more general vector field decomposition that plays the role of the hodge decomposition in incompressible flow . given a vector field and a density we can decompose into three components this decomposition can be obtained by using the condition and , which allows us to define a density - weighted poisson equation for , let denote the solution operator to the density - dependent poisson problem , formally , ^{-1},\ ] ] and also define a density - dependent projection operator defined through its action on a vector field , .\ ] ] this is a well - known variable density generalization of the constant - density projection operator ] are determined from and and the constraints ; hence they can formally be eliminated from the system , as can be seen in the linearized analysis in appendix [ sec : linearizedanalysis ] , which shows that fluctuations in the vortical velocity modes are decoupled from the longitudinal fluctuations .our spatio - temporal discretization follows a `` method of lines '' approach in which we first discretize the equations ( [ eq : momentum_eq],[eq : rho1_eq],[eq : div_v_constraint],[eq : rho_eq ] ) in space and then integrate the resulting semi - continuum equations in time .our uniform staggered - grid spatial discretization of the low mach number equations is relatively standard and is described in section [ sec : spatialdiscretization ] .the main difficulty is the temporal integration of the resulting equations in the presence of the eos constraint .our temporal integrators are based on the gauge formulation ( [ eq : m_gauge],[eq : rho1_gauge ] ) of the low mach equations .the gauge formulation is unconstrained and enables us to use standard temporal integrators for initial - value problems . in the majority of this section ,we assume that all of the fields and differential operators have already been spatially discretized and focus on the temporal integration of the resulting initial - value problem .because in the present schemes we handle both diffusive and advective fluxes explicitly , the time step size is restricted by well - known cfl conditions . for fluctuating hydrodynamics applications the time step is typically limited by momentum diffusion , where is the number of spatial dimensions and is the grid spacing .the design and implementation of numerical methods that handle momentum diffusion semi - implicitly , as done in ref . 
for incompressible flow , is substantially more difficult for the low mach number equations because it requires a variable coefficient implicit fluid solver .we have recently developed an efficient stokes solver for solving variable - density and variable - viscosity time - dependent and steady stokes problems , and in future work we will employ this solver to construct a semi - implicit temporal integrator for the low mach number equations . our temporal discretization will make use of the special form of the eos and the discretization of mass advection described in section [ sub : advection ] in order to strictly maintain the eos relation ( [ eq : eos_quasi_incomp ] ) between density and concentration in each cell at _ all _ intermediate values .therefore , no additional action is needed to enforce the eos constraint after an update of and .this is , however , only true to within the accuracy of the poisson solver and also roundoff , and it is possible for a slow drifting off the eos to occur over many time steps . in section [ sub : driftcorrection ] , we describe a correction that prevents such drifting and ensures that the eos is obeyed at all times to essentially roundoff tolerance . for simplicity , we will often omit the explicit update for the density and instead focus on updating and the momentum density , with the understanding that is updated whenever is .the foundation for our higher - order explicit temporal integrators is the first - order euler method applied to the gauge formulation ( [ eq : m_gauge],[eq : rho1_gauge ] ) .we use a superscript to denote the time step and the point in time where a given term is evaluated , e.g. , where denotes the spatial discretization of with analogous definitions for and .we also denote the time step size with .assume that at the beginning of timestep we know and we can then compute by enforcing the constraint ( [ eq : div_v_s ] ) .here denotes the affine transformation ( [ p_tilde_v ] ) with all terms evaluated at the beginning of the time step , so that . an euler step for the low mach equationsthen consists of the update together with an update of the density consistent with . at the beginning of the next time step , will be calculated from by applying , and it is only that will actually be used during time step .we therefore do not need to explicitly store and can instead replace it with without changing any of the observable results .this is related to the fact that the gauge is _ de facto _ arbitrary and , in the present setting , the gauge formulation is simply a formalism to put the equations in an unconstrained form suitable for method of lines discretization .the difference between and is a ( discrete ) gradient of a scalar .since our temporal integrators only use linear combinations of the intermediate values , the difference between the final result for and is also a gradient of a scalar and replacing with simply amounts to redefining the ( arbitrary ) gauge variable . 
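The affine transformation that produces a constraint-satisfying momentum from the gauge variable can be illustrated in one dimension. The sketch below, written for a periodic staggered grid, solves the density-weighted Poisson problem ∇·(ρ⁻¹∇φ) = ∇·(m̃/ρ) − S with a dense least-squares solve (the paper uses a standard multigrid approach for the variable-coefficient problem on the MAC grid) and then subtracts ρ⁻¹∇φ from the velocity; every numerical value below is a placeholder.

```python
import numpy as np

# Minimal 1D periodic sketch of the density-weighted projection enforcing
# div(v) = S:  solve div((1/rho)*grad(phi)) = div(v_tilde) - S, then set
# v = v_tilde - (1/rho)*grad(phi).  Cell-centered rho, phi, S; face-centered v.

N, dx = 32, 1.0 / 32
rng = np.random.default_rng(0)

rho = 1.0 + 0.3 * rng.random(N)               # cell-centered density
rho_f = 0.5 * (rho + np.roll(rho, -1))        # face i+1/2 by arithmetic average
v_tilde = rng.standard_normal(N)              # unprojected face velocity
S = rng.standard_normal(N); S -= S.mean()     # target divergence (zero mean)

def div_faces(q_f):                           # cell-centered divergence of face field
    return (q_f - np.roll(q_f, 1)) / dx

def grad_cells(q_c):                          # face-centered gradient of cell field
    return (np.roll(q_c, -1) - q_c) / dx

# assemble the periodic operator L = div((1/rho)*grad) column by column
A = np.zeros((N, N))
for j in range(N):
    e = np.zeros(N); e[j] = 1.0
    A[:, j] = div_faces(grad_cells(e) / rho_f)

rhs = div_faces(v_tilde) - S
phi = np.linalg.lstsq(A, rhs, rcond=None)[0]  # singular periodic system: least squares

v = v_tilde - grad_cells(phi) / rho_f
m = rho_f * v                                 # back to face momentum, as in P~(m)
print(np.max(np.abs(div_faces(v) - S)))       # small: constraint enforced to solver accuracy
```

In the actual scheme this operation is what the affine transformation applied at the beginning of each time step does, with S built from the diffusive and stochastic concentration fluxes.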
for these reasons, the euler advance , ,\label{eq : euler_step_complex}\end{aligned}\ ] ] is analytically equivalent to ( [ eq : euler_lagged ] ) .we will use this form as the foundation for our temporal integrators .the equivalence to the gauge form implies that the update specified by ( [ eq : euler_step_complex ] ) can be viewed as an explicit update in spite of the formal dependence of the update on the solution at both old and new time levels .thermal fluctuations can not be straightforwardly incorporated in ( [ eq : euler_step_complex ] ) because it is not clear how to define . in the deterministic setting , is a function of concentration and density and can be evaluated pointwise at time level .when the white - in - time stochastic concentration flux is included , however , can not be evaluated at a particular point of time .instead , one must think of as representing the _ average _ stochastic flux over a given time interval , which can be expressed in terms of the increments of the underlying wiener processes , where is a collection of normal variates generated using a pseudo - random number generator , and is the volume of the hydrodynamic cells .similarly , the average stochastic momentum flux over a time step is modeled as where are normal random variates . as described in more detail in ref . , stochastic fluxes are spatially discretized by generating normal variates on the faces of the grid on which the corresponding variable is discretized , independently at each time step . as mentioned earlier , the volume of the grid cell appears here because it expresses the spatial coarse graining length scale ( i.e. , the degree of coarse - graining for which a fluid element with discrete molecules can be modeled by continuous density fields ) implicit in the equations of fluctuating hydrodynamics .similarly , the time interval expresses the typical time scale at which the mass and momentum transfer can be modeled with low mach number hydrodynamics . with this in mind, we first evaluate the velocity divergence associated with the constraint using the particular sample of , .\ ] ] we then define a discrete affine operator in terms of its action on the momentum \left(\v m\right)=\rho\m{\mathcal{r}}_{s}\left(\rho^{-1}\v m\right).\ ] ] using this shorthand notation , the momentum update in ( [ eq : euler_step_complex ] ) in the presence of thermal fluctuations can be written as \left(\v m^{n}+\d t\,\v f^{n}\right).\ ] ] observe that this is a conservative momentum update since the application of subtracts the ( discrete ) gradient of a scalar from the momentum . in actual implementation , it is preferable to apply at the beginning of the time step instead of at the end of time step , once the value is computed from the diffusive and stochastic fluxes for the concentration . following the above discussion , we can write an euler - maruyama temporal integrator for the low mach number equations in the shorthand notation , \left(\tilde{\v m}^{n}\right)\nonumber \\ \rho_{1}^{n+1 } & = & \rho_{1}^{n}+\d t\,\bar{h}^{n}+\check{h}^{n}\left(\d t,\,\widetilde{\v w}^{n}\right)\nonumber \\ \tilde{\v m}^{n+1 } & = & \v m^{n}+\d t\,\bar{\v f}^{n}+\check{\v f}^{n}\left(\d t,\,\m w^{n}\right),\label{eq : euler_step}\end{aligned}\ ] ] where and are collections of standard normal variates generated using a pseudo - random number generator independently at each time step . 
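The following sketch carries out a single Euler–Maruyama update of ρ₁ on a one-dimensional periodic staggered grid, drawing the stochastic mass flux face by face with the amplitude sqrt(2χρμ_c⁻¹ k_BT Δt/ΔV) quoted above. Advection is omitted (v = 0) for brevity, μ_c denotes the concentration derivative of the chemical potential as in the text, and all parameter values are placeholders.

```python
import numpy as np

# One Euler-Maruyama update of rho1 on a 1D periodic staggered grid:
# deterministic diffusive flux rho*chi*dc/dx plus a stochastic mass flux with
# face amplitude sqrt(2*chi*rho*(1/mu_c)*kB*T*dt/dV).  Advection omitted.

N, dx, dt = 64, 1.0 / 64, 1.0e-4
dV = dx                                   # cell "volume" in 1D (placeholder)
chi, kT, mu_c_inv = 1.0e-3, 1.0e-2, 1.0   # hypothetical transport/thermodynamic values
rho1_bar, rho2_bar = 1.2, 0.8

rng = np.random.default_rng(1)
c = 0.5 + 0.1 * np.sin(2 * np.pi * (np.arange(N) + 0.5) * dx)
rho = 1.0 / (c / rho1_bar + (1 - c) / rho2_bar)
rho1 = rho * c

def faces(q):                             # arithmetic average onto face i+1/2
    return 0.5 * (q + np.roll(q, -1))

def div(q_f):                             # divergence of a face-centered flux
    return (q_f - np.roll(q_f, 1)) / dx

# deterministic diffusive flux and its divergence (h_bar with v = 0)
F_det = faces(rho) * chi * (np.roll(c, -1) - c) / dx
h_bar = div(F_det)

# stochastic flux: one standard normal per face, generated anew each step
W = rng.standard_normal(N)
F_sto = np.sqrt(2.0 * chi * faces(rho) * mu_c_inv * kT * dt / dV) * W

rho1_new = rho1 + dt * h_bar + div(F_sto)
# density updated consistently with the linear EOS, rho = rho2_bar + rho1*(1 - rho2_bar/rho1_bar)
rho_new = rho2_bar + rho1_new * (1.0 - rho2_bar / rho1_bar)
```

Because the stochastic contribution enters as the divergence of a face flux, the update conserves the total of ρ₁ over the periodic box, and the density remains on the linear EOS.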
herethe deterministic increments are written using the shorthand notation , +\rho\v g\\ \bar{h } & = & \grad\cdot\left(-\rho_{1}\v v+\rho\chi\grad c\right).\end{aligned}\ ] ] the stochastic increments are written in terms of \delta t=\grad\cdot\left[\sqrt{\frac{\eta\left(k_{b}t\right)\d t}{\d v}}\,\left(\m w+\m w^{t}\right)\right]\\ \check{h}\left(\d t,\,\widetilde{\v w}\right ) & = & \left[\grad\cdot\m{\psi}\left(\d t,\,\widetilde{\v w}\right)\right]\delta t=\grad\cdot\left[\sqrt{\frac{2\chi\rho\mu_{c}^{-1}\left(k_{b}t\right)\d t}{\d v}}\,\widetilde{\v w}\right],\end{aligned}\ ] ] where and are vectors of standard gaussian variables .a good strategy for composing higher - order temporal integrators for the low mach number equations is to use a linear combination of several projected euler steps of the form ( [ eq : euler_step ] ) . in this way, the higher - order integrators inherit the properties of the euler step . in our case, this will be very useful in constructing conservative discretizations that strictly maintain the eos constraint and only evaluate fluxes at states that strictly obey the eos constraint .the incorporation of stochastic forcing in the runge - kutta temporal integrators that we use is described in refs . ; here we only summarize the resulting schemes .we note that the stochastic terms should be considered additive noise , even though we evaluate them using an instantaneous state like multiplicative noise .a weakly second - order temporal integrator for ( [ eq : m_gauge],[eq : rho1_gauge ] ) is provided by the _ explicit trapezoidal rule _, in which we first take a predictor euler step \left(\tilde{\v m}^{n}\right)\nonumber \\\rho_{1}^{\star , n+1 } & = & \rho_{1}^{n}+\d t\,\bar{h}^{n}+\check{h}^{n}\left(\d t,\,\widetilde{\v w}^{n}\right)\\ \tilde{\v m}^{\star , n+1 } & = & \v m^{n}+\d t\,\bar{\v f}^{n}+\check{\v f}^{n}\left(\d t,\,\m w^{n}\right).\label{eq : trapezoidal_predictor}\end{aligned}\ ] ] the corrector step is a linear combination of the predictor and another euler update , \left(\tilde{\v m}^{\star , n+1}\right)\nonumber \\ \rho_{1}^{n+1 } & = & \frac{1}{2}\rho_{1}^{n}+\frac{1}{2}\left[\rho_{1}^{\star , n+1}+\d t\,\bar{h}^{\star , n+1}+\check{h}^{\star , n+1}\left(\d t,\,\widetilde{\v w}^{n}\right)\right]\\ \tilde{\v m}^{n+1 } & = & \frac{1}{2}\v m^{n}+\frac{1}{2}\left[\m m^{\star ,n+1}+\d t\,\bar{\v f}^{\star , n+1}+\check{\v f}^{\star , n+1}\left(\d t,\,\m w^{n}\right)\right],\label{eq : trapezoidal_corrector}\end{aligned}\ ] ] and reuses the same random numbers and as the predictor step . note that both the predicted and the corrected values for density and concentration obey the eos .we numerically observe that the trapezoidal rule does exhibit a slow but systematic numerical drift in the eos , and therefore it is necessary to use the correction procedure described in section [ sub : driftcorrection ] at the end of each time step .the analysis in ref . 
indicates that for the incompressible case the trapezoidal scheme exhibits second - order weak accuracy in the nonlinear and linearized settings .an alternative second - order scheme is the _ explicit midpoint rule _ , which can be summarized as follows .first we take a projected euler step to estimate midpoint values ( denoted here with superscript * * ) , \left(\tilde{\v m}^{n}\right)\nonumber \\ \rho_{1}^{\star , n+\myhalf } & = & \rho_{1}^{n}+\frac{\d t}{2}\,\bar{h}^{n}+\check{h}^{n}\left(\frac{\d t}{2},\,\widetilde{\v w}_{1}^{n}\right)\nonumber \\\tilde{\m m}^{\star , n+\myhalf } & = & \v m^{n}+\frac{\d t}{2}\,\bar{\v f}^{n}+\check{\v f}^{n}\left(\frac{\d t}{2},\,\m w_{1}^{n}\right).\label{eq : midpoint_predictor}\end{aligned}\ ] ] and then we complete the time step with another euler - like update \left(\tilde{\v m}^{\star , n+\myhalf}\right)\nonumber \\ \rho_{1}^{n+1 } & = & \rho_{1}^{n}+\d t\,\bar{h}^{\star , n+\myhalf}+\check{h}^{\star , n+\myhalf}\left(\d t,\,\widetilde{\v w}^{n}\right)\nonumber \\\tilde{\v m}^{n+1 } & = & \v m^{n}+\d t\,\bar{\v f}^{\star , n+\myhalf}+\check{\v f}^{\star , n+\myhalf}\left(\d t,\,\m w^{n}\right),\label{eq : midpoint_corrector}\end{aligned}\ ] ] where the standard gaussian variates and the vectors of standard normal variates and are independent , and similarly for and .note that and are used in _ both _ the predictor and the corrector stages , while and are used in the corrector only .physically , the random numbers ( and similarly for ) correspond to the increments of the underlying wiener processes over the first half of the time step , and the random numbers correspond to the wiener increments for the second half of the timestep .note that both the midpoint and the endpoint values for density and concentration obey the eos .we numerically observe that the midpoint rule does not exhibit a systematic numerical drift in the eos , and can therefore be used without the correction procedure described in section [ sub : driftcorrection ] . the analysis in ref . 
indicates that for the incompressible case the midpoint scheme exhibits second - order weak accuracy in the nonlinear setting .furthermore , in the linearized setting it reproduces the steady - state covariances of the fluctuating fields to third order in the time step size .we have also tested and implemented the three - stage runge kutta scheme that was used in refs .this scheme can be expressed as a linear combination of three euler steps .the first stage is a predictor euler step , \left(\tilde{\v m}^{n}\right)\nonumber \\\rho_{1}^{\star } & = & \rho_{1}^{n}+\d t\,\bar{h}^{n}+\check{h}^{n}\left(\d t,\,\widetilde{\v w}^{n}\right)\\ \tilde{\v m}^{\star } & = & \v m^{n}+\d t\,\bar{\v f}^{n}+\check{\v f}^{n}\left(\d t,\,\m w^{n}\right).\label{eq : trapezoidal_predictor-1}\end{aligned}\ ] ] the second stage is a midpoint predictor \left(\tilde{\v m}^{\star}\right)\nonumber \\\rho_{1}^{\star\star } & = & \frac{3}{4}\rho_{1}^{n}+\frac{1}{4}\left[\rho_{1}^{\star}+\d t\,\bar{h}^{\star}+\check{h}^{\star}\left(\d t,\,\widetilde{\v w}^{\star , n}\right)\right]\\ \tilde{\v m}^{\star\star } & = & \frac{3}{4}\v m^{n}+\frac{1}{4}\left[\m m^{\star}+\d t\,\bar{\v f}^{\star}+\check{\v f}^{\star}\left(\d t,\,\m w^{\star , n}\right)\right],\label{eq : rk3_predictor}\end{aligned}\ ] ] and a final corrector stage completes the time step \left(\tilde{\v m}^{\star\star}\right)\nonumber \\ \rho_{1}^{n+1 } & = & \frac{1}{3}\rho_{1}^{n}+\frac{2}{3}\left[\rho_{1}^{\star\star}+\d t\,\bar{h}^{\star\star}+\check{h}^{\star\star}\left(\d t,\,\widetilde{\v w}^{\star\star , n}\right)\right]\\ \tilde{\m m}^{n+1 } & = & \frac{1}{3}\v m^{n}+\frac{2}{3}\left[\m m^{\star\star}+\d t\,\bar{\v f}^{\star\star}+\check{\v f}^{\star\star}\left(\d t,\,\m w^{\star\star , n}\right)\right].\label{eq : rk3_corrector}\end{aligned}\ ] ] here the stochastic fluxes between different stages are related to each other via where and are independent and generated independently at each rk3 step , and similarly for .the weights of are chosen to maximize the weak order of accuracy of the scheme while still using only two random samples of the stochastic fluxes per time step .the rk3 method is third - order accurate deterministically , and stable even in the absence of diffusion / viscosity ( i.e. , for advection - dominated flows ) .note that the predicted , the midpoint and the endpoint values for density and concentration all obey the eos .we numerically observe that the rk3 scheme does exhibit a systematic numerical drift in the eos , and therefore it is necessary to use the correction procedure described in section [ sub : driftcorrection ] at the end of each time step .the analysis in ref . indicates that for the incompressible case the rk3 scheme exhibits second - order weak accuracy in the nonlinear setting . in the linearizedsetting it reproduces the steady - state covariances of the fluctuating fields to third order in the time step size . while in principle our temporal integratorsshould strictly maintain the eos , roundoff errors and the finite tolerance employed in the iterative poisson solver lead to a small drift in the constraint that can , depending on the specific scheme , lead to an exponentially increasing violation of the eos over many time steps . 
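The predictor–corrector structure of the three integrators described above can be summarized compactly in terms of a generic Euler–Maruyama building block. In the sketch below the building block stands for the full projected Euler update (the affine transformation and the EOS-consistent density update are folded into f_det and f_sto), the stage weights are those quoted in the text, the half-step combination used for the midpoint rule is one consistent way of summing Wiener increments, and the precise construction of the per-stage RK3 increments from two independent samples follows the cited references and is not reproduced.

```python
import numpy as np

# Structural sketch of the integrators above, for a generic state u and an
# Euler-Maruyama building block euler(u, dt, W) = u + dt*f_det(u) + f_sto(u, dt, W).

def euler(u, dt, W, f_det, f_sto):
    return u + dt * f_det(u) + f_sto(u, dt, W)

def trapezoidal(u, dt, W, f_det, f_sto):
    u_star = euler(u, dt, W, f_det, f_sto)                       # predictor
    return 0.5 * u + 0.5 * euler(u_star, dt, W, f_det, f_sto)    # same W reused

def midpoint(u, dt, W1, W2, f_det, f_sto):
    u_half = euler(u, 0.5 * dt, W1, f_det, f_sto)                # predictor over dt/2
    W_full = (W1 + W2) / np.sqrt(2.0)   # sum of the two half-step Wiener increments
    return u + dt * f_det(u_half) + f_sto(u_half, dt, W_full)

def rk3(u, dt, W_a, W_b, W_c, f_det, f_sto):
    # W_a, W_b, W_c: per-stage increments built from two independent samples per
    # step, as specified in the references (construction not reproduced here)
    u1 = euler(u, dt, W_a, f_det, f_sto)
    u2 = 0.75 * u + 0.25 * euler(u1, dt, W_b, f_det, f_sto)
    return u / 3.0 + (2.0 / 3.0) * euler(u2, dt, W_c, f_det, f_sto)
```

Written this way, every stage is a projected Euler step evaluated at a state that satisfies the EOS, which is what lets the composite schemes inherit the conservation and constraint-preservation properties of the basic step.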
in order to maintain the eos at all times to within roundoff tolerance, we periodically apply a globally - conservative projection of and onto the linear eos constraint .this projection step consists of correcting in cell using +\frac{1}{n}\sum_{k^{\prime}}\left(\rho_{1}\right)_{k^{\prime}},\ ] ] where is the number of hydrodynamic cells in the system and note that the above update , while nonlocal in nature , conserves the total mass .a similar update applies to , or equivalently , .the spatial discretization we employ follows closely the spatial discretization of the constant - coefficient incompressible equations described in ref .therefore , we focus here on the differences , specifically , the use of conserved variables , the handling of the variable - density projection and variable - coefficient diffusion , and the imposition of the low mach number constraint . note that the handling of the stochastic momentum and mass fluxes is identical to that described in ref . . for simplicity of notation , we focus on two dimensional problems , with straightforward generalization to three spatial dimensions .our spatial discretization follows the commonly - used mac approach , in which the scalar conserved quantities and are defined on a regular cartesian grid .the vector conserved variables are defined on a staggered grid , such that the component of momentum is defined on the faces of the scalar variable cartesian grid in the direction , see fig .[ fig : grid ] . for simplicity of notation, we often denote the different components of velocity as in two dimensions and in three dimensions .the terms `` cell - centered '' , `` edge - centered '' , and `` face - centered '' refer to spatial locations relative to the underlying scalar grid .our discretization is based on calculating fluxes on the faces of a finite - volume grid and is thus locally conservative .it is important to note , however , that for the mac grid different control volumes are used for the scalars and the components of the momentum , see fig .[ fig : grid ] . staggered ( mac ) finite - volume discretization on a uniform cartesian two - dimensional grid .( _ left _ ) control volume and flux discretization for cell - centered scalar fields , such as densities and .( _ middle _ ) control volume for the -component of face - centered vector fields , such as ( _ right _ ) control volume for the -component of face - centered vector fields , such as ., width=576 ] from the cell - centered and we can define other cell - centered scalar quantities , notably , the concentration and the transport quantities and , which typically depend on the local density and concentration ( and temperature for non - isothermal models ) , and can , in general , also depend on the spatial position of the cell . in order to define velocitieswe need to interpret the continuum relationship on the staggered grid .this is done by defining face - centered scalar quantities obtained as an arithmetic average of the corresponding cell - centered quantities in the two neighboring cells .specifically , we define except at physical boundaries , where the value is obtained from the imposed boundary conditions ( see section [ sub : boundaryconditions ] ) .arithmetic averaging is only one possible interpolation from cells to faces . in general , other forms of averaging such as a harmonic or geometric average or higher - order ,wider stencils can be used .most components of the spatial discretization can easily be generalized to other choices of interpolation . 
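A minimal illustration of the staggered layout and the arithmetic cell-to-face averaging just described is given below (periodic wrap-around only; at physical boundaries the averaged value is replaced by the boundary value, as discussed later). Array shapes and values are placeholders.

```python
import numpy as np

# MAC layout sketch: scalars at cell centers, x-momentum on x-faces, and
# face-centered densities by arithmetic averaging, so that face velocities
# u = m_x / rho_face can be formed.  Arrays are indexed [j, i] = [y, x].

ny, nx = 4, 6
rng = np.random.default_rng(2)
rho = 1.0 + 0.2 * rng.random((ny, nx))         # cell-centered density
m_x = rng.standard_normal((ny, nx))            # x-momentum on faces (i+1/2, j)

# face (i+1/2, j) sits between cells (i, j) and (i+1, j); periodic wrap in x
rho_face_x = 0.5 * (rho + np.roll(rho, -1, axis=1))
u = m_x / rho_face_x                           # face-centered velocity

# the same linear averaging is what later lets mass advection preserve the linear EOS
```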
as we explain later, the use of linear averaging simplifies the construction of conservative advection . in this sectionwe describe the spatial discretization of the diffusive mass flux term in ( [ eq : rho1_eq ] ) .the discretization is based on conservative centered differencing , +\delta y^{-1}\left[\left(\rho\chi\frac{\partial c}{\partial y}\right)_{i , j+\myhalf}-\left(\rho\chi\frac{\partial c}{\partial y}\right)_{i , j-\myhalf}\right],\label{eq : discrete_diffusion}\ ] ] where , for example , and is an interpolated face - centered diffusion coefficient , for example , as done for in eq .( [ eq : rho_face ] ) , except at physical boundaries , where the value is obtained from the imposed boundary conditions .regardless of the specific form of the interpolation operator , the same face - centered diffusion coefficient must be used when calculating the magnitude of the stochastic mass flux on face , this matches the covariance of the discrete stochastic mass increments with the discretization of the diffusive dissipation operator given in ( [ eq : discrete_diffusion],[eq : discrete_diff_flux ] ) .this matching ensures discrete fluctuation - dissipation balance in the linearized setting .specifically , at thermodynamic equilibrium the static covariance of the concentration is determined from the equilibrium value of ( thermodynamics ) independently of the particular values of the transport coefficients ( dynamics ) , as seen in ( [ eq : s_equilibrium ] ) and dictated by statistical mechanics principles . in ref . a laplacian form of the viscous term is assumed , which is not applicable when viscosity is spatially varying and . in two dimensions ,the divergence of the viscous stress tensor in the momentum equation ( [ eq : momentum_eq ] ) , neglecting bulk viscosity effects , is & = & \left[\begin{array}{c } 2\frac{\partial}{\partial x}\left(\eta\frac{\partial u}{\partial x}\right)+\frac{\partial}{\partial y}\left(\eta\frac{\partial u}{\partial y}+\eta\frac{\partial v}{\partial x}\right)\\ 2\frac{\partial}{\partial y}\left(\eta\frac{\partial v}{\partial y}\right)+\frac{\partial}{\partial x}\left(\eta\frac{\partial v}{\partial x}+\eta\frac{\partial u}{\partial y}\right ) \end{array}\right].\label{eq : div_viscous_stress}\end{aligned}\ ] ] the discretization of the viscous terms requires at cell - centers and edges ( note that in two dimensions the edges are the same as the nodes of the grid ) .the value of at a node is interpolated as the arithmetic average of the four neighboring cell - centers , except at physical boundaries , where the values are obtained from the prescribed boundary conditions .the different viscous friction terms are discretized by straightforward centered differences .explicitly , for the -component of momentum {i+\myhalf , j}=\d x^{-1}\left[\left(\eta\frac{\partial u}{\partial x}\right)_{i+1,j}-\left(\eta\frac{\partial u}{\partial x}\right)_{i , j}\right]\ ] ] with similarly , for the term involving a second derivative in , {i+\myhalf , j}=\d y^{-1}\left[\left(\eta\frac{\partial u}{\partial y}\right)_{i+\myhalf , j+\myhalf}-\left(\eta\frac{\partial u}{\partial y}\right)_{i+\myhalf , j-\myhalf}\right],\ ] ] with a similar construction is used for the mixed - derivative term , {i+\myhalf , j}=\d y^{-1}\left[\left(\eta\frac{\partial v}{\partial x}\right)_{i+\myhalf , j+\myhalf}-\left(\eta\frac{\partial v}{\partial x}\right)_{i+\myhalf , j-\myhalf}\right],\ ] ] with the stochastic stress tensor discretization is described in more detail in ref . 
and applies in the present context as well .for the low mach number equations , just as for the compressible equations , the symmetric form of the stochastic stress tensor must be used in order to ensure discrete fluctuation - dissipation balance between the viscous dissipation and stochastic forcing .additionally , when is not spatially uniform the same interpolated viscosity as used in the viscous terms must be used when calculating the amplitude in the stochastic forcing at the edges ( nodes ) of the grid .it is challenging to construct spatio - temporal discretizations that conserve the total mass while remaining consistent with the equation of state , as ensured in the continuum context by the constraint ( [ eq : div_v_constraint ] ) .we demonstrate here how the special linear form of the constraint ( [ eq : eos_quasi_incomp ] ) can be exploited in the discrete context . following ref . , we spatially discretize the advective terms in ( [ eq : rho1_eq ] ) using a centered ( skew - adjoint ) discretization , {i , j}=\delta x^{-1}\left[\left(\rho_{1}\right)_{i+\myhalf , j}u_{i+\myhalf , j}-\left(\rho_{1}\right)_{i-\myhalf , j}u_{i-\myhalf , j}\right]+\delta y^{-1}\left[\left(\rho_{1}\right)_{i , j+\myhalf}v_{i , j+\myhalf}-\left(\rho_{1}\right)_{i , j-\myhalf}v_{i , j-\myhalf}\right],\label{eq : rho_adv}\ ] ] and similarly for ( [ eq : rho_eq ] ) .we would like this discrete advection to maintain the equation of state ( [ eq : eos_quasi_incomp ] ) at the discrete level , that is , maintain the constraint relating and in every cell . because the different dimensions are decoupled and the divergence is simply the sum of the one - dimensional difference operators , it is sufficient to consider ( [ eq : rho1_eq ] ) in one spatial dimension .the method of lines discretization is given by the system of odes , one differential equation per cell , ,\ ] ] and similarly for .as a shorthand , denote the quantity that appears in ( [ eq : eos_quasi_incomp ] ) with if we use the linear interpolation ( [ eq : rho_face ] ) to calculate face - centered densities , then because of the linearity of the eos the face - centered densities obey the eos if the cell - centered ones do , since .the rate of change of in cell is \\ & = & \left(\rho^{-1}\beta\right)\left(f_{i+\myhalf}-f_{i-\myhalf}\right)-\left(u_{i+\myhalf}-u_{i-\myhalf}\right)=0.\end{aligned}\ ] ] this simple calculation shows that the eos constraint is obeyed discretely in each cell at all times if it is initially satisfied and the velocities used to advect mass obey the discrete version of the constraint ( [ eq : div_v_constraint ] ) , ,\nonumber \end{aligned}\ ] ] in two dimensions .our algorithm ensures that advective terms are always evaluated using a discrete velocity field that obeys this constraint .this is accomplished by using a discrete projection operator , as we describe in the next section .the spatial discretization of the advection terms in the momentum equation ( [ eq : momentum_eq ] ) is constructed using centered differences on the corresponding shifted ( staggered ) grid , as described in ref .for example , for the -component of momentum , {i+\myhalf , j}=\d x^{-1}\left[(m_{x}u)_{i+1,j}-(m_{x}u)_{i , j}\right]+\d y^{-1}\left[(m_{x}v)_{i+\myhalf , j+\myhalf}-(m_{x}v)_{i+\myhalf , j-\myhalf}\right],\label{eq : mom_adv}\ ] ] where simple averaging is used to interpolate momenta to the cell centers and edges ( nodes ) of the grid , for example , because of the linearity of the interpolation procedure , the interpolated discrete 
velocity used to advect obeys the constraint ( [ eq : div_v_mac ] ) on the shifted grid , with a right - hand side interpolated using the same arithmetic average used to interpolate the velocities .in particular , in the incompressible case all variables , including momentum , are advected using a discretely divergence - free velocity , ensuring discrete fluctuation - dissipation balance . it is well - known that the centered discretization of advection we employ here is not robust for advection - dominated flows , and higher - order limiters and upwinding schemes are generally preferred in the deterministic setting .however , these more robust advection schemes add artificial dissipation , which leads to a violation of discrete fluctuation - dissipation balance . in appendix [ appendixfiltering ]we describe an alternative filtering procedure that can be used to handle strong advection while continuing to use centered differencing .we now briefly discuss the spatial discretization of the affine operator defined by ( [ p_tilde_v ] ) , as used in our explicit temporal integrators .the discrete projection takes a face - centered ( staggered ) discrete velocity field and a velocity divergence and projects onto the constraint ( [ eq : div_v_mac ] ) in a conservative manner . specifically , the projection consists of finding a cell - centered discrete scalar field such that where the gradient is discretized using centered differences , e.g. , the pressure correction is the solution to the variable - coefficient discrete poisson equation , \nonumber \\+ \frac{1}{\delta y}\left[\left(\frac{1}{\rho_{i , j+\myhalf}}\right)\left(\frac{\phi_{i , j+1}-\phi_{i , j}}{\delta y}\right)-\left(\frac{1}{\rho_{i , j-\myhalf}}\right)\left(\frac{\phi_{i , j}-\phi_{i , j-1}}{\delta y}\right)\right]\nonumber \\= s_{i , j}-\left[\left(\frac{\tilde{u}_{i+\myhalf , j}-\tilde{u}_{i-\myhalf , j}}{\delta x}\right)+\left(\frac{\tilde{v}_{i , j+\myhalf}-\tilde{v}_{i , j-\myhalf}}{\delta y}\right)\right],\label{eq : discrete_poisson}\end{aligned}\ ] ] which can be solved efficiently using a standard multigrid approach .the handling of different types of boundary conditions is relatively straightforward when a staggered grid is used and the physical boundaries are aligned with the cell boundaries for the scalar grid .interpolation is not used to obtain values for faces , nodes or edges of the grid that lie on a physical boundary , since this would require `` ghost '' values at cell centers lying outside of the physical domain . instead ,whenever a value of a physical variable is required at a face , node , or edge lying on a physical boundary , the boundary condition is used to obtain that value .similarly , centered differences for the diffusive and viscous fluxes that require values outside of the physical domain are replaced by one - sided differences that only use values from the interior cell bordering the boundary and boundary values .for example , if the concentration is specified at the face , the diffusive flux discretization ( [ eq : discrete_diff_flux ] ) is replaced with where is the specified boundary value , the density is obtained from using the eos constraint , and the diffusion coefficient is calculated at the specified values of concentration and density .similar straightforward one - sided differencing is used for the viscous fluxes . as discussed in ref . 
, the use of second - order one - sided differencing is not required to achieve global second - order accuracy , and would make the handling of the stochastic fluxes more complicated because it leads to a non - symmetric discrete laplacian . note that for the nonlinear low mach number equations our approach is subtly different from linearly extrapolating the value in the ghost cell .namely , the extrapolated value might be unphysical , and it might not be possible to evaluate the eos or transport coefficients at the extrapolated concentration . for neumann - type or zero - flux boundary conditions ,the corresponding diffusive flux is set to zero for any faces of the corresponding control volume that lie on physical boundaries , and values in cells outside of the physical domain are never required .the corresponding handling of the stochastic fluxes is discussed in detail in ref . .the evaluation of advective fluxes for the scalars requires normal components of the velocity at the boundary . for faces of the grid that lie on a physical boundary ,the normal component of the velocity is determined from the value of the diffusive mass flux at that face using ( [ eq : v_n_bc ] ) .therefore , these velocities are not independent variables and are not solved for or modified by the projection . specifically , the discrete pressure is only defined at the cell centers in the interior of the grid , and the discrete poisson equation ( [ eq : discrete_poisson ] ) is only imposed on the interior faces of the grid .therefore , no explicit boundary conditions for are required when the staggered grid is used , and the natural homogeneous neumann conditions are implied .advective momentum fluxes are only evaluated on the interior faces and thus do not use any values outside of the physical domain .by combining the spatial discretization described above with one of the temporal integators described in section [ sec : temporalintegration ] , we can obtain a finite - volume solver for the fluctuating low mach equations .for the benefit of the reader , here we summarize our implementation of a single euler step ( [ eq : euler_step ] ) .this forms the core procedure that the higher - order runge - kutta schemes employ several times during one time step . 1 .generate the vectors of standard gaussian variates and .2 . calculate diffusive and stochastic fluxes for using ( [ eq : discrete_diff_flux ] ) , 3 .solve the poisson problem ( [ eq : discrete_poisson ] ) with to obtain the velocity from using ( [ eq : u_minus_gradp ] ) , enforcing .4 . 
calculate viscous and stochastic momentum fluxes using ( [ eq : div_viscous_stress ] ) , ^{n}+\grad\cdot\left[\m{\sigma}^{n}\left(\d t,\,\m w^{n}\right)\right].\ ] ] 5 .calculate external forcing terms for the momentum equation , such as the contribution due to gravity .calculate advective fluxes for mass and momentum using ( [ eq : rho_adv ] ) and ( [ eq : mom_adv ] ) .update mass and momentum densities , including advective , diffusive , stochastic and external forcing terms , to obtain , and .note that this update preserves the eos constraint as explained in section [ sub : advection ] .we have tested and validated the accuracy of our methods and numerical implementation using a series of standard deterministic tests , as well as by examining the equilibrium spectrum of the concentration and velocity fluctuations .the next two sections present further verification and validation in the context of nonequilibrium systems .advection of concentration by thermal velocity fluctuations in the presence of large concentration gradients leads to the appearance of _ giant fluctuations _ of concentration , as has been studied theoretically and experimentally for more than a decade .these giant fluctuations were previously simulated in the absence of gravity in three dimensions by some of us in ref . , and good agreement was found with experimental results . in those previous studiesthe incompressible equations were used , that is , it was assumed that concentration was a passively - advected scalar . however , it is more physically realistic to account for the fact that the properties of the fluid , notably the density and the transport coefficients , depend on the concentration . in ref . a series of experiments were performed to study the temporal evolution of giant concentration fluctuations during the diffusive mixing of water and glycerol , starting with a glycerol mass fraction of in the bottom half of the experimental domain , and in the top half . because it is essentially impossible to analytically solve the full system of fluctuating equations in the presence of spatial inhomogeneity and nontrivial boundary conditions , the existing theoretical analysis of the diffusive mixing process makes a quasi - periodic constant - coefficient incompressible approximation . for simplicity , in this sectionwe focus on a time - independent problem and study the spectrum of steady - state concentration fluctuations in a mixture under gravity in the presence of a constant concentration gradient .this extends the study reported in ref . to account for the fact that the density , viscosity , and diffusion coefficient depend on the concentration . for simplicity, we do two - dimensional simulations , since there are no qualitative differences between the spectrum of concentration fluctuations in two and three dimensions ( note , however , that in real space , unlike in fourier space , the effect of the fluctuations on the transport is very different in two and three dimensions ) . furthermore , in these simulations we do not include a stochastic flux in the concentration equation , i.e. , we set , so that all fluctuations in the concentration arise from being out of thermodynamic equilibrium . with this approximationwe do not need to model the chemical potential of the mixture and obtain .this formulation is justified by the fact that it is known experimentally that the nonequilibrium fluctuations are much larger than the equilibrium ones for the conditions we consider . 
in the simple linearized theory presented in section [ sub : gianttheory ]several approximations are made .the first one is that a quasi - periodic approximation is used even though the actual system is not periodic in the direction .this source of error has already been studied numerically in ref .we also use a boussinesq approximation where it is assumed that and , where is a small density difference between the two fluids , , so that density is approximately constant and .more precisely , in the boussinesq model the gravity term in the velocity equation only enters through the product so the approximation consists of taking the limit and while keeping the product fixed .the final approximation made in the simple theory is that the transport coefficients , i.e. , the viscosity and diffusion coefficients , are assumed to be constant .here we evaluate the validity of the constant - coefficient constant - density approximation ( and constant , ) , as well as the constant - density ( boussinesq ) approximation alone ( constant , , but variable ) , by comparing with the solution to the complete low mach number equations ( and variable ) .we base our parameters on the experimental studies of diffusive mixing in a water - glycerol mixture , as reported in ref .the physical domain is discretized on a uniform two dimensional grid , with a thickness of along the direction .gravity is applied in the negative ( vertical ) direction .reservoir boundary conditions ( [ eq : v_n_bc ] ) are applied in the -direction and periodic boundary conditions in the -direction .we set the concentration to on the bottom boundary and on the top boundary , and apply no - slip boundary conditions for the velocity at both boundaries .the initial condition is , which is close to the deterministic steady - state profile .a very good fit to the experimental equation of state ( dependence of density on concentration at standard temperature and pressure ) over the whole range of concentrations of interest is provided by the eos ( [ eq : eos_quasi_incomp ] ) with the density of water set to and the density of glycerol set to . in these simulationsthe magnitude of the velocity fluctuations is very small and we did not use filtering ( see appendix [ appendixfiltering ] ) . experimentally , the dependence of viscosity on glycerol mass fraction has been fit to an exponential function , which we approximate with a quadratic function over the range of concentrations of interest , where and experimental measurements estimate .the diffusion coefficient dependence on the concentration has been studied experimentally , but is in fact strongly affected by thermal fluctuations and spatial confinement .we approximate the dependence assuming a stokes - einstein relation , which is in reasonable agreement with the experimental results in ref . over the range of concentrations of interest here , where experimental estimates for water - glycerol mixtures give , with a schmidt number .this very large separation of scales between mass and momentum diffusion is not feasible to simulate with our explicit temporal integration methods . referring back to the simplified theory ( [ eq : s_c_c ] ) , which in this case can be simplified further to we see that for the shape of the spectrum of the steady - state concentration fluctuations , and in particular , the cutoff wavenumber due to gravity , is determined from the product and not and individually .therefore , as also done in ref . 
, we choose and so that is kept at the physical value of but the schmidt number is reduced by two orders of magnitude , , where is an estimate of the average concentration .the condition and gives our simulation parameters and .the physical value for gravity is and the solutal expansion coefficient follows from and . when employing the boussinesq approximation , in which gravity only enters through the product we set and so that and increase gravity by the corresponding factor to in order to keep fixed at the physical value .we also performed simulations with a weaker gravity , , which enhances the nonequilibrium fluctuations , as well as no gravity , which makes the fluctuations truly giant .we employ the explicit midpoint temporal integrator ( which we recall is third - order accurate for static covariances ) and set , which results in a diffusive courant number .we skip the first 50,000 time steps ( about 5 diffusion crossing times ) and then collect samples from the subsequent 50,000 time steps .we repeat this eight times to increase the statistical accuracy and estimate error bars . to compare to the theory ( [ eq : s_c_c ] ) , we set the concentration gradient to and evaluate at from the equation of state .when computing the theory , we account for errors in the discrete approximation to the continuum laplacian by using the effective wavenumber instead of the actual discrete wavenumber .comparison between the simple theory ( [ eq : s_c_c_simp ] ) ( lines ) and numerical results ( symbols ) .results are shown for standard gravity ( the cutoff wavenumber ) , for the complete variable - coefficient variable - density low mach model ( green upward triangles ) and the constant - coefficient constant - density approximation ( red squares ) .also shown are results for a weaker gravity , ( the cutoff wavenumber ) , for the complete low mach model ( magenta pluses ) and the constant - coefficient constant - density approximation ( cyan stars ) . for comparison ,results for with variable viscosity but constant diffusion coefficient are also shown , for variable density ( orange downward triangles ) and the constant - density ( boussinesq ) approximation ( indigo right - facing triangles ) . finally , results for no gravity are shown in the constant - coefficient approximation ( black circles).,scaledwidth=75.0% ] the results for the static spectrum of concentration fluctuations as a function of the modified wavenumber ( [ eq : modified_kx ] ) are shown in fig .[ fig : constantgradient ] . when there is no gravity , we see the characteristic giant fluctuation power - law spectrum of the fluctuations , modulated at small wavenumbers due to the presence of the physical boundaries . when gravity is present , fluctuations at wavenumber below the cutoff ^{1/4}$ ] are suppressed .if we use a constant - coefficient approximation , in which we reduce so that and also fix the transport coefficients at and , we observe good agreement with the quasi - periodic theory ( [ eq : s_c_c_simp ] ) .when we make the transport coefficients dependent on the concentration as in ( [ eq : nu_c],[eq : chi_c ] ) , we observe a rather small change in the spectrum .this is perhaps not unexpected because the simplified theory ( [ eq : s_c_c_simp ] ) shows that only the product , and not and individually , matters . since we used the stokes - einstein relation to select the concentration dependence of the diffusion coefficient ,the value of is constant throughout the physical domain . 
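For orientation, the sketch below evaluates the simplified quasi-periodic prediction in the commonly used form S_c(k) ∝ (∇c̄)²/(νχk⁴ + gβ∇c̄), in which ν and χ enter only through their product and gravity suppresses fluctuations below the cutoff wavenumber (gβ∇c̄/νχ)^{1/4}. This functional form is an assumption consistent with the discussion above (prefactors are omitted), and the parameter values are placeholders rather than the water–glycerol values used in the simulations.

```python
import numpy as np

# Assumed simplified form of the nonequilibrium concentration spectrum with a
# gravitational cutoff; prefactors omitted, all numbers are placeholders.

nu_chi = 1.0e-9     # product of kinematic viscosity and diffusivity (hypothetical)
beta = 0.2          # solutal expansion coefficient (hypothetical)
grad_c = 80.0       # imposed concentration gradient (hypothetical)

def spectrum(k, g):
    return grad_c**2 / (nu_chi * k**4 + g * beta * grad_c)

def cutoff_wavenumber(g):
    # wavenumber below which gravity suppresses the giant fluctuations
    return (g * beta * grad_c / nu_chi) ** 0.25

k = np.logspace(1, 5, 200)            # wavenumber range (placeholder units)
for g in (9.81, 9.81 / 4, 0.0):       # standard, weaker, and zero gravity
    print(g, cutoff_wavenumber(g))    # cutoff is zero when gravity is absent
S_theory = spectrum(k, 9.81)          # curve to compare against measured spectra
```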
for comparison , in fig .[ fig : constantgradient ] we show results from a simulation where we keep the concentration dependence of the viscosity ( [ eq : nu_c ] ) but set the diffusion coefficient to a constant value , , and we observe a more significant change in the spectrum .further employing the boussinesq approximation makes little difference showing that the primary effect here comes from the dependence of the transport coefficients on concentration .this shows that under the sort of parameters present in the experiments on diffusive mixing in water - glycerol mixture , it is reasonable to make the boussinesq incompressible approximation ; however , the spatial dependence of the viscosity and diffusion coefficient can not in general be ignored if quantitative agreement is desired . in particular , time - dependent quantities such as dynamic spectra depend on the individual values of and and not just their product , and are thus expected to be more sensitive to the details of their concentration dependence .even though the constant - coefficient approximation gives qualitatively the correct shape and a better choice of the constant transport coefficients may improve its accuracy , there is no obvious or simple procedure to _ a priori _ estimate what parameters should be used ( but see for a proposal to average the constant - coefficient theory over the domain ) .a direct comparison with experimental results is not possible until multiscale temporal integrators capable of handling the extreme separation of time scales between mass and momentum diffusion are developed . at presentthis has only been accomplished in the constant - coefficient incompressible limit ( ) , and it remains a significant challenge to accomplish the same for the complete low mach number system .in this section we study the appearance of giant fluctuations during _ time - dependent _ diffusive mixing . as a validation of the low mach number fluctuating equations and our algorithm , we perform simulations of diffusive mixing of two fluids of different densities in two dimensions .we find excellent agreement between the results of low mach number ( continuum ) simulations and hard - disk molecular dynamics ( particle ) simulations .this nontrivial test clearly demonstrates the usefulness of low mach number models as a coarse - grained mesoscopic model for problems where sound waves can be neglected .our simulation setup is illustrated in fig .[ fig : mixingillustrationmass4 ] .we consider a periodic square box of length along both the ( horizontal ) and ( vertical ) directions , and initially place all of the fluid of species one ( colored red ) in the middle third of the domain , i.e. , we set for , and otherwise , as shown in the top left panel of the figure . 
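A small sketch of the initial condition used for this mixing test follows: the first species occupies the middle third of a periodic square box. The optional smoothing over a few cells (used for the larger mass ratios, as noted later) is represented here by a tanh profile purely for illustration; the actual smoothing procedure is not specified here, and the grid size is a placeholder.

```python
import numpy as np

# Initial concentration: c = 1 in the middle third of the box, 0 elsewhere,
# optionally smeared over a few cells (tanh profile used only as an example).

N, L = 128, 1.0
y = (np.arange(N) + 0.5) * (L / N)

def initial_c(y, width=0.0):
    if width == 0.0:
        return ((y > L / 3) & (y < 2 * L / 3)).astype(float)   # sharp stripe
    return 0.5 * (np.tanh((y - L / 3) / width) - np.tanh((y - 2 * L / 3) / width))

# profile varies along y (rows) and is replicated across the x direction (columns)
c0 = np.tile(initial_c(y, width=2 * L / N), (N, 1)).T
```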
the two fluids mix diffusively and at the end of the simulation the concentration field shows a _rough diffusive interface_, as confirmed by the molecular dynamics simulations shown in the top right panel of the figure. the deterministic equations of diffusive mixing reduce to a one-dimensional model due to the translational symmetry along the axes, and would yield a _flat_ diffusive interface as illustrated in the bottom left panel of the figure. however, fluctuating hydrodynamics correctly reproduces the interface roughness, as illustrated in the bottom right panel of the figure and demonstrated quantitatively below.

[figure fig:mixingillustrationmass4] diffusive mixing between two fluids of unequal densities, =4, with coloring based on concentration, red for the pure first component, , and blue for the pure second component, . a smoothed shading is used for the coloring to eliminate visual discretization artifacts. the simulation domain is periodic and contains hydrodynamic (finite volume) cells. the top left panel shows the initial configuration, which is the same for all simulations reported here. the top right panel shows the final configuration at time as obtained using molecular dynamics. the bottom left panel shows the final configuration obtained using deterministic hydrodynamics, while the bottom right panel shows the final configuration obtained using fluctuating hydrodynamics.

we consider here a binary hard-disk mixture in two dimensions. we use arbitrary (molecular) units of length, time and mass for convenience. all hard disks had a diameter in arbitrary units, and we set the temperature at . the molecular mass for the first fluid component was fixed at , and for the second component at . for mass ratio , the two types of disks are mechanically identical and therefore the species label is simply a red-blue coloring of the particles. in this case and the low mach number equations reduce to the incompressible equations of fluctuating hydrodynamics with a passively-advected concentration field. for the case of unequal particle masses, mechanical equilibrium is obtained if the pressures in the two fluid components are the same. it is well-known from statistical mechanics that for hard disks or hard spheres the pressure is , where is the number density and is a prefactor that only depends on the packing fraction and not on the molecular mass. therefore, for a mixture of disks or spheres with equal diameters, at constant pressure, the number density and the packing fraction are constant independent of the composition. the equation of state at constant pressure and temperature is therefore , which is exactly of the form ([eq:eos_quasi_incomp]) with and . the chemical potential of such a mixture has the same concentration dependence as a low-density gas mixture, .

in order to validate the predictions of our low mach number model, we performed hard-disk molecular dynamics (hdmd) simulations of diffusive mixing using a modification of the public-domain code developed by the authors of ref. . we used a packing fraction of for all simulations reported here. this packing fraction is close to the freezing transition point but is known to be safely in the (dense) gas phase (there is no liquid phase for a hard-disk fluid). the initial particle positions were generated using a nonequilibrium molecular dynamics simulation, as in the hard-particle packing algorithm described in ref. . after the initial configuration was generated, the disks were assigned a species according to their coordinate, and the mixing simulation was performed using event-driven molecular dynamics. in order to convert the particle data to hydrodynamic data comparable to that generated by the fluctuating hydrodynamics simulations, we employed a grid of hydrodynamic cells that were each a square of linear dimension . at the chosen packing fraction this corresponds to about disks per hydrodynamic cell, which is deemed a reasonable level of coarse-graining for the equations of fluctuating hydrodynamics to be a reasonably accurate model, while still keeping the computational demands of the simulations manageable. we performed hdmd simulations for systems of size and cells, and simulated the mixing process to a final simulation time of units. the largest system simulated had about million disks (each simulation took about 5 days of cpu time), which is well into the ``hydrodynamic'' rather than the ``molecular'' scale. every units of time, particle data was converted to hydrodynamic data for the purposes of analysis and comparison to hydrodynamic calculations.
there is not a unique way of coarse - graining particle data to hydrodynamic data ; however , we believe that the large - scale ( giant ) concentration fluctuations studied here are _ not _ affected by the particular choice .we therefore used a simple method consistent with the philosophy of finite - volume conservative discretizations .specifically , we coarse - grained the particle information by sorting the particles into hydrodynamic cells based on the position of their centroid , as if they were point particles .we then calculated and in each cell based on the total mass of each species contained inside the given cell .since all particles have equal diameter other definitions that take into account the particle shape and size give similar results .we now turn to hydrodynamic simulations of the diffusive mixing of hard disks .our hydrodynamic calculations use the same grid of cells used to convert particle to hydrodynamic data . the only input required for the hydrodynamic calculations , in addition to those provided by equilibrium statistical mechanics , are the transport coefficients of the fluid as a function of concentration , specifically , the shear viscosity and the diffusion coefficient . the values for the transport coefficients used in the spatio - temporal discretization , as explained in refs . and detailed in appendix [ sec : transportmd ] , are not material constants independent of the discretization .rather , they are _ bare _ transport values and measured at the length scales of the grid size .we assumed that the bare transport coefficients obey the same scaling with the mass ratio as predicted by enskog kinetic theory ( [ eq : eta_r_scaling],[eq : chi_r_scaling ] ) .as explained in appendix [ sec : transportmd ] , theoretical arguments and molecular dynamics results suggest that renormalization effects for viscosity are small and can be safely neglected .we have therefore fixed the viscosity in the hydrodynamic calculations based on the molecular dynamics estimate for the pure fluid with molecular mass ( see section [ sub : viscosity ] ) . however , the bare diffusion coefficient is strongly dependent on the size of the hydrodynamic cells ( held fixed in our calculations at ) , and on whether filtering ( see appendix [ appendixfiltering ] ) is used. therefore , the value of needs to be adjusted based on the spatial discretization , in such a way as to match the behavior of the molecular dynamics simulations at length scales much larger than the grid spacing .we describe the exact procedure we used to accomplish this in section [ sub : diffusioncoeffient ] .the time step in our explicit algorithm is limited by the viscous cfl number . sincethe hydrodynamic calculations are much faster compared to the particle simulations , we used the more expensive rk3 temporal integrator with a relatively small time step , corresponding to for . for and we employed a larger time step , ( ) , with no measurable temporal discretization artifacts for the quantities studied here . we are therefore confident that the discretization errors in this study are dominated by spatial discretization artifacts . 
in future workwe will explore semi - implicit discretizations and study the effect of taking larger time steps on temporal accuracy .note that at these parameters for the isothermal speed of sound is so that a compressible scheme would require a time step on the order of ( corresponding to advective cfl of about a half ) .by contrast , the explicit low mach number algorithm is stable for .this modest gain is due to the small hydrodynamic cell we use here in order to compare to molecular dynamics . for mesoscopichydrodynamic cells the gain in time step size afforded by the low mach formulation will be several orders of magnitude larger . for mass ratio and , the hydrodynamic calculations were initialized using statistically identical configurations as would be obtained by coarse - graining the initial particle configuration .this implies a sharp , step - like jump in concentration at and .since our spatio - temporal discretization is not strictly monotonicity - preserving , such sharp concentration gradients combined with a small diffusion coefficient lead to a large cell peclet number .this may in turn lead to large deviations of concentration outside of the allowed interval for larger mass ratios .therefore , for we smoothed the initial condition slightly so that the sharp jump in concentration is spread over a few cells , and also employed a point filter for the advection velocity ( , see appendix [ appendixfiltering ] ) .we verified that for using filtering only affects the large wavenumbers and does not appear to affect the small wavenumbers we study here , provided the bare diffusion coefficient is adjusted based on the specific filtering width . in order to compare the molecular dynamics and the hydrodynamic simulations we calculated several statistical quantities : 1 .the averages of along the directions perpendicular to the concentration gradient , where the integral is discretized as a direct sum over the hydrodynamic cells .note that it is statistically better to use conserved quantities for such macroscopic averages than to use non - conserved variables such as concentration .the spectrum of the concentration averaged along the direction of the gradient by computing the average and then taking the discrete fourier transform .intuitively , is a measure of the thickness of the red strip in fig .[ fig : mixingillustrationmass4 ] , and corresponds closely to what is measured in light scattering and shadowgraphy experiments .the discrete fourier spectrum of the -coordinate of the `` center - of - mass '' of concentration along the direction perpendicular to the gradient , intuitively , is a measure of the height of the centerline of the red strip in fig .[ fig : mixingillustrationmass4 ] .all quantities were sampled at certain pre - specified time points in a number of statistically - independent simulations and then means and standard deviations calculated from the data points . for system of size cells we used simulations , and for systems of size we used simulations . by far the majority of the computational cost was in performing the hdmd simulations .once and were estimated based on simulations of a constant - density ( ) fluid ( see section [ sub : diffusioncoeffient ] ) , kinetic theory ( [ eq : eta_r_scaling],[eq : chi_r_scaling ] ) can be used to estimate them for different density ratios . in fig .[ fig : spreading_64x64 ] we show for mass ratio , showing good agreement between hdmd and hydrodynamics , especially when fluctuations are accounted for . 
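as a concrete illustration of this smoothing of the initial condition, the sketch below spreads the step-like concentration profile over a few cells with a tanh profile; the grid size, strip position and smoothing width are placeholders, and the exact smoothing used in the simulations may differ.

....
# minimal sketch ( illustrative parameters ) : a sharp concentration strip and a
# version smoothed over a few cells , used to avoid large cell peclet numbers .
import numpy as np

def initial_concentration(ny, nx, y_lo, y_hi, width):
    y = np.arange(ny)[:, None] + 0.5                  # cell centers in units of dy
    c_sharp = ((y >= y_lo) & (y < y_hi)).astype(float)
    c_smooth = 0.5 * (np.tanh((y - y_lo) / width) - np.tanh((y - y_hi) / width))
    ones = np.ones((1, nx))
    return c_sharp * ones, c_smooth * ones            # broadcast along x

c0_sharp, c0_smooth = initial_concentration(64, 64, y_lo=16, y_hi=48, width=2.0)
....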
for the larger mass ratio a direct comparison is more difficult because the initial condition was slightly different in the hydrodynamic simulations, due to the need to smooth the sharp concentration gradient for numerical reasons, as explained earlier. this difference strongly affects the shape of the averaged profile at early times; however, it does not significantly modify the roughness of the interface, which we study next. the most interesting contribution of fluctuations to the diffusive mixing process is the appearance of giant concentration fluctuations in the presence of large concentration gradients, as evidenced in the roughness of the interface between the two fluids during the early stages of the mixing in fig. [ fig : mixingillustrationmass4 ]. in order to quantify this interface roughness we used the one-dimensional power spectra defined above; note that here we do not correct the discrete wavenumber for the spatial discretization artifacts.

figure caption ( [ fig : s_k_mass_1 ] ): discrete spatial spectrum of the interface fluctuations (averaged over 32 simulations) at several points in time (drawn with different colors, as indicated in the legend), for fluctuating hydrodynamics (fh, squares with error bars) and hdmd (circles, error bars comparable to those for the squares). note that the largest wavenumber supported by the grid is set by the grid spacing; the larger wavenumbers are however dominated by spatial truncation errors and the filter employed (if any), and we do not show them here. (left panel) spectrum of the vertically-averaged concentration. (right panel) spectrum of the position of the vertical `` center-of-mass '' of concentration.

the temporal evolution of the spectra is shown in fig. [ fig : s_k_mass_1 ] for mass ratio 1 and in fig. [ fig : s_k_mass_4 ] for mass ratio 4, for both hdmd and low mach number fluctuating hydrodynamics (note that deterministic hydrodynamics would give identically zero for any spectral quantity). we observe an excellent agreement between the two, including the correct initial evolution of the interface fluctuations.

figure caption ( [ fig : s_k_mass_4 ] ): same as fig. [ fig : s_k_mass_1 ] but for density ratio 4.

note that for a finite system, eventually complete mixing will take place and the concentration fluctuations will have to revert to their equilibrium spectrum, which is flat in fourier space instead of the power-law behavior seen out of equilibrium.
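both spectra shown in these figures can be computed from a single snapshot of the cell concentration field with a few lines of numpy; the sketch below uses illustrative names and an arbitrary normalization.

....
# minimal sketch ( illustrative names and normalization ) : the two
# one - dimensional interface spectra from a concentration field c(y , x ) .
import numpy as np

def interface_spectra(c):
    ny, nx = c.shape
    c_bar = c.mean(axis=0)                            # average along the gradient
    s_cbar = np.abs(np.fft.rfft(c_bar))**2 / nx       # spectrum of vertically averaged c
    y = np.arange(ny) + 0.5
    h = (c * y[:, None]).sum(axis=0) / np.maximum(c.sum(axis=0), 1e-12)
    s_h = np.abs(np.fft.rfft(h - h.mean()))**2 / nx   # spectrum of the interface height
    return s_cbar, s_h
....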
in fig. [ fig : s_k_mass_1_long ] we show results for mixing up to a time 128 times longer than those described above. these long simulations are only feasible for the fluctuating hydrodynamics code, and employ a somewhat larger time step. the results clearly show that at late times the spectrum of the fluctuations reverts to the equilibrium one; however, this takes some time even after the mixing is essentially complete. linearized incompressible fluctuating hydrodynamics predicts that at steady state the spectrum of nonequilibrium concentration fluctuations is a power law in the wavenumber; the dynamically-evolving spectra in the right panel of fig. [ fig : s_k_mass_1_long ] show approximately such power-law behavior for intermediate times and wavenumbers.

figure caption ( [ fig : s_k_mass_1_long ] ): mixing to a time 128 times longer than in the previous results, with results reported at regular time intervals. these long simulations are only feasible for the fluctuating hydrodynamics code, and employ a somewhat larger time step. (left) horizontally-averaged density, as shown for the shorter runs in the left panel of fig. [ fig : spreading_64x64 ]. (right) the spectrum of interface fluctuations, as shown in the left panels of figs. [ fig : s_k_mass_1 ] and [ fig : s_k_mass_4 ] for the shorter runs. the theoretical estimate for the spectrum of equilibrium fluctuations, which is independent of wavenumber, is also shown, as is the theoretical prediction for the power-law of the spectrum of steady-state nonequilibrium fluctuations under an applied concentration gradient.

in order to illustrate the appearance of giant fluctuations in three dimensions, we performed simulations of mixing in a mixture of hard spheres with equal diameters and mass ratio 4. the packing density was chosen to correspond to a very dense gas, but still well below the freezing point. for the hydrodynamic simulations we used cubic cells whose size corresponds to a modest number of particles per hydrodynamic cell on average. in fig. [ fig : s_k_mass_4_3d ] we show results from a single simulation with a large grid of cells, which would correspond to a very large number of particles. this makes molecular dynamics simulations infeasible, and makes hydrodynamic calculations an invaluable tool in studying the mixing process at these mesoscopic scales. in the hydrodynamic simulations we used bare transport coefficient values based on enskog kinetic theory for the hard-sphere fluid.
for the single-component fluid this theory gives the bare viscosity and diffusion coefficient, and thus the corresponding bare schmidt number. we employed the same model dependence of the bare transport coefficients on concentration as for hard disks, see eqs. ( [ eq : eta_r_scaling],[eq : chi_r_scaling ] ). the time step was chosen so as to keep the viscous cfl number small. in three dimensions the cell peclet number is smaller than in the two-dimensional runs, and we did not find it necessary to employ any filtering.

figure caption ( [ fig : s_k_mass_4_3d ] ): diffusive mixing in three dimensions similar to that illustrated in fig. [ fig : mixingillustrationmass4 ] for two dimensions. parameters are based on enskog kinetic theory for a hard-sphere fluid, and there is no gravity. the mixing starts with the top half being one species and the bottom half another species, with density ratio 4, and concentration is kept fixed at the top and bottom boundaries while the side boundaries are periodic. a snapshot taken during the mixing is shown. (top panel) the side panes show two-dimensional slices of the concentration. the approximated contour surface is shown with color based on surface height to illustrate the rough diffusive interface. (bottom left panel) similar to the top panel, but the bottom pane shows the vertically-averaged concentration, illustrating the giant concentration fluctuations. (bottom right panel) the fourier spectrum of the vertically-averaged concentration. the color axis is logarithmic and clearly shows the appearance of large-scale (small-wavenumber) fluctuations, as also seen in fig. [ fig : s_k_mass_4 ] in two dimensions.
instead of the fully periodic domain used in the two-dimensional hard-disk simulations, here we employ the fixed-concentration boundary conditions ( [ eq : v_n_bc ] ) and fix the concentration to its two pure-component values at the bottom and top boundaries. this emulates the sort of `` open '' or `` reservoir '' boundaries that mimic conditions in experimental studies of diffusive mixing. the initial condition is a fully phase-separated mixture, with one component filling the top half of the domain and the other the bottom half. as the mixing process continues, the diffusive interface roughens and giant concentration fluctuations appear, as illustrated in fig. [ fig : s_k_mass_4_3d ] and also observed experimentally in water-glycerol mixtures in ref. . in three dimensions, however, the diffusive interface roughness is much smaller than in two dimensions, being on the order of only 20 molecular diameters for the snapshot shown in the figure. this illustrates the importance of dimensionality when including thermal fluctuations. in particular, unlike in deterministic fluid dynamics, in fluctuating hydrodynamics one cannot simply eliminate dimensions from consideration even in simple geometries. an approximate theory based on the boussinesq approximation and linearization of the equations of fluctuating hydrodynamics has been developed in ref. and applied in the analysis of experimental results on mixing in a water-glycerol mixture in the presence of gravity. the simulations reported here do not make the sort of approximations necessary in analytical theories and can in principle be used to study the mixing process quantitatively. however, it is important to emphasize that in realistic liquids, such as a water-glycerol mixture, the schmidt number is on the order of a thousand. this makes explicit time stepping schemes that fully resolve the dynamics of the velocity fluctuations infeasible. in future work we will consider semi-implicit time stepping methods that relax the severe time step restrictions present in the explicit schemes considered here. the behavior of fluids is strongly affected by thermal fluctuations at scales from the microscopic to the macroscopic. fluctuating hydrodynamics is a powerful coarse-grained model for fluid dynamics at mesoscopic and macroscopic scales, at both a theoretical and a computational level. theoretical calculations are rather complicated in the presence of realistic spatial inhomogeneities and nontrivial boundary conditions. in numerical simulations those effects can readily be handled; however, the large separation of time scales between different physical processes poses a fundamental difficulty. compressible fluctuating hydrodynamics bridges the gap between molecular and hydrodynamic scales. at spatial scales not much larger than molecular, sound and momentum and heat diffusion occur at comparable time scales in both gases and liquids.
at mesoscopic andlarger length scales , fast pressure fluctuations due to thermally - actuated sound waves are much faster than diffusive processes .it is therefore necessary to eliminate sound modes from the compressible equations . in the deterministic contextthis is accomplished using low mach number asymptotic expansion . for homogeneous simple fluids or mixtures of dynamically - identical fluids the zeroth order low mach equationsare the well - known incompressible navier - stokes equations , in which pressure is a lagrange multiplier enforcing a divergence - free velocity field . in mixtures of dissimilar fluids , local changes in composition and temperature cause local expansion and contraction of the fluid and thus a nonzero velocity divergence . in this paperwe proposed low mach number fluctuating equations for isothermal binary mixtures of incompressible fluids with different density , or a mixture of low - density gases with different molecular masses .these equations are a straightforward generalization of the widely - used incompressible fluctuating navier - stokes equations . in the low mach number equationsthe incompressibility constraint is replaced by , which ensures that compositional changes are accompanied by density changes in agreement with the fluid equation of state ( eos ) at constant pressure and temperature .this seemingly simple generalization poses many non - trivial analytical and numerical challenges , some of which we addressed in this paper . at the analytical levelthe low mach number fluctuating equations are different from the incompressible equations because the velocity divergence is directly coupled to the time derivative of the concentration fluctuations .this means that at thermodynamic equilibrium the velocity is not only white in space , a well - known difficulty with the standard equations of fluctuating hydrodynamics , but is also white in time , adding a novel type of difficulty that has not heretofore been recognized .the unphysically fast fluctuations in velocity are caused by the unphysical assumption of infinite separation of time scales between the sound and the diffusive modes .this unphysical assumption also underlies the incompressible fluctuating navier - stokes equations , however , in the incompressible limit the problem is not apparent because the component of velocity that is white in time disappears .here we analyzed the low mach equations at the linearized level , and showed that they reproduce the slow diffusive fluctuations in the full compressible equations , while eliminating the fast pressure fluctuations . at the formal level , we suggest that a generalized hodge decomposition can be used to separate the vortical ( solenoidal ) modes of velocity as the independently fluctuating variable , coupled with a gauge formulation used to treat the divergence constraint . such nonlinear analysis is deferred for future research , and here we relied on the fact that the temporal discretization regularizes the short - time dynamics at time scales faster than the time step size . 
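for orientation, the sketch below shows the familiar constant-density projection onto divergence-free fields, done with ffts on a periodic grid; it is only meant to illustrate the role of the pressure as a lagrange multiplier, whereas the method developed in the paper uses a conservative finite-volume discretization and, in the low mach setting, a divergence that is prescribed rather than zero.

....
# minimal sketch : fft - based projection of a periodic 2d velocity field onto
# its divergence - free part ( the constant - density , incompressible case only ) .
import numpy as np

def project_div_free(u, v, lx, ly):
    ny, nx = u.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=lx / nx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=ly / ny)
    kxg, kyg = np.meshgrid(kx, ky)
    k2 = kxg**2 + kyg**2
    k2[0, 0] = 1.0                                    # leave the mean mode untouched
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    phi = (1j * kxg * uh + 1j * kyg * vh) / k2        # solve the pressure poisson equation
    uh, vh = uh - 1j * kxg * phi, vh - 1j * kyg * phi
    return np.real(np.fft.ifft2(uh)), np.real(np.fft.ifft2(vh))
....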
at the numerical level ,the low mach number equations pose several distinct challenges .the first challenge is to construct conservative spatial discretizations in which density is advected in a locally - conservative manner while still maintaining the equation of state constraint relating the local densities and composition .we accomplish this here by using a specially - chosen model eos that is linear yet still rather versatile in practice , and by advecting densities using a velocity that obeys a discrete divergence constraint .we note that for this simplified case , the system can be modeled using only the concentration to describe the thermodynamic state .however , for more general low mach number models maintaining a full thermodynamic representation of the state independent of the constraint leads to more robust numerics . as in incompressible hydrodynamics , enforcing this constraint requires a poisson pressure solver that dominates the computational cost of the algorithm .a second challenge is to construct temporal integrators that are at least second - order in time .we accomplish this here by formally introducing an unconstrained gauge formulation of the equations , while at the same time taking advantage of the gauge degree of freedom to avoid ever explicitly dealing with the gauge variable .the present temporal discretizations are purely explicit and are similar in spirit to an explicit projection method . a third and remaining challenge is to design efficient temporal integrators that handle momentum diffusion , the second - fastest physical process , semi - implicitly .this poses well - known challenges even in the incompressible setting .these challenges were bypassed in recently - developed temporal integrators for the incompressible fluctuating navier - stokes equations by avoiding the splitting inherent in projection methods .extending this type of stokes - system approach to the low mach equations will be the subject of future research .one of the principal motivations for developing the low mach number equations and our numerical implementation was to model recent experiments on the development of giant concentration fluctuations in the presence of sharp concentration gradients .we first studied giant fluctuations in a time - independent or static setting , as observed experimentally by inducing a constant concentration gradient via a constant applied temperature gradient .our simulations show that under conditions employed in experimental studies of the diffusive mixing of water and glycerol , it is reasonable to employ the boussinesq approximation . 
the results also indicate that the constant - transport - coefficient approximation that is commonly used in theoretical calculations is appropriate if the diffusion coefficient follows a stokes - einstein relation , but should be used with caution in general .we continued our study of giant concentration fluctuations by simulating the temporal evolution of a rough diffusive interface during the diffusive mixing of hard disk fluids .comparison between computationally - intensive event - driven molecular dynamics simulations and our hydrodynamic calculations demonstrated that the low mach number equations of fluctuating hydrodynamics provide an accurate coarse - grained model of fluid mixing .special care must be exercised , however , in choosing the bare transport coefficients , especially the concentration diffusion coefficient , as these are renormalized by the fluctuations and can be strongly grid - dependent .some questions remain about how to define and measure the bare transport coefficients from microscopic simulations , but we show that simply comparing particle and hydrodynamic calculations at large scales is a robust technique .the strong coupling between velocity fluctuations and diffusive transport means that deterministic models have limited utility at mesoscopic scales , and even macroscopic scales in two - dimensions .this implies that standard fluorescent techniques for measuring diffusion coefficients , such as fluorescence correlation spectroscopy ( fcs ) and fluorescence recovery after photobleaching ( frap ) , may not in fact be measuring material constants but rather geometry - dependent values .fluctuating hydrodynamic simulations of typical experimental simulations , however , are still out of reach due to the very large separation of time scales between mass and momentum diffusion . surpassing this limitation requires the development of a semi - implicit temporal discretization that is stable for large time steps .furthermore , it is also necessary to develop novel mathematical models and algorithms that are not only stable but also accurate in the presence of such large separation of scales .this is a nontrivial challenge if thermal fluctuations are to be included consistently , and will be the subject of future research. we would like to thank boyce griffith and mingchao cai for helpful comments .j. bell , a. nonaka and a. garcia were supported by the doe applied mathematics program of the doe office of advanced scientific computing research under the u.s .department of energy under contract no .de - ac02 - 05ch11231 .a. donev was supported in part by the national science foundation under grant dms-1115341 and the office of science of the u.s .department of energy through early career award number de - sc0008271 .t. fai wishes to acknowledge the support of the doe computational science graduate fellowship , under grant number de - fg02 - 97er25308 . y.sun was supported by the national science foundation under award oci 1047734 .as discussed in more depth in ref . 
, there are fundamental mathematical difficulties with the interpretation of the nonlinear equations of fluctuating hydrodynamics due to the roughness of the fluctuating fields .it should be remembered , however , that these equations are coarse - grained models with the coarse - graining length scale set by the size of the hydrodynamic cells used in discretizing the equations .the spatial discretization removes the small length scales from the stochastic forcing and regularizes the equations .it is important to point out , however , that imposing such a small - scale regularization _ _ ( smoothing ) of the stochastic forcing also requires a suitable renormalization of the transport coefficients , as we discuss in more detail in section [ sec : mixingmd ] .as long as there are sufficiently many molecules per hydrodynamic cell the fluctuations in the spatially - discrete hydrodynamic variables will be small and the behavior of the nonlinear equations will closely follow that of the _ linearized _ equations of fluctuating hydrodynamics , which can be given a precise meaning .it is therefore crucial to understand the linearized equations from a theoretical perspective , and to analyze the behavior of the numerical schemes in the linearized setting .some of the most important quantities predicted by the fluctuating hydrodynamics equations are the equilibrium structure factors ( static covariances ) of the fluctuating fields .these can be obtained by linearizing the compressible equations ( [ llns_primitive ] ) around a uniform reference state , , , , where ,\ ] ] and then applying a spatial fourier transform .owing to fluctuation - dissipation balance the static structure factors are independent of the wavevector at thermodynamic equilibrium , note that density fluctuations do not vanish even in the incompressible limit unless .while fluctuations in and are uncorrelated , the fluctuations in concentration and density are _ correlated _ even at equilibrium , we will see below that the low mach equations correctly reproduce the static covariances of density and concentration in the limit . the dynamics of the equilibrium fluctuations can also be studied by applying a fourier - laplace transform in time in order to obtain the dynamic structure factors ( equilibrium correlation functions ) as a function of wavenumber and wavefrequency .it is well - known that the dynamic spectrum of density fluctuations exhibits three peaks for a given , one central rayleigh peak at small frequencies ( slow concentration fluctuations ) , and two symmetric brillouin peaks centered around .as the fluid becomes less compressible ( i.e. , the speed of sound increases ) , there is an increasing separation of time - scales between the side and central spectral peaks .as we will see below , the low mach equations reproduce the central peaks in the dynamic structure factors only , eliminating the side peaks and the associated stiff dynamics .we now examine the spatio - temporal correlations of the steady - state fluctuations in the low mach number equations ( [ eq : momentum_eq],[eq : rho1_eq],[eq : div_v_constraint],[eq : rho_eq ] ) . in order to model the nonequilibrium setting in which giant concentration fluctuationsare observed , we include a constant background concentration gradient in the equations .note that a density gradient will accompany a concentration gradient , and this can introduce some additional terms in depending on how depends on concentration . 
for simplicity , we assume is a constant so that the diffusive term in ( [ eq : rho1_eq ] ) is simply .we also assume the viscosity is spatially constant , to get the simplified coupled velocity - concentration equations , where and is given by ( [ eq : eos_quasi_incomp ] ) .we linearize the equations ( [ eq : simpl_primitive_eqs ] ) around a steady state , , , and , where the reference state is in mechanical equilibrium , .we denote the background concentration gradient with .we additionally assume that the reference state varies very weakly on length scales of order of the wavelength , an in particular , that and are essentially constant .this allows us to drop the bars from the notation and employ a _ quasi - periodic _ or weak - gradient approximation . in the linear approximation, the eos constraint relates density and concentration fluctuations , .the term is second order in the fluctuations and drops out , but the advective term leads to a term in the concentration equation .the forcing term due to gravity becomes . after a spatial fourier transform ,the linearized form of ( [ eq : simpl_primitive_eqs ] ) becomes a collection of stochastic differential equations , one system of linear additive - noise equations per wavenumber , .\label{eq : div_v_linearized}\end{aligned}\ ] ] replacing the right hand side of ( [ eq : div_v_linearized ] ) with zero leads to the incompressible approximation used in ref . , corresponding to the boussinesq approximation of taking the limit while keeping the product constant .let us first compare the dynamics of the equilibrium fluctuations ( ) in the low mach equations with those in the complete compressible equations .for simplicity of notation we will continue to use the hat symbol to denote the space - time fourier transform . in the wavenumber - frequency fourier domain , the concentration fluctuations in the absence of a gradient are obtained from ( [ eq : c_t_linearized ] ) , which is the same as the compressible equations .the density fluctuations follow the concentration fluctuations , , and the dynamic structure factor for density shows the same central rayleigh peak as obtained from the isothermal compressible equations , where we used eq .( [ stoch_flux_covariance ] ) for the covariance of .this shows that the low mach number equations correctly reproduce the slow fluctuations ( small ) in density and concentration , while eliminating the side brillouin peaks associated with the fast isentropic pressure fluctuations . the fluctuations in velocity , however , are different between the compressible and low mach number equations .let us first examine the transverse ( solenoidal ) component of velocity , where is the constant - density orthogonal projection onto the space of divergence - free velocity fields ( in fourier space ) .applying the projection operator to the velocity equation ( [ eq : v_t_linearized ] ) shows that the fluctuations of the solenoidal modes are the same as in the incompressible approximation , the fluctuations of the compressive velocity component , on the other hand , are driven by the stochastic mass flux , as seen from eq .( [ eq : div_v_linearized ] ) at thermodynamic equilibrium , the dynamic structure factor ( space - time fourier spectrum ) of the longitudinal component does not decay to zero as .this indicates that the fluctuations of velocity are not only white in space but also white in time . 
in the incompressible approximation so that the longitudinal velocity fluctuations vanish and the static spectrum of the velocity fluctuations is equal to the projection operator , . in the compressible equations ,the dynamic structure factor for the longitudinal component of velocity decays to zero as because it has two sound ( brillouin ) peaks centered around , in addition to the central diffusive ( rayleigh ) peak .the low mach number equations reproduce the central peak ( slow fluctuations ) correctly , replacing the side peaks with a flat spectrum for large .the origin of this unphysical behavior is the unjustified assumption of infinite separation of time scales between the propagation of sound and the diffusion of mass , momentum and energy . in reality, the same molecular motion underlies all of these processes and the incompressible or the low mach number equations can not be expected to reproduce the correct physical behavior at very short time scales ( ) .if we neglect the term involving in ( [ eq : div_v_linearized ] ) and eliminate the lagrange multiplier ( non - thermodynamic pressure ) using ( [ eq : div_v_linearized ] ) , we obtain the linearized velocity equation in fourier space \v k+i\beta\chi\left(\nu-\chi\right)k^{2}\left(\widehat{\delta c}\right)\v k.\label{eq : vhat_t_proj}\end{aligned}\ ] ] it is straightforward to obtain the steady - state covariances ( static structure factors ) in the presence of a concentration gradient from the linearized system of velocity - concentration equations ( [ eq : c_t_linearized],[eq : vhat_t_proj ] ) . the procedure amounts to solving a linear system for three covariances ( velocity - velocity , concentration - concentration , and velocity - concentration ) .these types of calculations are particularly well - suited for modern computer algebra systems like maple and can be carried out for arbitrary wavenumber and background concentration gradient .we omit the full solution for brevity .experiments measure the steady - state spectrum of concentration fluctuations averaged along the gradient , and we will therefore focus on wavenumbers perpendicular to the gradient , .a straightforward calculation shows that the concentration fluctuations are enhanced as the square of the applied gradient , }\ , h_{\parallel}^{2},\label{eq : s_c_c}\ ] ] where * * and denote the perpendicular and parallel component relative to gravity , respectively .the term in the denominator involving comes from the low mach number constraint ( [ eq : div_v_constraint ] ) and is usually negligible since the concentration gradient is parallel to gravity or . without this termthe result ( [ eq : s_c_c ] ) is the same result as obtained in , and shows that fluctuations at wavenumbers below are suppressed by gravity , as we study numerically in section [ sec : giantfluct ] .in our spatial discretization , we use centered differencing for the advective terms because this leads to a skew - adjoint discretization of advection that maintains discrete fluctuation - dissipation balance in the spatially - discretized stochastic equations .it is well - known that centered discretizations of advection do not preserve monotonicity properties of the underlying pdes in the deterministic setting , unlike one - sided ( upwind ) discretizations .therefore , our spatio - temporal discretization can lead to unphysical oscillations of the concentration and density in cases where the cell peclet number is large . 
in the deterministic setting , can always be decreased by reducing and resolving the fine scale dissipative features of the flow . however , in the stochastic setting , the magnitude of the fluctuating velocities at equilibrium is where is the volume of the hydrodynamic cell .therefore , in two dimensions the characteristic advection velocity magnitude is .this means that in two dimensions is independent of the grid size and reducing can not fix problems that may arise due to a large cell peclet number . for some of the simulations reported in section [ sec : mixingmd ] ,we have found it necessary to implement a spatial filtering procedure to reduce the magnitude of the fluctuating velocities while preserving their spectrum as well as possible at small wavenumbers .the filtering procedure consists of applying a local averaging operation to the spatially - discretized random fields and independently along each cartesian direction .this local averaging smooths the random forcing and thus reduces the spectrum of the random forcing at larger wavenumbers .the specific filters we use are taken from ref .for stencil width , filtering a discrete field in one dimension takes the form in fourier space , for discrete wavenumber this local averaging multiplies the spectrum of by and therefore maintains the second - order accuracy of the spatial discretization . at the same time , the filtering reduces the variance of the fluctuating fields by about a factor of two in one dimension ( a larger factor in two dimensions ) .the spectrum of the fluctuations can be preserved even more accurately if a stencil of width is used for the local averaging , giving a sixth - order accurate filter and a reduction of the variance by about a third in one dimension . in two and three dimensions the filteringoperators are simple tensor products of one - dimensional filtering operators .note that we only use these filters with periodic boundary conditions .one can , of course , also use fourier transform techniques to filter out high frequency components from the stochastic mass and momentum fluxes .the hydrodynamic simulations described in section [ sec : mixingmd ] require as input transport coefficients , notably , the shear viscosity and diffusion coefficient , which need to be extracted from the underlying microscopic ( molecular ) dynamics .this is a very delicate and important step that has not , to our knowledge , been carefully performed in previous studies . in this appendixwe give details about the procedure we developed for this purpose . as discussed in more detail in refs . , the transport coefficients in fluctuating hydrodynamics are not universal material constants but rather depend on the spatial scale ( degree of coarse - graining ) under question .we emphasize that this scale - dependent renormalization is not a molecular scale effect but rather an effect arising out of hydrodynamic fluctuations , and persists even at the hydrodynamic scales we are examining here .the best way to define and measure transport coefficients is by examining the dynamics of _ equilibrium _ fluctuations , specifically , by examining the _ dynamic structure factors _ of the hydrodynamic fields , i.e. , the equilibrium averages of the spatio - temporal fourier spectra of the fluctuating hydrodynamic fields . 
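for a diffusive variable these dynamic structure factors are lorentzian peaks whose half-width at a given wavenumber is the diffusion coefficient times the wavenumber squared, as detailed next. a minimal scipy sketch of such a fit, with illustrative array names and a rough initial guess, is:

....
# minimal sketch ( illustrative names ) : estimate a diffusion constant at one
# wavenumber k by fitting a lorentzian peak to the measured s(k , omega ) .
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(omega, amplitude, width):
    return amplitude * width / (omega**2 + width**2)

def fit_diffusion(omega, s_k_omega, k):
    guess = [s_k_omega.max() * 0.1 * np.abs(omega).max(), 0.1 * np.abs(omega).max()]
    popt, _ = curve_fit(lorentzian, omega, s_k_omega, p0=guess)
    return abs(popt[1]) / k**2          # half width at half maximum = d * k^2
....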
for a hydrodynamic variable that is transported by a purely diffusive process, the spectrum of the fluctuations at a given wavenumber and wavefrequency is expected to be a lorentzian peak of the form ^{-1},\ ] ] where in general the diffusion constant depends on the the wavenumber ( wavelength ) .we can therefore estimate the diffusion coefficient by fitting a lorentzian peak to for different s ( i.e. , ) .similarly , we can estimate the kinematic viscosity by fitting a lorentzian curve to dynamic structure factors for the scaled vorticity , .we performed long equilibrium molecular dynamics simulations of systems corresponding to a grid of hydrodynamic cells , and then calculated the discrete spatio - temporal fourier spectrum of the hydrodynamic fields at a collection of discrete wavenumbers .since these simulations are at equilibrium , the systems are well - mixed , specifically , the initial configurations were generated by randomly assigning a species label to each particle .we then performed a nonlinear least squares lorentzian fit in for each and estimated the width of the lorentzian peak .the results for the dynamics of the equilibrium vorticity fluctuations are shown in fig .[ fig : s_kw_hdmd ] .we see that kinematic viscosity is relatively constant for a broad range of wavelengths , consistent with fluctuating hydrodynamics calculations and previous molecular dynamics simulations . for the pure component one fluid , , with density the figure shows .we therefore used in all of the hydrodynamic runs reported in section [ sec : mixingmd ] .this is about higher than the prediction of the simple enskog kinetic theory , , and is consistent with the estimates reported in ref .because of the diffusion coefficient is small at the densities we study , more specifically , because the schmidt number is larger than 10 , we were unable to obtain reliable estimates for from the dynamic structure factor for concentration .simple dimensional analysis or kinetic theory shows that .since the disks of the two species have equal diameters the viscosity of the pure second fluid component is there is no simple theory that accurately predicts the concentration dependence of the viscosity of a hard disk mixture at higher densities . to our knowledgethere is no published enskog kinetic theory calculations for hard - disk mixtures in two dimensions , even for the simpler case of equal diameters . as an approximation to the true dependence , we employed a simple linear interpolation of the _ kinematic _ viscosity as a function of the mass concentration between the two known values and . the numerical results for mixtures with mass ratios and in fig .[ fig : s_kw_hdmd ] are consistent with this approximation to within the large error bars .for example , for and the interpolation gives which is in reasonable agreement with the numerical estimate .estimates of the momentum diffusion coefficient ( viscosity ) obtained from the width of the central peak in the dynamic structure factor of vorticity . 
a collection of distinct discrete wavenumbers were used and the width of the peaks estimated using a nonlinear least squares lorentzian fit.,scaledwidth=75.0% ] for the inter - species diffusion coefficient , which we emphasize is distinct from the self - diffusion coefficients for particles of either species , enskog kinetic theory predicts no concentration dependence and a simple scaling with the mass ratio , this particular dependence on mass ratio comes from the fact that the average relative speed between particles of different species is , where is the reduced molecular mass .we have assumed in our hydrodynamic calculations that the diffusion coefficient is independent of concentration and follows ( [ eq : chi_r_scaling ] ) .the only input to the hydrodynamic calculation is the bare self - diffusion coefficient for the pure component fluid , .diffusion is strongly renormalized by thermal fluctuations , and fluctuating hydrodynamics theory and simulations predict a strong dependence of the diffusion coefficient on the wavelength , consistent with molecular dynamics results . in order to estimate the appropriate value of the bare diffusion coefficient we numerically solved an inverse problem . using simple bisection, we looked for the value of that leads to best agreement for the average or `` macroscopic '' diffusion ( mixing ) between the particle and continuum simulations .specifically , we calculated the density of the first species along the -direction by averaging in each horizontal row of hydrodynamic cells , see eq .( [ eq : rho_1_h ] ) .the results for for mass ratios and are shown in fig . [fig : spreading_64x64 ] at different points in time for systems of size cells .the figures show the expected sort of diffusive mixing profile , and is exactly what would be used in experiments to measure diffusion coefficients using fluorescent techniques such as fluorescence recovery after photo - bleaching ( frap ) .this macroscopic measurement smooths over the fluctuations ( roughness ) of the diffusive interface and only measures an effective diffusion coefficient at the scale of the domain length .if deterministic hydrodynamics is employed , is the solution of a one - dimensional system of equations obtained by simply deleting the stochastic forcing and the -dependence in the low mach equations . 
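the bisection used for this matching can be sketched as follows, with the full fluctuating hydrodynamics run hidden behind a placeholder function and with an assumed scalar measure of how far the mixing has progressed; none of these names come from the actual code.

....
# minimal sketch ( assumed names ) : bisection on the bare diffusion coefficient
# so that the macroscopic mixing in the hydrodynamic run matches the hdmd data .
def match_bare_diffusion(run_hydro, mixedness, md_profile, chi_lo, chi_hi, tol):
    # run_hydro(chi0) returns the horizontally averaged density profile at the
    # final time ; mixedness(profile) is any scalar that grows monotonically as
    # the two fluids interdiffuse ( e.g. mass of species 1 outside its strip ) .
    target = mixedness(md_profile)
    while chi_hi - chi_lo > tol:
        chi_mid = 0.5 * (chi_lo + chi_hi)
        if mixedness(run_hydro(chi_mid)) < target:
            chi_lo = chi_mid                          # too little mixing : raise chi0
        else:
            chi_hi = chi_mid
    return 0.5 * (chi_lo + chi_hi)
....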
instead of solving this system analytically, we employed our spatio-temporal discretization with fluctuations turned off, and with an effective diffusion coefficient that accounts for the renormalization of the diffusion coefficient by the thermal fluctuations.

figure caption ( [ fig : spreading_64x64 ] ): (left panel) diffusive evolution of the horizontally-averaged density for a system of 64 x 64 hydrodynamic cells at the smaller density ratio, as obtained from hdmd simulations (circles, averaged over 64 runs), deterministic hydrodynamics with the effective (renormalized) diffusion coefficient (dashed lines), and fluctuating hydrodynamics with the bare diffusion coefficient (squares, averaged over 64 runs). error bars are comparable to the symbol size and are not shown for clarity. (right panel) same as the left panel except that the density ratio is larger and the transport coefficients are adjusted according to ( [ eq : eta_r_scaling],[eq : chi_r_scaling ] ).

by matching the averaged profile between the hdmd and the fluctuating and deterministic hydrodynamic simulations at this mass ratio and system size, we obtained estimates for the bare and the renormalized diffusion coefficients (see fig. [ fig : spreading_64x64 ]). the best estimate for the bare diffusion coefficient based on this matching in the absence of filtering compares reasonably well to the prediction of enskog theory, as well as to the measurement of the self-diffusion coefficient for a periodic system with 169 disks reported in ref. (recall that a single hydrodynamic cell in our case contains about 76 particles). the estimate changes somewhat when a 5-point or a 9-point filter is employed. the estimated renormalized diffusion coefficient is much larger, consistent with a rough estimate based on the simple theory presented in ref. . to within statistical accuracy we were not able to detect the increase in the estimated diffusion coefficients when using the larger systems, although in some cases a clear reduction of the estimate was observed. it is important to emphasize that the renormalized coefficient is not a material constant but rather depends on the details of the problem in question, in particular the system geometry, size and boundary conditions. by contrast, the bare coefficient is a constant for a given spatial discretization, and one can use the same number for different scenarios so long as the hydrodynamic cell size and the filter are kept fixed. unlike deterministic hydrodynamics, which presents an incomplete picture of diffusion, fluctuating hydrodynamics correctly accounts for the important contribution of the thermal velocity fluctuations and the roughness of the diffusive interface seen in fig. [ fig : mixingillustrationmass4 ].

a. naji, p. j. atzberger, and frank l. h. brown. hybrid elastic and discrete-particle approach to biomembrane dynamics with application to the mobility of curved integral membrane proteins. 102(13):138102, 2009.

y. hennequin, d. g. a. l. aarts, j. h. van der wiel, g. wegdam, j. eggers, h. n. w.
lekkerkerker, and d. bonn. drop formation by thermal fluctuations at an ultralow surface tension. 97(24):244502, 2006.
we formulate low mach number fluctuating hydrodynamic equations appropriate for modeling diffusive mixing in isothermal mixtures of fluids with different density and transport coefficients. these equations represent a coarse-graining of the microscopic dynamics of the fluid molecules in both space and time, and eliminate the fluctuations in pressure associated with the propagation of sound waves by replacing the equation of state with a local thermodynamic constraint. we demonstrate that the low mach number model preserves the spatio-temporal spectrum of the slower diffusive fluctuations. we develop a strictly conservative finite-volume spatial discretization of the low mach number fluctuating equations in both two and three dimensions and construct several explicit runge-kutta temporal integrators that strictly maintain the equation of state constraint. the resulting spatio-temporal discretization is second-order accurate deterministically and maintains fluctuation-dissipation balance in the linearized stochastic equations. we apply our algorithms to model the development of giant concentration fluctuations in the presence of concentration gradients, and investigate the validity of common simplifications such as neglecting the spatial non-homogeneity of density and transport properties. we perform simulations of diffusive mixing of two fluids of different densities in two dimensions and compare the results of low mach number continuum simulations to hard-disk molecular dynamics simulations. excellent agreement is observed between the particle and continuum simulations of giant fluctuations during time-dependent diffusive mixing.
the block cipher algorithms are a family of cipher algorithms which use symmetric key and work on fixed length blocks of data .since novembre 26 , 2001 , the block cipher algorithm `` rijndael '' , became the successor of des under the name of `` advanced encryption standard '' ( aes ) .its designers , joan daemen and vincent rijmen used algebraic tools to give to their algorithm an unequaled level of assurance against the standard statistical techniques of cryptanalysis .the aes can process data blocks of 128 bits , using cipher keys with lengths of 128 , 192 , and 256 bits .one of the major issues of cryptography is the cryptanalysis of cipher algorithms .cryptanalysis is the study of methods for obtaining the meaning of encrypted information , without access to the secret information that is normally required .some mechanisms for breaking codes include differential cryptanalysis , advanced statistics and brute - force .recent works like , attempt to use algebraic tools to reduce the cryptanalysis of a block cipher algorithm to the resolution of a system of quadratic equations describing the ciphering structure . as an example ,nicolas courtois and josef pieprzyk have described the aes-128 algorithm as a system of 8000 quadratic equations with 1600 variables .unfortunately , these approaches are infeasible because of the difficulty of solving large systems of equations .we will also use algebraic tools but in a new way by using boolean functions and their properties .our aim is to describe a block cipher algorithm as a set of boolean functions then calculate their algebraic normal forms by using the mbius transforms . in our study, we will test our approach on the aes algorithm .our goal is to describe it under the form of systems of boolean functions and to calculate their algebraic normal forms by using the mbius transforms .the system of equations obtained is more easily implementable and could open new ways to cryptanalysis of the aes .let be the set and a boolean algebra , then such that and , is a subset of containing all -tuples of and .the variable is called boolean variable if she only accepts values from , that is to say , if and only if or regardless of .a boolean function of degree with is a function defined from , that is to say built from boolean variables and agreeing to return values only in the set .for example , the function defined from is a boolean function of degree two with : let and be two positive integers .a vector boolean function is a boolean function defined from .an s - box is a vector boolean function .finally , we can define a random boolean function as a boolean function whose values are independent and identically distributed random variables , that is to say : = \frac{1}{2}\ ] ] the number of boolean functions is limited and depends on .thus , there is boolean functions .similarly , the number of vector boolean functions is limited and depends on and .thus , there exists vector boolean functions .if we take , for example , then there exists boolean functions of degree two .these 16 boolean functions are presented in the table in figure [ fig:16bool ] page . among the boolean functions of degree 2 , the best knownare the functions or , and and xor ( see fig .[ fig : or ] , page ) , ( see fig . [fig : and ] , page ) and ( see fig . [fig : xor ] , page ) . [ cols="^,^",options="header " , ] the weight of the function is .so we can reduce to the sum of 3 atomic functions , and . the function if and only if , and . 
from this we can deduce that the anf of the function can be obtained by expanding the corresponding product. applying the same reasoning to the other atomic functions and summing the results, we get the complete equation. with this brief presentation of boolean functions, we have the necessary tools for the development of systems of boolean equations describing the _ advanced encryption standard _.

we have just seen how to generate the algebraic normal form ( anf ) of a boolean function. the presented method is not easily automatable in a computer program, so we will prefer the use of the möbius transform. the möbius transform of a boolean function \( f \) is defined by \( \hat{f}(a) = \bigoplus_{x \preceq a} f(x) \), with \( x \preceq a \) if and only if \( x_i \le a_i \) for every \( i \). from there, we can define the algebraic normal form of a boolean function in \( n \) variables: \( f(x_1, \dots, x_n) = \bigoplus_{a \in \{0,1\}^n} \hat{f}(a)\, x_1^{a_1} x_2^{a_2} \cdots x_n^{a_n} \). to better understand the mechanisms involved in the use of the möbius transform, take an example with the ` majparmi3 ` function, the majority function of three boolean variables. this function from \( \{0,1\}^3 \) to \( \{0,1\} \) is characterized by the truth table shown in figure [ fig : majparmi3 ].

x1 | x2 | x3 | ` majparmi3 `
0 | 0 | 0 | 0
0 | 0 | 1 | 0
0 | 1 | 0 | 0
0 | 1 | 1 | 1
1 | 0 | 0 | 0
1 | 0 | 1 | 1
1 | 1 | 0 | 1
1 | 1 | 1 | 1

calculating the möbius transform of the function we get the result of figure [ fig : rm - mp3 ].

x1 | x2 | x3 | ` majparmi3 ` | möbius transform
0 | 0 | 0 | 0 | 0
0 | 0 | 1 | 0 | 0
0 | 1 | 0 | 0 | 0
0 | 1 | 1 | 1 | 1
1 | 0 | 0 | 0 | 0
1 | 0 | 1 | 1 | 1
1 | 1 | 0 | 1 | 1
1 | 1 | 1 | 1 | 0

once the möbius transform of the function is obtained, we take the triplets \( (x_1, x_2, x_3) \) for which the transform equals 1. in our case we have the triplets \( (0,1,1) \), \( (1,0,1) \) and \( (1,1,0) \), from which we can deduce the equation ` majparmi3 ` \( = x_2 x_3 \oplus x_1 x_3 \oplus x_1 x_2 \), with the addition corresponding to a ` xor ` and the multiplication to a ` and `. the implementation of the möbius transform in python is performed by the two functions described in listing [ lst : tm ].

....
def xortab(t1, t2):
    """takes two tabs t1 and t2 of same lengths and returns t1 xor t2."""
    result = ''
    for i in xrange(len(t1)):
        result += str(int(t1[i]) ^ int(t2[i]))
    return result

def moebiustransform(tab):
    """takes a tab and returns moebiustransform(tab[0:len(tab)/2]) followed by
    moebiustransform(tab[0:len(tab)/2] ^ tab[len(tab)/2:len(tab)]).
    usage : moebiustransform("1010011101010100") --> "1100101110001010"
    """
    if len(tab) == 1:
        return tab
    else:
        t1 = tab[0:len(tab)/2]
        t2 = tab[len(tab)/2:len(tab)]
        t2 = xortab(t1, t2)
        t1 = moebiustransform(t1)
        t2 = moebiustransform(t2)
        t1 += t2
        return t1
....

to facilitate the analysis, and in particular to allow a combinatorial study, we will implement a specific presentation for the equations thus obtained. the aes algorithm takes 128 bits as input and provides 128 bits as output, so we will have 128 boolean functions. the guiding principle is to generate one file per bit; we will have at the end 128 files, each file containing the boolean equation of the concerned bit. in each file, the boolean equation is presented under the form of lines containing sequences of 0 and 1. each line describes a monomial of the equation, and the transition from one line to another means applying a ` xor `. in order to facilitate understanding of the chosen mechanism, the construction of the file corresponding to one bit, from its equation to the file formalism, is described in figure [ fig : fichiers_bit ]. we will now apply to the aes the mechanism described above. the difficulty with our approach is that the encryption functions of the aes algorithm take 128 bits as input and provide 128 bits as output, so we will have 128 boolean functions of 128 variables each, and it is impossible to calculate their truth tables.
indeed , in this case , we have possible combinations of 128-bit blocks and the space storage needed to archive these blocks is terabytes .so we have to find a way to describe the aes encryption functions in the form of boolean functions without using their truth table .we will now detail the solution implemented for each of the sub - functions of the aes encryption algorithm .the function ` subbytes ` is a non - linear substitution that works on every byte of the states array using a substitution table ( s - box ) .this function is applied independently to each byte of the input block .so , the s - box of the aes is a function taking 8 bits as input and providing 8-bit as output .so we can describe it as a boolean function .from there , we can calculate the truth table of the s - box and use the mbius transform for obtain the normal algebraic form of the s - box . then applying the results to the 16 bytes of input block , we get 128 equations , each describing a block bit .for example , the equation of the processing of the bit by the function ` subbyte ` is given in figure [ fig : bitsubbyte ] page .in the ` shiftrows ` function , the bytes of the third column of the state table are shifted cyclically in an offset whose size is dependent on the line number .the bytes of the first line do not suffer this offset . for this function , we do not need to calculate specific boolean function .indeed , the only change made consists to shift bytes in the states array . in our files, this transformation can be easily solved by using a ` xor ` .thus , for example , the second byte of the status table becomes the sixth byte after the application of ` shiftrows ` .this results in the following lines : 0000000000000000**10000000**00000000000000000000000000000000000000000000000000000000**01000000**00000000000000000000000000000000000000000000000000000000**00100000**00000000000000000000000000000000000000000000000000000000**00010000**00000000000000000000000000000000000000000000000000000000**00001000**00000000000000000000000000000000000000000000000000000000**00000100**00000000000000000000000000000000000000000000000000000000**00000010**00000000000000000000000000000000000000000000000000000000**00000001**0000000000000000000000000000000000000000 in the end , the equations of the function ` shiftrows ` for the 128-bit of the block are : the function ` mixcolumns ` acts on the states array , column by column , treating each column as a polynomial with four terms . each column is multiplied by a square matrix . for each columnwe have : thus , for the first byte of the column we have the equation : as in , is the identity for multiplication , this equation becomes : we have the same simplification for all equations describing the multiplication of the column of the states array by the square matrix .therefore we only need to calculate truth tables for multiplication by and in . 
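the truth tables for these two multiplications are not reproduced here; as a point of reference, the sketch below (standard aes field arithmetic, assumed by us rather than extracted from the authors' files) writes multiplication by 02 in gf(2^8) both on bytes and directly as boolean equations on the bits b7 ... b0, and cross - checks the two forms.

....
# a hedged sketch of multiplication by 02 in gf(2^8) modulo x^8+x^4+x^3+x+1
# (standard aes arithmetic, not taken from the authors' files).
def xtime(b):
    """multiply the byte b by 02 in the aes field."""
    b <<= 1
    if b & 0x100:          # the shifted-out bit b7 ...
        b ^= 0x11b         # ... is reduced by the field polynomial
    return b & 0xff

def xtime_bits(b7, b6, b5, b4, b3, b2, b1, b0):
    """the same map written as boolean equations on the individual bits."""
    return (b6, b5, b4, b3 ^ b7, b2 ^ b7, b1, b0 ^ b7, b7)

# cross-check the bit-level equations against the byte-level definition;
# multiplication by 03 is simply xtime(b) ^ b, so 02 is the only table needed
for b in range(256):
    bits = [(b >> i) & 1 for i in range(7, -1, -1)]          # b7 ... b0
    out = xtime_bits(*bits)
    assert xtime(b) == sum(bit << (7 - i) for i, bit in enumerate(out))
print(hex(xtime(0x57)))    # 0xae
....

the per - bit equations used by the authors are then obtained from exactly these truth tables via the möbius transform.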
for example, the equations of the bits to are the following : to recall , in the algorithm of the aes-128 , words and words , with 1 word = 4 bytes = 32 bits .the function ` addroundkey ` adds a round key to the state table by a simple bitwise ` xor ` operation .these rounds keys are computed by a key expansion function .this latter generates a set of words of 32 bit that to say 11 keys of 128 bits derived from the first key .the algorithm used for the expansion of the key involves two functions ` subword ` and ` rotword ` together with a round constant ` rcon ` .the generation of a global boolean function for the key expansion algorithm is impossible because the generation of the key for the round involves the key of the round .this interweaving of rounds keys does not allow us to generate a global boolean function .on the other hand it is possible to generate a boolean function corresponding to the calculation of a key of one round . the first word of the round key is calculated according to the following equation : with and respectively corresponding to the ` subword ` and ` rotword ` functions .the following words , and are calculated according to the following equation : with .the ` subword ` and ` rotword ` functions are built on the same principle as the ` subbytes ` and ` shiftrows ` functions , thus we can reuse the methodology finalized previously . in python language ,the word generation function is written according to the following code ( see listing [ lst : keyword ] , p. ) . .... defgenerateword(num ) : if ( num < 4 ) : w = generategenericword(wordsize*num , ' x ' ) if ( num > = 4 ) : if ( ( num % 4 ) = = 0 ) : w = generateword(3 ) w = rotword(w ) w = subword(w , rconlist[(num/4)-1 ] ) w = xorwords(w , generateword(0 ) ) else : w = generateword(num-1 ) w = xorwords(w , generateword(num%4 ) ) return w .... in this code , several scenarios are considered .the function ` generateword ` takes in parameter the word number to generate , we know that this number is between 0 and 43. if the number is less than 4 , the function returns the boolean identity function as the first key used by the aes is the encryption key .if the number to modulo 4 is zero , the function returns a boolean functions describing the composition of ` subword ` and ` rotword ` functions and the application of the ` xor ` with the ` rcon ` constant .finally , if the number to modulo 4 is not zero , the function returns the boolean function describing the ` xor ` with the corresponding word in the previous round .we now have a boolean function describing a round expansion of the key .as we have seen , the key expansion algorithm involves at round the keys of round . to integrate our boolean function in the encryption process of the aes , we must , at every round , add a temporary variable corresponding to the key of the previous round . as an example , the boolean equation of the bit of the fourth word on the 44 words generate by the key expansion process , is given in the figure [ fig : word ] page .we have now a boolean function for each function ` subbytes ` , ` shiftrows ` and ` mixcolumns ` . in the arrangement of one round , these functions are combined .so for a 128-bit block as output of the ` addroundkey ` function , the block as output of the combination of these three functions is such that : to realize the files as described above , it is necessary to reduce the composition of these three functions in one boolean equation . 
to achieve this, we just have to replace each input variable of a function by the output value of the previous function using the following equation : in python language , the round generation function is written according to the following code ( see listing [ lst : round ] , p. ) ..... def writeroundenc(numround , equasb , equasr , equamc ) : printcolor ( ' # # round%s ' % numround , green ) resultsr = [ ] resultmc = [ ] for i in xrange(blocksize ) : equasr[i ] = equasr[i].split ( ' _ ' ) resultsr.append(equasb[int(equasr[i][1 ] ) ] ) for i in xrange(blocksize ) : tmp = '' for monomial in equamc[i].split('+ ' ) : tmp + = resultsr[int(monomial.split('_')[1 ] ) ] tmp + = ' + ' resultmc.append(tmp.rstrip('+ ' ) ) binmon = generatebinarymonomes(resultmc ) return resultmc .... the boolean equation of one round of the aes for the bit is given in the figure [ fig : round ] page .finally , we can now describe under the form of boolean equations the full process of aes encryption .the function in python language computing this process is given in listing [ lst : ciphering ] page ..... def generateencfullfiles ( ) : printcolor ( ' # # ciphering process ' , yellow ) createaesfiles('enc ' ) addroundkey(0 , ' enc ' ) writeroundenc(0 , subbytes ( ) , shiftrows ( ) , mixcolumns ( ) ) addroundkey(1 , ' enc ' ) writeroundenc(1 , subbytes ( ) , shiftrows ( ) , mixcolumns ( ) ) addroundkey(2 , ' enc ' ) writeroundenc(2 , subbytes ( ) , shiftrows ( ) , mixcolumns ( ) ) addroundkey(3 , ' enc ' ) writeroundenc(3 , subbytes ( ) , shiftrows ( ) , mixcolumns ( ) ) addroundkey(4 , ' enc ' ) writeroundenc(4 , subbytes ( ) , shiftrows ( ) , mixcolumns ( ) ) addroundkey(5 , ' enc ' ) writeroundenc(5 , subbytes ( ) , shiftrows ( ) , mixcolumns ( ) ) addroundkey(6 , ' enc ' ) writeroundenc(6 , subbytes ( ) , shiftrows ( ) , mixcolumns ( ) ) addroundkey(7 , ' enc ' ) writeroundenc(7 , subbytes ( ) , shiftrows ( ) , mixcolumns ( ) ) addroundkey(8 , ' enc ' ) writeroundenc(8 , subbytes ( ) , shiftrows ( ) , mixcolumns ( ) ) addroundkey(9 , ' enc ' ) writefinalroundenc(9 , subbytes ( ) , shiftrows ( ) ) addroundkey(10 , ' enc ' ) writeendflag('enc ' ) printcolor ( ' # # files generated ' , yellow ) .... we will now detail the solution implemented for each of the sub - functions of the aes decryption algorithm .the aes deciphering algorithm uses the ` invshiftrows ` , ` invsubbytes ` and ` invmixcolumns ` functions .those functions are respectively the inverse functions of ` shiftrows ` , ` subbytes ` and ` mixcolumns ` functions , used in the ciphering process .the pseudo code of the decryption function can be written as follows ( see fig . 
[fig : decipherpseudocode ] , page ) , ` nb ` corresponding to the 32-bits words numbe and ` nr ` corresponding to the rounds number used in the algorithm .byte state[4,nb ] state in addrounkey(state , w[nr*nb , ( nr+1)*nb-1 ] ) invshiftrows(state ) invsubbytes(state ) addroundkey(state , w[round*nb , ( round+1)*nb-1 ] ) invmixcolumns(state )invshiftrows(state ) invsubbytes(state ) addrounkey(state , w[0 , nb-1 ] ) state the internal mechanisms to the three functions used in the round during decryption are similar to encryption functions .so we use the same reasoning as the one implemented earlier to generate the corresponding boolean equations .for example , the boolean equation of the three transformations used in the deciphering process for the bit are given in figure [ fig : threefunctions ] page .the key expansion function is the same for both ciphering and deciphering process .boolean equations we built previously are reusable .we have now a boolean equation for each of ` invsubbytes ` , ` invshiftrows ` and ` invmixcolumns ` functions .however , unlike the arrangement of intermediate rounds of the encryption process , these three functions are not combined among them .indeed , the function ` addroundkey ` no longer occurs at the end of the round but sits between ` invsubbytes ` and ` invmixcolumns ` functions .thus , for a block and a key as input of the round , the block as output is such that : to reduce the boolean equations , we will not therefore be able to combine the equations of ` invsubbytes ` and ` invshiftrows ` . as before , to achieve this we just have to replace each input variable of a function with its output value of the previous function using the following equation : in python language , the round generation function is written according to the following code ( see listing [ lst : invround ] , p. ) ..... def writerounddec(numround , equasb , equasr ) : printcolor ( ' # # round % s ' % numround , green ) resultsr = [ ] for i in xrange(blocksize ) : equasr[i ] = equasr[i].split ( ' _ ' ) resultsr.append(equasb[int(equasr[i][1 ] ) ] ) binmon = generatebinarymonomes(resultsr ) return resultsr .... as for the encryption process , we can now describe under the form of boolean equations the full process of the aes decryption .the function in python language computing this process is given in listing [ lst : deciphering ] page ..... 
def generatedecfullfiles ( ) : printcolor ( ' # # deciphering process ' , yellow ) createaesfiles('dec ' ) addroundkey(10 , ' dec ' ) writerounddec(9 , invsubbytes ( ) , invshiftrows ( ) ) addroundkey(9 , ' dec ' ) writeinvmixcolumns(9 ) writerounddec(8 , invsubbytes ( ) , invshiftrows ( ) ) addroundkey(8 , ' dec ' ) writeinvmixcolumns(8 ) writerounddec(7 , invsubbytes ( ) , invshiftrows ( ) ) addroundkey(7 , ' dec ' ) writeinvmixcolumns(7 ) writerounddec(6 , invsubbytes ( ) , invshiftrows ( ) ) addroundkey(6 , ' dec ' ) writeinvmixcolumns(6 ) writerounddec(5 , invsubbytes ( ) , invshiftrows ( ) ) addroundkey(5 , ' dec ' ) writeinvmixcolumns(5 ) writerounddec(4 , invsubbytes ( ) , invshiftrows ( ) ) addroundkey(4 , ' dec ' ) writeinvmixcolumns(4 ) writerounddec(3 , invsubbytes ( ) , invshiftrows ( ) ) addroundkey(3 , ' dec ' ) writeinvmixcolumns(3 ) writerounddec(2 , invsubbytes ( ) , invshiftrows ( ) ) addroundkey(2 , ' dec ' ) writeinvmixcolumns(2 ) writerounddec(1 , invsubbytes ( ) , invshiftrows ( ) ) addroundkey(1 , ' dec ' ) writeinvmixcolumns(1 ) writerounddec(0 , invsubbytes ( ) , invshiftrows ( ) ) addroundkey(0 , ' dec ' ) writeendflag('dec ' ) printcolor ( ' # # files generated ' , yellow ) .... we now have two systems of boolean equations corresponding to the encryption process and decryption of aes .these two systems each have : * 128 equations , one for each bit block ; * 1280 variables for the input block ; * 1280 variables for the key .concerning the variables of keys , the fact that we have a boolean equation by round key involve that we have a set of 128 new variables at each round that is 1280 variables for the aes-128 .each of the variables of the round key being described in terms of variables of the round key . consequently and due to the ` xor ` bitwise operation between the round key and the bits resulting from the round function , we are obliged to insert a new set of 128 variables to describe the block transformation at each round .finally we described the aes encryption and decryption process in the form of two systems of boolean equations with 128 equations and 2560 variables .this mechanism allows us then to describe all of the aes encryption process in the form of files using the same representation as described above .so we have 128 files , one by bit of block . in these files ,each line describes a monomial and the transition from one line to the next is done by the ` xor ` operation . 
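to make the file format concrete, here is a minimal sketch of how one such per - bit equation file could be evaluated; this is our reading of the format, not the authors' code: each line is taken as the ` and ` of the variables selected by its 1-bits, and successive lines are combined with ` xor `.

....
# a hedged sketch (our reading of the file format, not the authors' code)
def eval_equation_file(lines, variables):
    """lines: list of 0/1 mask strings, one per monomial;
    variables: 0/1 assignment, in the same order as the mask positions."""
    bit = 0
    for mask in lines:
        term = 1
        for m, v in zip(mask, variables):
            if m == '1':
                term &= v          # monomial = and of the selected variables
        bit ^= term                # lines are xored together
    return bit

# toy example: the two lines '10' and '01' over variables (x1, x2)
# encode the equation x1 + x2, so with x1 = 1, x2 = 0 the bit is 1
print(eval_equation_file(['10', '01'], [1, 0]))
....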
to implement this mechanism of the description of the aes encryption algorithm and generate the 128 files , we have developed and used a python script based on that described earlier in our presentation of aes .the main program , ` aes_equa.py ` , offers the possibility of one hand to generate the files for aes ciphering and deciphering functions with the ` generateencfullfiles ( ) ` and ` generatedecfullfiles ( ) ` functions and on the other hand , to control that the encryption and the decryption obtained from files is consistent .thus , the functions ` controlencfullfiles ( ) ` and ` controldecfullfiles ` performs respectively the encryption and the decryption from the previously generated files .the function ` controlencfullfiles ( ) ` takes as input a block of 128 bits of plain text and a 128-bit block of key while the function ` controldecfullfiles ( ) ` takes as input a block of 128 bits of cipher text and a a 128-bit block of key .the selected blocks are those provided as test vectors in appendix b of fips 197 .the obtained results correspond to those provided in the fips : files we generated well represent the aes encryption and decryption algorithm .the result obtained by the function ` generateencfullfiles ( ) ` is shown in figure [ lst : generate_encfiles ] page and the result obtained by the ` controlencfullfiles ( ) ` is shown in the listing [ lst : control_encfiles ] page .the control function ` controlencfullfiles ( ) ` injects in the boolean functions the 128 initial variables corresponding to the clear text block and the 1280 variables corresponding to the key blocks of each round .0.38 .... ./aes_equa.py # # ciphering process # # create directory aes_files # # addroundkey0 # # round0 # # addroundkey1 # # round1 # # addroundkey2 # # round2 # # addroundkey3 # # round3 # # addroundkey4 # # round4 # # addroundkey5 # # round5 # # addroundkey6 # # round6 # # addroundkey7 # # round7 # # addroundkey8 # # round8 # # addroundkey9 # # round9 # # addroundkey10 # # files generated .... 0.58 ...../aes_equa.py # # clear block 00112233445566778899aabbccddeeff # # key block 000102030405060708090a0b0c0d0e0f # # addroundkey0 00102030405060708090a0b0c0d0e0f0 32 # # round0 5f72641557f5bc92f7be3b291db9f91a 32 # # addroundkey1 89d810e8855ace682d1843d8cb128fe4 32 # # round1 ff87968431d86a51645151fa773ad009 32 # # addroundkey2 4915598f55e5d7a0daca94fa1f0a63f7 32 # # round2 4c9c1e66f771f0762c3f868e534df256 32 # # addroundkey3 fa636a2825b339c940668a3157244d17 32 # # round3 6385b79ffc538df997be478e7547d691 32 # # addroundkey4 247240236966b3fa6ed2753288425b6c 32 # # round4 f4bcd45432e554d075f1d6c51dd03b3c 32 # # addroundkey5 c81677bc9b7ac93b25027992b0261996 32 # # round5 9816ee7400f87f556b2c049c8e5ad036 32 # # addroundkey6 c62fe109f75eedc3cc79395d84f9cf5d 32 # # round6 c57e1c159a9bd286f05f4be098c63439 32 # # addroundkey7 d1876c0f79c4300ab45594add66ff41f 32 # # round7 baa03de7a1f9b56ed5512cba5f414d23 32 # # addroundkey8 fde3bad205e5d0d73547964ef1fe37f1 32 # # round8 e9f74eec023020f61bf2ccf2353c21c7 32 # # addroundkey9 bd6e7c3df2b5779e0b61216e8b10b689 32 # # round9 7ad5fda789ef4e272bca100b3d9ff59f 32 # # addroundkey10 69c4e0d86a7b0430d8cdb78070b4c55a 32 69c4e0d86a7b0430d8cdb78070b4c55a ( fips result ) .... 
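the same test vector can also be cross - checked against any independent aes implementation. for instance, assuming the third - party pycryptodome package (our choice for illustration; the authors validate directly against fips 197):

....
# assumption: the pycryptodome package is installed (pip install pycryptodome)
from Crypto.Cipher import AES

key = bytes.fromhex('000102030405060708090a0b0c0d0e0f')
plain = bytes.fromhex('00112233445566778899aabbccddeeff')
print(AES.new(key, AES.MODE_ECB).encrypt(plain).hex())
# -> 69c4e0d86a7b0430d8cdb78070b4c55a, matching the fips result above
....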
according to the same principle as for boolean functions of encryption , the result obtained by the function ` generatedecfullfiles ( ) ` is shown in the listing [ lst : generate_decfiles ] page and the obtained result from the ` controldecfullfiles ( ) ` function is shown in the listing [ lst : control_decfiles ] page .0.38 .... ./aes_equa.py # # deciphering process # # create directory aes_files # # addroundkey10 # # round 9 # # addroundkey9 # # invmixcolumns 9 # # round 8 # # addroundkey8 # # invmixcolumns 8 # # round 7 # # addroundkey7 # # invmixcolumns 7 # # round 6 # # addroundkey6 # # invmixcolumns 6 # # round 5 # # addroundkey5 # # invmixcolumns 5 # # round 4 # # addroundkey4 # # invmixcolumns 4 # # round 3 # # addroundkey3 # # invmixcolumns 3 # # round 2 # # addroundkey2 # # invmixcolumns 2 # # round 1 # # addroundkey1 # # invmixcolumns 1 # # round 0 # # addroundkey0 # # files generated .... 0.58 .... ./aes_equa.py # # cipher block 69c4e0d86a7b0430d8cdb78070b4c55a # # key block 000102030405060708090a0b0c0d0e0f # # addroundkey10 7ad5fda789ef4e272bca100b3d9ff59f 32 # # round9 bd6e7c3df2b5779e0b61216e8b10b689 32 # # addroundkey9 e9f74eec023020f61bf2ccf2353c21c7 32 # # invmixcolumns9 54d990a16ba09ab596bbf40ea111702f 32 # # round8 fde3bad205e5d0d73547964ef1fe37f1 32 # # addroundkey8 baa03de7a1f9b56ed5512cba5f414d23 32 # # invmixcolumns8 3e1c22c0b6fcbf768da85067f6170495 32 # # round7 ... # # round3 fa636a2825b339c940668a3157244d17 32 # # addroundkey3 4c9c1e66f771f0762c3f868e534df256 32 # # invmixcolumns3 3bd92268fc74fb735767cbe0c0590e2d 32 # # round2 4915598f55e5d7a0daca94fa1f0a63f7 32 # # addroundkey2 ff87968431d86a51645151fa773ad009 32 # # invmixcolumns2 a7be1a6997ad739bd8c9ca451f618b61 32 # # round1 89d810e8855ace682d1843d8cb128fe4 32 # # addroundkey1 5f72641557f5bc92f7be3b291db9f91a 32 # # invmixcolumns1 6353e08c0960e104cd70b751bacad0e7 32 # # round0 00102030405060708090a0b0c0d0e0f0 32 # # addroundkey0 00112233445566778899aabbccddeeff 32 00112233445566778899aabbccddeeff ( fips result ) .... in both cases , encryption and decryption , the results we obtain by using our files to cipher and to decipher blocks are conform to those described in the fips 197 .so our boolean equation system describing the aes algorithm is right .after presenting briefly the boolean algebra , boolean functions and two of their presentations , we have developed a process that allows us to translate the aes encryption and decryption algorithms in boolean functions. then we defined a mode of representation of these boolean functions in the form of computer files .finally , we have developed a program to implement this process and to check that the expected results are consistent with those provided in the fips . in the end , we got a two new systems of boolean equations , the first one describing the entire ciphering process while the second describes the entire deciphering process of the _ advanced encryption standard _ and each one including 128 equations and variables. 
the next step could be to search, through statistical and combinatorial analysis, for new ways to cryptanalyse the aes: either by finding a method to solve our system of equations, or by exploiting statistical biases that this system makes apparent. michel dubois and eric filiol, _ proposal for a new equation system modelling of block ciphers and application to aes 128 _, proceedings of the 11th european conference on information warfare and security, 2012. michel dubois and eric filiol, _ proposal for a new equation system modelling of block ciphers and application to aes 128 - long version _, pioneer journal of algebra, number theory and its applications, 2012.
one of the major issues of cryptography is the cryptanalysis of cipher algorithms. cryptanalysis is the study of methods for obtaining the meaning of encrypted information without access to the secret information that is normally required. some mechanisms for breaking codes include differential cryptanalysis, advanced statistics and brute force. recent works also attempt to use algebraic tools to reduce the cryptanalysis of a block cipher algorithm to the resolution of a system of quadratic equations describing the ciphering structure. in our study, we also use algebraic tools but in a new way: by using boolean functions and their properties. a boolean function is a function taking binary words of a fixed length as arguments and returning a single bit; it is characterized by its truth table. any boolean function can be represented uniquely by its algebraic normal form, an equation which only contains additions modulo 2 (the ` xor ` function) and multiplications modulo 2 (the ` and ` function). our aim is to describe the aes algorithm as a set of boolean functions and then to calculate their algebraic normal forms by using the möbius transform. afterwards, we use a specific representation of these equations to facilitate their analysis and, in particular, to attempt a combinatorial analysis. through this approach we obtain a new kind of equation system. this system is more easily implementable and could open new ways to cryptanalysis of the aes. * keywords * : block cipher, boolean function, cryptanalysis, aes
one of the motivations of this paper is to discuss some `` mysterious '' configurations of zeros of polynomials , defined by an orthogonality condition with respect to a sum of exponential functions on the plane , that appeared as a results of our numerical experiments .it turned out that in this apparently simple situation the orthogonal polynomials may exhibit a behavior which existing theoretical models do not explain , or the explanation is not straightforward . in order to make our argumentsself - contained , we present a brief outline of the fundamental concepts and known results and discuss their possible generalizations . the so - called complex or non - hermitianorthogonal polynomials with analytic weights appear in approximation theory as denominators of rational approximants to analytic functions and in the study of continued fractions .recently , non - hermitian orthogonality found applications in several new areas , for instance in the description of the rational solutions to painlev equations , in theoretical physics and in numerical analysis .observe that due to analyticity , there is a freedom in the choice of the integration contour for the non - hermitian orthogonal polynomials , which means that the location of their zeros is a priori not clear .the problem of their limit zero distribution is one of the central aspects studied in the theory of orthogonal polynomials , especially in the last few decades .several important general results in this direction have been obtained , and describing them is one of the goals of the first parts of this paper . however , the general theory is far from being complete , and many natural questions remain unanswered or have only a partial explanation , as some of the examples presented in the second part of this work will illustrate .we will deal with one of the simplest situation that is still posing many open questions .complex non - hermitian orthogonal polynomials are denominators of the diagonal pad approximants to functions with branch points and thus play a key role in the study of the asymptotic behavior of these approximants , in particular , in their convergence .since the mid - twentieth century convergence problems for pad approximants have been attracting wide interest , and consequently , complex orthogonal polynomials have become one of the central topics in analysis and approximation theory .there is a natural historical parallel of this situation with the one occurred in the middle of the ninetieth century , when pad approximants ( studied then as continued fractions ) for markov- and stieltjes - type functions led to the introduction of general orthogonal polynomials on the real line .the original fundamental theorems by p. chebyshev , a. markov and t. stieltjes on the subject gave birth to the theory of general orthogonal polynomials . 
in 1986 stahl proved a fundamental theorem explaining the geometry of configurations of zeros of non - hermitian orthogonal polynomials and presented an analytic description of the curves `` drawn '' by the strings of zeros .those curves are important particular cases of what we now call -curves .they may be defined by the symmetry property of their green functions or as trajectories of some quadratic differential .the fact that the denominators of the diagonal pad approximants to an analytic function at infinity satisfy non - hermitian orthogonality relations is straightforward and was definitely known in the nineteenth century .just nobody believed that such an orthogonality could be used to study the properties pad denominators .stahl s theorem showed that complex orthogonality relations may be effectively used for these purposes , at least for functions with a `` small '' set of singular points , some of them being branch points .this , without any doubt , was a beginning of a new theory of orthogonal polynomials . before the work of stahl , asymptotics of these polynomials was studied for some subclasses of functions and by appealing to their additional properties .for instance , several important results were obtained by gonchar and collaborators in 1970s and the beginning of the 1980s .the geometry of their zero distribution was conjectured ( and in partially proved , e.g. , for hyperelliptic functions ) by j. nuttall and collaborators .later , in , the case when the logarithmic derivative of the approximated function is rational ( the so - called semiclassical or laguerre class ) was analyzed .the associated orthogonal polynomials , known as the semiclassical or generalized jacobi polynomials , satisfy a second - order differential equation , and the classical liouville green ( a.k.a wkb ) method may be used to study their strong asymptotics , as it was done by nuttall , see also a recent paper .stahl s ideas were considerably extended by gonchar and rakhmanov to cover the case when the orthogonality weight depends on the degree of the polynomial , which requires the inclusion of a non - trivial external field ( or background potential ) in the picture .the curves , describing the location of the strings of zeros of the orthogonal polynomials , feature a symmetry property ( the -property , so we call them the -curves ) , and their geometry is much more involved . the resulting gonchar rakhmanov stahl ( or grs ) theory , founded by , allows to formulate statements about the asymptotics of the zeros of complex orthogonal polynomials conditional to the existence of the -curves , which is a non - trivial problem from the geometric function theory. further contributions in this direction , worth mentioning here , are .the notion of the -property can be interpreted also in the light of the deift - zhou s nonlinear steepest descent method for the riemann hilbert problems .one of the key steps in the asymptotic analysis is the deformation of the contours , the so - called lens opening , along the level sets of certain functions .it is precisely the -property of these sets which guarantees that the contribution on all non - relevant contours becomes asymptotically negligible .an important further development was a systematic investigation of the critical measures , presented in .critical measures are a wider class that encompasses the equilibrium measures on -curves , see sect . 
[ sec : grs ] for the precise definition and further details .one of the contributions of was the description of their supports in terms of trajectories of certain quadratic differentials ( this description for the equilibrium measures with the -property is originally due to stahl ) . in this way, the problem of existence of the appropriate -curves is reduced to the question about the global structure of such trajectories .let us finish by describing the content of this paper .section [ sec : examples ] is a showcase of some zero configurations of polynomials of complex orthogonality , appearing in different settings .the presentation is mostly informal , it relies on some numerical experiments , and its goal is mainly to illustrate the situation and eventually to arouse the reader s curiosity .[ sec : logpotential ] contains a brief overview of some basic definitions from the logarithmic potential theory , necessary for the subsequent discussion , as well as some simple applications of these notions to polynomials .this section is essentially introductory , and a knowledgeable reader may skip it safely . in sect .[ sec : grs ] we present the known basic theorem on asymptotics of complex orthogonal polynomials .we simplify settings as much as possible without losing essential content .the definitions and results contained here constitute the core of what we call the _grs theory_. altogether , sects . [ sec : examples][sec : grs ] are expository . finally , in sections [ subsec6.7 ] and [ sec:6 ] we present some recent or totally new results . for instance, section [ subsec6.7 ] is about the so - called vector critical measures , which find applications in the analysis of the hermite pad approximants of the second kind for a couple of power series at infinity of a special form , as well as in the study of the problems tackled in sect .[ sec:6 ] .this last section of the paper deals with the orthogonality with respect to _ a sum _ of two ( or more ) analytic weights . in order to build some intuition, we present another set of curious numerical results in sect .[ sec : numerical experiments ] , and the title of this paper ( partially borrowed from philip k. dick ) is motivated by the amazing variety and beauty of possible configurations . as the analysis of these experiments shows even for the simplest model , corresponding to the sum of two exponential weights , in some domain in the parameter space of the problem the standard grs theory still explains the observed behavior , while in other domains it needs to be modified or adapted , which leads to some new equilibrium problems. finally , there are regions in the parameter space where it is not yet clear how to generalize the grs theory to explain our numerical results , and we can not go beyond an empirical discussion .we start by presenting some motivating examples that should illustrate the choice of the title of this work .pad approximants are the _ locally best rational approximants of a power series _ ; in a broader sense , they are constructive rational approximants _ with free poles_. let denote the set of algebraic polynomials with complex coefficients and degree , and let be a ( formal ) power series .for any arbitrary non - negative integer there always exist polynomials and , , satisfying the condition this equation is again formal and means that ( called the _ remainder _ ) is a power series in descending powers of , starting at least at . 
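the displayed condition itself was lost from this text; in the standard formulation for a germ at infinity it reads as follows (a reconstruction in latex, which may differ from the authors' exact normalization):

....
% f is a (formal) power series at infinity, P_n and Q_n are polynomials
f(z) = \sum_{k \ge 0} \frac{f_k}{z^k}, \qquad
\deg P_n \le n, \quad \deg Q_n \le n, \quad Q_n \not\equiv 0,

% defining condition for the diagonal pade approximant [n/n]_f = P_n / Q_n
R_n(z) := Q_n(z)\, f(z) - P_n(z) = \mathcal{O}\left(z^{-n-1}\right),
\qquad z \to \infty .
....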
in order to find polynomials and we first use condition to determine the coefficients of , after which is just the truncation of at the terms of nonnegative degree .it is easy to see that condition does not determine the pair uniquely .nevertheless , the corresponding rational function is unique , and it is known as the _ ( diagonal ) pad approximant to at of degree . hence , the denominator is the central object in the construction of diagonal pad approximants , and its poles constitute the main obstruction to convergence of in a domain of the complex plane . with this definition we can associate a formal orthogonality verified by the denominators ( see , e.g. , a recent survey and the references therein ) .however , the most interesting theory is developed when is an analytic germ at infinity .indeed , if converges for , then choosing a jordan closed curve in and using the cauchy theorem we conclude that this condition is an example of a _ non - hermitian orthogonality _satisfied by the denominators . in particular interestingis the case when corresponds to an algebraic ( multivalued ) function , being approximated by intrinsically single - valued rational functions . as an illustration we plot in figure [ fig1pade ] the poles of for the analytic germs at infinity of two functions , both normalized by .these functions belong to the so - called laguerre class ( or are also known as `` semiclassical '' ) : their logarithmic derivatives are rational functions .cc fig1a & fig1b a quick examination of the pictures puts forward two phenomena : 1 .generally , poles of distribute on in a rather regular way .our eye can not avoid `` drawing '' curves along which the zeros align almost perfectly .2 . there are some exceptions to this beautiful order : observe a clear outlier on figure [ fig1pade ] , right .these `` outliers '' are known as the _ spurious poles _ of the pad approximants .this is the `` most classical '' family of polynomials , which includes the chebyshev polynomials as a particular case .they can be defined explicitly ( see ) , or , equivalently , by the well - known rodrigues formula .\ ] ] incidentally , jacobi polynomials could have been considered in the previous section : for they are also denominators of the diagonal pad approximants ( at infinity ) to the function in fact , denominators of the diagonal pad approximants to semiclassical functions as in are known as _ generalized jacobi polynomials _ , see . clearly , polynomials are entire functions of the complex parameters .when they are orthogonal on ] such that here we denote by the weak- * convergence of measures .an expression for the cauchy transform of can be obtained in an elementary way directly from ( [ difjac ] ) .the derivation of the continued fraction for from the differential equation appears in the perron s monograph , although the original ideas are contained already in the work of euler . for more recent applications , see .the differential equation can be rewritten in terms of the function well defined at least for ] , and by our assumption , uniformly on compact subsets of ( a.k.a . locally uniformly in ) ] we can use the stieltjes perron ( or sokhotsky plemelj ) inversion formula to recover the measure : we conclude that is absolutely continuous on ] , and .\ ] ] due to the uniqueness of this expression , we conclude that the limit in holds for the whole sequence .limit with given by holds actually for families of orthogonal polynomials with respect to a wide class of measures on ] . 
in order to define it properly, we need to introduce the concept of the _ logarithmic energy _ of a borel measure : moreover , given a real - valued function ( the _ external field _ ) , we consider also the _ weighted energy _ the electrostatic model of stieltjes for the zeros of the jacobi polynomials says precisely that the normalized zero - counting measure minimizes , with among all discrete measures of the form supported on ] , minimizes the logarithmic energy among all probability borel measures living on that interval : ] , the value is its _ robin constant _ , and )=\exp(-e(\lambda))=\frac{1}{2}\ ] ] is its _logarithmic capacity_. as it was mentioned , this asymptotic zero behavior is in a sense universal : it corresponds not only to , but to any sequence of orthogonal polynomials on ] _ is the set of minimal logarithmic capacity _ among all continua joining and . if we recall that the jacobi polynomials are denominators of the diagonal pad approximants ( at infinity ) to the function , using markov s theorem ( see ) we can formulate our conclusion as follows : the diagonal pad approximants to this converge ( locally uniformly ) in , where is the continuum of minimal capacity joining and is easy to see that is a multivalued analytic function with branch points at .it turns out that this fact is much more general , and is one of the outcomes of the gonchar stahl ( or grs ) theory .let us denote by the class of functions holomorphic ( i.e. , analytic and single - valued ) in a domain , and let be defined by the minimal capacity property stahl proved that the sequence converges to in capacity in the complement to , under the assumption that the set of singularities of has capacity .the convergence in capacity ( instead of uniform convergence ) is the strongest possible assertion for an arbitrary function due to the presence of the so - called spurious poles of the pad approximants that can be everywhere dense , even for an entire , see , as well as figure [ fig1pade ] .more precisely , stahl established the existence of a unique of minimal capacity , comprised of a union of analytic arcs , such that the jump across each arc is , as well as the fact that for the denominator of pad approximants we have , where is equilibrium measure for . here and in what followswe denote by ( resp . , ) the left ( resp . , right )boundary values of a function on an oriented curve .the original work of stahl contained not only the proof of existence , but a very useful characterization of the extremal set : on each arc of this set where are the normal vectors to pointing in the opposite directions .this relation is known as the _ -property _ of the compact .notice that stahl s assertion is not conditional , and the existence of such a compact set of minimal capacity is guaranteed . in the case of a finite set of singularities ,the simplest instance of such a statement is the content of the so - called _chebotarev s problem _ from the geometric function theory about existence and characterization of a continuum of minimal capacity containing a given finite set .it was solved independently by grtzsch and lavrentiev in the 1930s .a particular case of stahl s results , related to chebotarev s problem , states that given a finite set of distinct points in there exist a unique set where is the class of continua with .the complex green function for has the form where and is a polynomial uniquely defined by . 
in particular , we have and is an immediate consequence of these expressions .another consequence is that is a union of arcs of critical trajectories of the quadratic differential .this is also the zero level of the ( real ) green function of the two - sheeted riemann surface for . in order to study the limit zero distribution of pad denominators stahl created an original potential theoretic method based directly on the non - hermitian orthogonality relations satisfied by these polynomials ; incidentally , he also showed for the first time how to deal with a non - hermitian orthogonality .the method was further developed by gonchar and rakhmanov in for the case of varying orthogonality ( see also ) . the underlying potential theoretic model in this casemust be modified by including a non trivial external field .if the set on the plane is comprised of a finite number of piece - wise analytic arcs , we say that it exhibits the _-property in the external field _ if where is now the minimizer of the weighted energy among all probability measures supported on , and .in other words , is the equilibrium measure of in the external field , and can be characterized by the following variational ( equilibrium ) conditions : there exists a constant ( the _ equilibrium constant _ ) such that equation uniquely defines both the probability measure on and the constant .the pair of conditions has a standard electrostatic interpretation , which turns useful for understanding the structure of the -configurations .indeed , it follows from that distribution of a positive charge presented by is in equilibrium on the fixed conductor s , while the -property of compact in means that forces acting on the element of charge at from both sides of are equal .so , the distribution of an -curve will remain in equilibrium if we remove the condition ( `` scaffolding '' ) that the charge belongs to and make the whole plane a conductor ( except for a few exceptional insulating points , such as the endpoints of some of arcs in the support of ) . in other words , is a distribution of charges which is in an ( unstable ) equilibrium in a conducting domain .let be a domain in , a compact subset of of positive capacity , , and let the sequence .assume that polynomials are defined by the non - hermitian orthogonality relations where the integration goes along the boundary of ( if such integral exists , otherwise integration goes over an equivalent cycle in ) . a slightly simplified version of one of the main results of gonchar and rakhmanov is the following : [ grsthm ] assume that converge locally uniformly in ( as ) to a function .if has the -property in and if the complement to the support of the equilibrium measure is connected , then .theorem [ grsthm ] was proved in , where it was called `` generalized stahl s theorem '' .observe that unlike the original stahl s theorem , its statement is conditional : _ if _ we are able to find a compact set with the -property ( in a harmonic external field ) and connected complement , _ then _ the weak- * convergence is assured . 
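for reference, the displayed formulas defining the weighted equilibrium measure and the s - property, which were lost above, read as follows in the standard formulation (a reconstruction, stated up to the usual technical qualifications such as sets of zero capacity):

....
% logarithmic potential of the measure \mu
U^{\mu}(z) = \int \log \frac{1}{|z - t|}\, d\mu(t) ,

% equilibrium conditions on S in the external field \varphi
U^{\mu}(z) + \varphi(z) = c \quad \text{on } \operatorname{supp}\mu , \qquad
U^{\mu}(z) + \varphi(z) \ge c \quad \text{on } S ,

% s-property: equal one-sided normal derivatives on the arcs of supp(mu)
\frac{\partial}{\partial n_{+}} \left( U^{\mu} + \varphi \right)(z)
  = \frac{\partial}{\partial n_{-}} \left( U^{\mu} + \varphi \right)(z) ,
\qquad z \in \operatorname{supp}\mu .
....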
under general assumptions on the class of integration paths and in the presence of a non trivial external field , neither existence nor uniqueness of a set with the -property are guaranteed .a general method of solving the existence problem is based on the maximization of the equilibrium energy , which is inspired by the minimum capacity characterization ( in the absence of an external field ) , see .more exactly , consider the problem of finding with the property where is the set of all probability borel measures on .if a solution of this extremal problem exists then under `` normal circumstances '' is an -curve in the external field , see . _ critical measures _ are borel measures on the plane with a vanishing local variation of their energy in the class of all smooth local variations with a prescribed set of fixed points .they are , therefore equilibrium distributions in a conducting plane with a number of insulating points of the external field .the zeros of the heine stieltjes polynomials ( see section [ sec : hspolyns ] ) are ( discrete ) critical measures .this observation lead in to a theorem on asymptotics of these polynomials in terms of continuous critical measures with a finite number of insulating points of the external field .it turns out that the equilibrium measures of compact sets with the -property are critical ; the reciprocal statement is that the critical measures may be interpreted as the equilibrium measures of -curves in piece - wise constant external fields .both notions , however , are defined in a somewhat different geometric settings , and it is in many ways convenient to distinguish and use both concepts in the study of many particular situations .the idea of studying critical ( stationary ) measures has its origins in .later it was used in in combination with the min max ansatz , and systematically in , see also .the formal definition is as follows : let be a domain , be a finite set and a harmonic function in .let and , satisfying .then for any ( signed ) borel measure the mapping ( `` -variation '' ) defines the pull - back measure , as well as the associated variation of the weighted energy , where was introduced in .we say that is _ -critical _if for any -variation , such that the limit above exists , we have when , we write -critical instead of -critical measure . the relation between the critical measures and the -property is very tight : every equilibrium measures with an -property is critical , while the potential of any critical measure satisfies .however , in some occasion it is more convenient to analyze the larger set of critical measures .as it was proved in , for any -critical measure we have this formula implies the following description of : it consists of a finite number of critical or closed trajectories of the quadratic differential on ( moreover , all trajectories of this quadratic differential are either closed or critical , see ( * ? ? ?* theorem 5.1 ) or ) . together with thisyields the representation finally , the -property on and the formula for the density on any open arc of follow directly from . 
in this way, function becomes the main parameter in the construction of a critical measure : if we know it ( or can guess it correctly ) , then consider the problem solved .[ examplesaffconf ] let us apply the grs theory to the following simple example : we want to study zeros asymptotics of the polynomials defined by the orthogonality relation where the varying ( depending on ) weight function has the form and the integration goes along a jordan arc connecting and .since uniformly on every compact subset of , the application of the grs theory is reduced to finding a jordan arc , connecting and , such that the equilibrium measure has the -property in the external field . for , such an is just the vertical straight segment connecting both endpoints . using and the results of is easy to show that for small values of ( roughly speaking , for ) , valid a.e .( with respect to the plane lebesgue measure ) on . in this case, is connected , and is the critical trajectory of the quadratic differential connecting . as an illustration of this case we depicted the zeros of for , see figure [ figexampleexp ] , left . ] .cc fig5a & fig5b at a critical value of the double zero hits the critical trajectory and splits ( for ) into two simple zeros , that can be computed , so that now in this situation is made of two real - symmetric open arcs , each connecting with lying in the same half - plane , delimited by ; they are the critical trajectory of the quadratic differential on . for illustration , see the zeros of for , see figure [ figexampleexp ] , right .formula and the fact that is comprised of arcs of trajectories of on show that is a constant on . however , this constant is not necessarily the same on each connected component of : this additional condition singles out the equilibrium measures with the -property within the class of critical measures .equivalently , the equilibrium measure can be identified by the validity of the variational condition .recall that one of the motivations for the study of critical ( instead of just equilibrium ) measures is the characterization of the zero distributions of heine stieltjes polynomials , see section [ sec : hspolyns ] .one of the main results in is the following .if we have a convergent sequence of van vleck polynomials , then for corresponding heine stieltjes polynomials we have where is an -critical measure .moreover , any -critical measure may be obtained this way . in electrostatic terms, is a discrete critical measure and the result simply means that a weak limit of a sequence of discrete critical measures is a ( continuous ) critical measure . in the case of the heine stieltjes polynomials , in is a rational function of the form , with given by , and the polynomial determined by a system of nonlinear equations . in the case of varying jacobi polynomials , , with , satisfying we can obtain the explicit expression for using the arguments of section [ sec : logpotential ] . by the grs theory , the problem of the weak- * asymptotics of the zeros of such polynomials boils down to the proof of the existence of a critical trajectory of the corresponding quadratic differential , joining the two zeros of , and of the connectedness of its complement in , see , as well as figure [ fig : withzeros ] .if we check the motivating examples from section [ sec : examples ] , we will realize that at this point we only lack tools to explain the asymptotics of the zeros of the hermite pad polynomials ( section [ sec : hpapprox ] ) . 
for this, we need to extend the notion of critical measures ( and in particular , of equilibrium measures on sets with an -property ) to a vector case .assume we are given a vector of nonnegative measures , compactly supported on the plane , a symmetric and positive - semidefinite interaction matrix , and a vector of real - valued harmonic external fields , , .we consider the total ( vector ) energy functional of the form ( compare with ) , where is the mutual logarithmic energy of two borel measures and .typical cases of the matrix for are corresponding to the so - called angelesco and nikishin systems , respectively , see . as in the scalar situation , for and , denote by the push - forward measure of induced by the variation of the plane , .we say that is a_ critical vector measure _ ( or a saddle point of the energy ) if for every function .usually , critical vector measures a sought within a class specified by their possible support and by some constraints on the size of each component .the vector _ equilibrium _ problems deal with the minimizers of the energy functional over such a family of measures .for instance , we can be given families of analytic curves , so that , , and additionally some constraints on the size of each component of . in a classical settingthis means that we fix the values , such that , and impose the condition see e.g. .more recently new type of constraints have found applications in this type of problems , when the conditions are imposed on a linear combination of .for instance , considers the case of , with the interaction matrix polynomial external fields , and conditions ] either contains no zeros of , or they are equally spaced , in concordance with the zeros of the weight there .the complement of this vertical segment in ] in the external field , and its existence and uniqueness is guaranteed by the classical theory , see e.g. . as usual ,the main step consists in finding the support of , which in this case , for any values of the parameters and in , takes the form \cup [ v , a ] , \quad\text { where}\quad v = v(a , k ) \in ( 0,a ) ; \ ] ] the fact that follow from , as well as from the calculations below .once the support is determined , we can find the expression for the cauchy transform of , and from there to recover the equilibrium measure ( and by theorem [ thm2 ] , the asymptotics of ) by sokhotsky plemelj s formula .notice that the -property is satisfied simply by symmetry reasons , so the existence of an -curve for any such is now automatic .the assertion of theorem [ thm-3 ] boils down now to where and the branch of in is positive for . from these expressions we get , which is absolutely continuous on : ,\ ] ] where as usual , are the boundary values of this function on form the upper half plane .the two real parameters and in are defined by the equations the first identity expressing the equality of the equilibrium constants on both components of , and the second one assuring that is a probability measure , with its mass distributed evenly on ] . for every and is a unique solution of this system , and hence , a unique solution of the -problem for this case .now assume that we can no longer guarantee the existence of an -curve in class of curves connecting and for all values of the parameters . however , it is intuitively clear that for small enough , an curve exists , is discontinuous , and it still lies in the domain where is harmonic ( cf . 
figure [ figcosreal ] , right , with and ) .it turns out that the formulas , and remain valid with obvious modifications and equations for the parameters are similar to those in . for other values of the parametersthe limit distribution of the zeros of can be either a continuum , cutting the imaginary axis in a single ( or a finite number of ) point ( cf . figure [ figassym1 ] , left , with and ) , or it can split into disjoint arcs , each or some of them crossing the imaginary axis ( cf . figure [ figassym1 ] , right , with and ) . in this casewe expect these arcs to be the support of the equilibrium measure for the pseudo - vector problem described in section [ sec : pseudo ] .we conclude our exposition with the remark that the reformulation of the orthogonality conditions as could be crucial for the application of the nonlinear steepest descent analysis of deift - zhou for the study of the strong ( and in consequence , also weak- * ) asymptotics of s , especially in the most elusive cases when the -property is apparently lost .we plan to address this problem in a forthcoming publication .this paper is based on a plenary talk at the focm conference in montevideo , uruguay , in 2014 by the first author ( amf ) , who is very grateful to the organizing committee for the invitation .this work was completed during a stay of amf as a visiting chair professor at the department of mathematics of the shanghai jiao tong university ( sjtu ) , china .he acknowledges the hospitality of the host department and of the sjtu .amf was partially supported by the spanish government together with the european regional development fund ( erdf ) under grants mtm2011 - 28952-c02 - 01 ( from micinn ) and mtm2014 - 53963-p ( from mineco ) , by junta de andaluca ( the excellence grant p11-fqm-7276 and the research group fqm-229 ) , and by campus de excelencia internacional del mar ( ceimar ) of the university of almera .we are grateful to s. p. suetin for allowing us to use the results of his calculations in figure [ fig1hermitepade ] , as well as to an anonymous referee for several remarks that helped to improve the presentation .a. i. aptekarev and h. stahl .asymptotics of hermite - pad polynomials . in _progress in approximation theory ( tampa , fl , 1990 ) _ , volume 19 of _ springer ser ._ , pages 127167 .springer , new york , 1992 .j. baik , t. kriecherbauer , k. d. t .- r .mclaughlin , and p. d. miller .uniform asymptotics for polynomials orthogonal with respect to a general class of discrete weights and universality results for associated ensembles : announcement of results . , ( 15):821858 , 2003 .a. a. gonchar and e. a. rakhmanov . on the convergence of simultaneous pad approximants for systems of functions of markov type . , 157:3148 , 234 , 1981 .number theory , mathematical analysis and their applications .a. a. gonchar and e. a. rakhmanov .equilibrium measure and the distribution of zeros of extremal polynomials ., 125(2):117127 , 1984 .translation from mat .134(176 ) , no.3(11 ) , 306 - 352 ( 1987 ) . a. a. gonchar and e. a. rakhmanov .equilibrium distributions and degree of rational approximation of analytic functions . , 62(2):305348 , 1987 .translation from mat .134(176 ) , no.3(11 ) , 306 - 352 ( 1987 ) .g. lpez lagomasino and a. ribalta .approximation of transfer functions of unstable infinite dimensional control systems by rational interpolants with prescribed poles . 
in_ proceedings of the international conference on rational approximation , icra99 ( antwerp ) _ , volume 61 , pages 267294 , 2000 .a. martines - finkelshten , e. a. rakhmanov , and s. p. suetin .differential equations for hermite - pad polynomials . , 68(1(409)):197198 , 2013 .translation in russian math .surveys 68 ( 2013 ) , no .1 , 183185 . a. martnez - finkelshtein , p. martnez - gonzlez , and r. orive .zeros of jacobi polynomials with varying non - classical parameters . in _ special functions ( hong kong , 1999 ) _ , pages 98113 .world sci .publ . , river edge , nj , 2000 .a. martnez - finkelshtein , p. martnez - gonzlez , and f. thabet .trajectories of quadratic differentials for jacobi polynomials with complex parameters .preprint arxiv math.1506.03434 , to appear in comput .methods and funct .theory , 2015 .a. martnez - finkelshtein and e. a. rakhmanov . on asymptotic behavior of heine - stieltjes and van vleck polynomials . in _ recent trends in orthogonal polynomials andapproximation theory _ ,volume 507 of _ contemp ._ , pages 209232 .soc . , providence , ri , 2010 .a. martnez - finkelshtein , e. a. rakhmanov , and s. p. suetin .heine , hilbert , pad , riemann , and stieltjes : john nuttall s work 25 years later . in _ recent advances in orthogonal polynomials , special functions , and their applications _ ,volume 578 of _ contemp ._ , pages 165193 .amer . math .soc . , providence , ri , 2012 .j. nuttall .the convergence of pad approximants to functions with branch points . in _ pad and rational approximation ( proc .south florida , tampa , fla ., 1976 ) _ , pages 101109 . academic press , new york , 1977 .frank w. j. olver , daniel w. lozier , ronald f. boisvert , and charles w. clark , editors . .department of commerce , national institute of standards and technology , washington , dc ; cambridge university press , cambridge , 2010 . with 1 cd - rom ( windows , macintosh and unix ) .e. a. rakhmanov .orthogonal polynomials and -curves . in _ recent advances in orthogonal polynomials , special functions , and their applications _ ,volume 578 of _ contemp ._ , pages 195239 .soc . , providence , ri , 2012 .e. b. saff , j. l. ullman , and r. s. varga .incomplete polynomials : an electrostatics approach . in _ approximation theory , iii ( proc . conf ., univ . texas , austin , tex . , 1980 )_ , pages 769782 . academic press , new york , 1980 . h. stahl . on the divergence of certain pad approximant and the behaviour of the associated orthogonal polynomials . in _orthogonal polynomials and applications ( bar - le - duc , 1984 ) _ , volume 1171 of _ lecture notes in math ._ , pages 321330 .springer , berlin , 1985 .h. stahl .asymptotics of hermite - pad polynomials and related convergence results a summary of results . in _ nonlinear numerical methods and rational approximation ( wilrijk , 1987 )_ , volume 43 of _ math ._ , pages 2353 .reidel , dordrecht , 1988 .
the complex or non-hermitian orthogonal polynomials with analytic weights are ubiquitous in several areas such as approximation theory, random matrix models, theoretical physics and numerical analysis, to mention a few. due to the freedom in the choice of the integration contour for such polynomials, the location of their zeros is a priori not clear. nevertheless, numerical experiments, such as those presented in this paper, show that the zeros do not simply cluster somewhere on the plane, but persistently choose to align on certain curves, and in a very regular fashion. the problem of the limit zero distribution for non-hermitian orthogonal polynomials is one of the central aspects of their theory. several important results in this direction have been obtained, especially in the last 30 years, and describing them is one of the goals of the first parts of this paper. however, the general theory is far from complete, and many natural questions remain unanswered or have only a partial explanation. thus, the second motivation of this paper is to discuss some "mysterious" configurations of zeros of polynomials, defined by an orthogonality condition with respect to a sum of exponential functions on the plane, that appeared as a result of our numerical experiments. in this apparently simple situation the zeros of these orthogonal polynomials may exhibit different behaviors: for some of them we state rigorous results, while others are presented as conjectures (apparently within reach of modern techniques). finally, there are cases for which it is not yet clear how to explain our numerical results, and where we cannot go beyond an empirical discussion.
we will work up to the general question by first examining the special ( low ratings ) case when one candidate has at least as many votes as the other throughout the tally .this is the classical `` ballot problem '' , in which candidate e and candidate n are competing for a public office .candidate e wins the election with votes .how many ways are there to report the votes so that at all times during the tally n is not ahead of e ?we may represent the state of the tally at any moment by the pair , where the coordinates and count the votes received by e and n respectively .then a tally consists of a sequence of points on the integer lattice in the plane made in steps of and .such a sequence is called a _ northeast latticepath_. we say that the lattice path is _ restricted _ by the lattice path if no part of lies directly above .for example , figure [ restrictedpaths ] shows two northeast lattice paths from to that are restricted by the `` staircase '' , or , equivalently , that do not go above the line .the ballot problem asks for the number of these paths .( note that if the tally ends at , we may uniquely continue it to a northeast lattice path ending at . )[ restrictedpaths ] to restricted by . ]the ballot problem can be solved by constructing a simple recurrence .let be a northeast lattice path restricted by the staircase .consider the point on where it first revisits the line , and let be the -coordinate of this point .( this point exists since ends at . ) for the upper path in figure [ restrictedpaths ] , ; for the lower path , .notice that since does not go above and begins at its first step is ; further , its last step before reaching the point is .therefore we may delete these steps to obtain a northeast lattice path from to that does not go above the line .there are ways to form such a path , and there are ways to continue this path from to , so we have that .this we recognize as the familiar recurrence satisfied by the catalan numbers ( * ? ? ?* exercise 6.19(h ) ) , so we simply check that the initial condition agrees .we now consider a generalization of the ballot problem .let be the number of northeast lattice paths restricted by an arbitrary northeast lattice path from to .the path represents the network s predetermined restrictions on the tally .it was known by macmahon that the sum of over all such paths is however , we are interested in computing for specific .first we develop notation for lattice paths .it is possible to represent a northeast lattice path as a word on , such as for the upper path in figure [ restrictedpaths ] .however , this representation is redundant , because the location of each step determines the path uniquely .therefore , we may represent a northeast lattice path by the sequence of heights of the path along each interval from to .for example , for the upper path in figure [ restrictedpaths ] we have .this representation is always a nondecreasing tuple of integers , and it is our primary representation of lattice paths in this note .a lattice path is restricted by the lattice path precisely when componentwise , i.e. , whenever . 
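The height-tuple representation just described makes small cases easy to verify by computer. The sketch below (Python; the function name and the sample restricting path are illustrative choices of ours, not taken from the paper) enumerates the nondecreasing tuples a with 0 <= a_i <= b_i and checks that the staircase case reproduces the Catalan numbers.

from itertools import product
from math import comb

def restricted_path_count(b):
    # number of northeast lattice paths restricted by b, counted through the
    # height-tuple representation: nondecreasing integer tuples a with
    # 0 <= a_1 <= ... <= a_n and a_i <= b_i (brute force, small n only)
    n = len(b)
    count = 0
    for a in product(*(range(bi + 1) for bi in b)):
        if all(a[i] <= a[i + 1] for i in range(n - 1)):
            count += 1
    return count

# staircase b = (0, 1, ..., n-1): the classical ballot problem, whose answer
# is the Catalan number C_n = binom(2n, n) / (n + 1)
for n in range(1, 7):
    assert restricted_path_count(list(range(n))) == comb(2 * n, n) // (n + 1)

# an arbitrary restricting path, written as its nondecreasing height tuple
print(restricted_path_count([1, 2, 2, 4]))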
to write the main result, however , it turns out to be more natural to use still another representation of a northeast lattice path its difference sequence let .since is a northeast lattice path , the entries of are nonnegative integers .the entry is the number of steps taken along the line , so we can think of this representation as determining a path by the location of each step .the operator has an inverse , which produces the sequence of partial sums : the relationship between and can be interpreted in another way . if is a tuple of nonnegative integers , the pitman stanley polytope defined by is thus a tuple of nonnegative integers is a lattice point inside precisely when the northeast lattice path is restricted by . in other words, provides a bijection from the northeast lattice paths restricted by to the lattice points in .we now return to the question at hand : how many northeast lattice paths are restricted by the path ?equivalently , how many lattice points lie inside ?one answer to this question is the following determinant enumeration .let be the matrix with entries .then the number of northeast lattice paths restricted by is , as given by kreweras and mohanty ( * ? ? ? * theorem 2.1 ) .this fact can be obtained from the triangular system of equations for ( where ) , which comes from an inclusion exclusion argument ; solve for using cramer s rule , and in the numerator expand by minors along the last column .the following theorem presents a formula for in which the lattice points in play a central role .this gives a non - determinantal formula for the number of northeast lattice paths restricted by .a generalization of the formula has been independently discovered by gessel and by pitman and stanley ( * ? ? ?* equation ( 33 ) ) in more advanced contexts .our proof uses elementary combinatorial methods .let be a northeast lattice path from to , and let .the number of northeast lattice paths restricted by is where the sum is over all lattice points in .we immediately obtain two well - known results as special cases .for we see that , which gives since ( from the generalization of the binomial theorem to ) , the only nonzero terms in the sum come from lattice points of the form , and therefore as expected . for recover the ballot problem .namely , , so equation ( [ thm ] ) allows one to compute not only for explicit integer paths but for symbolic paths , and the resulting expressions have the pleasant property that they are written in the basis of rising factorials .for example , . for a general path of length , we have and is putting equation ( [ thm ] ) together with the determinantal formula for , we obtain a formula for a certain symbolic determinant in the same basis : where again .we note that amdeberhan and stanley ( * ? ? ?* corollary 4.7 ) show that also gives the number of monomials in the expanded form of the multivariate polynomial in the variables .moreover , is the number of noncrossing matchings of a certain type ( * ? ? ?* corollary 4.9 ) .let be the number of lattice points in , where .that is , . the following recurrence will be used .we have the only lattice point in is ; hence . for , we partition the lattice points in according to the first entry .since is a lattice point in , then , so .therefore , lattice points in are in bijection ( by deleting the first entry ) with lattice points in .thus is the number of lattice points in with first entry , giving the recurrence . 
to prove the theorem , then , it suffices to show that equation ( [ thm ] ) satisfies this recurrence .the base case is easily checked , since the product is empty ; we have since again has only one lattice point .the remainder of this note is devoted to showing that for where the left sum is over all lattice points in and the right sum is over all lattice points in .we proceed by simplifying this equation until it becomes a statement about sums of binomial coefficients , given in the lemma below .first interchange the two summations on the right side of equation ( [ rec ] ) .next , fix on the right side , and break up the sum on the left according to the choice of in the following way .the _ children _ of are the elements of the set for example , the children of the lattice point are , , , and .it is immediate that each lattice point has a unique parent .this definition is central to the proof .the reason for defining children in this way is that is a lattice point in if and only if s parent is a lattice point in .this property provides a many - to - one correspondence between the -dimensional lattice points in and the ( )-dimensional lattice points in . using this correspondence to break up equation ( [ rec ] ) ,we obtain for each , where the left sum is over all children of .it now suffices to prove equation ( [ eq2 ] ) for a fixed , since summing both sides of equation ( [ eq2 ] ) over all lattice points in produces equation ( [ rec ] ) .note that if is a child of then for , so we may divide both sides of equation ( [ eq2 ] ) by the product to obtain we know what the children of look like , so the sum on the left side can be written as the first term in this expression , which corresponds to the child of , is equal to the term on the right side of equation ( [ eq3 ] ) . removing this term from both sides leaves which is proved in the following lemma under the substitution , , and .let , , and be nonnegative integers .then we show that both sides of the equation are equal to the right side is a telescoping sum : the result for the left side follows from a generalization of the vandermonde identity , namely ( * ? ? ?* problem 1.42(i ) ) .the summand on the left side of this equation counts the -element subsets of whose element is by choosing of the first elements to be _ not _ in the set and of the last elements to be _ not _ in the set .the right side counts all -element subsets of by selecting the elements _ not _ in the set .subtract from both sides of this equation and substitute , , , and to obtain thus the director of programming may , for example , determine the likelihood that a random tally of votes will satisfy the network s needs .we thank dimitrije kostic for generating our interest in this topic , tewodros amdeberhan for pointing us to relevant literature , doron zeilberger for encouraging us to simplify , and an anonymous referee for a number of helpful suggestions .
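As a quick numerical check of the correspondence used above, the following sketch (Python; helper names are ours) verifies on a small example that the lattice points of the Pitman-Stanley polytope of c are equinumerous with the northeast lattice paths restricted by the partial-sum path of c, which is exactly what the bijection a -> Delta(a) asserts.

from itertools import product

def pitman_stanley_points(c):
    # lattice points t >= 0 whose partial sums are bounded by those of c
    b = [sum(c[:i + 1]) for i in range(len(c))]
    total = 0
    for t in product(*(range(bi + 1) for bi in b)):
        partial, ok = 0, True
        for ti, bi in zip(t, b):
            partial += ti
            if partial > bi:
                ok = False
                break
        total += ok
    return total

def restricted_paths(b):
    # nondecreasing tuples a with 0 <= a_i <= b_i (same brute force as before)
    return sum(all(x <= y for x, y in zip(a, a[1:]))
               for a in product(*(range(bi + 1) for bi in b)))

c = (1, 0, 2, 1)
b = [sum(c[:i + 1]) for i in range(len(c))]
assert pitman_stanley_points(c) == restricted_paths(b)
print(pitman_stanley_points(c))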
we provide an elementary proof of a formula for the number of northeast lattice paths that lie in a certain region of the plane . equivalently , this formula counts the lattice points inside the pitman stanley polytope of an -tuple . suppose that on election day a tv news network of questionable morality wants to increase their viewership as polling results come in . while the reporters can not control the outcome of the election , they can control the order in which votes are reported to the public . if one candidate is ahead in the tally throughout the entire day , viewership will wane since it is clear that she will win the election . on the other hand , a more riveting broadcast occurs when one candidate is ahead at certain times and the other candidate is ahead at others . in fact , the network employs a group of psychologists and market analysts who have worked out certain margins they would like to achieve at certain points in the tally . the director of programming needs to know the number of ways this can be done .
by exploiting signal sparsity and smart reconstruction schemes , compressive sensing ( cs ) can enable signal acquisition with fewer measurements than traditional sampling . in cs ,an -dimensional signal is measured through random linear measurements .although the signal may be undersampled ( ) , it may be possible to recover assuming some sparsity structure .so far , most of the cs literature has considered signal recovery directly from linear measurements . however , in many practical applications , measurements have to be discretized to a finite number of bits .the effect of such quantized measurements on the performance of the cs reconstruction has been studied in . in authors adapt cs reconstruction algorithms to mitigate quantization effects . in ,high - resolution functional scalar quantization theory was used to design quantizers for lasso reconstruction .the contribution of this paper to the quantized cs problem is twofold : first , for quantized measurements , we propose reconstruction algorithms based on gaussian approximations of belief propagation ( bp ) .bp is a graphical model - based estimation algorithm widely used in machine learning and channel coding that has also received significant recent attention in compressed sensing .although exact implementation of bp for dense measurement matrices is generally computationally difficult , gaussian approximations of bp have been effective in a range of applications .we consider a recently developed gaussian - approximated bp algorithm , called _ relaxed belief propagation _ , that extends earlier methods to nonlinear output channels .we show that the relaxed bp method is computationally simple and , with quantized measurements , provides significantly improved performance over traditional cs reconstruction based on convex relaxations .our second contribution concerns the quantizer design . with linear reconstruction and mean - squared error distortion, the optimal quantizer simply minimizes the mean squared error ( mse ) of the transform outputs .thus , the quantizer can be optimized independently of the reconstruction method .however , when the quantizer outputs are used as an input to a nonlinear estimation algorithm , minimizing the mse between quantizer input and output is not necessarily equivalent to minimizing the mse of the final reconstruction . to optimize the quantizer for the relaxed bp algorithm , we use the fact that the mse under large random transforms can be predicted accurately from a set of simple state evolution ( se ) equations , by modeling the quantizer as a part of the measurement channel , we use the se formalism to optimize the quantizer to asymptotically minimize distortions after the reconstruction by relaxed bp .in a noiseless cs setting the signal is acquired via linear measurements of the type where is the _ measurement matrix_. the objective is to recover from . although the system of equations formed is underdetermined , the signal is still recoverable if some favorable assumptions about the structure of and are made .generally , in cs the common assumption is that the signal is exactly or approximately sparse in some orthonormal basis , i.e. 
, there is a vector with most of its elements equal or close to zero .additionally , for certain guarantees on the recoverability of the signal to hold , the matrix must satisfy the _ restricted isometry property _ ( rip ) .some families of random matrices , like appropriately - dimensioned matrices with i.i.d .gaussian elements , have been demonstrated to satisfy the rip with high probability . a common method for recovering the signal from the measurements is basis pursuit .this involves solving the following optimization problem : where is the -norm of the signal .although it is possible to solve basis pursuit in polynomial time by casting it as a linear program ( lp ) , its complexity has motivated researchers to look for even cheaper alternatives like numerous recently - proposed iterative methods . moreover , in real applications cs reconstruction scheme must be able to mitigate imperfect measurements , due to noise or limited precision . a quantizer is a function that discretizes its input by performing a mapping from a continuous set to some discrete set . more specifically , consider -point regular scalar quantizer , defined by its output levels , decision boundaries , and a mapping when .additionally define the inverse image of the output level under as a _ cell _for , if we replace the closed interval by an open interval .typically quantizers are optimized by selecting decision boundaries and output levels in order to minimize the distortion between the random vector and its quantized representation .for example , for a given vector and the mse distortion metric , optimization is performed by solving where minimization is done over all -level regular scalar quantizers .one standard way of optimizing is via the _lloyd algorithm _ , which iteratively updates the decision boundaries and output levels by applying necessary conditions for quantizer optimality .however , for the cs framework finding the quantizer that minimizes mse between and is not necessarily equivalent to minimizing mse between the sparse vector and its cs reconstruction from quantized measurements .this is due to the nonlinear effect added by any particular cs reconstruction function .hence , instead of solving ( [ equ : quantization : optimal ] ) , it is more interesting to solve where minimization is performed over all -level regular scalar quantizers and is obtained through a cs reconstruction method like relaxed bp or amp .this is the approach taken in this work .consider the problem of estimating a random vector from noisy measurements , where the noise is described by a measurement channel , which acts identically on each measurement of the vector obtained via ( [ equ : cs : measurement ] ) .moreover suppose that elements in the vector are distributed i.i.d . according to .then we can construct the following conditional probability distribution over random vector given the measurements : where is the normalization constant and . 
by marginalizing this distributionit is possible to estimate each .although direct marginalization of is computationally intractable in practice , we approximate marginals through bp .bp is an iterative algorithm commonly used for decoding of ldpc codes .we apply bp by constructing a bipartite factor graph from ( [ equ : rbp : marginalization ] ) and passing the following messages along the edges of the graph : where means that the distribution is to be normalized so that it has unit integral and integration is over all the elements of except .we refer to messages as variable updates and to messages as measurement updates .we initialize bp by setting .earlier works on bp reconstruction have shown that it is asymptotically mse optimal under certain verifiable conditions .these conditions involve simple single - dimensional recursive equations called _ state evolution ( se ) _ , which predicts that bp is optimal when the corresponding se admits a unique fixed point .nonetheless , direct implementation of bp is still impractical due to the dense structure of , which implies that the algorithm must compute the marginal of a high - dimensional distribution at each measurement node .however , as mentioned in section [ sec : intro ] , bp can be simplified through various gaussian approximations , including the _ relaxed bp _ method and _ approximate message passing ( amp )_ .recent theoretical work and extensive numerical experiments have demonstrated that , in the case of certain large random measurement matrices , the error performance of both relaxed bp and amp can also be accurately predicted by se . hence the optimal quantizers can be obtained in parallel for both of the methods , however in this paper we concentrate on design for relaxed bp , while keeping in mind that identical work can be done for amp as well .due to space limitations , in this paper we will limit our presentation of relaxed bp and se equations to the setting in figure [ fig : probmod ] .see for more general and detailed analysis .the vector denotes noiseless random measurements.,width=307 ] consider the cs setting in figure [ fig : probmod ] , where without loss of generality we assumed that .the vector is measured through the random matrix to result in , which is further perturbed by some additive white gaussian noise ( awgn ) .the resulting vector can be written as where are i.i.d .random variables distributed as .these noisy measurements are then quantized by the -level scalar quantizer to give the cs measurements .the relaxed bp algorithm is used to estimate the signal from the corrupted measurements , given the matrix , noise variance , and the quantizer mapping . 
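For concreteness, a minimal sketch of this measurement chain (Python/NumPy; the sparsity level, noise variance, matrix scaling and quantizer below are illustrative choices of ours, not the values used in the paper's experiments):

import numpy as np

rng = np.random.default_rng(0)

n, m = 200, 100            # signal length and number of measurements
rho, sigma2 = 0.1, 1e-2    # sparsity ratio and AWGN variance

# sparse signal with i.i.d. Bernoulli-Gaussian entries
x = rng.standard_normal(n) * (rng.random(n) < rho)

# i.i.d. Gaussian measurement matrix (scaling chosen so measurements are O(1))
A = rng.standard_normal((m, n)) / np.sqrt(n)

# noiseless measurements, perturbed by AWGN before quantization
z = A @ x + np.sqrt(sigma2) * rng.standard_normal(m)

def uniform_quantizer(z, num_levels=8, zmax=1.0):
    # bounded uniform scalar quantizer: clip to [-zmax, zmax] and map each
    # sample to the midpoint of its cell
    delta = 2 * zmax / num_levels
    idx = np.clip(np.floor((z + zmax) / delta), 0, num_levels - 1)
    return -zmax + (idx + 0.5) * delta

y = uniform_quantizer(z)   # the quantized values are the CS measurements
print(y[:5])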
note that under this model each quantized measurement indicates that , hence our measurement channel can be characterized as for and where is gaussian function relaxed bp can be implemented by replacing probability densities in ( [ equ : rbp : varupdatebp ] ) and ( [ equ : rbp : mesupdatebp ] ) by two scalar parameters each , which can be computed according to the following rules : where is the variance of the components .additionally , at each iteration we estimate the signal via for each .we refer to messages as variable updates and to messages as measurement updates .the algorithm is initialized by setting and where and are the mean and variance of the prior .the nonlinear functions and are the conditional mean and variance where , , and .note that these functions admit closed - form expressions and can easily be evaluated for the given values of and .similarly , the functions and can be computed via where the functions and are the conditional mean and variance of the random variable .these functions admit closed - form expressions in terms of .the equations ( [ equ : rbp : varupdaterbpmean])([equ : rbp : varestimationrbp ] ) are easy to implement , however they provide us no insight into the performance of the algorithm .the goal of se equations is to describe the asymptotic behavior of relaxed bp under large measurement matrices .the se for our setting in figure [ fig : probmod ] is given by the recursion where is the iteration number , is a fixed number denoting the measurement ratio , and is the variance of the awgn components which is also fixed .we initialize the recursion by setting , where is the variance of according to the prior .we define the function as where the expectation is taken over the scalar random variable , with , and .similarly , the function is defined as where is given by ( [ equ : rbp : d2 ] ) and the expectation is taken over and , with the covariance matrix one of the main results of , which we present below for completeness , was to demonstrate the convergence of the error performance of the relaxed bp algorithm to the se equations under large sparse measurement matrices .denote by the number of nonzero elements per column of . in the large sparse limit analysis , first let with and keeping fixed .this enables the local - tree properties of the factor graph .then let , which will enable the use of a central limit theorem approximation .[ thm : rbptoseconvergence ] consider the relaxed bp algorithm under the large sparse limit model above with transform matrix and index satisfying the assumption 1 of for some fixed iteration number .then the error variances satisfy the limit where is the output of the se equation ( [ equ : se : serecursion ] ) .see .another important result regarding se recursion in ( [ equ : se : serecursion ] ) is that it admits at least one fixed point .it has been showed that as the recursion decreases monotonically to its largest fixed point and if the se admits a unique fixed point , then relaxed bp is asymptotically mean - square optimal .although in practice measurement matrices are rarely sparse , simulations show that se predicts well the behavior of relaxed bp .moreover , recently more sophisticated techniques were used to demonstrate the convergence of approximate message passing algorithms to se under large i.i.d .gaussian matrices .we now return to the problem of designing mse - optimal quantizers under relaxed bp presented in ( [ equ : quantization : csoptimal ] ) . 
by modeling the quantizer as part of the channel and working out the resulting equations for relaxed bp and se, we can make use of the convergence results to recast our optimization problem to where minimization is done over all -level regular scalar quantizers . in practice ,about 10 to 20 iterations are sufficient to reach the fixed point of . then by applying theorem [ thm : rbptoseconvergence ] , we know that the asymptotic performance of will be identical to that of .it is important to note that the se recursion behaves well under quantizer optimization .this is due to the fact that se is independent of actual output levels and small changes in the quantizer boundaries result in only minor change in the recursion ( see ( [ equ : rbp : eout ] ) ) .although closed - form expressions for the derivatives of for large s are difficult to obtain , we can approximate them by using finite difference methods .finally , the recursion itself is fast to evaluate , which makes the scheme in ( [ equ : quantization : seoptimal ] ) practically realizable under standard optimization methods like sequential quadratic programming ( sqp ) .we now present experimental validation for our results .assume that the signal is generated with i.i.d .elements from the gauss - bernoulli distribution where is the sparsity ratio that represents the average fraction of nonzero components of . in the following experiments we assume .we form the measurement matrix from i.i.d .gaussian random variables , i.e. , ; and we assume that awgn with variance perturbs measurements before quantization .now , we can formulate the se equation ( [ equ : se : serecursion ] ) and perform optimization ( [ equ : quantization : seoptimal ] ) .we compare two cs - optimized quantizers : _ uniform _ and _ optimal_. we fix boundary points and , and compute the former quantizer through optimization of type ( [ equ : quantization : optimal ] ) . in particular , by applying the central limit theorem we approximate elements of to be gaussian and determine the _ uniform _ quantizer by solving ( [ equ : quantization : optimal ] ) , but with an additional constraint of equally - spaced output levels . to determine _ optimal _ quantizer , we perform ( [ equ : quantization : seoptimal ] ) by using a standard sqp optimization algorithm for nonlinear continuous optimization . bits / component of .optimal quantizer is found by optimizing quantizer boundaries for each and then picking the result with smallest distortion , width=288 ] figure [ fig : boundaries ] presents an example of quantization boundaries .for the given bit rate over the components of the input vector , we can express the rate over the measurements as , where is the measurement ratio . 
to determine the optimal quantizer for the given rate we perform optimization for all and return the quantizer with the least mse .as we can see , in comparison with the uniform quantizer obtained by merely minimizing the distortion between the quantizer input and output , the one obtained via se minimization is very different ; in fact , it looks more concentrated around zero .this is due to the fact that by minimizing se we are in fact searching for quantizers that asymptotically minimize the mse of the relaxed bp reconstruction by taking into consideration the nonlinear effects due to the method .the trend of having more quantizer points near zero is opposite to the trend shown in for quantizers optimized for lasso reconstruction .figure [ fig : expdist ] presents a comparison of reconstruction distortions for our two quantizers and confirms the advantage of using quantizers optimized via ( [ equ : se : serecursion ] ) . to obtain the results we vary the quantization rate from to bits per component of , and for each quantization rate , we optimize quantizers using the methods discussed above . for comparison, the figure also plots the mse performance for two other reconstruction methods : linear mmse estimation and the widely - used lasso method , both assuming a bounded uniform quantizer .the lasso performance was predicted by state evolution equations in , with the thresholding parameter optimized by the iterative approach in .it can be seen that the proposed relaxed bp algorithm offers dramatically better performance more that db improvement at low rates . at higher rates ,the gap is slightly smaller since relaxed bp performance saturates due to the awgn at the quantizer input .similarly we can see that the mse of the quantizer optimized for the relaxed bp reconstruction is much smaller than the mse of the standard one , with more than 4 db difference for many rates .we present relaxed belief propagation as an efficient algorithm for compressive sensing reconstruction from the quantized measurements .we integrate ideas from recent generalization of the algorithm for arbitrary measurement channels to design a method for determining optimal quantizers under relaxed bp reconstruction .although computationally simpler , experimental results show that under quantized measurements relaxed bp offers significantly improved performance over traditional reconstruction schemes . additionally , performance of the algorithm is further improved by using the state evolution framework to optimize the quantizers .e. j. cands , j. romberg , and t. tao , `` robust uncertainty principles : exact signal reconstruction from highly incomplete frequency information , '' _ ieee trans .inform . theory _52 , pp . 489509 , feb .
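For reference, the conventional baseline — a scalar quantizer that only minimizes the MSE between quantizer input and output, designed with the Lloyd algorithm mentioned earlier — can be sketched as follows (Python/NumPy; a sample-based variant with names of our choosing, not the SE-optimized design of this paper):

import numpy as np

def lloyd_quantizer(samples, num_levels, iters=100):
    # alternate the two necessary conditions for quantizer optimality:
    # boundaries at midpoints of adjacent output levels, and output levels at
    # the centroid (mean) of the samples falling in each cell
    levels = np.quantile(samples, (np.arange(num_levels) + 0.5) / num_levels)
    for _ in range(iters):
        boundaries = 0.5 * (levels[1:] + levels[:-1])
        cells = np.digitize(samples, boundaries)
        for k in range(num_levels):
            members = samples[cells == k]
            if members.size:
                levels[k] = members.mean()
    return boundaries, levels

rng = np.random.default_rng(1)
z = rng.standard_normal(100_000)           # stand-in for the quantizer input
b, q = lloyd_quantizer(z, num_levels=8)
print(q)
print(np.mean((z - q[np.digitize(z, b)]) ** 2))   # input-output MSE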
we consider the optimal quantization of compressive sensing measurements following the work on generalization of relaxed belief propagation ( bp ) for arbitrary measurement channels . relaxed bp is an iterative reconstruction scheme inspired by message passing algorithms on bipartite graphs . its asymptotic error performance can be accurately predicted and tracked through the state evolution formalism . we utilize these results to design mean - square optimal scalar quantizers for relaxed bp signal reconstruction and empirically demonstrate the superior error performance of the resulting quantizers .
many physical systems , such as power systems and economic systems , often suffer from random changes in their parameters .these parameter changes may result from abrupt environmental disturbances , component failures or repairs , etc .in many cases , a markov chain provides a suitable model to describe the system parameter changes .a markovian jump system is a hybrid system with different operation modes .each operation mode corresponds to a deterministic system and the jumping transition from one mode to another is governed by a markov chain .recently , markovian jump systems have received a lot of attention and many control issues have been studied , such as stability and stabilization , time delay , filtering , control , control , model reduction . for more information on markovian jump systems ,we refer the reader to . in this paper , we consider the decentralized stabilization problem for a class of uncertain markovian jump large - scale systems .the aim is to design a set of appropriate local feedback control laws , such that the resulting closed - loop large - scale system is stable even in the presence of uncertainties .recently , the decentralized stabilization problem for markovian jump large - scale systems has been investigated in the literature ; see e.g. , and the references therein .it is important to point out that the stabilizing techniques developed in and many other papers are built upon an implicit assumption that the mode information of the large - scale system must be known to all of the local controllers . in other words ,the mode information of all the subsystems must be measured and then broadcast to every local controller .such an assumption , however , may be unrealistic either because the broadcast of mode information among the subsystems is impossible in practice or because the implementation is expensive . to eliminate the need for broadcasting mode information , a local mode dependent control approach has been developed in . this control approach is fully decentralized .the local controllers use only local subsystem states or outputs and local subsystem mode information to generate local control inputs . to emphasize this feature ,this type of controller is referred to as a _ local mode dependent controller _ in .as pointed out in , the local mode dependent control approach offers many advantages in practice .first , it eliminates the need for broadcasting mode information among the subsystems and hence is more suitable for practical applications .second , it significantly reduces the number of control gains and hence results in cost reduction , easier installation and maintenance . in this paper , we focus on the state feedback case of markovian jump large - scale systems and aim to build a bridge between the results in and .we assume that each local controller is able to access and utilize mode information of its neighboring subsystems including the subsystem it controls .this assumption is motivated by the fact that some subsystems may be close to each other in practice and hence exchange of mode information may be possible among these subsystems . 
under this assumption, we develop an approach , which we call a _ neighboring mode dependent control approach _ , to stabilize markovian jump large - scale systems .compared to the local mode dependent control approach , our approach can stabilize a wider range of large - scale systems in practice .it is demonstrated in the numerical section that the system performance will improve as more detailed mode information is available to the local controllers .hence the system performance achieved by our approach is better than that achieved by the local mode dependent control approach .furthermore , both the global and the local mode dependent control approaches proposed in and can be regarded as special cases of the neighboring mode dependent control approach . _notation : _ denotes the set of positive real numbers ; denotes the set of positive definite matrices ; denotes the set of real vectors ; denotes the set of real matrices . ] naturally describes the mode switching of the entire large - scale system .we assume that ^{t} ] will take a different value .hence .in addition , we have ( not necessarily , because the mode processes , , may depend on each other ) .let , then a bijective mapping exists , because and have the same number of elements .let ^{t}) ] is transformed into the random scalar process , which carries the same mode information of the large - scale system . for this reason , referred to as the global mode process in the sequel .the inverse function is given by ^{t} ] , where if , and .the initial distribution of the process is ^{t} ] represents an admissible local uncertainty input for the large - scale system if , given any locally square integrable signals ^{t} ] , there exists a time sequence , , such that for all and for all , dt\right ) \ge -x_{i0}^{t}\bar{s}_{i}x_{i0}. \label{un : local } \end{aligned}\ ] ] the set of all such admissible local uncertainty inputs is denoted by . [ defn : un : ic ] given a set of positive definite matrices , .a locally square integrable signal ^{t} ] , ^{t} ] , ^{t} ] .the iqcs are used to describe relations between the input and output signals in the uncertainty blocks in fig .[ system ] .the constant terms on the right - hand sides of the inequalities and allow for nonzero initial conditions in the uncertainty dynamics .these definitions can capture a broad class of uncertainties such as nonlinear , time - varying , dynamic uncertainties ; see ( * ? ? ?* chapter 2.3 ) for details .let \in \mathbb{r}^{n\times n} ] .a zero entry in this vector means that the mode information of the corresponding subsystem is not available .we assume that the random vector process ^{t} ] , , , .let ^{t}) ] .hence is referred to as a neighboring mode process in the sequel .both the global and the local mode dependent control problems studied in can be regarded as special cases of the neighboring mode dependent control problem with ( a matrix with all the elements being ones ) and , respectively . 
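To make the role of the mode information pattern concrete, here is a small sketch (Python/NumPy; the three-subsystem pattern is invented for illustration, and identifying the purely local case with the identity pattern is our reading of the text, since the corresponding matrix is elided above):

import numpy as np

def observed_modes(G, r):
    # mode information available to each local controller: row i of the binary
    # pattern matrix G selects which subsystem modes controller i can access;
    # unavailable entries are reported as 0, as described above
    return np.asarray(G) * np.asarray(r)

r = np.array([1, 2, 2])                  # current modes of three subsystems
G_global = np.ones((3, 3), dtype=int)    # every controller sees all modes
G_local = np.eye(3, dtype=int)           # each controller sees only its own mode
G_neigh = np.array([[1, 1, 0],           # an intermediate neighboring pattern
                    [1, 1, 1],
                    [0, 1, 1]])

for name, G in [("global", G_global), ("neighboring", G_neigh), ("local", G_local)]:
    print(name, observed_modes(G, r))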
for the large - scale system withthe uncertainty constraints , , our objective is to design a neighboring mode dependent decentralized control law such that the resulting closed - loop large - scale system is robustly stochastically stable in the following sense .the closed - loop large - scale system corresponding to the uncertain large - scale system , , and the controller is said to be robustly stochastically stable if there exists a finite constant such that for any ^{t} ] , ^{t}\in\pi ] .now we assume that the mode processes , , are subject to the constraints below : then contains only four elements , i.e. , ^{t } , [ 1 , 1 , 2]^{t } , [ 1 , 2 , 2]^{t } , [ 2 , 1 , 2]^{t}\} ] . thus . the mapping can be defined as follows : ^{t } { \rightextendsymbol{-}{-}{\longrightarrow}{\rule{0.5cm}{0cm}}{\varphi_{1 } } } 1,\quad [ 1 , 2 , 0]^{t } { \rightextendsymbol{-}{-}{\longrightarrow}{\rule{0.5cm}{0cm}}{\varphi_{1 } } } 2 , \quad [ 2 , 1 , 0]^{t } { \rightextendsymbol{-}{-}{\longrightarrow}{\rule{0.5cm}{0cm}}{\varphi_{1 } } } 3 .\end{matrix } \end{aligned}\ ] ] in this case , by , the many - to - one mapping is given by : this section , we first turn to a new uncertain markovian jump large - scale system which is similar to the large - scale system .global mode dependent stabilizing controllers are designed for this new large - scale system using the results of .then we will show how to derive neighboring mode dependent stabilizing controllers for the large - scale system from these obtained global mode dependent controllers .finally , all of the conditions for the existence of such neighboring mode dependent controllers are combined as a feasible lmi problem with rank constraints .consider a new large - scale system comprising subsystems , .the subsystem is as follows : \\ & \qquad + \tilde{e}_{i}(\eta(t))\tilde{\xi}_{i}(t ) + \tilde{l}_{i}(\eta(t))\tilde{r}_{i}(t ) , \\\tilde{\zeta}_{i}(t ) & = \tilde{h}_{i}(\eta(t))\tilde{x}_{i}(t ) , \end{aligned}\right.\end{aligned}\ ] ] where , , , , for all and , .the initial state , .the uncertainties , , , , satisfy the following constraints , respectively .[ new : defn : un : local ] a locally square integrable signal ^{t} ] , ^{t} ] , there exists a time sequence , , such that for all and for all , dt\right ) \ge -\tilde{x}_{i0}^{t}\bar{s}_{i}\tilde{x}_{i0}. \label{new : un : local } \end{aligned}\ ] ] the set of all such admissible local uncertainty inputs is denoted by .[ new : defn : un : ic ] a locally square integrable signal ^{t} ] , ^{t} ] , there exists a time sequence , , such that for all and for all , dt\right)\ge -\tilde{x}_{i0}^{t}\tilde{s}_{i}\tilde{x}_{i0}. \label{new : un : ic } \end{aligned}\ ] ] the set of all such admissible interconnection inputs is denoted by .[ dfn:1 ] suppose , , .a locally square integrable signal ^{t} ] , ^{t} ] and for all , the set of all such admissible input uncertainties is denoted by .we assume that the same sequences are chosen in definitions [ new : defn : un : local ] , [ new : defn : un : ic ] whenever they correspond to the same signals ^{t} ] , ^{t} ] .furthermore , one can verify that the system has the same system matrices as the system , i.e. , , , , , at any time .using this fact , we will show that , . 
for convenience ,let denote the set of all locally square integrable signals of dimension , and let denote the set of all locally square integrable signals of dimension .given ^{t}\in\tilde\xi ] , ^{t}\in\mathcal{l}^{m}(t) ] .this implies that the inequality holds for any ^{t}\in\mathcal{l}^{m}(t) ] and ^{t}\equiv 0 ] , i.e. , . to show , suppose ^{t}\in\xi ] . by definition [ new : defn : un : local ] , we need to prove that the inequality holds when we apply this signal ^{t} ] , ^{t}\in\mathcal{l}^{m}(t) ] to the large - scale system .note that the two inputs ^{t} ] in the large - scale system can be considered as an equivalent input ^{t} ] , ^{t}\in\mathcal{l}^{m}(t) ] .thus , it suffices to prove that the inequality holds when we apply ^{t} ] , ^{t}\in\mathcal{l}^{s}(t) ] .hence ^{t}\in\tilde\xi ] . after obtainingthe global mode dependent stabilizing controllers for the large - scale system , the next step is to derive neighboring mode dependent stabilizing controllers for the large - scale system .the following result is an extension of theorem 1 in to the neighboring mode dependent control case .the proof is similar to that of theorem 1 in and hence is omitted .[ thm2 ] given the global mode dependent controllers which stabilize the large - scale system with the uncertainty constraints , , .if the gains in the controllers are chosen to satisfy for all , , , then the neighboring mode dependent controllers stabilize the large - scale system with the uncertainty constraints , . in the following remark, we use an example to illustrate the fact that theorem [ thm2 ] is less conservative than theorem 1 in . consider the markovian jump large - scale system in example [ exmp:1 ] .three control gains need to be scheduled for the first local controller if using the neighboring mode dependent control approach , while two control gains are needed if using the local mode dependent control approach .we denote the three neighboring mode dependent control gains as and the two local mode dependent control gains as . for comparison ,given the global mode dependent control gains and the scalars , the constraints imposed on , are specified as follows based on theorem 1 in and our theorem [ thm2 ] , respectively : these inequalities are illustrated in fig .[ fig1 ] where each circle denotes a euclidean ball . is the center and the radius of the ball for . as shown in fig .[ fig1 ] , the set where takes values is only a subset of the set where takes values .hence the proposed framework provides greater flexibility in choosing control gains .potentially , this will allow one to achieve better system performance than obtained using local mode dependent controllers .we also mention that if the euclidean ball centered at does not intersect the euclidean ball centered at ( or ) , then no local mode dependent controllers exist .however , the existence of the neighboring mode dependent controllers is not affected. therefore our technique potentially produces less conservative results than that in .next , the conditions in theorem [ thm1 ] and theorem [ thm2 ] are combined and recast as a rank constrained lmi problem . 
although rank constrained lmi problems are non - convex in general , numerical methods such as the lmirank toolbox often yield good results in solving these problems .[ thm3 ] suppose there exist matrices , , and scalars , , , , , , , such that the following inequalities hold : where , \\\mathcal{g}_{i13}(\mu ) & = y_{i}(\mu)\left[i\;i\;\tilde{h}_{i}^{t}(\mu)\;\cdots\;\tilde{h}_{i}^{t}(\mu)\right ] , \\\mathcal{g}_{i22}(\mu ) & = -\operatorname{diag}[y_{i}(1),\cdots , y_{i}(\mu-1),\\ & \quad y_{i}(\mu+1),\cdots , y_{i}(m)],\\ \mathcal{g}_{i33}(\mu ) & = -\operatorname{diag}[\tilde{r}_{i}^{-1}(\mu),\bar{\beta}_{i}(\mu)i,\tilde{\tau}_{i}i,\tilde{\theta}_{1}i,\cdots,\tilde{\theta}_{i-1}i,\\ & \quad \tilde{\theta}_{i+1}i,\cdots,\tilde{\theta}_{n}i],\\ \upsilon_{i}(\mu ) & = k_{i}(\phi_{i}(\mu))+\tilde{g}_{i}^{-1}(\mu)\tilde{b}_{i}^{t}(\mu)x_{i}(\mu).\end{aligned}\ ] ] then a stabilizing controller is given by : , for , .from , and , we have . similarly , . on the other hand , if is satisfied , by setting , , , , and applying the schur complement equivalence , the inequality is satisfied. then , by theorem [ thm1 ] , the global mode dependent controllers can be designed to stabilize the large - scale system with the uncertainty constraints , , .also , the lmi and the equation imply that for all , , .that is , the inequality holds .then , by theorem [ thm2 ] , the constructed controllers stabilize the large - scale system with the uncertainty constraints , . in , a control gain form has been proposed for the design of local mode dependent controllers .that is , each local mode dependent control gain is chosen to be a weighted average of the related global mode dependent control gains .this particular gain form is then incorporated into the coupled lmis from which the local mode dependent control gains are computed ; see theorem 3 and theorem 4 in for details . unfortunately , choosing such a gain form is not helpful in terms of an improvement in system performance , and sometimes may even result in infeasibility of the corresponding lmis .a demonstration of this fact is given in section [ sec : ie ] .indeed , such a gain form imposes an additional constraint and hence is not used in this paper .consider the markovian jump large - scale system given in .the mode information is ^{t},[1,2,2]^{t},[2,1,2]^{t},[2,2,1]^{t}\right\}} ] , which can be computed from the infinitesimal generator matrix . given a neighboring mode information pattern , our objective is to find the corresponding neighboring mode dependent stabilizing controllers for this large - scale system .an upper bound on the quadratic cost is also evaluated for the resulting closed - loop large - scale system .the main software we use is the lmirank toolbox .the procedure is summarized as follows : 1 .solve the optimization problem {i0}<\gamma,\\ & \text{and~\eqref{eq : lmi},~\eqref{eq : theorem2lmi},~\eqref{beta_inverse},~\eqref{x_inverse}}. \end{aligned}\ ] ] if an optimal value is found , feasible neighboring mode dependent control gains are obtained .2 . apply the obtained controllers to the large - scale system and compute the cost upper bound for the resulting closed - loop large - scale system .the method for computing this upper bound is taken from .it involves solving a worst - case performance analysis problem .five cases are considered , i.e. , it can be seen that each neighboring mode information pattern contains more mode information than the preceding one . 
corresponds to the local mode dependent control case , while corresponds to the global mode dependent control case . by using the preceding procedure , neighboring mode dependentstabilizing controllers are found for each of these cases .furthermore , if we apply the obtained controllers to the large - scale system , the cost upper bounds for the resulting closed - loop large - scale systems are shown in fig .[ fig2 ] .note that the cost upper bound found here in the local mode dependent control case is different from ( in fact , less than ) that in .this is because the gain form proposed by theorem 3 in is not used in our computation .one may also notice that the cost upper bound found in the case of is the same as the one in the case of .we now explain why this happens . in the case of ,each local controller obtains two subsystem modes directly .in fact , the third subsystem mode can be derived from these two modes based on possible mode combinations in .hence and are equivalent , in the sense that they yield the same performance .this example demonstrates that the system achieves better ( or at least equal ) performance if more information about the subsystem modes is available to the local controllers .it also shows that sometimes complete information about the global mode of the large - scale system may be redundant .this paper has presented a decentralized control scheme for uncertain markovian jump large - scale systems .the proposed controllers use local subsystem states and neighboring mode information to generate local control inputs . a computational algorithm involving rankconstrained lmis has been developed for the design of such controllers .the developed theory is illustrated by a numerical example .j. xiong , v. ugrinovskii , and i. r. petersen .decentralized output feedback guaranteed cost control of uncertain markovian jump large - scale systems : local mode dependent control approach . in j.mohammadpour and k. m. grigoriadis , editors , _ efficient modeling and control of large - scale systems _ , pages 167196 .springer , new york , 2010 .
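Since the jumping of the parameters is governed by continuous-time Markov chains, the mode process itself is easy to reproduce numerically. The sketch below (Python/NumPy; the two-mode generator is an illustrative stand-in, not the paper's data) samples a mode trajectory from an infinitesimal generator and computes the distribution pi solving pi Q = 0, of the kind used above as the initial distribution.

import numpy as np

def simulate_modes(Q, T, rng):
    # sample a continuous-time Markov chain with infinitesimal generator Q on
    # [0, T]: exponential holding time with rate -Q[i, i], then a jump chosen
    # with probabilities proportional to the off-diagonal entries of row i
    # (states indexed from 0 here)
    Q = np.asarray(Q, dtype=float)
    state, t, path = 0, 0.0, [(0.0, 0)]
    while True:
        t += rng.exponential(1.0 / (-Q[state, state]))
        if t >= T:
            return path
        p = np.clip(Q[state], 0.0, None)
        p[state] = 0.0
        state = int(rng.choice(len(Q), p=p / p.sum()))
        path.append((t, state))

def stationary_distribution(Q):
    # solve pi Q = 0 with the entries of pi summing to one
    Q = np.asarray(Q, dtype=float)
    n = len(Q)
    A = np.vstack([Q.T, np.ones(n)])
    rhs = np.zeros(n + 1)
    rhs[-1] = 1.0
    return np.linalg.lstsq(A, rhs, rcond=None)[0]

Q = np.array([[-0.5, 0.5],
              [0.8, -0.8]])
rng = np.random.default_rng(2)
print(stationary_distribution(Q))
print(simulate_modes(Q, T=5.0, rng=rng))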
this paper is concerned with the decentralized stabilization problem for a class of uncertain large-scale systems with markovian jump parameters. the controllers use local subsystem states and neighboring mode information to generate local control inputs. a sufficient condition involving rank-constrained linear matrix inequalities is proposed for the design of such controllers. a numerical example is given to illustrate the developed theory. keywords: large-scale systems; linear matrix inequalities; markovian jump systems; stabilization.
the prevailing approach to quantum computation evolved from classical reversible algorithmic computation ( bennett 1979 , fredkin and toffoli 1982 ) , where a stored program drives a sequence of elementary logically reversible transformations .in reversible - algorithmic computation a time - varying hamiltonian drives a sequence of unitary transformations ( benioff 1982 , feynman 1985 ) .it was then found ( first by deutsch 1985 ) that entanglement , interference and measurement yield in principle dramatic speed - ups over the corresponding classical algorithms in solving some problems . in spite of this important result ,this form of computation faces two possibly basic difficulties .its speed - ups rely on quantum interference , which requires computation reversibility .decoherence may then limit computation size below practical interest .only two speed - ups of practical interest have been found so far ( factoring and database search ) , and none since 1996 .reversible - algorithmic computation is not the most general form of quantum computation .its limitations justify reconsidering quantum ground - state computation ( castagnoli 1998 , farhi et al .2001 , kadowaki 2002 , among others ) , a formerly neglected approach still believed to be mathematically intractable .quantum ground - state computation evolved from classical ground - state computation ( kirkpatrick & selman 1994 , among others ) , a well - developed approach competitive with algorithmic computation for solving boolean networks .a boolean network is a set of nodes ( boolean variables ) variously connected by gates and wires that impose relations on the variables they connect ( fig .1 ) . a boolean assignment satisfying all gates and wires is a network solution .roughly speaking , all np problems can readily be converted to the problem of solving a boolean network . in quantum ground - state computation ,one sets up a quantum network whose energy is minimum when all gates and wires are satisfied . in quantum annealing , one form of ground - state computation ,coupling the network with a heat - bath of suitably decreasing temperature relaxes the network to its ground state , a mixture of solutions ( we assume with no significant restriction that there is at least one ) . measuring the node variables ( hermitian operators with eigenvalues and ) yields a solution .it is believed that quantum annealing yields a ( still ill - defined ) speed - up over its classical counterpart .quantum tunneling reduces the risk that the network , in its way toward the absolute energy minimum , remains trapped in local minima ( e.g. kadowaki 2002 ). however , long simulation times seriously limit research on this approach . 
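As a point of comparison for what follows, classical ground-state computation of a Boolean network can be sketched in a few lines: define the energy of an assignment as the number of violated gates and wires and anneal it down with Metropolis moves (Python; the toy constraints below are ours and unrelated to the network of fig. 1).

import math
import random

def energy(x, constraints):
    # number of violated constraints; each constraint is a predicate on the
    # node values, standing in for a gate or a wire
    return sum(not c(x) for c in constraints)

def anneal(n_nodes, constraints, steps=20000, t_hot=2.0, t_cold=0.01, seed=0):
    # plain simulated annealing: flip one node at a time and accept uphill
    # moves with the Metropolis probability at a decreasing temperature
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_nodes)]
    e = energy(x, constraints)
    for k in range(steps):
        temp = t_hot * (t_cold / t_hot) ** (k / steps)
        i = rng.randrange(n_nodes)
        x[i] ^= 1
        e_new = energy(x, constraints)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / temp):
            e = e_new
        else:
            x[i] ^= 1          # reject the move
        if e == 0:
            return x           # every gate and wire is satisfied
    return None

# toy network: three equality wires chaining four nodes, plus a gate pinning
# node 0 to the value 1
constraints = [lambda a: a[0] == a[1], lambda a: a[1] == a[2],
               lambda a: a[2] == a[3], lambda a: a[0] == 1]
print(anneal(4, constraints))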
herewe develop a hybrid mode of computation .we implement wires by ground - state computation .we implement gates as algebraic identities resulting from quantum symmetries and statistics .we show that relaxation - computation time is comparable with that an easier ( more loosely constrained ) logical network where all the gate constraints implemented by quantum symmetries are removed .the comparison is based on a special projection method .we show that the relaxation of the actual network can be obtained as a special projection of the relaxation of the easier comparison network .this projection method shortcuts mathematical complexity and sheds light on the nature of this form of computation .we conjecture that for this computation mode all hard - to - solve ( np ) networks become easy ( p ) and support this conjecture with plausible estimates .decoherence is not expected to be as serious a problem for this computation model as for algorithmic computation since the network state is intentionally a thermal mixture during most of the computation .this discussion of quantum computation still belongs to the realm of principles , like other literature on quantum ground - state computation , while algorithmic - reversible computation is now almost a technology .nevertheless it is worth starting over with a new approach that might overcome fundamental limitations of algorithmic computation .we use a network normal form composed just of wires and triodes ( fig.1 ) . each triode properly a partial gate connects three nodes labeled , , ( replaced by collective indices in fig.1 ) with the sum-2 relation where s are boolean variables and denotes arithmetical sum .the three solutions are the rows of table i. each wire is an equality relation between two nodes ( table ii ) .the example in fig .1 , with nodes , wires ( lines ) , and triodes ( dashed triangles ) , has just one solution : , .{ccc } \begin{tabular}{|l|l|l| } \hline ] of equal length within each we consider the relaxation of the comparison xor network in , described by . at the end of each we project * * * * * * * * on , then take the limit .this yields the actual network relaxation . within each consider the decomposition * describes networks with satisfied triodes and wires , namely solutions of the actual network ; its probability is * describes networks with satisfied triodes and at least one frustrated wire ; * describes networks with at least one violated triode , wires are either satisfied or frustrated ; we have considered all the possible states of the comparison network . therefore . goes to zero with and is annihilated by each projection .the actual network - bath interaction soon randomly generates a , a mixture of solutions of the actual network , with an extremely small probability . for a given confidence level , does not depend on .for we apply the projection method . becomes the nucleus of condensation of the network solutions . within each and every , we take a constant - average logarithmic rate of decrease of the frustration energy of the comparison network : will show later that there is no error in taking a constant - average rate . the relaxation time constant is by assumption poly( ) .we have .in fact there is no contribution from , which is the ground state of and a possible contribution from would anyhow be second order infinitesimal .let be the -th ( population ) element of the diagonal of .of course . 
is diagonal , thus we have , where is the -th diagonal element of .therefore and go to zero together .thus on average : decrease of implies an equal increase of it is reasonable and conservative to consider the increase of dominant .in fact the relaxation of the comparison network is quicker because triodes can be violated .note that we compare relaxation rates , not directions : the comparison network can head toward , the actual network remains in .it is like comparing the speed of the keel and the wind on a broad reach .speeds are proportional , while keel and wind go to different places . furthermore , does not couple with or .in fact = .therefore neither decreases nor increases on average .since decreases and does not , the ratio decreases .when we project on at the end of , we remain with a smaller and a larger ( probability of solutions of the actual network ) .we can focus on the take - off of the probability of solution from the extremely small value to close to , say .during take - off and within each we have ; grows from to , because of ( 17 ) and the assumption that remains unaltered .the projection at the end of annihilates reducing by about renormalizing then multiplies by about at each . after a time and in the limit , we obtain for the actual network : probability of having solutions of the actual network becomes in a time ()( ) .using a different for each , with average value , yields the same result : in should be replaced by .the extradynamical algebraic relations expressing particle statistics and angular momentum composition can replace the dynamical algebraic relations following from equations of motion as computational gates . in this new form of quantum computation ,the gates of a boolean network are always satisfied as constants of the motion , leaving only equality relations ( wires ) to be implemented dynamically .this form of quantum computation is expected to be robust , since it relies on thermal mixtures , not pure states , and is plausibly conjectured to be fast , turning all np problems in principle into p. as in quantum algorithmic computation , the speed - up is due to the extradynamical character of the computation ( castagnoli & finkelstein 2001 , 2002 ) .this model of computation highlights the conceptual difference between how structures can be assembled in the classical and quantum domain .quantally it is as though one could assemble a jigsaw puzzle simply by piling the pieces up and letting gravity lower them into mutual positions that solve the puzzle , analogously to quantum wire relaxation .classically this way of assembling the pieces would be plagued by local energy minima .one may wonder whether the assembly of biological molecules under hydrophobic pressure draws on similar quantum effects .
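To make the network normal form concrete, the following brute-force sketch (Python; the small topology is invented for illustration and is not the network of fig. 1) lists the assignments that satisfy every sum-2 triode and every equality wire, i.e. the zero-frustration states in which a successful relaxation must end.

from itertools import product

def frustration(x, triodes, wires):
    # number of violated constraints: a triode (a, b, c) demands
    # x_a + x_b + x_c = 2, a wire (a, b) demands x_a = x_b
    bad = sum(x[a] + x[b] + x[c] != 2 for a, b, c in triodes)
    return bad + sum(x[a] != x[b] for a, b in wires)

triodes = [(0, 1, 2), (2, 3, 4)]     # two triodes sharing node 2
wires = [(1, 3), (0, 4)]             # two equality wires
n_nodes = 5

solutions = [x for x in product((0, 1), repeat=n_nodes)
             if frustration(x, triodes, wires) == 0]
print(solutions)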
we develop a computation model for solving boolean networks that implements wires through quantum ground-state computation and implements gates through algebraic identities following from angular momentum composition and particle statistics. the gates are static in the sense that they contribute a zero hamiltonian and hold as constants of the motion; only the wires are dynamic. just as a spin 1/2 makes an ideal 1-bit memory element, a spin 1 makes an ideal 3-bit gate. such gates cost no computation time: relaxing the wires alone solves the network. we compare the computation time with that of an easier boolean network in which all the gate constraints are simply removed. this computation model is robust with respect to decoherence and yields a generalized quantum speed-up for all np problems.
since its introduction in 1993, the density matrix renormalisation group method ( dmrg ) has seen tremendous use in the study of one - dimensional systems. various improvements such as real - space parallelisation, the use of abelian and non - abelian symmetries and multi - grid methods have been proposed .most markedly , the introduction of density matrix perturbation steps allowed the switch from two - site dmrg to single - site dmrg in 2005 , which provided a major speed - up and improved convergence in particular for systems with long - range interactions .nevertheless , despite some progress, ( nearly ) two - dimensional systems , such as long cylinders , are still a hard problem for dmrg .the main reason for this is the different scaling of entanglement due to the area law: in one dimension , entanglement and hence matrix dimensions in dmrg are essentially size - independent for ground states of gapped systems , whereas in two dimensions , entanglement grows linearly and matrix dimensions roughly exponentially with system width . as a result ,the part of the hilbert space considered by dmrg during its ground state search increases dramatically , resulting mainly in three problems : firstly , the dmrg algorithm becomes numerically more challenging as the sizes of matrices involved grow ( we will assume matrix - matrix multiplications to scale as throughout the paper ) .secondly , the increased search space size makes it more likely to get stuck in local minima .thirdly , while sequential updates work well in 1-d chains with short - range interactions , nearest - neighbour sites in the 2-d lattice can be separated much farther in the dmrg chain .therefore , improvements to the core dmrg algorithm are still highly worthwhile . in this paper, we will adopt parts of the amen method developed in the tensor train / numerical linear algebra community to construct a strictly single - site dmrg algorithm that works without accessing the ( full ) reduced density matrix .compared to the existing _ centermatrix wavefunction formalism _( cwf), we achieve a speed - up of during each application of to in the eigensolver during the central optimisation routine , where is the dimension of the physical state space on each site .the layout of this paper is as follows : section [ sec : notation ] will establish the notation .section [ sec : cwf ] will recapitulate the density matrix perturbation method and the cwf .section [ sec : subexp ] will introduce the subspace expansion method and the heuristic expansion term with a simple two - spin example .the trictly ingle - ite dmrg algorithm ( dmrg3s ) will be presented in section [ sec : dmrg3s ] alongside a comparison with the existing cwf . as both the original perturbation method and the heuristic subspace expansion require a _ mixing factor _, section [ sec : alpha ] describes how to adaptively choose for fastest convergence .numerical comparisons and examples will be given in section [ sec : numexps ] .the notation established here closely follows the review article ref .. consider a state of a system of sites .each site has a physical state dimension , e.g. 
, for a system of spins : in practice , the dimension of the physical basis is usually constant , , but we will keep the subscript to refer to one specific basis on site where necessary .it is then possible to decompose the coefficients as a series of rank-3 tensors of size respectively , with .the coefficient can then be written as the matrix product of the corresponding matrices in : the maximal dimension is called the _ mps bond dimension_. in typical one - dimensional calculations , , but for e.g. cylinders , is often necessary .it is in these numerically demanding cases that our improvements are of particular relevance .similarly , a hamiltonian operator can be written as a _ matrix product operator _ ( mpo ) , where each tensor is now of rank 4 , namely : is called the _ mpo bond dimension_. we will usually assume that for most , and . in practice , this holds nearly everywhere except at the ends of the chain , where the grow exponentially from to .the basis of ( ) of dimension ( ) is called the left - hand side ( lhs ) basis , whereas the basis of dimension ( ) is the right - hand side ( rhs ) basis of this tensor . for simplicity , , and can also refer to the specific basis ( and not only its dimension ) when unambiguous . instead of , we will also write ( for a left ( right ) normalised mps tensor : we then define the contractions we can rewrite from as that is , when only considering one specific bond , the left and right mps bases at this bond are built up from the states generated by the mps tensor chains to the left and right of the bond .individual elements of an mps basis are therefore called `` state '' .furthermore , define and with summation over all possible indices .similarly , and . with these contractions , it is possible to write for any h current bond . hence , in the ground state and ignoring numerical errors , the rhs basis of this is identical to that of .truncation from to is then possible without inducing errors .numerically , it seems possible to choose arbitrarily large without hindering convergence or perturbing the state too much in simple ( one - dimensional ) problems .however , if the chosen maximal bond dimension is insufficient to faithfully capture the ground state of the given system , has to be taken to zero eventually to allow convergence .otherwise , will continuously add new states and disturb the result of the eigensolver , which is optimal at this specific value of but not an eigenstate of yet .the cost of a single subspace expansion is for the calculation of , potentially for the addition to and respectively and for the svd of an matrix formed from .if we restrict the svd to singular values , then the resulting matrices will be of dimension , and respectively .the first can be reformed into at cost and the second and third multiplied into at cost .the total cost of this step is dominated by the cost of the svd at , which is still cheaper than the calculation of the perturbation term in , not considering the other costs associated to using the density matrix for truncation . 
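a compact numpy sketch of this expansion step reads as follows; the worked two-spin example below then shows the same manipulation with explicit matrices. the index conventions and the einsum strings are our own assumptions for the sketch, not taken from any particular implementation.

```python
import numpy as np

def subspace_expand(A, B, L, W, alpha):
    """Single-site subspace expansion (illustrative sketch, assumed index conventions).

    A : (ml, d, mr)      MPS tensor on site i
    B : (mr, d2, mr2)    MPS tensor on site i+1
    L : (ml, wl, ml)     left environment (bra bond, MPO bond, ket bond)
    W : (wl, wr, d, d)   MPO tensor on site i (phys-out, phys-in last)
    alpha : mixing factor moderating the expansion term
    """
    ml, d, mr = A.shape
    # expansion term P[a, s', (wr, m)] = sum_{k, w, s} L[a, w, k] A[k, s, m] W[w, v, s', s]
    P = np.einsum('awk,ksm,wvts->atvm', L, A, W).reshape(ml, d, -1)
    A_exp = np.concatenate([A, alpha * P], axis=2)
    # zero-pad B on its left bond so the enlarged bond leaves the physical state untouched
    pad = np.zeros((P.shape[2], B.shape[1], B.shape[2]), dtype=B.dtype)
    B_exp = np.concatenate([B, pad], axis=0)
    return A_exp, B_exp
```

after the expansion, an svd of the reshaped expanded tensor truncated to the largest singular values restores the desired bond dimension, as described in the text.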
in the following , we will demonstrate and illustrate the method of subspace expansion at the simple example of a system of two spins with from to as it would occur during a left - to - right sweep .assume the hamiltonian with mpo - components \\w_2 & = \left [ \frac{1}{\sqrt{2 } } s_- \quad \frac{1}{\sqrt{2 } } s_+ \quad s_z \right]^t \quad.\end{aligned}\ ] ] let the initial state be an mps , described by components \quad & a_1^\downarrow = \left [ \sqrt{1-a^2 } \right ] \\b_2^\uparrow = [ b ] \quad & b_2^\downarrow = \left [ \sqrt{1-b^2 } \right]\end{aligned}\ ] ] where square brackets denote matrices in the mps bond indices . due to the standard normalisation constraints, there are only two free scalar variables here , and .subspace expansion of is straightforward ( keep in mind that for convenience ) : \\p_1^\downarrow & = w_1^{\downarrow\uparrow } a_1^\uparrow + w_1^{\downarrow\downarrow } a_1^\downarrow \\ & = \left [ \begin{matrix } 0 & \frac{a}{\sqrt{2 } } & - \sqrt{1-a^2 } \end{matrix } \right]\end{aligned}\ ] ] resulting in and directly after the expansion : \\a_1^{\prime\downarrow } & = \left [ \begin{matrix } \sqrt{1-a^2 } & 0 & \frac{a}{\sqrt{2 } } & - \sqrt{1-a^2 } \end{matrix } \right ] \\b_2^{\prime\uparrow } & = \left [ \begin{matrix } b \\ 0 \\ 0 \\ 0 \end{matrix } \right ] \quad b_2^{\prime\downarrow } = \left [ \begin{matrix } \sqrt{1-b^2 } \\ 0 \\ 0 \\ 0 \end{matrix } \right ] \quad.\end{aligned}\ ] ] normalising via a singular value decomposition as and multiplying gives : \\a_1^{\prime\prime\downarrow } & = \left [ \begin{matrix } 0 & 1 \end{matrix } \right ] \\sv^\dagger & = \left [ \begin{matrix } a & \frac{\sqrt{1-a^2}}{\sqrt{2 } } & 0 & a \\\sqrt{1-a^2 } & 0 & \frac{a}{\sqrt{2 } } & - \sqrt{1-a^2 } \end{matrix}\right ] \\b_2^{\prime\prime\uparrow } & = \left [ \begin{matrix } a b \\ \sqrt{1-a^2 } b \end{matrix } \right ] \\b_2^{\prime\prime\downarrow } & = \left [ \begin{matrix } a \sqrt{1-b^2 } \\\sqrt{1-a^2 } \sqrt{1-b^2 } \end{matrix}\right]\quad.\end{aligned}\ ] ] as expected , the final state is still entirely unchanged , but there is now a one - to - one correspondence between the four entries of and the coefficients in the computational basis , making the optimisation towards trivial .we can now combine standard single - site dmrg ( e.g.ref ., p. 67 ) with the subspace expansion method as a way to enrich the local state space , leading to a trictly ingle - ite dmrg implementation ( dmrg3s ) that works without referring to the density matrix at any point . with the notation from section [ sec : notation ] ,the steps follow mostly standard single - site dmrg . in an outermost loop ,the algorithm sweeps over the system from left - to - right and right - to - left until convergence is reached .criteria for convergence are e.g. diminishing changes in energy or an overlap close to between the states at the ends of subsequent sweeps . the inner loop sweeps over the system , iterating over and updating the tensors on each site sequentially .each local update during a left - to - right sweep ( right - to - left sweeps work analogously ) consists of the following steps : 1 .optimise the tensor : use an eigensolver targeting the smallest eigenvalue to find a solution to the eigenvalue problem is the new current energy estimate .this first step dominates the computational cost .2 . build according to using .build an appropriately - sized zero block after the dimensions of are known .3 . subspace - expand with and with .4 . 
apply a svd to and truncate its right basis to again , resulting in .5 . multiply the remainder of the svd ( ) into .build from , and .calculate a new energy value after truncation based on , , and . use this energy value and to adapt the current value of ( cf. section [ sec : alpha ] ) .continue on site . of these ,step 2 and 3 implement the actual subspace expansion , whereas all others are identical to standard single - site dmrg .it is important to note that the only change from standard single - site dmrg is the addition of an enrichment step via subspace expansion .therefore , this method does not interfere with e.g. real - space parallelised dmrg, the use of nonabelian symmetries or multi - grid methods. to analyse the computational cost , we have to take special care to ensure optimal ordering of the multiplications during each eigensolver iteration in .the problem is to contract , with and , and .the optimal ordering is then : 1 .contract and over the left mps bond at cost .multiply in over the physical bond of and the left mpo bond at cost .3 . finally contract with over the right mpo and mps bonds at cost .the total cost of this procedure to apply to is . assuming large is small , this gives a speed - up in the eigensolver multiplications of over the cwf approach , which takes .in addition to this speed - up , the subspace expansion is considerably cheaper than the density matrix perturbation .since the perturbation / truncation step can often take up to 30% of total computational time , improvements there also have a high impact . at the same time, the number of sweeps at large needed to converge does not seem to increase compared to the cwf approach ( cf .section [ sec : numexps ] ) and sometimes even decreases .both density matrix perturbation and subspace expansion generally require some small mixing factor to moderate the contributions of the perturbation terms .the optimal choice of this depends on the number of states available and those required to represent the ground state , as well as the current speed of convergence .too large values for hinder convergence by destroying the improvements made by the local optimiser , whereas too small values lead to the calculation being stuck in local minima with vital states not added for the reasons given in section [ sec : cwf : rhopert ] .the correct choice of hence affects calculations to a large degree , but is also difficult to estimate before the start of the calculation .( colour online ) energies of the state at different points during a single update : before optimisation , the state has some initial energy .local optimisation via the eigensolver takes this energy down by to .subsequent truncation causes a rise in energy by with the final value at the end of this update being . ][ fig : energy - levels - alpha ] displays the individual steps within a single update from the energy perspective : let denote the gain in energy during the optimisation step and let denote the subsequent rise in energy during the truncation following the enrichment step . 
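the contraction ordering analysed above can be written down directly; the sketch below (again with our own index conventions, and assuming a hermitian effective hamiltonian) carries out the three steps in exactly that order and wraps them in a scipy LinearOperator so an iterative eigensolver can be used for the local optimisation step.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def apply_heff(L, W, R, A):
    """Apply the effective single-site Hamiltonian to A in the order discussed above.

    Assumed index conventions (ours, for illustration only):
      L[a', w, a]  left environment,   R[b', v, b]  right environment,
      W[w, v, s', s]  MPO tensor,      A[a, s, b]   site tensor.
    """
    t = np.einsum('xwa,asb->xwsb', L, A)      # 1. contract L and A over the left MPS bond
    t = np.einsum('xwsb,wvts->xtvb', t, W)    # 2. multiply in W over the MPO bond and physical index
    return np.einsum('xtvb,yvb->xty', t, R)   # 3. contract with R over the right MPO and MPS bonds

def ground_state_step(L, W, R, A):
    """One local optimisation: smallest eigenpair of the effective Hamiltonian."""
    shape = A.shape
    op = LinearOperator((A.size, A.size), dtype=A.dtype,
                        matvec=lambda x: apply_heff(L, W, R, x.reshape(shape)).ravel())
    val, vec = eigsh(op, k=1, which='SA', v0=A.ravel())
    return val[0], vec[:, 0].reshape(shape)
```

with the environments and the mpo tensor held fixed, the three contractions realise the cost ordering discussed above, the first step dominating for large bond dimension.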
only occurs if some enrichment ( either via density matrix perturbation or subspace expansion ) has occurred , otherwise there would be no need for any sort of truncation .we can hence control the approximate value of via , which leads to a simple adaptive and computationally cheap algorithm : if was very small or even negative ( after changing the optimised state by expansion of its right basis ) during the current update , we can increase during the next update step on the next site .if , on the other hand , , that is , if the error incurred during truncation nullified the gain in energy during the optimisation step , we should reduce the value of at the next iteration to avoid making this mistake again . in practice , it seems that keeping gives the fastest convergence .given the order - of - magnitude nature of , it is furthermore best to increase / decrease it via multiplication with some factor greater / smaller than as opposed to adding or subtracting fixed values .some special cases for very small ( stuck in a local minimum or converged to the ground state ? ) and or have to be considered , mostly depending on the exact implementation .it is unclear whether there is a causal relation between the optimal choice of and the ratio of or whether both simply correlate with a proceeding dmrg calculation : at the beginning , gains in energy are large and is optimally chosen large , whereas later on , energy decreases more slowly and smaller values of are more appropriate .it is important to note that this is a tool to reach convergence more quickly .if one is primarily interested in a wavefunction representing the ground state , the calculation of a new at each iteration comes at essentially zero cost .if , however , the aim is to extrapolate in the truncation error during the calculation , then a fixed value for is of course absolutely necessary .in this sub - section , we will give a short example of how dmrg can get stuck in a local minimum even on a very small system .consider spins with isotropic antiferromagnetic interactions and open boundary conditions .the symmetry of the system is exploited on the mps basis , with the overall forced to be zero .the initial state is constructed from 20 linearly independent states , all with sites on the very right at and in total .the quantum number distribution at each bond is plotted in fig .[ fig : qnums - dist ] as black circles .dmrg3s is run with subspace expansion disabled , i.e. throughout the calculation .the algorithm `` converges '' to some high - energy state at . the resulting quantum number distribution ( red squares in fig . [fig : qnums - dist ] ) shows clear asymmetry both between the left and right parts of the system and the and sectors at any given bond .it is also visible that while some states are removed by dmrg3s without enrichment , it can not add new states : the red squares only occur together with the black filled circles from the input state .if we enable enrichment via subspace expansion , i.e. 
take , dmrg3s quickly converges to a much better ground state at .the quantum numbers are now evenly distributed between the left- and right parts of the system and symmetry is also restored .( colour online ) the quantum number distribution as counted from the right at each bond of a system with and .the artificial input state is shown with black circles .two dmrg calculations have then been done on this input state , once with no enrichment term ( , red squares ) and once with subspace expansion enabled ( , blue diamonds ) .it is clearly visible that without enrichment , dmrg3s can reduce some weights to zero , but can not add new states red only occurs together with black .as soon as enrichment is enabled , dmrg3s restores symmetry and reflective symmetry over the 10 bond and finds a much better ground state . ] in the following subsections , we will compare the two single - site dmrg algorithms cwf and dmrg3s when applied to four different physical systems : a heisenberg spin chain with periodic boundary conditions , a bosonic system with an optical lattice potential , a fermi - hubbard model at and quarter - filling and a system of free fermions at half - filling .each algorithm is run at three different values of from the same initial state and run to convergence .this way , it is possible to both observe the behaviour of the methods at low and high accuracies . the usual setup in dmrg calculations of starting at small and increasing slowly while the calculation progresses makes it unfortunately very difficult to compare between the three methods .this is because different methods require different configurations to converge optimally .we therefore restrict ourselves to fixed throughout an entire calculation , even though all methods could be sped up further by increasing slowly during the calculation .errors in energy compared to a numerically exact reference value are plotted as a function of sweeps and cpu time .it should be stressed that this error in energy is not directly comparable to the truncation error traditionally used in two - site dmrg or the variance sometimes considered in single - site dmrg .even small differences in energy can lead to vastly different physical states and reaching maximal accuracy in energy is crucial to ensure that the true ground state has been reached .furthermore , a traditional two - site dmrg ( 2dmrg ) calculation without perturbations is done and its error in energy and runtime to convergence is compared to the two single - site algorithms . here , _convergence _ is defined as a normalised change in energy less than ( for ) resp . ( for ) .the _ runtime to convergence _ is the cpu time used until that energy was output by the eigensolver for the first time .all calculations were performed on a single core of a xeon e5 - 2650 .firstly , we consider a heisenberg spin chain with sites and periodic boundary conditions implemented on the level of the hamiltonian as a simple link between the first and last site : symmetries are exploited and the calculations are forced in the sector .( colour online ) spin chain eq . : normalised error in energy as a function of sweeps ( left ) and cpu time used ( right ) of the two single - site algorithms at different .dmrg3s shows both a speed - up and an improved convergence per sweep compared to cwf , with a long tail of slow convergence very visible for cwf at high accuracies . ].[tab : numexps : spinchain]spin chain eq . 
:normalised error in energy at convergence and runtime to convergence of all three methods .dmrg3s is consistently faster than cwf , whereas the energies provided by 2dmrg are not comparable in accuracy .[ cols=">,^,^,^",options="header " , ] finally , we consider a model of free fermions on a chain of 100 sites with hamiltonian \;. \label{eq : numexps : fermions}\ ] ] the maximally delocalised wavefunction found in the ground - state of this system is notoriously difficult for mps formats in general to reproduce faithfully . at the same time, most other parameters are identical ( , , ) or very close ( ) to those in the fermi - hubbard model from section [ sec : numexps : fermi - hubbard ] .the calculation is done using and symmetries at half - filling with fermions and .the choice of is the same as for the fermi - hubbard system , namely .we used as the reference value , since all methods converged to this ground - state energy at .the results in tab .[ tab : numexps : fermions ] and fig .[ fig : numexps : fermions ] mostly follow the previous results for locally interacting systems : accuracies of all methods are essentially identical , whereas time to convergence varies between the methods . at small , there are some speed - ups of dmrg3s over cwf , largely due to better convergence behaviour per sweep , whereas a significant advantage of dmrg3s becomes visible at larger , when numerical operations become cheaper compared to the cwf method .correspondingly , the speed - up from cwf to dmrg3s increases from at to at .similarly , the larger numerical cost of two - site dmrg becomes more noticeable at larger , with the speed - up between 2dmrg and dmrg3s increasing from at to more than at .compared to the non - critical fermi - hubbard system from section [ sec : numexps : fermi - hubbard ] , we observe larger errors in energy at fixed , as expected .correspondingly , as more eigenvalues contribute significantly , convergence of both the eigenvalue solver and the singular value decompositions becomes slower , leading to a slow - down of all three methods .the new strictly single - site dmrg ( dmrg3s ) algorithm results in a theoretical speed - up of during the optimisation steps compared to the centermatrix wavefunction formalism ( cwf ) , provided that is small .further , convergence rates per sweep are improved in the important and computationally most expensive high - accuracy / large- phase of the calculation .in addition , auxiliary calculations ( enrichment , normalisation , etc . ) are sped up and memory requirements are relaxed .numerical experiments confirm a speed - up within the theoretical expectations compared to the cwf method .the efficiency of single - site dmrg in general compared to the traditional two - site dmrg was substantiated further by a large speed - up at comparable accuracies in energy .we would like to thank s. dolgov , d. savostyanov and i. kuprov for very helpful discussions . c. hubig acknowledges funding through the exqm graduate school and the nanosystems initiate munich .f. a. 
wolf acknowledges support by the research unit FOR 1807 of the dfg.
references:
[ 1 ] doi:10.1103/physrevlett.69.2863
[ 2 ] doi:10.1103/physrevb.48.10345
[ 3 ] doi:10.1103/revmodphys.77.259
[ 4 ] doi:10.1016/j.aop.2010.09.012
[ 5 ] doi:10.1103/physrevb.87.155137
[ 6 ] doi:10.1209/epl/i2002-00393-0
[ 7 ] doi:10.1103/physrevlett.109.020604
[ 8 ] doi:10.1103/physrevb.72.180403
[ 9 ] doi:10.1126/science.1201080
[ 10 ] doi:10.1103/physrevlett.109.067201
[ 11 ] doi:10.1146/annurev-conmatphys-020911-125018
[ 12 ] doi:10.1103/physrevlett.90.227902
[ 13 ] doi:10.1103/revmodphys.82.277
[ 14 ] doi:10.1088/1742-5468/2007/10/p10014
[ 15 ] doi:10.1137/140953289
[ 16 ] http://nbn-resolving.de/urn:nbn:de:bvb:19-159631
we introduce a strictly single - site dmrg algorithm based on the subspace expansion of the alternating minimal energy ( amen ) method . the proposed new mps basis enrichment method is sufficient to avoid local minima during the optimisation , similarly to the density matrix perturbation method , but computationally cheaper . each application of to in the central eigensolver is reduced in cost for a speed - up of , with the physical site dimension . further speed - ups result from cheaper auxiliary calculations and an often greatly improved convergence behaviour . runtime to convergence improves by up to a factor of 2.5 on the fermi - hubbard model compared to the previous single - site method and by up to a factor of 3.9 compared to two - site dmrg . the method is compatible with real - space parallelisation and non - abelian symmetries .
it has long been realized that a high - altitude observing platform located in the stratosphere and thus above a significant fraction of the earth s atmosphere could offer image quality competitive with space - based platforms .this was the motivation behind the series of stratoscope i and ii balloon flights that ran in the late 1950s , 1960s , and early 1970s flying 0.3 to 0.9 m telescopes to an altitude of 24 km ( 80 kft ) and obtaining 0.2 arcseconds resolution images of the sun , planets , and selected stars and galaxies .stratoscope images along with recent atmospheric turbulence studies have shown that near diffraction - limited image quality can be achieved at altitudes at or above 20 km ( 65 kft ) where the telescope is above 95% or more of the atmosphere .since the end of the stratoscope missions , few high - altitude balloon flights have carried optical and near - infrared astronomical telescopes and detectors .nasa s highly successful multi - million cubic foot , high - altitude balloons flown at altitudes of 30 to 40 km ( 100 130 kft ) have largely been limited to the arctic and antarctic summers and have typically involved heliophysics , x - ray , gamma - ray , particle astrophysics , and ir / sub - mm programs that are unaffected by daylight observing conditions .only a few high altitude balloon flights , like the recent heliophysics sunrise telescope , have been conducted outside of the polar regions. however , such high altitude , daylight balloon missions are generally not suitable for a broad spectrum of general astronomical observing programs requiring dark sky observing conditions .the few nighttime high - altitude astronomical balloon flights that have occurred have been limited to relatively short duration times of a week or less .despite an ever increasing number of space missions , there has been renewed interest in recent years for exploring the use of high - altitude balloon flights for nighttime astronomical research .this has resulted in a number of papers discussing possible lighter - than - air ( lta ) vehicles and telescope arrangements for optical and infrared observations from non - polar locations .a self - propelled , high - altitude , long endurance ( hale ) stratospheric airship capable of keeping station over a desired geographic location would be a highly attractive platform for a variety of astronomical and other science missions .a solar - powered airship operating at altitudes near 20 km , where the stratospheric winds are lightest could , in principle , remain aloft for days , weeks , or even months thus serving as a general purpose astronomical observatory for night observations covering a broad set of targets having a wide range of declinations .besides avoiding so - called `` no - fly zones '' over some countries that restrict free - floating balloon flights over their territory , a station - keeping airship could provide simple and continuous line - of - sight telemetry allowing for high - bandwidth data communication to a single ground station . in basic terms, a stratospheric airship differs from a conventional airship or blimp in terms of cruising altitude , balloon fabric , and propulsion .blimps have thick and robust gas envelopes , are flown at relatively low altitudes ( m ) , at low speeds ( m / s ) , and are powered by conventional piston engines . 
their advantage over airplanesis their ability to stay aloft and hover for long durations without refueling and to do so at a relatively low cost of energy consumption ( see the recent historical review of airships by liao and pasternak ) .the possibility of relatively low construction and operations costs have made airships attractive for a host of potential uses .for example , the us department of defense ( dod ) has funded several high - altitude airship designs and test programs over the last decade with the goal of developing a reliable low cost stratospheric , long duration platform which could provide wide area surveillance and communications capabilities with good air defense .recent dod projects include southwest research institute s ( swri ) sounder and hisentinel vehicles and lockheed - martin s high altitude airship ( haa ) and high altitude endurance - demonstrator ( hale - d ) airships . unfortunately , despite considerable effort and expense , no self - propelled airship built by any manufacturer has flown at stratospheric altitudes for more than one day .the current record for a high altitude airship flight duration may still be the high platform ii vehicle built by raven industries and flown in the late 1960s at 20.4 km ( 67 kft ) for a few hours . a 2007 nasa study of a variety of lta and heavier - than - air ( hta ) unmanned hale vehicles found lta vehicle concepts attractive in terms of performance but were viewed as carrying a high technical risk .this assessment was arrived at , in part , due to the fact that the design and construction of a high - altitude airship poses several major obstacles including large envelope size , extremely lightweight and fragile balloon fabric for lifting gas containment , energy storage and power systems , launch and recovery operations , diurnal thermal management , and high - altitude propulsion motors and propellers . a more recent 2012 assessment of us military airship efforts ( gao report 13 - 81 ) also gave an unfavorable outlook for the future development and deployment of high altitude airships . in reviewing various recent hale airship efforts ,the report noted that many have been either terminated or have suffered `` significant technical challenges , such as overweight components , and difficulties with integration of software development , which , in turn , have driven up costs and delayed schedules . 
'' despite such setbacks , strong interest in the development of a high - altitude , long endurance airship persists .several commercial telecommunication companies continue to pursue hale airship development because such platforms could provide communication and data services to consumers in rural or remote areas and would combine some of the best features of satellite and fixed wireless services such as short transmission delay times , low propagation loss , and relatively large service areas .airship programs such as the recently completed european hapcos project ( http://www.hapcos.org ) , the japanese stratospheric airship platform study , the google internet balloon project ( `` project loon '' ) , and thales alenia space consortium s `` stratosbus '' are among some of the more recent efforts to use balloons for telecommunications purposes .one of the most difficult problems in airship design is propulsion power .while stratospheric wind speeds are lowest ( 5 15 m / s ) at altitudes around 20 km ( 65 kft ) , wind can vary significantly both daily and throughout the year , exceeding 25 m / s at times and even higher in gusts . at these speeds ,wind force on a conventional natural shape balloon is considerable , driving airship designers toward aerodynamic balloon shapes with low form drag values and propulsion systems involving large solar arrays or hydrogen fuel cells . the form or shape drag force acting on a vehicle moving through a fluid of density at speed is where is the drag area ( equal to the projected frontal area ) of the vehicle and is the coefficient of form drag corresponding to the particular shape of the vehicle .similarly , the frictional drag force is where the area is the wetted surface " and the coefficient is the skin frictional drag coefficient ( which depends on the viscosity of the fluid ) . to illustrate the wind induced drag forces on an airship, we will consider the hisentinel50 airship built by swri .this vehicle was cylindrical in shape with length m and diameter m. its frontal area was m , its wetted area was m with drag coefficients estimated at and . at an altitude of 65 kftthe air density is kg / m meaning that for a wind speed of 10 m / s , its total drag force is this force is the thrust needed to oppose its wind - induced drag .the power the airship needs to match this wind force and thereby enable it to keep station is this amount of power is relatively small and practical for an airship using photovoltaic ( pv ) panels and lightweight electric motors .but this example represents a fairly favorable scenario in terms of mild stratospheric winds of just 10 m / s at the sweet spot " altitude around 20 km plus a very low drag airship design . since drag is proportional to the square of velocity and poweris drag times velocity , propulsion power is really proportional to .thus airship power requirements increase rapidly with wind speed . for instance , using the same airship numbers above but now for a wind speed of 30 m / s , the airship s total drag force increases to nearly 370 n requiring 11 kw of power to keep station .this is a considerable amount of power to generate in order to maintain the airship floating above its desired position point , apart from any power that might be required by the airship s payload .however , even at lower wind speeds , having an airship keep station could be challenging . 
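the drag and power estimates above follow directly from the two drag formulas; the short sketch below evaluates them for a streamlined cylindrical hull. the numerical inputs (hull dimensions, drag coefficients, air density near 65 kft) are illustrative assumptions chosen only to land in the same range as the figures quoted in the text, not the values from the swri reports.

```python
import math

def station_keeping_power(v, rho, cd_form, a_front, cd_skin, a_wet):
    """Drag F = 0.5*rho*v^2*(cd_form*a_front + cd_skin*a_wet); station-keeping power P = F*v."""
    drag = 0.5 * rho * v ** 2 * (cd_form * a_front + cd_skin * a_wet)
    return drag, drag * v

# illustrative hull: streamlined cylinder ~10 m diameter, ~50 m long (assumed numbers)
d, length = 10.0, 50.0
a_front = math.pi * (d / 2) ** 2             # projected frontal area, ~79 m^2
a_wet = math.pi * d * length + 2 * a_front   # wetted surface area, ~1730 m^2
rho_65kft = 0.089                            # kg/m^3, approximate air density near 20 km
cd_form, cd_skin = 0.03, 0.003               # assumed low-drag coefficients

for v in (10.0, 30.0):
    f, p = station_keeping_power(v, rho_65kft, cd_form, a_front, cd_skin, a_wet)
    print(f"v = {v:4.1f} m/s : drag = {f:5.0f} N , power = {p / 1e3:5.2f} kW")
```

because drag grows as the square of the wind speed and power as its cube, tripling the wind speed raises the required power by roughly a factor of 27, which is why station-keeping at 30 m/s is marginal for a pv-powered airship of this size.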
if the airship s overall drag forces were twice as large due perhaps to a larger form drag coefficient for the airship or caused by a large and highly non - streamlined mission payload shape , the power required would increase by a factor of two .this would mean some 20 kw of power would then be needed for station - keeping , an amount difficult to generate using pv panels alone on this relatively modest sized airship . since steady wind speeds around 30 m / sare not exceptionally rare at 20 km , this means that strict year - round station - keeping for such an airship might simply not be possible .a radically different approach for establishing a lighter - than - air stratospheric station - keeping platform involves tethering the vehicle to a ground station .this scheme again would keep the platform s altitude to 20 km or so as to take advantage of the lightest stratospheric winds and hence the lowest drag forces on the airship .however , no tethered high altitude stratospheric aerostat has been successfully flown for even one full diurnal cycle , although several attempts were made by french atmospheric scientists in the late 1970s .the main obstacles include aviation restrictions , tether strength and weight , the tether winch , and tether wind drag .storms and wind gusts in the troposphere can generate large transient wind loads on the tether , the winch , and the vehicle itself especially during initial deployment and recovery . despite this , a tethered stratospheric aerostat offers some distinct advantages over powered airships .these include no propulsion motors or propellers allowing for higher mass payloads , no large solar panel arrays to power the propulsion motors , and no large batteries for nighttime propulsion .in addition , the advent of technically advanced , high tensile strength materials such as ultra high molecular weight polyethylene ( uhmwpe ) such as spectra and dyneema ) , polybenzobisoxazole ( pbo ) such as zylon , and liquid crystal polymers such as vectran , kevlar , and technora has made the concept of a tethered stratospheric aerostat more practical than in the past .several papers concerning the feasibility and flight properties of a tethered aerostat at altitudes around 20 km have appeared recently .these include a study of a sea - anchored stratospheric , long duration balloon , the construction , launch and operation of tethered stratospheric balloons as alternatives for satellites , and investigations of the dynamic response for a high altitude tethered balloon aerostat and tether line to winds and their effects on payload pointing stability .the chief advantage of the tethered lta platform scheme is simplicity . in principle , a land or sea deployed tether to a stratospheric balloon from a launch site with favorable tropospheric winds , few aviation hazards or flight restrictions , and seasonal periods of low stratospheric wind speeds , might allow flight durations exceeding a few days .however , weather conditions throughout the tropospheric column ( e.g. , surface and low altitude winds and gusts , storms and downdrafts ) along with tether mass and tether wind loading may severely restrict its applicability and flight duration .as is done for low altitude aerostats , most high - altitude tethered airship models have the tether attached to a ground - based winch which must be operated so as to limit the tension on the tether below its minimum breaking strength . 
despitea number of articles discussing this approach , the only partially successful series of flights seems to have been done by atmospheric researchers in the 1970s and , to our knowledge , no high - altitude tethered aerostats have been attempted since .here we describe an alternative means of establishing a stratospheric station - keeping lta platform that makes use of a tether . during certain times of the year at mid- and low latitudes , winds in the upper stratospheremove in nearly the opposite direction than the wind in the lower stratosphere . a balloon or airship at high altitudecould be tethered to a heavier - than - air glider tug " at a lower altitude where the wind blows essentially in the opposite direction . by adjusting the aerodynamic configuration of the tug, wind forces acting on it can be made to counteract those acting on the airship .n in winter to nearly 40 in summer .the seasons and latitudes are reversed in the southern hemisphere . adapted from plots taken from the european centre for medium - range weather forecasts ( ecmwf ) era-40 website . ]an example configuration exploiting this naturally occurring wind shear is shown in figure [ fig:1 ] .the airship and its payload float at an altitude around 24 km ( 80 kft ) while the tug flies some 7 km lower at around 17 km ( 55 kft ) .the tether connecting them is shorter and hence lighter than it would need to be if it were to extend to the ground and it does not penetrate the turbulent weather of the troposphere .the tug s relatively high altitude places it well above the maximum operating ceilings of all commercial aircraft ( 43 kft ) and private or corporate jets ( 51 kft ) thereby greatly reducing aviation restrictions and hazards .wind at the tug s altitude is generally stronger and the air denser than higher up meaning the tug can be relatively small and still develop the necessary forces to balance that experienced by the upper airship .this approach to a station - keeping capability depends upon stratospheric wind shear that is , the difference in wind speed and direction between the altitude of the airship and that of the tug .figure [ fig:2 ] shows plots of wind speed and direction as a function of altitude and latitude where altitude is indicated by the associated atmospheric pressure .although these plots are multi - year averages for each season , they illustrate the basic stratospheric wind shear phenomenon .each plot is annotated with the example altitudes discussed above and with a range of latitudes for which favorable conditions prevail .although the plots of figure [ fig:2 ] and the results of other stratospheric wind studies indicate the existence of a usable stratospheric wind shear , such multi - year average plots do not reflect the variable day - to - day wind conditions that would actually govern the behavior of the proposed system .such day - by - day wind direction differences at altitudes of 16.7 and 24.4 km ( 55 and 80 kft ) are shown in figure [ fig:3 ] .each of the four plots is for a 60-day interval in the spring of the years 2000 , 2005 , 2010 , and 2013 for the atmosphere above hilo , hawaii ( latitude + 19.8 ) and assembled from radiosonde data available from the university of wyoming s upper air sounding website ( http://weather.uwyo.edu ) .typically , two radiosonde flights are made each day and both measurements are plotted when available .the plot for 2013 shows the most recently available data .more details may be found in .km above hilo , hawaii for four years based on 
radiosonde data . a difference of 180 degrees ( solid line ) indicates directly opposing winds . data for a tug altitude of 16.7 km ( 55 kft ) were extracted from the radiosonde database within a relatively small altitude range ( 0.2 km ) , while the airship's altitude was allowed to vary by km so as to reflect the likelihood of altitude variations due to diurnal heating effects . in cases of missing radiosonde data within these altitude ranges , we interpolated between the two closest values . although the data shown in figure [ fig:3 ] cover an upper altitude range centered at 24.4 km , nearly 75% of the measurements plotted correspond to values taken at altitudes between 23.5 and 24.3 km . it is important to note that not all sounding data covering these time intervals are shown in these plots . besides some missing radiosonde data ( typically just a few days during a month ) , we do not show wind direction differences that exceed 70 degrees . large variations in upper air flows can occasionally lead to unfavorable wind conditions for several days each month . this is the reason that during the year 2000 we show wind direction differences for march 16 - may 15 rather than april 1 - may 30 . during that year , the wind direction reversal formed over hawaii about two weeks earlier than typically seen . during the four periods shown , the number of 12-hour periods during which the easterly and westerly wind directions were greater than 70 degrees apart was 23 in 2000 , 18 in 2005 , 15 in 2010 , and 20 in 2013 . however , on many of these occasions , wind speeds were relatively low at one or both altitudes . keeping in mind these limitations , the plots of figure [ fig:3 ] illustrate that between 16.7 and 24.4 km ( 55 kft and 80 kft ) the stratospheric wind directions are within 45 degrees of being 180 degrees apart for the majority of the days shown . the best of these two-month periods is april and may 2005 , when over 85% of the time the upper- and lower-altitude winds were within 30 degrees of being 180 degrees apart . the worst two-month period shown occurred in 2013 . marked differences year to year are not surprising . this is , after all , weather , and weather patterns can change significantly from one year to the next . however , the regular appearance of such opposing wind flows between stratospheric layers only some 7 km apart can be exploited to maintain the geographical location of a high-altitude platform without the need of propulsion power .
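the day-by-day screening just described reduces to a small calculation per sounding: interpolate the wind to the two reference altitudes and take the circular difference of the directions. the sketch below assumes the soundings have already been parsed into arrays of height, direction and speed (the university of wyoming pages provide plain-text tables); the 16.7 and 24.4 km reference altitudes and the 45-degree acceptance window follow the text, while the example sounding is entirely hypothetical.

```python
import numpy as np

def wind_at(height_m, heights, directions, speeds):
    """Linearly interpolate wind direction (deg) and speed (m/s) to a given altitude.
    Interpolating the vector components avoids the 359 -> 0 degree wrap problem."""
    u = np.interp(height_m, heights, speeds * np.sin(np.radians(directions)))
    v = np.interp(height_m, heights, speeds * np.cos(np.radians(directions)))
    return np.degrees(np.arctan2(u, v)) % 360.0, np.hypot(u, v)

def opposing(sounding, z_tug=16.7e3, z_ship=24.4e3, window_deg=45.0):
    """True if the winds at the tug and airship altitudes are within window_deg of 180 apart."""
    heights, directions, speeds = sounding
    d_tug, _ = wind_at(z_tug, heights, directions, speeds)
    d_ship, _ = wind_at(z_ship, heights, directions, speeds)
    diff = abs((d_ship - d_tug + 180.0) % 360.0 - 180.0)   # circular difference in [0, 180]
    return diff >= 180.0 - window_deg

# hypothetical sounding levels: (height m, wind direction deg, wind speed m/s)
sounding = (np.array([15000.0, 17000.0, 21000.0, 25000.0]),
            np.array([270.0, 265.0, 150.0, 85.0]),
            np.array([18.0, 14.0, 6.0, 9.0]))
print(opposing(sounding))   # True for this made-up example
```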
because of seasonal wind variations above a particular geographic location , stratospheric wind shear will not permit year - round station - keeping .suitable opposing winds are found around latitude in hemispheric summers , but in spring and fall they are found at lower latitudes around 15 to 25 ( see figure [ fig:2 ] .this is shown in figure 4 where we plot wind direction differences at 15.2 km and 24.4 km ( 50 kft and 80 kft ) for the months of june and july in the years 2000 and 2010 over denver , colorado ( latitude + 39.8 ) .although there is considerable scatter , the lower to upper stratospheric wind shear is still within 45 degrees of being directly opposite over 75% of the time .these plots also show that the wind shear can be experienced by a tug at lower altitudes , here at 15.2 km ( 50 kft ) .the seasonal shift in latitude of the stratospheric wind shear means that in order to operate year - round the airship and tug will need to move north or south some 20 30 degrees in latitude during the course of a year .a shift in latitude of the wind shear may be partially responsible for some of the unfavorable wind shear days seen in the spring months over hawaii ( see fig.3 ) .km above denver , colorado for the years 2000 and 2010 based on radiosonde data .a difference of 180 ( solid line ) indicates directly opposing winds ., title="fig : " ] km above denver , colorado for the years 2000 and 2010 based on radiosonde data .a difference of 180 ( solid line ) indicates directly opposing winds ., title="fig : " ] +the operation of the proposed station - keeping system depends on balancing the aerodynamic forces on the airship with those acting on the tug .the air density at the tug s altitude of 17 km is roughly three times that at the airship s altitude of 24 km .in addition , wind speeds are generally two to four times greater at the lower altitude than at the higher altitude. the tug will therefore experience drag forces some 10 to 50 times higher than the airship even if the two vehicles are of similar size and shape .if the airship is streamlined so as to minimize drag , the tug could be made quite compact and lightweight while still developing the counter force necessary for hold the airship steady in the wind .table [ shipforce ] shows sample wind induced drag force calculations for three airship sizes and shapes .these computations assumed an airship altitude of 23.7 km ( 78 kft ) and a variety of ambient wind speeds .the listed drag force values were calculated assuming only form and surface drag .case 1 is an airship similar in size to swri s streamlined hisentinel80 airship , case 2 is a `` super - sized '' hisentinel80 , and case 3 is for a spherical balloon having a displaced volume similar to that of hisentinel80 in case 1 . 
comparing cases 1 and 3 ,it is clear that having a streamlined airship versus a spherical balloon lowers the total wind drag force by about a factor of 20 .also , going from a small to a larger streamlined airship ( cases 1 and 2 ) the system gains a factor of nearly 10 in potential lift while the total drag force increases by only a factor around 2.5 .* case 1 : * hisentinel80 : d = 15 m , l = 60 m ; volume : 10600 m + balloon : 320 kg ( @ 0.1kg / m ) ; helium : 70 kg ; displaced air : 510 kg + ccccc drag & & & wind speed & + parameters & 5 m / s & 10 m / s & 20 m / s & 30 m / s + c = 0.03 ; a = 177 m & 3 n & 13 n & 51 n & 114 n + c = 0.003 ; a = 3500 m & 6 n & 25 n & 100 n & 227 n + total drag force & 9 n & 38 n & 151 n & 341 n + + * case 2 : * super - hisentinel : d = 25 m , l = 100 m ; volume : 49100 m + balloon : 885 kg ( @ 0.1kg / m ) ; helium : 325 kg ; displaced air : 2350 kg + ccccc drag & & & wind speed & + parameters & 5 m / s & 10 m / s & 20 m / s & 30 m / s + c = 0.03 ; a = 490 m & 9 n & 35 n & 141 n & 317 n + c = 0.003 ; a = 9000 m & 16 n & 65 n & 260 n & 583 n + total drag force & 25 n & 100 n & 401 n & 900 n + + * case 3 : * spherical balloon : d = 28 m ; volume : 11500 m + balloon : 250 kg ( @ 0.1 kg / m ) ; helium : 75 kg ; displaced air : 550 kg + ccccc drag & & & wind speed & + parameters & 5 m / s & 10 m / s & 20 m / s & 30 m / s + c = 0.5 ; a = 615 m & 185 n & 740 n & 3000 n & 6700 n + c = 0.003 ; a = 2460 m & 4 n & 18 n & 70 n & 160 n + total drag force & 190 n & 760 n & 3100 n & 6900n + the drag values listed in table [ shipforce ] must be comparable to the wind drag numbers for the lower altitude tug vehicle which are listed in table [ tugforce ] for a range of wind speeds likely to be encountered at the tug s altitude of around 17 km .as an example , we have adopted a tug design in the form of a conventional glider consisting of a narrow fuselage and thin , high - aspect wings with high lift - to - drag ratios .we have assumed some sort of variable drag device as part of the tug with a form drag force proportional to an adjustable area of the device .the table shows drag force values resulting from both open and closed configurations .as table [ tugforce ] shows , it appears feasible for a tug to generate drag forces covering the complete wind speed range calculated for the two streamlined airship cases in table [ shipforce ] ( cases 1 and 2 ) but not for a spherically shaped airship ( case 3 ) .this again disfavors a spherical airship shape .tug : d = 0.75 m , l = 4 m fuselage + 4 m drag device , altitude = 16.7 km ( 55 kft ) + ccccc vehicle & drag & & wind speed & + component & parameters & 10 m / s & 20 m / s & 30 m / s + fuselage + wings & c = 0.12 ; a = 1.8 m & 2.0 n & 9.0 n & 20 n + & c = 0.03 ; a = 7.1 m & 2.0 n & 8.0 n & 19 n + drag device closed & c = 0.03 ; a = 7.1 m & 2.0 n & 8.0 n & 19n + `` '' opened & c = 1.0 ; a = 28 m & 270 n & 1090 n & 2450 n + total force range & & 6 - 275 n & 26 - 1115 n & 60 - 2490 n + single tether : altitude : 0 to 20 km ; total tether length : 20 km ; c = 1.0 + crrrrrr & & & altitude & & + dyneema & 0 - 5 km & 5 - 10 km & 10 - 15 km & 15 - 20 km & + ( sk78 ) & 0.96 kg / m & 0.56 kg / m & 0.30 kg / m & 0.13 kg / m & totals + 5 mm ; mbs 3300 kg & 480 kg & 280 kg & & & 760 kg + mass : 15 kg / km & 75 kg & 75 kg & & & 150 kg + 3 mm : mbs 1400 kg & & & 90 kg & 40 kg & 130 kg + mass : 5 kg / km & & & 25 kg & 25 kg & + & & & & & 1090 kg + tug - airship : altitudes : 17 and 24 km ; total tether length : 12 km ; c = 1.0 + crrrrrr & & 
& altitude & & + dyneema & ... & ... & 17 - 21 km & 21 - 24 km & + ( sk78 ) & ... & ... & 0.10kg / m & 0.06 kg / m & totals + 2 mm : mbs 450 kg & & & 16 kg & 7 kg & 23 kg + mass : 2.4 kg / km & & & 17 kg & 12 kg & + & & & & & 52 kg +lastly , we show in table [ tetherforce ] estimated wind loading values for both a ground - tethered high altitude aerostat and our proposed high altitude airship - tug scheme . herewe have assumed a constant wind speed of 20 m / s at all altitudes .although there are a variety of tether materials that could be used in either scheme , for these sample calculations we chose dyneema sk78 as the tether material .there are stronger tether material options which have higher breaking strengths but these numbers serve to give a sense of mass and wind loads at various altitudes and hence required tether strength . for the single ground - tethered scheme , we chose a 5 mm tether for altitudes 0 10 km ( 0 - 33 kft ) and a 3 mm tether for altitudes 10 20 km ( 43 - 65 kft ). a thicker tether might be required at lower altitudes since it transverses the whole troposphere where more severe transient wind loads are likely to be experienced .in contrast , a thinner 2 mm cord was chosen for the the airship - tug tether since wind loading conditions are far more benign above the jet stream and most storms at an altitudes 17 km ( 55 kft ) and higher .comparison of the two high - altitude tethered airship approaches in table [ tetherforce ] shows that a single tether will experience just under one metric ton of horizontal wind loading plus tension due to 200 kg of tether mass .although this estimate assumes a constant wind speed of 20 m / s along the entire 20 km length , these wind loads and tether mass could actually be an underestimate .it is unlikely that the tether would be as short as 20 km given wind loading and varying wind directions and speeds from the ground winch up to the altitude of 20 km , and thus a tether length as much as 30 km is probably more realistic . in that case , again dividing the tether into 3 mm and 5 mm thicknesses but now each 15 km long an even greater wind loading might exist while the total tether mass increases to around 300 kg . in real life , the situation might be even less favorable since tropospheric wind speeds often exceed 20 m / s and can even be over 50 m / s in the jet stream . at a wind speed of 40 m/ s , just a 1 km long section of a 5 mm thick tether at an altitude around 10 km ( 30 kft ; kg / m ) would have a wind load of 150 kg for this short section . in any case, a minimum break strength ( mbs ) safety factor for a single tether with the chosen thicknesses is low and much less than the usually desired factor of 5 or more .thus , such tether weight and wind loading estimates would seem to pose serious operational challenges for maintaining a stratospheric airship with a grounded tether complicating the ground tether approach further .while , as in the single tether case , a considerably longer tether will be needed in reality than just the 7 km altitude separation of airship and tug , a shorter and thinner tether in an airship - tug scheme offers both a lower tether mass and wind loading . 
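the tether wind loads quoted here can be reproduced with the same cylinder-drag formula applied per unit length and integrated over altitude. the sketch below uses a simple exponential density profile and the 2 mm, 12 km airship-tug tether as an example; the scale height, the unit drag coefficient and the assumption of a uniform 30 m/s wind along the whole tether are our own simplifications, not values from the cited tether studies.

```python
import numpy as np

def tether_wind_load(diameter_m, z_bottom_m, z_top_m, length_m, v_wind,
                     cd=1.0, rho0=1.225, scale_height_m=7000.0):
    """Approximate total wind force (N) on a tether spanning two altitudes.

    The tether is treated as a cylinder in cross-flow; air density follows an
    exponential profile rho(z) = rho0 * exp(-z / scale_height).  The tether length may
    exceed the altitude separation (slack / catenary), so the drag per metre is taken
    at the mean density of the spanned layer.
    """
    z = np.linspace(z_bottom_m, z_top_m, 200)
    rho_mean = np.mean(rho0 * np.exp(-z / scale_height_m))
    force_per_m = 0.5 * rho_mean * v_wind ** 2 * cd * diameter_m
    return force_per_m * length_m

f = tether_wind_load(0.002, 17e3, 24e3, 12e3, 30.0)
print(f"total wind load ~ {f:.0f} N  (~{f / 9.81:.0f} kgf)")
```

even under this pessimistic uniform 30 m/s assumption the estimated load stays well below the 450 kg minimum breaking strength quoted for the 2 mm line, consistent with the comparison made in the text.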
a tether of sk78 dyneema 12 kmlong will have a combined mass load and wind load well below the tether s mbs of 450 kg .for example , even a relatively high 30 m / s wind speed over the entire 12 km long 2 mm tether at altitudes between 17 and 24 km will only generate a total wind load of less than 100 kg .the airship - tug station - keeping arrangement discussed above uses the naturally occurring seasonal stratospheric wind shear to provide the needed energy to keep the system on station .the payload carrying platform s altitude around 80 kft is also much higher than that of a self - propelled airship at 65 kft thereby providing wider horizon to horizon coverage of the earth and better upward viewing image quality .this tether scheme also avoids several problems associated with a ground - based tethered platform ; namely , little if any aviation hazard , no winch , no stormy weather to fly through , and a shorter tether meaning less tether weight and wind loading .in addition , the tether is expected to be always under some tension so slack issues that can arise in a ground - based winch tether arrangement are reduced .wind loading at altitudes above 15 to 17 km ( 50 to 55 kft ) should also be relatively low even in high wind conditions , making a thin and lightweight tether practical .there are several key components of the concept that will determine its reliability and effectiveness .the higher - altitude lta platform must be constructed so as to have no appreciable fabric or seam leaks of lifting gas ( i.e. , hydrogen or helium ) thus permitting long float durations of weeks to months .both it , the tug and the tether should be as lightweight as possible enabling the greatest payload mass in relation to the balloon s lift capability .ideally , the upper lta platform would also have a streamlined aerodynamic shape so as to lessen wind drag forces as much as possible .it should also have some directional lift capability such as through a rear vertical stabilizer so as to help steer it into or against the prevailing winds and be designed for flexibility in payload mounting configuration . for example , astronomers may want a top - mounted telescope that has unobstructed access to targets near the zenith , while earth scientists may prefer down pointing instruments . 
however , the most critical component of the proposed concept is perhaps the tug vehicle .we conceive the tug as taking the form of a ultra - lightweight glider with intrinsically low drag .it could develop the forces needed to counter drift of the airship in two ways : deployment of a variable drag device such as a parachute or umbrella like device or variable pitch propeller(s ) , or it could generate appropriate aerodynamic forces with its wings .drag is necessarily in the direction of airflow , so it may seem that the high - drag configuration would only work if the winds at the two altitudes exactly oppose .but if the airship were a dirigible " design , it could develop aerodynamic forces that are not precisely parallel to wind direction .similarly , the tug could be controlled to fly in a direction that produced the necessary tether force over a wide range of angles relative to the wind direction .the combination of a semi - steerable lta airship and a maneuverable drone - like tug with variable lift capability could allow the system to keep station in a variety of wind combinations .it could even maneuver to find better wind conditions , and climb and descend to some degree as needed .the tug will need to be able to generate it own power to serve its operating flight systems and possibly to be self - propelled to some extent .in addition to solar pv power stored in batteries , the tug could be equipped with a propeller to serve as a variable drag device and power from the propeller could be used to generate electricity both day and night .a cruder wind force balancing scheme was proposed in 1969 by r. bourke in a raytheon company report .he described a concept in which a conventional balloon floating in the stratospheric easterlies could deploy a parachute into the lower stratospheric westerlies to provide a drag force to overcome the balloon s drift . 
using the wind data available at the time, bourke concluded that this arrangement could work for certain months of the year, mainly during summer months at mid-latitudes. but he also found that the altitude and latitude of the lowest stratospheric winds varied seasonally, leading to difficulties in maintaining accurate station keeping. nonetheless, he viewed the concept as "provocative in its intrinsic simplicity". however, to our knowledge no high-altitude balloon plus drag-chute system was ever deployed and tested by raytheon or anyone else. our scheme differs substantially from that proposed by bourke. he suggested that the upper-altitude balloon have self-propulsion capabilities and proposed a simple drag chute lowered from the balloon using a winch only as a supplemental element to aid the airship's station-keeping ability. in contrast, our concept consists of a passive and ideally aerodynamically shaped stratospheric balloon or airship tethered to a lower-altitude robotic tug vehicle that can precisely control its aerodynamic wind forces. our stratospheric airship would have no self-propulsion element but could have some directional steering capabilities, much like that demonstrated in a high-altitude wing guidance system. bourke's use of a winch-lowered drag chute may have been an attempt to simplify the balloon launch. our scheme could also include some sort of tether storage system, possibly attached to the tug vehicle, in an effort to better control deployment and recovery of both upper and lower vehicles. one application for a stratospheric platform would be wide-field, high-resolution optical and near-infrared imaging of astronomical targets. the value of high angular resolution imaging for astronomy cannot be overstated. the chief reason for the enormous impact of the hubble space telescope (hst) across a wide spectrum of research topics, despite its modest-sized mirror (2.4 m), has been its ability to obtain diffraction-limited imaging due to its location above earth's atmosphere. however, with no repair or refurbishment missions currently planned, hubble's expected useful lifetime will probably end before the year 2020 due to instrument failures or degradation of its batteries, solar panels, pointing gyros, and associated equipment. with no present follow-up optical/uv space mission to hubble, its loss may mean astronomical high-resolution imaging might be confined for the near future to small space telescopes or ground-based adaptive optics (ao) instruments, which employ one or more natural or laser guide stars to correct for atmospheric turbulence. unfortunately, ao instruments work best in the infrared and under good seeing conditions, and provide limited fields of view (1 arcmin) with strehl ratios less than 60%.
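as context for the image-quality figures discussed in the next paragraph, the short sketch below evaluates the standard rayleigh diffraction limit (1.22 lambda / d) for a 0.5 m aperture at 500 nm; this is our own check, not a calculation taken from the original work.

```python
import math

# diffraction-limited angular resolution (rayleigh criterion) for a 0.5 m aperture
# at 500 nm, as a quick check of the kind of figure quoted below.
wavelength = 500e-9        # m
aperture = 0.5             # m
rad_to_arcsec = 180.0 / math.pi * 3600.0

theta = 1.22 * wavelength / aperture        # radians
print(f"diffraction limit: {theta * rad_to_arcsec:.2f} arcsec")
# -> about 0.25 arcsec, consistent with the image quality quoted for a 0.5 m mirror.
```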
a reliable lta platform situated at an altitude of 20 km or higher should, if properly equipped, provide image quality competitive with space-based telescopes. such an observatory could provide sub-arcsecond imaging with short response times at a much lower cost than a comparable space-based telescope. for example, at an altitude of 24 km (80 kft) an astronomical telescope would be above the weather and all but 2.5% of the atmosphere. it would experience virtually perfectly clear skies every night, with image quality at or approaching the diffraction limit of the main aperture. thus, an optical telescope located at such stratospheric altitudes with a mirror just 0.5 m in diameter (20-inch), with sufficient pointing stability and large ccd arrays, could provide wide-field images with fwhm = 0.25 arcsecond at 500 nm, making it superior to virtually any ground-based imaging system. being above the weather, it could provide such data quality night after night for as long as the platform remained at this altitude. owing to the lack of appreciable water vapor, dust, and other particulates in the remaining atmosphere above these altitudes, such a platform would also enjoy excellent atmospheric transmission. light scattering from moonlight would be expected to be minimal and not a major factor in scheduling faint-target observations, making most observing time effectively astronomical "dark time." this feature would greatly enhance the platform's ability to respond rapidly to opportunities for observations of faint transient targets such as supernovae and gamma-ray bursters. also, unlike low earth orbit (leo) satellites such as hst, data transfer to and from a high-altitude station-keeping observatory could involve simple line-of-sight communications running continuously to a single ground station. finally, a stratospheric astronomical observatory could also provide reliable science support for a host of space-based missions at an estimated cost of a few percent of a conventional leo satellite. we have described a new method for establishing a near-station-keeping stratospheric lta vehicle at low and mid-latitudes. this concept uses the naturally occurring seasonal wind shear between upper and lower layers of the stratosphere to provide forces that counter platform wind drift and allow it to keep station over a specified geographical location. we have necessarily left out many details about the architecture. these include platform migration issues in order to follow seasonal variations in latitude where optimal stratospheric wind shears are found, launch and recovery problems and solutions, specific airship and tug design constraints, and science payload arrangements to permit unobscured horizon-to-horizon observations. if this method is shown to be practical, then the quest for the long-sought method of station keeping a scientific hale platform may finally be realized, within season and latitude restrictions. this concept could provide the means for obtaining high-quality data rivaling space-based platforms but at a small fraction of the cost. the development of an affordable stratospheric platform that could keep station for weeks or months would be a powerful new tool for a variety of users and could be a game-changer for astronomical, atmospheric, and earth-science research, as well as for a host of other applications including military surveillance and civil telecommunications services. the authors gratefully acknowledge valuable advice
and conversations about high-altitude lta science platforms from participants in the w. m. keck institute for space studies (kiss) workshop entitled "airships: a new horizon for science," especially jeff hall, steve lord, steve smith, mike smith, and workshop co-leads sarah miller, lynne hillibrand, and jason rhodes.
djuknic, g.m., freidenfelds, j., & okunev, y.: establishing wireless communications services via high-altitude aeronautical platforms: a concept whose time has come? ieee communications magazine, pp. 128-135, september 1997
equchi, k., et al.: feasibility study program on stratospheric platform airship technology in japan. aiaa's 13th lighter-than-air systems technology conference, norfolk va (aiaa 99-3912) (1998)
grace, d., thornton, j., chen, g., white, g.p., & tozer, t.c.: improving the system capacity of broadband services using multiple high altitude platforms. ieee transactions on wireless communications, 4, no. 2, p. 700 (2005)
habib, a., vernin, j., benkhaldoun, z., & lanteri, h.: single star scidar: atmospheric parameters profiling using the simulated annealing algorithm. monthly notices of the royal astronomical society, volume 368, issue 3, pp. 1456-1462 (2006)
hibbitts, c. a., young, e., kremic, t., & landis, r.: science measurements and instruments for a planetary science stratospheric balloon platform. proceedings of the aerospace conference, ieee, 2-9 march 2013, big sky, mt. isbn: 978-1-4673-1812-9, id.178 (2013)
hoegemann, c. k., chueca, s., delgado, j. m., et al.: cute scidar: presentation of the new canarian instrument and first observational results. spie 5490, advancements in adaptive optics, 774 (october 25, 2004); doi:10.1117/12.551795
lee, m., smith, s., & androulakakis, s.: the high altitude lighter than air airship efforts at the us army space and missile defense command / army forces strategic command. 18th aiaa lighter-than-air systems technology conference, 4-7 may 2009, seattle, washington, aiaa 2009-2852
perotti, f., della-ventura, a., sechi, g., et al.: balloon-borne observations of ngc 4151 using the miso telescope. in: non-solar gamma-rays; proceedings of the symposium, bangalore, india, may 29 - june 9, 1979. oxford, pergamon press, ltd., p. 67-70 (1980)
rigaut, f., neichel, b., boccas, m., et al.: gemini multiconjugate adaptive optics system review - i. design, trade-offs and integration. monthly notices of the royal astronomical society, 437, p. 2361-2375 (2014)
smith, m.s., and rainwater, e.l.: applications of scientific ballooning technology to high altitude airships. aiaa's 3rd annual aviation technology, integration, and operations, denver co, 17-19 november 2003
smith, i.s., fortenberry, m.l., lee, m., & judy, r.: hisentinel80: flight of a high altitude airship. presented at the 11th aiaa aviation technology, integration, and operations conference, 19th aiaa lighter-than-air, virginia beach, va., september 2011
von appen-schnur, g. f., kueke, r., schaefer, i., & stenvers, k.-h.: aerostatic platforms, past, present, and future: a prototype for astronomy? spie 4014, p. 226-236, airborne telescope systems, ramsey k. melugin; hans-peter roeser; eds. (2000)
welsh, b. y., boksenberg, a., anderson, b., & towlson, w. a.: high resolution ultra-violet observations of alpha lyrae using the university college london balloon-borne telescope system. astronomy and astrophysics, 126, 335-340 (1983)
wilson, r. w., wooder, n. j., rigal, f., & dainty, j. c.: estimation of anisoplanatism in adaptive optics by generalized scidar profiling. monthly notices of the royal astronomical society, volume 339, issue 2, pp. 491-494 (2003)
during certain times of the year at middle and low latitudes , winds in the upper stratosphere move in nearly the opposite direction than the wind in the lower stratosphere . here we present a method for maintaining a high - altitude balloon platform in near station - keeping mode that utilizes this stratospheric wind shear . the proposed method places a balloon - borne science platform high in the stratosphere connected by a lightweight , high - strength tether to a `` tug '' vehicle located in the lower or middle stratosphere . using aerodynamic control surfaces , wind - induced aerodynamic forces on the tug can be manipulated to counter the wind drag acting on the higher altitude science vehicle , thus controlling the upper vehicle s geographic location . we describe the general framework of this station - keeping method , some important properties required for the upper stratospheric science payload and lower tug platforms , and compare this station - keeping approach with the capabilities of a high altitude airship and conventional tethered aerostat approaches . we conclude by discussing the advantages of such a platform for a variety of missions with emphasis on astrophysical research .
in reliability engineering two crucial objectives are considered: (1) to maximize an estimate of system reliability and (2) to minimize the variance of the reliability estimate. because system designers and users are risk-averse, they generally prefer the second objective, which leads to a system design with a slightly lower reliability estimate but a lower variance of that estimate (e.g., ). it provides decision makers with efficient rules compared to other designs which have a higher system reliability estimate, but with a high variability of that estimate. in the case of parallel-series and/or, by duality, series-parallel systems, the variance of the reliability estimate can be lowered by allocation of a fixed sample size (the number of observations or units tested in the system), while the reliability estimate is obtained by testing components, see berry . allocation schemes for estimation with cost, see for example , lead generally to a discrete optimization problem which can be solved sequentially using adaptive designs in a fixed or a bayesian framework. based on a decision-theoretic approach, the authors seek to minimize either the variance or the bayes risk associated with a squared-error loss function. the problem of optimal reliability estimation reduces to a problem of optimal allocation of the sample sizes between bernoulli populations. such problems can be solved _via_ dynamic programming, but this technique becomes costly and intractable for complex systems. in the case of a two-component series or parallel system, optimal procedures can be obtained and solved analytically when the coefficients of variation of the associated bernoulli populations are known, cf., e.g., . unfortunately, the coefficients of variation are not known in practice since they depend themselves on the unknown component reliabilities of the system. in , the author has defined a sequential allocation scheme in the case of a series system and has shown its first-order asymptotic optimality for large sample sizes in comparison with the balanced scheme. in , a reliability sequential scheme (r-ss) was applied successfully to parallel-series systems, when the total number of units to be tested in each subsystem was fixed. recently, in , a two stage design for the same purpose was presented and shown to be asymptotically optimal when the subsystem sample sizes are fixed and large, of the same order as the total sample size of the system. the problem considered in this paper is useful for estimating the reliability of a parallel-series and/or, by duality, a series-parallel system, when the component reliabilities are unknown as well as the total numbers of units allowed to be tested in each subsystem. this work improves the results in by developing a hybrid two stage design to get a dynamic allocation between the sample sizes allowed for subsystems and those allowed for their components. for example, consider a parallel system of four components (1), (2), (3) and (4), with reliabilities 0.05, 0.1, 0.95 and 0.99, respectively, under the constraint that the total number of observations allowed is . then, the sequential scheme given in suggests testing, respectively, 10, 10, 28 and 52 units and produces a variance of the system reliability estimate equal, approximately .
this is visibly better ,compared to the balanced scheme which takes an allocation equal 25 in each component and produces a variance ten times greater then the former .the hybrid sequential scheme proposed in this paper is a tool to solve the same problem when the components are replaced by subsystems .more precisely , it combines the schemes developed for parallel and/or series systems in order to obtain approximately the best allocation at subsystems level as well as at components level . in section [ prelim ] , definitions and preliminary resultsare presented accompanied by the proper two stage design for a parallel subsystem just as was defined in and its asymptotic optimality is proved for a fixed and large sample size . in section[ lower ] , a parallel - series system is considered and it is shown that the variance of its reliability estimate has a lower bound independent of allocation .this leads , in section [ overlapping ] , to the main result of this paper which lies in the hybrid two stage algorithm and its asymptotic optimality for a fixed and large sample size allowed for the system . in section [ monte carlo ] ,the results are validated _ via _ monte carlo simulation and it is shown that our algorithm leads asymptotically to the best allocation scheme to reach the lower bound of the variance of the reliability estimate . the last section is reserved for conclusion and remarks .consider a system of subsystems connected in series , each subsystem contains components connected in parallel .the system should be referred as parallel - series system .assume s - independence within and across populations , then the system reliability is where the reliability of the parallel subsystem and the reliability of component .an estimator of is assumed to be the product of sample reliabilities where and is the sample mean of functioning units in component , is used to estimate where is the sample size and is the binary outcome of the unit in component .it should be pointed that a unit is not necessarily a physical object in a component , but it represents just a bernoulli observation of the functioning / failure state of that component .hence , for each subsystem , one must allocate such that the estimated reliability of the system is based on a total sample size in the series case , with the help of s - independence and the fact that a sample mean is an unbiased estimator of a bernoulli parameter , see , the variance of the estimated reliability incurred by any allocation scheme can be obtained , \label{vp}\]]is given as a function of the allocation numbers and the coefficients of variation of bernoulli populations have found convenient to work with the equivalent expression of ( [ vp ] ) , ,\ ] ] where a sum over all the products of at least two of its arguments . the problem is to estimate when components reliabilities are unknowns and a total number of units must be tested in the system at components level .the aim is to minimize the variance of .hence , the problem can be addressed by developing allocation schemes to select , the numbers of units to be tested in each component in the subsystem , under the constraint that the variance of is as small as possible .reliability sequential schemes ( r - ss ) exist for the series , parallel or parallel - series configurations when the sample sizes of the subsystems are fixed . 
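to make the variance being minimized concrete, the sketch below computes the exact variance of the plug-in estimate of a parallel-series system under a given allocation, using only the s-independence assumptions stated above; the notation, the toy reliabilities and the allocation are ours (hypothetical), and the function is not the (r-ss) or two stage scheme itself, only the criterion such schemes aim to reduce.

```python
import numpy as np

def parallel_series_variance(r, k):
    """exact variance of the plug-in reliability estimate of a parallel-series system.

    r[j][i] is the reliability of component i in parallel subsystem j;
    k[j][i] is the number of bernoulli observations allotted to that component.
    assumes s-independence, so each component estimate is an independent sample mean.
    (sketch only; the notation is ours, not the paper's.)
    """
    prod_R2, prod_second = 1.0, 1.0
    for r_j, k_j in zip(r, k):
        q = 1.0 - np.asarray(r_j, dtype=float)      # component unreliabilities
        n = np.asarray(k_j, dtype=float)
        # variance of the product of the (1 - r_hat) sample means for subsystem j
        var_Qj = np.prod(q**2 + (1.0 - q) * q / n) - np.prod(q**2)
        R_j = 1.0 - np.prod(q)
        prod_R2 *= R_j**2
        prod_second *= R_j**2 + var_Qj
    return prod_second - prod_R2

# hypothetical example: two parallel subsystems in series, with one arbitrary allocation
r = [[0.7, 0.8], [0.6, 0.9, 0.95]]
k = [[40, 40], [40, 40, 40]]
print(parallel_series_variance(r, k))
```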
therefore , one can fully optimize the variance of just by applying the ( r - ss ) to find the best partition of .unfortunately , a full sequential design can not be used in practice for large systems since the number of operations will growth dramatically . for this reason, we reasonably propose a hybrid two stage design which is shown to be asymptotically optimal when is large .for the asymptotic optimization of the variance of the estimated reliabilities , we make use of the well - known lagrange s identity which can be written in the form : let , for and , then the following identity holds.\ ] ] [ th0]denote by the proof is a direct consequence of the previous identity ( [ lem1 ] ) .indeed following the expansion ( [ var rj ] ) and since contains second order terms ( see later ) , one gives interest to the numbers which minimize the expression must verify for implies that if one assumes that is fixed then a proper two stage scheme can be used to determine , just as was defined in , as follows : choose as a function of such that : 1 . must be large if is large , 2 . 3 . .one can take for example ] denotes the integer part .: : sample units from each component in the subsystem , estimate by its maximum likelihood estimator ( m.l.e ) and define the predictor , according to ( [ mij]),,~i=1,\ldots , n_{j}-1\ ] ] stage 2 .: : sample units for which are units from component in the subsystem where is the corrector of defined by [ th1 ] choosing the according to the previous two stage sampling scheme , one obtains from relation ( [ var rj ] ) , one can write is large enough , condition ( iii ) gives for .so , the strong law of large numbers with the integer part properties give , when , .hence, and on the other hand which achieves the proof .we consider now the parallel series system . 
from expression ( [ g ] ) , one can write\ ] ] the following theorem gives a lower bound for the variance of .[ th2 ] denote by ^{2}\]]then expanding the right hand side of ( [ g ] ) and using ( [ r ] ) , one obtains,\ ] ] which gives with the help of theorem [ th0] last expression has the form can be expanded , thanks to identity ( [ lem1 ] ) , as follows ^{2}\nonumber \\ & + & r^{2 } t^{-1}\sum\limits_{i=1}^{n-1}\sum\limits_{j = i+1}^{n}\frac{\left ( t_{i}\frac{1-r_{j}% } { r_{j}}\sum\limits_{k=1}^{n_{j}}c_{kj}^{-1}-t_{j}\frac{1-r_{i}}{r_{i}}% \sum\limits_{k=1}^{n_{i}}c_{ki}^{-1}\right ) ^{2}}{t_{i}t_{j } } \label{vr2e}\end{aligned}\]]and as a consequence ^{2}=q,\]]which achieves the proof .similarly to the case of a subsystem an from expressions ( [ vr1 ] ) and ( [ vr2e ] ) , one gives interest to the numbers which minimize the quantity obtains the asymptotic optimality criteria all , which gives the rule we can now implement a hybrid two stage design for the determination of the numbers as well as as follows : stage 1 : : choose ] .next , obtain the predictor , according to the rule ( [ tj]),,~j=1,\ldots , n-1.\ ] ] stage 2 : : define the corrector take back the two stage scheme for each subsystem to calculate with the sample size equals .now , the main result of this paper is given by the following theorem .[ th3]choosing the and according to the hybrid two stage design , one obtains is defined in theorem [ th2 ] .the relation ( [ vrj ] ) implies that a consequence of the hybrid two stage design and the strong law of large numbers , and remain bounded for all as .it follows that , as , to ( [ eq1 ] ) and ( [ eq2 ] ) .thus, implies that as a consequence,\]]now , expanding the product within the limit and applying identity ( [ lem1 ] ) , after having replaced by its expression ( [ qj ] ) , one obtains \\ & = & q+r^{2 } \left ( a+b\right ) , \end{aligned}\ ] ] where once more , the hybrid two stage allocation scheme and the strong law of large numbers provide achieves the proof .let us remark first that the lower bound is a first order approximation of the optimal variance of the reliability estimate under the constraint ( [ constr ] ) when is large . in the first experiment, we will validate the fact that the hybrid scheme provides the best allocation at system level . as in figure[ fig : f1 ] , we consider a simple parallel - series system of two subsystems each one , with varying reliabilities and a fixed sample size .for each situation a , b , c and d and for each partition sample size where varies from ] , by step one , we have applied the proper two stage design for each parallel subsystem and reported in a bar diagram as a function of , see figure [ fig : f2 ] . on the other hand , in table [ tab:1 ] , we have reported the expected value of given by the hybrid two stage design . as expected, our scheme gives the best allocation for each situation .the second experiment deals with a non trivial parallel - series system just as in , where subsystems are composed , respectively , of 2,3,4 and 5 components , see figure [ fig : f3 ] .the partition total numbers to test in each subsystem are evaluated systematically by the hybrid two stage design while their sum is incremented from 100 to 10000 by step of 100 .figure [ fig : f4 ] shows the rate of the excess of variance at logarithmic scale as a function of the sample size .the asymptotic optimality of the hybrid scheme is validated . 
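a simulation check in the spirit of the experiments just described can be sketched as follows; the layout, reliabilities and allocation are hypothetical placeholders (the same toy values as in the earlier sketch), not the configurations of figures [ fig : f3 ] and [ fig : f4 ], and the hybrid design's allocation rule itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_R(r, k, rng):
    """one monte carlo replication of the plug-in reliability estimate of a
    parallel-series system; r[j][i] are component reliabilities and k[j][i] the
    numbers of bernoulli observations allotted to each component (hypothetical)."""
    R_hat = 1.0
    for r_j, k_j in zip(r, k):
        q_hat = [1.0 - rng.binomial(n, p) / n for p, n in zip(r_j, k_j)]
        R_hat *= 1.0 - np.prod(q_hat)
    return R_hat

# the same hypothetical layout and allocation as in the earlier sketch
r = [[0.7, 0.8], [0.6, 0.9, 0.95]]
k = [[40, 40], [40, 40, 40]]

draws = np.array([estimate_R(r, k, rng) for _ in range(20_000)])
R_true = np.prod([1 - np.prod([1 - p for p in r_j]) for r_j in r])
print(f"true R = {R_true:.4f}, mc mean = {draws.mean():.4f}, mc variance = {draws.var():.2e}")
# different allocations k can be plugged in and the resulting monte carlo variances
# compared, which is the kind of comparison the experiments above perform; the value
# should agree with the closed-form variance up to monte carlo error.
```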
[figure: results as a function of the subsystem sample size, for each case a, b, c and d, with the location of the minimum indicated.]
[table: expected value of the allocation given by the hybrid two stage design.]
the proof of the first-order asymptotic optimality for the proper two stage design for a parallel subsystem, as well as for the hybrid two stage design for the full system, has been obtained mainly through the following steps:
* an adequate writing of the variance of the reliability estimate,
* a lower bound for this variance, independent of allocation,
* the allocation defined by the hybrid sampling scheme and the strong law of large numbers.
with a straightforward but tedious adaptation, the above study can notably be extended to deal with complex systems involving a multi-criteria optimization problem under a set of constraints such as risk, system weight, cost, performance and others, in a fixed or in a bayesian framework. this work is supported by grants from the national research project (pnr) and the l.a.a.r laboratory of the department of physics at the university mohamed boudiaf of oran.
m. terbeche, o. broderick, two stage design for estimation of mean difference in the exponential family, advances and applications in statistics 5(3) (2005) 325-339.
m. woodroofe, j. hardwick, sequential allocation for an estimation problem with ethical costs, annals of statistics 18(3) (1990) 1358-1377.
we give a hybrid two stage design which can be useful to estimate the reliability of a parallel series and/or by duality a series parallel system , when the component reliabilities are unknown as well as the total numbers of units allowed to be tested in each subsystem . when a total sample size is fixed large , asymptotic optimality is proved systematically and validated _ via _ monte carlo simulation . * keywords . * asymptotic optimality ; hybrid ; reliability ; parallel - series ; two stage design .
for many decades, statisticians have made attempts to prepare the bayesian omelette without breaking the bayesian eggs; that is, to obtain probabilistic likelihood-based inferences without relying on informative prior distributions. a recent example is murray aitkin's book, _statistical inference_, which is the culmination of a long research program on the topic of integrated evidence, exemplified by the discussion paper of . the book, subtitled _an integrated bayesian/likelihood approach_, proposes handling statistical hypothesis testing and model selection via comparisons of posterior distributions of likelihood functions under the competing models, or via the posterior distribution of the likelihood ratios corresponding to those models. (the essence of the proposal is detailed in section [ small ? ].) instead of comparing bayes factors or performing posterior predictive checks (comparing observed data to posterior replicated pseudo-datasets), _statistical inference_ recommends a fusion between likelihood and bayesian paradigms that allows for the perpetuation of noninformative priors in testing settings where standard bayesian practice prohibits their usage or requires an extended decision-theoretic framework. while we appreciate the considerable effort made by aitkin to place his theory within a bayesian framework, we remain unconvinced of the said coherence, for reasons exposed in this note. from our bayesian perspective, and for several distinct reasons detailed in the present note, integrated bayesian/likelihood inference cannot fit within the philosophy of bayesian inference. aitkin's commendable attempt at creating a framework that incorporates the use of arbitrary noninformative priors in model choice procedures is thus incoherent in this bayesian respect. when improper priors lead to meaningless bayesian procedures for posterior model comparison, we see this as a sign that the bayesian model will not work for the problem at hand.
rather than trying at all cost to keep the offending model and define marginal posterior probabilities by fiat ( whether by bic , dic , intrinsic bayes factors , or posterior likelihoods ) , we prefer to follow the full logic of bayesian inference and recognize that , when one s bayesian approach leads to a dead end , one must change either one s methodologies or one s beliefs ( or both ) .bayesians , both subjective and objective , have long recognized the need for tuning , expanding , or otherwise altering a model in light of its predictions ( see , for example , and ) , and we view undefined bayes factors as an example where otherwise useful methods are being extended beyond their applicability .to try to work around such problems without altering the prior distribution is , we believe , an abandonment of bayesian principles and , more importantly , an abandoned opportunity for model improvement .the criticisms found in the current review are therefore not limited to aitkin s book ; they also apply to previous patches such as the deviance information criterion ( dic ) of ( which also uses a posterior " expectation of the log - likelihood ) and the pseudo - posteriors of (which make an extensive use of the data in their product of predictives ) .unlike the author , who has felt the call to construct a partly new if tentatively unifying foundation for statistical inference , we have the luxury of feeling that we already live in a comfortable ( even if not flawless ) inferential house .thus , we come to aitkin s book not with a perceived need to rebuild but rather with a view toward strengthening the potential shakiness of the pillars that support our own inferences .a key question when looking at any method for probabilistic inference that is not fully bayesian is : for the applied problems that interest us , does the proposed new approach achieve better performances than our existing methods ?our answer , to which we arrive after careful thought , is no . as an evaluation of the ideas found in _statistical inference _ , the criticisms found in this review are inherently limited .we do not claim here that aitkin s approach is wrong _ per se _ merely that it does not fit within our inferential methodology , namely bayesian statistics , despite using bayesian tools .we acknowledge that statistical methods do not , and most likely never will , form a seamless logical structure. it may thus very well be that the approach of comparing posterior distributions of likelihoods could be useful for some actual applications , and perhaps aitkin s book will inspire future researchers to demonstrate this ._ statistical inference_ begins with a crisp review of frequentist , likelihood and bayesian approaches to inference and then proceeds to the main issue : introducing the `` integrated bayes / likelihood approach '' , first described in chapter 2 .much of the remaining methodological material appears in chapters 4 ( `` unified analysis of finite populations '' ) and 7 ( `` goodness of fit and model diagnostics '' ) .the remaining chapters apply aitkin s principles to various examples .the present article discusses the basic ideas in _ statistical inference _ , then consider the relevance of aitkin s methodology within the bayesian paradigm . this quite small change to standard bayesian analysis allows a very general approach to a wide range of apparently different inference problems ; a particular advantage of the approach is that it can use the same noninformative priors . 
" _ statistical inference _ , page xiii the quite small change " advocated by _statistical inference _ consists in envisioning the likelihood function as a generic function of the parameter that can be processed a posteriori ( that is , with a distribution induced by the posterior ) , hence allowing for ( posterior ) cdf , mean , variance and quantiles .in particular , the central tool for aitkin s model fit is the posterior cdf " of the likelihood , as argued by the author ( chapter 2 , page 21 ) , this small change " in perspective has several appealing features : * the approach is general and allows to resolve the difficulties with the bayesian processing of point null hypotheses , being defined solely by the bayesian model associated with ; * the approach allows for the use of generic noninformative and improper priors , again by being relative to a single model ; * the approach handles more naturally the vexed question of model fit " , still for the same reason ; * the approach is `` simple . '' as noted above , the setting is quite similar to spiegelhalter et al.s ( ) dic in that the deviance is a renaming of the likelihood and is considered a posteriori " both in $ ] and in , where is a bayesian estimator of , since the discussion of made this point clear , see in particular , even though the authors disagreed . make a similarly ambiguous proposal that also relates to by its usage of cross - validation quantities .we however dispute both the appropriateness and the magnitude of the change advocated in _ statistical inference _ and show below why , in our opinion , this shift in paradigm constitutes a new branch of statistical inference , differing from bayesian analysis on many points .first , using priors and posteriors is no guarantee that inference is bayesian .empirical bayes techniques are witnesses of this .aitkin s key departure from bayesian principles means that his procedure has to be validated on its own , rather than benefiting from the coherence inherent to bayesian procedures .the practical advantage of the likelihood / bayesian approach may be convenience , but the drawback is that the method pushes both the user and the statistician _ away _ from progress in model building . within a model , even while giving meaningless ( or at least not universally accepted ) values for marginal likelihoods that are needed for bayesian model comparison .it does when interest shifts from to that the bayesian must set aside most of noninformative and , perhaps reluctantly , set up an informative model .see , e.g. , and for some current perspectives on bayesian model choice using noninformative priors . ]we envision bayesian data analysis as comprising three steps : ( 1 ) model building , ( 2 ) inference , and ( 3 ) model checking . in particular , we view steps ( 2 ) and ( 3 ) as separate .inference works well , with many exciting developments still in the coming , handling complex models , leading to an unlimited range of applications , and a partial integration with classical approaches ( as in the empirical bayes work of , or more recently the similarities between hierarchical bayes and frequentist false discovery rates discussed by ) , causal inference , machine learning , and other aims and methods of statistical inference . 
even in the face of all this progress on inference, bayesian model checking remains a bit of an anomaly, with the three leading bayesian approaches being bayes factors, posterior predictive checks, and comparisons of models based on prediction error and other loss-based measures. (decision-theoretic analyses as in , while intellectually convincing, have not gained the same amount of popularity.) unfortunately, as aitkin points out, none of these model checking methods works completely smoothly: bayes factors depend on aspects of a model that are untestable and are commonly assigned arbitrarily; posterior predictive checks are, in general, "conservative" in the sense of producing p-values whose probability distributions are concentrated near 1/2; and prediction error measures (which include cross-validation and dic) require the user to divide data into test and validation sets, lest they use the data twice (a point discussed immediately below). the setting is even bleaker when trying to incorporate noninformative priors, and new proposals are clearly of interest. "a persistent criticism of the posterior likelihood approach (...) has been based on the claim that these approaches are 'using the data twice,' or are violating temporal coherence." _statistical inference_, page 48. "using the data twice" is not our main reservation about the method, if only because this is a rather vague concept. obviously, one could criticize the use of the "posterior expectation" of the likelihood as being the ratio of the marginal of the twice-replicated data over the marginal of the original data, \[ \int l(\theta , x )\, \pi(\theta|x)\,\text{d}\theta = \dfrac{m(x , x)}{m(x)}\,, \] similar to (a criticism clearly expressed in the discussion therein). however, a more fundamental issue is that the "posterior" distribution of the likelihood function cannot be justified from a bayesian perspective. _statistical inference_ stays away from decision theory (as stated on page xiv), so there is no derivation based on a loss function or the like. our primary difficulty with the integrated likelihood idea (and dic as well) is (a) that the likelihood function does not exist a priori and (b) that it requires a joint distribution to be properly defined in the case of model comparison. the case for (a) is arguable, as aitkin would presumably contest that there exists a joint distribution on the likelihood, even though the case of an improper prior stands out (see below). we still see the concept of a posterior probability that the likelihood ratio is larger than 1 as meaningless. the case for (b) is more clear-cut in that, when considering two models, hence a likelihood ratio, a bayesian analysis does require a joint distribution on the two sets of parameters to reach a decision, even though in the end only one set will be used. as detailed below in section [ prodpost ], this point is related to the introduction of pseudo-priors by , who needed arbitrarily defined prior distributions on parameters that do not exist.
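the identity displayed above (the posterior expectation of the likelihood equaling m(x,x)/m(x)) is easy to check numerically in a conjugate setting; the beta-binomial example below uses our own toy numbers, not an example from the book or its review, and compares the closed-form ratio with a monte carlo evaluation of the posterior expectation of the likelihood.

```python
import numpy as np
from scipy import stats
from scipy.special import betaln, comb

rng = np.random.default_rng(2)

a, b, n, x = 1.0, 1.0, 20, 14          # hypothetical beta prior and binomial data

def log_marginal(counts):
    """log marginal likelihood of a set of (x, n) binomial observations under a beta(a, b) prior."""
    s = sum(xi for xi, _ in counts)                # total successes
    f = sum(ni - xi for xi, ni in counts)          # total failures
    return (sum(np.log(comb(ni, xi)) for xi, ni in counts)
            + betaln(a + s, b + f) - betaln(a, b))

m_x = np.exp(log_marginal([(x, n)]))
m_xx = np.exp(log_marginal([(x, n), (x, n)]))      # the data "replicated twice"

theta = rng.beta(a + x, b + n - x, size=1_000_000)     # posterior draws
post_mean_lik = stats.binom.pmf(x, n, theta).mean()    # monte carlo E[ L(theta, x) | x ]

print("m(x,x)/m(x)         :", m_xx / m_x)
print("posterior mean of L :", post_mean_lik)
# the two numbers agree up to monte carlo error, as the identity above asserts.
```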
in the specific case of an improper prior , aitkin sapproach can not be validated in a probability setting for the reason that there is no joint probability on .obviously , one could always advance that the whole issue is irrelevant since improper priors do not stand within probability theory .however , improper priors do stand within the bayesian framework , as demonstrated for instance by and it is easy to give those priors an exact meaning .when the data are made of iid observations from and an improper prior is used on , we can consider a _ training sample _ , with such that if we construct a probability distribution on by the posterior distribution associated with this distribution and the remainder of the sample is given by this distribution is independent from the choice of the training sample ; it only depends on the likelihood of the whole data and it therefore leads to a non - ambiguous posterior distribution on .however , as is well known , this construction does not lead to produce a joint distribution on , which would be required to give a meaning to aitkin s integrated likelihood .therefore , his approach can not cover the case of improper priors within a probabilistic framework and thus fails to solve the very difficulty with noninformative priors it aimed at solving .this is further illustrated by the use of haldane s prior in chapter 4 of _ statistical inference _ , despite it not allowing for empty cells in a contingency table .`` the -value is equal to the posterior probability that the likelihood ratio , for null hypothesis to alternative , is greater than 1 ( ... ) the posterior probability is that the posterior probability of is greater than 0.5 . '' _ statistical inference _, pages 4243 those two equivalent statements show that it is difficult to give a bayesian interpretation to aitkin s method , since the two posterior probabilities " quoted above are incompatible . indeed ,a fundamental bayesian property is that the posterior probability of an event related with the parameters of the model is not a random quantity but a number . to consider the posterior probability of the posterior probability " means we are exiting the bayesian domain , both from logical and philosophical viewpoints . in chapter 2 , aitkin exposes his ( foundational ) reasons for choosing this new approach by integrated bayes / likelihood .his criticism of bayes factors is based on several points we feel useful to reproduce here : 1 .[ i ] have we really eliminated the uncertainty about the model parameters by integration ?the integrated likelihood ( ... ) is the expected value of the likelihood .but what of the prior variance of the likelihood ? "( page 47 ) .[ ii ] any expectation with respect to the prior implies that the data has not yet been observed ( ... ) so the integrated likelihood `` is the joint distribution of random variables drawn by a two - stage process .( ... ) the marginal distribution of these random variables is not the same as the distribution of ( ... ) and does not bear on the question of the value of in that population '' ( page 47 ) .[ iii ] we can not use an improper prior to compute the integrated likelihood .this eliminate the usual improper noninformative priors widely used in posterior inference . "( page 47 ) .[ iv ] any parameters in the priors ( ... 
) will affect the value of the integrated likelihood and this effect does not disappear with increasing sample size "( page 47 ) .[ v ] the bayes factor is equal to the posterior mean of the likelihood ratio between the models " _ [ meaning under the full model posterior ] _( page 48 ) . 6 .[ vi ] `` the bayes factor diverges as the prior becomes diffuse .( ... ) this property of the bayes factor has been known since the lindley / bartlett paradox of 1957 '' ( page 48 ) .the representation [ i ] of the integrated " ( or marginal ) likelihood as an expectation under the prior \ ] ] is unassailable and is for instance used as a starting point for motivating the nested sampling method .this does not imply that the extension to the variance or to any other moment stated in [ i ] has a similar meaning , nor that the move to the expectation under the posterior is valid within the bayesian paradigm . while the difficulty [ iii ] with improper priors is real , and while the impact of the prior modelling [ iv ] may have a lingering effect , the other points can be easily rejected on the ground that the posterior distribution of the likelihood is meaningless within a bayesian perspective .this criticism is anticipated by aitkin who protests on pages 48 - 49 that , given point [ v ] , the posterior distribution must be meaningful , " since the posterior mean is meaningful " , but the interpretation of the bayes factor as a posterior mean " is only an interpretation of an existing integral ( in the specific case of nested models ) , it does not give any validation to the analysis .( the marginal likelihood may similarly be interpreted as a prior mean , despite depending on the observation , as in the nested sampling perspective .more generaly , bridge sampling techniques also exploit those multiple representations of a ratio of integrals , . )one could just as well take [ ii ] above as an argument _ against _ the integrated likelihood / bayes perspective .in the case of unrelated models to be compared , the fundamental theoretical argument against using posterior distributions of the likelihoods and of related terms is that the approach leads to parallel and separate simulations from the posteriors under each model . _ statistical inference_ recommends that models be compared via the distribution of the likelihood ratio values , where the s and s are drawn from the respective posteriors .this choice is similar to scott s ( ) and to congdon s ( ) mistaken solutions exposed in , in that mcmc simulations are run for each model separately and the resulting samples are then gathered together to produce either the posterior expectation ( in scott s , 2002 , case ) or the posterior distribution ( for the current paper ) of which do not correspond to genuine bayesian solutions ( see ) .again , this is not as much because the dataset is used repeatedly in this process ( since reversible mcmc produces as well separate samples from the different posteriors ) as the fundamental lack of a common joint distribution that is needed in the bayesian framework .this means , e.g. , that the integrated likelihood / bayes technology is producing samples from the product of the posteriors ( a product that clearly is not defined in a bayesian framework ) instead of using pseudo - priors as in , i.e. 
of considering a joint posterior on , which is [ proportional to ] this makes a difference in the outcome , as illustrated in figure [ fig : sxot ] , which compares the distribution of the likelihood ratio under the true posterior and under the product of posteriors , when assessing the fit of a poisson model against the fit of a binomial model with trials , for the observation .the joint simulation produces a much more supportive argument in favor of the binomial model , when compared with the product of the posteriors .( again , this is inherently the flaw found in the reasoning leading to scott s , 2002 , and congdon s , 2006 , methods for approximating bayes factors . ) _ comparison of the distribution of the likelihood ratio under the correct joint posterior and under the product of the model - based posteriors , when assessing a poisson model against a binomial with trials , for .the joint simulation produces a much more supportive argument in favor of the negative binomial model , when compared with the product of the posteriors ._ ] although we do not advocate its use , a bayesian version of aitkin s proposal can be constructed based on the following loss function that evaluates the estimation of the model index based on the values of the parameters under both models and on the observation : here means that model is chosen , and denotes the likelihood under model . under this loss , the bayes ( optimal )solution is > 1/2\\ 2 & \text{otherwise , } \end{cases}\ ] ] which depends on the _ joint _ posterior distribution on , thus differs from aitkin s solution .we have = & \pi(\mathcal m_1| x ) \int_{\theta_2 } \mbox{pr}^{\pi_1}\left [ l^1(\theta_1 ) > l^2(\theta_2)| x , \theta_2 \right ] \,\text{d}\pi_2(\theta_2)\\ & + \pi(\mathcal m_2| x ) \int_{\theta_1 } \mbox{pr}^{\pi_2}\left [ l^1(\theta_1 ) > l^2(\theta_2)| x , \theta_1\right ] \,\text{d}\pi_1(\theta_1)\,,\end{aligned}\ ] ] where and denote the log - likelihoods and where the probabilities within the integrals are computed under and , respectively .( pseudo - priors as in could be used instead of the true priors , a requirement when at least one of those priors is improper . ) an asymptotic evaluation of the above procedure is possible : consider a sample of size , .if is the true " model , then and we have & = \mbox{pr } \left [ -\mathcal x^2_{p_1 } > l^2(\theta_2)- l^2(\hat{\theta_1 } ) \right ] + o_p(1/\sqrt{n } ) \\ & = f_{p_1}\left [ l^1(\hat{\theta_1 } ) - l^2(\theta_2 ) \right ] + o_p(1/\sqrt{n})\,,\end{aligned}\ ] ] with obvious notations for the corresponding log - likelihoods , the dimension of , the maximum likelihood estimator of , and a chi - square random variable with degrees of freedom . notealso that , since , where denotes the kullback leibler divergence and denotes the _ projection _ of the true model on : , we have = 1 + o_p(1)\,.\ ] ] by symmetry , the same asymptotic consistency occurs under model . on the opposite, aitkin s approach leads ( at least in regular models ) to the approximation ,\ ] ] where the and random variables are independent , hence producing quite a different result that depends on the asymptotic behavior of the likelihood ratio .note that for both approaches to be equivalent one would need a pseudo - prior for ( resp . 
if were _ true _ ) as tight around the maximum likelihood as the posterior , which would be equivalent to some kind of empirical bayes type of procedure .furthermore , in the case of embedded models , and , aitkin s approach can be given a probabilistic interpretation .to this effect , we write the parameter under as , being a fixed known quantity , and under as , so that comparing with corresponds to testing the null hypothesis .aitkin does not impose a positive prior probability on , since his prior only bears on ( in a spirit close to the savage - dickey representation , see ) .his approach is therefore similar to the inversion of a confidence region into a testing procedure ( or vice - versa ) . under the model , denoting by the log - likelihood of the bigger model , & \approx & \mbox{pr}\left [ \mathcal x^2_{p_2-p_1 } > - l(\hat{\theta}_1(\psi_0 ) , \psi_0)+ l(\hat{\theta}_1,\hat{\psi})\right ] \\ & \approx & 1 - f_{p_2-p_1 } [ - l(\hat{\theta}_1(\psi_0 ) , \psi_0)+ l(\hat{\theta}_1,\hat{\psi } ) ] , \end{aligned}\ ] ] which is the approximate -value associated with the likelihood ratio test .therefore , the aim of this approach seems to be , at least for embedded models where the bernstein von mises theorem holds for the posterior distribution , to construct a _bayesian _ procedure reproducing the -value associated with the likelihood ratio test . from a frequentist point of view it is of interest to see that the posterior probability of the likelihood ratio being greater than one is approximately a -value , at least in cases when the bernstein - von mises theorem holds , e.g. for embedded models and proper priors .this -value can then be given a finite - sample meaning ( under the above restrictions ) , however it seems more interesting from a frequentist perspective than from a bayesian one . from a bayesian decision - theoretic viewpoint , this is even more dubious , since the loss function ( [ loss ] ) is difficult to interpret and to justify . without a specific alternative , the best we can do is to make posterior probability statements about and transfer these to the posterior distribution of the likelihood ratio ( .. ) there can not be strong evidence in favor of a point null hypothesis against a general alternative hypothesis . " _statistical inference _ , pages 4244 we further note that , once _ statistical inference _ has set the principle of using the posterior distribution of the likelihood ratio ( or rather of the divergence difference since this is at least symmetric in both hypotheses ) , there is a whole range of outputs available including confidence intervals on the difference , for checking whether or not they contain zero . from our ( bayesian ) perspective , this solution ( a ) is not bayesian for reasons exposed above, ( b ) is not parameterization invariant , and ( c ) relies once again on an arbitrary confidence level .we have focused in this review on aitkin s proposals rather than on his characterizations of other statistical methods . in a few places , however , we believe that there have been some unfortunate confusions from his part . on page 22, aitkin describes bayesian posterior distributions as `` formally a measure of personal uncertainty about the model parameter , '' a statement that we believe holds generally only under a definition of `` personal '' that is so broad as to be meaningless . as we have discussed elsewhere( gelman , 2008 ) , bayesian probabilities can be viewed as `` subjective '' or `` personal '' but this is not necessary . 
or , to put it another way ,if you want to label my posterior distribution as `` personal '' because it is based on my personal choice of prior distribution , you should also label inferences from the proportional hazards model as `` personal '' because it is based on the user s choice of the parameterization of cox ( 1972 ) ; you should also label any linear regression ( classical or otherwise ) as `` personal '' as based on the individual s choice of predictors and assumptions of additivity , linearity , variance function , and error distribution ; and so on for all but the very simplest models in existence .in a nearly century - long tradition in statistics , any probability model is sharply divided into `` likelihood '' ( which is considered to be objective and , in textbook presentations , is often simply given as part of the mathematical specification of the problem ) and `` prior '' ( a dangerously subjective entity to which the statistical researcher is encouraged to pour all of his or her pent - up skepticism ) .this may be a tradition but it has no logical basis . if writers such as aitkin wish to consider their likelihoods as objective and consider their priors as subjective , that is their privilege .but we would prefer them to restrain themselves when characterizing the models as others .it would be polite to either tentatively accept the objectivity of others models or , contrariwise , to gallantly affirm the subjectivity of one s own choices .aitkin also mischaracterizes hierarchical models , writing `` it is important not to interpret the prior as in some sense a_ model for nature _ [ italics in the original ] that nature has used a random process to draw a parameter value from a higher distribution of parameter values '' on the contrary , that is exactly how we interpret the prior distribution in the ideal case .admittedly , we do not generally approach this ideal ( except in settings such as genetics where the population distribution of parameters has a clear sampling distribution ) , just as in practice the error terms in our regression models do not capture the true distribution of errors . despite these imperfections, we believe that it can often be helpful to interpret the prior as a model for the parameter - generation process and to improve this model where appropriate ._ statistical inference _ points out several important facts that are individually known well ( but perhaps not well enough ! ) , but by putting them all in one place it foregrounds the difficulty or impossibility of putting all the different approaches to model checking in one place . we all know that the -value is in no way the posterior probability of a null hypothesis being true ; in addition , bayes factors as generally practiced correspond to no actual probability model . also , it is well - known that the so - called harmonic mean approach to calculating bayes factors is inherently unstable , to the extent that in the situations where it does work , " it works by implicitly integrating over a space different from that of its nominal model . yes , we all know these things , but as is often the case with scientific anomalies , they are associated with such a high level of discomfort that many researchers tend to forget the problems or try to finesse them .it is refreshing to see the anomalies laid out so clearly . 
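the instability alluded to above is easy to exhibit in a toy example of our own (not one from the book): for a normal model with known variance and a fairly diffuse conjugate prior, the marginal likelihood is available in closed form, and the harmonic-mean estimator built from posterior draws can be compared with a simple average of the likelihood over prior draws.

```python
import numpy as np
from scipy import stats
from scipy.special import logsumexp

rng = np.random.default_rng(3)

# toy conjugate model: x_i ~ N(theta, 1), theta ~ N(0, tau^2) with tau fairly large
tau = 10.0
x = rng.normal(1.0, 1.0, size=20)
n = len(x)

# exact log marginal likelihood: x is multivariate normal N(0, I + tau^2 * 11')
cov = np.eye(n) + tau**2 * np.ones((n, n))
log_m = stats.multivariate_normal.logpdf(x, mean=np.zeros(n), cov=cov)

post_var = 1.0 / (n + 1.0 / tau**2)
post_mean = post_var * x.sum()

def loglik(theta):
    # log-likelihood of the sample evaluated at a vector of theta values
    return stats.norm.logpdf(x[:, None], theta, 1.0).sum(axis=0)

S = 50_000
print("exact log m(x):", round(log_m, 2))
for rep in range(5):
    theta_post = rng.normal(post_mean, np.sqrt(post_var), S)
    theta_prior = rng.normal(0.0, tau, S)
    hm = -(logsumexp(-loglik(theta_post)) - np.log(S))   # harmonic-mean estimate
    pm = logsumexp(loglik(theta_prior)) - np.log(S)      # prior-draw average
    print(f"rep {rep}: harmonic mean {hm:7.2f}   prior average {pm:7.2f}")
# with these (assumed) settings the harmonic-mean column typically overshoots the exact
# value and drifts with the particular tail draws obtained, while the prior-draw average
# stays close to it, which is the instability referred to above.
```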
at some points , however , aitkin disappoints .for example , at the end of section 7.2 , he writes : `` in the remaining sections of this chapter , we first consider the posterior predictive -value and point out difficulties with the posterior predictive distribution which closely parallel those of bayes factors . ''he follows up with a section entitled `` the posterior predictive distribution , '' which concludes with an example that he writes `` should be a matter of _ serious _ concern [ emphasis in original ] to those using posterior predictive distributions for predictive probability statements . ''what is this example of serious concern ?it is an imaginary problem in which he observes 1 success in 10 independent trials and then is asked to compute the probability of getting at most 2 successes in 20 more trials from the same process ._ statistical inference_ assumes a uniform prior distribution on the success probability and yields a predictive probability or 0.447 , which , to him , `` looks a vastly optimistic and unsound statement . '' here , we think aitkin should take bayes a bit more seriously . if you think this predictive probability is unsound , there should be some aspect of the prior distribution or the likelihood that is unsound as well .this is what good ( ) called `` the device of imaginary results . ''we suggest that , rather than abandoning highly effective methods based on predictive distributions , aitkin should look more carefully at his predictive distributions and either alter his model to fit his intuitions , alter his intuitions to fit his model , or do a bit of both .this is the value of inferential coherence as an ideal .several of the examples in _ statistical inference _ represent solutions to problems that seem to us to be artificial or conventional tasks with no clear analogy to applied work . they are artificial and are expressed in terms of a survey of 100 individuals expressing support ( yes / no ) for the president , before and after a presidential address ( ... ) the question of interest is whether there has been a change in support between the surveys ( ... ) .we want to assess the evidence for the hypothesis of equality against the alternative hypothesis of a change . " _ statistical inference _, page 147 based on our experience in public opinion research , this is not a real question .support for any political position is always changing .the real question is how much the support has changed , or perhaps how this change is distributed across the population .a defender of aitkin ( and of classical hypothesis testing ) might respond at this point that , yes , everybody knows that changes are never exactly zero and that we should take a more `` grown - up '' view of the null hypothesis , not that the change is zero but that it is nearly zero .unfortunately , the metaphorical interpretation of hypothesis tests has problems similar to the theological doctrines of the unitarian church .once you have abandoned literal belief in the bible , the question soon arises : why follow it at all ?similarly , once one recognizes the inappropriateness of the point null hypothesis , it makes more sense not to try to rehabilitate it or treat it as treasured metaphor but rather to attack our statistical problems directly , in this case by performing inference on the change in opinion in the population . to be clear: we are not denying the value of hypothesis testing . 
in this example , we find it completely reasonable to ask whether observed changes are statistically significant , i.e. whether the data are consistent with a null hypothesis of zero change . what we do not find reasonable is the statement that `` the question of interest is whether there has been a change in support . '' [ figure [ fig : president ] : ( a ) hypothetical graph of presidential approval with discrete jumps ; ( b ) presidential approval series ( for george w. bush ) showing movement at many different time scales . if the approval series looked like the graph on the left , then aitkin s `` question of interest '' of whether there has been a change in support between the surveys would be completely reasonable . in the context of actual public opinion data , the question does not make sense ; instead , we prefer to think of presidential approval as a continuously - varying process . ] all this is application - specific . suppose public opinion was observed to really be flat , punctuated by occasional changes , as in the left graph in figure [ fig : president ] . in that case , aitkin s question of `` whether there has been a change '' would be well - defined and appropriate , in that we could interpret the null hypothesis of no change as some minimal level of baseline variation . real public opinion , however , does not look like baseline noise plus jumps , but rather shows continuous movement on many time scales at once , as can be seen from the right graph in figure [ fig : president ] , which shows actual presidential approval data . in this example , we do not see aitkin s question as at all reasonable . any attempt to work with a null hypothesis of opinion stability will be inherently arbitrary . it would make much more sense to model opinion as a continuously - varying process . the statistical problem here is not merely that the null hypothesis of zero change is nonsensical ; it is that the null is in no sense a reasonable approximation to any interesting model . the sociological problem is that many bayesians have felt the need to mimic the classical null - hypothesis testing framework , even where it makes no sense . aitkin is unfortunately no exception , taking a straightforward statistical question ( estimating a time trend in opinion ) and re - expressing it as an abstracted hypothesis testing problem that pulls the analyst away from any interesting political questions . `` the posterior has a non - integrable spike at zero . this is equivalent to assigning zero prior probability to these unobserved values .
'' ( _ statistical inference _ , page 98 ) a skeptical ( or even not so skeptical ) reader might at this point ask : why did we bother to write a detailed review of a somewhat obscure statistical method that we do not even like ? our motivation surely was not to protect the world from a dangerous idea ; if anything , we suspect our review will interest some readers who otherwise would not have heard about the approach . in 1970 , a book such as _ statistical inference _ could have had a large influence in statistics . as aitkin notes in his preface , there was a resurgence of interest in the foundations of statistics around that time , with lindley , dempster , barnard , and others writing about the intersections between classical and bayesian inference ( going beyond the long - understood results of asymptotic equivalence ) and researchers such as akaike and mallows beginning to integrate model - based and predictive approaches to inference . a glance at the influential text of cox and hinkley ( 1974 ) reveals that theoretical statistics at that time was focused on inference from independent data from specified sampling distributions ( possibly after discarding information , as in rank - based tests ) , and `` likelihood '' was central to all these discussions . forty years on , a book on likelihood inference is more of a niche item . partly this is simply part of the growth of the field : with the proliferation of books , journals , and online publications , it is much more difficult for any single book to gain prominence . more than that , though , we think statistical theory has moved away from iid analysis , toward more complex , structured problems . that said , the foundational problems that _ statistical inference _ discusses are indeed important and they have not yet been resolved . as models get larger , the problem of `` nuisance parameters '' is revealed to be not a mere nuisance but rather a central fact in all methods of statistical inference . as noted above , aitkin makes valuable points ( known , but not well enough known ) about the difficulties of bayes factors , pure likelihood , and other superficially attractive approaches to model comparison . we believe it is a natural continuation of this work to point out the problems of the integrated likelihood approach as well . for now , we recommend model expansion , bayes factors where reasonable , cross - validation , and predictive model checking based on graphics rather than $p$-values . we recognize that each of these approaches has loose ends . but , as practical idealists , we consider inferential challenges to be opportunities for model improvement within the bayesian realm rather than motivations for a new theory of noninformative priors that takes us into uncharted territory .
for many decades , statisticians have made attempts to prepare the bayesian omelette without breaking the bayesian eggs ; that is , to obtain probabilistic likelihood - based inferences without relying on informative prior distributions . a recent example is murray aitkin s book , _ statistical inference _ , which presents an approach to statistical hypothesis testing based on comparisons of posterior distributions of likelihoods under competing models . aitkin develops and illustrates his method using some simple examples of inference from iid data and two - way tests of independence . we analyze in this note some consequences of the inferential paradigm adopted therein , discussing why the approach is incompatible with a bayesian perspective and why we do not find it relevant for applied work . * keywords : * foundations , likelihood , bayesian , bayes factor , model choice , testing of hypotheses , improper priors , coherence .
the idea of tipping points has captured the public s attention in topics as diverse as segregation , marketing , rioting , and global warming . robustness considerations have extended beyond engineering and ecology to political regimes , computer algorithms , and decision procedures . analyzing and exploiting path dependence plays a significant role in technology spread , institutional design , legal theory and the evolution of culture . however , these concepts have not been generally and formally defined and , as a result , the terms ' uses across these various applications are hardly consistent . at times a tipping point refers to a threshold beyond which the system s outcome is known . other times ` tipping point ' is used to describe an event that suffices to achieve a particular outcome , or an aspect of such an event , or the time of such an event . another use of tipping points is to label the conditions to which the system is most sensitive . the idea is frequently tied up with processes such as positive feedback , externalities , sustainable operation , perturbation , etc . robustness and path dependence also share in this preponderance of senses , and this paper aims to elucidate the distinctions among these and other uses of the terms . to accomplish this conceptual analysis , this paper puts forth formal definitions for each concept ( each with an implied algorithm ) to measure properties of system dynamics . the analysis utilizes markov model representations of systems , and so definitions of the foundational concepts of system dynamics ( equilibrium , basin of attraction , support , etc . ) are first provided . then various tipping point - related concepts are described , defined , and illustrated with a simplified graphical example . this treatment is then repeated for robustness - related concepts and then again for a variety of path sensitivities in system dynamics . an additional section identifies projects for future work in considerable detail .
through presentations and conversationsthe techniques presented here have garnered considerable interested from within academics and from government and industry .the next step is clearly to apply these measures to existing data and models to refine the measures and contribute to science .a planned extension provides a methodology ( and software ) to ( automatically ) generate the state transition representation from observational and model - generated data .this software tool is necessary for many of the intended and most useful applications of these measures .there is an additional and unexpected potential conceptual benefit to the robustness formalization presented here for the philosophical study of dispositional properties .dispositions such as fragile , soluble , and malleable have long resisted necessary and sufficient conditions to distinguish them from categorical properties ( like red , liquid , and square ) .the formal definition of one dispositional property ( robust ) may shed some conceptual light on how to proceed .other research thrusts that extend the conceptual and methodological benefits of what follows are also presented .this paper applies methods from ergodic theory , network theory and graph theory to whole systems encoded as markov models to find and measure tipping points , robustness , and path sensitivity in system dynamics .the fusion of these techniques to this purpose is novel , but certainly there is nothing unprecedented about analyzing systems to find these properties or modeling dynamics with markov models .but since this paper represents the first marriage of these two realms , the story thus far must be told as two separate threads .much previous work in finding and measuring properties of system dynamics has focused on explanation - the answering of the ` why ' question .not surprisingly since these papers , books , and discussions were couched in scientific contexts where a particular phenomenon ( or class of phenomena ) required explanation .each such jaunt into explaining tipping points , robustness , or path dependence was accompanied by a custom - suited methodology capable of generating and detecting that property in the model provided ( to answer the ` how ' question ) .these models achieved varying levels of generality , but each was limited by the desire to explain the property in a particular model or context .this is limiting because in order to explain how a process generates ( say ) path dependent behavior one has to model that process explicitly .the current work is one of pure methodology rather than a purported model of any particular system or causal apparatus .it is meant to be completely abstract and general and therefore capable of measuring these system properties in any system . 
because it does not model any generating process it can not address the ` why ' or ` how ' questions .it is not meant to .this paper answers the ` whether ' and ` how much ' questions .these questions are also asked in previous work , but results could not be compared between models because the methodology was model - specific .a general methodology provides a framework through which all modelers ( and some data analysts ) can determine whether and how much of each of these properties of system dynamics obtains and compare results across models regardless of the generating mechanisms .the ability to compare measures across systems is achieved through a focus on scale - free measures - measures that do not depend on the size of the system being analyzed .this framework allows scientists to focus on making appropriate models of their subject - matter by eliminating the burden of figuring out how to measure these properties for their model .here i review some previous work that includes the similar measured properties to highlight where this methodology might prove useful .the term ` tipping point ' was first coined by morton grodzins in 1957 to describe the threshold level of non - white occupants that a white neighborhood could have before `` white flight '' occurred .the term continued to be used in this context through the work of eleanor wolf and thomas schelling who also extended the concept to other similar social phenomena . though these researchers had a specific usage with narrow focus , the idea of a critical parameter value past which aggregate behavior is recognizably different spread across disciplines where its meaning and application varied considerably .malcolm gladwell s pop sociology book _ the tipping point _ has played a significant part in bringing the term to the public s awareness .the notion of tipping point most frequently used by gladwell is an event that makes something unusual ( such as hush puppy shoes ) become popular .more precisely this is a critical value for producing a phase transition for percolation in certain heterogeneous social network structures .this form of tipping point behavior also appears in the work of mark granovetter and peyton young for the propagation of rioting behavior and technology respectively .this version of tipping will play only a minor role in what follows , however the fact that the expression has made it into the everyman s conceptual vocabulary boosts the importance of establishing rigorous scientific definitions to disambiguate loose usage .a recent trend in reports of climate change is to refer to a hypothesized tipping point in global warming and ice cap and glacial melting .james e. hansen has claimed that `` earth is approaching a tipping point that can be tilted , and only slightly at best , in its favor if global warming can be limited to less than one degree celsius . '' this usage reflects hansen s belief that `` humans now control the global climate , for better or worse . ''gabrielle walker states , `` a tipping point usually means the moment at which internal dynamics start to propel a change previously driven by external forces . '' it is unclear whether walker s and hansen s comments are compatible ; the conceptual ambiguity of the terms may be making them talk past each other .but even if their usage is meaningful within their fields , they fail as general characteristics . 
identifying tipping points ( as a property of system dynamics ) should not depend on whether humans are in control of system behavior or what is driving the dynamics ( even if explaining why those are the dynamics does ) .but not all heretofore definitions of the term ` tipping point ' have been loose or subject - matter specific .it is often deployed as a semi - technical term in equation - based models of various sorts .for example , it can refer to an unstable manifold in a differential equation model , the set of boundary parameters for comparative statistics , or inflection points in the behavior of functional models .each of these uses of ` tipping points ' conforms to our intuitive sense of the term s meaning and at some slightly higher level of abstraction these tipping behaviors are the same - and match the definitions provided in this paper . but not all models can be faithfully represented as systems of equations and this limits the usefulness of equation - dependent definitions .one set of tests we can perform on the compatibility of current analysis is to generate markov models based on the existing differential equation and comparative static models and then determine whether the definitions provided here identify the same states as tipping points .such a project is left for future work .robustness considerations are already a common analysis path for researchers in many fields : ecology , engineering , evolutionary biology , logistics , computer science , decision theory , and even statistics .models in these fields are often developed specifically to enhance system robustness , avoid system failures , mitigate vulnerabilities , and otherwise cope with variations in an unpredictable environment .these previous analyses provide some understanding of what features make certain systems persist and others fail , but there is little in the area of general theory .one hope is that the present construction of general measures of robustness - related concepts will inform and facilitate the construction of general theories of what systemic features produce these properties of system dynamics .if it can reveal that robust configurations and dynamics in these varied fields can be captured by a single measure , then we will have taken the first step towards a unified theory of robustness .understanding how social systems can be both simultaneously flexible and strong has garnered increasing interest recently . in an upcoming bookjenna bednar investigates how institutional design can affect the robustness of a federalist governing body .`` by explicitly acknowledging the context dependence of institutional performance , we can understand how safeguards intersect for a robust system : strong , flexible , and able to recover from internal errors . ''bednar has identified the properties that make institutional system robust ( compliance , resilience , and adaptation ) in a way that is somewhat specific to the subject matter . 
that is beneficial and to be expected for explaining and improving the robustness of political institutions .such an analysis stands to gain from the conceptual refinements derived from formal measures of multiple robustness - related features of system dynamics .bednar s work especially underlies the thought that understanding many systems of interest requires more than traditional equilibria analysis ; the dynamic nature of dissipative structures ( see example [ dissipativestructure ] ) requires new notions of stability , resilience , and robustness that i hope to help inform through the provided measures .thomas sargent has made extensive use of principles from robust control theory in his analysis of monetary policy and pricing ( and other topics ) .the sense of ` robustness ' used in robust control theory is a gap between modeled levels and actual levels of parameters .it is used to formalize misinformation , uncertainty , and lack of confidence in agents knowledge and , more generally , to facilitate high levels of performance despite errors and in known less - than - ideal conditions .this sense of robustness applies across the decision theoretic sciences and planning literature ( e.g. the work of rob lempert and company ) . however , not all robustness analyses are to cope with uncertainty . in geneticsthe term robustness refers to a species consistency of phenotype through changes in the genotype .robustness can be considered at two levels : 1 ) through how much mutation is a member of a species viable and 2 ) how much genetic variation is required to transform a species physical characteristics .the first level takes genetic profiles of organisms and determines which can survive to reproductive age and which can not ( or are sterile ) .the number of genotypic variations that remain viable is a measure of the species robustness according to that usage . on the evolutionary time scalewe wish to understand how incremental genetic drift is responsible for large phenotypic variations over time .walter fontana has demonstrated that a network of neutral mutations ( ones that do not affect fitness ) can sufficiently explain the observed punctuated equilibria ( see example [ punctuatedequilibria ] ) in species evolution . though fitness may remain neutral through some genetic variation , the connection between fitness change and phenotype change is strong . a model that tracks fitness through genetic variationscould then approximately measure how robust each stage in the evolutionary progression is .it is clear that these two concepts of robustness are distinct ; and they are both distinct from control and decision theories usage as well .we can add robustness measures from statistics and computer science to the variety of senses that ` robustness ' can take . in computer science an algorithm , procedure , measure , orprocess is robust if small changes ( errors , abnormalities , variations , or adjustments ) have a proportionally small affect on the algorithm , procedure , measure , or process .the time complexity of two algorithms may change in different ways .algorithm a may require one step per input ( ) and algorithm b may require one step per two to the power of the input size ( ) ; in this case algorithm a is more robust to changes in input size .statistical robustness is either when an estimator performs sufficiently well despite the assumptions required by that estimator being violated or when ( like in computer science ) a measure changes little compared to changes in the input . 
for example , the median is a more robust measure than the mean because to alter the median a data point has to cross the median point , whereas any input value change will change the mean s value . and there are more variations in information theory , data security , engineering , law , ecology , and just about every field has their own version of robustness .they all share certain high - level conceptual commonalities , but differ in their details and criterion for application .the definitions below produce necessary and sufficient conditions for the application of several robustness - related concepts .there are many different ways in which systems can cope with variation are each has its own definition. this level of refinement ( combined with a very inclusive markov modeling technique ) may be able to bring discussions of robustness in the different fields to a single table and foster inter - discipline research .the level of interest in explaining path dependent processes has risen in recent years .this is in part due to an overall appreciation for the importance of such dynamics in complex systems across domains .this is also partly due to a natural need to explain observed path dependent phenomena such as convention lock in ( e.g. qwerty keyboards ) , climate change , and political instability . andanother part is due to an increased prevalence in models wherein path dependence could potentially be formally measured .each technique to measure path dependence requires a definition to characterize path dependence in a manner measurable by the formal machinery presented . because previous work focused on explaining path dependence through demonstrating sufficient mechanisms to generate it , and this paper s goals are to provide a general system - level definition and way to measure it, the current work will only barely touch on previous research .yet insofar as the definitions should be compatible it is worth taking a look at previous , recent formal definitions of path dependence in the literature . according to james mahoney are three basic characteristics of path dependence in the social science literature .the first type of path dependence appreciates sensitivity to events that take place in the early stages of a sequence of events .secondly , there are some historical events early in the sequence that are not explained by prior events . finally , the sequence of events exhibit some kind of `` inertia '' culminating in an equilibrium - type outcome .though later analyses ( including this one ) deny these characteristics in favor of other ones , this work does demonstrate some popular thoughts in the formal literature on path dependent processes .paul pierson presents one definition that contrasts with mahoney s . in piersons work path dependence describes how early historical events act to select among multiple possible equilibria .this equilibria selection process , however , is due to exogenous shocks to the system , a characterization which does not seem necessary for a definition of path dependence .it does ring true that if only one equilibria - like outcome is possible then system ought to be characterized as * path independent * ( or at least that the outcome be characterized as path - independent ) .kollman & jackson have a variant of page s definition of path dependence ( see below ) - one that applies only to specifically parameterized dynamical system models and requires `` very specific and stringent conditions in order for there to be path dependence . 
''path dependence is revealed when a certain time - varying autoregressive parameter is ( or converges to ) one as the system s dynamics progress through specified shocks .one of their results is that `` several steps ought to be taken prior to proposing that some process is path dependent .such a proposal should be based on rigorous analysis of the process at issue .we can not offer a set of computational tools that can be used ` off the shelf . 'there is no substitute for theoretical modeling appropriate to the system under study as a first step. '' indeed , building a data - generating model is a necessary first - step for measuring path dependence in most cases .however , if data satisfies certain properties ( described in the next section ) then the methodology presented in this paper can be used `` off the shelf '' to measure the degree of many forms of path dependence ( see _ automatically generating the markov model _ in the future work section for more details ) . if one is primarily interested in finding out whether and how much path dependence a process produces , however , the model need not be specified with the details required by their methodologysuch model - level tinkering is only necessary to provide an explanation of whence the path dependence .but if we accept that there are very many mechanisms that can generate path - dependence and that this feature of the mechanisms will be revealed through the data they produce , then we can ignore the mechanisms specifics when measuring their degrees of path dependence .this is the approach taken here .the work of scott page in identifying the types and causes of path dependence is closest in flavor to the work presented here .though again an example of mechanism identification , the motivation identified in his introduction applies equally to this paper and is worth including here . 
`` attempts to extend what is meant by path dependence reflect a need for a finer unpacking of historical causality . we need to differentiate between types of path dependence . the way to do that is with a formal framework . an obvious advantage of having such a framework is that we can conduct empirical analyses and discern whether the evidence supported or refuted a claim of the extent and scope of the sway of the past . that said , empirical testing of a framework of causality is far from the only reason for constructing a framework for modeling historical forces . formal models discipline thicker , descriptive accounts . by boiling down causes and effects to their spare fundamentals , they enable us to understand the how s and why s ; they tell us where to look and where not to look for evidence . they also help us to identify conditions that are necessary and/or sufficient for past choices and outcomes to influence the present . ''
the differences that page recognizes in the underlying forces driving law making , pest control , or technology choice are points well taken . insofar as the different forces generate differentiable system dynamics , the techniques of this paper will be able to identify and precisely measure how much and in what ways past states influence the future . the properties defined through the formal framework presented below are intended to guide scientists toward characteristics of the original system that merit closer examination . markov modeling has a long history in mathematics , engineering , and in applications to fields as diverse as condensed matter physics , genomics , sociology , and marketing . techniques to use markovian processes to uncover information about system dynamics fall within the field of ergodic theory . several properties and techniques from standard markov modeling are employed below .
in their abstract formone can compute features such as the equilibrium distribution , expected number of steps between two states ( with standard deviation ) , reversibility , and periodicity .these features gain added meaning when interpreted for the system being modeled , but this paper utilizes them as part of defining ( and creating algorithms to uncover ) interesting system dynamic properties .computer scientists have long been analyzing networks in the form of actual communication networks as well as various abstractions from these problems .they have invented several useful measures and exceptionally well - crafted algorithms to calculate connectedness , load - bearing properties , path switching , transmission speed , and packet splitting and fusion to name a few .my analysis borrows heavily from this work in terms of algorithms , though each has been repurposed to the abstract markov model system representation .computer science is also the home of finite state machines : mathematical objects that share their states & transitions structure with markov models ( though state machines are frequently not probabilistic ) .the states of a finite state machine represent the internal states of some agent and the transitions represent the behavior rules by which agents change their states .few of the techniques invented to analyze finite state machines will apply to this research because few are adapted to purely probabilistic transitions .hardware engineers and their physicist partners have worked out several interesting measures for circuit design problems .multiple paths , variable resistance , flow injection , capacitance and many other characteristics of electronic circuits have analogs in the markov models presented below .though these are only partially explored at this stage , future work will look deeply at borrowing techniques from circuit research . andfinally graph theory offers a few useful measures for our purposes , and moreover provides a wealth of definitions for graph structure and node relationships .structural properties will play a larger role in future work addressing changes of resolution and in establishing equivalence classes of system dynamics .also , many of the features that graph theory identifies have been given alternative definitions that underlie the probabilistic nature of the markov model analysis .few explicit references to previous work in these methodological subjects appear below because the methods used generally fall in the category of common knowledge .when a specific algorithm or specialized technique is used , a reference is provided .the measures defined here are meant to stand on their own as improvements in our conceptual understanding of the included features of system dynamics . by differentiating and formally defining these properties of processes we gain both a common vocabulary with which to discuss our models and a detailed typography of behavior to include and detect in system models .many of the applications i have in mind are to include these measures in constructive models across multiple disciplines where the models are iterated with multiple initial settings and/or have stochastic parameters . 
these include game theoretic models , network models , physical models , and the whole gamut of models which may be considered agent - based . certain static data sets , the sorts collected by surveys , are also analyzable via this methodology . the data must satisfy certain criteria to be thus analyzable - basic properties it must have before even considering the statistical issues involved with particular applications . there must be :

1 . data across time ( because we are measuring properties of dynamics )

2 . repeated system states ( so the markov model is not deterministic )

3 . known ( or known to be fixed ) time between observations

to have repeated system states from collected data we typically will need data from multiple independent trials . in some cases independence will not be true but will be a useful / necessary approximation . for example , voter data from each state or county are not truly independent , but each state or county could constitute a separate trial . in many ways these considerations parallel issues already present in statistical analysis , though the motivation for these requirements is quite different . other features of typical statistical analysis will also show up in this technique s application ( e.g. correlation , covariance , fixed effects , kurtosis ) , but they will not be highlighted except where doing so enhances the discussion . in some cases the data may be recoded or otherwise translated into system states in such a way as to highlight those features of the system we wish to track and uncover the tippiness , robustness and path sensitivities of . in some cases this will be a resolution choice , in others this will be a shift to recording markov states as rate changes , and in others it may be converting fixed - time dynamics into event - driven dynamics . it will take expertise to determine which , if any , conversion is necessary and a great deal of trial and error to develop that expertise . eventually standards will be uncovered as people build proficiency in this methodology . a final consideration has nothing to do with the structure and format of the data . even if the data is amenable to the analyses presented below , the data may not be observations from a system for which we think robustness or path sensitivity apply . for example , even if we could run the robustness analysis on voter data , it is not clear what the result would be telling us . similarly , given any set of real numbers we can calculate the mean value , but there is no useful interpretation of that mean value for some sets of real numbers ( e.g. telephone numbers ) . path sensitivity , however , is something we expect to uncover in poll data , and so the amenability of the voter data to the markov modeling enables this analysis . the modeling and analysis techniques need to be both possible _ and _ appropriate , but only a human can determine appropriateness . one promising application of the measures and methods defined in this paper is in developing self - managing large - scale systems - so - called _ autonomic systems _ . autonomic systems apply top - down measures of their internal mechanisms to adapt to changes in resource needs and availability . some computer systems , such as internet routers , satellites , autopilots , and explorer robots , use autonomic management programs . another major application could be logistics management : the routing of parcels , fuel , food , luggage , etc . to minimize service breakage and costs .
with formal measures of robustness in hand ( especially in combination with understanding of path sensitive trajectories ) these systems could guide themselves to maintain their function and cope with environmental perturbations using levers uncovered through the analyses presented below . because my research extends into each of these research arms , the potential gain from having these measures constitutes a significant personal motivation to develop them . at its most basic a markov model is a collection of states and a set of transition probabilities between pairs of states . there are several ways to represent a markov model , but the standard techniques are to 1 ) model the states as vertices ( nodes ) and the transition probabilities as weighted edges of a graph or 2 ) use such a graph s corresponding adjacency matrix or edge list . different applications of markov modeling take different system features as the states , but the nodes in the markov models used here represent a complete description of a state of the system ( see below ) . the transition probabilities represent either observed system dynamics or theoretically posited state changes . given that states and transitions are defined this way , it is clear that the set of states and their transition probabilities are constant for the markov models utilized in this paper . before beginning the breakdown of the aforementioned phenomena into their various categories , a general typology of state spaces will be helpful . a system state is a complete set of instantiations of the aspects of the system ( values for variables , existence for agents , etc . ) . throughout we will analyze systems with a finite ( but possibly arbitrarily large ) number of states , each with a finite number of aspects . insofar as some parameters may take on unbounded values ( e.g. a continuum of real values ) , bounding the number of the parameters ( i.e. the dimensionality of the parameter space ) does not ensure a bounded or discrete state space . the analysis that follows is limited to a finite , discrete state space achieved by binning continuous parameters . [ aspects ] a state in the markov model is a complete specification of the aspects of one configuration of the system : $v = ( a_1 , a_2 , \ldots , a_k )$ , where $a_i$ is the value of aspect $i$ in state $v$ . [ samestate ] two states are represented as one state of the markov model if and only if all the aspects of the two states are identically valued . an obvious ( but still useful ) corollary results directly from the truth values of biconditional equivalence . [ difference ] a difference in any aspect marks a different state of the system . if our system is an iterated strategic - form game played by six players each with four possible actions , then each state of the system has six aspects and each aspect takes on one of four values . that is , $k = 6$ and each $a_i \in \{ 1 , 2 , 3 , 4 \}$ ; a particular state might be , say , $( 1 , 3 , 2 , 2 , 4 , 1 )$ . there are $4^6 = 4096$ combinations of four actions for six players , but the markov model may not include all of them . recall that the model is expected to be built from either collected data or a theoretical model , so some combinations of aspect values may be unobserved or theoretically impossible or irrelevant .
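as a sketch of how such a model could be assembled in practice , the snippet below ( python ; the six - player action profiles and the single trajectory are hypothetical , not taken from any model in this paper ) treats each state as a tuple of aspect values , counts observed transitions , and normalizes each state s outgoing counts into transition probabilities .

```python
from collections import defaultdict

# hypothetical observed run of a six-player game; each tuple is a full aspect profile
trajectory = [
    (1, 1, 3, 4, 1, 1),
    (2, 1, 3, 4, 1, 1),
    (2, 1, 3, 4, 1, 1),   # identical tuples are the same markov state (definition [samestate])
    (1, 1, 3, 4, 1, 1),
    (2, 1, 3, 4, 1, 1),
]

def build_markov_model(trajectories):
    """count observed state-to-state transitions and normalize rows into probabilities."""
    counts = defaultdict(lambda: defaultdict(int))
    for run in trajectories:
        for src, dst in zip(run, run[1:]):
            counts[src][dst] += 1
    return {src: {dst: k / sum(outs.values()) for dst, k in outs.items()}
            for src, outs in counts.items()}

model = build_markov_model([trajectory])
for src, outs in model.items():
    print(src, "->", outs)
```

only states that are actually observed enter the model , which matches the remark above that unobserved combinations of aspect values need not appear .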
to analyze schelling s segregation model we have several options for how we capture the states of the system . consider an $8 \times 8$ grid with 50 agents of two types . we could choose to track the $x$ and $y$ coordinates of each agent in the model , which would generate states with 100 aspects ( two for each agent ) . we could instead track whether each agent is happy with its neighborhood , so there would be 50 binary _ \{yes , no } _ aspects to each state . alternatively the aspects could represent what is in each of the 64 grid spaces , with values from _ \{empty , agent type1 , agent type2}_. note that not all combinations are possible because there are fixed numbers of each type of agent and two grid spaces change every transition ( an agent moving ) . these and other specifications could be combined with each other and/or with measures of the configuration ( e.g. how clustered the agents are ) . the choice of what to count as the aspects of the states will determine what the measures defined below can reveal . a set of states is demarked with boldface type . the set of all the states in the markov model has a finite size , which is also the number of nodes in the graph representation . the state of the system at time $t$ changes to the state at time $t + 1$ in discrete , homogeneous time intervals . state transitions are probabilistic and specified by the system s transition diagram or matrix ( see figure [ markovexample ] ) . we write the probability of transitioning from state $v_i$ to state $v_j$ as $p_{ij}$ . it will later be useful to refer to the set of transitions and to the size of that set . [ figure [ markovexample ] : ( a ) the transition matrix and ( b ) the transition diagram of a four - state example system . ] following the standard definition from probability theory : [ probability ] the sum of a state s exit probabilities must equal one . the entry in the transition matrix at row $i$ and column $j$ represents $p_{ij}$ , and so each row must sum to 1 . the probability of a state change equals the probability that each of the aspects of the state changes . this theorem , which relates state transitions back to changes in their constituent aspects , follows directly from definition [ samestate ] of state sameness and applies to self - transitions as well . it is important to note that the state change probability is not the sum of the aspect change probabilities . each single or multiple aspect change produces a distinct , independent state change with its own probability . this property ( and others related to aspect changes ) will be useful in the discussion of levers below . to use a markov diagram to represent system dynamics we will need to define various types of system behaviors in terms of system states , sets of states , and state transitions . as a preliminary to the common features of system behavior i will present definitions of some structural features that will be utilized .
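before turning to those structural features , here is a small sketch of the two standard representations side by side , with the row - sum requirement of definition [ probability ] checked explicitly ; the four - state matrix is a hypothetical stand - in and is not meant to reproduce the entries of figure [ markovexample ] .

```python
import numpy as np

# hypothetical 4-state transition matrix: rows are current states, columns are successors
P = np.array([
    [0.25, 0.25, 0.00, 0.50],
    [0.50, 0.00, 0.00, 0.50],
    [0.00, 0.00, 1.00, 0.00],   # this state always self-transitions (an equilibrium)
    [0.00, 0.90, 0.10, 0.00],
])

# definition [probability]: each state's exit probabilities must sum to one
assert np.allclose(P.sum(axis=1), 1.0)

# the same model as a weighted edge list, the other common representation
edges = [(i, j, float(P[i, j]))
         for i in range(P.shape[0]) for j in range(P.shape[1]) if P[i, j] > 0]
print(edges)
```

the matrix form is convenient for linear - algebraic quantities ( e.g. long - run distributions ) , while the edge - list or graph form is the natural input for the path and reachability computations defined next .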
in graph theory a path ( of length $n$ ) is typically defined as a set of vertices and edges satisfying the schema $v_0 , e_1 , v_1 , e_2 , \ldots , e_n , v_n$ , where the edge $e_i$ links the vertex $v_{i-1}$ to $v_i$ . self - transitions , which represent both a lack of change and a change too small to count as a state change , are an important feature of markov modeling and hence both nodes and edges may be repeated along paths . so ` path ' as it is used here is the broader notion sometimes called a ` walk ' in the graph theory literature . since there are neither multi - edges nor hyper - edges in a markov diagram , the set of vertices ( or the set of edges ) alone is sufficient to uniquely specify a path as long as successively repeated vertices ( or edges ) are interpreted as self - transitions . a path in a markov model could be defined as a set satisfying the same schema used in graph theory , but we will use a slightly different definition to make the probabilistic aspects explicit . [ path ] a _ path _ is an ordered collection of states and transitions such that from each state there exists a positive probability to transition to the successor state within the collection . a path from $v_a$ to $v_b$ is an ordered set of states such that ( i ) the first state in the collection is $v_a$ , ( ii ) each state transitions to its successor in the collection with positive probability , and ( iii ) the last state in the collection is $v_b$ . this definition establishes necessary and sufficient conditions for a collection to be a path , but does not provide a schema for specifying a particular path . to specify intermediate states ( _ markers _ ) for the system to pass through , we can denote a path from $v_a$ to $v_b$ that passes through ( at least ) a marker state $v_m$ . such a path is merely the conjunction of the two subpaths from $v_a$ to $v_m$ and from $v_m$ to $v_b$ . any number of markers can be thus specified . the order of the states specified must be satisfied by the path taken , but it does not preclude other states from being visited between the marked states . to specify a long sequence of path markers this paper uses a corresponding list notation ; to completely specify each state along a path we adopt a similar notation for short sequences and for long ones . [ exactpath ] an exact path satisfies definition [ path ] of a path above , where for each subpath the transition probability in item ( ii ) of definition [ path ] equals 1 . though the term ` transition ' appeared above in the general description of markov models , it was not defined precisely . [ transition ] the _ transition _ from $v_i$ to $v_j$ is the single - step move from $v_i$ to $v_j$ , taken with probability $p_{ij}$ . [ length ] the _ length _ of a path is the number of transitions taken between the first and last states . this ( possibly overly complicated ) formal definition of length simply uses features of the definition of path above , but it is equivalent to the number of edges traversed along the path . a path built from a set of states is at least as long as the number of states in the set . this theorem follows from the fact that states may be revisited along the path . the definition of length suffices for completely specified paths , but the length of merely marked paths can take a range of values depending on the exact sequence of nodes visited . [ cycle ] a _ cycle _ is a path that starts and ends with the same state . a cycle of length one is a self - transition . this observation follows directly from the definition of a transition and theorem [ exactpath ] . [ elementarypath ] an _ elementary path _ is a path that visits each state within the path exactly once . from this definition it is clear that an elementary path is a path with no cycles , including no self - transitions . a restriction to elementary paths is particularly helpful for ascertaining certain features ( e.g.
path existence ) because simple algorithms exist for them and because of the following property . the length of an elementary path equals the number of states in the path . though this fact obviously follows from the definition of path with the exclusion of cycles , providing a formal definition is trivial and obvious and will be omitted . graph and network theorists have developed a great many algorithms for finding paths , calculating their lengths , and measuring properties germane to their application in those fields . some of those will come up later in measuring properties of system dynamics , but the definitions and theorems presented above will suffice to move forward in examining our markov models . this subsection provides definitions and sketches of some algorithms for common structural properties of markov models used to represent system dynamics . many of these features have existing definitions in terms of matrix operations or limiting distributions , but this paper will present alternative definitions in many cases . the definitions will make focused use of the finitude of the markov models , the resolution of the states , and the granularity of the probability measurements . my motivation for the alternative definitions is to facilitate clear intuitions about a system s processes and how to measure them precisely . [ equilibrium ] a system state that always transitions to itself is called an _ equilibrium _ or _ stable state _ ( stability refers to a tendency to self - transition ; hence an equilibrium is equivalent to a * fully * stable state ) . an equilibrium is a state $v_i$ such that $p_{ii} = 1$ . in some cases a set of states plays a role similar to that of an equilibrium . [ orbit ] an _ orbit _ is a set of states such that if the system enters that set it will always revisit every member of the set and the system can never leave that set . [ oscillator ] an _ oscillator _ is an orbit that is also a cycle . equilibria can not be proper subsets of an orbit . assume , for contradiction , that an equilibrium state is a proper subset of an orbit , so the orbit contains at least one other state . by definition [ orbit ] , every member of the orbit must be revisited after the system enters the set . but definition [ equilibrium ] implies that once the system reaches the equilibrium state it transitions only to itself , so no other member of the orbit is ever visited again . this contradicts definition [ orbit ] , so the assumption fails . given the definitions of equilibrium and orbit above it is clear that an equilibrium is an orbit of just one state . [ attractor ] an _ attractor _ is either an equilibrium state or an orbit of the system . the attractor states are those that the system continues to occupy with positive probability as time goes to infinity ; or , in a more computable formulation , given a desired degree of significance for the probability measures , a state is an attractor state if and only if its long - run occupancy probability stays at or above that level of significance ( although it becomes a little more complicated if one wants to isolate the individual equilibria and orbits ; techniques for measuring such properties , whether mathematical or computational , will be presented in an appendix in future versions and are occasionally referred to in this text or in footnotes ) . the choice of a resolution determines whether an orbit appears as an equilibrium or _ vice versa _ . in cases where attractor avoidance is the aim of the model ( see tipping and robustness below ) we can collapse orbits into a single attractor state without loss of information . for this reason i will treat an attractor as if it were a single state except in cases where its being an orbit affects the analysis .
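the following sketch gives one computable reading of definitions [ equilibrium ] , [ orbit ] , and [ attractor ] for the dict - of - dicts representation used in the earlier sketches : a state is treated as part of an attractor exactly when every state it can reach can also reach it back , so that the set of mutually reachable states can never be escaped . the four - state model is again hypothetical .

```python
# hypothetical four-state model as a dict of dicts of transition probabilities
model = {
    "a": {"a": 0.25, "b": 0.25, "d": 0.50},
    "b": {"a": 0.50, "d": 0.50},
    "c": {"c": 1.00},                # equilibrium: always transitions to itself
    "d": {"b": 0.90, "c": 0.10},
}

def successors(model, s):
    return [t for t, p in model.get(s, {}).items() if p > 0]

def reach(model, s):
    """definition [reach]: every state reachable from s by one or more transitions."""
    seen, frontier = set(), [s]
    while frontier:
        cur = frontier.pop()
        for nxt in successors(model, cur):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def is_equilibrium(model, s):
    """definition [equilibrium]: the state self-transitions with probability one."""
    return model.get(s, {}).get(s, 0.0) == 1.0

def attractor_states(model):
    """states whose reach can always reach them back: the system never escapes that set."""
    return {s for s in model
            if s in reach(model, s)
            and all(s in reach(model, t) for t in reach(model, s))}

print([s for s in model if is_equilibrium(model, s)])   # ['c']
print(attractor_states(model))                          # {'c'}
```

on this model the only attractor is the equilibrium state c ; the remaining states cycle among themselves for a while but are eventually captured , which is the kind of behavior the basin and support definitions below make precise .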
in later sections we will encounter the idea of a dissipative structure ( see example [ dissipativestructure ] ) for which equilibria analysis is inappropriate .it is not the case that these systems fail to have attractors , it s just that the goal of such systems is to remain in continual flux and avoid equilibria and other `` point attractors '' ( i.e. attractors that incorporate a small percentage of the total number of states ) .given the definitions established above every markov model must have some set of states satisfying the conditions for being an attractor .[ minattractor ] every system has at least one attractor .assume that there exists a system with no attractors .if is not an orbit then by definition [ orbit ] for some it must be the case that either 1 ) there is some time in the future after which does not get visited or 2 ) some state outside gets visited .let be . for any , case ( 1 ) implies that any orbit must be smaller than and case ( 2 ) implies that there must be at states in the system .for it is not possible for the orbit to be smaller so any orbit must contain at least states . by induction on must be the case that if there are no orbits of size ( is the whole system ) then any orbit must be of size which is impossible .that contradicts the assumption that there exists a system with no attractors , so every system must have at least one attractor .[ basin ] those states from which the system will eventually move into a specific attractor are said to be in that attractor s _ basin of attraction_. the basin of or is a set of states such that .see previous footnote for more details . ] some systems may spend a great deal of time in a basin of attraction before reaching the attractor located within it thus making system behavior in the basin similar to an orbit itself ( also note for the discussion of robust sets below ) . in such casesit is sometimes helpful to utilize the following property to describe and make inferences about system behavior .once in a basin of attraction the system can never leave it . [ support ] the _ support _ of a state ( also known as its _ in - component _ ) is the set of states which have a path to it .the support of or is the set of states such that we can expand this definition to the support of a set of states as the union of the supports of the members of .some facts relating these features of system dynamics are clear from the above definitions .an attractor is a subset of its basin of attraction , and an attractor s basin of attraction is a subset of its support . the equilibrium may be the only member of either its basin or support , but if it is the only member of its support then it is disconnected from the rest of the graph ( e.g. in figure [ equilbasin ] ) .the _ indeterminate states _ of a system , ones that are not members of a basin of attraction , convey a wealth information about the system s dynamics and its future states . recall that the whole system may be a single orbit and there may be no indeterminate states .but if there are multiple attractors , then the indeterminate states are the ones in multiple attractors supports .[ overlap ] the _ overlap _ of a collection of states ( whether attractors or not ) is the set of states in all of their in - components ( i.e. 
the intersection of supports ) .the overlap of , written , is the set of states in basins of attraction can only contain one equilibrium state or orbit and hence the basins for two different attractors can not overlap .supports of different collections of attractors may or may not have sets of overlapping states .the overlap states are of interest because these are the states with positive probabilities for ending up in each of the attractors for which the supports are overlapping .if no attractors supports overlap then every attractor s support is just its basin of attraction and the system has a _deterministic outcome_. models referred to as dynamical systems models typically use sets of differential equations and are typically fully deterministic because the map from to is always a function in the strict sense . because the dynamics are produced from these functions , every initial condition is mapped to a particular equilibrium state .certain parameterized systems allow changes in their state maps that can change the number , location , and/or `` strength '' of equilibria .but for any given value of those parameters , each initial value still has one possible outcome .many systems can be usefully modeled with such fully deterministic systems , but in this paper we are mostly concerned with the non - deterministic parts of systems because that is where the critical points occur ( see next section ) .[ outdegree ] a state s _ out - degree _ is the number of distinct successor states ( states that may be immediate transitioned into ) .the out - degree of state equals will be used to denote a neighboring state and the set of neighboring states .[ indegree ] the number of states that can transition into a state is its _ in - degree_. the in - degree measure will be rarely used in what follows ( except for algorithms utilizing a reversed markov model subgraph ) and will not need its own symbol . because of this asymmetry the term ` degree ' will refer to a state s out - degree unless otherwise noted .[ reach ] the _ reach _ of a state ( also called its _ out - component _ ) is the set of states that the system may enter by following some sequence of transitions ; i.e. _ all possible _ future states given an initial state .the reach of or is the set of such that [ decreasingreach ] every successor state s reach is less than or equal to the intial state s reach . [ reachpath ] is in the reach of if and only if there exists at least one path from to . this theorem , which can be used as an alternate definition of reach , follows from the definitions of path and reach .an obvious corollary of the definitions of basin and reach is that if the reach of a state includes only one attractor then must be in that attractor s basin of attraction an attractor state s reach is just the states within the attractor ; an equilibrium s reach is itself . for all transitions it must be the case that from the definition of reach since satisfies .furthermore every in the reach of is also in the reach of because for whatever makes it true that , makes it true for . 
to achieve the inequality it suffices for there to be at least one state in not in .it is _ prima facie _ obvious that it is possible for such that and .theorem [ decreasingreach ] generalizes to all paths ( which is just a sequence of transitions ) so that reach never increases as the systems transitions along any path .this property relies on the fact that the transition structure is fixed for the markov models used in this paper ; future work will relax this requirement and the theorem does not necessarily hold for models with changing dynamics .a _ strongly connected component _ of a directed network is a set of vertices such that there is a path from every vertex in the set to every vertex in the set ( including itself ) .we will find the same concept useful , but this paper adopts a different name for it .[ core ] a _ core _ of a set is a subset wherein every member of the subset is in the reach of every member of the subset .the core of some set is written and is a subset satisfying the condition some sets will have multiple cores the set of s cores can be called s _mantle_. [ reachcycle]every cycle within is ( at least part of ) a core and every state in a core is in at least one cycle .[ reachcore]every state in a core has the same reach .the states on the boundary of a set are a useful set of states to identify .[ perimeter ] the _ perimeter _ of a set , , is a collection of those states in the set that may transition to states outside the set .that is , such that in keeping with the core and mantle analogy , the perimeter states of the mantle of will be referred to as s _ crust_. perimeter states themselves , without further specification , describe one commonly deployed ( though weak ) concept of tipping points , although we will see in the next section that specifying different base sets produces different types of tips . in the next sectionwe apply the above definitions in different combinations and different contexts to identify various system behaviors .most extant systems analysis focuses on equilibria , but a lot of interesting behavior happens away from equilibrium . in the indeterminate states of a systemwe can not know precisely which states the system will reach or which state it will be in at a given time in the future , we only know probability distributions over the future states . but by understanding a system s behavior we might know whether some particular change facilitates a specific outcome or path through the dynamic - this may be helpful information .discerning these sorts of facts about system dynamics can increases one s information about the system ( in both the technical and colloquial senses ) .considerations such as this one are the building blocks of the formal theory of tipping points , robustness , and related phenomena immediately to follow .using markov models and the states and sets defined above as a springboard , this section defines and briefly describes several terms related to the concept ( or more to the point , concepts ) of tipping points .critical phenomena and tipping points of various kinds share the defining feature that ( for whatever reason ) behavior is different before and after some transition .behavior in this analysis is just the properties of systems dynamics .there are , of course , many ways in which the properties of system dynamics can differ and each way is a different kind of tipping phenomenon . 
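before turning to tipping phenomena, it may help to see how the structural quantities defined above (reach, cores, and the perimeter of a set) can be computed directly from their definitions. the following is a minimal python sketch, not the author's implementation: the model is stored as a dictionary mapping each state to its successors and transition probabilities, and all state names and numbers are illustrative. the sketch counts a state as part of its own reach only when some sequence of transitions leads back to it, which is an assumption about an edge case the text leaves implicit but which keeps singleton cores tied to cycles as in observation [ reachcycle ].

```python
# a minimal sketch: the markov model as a dict from each state to a dict of
# successor states and transition probabilities (all values illustrative).
MODEL = {
    "a": {"b": 0.5, "c": 0.5},
    "b": {"b": 1.0},             # an equilibrium (only a self-transition)
    "c": {"a": 0.3, "c": 0.7},
}

def reach(model, state):
    """out-component: every state reachable from `state` by at least one transition."""
    seen, frontier = set(), [state]
    while frontier:
        for nxt in model[frontier.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def cores(model, subset):
    """cores of `subset`: maximal groups whose members are all in each other's reach."""
    remaining, found = set(subset), []
    while remaining:
        s = remaining.pop()
        core = {t for t in subset if s in reach(model, t) and t in reach(model, s)}
        if core:
            found.append(core)
        remaining -= core
    return found

def perimeter(model, subset):
    """states of `subset` with at least one transition leading out of it."""
    return {s for s in subset if any(t not in subset for t in model[s])}

print(reach(MODEL, "a"))              # {'a', 'b', 'c'} - a sits on the a-c cycle
print(cores(MODEL, {"a", "b", "c"}))  # [{'a', 'c'}, {'b'}] (order may vary)
print(perimeter(MODEL, {"a", "c"}))   # {'a'} - only a can exit, via a -> b
```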
recalling that state changes occur if and only if there is a change in some aspect of the initial state ( see corollary [ difference ] ) , our analysis of tipping phenomena starts with state aspects .[ levers ] the _ levers _ of a state are the aspects of a state such that a change in those aspects is sufficient to change the system s state .the levers of , denoted , is so given a state in the markov model , the levers are those aspects of the state that are different in any neighboring state and each such is a distinct lever . in some casesneighboring states will differ by more than one aspect . in those casesthe respective element of the set will be a list of all the aspects that need to change as one lever .[ leverpoint ] a _ lever point _ is a transition resulting from a change in a particular aspect ( or set of aspects ) .an aspect s lever points is the collection of transitions that a change in that aspect ( or those aspects ) alone generates .the lever points of is the set of transitions created by levers and lever points work complementarily : for levers we pick a state and find the aspects that change and for lever points we pick the aspect and find the state changes it produces .it is occasionally helpful to refer to the aspect change(s ) that generate a specific transition .[ leverset ] symbolizes the _ lever set _ of : the aspect or aspects that differ between and . in some applications we will be interested in how many aspects change for a transition .[ magnitude ] the _ magnitude _ of the lever set of a specific transitions is .though levers as they are defined here do not depend on the ability to control that aspect , the choice of ` lever ' for this concept is motivated by the realization that in some models control of some aspects is available .one may be performing a tipping points analysis precisely because one is choosing levers to bring about one state versus another ( or agents within the model may be choosing ) .[ policychange]imagine a model wherein each aspect is a variable representing some part of a policy ( e.g. amount of money spent on each line item ) .each aspect change has an associated cost ( legal , bureaucratic , time , etc . ) .the modeler may be trying to determine the lowest cost , feasible route from the current policy to some desired policy ; or perhaps to determine how far policy can be changed on a specific budget .the cumulative magnitudes of lever sets along a path may adequately approximate such a cost measure . in general the sum of the magnitudes alonga path is a rough measure of how difficult it is for the system to behave that way .techniques from circuit design applied to the markov model may be gainfully applied to such models . in some contextswe may wish to know how much change an aspect is responsible for across the system s dynamics .[ strength]the _ strength _ of a lever is the sum of the probabilities of all transitions that result from changing that lever .the strength of is equal to this measure is not scale - free since the sum depends on the number of transitions in the markov model , but it is useful for comparing levers within a system .the strength measure could be used , for example , to determine which aspects to control to maximize ( or minimize ) one s ability to manipulate the system .it could also be associated with a cost of letting that aspect vary over time .we will revisit levers below in other forms as they apply to other measures of system dynamics . 
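since levers, lever sets, magnitudes, and lever strength are defined purely in terms of which aspect values differ across a transition, they can be computed mechanically once each state is annotated with its aspect values. the sketch below is a hypothetical illustration (the aspect names `x` and `y`, the states, and the probabilities are invented for the example), not code from the paper.

```python
# sketch of lever bookkeeping: each state carries a dict of aspect values.
ASPECTS = {
    "s0": {"x": 0, "y": 0},
    "s1": {"x": 1, "y": 0},
    "s2": {"x": 0, "y": 1},
}
MODEL = {
    "s0": {"s1": 0.6, "s2": 0.4},
    "s1": {"s1": 1.0},
    "s2": {"s0": 1.0},
}

def lever_set(a, b):
    """aspects whose values differ between states a and b; its size is the magnitude."""
    return frozenset(k for k in ASPECTS[a] if ASPECTS[a][k] != ASPECTS[b][k])

def levers(state):
    """distinct lever sets available at `state`, one per (non-self) neighbor."""
    return {lever_set(state, nxt) for nxt in MODEL[state] if nxt != state}

def strength(lever):
    """sum of probabilities of all transitions produced by changing exactly this lever
    (a rough measure; as noted above it is not scale-free)."""
    return sum(p for a, succ in MODEL.items()
                 for b, p in succ.items()
                 if a != b and lever_set(a, b) == lever)

print(levers("s0"))                   # {frozenset({'x'}), frozenset({'y'})}
print(strength(frozenset({"x"})))     # 0.6 - only s0 -> s1 flips x alone
```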
in some cases we are interested not just in which aspects change through a transition but also in the _ values _ of levers at transitions. [ threshold ] a _ threshold _ or _ threshold point _ is a particular value for a lever such that if the value of the aspect crosses the threshold value it generates a transition. so is a threshold value of if . this definition can be applied _ mutatis mutandis _ for a set of values for a lever set, which can be distinguished by the name _ threshold line _ when appropriate. if there are multiple states with transitions crossing the same threshold value then knowing that information refines our understanding of the lever's role in system dynamics. thus determining the threshold value for one transition is merely a means to the end of determining the strength of the levers with that threshold. [ thresholdstrength ] the _ threshold strength _ of is the strength of the levers for which is the threshold value: . if a particular value for a particular aspect plays a large role in system dynamics then crossing that threshold is another oft-used version of `` tipping point '' behavior. these definitions of threshold and threshold strength only require that the end state's value be different from the start state's value. in common usage, however, thresholds establish different and separate boundary values for ascending and descending values. if a threshold only affects system dynamics in one direction then we can determine that from the markov model using the following definitions. an _ upper bound threshold _ of is a value such that . a _ lower bound threshold _ of is a value such that . the threshold strength measure can be adapted to these ascending and descending definitions in the obvious ways. sets satisfying these definitions can tell us how frequently crossing that threshold in that direction acts as a lever. this general definition admits examples from many different kinds of systems and can even apply to parts of systems (such as agents). in granovetter's model of riot spreading we can talk of each agent having its own threshold - the number of rioting agents necessary to make each agent join the riot. this is just the same threshold definition applied to a lever set where the levers happen to be the same feature of each agent. in granovetter's model the threshold value is the same in both directions. we can also talk of thresholds in the properties of the system dynamics that track how the system transitions through states. instead of being a value for an aspect within the model, it would be a value for one of the measures defined in this paper. the following might be the case for some system: once the energy level of the current state drops below three the system is at most three transitions away from being in an attractor.
though having an energy level is not part of the system that the markov model represents we can associate this property of system dynamics with each state of the system as if it were one of its possible leversthen we can explore the relations of specific values of this property of system dynamics to its other dynamics .relating values of properties of system dynamics back to aspects within the model will also provide useful information in many cases .in some system analyses the property of interest is what is available for the future in the most general terms .if one does not know much a system s dynamics then even knowing how many states could potentially be transitioned to provides an informational benefit .the measures below become increasingly refined and detailed , but we start with some simple measures that may suffice for some applications .[ stretch ] a state s is the number of states in its reach .so the stretch of equals [ criticalbehavior ] a system dynamic ( i.e. a particular state transition ) is considered _ critical behavior _ if and only if it produces a decrease in stretch ; that is , critical behavior is any such that in addition to identifying the transitions that limit the system s future states , we can also measure how critical the transition is .subtracting the end state s stretch from the start state s stretch provides such a measure , but it is not scale - free and so can not be readily compared across different systems .we can normalize the stretch difference with the size of the system to which it is being applied to produce a percentage measure .[ stretchgap ] the _ stretch - gap _ of a transition is the change in the percent of the total number of states that can be reached .this quantity equals because this measure includes the total number of states in the system it clearly is not scale - free either .however despite this limitation it does provide information about the system s future and is an intuitive way to compare transitions within the same system - even at different resolutions .as example [ dropexample ] below demonstrates the stretch - gap reports how much of the system s state space is cut off by each transition and this information could be used , for example , to manipulate system dynamics to prolong system longevity .we also have an alternative , fully scale - free , measure of the drop in reach across a transition . [ criticality ] a * transition s * _ criticality _ is one minus the ratio of the start and end states stretch .the criticality of equals recall from theorem [ decreasingreach ] that from any initial starting point , as the system transitions through its states the sizes of the states reach are monotonically decreasing . as a result of that theoremwe have the following corollary regarding the range of values for the ratio of reaches . a transition s criticality will be between zero and one .let and for transition . from theorem [ decreasingreach ] . produces which yields a criticality of zero . for can decrease or increase to find the other bound , but since and are natural numbers increasing is the better approach . 
using a well-known mathematical fact suffices for finding the other bound. transitions within a cycle (which includes self-transitions) always have zero criticality and zero stretch-gap. this observation follows from observations [ reachcycle ] and [ reachcore ] and the definition of a core. the concept of criticality agrees with this measure insofar as any transition that has no effect on what states may be visited in the future should not be a critical transition. [ dropexample ] the system represented as figure [ criticalityexample ] has thirty-three states in total. each one is color-coded by its stretch. compare the patterns in color to the attractors, basins, and support in figure [ equilbasin ] and the overlap in figure [ overlapdiagram ]. stretch alone, though a rather simplistic measure, works decently to partition the system dynamics into regions of similar behavior. stretch-gap performs well as a discriminator that groups states into these regions in a way similar to spectral analysis for detecting community structure in networks. to wit, stretch drops between zero and four states within any basin or overlap, but drops four to eighteen states crossing a boundary (in this example). this is not completely reliable, of course, but for many systems this simple technique may provide all the information required; and it may be the best one can do with available data. has a stretch of 21, has a stretch of 6, and has a stretch of 2. there are 33 states in this system so the stretch-gaps of and are 45.45% and 12.12% respectively. that means that 45.45% fewer of the system's states can be reached after the transition. we can also use this to determine the stretch-gap of as 57.58% regardless of the particular path taken. only the start and end states' stretches are necessary to calculate this, but the result is always equal to the sum of the stretch-gaps of each transition taken. let's compare these figures to the criticality of the same transitions. the criticality of is 0.714 and the criticality of is 0.667. that means that after the first transition the system retains only 28.6% of the possible future states it had in the start state; the other 71.4% are cut off. a composite measure is also possible for the criticality of . it can be calculated just using the start and end states' stretches, using the standard percentage-of-a-percentage calculation, which gives 0.905 here. these measures above are intended to be just rough measures useful in certain limited contexts and when information about the system is limited. for starters, these measures only consider the structure of the markov models, not the probabilities. also, they apply to transitions rather than states. [ statecriticality ] the _ criticality _ of a state is the probabilistically weighted sum of the criticality of all the transitions from that state. so to find the criticality of we calculate where by convention the sum is over its neighbors; the probability is taken to be zero for states that are not neighbors. this convention will be used throughout - including cases where limiting an operation to neighbors matters. ] because by definition [ probability ] the probabilities sum to one, state criticality will also be a scale-free measure with values between zero and one. all these criticality measures quantify the constriction of future possibilities on a state-by-state basis, which is useful if we want to `` keep our options open ''.
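the reach-based measures just defined (stretch, stretch-gap, transition criticality, and state criticality) can be sketched in a few lines. the toy model below is illustrative only; the convention that a state is in its own reach only via a cycle follows the earlier sketch and is an assumption, since the text leaves that edge case implicit.

```python
# sketch of the reach-based criticality measures on an invented four-state model.
MODEL = {
    "a": {"b": 0.5, "c": 0.5},
    "b": {"c": 0.5, "d": 0.5},
    "c": {"c": 1.0},     # equilibrium
    "d": {"d": 1.0},     # equilibrium
}

def reach(state):
    seen, frontier = set(), [state]
    while frontier:
        for nxt in MODEL[frontier.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def stretch(state):
    return len(reach(state))

def stretch_gap(a, b):
    """drop in reach across a transition as a share of all states (not scale-free)."""
    return (stretch(a) - stretch(b)) / len(MODEL)

def criticality(a, b):
    """one minus the ratio of end- to start-state stretch (scale-free, in [0, 1])."""
    return 1.0 - stretch(b) / stretch(a)

def state_criticality(state):
    """probability-weighted criticality of a state's outgoing transitions."""
    return sum(p * criticality(state, nxt) for nxt, p in MODEL[state].items())

print(stretch("a"), stretch("b"))          # 3 2
print(round(criticality("a", "b"), 3))     # 0.333
print(round(state_criticality("a"), 3))    # 0.5
```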
as we will see laterthat is sometimes exactly what we want to measure , but sometimes we will want to measure system dynamics with reference to some particular features and that is what the following definitions for tipping points allow us to measure . as a refinement of levers from definition [ levers ] we can apply the lever concept to critical states to identify another feature of system dynamics .[ criticallever ] a state has a _critical lever _ if a change in that aspect ( or those aspects ) of the state will reduce the reach .this merely combines the concept of a lever with the concept of critical behavior ( definition [ criticalbehavior ] ) .by looking more deeply at the aspects driving the state changes and calculating the magnitude and strength of different critical levers we can gain a better understanding of how microfeatures generate the macrobehavior of the model . as mentioned in the section introduction, the common feature of the measures in this section is that some states or transitions mark a shift in the properties of a system s dynamics . for the criticality measures above the differencewas the number of reachable states .the following measures generalize to any sets distinguished by a chosen characteristic .given states exhaustively compartmentalized by the property ( or properties ) of interest the following techniques can find where shifts occur and measure their magnitude . for some models we are interested in the achievement of a particular state ( e.g. an equilibrium ) or a particular system behavior ( e.g. a path linking two states ) .we denote the particular state ( or set ) of interest as the _ reference state _ ( or _ reference set _ ) .below we will see examples of specific reference states ( e.g. attractors and functional states ) but first the general case .there are many ways in which behavior may change with respect to a reference state or set ( e.g. probability of reaching it , probability of returning to it , or probability of visiting an intermediate state ) : each property may partition the states into different equivalence classes ( groups with the same value of the property ) .it is the movement between equivalence classes that counts as tipping behavior .a _ tipping point _ is a state which is in the perimeter of an equivalence class for some property .recall from definition [ perimeter ] that perimeter states are those from which the system s dynamics can leave the specified set .because the sets here are determined by the properties of system behavior leaving a set implies a change in that behavioral property - and that is a tip .this definition does not preclude that the system could tip back into a previously visited set : that possibility depends on what property is establishing the equivalence classes .[ climatechange1 ] a climate change model that relates the co content of the atmosphere to global temperature may have states that are grouped together according to a shared property of those states ( e.g. sea level , precipitation , glacial coverage ) . due to feedback mechanisms in the system it is likely the case that these qualitative features change in punctuated equilibria ( see also example [ punctuatedequilibria ] ) thus producing equivalence classes for some states of the system . 
_ex hypothesi _ people can manipulate the level of co to higher or lower values .the values at which the property shifts happen may differ for the increasing and decreasing directions , but the point is that co levels could raise temperatures to the point where glaciers disappear and then later lower past the point where glaciers will form again .for some systems behavior can tip out of a equivalence class and then later tip back in .phase transitions in condensed matter physics are another example of reversible tipping behavior .so while some have posited that tipping points are points of no return for system behavior , that turns out to be true only for certain systems and is not properly part of the definition .dynamics of staying , leaving , returning , and avoiding a specified set of states will be covered in the section below on robustness . herewe continue with ways to quantify changes in what is possible for system dynamics for different states and transitions .these measures apply for any reference state or reference set , but for convenience and intuition pumping the following presentation will adopt the notation of attractor ( ) for a reference state and set and for a collection of reference states . recall from definition [ attractor ] that attractors may be equilibria or orbits and both possibilities were symbolized with and treated as singular .that convention will be continued here . is a collection of independent reference states and sets each of which satisfies a property while may be a set of states that collectively satisfies a particular dynamical property ( such as an orbit as a whole satisfies the dynamical property of an equilibrium ) .[ energylevel ] the _ energy level _ of a state is the number of reference states within its reach . we write this as and it equals energy level quantities partition the system s states into equivalence classes .[ energyplateau ] the equivalence class mapping created by states energy levels is called the system s _ energy plateaus_. each energy plateau is a set .[ energyprecipice ] the change in energy across a transition is called an _ energy precipice _ or _energy drop_. we can measure the magnitude of an energy precipice in the obvious way : [ energydrop ] an energy precipice is never negative : . from definition [ energylevel ]the energy level of any state is the number of attractors in . by theorem [ decreasingreach ] for any .this decreasing reach property implies since the end state of a transition always has a lower or equal energy level , is always greater than or equal to zero .we now can measure the degree to which a state is likely to be the site of a tip ( in a similar fashion to definition [ statecriticality ] of state criticality above ) .[ tippiness ] the _ tippiness _ of a state is the probabilistically weighted proportional drops in energy of its immediate successors : [ tippinessrange ] tippiness ranges from zero to one .the lower bound occurs when all neighbors can reach the same number of reference states : .denotes the degree of a state and is a neighboring state . ] in this case tippiness equals by theorem [ probability ] . by theorem [ energydrop ] .when the upper bound occurs when a state has every attractor and only attractors as neighbors .again by theorem [ probability ] . 
attractors by definition [ attractor ] have an energy level of one, and the energy level of in this case is , so its tippiness is . thus the upper bound of its tippiness goes to 1 as . note that tippiness uses the ratio of energies rather than the difference; this makes tippiness a dimensionless metric and thus comparable across any state or system. sometimes one will be more interested in minimizing the magnitude of energy drops, or avoiding states with the highest expected magnitude of energy drops, which have obvious formulations given the above definitions. as another refinement of levers from definition [ levers ] we can apply the lever concept to tipping points to relate tips to the individual aspect changes that drive them. [ tippinglever ] a state has a _ tipping lever _ with respect to some specified set of states if a change in that aspect (or those aspects) of the state will take the system out of that set of states. this merely combines the concept of a lever with the general concept of tipping behavior. so while every state except equilibria has levers, only perimeter states have critical levers. identifying the tipping levers of certain sets of states is precisely what we'd like a `` tipping point '' analysis to reveal, because it is just the aspects of a tipping point that actually change when the system tips out of a set of states. these ideas are further refined in the analysis of robustness and related concepts below. this section will use the markov model framework provided above to establish formal definitions of several related concepts: robust, sustainable, resilient, recoverable, stable, and static; as well as their counterparts: susceptible, vulnerable, fragile, and collapsible. as before, it is unlikely that any mathematically precise definition will maintain all the nuances of the full concept sharing the same name. furthermore, existing definitions and formal treatments of the same concepts, or that use the same terms, risk distracting from or confusing the thrusts of this research. what is more important than the terms used is that the definition is useful and we can easily refer to it, though some care has been spent on finding the closest word to the provided definition.
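before moving on to those robustness-related measures, here is a small sketch of the energy-level and tippiness quantities defined in this subsection. it reuses the dictionary representation from the earlier sketches; the reference states are taken to be the attractors, and all states and probabilities are invented for the illustration.

```python
# sketch of energy levels and tippiness relative to a set of reference states.
MODEL = {
    "a": {"b": 0.5, "c": 0.5},
    "b": {"c": 0.5, "d": 0.5},
    "c": {"c": 1.0},
    "d": {"d": 1.0},
}
REFERENCES = {"c", "d"}      # the attractors of this toy model, used as references

def reach(state):
    seen, frontier = set(), [state]
    while frontier:
        for nxt in MODEL[frontier.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def energy(state):
    """number of reference states in the state's reach."""
    return len(reach(state) & REFERENCES)

def energy_plateaus():
    """equivalence classes of states sharing an energy level."""
    plateaus = {}
    for s in MODEL:
        plateaus.setdefault(energy(s), set()).add(s)
    return plateaus

def tippiness(state):
    """probability-weighted proportional energy drop to each successor."""
    e = energy(state)
    return sum(p * (e - energy(nxt)) / e for nxt, p in MODEL[state].items())

print(energy_plateaus())                                  # {2: {'a', 'b'}, 1: {'c', 'd'}}
print(round(tippiness("a"), 2), round(tippiness("b"), 2)) # 0.25 0.5
```

as the range theorem above requires, both tippiness values fall between zero and one.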
adding to the ambiguous and often synonymous usage of these terms are the conceptual questions that arise in considering their dispositional nature .dispositional properties are philosophically troubling for many reasons but the philosophical troubles will not interfere with their definition and ascription here ( mostly these stem from the question of whether their subjunctive conditional status distinguishes them from categorical properties ) .i mention this here because i want to hint that in addition to the obvious and direct application of this methodology to improve the performance capabilities of systems , it may also produce insights into the nature of dispositional properties in general .this analysis reveals how these particular dispositional properties are behavioral in nature and emerge from the microbehavior of the system components .the philosophical issues will be addressed in separate work ( see future work subsection below ) , but it may be interesting to the reader to consider how to apply the following methodology to investigate other dispositional properties such as soluble , malleable , affordable , and differentiate them from other types of properties with the much sought after necessary and sufficient conditions .the measures in this first set are conceptually simple with intuitive mathematical definitions and straightforward algorithms .they nevertheless identify important features of system dynamics and act as building blocks for more sophisticated measures .[ statestability ] a * state s * _ stability _ is how likely that state is to self - transition . stability is while this may seems a trivial property , it is consistent with a useful distinction from system dynamics : the difference between stable and unstable equilibria . due to the resolution of the markov model s states, an attractor state will include a neighborhood of aspect values around the equilibrium point values .thus exit behavior from the attractor node includes the response to small perturbations to ( or variations around ) the equilibrium point values .stable states will tend to stay within this neighborhood and this is reflected in a high self transition probability value . since values that are nearby an unstable equilibrium but not exactly on the equilibrium point values will tend to move away from the equilibrium values , we would see this reflected in low self - transition probabilities .these results exactly match attributions of stability and instability in the markov model via definition [ statestability ] we can extend stability to apply to sets of states in the obvious way .[ setstability ] the _ stability _ of a * set * is the probability that the system will not transition out of the set given that the system starts within the set .we calculate this as the average of the individual states exit probabilities , so set stability is this is a crude measure because is does nt properly reflect the probability of staying within the set over time , it only looks one time period ahead . a more sophisticated notion of staying within a set of statesis presented by definition [ sustainability ] below of sustainability , but in some cases ( discussed in that subsection ) the two measures generate the same value . the word ` static ' is often used to indicate a lack of dynamics in a system , and that is the sense attached to the following formal definition . 
recalling that self - transitions can be interpreted as a lack of transition it aggregates the lack of transitions among states .[ static ] the degree to which a set is _static _ is the average of the states stability values : this definition , though simple , captures how likely a system is to be in the same state for consecutive time steps in a way that is comparable across sets and systems with different numbers of states .what this definition fails to capture is that sets with equilibria will spend an infinite amount of time in them whereas sets lacking equilibria will continue to transition for eternity ( even if that s within an attractor ) ; and yet because this measure uses average stability it is easy to construct cases where an equilibrating systems has a lower static level on the given definition .static and stable set measurements are similar in their calculation but distinct in their sense .set stability is a measurement of lack of change , but it is a lack of change out of a set ( though it ignores dynamics that stay with in the set ) .it is therefore only applicable if the set chosen is smaller than the whole system .staticness can apply to the whole system and is useful for comparing systems overall level of dynamism . if ( i.e. a set with one state )then the static measurement equals the stability measurement .this theorem clearly follows from the fact that the sum of transitions staying within a set equals the sum of self - transitions for a set of one state .also in this case set stability naturally equals state stability because the set is a state . for any set , the set stability measure is always greater than or equal to the level of staticness . for any given , the set stability measure is the probability of transferring to another state in , is greater than or equal to zero .if that equals zero then s contribution to set stability becomes which is equal to s contribution to the static measure . if for any then s set stability is greater than its degree of being static .on the other end of the spectrum as stability and staticness are measures of how likely the system is to change states . because the above measures are defined in terms of probabilities , most simple measures of the presence of dynamics can be calculated as one minus the appropriate measure above .there is one additional simple measure to present here ; it is a rough measure of how predictable state changes are .the _ turbulence _ of a set is the average percentage of states that its states can transition into .we can calculate s turbulence with the average ratio of each state s degree to the number of states in : this measure ranges from zero to one where the zero case occurs as and a turbulence of one means that the set is fully connected ( including self - transitions for each state ) .the idea is that when each state has only a few possible transitions then there are far fewer possible paths through the system dynamics .if each state can transition into many others then , like with the common usage of ` turbulent ' , there is a great deal of uncertainty regarding the path that a series of transitions will take . plotting the degree distribution would reveal a set s ( which might be the whole system or just a portion ) turbulence profile .if one were to find something like a power - law distribution ( where a few states have many transitions and most have just a few transitions ) the high - degree states would seem to satisfy yet another concept of tipping behavior . 
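the one-step measures in this subsection (state stability, set stability, staticness, and turbulence) are simple weighted sums over the transition structure. the sketch below uses a three-state toy model with invented probabilities; it is meant only to show the arithmetic, not to reproduce any example from the paper.

```python
# sketch of the simple change/no-change measures for a set of states.
MODEL = {
    "a": {"a": 0.8, "b": 0.2},
    "b": {"a": 0.1, "b": 0.6, "c": 0.3},
    "c": {"c": 1.0},
}

def state_stability(s):
    """probability of a self-transition."""
    return MODEL[s].get(s, 0.0)

def set_stability(subset):
    """average per-state probability of staying inside `subset` for one step."""
    stay = lambda s: sum(p for nxt, p in MODEL[s].items() if nxt in subset)
    return sum(stay(s) for s in subset) / len(subset)

def staticness(subset):
    """average self-transition probability over `subset`."""
    return sum(state_stability(s) for s in subset) / len(subset)

def turbulence(subset):
    """average ratio of out-degree to subset size (degree counts self-transitions)."""
    return sum(len(MODEL[s]) for s in subset) / (len(subset) ** 2)

S = {"a", "b"}
print(set_stability(S))                          # 0.85
print(staticness(S))                             # 0.7
print(round(turbulence({"a", "b", "c"}), 2))     # 0.67 for the whole system
```

consistent with the theorem above, set stability (0.85) is at least as large as staticness (0.7) for the chosen set.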
combining turbulence profiling with ( for example ) the identification of perimeter states could be used to classify systems by the dynamical properties ( see future work section for more details ) .though the turbulence measure may provide sufficient information in many systems , it fails to differentiate the effects of high and low probability transitions .transition weights clearly play a role in determine how confident one can be that a particular trajectory will be taken rather than another .for example , if all but one of a state s transitions have very small probabilities associated with them then the set should be considerable less turbulent than if all the transitions are equally probable .[ weightedturbulence ] as a refinement of turbulence , _ weighted turbulence _ of the state equals zero if and for can be calculated as because by definition [ probability ] the sum of exit probabilities sum to one , the average of the exit probabilities is regardless of the number and their individual weights .the innermost component of this calculation , therefore , finds the difference between each transition and the average weight and squares it .squaring has the dual effect of producing an absolute value and intensifying differences ; the intensification is not crucial and is merely adopted by convention . because turbulence is maximal when each weight is equal , we subtract the differences from one to calculate each state s turbulence .case is that the definition produces a value of one instead of zero .this is an artifact of the fact that if there is only one edge then all the edges have a weight equal to the average weight - and that is the case that produces maximal turbulence for all other . ]to determine the weighted turbulence of a set we simply average each included state s weighted turbulence in common parlance something is sustainable if it can perpetually maintain its operation , function , or existence .it is often used in connection to environmental considerations such as whether humans are using up resources faster than they can be replenished or to the ecological question of whether population dynamics will drive any species to extinction .political institutions , academic reading groups , pools of workers , and any other system that undergoes inflows and outflows of its parts and might collapse or fail is a potential subject for sustainability considerations . roughly speaking , for this analysis a set of states is sustainable if the system can stay within that set of states .there are multiple ways to calculate a measure of this sort and each reports a slightly different concept of sustainability .as a crude approximation to the long - term sustainability we can find the cumulative sum of the power of the set stability measure from definition [ setstability ] up to some sufficiently large ( see observation [ bigenough ] below ) .we could call such a measure _ naive sustainability _ and identify conditions for its appropriate application , but instead we will move on to a more sophisticated measure .the previous measure is crude because within a set there may be ( for example ) heavily weighted cycles such that if the system starts in one of the cycle - states it is very likely to go around the cycle for a long period . 
to properly account for this, while still remaining agnostic over which state the system starts in, we calculate a refined sustainability measure. [ sustainability ] the _ sustainability _ of is the average cumulative long-term probability density of future states that remain in the set, starting from each state in the set. the calculation raises the set-restricted transition matrix to successive powers up to (see observation [ bigenough ]), clearing out the probability mass outside the set each time so that it can't return; the result is the sum of the resulting vectors for the states in the set at each step. usually algorithms have been banished to the appendix, but understanding this calculation is likely to make understanding definition [ susceptible ] of susceptibility much clearer. ] [ bigenough ] if the chosen set does not contain an attractor then this calculation is unproblematic because some probability density `` escapes '' the set each iteration and there exists some time after which the remaining probability density in each state of is less than any arbitrarily chosen minimum resolution. sustainability measurements are therefore only appropriate for sets that do not include attractors. if an application to sets including attractors were deemed useful then we could separate out the basin(s) of the attractor(s) from the other states and apply the sustainability measure above to the remaining states of the set. it is still not clear how to recombine the two subsets into a single measure, or how to cope with epistemic barriers to knowing whether a set contains an attractor before running the analysis and thus whether this would be necessary. because sustainability is a cumulative measure the calculation will not produce a probability. but because the cumulative sum is divided by the size of , it is normalized and comparable across differently sized sets. what we uncover through this process is a measure of the expectation over time that the system will stay in the set given that the system starts somewhere in it. if is chosen to be an energy plateau then all exit transitions are one-way. if there are no cycles in then the probability mass will quickly dissipate from within it, yielding a very low sustainability measure. if there is at least one cycle then there is a chance that the system will stay within the set indefinitely, but since that cycle cannot be an attractor the system will leave the set in expectation. the stronger the weights of the transitions among the cycle states, the greater the sustainability measure. the energy plateau application is especially helpful for the `` keep our options open '' mindset and where each equilibrium is a different form of system failure (e.g. for dissipative structures). it can even be useful to calculate the sustainability of a basin of attraction (excluding the attractor itself) - which is also an energy plateau. the system may exhibit interesting and long-lived behavior within a basin of attraction that may reveal much more about the processes affecting a system than just which attractor it is likely to end up in. time to equilibrium may be on a galactic time scale, and knowing that will alter our interpretation of the system's characteristics.
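as a concrete reading of the sustainability algorithm sketched in the footnote above, the following python fragment repeatedly pushes probability mass through the transition matrix, zeroes out whatever mass has escaped the chosen set so that it cannot return, and accumulates the surviving in-set mass. the matrix, the horizon, and the normalization by set size are assumptions made for the illustration, chosen to be consistent with the definition; the set {a, b} contains no attractor, so the cumulative sum converges.

```python
# sketch of the sustainability calculation: escaped mass is discarded each step.
import numpy as np

STATES = ["a", "b", "c"]
P = np.array([[0.2, 0.7, 0.1],    # rows: from-state, columns: to-state
              [0.6, 0.3, 0.1],
              [0.0, 0.0, 1.0]])   # "c" is an absorbing equilibrium outside the set

def sustainability(subset, horizon=1000):
    idx = [STATES.index(s) for s in subset]
    mask = np.zeros(len(STATES))
    mask[idx] = 1.0
    total = 0.0
    for start in idx:
        dist = np.zeros(len(STATES))
        dist[start] = 1.0
        for _ in range(horizon):
            dist = dist @ P          # one step of the chain
            dist *= mask             # discard mass that escaped the set
            total += dist.sum()      # cumulative surviving probability
    return total / len(subset)       # normalized by the size of the set

# {a, b} contains no attractor: all mass eventually escapes into the absorbing c.
print(round(sustainability({"a", "b"}), 2))   # about 9.0 here - note it is not a probability
```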
the term ` susceptible ' is typically followed by ` to ' and an indication of what the thing is susceptible to .i preserve that usage with the measure presented here .we will talk of sets being susceptible and sets can be defined according to different properties for different applications .states are what sets are susceptible to .this may sound odd , but the probability of transitioning out of a set depends on which state within the set the system is currently in .so the characteristic represented by staying within a set risks being lost to a degree contingent upon the state . [ susceptible ] the degree to which is susceptible to is how much more ( or less ) likely it is to transition out of conditional on it being in a particular state of compared to the sustainability of overall . given this definition we can see that a positive susceptibility means a lower probability to stay within .we can also determine and measure the set s susceptibility to the lever points of an aspect .recall from definition [ leverpoint ] that the lever points of an aspect are all those transitions that result from a change in that aspect .we can calculate the susceptibility of to a collection of lever points as sustainable / susceptibility analysis can be used to help systems maintain a performance level .we can take the set to be a contiguous collection of states that count as functional in some system : such as all the configurations of an airplane that the autopilot can manage .for the airplane system some state changes will be exogenous perturbations due to environmental factors ( wind , rain , pressure , lightening , passenger movement , etc . ) , others will be endogenous control adjustments by either the pilot or the autopilot , and some will be a mix .first one would calculate the sustainability of the whole set of autopilot capable states. then one would calculate how susceptibility that set is to each state ( or smaller collection of states ) . using this information the autopilot and/or pilot could select actions that minimize susceptibility across the states visited and this means maximizing the probability of staying within the set of autopilot capable states .this example can be generalized to any case where maintaining functionality is the modeler s goal .stability , staticness , and sustainability are different ways to measure a system s dynamics tendency not to leave a state or set ; we now turn to measures of returning to a state or set once it has been left .[ resilience ] a state s _ resilience _ is the cumulative probability of returning to a state given that the system starts in that state .the resilience of equals it is the sum of the individual probabilities of returning in time steps . because the sum of exit probabilities of every state equals one and the probability of traversing a path is the product of the states along the path this cumulative sum is always less than or equal to one and is a true probability measure .[ fragility ] a state s _ fragility _ is a measure of how likely it is that the system will never return to that state .this is just one minus the resilience of that state .so equilibria have zero fragility and states with no return paths have a fragility of one .measuring the degree of fragility requires the same calculation as measuring the resilience , but finding out whether a state is ever revisited is much easier because we can utilize our definition of a state s reach .[ brittle ] a state is _ brittle _ if and only if it has a fragility value of one ( i.e. 
a resilience value of zero ) .brittle states are the ones such that except for the brittle states which have a specific formal significance , the choice of whether to use a resilience or fragility measure will depend on which feature the user would like to highlight ( glass half - full or half - empty ). we can also define the resilience and fragility of a set in an analogous way .[ setresilience ] _ set resilience _ is the probability that the system will return to a set if the initial state of a sequence is within the state . though the definition is exactly parallel to the single - state case , the algorithm to calculate this probability is considerably more difficult .a few facts about entering and leaving sets will help refine our understanding .* transitions exit through the perimeter states of .* transitions enter through a set of entry points of .* we can refine the definition of set resilience to .so to calculate set resilience we need first to find all the paths from each element in to each element in . in the worst case this can be done in time via a breadth - first search .set fragility is one minus set resilience .[ resilienceiszero ] if is an energy plateau then the resilience of is zero .by definition [ energyplateau ] an energy plateau contains all the states in the system with the same number of attractors in their reach .any transition out of such a set would be to a state with a different energy level and by theorem [ energydrop ] it must be a lower energy level . also by theorem [ energydrop ]no transition can be to a higher energy level .hence if a system transitions out of an energy plateau then it can never transition back into it .if the system can not transition back into the set then by definition [ setresilience ] s resilience is zero .the susceptibility measure determines how sustainability changes depending on the specific starting state . we may wish to have a similar measure for resilience that reports how likely the system dynamics are to return to a state given that it exits via a particular transition .[ recoverable ] a transition out of the set is _ recoverable _ to the degree that the system will return to the set after the transition . is recoverable from to the degree calculated by note that leaving via a particular transition is the same as exiting due to a particular lever change .thus we can uncover the recoverability of a set of lever points ( from definition [ leverpoint ] ) for a particular aspect as the average of the recoverability of each transition in it . also note that there may be multiple paths from back into each of .each path leading from back into can be called a _ recovery path_. continuing with the autopilot example ,imagine that there are many known points of failure for maintaining autopilot control .each of these is a transition out of the set via a known lever change .but not all failures are equally as problematic . 
by calculating the recoverability of each of the failure transitions, they can be ranked by their seriousness. such a ranking can guide both the pilot in adjusting to the failure and the autopilot in avoiding it in the first place. again, the autopilot example can be generalized to the maintenance of any system: political regimes, sports clubs, ecosystems, viable crop production, etc. sustainability measures the likelihood of a system's dynamics staying in a certain set given that it starts within that set, and resilience measures how likely it is to return to the set if the dynamics leave the set, but these measures do not include the case where the system's state starts outside the set and then enters it. when the set of interest is an energy plateau, resilience is always zero (as shown by theorem [ resilienceiszero ]), but the set may still receive probability mass from parts of the system with higher energy levels. and in cases where a non-equilibrium analysis is appropriate we might be comparing different subsets within an energy plateau (e.g. the relative probability mass of two cores within the mantle of an energy plateau - see example [ punctuatedequilibria ] below). in this final subsection the above measures will culminate in the most inclusive measures of system robustness. before defining the measure that allows for inflow, we first define a measure that combines the features of sustainability and resilience. [ reliable ] the _ reliability _ of a set is the average cumulative long-term probability density over the states in the set given that the system starts within that set. this measure combines the concepts of sustainability and resilience, but it is not just the sum of those two measures. reliability starts the flow in the set and calculates the probability of being in each state on each consecutive time step. it does restrict the probability mass summation to the specified set, but it tracks probability mass throughout the system. the reason that this isn't merely a sum of resilience and sustainability is that, when combining those two, it was not possible to track probability mass that leaves the set, cycles back into the set, and then circulates within the set (and maybe even repeats this process). with reliability we can reincorporate probability flow that leaves and then re-enters the set. a characteristic captured by the chosen set is reliable if it can be maintained or, if lost, can be regained. if is an energy plateau then its reliability equals its sustainability. this theorem does not follow directly from theorem [ resilienceiszero ] because there is no direct link between resilience and reliability, but the reasoning is the same. because there cannot be any paths leading out of an energy plateau back into it, all the probability mass counted by the reliability measure comes from the initial distribution. mass that leaves never returns, so this produces a measure equivalent to one that does not count returning mass: this is the sustainability measure. hence the sustainability values in figure [ sustainabilityfigure ] are also those energy plateaus' reliability values.
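resilience, fragility, and recoverability all reduce to hitting probabilities (the chance of ever reaching a target state or set), which can be approximated by a simple fixed-point iteration. the sketch below is one way to do that, not the paper's algorithm; the model is invented and the iteration count is an arbitrary cutoff rather than a principled convergence test.

```python
# sketch: resilience and recoverability via iteratively computed hitting probabilities.
MODEL = {
    "a": {"b": 0.5, "c": 0.5},
    "b": {"a": 0.9, "d": 0.1},
    "c": {"c": 1.0},             # absorbing state with no way back to a
    "d": {"a": 1.0},
}

def hitting_prob(targets, iterations=2000):
    """h[x]: probability of ever reaching the set `targets` from state x."""
    h = {s: (1.0 if s in targets else 0.0) for s in MODEL}
    for _ in range(iterations):
        for s in MODEL:
            if s not in targets:
                h[s] = sum(p * h[nxt] for nxt, p in MODEL[s].items())
    return h

def resilience(state):
    """probability of ever returning to `state`, given the system starts there."""
    h = hitting_prob({state})
    return sum(p * h[nxt] for nxt, p in MODEL[state].items())

def recoverability(subset, landing_state):
    """probability of re-entering `subset` after an exit that lands on `landing_state`."""
    return hitting_prob(subset)[landing_state]

print(round(resilience("a"), 3))         # 0.5; fragility is then 1 - 0.5 = 0.5
print(recoverability({"a", "b"}, "d"))   # 1.0 - d always feeds back into a
print(recoverability({"a", "b"}, "c"))   # 0.0 - exits landing on c are unrecoverable
```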
finally we add to the reliability measure the possibility that the system did not start in the set , but transitions into it .[ robust ] the _ robustness _ of a set is the average cumulative long - term probability density over the states in the set given that the system may start at any state .robust characteristics not only have high retaining power and recoverability , they also draw the system in from states outside the characteristic set . sets with high robustness values are sets that the system s dynamics tends towards .that description makes robust sets sound a lot like attractors ; and this is as we would expect .attractors will typically have high robustness measures on account of their perfect sustainability and the fact that typically several states will lead into them . ) or a proper subset of an attractor . ]the attractor - like behavior related to robust sets provides interesting and useful insights into many systems dynamics .[ punctuatedequilibria ] sets that behave like ( and are defined as ) equilibria in other modeling techniques may be revealed to be highly robust sets under the current analysis .the phenomena of _ punctuated equilibria _ describes a system that spends long periods of time in characteristic patterns with interspersed and short - lived periods of rapid change . in the markov model representationwe might see a mantle of an energy plateau with multiple highly robust cores .these cores could have relatively short transition paths among them .each is a different cohesive pattern with larger probabilities of staying in than going out .but because these cores are not attractors the system will eventually transition out of them and into the next core .[ dissipativestructure ] one of the foci of complex systems science is the study of the self - maintaining ( or _ autopoietic _ ) nature of dissipative structures .dissipative structures are those where a continual flow of energy , matter , or other resource is necessary to maintain system structure and performance .biological systems are like this , constantly changing and adapting to maintain functionality , and so are many other complex systems .these are systems where there are no equilibria or all equilibria are states to be avoided so that the energy level of the system remains mostly constant .some set(s ) of states are preferred to others for exogenous reasons ( functionality , performance , diversity , longevity , or other utility measures ) and the goal is to maximize time spent in the desired states .the goal might also be to maintain some characteristic feature of transient system behavior .the current techniques offer new measures of behavior for non - equilibrium analysis .these can be used to embed an existing equilibria model into a larger context and/or to push down the level of analysis to see what is happening inside an `` equilibrium '' state . 
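a sketch of reliability and robustness side by side makes the contrast concrete: both accumulate in-set probability mass over a long horizon, and they differ only in where the initial mass is placed (inside the set versus spread over every state). the matrix, the finite horizon, and the choice to average over a uniform collection of starting states are assumptions made for the illustration; for sets that contain an attractor the cumulative sum grows with the horizon, which is one informal sense in which attractors come out as highly robust, so the toy set below is chosen to contain no attractor.

```python
# sketch of reliability and robustness as cumulative in-set probability mass.
import numpy as np

STATES = ["a", "b", "c", "d"]
P = np.array([[0.1, 0.6, 0.3, 0.0],
              [0.5, 0.4, 0.0, 0.1],
              [0.0, 0.0, 0.9, 0.1],
              [0.0, 0.0, 0.0, 1.0]])   # "d" is absorbing

def cumulative_in_set_mass(subset, start_states, horizon=500):
    idx = [STATES.index(s) for s in subset]
    total = 0.0
    for start in start_states:
        dist = np.zeros(len(STATES))
        dist[STATES.index(start)] = 1.0
        for _ in range(horizon):
            dist = dist @ P
            total += dist[idx].sum()   # mass inside the set, wherever it has been
    return total / len(start_states)

def reliability(subset, horizon=500):
    return cumulative_in_set_mass(subset, subset, horizon)

def robustness(subset, horizon=500):
    return cumulative_in_set_mass(subset, STATES, horizon)

S = {"a", "b"}
print(round(reliability(S), 2), round(robustness(S), 2))   # about 4.42 and 2.21
# robustness is lower here because no state outside the set ever feeds mass back into it.
```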
using a definition parallel to that of susceptibility, the following measure calculates how much more (or less) likely the system is to be in the set conditional on the dynamics starting in a particular state (not necessarily in that set). [ vulnerable ] a set's _ vulnerability _ is the difference in the average long-term probability density over the states in the set compared to the density generated by starting in . this section's status is somewhere between completed work and future work - which i suppose makes it work in progress. it has not been prepared to the degree of rigor of the previous sections, but there is sufficient material here to provide a strong indicator of how a markov model representation can be used to uncover a variety of measures related to path sensitivity. the algorithms for the presented measures (and more) have been prepared, but the mathematical and conceptual work needs more smoothing and filling in. it is offered in its present form for review, evaluation, and feedback purposes. path sensitivity is not a single, well understood and properly defined concept. for starters, different features of system dynamics can have the path sensitivity property: outcomes can be path sensitive, processes can be path sensitive, measures can be path sensitive, and paths can be path sensitive. previous work has focused on explaining why a particular system exhibits path sensitivity or how a particular mechanism can generate it, but it has not provided general, causally agnostic measures applicable to any system's dynamics. what follows is a collection of distinctions and formal definitions for several different forms of path sensitivity. and like the above measures of tipping- and robustness-related concepts, these measures apply to markov models representing either observational or model-generated data. one type of path sensitivity results when a transition excludes some states from any possible future of the system's dynamics. such an occurrence is closely related to the measures of criticality and tipping behavior presented in definitions [ criticality ] and [ tippiness ] respectively. [ weakpathpreclusion ] any reduction in the size of the reach across a transition is an instance of _ weak path preclusion _. the degree of path preclusion of is the criticality measure of . such a definition is likely to be useful in a limited number of models, but it is conceptually intuitive that merely excluding a set of states from the future of a system's dynamics is a relevant form of path sensitivity. in applications geared towards preserving, tracking, or monitoring some characteristic of the system or its behavior we need a concept that accounts for the preclusion of that characteristic. [ strongpathpreclusion ] when there is a reduction in the number of reference states or sets (e.g. attractors or specific equivalence classes) that can be reached then this is tipping behavior that exhibits _ strong path preclusion _.
the strong path preclusion of is measured by the tippiness of .recall from example [ climatechange1 ] that not being able to return to a state or set was * not * properly part of the definition of tipping behavior .whether a tip is path preclusive or not marks an important feature of the transition .in many cases we will be more interested in whether a transition is path preclusive than whether it is tipping out of some characteristic .it is this dynamic and its importance for understanding system dynamics that gives tippiness its relevance as a measure .some systems have only one long - term outcome ( a singular attractor ) and some have none ( when the whole system is an orbit ) .in either case we might care less about where the system goes than how it gets there , i.e. which intermediate states the system realizes between two anchor states .we might care about the exact path our dynamics takes through the system s states because some states are preferred to others ( for exogenous reasons ) or because they have different dynamical properties ( e.g. susceptibility or vulnerability measures ) .two different tips may include the same reference states but force the system dynamics to take different paths to get there .[ trajectoryforcing]_trajectory forcing _ is when a particular transition sends the dynamics down a specified sequence of states .the _ force _ of an exact path from to can be measured as the product of the probabilities of all the transitions required to stay that course : given this general definition it is clear that forcing is relative to the specific set of connected states . if , for example , one wanted to maximize the probability of reaching a specific equilibrium then calculating the force of each path from the tips of the current core to that equilibrium would provide the necessary guidance ( see figure [ trajectoryforcingfigure ] ) .to the red path from to in the figure has a force of and the green path has a force of .force is not the same thing as the probability of reaching given and respectively ; that is calculated by the tippiness of those two transitions with only as a reference state .these force measurements are the probabilities of following those exact paths to get between those two states . for non - exact paths forcecan be calculated as the sum of the probabilities of the paths that visit each of the specified markers .the term `` markovian '' means memoryless in the related fields of mathematics where it refers to any processes wherein future probabilities of events do not depend on past occurrences ( e.g. poisson processes ) .the markov model representation may therefore seem an unlikely tool for uncovering dependencies in paths of system dynamics .the conditional probability assignments to the transitions imply that the current state is sufficient for knowing the probability distributions of future states . butanother way to interpret this markov model structure is that it encapsulates the probability distribution of future states * if * all one knows is the current state .it can also be used to track correlations in the specific paths followed .[ pathdependence ] a state s exit transitions are _ path dependent _ if and only if the distribution of their probabilities changes conditional on previous states . 
the degree to which s transitions are path dependent on a set of historical sets equals this definition is very general and is meant to capture all the different ways that probability distributions could change due to different types of historical sets ( see page for several types of path dependence ) .this therefore admits to a refinement for each type of historical set dependence : exact path leading to , unordered collection of states preceding , existence of a path for different , stability of , length of , and many others .but insofar as each of these conditions can alter the transition probability distribution , their degree of path dependence can be measure in the same way . ]consider the situation depicted in figure [ pathdependencefigure ] ; both states and transition into state which may transition to either or ( edges and respectively ) .the path dependence of s transitions is revealed through the markov model when there are significant correlations in ( say ) and .so even though it may be the case that , i.e. , it may not be the case that or that .assume that the system dynamics enter equally often from both and .given the system at let s assume .analyzing the individual time series of data may reveal that and .so s path dependence on . s path dependence on . considering the conditional probabilities and the relative value of these figures matches intuition .this paper is very much a work in progress and so each definition , measure , theorem , algorithm and example is a subject for future work .in addition to bolstering the current contents i have several planned avenues for expansion and spin - off projects , some of which are outlined in this section .good methodology exists as a facilitator to good science , so the first and perhaps most important extension of this project is to apply these measures to models within substantive research projects . over the past year of developmenti have given a few presentations and have engaged in many conversations about these techniques and their potential to reveal interesting features of system dynamics .the potential collaborative projects that resulted from these discussions ( outlined below ) are in addition to planned applications to personal research in the evolution of culture and morality , institutional design , biological contagion control , supply chain management , ecological robustness , resource sustainability , and various philosophical implications of seeing multiple systems dynamics as instances of the same underlying phenomena . 
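as a concrete illustration of the trajectory-forcing and path-dependence measures defined above, the following is a minimal python sketch (assuming a numpy transition matrix and integer-coded trajectories; the function names and the toy five-state example are illustrative, not taken from this paper). the first function computes the force of an exact path as the product of the transition probabilities required to stay on that course; the second estimates a state s exit distribution conditional on its predecessor, which is exactly the kind of correlation that reveals path dependence in the worked example above.

```python
import numpy as np
from collections import Counter, defaultdict

def path_force(P, path):
    """force of an exact path: product of the one-step transition
    probabilities needed to stay on that exact course."""
    return float(np.prod([P[i, j] for i, j in zip(path[:-1], path[1:])]))

def conditional_exit_distributions(trajectories, state):
    """estimate a state's exit distribution conditional on the previous state.
    if the distributions differ across predecessors, the exit transitions of
    `state` are path dependent in the sense defined above."""
    counts = defaultdict(Counter)
    for traj in trajectories:
        for prev, cur, nxt in zip(traj, traj[1:], traj[2:]):
            if cur == state:
                counts[prev][nxt] += 1
    return {prev: {nxt: c / sum(ctr.values()) for nxt, c in ctr.items()}
            for prev, ctr in counts.items()}

# hypothetical five-state chain: states 0,1 feed into 2, which splits to 3 or 4
P = np.array([[0.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 1.0]])
print(path_force(P, [0, 2, 3]))   # force of the exact path 0 -> 2 -> 3: 0.5
```

feeding observed (or simulated) trajectories to conditional_exit_distributions for the middle state would show whether entering it from one predecessor rather than the other shifts the split between the two exits - a correlation the marginal transition matrix alone cannot reveal.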
within the university of michigan, several parties have expressed interest in this methodology. qing tian (working with dan brown) plans to use vulnerability analysis to identify the social and physical levers of the well-being of people in the poyang lake area of china where flooding frequently disrupts economic and social activity. abe gong in the department of political science and public policy wants to incorporate these tools for research into far-from-equilibrium dynamics in organizational change. dominick wright (working with scott atran) wants to find points of susceptibility to terrorist cell activity; intelligently disrupting sustainable operation of terrorist networks might offer low-impact methods to benefit national security. warren whatley of the department of economics and the center for african american studies would like to apply the path dependency measures to his data and econometric model of the 18th century british slave trade in america to identify any lasting effects on african economic and political stability. chris chapman of the multimedia development team at the university of michigan medical school wants to uncover user behavior patterns in educational technology in order to improve retention and pinpoint weak links to improve educational efficacy. several other individuals have expressed interest in the technique, but the above selection suffices to demonstrate the potential benefits this methodology has for substantive research fields across multiple disciplines. this methodology has also garnered interest from the private sector. state farm insurance has expressed interest in building a model to better understand the effects of word-of-mouth spread in insurance company choices that exploits tipping behavior and path dependence. lockheed martin is pushing to develop software to read in data from existing simulations and engineering tools to build the markov model and identify the features of system dynamics defined herein. palantir technologies is considering incorporating these analysis capabilities into their social network and financial market analysis software platforms. because the methods are designed to be as general as possible, algorithms implementing them could be constructed as code libraries for popular scientific programming languages (java, c, c++, python, matlab, mathematica, etc.) and made open source for popular consumption and widespread use. i have started to pursue government grant funding to develop this software package.
through each of these applications and others that follow, i expect to uncover exceptions, caveats, refinements, and alternatives to the work as it stands now. i also expect to discover many more features of system dynamics that can be uncovered via the markov model representation. this paper presents a first attempt at capturing the above-defined properties; through applied research i will be able to validate the usefulness of these measures and improve them where necessary. as mentioned in the text, i have sketched a methodology to generate the markov model representation from data sets (whether from a database or collected from a generative model). because actual data sets and parameter spaces are typically quite large, a tool to automatically create the markov transition diagram is necessary to perform the above measurements. part of designing this tool will be specifying precisely what the restrictions and requirements of a data set are. another part includes providing the algorithms to use available data to generate a system s states and transitions properly weighted. because the methodology presented in this paper is essentially a statistical technique (more on this below), some time will be spent to demonstrate that the assumptions made are the minimal assumptions and that the structure generated is the most justified result from the observable sample. one major goal of complex systems research is to identify common underlying mathematical properties in a myriad of seemingly very different phenomena. the markov modeling technique allows us to create a common representation of almost any system s dynamics. differences in the definitions of system states, however, will still mask many of the similarities. that difficulty notwithstanding, we can make great gains by identifying network _ motifs _ (repeated patterns in the graph structure) and establishing cross-disciplinary equivalence classes of system behavior. achieving this goal will require solving issues with the choice of system resolution and `` playing with '' the resolution to find the matching patterns. though this may sound suspicious, changing the resolution is nothing more than altering the level of organization to which we are applying the properties. as long as we are consistent in our application of these techniques, we may be able to discover similarities in many complex systems dynamics. there are two potential non-trivial objections to the above-given probabilistic accounts of properties of system dynamics. the first is that probabilistic definitions are inadequate because we aim to understand these features as properties that systems possess rather than dynamics they _ might _ have.
to answer this questionwe may first need a better grasp of dispositional versus categorical properties more generally ( see below ) .but it may be the case the other definitions cashed out purely in terms of structural properties of markov models may be what some people would find more intuitive .it could also be that the definitions these potential objectors are seeking can not be formulated within markov models at all .as long as the above definitions reveal useful distinctions and patterns of system behavior the project was a success , but still better ( or at least different and also useful ) measures may be available if build from a different formal foundation .i will , naturally , continue to pursue other and hugely different measures of system dynamics .the other objection to the probabilistic definitions provided is that a person may insist that for many of these concepts the definition is incomplete without the causal explanation for how it comes about. like all other statistics - like approaches ( see `` metastatistics '' below ) these measures may be realized by many different micro - level dynamics .some of those dynamics may not seem proper candidates for robustness or tipping behavior even if the data they generate reveals it as such from this analysis .but if this were to happen then i would consider the project a huge success. this would be similar to discovering scale - free degree distributions in many different networks from disparate research fields .finding that common property urged researchers to pursue more deeply the phenomena and they eventually uncovered several different mechanisms by which scale - free network may be created .our understanding of each of those systems greatly increased because we had a common yardstick with which to measure them .the probabilistic measures presented here are not intended to replace or make unnecessary the deeper scientific analysis - they are supposed to foster it . while working on the formal definitions of robustness - related properties i realized that these are all dispositional properties ; and dispositional properties constitute a long - standing philosophical problem .that connection immediately made me wonder if my mathematical formalism might shed some new light on how to differentiate dispositional properties from categorical ones .dispositional properties are philosophically troubling for many reasons , but primarily these stem from the question of whether their subjunctive conditional status distinguishes them from categorical properties. it may therefore be interesting to consider how to apply my methodology to investigate other dispositional properties such as soluble , malleable , affordable , and differentiate them from other properties with accepted necessary and sufficient conditions . if in addition to the obvious and direct application of this methodology to improve the performance capabilities of systems my research also produces insights into the nature of dispositional properties in general that would be unanticipated but certainly welcome news .let s look at some details of the problem .a big part of the problem is that any property can be given a subjunctive conditional description , but a property would be dispositional if and only if such a definition is the only possible one .color properties are known as primary properties and should be excellent candidates of clear - cut categorical properties . yet being red is dispositional in the sense that nothing seems red in the dark . 
whether or not redness ( or conductive , or triangular , or ) is dispositional in the same way that fragility is has not been solved .my mathematical analysis reveals how particular dispositional properties ( robustness - related ones ) are behavioral in nature and emerge from the microbehavior of the system components .what i hope is that this can be expanded to develop necessary and sufficient conditions for properties to be dispositional in different ways or at least a step in a helpful direction .i have said elsewhere in this paper that i consider the methodology presented here to be similar in kind to statistics .it starts with data ( perhaps generated from simulations of a model ) , fits a model ( a markov model ) to the data , and then purports to describe the real system with measures over that model ( my definitions ) .statistics as we usually see it takes a different kind of model ( some form of distribution or estimator ) , but its purpose and general method of attack are very similar. and this procedure is clearly different from other sorts of models in that neither standard statistics nor my methodology can explain the phenomena being analyzed . statistics ( and my methodology ) can produce evidence that some generative theory - driven model does explain the observed data , but the theory behind the generative model is what is doing the explaining .standard statistics and my methodology are certainly not unique in their abilities to measure but not explain phenomena .much of complex network analysis can be seen in this light as well .the network representation facilitates the calculation of measures on the generating data but not because the links identified in the network representation are in the actual system s features .classifier systems , bayes nets , hidden markov models , and neural nets are all further examples where the formal representation can permit measures and produce predictions without mirroring the structure and dynamics of the underlying behavior - generating system . seeing all these different techniques under the same metastatistical light may allow us to 1 ) bridge gaps among these techniques , 2 ) identify broader guidelines for the proper application and interpretation of these techniques , and 3 ) find new statistics - like techniques with desired features .young , h. peyton .`` the diffusion of innovations in social networks '' _ the economy as a complex evolving system _ , vol .iii , lawrence e. blume and steven n. durlauf , eds .( oxford university press , 2003 ) .
this paper draws distinctions among various concepts related to tipping points , robustness , path dependence , and other properties of system dynamics . for each concept a formal definition is provided that utilizes markov model representations of systems . we start with the basic features of markov models and definitions of the foundational concepts of system dynamics . then various tipping point - related concepts are described , defined , and illustrated with a simplified graphical example in the form of a stylized state transition diagram . the tipping point definitions are then used as a springboard to describe , formally define , and illustrate many distinct concepts collectively referred to as `` robustness '' . the final definitional section explores concepts of path sensitivity and how they can be revealed in markov models . the definitions provided are presented using probability theory ; in addition , each measure has an associated algorithm using matrix operations ( excluded from current draft ) . finally an extensive future work section indicates many directions this research can branch into and which methodological , conceptual , and practical benefits can be realized through this suite of techniques .
the kaczmarz method is an iterative projection algorithm for solving linear systems of equations. due to its simplicity, the kaczmarz method has found numerous applications including image reconstruction, distributed computation and signal processing to name a few, see for more applications. the kaczmarz method has also been rediscovered in the field of image reconstruction and called art (algebraic reconstruction technique), see also for additional references. it has also been applied to more general settings, see (table 1) and for non-linear versions of the kaczmarz method. let and. throughout the paper all vectors are assumed to be column vectors. the kaczmarz method operates as follows: initially, it starts with an arbitrary vector. in each iteration, the kaczmarz method goes through the rows of in a cyclic manner. for each selected row, say the -th row, it orthogonally projects the current estimate vector onto the affine hyperplane defined by the -th constraint of, i.e., where is the euclidean inner product. more precisely, assuming that the -th row has been selected at the -th iteration, the -th estimate vector is inductively defined by where are the so-called relaxation parameters and denotes the euclidean norm. the original kaczmarz method corresponds to for all, and all other settings of s are usually referred to as the _ relaxed kaczmarz method _ in the literature. kaczmarz proved that this process converges to the unique solution for square non-singular matrices, but without any attempt to bound the rate of convergence. bounds on the rate of convergence of the kaczmarz method are given in, and (theorem 4.4, p.120). in addition, an error analysis of the kaczmarz method under the finite precision model of computation is given in. nevertheless, the kaczmarz method converges even if the linear system is overdetermined ( ) and has no solution. in this case and provided that has full column rank, the kaczmarz method converges to the least squares estimate. this was first observed by whitney and meany, who proved that the relaxed kaczmarz method converges provided that the relaxation parameters are within ]. throughout the paper all vectors are assumed to be column vectors. we denote the rows and columns of by and, respectively (both viewed as column vectors). denotes the column space of, i.e., and denotes the orthogonal complement of. given any, we can uniquely write it as, where is the projection of onto. and denotes the frobenius norm and spectral norm, respectively. let be the non-zero singular values of. we will usually refer to and as and, respectively. the moore-penrose pseudo-inverse of is denoted by. recall that. for any non-zero real matrix, we define related to this is the scaled square condition number introduced by demmel in, see also. it is easy to check that the above parameter is related to the condition number of, , via the inequalities:. we denote by the number of non-zero entries of its argument matrix. we define the _ average row sparsity _ and _ average column sparsity _ of by and, respectively, as follows: where for every } ]. the following fact will be used extensively in the paper. [ fact : xls ] let be any non-zero real matrix and. denote by. then. we frequently use the inequality for every. we conclude this section by collecting a few basic facts from probability theory that will be frequently used.
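before turning to those probabilistic facts, the cyclic update rule described above can be summarized in a short sketch (python/numpy; a minimal illustration assuming a dense matrix A and right-hand side b, with lam playing the role of the relaxation parameter; this is not the authors code).

```python
import numpy as np

def kaczmarz(A, b, sweeps=100, lam=1.0, x0=None):
    """cyclic (relaxed) kaczmarz: sweep the rows in order and project the
    current iterate onto the hyperplane {x : <A_i, x> = b_i}, scaled by lam."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    m, n = A.shape
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    row_sq = (A ** 2).sum(axis=1)
    for _ in range(sweeps):
        for i in range(m):
            if row_sq[i] == 0.0:
                continue                      # skip all-zero rows
            r = b[i] - A[i] @ x               # residual of the i-th equation
            x += lam * (r / row_sq[i]) * A[i]
    return x
```

with lam = 1 this is the original kaczmarz sweep; the relaxed variant mentioned above corresponds to other choices of lam, commonly taken strictly between 0 and 2.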
for any random variable ,we denote its expectation by ] .let and be two random variables , then =\operatorname{\mathbb{e}}[x ] + \operatorname{\mathbb{e}}[y] ] with probability ] .observe that , i.e. , is a projector matrix .let be a random variable over that picks index with probability .it is clear that = { \mathbf{i}}_m - { { { \ensuremath{\mathsf{a } } } } } { { { \ensuremath{\mathsf{a } } } } } ^\top /{\ensuremath{\left\| { { { \ensuremath{\mathsf{a } } } } } \right\|_{\text{\rm f}}}}^2 ] .moreover , it is easy to see that for every is in the column space of , since , and in addition is a projector matrix for every ] , i.e. , the conditional expectation conditioned on the first iteration of the algorithm .it follows that { \ensuremath{{\mathbf e}}}^{(k-1 ) } } \right\rangle } \\ & \leq { \ensuremath{\left\|{\ensuremath{{\mathbf e}}}^{(k-1)}\right\|_2 } } { \ensuremath{\left\| \left({\mathbf{i}}_m - \frac { { { { \ensuremath{\mathsf{a } } } } } { { { \ensuremath{\mathsf{a } } } } } ^\top } { { \ensuremath{\left\| { { { \ensuremath{\mathsf{a } } } } } \right\|_{\text{\rm f}}}}^2}\right ) { \ensuremath{{\mathbf e}}}^{(k-1 ) } \right\|_2 } } \ \leq \ \left(1 - \frac{\sigma^2_{\min}}{{\ensuremath{\left\| { { { \ensuremath{\mathsf{a } } } } } \right\|_{\text{\rm f}}}}^2}\right ) { \ensuremath{\left\|{\ensuremath{{\mathbf e}}}^{(k-1)}\right\|_2}}^2\end{aligned}\ ] ] where we used linearity of expectation , the fact that is a projector matrix , cauchy - schwarz inequality and fact [ lem : technical ] . repeating the same argument timeswe get that note that to conclude .step can be rewritten as . at every iteration ,the inner product and the update from to require at most operations for some } ] with probability ] with distribution and assume that is a vector in the row space of .if ( in exact arithmetic ) , then theorem [ thm : rk : consistent ] follows by iterating lemma [ lem : avg ] , we get that the analysis of strohmer and vershynin is based on the restrictive assumption that the linear system has a solution .needell made a step further and analyzed the more general setting in which the linear system does not have any solution and has full column rank . in thissetting , it turns out that the randomized kaczmarz algorithm computes an estimate vector that is within a fixed distance from the solution ; the distance is proportional to the norm of the `` noise vector '' multiplied by .the following theorem is a restatement of the main result in with two modifications : the full column rank assumption on the input matrix is dropped and the additive term of theorem in is improved to .the only technical difference here from is that the full column rank assumption is not necessary , so we defer the proof to the appendix for completeness .[ thm : rk : inconsistent ] assume that the system has a solution for some .denote by .let denote the -th iterate of the randomized kaczmarz algorithm applied to the linear system with for any fixed , i.e. , run algorithm [ alg : randomized ] with input .in exact arithmetic , it follows that in particular , any least squares problem , theorem [ thm : rk : inconsistent ] with tells us that the randomized kaczmarz algorithm works well for least square problems whose least squares error is very close to zero , i.e. , .roughly speaking , in this case the randomized kaczmarz algorithm approaches the minimum -norm least squares solution up to an additive error that depends on the distance between and the column space of . 
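as a concrete illustration of the randomized kaczmarz iteration analyzed above - rows sampled with probability proportional to their squared euclidean norm - the following is a minimal numpy sketch (illustrative only; variable names are not from the paper).

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=10_000, x0=None, seed=0):
    """randomized kaczmarz: pick row i with probability ||A_i||^2 / ||A||_F^2
    and project the current iterate onto the i-th hyperplane."""
    rng = np.random.default_rng(seed)
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    m, n = A.shape
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    row_sq = (A ** 2).sum(axis=1)
    probs = row_sq / row_sq.sum()
    rows = rng.choice(m, size=iters, p=probs)   # pre-draw the row indices
    for i in rows:
        x += ((b[i] - A[i] @ x) / row_sq[i]) * A[i]
    return x
```

for a consistent system the expected squared error contracts by roughly a factor of (1 - sigma_min^2 / ||A||_F^2) per iteration, in line with the rate quoted above; for an inconsistent system the iterates only approach the least squares solution up to the additive term of theorem [ thm : rk : inconsistent ].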
in the present paper ,the main observation is that it is possible to efficiently reduce the norm of the `` noisy '' part of , ( using algorithm [ alg : randop ] ) and then apply the randomized kaczmarz algorithm on a new linear system whose right hand side vector is now arbitrarily close to the column space of , i.e. , .this idea together with the observation that the least squares solution of the latter linear system is equal ( in the limit ) to the least squares solution of the original system ( see fact [ fact : xls ] ) implies a randomized algorithm for solving least squares .next we present the randomized extended kaczmarz algorithm which is a specific combination of the randomized orthogonal projection algorithm together with the randomized kaczmarz algorithm .initialize and pick ] pick ] set set [ alg : stopping ] check every iterations and terminate if it holds : output we describe a randomized algorithm that converges in expectation to the minimum -norm solution vector ( algorithm [ alg : rek ] ) .the proposed algorithm consists of two components . the first component consisting of steps and is responsible to implicitly maintain an approximation to formed by .the second component , consisting of steps 4 and 7 , applies the randomized kaczmarz algorithm with input and the current approximation of , i.e. , applies the randomized kaczmarz on the system .since converges to , will eventually converge to the minimum euclidean norm solution of which equals to ( see fact [ fact : xls ] ) .the stopping criterion of step [ alg : stopping ] was decided based on the following analysis .assume that the termination criteria are met for some .let for some ( which holds by the definition of ) .then , by re - arranging terms and using the second part of the termination criterion , it follows that .now , where we used the triangle inequality , the first part of the termination rule together with and the above discussion .now , since , it follows that equation demonstrates that the forward error of rek after termination is bounded . the following theorem bounds the expected rate of convergence of algorithm [ alg : rek ] .[ thm : rek ] after iterations , in exact arithmetic , algorithm [ alg : rek ] with input ( possibly rank - deficient ) and computes a vector such that for the sake of notation , set and denote by : = \operatorname{\mathbb{e}}[\cdot \ |\ i_0,j_0 , i_1,j_1,\ldots , i_k , j_k] ] and } ] and in the first and second inequality . a similar argument shows that using the inequality } } { \ensuremath{\left\| { { { { \ensuremath{\mathsf{a } } } } } ^{(i)}}\right\|_2}}^2 \leq \sigma_{\max}^2 ] in constant time and linear time preprocessing , generates one sample of the given distribution in constant time .we use an implementation of w. d. 
smith that is described in and c s _ drand48 _ ( ) to get uniform samples from ] .assume that for some arbitrary ] define the affine hyper - planes : assume for now that at the -th iteration of the randomized kaczmarz algorithm applied on , the -th row is selected .note that is the projection of on by the definition of the randomized kaczmarz algorithm on input .let us denote the projection of on by .the two affine hyper - planes are _ parallel _ with common normal , so is the projection of on and the minimum distance between and equals .in addition , since , therefore by orthogonality we get that since is the projection of onto ( that is to say , is a randomized kaczmarz step applied on input where the -th row is selected on the -th iteration ) and is in the row space of , lemma [ lem : avg ] tells us that note that for given selected row we have ; by the distribution of selecting the rows of we have that inequality follows by taking expectation on both sides of equation and bounding its resulting right hand side using equations and . applying inequality inductively , it follows that where we used that is in the row space of .the latter sum is bounded above by .
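the randomized extended kaczmarz iteration described above (algorithm [ alg : rek ]) combines a random column projection, which shrinks the component of the right-hand side orthogonal to the column space, with a randomized kaczmarz row step on the system with right-hand side b - z. the following is a minimal numpy sketch under the assumption that A has no zero rows or columns; the stopping test is only in the spirit of the termination rule of step [ alg : stopping ], with illustrative constants.

```python
import numpy as np

def randomized_extended_kaczmarz(A, b, max_iters=100_000, tol=1e-8,
                                 check_every=8, seed=0):
    """sketch of the randomized extended kaczmarz iteration: z tracks the part
    of b orthogonal to the column space of A (via random column projections),
    while x runs randomized kaczmarz on the system A x = b - z."""
    rng = np.random.default_rng(seed)
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    m, n = A.shape
    x, z = np.zeros(n), b.copy()
    row_sq = (A ** 2).sum(axis=1)
    col_sq = (A ** 2).sum(axis=0)
    p_row = row_sq / row_sq.sum()
    p_col = col_sq / col_sq.sum()
    fro = np.linalg.norm(A, 'fro')
    for k in range(1, max_iters + 1):
        j = rng.choice(n, p=p_col)                    # column step: shrink z
        z -= (A[:, j] @ z / col_sq[j]) * A[:, j]
        i = rng.choice(m, p=p_row)                    # row step on A x = b - z
        x += ((b[i] - z[i] - A[i] @ x) / row_sq[i]) * A[i]
        if k % check_every == 0:                      # illustrative stopping rule
            ok_rows = np.linalg.norm(A @ x - (b - z)) <= tol * fro * np.linalg.norm(x)
            ok_cols = np.linalg.norm(A.T @ z) <= tol * fro ** 2 * np.linalg.norm(x)
            if ok_rows and ok_cols:
                break
    return x
```

in the limit z approaches the component of b orthogonal to the column space, so the row steps effectively solve a consistent system whose right-hand side is the projection of b onto the column space; by fact [ fact : xls ] its minimum-norm solution is the minimum-norm least squares solution of the original system, which is the content of theorem [ thm : rek ] above.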
we present a randomized iterative algorithm that exponentially converges in expectation to the minimum euclidean norm least squares solution of a given linear system of equations. the expected number of arithmetic operations required to obtain an estimate of given accuracy is proportional to the square condition number of the system multiplied by the number of non-zero entries of the input matrix. the proposed algorithm is an extension of the randomized kaczmarz method that was analyzed by strohmer and vershynin.
erich l. lehmann was born in strasbourg on november 20th , 1917 .he passed away in berkeley , california on the morning of september 12th , 2009 .his family left germany in 1933 , as the nazis came to power , to settle in switzerland .he spent five years in zrich and two years in trinity college in cambridge studying mathematics . under the united states french immigration quota strasbourg was , by then , part of france as a consequence of the versailles treaty he arrived in new york at the end of 1940 .edmund landau , the famous number theorist , was an acquaintance of the lehmann family and had suggested trinity college as the place erich should go to study mathematics .landau died in 1938 from a heart attack , but his wife wrote a letter of introduction for erich to take to landau s gttingen colleague richard courant who was now in new york developing what became the courant institute .courant , having offered the option to `` live in new york or in the united states , '' and erich having opted for the latter , recommended the university of california as an up - and - coming good place .erich arrived in berkeley , california in january 1 , 1941 .erich s first order of business was to speak with griffith c. evans , chair of the mathematics department , who immediately accepted him as a probationary graduate student .the probationary status resulted from erich not having a degree .evans , who had been recruited from the mathematics department at rice institute now rice university had a broad vision for mathematics and had the intention of hiring ronald a. fisher , whom he knew .however , a visit by fisher to berkeley did not go well . the news of jerzy neyman s successful visit to the united states , culminating with a set of lectures at the u.s .department of agriculture , reached evans who in 1937 offered neyman a job in the mathematics department at the university of california without having met him . with the advent of the second world war ,evans advised erich that it might be a good idea to move from mathematics to some other area perhaps physics or statistics that could be more useful to the war efforts .erich , not being fond of physics , opted for statistics .his initial experiences , however , led him to second - guess his decision . in lehmann ( 2008b ) , erich writes that `` statistics did not possess the beauty that i had found in the integers and later in other parts of mathematics . instead , ad hoc methods were used to solve problems that were messy and that were based on questionable assumptions that seemed quite arbitrary . ''( hereafter , a reference followed by `` b '' indicates book reference in section 9 ; a bracketed reference [ x ] refers to that numbered reference in section 8 ; other references appear at the end of this work . ) after some soul - searching , he decided to go back to mathematics and approached the great logician alfred tarski .tarski accepted him as a student , but before erich had an opportunity to let evans and neyman know about his decision , neyman offered him a job as a lecturer with some implicit potential for the position to become permanent . feeling that this represented a great opportunity to become part of a community , something that erich very much desired at that point in time , he decided to take the offer and abandoned his plans for returning to mathematics . 
in 1942erich received an m.a .degree in mathematics , and was a teaching assistant in the statistical laboratory from 1942 to 1944 and from 1945 to 1946 .these early years as a graduate student and a teaching member of the department while sharing office space with charles stein , joseph hodges and evelyn fix helped to forge lifetime friendships and productive collaborations . after he spent the year from august 1944 to august 1945 stationed in guam as an operations analyst in the united states air force , erich returned to berkeley and started working on a thesis problem proposed by pao - lu hsu in consultation with neyman .the problem was in probability theory some aspect of the moment problem and after obtaining some results and getting ready to write them up , erich discovered that his results were already in markov s work .the situation became complicated as neyman was invited to supervise the greek elections . before leaving , neyman asked hsu if he could provide another thesis topic for erich .hsu obliged but was not able to supervise erich s thesis , as he followed hotelling from columbia to north carolina and then decided to go back to china .neyman turned to george plya at stanford for help .weekly meetings with plya , commuting between berkeley and stanford , finally yielded a thesis . meanwhile , neyman was back from greece after being relieved of his duties for insubordination .neyman had felt that the elections were rigged and decided to check by himself .when asked to stop , he refused .this turn of events allowed neyman to be back in berkeley for erich s examination .thus , in june of 1946 , erich obtained his ph.d .degree with a thesis titled `` optimum tests of a certain class of hypotheses specifying the value of a correlation coefficient . ''erich was not the first of neyman s berkeley ph.d .students , but he was the first one to be hired by the mathematics department .he held the title of assistant professor of mathematics from 1947 to 1950 , and spent the first half of 19501951 as a visiting associate professor at columbia , and as a lecturer at princeton during the second half of that year . partly to allow more time for the tumultuous situation created in berkeley by the anti - communist loyalty oath to settle down , and partly to make a decision on an offer from stanford , erich spent the year of 19511952 as a visiting associate professor at stanford .erich decided to go back to berkeley , but not before he was able to persuade neyman not to require him to do consulting work for the statistical laboratory .( stanford s offer explicitly mentioned that erich was not expected to do any applied work . ) on his return to berkeley in 1952 , erich was promoted to associate professor of mathematics , and then in 1954 was promoted to professor of mathematics . in 1955 , after evans stepped down as chair of mathematics , thus providing neyman with his opportunity for a new department of statistics , erich s title changed to professor of statistics . in 1988 , erich became professor emeritus and then from 1995 to 1997 he was distinguished research scientist at the educational testing service ( ets ) . in spite of his retirement in 1988 , erich continued to be professionally active and a regular participant in the social life of the department . 
despite offers from stanford in 1951 and from the eidgenssische technische hochschule ( eth ) in 1959 , and except for short stints at columbia , princeton , stanford and ets , erich lived in berkeley from his arrival on january 1st , 1941 until his death on september 12th , 2009erich lehmann s towering contributions to statistics have received many well - deserved accolades .erich was an elected fellow of the institute of mathematical statistics ( ims ) and of the american statistical association ( asa ) , and he was an elected member of the international statistical institute .remarkably , he was the recipient of three guggenheim fellowships ( 1955 , 1966 and 1980 ) and two miller institute for basic research professorships ( 1962 and 1972 ) .the ims honored him as the wald lecturer in 1964the title of his lectures being `` topics in nonparametric statistics . ''this was followed in 1988 by the committee of presidents of statistical societies ( copss ) r. a. fisher memorial lecture entitled `` model specification : fisher s views and some later strategies . '' in 1975 erich was elected fellow of the american academy of arts and sciences and in 1978 he was elected member to the national academy of sciences .election as an honorary fellow of the royal statistical society followed in 1986 and the asa recognized him with the wilks memorial award in 1996 .his life - long work was recognized with two doctorates _ honoris causa _ , the first from the university of leiden in 1985 , and the second from the university of chicago in 1991 .the honor from leiden carries with it the distinction of being the first dr ._ h. c. _ granted by the university of leiden to a mathematician in a century , the previous one having been awarded to stieltjes in 1884 . in 1997 , to celebrate erich s 80th birthday , the berkeley statistics department instituted the lehmann fund to provide support for students . in 2000erich became the first goffried noether award recipient and lecturer for his influential work in nonparametrics .his noether lecture , entitled `` parametrics versus nonparametrics : two alternative methodologies , '' formed the basis for an invited paper with discussion in the _ journal of nonparametrics _ ( jnps ) in 2009 [ 121 ] .posthumously , erich received the best jnps paper award for 2009 .his students and colleagues honored him with a set of reminiscences in 1972 ( j. rojo , ed . ) , a _festschrift for erich l. 
lehmann _ organized by bickel, doksum and hodges in 1982 [ see also bickel, doksum and hodges ( 1983 ) ], and a series of _ lehmann symposia _, organized by rojo and perez-abreu in 2002, and rojo in 2004, 2007 and 2011. perhaps surprisingly, although he was honored with the fisher lecture, he never received the honor of being the neyman lecturer. it may be surmised that erich s lack of affinity for applied work impeded his being so honored. erich served the profession well. although initially reluctant to serve as chair of the statistics department at berkeley, he did so from 1973 to 1976. and he did it very well. brillinger ( 2010 ) writes: `` he had always refused previously for a variety of reasons. he did it so well that i sometimes thought that he must have thought through how a chair should behave and put his conclusions into practice. for example, to the delight of visitors and others he was in the coffee room each day at 10 a.m. he focused on the whole department: staff, students, colleagues and visitors. '' during 1960-1961, erich was ims president and was a leader in the internationalization of the ims [ see, e.g., lehmann ( 2008b ) and van zwet ( 2011 ) ]. he was a member of the executive committee of the miller institute ( 1966-1970 ), and a member of the committee of visitors to the harvard department of statistics ( 1974-1980 ) and princeton ( 1975-1980 ). he served as editor of the _ annals of mathematical statistics _ from 1953-1955 and as associate editor from 1955-1968. he was invited to stay on for a second term as editor but, after accepting, had to decline. for details see lehmann ( 2008b ) and van zwet ( 2011 ). in his youth, erich leo lehmann had a desire to become a writer. in lehmann ( 2008b ), he wrote, `` _ my passion was german literature, my dream to become a writer, perhaps another thomas mann or gottfried keller_. '' surely it was this passion that drove erich to write his successful and influential books. the list includes: 1. * testing statistical hypotheses*.
three editions ( 1959 , 1986 , 2005 ) .the 2005 edition is joint with joseph p. romano .the 1959 edition was translated into russian ( 1964 ) , polish ( 1968 ) and japanese .* basic concepts of probability and statistics * , with joseph l. hodges . two editions ( 1964 , 1970 ) . reprinted in 2005 as part of the siam series classics in applied mathematics .the book was translated into hebrew ( 1972 ) , farsi ( 1994 ) , italian ( 1971 ) and danish ( 1969 ) .* elements of finite probability * , with joseph l. hodges . two editions ( 1965 , 1970 ) .nonparametrics : statistical methods based on ranks * , with the assistance of h. j. m. dabrera .hardcover edition ( 1975 ) by holden - day .paperback edition ( 1998 ) by prentice - hall , inc . ,simon & schuster , and then by springer science in 2006 .the book was translated into japanese ( 1998 ) .* theory of point estimation*. two editions ( 1983 , 1998with george casella ) .the 1983 edition was translated into russian ( 1991 ) , and the 1998 edition into chinese ( 2004 ) . 6 .* elements of large - sample theory * , 1999 . 7 .* reminiscences of a statistician : the company i kept * , 2008 . additionally , erich collaborated with judith m. tanur on the book _ statistics _ : _ a guide to the unknown_. this book went through several editions and translations [ chinese ( 1980 ) and spanish ( 1992 ) ] .spin - offs from this book were two other books with similar titles : _ statistics _ : _ a guide to the study of the biological and health sciences _ and _ statistics _ : _ a guide to political and social issues _ , both published in 1977 , and on which erich collaborated .erich served as co - editor or special editor .the complete list of books and their translations is given in section 9 .@ + + [ cols="^,^ " , ] + * erich l. lehmann in 1919 , 1992 and 2004 . * the book _ fisher _ , _ neyman _ , _ andthe creation of classical statistics _ has now been published posthumously by springer , lehmann ( 2011b ) .erich was finishing the manuscript at the time of his death .juliet shaffer worked diligently after erich s passing to bring the book to publication form .fritz scholz continues work on a revision , started before erich s death , of the _ nonparametrics _ : _ statistical methods based on ranks _ book .the revision incorporates the use of r and the book is expected to be completed in two years .99 lehmann , e. l. ( 1959 ) ._ testing statistical hypotheses_. wiley , new york .hodges , j. l. jr . and lehmann , e. l. ( 1964 ) ._ basic concepts of probability and statistics_. holden - day , san francisco , ca .lehmann , e. l. ( 1964 ) .russian translation of _ testing statistical hypotheses _ , moscow .hodges , j. l. jr . and lehmann , e. l. ( 1965 ) ._ elements of finite probability_. holden - day , san francisco , ca .lehmann , e. l. ( 1968 ) ._ tesowanie hipotez statvstycznych_. polish translation of _ testing statistical hypotheses_. panstwowe wydawnicto naukowe , warsaw .hodges , j. l. jr . andlehmann , e. l. ( 1969 ) ._ grundbegreger i sandsynhghedsregning og statistik_. danish translation of _ basic concepts of probability and statistics_. nyt nordisk forlag , copenhagen .hodges , j. l. jr . and lehmann , e. l. ( 1970 ) ._ basic concepts of probability and statistics _ , 2nd edholden - day , san francisco , ca .hodges , j. l. jr . and lehmann , e. l. ( 1970 ) ._ elements of finite probability _holden - day , san francisco , ca .hodges , j. l. jr . and lehmann , e. l. 
( 1971 ) .italian translation of _ basic concepts of probability and statistics _ , two volumes .societa editrice il mulino , bologna , italy .hodges , j. l. jr . and lehmann , e. l. ( 1972 ) .hebrew translation of _ basic concepts of probability and statistics _, 2nd ed .lehmann , e. l. japanese translation of _ testing statistical hypotheses _ 418 pp .iwanami shoten , tokyo .lehmann , e. l. ( 1975 ) ._ nonparametrics : statistical methods based on ranks_. holden - day , san francisco , ca .tanur , j. m. , mosteller , f. , kruskal , w. h. , lehmann , e. l. , link , r. f. , pieters , r. s. and rising , g. r. , co - eds .( 1977 ) . _statistics _ : _ a guide to the study of the biological and health sciences _ 140 pp .holden - day , san francisco .tanur , j. m. , ed . , lehmann , e. l. , special ed ., mosteller , f. , kruskal , w. h. , link , r. f. , pieters , r. s. and rising , g. r. , co - eds ._ statistics _ : _ a guide to political and social issues _ 141 pp .holden - day , san francisco .lehmann , e. l. ( 1978 ) .japanese translation of _ nonparametrics _ :_ statistical methods based on ranks_. tokyo , 1978 .tanur , j. m. , ed . , lehmann , e. l. , special ed . , mosteller , f. , kruskal , w. h. , link , r. f. , pieters , r. s. and rising , g. r. , co - eds .( 1978 ) . _statistics _ : _ a guide to the unknown _ , 2nd ed .holden - day , san francisco .lehmann , e. l. ( 1983 ) ._ theory of point estimation_. _ wiley series in probability and mathematical statistics _ : _ probability and mathematical statistics_. wiley , new york .tanur , j. m. , ed . , lehmann , e. l. , special ed ., mosteller , f. , kruskal , w. h. , link , r. f. , pieters , r. s. and rising , g. r. , co - eds ._ statistics _ : _ a guide to the unknown _ , 2nd ed . reprint .wadsworth & brooks / cole , monterey , ca .lehmann , e. l. ( 1986 ) ._ testing statistical hypotheses _ , 2nd ed .wiley , new york .tanur , j. m. , mosteller , f. , krusal , w. h. , lehmann , e. l. , link , r. f. , pieters , r. s. and rising , g. r. , co - eds ._ statistics _ : _ a guide to the unknown _ , 3rd ed .wadsworth .tanur , j. m. , ed . , lehmann , e. l. , special ed ., mosteller , f. , kruskal , w. h. , link , r. f. , pieters , r. s. and rising , g. r. , co - eds .chinese translation of _ statistics _ : _ a guide to the unknown _ , 2nd ed .lehmann , e. l. ( 1991 ) .russian translation of _ theory of point estimation_. tanur , j. m. , ed . ,lehmann , e. l. , special ed ., mosteller , f. , kruskal , w. h. , link , r. f. , pieters , r. s. and rising , g. r. , co - eds .spanish translation of _ statistics _ : _ a guide to the unknown _ , 2nd ed .alianza editorial , s. a. , spain , hodges , j. l. jr . and lehmann , e. l. ( 1994 ) .farsi translation of _ basic concepts of probability and statistics_. lehmann , e. l. and casella , g. ( 1998 ) ._ theory of point estimation _springer , new york .lehmann , e. l. ( 1999 ) ._ elements of large - sample theory_. springer , new york .lehmann , e. l. and casella , g. ( 2004 ) .chinese translation of _ theory of point estimation _china statistics press .lehmann , e. l. and romano , j. p. ( 2005 ) ._ testing statistical hypotheses _ , 3rd edspringer , new york .lehmann , e. l. ( 2008 ) ._ reminiscences of a statistician _ : _ the company i kept_. springer , new york. lehmann , e. l. ( 2011 ) ._ fisher , neyman , and the creation of classical statistics_. 
springer , new york .erich s contributions are multifaceted and too many to do justice to in the allotted space .a more extensive and careful assessment of his work is provided in rojo ( 2011 ) . here , only a small part of his work will be briefly reviewed .some of his ground - breaking work in nonparametric statistics is discussed in this issue by van zwet ( 2011 ) . while still a graduate student at berkeley , erich submitted a paper that was published in 1947 [ 2 ] , in which the issue of what to do when a uniformly most powerful ( ump ) test does not exist is discussed .erich proposed that , due to the many tests available to choose from , one must reduce attention to a class of tests with the property that for any test not in , there is a test in with a power function at least as good as that of . and if and are two tests in , then neither one dominates the other .in addition , the paper characterizes the class for a special case .erich recognized that the class may still be too large to offer much relief in finding a good solution and , therefore , other information or principles may be needed to further narrow down the class .thus , the concept of minimal complete classes , that plays a fundamental role in the theory of statistical decisions of wald ( 1950 ) , was born in this paper . in his book_ statistical decision functions _( 1950 ) , wald credits lehmann : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the concept of complete class of decision functions was introduced by lehmann , and the first result regarding such classes is due to him [ 30] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ interestingly , neyman was not impressed by this work . in degroot ( 1986 ) erich states : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ i wrote it up it was just a few pages and said to neyman that i would like to publish it .he essentially said , `` it s junk. do not bother . ''but i sent it in to wilks anyway ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ some of erich s early work was motivated by the work of hsu ( 1941 ) that dealt with optimal properties of the likelihood ratio test in the context of analysis of variance . 
in lehmann ( 1959 )[ 34 ] , erich shows that these optimal properties are consequences of the fact that the test is uniformly most powerful invariant .in addition , the paper unified optimality results of kiefer ( 1958 ) for symmetrical nonrandomized designs , and optimality results of wald ( 1942 ) for the analysis of variance test for the general univariate linear hypothesis .hsu ( 1941 ) also proposed a method for finding all similar tests .lehmann ( 1947 ) [ 3 ] extended hsu s results to the composite null hypothesis problem , and ideas in hsu ( 1941 ) motivated the concept of completeness in lehmann and scheff ( 1950 ) [ 12 ] .lehmann and scheff ( 1950 ) [ 12 ] and lehmann and scheff ( 1955 ) [ 26 ] provided a comprehensive study of the concepts of similar regions and sufficient statistics .together with lehmann and stein ( 1950 ) [ 11 ] , where uniformly minimum variance unbiased estimators are discussed in the sequential sampling context , these papers provide the final word on certain problems in hypotheses testing and estimation .hodges and lehmann ( 1950 , 1951 , 1952 ) [ 10 , 13 , 16 ] , provided minimax estimators for several examples and the admissibility of minimax estimators and connections with bayes estimators were discussed . in hodges and lehmann ( 1950 ) [ 10 ] , a minimax estimator for the probability of success in a binomial experiment is obtained by considering the bayes estimator with respect to a beta conjugate prior that yields a bayes estimator with constant risk .the minimax estimator thus found is admissible due to the uniqueness of the bayes estimator .the results are extended to the case of two independent binomial distributions and a minimax estimator is obtained for the difference of the probability of successes when the sample sizes are equal .the question of whether a minimax estimator exists for the difference of the success probabilities for unequal sample sizes remains an open problem .the papers also consider the nonparametric case , and methods for deriving nonparametric minimax estimators are provided under certain conditions .the concept of complete classes having been formalized by wald ( 1950 ) , the paper also shows that , for convex loss functions , the class of nonrandomized estimators is essentially complete .hodges and lehmann ( 1951 ) [ 13 ] used a different approach to obtain minimax and admissible estimators when the loss function is a weighted squared error loss .the method requires the solution of a differential inequality involving the lower bound for the mean squared error .various sequential problems were discussed and minimax estimators were derived .hodges and lehmann ( 1952 ) [ 16 ] proposed finding estimators whose maximum risk does not exceed the minimax risk by more than a given amount .under this restriction it was proposed to find the _ restricted bayes solution _ with respect to some prior distribution .that is , find that minimizes subject to .conditions were discussed for the existence of restricted bayes estimators and several examples were provided that illustrate the method . 
it was argued that wald s theory can be extended to obtain results for these restricted bayes procedures .wald ( 1950 ) obtained the existence of least favorable distributions under the assumption of a compact parameter space .lehmann ( 1952 ) [ 17 ] addressed this issue and , in the case of hypothesis testing and , more generally , in the case where only a finite number of decisions are available , lehmann weakened the conditions for the existence of least favorable distributions .lehmann and stein ( 1953 ) [ 20 ] proved the admissibility of the most powerful invariant test when testing certain hypotheses in the location parameter family context .erich s work on hypothesis testing is well known . heresome aspects of that work are briefly reviewed .lehmann ( 1947 ) [ 3 ] and lehmann and stein ( 1948 ) [ 5 ] studied the problem of testing a composite ( null ) hypothesis .the 1947 paper extends the work of scheff ( 1942 ) .suppose that is a -dimensional parameter space .let be the subset of given by , for one .then the null hypothesis is an example of a composite ( null ) hypothesis with one constraint , and the parameters , are nuisance parameters .neyman ( 1935 ) provided type b regions for the case of a single nuisance parameter .these results were extended by scheff to the case of several nuisance parameters ( under ) , and scheff provided sufficient conditions for these type b regions to also be type b ( uniformly most powerful unbiased ) regions .lehmann ( 1947 ) [ 3 ] utilized neyman and pearson s ( 1933 ) and hsu s ( 1945 ) methods to determine the totality of similar regions and extended scheff s results to obtain uniformly most powerful tests against one - sided alternatives .hsu s method was also employed to obtain ump regions in cases , for example , location and scale exponential and uniform distributions , where neyman and pearson s method does not apply .the above approach is not as fruitful in the case of more than one constraint , but results of hsu ( 1945 ) are useful in this regard . in lehmann and stein ( 1948 )[ 5 ] the problem of testing a composite hypothesis against a single alternative is addressed by relaxing the condition of similarity to one requiring only that or all , where denotes the critical region of the test .adapting the neyman pearson lemma to hold in this case , sufficient conditions for the existence of most powerful tests were derived .the results for student s problem , with composite null hypothesis given by the normal family with mean and unknown variance , and the simple alternative hypothesis given by the normal distribution with known parameters were somewhat surprising ; see lehmann ( 2008b ) , page 48 . lehmann ( 1950 , 1959 , 2006 )[ 9 , 34 , 118 ] deal with the likelihood ratio principle for testing .although this principle is `` intuitive '' and provides `` reasonable '' tests , it is well known that it may fail .the papers examine different aspects of the problem focusing on the optimality of the likelihood ratio test in some cases , and in its total failure in other cases .lehmann ( 1959 ) [ 34 ] considered a class of invariant tests endowed with an order that satisfies certain properties .it was then shown that , in this case , the likelihood ratio test s optimality properties follow directly from the fact that the test is uniformly most powerful invariant .see also section 4.1 . 
in lehmann ( 2006 )[ 118 ] and lehmann ( 1950 ) [ 9 ] , properties of tests produced by other approaches are examined and compared to the likelihood ratio tests .for example , when the testing problem remains invariant with respect to a transitive group of transformations , the _ likelihood averaged or integrated with respect to an invariant measure approach _ in lehmann ( 2006 ) [ 118 ] produces tests that turn out to be uniformly at least as powerful as the corresponding likelihood ratio test , with the former being strictly better except when the two coincide ; and in the absence of invariance , the proposed approach continues to improve on the likelihood ratio test for many cases .lehmann ( 1950 ) [ 9 ] was discussed in section 4.1 .lehmann s work on orderings of probability distributions was motivated in part from the need to study properties of power functions .thus , lehmann ( 1955 ) [ 27 ] discussed the stochastic and monotone likelihood ratio orderings .the latter plays a fundamental role in the theory of uniformly most powerful tests and both can be characterized in terms of the function ; see , for example , lehmann and rojo ( 1992 ) [ 98 ] .it is this function that also plays a fundamental role in the lehmann alternatives and , hence , is also connected with the cox proportional hazards model and has now spilled over to the literature on receiving operating characteristic ( roc ) curves .a different collection of partial orderings between distributions and can be defined in terms of the function .bickel and lehmann ( 1979 ) [ 64 ] considered the dispersive ordering defined by requiring that for all , and considered several of its characterizations .this concept is equivalent , under some conditions , to a tail - ordering introduced by doksum ( 1969 ) .this function , , is also useful in comparing location experiments ( lehmann ( 1988 ) [ 85 ] ) .lehmann ( 1966 ) [ 47 ] introduced concepts of dependence for random variables .this work has attracted a lot of attention in the literature from applied probabilists and statisticians alike .erich believed in the frequentist interpretation of probability and in the neyman pearson wald school of optimality , but recognized that both perspectives have their limitations .see , for example , page 188 of lehmann ( 2008b ) .bickel and lehmann ( 2001 ) [ 110 , 111 ] discussed some of the philosophical shortcomings of a frequentist interpretation of probability .erich felt that optimality considerations achieve solutions that may lack robustness and other desirable properties .his work on foundational issues focused on the following : ( i ) model selection ; ( ii ) frequentist statistical inference ; ( iii ) bayesian statistical inference ; and ( iv ) exploratory data analysis . 
restricting attention to ( ii ) , ( iii ) and ( iv ) , erich viewed the trichotomy as being ordered by the level of model assumptions made . thus , ( iv ) is free of any model assumptions and allows the data to speak for itself , while the frequentist approach relies on a probability model to evaluate the procedures under consideration . the bayesian approach , in addition , brings in the prior distribution . erich felt that none of these approaches is perfect . motivated by this state of affairs , lehmann ( 1985 , 1995 ) [ 82 , 104 ] developed ideas that bridge the divide created by the heated philosophical debates . lehmann ( 1985 ) [ 82 ] discussed how the neyman pearson wald approach contributes to the exploration of underlying data structure and its relation with bayesian inference . lehmann ( 1995 ) [ 104 ] continued with this line of thought :

`` in practice , the three approaches can often fruitfully interact , with each benefiting from considerations of the other points of view . it seems clear that model - free data analysis , frequentist and bayesian model - based inference and decision making each has its place . the question appears not to be , as it is often phrased , which is the correct approach , but in what circumstances each is most appropriate . ''

erich s balanced view of foundational issues is appealing . his work reflects the belief that no single paradigm is totally satisfactory .
rather than exacerbating their differences through heated debates , he proposed that a fruitful approach is possible by consolidating the good ideas from ( ii ) , ( iii ) and ( iv ) , with ( iii ) serving as a bridge that connects all three . although his original position was solidly in the frequentist camp , he shifted , somewhat influenced by classical bayesian ideas . however , he felt that a connection with the radical bayesian position was more challenging . he states in lehmann ( 1995 ) [ 104 ] that `` bridge building to the radical bayesian position is more difficult . '' a definition of the radical bayesian position is not provided , but it can be surmised that this refers to a paradigm that insists on the elicitation of a prior distribution at all costs . in lehmann ( 2008b ) , he writes :

`` however , it seems to me that the strength of these beliefs tends to be rather fuzzy , and not sufficiently well defined and stable to assign a definite numerical value to it . if , with considerable effort , such a value is elicited , it is about as trustworthy as a confession extracted through torture . ''

i first attended u.c . berkeley during the fall of 1978 . my first course was statistics 210 a , the first quarter of theoretical statistics . the recollections of my days as a student during that first quarter , followed by two more quarters of theoretical statistics , statistics 210 b and c , all taught by erich , are very vivid . during that first academic year , i was very impressed with erich s lecturing style . he would present the material without unnecessarily dwelling too long on technical details , and in such a way that connections with previous material seemed virtually seamless . it was quite enjoyable to follow `` the story '' behind the theory . his lectures were so perfectly organized even when only using a few notes on his characteristic folded - in - the - coats - pocket yellow sheets !
regarding teaching , erich wrote in lehmann ( 2008b ) :

`` while i eschewed very large courses , i loved the teaching that occurred at the other end of the spectrum . working on a one - on - one basis with ph.d . students was , for me , the most enjoyable and rewarding aspect of teaching . at the same time , it was an extension of my research , since these students would help me explore areas in which i was working at the time . ''

this love for one - on - one teaching produced a total of 43 ph.d . students . curiously , two of erich s ph.d . students obtained their degrees from columbia rather than from berkeley . that these students graduated from columbia , rather than from berkeley , resulted from a confluence of circumstances . although erich had received an invitation from wald to visit columbia during the 1949 - 1950 academic year , erich had to postpone his visit to columbia for the following year since neyman took a sabbatical during the 1949 - 1950 academic year . after wald s tragic and untimely death , two of wald s students approached erich with a request to become his students . these students are marked with an asterisk in the following table that presents the names and dissertation titles , by year of degree , for all 43 of erich s ph.d . students . 1 . * colin ross blyth * + _ i. contribution to the statistical theory of the geiger muller counter _ ; + _ ii . on minimax statistical decision procedures and their admissibility . _ + * fred charles andrews * + _ asymptotic behavior of some rank tests for analysis of variance . _ + * jack laderman * * + _ on statistical decision functions for selecting one of k populations . _ + * hendrik salomon konijn * + _ on the power of some tests for independence . _ + * allan birnbaum * * + _ characterizations of complete classes of tests of some multiparametric hypotheses , with applications to likelihood ratio tests . _ + * balkrishna v. sukhatme * + _ testing the hypothesis that two populations differ only in location . _ 5 . * v. j. chacko * + _ testing homogeneity against ordered alternatives . _ 6 .
* piotr witold mikulski * + _ some problems in the asymptotic theory of testing statistical hypotheses ._ 7 . * madan lal puri * + _ asymptotic efficiency of a class of c - sample tests . _ + * krishen lal mehra * + _ rank tests for incomplete block designs .paired - comparison case . _ + * subha bhuchongkul sutchritpongsa * + _ class of non - parametric tests for independence in bivariate populations . _ + * shishirkumar shreedhar jogdeo * + _ nonparametric tests for regression models . _ 8 . *peter j. bickel * + _ asymptotically nonparametric statistical inference in the multivariate case . _ + * arnljot hyland * + _ some problems in robust point estimation .* milan kumar gupta * + _ an asymptotically nonparametric test of symmetry . _ + * madabhushi raghavachari * + _ the two - sample scale problem when locations are unknown . _ + * ponnapalli venkata ramachandramurty * + _ on some nonparametric estimates and tests in the behrens fisher situation . _ + * vida greenberg * + _ robust inference in some experimental designs .kjell andreas doksum * + _ asymptotically minimax distribution - free procedures . _ + * william harvey lawton * + _ concentration of random quotients . _ 11 . * shulamith gross * + _ nonparametric tests when nuisance parameters are present . _ + * bruce hoadley * + _ the theory of large deviations with statistical applications . _ + * gouri kanta bhattacharyya * + _ multivariate two - sample normal scores test for shift . _ + * james nwoye adichie * + _ nonparametric inference in linear regression ._ + * dattaprabhakar v. gokhale * + _ some problems in independence and dependence . _ 12 . * frank rudolph hampel * + _ contributions to the theory of robust estimation .* wilhelmine von turk stefansky * + _ on the rejection of outliers by maximum normed residual . _ + * neil h. timm*co - advisors erich leo lehman and leonard marascuilo + _ estimating variance covariance and correlation matrices from incomplete data . _ + * louis jaeckel * + _ robust estimates of location .friedrich wilhelm scholz * + _ comparison of optimal location estimators ._ + * dan anbar * + _ on optimal estimation methods using stochastic approximation procedures .* michael denis stuart * + _ components of 2 for testing normality against certain restricted alternatives . _ + * claude l. guillier* + _ asymptotic relative efficiencies of rank tests for trend alternatives . _ + * sherali mavjibhai makani * + _ admissibility of linear functions for estimating sums and differences of exponential parameters .howard j. m. dabrera * + _ rank tests for ordered alternatives .* hyun - ju yoo jin * + _ robust measures of shift . _ 18 . * amy poon davis * + _ robust measures of association .* jan f. bjornstad * + _ on optimal subset selection procedures .* william paul carmichael * + _ the rate of weak convergence of a vector of u - statistics generated by a single sample . _ + * david draper * + _ rank - based robust analysis of linear models . _ 21 . * wei - yin loh * + _ tail - orderings on symmetric distributions with statistical applications .* marc j. sobel * + _ admissibility in exponential families .javier rojo * + _ on lehmann s general concept of unbiasedness and the existence of l - unbiased estimators . _999 lehmann , e. l. ( 1946 ) .une proprit optimale de certains ensembles critiques du type a1 ._ c. r. acad .paris _ * 223 * 567569 .lehmann , e. l. ( 1947 ) . on families of admissible tests ._ ann . math .statist . _* 18 * 97104 .lehmann , e. l. ( 1947 ) . 
on optimum tests of composite hypotheses with one constraint .statist . _* 18 * 473494 .lehmann , e. l. and scheff , h. ( 1947 ) . on the problem of similar regions .usa _ * 33 * 382386 .lehmann , e. l. and stein , c. ( 1948 ) .most powerful tests of composite hypotheses. i. normal distributions . __ * 19 * 495516 .lehmann , e. l. ( 1949 ) .some comments on large sample tests . in_ proceedings of the berkeley symposium on mathematical statistics and probability _ 451457 .california press , berkeley .lehmann , e. l. ( 1949 ) .recent publications : elementary statistical analysis . _amer . math . monthly _ * 56 * 429430 .lehmann , e. l. and stein , c. ( 1949 ) .on the theory of some nonparametric hypotheses .statist . _* 20 * 2845 .lehmann , e. l. ( 1950 ) .some principles of the theory of testing hypotheses ._ ann . math .statist . _* 21 * 126 .hodges , j. l. jr . and lehmann , e. l. ( 1950 ) . some problems in minimax point estimation .statist . _* 21 * 182197 .lehmann , e. l. and stein , c. ( 1950 ) .completeness in the sequential case .statist . _* 21 * 376385 .lehmann , e. l. and scheff , h. ( 1950 ) .completeness , similar regions , and unbiased estimation .i. _ sankhy _ * 10 * 305340 .hodges , j. l. jr . and lehmann , e. l. ( 1951 ) .some applications of the cramr rao inequality . in _ proceedings of the second berkeley symposium on mathematical statistics and probability _ 1322 .california press , berkeley .lehmann , e. l. ( 1951 ) .consistency and unbiasedness of certain nonparametric tests ._ ann . math .statist . _* 22 * 165179 .lehmann , e. l. ( 1951 ) .a general concept of unbiasedness .statist . _* 22 * 587592 .hodges , j. l. jr . and lehmann , e. l. ( 1952 ) .the use of previous experience in reaching statistical decisions . _ ann .statist . _* 23 * 396407 .lehmann , e. l. ( 1952 ) . on the existence of least favorable distributions ._ * 23 * 408416 .lehmann , e. l. ( 1952 ) .testing multiparameter hypotheses .statist . _* 23 * 541552 .lehmann , e. l. ( 1953 ) .the power of rank tests .statist . _* 24 * 2343 .lehmann , e. l. and stein , c. m. ( 1953 ) .the admissibility of certain invariant statistical tests involving a translation parameter ._ * 24 * 473479 .chernoff , h. and lehmann , e. l. ( 1954 ) .the use of maximum likelihood estimates in tests for goodness of fit .statist . _* 25 * 579586 .hodges , j. l. jr . and lehmann , e. l. ( 1954 ) . matching in paired comparisons .statist . _* 25 * 787791 .hodges , j. l. jr . and lehmann , e. l. ( 1954 ) .testing the approximate validity of statistical hypotheses .b _ * 16 * 261268 .anderson , t. w. , cramer , h. , hodges , j. l. , freeman , jr .h. a. , lehmann , e. l. , mood , a. m. and stein , c. ( 1955 ) . the life of abraham wald . in _ selected papers in statistics and probability . _ mcgraw - hill , new york .bahadur , r. r. and lehmann , e. l. ( 1955 ) .two comments on `` sufficiency and statistical decision functions . '' _ ann ._ * 26 * 139142 .lehmann , e. l. and scheff , h. ( 1955 ) .completeness , similar regions , and unbiased estimation ._ sankhy _ * 15 * 219236 .lehmann , e. l. ( 1955 ) .ordered families of distributions . __ * 26 * 399419 .hodges , j. l. jr . and lehmann , e. l. ( 1956 ) .two approximations to the robbins monro process . in _ proceedings of the third berkeley symposium on mathematical statistics and probability , 19541955 , vol . i _california press , berkeley .hodges , j. l. jr . and lehmann , e. l. 
( 1956 ) .the efficiency of some nonparametric competitors of the -test ._ * 27 * 324335 .lehmann , e. l. ( 1957 ) .a theory of some multiple decision problems ._ * 28 * 125 .lehmann , e. l. ( 1957 ) .a theory of some multiple decision problems ._ * 28 * 547572 .lehmann , e. l. ( 1958 ) .significance level and power .statist . _* 29 * 11671176 .fix , e. , hodges , j. l. jr . and lehmann , e. l. ( 1959 ) . the restricted chi - square test . in _ probability and statistics : the harald cramr volume _( u. grenander , ed . ) 92107 .almqvist & wiksell , stockholm .lehmann , e. l. ( 1959 ) .optimum invariant tests ._ * 30 * 881884 .hodges , j. l. jr . and lehmann , e. l. ( 1961 ) .comparison of the normal scores and wilcoxon tests . in _ proc .4th berkeley sympos .math . statist . and prob .california press , berkeley , ca .lehmann , e. l. ( 1961 ) .some model i problems of selection .statist . _* 32 * 9901012 .hodges , j. l. jr . and lehmann , e. l. ( 1962 ) .rank methods for combination of independent experiments in analysis of variance .statist . _* 33 * 482497 .hodges , j. l. jr . andlehmann , e. l. ( 1962 ) .probabilities of rankings for two widely separated normal distributions . in _ studies in mathematical analysis and related topics_ 146151 .stanford univ . press , stanford , ca .hodges , j. l. jr . and lehmann , e. l. ( 1963 ) .estimates of location based on rank tests .statist . _* 34 * 598611 .lehmann , e. l. ( 1963 ) .robust estimation in analysis of variancestatist . _* 34 * 957966 .lehmann , e. l. ( 1963 ) . a class of selection procedures based on ranks ._ * 150 * 268275 .lehmann , e. l. ( 1963 ) .asymptotically nonparametric inference : an alternative approach to linear models .statist . _* 34 * 14941506 .lehmann , e. l. ( 1963 ) .nonparametric confidence intervals for a shift parameter .* 34 * 15071512 .lehmann , e. l. ( 1964 ) .asymptotically nonparametric inference in some linear models with one observation per cell . _statist . _* 35 * 726734 .lehmann , e. l. ( 1965 ) .on the non - verifiability of certain parametric functions .primen . _ * 10 * 758760. lehmann , e. l. ( 1966 ) . on a theorem of bahadur and goodman ._ ann . math .statist . _* 37 * 16 .lehmann , e. l. ( 1966 ) .some concepts of dependence .statist . _* 37 * 11371153 .hodges , j. l. jr . and lehmann , e. l. ( 1967 ) .moments of chi and power of . in _ proc .fifth berkeley sympos .math . statist . and probability ( berkeley , calif ., 1965/66 ) , vol .i : statistics _ 187201 .california press , berkeley , ca .hodges , j. l. jr . andlehmann , e. l. ( 1967 ) . on medians and quasi medians ._ j. amer .assoc . _ * 62 * 926931 .hodges , j. l. jr . and lehmann , e. l. ( 1968 ) . a compact table for power of the -test .statist . _* 39 * 16291637 .lehmann , e. l. ( 1968 ) .hypothesis testing . in _ international encyclopedia of the social sciences _( d. l. sills , ed . )macmillan , new york .bickel , p. j. and lehmann , e. l. ( 1969 ) .unbiased estimation in convex familiesstatist . _* 40 * 15231535 .hodges , j. l. jr . and lehmann , e. l. ( 1970 ) . deficiency ._ ann . math .statist . _* 41 * 783801 .hodges , j. l. jr . and lehmann , e. l. ( 1973 ) .wilcoxon and test for matched pairs of typed subjects ._ j. amer .* 68 * 151158 .le cam , l. and lehmann , e. l. ( 1974 ) .j. neyman : on the occasion of his 80th birthday .statist . _* 2 * vii xiii .bickel , p. j. and lehmann , e. l. ( 1974 ) .measures of location and scale . in _ proceedings of the prague symposium on asymptotic statistics _ ( _ charles univ . 
_ ,_ prague _ , 1973 ) , _ vol .i _ ( j. hajek , ed . ) 2536 .charles univ . ,prague .bickel , p. j. and lehmann , e. l. ( 1975 ) .descriptive statistics for nonparametric models .i. introduction .statist . _* 3 * 10381044 .lo kam , l. and lehmann , l. ( 1975 ) .professor jerzy neyman . on the occasion of his 80th birthday ._ fiz .- mat .* 18 * 152156 .bickel , p. j. and lehmann , e. l. ( 1975 ) .descriptive statistics for nonparametric models .ii . location ._ * 3 * 10451069 .bickel , p. j. and lehmann , e. l. ( 1976 ) .descriptive statistics for nonparametric models .statist . _* 4 * 11391158 .lehmann , e. l. and shaffer , j. p. ( 1977 ) . on a fundamental theorem in multiple comparisons ._ j. amer .assoc . _ * 72 * 576578 .lehmann , e. l. ( 1978 ) .henry scheff , 19071977 ._ internat ._ * 46 * 126 .lehmann , e. l. and shaffer , j. p. ( 1979 ) .optimum significance levels for multistage comparison procedures .statist . _ * 7 * 2745 .bickel , p. j. and lehmann , e. l. ( 1979 ) .descriptive statistics for nonparametric models .iv . spread . in _ contributions to statistics _ 3340 .reidel , dordrecht .anderson , t. w. , chung , k. l. and lehmann , e. l. ( 1979 ) .pao lu hsu : 19091970 .statist . _ * 7 * 467470 .lehmann , e. l. ( 1979 ) .hsu s work on inference ._ * 7 * 471473 .daniel , c. and lehmann , e. l. ( 1979 ) .henry scheff , 19071977 ._ * 7 * 11491161 .lehmann , e. l. ( 1980 ) .efficient likelihood estimators .statist . _ * 34 * 233235 .lehmann , e. l. ( 1980 ) .the work of pao lu hsu on statistical inference ._ knowledge practice math ._ * 3 * 68 .anderson , t. w. , chung , k. l. and lehmann , e. l. ( 1980 ) .pao lu hsu : 19101970 ._ knowledge practice math ._ * 3 * 35 .lehmann , e. l. ( 1981 ) .an interpretation of completeness and basu s theorem ._ j. amer .assoc . _ * 76 * 335340 .bickel , p. j. and lehmann , e. l. ( 1981 ) .a minimax property of the sample mean in finite populations .statist . _* 9 * 11191122 .hodges , j. l. jr . and lehmann , e. l. ( 1982 ) .minimax estimation in simple random sampling . in _ statistics and probability : essays in honor of c. r. rao_ ( kallianpur et al . , eds . ) .north - holland , amsterdam .lehmann , e. l. and reid , c. ( 1982 ) .in memoriam : jerzy neyman ( 18941981 ) . _ amer .statist . _ * 36 * 161162 .lehmann , e. l. ( 1982 ) .. _ encycl .* 2 * 20792087 .hodges , j. l. and lehmann , e. l. ( 1983 ) .hodges lehmann estimators .sci . _ * 3 * 31803183 .lehmann , e. l. ( 1983 ) .estimation with inadequate information ._ j. amer .assoc . _ * 78 * 624627 .lehmann , e. l. ( 1983 ) .least informative distributions . in _ recent advances in statistics_ 593599 . academic press , new york .lehmann , e. l. ( 1983 ) .comparison of experiments for some multivariate normal situations . in _ studies in econometrics ,time series , and multivariate statistics _( karlin et al . , eds . ) 491503 . academic press , new york .lehmann , e. l. ( 1984 ) .specification problems in the neyman pearson wald theory . in _ statistics : an appraisal .proceedings 50th anniversary conference _ ( h. a. david and h. t. david , eds . ) 425436 .iowa state univ .press , ames , ia .lehmann , e. l. ( 1985 ) .the neyman pearson lemma ._ * 6 * 224230 .lehmann , e. l. ( 1985 ) .the neyman pearson theory after fifty years . in_ proceedings of the berkeley conference in honor of jerzy neyman and jack kiefer , vol .i _ ( l. le cam and r. a. olshen , eds . ) 114 .wadsworth , belmont , ca .lehmann , e. l. ( 1988 ) .sci . _ * 9 * 386391 .lehmann , e. l. 
( 1988 ) .statistics an overview . __ * 8 * 683702 .lehmann , e. l. ( 1988 ) .comparing location experiments .statist . _* 16 * 521533 .lehmann , e. l. and shaffer , j. p. ( 1988 ) .inverted distributions . _statist . _ * 42 * 191194 .lehmann , e. l. ( 1989 ) .group families . in _ encycl . statist ._ 7071 .diaconis , p. and lehmann , e. l. ( 1990 ) .contributions to mathematical statistics . in _ a statistical model _ : _ frederick mosteller s contributions to statistics , science , and public policy _ ( s. e. fienberg , ed . ) 5980 .springer , new york .lehmann , e. l. ( 1990 ) .verifiability and strong verifiability . in _zapiski nauchnykh seminarov leningradskogo otdeleniya matematicheskogo instituta imeni v. a. steklova akademii nauk sssr _( _ lomi _ ) ( ibragimov et al . , eds . ) * 184 * 182188 .lehmann , e. l. ( 1990 ) .model specification : the views of fisher and neyman , and later developments ._ statist .sci . _ * 5 * 160168 .lehmann , e. l. and loh , w .- y .pointwise versus uniform robustness of some large - sample tests and confidence intervals .j. statist . _* 17 * 177187 .lehmann , e. l. ( 1990 ) .comment on lindley : the present position in bayesian statistics . _sci . _ * 5 * 8283 .lehmann , e. l. and scheff , h. ( 1990 ) . in _ dictionary of scientific biography _* 18 * supplement ii .lehmann , e. l. and neyman , j. ( 1990 ) . in _ dictionary of scientific biography _* 18 * supplement ii .( reproduced as : neyman , j. ( 2008 ) . _complete dictionary of scientific biography _ * 18 * 669675 .charles scribner s sons , detroit .gale virtual reference library . web .14 june 2011 . )lehmann , e. l. ( 1991 ) .introduction to student ( 1908 ) : the probable error of a mean . in _ breakthroughs in statistics ,( s. kotz and n. l. johnson , eds . ) 2932 .springer , new york .lehmann , e. l. ( 1991 ) .introduction to neyman , j. and pearson , e. s. ( 1933 ) : on the problem of the most efficient tests of statistical hypotheses . in _ breakthroughs in statistics ,i _ ( s. kotz and n. l. johnson , eds . ) 6772 .springer , new york .lehmann , e. l. and scholz , f. w. ( 1992 ) .ancillarity . in _current issues in statistical inference : essays in honor of d. basu_. _ institute of mathematical statistics lecture notes monograph series _ * 17 * 3251 .ims , hayward , ca .lehmann , e. l. and rojo , j. ( 1992 ) .invariant directional orderings .statist . _* 20 * 21002110 .lehmann , e. l. ( 1993 ) .the fisher , neyman pearson theories of testing hypotheses : one theory or two ?_ j. amer .assoc . _ * 88 * 12421249 .lehmann , e. l. ( 1993 ) .the bertrand borel debate and the origins of the neyman pearson theory . in _ statistics and probability .a raghu raj bahadur festschrift _ ( ghosh et al . , eds . ) 371380 .wiley eastern ltd ., new delhi .lehmann , e. l. ( 1993 ) .mentors and early collaborators : reminiscences from the years 19401956 with an epilogue . _ statist ._ * 8 * 331341 .lehmann , e. l. ( 1994 ) .jerzy neyman ( 18941981 ) . in _ biographical memoirs of the nat .acad . of sci . _ * 63 * 395420 . the national academies press , washington , dc .lehmann , e. l. ( 1990 ) . in _zapiski nauchnykh seminarov leningradskogo otdeleniya matematicheskogo instituta imeni v. a. steklova akademii nauk sssr _( _ lomi _ ) ( ibragimov et al . , eds . ) * 184 * 182188 .lehmann , e. l. ( 1995 ) .foundational issues in statistics : theory and practice ._ foundations of science _ * 1 * 4549. lehmann , e. l. ( 1995 ) .neyman s statistical philosophy .statist . 
_* 15 * 2936 .dedicated to the memory of jerzy neyman .lehmann , e. l. ( 1996 ) . the creation and early history of the berkeley statistics department . in _statistics , probability and game theory _ ( t. ferguson , ed . ) ._ institute of mathematical statistics lecture notes monograph series _* 30 * 139146 .ims , hayward , ca .lehmann , e. l. ( 1997 ) .le cam at berkeley . in _festschrift for lucien le cam _ 297304 .springer , new york .lehmann , e. l. ( 1997 ) .testing statistical hypotheses : the story of a book . _* 12 * 4852 .lehmann , e. l. ( 1999 ) .`` student '' and small - sample theory . _* 14 * 418426 .bickel , p. j. and lehmann , e. l. ( 2001 ) .frequentist inference . in _international encyclopedia of the social and behavioral sciences _pergamon , oxford .bickel , p. j. and lehmann , e. l. ( 2001 ) .frequentist interpretation of probability . in _ international encyclopedia of the social and behavioral sciences _( n. j. smelser and paul b. baltes , eds . ) 57965798 .pergamon , oxford .lehmann , e. l. ( 2004 ) .optimality and symposia : some history . in _ the first erich l. lehmann symposium optimality_. _ institute of mathematical statistics lecture notes monograph series _ * 44 * 110 .ims , beachwood , oh . lehmann , e. l. , romano , j. p. and shaffer , j. p. ( 2005 ) . on optimality of stepdown and stepup multiple test procedures ._ * 33 * 10841108 .lehmann , e. l. and romano , j. p. ( 2005 ) .generalizations of the familywise error rate .statist . _* 33 * 11381154 .arrow , k. j. and lehmann , e. l. ( 2005 ) .harold hotelling . in _ biographical memoirs_. _ national academy of sciences _ * 87 * 221233 . the national academies press , washington , dc .lehmann , e. l. ( 2006 ) .hodges , joseph lawson , jr . in _ encyclopedia of statistical science _, 2nd ed . 31793180 .lehmann , e. l. ( 2006 ) .scheff , henry . in _ encyclopedia of statistical science _ , 2nd ed .74727474 .lehmann , e. l. ( 2006 ) . on likelihoodratio tests . in _ the second erich l. lehmann symposium optimality _ ( j. rojo , ed . ) ._ institute of mathematical statistics lecture notes monograph series _ * 49 * 19 .ims , beachwood , oh . lehmann , e. l. ( 2008 ) . on the history and use of some standard statistical models . in _ probability and statistics : essays in honor of david a. freedman _ ( d. nolan and t. speed , eds . ) . __ * 2 * 114126 .ims , beachwood , oh . diaconis , p. and lehmann , e. l. ( 2008 ) .comment on `` on student s 1908 article : the probable error of a mean , '' by s. l. zabell ._ j. amer .assoc . _ * 103 * 1619 .lehmann , e. l. ( 2009 ) .parametric versus nonparametrics : two alternative methodologies ._ j. nonparametr .* 21 * 397405 .lehmann , e. l. ( 2009 ) . rejoinder to `` parametric versus nonparametrics : two alternative methodologies ( with discussion ) . '' _j. nonparametr .* 21 * 425426 .lehmann , e. l. ( 2009 ) .some history of optimality . in _optimality_. 
_ institute of mathematical statistics lecture notes monograph series _ * 57 * 1117 .ims , beachwood , oh .erich s sensitivity toward others , contagious zest for life , gentle spirit , fundamental contributions to statistics and remarkable contributions to human resources development , have been recorded , chronicled and honored through various mechanisms .after his death , erich s life was celebrated with a memorial service that took place at the berkeley women s faculty club on november 9th , 2009 .the service was well attended .his family , friends , students , collaborators and colleagues paid homage .peter bickel organized a memorial session during the 2010 joint statistical meetings in vancouver ( persi diaconis , juliet shaffer and peter bickel speakers ) .the session was very well attended with standing room only .the respect and appreciation for erich was international .willem van zwet organized a memorial session during the 73rd ims annual meeting in gothenburg , sweden in 2010 ( david cox , kjell doksum , willem van zwet , speakers ) , and peter bickel gave a lecture during the latin american congress of probability and mathematical statistics ( clapem ) in venezuela , november 2009 , in remembrance of erich lehmann .recordings of various erich talks are freely accessible to the public for viewing .these include lectures he gave during the second and third lehmann symposia at rice university .obituaries by peter bickel ( 2009 ) and david brillinger ( 2010 ) provide additional information about the life and work of erich l. lehmann .other sources that present fascinating accounts of erich s work and life include lehmann ( 2008b ) , degroot ( 1986 ) and reid ( 1982 ) . a collection of selected works edited by the author will soon be published by springer .the _ selected works of e. l. lehmann _ provides an extended bibliography and , through invited vignettes , examines more closely the various facets of his work .
through the use of a system - building approach , an approach that includes finding common ground for the various philosophical paradigms within statistics , erich l. lehmann is responsible for much of the synthesis of classical statistical knowledge that developed from the neyman pearson wald school . a biographical sketch and a brief summary of some of his many contributions are presented here . his complete bibliography is also included and the references present many other sources of information on his life and his work .
the solar interior , photosphere and atmosphere are coupled by magnetic fields .it is therefore important to gain insights about the magnetic field structure in all layers of the sun and solar atmosphere .direct and accurate measurements of the magnetic field vector are typically carried out only on the photosphere .although measurements in higher layers are available for a few individual cases , _e.g. _ in the chromosphere by solanki _( 2003 ) and in the corona by , the line - of - sight integrated character of such chromospheric and coronal magnetic field measurements complicates their interpretation .knowledge of the magnetic field in the corona is essential , however , to understand basic physical processes such as the onset of flares , coronal mass ejections and eruptive prominences .inferences of the coronal magnetic field can be obtained by extrapolating measurements of the photospheric magnetic field vector ( _ e.g. _ observed by _hinode_/sot , solis or the upcoming sdo / hmi instruments ) into the corona .because the magnetic pressure dominates the plasma pressure in active - region coronae , making the plasma low , [ see work by and , which discuss the plasma beta over active regions and over the quiet sun , respectively ] , these extrapolations neglect non - magnetic forces and assume the coronal magnetic field to be force - free , such that it obeys : equation ( [ jxb ] ) implies that the electric current density is parallel to the magnetic field .starting more than a quarter century ago , different mathematical methods and numerical implementations have been developed to solve the nonlinear force - free equations ( [ divb ] ) and ( [ jxb ] ) for the solar case .see , for example , for review papers and for evaluations of the performance of corresponding computer programs with model data .the codes use the magnetic field vector ( or quantities derived from the magnetic field vector ) on the bottom boundary of a computational domain as input .one would like to prescribe the measured photospheric data as the bottom boundary of nonlinear force - free fields ( nlfff ) codes , but there is a problem : the observed photospheric magnetic field is usually not force - free .the relatively high plasma in the photosphere means that non - magnetic forces can not be neglected there and that such photospheric magnetic field data are not consistent with well known force - free compatibility conditions defined in .recently , developed a scheme that mitigates this problem , in which the inconsistent and noisy photospheric vector magnetograms used as bottom boundary conditions are preprocessed in order to remove net magnetic forces and torques and to smooth out small - scale noise - like magnetic structures .the resulting magnetic field data are sufficiently force - free and smooth for use with extrapolation codes , but also are found to bear a high resemblance to chromospheric vector magnetic field data .this leads us to the question whether we can constrain the preprocessing tool further by taking direct chromospheric observations , such as h images , into consideration .we will investigate this topic in the present work .in this section , we briefly discuss the criteria on the photospheric boundary data that are required for consistency with a force - free extrapolation of the overlying coronal magnetic field ., , , and show how moments of the lorentz force , integrated over a volume of interest , define constraints on the closed surface bounding this volume . 
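the closed - surface constraints referred to here are , presumably , the vanishing of the net force and net torque exerted by the maxwell stress on the boundary of the volume ; a hedged statement of that standard form ( an assumption about the relations meant above , not a quotation of the original equations ) is

\oint_S \Big[ (\mathbf{B}\cdot\hat{\mathbf{n}})\,\mathbf{B} - \tfrac{1}{2}B^{2}\,\hat{\mathbf{n}} \Big]\,\mathrm{d}S = \mathbf{0}, \qquad
\oint_S \mathbf{r}\times\Big[ (\mathbf{B}\cdot\hat{\mathbf{n}})\,\mathbf{B} - \tfrac{1}{2}B^{2}\,\hat{\mathbf{n}} \Big]\,\mathrm{d}S = \mathbf{0},

where \hat{\mathbf{n}} is the outward normal of the bounding surface and a constant prefactor ( 1/4\pi or 1/\mu_0 , depending on units ) is omitted .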
as explained in detail in the sense of these relationsis that on average a force - free field can not exert pressure on the boundary or shear stresses along axes lying in the boundary . for the coronal magnetic field extrapolation calculations discussed here , a localized region of interest , such as an active region , is typically selected for analysis .the extrapolation algorithms applied to the coronal volume overlying such localized regions of interest require boundary conditions , and , except at the lower ( photospheric ) boundary , these boundary conditions are usually chosen to be consistent with potential fields and thus do not possess magnetic forces or torques . in these cases ,the consistency criteria reduce to conditions on the lower boundary only : 1 . on average force - free fields can not exert pressure on the boundary + 2 . on average force - free fields can not create shear stresses along axes lying in the boundary + relations must be fulfilled in order to be suitable boundary conditions for a nonlinear force - free coronal magnetic field extrapolation .we define dimensionless numbers , in order to evaluate how well these criteria are met .ideally , it is necessary for for a force - free coronal magnetic field to exist . pointed out that the magnetic field is probably not force - free in the photosphere , where is measured because the plasma in the photosphere is of the order of unity and pressure gradient and gravity forces are not negligible . the integral relations ( [ prepro1])-([prepro5 ] ) are not satisfied in this case in the photosphere and the measured photospheric field is not a suitable boundary condition for a force - free extrapolation .investigations by metcalf _( 1995 ) revealed that the solar magnetic field is not force - free in the photosphere , but becomes force - free about above the photosphere .the problem has been addressed also by who pointed out that care has to be taken when extrapolating the coronal magnetic field as a force - free field from photospheric measurements , because the force - free low corona is sandwiched between two regions ( photosphere and higher corona ) with a plasma , where the force - free assumption might break down .an additional problem is that measurements of the photospheric magnetic vector field contain inconsistencies and noise .in particular the components of transverse to the line of sight , as measured by current vector magnetographs , are more uncertain than the line - of - sight component . as measurements in higher layers of the solar atmosphere ( where the magnetic field is force - free ) are not routinely available , we have to deal with the problem of inconsistent ( with the force - free assumption as defined by equations ( [ prepro1])([prepro5 ] ) ) photospheric measurements . 
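as a rough numerical illustration , the lower - boundary moments and the dimensionless numbers eps_force and eps_torque could be evaluated from a vector magnetogram as sketched below . the normalization used here ( the sum of the squared field over all pixels , with coordinates scaled to the magnetogram size ) is an assumption consistent with the description above rather than a quotation of the exact definitions .

import numpy as np

def consistency_metrics(bx, by, bz):
    # sketch: discretized force/torque moments of a magnetogram and the
    # dimensionless eps_force, eps_torque built from them (assumed normalization)
    ny, nx = bz.shape
    x = np.broadcast_to(np.linspace(0.0, 1.0, nx), (ny, nx))           # normalized x
    y = np.broadcast_to(np.linspace(0.0, 1.0, ny)[:, None], (ny, nx))  # normalized y
    # force-balance moments: all three should vanish for consistent boundary data
    f = [np.sum(bx * bz), np.sum(by * bz), np.sum(bz**2 - bx**2 - by**2)]
    # torque-balance moments
    t = [np.sum(x * (bz**2 - bx**2 - by**2)),
         np.sum(y * (bz**2 - bx**2 - by**2)),
         np.sum(y * bx * bz - x * by * bz)]
    norm = np.sum(bx**2 + by**2 + bz**2)
    eps_force = sum(abs(v) for v in f) / norm
    eps_torque = sum(abs(v) for v in t) / norm
    return eps_force, eps_torque

values much smaller than unity indicate that , in this integral sense , the magnetogram is compatible with a force - free extrapolation ; photospheric data typically are not .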
a routine which uses measured photospheric vector magnetograms to find suitable boundary conditions for a nonlinear force - free coronal magnetic field extrapolation , dubbed `` preprocessing '' , has been developed by . this preprocessing scheme involves minimizing a two - dimensional functional of quadratic form similar to the following :

L_{\rm prep} = \mu_1 L_1 + \mu_2 L_2 + \mu_3 L_3 + \mu_4 L_4 + \mu_5 L_5 , \label{deflprep}

where

\begin{aligned}
l_1 & = & \left [ \left(\sum_p b_x b_z \right)^2 + \left(\sum_p b_y b_z \right)^2 + \left(\sum_p \left(b_z^2-b_x^2-b_y^2 \right) \right)^2 \right ] , \label{defl_1} \\
l_2 & = & \left [ \left(\sum_p x \left(b_z^2-b_x^2-b_y^2 \right) \right)^2 + \left(\sum_p y \left(b_z^2-b_x^2-b_y^2 \right) \right)^2 + \left(\sum_p \left( y\, b_x b_z - x\, b_y b_z \right) \right)^2 \right ] , \label{defl_2} \\
l_3 & = & \left [ \sum_p \left(b_x - b_{x\rm obs} \right)^2 + \sum_p \left(b_y - b_{y\rm obs} \right)^2 + \sum_p \left(b_z - b_{z\rm obs} \right)^2 \right ] , \label{defl_3} \\
l_4 & = & \left [ \sum_p \left(\Delta b_x \right)^2 + \left(\Delta b_y \right)^2 + \left(\Delta b_z \right)^2 \right ] . \label{defl_4}
\end{aligned}

the surface integrals as defined in equations ( [ prepro1])([prepro5 ] ) are here replaced by a summation over all grid nodes p of the bottom surface grid . we normalize the magnetic field strength with the average magnetic field on the photosphere and the length scale with the size of the magnetogram . each constraint is weighted by a yet undetermined factor \mu_n . the first term ( n=1 ) corresponds to the force - balance conditions ( [ prepro1])-([prepro2 ] ) , the next ( n=2 ) to the torque - free condition ( [ prepro3])-([prepro5 ] ) . the following term ( n=3 ) contains the difference of the optimized boundary condition with the measured photospheric data and the next term ( n=4 ) controls the smoothing . the 2d - laplace operator is designated by \Delta and the differentiation in the smoothing term is achieved by the usual 5-point stencil . the last term ( n=5 ) has not been used in preprocessing so far and will be introduced in the next section . the aim of the preprocessing procedure is to minimize L_{\rm prep} so that all terms , if possible , are made small simultaneously . this minimization procedure provides us iterative equations for b_x , b_y , b_z ( see for details ) . as result of the preprocessing we get a data set which is consistent with the assumption of a force - free magnetic field in the corona but also as close as possible to the measured data within the noise level . nonlinear force - free extrapolation codes can be applied only to low plasma - beta regions , where the force - free assumption is justified . this is known not to be the case in the photosphere , but is mostly true for the upper chromosphere and for the corona in quiescent conditions . the preprocessing scheme as used until now modifies observed photospheric vector magnetograms with the aim of approximating the magnetic field vector at the bottom of the force - free domain , _ i.e. _ , at a height that we assume to be located in the middle to upper chromosphere . in this study , we investigate whether the use of chromospheric fibril observations as an additional constraint in the preprocessing can bring the resulting field into even better agreement with the expected chromospheric vector field . we discuss this idea in the next section . the idea is to specify another term ( L_5 ) in equation ( [ deflprep ] ) which measures how well the preprocessed magnetic field is aligned with fibrils seen in h .
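before turning to that new term , the structure of the four classical terms can be made concrete with a minimal discretized sketch ; the grid normalization and the boundary handling of the laplacian below are illustrative assumptions , and this is not the code used to produce the results discussed later .

import numpy as np

def laplace2d(f):
    # usual 5-point stencil; edge pixels handled by replication (an illustrative choice)
    fp = np.pad(f, 1, mode='edge')
    return fp[:-2, 1:-1] + fp[2:, 1:-1] + fp[1:-1, :-2] + fp[1:-1, 2:] - 4.0 * f

def classical_terms(b, b_obs, x, y):
    # b and b_obs are dicts with keys 'bx', 'by', 'bz'; x, y are normalized coordinates
    bx, by, bz = b['bx'], b['by'], b['bz']
    l1 = (np.sum(bx * bz)**2 + np.sum(by * bz)**2
          + np.sum(bz**2 - bx**2 - by**2)**2)                            # force balance
    l2 = (np.sum(x * (bz**2 - bx**2 - by**2))**2
          + np.sum(y * (bz**2 - bx**2 - by**2))**2
          + np.sum(y * bx * bz - x * by * bz)**2)                        # torque balance
    l3 = sum(np.sum((b[k] - b_obs[k])**2) for k in ('bx', 'by', 'bz'))   # data term
    l4 = sum(np.sum(laplace2d(b[k])**2) for k in ('bx', 'by', 'bz'))     # smoothing
    return l1, l2, l3, l4

# l_prep = mu1*l1 + mu2*l2 + mu3*l3 + mu4*l4  (+ mu5*l5 once the h-alpha term is added)

minimizing l_prep with any gradient - based scheme then trades off these terms according to the weights mu_n .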
as a first step we have to extract the directions of the fibrils , say and out of the h images , where is a unit vector tangent to the chromospheric fibrils projected onto the solar photosphere ( representing the field direction with a 180-degree ambiguity ) . for simplicity one might rebin and to the same resolution as the vector magnetogram . in regions where we can not identify clear filamentary structures in the images we set .these regions are only affected by the other , classical terms of the preprocessing functional ( [ deflprep ] ) .the angle of the projected magnetic field vector on the xy - plane with the h image is where is the projection of the magnetic field vector in the xy - plane and are the directions of the chromospheric h fibrils .the preprocessing aims for deriving the magnetic field vector on the bottom boundary of the force - free domain , which is located in the chromosphere .the chromospheric magnetic field is certainly a priori unknown and as initial condition for the preprocessing routine we take from the photospheric vector magnetogram .we define the functional : please note that the term in equation ( [ defl5 ] ) weights the angle with the magnetic field strength , because it is in particular important to minimize the angle in strong field regions .the space dependent function is not a priori related to the magnetic field strength . can be specified in order to indicate the confidence level of the fibril direction - finding algorithm ( see _ e.g. _ , for the description of a corresponding feature recognition tool ) . for the application to observational data will be ( with appropriate normalization ) provided by this tool .it is likely , however , that the direction of the h fibrils can be identified more accurately in strong magnetic field regions , but this is not an a priori assumption . in section [ opti_halpha ]we investigate the influence of different assumptions for .we take the functional derivative of for a sufficiently small time step we get a decreasing with the iteration equations the aim of our procedure is to make all terms in functional ( [ deflprep ] ) small simultaneously .there are obvious contradictions between some of the terms , such as between the ( photospheric data ) and ( smoothing ) terms .an important task is to find suitable values for the five parameters which control the relative weighting of the terms in equation ( [ deflprep ] ) .the absolute values do not matter ; only the relative weightings are important .we typically give all integral relations of the force and torque conditions ( [ prepro1])-([prepro5 ] ) the same weighting ( unity ) .to fulfill these consistency integrals is essential in order to find suitable boundary conditions for a nonlinear force - free extrapolation . in principleit would be possible to examine different values for the force - free term and torque - free term -or even to give six different weightings for the six integral relations- but giving all integrals the same weighting seems to be a reasonable choice .the torque integrals depend on the choice of the length scale and giving the same weighting to all integrals requires . 
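before fixing the relative weightings , it may help to see the new alignment term in discretized form . the sketch below is one plausible realization consistent with the description above ( the angle is weighted by the squared horizontal field strength and by the confidence mask w , and the penalty is insensitive to the 180 - degree ambiguity of the fibril direction ) ; it is not necessarily the exact functional used for the results that follow .

import numpy as np

def l5_alignment(bx, by, hx, hy, w):
    # for unit fibril vectors (hx, hy), (bx*hy - by*hx)^2 = |B_t|^2 sin^2(phi),
    # so the misalignment angle is automatically weighted by the squared
    # horizontal field strength and by the confidence mask w
    return np.sum(w * (bx * hy - by * hx)**2)

def fibril_angle(bx, by, hx, hy):
    # misalignment angle phi in [0, pi/2], folding out the 180-degree ambiguity
    dot = np.abs(bx * hx + by * hy)
    bt = np.sqrt(bx**2 + by**2) + 1e-30
    return np.arccos(np.clip(dot / bt, 0.0, 1.0))

# a simple thresholded confidence mask of the kind considered later could be built as
# w = np.where(np.sqrt(bx**2 + by**2 + bz**2) > frac * bmax, 1.0, 0.0)

with this term in place , the remaining question is how to weight it against the classical terms , which the normalization arguments below and the parameter study in the next section address .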
for the length scale normalization used here leads to .we will test our newly developed method with the help of a model active region in the next section .layer , center : model - photospheric magnetic field , bottom : model - photospheric magnetic field after classical preprocessing with .,width=529 ] we test our extended preprocessing routine with the help of an active region model recently developed by van ballegooijen _( 2007 ) in this model line - of - sight photospheric measurements from soho / mdi have been used to compute a potential field .a twisted flux rope was then inserted into the volume , after which the whole system was relaxed towards a nonlinear force - free state with the magnetofrictional method described in . the van ballegooijen _ et al . _ ( 2007 ) model is force - free throughout the entire computational domain , except within two gridpoints of the bottom boundary .hereafter , we refer to the bottom of the force - free layer as the `` model chromosphere '' ( see the top panel of figure [ magnetograms1 ] ) . on the bottom boundary ( see the central panel of figure [ magnetograms1 ] ) , hereafter referred to as the `` model photosphere '' , the model contains significant non - magnetic forces and the force - free consistency criteria ( [ prepro1])-([prepro5 ] ) are not satisfied .these forces take the form of vertical buoyancy forces directed upward , and have been introduced by van ballegooijen _ et al . _( 2007 ) to mimic the effect of a reduced gas pressure in photospheric flux tubes . the nature of these forces is therefore expected to be similar to those observed on the real sun . for a more detailed discussion we refer to metcalf _( 2007 ) . both the chromospheric ( ) as well as the photospheric magnetic field vector ( ) from the van ballegooijen _ et al . _ ( 2007 ) model have been used to test four sophisticated nonlinear force - free extrapolation codes in a blind algorithm test by metcalf _the codes computed nonlinear force - free codes in a box , which is about at the upper limit current codes can handle on workstations .we briefly summarize the results of metcalf _ et al . _( 2007 ) as : * nlfff - extrapolations from model - chromospheric data recover the original reference field with high accuracy . *when the extrapolations are applied to the model - photospheric data , the reference field is not well recovered .* preprocessing of the model - photospheric data to remove net forces and torques improves the result , but the resulting accuracy was lower than for extrapolations from the model - chromospheric data .the poor performance of extrapolations using the unprocessed model - photospheric data is related to their inconsistency with respect to the force - free conditions ( [ prepro1])-([prepro5 ] ) .the central panel of figure [ magnetograms1 ] shows the photospheric magnetic field and the central panel of figure [ magnetograms1a ] illustrates the difference between the model - chromospheric and model - photospheric fields .it is evident that there are remarkable differences in all components of the magnetic field vector . 
for real data we usually can not measure the chromospheric magnetic field vector directly ( which was possible for model data ) and we have to apply preprocessing before using the data as input for force - free extrapolation codes .force - free extrapolations using preprocessed data from the model photosphere ( as lower panels of figures [ magnetograms1 ] and [ magnetograms1a ] ) , while encouraging , were not completely satisfactory , in light of the results being worse than when the model - chromospheric data were used as boundary conditions . inwhat follows , we will use an artificial h image created from the model chromosphere to test a modified preprocessing scheme , and compare the results to the classical ( original ) preprocessing scheme .we use the model - chromospheric magnetic field to derive the direction vectors of the artificial h images . for the model case we can simply use the chromospheric model field to specify the direction vectors and , which contain only information regarding the direction of the horizontal components of the magnetic field ( including a ambiguity , but no information about the magnetic field strength . for real datathis information can be derived from high - resolution h images using feature recognition techniques , _e.g. _ the ridge detector of ., right panel : ) with the model chromosphere in dependence of the preprocessing parameters and .we found a maximum correlation at and .,width=529 ] we tested more than possible combinations of and using the model - photospheric field as input , and computed the pearson correlation coefficient between the preprocessed results and the model - chromospheric field . only and used in computing the correlation coefficient , because the correlation of the longitudinal ( _ i.e. _ , the line - of - sight ) component is in general higher than that of the transverse components , due to not being affected by the ambiguity - problem and the noise being much lower than in the other directions .we computed combinations of between with a step size of . hereafter a local maximum around appeared .this region was analyzed in more detail by using these two values as new initial guess .to do this , we tried another combinations around this pair with a reduced step size of in the positive as well as the negative direction .then the absolute maximum of the correlation coefficients for both , and appeared at ( see figure [ corr34 ] ) .the bottom panel of figure [ magnetograms1 ] shows the corresponding preprocessed photospheric magnetic field .fibrils identified from the model chromosphere . the fibrils give us information about the transverse components ( and ) of the chromospheric magnetic field .the fibrils contain a ambiguity and do not provide any information about the chromospheric magnetic field strength .the bottom panels show from left to right the different weighting functions , respectively .regions where is higher are more important is the -preprocessing - term ( [ defl5 ] ) which controls the influence of the h-fibrils.,width=529 ] in the following we aim to find suitable parameters for including information from h images into the preprocessing .our main aim is to investigate the effects of additional chromospheric information . to exclude side effects we therefore keep the combination of - found in the previous section to be able to clearly investigate the effect of the additional term . 
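the coarse - then - refined scan over the data and smoothing weights described above might look like the following sketch ; preprocess() is a stand - in for the ( not shown ) minimization of the functional at fixed weights , and the grids and step sizes are placeholders rather than the values actually used .

import numpy as np
from scipy.stats import pearsonr

def transverse_correlation(b, b_ref):
    # mean pearson correlation of the transverse components with the reference field
    rx, _ = pearsonr(b['bx'].ravel(), b_ref['bx'].ravel())
    ry, _ = pearsonr(b['by'].ravel(), b_ref['by'].ravel())
    return 0.5 * (rx + ry)

def scan_mu34(b_phot, b_chrom, preprocess, mu3_grid, mu4_grid):
    # coarse grid search; a second pass with a reduced step around the best pair
    # would follow, as described in the text
    best = (None, None, -np.inf)
    for mu3 in mu3_grid:
        for mu4 in mu4_grid:
            b_prep = preprocess(b_phot, mu1=1.0, mu2=1.0, mu3=mu3, mu4=mu4)
            r = transverse_correlation(b_prep, b_chrom)
            if r > best[2]:
                best = (mu3, mu4, r)
    return best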
in principleone could vary all simultaneously .we can not exclude that there might exist a better combination of to with better agreement of our preprocessed field and the model chromospheric field .this is , however , not the aim of this work , because this is not a suitable way to deal with real data , because there is no model chromosphere to test the result .it is not possible to provide an optimal parameter set suitable for all vector magnetographs .the optimal combination has to be carried out for different instruments separately .we expect that an optimal parameter set for a certain instrument and particular region will be also useful for the preprocessing of other regions of the same kind ( say active regions ) observed with the same instrument .we test our methods with `` model fibrils '' extracted from the model chromosphere shown in the top panel of figure [ fig_halpha ] .we define used in equation ( [ defl5 ] ) as one of the following : 1 .we assume that at every point of our h image gives us the exact orientation of the magnetic field ( which is indeed the case , as we calculated it from the chromospheric model data ) and fix our weighting with .we assume that the photospheric magnetic field magnitude gives us the importance of the h information at each point and use + + .+ we scale to a maximum value of .( see figure [ fig_halpha ] bottom left panel . )we do as in the previous case , but assume now , that only points in the magnetogram where the field magnitude is greater than 50 % of the maximum contribute to the h preprocessing .so , we define + + ( see figure [ fig_halpha ] bottom center panel . )4 . in our last casewe assume in the same way as in the previous one , but now only points in the magnetogram where the field magnitude is greater than 10 % of the maximum contribute to the preprocessing .all these grid points are weighted with and the rest with zero .in other words , one defines + + ( see figure [ fig_halpha ] bottom right panel . 
)we now figure out the optimal value of in equation ( [ deflprep ] ) for the four different weighting functions .initially , we use a step size of and then , around the first appearing maximum , we reduced it to .this is to find a more precise optimal value of .we calculate the pearson correlation coefficient between the chromospheric reference field ( ) and the minimum solution of the preprocessing routine ( ) .this provides us the optimal values of for the different weighting functions , see second row in table [ tab : testparms2 ] .preprocessing with different weighting functions .top : , center : , bottom : , see text.,width=529 ] and the h-preprocessed fields as shown in figure [ magnetograms2].,width=529 ] table [ tab : testparms2 ] lists some metrics related to the various preprocessing schemes , including the dimensionless numbers and from equations ( [ eps_force ] ) and ( [ eps_torque ] ) , the values of the various from section [ sec : preprocessing ] , and the averaged angles between the preprocessing results and the model - chromospheric field .the first three rows of the table list the model chromosphere ( ) and photosphere ( ) data and the classical preprocessing scheme ( ) .when using the unprocessed model - photospheric data ( ) , it is clear that the force - free consistency criteria ( as represented by , , and ) are not fulfilled and are orders of magnitude higher than for the chromospheric data ( ) .consequently , we can not expect the extrapolation codes to result in a meaningful nonlinear force - free field in the corona , as discussed in metcalf _( 2007 ) .the remaining rows in tables [ tab : testparms2 ] and [ tab : extraparms ] list the results for the cases where the h preprocessing was used .a qualitative comparison of the h-preprocessed magnetograms ( shown in figure [ magnetograms2 ] ) with the model chromosphere ( shown in the top panel of figure [ magnetograms1 ] ) indicates a strong resemblance for all three magnetic field components , but certainly not a perfect match .difference images between the h-preprocessed magnetograms and the model chromosphere ( shown in the top panel of figure [ magnetograms1 ] ) are present in figure [ magnetograms2a ] .the resemblance using the h preprocessing scheme is much improved when compared to the magnetograms resulting from the classical preprocessing scheme .table [ tab : extraparms ] displays metrics of the resulting nonlinear force - free extrapolations using each preprocessing scheme.[multiblock footnote omitted ] as expected , the extrapolation codes perform poorly when the unprocessed boundary ( ) is used .in particular , the resulting magnetic energy of this case ( normalized to the energy of the reference solution ) is only 65% of the correct answer , making it almost impossible to estimate the free magnetic energy in the solution available for release during eruptive processes such as flares and coronal mass ejections .taking preprocessing into account ( rows 3 - 7 in both tables ) significantly improves the result .the force - free consistency criteria ( ) are adequately fulfilled for all preprocessed cases and are even better ( lower values ) than the model chromospheric field .this is naturally , however , because the preprocessing routine has been developed in particular to derive force - free - consistent boundary conditions from inconsistent ( forced , noisy ) photospheric measurements .the classical preprocessing has already reduced the angle to the model h fibrils ( last two columns of table [ tab : 
testparms2 ] ) by almost a factor of two , even though no information about the chromosphere has been used .if we include chromospheric information , ( see figure [ fig_halpha ] ) in our preprocessing routine ( , rows 4 - 7 ) the angle of the preprocessed field with the h images reduces significantly . the second to last row in table [ tab : testparms2 ] contains the average angle and in the last column the angle has been weighted by the magnetic field , which means that measures mainly how well the magnetic field and the chromospheric fibrils are aligned in regions of a high magnetic field strength . for the purpose of coronal magnetic field extrapolations the strong field regions are essential .if we include all information from the h image , as done in row 4 for we find that the magnetic field and the fibrils are almost parallel in the entire region .this is the ideal case , however , as fibrils have been identified all over the region with the same excellent accuracy .for observed data it is more likely that the direction of the fibrils will be identifiable with high accuracy only in bright and magnetically strong regions .this effect is taken into account in rows 5 - 7 of both tables . in the last two rows we take the chromospheric data only into account where the magnetic field strength is larger than and of the maximum field strength , respectively .naturally , the average angle of the chromospheric fibrils with the preprocessed magnetic field becomes larger than for the ideal case .we find , however , that the angle remains relatively low in strong field regions , except for the case .we can easily understand that ( chromospheric information ignored where the magnetic field is less than of its maximum ) provides less accurate results , because the area where chromospheric data have been taken into account , is only a very small fraction of the entire region ( see figure [ fig_halpha ] lower central panel ) .case has few nonzero points .these points are , however , in the regions with the strongest magnetic field strength .the terms minimizes the angle between magnetic field and chromospheric fibrils only in these nonzero points .this local correction does , however , influence the magnetogram globally , because the and term contain global measures and the terms couples neighbouring points . as a consequencethe preprocessing result is different from classical preprocessing , even if the term is nonzero only for a limited number of pixel . for observational datathe weighting ( last row in the tables , areas with less than ignored ; see also [ fig_halpha ] lower right panel ) seems to be more realistic . in this casethe overall average angle is not better than for classical preprocessing , but is different by only about when preferential weighting is given to the more important strong field regions .the ultimate test regarding the success of our extended preprocessing scheme is to use the preprocessed field as boundary conditions for a nonlinear force - free coronal magnetic field extrapolation .the results are presented in table [ tab : extraparms ] , row 3 for classical preprocessing and rows 4 - 7 for h preprocessing .we find that all preprocessed fields provide much better results than using the unprocessed data . for classical preprocessingwe get the magnetic energy correct with an error of ( for unprocessed data we got an error of ) . 
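Returning to the weighting functions and the scan over the H-alpha weighting parameter described earlier in this section, and to the (field-weighted) alignment angles reported in table [tab:testparms2], the following is a minimal sketch of how these quantities can be computed. All symbol names are assumptions, since the paper's own symbols were lost in extraction: the constant weighting of case 1 and the scaling of case 2 are taken to be 1 here, `preprocess` stands in for the actual minimization of the preprocessing functional, and the exact definition of the field-weighted angle may differ from the paper's equation ([angle_phi]).

```python
import numpy as np

def halpha_weight(B_mag, case):
    """The four candidate weightings for the H-alpha term (equation [defl5]).
    B_mag is the 2-D photospheric field magnitude on the magnetogram grid."""
    Bmax = B_mag.max()
    if case == 1:                                   # fibril orientation trusted everywhere
        return np.ones_like(B_mag)
    if case == 2:                                   # weight by scaled field magnitude
        return B_mag / Bmax
    if case == 3:                                   # only pixels above 50% of max contribute
        return np.where(B_mag > 0.5 * Bmax, B_mag / Bmax, 0.0)
    if case == 4:                                   # only pixels above 10% of max contribute
        return np.where(B_mag > 0.1 * Bmax, B_mag / Bmax, 0.0)
    raise ValueError("case must be 1..4")

def scan_halpha_parameter(mu_grid, preprocess, B_phot, fibrils, w, B_chrom_ref):
    """Pick the H-alpha weighting parameter that maximizes the Pearson correlation
    between the preprocessed field and the reference (model-chromospheric) field.
    `preprocess` is a stand-in for the actual preprocessing minimization."""
    best_mu, best_r = None, -np.inf
    for mu in mu_grid:                              # coarse grid first, then refine near the maximum
        B_prep = preprocess(B_phot, fibrils, w, mu)
        r = np.corrcoef(B_prep.ravel(), B_chrom_ref.ravel())[0, 1]
        if r > best_r:
            best_mu, best_r = mu, r
    return best_mu, best_r

def fibril_alignment_angles(Bx, By, fx, fy, B_mag):
    """Average and field-strength-weighted average angle (degrees) between the
    horizontal field and the fibril orientation (fx, fy assumed unit vectors)."""
    Bh = np.hypot(Bx, By)
    cosang = np.abs(Bx * fx + By * fy) / np.maximum(Bh, 1e-30)  # fibrils give an orientation, so use |cos|
    theta = np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))
    return theta.mean(), np.sum(B_mag * theta) / np.sum(B_mag)
```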
taking the h information into accountimproves the result and the magnetic energy is computed with an accuracy of or better , even for the cases where we used chromospheric information only in parts of the entire regions ..results of the various preprocessing schemes : the model chromosphere and photosphere ( first two rows ) , classical preprocessing ( third row ) , and the h preprocessing cases ( last four rows ) .column 1 identifies the data set , columns 2 and 3 the value of and the weighting scheme used for the h preprocessing cases .columns 4 - 7 provide the value of the functionals , as defined in equations ( [ defl_1])-([defl_4 ] ) and ( [ defl5 ] ) , respectively . in columns 8 and 9we show how well the force - free and torque - free consistency criteria ( ) as defined in equations ( [ eps_force ] ) and ( [ eps_torque ] ) are fulfilled .the last two columns contain the averaged angle ( ) of the field with the model chromospheric data and a magnetic field weighted average angle ( ) with as defined in equation ( [ angle_phi ] ) .[ cols="^,^,^,^,^,^,^,^,^ , > , > " , ]within this work we developed an improved algorithm for the preprocessing of photospheric vector magnetograms for the purpose of getting suitable boundary conditions for nonlinear force - free extrapolations .we extended the preprocessing routine developed by , which is referred to here as `` classical preprocessing '' .the main motivation for this work is related to the fact that active - region coronal magnetic fields are force - free due to the low coronal plasma , but the magnetic field vector can be measured with high accuracy only on the photosphere , where the plasma is about unity and non - magnetic forces can not be ignored .our original ( `` classical '' ) preprocessing removes these non - magnetic forces and makes the field compatible with the force - free assumption leading to more chromospheric - like configurations . in this study, we have found that by taking direct chromospheric observations into account ( such as by using fibrils seen in h images ) , the preprocessing is improved beyond the classical scheme .this improved scheme includes a term which minimizes the angle between the preprocessed magnetic field and the fibrils .we tested our method with the help of a model active region developed by van ballegooijen _( 2007 ) , which includes the forced photospheric and force - free chromospheric and coronal layers .this model has been used by metcalf _( 2007 ) for an inter - comparison of nonlinear force - free extrapolation codes .the comparison revealed that the model coronal magnetic field was reconstructed very well if chromospheric magnetic fields have been used as input , but in contrast the reconstructed fields compared poorly when unprocessed model - photospheric data were used .classical preprocessing significantly improves the result , but the h preprocessing developed in this paper is even better as the main features of the model corona are reconstructed with high accuracy .our extended preprocessing tool provides a fair estimate of the chromospheric magnetic field , which is used as boundary condition for computing the nonlinear force - free coronal magnetic field .in particular , the magnetic energy in the force - free domain above the chromosphere agrees with the model corona within , even if only strong - field regions of the model chromosphere , where the fibrils can be identified with highest accuracy , influence the final solution . 
from these testswe conclude that our improved preprocessing routine is a useful tool for providing suitable boundary conditions for the computation of coronal magnetic fields from measured photospheric vector magnetograms as provided for example from _the combination of preprocessing and nonlinear force - free field extrapolations seem likely to provide accurate computation of the magnetic field in the corona .we will still not get the magnetic field structure in the relative thin layer between / in the photosphere and the chromosphere correct , because here non - magnetic forces can not be neglected due to the finite plasma .although this layer is vertically thin ( _ e.g. _ , 2 vertical grid points in the van ballegooijen _ et al . _ ( 2007 ) model compared to 256 vertical grid points in the corona ) it contains a significant part of the total magnetic energy of the entire domain , see metcalf _ et al . _( 2007 ) . unfortunately, this part of the energy can not be recovered by force - free extrapolations , because the region is non - force - free .our improved preprocessing routine includes chromospheric information and therefore provides us with a closer approximation of the chromospheric magnetic field .this leads to more accurate estimates of the total magnetic energy in the corona. a further improvement of the preprocessing routine could be done with the help of additional observations , _e.g. _ the line - of - sight chromospheric field , as planned for solis .one could include these measurement directly in the -term ( [ defl_3 ] ) either as the only information or in some weighted combination with the photospheric field measurement .an investigation of the true 3d - structure of the thin non - force - free layer between photosphere and chromosphere requires further research .first steps towards non - force - free magnetohydrostatic extrapolation codes might help to reveal the secrets of this layer .non force - free magnetic field extrapolations will require additional observational constraints , because the magnetic field , the plasma density and pressure must be computed self - consistently in one model . the work of t. wiegelmann was supported by dlr - grant 50 oc 0501 and j.k .thalmann got financial support by dfg - grant wi 3211/1 - 1 .m. derosa , t. metcalf , and c. schrijver were supported by lockheed martin independent research funds .we acknowledge stimulating discussions during the fourth nlfff - consortium meeting in june , 2007 in paris .we briefly summarize our nonlinear force - free extrapolation code here , which has been used to compute the 3d magnetic fields .we solve the force - free equations ( [ divb ] ) and ( [ jxb ] ) by optimizing ( minimizing ) the following functional : \ ; d^3x \label{defl1},\ ] ] where and are weighting functions .it is obvious that ( for ) the force - free equations ( [ divb ] ) and ( [ jxb ] ) are fulfilled when is zero .the optimization method was proposed by wheatland , sturrock , and roumeliotis ( 2000 ) and further developed in wiegelmann and neukirch ( 2003 ) .here we use the implementation of wiegelmann ( 2004 ) which has been applied to data in wiegelmann et al .( 2005 ) . 
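The displayed functional referred to above was garbled in extraction. In the optimization approach of Wheatland, Sturrock and Roumeliotis (2000), extended with weighting functions by Wiegelmann (2004), it takes the form below; splitting the weighting into separate force and divergence weights w_f and w_d is an assumption made here to match the sentence "where ... and ... are weighting functions", and in the original formulation a single weight multiplies both terms:

```latex
L \;=\; \int_{V} \Big[\, w_f\, B^{-2}\,\big|(\nabla\times\mathbf{B})\times\mathbf{B}\big|^{2}
        \;+\; w_d\, B^{-2}\,\big|\nabla\cdot\mathbf{B}\big|^{2} \,\Big]\;\mathrm{d}^{3}x .
```

With positive weights, L vanishes exactly when both force-free conditions, the vanishing Lorentz force and the solenoidal condition, are satisfied throughout the volume, which is the sense in which minimizing L solves equations ([divb]) and ([jxb]).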
in this article we used a recent update of our code that includes a multi - scale approach ( see metcalf et al . ( 2007 ) for details ) . this version of the optimization code was also used with the ( same as in this paper ) model - chromospheric , photospheric , and classically preprocessed photospheric magnetic field vectors as part of an inter - code comparison . for alternative methods to solve the force - free equations ( [ divb ] ) and ( [ jxb ] ) see the review papers and references therein . in order to quantify the degree of agreement between the extrapolated vector fields of the input model field ( * b * , _ i.e. _ , the extrapolated chromospheric ( reference ) field ) and the nonlinear force - free solutions ( * b * , _ i.e. _ , the extrapolated preprocessed photospheric field ) , which are specified on identical sets of grid points , we use five metrics in table [ tab : extraparms ] that compare either local characteristics or the global energy content , in addition to the force and divergence integrals . these measures were developed in schrijver et al . ( 2006 ) and have subsequently been used to evaluate the quality of force - free and non - force - free extrapolation codes . their definitions involve the total number of vectors in the volume and the angle between * b * and * b * at each grid point . one of the metrics is entirely a measure of the angular differences of the vector fields , i.e. it equals unity if the two fields are parallel at each point , minus unity if they are anti - parallel , and zero if they are orthogonal . unlike the first two metrics , perfect agreement of the two vector fields results in a vanishing value for the error metrics , so for an easier comparison with the others we list one minus these errors , so that all measures reach unity for a perfect match .
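For reference, the five comparison metrics of Schrijver et al. (2006) are commonly implemented as in the sketch below. The notation and function names here are assumptions of this sketch rather than the paper's own symbols, and the code assumes that no vector is exactly zero.

```python
import numpy as np

def comparison_metrics(B_ref, b_sol):
    """Compare a reconstructed field b_sol with a reference field B_ref, both given
    on the same grid points as arrays of shape (N, 3). Definitions follow Schrijver
    et al. (2006) as commonly implemented; this is a sketch, not the paper's code."""
    dot = np.sum(B_ref * b_sol, axis=1)
    nB = np.linalg.norm(B_ref, axis=1)
    nb = np.linalg.norm(b_sol, axis=1)
    diff = np.linalg.norm(b_sol - B_ref, axis=1)

    C_vec = dot.sum() / np.sqrt(np.sum(nB**2) * np.sum(nb**2))  # vector correlation
    C_cs  = np.mean(dot / (nB * nb))                            # Cauchy-Schwarz (mean cosine of the angle)
    E_n   = diff.sum() / nB.sum()                               # normalized vector error
    E_m   = np.mean(diff / nB)                                  # mean vector error
    eps   = np.sum(nb**2) / np.sum(nB**2)                       # relative magnetic energy

    # the error metrics are reported as 1 - E so that every measure is unity for a perfect match
    return dict(C_vec=C_vec, C_cs=C_cs, one_minus_En=1 - E_n,
                one_minus_Em=1 - E_m, energy_ratio=eps)
```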
the solar magnetic field is key to understanding the physical processes in the solar atmosphere . nonlinear force - free codes have been shown to be useful in extrapolating the coronal field upward from underlying vector boundary data . however , we can only measure the magnetic field vector routinely with high accuracy in the photosphere , and unfortunately these data do not fulfill the force - free condition . we must therefore apply some transformations to these data before nonlinear force - free extrapolation codes can be self - consistently applied . to this end , we have developed a minimization procedure that yields a more chromosphere - like field , using the measured photospheric field vectors as input . the procedure includes force - free consistency integrals , spatial smoothing , and , newly included in the version presented here , an improved match to the field direction as inferred from fibrils observed in , _ e.g. _ , chromospheric h images . we test the procedure using a model active - region field that includes buoyancy forces at the photospheric level . the proposed preprocessing method allows us to approximate the chromospheric vector field to within a few degrees and the free energy in the coronal field to within one percent .
for a biological sample , the dna copy number of a genomic region is the number of copies of the dna in that region within the genome of the sample , relative to either a single control sample or a pool of population reference samples .dna copy number variants ( cnvs ) are genomic regions where copy number differs among individuals .such variation in copy number constitutes a common type of population - level genetic polymorphism .see , , and for detailed discussions on cnv in the human population . on another front, the genomes of tumor cells often undergo somatic structural mutations such as deletions and duplications that affect copy number .this results in copy number differences between tumor cells and normal cells within the same individual .these changes are often termed copy number aberrations or copy number alternations ( cna ) .there is significant scientific interest in finding cnvs in normal individuals and cnas in tumors , both of which entail locating the boundaries of the regions in the genome that have undergone copy number change ( i.e. , the breakpoints ) , and estimating the copy numbers within these regions . in this article, we use next - generation sequencing data for copy number estimation .microarrays have become a commonly used platform for high - throughput measurement of copy number .there are many computational methods that estimate copy number using the relative amount of dna hybridization to an array .see , and for a general review of existing methods for array - based data. however , the precision of breakpoint estimates with array - based technology is limited by its ability to measure genomic distances between probes , which currently averages about 1000 bases ( 1 kb ) on most arrays .hence , the lower limit in the length of detectable cnv events is about 1 kb . with sequencing capacity growing and its cost dropping dramatically , massively parallel sequencing is now an appealing method for measuring dna copy number . in these newer sequencing technologies, a large number of short reads ( 36100 bp ) are sequenced in parallel from the fragmentation of sample dna .then each read is mapped to a reference genome .the basic rationale is that _ coverage _ , defined as the number of reads mapped to a region of the reference genome , reflects the copy number of that region in the sample , but with many systematic biases and much variability across the genome . 
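As a minimal illustration of the rationale that coverage reflects copy number, the sketch below bins mapped read start positions into fixed windows and forms a naive case/control depth ratio. The window size, the pseudocount, and the variable names are illustrative choices of this sketch only; the method developed later in the paper deliberately avoids fixed windows.

```python
import numpy as np

def window_depth(read_starts, chrom_len, window=1000):
    """Count mapped read starts in fixed, non-overlapping windows along a chromosome."""
    edges = np.arange(0, chrom_len + window, window)
    counts, _ = np.histogram(read_starts, bins=edges)
    return edges[:-1], counts

# a naive per-window case/control ratio, with a pseudocount to avoid division by zero;
# starts_case and starts_ctrl are assumed arrays of mapped 5' read positions
# _, d_case = window_depth(starts_case, chrom_len)
# _, d_ctrl = window_depth(starts_ctrl, chrom_len)
# ratio = (d_case + 1) / (d_ctrl + 1)
```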
was one of the first to use genome - wide sequencing to detect cna events .the reader is also referred to for a review of recent studies in cnv / cna detection using sequencing data .more details of the data , with an illustrative example ( figure [ figillusreads ] ) , are given in section [ secdataexistingmethod ] .in the shift from array - based to sequencing - based copy number profiling , the main statistical challenge arises from the fundamental change in the type of data observed .array - based data are represented by a large but fixed number of continuous valued random variables that are approximately normal after appropriate preprocessing , and cnv / cna signals based on array data can be modeled as shifts in mean .sequencing - based data , as we will discuss further in section [ secdataexistingmethod ] , are realizations of point processes , where cnv / cna signals are represented by shifts in intensity of the process .while one can apply a normal approximation to the large number of discrete events in sequencing data , hence translating the problem into the familiar array - based setting , this approach is inefficient and imprecise .a more direct model of the point process is preferred .this type of data calls for a new statistical model , new test statistics , and , due to the quick growth of sequencing capacity , new and highly efficient computing implementation . in copy number profiling it is important to assess the confidence in the estimated copy numbers . with the exception of ,existing segmentation methods , both for array data and for sequencing data , give a hard segmentation and do not quantify the uncertainty in their change - point estimates .some methods , such as and , provide confidence assessments for the called cnv or cna regions , in the form of false discovery rates or -values , thus inherently casting the problem in a hypothesis testing framework .however , for the analysis of complex regions with nested changes , such as those in tumor data , confidence intervals on the copy number , from an estimation perspective , are often more useful .intuitively , the copy number estimate is less reliable for a region near a change point than for a region far away from any change points .also , copy number estimates are more reliable for regions with high coverage than for regions with low coverage , since coverage directly affects the number of observations used for estimation .this latter point makes confidence intervals particularly important for interpretation of results derived from short read sequencing data , where coverage can be highly uneven across the genome . in this paper, we take a bayesian approach with noninformative priors to compute point - wise confidence intervals , as described in section [ secbayesianci ] .the proposed methods are based on a simple and flexible inhomogeneous poisson process model for sequenced reads .we derive the score and generalized likelihood ratio statistics for this model to detect regions where the read intensity shifts in the target sample , as compared to a reference .we construct a modified bayes information criterion ( mbic ) to select the appropriate number of change points and propose bayesian point - wise confidence intervals as a way to assess the confidence in the copy number estimates . 
as a proof of concept ,we apply seqcbs , our sequencing - based cnv / cna detection algorithm , to a number of actual data sets and found it to have good concordance with array - based results .we also conduct a spike - in study and compare the proposed method to segseq , a method proposed by .the methods developed in this paper have been implemented in an open - source r - package , seqcbs , available from cran ` http://cran.r - project .org / web / packages / seqcbs / index.html ` .in a general next - generation genome sequencing / resequencing pipeline , shown in figure [ figseqpipe ] , the dna in the sample is randomly fragmented , and a short sequence of the ends of the fragments is `` read '' by the sequencer .after the bases in the reads are called , the reads are mapped to the reference genome .there are many different approaches to the preparation of the dna library prior to the sequencing step , some involving amplification by polymerase chain reaction , which lead to different distribution of reads along the reference genome .when a region of the genome is duplicated , fragments from this region have a higher representation , and thus its clones are more likely to be read by the sequencer .hence , when mapped to reference genome , this duplicated region has a higher read intensity .similarly , a deletion manifests as a decrease in read intensity . since reads are contiguous fixed length sequences , it suffices to keep track of the reference mapping location of one of the bases within the read .customarily , the reference mapping location of the 5 end of the read is stored and reported .this yields a point process with the reference genome as the event space.=-1 as noted in previous studies , sequencing coverage is dependent on characteristics of the local dna sequence , and fluctuates even when there are no changes in copy number , as shown in . just as adjusting for probe - effectsis important for interpretation of microarray data , adjusting for these baseline fluctuations in depth of coverage is important for sequencing data .the bottom panels of figure [ figseqarrayhcc1954 ] show the varying depth of coverage for chromosomes 8 and 11 in the sequencing of a normal human sample , hcc1954 .many factors cause the inhomogeneity of depth of coverage .for example , regions of the genome that contain more g / c bases are typically more difficult to fragment in an experiment .this results in lower depth of coverage in such regions . some regions of the genome are highly repetitive .it is challenging to map reads from repetitive regions correctly onto the reference genome and , hence , some of the reads are inevitably discarded as unmappable , resulting in loss of coverage in that region , even though no actual deletion has occurred .some ongoing efforts on the analysis of sequencing data involve modeling the effects of measurable quantities , such as gc content and mappability , on baseline depth . demonstrated that read counts in sequencing are highly dependent on gc content and mappability , and discussed a method to account for such systematic biases . investigated the relationship between gc content and read count on the illumina sequencing platform with a single position model , and identified a family of unimodal curves that describes the effect of gc content on read count .we take the approach of empirically controlling for the baseline fluctuations by comparing the sample of interest to a control sample that was prepared and sequenced by the same protocol . 
in the context of tumor cna detection ,the control is preferably a matched normal sample , for it eliminates the discovery of germline copy number variants and allows one to focus on somatic cna regions of the specific tumor genome .if a perfectly matched sample is not possible , a carefully chosen control or a pool of controls , with sequencing performed on the same platform with the same experimental protocol , would work for our method as well since almost all of the normal human genome are identical . as a simple and illustrative example of the data, we generated points according to a nonhomogeneous poisson process .figure [ figillusreads ] shows the point processes and the underlying function , defined as the probability that a read at genomic position is from the case / tumor sample , conditional on the existence of a read at position .the model is discussed in more detail in section [ secthemodel ] .the -values for the points are jittered for graphical clarity . .] existing methods on cnv and cna detection with sequencing data generally follow the change - point paradigm , which is natural since copy number changes reflect actual breakpoints along chromosomes . proposed the algorithm segseq that segments the genomes of a tumor and a matched normal sample by using a sliding fixed size window , reducing the data to the ratio of read counts for each window . proposed cnv - seq that detects cnv regions between two individuals based on binning the read counts and then applying methods developed for array data . designed a method named event - wise testing ( ewt ) that detects cnv events using a fixed - window scan on the gc content adjusted read counts . proposed a method called cnaseg that uses read counts in windows of predefined size , and discovers cnv using a hidden markov model segmentation .as for single sample cnv detection method , constructed a computational algorithm that normalizes read counts by gc content and estimates the absolute copy number .these existing methods approach this statistical problem by binning or imposing fixed local windows .some methods utilize the log ratio of read counts in the bin or window as a test statistic , thereby reducing the data to the familiar representation of array - based cnv / cna detection , with being an exception in that it uses the difference in tumor - normal window read counts in their hmm segmentation .there are a number of downsides to the binning or local window approach .first , due to the inhomogeneity of reads , certain bins will receive much larger number of reads overall than other bins , and the optimal window size varies across the genome .if the number of reads in a bin is not large enough , the normal approximations that are employed in many of these methods break down .second , by binning or fixed - size window sliding , the estimated cnv / cna boundaries can be imprecise if the actual breakpoints are not close to the bin or window boundary .this problem can be somewhat mitigated by refining the boundary after the change point is called , as done in segseq . in this paper, we propose a unified model , one that detects the change points , estimates their locations , and assesses their uncertainties simultaneously . 
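The illustrative data of figure [figillusreads] can be generated directly from the model described above: read positions arrive as a Poisson process, and each read is labelled as coming from the case sample with position-dependent probability. The intensities, segment boundaries, and probabilities below are illustrative numbers, not values from the paper, and the total process is taken homogeneous for simplicity (baseline inhomogeneity affects both samples and cancels in the conditional probability).

```python
import numpy as np

def sim_reads(T, total_rate, p_of_t, rng):
    """Simulate read positions on [0, T] from a homogeneous Poisson process with
    intensity `total_rate`, labelling each read 'case' (z=1) with probability
    p_of_t(position) and 'control' (z=0) otherwise."""
    n = rng.poisson(total_rate * T)
    pos = np.sort(rng.uniform(0.0, T, size=n))
    z = (rng.random(n) < p_of_t(pos)).astype(int)
    return pos, z

def p_of_t(t):
    """Piecewise-constant conditional probability with one elevated segment,
    mimicking a copy number gain in the case sample."""
    return np.where((t > 4e4) & (t < 6e4), 0.7, 0.5)

rng = np.random.default_rng(0)
pos, z = sim_reads(T=1e5, total_rate=0.01, p_of_t=p_of_t, rng=rng)
```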
to illustrate and evaluate our method , we apply it to real and spiked - in data based on a pair of nci-60 tumor / normal cell lines , hcc1954 and bl1954 .the data for these samples were produced and investigated by .the whole - genome shotgun sequencing was performed on the illumina platform and the reads are 36 bp long .after read and mapping quality exclusions , 7.72 million and 6.65 million reads were used for the tumor ( hcc1954 ) and normal ( bl1954 ) samples , respectively .newer sequencing platforms produce much more massive data sets .we start with a statistical model for the sequenced reads .let and be the number of reads whose first base maps to the left of base location of a given chromosome for the case and control samples , respectively .we can view these count processes as realizations of two nonhomogeneous poisson processes ( nhpp ) , one each for the case and control samples , \\[-8pt ] \nonumber \ { y_{t}\ } & \sim&\operatorname{nhpp}(\lambda_{t}).\end{aligned}\ ] ] the scale is in base pairs .the scenario where two or more reads are mapped to the same genomic position is allowed by letting and take values larger than 1 and assuming that the observed process is binned at the integers .we propose a change - point model on the conditional probability of an event at position being from , given that there is such an event from either or , namely , an example of data according to this model is shown in figure [ figillusreads ] .the change - point model assumption can be equivalently expressed as where ] of any change point , any monotonically increasing function \rightarrow[0,t] ] , we have the following theorem shows that any breakpoint estimator satisfying the equivariance condition can be decomposed into a simpler form . [ thmequivthm ] let , which takes values in , be an estimator of the breakpoints .then is monotone transform equivariant if and only if , where taking integer values does not depend on . for ease of notation , we let be the natural extension of .suppose , where .note that is invariant to all monotone transformations of the arrival times , hence so is . therefore , . in the other direction ,since and contain the same information as , we must have .suppose that depend on in a nontrivial way but satisfies the monotone transform equivariance condition .this means that there exist such that .but since and are both increasing finite sequences on ] interval and another parameter for outside the interval . 
from the binomial log - likelihood function one obtains }\biggl\ { z_{k}\log\biggl(\frac { \hat{p}_{ij}}{\hat{p}}\biggr)+(1-z_{k})\log\biggl(\frac { 1-\hat{p}_{ij}}{1-\hat{p}}\biggr)\biggr\}\\ & & { } + \sum_{k\notin[i , j]}\biggl\ { z_{k}\log\biggl(\frac{\hat { p}_{0}}{\hat{p}}\biggr)+(1-z_{k})\log\biggl(\frac{1-\hat { p}_{0}}{1-\hat{p}}\biggr)\biggr\ } , \label{eqbinomglr}\end{aligned}\ ] ] where , , are maximum likelihood estimates of success probabilities }z_{k}/(j - i+1 ) , \\\hat{p}_{0}&=&\sum_{k\notin[i , j]}z_{k}/(m - j+i-1).\end{aligned}\ ] ] the glr and score statistics allow us to measure how distinct a specific interval $ ] is compared to the entire chromosome .for the more general problem in which is not given but only one such pair exists , we compute the statistic for all unique pairs of to find the most significantly distinct interval .this operation is and to improve efficiency , we have implemented a search - refinement scheme called iterative grid scan in our software .it works by identifying larger interesting intervals on a coarse grid and then iteratively improving the interval boundary estimates .the computational complexity is roughly and hence scales easily .a similar idea was studied in . in the general model with multiple unknown change points, one could theoretically estimate all change points simultaneously by searching through all possible combinations of .but this is a combinatorial problem where even the best dynamic programming solution [ ; ; ] would not scale well for a data set containing millions of reads .thus , we adapted circular binary segmentation [ ; ] to our change - point model as a greedy alternative .in short , we find the most significant region over the entire chromosome , which divides the chromosome in to 3 regions ( or two , if one of the change points lies on the edge ) .then we further scan each of the regions , yielding a candidate subinterval in each region . at each step , we add the most significant change point(s ) over all of the regions to the collection of change - point calls .model complexity grows as we introduce more change points .this brings us to the issue of model selection : we need a method to choose an appropriate number of change points . proposed a solution to this problem for gaussian change - point models with shifts in mean . like the gaussian model, the poisson change - point model has irregularities that make classic measures such as the aic and the bic inappropriate .an extension of gives a modified bayes information criterion ( mbic ) for our model , derived as a large sample approximation to the bayes factor in the spirit of : where is the number of unique values in .the first term of mbic is the generalized log - likelihood ratio for the model with change points versus the null model with no change points . in our context, ideally reflects the number of biological breakpoints that yield the copy number variants .the remaining terms can be interpreted as a `` penalty '' for model complexity .these penalty terms differ from the penalty term in the classic bic of due to nondifferentiability of the likelihood function in the change - point parameters and also due to the fact that the range of values for grow with the number of observations . 
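The generalized likelihood ratio expression above is partially garbled by extraction; the sketch below computes it from the 0/1 labels ordered by genomic position and scans candidate intervals by brute force. In the actual method this quadratic search is replaced by the iterative grid scan and the CBS-style recursion described in the text, and the number of change points is chosen with the mBIC, which this sketch omits; function and variable names are assumptions.

```python
import numpy as np
from scipy.special import xlogy

def binom_loglik(k, n, p):
    """Bernoulli log-likelihood of k successes in n trials at success probability p,
    dropping the combinatorial constant (it cancels in the likelihood ratio)."""
    return xlogy(k, p) + xlogy(n - k, 1.0 - p)

def glr_interval(z, i, j):
    """GLR statistic for the interval z[i..j] (inclusive) having its own success
    probability versus a single common probability for the whole sequence."""
    z = np.asarray(z, dtype=float)
    m = len(z)
    k_in, n_in = z[i:j + 1].sum(), j - i + 1
    k_out, n_out = z.sum() - k_in, m - n_in
    p_in = k_in / n_in
    p_out = k_out / max(n_out, 1)
    p_all = z.mean()
    return (binom_loglik(k_in, n_in, p_in) + binom_loglik(k_out, n_out, p_out)
            - binom_loglik(k_in + k_out, m, p_all))

def best_interval(z, grid_step=1):
    """O(m^2) scan for the most significant interval; grid_step > 1 gives a crude
    analogue of the coarse pass of an iterative grid scan."""
    m = len(z)
    best = (None, None, -np.inf)
    for i in range(0, m, grid_step):
        for j in range(i, m, grid_step):
            g = glr_interval(z, i, j)
            if g > best[2]:
                best = (i, j, g)
    return best
```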
for more details on the interpretation of the terms in the mbic , see .finally , we report the segmentation with change points .as noted in the , it is particularly important for sequencing data to assess the uncertainty in the relative read intensity function at each genomic position .we approach this problem by constructing approximate bayesian confidence intervals .suppose are independent realizations of bernoulli random variables with success probabilities .consider first the one change - point model ( which can be seen as a local part of a multiple change - point model ) , where without loss of generality , we may take .assume a uniform prior for on this discrete set .let be the number of successes up to and including the realization , our goal is to construct confidence bands for at each .assume a prior for and .if we knew , then the posterior distribution of and is now , without knowing the actual , we compute the posterior distribution of as as before , the first part of the summation term is a beta distribution , and for the second term , we define the likelihood of the change point at as and observe that with the uniform prior on , where and and are with respect to the prior distributions of and . with priors on and , we can find the closed form expression of : \\[-8pt ] \nonumber & = & \frac{b(\alpha+s _{ i},\beta+ i - s _ { i})b(\alpha + s_{m}-s _ { i},\beta+m- i - s_{m}+s _ { i})}{b(\alpha,\beta ) ^{2}}\\ & = & \frac{\gamma(\alpha+s _ { i})\gamma(\beta+i - s _ { i})}{\gamma(\alpha+\beta+ i)}\nonumber\\ & & { } \times\frac{\gamma(\alpha+s_{m}-s _ { i})\gamma(\beta+m- i - s_{m}+s _ { i})}{\gamma(\alpha+\beta+m- i)}\frac{\gamma ( \alpha+\beta)^{2}}{\gamma(\alpha)^{2}\gamma ( \beta)^{2}}.\nonumber\end{aligned}\ ] ] hence , we can compute , without knowing the actual value of , observe that the posterior distribution is a mixture of distributions . in theory , we could compute weights for all positions and then numerically compute quantiles of the posterior beta mixture distribution to obtain the bayesian confidence intervals . however , in practice , one can approximate the sum in ( [ newlab ] ) by ,\ ] ] for some small , hence ignoring the highly unlikely locations for the change points .empirically , we use .it is easy to see that the sequence of log likelihood ratios for alternative change points , , form random walks with negative drift as moves away from the true change point [ ] .the negative drift depends on the true , and is larger in absolute magnitude when the difference between and is larger . 
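To make the confidence interval construction above concrete, here is a sketch of the point-wise Bayesian interval for the single change-point case, using the Beta-mixture posterior and the truncation of negligible mixture weights. The Jeffreys Beta(1/2,1/2) prior, the grid-based quantile inversion, the cutoff value, and all names are assumptions of this sketch rather than details taken from the paper or its R package.

```python
import numpy as np
from scipy.special import betaln
from scipy.stats import beta as beta_dist

def marginal_logweight(S, m, tau, a=0.5, b=0.5):
    """log P(data | change point at tau) under independent Beta(a,b) priors on the
    two segment success probabilities; S[k] = number of successes among z_1..z_k."""
    s1, n1 = S[tau], tau
    s2, n2 = S[m] - S[tau], m - tau
    return (betaln(a + s1, b + n1 - s1) + betaln(a + s2, b + n2 - s2)
            - 2.0 * betaln(a, b))

def pointwise_ci(z, i0, a=0.5, b=0.5, eps=1e-4, level=0.95):
    """Approximate credible interval for the success probability at 1-based position
    i0: a Beta mixture over candidate change points, dropping components whose
    weight is below eps times the largest weight."""
    z = np.asarray(z, dtype=float)
    m = len(z)
    S = np.concatenate([[0.0], np.cumsum(z)])
    taus = np.arange(1, m)                                    # candidate change points
    lw = np.array([marginal_logweight(S, m, t, a, b) for t in taus])
    w = np.exp(lw - lw.max()); w /= w.sum()
    keep = w >= eps * w.max()
    taus, w = taus[keep], w[keep] / w[keep].sum()

    left = i0 <= taus                                         # does i0 fall left of the change point?
    s_seg = np.where(left, S[taus], S[m] - S[taus])
    n_seg = np.where(left, taus, m - taus)
    A, B = a + s_seg, b + n_seg - s_seg

    grid = np.linspace(1e-6, 1 - 1e-6, 2001)                  # numerical quantile inversion of the mixture CDF
    cdf = np.sum(w[:, None] * beta_dist.cdf(grid[None, :], A[:, None], B[:, None]), axis=0)
    lo = grid[np.searchsorted(cdf, (1 - level) / 2)]
    hi = grid[np.searchsorted(cdf, 1 - (1 - level) / 2)]
    return lo, hi
```

The extension to several change points augments the mixture with components from the neighbouring estimated change points, as described in the text.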
with unknown , since can be made arbitrarily close to 1 for , one can make the same random walk construction for bounded away by from , as done in .this implies that , for any , one may find a constant such that for any at least steps away from , with probability approaching 1 .hence , it is reasonable to use a small cutoff to produce a close approximation to the posterior distribution .the extension of this construction to multiple change points is straightforward .it entails augmenting the mixture components of one change point with that of its neighboring change points .this gives a computationally efficient way of approximating the bayesian confidence interval using , typically , a few hundred mixture components , which has been implemented in ` seqcbs ` .there is also an extensive body of literature on constructing confidence intervals and confidence sets for estimators of the change point .we refer interested readers to for discussion and efficiency comparison of various confidence sets in change - point problems .we first applied the proposed method to a matched pair of tumor and normal nci-60 cell lines , hcc1954 and bl1954 . conducted the sequencing of these samples using the illumina platform .for comparison with array - based copy number profiles on the same samples , we obtained array data on hcc1954 and bl1954 from the nci-60 database at http://www.sanger.ac.uk / genetics / cgp / nci60/. we applied the cbs algorithm [ , ] with modified bic stopping algorithm [ ] to estimate relative copy numbers based on the array data . [ cols="^,^ " , ] figure [ figperfoverall ] summarizes the performance comparison at default settings for a number of spike - in signal lengths .the horizontal lines are mean recall and precision rates for the methods . 
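The recall and precision summaries of the spike-in comparison can be reproduced from the called and spiked-in intervals with a small amount of code. The reciprocal-overlap matching rule below is an assumption of this sketch, not necessarily the scoring criterion used for figure [figperfoverall].

```python
def overlaps(a, b, min_frac=0.5):
    """True if intervals a=(start, end) and b=(start, end) reciprocally overlap by
    at least min_frac of each of their lengths."""
    ov = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    return ov >= min_frac * (a[1] - a[0]) and ov >= min_frac * (b[1] - b[0])

def precision_recall(called, truth, min_frac=0.5):
    """called, truth: lists of (start, end) intervals on the same chromosome."""
    tp_truth = sum(any(overlaps(t, c, min_frac) for c in called) for t in truth)
    tp_called = sum(any(overlaps(c, t, min_frac) for t in truth) for c in called)
    recall = tp_truth / len(truth) if truth else float("nan")
    precision = tp_called / len(called) if called else float("nan")
    return precision, recall
```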
we see that seqcbs , used with either the score test statistic or the glr statistic , offers significant improvement over the existing method in both precision and recall .the performances of the score and glr statistics are very similar , as their recall and precision curves almost overlap .the improvement in precision can be largely attributed to the fact that mbic provides a good estimate of model complexity , as can be seen in figure [ fig6](a ) ..3d1.3d1.3d1.3d1.3d1.3d1.3d1.3d1.3@ + length & 2.49 & 2.68 & 2.91 & 3.10 & 3.18 & 3.29 & 3.51 & 3.73 & 3.95 + + segseq & + default & 0.066 & 0.146 & 0.394 & 0.514 & 0.464 & 0.476 & 0.502 & 0.438 & 0.424 + w250 & 0.23 & 0.422 & 0.67 & 0.662 & 0.59 & 0.654 & 0.64 & 0.564 &0.516 + w750 & & & 0.148 & 0.206 & 0.248 & 0.36 & 0.408 & 0.384 & 0.346 + a500 & & 0.148 & 0.394 & 0.514 & 0.464 & 0.476 & 0.502 & 0.438 & 0.424 + a2000 & & 0.146 & 0.394 & 0.514 & 0.464 & 0.476 & 0.502 & 0.438 & 0.424 + b25 & & 0.182 & 0.404 & 0.532 & 0.484 & 0.502 & 0.506 & 0.452 & 0.458 + b5 & & 0.126 & 0.382 & 0.476 & 0.432 & 0.442 & 0.476 & 0.428 & 0.364 + seqcbs & + scr - def & 0.49 & 0.714 & 0.95 & 0.988 & 0.95 & 0.936 & 0.968 & 0.878 & 0.782 + bin - def & 0.492 & 0.718 & 0.948 & 0.99 & 0.956 & 0.936 & 0.968 & 0.876 & 0.81 + scr - g5 & 0.496 & 0.71 & 0.922 & 0.99 & 0.956 & 0.956 & 0.978 & 0.922 & 0.844 + bin - g5 & 0.496 & 0.712 & 0.928 & 0.99 & 0.958 & 0.962 & 0.98 & 0.946 & 0.844 + scr - g15 & 0.494 & 0.708 & 0.926 & 0.974 & 0.942 & 0.938 & 0.968 & 0.89 & 0.736 + bin - g15 & 0.496 & 0.716 & 0.93 & 0.976 & 0.946 & 0.96 & 0.972 & 0.91 & 0.748 + + segseq & + default & 0.049 & 0.105 & 0.235 & 0.305 & 0.284 & 0.263 & 0.276 & 0.237 & 0.255 + w250 & 0.174 & 0.317 & 0.490 & 0.472 & 0.467 & 0.478 & 0.442 & 0.405 &0.399 + w750 & & & 0.107 & 0.137 & 0.165 & 0.227 & 0.242 & 0.232 & 0.212 + a500 & & 0.097 & 0.235 & 0.305 & 0.284 & 0.263 & 0.276 & 0.237 & 0.255 + a2000 & & 0.101 & 0.235 & 0.305 & 0.284 & 0.263 & 0.276 & 0.237 & 0.255 + b25 & & 0.101 & 0.203 & 0.278 & 0.254 & 0.243 & 0.246 & 0.219 & 0.240 + b5 & & 0.104 & 0.278 & 0.361 & 0.323 & 0.308 & 0.327 & 0.295 & 0.271 + seqcbs & + scr - def & 0.980 & 0.997 & 0.985 & 0.988 & 0.985 & 0.944 & 0.968 & 0.878 & 0.839 + bin - def & 0.984 & 0.997 & 0.988 & 0.990 & 0.992 & 0.947 & 0.968 & 0.876 & 0.884 + scr - g5 & 0.984 & 0.997 & 0.956 & 0.990 & 0.992 & 0.980 & 0.994 & 0.945 & 0.942 + bin - g5 & 0.984 & 0.994 & 0.959 & 0.990 & 0.994 & 0.990 & 0.996 & 0.977 & 0.942 + scr - g15 & 0.980 & 0.994 & 0.953 & 0.944 & 0.961 & 0.949 & 0.964 & 0.876 & 0.710 + bin - g15 & 0.984 & 0.994 & 0.953 & 0.946 & 0.973 & 0.984 & 0.972 & 0.910 & 0.733 + we studied the performance sensitivity on tuning parameters .segseq allows three tuning parameters : local window size ( w ) , number of false positive candidates for initialization ( a ) , and number of false positive segments for termination ( b ) .the proposed method has a step size parameter ( g ) that controls the trade - off between speed and accuracy in our iterative grid scan component , and hence influences performance .we varied these parameters and recorded the performance measures in table [ tabperformancemeasures ] .it appears that local window size ( w ) is an important tuning parameter for segseq , and in scenarios with relatively short signal length , a smaller w provides significant improvement in its performance .this echoes with our previous discussion that methods using a single fixed window size would perform less well when the signals are not of the corresponding length .some of the 
parameter combinations for segseq result in program running errors in some scenarios , and are marked as na .the step size parameter ( g ) in seqcbs , in constrast , controls the rate at which coarse segment candidates are refined and the rate at which the program descends into searching smaller local change points , rather than defining a fixed window size .a smaller step size typically yields slightly better performance .however , the proposed method is not nearly as sensitive to its tuning parameters .we also conducted a timing experiment to provide the reader with a sense of the required computational resources to derive the solution .our proposed method compares favorably with segseq as seen in figure [ fig6](b ) .the glr statistic is slightly more complex to compute than the score statistic , as is reflected in the timing experiment .however , copy number profiling is inherently a highly parallelizable computing problem : one may distribute the task for each chromosome among a multi - cpu computing grid , hence dramatically reducing the amount of time required for this analysis .we proposed an approach based on nonhomogeneous poisson processes to directly model next - generation dna sequencing data , and formulated a change - point model to conduct copy number profiling . the model yields simple score and generalized likelihood ratio statistics , as well as a modified bayes information criterion for model selection .the proposed method has been applied to real sequencing data and its performance compares favorably to an existing method in a spike - in simulation study .statistical inference , in the form of confidence estimates , is very important for sequencing - based data , since , unlike arrays , the effective sample size ( i.e. , coverage ) for estimating copy number varies substantially across the genome . in this paper, we derived a procedure to compute bayesian confidence intervals on the estimated copy number .other types of inference , such as -values or confidence intervals on the estimated change points , may also be useful . compares different types of confidence intervals on the change points , and the methods there can be directly applied to this problem .the reader is referred to and for existing methods on significance evaluation .some sequencing experiments produce paired end reads , where two short reads are performed on the two ends of a longer fragment of dna. the pairing information can be quite useful in the profiling of structural genomic changes .it will be important to extend the approach in this paper to handle this more complex data type .a limitation of the proposed method and the existing methods is that they do not handle allele - specific copy number variants .it is possible to extend our model to accommodate this need . 
with deep sequencing, one may assess whether each loci in a cnv is heterozygous , and estimate the degree to which each allele contributes to the gain or loss of copy number , by considering the number of reads covering the locus with the major allele versus those with the minor allele .this is particularly helpful for detecting deletion .furthermore , in the context of assessing the allele - specific copy number , existing snp arrays have the advantage that the assay targets specific sites for that problem , whereas to obtain sufficient evidence of allele - specific copy number variants with sequencing , a much greater coverage would be required since the overwhelming majority of reads would land in nonallelic genomic regions .spatial models that borrow information across adjacent variant sites , such as and , would be helpful for improving power .recently , there has been increased attention to the problem of simultaneous segmentation of multiple samples [ ; ; ; ] .one may also wish to extend this method to the multi - sample setting , where in addition to modeling challenges , one also needs to address more sources of systematic biases , such as batch effects and carry - over problems .computational challenges remain in this field . with sequencing capacity growing at record speed ,even basic operations on the data set are resource - consuming .it is pertinent to develop faster and more parallelizable solutions to the copy number profiling problem .we thank h. p. ji , g. walther and d. o. siegmund for their inputs .
we propose a flexible change - point model for inhomogeneous poisson processes , which arise naturally from next - generation dna sequencing , and derive score and generalized likelihood ratio statistics for shifts in intensity functions . we construct a modified bayes information criterion ( mbic ) to guide model selection , and point - wise approximate bayesian confidence intervals for assessing the confidence in the segmentation . the model is applied to dna copy number profiling with sequencing data and evaluated on simulated spike - in and real data sets .
coupled oscillator theory is now a pervasive part of the theoretical neuroscientist s toolkit for studying the dynamics of models of biological neural networks .undoubtedly this technique originally arose in the broader scientific community through a fascination with understanding synchronisation in networks of interacting heterogeneous oscillators , and can be traced back to the work of huygens on an odd kind of sympathy `` between coupled pendulum clocks '' .subsequently the theory has been developed and applied to the interaction between organ pipes , phase - locking phenomena in electronic circuits , the analysis of brain rhythms , chemical oscillations , cardiac pacemakers , circadian rhythms , flashing fireflies , coupled josephson junctions , rhythmic applause , animal flocking , fish schooling , and behaviours in social networks . for a recent overview of the application of coupled phase oscillator theory to areas as diverse as vehicle coordination , electric power networks , and clock synchronisation in decentralised networkssee the recent survey article by drfler and bullo . given the widespread nature of oscillations in neural systems it is no surprise that the science of oscillators has found such ready application in neuroscience .this has proven especially fruitful for shedding light on the functional role that oscillations can play in feature binding , cognition , memory processes , odour perception , information transfer mechanisms , inter - limb coordination , and the generation of rhythmic motor output .neural oscillations also play an important role in many neurological disorders , such as excessive synchronisation during seizure activity in epilepsy , tremor in patients with parkinson s disease or disruption of cortical phase synchronisation in schizophrenia . as suchit has proven highly beneficial to develop methods for the control of ( de)synchronisation in oscillatory networks , as exemplified by the work of tass _ et al_. for therapeutic brain stimulation techniques . from a transformative technological perspective, oscillatory activity is increasingly being used to control external devices in brain - computer interfaces , in which subjects can control an external device by changing the amplitude of a particular brain rhythm .neural oscillations can emerge in a variety of ways , including intrinsic mechanisms within individual neurons or by interactions between neurons . at the single neuron level, sub - threshold oscillations can be seen in membrane voltage as well as rhythmic patterns of action potentials .both can be modelled using the hodgkin - huxley conductance formalism , and analysed mathematically with dynamical systems techniques to shed light on the mechanisms that underly various forms of rhythmic behaviour , including tonic spiking and bursting ( see e.g. ) .the high dimensionality of biophysically realistic single neuron models has also encouraged the use of reduction techniques , such as the separation of time - scales recently reviewed in , or the use of phenomenological models , such as fitzhugh - nagumo ( fhn ) , to regain some level of mathematical tractability .this has proven especially useful when studying the response of single neurons to forcing , itself a precursor to understanding how networks of interacting neurons can behave .when mediated by synaptic interactions , the repetitive firing of pre - synaptic neurons can cause oscillatory activation of post - synaptic neurons . 
at the level of neural ensembles ,synchronised activity of large numbers of neurons gives rise to macroscopic oscillations , which can be recorded with a micro - electrode embedded within neuronal tissue as a voltage change referred to as a local field potential ( lfp ) .these oscillations were first observed outside the brain by hans berger in 1924 in electroencephalogram ( eeg ) recordings , and have given rise to the modern classification of brain rhythms into frequency bands for alpha activity ( 8 - 13 hz ) ( recorded from the occipital lobe during relaxed wakefulness ) , delta ( 1 - 4 hz ) , theta ( 4 - 8 hz ) , beta ( 13 - 30 hz ) and gamma ( 30 - 70 hz ) .the latter rhythm is often associated with cognitive processing , and it is now common to link large scale neural oscillations with cognitive states , such as awareness and consciousness .for example , from a practical perspective the monitoring of brain states via eeg is used to determine depth of anaesthesia .such macroscopic signals can also arise from interactions between different brain areas , the thalamo - cortical loop being a classic example .neural mass models ( describing the coarse grained activity of large populations of neurons and synapses ) have proven especially useful in understanding eeg rhythms , as well as in augmenting the dynamic causal modelling framework ( driven by large scale neuroimaging data ) for understanding how event - related responses result from the dynamics of coupled neural populations .one very influential mathematical technique for analysing networks of neural oscillators , whether they be built from single neuron or neural mass models , has been that of weakly coupled oscillator theory , as comprehensively described by hoppensteadt and izhikevich . in the limit of weak coupling between limit cycle oscillators invariant manifold theory and averaging theory be used to reduce the dynamics to a set of phase equations in which the relative phase between oscillators is the relevant dynamical variable .this approach has been applied to neural behaviour ranging from that seen in small rhythmic networks up to the whole brain . despite the powerful tools and wide - spread use afforded by this formalism ,it does have a number of limitations ( such as assuming the persistence of the limit cycle under coupling ) and it is well to remember that there are other tools from the mathematical sciences relevant to understanding network behaviour . in this reviewwe wrap the weakly coupled oscillator formalism in a variety of other techniques ranging from symmetric bifurcation theory and groupoid formalisms through to more `` physics - based '' approaches for obtaining reduced models of large networks .this highlights the regimes where the standard formalism is applicable , and provides a set of complementary tools when it does not .these are especially useful when investigating systems with strong coupling , or ones for which the rate of attraction to a limit cycle is slow . 
in [ sec :neuronoscillators ] we review some of the key mathematical models of oscillators in neuroscience , ranging from single neuron to neural mass , as well as introduce the standard machinery for describing synaptic and gap junction coupling .we then present in [ sec : collectivedyns ] an overview of some of the more powerful mathematical approaches to understanding the collective behaviour in coupled oscillator networks , mainly drawn from the theory of symmetric dynamics .we touch upon the master stability function approach and the groupoid formalism for handling coupled cell systems . in [ sec : coupledlimitcycles ] we review some special cases where it is either possible to say something about the stability of the globally synchronous state in a general setting , or that of phase - locked states for strongly coupled networks of integrate - and - fire neurons .the challenge of the general case is laid out in [ sec : reduced ] , where we advocate the use of phase - amplitude coordinates as a starting point for either direct network analysis or network reduction . to highlight the importance of dynamics off cycle we discuss the phenomenon of shear - induced chaos . in the same sectionwe review the reduction to the standard phase - only description of an oscillator , covering the well known notions of isochrons and phase response curves .the construction of phase interaction functions for weakly coupled phase oscillator networks is covered in [ sec : weakcoupling ] , together with tools for analysing phase - locked states .moreover , we go beyond standard approaches and describe the emergence of turbulent states in continuum models with non - local coupling .another example of something more complicated than a periodic attractor is that of a heteroclinic attractor , and these are the subject of [ sec : heteroclinic ] .the subtleties of phase reduction in the presence of stochastic forcing are outlined in [sec : stochastic ] .the search for reduced descriptions of very large networks is the topic of [ sec : macroscopic ] , where we cover recent results for winfree networks that provide an exact mean - field description in terms of a complex order parameter .this approach makes use of the ott - antonsen ansatz that has also found application to chimera states , and which we discuss in a neural context . in [ sec : applications ] we briefly review some examples where the mathematics of this review have been applied , and finally in [ sec : discussion ] we discuss some of the many open challenges in the field of neural network dynamics .we will assume the reader has familiarity with the following : * the basics of nonlinear differential equation descriptions of dynamical systems such as linear stability and phase - plane analysis .* ideas from the qualitative theory of differential equations / dynamical systems such as asymptotic stability , attractors and limit cycles .* generic codimension - one bifurcation theory of equilibria ( saddle - node , hopf ) and of periodic orbits ( saddle - node of limit cycles , heteroclinic , torus , flip ) .there are a number of texts that cover this material very well in the context of neuroscience modelling , for example . 
at the endwe include a glossary of some abbreviations that are introduced in the text .nonlinear ionic currents , mediated by voltage - gated ion channels , play a key role in generating membrane potential oscillations and action potentials .there are many ordinary differential equation ( ode ) models for voltage oscillations , ranging from biophysically detailed conductance - based models through to simple integrate - and - fire ( if ) caricatures .this style of modelling has also proved fruitful at the population level , for tracking the averaged activity of a near synchronous state . in all these cases bifurcation analysisis especially useful for classifying the types of oscillatory ( and possibly resonant ) behaviour that are possible .here we give a brief overview of some of the key oscillator models encountered in computational neuroscience , as well as models for electrical and chemical coupling necessary to build networks .the work of hodgkin and huxley in elucidating the mechanism of action potentials in the squid giant axon is one of the major breakthroughs of dynamical modelling in physiology , and see for a review .their work underpins all modern electrophysiological models , exploiting the observation that cell membranes behave much like electrical circuits .the basic circuit elements are 1 ) the phospholipid bilayer , which is analogous to a capacitor in that it accumulates ionic charge as the electrical potential across the membrane changes ; 2 ) the ionic permeabilities of the membrane , which are analogous to resistors in an electronic circuit ; and 3 ) the electrochemical driving forces , which are analogous to batteries driving the ionic currents .these ionic currents are arranged in a parallel circuit .thus the electrical behaviour of cells is based upon the transfer and storage of ions such as k and na .our goal here is to illustrate , by exploiting specific models of excitable membrane , some of the concepts and techniques which can be used to understand , predict , and interpret the excitable and oscillatory behaviours that are commonly observed in single cell electrophysiological recordings .we begin with the mathematical description of the hodgkin - huxley model .the standard dynamical system for describing a neuron as a spatially isopotential cell with constant membrane potential is based upon conservation of electric charge , so that where is the cell capacitance , the applied current and represents the sum of individual ionic currents : in the hodgkin - huxley ( hh ) model the membrane current arises mainly through the conduction of sodium and potassium ions through voltage dependent channels in the membrane .the contribution from other ionic currents is assumed to obey ohm s law ( and is called the leak current ) .the , and are conductances ( conductance=1/resistance ) .the great insight of hodgkin and huxley was to realise that depends upon four activation gates : , whereas depends upon three activation gates and one inactivation gate : . 
herethe gating variables all obey equations of the form the conductance variables described by take values between and and approach the asymptotic values with time constants .these six functions are obtained from fits with experimental data .it is common practice to write , , where and have the interpretation of opening and closing channel transition rates respectively .the details of the hh model are provided in appendix a for completeness .a numerical bifurcation diagram of the model in response to constant current injection is shown in fig . [ fig : hhbif ] , illustrating that oscillations can emerge in a hopf bifurcation with increasing drive . .the green circles show the amplitude of a stable limit cycle and the blue circles indicate unstable limit cycle behaviour .the solid red line shows the stable fixed point and the black line shows the unstable fixed point .details of the model are in appendix a. [ fig : hhbif ] , width=288 ] the mathematical forms chosen by hodgkin and huxley for the functions and , , are all transcendental functions .both this and the high dimensionality of the model make analysis difficult . however , considerable simplification is attained with the following observations : ( i ) is small for all so that the variable rapidly approaches its equilibrium value , and ( ii ) the equations for and have similar time - courses and may be _ slaved _ together .this has been put on a more formal footing by abbott and kepler , to obtain a reduced planar model for obtained from the full hodgkin - huxley model under the replacement and for with a prescribed choice of .the phase - plane and nullclines for this system are shown in fig . [fig : hhr ] . andgreen for ) of the reduced hh neuron mode , obtained using the reduction technique of abbott and kepler , for the oscillatory regime ( ) capable of supporting a periodic train of spikes .the periodic orbit is shown in blue .[ fig : hhr ] , width=288 ] for zero external input the fixed point is stable and the neuron is said to be _excitable_. when a positive external current is applied the low - voltage portion of the nullcline moves up whilst the high - voltage part remains relatively unchanged . for sufficiently large constant external inputthe intersection of the two nullclines falls within the portion of the nullcline with positive slope . in this casethe fixed point is unstable and the system may support a limit cycle .the system is said to be oscillatory as it may produce a train of action potentials .action potentials may also be induced in the absence of an external current for synaptic stimuli of sufficient strength and duration .this simple planar model captures all of the essential features of the original hh model yet is much easier to understand from a geometric perspective .indeed the model is highly reminiscent of the famous fhn model , in which the voltage nullcline is taken to be a cubic function .both models show the onset of repetitive firing at a non - zero frequency as observed in the hh model ( when an excitable state loses stability via a subcritical hopf bifurcation ) . 
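Both the reduced HH description and the FHN caricature are planar systems that can be explored numerically in a few lines. The sketch below integrates the FitzHugh-Nagumo equations in an excitable and in an oscillatory regime; the parameter values are conventional illustrative choices rather than values quoted in this review.

```python
import numpy as np
from scipy.integrate import solve_ivp

def fhn(t, y, I, a=0.7, b=0.8, eps=0.08):
    """FitzHugh-Nagumo equations: cubic voltage nullcline, linear recovery variable."""
    v, w = y
    dv = v - v**3 / 3 - w + I
    dw = eps * (v + a - b * w)
    return [dv, dw]

# excitable rest state (I = 0) versus repetitive spiking under constant drive
for I in (0.0, 0.5):
    sol = solve_ivp(fhn, (0, 400), [-1.2, -0.6], args=(I,), max_step=0.5)
    print(f"I = {I}: v range [{sol.y[0].min():.2f}, {sol.y[0].max():.2f}]")
```

Sweeping the drive term maps out the transition from a stable rest state to large-amplitude oscillations, qualitatively echoing the behaviour discussed above for the conductance-based models.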
however , unlike real cortical neurons they can not fire at arbitrarily low frequency .this brings us to consider modifications of the original hh formalism to accommodate bifurcation mechanisms from excitable to oscillatory behaviours that can respect this experimental observation .many of the properties of real cortical neurons can be captured by making the equation for the recovery variable of the fhn equations quadratic ( instead of linear ) .we are thus led to the cortical model of wilson : where and . here is like the membrane potential , and plays the role of a gating variable .in addition to the single fixed point of the fhn model it is possible to have another pair of fixed points , as shown in fig .[ fig : cortical ] ( right ) . as increases two fixed points can annihilate at a _saddle node on an invariant curve _( snic ) bifurcation at . in the neighbourhood of this global bifurcation the firingfrequency scales like .for large enough there is only one fixed point on the middle branch of the cubic , as illustrated in fig .[ fig : cortical ] ( left ) .in this instance an oscillatory solution occurs via the same mechanism as for the fhn model . , , , .the voltage nullcline is shown in red and that of the recovery variable in green .left : , showing a stable fixed point ( black filled circle ) , a saddle ( grey filled circle ) and an unstable fixed point ( white filled circle ) .right : , where there is an unstable fixed point ( white filled circle ) with a stable limit cycle ( in blue ) for .[ fig : cortical ] , width=384 ] a snic bifurcation is not the only way to achieve a low firing rate in a single neurone model .it is also possible to achieve this via a homoclinic bifurcation , as is possible in the morris - lecar ( ml ) model .this was originally developed as a model for barnacle muscle fiber .morris and lecar introduced a set of coupled ordinary differential equations ( odes ) incorporating two ionic currents : an outward going , non - inactivating potassium current and an inward going , non - inactivating calcium current .assuming that the calcium currents operate on a much faster time scale than the potassium current one they formulated the following two dimensional system : with ) ] , and ] in order to have any chance of uniquely defining the future dynamics ; this means that for a state to be linearly stable , an infinite number of eigenvalues need to have real part less than zero .nonetheless , much can be learned about stability , control and bifurcation of dynamically synchronous states in the presence of delay ; for example , and the volume includes a number of contributions by authors working in this area .there are also well - developed numerical tools such as dde - biftool that allow continuation , stability and bifurcation analysis of coupled systems with delays .for an application of these techniques to the study of a wilson - cowan neural population model with two delays we refer the reader to .although no system is ever truly symmetric , in practise many models have a high degree of symmetry . neurons , but of the order of types http://neuromorpho.org/neuromorpho/index.jsp meaning there is a very high replication of cells that are only different by their location and exact morphology .] indeed many real world networks that have _ grown _ ( e.g. 
giving rise to tree - like structures ) are expected to be well approximated by models that have large symmetry groups .symmetric ( more precisely , equivariant ) dynamics provides a number of powerful mathematical tools that one can use to understand emergent properties of systems of the form with .we say ( [ eq : ode ] ) is _ equivariant _ under the action of a group if and only if for any and .there is a well developed theory of dynamics with symmetry ; in particular see .these give methods that help in a number of ways : * * description : * one can identify symmetries of networks and dynamic states to help classify and differentiate between them * * bifurcation : * there is a well - developed theory of bifurcation with symmetry to help understand the emergence of dynamically interesting ( symmetry broken ) states from higher symmetry states * * stability : * bifurcation with symmetry often gives predictions about possible bifurcation scenarios that includes information about stability * * generic dynamics : * symmetries and invariant subspaces can provide a powerful structure with which one can understand more complex attractors such as heteroclinic cycles * * design : * one can use symmetries to systematically build models and test hypotheses the types of symmetries that are often most relevant for mathematical modelling of finite networks of neurons are the permutation groups , i.e. the symmetric groups and their subgroups .nonetheless , continuum models of neural systems may have continuous symmetries that influence the dynamics and can be used as a tool to understand the dynamics ; see for example .we review some aspects of the equivariant dynamics that have proven useful in coupled systems that are relevant to neural dynamics - see for example . in doing sowe mostly discuss dynamics that respects some symmetry group of permutations of the systems .the full permutation symmetry group ( or simply , the symmetric group ) on objects , , is defined to be the set of all possible permutations of objects .formally it is the set of permutations ( invertible maps of this set ) . to determine effects of the symmetry , not only the group must be known but also its _ action _ on phase space .if this action is linear then it is a _ representation _ of the group .the representation of the symmetry group is critical to the structure of the stability , bifurcations and generic dynamics that are equivariant with the symmetry .for example , if each system is characterised by a single real variable , one can view the action of the permutations on as a _ permutation matrix _{ij } = \left\ { \begin{array}{cl } 1 & \mbox { if } i=\sigma(j)\\ 0 & \mbox { otherwise}\end{array}\right . , \ ] ] for each ; note that for any .table [ tab : symms ] lists some commonly considered examples of symmetry groups used in coupled oscillator network models ..some permutation symmetry groups that have been considered as examples of symmetries of coupled oscillator networks . [ cols="<,^,<",options="header " , ] we now review some techniques of reduction which can be employed to study the dynamics of when so that the perturbations may take the dynamics away from the limit cycle . in doingso we will reduce for example to an ode for taken modulo .clearly any solution of an ode must be continuous in and typically will be unbounded in growing at a rate that corresponds to the frequency of the oscillator . 
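as a brief aside, the equivariance condition above is easy to check numerically. the sketch below uses an assumed globally coupled vector field with full permutation symmetry (it is not one of the models discussed in the text) and verifies f(Px) = Pf(x) for a randomly chosen permutation, with the permutation matrix built using the convention [P]_{ij} = 1 if i = sigma(j).

```python
import numpy as np

# numerical check of equivariance for an assumed globally coupled system
# x_i' = -x_i + (K/N) * sum_j tanh(x_j), which has full S_N symmetry
rng = np.random.default_rng(0)
N, K = 6, 1.5

def f(x):
    return -x + (K/len(x))*np.tanh(x).sum()

sigma = rng.permutation(N)
P = np.zeros((N, N))
P[sigma, np.arange(N)] = 1.0              # [P]_{ij} = 1 if i = sigma(j)

x = rng.standard_normal(N)
print("f(Px) == P f(x):", np.allclose(f(P @ x), P @ f(x)))
print("P is orthogonal :", np.allclose(P @ P.T, np.eye(N)))
```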
strictly speaking ,the coordinate we are referring to in this case is on the _ lift _ of the circle to a covering space , and for any phase there are infinitely many lifts to given by for .however , in common with most literature in this area we will not make a notational difference between whether the phase is understood on the unit cell e.g. or on the lift , e.g. modulo .consider with .the _ asymptotic _ ( or _ latent _ ) _ phase _ of a point in the basin of attraction of the limit cycle of period is the value of such that where is a trajectory starting at .thus if and are trajectories on and off the limit cycle respectively , they have the same asymptotic phase if the distance between and vanishes as .the locus of all points with the same asymptotic phase is called an _isochron_. thus an isochron extends the notion of phase off the cycle ( within its basin of attraction ) .isochrons can also be interpreted as the leaves of the stable manifold of a hyperbolic limit cycle .they fully specify the dynamics in the absence of perturbations .there are very few instances where the isochrons can be computed in closed form ( though see the examples in for plane - polar models where the radial variable decouples from the angular one ) .computing the isochron foliation of the basin of attraction of a limit cycle is a major challenge since it requires knowledge of the limit cycle and therefore can only be computed in special cases or numerically. one computationally efficient method for numerically determining the isochrons is backward integration , however it is unstable and in particular for strongly attracting limit cycles the trajectories determined by backwards integration may quickly diverge to infinity .see izhikevich for a matlab code which determines smooth curves approximating isochrons .other methods include the continuation based algorithm introduced by osinga and moehlis , the geometric approach of guillamon and huguet to find high order approximations to isochrons in planar systems , quadratic and higher order approximations , and the forward integration method using the koopman operator and fourier averages as introduced by mauroy and mezi .this latter method is particularly appealing and given its novelty we describe the technique below the koopman operator approach for constructing isochrons for a -periodic orbit focuses on tracking observables ( or measures on a state space ) rather than the identification of invariant sets .the koopman operator , , is defined by , where is some observable of the state space and denotes the flow evolved for a time , staring at a point .the fourier average of an observable is defined as for a fixed , ( [ fourier ] ) is equivalent to a fourier transform of the ( time - varying ) observable computed along a trajectory .hence , for a dynamics with a stable limit cycle ( of frequency ) , it is clear that the fourier average can be nonzero only for the frequencies , .the fourier averages are the eigenfunctions of , so that perhaps rather remarkably the isochrons are level sets of for almost all observables .the only restriction being that the first fourier coefficient of the fourier observable evaluated along the limit cycle is nonzero over one period .an example of the use of this approach is shown in fig .[ fig : isochron ] , where we plot the isochrons of a stuart - landau oscillator .- z |z|^2 ( 1+i c)/2 ] .averaging the above over one period gives where we have used the result that .the function is -periodic and can be written as a fourier series , 
with the simplest example of an averaged phase - dynamics being which is called the adler equation .if we denote the maximum and minimum of by and respectively then for a _ phase - locked _ : state defined by require . in this casethere are two fixed points defined by .one of these is unstable ( say , so that ) and the other stable ( , with ) .this gives rise to a rotating solution with constant rotation frequency so that .the two solutions coalesce in a saddle - node bifurcation when and ( or equivalently when ) . in the case of the adler model the parameter region for phase - lockingis given explicitly by a triangular wedge defined by a so - called arnold tongue .outside of this tongue solutions _ drift _ ( they can not lock to the forcing period ) according to ( [ averaged ] ) , and the system evolves quasi - periodically .we treat weakly coupled phase oscillators in [ sec : weakcoupling ] .the theory of weakly coupled oscillators is now a standard tool of dynamical systems theory and has been used by many authors to study oscillatory neural networks , see for example .the book by hoppensteadt and izhikevich provides a very comprehensive review of this framework , which can also be adapted to study networks of relaxation oscillators ( in some singular limit ) .consider , for illustration , a system of interacting limit - cycle oscillators ( [ eq : pairwisecoupledcells ] ) . following the method in [ subsec : phaseresponse ] , similar to ( [ averaged ] ) we obtain the network s phase dynamics in the form where the frequency allows for the fact that oscillators are not identical and , for this reason , we will assume that .precisely this form of network model was originally suggested by winfree to describe populations of coupled oscillators .the winfree model assumes a separation of time - scales so that an oscillator can be solely characterised by its phase on cycle ( fast attraction to cycle ) and is described by the network equations describing a globally coupled network with a biologically realistic prc and pulsatile interaction function . using a mixture of analysis and numerics winfreefound that with large there was a transition to macroscopic synchrony at a critical value of the _ homogeneity _ of the population .following this kuramoto introduced a simpler model with interactions mediated by phase differences , and showed how the transition to collective synchronisation could be understood from a more mathematical perspective .for an excellent review of the kuramoto model see and . the natural way to obtain a phase - difference model from ( [ phase dynamicsnetwork ] ) is , as in [ subsec : phaseresponse ] , to average over one period of oscillation . for simplicitylet us assume that all the oscillators are identical , and , in which case we find that where the -periodic function is referred to as the _ phase interaction function_. if we write complex fourier series for and as respectively then with .note that certain caution has to be exercised in applying averaging theory .in general , one can only establish that a solution of the unaveraged equations is -close to a corresponding solution of the averaged system for times of .no such problem arises in the case of hyperbolic fixed points corresponding to phase - locked states . 
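a short numerical sketch of the adler equation above is given below, written here as dpsi/dt = delta - epsilon*sin(psi); the pure sinusoid and the sign conventions are assumptions, made so that the locking region |delta| <= epsilon and the drift rate sqrt(delta^2 - epsilon^2) outside the arnold tongue can be checked directly.

```python
import numpy as np
from scipy.integrate import solve_ivp

# adler equation sketch: dpsi/dt = delta - eps*sin(psi); locking for |delta| <= eps
def drift_frequency(delta, eps, T=2000.0):
    sol = solve_ivp(lambda t, p: delta - eps*np.sin(p), (0.0, T), [0.0], max_step=0.05)
    return (sol.y[0, -1] - sol.y[0, 0])/T    # average winding rate of the phase difference

eps = 1.0
for delta in (0.5, 1.5):
    predicted = 0.0 if abs(delta) <= eps else np.sqrt(delta**2 - eps**2)
    print("delta =", delta, " measured drift:", round(drift_frequency(delta, eps), 3),
          " predicted:", round(predicted, 3))
```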
when describing a piece of cortex or a central pattern generator circuit with a set of oscillators , the biological realism of the model typically resides in the phase interaction function .the simplest example is , which when combined with a choice of global coupling defines the well known kuramoto model .however , to model realistic neural networks one should calculate ( [ h ] ) directly , using knowledge of the single neuron iprc and the form of interaction . as an example consider synaptic coupling , described in [ sec : coupling ] , that can be written in the form , and a single neuron model for which the iprc in the voltage variable is given by ( say experimentally or from numerical investigation ) . in this case instead we were interested in diffusive ( gap - junction ) coupling then we would have { { \rm d}}s .\label{hgap } \nonumber\ ] ] for the hh model is known to have a shape like for a spike centred on the origin ( see fig . [fig : prc ] ) . making the further choice that then ( [ hsynapse ] ) can be evaluated as \sin ( \psi)-2/\alpha \cos(\psi)}{2 \pi [ 1+(1/\alpha)^2]^2}. \label{halphahh}\ ] ] in the particular case of two oscillators with reciprocal coupling and then and we define .a phase - locked solution satisfies constant phase difference that is a zero of the ( odd ) function .\ ] ] a given phase - locked state is then stable provided that . note that by symmetry both the in - phase ( ) and anti - phase ( ) states are guaranteed to exist . for the form of phase interaction function given by , the stability of the synchronous solution is governed by the sign of : \right \ } .\nonumber\ ] ] thus for inhibitory coupling ( ) synchronisation will occur if , namely when the synapse is _ slow _ ( ) .it is also a simple matter to show that the anti - synchronous solution ( ) is stable for a sufficiently _ fast _ synapse ( ) .it is also possible to develop a general theory for the existence and stability of phase - locked states in larger networks than just a pair .now suppose we have a general population of coupled phase oscillators described by phases . for a particular continuous choice of phases for the trajectory one can define the _ frequency _ of the oscillator as .\ ] ] this limit will converge under fairly weak assumptions on the dynamics , though it may vary for different attractors in the same system , for different oscillators and in some cases it may vary even for different trajectories within the same attractor .we say two oscillators and are _ phase locked _ with ratio for with no common factors of and , if there is an such that for all .the oscillators are _ frequency locked _ with ratio if if we say they are simply phase ( or frequency locked ) without explicit mention of the ratio , we are using the convention that they are phase ( or frequency ) locked .the definition of means that if two oscillators are phase locked then they are frequency locked .the converse is not necessarily the case : two oscillators may be frequency locked but not phase locked if the phase difference grows sublinearly with . for the special case globally coupled averaged networks ( for the system ( [ phasenetwork ] ) ) is equivariant . 
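the construction of the phase interaction function from an iprc can also be done numerically, which is often the only option when the iprc is known from experiment or simulation. the sketch below assumes Z(theta) = sin(theta) and a periodised alpha-function pulse; the normalisation and the sign and argument conventions are illustrative and differ between references, so only the qualitative conclusions (existence of the in-phase and anti-phase pair states and the sign of H'(0)) should be read off.

```python
import numpy as np

# phase interaction function from an assumed iPRC and synaptic pulse
alpha = 2.0
theta = np.linspace(0.0, 2*np.pi, 4001)
Z = np.sin(theta)                            # assumed iPRC, spike centred on the origin

def pulse(t):                                # periodised alpha function
    t = np.mod(t, 2*np.pi)
    return alpha**2*t*np.exp(-alpha*t)

def H(psi):
    return np.trapz(Z*pulse(theta + psi), theta)/(2*np.pi)

psis = np.linspace(0.0, 2*np.pi, 401)
Hvals = np.array([H(p) for p in psis])
Hp = np.gradient(Hvals, psis[1] - psis[0])
# for a reciprocally coupled pair the phase difference obeys chi' = eps*[H(-chi) - H(chi)],
# so chi = 0 and chi = pi are always locked states; the standard criterion is that
# synchrony (chi = 0) is stable when eps*H'(0) > 0
print("H'(0)  =", round(Hp[0], 4))
print("H'(pi) =", round(Hp[len(psis)//2], 4))
```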
by topological arguments ,maximally symmetric solutions describing synchronous , splay , and a variety of cluster states are expected to exist generically .the system ( [ phasenetwork ] ) with global coupling is in itself an interesting subject of study in that it is of arbitrarily high dimension but is effectively determined by the single function that is computable from a single pair of oscillators .the system ( and variants thereof ) have been productively studied by thousands of papers since the seminal work of kuramoto .the collective dynamics of phase oscillators have been investigated for a range of regular network structures including linear arrays and rings with uni- or bi - directional coupling e.g. , and hierarchical networks . in some casesthe systems can be usefully investigated in terms of permutation symmetries of ( [ phasenetwork ] ) with global coupling , for example or for uni- or bi - directionally coupled rings .in other cases a variety of approaches have been developed adapted to particular structures though these have not in all cases been specifically applied to oscillators ; some of these approaches are discussed in [ sec : networks ] we recall that the form of the coupling in ( [ phasenetwork ] ) is special in the sense that it assumes the interactions between two oscillators are independent of any third - pairwise coupling .if there are degeneracies such as which can appear when some of the fourier components of are zero , this can lead to degeneracies in the dynamics .for example , while ( * ? ?* theorem 7.1 ) shows that if satisfies ( [ eq : degenerate ] ) for some and is a multiple of then ( [ phasenetwork ] ) , with global coupling , will have -dimensional invariant tori in phase space that are foliated by neutrally stable periodic orbits .this degeneracy will disappear on including either non - pairwise coupling or introducing small but non - zero fourier components in the expansion of but as noted in this will typically be the case for the interaction of oscillators even if they are near a hopf bifurcation .we examine in more detail some of the phase locked states that can arise in weakly coupled networks of identical phase oscillators described by ( [ phasenetwork ] ) .we define a : phase - locked solution to be of the form , where is a constant phase and is the collective frequency of the coupled oscillators .substitution into the averaged system ( [ phasenetwork ] ) gives after choosing some reference oscillator , these equations determine the collective frequency and relative phases with the latter independent of .it is interesting to compare the weak coupling theory for phase - locked states with the analysis of lif networks from [ sec : ifnetworks ] .equation ( [ ifphases ] ) has an identical structure to that of equation ( [ omega ] ) ( for for all ) , so that the classification of solutions using group theoretic methods is the same in both situations .there are , however , a number of significant differences between phase - locking equations ( [ omega ] ) and ( [ ifphases ] ) .first , equation ( [ ifphases ] ) is exact , whereas equation ( [ omega ] ) is valid only to since it is derived under the assumption of weak coupling .second , the collective period of oscillations must be determined self - consistently in equation ( [ ifphases ] ) . 
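for the globally coupled averaged network the synchronous phase-locked state and its collective frequency can be checked directly by simulation. the sketch below assumes identical oscillators and an interaction H(psi) = sin(psi) + c (an illustrative choice with H'(0) > 0, so synchrony is expected to be attracting), and compares the measured collective frequency with the prediction Omega = omega + epsilon*H(0).

```python
import numpy as np
from scipy.integrate import solve_ivp

# globally coupled averaged network with an assumed interaction H(psi) = sin(psi) + c
N, omega, eps, c = 20, 1.0, 0.2, 0.5
rng = np.random.default_rng(2)

def rhs(t, th):
    diff = th[None, :] - th[:, None]                 # diff[i, j] = theta_j - theta_i
    return omega + (eps/N)*(np.sin(diff) + c).sum(axis=1)

sol = solve_ivp(rhs, (0.0, 400.0), rng.uniform(0, 2*np.pi, N), max_step=0.1)
th = sol.y[:, -1]
R = abs(np.exp(1j*th).mean())                        # kuramoto order parameter, ~1 at synchrony
Omega = rhs(0.0, th).mean()
print("order parameter:", round(R, 4), " collective frequency:", round(Omega, 4),
      " predicted omega + eps*H(0):", omega + eps*c)
```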
in order to analyse the local stability of a phase - locked solution , we linearise the system by setting and expand to first - order in : where and .one of the eigenvalues of the jacobian is always zero , and the corresponding eigenvector points in the direction of the flow , that is .the phase - locked solution will be stable provided that all other eigenvalues have a negative real part .we note that the jacobian has the form of a graph - laplacian mixing both anatomy and dynamics , namely it is the graph - laplacian of the matrix with components .synchrony ( more precisely , exact phase synchrony ) is where for some fixed frequency is a classic example of a phase - locked state .substitution into ( [ phasenetwork ] ) , describing a network of identical oscillators , shows that must satisfy the condition one way for this to be true for all is if , which is the case say for or for diffusive coupling , which is linear in the difference between two state variables so that . the existence of synchronous solutions is also guaranteed if is independent of .this would be the case for global coupling where , so that the system has permutation symmetry .if the synchronous solution exists then the jacobian is given by where is the graph - laplacian with components . .[ cocomac],width=226 ] we note that has one zero eigenvalue , with eigenvector . hence if all the other eigenvalues of lie on one side of the imaginary axis then stability is solely determined by the sign of . in fig .[ cocomac ] we show the eigenvalues of the graph - laplacian of the anatomical network structure of the macaque monkey brain , as determined from the cocomac database . hereall the eigenvalues lie to the left of the imaginary axis so that the stability of the synchronous solution ( should it exist ) is solely determined by the sign of . for global couplingwe have that , and the ( degenerate ) eigenvalue is .hence the synchronous solution will be stable provided .another example of a phase - locked state is the purely asynchronous solution whereby all phases are uniformly distributed around the unit circle .this is sometimes referred to as a _ splay state _ or _ splay - phase state _ and can be written with . like the synchronous solution it will be present but not necessarily stable in networks with global coupling , with an emergent frequency that depends on : in this case the jacobian takes the form , \nonumber\ ] ] where and .hence the eigenvalues are given by /n ] .it is a nontrivial problem to discover which of these subspaces contain periodic solutions .note that in - phase corresponds to , while splay phase corresponds to , .the stability of several classes of these solutions can be computed in terms of properties of ; see for example [ sec : synchrony ] and [ sec : asynchrony ] and for other classes of solution .bifurcation properties of the globally coupled oscillator system ( [ phasenetwork ] ) on a varying parameter that affects the coupling are surprisingly complicated because of the symmetries present in the system ; see [ subsec : symmetry ] .in particular , the high multiplicity of the eigenvalues for loss of stability of the synchronous solution means : * path following numerical bifurcation programmes such as auto or xppaut need to be done with great care when applying to problems with identical oscillators - these typically will not be able to find all solutions branching from one that loses stability . 
* a large number of branches with a range of symmetriesmay generically be involved in the bifurcation ; indeed , all 2-cluster states .* local bifurcations may have global bifurcation consequences owing to the presence of connections that are facilitated by the nontrivial topology of the torus .* branches of degenerate attractors such as heteroclinic attractors may appear at such bifurcations for oscillators .et al_. consider the system ( [ phasenetwork ] ) with global coupling and phase interaction function of the form for fixed parameters ; detailed bifurcation scenarios in the cases are shown in . as an example , figure [ fig : fouroscbifs ] shows regions on stability of synchrony , splay phase solutions and robust heteroclinic attractors as discussed later in [ sec : heteroclinic ] .-oscillator system ( [ phasenetwork ] ) with phase interaction function ( [ eq : hmmcoupling ] ) and parameters in the region , .the narrow stripes show the region of stability of synchrony , while the wide stripes show the region of stability of the splay phase solution .the pink shaded area shows a region of existence of a robust heteroclinic network that is an attractor with in the checkerboard region ; the boundaries are described in .[ fig : fouroscbifs ] , width=384 ] the phase reduction method has been applied to a number of important biological systems , including the study of travelling waves in chains of weakly coupled oscillators that model processes such as the generation and control of rhythmic activity in central pattern generators ( cpgs ) underlying locomotion and peristalsis in vascular and intestinal smooth muscle .related phase models have been motivated by the observation that synchronisation and waves of excitation can occur during sensory processing in the cortex . in the former casethe focus has been on dynamics on a lattice and in the latter continuum models have been preferred .we now present examples of both these types of model , focusing on _ phase wave _ solutions .the lamprey is an eel - like vertebrate which swims by generating travelling waves of neural activity that pass down its spinal cord .the spinal cord contains about segments , each of which is a simple half - center neural circuit capable of generating alternating contraction and relaxation of the body muscles on either side of body during swimming . in a seminal series of papers , ermentrout and kopell carried out a detailed study of the dynamics of a chain of weakly coupled limit cycle oscillators , motivated by the known physiology of the lamprey spinal cord .they considered phase - oscillators arranged on a chain with nearest - neighbour anisotropic interactions and identified a travelling wave as a phase - locked state with a constant phase difference between adjacent segments .the intersegmental phase differences are defined as .if then the wave travels from head to tail whilst for the wave travels from the tail to the head .phase oscillators with . ,width=432 ] for a chain we set to obtain where . 
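a travelling wave generated by a frequency gradient is easy to reproduce numerically. the sketch below integrates a chain with isotropic nearest-neighbour coupling H(psi) = a*sin(psi) and a linear gradient of natural frequencies; the coupling function, chain length and gradient are illustrative assumptions chosen inside the locking range, so after a transient all oscillators share one frequency and the intersegmental phase differences are fixed.

```python
import numpy as np
from scipy.integrate import solve_ivp

# chain of phase oscillators with a frequency gradient; illustrative parameters
N, a, g = 10, 1.0, 0.2
omega = 1.0 + g*np.arange(N)/(N - 1)

def rhs(t, th):
    dth = omega.copy()
    dth[:-1] += a*np.sin(th[1:] - th[:-1])   # coupling to the neighbour ahead
    dth[1:] += a*np.sin(th[:-1] - th[1:])    # coupling to the neighbour behind
    return dth

sol = solve_ivp(rhs, (0.0, 500.0), np.zeros(N), max_step=0.1)
th = sol.y[:, -1]
freqs = rhs(0.0, th)
print("frequency spread after locking:", round(freqs.max() - freqs.min(), 6))
print("intersegmental phase differences:",
      np.round(np.mod(th[1:] - th[:-1] + np.pi, 2*np.pi) - np.pi, 3))
```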
pairwise subtraction and substitution of leads to an dimensional system for the phase differences + w_- [ h ( -\varphi_i)-h ( -\varphi_{i-1 } ) ] , \nonumber\ ] ] for , with boundary conditions , where .there are at least two different mechanisms that can generate travelling wave solutions .the first is based on the presence of a gradient of frequencies along the chain , that is , has the same sign for all , with the wave propagating from the high frequency region to the low frequency region .this can be established explicitly in the case of an isotropic , odd interaction function , and , where we have the fixed points satisfy the matrix equation , where , , and is a tridiagonal matrix with elements , .for the sake of illustration suppose that . then a solution will exist if every component of lies between .let . if then for each there are two distinct solutions in the interval ] and generate a fokker - planck equation for .the last three terms in ( [ fp ] ) vanish after integration due to the boundary conditions , so that we are left with we now expand about to give and , where is identified as the infinitesimal phase response . in the limit of small where we note that , for an arbitrary function , making use of the above gives the fokker - planck equation as + d { \frac{\partial ^2[z^2p]}{\partial \vartheta^2 } } , \label{fpphase}\ ] ] where .hence , the corresponding it equation is + z(\vartheta ) \xi(t ) , \label{itophase}\ ] ] while the stratonovich version is equations ( [ itophase ] ) and ( [ phasestrat ] ) are the stochastic phase oscillator descriptions for a limit cycle driven by weak white noise .these make it clear that naively adding noise to the phase description misses not only a multiplication by the iprc but also the addition of a further term that contains information about the _ amplitude _ response of the underlying limit cycle oscillator .we are now in a position to calculate the steady state probability distribution and use this to calculate the moments of the phase dynamics .consider ( [ fpphase ] ) with the boundary condition , and set .adopting a fourier representation for , and as , , , allows us to obtain a set of equations for the unknown amplitudes as for some constant .for we have that , and after enforcing normalisation we may set . for small we may then substitute into ( [ amplitudes ] ) and work to next order in to obtain an approximation for the remaining amplitudes , , in the form using this we may reconstruct the distribution , for small d , from ( [ fourier ] ) as the mean frequency of the oscillatoris defined by .this can be calculated by replacing the time average with the ensemble average . for an arbitrary -periodic function we set . using ( [ p ] ) and ( [ itophase ] ) we obtain where we have used the fact that .we may also calculate the phase - diffusion as \left [ \frac{{{\rm d}}}{{{\rm d}}t } { \vartheta}(t ) - \left \langle \frac{{{\rm d}}}{{{\rm d}}t } { \vartheta } \right \rangle \right ] \right \rangle { { \rm d}}\tau \nonumber \\ & = \frac{d}{\pi } \int_0^{2 \pi } z^2(\vartheta ) { { \rm d}}\vartheta + o(d^2 ) , \nonumber\end{aligned}\ ] ] where we use the fact that and .this recovers a well known result of kuramoto . a recent paper by teramae _et al_. 
shows that when one considers noise described by an ornstein - uhlenbeck ( ou ) process with a finite correlation time then this can interact with the attraction time - scale of the limit cycle and give fundamentally different results when compared to gaussian white noise ( which has a zero correlation time ) .this observation has also been independently made in .both approaches assume weak noise , though makes no assumptions about relative time - scales , and is thus a slightly more general approach than that of .related work by goldobin _et al_. for noise with zero - mean and prescribed auto - correlation function , yields the reduced stratonovich phase equation where where is the average rate of attraction to the limit cycle .note that for , and ( [ phasegeneral ] ) reduces to ( [ phasestrat ] ) as expected .to lowest order in the noise strength the steady state probability distribution will simply be .therefore to lowest noise order the mean frequency is determined from an ensemble average as where the last term comes from using the it form of ( [ phasegeneral ] ) and the subscript notation is defined as in ( [ p ] ) .the phase - diffusion coefficient at lowest noise order is given by let us now consider the example of ou noise so that . furthermore let us take the simultaneous limit ( zero correlation timescale ) and ( infinitely fast attraction ) , such that the ratio is constant .in this case we have from ( [ ytilde ] ) that hence , when the correlation time of the noise is much smaller than the decay time constant and we recover the result for white gaussian noise . in the other extreme when , where the amplitude of the limit cycle decays much faster than the correlation time of the noise , then vanishes and the reduced phase equation is simply , as would be obtained using the standard phase reduction technique without paying attention to the stochastic nature of the perturbation .the self - organisation of large networks of coupled neurons into macroscopic coherent states , such as observed in phase - locking , has inspired a search for equivalent low - dimensional dynamical descriptions .however , the mathematical step from microscopic to macroscopic dynamics has proved elusive in all but a few special cases .for example , neural mass models of the type described in [ subsec : neuralmass ] only track mean activity levels and not the higher order correlations of an underlying spiking model . 
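before moving on, the effect of weak white noise on a single phase oscillator can be illustrated with a direct euler-maruyama simulation. the sketch below is deliberately minimal: it assumes an iprc Z(theta) = sin(theta), writes the phase sde as d(theta) = omega*dt + sqrt(2D)*Z(theta)*dW, and drops the amplitude-response and drift-correction terms discussed above, so it only checks the leading-order diffusion estimate (D/pi) times the integral of Z^2.

```python
import numpy as np

# euler-maruyama for a noisy phase oscillator with an assumed iPRC Z = sin
rng = np.random.default_rng(1)
omega, D, dt, T, trials = 1.0, 0.01, 0.01, 200.0, 2000
steps = int(T/dt)

theta = np.zeros(trials)
for _ in range(steps):
    dW = np.sqrt(dt)*rng.standard_normal(trials)
    theta += omega*dt + np.sqrt(2.0*D)*np.sin(theta)*dW

print("mean frequency:", round(theta.mean()/T, 4), " (omega =", omega, ")")
print("Var[theta]/t  :", round(theta.var()/T, 4))
print("(D/pi)*int Z^2 dtheta =", D)          # equals D for Z = sin, the weak-noise prediction
```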
only in the thermodynamic limit of a large number of neurons firing asynchronously ( producing null correlations )are such rate models expected to provide a reduction of the microscopic dynamics .moreover , even here the link from spike to rate is often phenomenological rather than rigorous .unfortunately only in some rare instances has it been possible to analyse spiking networks directly ( usually under some restrictive assumption such as global coupling ) as in the spike - density approach , which makes heavy use of the numerical solution of coupled pdes .recently however , exact results for globally pulse - coupled oscillators described by the winfree model have been obtained by paz and montbri .this makes use of the ott - antonsen ( oa ) ansatz , which was originally used to find solutions on a reduced invariant manifold of the kuramoto model .the major difference between the two phase - oscillator models being that the former has interactions described by a phase product structure and the latter a phase difference structure .the winfree model is described in [ sec : weakcoupling ] as a model for weakly globally pulse - coupled biological oscillators , and can support incoherence , frequency locking , and oscillator death when and .we note however that the same model is _ exact _ when describing nonlinear if models described by a single voltage equation , and that in this case we do not have to restrict attention to weak - coupling . indeed the oa ansatz has proved equally successful in describing both the winfree network with a sinusoidal prc and a network of qif neurons .this is perhaps not surprising since the prc of a qif neutron can be computed using ( [ rif ] ) , and for the case described by ( [ eq : one ] ) with and and infinite threshold and reset then with for ( which is the shape expected for an oscillator near a snic bifurcation ) .we shall now focus on this choice of prc and a pulsatile coupling that we write in the form where we have introduced a convenient fourier representation for the periodic function .we now consider the large limit in ( [ winfree ] ) and let be the fraction of oscillators with phases between and and natural frequency at time .the dynamics of the density is governed by the continuity equation ( expressing the conservation of oscillators ) , \label{continuity}\ ] ] where the mean - field drive is .boundary conditions are periodic in the probability flux , namely .a further reduction in dimensionality is obtained for the choice that the distribution of frequencies is described by a lorentzian with for fixed and ( controlling the width and mean of the distribution respectively ) , which has simple poles at .a generalised set of order parameters is defined as the integration over can be done using a contour in the lower half complex plane so that , where we have introduced the inner product .the oa ansatz assumes that the density can be written in a restricted fourier representation as where cc stands for complex conjugate .substitution into the continuity equation ( [ continuity ] ) and balancing terms in shows that must obey moreover , using the inner product structure of we easily see that ^m$ ] .thus the kuramoto order parameter is governed by ( [ alpha ] ) with yielding : .\label{rpsi}\ ] ] to calculate the mean - field drive we note that it can be written as hence , we have explicitly that with the planar system of equations defined by ( [ rpsi ] ) and ( [ h ] ) can be readily analysed using numerical bifurcation analysis .we note that the oa 
density ( [ oa ] ) can be written in the succinct form . \nonumber\ ] ] the form of this equation suggests recasting the density in terms of new variables and . in this casewe find that has a lorentzian shape given explicitly by with , where we have set and re - scaled by a factor of .using the continuity equation for and then integrating over gives the evolution of as , \qquad v = \int_\infty^\infty { { \rm d}}v \rho(v|\omega , t ) v , \label{w}\ ] ] where we identify as the average of .the firing probabilities for arbitrary cells at time are equal to the passage rate of the probability density through the spike phase , which gives the population firing rate as thus we may use ( [ w ] ) to determine the evolution of the coupled rate and average network activity by integrating over the distribution of parameters .the evolution of is then found as where we have used the result that .exploiting the fact that the order parameters in the ` ' and ` ' descriptions of macroscopic dynamics are related by a conformal mapping , namely , we may explore both the firing rate and degree of synchrony of the network .thus ( [ mfoa ] ) is a mean field reduction of a population of spiking qif neurons , and unlike a phenomenological neural mass model it is able to track a measure of within population synchrony .the system ( [ mfoa ] ) is capable of supporting bistable fixed point behaviour as shown in fig .[ fig : roxin ] , where we also plot the basin boundary ( stable manifold of a saddle ) . 'dynamics showing the rate and average plane with three fixed points for and . for this parameterset the system is bistable and the stable ( blue ) and unstable ( green ) manifolds of the saddle are plotted .the remaining curves are nullclines .[ fig : roxin ] , width=288 ] the oa ansatz has also proved remarkably useful in understanding non - trivial solutions such as chimera states ( where a sub - population of oscillators synchronises in an otherwise incoherent sea ) .phase or cluster synchronised states in systems of identical coupled oscillators have distinct limitations as descriptions of neural systems where not just phase but also frequency clearly play a part in the processing , computation and output of information .indeed , one might expect that for any coupled oscillator system that is homogeneous ( in the sense that any oscillators can be essentially replaced by any other by a suitable permutation of the oscillators ) , the only possible dynamical states are homogeneous in the sense that the oscillators behave in either a coherent or an incoherent way .this expectation however is not justified - there can be many dynamical states that can not be easily classified as coherent or incoherent , but that seem to have a mixture of coherent and incoherent regions .such states have been given the name `` chimera state '' by abrams and strogatz and have been the subject of intensive research over the past five years . 
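returning briefly to the 'rate and average voltage' description derived above, the planar mean-field system can be explored directly. the sketch below uses the standard firing-rate form of the oa-reduced globally coupled qif network; the parameter values (width 1, mean drive -5, coupling 15) are assumed ones known to give bistability for this reduction and are not taken from the text or its figure.

```python
import numpy as np
from scipy.integrate import solve_ivp

# OA-reduced mean field of a globally coupled QIF network (firing-rate form);
# parameters are assumed values in the bistable regime
delta, eta_bar, J = 1.0, -5.0, 15.0

def mf(t, y):
    r, v = y
    return [delta/np.pi + 2.0*r*v,
            v**2 + eta_bar + J*r - (np.pi*r)**2]

for r0, v0 in [(0.1, -3.0), (3.0, 0.0)]:      # two initial conditions on either side of the saddle
    sol = solve_ivp(mf, (0.0, 80.0), [r0, v0], max_step=0.01)
    print("start (r, v) =", (r0, v0), " ->  final:", np.round(sol.y[:, -1], 3))
```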
for reviews of chimera state dynamicswe refer the reader to .kuramoto and battogtokh investigated continuum systems of oscillators of the form where represent phases at locations , the kernel represents a non - local coupling and are constants .interestingly this model is precisely in the form presented in [ sec : waves ] as equation ( [ phasecontinuum ] ) for an oscillatory model of cortex , although here there are no space - dependent delays and the interaction function is .kuramoto and battogtokh found for a range of parameters near , and carefully selected initial conditions , that the oscillators can split into two regions in , one region which is frequency synchronised ( or coherent ) while the other region shows a nontrivial dependence of frequency on location .an example of a chimera state is shown in fig .[ fig : chimera ] . ) in a system of length using numerical grid points and periodic boundary conditions . here , , and .[ fig : chimera ] , width=302 ] note that a discretisation of ( [ eq : continuumnonlocal ] ) to a finite set of coupled oscillators is where represents the phase at location and the coupling matrix is the discretised interaction kernel ( assuming a domain of length 1 ) . using different kernels , and an approximation for small , abrams and strogatz identified similar behaviour and discussed a limiting case of parameters such that the continuum system provably has chimera solutions .the oa reduction discussed in [ sec : ott - antonsen ] allows an exact reduction of oscillator networks of this form and in the continuum limit this can give a solvable low - order system whose solutions include a variety of chimera states .it is useful to note that when , pure cosine coupling results in an integrable hamiltonian system , such that disordered initial states will remain disordered .thus determines a balance between spontaneous order and permanent disorder .however , it seems that chimera states are much more `` slippery '' in finite oscillator systems than in the continuum limit . 
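a finite-oscillator realisation of the nonlocally coupled ring is easy to simulate, and with a suitably prepared initial condition it typically settles, at least transiently, into a chimera-like profile. in the sketch below the exponential kernel, the phase lag close to pi/2 and the bump-shaped random initial data are all illustrative choices near the parameter range reported by kuramoto and battogtokh, so the printed local order parameter should show a coherent patch coexisting with an incoherent one, though this is not guaranteed for every realisation.

```python
import numpy as np

# nonlocally coupled ring: theta_i' = omega - sum_j G_ij sin(theta_i - theta_j + alpha)
N, alpha_lag, kappa, omega = 256, 1.46, 4.0, 0.0
x = np.arange(N)/N
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, 1.0 - d)                            # distance on the ring
G = np.exp(-kappa*d)
G /= G.sum(axis=1, keepdims=True)                     # row-normalised coupling kernel

rng = np.random.default_rng(3)
theta = 6.0*rng.uniform(-0.5, 0.5, N)*np.exp(-30.0*(x - 0.5)**2)   # prepared initial data

dt = 0.025
for _ in range(4000):
    dtheta = omega - (G*np.sin(theta[:, None] - theta[None, :] + alpha_lag)).sum(axis=1)
    theta += dt*dtheta

R_local = np.abs(G @ np.exp(1j*theta))                # local order parameter along the ring
print("local coherence: min %.2f, max %.2f" % (R_local.min(), R_local.max()))
```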
in particular , wolfrum and omelchenko note that for finite approximations of the ring ( [ eq : continuumnonlocal ] ) by oscillators , with a mixture of local and nearest -neighbour coupling corresponding to ( [ eq : discretenonlocal ] ) with a particular choice of coupling matrix , chimera states apparently only exist as transients .however , the lifetime of the typical transient apparently grows exponentially with .thus , at least for some systems of the form ( [ eq : discretenonlocal ] ) , chimeras appear to be a type of chaotic saddle .this corresponds to the fact that the boundaries between the regions of coherent and incoherent oscillation fluctuate apparently randomly over a long timescale .these fluctuations lead to wandering of the incoherent region as well as change in size of the region .eventually these fluctuations appear to result in typical collapse to either fully coherent or fully incoherent oscillation .although this appears to be the case for chimeras for ( [ eq : discretenonlocal ] ) , there are networks such as coupled groups of oscillators ; or two dimensional lattices where chimera attractors can appear .it is not clear what will cause a chimera to be transient or not , or indeed exactly what types of chimera - like states can appear in finite oscillator networks .a suggestion of is that robust neutrally stable chimeras may be due to the special type of single - harmonic phase interaction function used in ( [ eq : continuumnonlocal],[eq : discretenonlocal ] ) .more recent work includes investigations of chimeras ( or chimera - like states ) in chemical or mechanical oscillator networks ; chimeras in systems of coupled oscillators other than phase oscillators have been investigated in many papers ; for example in stuart - landau oscillators , winfree oscillators and models with inertia .other recent work includes discussion of feedback control to stabilise chimeras , investigations of chimeras with multiple patches of incoherence and multicluster and traveling chimera states . in a neural context chimerashave also been found in pulse - coupled lif networks , and hypothesised to underly coordinated oscillations in unihemispheric slow - wave sleep , whereby one brain hemisphere appears to be inactive while the other remains active .we briefly review a few examples where mathematical frameworks are being applied to neural modelling questions .these cover functional and structural connectivity in neuroimaging , central pattern generators ( cpgs ) and perceptual rivalry .there are many other applications we do not review , for example to deep brain stimulation protocols or to modelling of epileptic seizures where network structures play a key role .functional connectivity ( fc ) refers to the temporal synchronisation of neural activity in spatially remote areas .it is widely believed to be significant for the integrative processes in brain function .anatomical or structural connectivity ( sc ) , is widely believed to play an important role in determining the observed spatial patterns of fc .however , there is clearly a role to be played by the dynamics of the neural tissue . 
even in a globally connected networkwe would expect this to be the case , given our understanding of how synchronised solutions can lose stability for weak coupling .thus it becomes useful to study models of brain like systems built from neural mass models ( such as the jansen - rit model of [ subsec : neuralmass ] ) , and ascertain how the local behaviour of the oscillatory node dynamics can contribute to global patterns of activity . for simplicity , consider a network of globally coupled identical wilson - cowan neural mass models : for . here represents the activity in each of a pair of coupled neuronal population models , is a sigmoidal firing rate given by and represent external drives .the strength of connections within a local population is prescribed by the co - efficients , which we choose as and as in .for it is straight forward to analyse the dynamics of a local node and find the bifurcation diagram in the plane as shown in fig .[ fig : wc ] .moreover , for we may invoke weak coupling theory to describe the dynamics of the full network within the oscillatory regime bounded by the two hopf curves shown in fig .[ fig : wc ] left . from the theory of [ sec : synchrony ] we would expect the synchronous solution to be stable if .taking we can consider as a proxy for the robustness of synchrony .the numerical construction of this quantity , as in , predicts that there will be regions in the plane associated with a breakdown of fc ( where ) , as indicated by points a and b in fig .[ fig : wc ] .this highlights the role that local node dynamics can have on emergent network dynamics .moreover , we see that simply by tuning the local dynamics to be deeper within the existence region for oscillatory solutions we can , at least for this model , enhance the degree of fc . plane . herehb denotes hopf bifurcation and sn a saddle node of fixed - points bifurcation . at pointsa and c we find and at point b .a breakdown of fc ( loss of global synchrony ) within a globally coupled network is predicted at points a and c. [ fig : wc ] , height=240 ] it would be interesting to explore this simple model further for more realistic brain like connectivities , along the lines described in .moreover , given that this would preclude the existence of the synchronous state by default ( since we would neither have that or would be independent of ) then it would be opportune to explore the use recent ideas in to understand how the system could organise into a regime of remote synchronisation whereby pairs of nodes with the same network symmetry could synchronise .for related work on wilson - cowan networks with some small dynamic noise see , though here the authors construct a phase - oscillator network by linearising around an unstable fixed point , rather than use the notion of phase response .cpgs are ( real or notional ) neural subsystems that are implicated in the generation of spatio - temporal patterns of activity , in particular for driving the relatively autonomous activities such as as locomotion or for driving involuntary activities such as heartbeat , respiration or digestion .these systems are assumed to be behind the creation of the range of walking or running patterns ( gaits ) that appear in different animals .the analysis of phase - locking provides a basis for understanding the behaviour of many cpgs , and for a nice overview see the review articles by marder and bucher and hooper . 
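returning to the functional-connectivity discussion above, the idea can be probed with a very small numerical experiment: take a handful of identical two-population nodes, couple them weakly and globally through their excitatory activity, and ask whether near-synchronous initial data stay synchronised. everything in the sketch below is an illustrative assumption: the sigmoid, the local parameters (chosen only so that the single-node fixed point at (0.5, 0.5) is an unstable focus and the node oscillates) and the simple diffusive-like coupling, which is not the sigmoidal drive used in the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# n identical two-population (wilson-cowan type) nodes with weak global coupling via u
n_nodes, eps = 8, 0.05
c1, c2, c3, c4, P, Q = 16.0, 12.0, 15.0, 3.0, -2.0, -6.0

def f(x):
    return 1.0/(1.0 + np.exp(-x))

def rhs(t, y):
    u, v = y[:n_nodes], y[n_nodes:]
    coupling = eps*(u.mean() - u)              # assumed diffusive-like coupling
    du = -u + f(c1*u - c2*v + P) + coupling
    dv = -v + f(c3*u - c4*v + Q)
    return np.concatenate([du, dv])

rng = np.random.default_rng(4)
y0 = 0.5 + 0.1*rng.standard_normal(2*n_nodes)
sol = solve_ivp(rhs, (0.0, 300.0), y0, max_step=0.05)
spread = sol.y[:n_nodes, -1].max() - sol.y[:n_nodes, -1].min()
print("across-node spread of u at t = 300:", round(spread, 4))
# a small spread indicates the synchronous oscillation is attracting for these choices;
# an o(1) spread would signal the kind of breakdown of functional connectivity discussed above
```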
in some cases , such as the leech ( _ hirudo medicinalis _ ) heart or _caenorhabditis elegans _ locomotion , the neural circuitry is well studied . for more complex neural systems and in more general cases cpgsare still a powerful conceptual tool to construct notional minimal neural circuitry needed to undertake a simple task . in this notional sensethey have extensively been investigated to design control circuits for actuators for robots ; see for example the review .recent work in this area includes robots that can reproduce salamander walking and swimming patterns .since the control of motion of autonomous `` legged '' robots is still a very challenging problem in real - time control , one hope of this research is that nature s solutions ( for example , how to walk stably on two legs ) will help inspire robotic ways of doing this .cpgs are called upon to produce one or more rhythmic patterns of actuation ; in the particular problem of locomotion , a likely cpg is one that will produce the range of observed rhythms of muscle actuation , and ideally the observed transitions between then . for an early discussion of design principles for modelling cpgs ,see .this is an area of modelling where consideration of symmetries as in [ subsec : symmetry ] has been usefully applied to constrain the models .for example examine models for generating the gaits in a range of vertebrate animals , from those with two legs ( humans ) through those with four ( quadrupeds such as horses have a wide range of gaits - walk , trot , pace , canter , gallop - they may use ) or larger numbers of legs ( myriapods such as centipedes ) .insects make use of six legs for locomotion while other invertebrates such as centipedes and millipedes have a large number of legs that are to some extend independently actuated . 
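as a toy illustration of how a symmetric coupling architecture selects gait-like phase patterns, consider four phase oscillators on a unidirectional ring with interaction H(psi) = sin(psi + alpha); the interaction function and the value of alpha are illustrative assumptions. for this architecture a rotating-wave pattern with quarter-period lags (loosely analogous to a walk-like sequence in which each unit follows the previous one) is linearly stable whenever H'(pi/2) > 0, which the simulation below checks by perturbing that pattern and letting it relax.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Z_4 ring: theta_i' = omega + sin(theta_{i+1} - theta_i + alpha); illustrative parameters
N, omega, alpha = 4, 1.0, -1.0                 # H'(pi/2) = cos(pi/2 + alpha) > 0 for this alpha

def rhs(t, th):
    return omega + np.sin(np.roll(th, -1) - th + alpha)

wave = (np.pi/2)*np.arange(N)                  # quarter-lag rotating wave
rng = np.random.default_rng(5)
sol = solve_ivp(rhs, (0.0, 200.0), wave + 0.3*rng.standard_normal(N), max_step=0.05)
th = sol.y[:, -1]
lags = np.mod(np.roll(th, -1) - th, 2*np.pi)
print("phase lags around the ring:", np.round(lags, 3), " target pi/2 =", round(np.pi/2, 3))
```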
as an example , consider a schematic cpg of oscillators for animals with legs , as shown in fig .[ fig : golcpg](a ) .the authors use symmetry arguments and theorem [ thm : honk ] to draw a number of model - independent conclusions from the cpg structure .coupled cells that is used to model gait patterns in animals with legs and ( b ) a three cell motif of bursters with varying coupling strengths , as considered in .,width=377 ] one can also view cpgs as a window into more fundamental problems of how small groups of neurons coordinate to produce a range of spatio - temporal patterns .in particular , it is interesting to see how the observable structure of the connections influences the range and type of dynamical patterns that can be produced .for example , consider a simple three - cell `` motif '' networks of bursters and classify a range of emergent spatio - temporal patterns in terms of the coupling parameters .detailed studies investigate properties such multistability and bifurcation of different patterns and the influence of inhomogeneities in the system .this is done by investigating return maps for the burst timings relative to each other .the approach mentioned in [ subsec : groupoid ] and outlined in seems to be a promising way of providing a context in which view cpgs where there is a constrained connection structure suggested by either the neurobiology or from conceptual arguments , but this structure is not purely related to symmetries of the network .for example , use that formalism to understand possible spatio - temporal patterns that arise in lattices or that relates synchrony properties of small motif networks to spectral properties of the adjacency matrix .many neural systems process information - they need to produce outputs that depend on inputs .if the system effectively has no internal degrees of freedom then this will give a functional relationship between output and input so that any temporal variation in the output corresponds to a temporal variation of the input .however , this is not the case for all but the simplest systems and often outputs can vary temporally unrelated to the input .a particularly important and well - studied system that is a model for autonomous temporal output is _ perceptual rivalry _, where conflicting information input to a neural system results , not in a rejection or merging of the information , but in an apparently random `` flipping '' between possible `` rival '' states ( or percepts ) of perception .this nontrivial temporal dynamics of the perception appears even in the absence of a temporally varying input .the best studied example of this type is _ binocular rivalry _ , where conflicting inputs are simultaneously made to each eye .it is widely reported by subjects that perception switches from one eye to the other , with a frequency that depends on a number of factors such as the contrast of the image .more general perceptual rivalry , often used in `` optical illusions '' such as ambiguous figures - the rubin vase , the necker cube - show similar behaviour with percepts shifting temporally between possible interpretations .various approaches have been made to construct nonlinear dynamical models of the generation of a temporal shifting between possible percepts such as competition models , bifurcation models , ones based on neural circuitry , or conceptual ones based on network structures or on heteroclinic attractors .as with any review we have had to leave out many topics that will be of interest to the reader .in particular 
we have confined ourselves to `` cell '' and `` system - level '' dynamics rather that `` sub - cellular '' behaviour of neurons .we briefly mention some other active areas of mathematical research relevant to the science of rhythmic neural networks .perhaps the most obvious area that we have not covered in any depth is that of single unit ( cell or population ) forcing , which itself is a rather natural starting point for gaining insights into network behaviour and how best to develop mathematical tools for understanding response . for a general perspective on mode - locked responses to periodicforcing see and . fora more recent discussion of the importance of mode - locking in auditory neuroscience see and in motor systems see .however , it is well to note that not much is known about nonlinear systems with three or more interacting frequencies , as opposed to periodically forced systems where the notions of farey tree and the devil s staircase have proven especially useful .we have also painted the notion of synchrony with a broad mathematical brush , and not discussed more subtle notions of envelope locking that may arise between coupled bursting neurons ( where the within burst patterns may desynchronise ) .this is especially relevant to studies of synchronised bursting and the emergence of chaotic phenomena .indeed , we have said very little about coupling between systems that are chaotic , such as described in , the emergence of chaos in networks or chaos in symmetric networks .the issue of chaos is also relevant to notions of reliability , where one is interested in the stability of spike trains against fluctuations .this has often been discussed in relation to stochastic oscillator forcing rather than those arising deterministically in a high dimensional setting .of course , given the sparsity of firing in cortex means that it may not even be appropriate to treat neurons as oscillators .however , some of the ideas developed for oscillators can be extended to excitable systems , as described in . as well as thisit is important to point out that neurons are not point processors , and have an extensive dendritic tree , which can also contribute significantly to emergent rhythms as described in , as well as couple strongly to glial cells .although the latter do not fire spikes , they do show oscillations of membrane potential . at the macroscopic level it is also important to acknowledge that the amplitude of different brain waves can also be significantly affected by neuromodulation ,say through release of norepinephrine , serotonin and acetylcholine , and the latter is also thought to be able to modulate the prc of a single neuron .this review has focused mainly on the embedding of weakly coupled oscillator theory within a slightly wider framework .this is useful in setting out some of the neuroscience driven challenges for the mathematical community in establishing inroads into a more general theory of coupled oscillators .heterogeneity is one issue that we have mainly side - stepped , and remember that the weakly coupled oscillator approach requires frequencies of individual oscillators to be close .this can have a strong effect on emergent network dynamics , and it is highly likely that even a theory with heterogeneous phase response curves will have little bearing on real networks . 
the _ equation - free _ coarse - graining approach may have merit in this regard , though is a numerically intensive technique .we suggest a good project for the future is to develop a theory of strongly coupled heterogeneous networks based upon the phase - amplitude coordinate system described in [ subsec : phase - amplitude ] , with the challenge to develop a reduced network description in terms of a set of phase - amplitude interaction functions , and an emphasis on understanding the new and generic phenomena associated with nontrivial amplitude dynamics ( such as clustered phase - amplitude chaos and multiple attractors ) . to achieve thisone might further tap into recent ideas for classifying emergent dynamics based upon the group of structural symmetries of the network .this can be computed as the group of automorphisms for the graph describing the network . for many real - world networks, this can be decomposed into direct and wreath products of symmetric groups .this would allow the use of tools from computational group theory to be used , and open up a way to classify the generic forms of behaviour that a given network may exhibit using the techniques of equivariant bifurcation theory .the hodgkin - huxley description of nerve tissue is completed with : } , & \alpha_h(v ) & = 0.07 \exp [ -0.05(v+65 ) ] , \nonumber \\\alpha_n(v ) & = \frac{0.01(v+55)}{1-\exp[-0.1(v+55 ) ] } , & \beta_m(v ) & = 4.0 \exp[-0.0556(v+65 ) ] , \nonumber \\\beta_h(v ) & = \frac{1}{1 + \exp[-0.1(v+35 ) ] } , & \beta_n(v ) & = 0.125 \exp[-0.0125(v+65 ) ] , \nonumber\end{aligned}\ ] ] and , , , , , and .( all potentials are measured in mv , all times in ms and all currents in per ) .we give a brief list of some of the abbreviations used within the review .dde : : delay differential equation if : : integrate and fire ( model for neural oscillator ) iprc : : infinitesimal phase response curve isi : : inter - spike interval fhn : : fitzhugh - nagumo equation ( model for neural oscillator ) hh : : hodgkin - huxley equation ( model for neural oscillator ) lif : : leaky integrate and fire ( model for neural oscillator ) ml : : morris - lecar equation ( model for neural oscillator ) msf : : master stability function ode : : ordinary differential equation pde : : partial differential equation prc : : phase response curve qif : : quadratic integrate and fire ( model for neural oscillator ) sde : : stochastic differential equation shc : : stable heteroclinic channel snic : : saddle - node on an invariant circle ( bifurcation ) wlc : : winnerless competitionthe authors declare that they have no competing interests .pa , sc and rn contributed equally .all authors read and approved the final manuscript .we would like to thank kyle wedgwood and ine byrne for useful comments made on draft versions of this manuscript .sc was supported by the european commission through the fp7 marie curie initial training network 289146 , nett : neural engineering transformative technologies .l f abbott and t b kepler .model neurons : from hodgkin huxley to hopfield . in luis garrido , editor , _ statistical mechanics of neural networks _ , number 368 in lecture notes in physics , pages 518 .springer - verlag , berlin heidelberg , 1990 .xerxes d. arsiwalla , riccardo zucca , alberto betella , enrique martinez , david dalmazzo , pedro omedas , gustavo deco , and paul f.m.j .network dynamics with brainx3 : a large - scale simulation of the human brain network with real - time interaction . , 9(2 ) , 2015 .o. benjamin , t.h.b .fitzgerald , p. 
ashwin , k. tsaneva - atanasova , f. chowdhury , m.p .richardson , and j.r .a phenomenological model of seizure initiation suggests network structure may explain seizure frequency in idiopathic generalised epilepsy ., 2:141 , 2012 .m breakspear , ja roberts , john r terry , s rodrigues , n mahant , and pa robinson . a unifying explanation of primary generalized seizures through nonlinear brain modeling and bifurcation analysis ., 16:12961313 , 2006 .t g brown . on the nature of the fundamental activity of the nervous centres; together with an analysis of the conditioning of rhythmic activity in progression and a theory of the evolution of function in the nervous system . , 48:1846 , 1914 .fahmida a chowdhury , wessel woldman , thomas hb fitzgerald , robert dc elwes , lina nashef , john r terry , and mark p richardson .revealing a brain network endophenotype in families with idiopathic generalised epilepsy ., 9:e110136 , 2014 .kiyoshi kotani , ikuhiro yamaguchi , yutaro ogawa , yasuhiko jimbo , hiroya nakao , and g. bard ermentrout .adjoint method provides phase response functions for delay - induced oscillations ., 109:044101 , 2012 .y a kuznetsov , v v levitin , and a r skovoroda .continuation of stationary solutions to evolution problems in content .technical report report am - r9611 , centrum voor wiskunde en informatica , amsterdam , the netherlands , 1996 .
the tools of weakly coupled phase oscillator theory have had a profound impact on the neuroscience community , providing insight into a variety of network behaviours ranging from central pattern generation to synchronisation , as well as predicting novel network states such as chimeras . however , there are many instances when this theory is expected to break down , say in the presence of strong coupling , or must be carefully interpreted , as in the presence of stochastic forcing . there are also surprises in the dynamical complexity of the attractors that can robustly appear - for example , heteroclinic network attractors . in this review we present a set of mathematical tools that are suitable for addressing the dynamics of oscillatory neural networks , broadening from a standard phase oscillator perspective to provide a practical framework for further successful applications of mathematics to understanding network dynamics in neuroscience .
: : - the entry angle from workpiece to the roller , also known as the attack angle [ ( fig. 4 ) : : - overall contact area [ mm ( section [ sec : application ] ) : : - planar projection of the contact area [ mm ( section [ sec : application ] ) : : - planar projection of the contact area [ mm ( section [ sec : application ] ) : : - planar projection of the contact area [ mm ( section [ sec : application ] ) : : - the trailing angle from the workpiece to the roller , also known as exit angle or planishing angle [ ( fig .4 ) : : - distance between analytical surface and experimental surface nearest neighbour points ( section [ sec : validation ] ) : : - distance between analytical surface and experimental surface interpolant ( section [ sec : validation ] ) : : - the distance between the center of the mandrel and the center of the roller [ mm](fig . 4 , ) : : - axial feed rate of the roller down the face of the cylinder , along the direction [ mm / min ] mse : : - mean square error ( eq . [ eq : mse ] ) : : - the mandrel rate of rotation [ revolutions / min ] : : - the roller path pitch or distance traveled axially by the roller in one revolution [ mm ] ( ) : : - roller nose radius [ mm ] ( fig .4 ) : : - numeric resolution of the solution ( section [ sec : surdef ] ) : : - initial workpiece radius ( ) : : - the mandrel radius[ mm ] ( fig . 4 ) : : - the roller radius excluding the radius of the nose [ mm ] ( fig .4 ) : : - intermediate set of radial quantities used to find and ( eq . [ eq : s ] ) : : - the angle of contact between the roller and workpiece [ rad](fig . 2 , eq . [ eq : tf ] ) : : - intermediate value of used for the iterative solution of the contact area [ rad ] ( eq .[ eq : tf ] ) : : - maximum angular limit of the solution ( eq . [ eq : thetamax ] ) : : - angular coordinates used to define boundary surfaces ( eq . [ eq : tcoords ] ) : : - starting material thickness [ mm ] ( fig .4 ) : : - the final material thickness [ mm ] ( fig .4 ) : : - coordinates lying on the instantaneous roller position ( eq . [ eq : x_i ] ) : : - coordinate used for intersection conditioning ( eq . [ eq : xi ] ) : : - coordinate used for intersection conditioning ( eq . [ eq : xlower ] ) : : - coordinate lying on the cylinder defined by ( eq . [ eq : x_m ] ) : : - maximum limit in the direction of the solution ( eq . [ eq : xmax ] ) : : - coordinates lying on the previous roller path ( eq . [ eq : xp ] ) : : - coordinates within the roller / workpiece contact area ( eq . [ eq : xs ] ) : : - coordinate used for intersection conditioning ( eq .[ eq : xupper ] ) : : - coordinates lying on the instantaneous roller position ( eq . [ eq : yi ] ) : : - coordinates lying on the cylinder defined by ( eq .[ eq : y_m ] ) : : - maximum limit in the direction of the solution ( eq . [ eq : ymax ] ) : : - coordinates lying on the previous roller path ( eq . [ eq : yp ] ) : : - coordinates used to define boundary surfaces ( eq . [ eq : ycoords ] ) : : - coordinates within the roller / workpiece contact area ( eq . [ eq : ys ] ) : : - axial limits of the workpiece / roller contact area , coordinate of the endpoint of contour 1 and starting point of contour 2 ( eq . [ eq : lowerlimflat1 ] and [ eq : lowerlimitflat2 ] ) : : - axial limits of the workpiece / roller contact area , coordinate of the endpoint of contour 1 and starting point of contour 3 ( eq . [ eq : za ] to [ eq : zd ] ) : : - coordinates lying on the instantaneous roller position ( eq . 
[ eq : zi ] ) : : - coordinate used for intersection conditioning ( eq . [ eq : zi ] ) : : - coordinate used for intersection conditioning ( eq . [ eq : zlower ] ) : : - coordinates lying on the cylinder defined by ( eq . [ eq : zm ] ) : : - coordinates lying on the previous roller path ( eq . [ eq : zp ] ) : : - coordinates used to define boundary surfaces ( eq . [ eq : zcoords ] ) : : - coordinates within the roller / workpiece contact area ( eq . [ eq : zs ] ) : : - coordinate used for intersection conditioning ( eq . [ eq : zupper ] )to determine the energy required to form a component , the size and orientation of the tooling interface on the workpiece is necessary . while purely analytical models describing this contact are preferable , they are usually difficult to attain for complex metal forming processes . in this study ,an analytical approach is presented to model the tooling / workpiece contact area in an application of rotary forming . while the present work focuses on an implementation for flow forming ,the applied technique can be applied to other variants of rotary forming operations such as metal spinning , shear forming , thread rolling and crankshaft fillet rolling .flow forming , a variant of metal spinning , is a process used to fabricate rotationally symmetrical parts from ductile materials , after . during flow forming ,the workpiece is clamped to a rotating mandrel and pressed into contact with the mandrel by rollers .the rollers induce high levels of plasticity in the workpiece causing it to undergo both reduction in thickness and axial lengthening .since the rollers press on only a very small area of the overall workpiece at any given time , the deformation is highly localized between the roller and workpiece . to properly understand the distribution of this intense local plastic deformation it is essential to be able to calculate the roller / workpiece contact area from the geometric parameters that govern the flow forming process .in addition , the roller / workpiece contact area is critical to coupling other experimental findings , such as power consumption , frictional effects , force , stress and strain distributions through the workpiece back to geometric process parameters . in flow forming , the combined mandrel rotation and linear movement of the rollers induce contact on the workpiece along a helical path .this helical tool path , coupled with the curved profile of the rollers leads to a very complicated roller / workpiece contact area . in terms of related tool contact studies , an important analytical derivation of the workpiece contact in shear spinningwas completed by . however , in a comprehensive review of metal spinning processes , highlighted that the mechanics of flow forming are quite different than shear spinning .this is also true for the contact area formulation as there is little roller penetration into the workpiece and deformation proceeds according to the sine rule . 
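as a point of reference , the sine rule mentioned above for shear spinning simply ties the spun wall thickness to the starting blank thickness and the mandrel half - angle . a minimal sketch ( python ) , with illustrative parameter names of our own choosing rather than the paper's nomenclature :

import math

def shear_spinning_thickness(t0_mm, half_angle_deg):
    # sine rule for shear spinning: the spun wall thickness equals the starting
    # blank thickness times the sine of the mandrel half-angle
    return t0_mm * math.sin(math.radians(half_angle_deg))

# example: a 6 mm blank formed over a 30 degree half-angle mandrel gives 3 mm
print(shear_spinning_thickness(6.0, 30.0))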
in terms of flow forming specific research , investigations made by gur and tirosh ( 1982 ) , singhal et al .( 1995 ) , ma ( 1993 ) and jahazi and ebrahimi ( 2000 ) have proposed analytical models of this contact .gur and tirosh ( 1982 ) developed the formulation of a planar contact area in each of the primary rolling and extrusion deformation directions in backwards flow forming .singhal et al .( 1995 ) derived the contact area imposed by tooling in the flow forming of small diameter tubes where the assumption made was that material is assumed to be perfectly plastic , and the tools were assumed rigid .ma ( 1993 ) extended the work of gur and tirosh ( 1982 ) to derive a critical angle of attack and jahazi and ebrahimi ( 2000 ) extended the contact formulation made by gur and tirosh ( 1982 ) to investigate the mechanics in a specific application of flow forming .more recently , kemin et al .( 1996 ) , xu et al .( 2001 ) and hua et al .( 2005 ) have developed finite element ( fe ) models of single roller flow forming . in each of these studies, contact was modeled explicitly within each respective fe model . furthermore , with the exception of the work by and , all previous works have made assumptions concerning the roller / workpiece contact geometry that do not necessarily reflect the actual contact during flow forming .these assumptions include : 1 . idealized roller geometry ( i.e. no blending radii ) ( fig .1 ) ; 2 .the use of two - dimensional treatments that do not account for the three - dimensional aspects of the workpiece contact ; 3 . not considering the influence of prior forming steps ( i.e. roller path overlap ) on the instantaneous roller / workpiece contact area . the most successful technique for modelling the roller / workpiece contact area , and other facets of the flow forming process , has been through fe analyses . addressed items 1 and 2 listed above in their work to numerically calculate the roller / workpiece of a single roller flow forming .however , did not give the details of their calculation of the contact area , nor did they specifically address item 3 . has developed a thorough 3-d fe model that addresses all three items , but an fe approach is still limited to case - by - case application involving extensive pre - processing and explicit geometric modelling .an analytical solution provides a solution with significantly lower effort . in the present work, a generalized solution is developed for the roller / workpiece contact area during a single roller flow forming operation that accommodates items 1 to 3 . to accomplish this ,the following assumptions are made : 1 . the single roller flow forming process proceeds under steady state conditions . the final and starting thickness ,mandrel rotation and feed rate are constant .the deformation response of the workpiece is perfectly plastic .elastic effects are not considered .volume of the flow formed workpiece is conserved outside the tool interface .no material build - up occurs in front of the roller as the workpiece conforms completely with the rigid roller .during flow forming , the roller contacts the workpiece along a path having a constant pitch ( fig .1 ) . 
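to make the kinematics of this last assumption concrete , the helical contact path is fixed once the mandrel rotation rate and the axial feed rate are chosen , the pitch being the axial feed per mandrel revolution . a minimal sketch ( python ) with hypothetical parameter values , not taken from the experiments of the paper :

import math

def roller_path(feed_mm_per_min, rev_per_min, radius_mm, revolutions=3.0, n_points=200):
    # sample (x, y, z) points on the helical path traced on the workpiece surface;
    # the pitch is the axial advance of the roller per mandrel revolution
    pitch = feed_mm_per_min / rev_per_min
    pts = []
    for i in range(n_points):
        rev = revolutions * i / (n_points - 1)
        theta = 2.0 * math.pi * rev
        pts.append((radius_mm * math.cos(theta),
                    radius_mm * math.sin(theta),
                    pitch * rev))
    return pitch, pts

pitch, path = roller_path(feed_mm_per_min=120.0, rev_per_min=300.0, radius_mm=40.0)
print(pitch)  # 0.4 mm of axial travel per revolution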
the profile of the roller can be divided into three regions ; the entry region , the nose region and the exit region .these regions dictate the size and shape of the roller / workpiece contact area .the contact area is bounded by three contours : the tangential exit contour , the axial entry contour and the axial exit contour , labeled 1 - 3 respectively in fig .the contact area extends angularly from the tangential exit contour ( ) through to ( fig .2(b ) ) . [ fig:1 ]if the roller has an archetypal flow forming profile similar to that shown in fig .1 with distinct flat entry and exit regions and a blending radius between the two that creates a nosed roller , the final contact area is dependent on six surfaces ( fig .3 ) . contour 1 , and the starting points of contours 2 and 3 ( fig .2 ) can be calculated directly as they lie exclusively on the -plane .the _ a priori _ -coordinates of the extents of contour 1 define the axial limits of the roller / workpiece contact area .once the a solution has been found for the starting and ending points of contour 1 ( by definition the starting points of contours 2 and 3 ) , the common end point of contours 2 and 3 is then solved using an iterative technique . [ fig:3 ] it is first necessary to calculate the axial limits of contact by determining the endpoints of contour 1 .contour 2 is a function of the instantaneous roller contact with the workpiece at pitch .contour 3 is a function of the instantaneous roller contact on the material as well as the tool contact on the workpiece one revolution of the mandrel beforehand , at .contour 1 exists solely on the plane and is bound by the points of intersection with contours 2 and 3 .contour 1 is both dependent on roller geometry and the roller path pitch , .there are four possible conditions describing the intersection of the current roller position with that of its position on the previous mandrel revolution ( fig .[ fig:4 ] corresponds to the instantaneous roller position and position corresponds to the roller at one mandrel revolution beforehand .the upper endpoint of contour 1 can occur within the nosed region of the roller on both the instantaneous position and the position on the previous mandrel revolution ( condition a ) .it can also occur at the intersection of the exit / entry profiles ( condition b ) , the nosed / entry profiles ( condition c ) or the exit / nosed profiles ( condition d ) of the instantaneous and the previous roller positions.,title="fig:",width=4 ] calculation of the location of the upper end point of contour 1 for the four conditions shown in fig .4 is accomplished through comparison of the endpoints of the roller nose profile on the plane . for comparison purposes ,the local coordinate system is moved on the axis from the global origin by ( fig .the and coordinates of the upper end point of the nosed region of contour 1 , and ( fig .5 ) : [ fig:5 ] for the lower and coordinates of the end point of the nose region of contour 1 , and ( fig .5 ) : the entry profile of the previous roller path and the instantaneous exit profile of the roller will occur at and . these are expressed as : the values of , , , , and can be compared to identify which contact condition shown in fig .4 applies .the conditions and the relationships that must be simultaneously satisfied are shown in table [ table : conditions ] ..upper axial limits of contact [ cols="^,^,^ " , ] the ofat analysis technique is limited as , by definition , it does not allow for simultaneous changes in multiple variables . 
for the given geometry , however , this analysis does display the following important observations : * in terms of the largest effect on the overall contact area , changing the material starting and final thicknesses and the pitch had the largest effect .this is also true for all of the area components , , and . in order of precedence ,the variables that had largest sensitivity on the overall area other than thicknesses and pitch were the radius of the mandrel , the attack angle , radius of the roller , with the nose radius having the least effect overall .* varying the the roller nose radius had the least effect on the contact area as well as the and components . *the rolling component , , followed the same trends as the overall area for changes in thicknesses , pitch , mandrel / roller radii and attack angle .this component decreased while the extrusion / drawing component and the overall area increased for larger roller nose radii .furthermore , , is more sensitive to the attack angle than the mandrel radius . * the largest effect on the component , or the drawing / extrusion part of the deformation saw the same precedence of variables as for the overall contact area .this component showed the same response to variable changes as the overall area . *the tangential deformation component , , is marginally more sensitive to the radius of the roller than the radius of the mandrel , and the roller nose radius has approximately the same sensitivity as the attack angle .this component also increased for larger values of pitch and mandrel radius , but decreased for larger roller radii and nose radii . remained unaffected by changes in attack angle .* the overall contact area increased with increased variable values in all cases except for the final thickness value and the attack angle .this decrease was a linear for the former and non - linear for the latter . *the effect of changing the starting thickness , final thickness and roller nose radius is a linear change for all area components while all others are non - linear .these findings are of practical importance to flow forming .if a worn roller is to be re - used after resurfacing , it may be necessary to modify the pitch in order to maintain the same forming geometry when the process was first commissioned .if a single set of rollers are to be used with different mandrels , it is also important from a process design standpoint so that the same forming zone geometry can be maintained . furthermore , knowing the sensitivity of each of the variables on the overall contact and therefore deformation mode also permits easier troubleshooting of existing processes .an analytical model of the roller / workpiece interface in flow forming has been developed such that it may predict the contact area .this model is applicable to all tooling geometries for both forward and backward flow forming processes . due to the general nature of the description of the geometry, the approach taken can be used for other rotary forming operations where a die or a roller is used to deform a cylindrical workpiece locally .this model has been compared to experimental data generated from physical modelling and shows excellent correspondence . 
specifically , the analytical model was found to describe the experimental surface to within 0.4 mm based on mean square error . an example of the application of the model has been demonstrated in the form of an ofat sensitivity analysis applied to the independent geometric variables determining the tooling interaction . the independent geometric variables examined were starting and final thicknesses ( , ) , forming pitch ( ) , mandrel radius ( ) , attack angle ( ) as well as roller and roller nose radii ( , ) . these variables were modified over a range of for the starting thickness and 150% for all others . specific findings showed that , on the basis of a unit change in the respective variables :
* had four times the effect on the change in overall area and components
* had 33% more of an effect
* had 50% less of an effect , with the exception of the tangential deformation component , which had 50% more of an effect
* , , and have less than a effect
* caused the least change : less than 7% change in area
the present work could be extended to study the multivariate effects on the contact area to fully account for the geometric changes during complicated forming processes . however , geometric factors are not the only parameters which govern the process . the main direction of future work is to link the geometric factors to other process factors such as workpiece material properties and tribological considerations to gain deeper insight into the overall process mechanics .
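the ofat procedure used above is straightforward to script : each geometric variable is scaled in turn about a baseline while the others are held fixed , and the resulting contact area is recorded . a minimal sketch ( python ) ; ` contact_area ` is a placeholder for the analytical model of the paper , not a reimplementation of it , and the baseline values are illustrative only :

def contact_area(params):
    # placeholder for the analytical contact-area model; takes a dict of the
    # geometric variables (thicknesses, pitch, radii, attack angle) and returns mm^2
    raise NotImplementedError

def ofat_sweep(baseline, factors=(0.75, 1.0, 1.25, 1.5)):
    # one-factor-at-a-time sweep: perturb each variable separately, holding the
    # others at their baseline values, and record the response
    results = {}
    for name in baseline:
        rows = []
        for f in factors:
            trial = dict(baseline)
            trial[name] = baseline[name] * f
            rows.append((f, contact_area(trial)))
        results[name] = rows
    return results

baseline = {"t0": 6.0, "tf": 4.0, "pitch": 0.4, "r_mandrel": 40.0,
            "r_roller": 80.0, "r_nose": 8.0, "attack_angle_deg": 20.0}
# sensitivities = ofat_sweep(baseline)   # run once contact_area is implemented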
flow forming involves complicated tooling / workpiece interactions . purely analytical models of the tool contact area are difficult to formulate , resulting in numerical approaches that are case - specific . provided are the details of an analytical model that describes the steady - state tooling / workpiece contact area allowing for easy modification of the dominant geometric variables . the assumptions made in formulating this analytical model are validated with experimental results attained from physical modelling . the analysis procedure can be extended to other rotary forming operations such as metal spinning , shear forming , thread rolling and crankshaft fillet rolling . flow forming , metal forming , physical modelling , contact interface , analytical model
network science has attracted much attention in recent years due to its interdisciplinary applications .many network results have been obtained by analyzing isolated networks , but most real - world networks do in fact interact with and depend on other networks .thus , in analogy to the ideal gas laws that are valid only in the limiting case that molecules do not interact , so the extensive results for the case of non - interacting networks hold only when it is justified to neglect the interactions between networks .recently several studies have addressed the resilience as well as other properties of interacting networks .a framework based on percolation theory has been developed to analyze the cascading failures caused by interdependencies between two networks . in interdependent networks ,when nodes in one network fail they usually cause the failure of dependent nodes in other networks , and this in turn can cause further damage to the first network and result in cascading failures , which could lead to abrupt collapse of the system .later on , two important generalizations of the basic model have been developed .because in real - world scenarios the initial failure of important nodes ( `` hubs '' ) may not be random but targeted , a mathematical framework for understanding the robustness of interdependent networks under an initial targeted attack on specific degree of nodes has been studied by huang et al . and later extended by dong et al .also in real - world scenarios , the assumption that each node in network a depends on one and only one node in network b and vice versa may not be valid . to release this assumption , a theoretical framework for understanding the robustness of interdependent networks with a random number of support and dependency relationships has been developed and studied by shao et al. .more recently , gao et al . developed an analytical framework to study percolation of a tree - like network formed by interdependent networks .gao et al .found that while for the percolation transition is a second order , for any cascading failures occur and the network collapses as in a first order transition .indeed cascading failures have caused blackouts in interdependent communication and power grid systems spanning several countries . 
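the cascade mechanism just described is easy to reproduce numerically for the simplest case of two one - to - one interdependent er networks ( a special case of the netonet studied below , not the general construction ) . a minimal sketch ( python ) , assuming the networkx library is available ; the initial attack is applied to one network only and a fraction of node pairs is coupled :

import random
import networkx as nx

def cascade(n=5000, k=4.0, p=0.7, q=1.0, seed=0):
    # two er networks a and b; node i of a and node i of b depend on each other
    # with probability q. a node is functional only if it survived the attack,
    # lies in the giant component of its own network, and its partner (if coupled)
    # is itself functional. iterate until no further nodes fail.
    rng = random.Random(seed)
    a = nx.gnp_random_graph(n, k / n, seed=rng.randint(0, 10**9))
    b = nx.gnp_random_graph(n, k / n, seed=rng.randint(0, 10**9))
    coupled = [rng.random() < q for _ in range(n)]
    alive_a = {i for i in range(n) if rng.random() < p}   # initial random attack on a
    alive_b = set(range(n))
    while True:
        ga = max(nx.connected_components(a.subgraph(alive_a)), key=len, default=set())
        gb = max(nx.connected_components(b.subgraph(alive_b)), key=len, default=set())
        new_a = {i for i in ga if not coupled[i] or i in gb}
        new_b = {i for i in gb if not coupled[i] or i in ga}
        if new_a == alive_a and new_b == alive_b:
            return len(new_a) / n, len(new_b) / n
        alive_a, alive_b = new_a, new_b

print(cascade(p=0.70))   # above the coupled threshold for k = 4: sizeable mutual cluster
print(cascade(p=0.55))   # below it: only a negligible fraction survives the cascade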
to be able to design resilient infrastructures or improve existing infrastructures we need to understand how venerability is affected by such interdependencies .here we generalize the theory of interdependent networks to regular and random regular ( rr ) network of interdependent networks that include loops .figures [ fig1](a ) and [ fig1](b ) illustrate such network of networks ( netonet ) , in which each network depends on the same number of other networks .we develop an exact analytical approach for percolation of a regular and a random regular netonet system composed of partially interdependent networks .we show that for an rr network with degree of interdependent networks where each network has the same degree distribution , same average degree and the fraction of dependence nodes between a pair of interdependent networks , , is the same for all pairs , the number of networks is irrelevant .we obtain analytically the fraction of survived nodes in each network after cascading failures , as a function of , and .in our model , each node in the netonet is itself a network and each link represents a _ fully _ or _ partially _ dependent pair of networks [ see fig .[ fig1 ] ] .we assume that each network ( ) of the netonet consists of nodes linked together by connectivity links .two networks and form a partially dependent pair if a certain fraction of nodes in network directly depend on nodes in network , i.e. , nodes in network can not function if the corresponding nodes in network do not function . a node in a network will not function if it is removed or if it does not belong to the largest connected cluster ( giant component ) in network .dependent pairs may be connected by unidirectional dependency links pointing from network to network [ see fig .[ fig1](c ) ] .this convention indicates that nodes in network may get a crucial support from nodes in network , e.g. , electric power if network is a power grid .we assume that after an attack or failure only a fraction of nodes in each network remains .we also assume that only nodes that belong to a giant component in each network will remain functional .when a cascade of failures occurs , nodes in network that do not belong to the giant component in network fail and cause nodes in other networks that depend on them to also fail .when those nodes fail , dependent nodes and isolated nodes in the other networks also fail , and the cascade can cause further failures back in network . in order to determine the fraction of nodes in each network that remains functional ( i.e. , the fraction of nodes that constitutes the giant component ) after the cascade of failures as a function of and , we need to analyze the dynamics of the cascading failures .we assume that all nodes in network are randomly assigned a degree from a probability distribution , they are randomly connected , and the only constraint is that a node with degree has exactly links .we define the generating function of the degree distribution , where is an arbitrary complex variable .the generating function of this branching process is defined as .once a fraction of nodes is randomly removed from a network , the probability that a randomly chosen node belongs to a giant component , is given by ,\ ] ] where satisfies .\ ] ] we assume that ( i ) each node in network depends with a probability on only one node in network , and that , ( ii ) if node in network depends on node in network and node in network depends on node in network , then node coincides with node , i.e. 
, we have a no - feedback situation . in sectioniv we study the case of feedback condition , i.e. , node can be different from in network .the no feedback condition prevents configurations from collapsing even without having their internal connectivity in each network .next , we develop the dynamic process of cascading failures step by step . at , in networks of the netonet we randomly remove a fraction of nodes .after the initial removal of nodes , the remaining fraction of nodes in network , is .the remaining functional part of network therefore constituents a fraction of the network nodes , where is defined by eqs .( [ ge2 ] ) and ( [ ge3 ] ) .furthermore , we denote by the fraction of nodes in network that survive after the damage from all the networks connected to network except network is taken into account , so if , . when , all the networks receive the damages from their neighboring networks one by one . without loss of generality , we assume that network is the first , network second , ... , and network is last . in fig .1(c ) , for example , since a fraction , , and of nodes of network depends on nodes from network , , and respectively , the remaining fraction of network nodes is , ,\ ] ] and ( ) satisfies the remaining functional part of network therefore contains a fraction of the network nodes .similarly , we obtain the remaining fraction of network nodes , \prod_{s > i } [ q_{si}y_{si , t-1}g_s(\psi'_{s , t-1})-q_{si}+1],\ ] ] and is and is following this approach we can construct the sequence , of the remaining fraction of nodes at each stage of the cascade of failures. the general form is given by }\prod_{s > i}{[q_{si}y_{si , t-1}g_s(\psi'_{s , t-1})-q_{si}+1 ] } , & \mbox { } & \\y_{ij , t}=\frac{\psi'_{i , t}}{q_{ji}y_{ji , t-1}g_j(\psi'_{j , t})-q_{js}+1 } , & \mbox { } & \\ y_{is , t}=\frac{\psi'_{i , t}}{q_{si}y_{si , t-1}g_s(\psi'_{s , t-1})-q_{si}+1}. & \mbox { } & \\ \end{array}\ ] ] we compare the theoretical formulas of the dynamics , eqs .( [ ge10 ] ) and simulation results in fig .as seen the theory of the dynamics ( [ ge10 ] ) agrees well with simulations . to determine the state of the system at the end of the cascade process we look at at the limit of limit must satisfy the equations since eventually the clusters stop fragmenting and the fractions of randomly removed nodes at step and are equal . denoting ,we arrive for the networks , at the stationary state , to a system of equations with unknowns , where the product is taken over networks interlinked with network by partial ( or fully ) dependency links [ see fig .[ fig1 ] ] and is the fraction of nodes in network that survive after the damage from all the networks connected to network except network itself is taken into account .the damage from network itself is excluded due to the no - feedback condition .equation ( [ e1 ] ) is valid for any type of interdependent netonet , while eqs .( [ e2 ] ) represents the no - feedback condition . for two coupled networks , eqs .( [ e1 ] ) and ( [ e2 ] ) are equivalent to eq .( 13 ) of ref . for the specific case of single dependency links . 
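for the simplest closed case of the framework above - a single pair of fully interdependent er networks with equal average degree k ( single dependency links , full coupling ) - the mutual giant component is known to satisfy the self - consistency relation p_inf = p ( 1 - exp ( - k p_inf ) )^2 , with an abrupt transition near p_c ~ 2.4554 / k . a minimal sketch ( python ) that iterates this relation and locates the threshold by bisection ; this is the two - network special case , not the general rr netonet equations derived in this paper :

import math

def mutual_giant_component(p, k, tol=1e-12, max_iter=200000):
    # fixed-point iteration of P = p * (1 - exp(-k P))**2; starting from P = p the
    # iterates decrease monotonically to the largest (stable) root
    P = p
    for _ in range(max_iter):
        P_new = p * (1.0 - math.exp(-k * P)) ** 2
        if abs(P_new - P) < tol:
            return P_new
        P = P_new
    return P

def critical_p(k, lo=0.0, hi=1.0, steps=50):
    # bisection on p for the abrupt appearance of a non-negligible mutual giant component
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if mutual_giant_component(mid, k) > 1e-6:
            hi = mid
        else:
            lo = mid
    return hi

print(critical_p(4.0))                   # close to 2.4554 / 4 = 0.6139
print(mutual_giant_component(0.8, 4.0))  # mutual giant component well above threshold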
our general framework for percolation of interdependent network of networks , eqs .( [ e1 ] ) and ( [ e2 ] ) , can be generalized in two directions : ( i ) coupling with feedback condition ( ii ) coupling with multiple - support .\(i ) in the existence of the feedback , is simply and eqs .( [ e1 ] ) and ( [ e2 ] ) become a single equation , the feedback condition leads to an extreme vulnerability of the network of interdependent networks . as we know for two fully interdependent networks with no - feedback condition if the average degree is large enough both networks exist . however , for two fully interdependent networks with feedback condition , no matter how large the average degree is , both networks collapse even after a single node is removed .the analytical results about the feedback condition are given in section iv .\(ii ) equation ( [ e1 ] ) can be generalized to the case of multiple dependency links studied for a pair of coupled networks in by , \right),\ ] ] where represents the generating function of the degree distribution of multiple support links that network depends on network .on one hand , the term reflects the topology of network , which can be an er network , a rr network , a scale free ( sf ) network , or even a small world ( sw ) network . on the other hand , {n\times n} ] as as a function of has a quite complex behaviour for various degree distributions .we present two examples to demonstrate our general results on ( i ) rr network of er networks and ( ii ) rr network of sf networks .* for the case of rr network of er networks we find a critical such that , when the system shows a second order phase transition and the critical threshold depends on and average degree .when the system shows a first order phase transition , and when there is no phase transition because all the networks collapse even for a single node failure .* for the case of rr network of sf networks , the phase diagram is different from the er case , because there is no pure first order phase transition .however , there exists an effective , when , the system shows a second order phase transition and the critical threshold is for infinite number of nodes in each network , i.e. , the maximum degree goes to .but for finite size of network , there exists an effective , when the giant component of each network is very close to 0 , and when finite size of giant component smoothly emerges .it becomes more interesting .when , the system shows a hybrid transition as follows .when decreases from 1 to 0 , the giant component as function of shows a sharp jump at , which is like a first order transition to a finite small value , and then ( when further decreases ) goes smoothly to 0 . for is no phase transition because all the networks collapse even for a single node failure . for er networks ,the generating function satisfies , & \mbox { } &\\ f=\exp[\bar{k } x(f-1 ) ] .\end{array}\ ] ] substituting eqs .( [ e13 - 0 ] ) into eqs .( [ e11 ] ) , we get ^m(f-1)\ } } , & \mbox { } & \\ y = p[qy(1-f)-q+1]^{m-1 } , & \mbox { } & \\p_{\infty } = -(\log f)/\bar{k}. \end{array}\ ] ] eliminating from eq .( [ e13 ] ) , we obtain an equation for , ^{\frac{2}{m}}+(q-1)[\frac{\ln f}{\bar{k}p(f-1)}]^{\frac{1}{m}}+\frac{q}{\bar{k}}\log f = 0 .\end{array}\ ] ] considering ^{1/m} ] ) , we obtain and next , we prove that is a decreasing function of , i.e. 
, .it is easy to see and the equal condition is satisfied only when , so .thus we obtain that is a monotonous decreasing function of , which is very different from the no feedback condition .so the maximum of is obtained only when , which corresponds to the critical value of , which is the same as eq .( [ e20 ] ) .thus , the second order threshold of no feedback is the same as the feedback , which is also shown in fig .[ fig9 ] ( a ) . however , the feedback case is still more vulnerable than the no feedback case .[ fig9 ] ( b ) and ( c ) show for , i.e. the giant component in each network of the netonet when there is no node failures , as a function of .we can see that for the no feedback case , fig .[ fig9 ] ( b ) , the system still has very large giant component left when both and are large , but for the feedback case , there is not giant component when both and are large .this happens because of the single connected nodes and isolated nodes in each network .substituting into eq .( [ ge25 ] ) , we obtain or , which represents the minimum and maximum for which a phase transition exists , and equations ( [ ge26 ] ) and ( [ ge27 ] ) demonstrate that the netonet collapses when and are fixed and and when and are fixed and , i.e. , there is no phase transition in these zones .however , of the feedback case is smaller than that of no feedback case shown in fig .[ fig8 ] ( a ) , which shows that the feedback case is more vulnerable than the no feedback case . in fig .[ fig8 ] ( b ) we show that increasing or decreasing will increase , i.e. , increase the robustness of netonet. next we study the feedback condition for the case of rr netonet formed of rr networks of degree . in this case , eq . [ ge22 ] becomes ^{\frac{1}{k}}=p\left\{1-\left[1-\frac{p_{\infty}}{p(1-q - qp_{\infty})}\right]^{\frac{k-1}{k}}\right\ } ( 1-q+qp_{\infty})^m.\ ] ] we find that the rr networks are very different from the er networks , and the system shows first order phase transition for large and a second order phase transition for small as shown in fig .[ fig12 ] .in summary , we develop a general framework , eqs .( [ e1 ] ) and ( [ e2 ] ) , for studying percolation in several types of netonet of any degree distribution .we demonstrate our approach for a rr network of er networks that can be exactly solved analytically , eqs .( [ e16 ] ) and for rr of sf networks for which the analytical expressions can be solved numerically .we find that and exist , where a netonet shows a second - order transition when , a hybrid transition when , and that in all other cases there is no phase transition because all nodes in the netonet spontaneously collapse .thus the percolation theory of a single network is a limiting case of a more general case of percolation of interdependent networks .our results show that the percolation threshold and the giant component depend solely on the average degree of the er network and the degree of the rr network , but not on the number of networks .these findings enable us to study the percolation of different topologies of netonet .we expect this work to provide insights leading to further analysis of real data on interdependent networks .the benchmark models we present here can be used to study the structural , functional , and robustness properties of interdependent networks . 
because , in real netonets , individual networks are not randomly connected and their interdependent nodes are not selected at random , it is crucial that we understand the many types of correlations existing in real - world systems and further develop the theoretical tools for studying them . future studies of interdependent networks will need to focus on ( i ) an analysis of real data from many different interdependent systems and ( ii ) the development of mathematical tools for studying the vulnerability of real - world interdependent systems .

m. j. pocock , d. m. evans & j. memmott , science * 335 * , 973 - 977 ( 2012 ) .
a. bashan et al . , nature communications * 3 * , 702 ( 2012 ) .
k. zhao & g. bianconi , arxiv preprint arxiv:1210.7498 .
* 104 * , 018701 ( 2010 ) .
c. m. schneider , a. a. moreira , j. s. andrade , s. havlin , h. j. herrmann , proc . natl . acad . sci . * 108 * , 3838 - 3841 ( 2011 ) .
s. v. buldyrev et al . , nature * 464 * , 1025 ( 2010 ) .
y. hu , b. ksherim , r. cohen , s. havlin , phys . rev . e * 84 * , 066116 ( 2011 ) .
s. v. buldyrev et al . , phys . rev . e * 83 * , 016112 ( 2011 ) .
i. dobson , b. a. carreras , v. e. lynch , and d. e. newman , chaos * 17 * , 026103 ( 2007 ) .

figure captions :
* ( a ) a lattice netonet composed of 9 interdependent networks ; the degree of the netonet is , i.e. , each network depends on 4 networks . ( b ) a rr netonet composed of 6 interdependent networks represented by 6 circles ; the degree of the netonet is , i.e. , each network depends on 3 networks . the analytical results for the netonet [ eqs . ( [ e11 ] ) and ( [ ge5 ] ) ] are exact and the same for both cases ( a ) and ( b ) . ( c ) schematic representation of the dependencies : circles represent networks in the netonet and arrows the partially interdependent pairs ; for example , a fraction of nodes in network depends on nodes in network 3 . pairs of networks which are not connected by dependency links do not have nodes that directly depend on each other .
* simulations compared to theory of the giant component , , after cascading failures for ( a ) the lattice netonet composed of 9 er networks shown in fig . 1(a ) , ( b ) the random regular netonet composed of 6 er networks shown in fig . 1(b ) with the no - feedback condition , and ( c ) the same netonet with the feedback condition . the critical thresholds are predicted by eqs . ( [ e25 ] ) , ( [ ge18 ] ) , ( [ ge27 ] ) and ( [ ge25 ] ) . the results are averaged over 20 simulated realizations of the giant component left after stages of the cascading failures and compared with the theoretical prediction of eq . ( [ ge10 ] ) .
* the giant component , , as a function of , for er networks with average degree : ( a ) for two different values of and , ( b ) for two different values of and . the curves are obtained using eq . ( [ e16 ] ) and are in excellent agreement with simulations ( symbols , averaged over 20 realizations ) , supporting the result that eq . ( [ e16 ] ) is independent of the number of networks .
* as a function of for an rr network of er networks , for different values of when and . all lines are produced using eq . ( [ ge15 ] ) ; the symbols show the critical thresholds , which coincide with the results in fig . the dashed - dotted line shows the case in which eq . ( [ ge15 ] ) has no solution , corresponding to complete collapse of the netonet .
* , , as a function of : ( a ) for and two different values of , ( b ) for and two different values of . the curves are obtained using eqs . ( [ e16 ] ) and ( [ e20 ] ) and are in excellent agreement with simulations ( symbols ) . panels ( a ) and ( b ) show the location of and for two values of ; between and the transition is first order , for the transition is second order , and otherwise the netonet collapses and there is no phase transition .
* ( a ) for and , ( b ) for and . the solid curves show the second order phase transition ( predicted by eq . ( [ e25 ] ) ) and the dashed - dotted curves show the first order phase transition . as decreases and increases , the second order region increases , showing better robustness . the circle shows the tri - critical point below which a second order transition occurs and above which a first order transition occurs ; the square shows the critical point above which the netonet completely collapses even when .
* as a function of for different values of when , , and . ( i ) when is small ( ) , is a monotonically increasing function of and the system shows a second order phase transition . ( ii ) when is larger ( ) , shows a peak at , corresponding to a hybrid phase transition ; the square marks the critical point of the sharp jump ( ) . ( iii ) when is large enough ( ) , first decreases and then increases with , corresponding to collapse of the system .
* ( a ) as a function of for different values of and for . ( b ) the critical threshold and ( c ) the corresponding giant component at the threshold as a function of coupling strength for and . the symbols in ( a ) are simulation results averaged over 20 realizations for two numbers of networks ( squares and circles ) ; the lines are the theoretical results obtained using eqs . ( [ ge5 ] ) and ( [ ge1 ] ) - ( [ ge3 ] ) . the system shows a hybrid phase transition for and , a second order phase transition for weak coupling ( the threshold vanishes only in the limit of infinite network size , so finite simulations give a small non - zero value ) , and complete collapse even if one node fails for strong coupling . the transition is called hybrid because it differs from the case of er networks with a first order phase transition .
* , as a function of for er networks with average degree : ( a ) for different values of when , ( b ) for different values of when . the curves are obtained using eq . ( [ ge22 ] ) and are in excellent agreement with simulations ( symbols , fig . 1(b ) topology when and networks forming a circle when , averaged over 20 realizations ) . the absence of a first order regime in a netonet formed of er networks is due to the fact that at the initial stage nodes in each network are interdependent on isolated nodes ( or clusters ) in the other network ; if only nodes in the giant components of both networks were interdependent , all three regimes - second order , first order and collapse - would occur , as in the case of a rr netonet formed of rr networks .
* as a function of for both the no - feedback and feedback conditions when : for the no - feedback condition the parts of the curves below the symbols show and above the symbols show ; the feedback condition has only the of second order . panels ( b ) and ( c ) show as a function of for different values of when , with ( b ) the no - feedback and ( c ) the feedback condition ; when , for all in both cases . comparing ( b ) and ( c ) shows that the feedback case is much more vulnerable than the no - feedback case .
* ( a ) as a function of for the feedback and no - feedback conditions when ; the value for the no - feedback case is larger , indicating that the no - feedback case is more robust . ( b ) the maximum coupling strength as a function of with the feedback condition for different values of , showing that increasing or decreasing increases , i.e. , increases the robustness of the netonet .
* , as a function of for rr networks of degree and , for two different values of . the curves are obtained using eq . ( [ ge28 ] ) and show a first order phase transition when is large and a second order phase transition when is small .
percolation theory is an approach to studying the vulnerability of a system . we develop an analytical framework and analyze percolation properties of a network composed of interdependent networks ( netonet ) . typically , percolation of a single network shows that the damage in the network due to a failure is a continuous function of the fraction of failed nodes . in sharp contrast , in a netonet , due to the cascading failures , the percolation transition may be discontinuous and even a single node failure may lead to abrupt collapse of the system . we demonstrate our general framework for a netonet composed of classic erdős - rényi ( er ) networks , where each network depends on the same number of other networks , i.e. , a random regular network of interdependent er networks . in contrast to a _ treelike _ netonet in which the size of the largest connected cluster ( mutual component ) depends on , the loops in the rr netonet cause the largest connected cluster to depend only on . we also analyze the extremely vulnerable feedback condition of coupling . in the case of er networks , the netonet only exhibits two phases , a second order phase transition and collapse , and there is no first order phase transition regime , unlike the no - feedback condition . in the case of a netonet composed of rr networks , there exists a first order phase transition when is large and a second order phase transition when is small . our results can help in designing robust interdependent systems .
Since the work on complex networks by Strogatz, Watts, Barabási and Albert (see ), many researchers from such distinct fields as statistical mechanics, molecular biology, ecology, physical chemistry, genetics, or social science have studied the emerging complex structure and the behavior of networks in their respective field of research. A special subset of scale-free non-equilibrium networks can emerge from a construction procedure in which at each time step one vertex is added and connected to existing vertices with preferential linking. This preference is proportional to the number of already existing connections of that particular vertex. By definition the average number of connections remains constant. The distribution of the degree of connections is of particular interest as it provides the possibility to distinguish different classes. One observes in scale-free networks a behavior that was first discussed by Simon and, in the context of citation networks, by Price. Extensions to a more complex linking procedure or more general linkage properties were recently discussed in detail. To study the evolution of the distribution, the continuum approximation is often used, in which the average number of connections of a vertex created at an earlier time follows, at a later time, a rate equation ([eq::approx]) for an undirected network. Bianconi and Barabási pointed to the effects of distributions of fitness of individuals to attract new connections. This can already be regarded as one prototypical example of incomplete information, by interpreting the fitness in their model as an incomplete knowledge of all newer vertices about the individual properties or existence of the present vertices. Mossa et al. showed that the power-law behavior might be truncated due to information filtering. In their case a newly attached vertex is only aware of a certain subset of the existing vertices. This subset is, however, chosen randomly for each vertex individually. Therefore the incomplete information has no global properties but is instead a local property. Here we want, however, to follow another route with two distinct models to deal with the more interesting case in which the incomplete information is attached to the new vertices individually and is still global with respect to the whole network. One model will mimic a generic and global effect that is present in all real citation networks, while the other describes the influence of individual information unawareness. For a newly created vertex the only relevant information is a list of existing vertices to connect to and their respective degrees. Incomplete information results in the ignorance of some of those vertices. This effect is here mediated via an awareness function that makes the newly connected vertex aware of the existing vertex. Eq.
([eq::approx]) then acquires this awareness function as an additional weight in the attachment probability. While there are many choices for the awareness function, we will propose one particular structure to resemble actual effects in citation networks. We will further keep the number of links added per new vertex fixed, as this network property has no influence on the scaling exponent in the models. A newly created link might not be aware of the most recently created entities in the network. One encounters this situation actually very often: the author of a new WWW page cannot be aware of other recently created pages that he would like to link to. Search engines do not provide an instantaneous listing of just-created pages, and therefore authors can find new pages only by chance. This incomplete information about the vertices of the network results in a selection of older vertices for linking. We model this by choosing the awareness function as a Heaviside step function with a constant cutoff. Therefore a newly created vertex is only aware of the oldest fraction of the existing vertices. While this seems at first glance too strong an assumption, it is realized in exponentially growing systems like the WWW: suppose that from the current real time we cannot know new pages younger than some period. Then all existing pages that are capable of attracting a link are taken from an interval that excludes this most recent period. Here we have set the awareness cutoff to a constant fraction. Recall that the time in Eq. ([eq::approx]) is actually the number of vertices, so that we draw the target vertex only from the oldest part of the vertex list. Restricting the awareness in this way reduces the number of available vertices in the growth process and the network gets denser: the shortest path lengths get smaller and the probability of vertices with a larger number of connections bigger, as can be seen from the previous result. For very small awareness fractions we see, however, an increase in these quantities. This is an artifact due to our initial setting of the first vertices to form a chain. While the chain guarantees that none of the first vertices is preferred to another initial vertex, the path lengths are now largely influenced by it. For the smallest fraction used here the chain length dominates the path lengths. [Figure: for every point we sampled over independent networks; the first vertices were initialized to form a chain, giving no preference to any of them.] Suppose that WWW pages are the vertices in this scenario and links are the edges of the network. This model then takes into account the time that, e.g., search engines need to encounter new web pages and make the general public aware of those sites. The anonymity of a vertex is healed over time by sliding into the focus of new vertices as soon as it is old enough. Here the incomplete information refers to the knowledge of every new vertex uniformly.
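As a concrete illustration of this first model, the following minimal sketch (our own, not taken from the original study; the function and variable names are ours, a single starting edge replaces the chain initialization used in the text, and one link is added per new vertex) grows a network in which every new vertex attaches preferentially, but only among the oldest fraction of the existing vertices:

import random
from collections import Counter

def grow_restricted_awareness(n_vertices, aware_frac, seed=0):
    # Grow a network with preferential attachment restricted to the oldest
    # fraction `aware_frac` of the existing vertices (one link per new vertex).
    # NOTE: the single starting edge is a simplification of the chain initialization used in the text.
    rng = random.Random(seed)
    degree = [1, 1]                              # vertices 0 and 1 joined by one edge
    for t in range(2, n_vertices):
        cutoff = max(2, int(aware_frac * t))     # the new vertex only "sees" vertices 0 .. cutoff-1
        weights = degree[:cutoff]                # preferential linking among the visible vertices only
        target = rng.choices(range(cutoff), weights=weights, k=1)[0]
        degree.append(1)                         # the new vertex arrives with a single link
        degree[target] += 1
    return degree

degrees = grow_restricted_awareness(20000, aware_frac=0.1)
histogram = Counter(degrees)
for k in sorted(histogram)[:10]:
    print(k, histogram[k])

Plotting the resulting histogram on a double-logarithmic scale should make the power-law tail visible; smaller awareness fractions densify the network without destroying the scaling, in line with the discussion above.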
There is also the opposite scenario, in which some vertices are aware of the full information (that is, the number of connections all the existing vertices possess) while others are simply ignorant and connect with equal probability to any of the existing ones. Here the incompleteness of information is restricted to a subset of individuals: suppose that a newly added vertex is, with some probability p, aware of all the connectivities of the other vertices. In this case it is attached with the preferential linking described above. With probability 1-p it is connected without preference. We want to deduce the effect on the connectivity distribution from the master equation for the average number n(k,t) of vertices of degree k at time t,

n(k,t+1) = p\left[n(k,t)+\frac{k-1}{\bar{k}\,t}\,n(k-1,t)-\frac{k}{\bar{k}\,t}\,n(k,t)\right]+(1-p)\left[n(k,t)+\frac{1}{t}\,n(k-1,t)-\frac{1}{t}\,n(k,t)\right]+\delta_{k,1}.

Here \bar{k} is the average degree of each vertex. The first term describes the preferential linking with its in- and outflow, while the second term provides for the additional connections, or loss thereof, with equal probability. Notice that there are currently t vertices in the network, so 1/t is the probability of hitting any one of those. The third term is finally responsible for the newly added vertex. By changing to continuous time we get from Eq. ([eq::p])

\frac{\partial}{\partial t}\left[t\,P(k,t)\right]=\left[p\,\frac{k-1}{\bar{k}}+1-p\right]P(k-1,t)-\left[p\,\frac{k}{\bar{k}}+1-p\right]P(k,t)+\delta_{k,1},

where P(k,t) is the density of vertices with degree k at time t. We can now solve for the stationary distribution in the limit of large times. We arrive at a recursion relating P(k) to P(k-1), which can be iterated to a closed form. For large k we conclude a power-law decay of P(k), with an exponent that diverges when approaching p → 0, as in this case we have no preferential linking at all. In this limit the starting master equation ([eq::p]) correctly leads to the Poisson distribution. The diverging behavior of the exponent was, for instance, also found by Krapivsky and Redner in their treatment of growing networks with redirection. Figure [fig::np] shows the results from computer experiments for this model. The smaller p is, the more difficult it is to see any indication of the power law. [Figure fig::np: cumulative number of vertices of given degree in networks of fixed size, averaged over independent runs; the data was shifted for a better overview. The straight line through the data is the derived result of Eq. ([eq::p2]), while the broken line indicates the asymptotic power law for the cumulative number of the corresponding distribution.] In this paper we developed two distinct models to describe the effect of 1) global incomplete information caused by penetration rates while constructing a citation network and 2) local incomplete information of individual vertices that are attached with a probability of non-knowledge. We derived the scaling behavior of the degree distribution for large degrees in both cases and compared this to computer experiments. Both models approach the known analytic value of the exponent when reaching full information. The incomplete information in the two models does not destroy the scale-free behavior of the systems, while Mossa et al. found a cross-over from scale-free behavior to an exponential in another model which takes information into account. By comparison one can see the influence incomplete information may have on the global structure of growing networks. We will work out particulars on real-world networks and the influence of incomplete information in a forthcoming study. K.H. is supported through a Liebig fellowship of the Fonds der Chemischen Industrie. Computational resources were provided under a grant of the Howard Hughes Medical Institute and by the NSF-sponsored Center for Theoretical Biological Physics (grant numbers PHY-0216576 and 0225630). I thank S. Redner and S. Mossa for bringing the respective references to my attention. Stimulating discussions with and comments from C. Gros, J. A. McCammon, and T. Hwa are gratefully acknowledged.
We investigate the effect of incomplete information on the growth process of scale-free networks, a situation that occurs frequently, e.g., in real existing citation networks. Two models are proposed and solved analytically for the scaling behavior of the connectivity distribution. These models show a varying scaling exponent with respect to the model parameters but no breakdown of scaling, thus introducing the first models of scale-free networks in an environment of incomplete information. We compare to results from computer simulations, which show very good agreement. Keywords: random graphs, networks, socio-economic networks, stochastic processes, growth processes
Cooperative behaviors are ubiquitous in the real world, ranging from biological systems to socioeconomic systems. However, the question of how natural selection can lead to cooperation has fascinated evolutionary biologists for several decades. Fortunately, together with classic game theory, evolutionary game theory provides a systematic and convenient framework for understanding the emergence and maintenance of cooperative behaviors among selfish individuals. Especially, the prisoner's dilemma game (PDG), as a general metaphor for studying the evolution of cooperation, has attracted considerable interest. In the original PDG, two players simultaneously decide whether to cooperate (C) or to defect (D). They both receive the reward for mutual cooperation and the punishment for mutual defection. A defector exploiting a C player gets the temptation payoff, and the exploited cooperator receives the sucker's payoff, such that the payoffs are ordered temptation > reward > punishment > sucker's payoff. As a result, it is best to defect regardless of the co-player's decision. Thus, in well-mixed infinite populations, defection is the evolutionarily stable strategy (ESS), even though all individuals would be better off if they cooperated. This creates the social dilemma, because when everybody defects, the mean population payoff is lower than when everybody cooperates. In a recent review Nowak suggested five rules for the evolution of cooperation (see ref. and references therein). Most noteworthy, departing from the well-mixed population scenario, the rule "network reciprocity" conditions the emergence of cooperation among players occupying the network vertices: the benefit-to-cost ratio must exceed the average number of neighbors per individual. Actually, the successful development of network science provides a convenient framework to describe the population structure on which the evolution of cooperation is studied. The vertices represent players, while the edges denote links between players in terms of game dynamical interactions. Furthermore, interactions in real-world networks of contacts are heterogeneous, often associated with a scale-free (power-law) degree distribution. Accordingly, the evolution of cooperation on model networks with features such as lattices, small-world, scale-free, and community structure has been scrutinized. Interestingly, Santos et al. found that scale-free networks provide a unifying framework for the emergence of cooperation. To the best of our knowledge, much previous work on games on networks is based on crystalized (static) networks, i.e., the social networks on which the evolution of cooperation is studied are fixed from the outset and not affected by the evolutionary dynamics on top of them. However, interaction networks in the real world are continuously evolving ones, rather than static graphs. Indeed, individuals adapt the number, frequency, and duration of their social ties based upon certain feedback mechanisms. Instead of investigating evolutionary games on static networks, which constitute just one snapshot of the real evolving ones, some researchers recently proposed that the network structure may co-evolve with the evolutionary game dynamics. Interestingly, as pointed out in refs.
, the entangled evolution of individual strategy and network structure constitutes a key mechanism for the sustainability of cooperation in social networks. Therefore, to understand the emergence of cooperative behavior in realistic situations (networks), one should combine strategy evolution with topological evolution. From this perspective, we propose a computational model in which both the adaptation of the underlying network of interactions and the evolution of behavioral strategy are taken into account simultaneously. In our model, each agent plays a multi-round prisoner's dilemma game with its immediate neighbors; after that, based upon self-interest, some individuals may punish their defective neighbors by dismissing the social tie to the one who defects the most times, meanwhile seeking a new partner at random from the neighbors of the punished agent. We shall show that such local adaptive interactions lead to a situation where cooperators become evolutionarily competitive, due to the preference of assortative mixing between cooperators. The remainder of this paper is organized as follows. In the following section, the model is introduced in detail. Sec. III presents the simulation results and discussions. We finally draw conclusions in Sec. IV. We consider a symmetric two-player game where individuals engage in the prisoner's dilemma game (PDG) over a network. The total number of edges is fixed during the evolutionary process. Each individual plays with its immediate neighbors defined by the underlying network. The neighbor set of individual x is denoted as Ω_x, which is allowed to evolve according to the game results. Let us denote by s_x the strategy of individual x. Player x can follow two simple strategies, cooperation (C) and defection (D), in each round. Following previous studies, the payoff matrix has a rescaled form depending on a single parameter, the temptation to defect. In each round, each agent plays the same strategy with all its neighbors and accumulates the payoff, observing the aggregate payoff and strategy of its neighbors. The total income of the player at site x can be expressed as

M_x = \sum_{y\in\Omega_x} P(s_x,s_y),

where the sum runs over all the neighboring sites y of x, and P denotes the payoff matrix. In evolutionary games the players are allowed to adopt the strategies of their neighbors after each round. Then, the individual x randomly selects a neighbor y for possibly updating its strategy. The site x will adopt y's strategy with a probability determined by the total payoff difference between them,

W = \frac{1}{1+\exp\left[\beta\,(M_x-M_y)\right]},

where the parameter β is an inverse temperature in statistical physics, the value of which characterizes the intensity of selection. β → 0 leads to neutral (random) drift, whereas β → ∞ corresponds to the imitation dynamics, where y's strategy replaces x's whenever M_y > M_x. For finite values of β, the larger β is, the more apt the fitter strategy is to replace the less fit one; thus the value of β indicates the intensity of selection.
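As an illustration of this strategy-update step, here is a minimal sketch (our own; the rescaled payoff values R = 1, T = b, S = P = 0, the default parameter values, and all function and variable names are assumptions on our part, not a specification taken from the text):

import math
import random

def total_payoff(player, neighbors, strategies, b=1.5):
    # Accumulated payoff of `player` against all of its neighbors.
    # Assumed rescaled prisoner's dilemma payoffs: R = 1 (C meets C), T = b (D exploits C), S = P = 0.
    payoff = 0.0
    for other in neighbors:
        if strategies[player] == 'C' and strategies[other] == 'C':
            payoff += 1.0
        elif strategies[player] == 'D' and strategies[other] == 'C':
            payoff += b
        # C against D and D against D contribute nothing in this rescaled form
    return payoff

def adopts_neighbor_strategy(M_x, M_y, beta=1.0, rng=random):
    # Fermi rule: x adopts y's strategy with probability W = 1 / (1 + exp[beta (M_x - M_y)]).
    z = max(-60.0, min(60.0, beta * (M_x - M_y)))   # clamp the argument to avoid overflow in exp
    return rng.random() < 1.0 / (1.0 + math.exp(z))

For large β this rule approaches deterministic imitation of the better-performing neighbor, while β close to zero reproduces the neutral-drift limit described above.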
In the present model, we assume each agent plays a multi-round PDG with its neighbors (strategy dynamics), and then a number of randomly selected individuals are allowed to adapt their social ties according to the game results (structural dynamics). Here the individuals are endowed with limited cognitive capacities: each agent records the strategies of its opponents used during the rounds of the game. Then they are able to decide to maintain those ties from which they benefit and to rewire the adverse links. For the sake of simplicity, if someone is picked for updating its neighbors, only the most disadvantageous edge is rewired. It dismisses the link to the one who defects the most times (if more than one individual defects the same maximum number of times, one of them is chosen at random), and redirects the link to a random neighbor of the punished agent (see Fig. 1 for an illustrative example). The advantage of rewiring to a neighbor's neighbor is twofold: first, individuals tend to interact with others that are close by in a social manner, i.e., a friend's friend is more likely to become a friend (partner); second, every agent seeks to attach to cooperators, thus redirecting to a neighbor's neighbor is a good choice, since the neighbor also tries to establish links with cooperators. Hence rewiring to a neighbor of a defector is no doubt a good choice for individuals with local information only. Herein, the number of game rounds and the number of individuals chosen for rewiring can be viewed as the corresponding time scales of strategy evolution and network structure adaptation. As our strategy evolution uses synchronous updating, while the evolution of network topology adopts asynchronous updating, in our case the strategy-updating events proceed naturally much more frequently than the evolution of the network structure. Nevertheless, even though network structure adaptation is much slower than the game dynamics, cooperation is still promoted by the efficient interplay between the two dynamics. [Fig. 1: An agent plays the multi-round prisoner's dilemma game with its immediate neighbors; A dismisses the link to B, who defects the most times, and rewires the link to C, a random neighbor of B.]
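A minimal sketch of this rewiring step (again our own illustration: the graph is kept as an adjacency dictionary of sets, and defect_count[x][y] is assumed to hold how often neighbor y defected against x during the preceding rounds; both structures and their names are ours):

import random

def rewire_most_adverse_link(x, adjacency, defect_count, rng=random):
    # x dismisses its tie to the neighbor that defected most often and
    # reconnects to a random neighbor of that punished agent (cf. Fig. 1).
    neighbors = adjacency[x]
    offenders = [y for y in neighbors if defect_count[x].get(y, 0) > 0]
    if not offenders:
        return                                    # no defecting neighbor, nothing to punish
    worst = max(defect_count[x][y] for y in offenders)
    punished = rng.choice([y for y in offenders if defect_count[x][y] == worst])
    if len(adjacency[punished]) <= 1:
        return                                    # a node holding a single edge keeps that connection
    candidates = adjacency[punished] - {x} - neighbors
    if not candidates:
        return                                    # no admissible new partner among the punished agent's neighbors
    new_partner = rng.choice(sorted(candidates))
    adjacency[x].remove(punished); adjacency[punished].remove(x)
    adjacency[x].add(new_partner); adjacency[new_partner].add(x)

Excluding x's current neighbors from the candidate set simply avoids duplicate edges, and the protection of single-edge nodes anticipates the connectivity constraint imposed on the simulations below.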
Let us point out the differences between our model and previous works. In refs., the evolution of strategy adopted the "best-takes-over" update rule, where each agent imitates the strategy of the best neighbor. Besides, individuals are divided into two types based on the payoffs: satisfied and unsatisfied. If an individual's payoff is the highest among its neighbors, then it is satisfied; otherwise, it is unsatisfied. The network adaptation dynamics is restricted to the ones who are defectors and unsatisfied. Thus the unsatisfied defector breaks the link with a defective neighbor with some probability and replaces it by choosing an agent uniformly at random from the network. More recently, ref. proposed another minimal model that combined strategy evolution with topological evolution. They used an asynchronous update rule both for the evolution of strategy and for the evolution of structure, via the Fermi function. In their model, topological evolution is manipulated in the following way: a pair of C-D or D-D players is chosen at random, and one may compete with the other to rewire the link, rewiring being attempted to a random neighbor's neighbor with a certain probability determined by the payoff difference between them. Whereas in our model, we argue that individuals act exclusively on their self-interest. Even if an individual is a cooperator, it cannot bear the exploitation by defectors. Furthermore, in our situation, individuals have enough inspection over their opponents because they are engaged in a multi-round PDG. Subsequently, each agent can punish the most defective neighbor by dismissing the link, and meanwhile seeks to establish links with cooperators. Especially, in our model the agents are endowed with limited memory abilities by which they can punish the most defective one. In addition, the way of associating time scales with the evolution of strategy and structure differs in our model from the previous related works. As aforementioned, after the multi-round PDG with neighbors, individuals update their local interactions according to the game results. Such a timely feedback mechanism is ubiquitous in the natural world. Besides, the adaptation of the network structure is much slower than the evolution of strategies in our model. Such a feature reflects the fact that individuals may not respond rapidly to their surroundings, since maintaining and rewiring interactions are costly to them. In previous investigations, the time scales are often implemented in a stochastic manner. Although the implementation of time scales in the literature is in a way equivalent to our model, our method may be more plausible. Therefore, our model differs from the previous ones in these respects and captures the characteristics of real situations. In what follows, we investigate under which conditions cooperation may thrive by extensive numerical simulations, and we also show the effects of the model parameters on the evolution of cooperation. We consider individuals occupying the network vertices. Each interaction between two agents is represented by an undirected edge (with a fixed total number of edges). The social networks evolve in time as individuals adapt their ties. The average connectivity is conserved during the topological evolution, since we do not introduce or destroy links. This assumes a constrained resource environment, resulting in limited possibilities of network configurations. Besides, we impose that nodes linked by a single edge cannot lose this connection; thus the evolving networks are connected at all times. We quantified the amount of heterogeneity of the networks as the variance of the network degree sequence, computed from the number of vertices carrying a given number of edges. Additionally, in order to investigate the degree-degree correlation pattern of the emerging social networks, we adopted the assortativity coefficient suggested by Newman,

r = \frac{M^{-1}\sum_i j_i k_i-\left[M^{-1}\sum_i\frac{1}{2}(j_i+k_i)\right]^2}{M^{-1}\sum_i\frac{1}{2}(j_i^2+k_i^2)-\left[M^{-1}\sum_i\frac{1}{2}(j_i+k_i)\right]^2},

where j_i and k_i are the degrees of the vertices at the ends of the i-th edge, with i = 1, ..., M. Networks with an assortative mixing pattern, i.e., r > 0, are those in which nodes with large degree tend to be connected to other nodes with many connections, and vice versa.
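For completeness, the assortativity coefficient above can be evaluated directly from an edge list; the following short sketch (ours, not part of the original text) does exactly that:

from collections import Counter

def assortativity(edges):
    # Newman's degree-degree correlation coefficient r for an undirected edge list.
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    M = len(edges)
    ends = [(degree[u], degree[v]) for u, v in edges]         # degrees at the two ends of each edge
    s1 = sum(j * k for j, k in ends) / M
    s2 = sum(0.5 * (j + k) for j, k in ends) / M
    s3 = sum(0.5 * (j * j + k * k) for j, k in ends) / M
    return (s1 - s2 ** 2) / (s3 - s2 ** 2)

print(assortativity([(0, 1), (0, 2), (0, 3), (0, 4)]))        # a star graph gives r = -1 (disassortative)

A negative value, as for the star graph in this example, signals the disassortative mixing that the adaptive networks of this model develop.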
The interplay between network structure and game dynamics is implemented in the following steps:
* Step (1): The evolution of strategy uses synchronous updating. Each agent plays the PDG with its immediate neighbors for a number of consecutive rounds. After each round, each individual adapts its strategy according to Eq. ([transp]) and records the defection times of each of its neighbors.
* Step (2): The update of the individuals' local social interactions is asynchronous. Randomly chosen agents are successively selected to rewire their most adverse links (if any), as shown in Fig. 1.
* Step (3): Repeat the above two steps until the population converges to an absorbing state (full cooperators or full defectors), or stop after a maximum number of generations.
We start from a homogeneous random graph by using the method in ref., where all nodes have the same number of edges, randomly linked to arbitrary nodes. Initially, an equal percentage of cooperators and defectors is randomly distributed among the elements of the population. We run 100 independent simulations for each set of the corresponding parameters, and we compute the fraction of runs that ended up with 100% cooperators. If the evolution has not reached an absorbing state after the maximum number of generations, we take the average fraction of cooperators in the population as the final result. Moreover, we observe the time evolution of the network structure and strategy, including the degree-degree correlation coefficient, the degree of heterogeneity, the frequency of cooperators, the fraction of C-C/C-D/D-D links, etc. Finally, we confirm that our results are valid for different population sizes and edge numbers. We report a typical time evolution of the network structure as a result of the adaptation of social ties in Fig. 2. The emerging social network shows a disassortative mixing pattern, indicating that large-degree nodes tend to be connected to low-degree nodes. The degree-degree correlation coefficient of the network we started from is zero. Once the network structure adaptation is in effect, a disassortative mixing pattern develops. Since the rewiring process is attempted to a random neighbor's neighbor, the nodes with large connectivity are more likely to be attached to by others. Due to such "rich gets richer" dynamics, inhomogeneity is induced, as shown in Fig. 2(b). The amount of heterogeneity (degree variance) increases by virtue of the rewiring process. The inset in Fig. 2(b) plots the cumulative degree distribution of the stationary (final) network, which exhibits high heterogeneity with a power-law tail. Fig. 2(c) displays the evolution of cooperation. We find that the frequency of cooperators decreases at first due to the temptation to defect, and then, because of the adaptive interactions, the cooperation level thrives gradually and the population converges into an absorbing state of 100% cooperators. The viability of cooperation is also in part promoted by the heterogeneity of the underlying network. From Fig. 2(d), we can see that local assortative interactions between cooperators are enhanced by structural updating, while assortative interactions between defectors are inhibited remarkably. The disassortativity between cooperators and defectors is promoted in the beginning by strategy updating, yet is eventually diminished by structural updating. Clearly, it is thus indicated that the interplay between strategy and structure facilitates the emergence of cooperation. [Fig. 2: Time evolution of the network structure and strategies, starting from a homogeneous random graph randomly linked to arbitrary nodes; parameter values as given in the text.] [Fig. 3: Fraction of runs ending with 100% cooperators as a function of the fraction of individuals updating their ties, for different values of the temptation to defect (curves from left to right); 100 simulations per point, starting from an equal share of cooperators and defectors. The inset plots the critical value versus the temptation to defect.] [Fig. 4: Effect of the number of rounds on the evolution of cooperation, for fixed values of the remaining parameters.]
[Fig. 5: Effect of the average connectivity on the evolution of cooperation; fraction of cooperators as a function of the relevant parameter for different values (curves from left to right).] [Fig. 6: Effect of the intensity of selection on the evolution of cooperation, for different parameter values.] Let us consider the effect of the amount of temptation to defect on the evolution of cooperation. The relevant result is presented in Fig. 3. With an increasing temptation to defect, the structural updating events must be sufficiently frequent to guarantee the survival of cooperators. In other words, the fraction of individuals chosen for updating social ties should be accordingly increased to ensure the sustainability of cooperators. When the temptation to defect is enlarged, the defectors become more favored by natural selection. Nevertheless, with the aid of structural updating, a small fraction of surviving cooperators are promoted into hubs (large-degree nodes), since they are attractive to their neighborhood. Such co-evolution of strategy and structure leads to highly heterogeneous networks in which the cooperators become evolutionarily competitive, as demonstrated in refs. For a fixed temptation to defect, we observed a critical value of the fraction of rewiring individuals, above which the cooperators wipe out the defectors. For a fixed number of rounds, this critical value increases monotonously with increasing temptation, as shown in the inset of Fig. 3. Therefore prompt network adaptation prevents cooperators from becoming extinct and, further, results in an underlying heterogeneous social network which is the "green house" for cooperators to prevail under the strategy dynamics. Consequently, the entangled co-evolution of strategy and structure promotes the evolution of cooperation among selfish individuals. Furthermore, we investigated the effect of the number of rounds on the emergence of cooperation. For a fixed fraction of rewiring individuals and the other parameters fixed, there exists a critical number of rounds above which the cooperators vanish, as shown in Fig. 4. Indeed, although the structural updating promotes the cooperators to a certain extent, its role is suppressed by long stretches of strategy dynamics (corresponding to a large number of rounds). In our case, strategy dynamics is synchronous while structural updating is asynchronous; namely, within each repetition of the simulations, strategy updating happens far more often than structural updating. Hence the evolution of strategy is much more frequent than that of structure. Thus, with a large number of rounds, able defectors outperform cooperators through the strategy dynamics, even though the heterogeneity resulting from structural updating is favorable to the evolution of cooperation. This result illustrates that even if the evolution of network topology is less frequent than the evolution of strategy, cooperators still have a chance to beat defectors under appropriate conditions. As is well known, cooperation is promoted in situations where individuals are constrained to interact with few others along the edges of networks with low average connectivity. To understand cooperation in real-world interaction networks, whose average connectivity is normally relatively high, one needs new insight into the underlying mechanism promoting cooperation. Here, the role of the average connectivity in the evolution of cooperation is inspected.
In Fig. 5, it is shown that for increasing average connectivity, the individuals must be able to promptly adjust their social ties for cooperation to thrive, corresponding to an increasing critical fraction of rewiring individuals. Thus, in order to explain cooperation in communities with a high average number of social ties, the entangled co-evolution of network structure and strategy should be taken into account. On static networks, the maximum cooperation level occurs at intermediate average degree. Moreover, when the connections among individuals are dense (large average connectivity), cooperators die out due to mean-field behavior. Conversely, our results suggest that even in a highly connected network, on account of the proposed structural adaptation, cooperators can beat back the defectors and dominate the population. Finally, we report the influence of changing the intensity of selection on the evolution of cooperation in Fig. 6. It is indicated that reducing the intensity of selection demotes the influence of the game dynamics, thereby increasing the survivability of the less fit. Clearly, the weaker the selection, the smaller the critical fraction of rewiring individuals. In fact, for weak selection the cooperators' survival probability increases as the selection intensity decreases, although cooperators are generally less fit. Such increased survivability enhances assortative interactions between cooperators through network structure adaptation. As a result, the critical fraction of rewiring individuals above which cooperators dominate defectors decreases with decreasing intensity of selection. In summary, we have studied the coupled dynamics of strategy evolution and the adaptation of the underlying network structure. We provided a computational model in which individuals are endowed with limited cognitive abilities in the multi-round PDG: limited memories for recording the defection times of opponents. After the multi-round game, randomly chosen individuals are allowed to adjust their social ties based on the game results. The number of rounds and the fraction of rewiring individuals correspond to the associated time scales of strategy dynamics and structural updating, respectively. We found that for a given average connectivity of the population and number of rounds, there is a critical value of the fraction of individuals adapting their social interactions above which cooperators wipe out defectors. In addition, this critical value decreases with decreasing intensity of selection. Moreover, for increasing average connectivity, the individuals must be able to swiftly adjust their social ties for cooperators to thrive. Finally, the emerging social networks at steady states exhibit nontrivial heterogeneity, which is the catalyst for the emergence of cooperation among selfish agents. To a certain extent, our results shed some light on the underlying mechanism promoting cooperation among selfish individuals, and also provide an alternative insight into the properties accruing to networked systems and organizations in the natural world. Delightful discussions with Dr. Wenxu Wang are gratefully acknowledged. This work was supported by NNSFC (60674050 and 60528007), the National 973 Program (2002CB312200), the National 863 Program (2006AA04Z258), and the 11-5 Project (A2120061303).
We consider the coupled dynamics of the adaptation of network structure and the evolution of strategies played by individuals occupying the network vertices. We propose a computational model in which each agent plays a multi-round prisoner's dilemma game with its immediate neighbors; after that, based upon self-interest, some individuals may punish their defective neighbors by dismissing the social tie to the one who defects the most times, meanwhile seeking a new partner at random from the neighbors of the punished agent. It is found that the promotion of cooperation is attributable to the entangled evolution of individual strategy and network structure. Moreover, we show that the emerging social networks exhibit high heterogeneity and a disassortative mixing pattern. For a given average connectivity of the population and number of game rounds, there is a critical value of the fraction of individuals adapting their social interactions, above which cooperators wipe out defectors. Besides, the effects of the average degree, the number of rounds, and the intensity of selection are investigated by extensive numerical simulations. Our results to some extent reflect the underlying mechanism promoting cooperation. Keywords: social networks, network structure adaptation, heterogeneity, prisoner's dilemma, cooperation. PACS: 89.75.Hc, 02.50.Le, 89.75.Fb, 87.23.Ge