Quantum cryptography is now 30 years old: it was first introduced in 1984, when Bennett and Brassard proposed the first protocol of quantum key distribution (QKD), now known as the BB84 protocol. This pioneering work drew considerable attention from the entire cryptography community, as it achieved unconditional security, a much-desired feat that is never achievable in classical cryptography. To be precise, all classical cryptographic protocols, including the widely used RSA protocol, are secure only under some assumptions, whereas quantum cryptographic protocols are unconditionally secure. Because of this feature of QKD, Bennett and Brassard's initial proposal was followed by a large number of alternative QKD protocols. The applicability of the early protocols of quantum cryptography was limited to QKD. However, it was soon realized that quantum states can be employed for other cryptographic tasks, too. For example, quantum states can be used for quantum secret sharing (QSS) of classical secrets, deterministic secure quantum communication (DSQC), quantum secure direct communication (QSDC), quantum dialogue, quantum key agreement (see Ref. and references therein), etc. Reviews on these topics and on the present challenges and future prospects of secure quantum communication can be found in Refs.

The unconditional security of the existing protocols is usually claimed to be obtained using different approaches and different quantum resources, such as single-particle states, entangled states, teleportation, entanglement swapping, rearrangement of the order of particles, etc. Although these protocols differ from each other in the procedure followed and the quantum resources used, the security of all these protocols of secure quantum communication essentially arises from the use of conjugate coding (i.e., from quantum non-commutativity or, equivalently, from the use of two or more mutually unbiased bases (MUBs)), as in all these protocols the existence of an eavesdropper is traced by measuring verification qubits in two or more MUBs. Thus, all these protocols may be viewed as conjugate-coding-based protocols of quantum communication; alternatively, they may be referred to as BB84-type protocols of quantum communication.

The existence of such a large number of conjugate-coding-based protocols of quantum communication leads to a fundamental question: is conjugate coding essential for unconditionally secure quantum communication? The answer is "no". Specifically, it is possible to design protocols of secure quantum communication using orthogonal states alone. Thus, we can design protocols of secure quantum communication that use orthogonal states for the encoding of information, the decoding of information and the eavesdropping check, i.e., that use a single basis for the implementation of the entire protocol, without any use of two or more MUBs or conjugate coding. The first such orthogonal-state-based protocol was reported in 1995 by Goldenberg and Vaidman, and subsequently a few other orthogonal-state-based protocols of QKD were reported. However, until the recent past, activities on orthogonal-state-based protocols of quantum communication were limited to QKD and to theoretical studies alone.
Only recently has a set of exciting experiments on orthogonal-state-based protocols of quantum communication been reported. Further, new orthogonal-state-based protocols have been proposed for quantum cryptographic tasks beyond QKD. These orthogonal-state-based proposals can be broadly classified into two classes: (i) GV-type protocols, which are analogous to the original GV protocol and in which the transmission of qubits that carry secret information through the quantum channel is allowed, but the information is protected from eavesdropping by geographically separating an orthogonal state into two or more quantum pieces that are not simultaneously accessible to Eve; and (ii) N09-type or counterfactual protocols, which use interaction-free measurement and circumvent the transmission of information-carrying qubits through the quantum channel. GV-type protocols have mostly been investigated by the present authors and their collaborators. Specifically, we have shown that it is feasible to construct orthogonal-state-based protocols of QKA, QSDC and DSQC. Practically, we have established that all the secure quantum communication tasks that can be performed using two or more MUBs can also be achieved using a single basis. Similarly, much progress has recently been made in the design of counterfactual (i.e., N09-type) protocols. For example, in 2013, Salih et al. claimed to design a counterfactual protocol of direct quantum communication. The claim was subsequently criticized by Vaidman, and the criticism led to a very interesting debate on the issue. Further, Salih has recently also proposed counterfactual protocols for the transportation of an unknown qubit and for tripartite quantum cryptography; Guo et al. have proposed a protocol of counterfactual entanglement distribution; Guo et al. have proposed a protocol of counterfactual information transfer; Sun and Wen have proposed a modified N09 protocol which is more efficient than the original N09 protocol; and some of the present authors have proposed protocols of counterfactual certificate authentication and semi-counterfactual QKD. These exciting recent developments motivated us to briefly review these achievements, with specific attention to the works of our group.

Here we will briefly review a set of existing orthogonal-state-based protocols and describe a trick that helps us transform BB84-type protocols into Goldenberg-Vaidman (GV) type protocols, which use only orthogonal states for encoding, decoding and error checking, as was done in the original GV protocol of QKD. Subsequently, we will describe two orthogonal-state-based protocols of quantum communication introduced by us and briefly describe how they can be extended. These two orthogonal-state-based protocols are fundamentally different from conjugate-coding-based (BB84-type) protocols, as their security does not depend on noncommutativity. Consequently, they are very important from the foundational perspective. The trick that can transform BB84-type protocols into GV-type protocols requires the rearrangement of the order of particles, or _permutation of particles_ (PoP). As PoP plays a crucial role in our protocol, it is apt to note that this technique was first introduced by Deng and Long in 2003, when they proposed a QKD protocol based on it. Subsequently, a DSQC protocol based on the rearrangement of the order of particles was proposed by Zhu et al.
in 2006. However, it was shown to be insecure under a Trojan-horse attack by Li et al. In the same reference, Li et al. also provided an improved version of Zhu et al.'s protocol that is free from the above-mentioned Trojan-horse attack. Thus, we may consider Li et al.'s protocol as the first unconditionally secure protocol of DSQC based on PoP. Recently, many PoP-based protocols have been proposed (see Ref. and references therein). Specifically, many such PoP-based protocols of quantum communication have been proposed in the recent past. For example, Banerjee and Pathak; Shukla, Banerjee and Pathak; Yuan et al.; and Tsai et al. have recently proposed PoP-based protocols of direct secure quantum communication. In what follows, we will see that PoP provides a useful tool for the generalization of the original GV protocol into a corresponding multipartite version.

The remaining part of the present paper is organized as follows. In Section [sec:a-chronological-history], we briefly review the development of orthogonal-state-based secure quantum communication by providing a chronological history of the protocols of orthogonal-state-based secure quantum communication and their experimental verifications. In Section [sec:role-of-nocloning], we discuss the role of no-cloning and randomness in secure communication and, with some specific examples, show that it is possible to transform all BB84-type protocols of secure quantum communication into corresponding GV-type protocols. Finally, the paper is concluded in Section [sec:conclusions].

1995: All the protocols of quantum cryptography proposed until 1995 were based on nonorthogonal states, and the security of those protocols arose directly or indirectly from noncommutativity; but in 1995 Goldenberg and Vaidman proposed a completely orthogonal-state-based protocol of QKD, in which the security arises due to duality (for a single particle). This was the birth of orthogonal-state-based quantum cryptography. Interestingly, the claim that the GV protocol is fundamentally different from BB84-type protocols was questioned by Peres. However, Goldenberg and Vaidman successfully defended their work and established that this orthogonal-state-based protocol is indeed fundamentally different from conventional BB84-type protocols. In the next section we briefly describe this protocol and show that it uses a slightly modified Mach-Zehnder interferometer (see Fig. [fig:1]a).

1997: Koashi and Imoto generalized the GV protocol and proposed a protocol similar to it, but one that does not require random sending times.

1998: Mor showed that it is not always possible to clone orthogonal states. Specifically, an orthogonal state cannot be cloned if the full state cannot be accessed at the same time. Using this idea, Mor provided a clear and innovative explanation of the origin of the security of the GV protocol.

1999: Four years after the introduction of the first orthogonal-state-based QKD protocol (i.e., the GV protocol), Guo and Shi proposed the second orthogonal-state-based protocol of QKD, using the concept of interaction-free measurement or quantum interrogation, an idea introduced earlier by Elitzur and Vaidman in the context of a very interesting hypothetical situation in which some active bombs can be separated from inactive ones without directly observing the active bombs (i.e.,
without sending any photon to an isolated active bomb, which blasts when it receives a photon, whereas an inactive bomb shows no response on receiving a photon). Actually, the bomb is placed in the lower arm of a Mach-Zehnder interferometer and a single photon is sent through the input port (see Fig. [fig:1]b). With 50% probability the single photon travels through the upper arm of the interferometer. Even in these 50% of cases, if there is an active bomb (thus a detector) in the lower arm, the interference is destroyed, as we obtain which-path information; consequently, in half of these incidents (i.e., 25% of the total) the detector at the output port of the interferometer that never clicks in the absence of a detector in the lower arm will click. As a consequence, we are able to detect 25% of the active bombs without blasting them. Thus, in brief, the presence of the obstacle (the active bomb) disrupts the destructive interference that would otherwise occur and thereby reveals its presence. Guo and Shi modified this idea: in their protocol Alice (Bob) randomly inserts an absorber in the upper (lower) arm of the interferometer (see Fig. [fig:1]c). From the clicks of the upper detector, which does not click in the absence of an absorber, Bob knows that an absorber was present in one of the arms. In these cases he discloses that his upper detector has clicked. As Alice (Bob) knows whether she (he) has inserted the absorber, using Bob's observation she (he) can conclude whether Bob (Alice) has inserted the absorber or not, and subsequently use this to form a key using a pre-decided rule: the presence of Alice's (Bob's) absorber implies a particular bit value. In any case, Guo and Shi's effort was the first step towards orthogonal-state-based counterfactual QKD, and in recent years interaction-free measurement has frequently been used as a tool for designing counterfactual quantum cryptographic protocols.

2009: A protocol of counterfactual (orthogonal-state-based) QKD was proposed by Noh in 2009, using Elitzur and Vaidman's idea of interaction-free measurement. This protocol of QKD is now known as the N09 or counterfactual protocol, and it led to many subsequent counterfactual protocols of secure quantum communication. The beauty of this protocol and of other counterfactual protocols is that a secure key is distributed (or another cryptographic task is achieved) without transmitting a particle that carries secret information through the quantum channel. Interestingly, in the GV, Koashi-Imoto and Guo-Shi protocols a Mach-Zehnder interferometer was used, but in this protocol a Michelson interferometer is used (see Fig. [fig:1]d).

2010: Sun and Wen improved the original N09 protocol by providing an analogous counterfactual protocol with higher efficiency. In the same year, Avella et al. experimentally implemented the GV protocol. To the best of our knowledge, this was the first ever experimental demonstration of an orthogonal-state-based protocol of QKD.

2011: An experimental realization of the N09 protocol was reported shortly after the realization of the GV protocol. Precisely, in 2011, Ren et al. reported an experimental realization of the N09 protocol.

2012: Soon after Ren et al.'s work, two more groups reported experimental realizations of the N09 protocol. Specifically, Brida et al. and Liu et al. independently implemented this protocol of counterfactual quantum communication.
On the theoretical front, some of the present authors generalized the single-particle GV protocol to the multipartite case, showed that GV-type protocols can be used for secure direct communication, and established that, while in GV the encoding states are perfectly indistinguishable, in the bipartite case they are partially distinguishable, leading to a qualitatively different kind of information-versus-disturbance trade-off and to different options for Eve in the two cases. Further, generalizing the idea, we also established that GV-type protocols of DSQC, QSDC and QKD can be realized using arbitrary quantum states. The essential ideas that lead to these multipartite GV-type protocols will be explained briefly in the next section.

2013: While the above counterfactual protocols are probabilistic, H. Salih et al. proposed a protocol for counterfactual direct quantum communication. This work of Salih et al. led to an interesting debate, and whether the protocol is counterfactual for only one of the two bit values has remained controversial. The protocol efficiently uses the chained quantum Zeno effect and an arrangement of a sequence of Mach-Zehnder interferometers, where each Mach-Zehnder interferometer essentially uses the Elitzur-Vaidman setup for interaction-free measurement. In the same year, Zhang et al. proposed a counterfactual protocol for private database queries, which also uses a similar sequence of Mach-Zehnder interferometers. Further, the applicability of GV-type orthogonal-state-based protocols of secure quantum communication was extended by some of the present authors to quantum key agreement (QKA), where Alice and Bob contribute equally to the final shared key and neither of them can control the final key.

2014: 2014 was the most active year in the history of orthogonal-state-based secure quantum communication, and many interesting results appeared. Here we list a few of them: (i) Guo et al. proposed a protocol of counterfactual quantum-information transfer; (ii) Guo et al. proposed a counterfactual protocol of entanglement distribution; (iii) Salih proposed a multiparty (tripartite) scheme of counterfactual quantum communication; and (iv) some of the present authors proposed a scheme for counterfactual quantum certificate authorization.

In the above chronological review we have seen that the majority of the interesting developments in orthogonal-state-based secure quantum communication happened in the last few years. The development is expected to continue, and it is expected to play an important role in the practical realization of secure quantum communication, as well as in our understanding of quantum mechanics in general and of the origin of security in quantum mechanics and post-quantum theories in particular. Keeping these facts in mind, in the next section we briefly review the role of the no-cloning theorem in the realization of orthogonal-state-based protocols and also briefly describe a few orthogonal-state-based GV-type protocols of secure quantum communication.

(a) A schematic diagram of a modified Mach-Zehnder interferometer that can be used to implement the GV protocol [4] if the symmetric beam splitters used are 50:50; otherwise (i.e., if the symmetric beam splitters are not 50:50) the same device implements the Koashi-Imoto protocol [29].
Here SR denotes a delay. (b) A schematic diagram of a Mach-Zehnder interferometer that can be used to implement Elitzur and Vaidman's idea of interaction-free measurement or quantum interrogation [54]. Here the absorber is an active bomb that blasts when it receives a photon. (c) A schematic diagram of the Mach-Zehnder interferometer that can be used to realize the orthogonal-state-based protocol of Guo-Shi [28], which uses interaction-free measurement. Here the obstacles are randomly inserted by Alice and Bob, respectively. (d) A schematic diagram of the experimental setup used in Refs. [32] to implement the N09 protocol [27] of counterfactual QKD. In all the diagrams BS, M, C, PBS, HWP, D and OD represent beam splitter, mirror, circulator, polarizing beam splitter, half-wave plate, detector and optical delay, respectively.

It is well known that unknown quantum states cannot be cloned, and several proofs of the no-cloning theorem have been provided using unitary evolution, no-signaling, and linearity. A closer look at these proofs reveals that there are fine differences among them, and those differences lead to a fundamental question: what nonclassical resources are required for the existence of a no-cloning theorem in a theory? Recently, we have shown that a no-cloning theorem should hold in any theory possessing uncertainty and disturbance on measurement. Thus, we can construct post-quantum theories with no-cloning. Without going into the details of those theories, let us follow a simpler argument that can give us a general perception of the no-cloning theorem. To begin with, let us address another simple question: what distinguishes a completely stochastic classical theory from quantum mechanics? Clearly, in a completely stochastic classical theory the outcomes of measurement are always probabilistic, whereas in quantum mechanics we can have a deterministic outcome if the state to be measured is part of the basis set used for the measurement. For example, if we measure $|0\rangle$ in the $\{|0\rangle,|1\rangle\}$ basis we will always get $|0\rangle$ (thus the outcome is deterministic, as the state is part of the basis), but if we measure $|0\rangle$ in the $\{|+\rangle,|-\rangle\}$ basis we will have a probabilistic outcome. We may say that the $\{|0\rangle,|1\rangle\}$ basis is a _special basis_ for $|0\rangle$, as it leads to a deterministic outcome. We may now generalize the idea and say that, for the measurement of a state, a particular basis set will be referred to as a special basis if the state can be perfectly measured in that basis. It is easy to recognize that the existence of a special basis implies perfect measurement and thus complete information about the state being measured. This information implies that the state is known and can thus be cloned. In contrast, the absence of a special basis implies no-cloning. As the elements of any basis set are orthogonal to each other, two nonorthogonal states cannot be part of the same basis set and thus cannot be cloned. However, this viewpoint does not imply that orthogonal states can always be cloned. Specifically, by geographically separating the components of a superposition state we can make it non-clonable.
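In symbols, the special-basis example above reads (using the standard single-qubit notation assumed here, since the source does not reproduce the symbols):

$$P\big(0 \,\big|\, |0\rangle,\ \{|0\rangle,|1\rangle\}\big) = |\langle 0|0\rangle|^2 = 1, \qquad P\big(\pm \,\big|\, |0\rangle,\ \{|+\rangle,|-\rangle\}\big) = |\langle \pm|0\rangle|^2 = \tfrac{1}{2}.$$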
In a completely different language, this viewpoint was elaborated by Mor in 1998. Of course, Mor's work appeared after the GV protocol, but it helped us to understand and generalize the GV protocol. Let us elaborate this point by briefly describing the GV protocol. Let us consider two orthogonal states $|\Psi_0\rangle=\frac{1}{\sqrt{2}}(|a\rangle+|b\rangle)$ and $|\Psi_1\rangle=\frac{1}{\sqrt{2}}(|a\rangle-|b\rangle)$, where $|a\rangle$ and $|b\rangle$ are two localized wave packets. Further, $|\Psi_0\rangle$ and $|\Psi_1\rangle$ represent bit values 0 and 1, respectively. Alice sends wave packets $|a\rangle$ and $|b\rangle$ to Bob by using the two different arms of a Mach-Zehnder interferometer, as shown in Fig. [fig:1]a. Alice sends Bob either $|\Psi_0\rangle$ or $|\Psi_1\rangle$, but $|a\rangle$ is always sent first and $|b\rangle$ is delayed by a time $\tau$. Here the traveling time $\theta$ of the wave packets from Alice to Bob is shorter than $\tau$. Thus, $|b\rangle$ enters the communication channel only after $|a\rangle$ is received by Bob. Consequently, the two wave packets $|a\rangle$ and $|b\rangle$ (i.e., the entire superposition) are never found simultaneously in the transmission channel. This geographic separation between $|a\rangle$ and $|b\rangle$ restricts Eve from measuring the state communicated by Alice in the $\{|\Psi_0\rangle,|\Psi_1\rangle\}$ basis. In fact, it compels Eve to measure the state communicated by Alice either in the $\{|a\rangle,|b\rangle\}$ basis or with some suitably constructed positive-operator valued measure (POVM). Thus, the geographic separation ensures the unavailability of the special basis, which implies no-cloning and the security of the GV protocol. This is how one can look at the security of the GV protocol using the concept of the special basis, i.e., the idea of Mor. Although the special basis is not available to Eve, it is available to Bob, as Bob delays $|a\rangle$ by $\tau$ and recreates the superposition state sent by Alice after he receives $|b\rangle$ (cf. Fig. [fig:1]a). In order to restrict Eve from performing a similar operation (i.e., from delaying $|a\rangle$ till the arrival of $|b\rangle$), Alice and Bob need to perform the following tests:

1. Alice and Bob compare the receiving time with the sending time for each state, to ensure that Eve cannot delay $|a\rangle$ and wait for $|b\rangle$ to reach her so that she can measure in the $\{|\Psi_0\rangle,|\Psi_1\rangle\}$ basis. Specifically, Alice and Bob check that $t_{\mathrm{recv}}=t_{\mathrm{send}}+\theta$. This test ensures that Eve cannot delay a wave packet, but it does not stop her from replacing a wave packet by a fake wave packet. The following test detects such an attack.

2. Alice and Bob look for changes in the data by comparing a portion of the transmitted bits with the same portion of the received bits.

It is important to note that the sending times in the GV protocol must be random. Otherwise, Eve can prepare a fake superposition of her own wave packets, send the first fake wave packet to Bob at the known arrival time, and keep the original wave packets with her until the arrival of the original $|b\rangle$. When $|b\rangle$ arrives, she measures the original state in the $\{|\Psi_0\rangle,|\Psi_1\rangle\}$ basis. If the measurement yields $|\Psi_0\rangle$, then she sends the second fake wave packet to Bob unchanged; otherwise, she corrects the phase of the fake wave packet and then sends it to Bob. If we assume that the time required for Eve's measurement is negligible, then by following this procedure Eve can obtain the key without being detected. Interestingly, this requirement of random sending times can be circumvented simply by replacing the 50:50 beam splitters present in the GV setup (cf. Fig. [fig:1]a) by identical beam splitters having $R\neq T$, where $R$ and $T$ are the reflectivity and transmissivity, respectively. This small change in the GV setup (Fig. [fig:1]a) turns it into the Koashi-Imoto protocol. In the above, we have already seen that it is possible to separate the two pieces of an orthogonal state, and that this leads to the unavailability of the special basis and thus to no-cloning and to orthogonal-state-based QKD.
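A minimal numerical illustration of these two facts, written in the $\{|a\rangle,|b\rangle\}$ basis under the standard GV notation assumed above (our own sketch, not code from the reviewed works):

```python
import numpy as np

rng = np.random.default_rng(1)

# GV states in the {|a>, |b>} wave-packet basis (standard notation assumed)
psi = {0: np.array([1.0, 1.0]) / np.sqrt(2),   # |Psi_0> = (|a> + |b>)/sqrt(2)
       1: np.array([1.0, -1.0]) / np.sqrt(2)}  # |Psi_1> = (|a> - |b>)/sqrt(2)

# Denied simultaneous access to |a> and |b>, Eve can at best measure in the
# {|a>, |b>} basis; the outcome statistics carry no information about the bit:
for bit, state in psi.items():
    print(f"bit {bit}: P(a) = {state[0]**2:.2f}, P(b) = {state[1]**2:.2f}")

# Her measurement nevertheless destroys the superposition, so Bob's check in
# the {|Psi_0>, |Psi_1>} ("special") basis flags an error half of the time:
def error_after_eve(bit):
    state = psi[bit]
    got_a = rng.random() < state[0] ** 2          # Eve's {|a>,|b>} measurement
    collapsed = np.array([1.0, 0.0]) if got_a else np.array([0.0, 1.0])
    p_same_bit = abs(collapsed @ psi[bit]) ** 2   # Bob re-interferes, measures
    return rng.random() >= p_same_bit             # True -> detectable error

trials = 10_000
rate = sum(error_after_eve(int(rng.integers(2))) for _ in range(trials)) / trials
print(f"error rate induced by Eve: {rate:.3f}")   # ~0.5
```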
In what follows, we will show that the validity of GV-type protocols is not limited to the single-particle case and to QKD; the approach can easily be generalized to the multipartite case and used to design protocols of DSQC and QSDC. Before we describe an orthogonal-state-based protocol of secure direct quantum communication, we wish to note that GV in its original form is a protocol of QKD only, and it cannot be directly used for secure direct quantum communication. Keeping this in mind, let us first describe a conjugate-coding-based protocol of secure direct quantum communication. This protocol is popularly known as the ping-pong (PP) protocol and is described in the following section.

The ping-pong (PP) protocol, introduced by Boström and Felbinger in 2002, is a protocol of QSDC and may be described briefly as follows:

PP1: Bob prepares copies of a Bell state (say, $|\phi^+\rangle=\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)$) and transmits all the first qubits of the Bell pairs to Alice, keeping all the second particles with himself.

PP2: Alice randomly selects a set of qubits from the string received by her as a verification string and applies the BB84 subroutine on it to detect eavesdropping (in the BB84 subroutine, each verification qubit is measured randomly in the computational or diagonal basis, and the measurement outcome, the position of the qubit in the string and the basis used are announced; the other party measures the corresponding qubit in the same basis, if needed, and compares the results to detect eavesdropping). If sufficiently few errors are found, they proceed to the next step; else, they return to the previous step.

PP3: Alice randomly selects half of the unmeasured qubits as a verification string for the return path and encodes her message in the remaining qubits using the following rule: Alice does nothing to encode 0 on a message qubit, and applies an X gate to encode 1. After completing the encoding operation, she sends all the qubits in her possession to Bob.

PP4: Alice discloses the coordinates of the verification qubits after receiving authenticated acknowledgment of the receipt of all the qubits from Bob. Bob applies the BB84 subroutine on the verification qubits and computes the error rate. If sufficiently few errors are found, they proceed to the next step; else, they return to PP1.

PP5: Bob performs Bell-state measurements on the remaining Bell pairs and decodes the message.
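Assuming the $|\phi^+\rangle$ initialization and X-gate encoding stated above, the following minimal numpy sketch (our illustration, not the authors' code) verifies that Bob's Bell measurement in PP5 decodes the message deterministically, and that extending the encoding to all four Pauli operations carries two bits per travel qubit (the dense-coding improvement discussed next):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

# Bell basis, with the travel qubit as the first tensor factor
bell = {"phi+": np.array([1, 0, 0, 1]) / np.sqrt(2),
        "phi-": np.array([1, 0, 0, -1]) / np.sqrt(2),
        "psi+": np.array([0, 1, 1, 0]) / np.sqrt(2),
        "psi-": np.array([0, 1, -1, 0]) / np.sqrt(2)}

def decode(op):
    """Alice applies `op` to the travel qubit of |phi+>; Bob's Bell
    measurement then identifies the resulting state with certainty."""
    state = np.kron(op, I) @ bell["phi+"]
    probs = {k: abs(np.vdot(v, state)) ** 2 for k, v in bell.items()}
    return max(probs, key=probs.get)

# PP encoding: identity -> bit 0, X -> bit 1 (two orthogonal outcomes)
print(decode(I), decode(X))              # phi+ psi+

# Dense-coding extension: the four Pauli operations carry two bits
for bits, op in {"00": I, "01": X, "10": Z, "11": X @ Z}.items():
    print(bits, "->", decode(op))
```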
If in *PP3* Alice has encoded 0, then in *PP5* Bob will obtain the same Bell state that he had sent; otherwise he will obtain an orthogonal Bell state. Since the two states are orthogonal, a Bell measurement deterministically distinguishes them and consequently decodes the message encoded by Alice. This two-way protocol is referred to as the ping-pong protocol because the travel qubit moves from Bob to Alice and comes back, just like a table tennis (ping-pong) ball moving back and forth between the two sides of the table. It is easy to observe that in the original PP protocol the full power of dense coding is not used. Alice could have used the four Pauli operations $I$, $X$, $iY$ and $Z$ to encode 00, 01, 10 and 11, respectively, and that would have increased the efficiency of the ping-pong protocol, because the same amount of quantum communication would then have carried two bits of classical information. This fact was first formally included in a modified PP protocol proposed by Cai and Li in 2004. In fact, in principle any entangled state can be used to design a ping-pong-type protocol for QSDC.

Here it is interesting to observe that in the above version of the PP protocol (and in the CL protocol) the encoding and decoding of information are done using orthogonal states alone. However, the eavesdropping check is done with the help of the BB84 subroutine. Thus, to convert the PP protocol into an orthogonal-state-based protocol, we need to replace the BB84 subroutine by a GV-type subroutine for the eavesdropping check. While describing the role of the special basis in the origin of the security of the GV protocol, we have already mentioned that if we can visualize an orthogonal state as a superposition of two quantum pieces that are geographically separable, then the orthogonal state can be transmitted in such a way that Eve can neither clone it nor measure it without disturbing it. In addition, we may note that an entangled state is a superposition in a tensor product space. Now consider a simple situation: Alice prepares a product of two Bell states, say $|\phi^+\rangle\otimes|\phi^+\rangle$, randomly changes the sequence of the particles, and sends them to Bob over a channel. Eve knows that two Bell states are sent, and she has to do a Bell measurement to know which Bell state is sent, but she does not know which particle is entangled with which. Consequently, any wrong choice of partner particles leads to entanglement swapping (say, if Eve performs a Bell measurement on wrongly paired partner particles, that measurement swaps the entanglement). Now consider that at a later time, when Bob informs Alice that he has received the 4 qubits, Alice discloses the actual sequence of the transmitted qubits; Bob uses that information to rearrange the qubits in his hand into the original sequence and performs Bell measurements on them. Clearly, attempts at eavesdropping leave detectable traces through the entanglement swapping, and whenever Bob's Bell measurement yields any result other than the originally prepared Bell states, Alice and Bob know that an Eve exists. Clearly, this new eavesdropping-checking subroutine is of GV type, as it uses orthogonal states only and geographically separates the two quantum pieces of an orthogonal state. Further, the PoP technique applied here actually ensures that the special basis (the Bell basis in this case) is not available to Eve while the particles are in the channel; only after Alice's disclosure of the actual sequence of the qubits does Bob obtain access to the special basis. Once we understand the essence of this strategy, we may generalize it to develop a GV-type subroutine as follows:
1. To communicate a sequence of message qubits, Alice creates an additional sequence of decoy qubits prepared as Bell pairs (say, copies of $|\phi^+\rangle$).

2. She concatenates the decoy sequence with the message sequence to obtain a new, longer sequence of qubits and applies a permutation operator on it to yield the randomized sequence that is actually transmitted.

3. After receiving authenticated acknowledgment from Bob that he has received all the qubits sent to him, Alice discloses the actual sequence of the decoy qubits only (she does not disclose the sequence of the message qubits), so that Bob can perform Bell measurements on the partner particles (the original Bell pairs) and reveal any effort at eavesdropping through the disturbance introduced by Eve's measurements. As the message qubits are also randomized, and as Alice does not disclose their actual sequence until she knows that eavesdropping has not happened, the message remains secure.

The above subroutine for eavesdropping checking, which we refer to as the GV subroutine, can be used to convert any BB84-type protocol of secure quantum communication that uses orthogonal states for encoding and decoding. For example, the PP, Cai-Li (CL) and DLL protocols can easily be converted into GV-type protocols. This idea is extensively discussed in our recent publications. For the completeness of the present paper, we elaborate this point here by explicitly describing a GV-type version of the PP protocol, which we refer to as PP'. More detail about this protocol can be found in Refs. In what follows, we briefly describe the PP' protocol introduced by Yadav, Srikanth and Pathak. We can convert the PP protocol into the PP' protocol by modifying steps *PP1*, *PP2* and *PP4* of the PP protocol described above as follows:

PP1': Bob prepares copies of the Bell state. He keeps half of the second qubits of the Bell pairs with himself. On the remaining qubits he applies a random permutation operation and transmits them to Alice. Some of the transmitted qubits form complete Bell pairs, while the remaining ones are the partner particles of the particles that remained with Bob.

PP2': After receiving Alice's authenticated acknowledgment, Bob announces the coordinates of the transmitted Bell pairs. Alice measures them in the Bell basis to determine whether they are each in the initially prepared Bell state. If the error detected by Alice is within the tolerable limit, they continue to the next step. Otherwise, they discard the protocol and restart from *PP1'*.

PP4': Alice discloses the coordinates of the verification qubits after receiving Bob's authenticated acknowledgment of the receipt of all the qubits. Bob combines the qubits of the verification string with their partner particles already in his possession and measures them in the Bell basis to compute the (return-trip) error rate.

The other steps of PP remain the same. Briefly, the security of PP' and CL' arises as follows. The reordering has the same effect as the time control and time randomization in GV. Eve is unable to apply a two-qubit operation on legitimate partner particles to determine the encoding, in spite of their orthogonality. Any correlation she generates by interacting with individual particles diminishes the observed correlations between Alice and Bob, because of the restrictions on the shareability of quantum correlations (the sketch below illustrates this disturbance quantitatively). It is not our purpose to discuss the security of the protocol in detail here; interested readers may find detailed discussions of the security of PP' in Refs. The PP' protocol of Yadav, Srikanth and Pathak was the first ever orthogonal-state-based protocol of QSDC. The PP' protocol described above is a two-way protocol in the sense that the qubits travel in both directions (i.e., from Alice to Bob and from Bob to Alice).
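To make the disturbance argument concrete, the following minimal numpy sketch (our own illustration, assuming two $|\phi^+\rangle$ decoy pairs) lets Eve Bell-measure a wrongly paired set of partner particles and computes the probability that Bob's reordered Bell measurements still pass. Entanglement swapping makes the check pass with probability 1/4 only, so each attacked decoy pairing reveals Eve with probability 3/4:

```python
import numpy as np

# Bell states as 4-vectors in the two-qubit computational basis
BELL = {"phi+": np.array([1, 0, 0, 1]) / np.sqrt(2),
        "phi-": np.array([1, 0, 0, -1]) / np.sqrt(2),
        "psi+": np.array([0, 1, 1, 0]) / np.sqrt(2),
        "psi-": np.array([0, 1, -1, 0]) / np.sqrt(2)}

def project_pair(state, i, j, outcome):
    """Project a 4-qubit state (shape (2,2,2,2)) onto Bell state `outcome`
    on qubits (i, j); return (probability, collapsed full state)."""
    b = BELL[outcome].reshape(2, 2)
    amp = np.tensordot(b.conj(), state, axes=([0, 1], [i, j]))
    p = float(np.sum(np.abs(amp) ** 2))
    if p == 0.0:
        return 0.0, None
    rest = [k for k in range(4) if k not in (i, j)]
    full = np.einsum("ab,cd->abcd", b, amp / np.sqrt(p))
    return p, np.moveaxis(full, [0, 1, 2, 3], [i, j] + rest)

# Two decoy Bell pairs, on qubit pairs (0,1) and (2,3), each in |phi+>
state = np.einsum("ab,cd->abcd",
                  BELL["phi+"].reshape(2, 2), BELL["phi+"].reshape(2, 2))

# Eve, ignorant of the permuted order, Bell-measures the wrong pairing (1,2);
# Bob later checks the true pairings (0,1) and (2,3) against |phi+>
p_pass = 0.0
for outcome in BELL:
    p_eve, after_eve = project_pair(state, 1, 2, outcome)
    if after_eve is None:
        continue
    p01, after01 = project_pair(after_eve, 0, 1, "phi+")
    if after01 is None:
        continue
    p23, _ = project_pair(after01, 2, 3, "phi+")
    p_pass += p_eve * p01 * p23

print(f"P(Bob's check passes despite Eve) = {p_pass:.4f}")  # 0.2500
```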
However, it is possible to modify them into one-way protocols. A very interesting one-way protocol, known as the DLL protocol, was introduced by Deng, Long and Liu in 2003. This protocol can be obtained by modifying the CL protocol. In what follows, we will describe the DLL protocol and subsequently modify it into a GV-type protocol, which we refer to as DLL'. A relatively detailed description of this protocol can be found in Ref. Before we describe the DLL protocol, we may note that after PP1, Alice and Bob share entanglement. To share entanglement, it is not required that the entangled state be created by Bob as in the PP protocol; Alice can equally well create an entangled state and send a qubit to Bob. Let us modify the first step of the PP protocol accordingly and see what happens.

DLL1: Alice prepares copies of a Bell state and transmits all the second qubits of the Bell pairs to Bob, keeping the other halves with herself.

DLL2: Bob randomly chooses a set of qubits from the string received by him to form a verification string, on which the BB84 subroutine is applied to detect eavesdropping. If sufficiently few errors are found, they proceed to the next step; else, they return to *DLL1*.

DLL3: Alice randomly chooses half of the qubits in her possession to form the verification string for the next round of communication and encodes her message in the remaining qubits. To encode a 2-bit message, Alice applies one of the four Pauli operations on her qubit; specifically, to encode 00, 01, 10 and 11 she applies $I$, $X$, $iY$ and $Z$, respectively. After the encoding operation, Alice sends all the qubits in her possession to Bob.

DLL4: Alice discloses the coordinates of the verification qubits after receiving authenticated acknowledgment of the receipt of all the qubits from Bob. Bob applies the BB84 subroutine to the verification string and computes the error rate.

DLL5: If the error rate is tolerably low, then Bob decodes the encoded states via Bell-state measurements on the remaining Bell pairs.

The DLL protocol described in this way helps us to illustrate the symmetry among the PP, CL and DLL protocols. It is a one-way, two-step QSDC protocol. The DLL protocol looks similar to the PP protocol with dense coding (i.e., the CL protocol). However, there is a fundamental difference between a two-way protocol and a two-step one-way protocol that uses the same resources and encoding operations. The difference lies in the fact that in a two-way protocol the home qubit always remains at the sender's port, whereas in a one-way two-step protocol both qubits travel through the channel. At this specific point we observe a symmetry between the DLL protocol and the GV protocol: here the superposition is broken into two pieces in such a way that the entire superposed (entangled) state is never available in the transmission channel, yet only the entire superposition (i.e., the superposed or entangled state) contains meaningful information.
Visualization of this intrinsic symmetry helps us to generalize the DLL protocol and obtain an orthogonal-state-based version of it, in analogy with the GV protocol. Based on reasoning analogous to that used for turning PP into PP', we may propose the following GV-like version of DLL, which may be called DLL' in accordance with the recent work of Yadav, Srikanth and Pathak. As before, we retain the steps of DLL, replacing only steps *DLL1*, *DLL2* and *DLL4*, as follows:

DLL1': Alice prepares copies of the Bell state. She keeps half of the first qubits of the Bell pairs with herself. On the remaining qubits she applies a random permutation operation and transmits them to Bob; some of the transmitted qubits form complete Bell pairs, while the remaining ones are the entangled partners of the particles remaining with Alice.

DLL2': After receiving Bob's authenticated acknowledgment, Alice announces the coordinates of the transmitted Bell pairs. Bob measures them in the Bell basis to determine whether they are each in the initially prepared Bell state. If the error detected by Bob is within a tolerable limit, they continue to the next step. Otherwise, they discard the protocol and restart from *DLL1'*.

DLL4': Same as *PP4'*, except that the return trip is replaced by Alice's second onward communication.

So the two-way protocols of QSDC have now been converted into one-way protocols. But we still need two steps. This motivates us to ask: do we always need at least two steps for secure direct quantum communication? Apparently it looks so, because if we send both qubits of an entangled pair together, then Eve may perform a Bell measurement and find out the message. Even if Eve is detected afterwards, the detection would be of no use, because she has already obtained the message. However, using the rearrangement of the particle order (PoP), we can restrict Eve from measuring in the Bell (special) basis and circumvent this problem. We have already used PoP in implementing PP', CL' and DLL'; using PoP, a one-step one-way protocol of DSQC has already been provided by us in Ref. However, due to space restrictions, we do not elaborate the one-step one-way orthogonal-state-based protocol here. We end this section by drawing the reader's attention to the fact that in all the existing protocols the information splitting is done in such a way that Eve does not get access to the special basis. Thus, the unavailability of the special basis leads to no-cloning and hence to secure quantum communication; in the orthogonal-state-based protocols described above, we have primarily ensured the unavailability of the special basis by geographically separating a quantum state into two pieces and avoiding Eve's simultaneous access to both pieces.

In the present work we have briefly reviewed the recent developments on orthogonal-state-based protocols of secure quantum communication. We have classified the recently proposed orthogonal-state-based protocols into two sub-classes: GV-type and counterfactual. GV-type protocols have been discussed in relatively more detail, and it has been explicitly shown that, by using a GV-type subroutine in which Bell states are used as decoy qubits, we can convert any conjugate-coding-based protocol with orthogonal-state-based encoding and decoding into a GV-type, completely orthogonal-state-based protocol. Thus, in principle, every task that can be done using conjugate coding can also be done using orthogonal states alone.
As examples, we have explicitly shown here how the PP and DLL protocols can be converted into corresponding GV-type protocols. Further, since the earlier proposals of orthogonal-state-based protocols have recently been implemented experimentally, we may hope that the ideas presented in this work and in our more detailed related works will be implemented soon, and that this type of protocol will draw much more attention from the cryptography community because of its fundamentally different nature.

*Acknowledgment:* A.P. thanks the Department of Science and Technology (DST), India, for support provided through DST project no. SR/S2/LOP-0012/2010. He also thanks K. Thapliyal for carefully reading the manuscript and for helping in the preparation of the figures. A.P. and R.S. thank N. Alam, P. Yadav, A. Shenoy and S. Arvinda for their contributions to the research works of the group that are reviewed in the present paper. The authors dedicate this work to Prof. Jozef Gruska on his 80th birthday.

Bennett, C. H., Brassard, G.: Quantum cryptography: public key distribution and coin tossing. Proceedings of the IEEE International Conference on Computers, Systems, and Signal Processing, Bangalore, 175-179 (1984)

Yuan, H., Song, J., Zhou, J., Zhang, G., Wei, X.-F.: High-capacity deterministic secure four-qubit W state protocol for quantum communication based on order rearrangement of particle pairs. 50, 2403-2409 (2011)

Traina, P., Gramegna, M., Avella, A., Cavanna, A., Carpentras, D., Degiovanni, I. P., Brida, G., Genovese, M.: Review on recent groundbreaking experiments on quantum communication with orthogonal states. Quantum Matter 2, 153-166 (2013)

Liu, Y., Ju, L., Liang, X.-L., Tang, S.-B., Shen Tu, G.-L., Zhou, L., Peng, C.-Z., Chen, K., Chen, T.-Y., Chen, Z.-B., Pan, J.-W.: Experimental demonstration of counterfactual quantum communication. 109, 030501 (2012)

Shukla, C., Pathak, A., Srikanth, R.: Beyond the Goldenberg-Vaidman protocol: secure and efficient quantum communication using arbitrary, orthogonal, multi-particle quantum states. 10, 1241009 (2012)
In the majority of protocols of secure quantum communication (such as BB84, B92, etc.), the unconditional security of the protocol is obtained by means of conjugate coding (two or more mutually unbiased bases). Initially, all conjugate-coding-based protocols of secure quantum communication were restricted to quantum key distribution (QKD), but later they were extended to other cryptographic tasks (such as secure direct quantum communication and quantum key agreement). In contrast to the conjugate-coding-based protocols, a few completely orthogonal-state-based protocols of unconditionally secure QKD (such as Goldenberg-Vaidman (GV) and N09) were also proposed. However, until the recent past, orthogonal-state-based protocols were only a theoretical concept and were limited to QKD. Only recently have orthogonal-state-based protocols of QKD been experimentally realized and extended to cryptographic tasks beyond QKD. This paper aims to briefly review the orthogonal-state-based protocols of secure quantum communication that have recently been introduced by our group and by other researchers.
Volcanology has greatly benefited from continued advances in sensor networks that are deployed on volcanoes for the continuous monitoring of various activities and events over extended periods of time. Currently, the scientific community is expanding and enhancing such sensor networks to improve their spatial and temporal resolution, thus creating new opportunities for conducting large-scale studies and detecting new events and behaviors. However, the increasing amount of data recorded by geodetic instruments poses formidable challenges in terms of storage, data access, and processing. This trend makes the manual detection and analysis of new discoveries increasingly difficult. Large data sets also drive the need for cloud data storage and high-performance computing, as local machines gradually become less capable of handling these computational workloads. Therefore, more sophisticated computational techniques and tools are required to address this challenge.

To expand the current toolsets and capabilities in volcanology, this article presents a computer-aided discovery approach to volcanic time series analysis and event detection that helps researchers analyze the ever-increasing volume and number of volcanic data sets. A key novel contribution of this approach is the incorporation of models of volcano physics to generate relevant and meaningful results. We demonstrate its applicability by examining geodetic data from the continuous Global Positioning System (GPS) network operated by the Plate Boundary Observatory (PBO; http://pbo.unavco.org), and through the detection and analysis of transient events in Alaska. Our computer-aided discovery system implements a processing pipeline with configurable, user-definable stages. We can then select from a number of algorithmic choices and allowable ranges for the numerical parameters that define different approaches to volcanic data processing. Using these specifications, the system employs systematic techniques to generate variants of this pipeline to identify transient volcanic events. Variations in the processing pipeline (the choice of filters, the range of parameters, and so forth) can highlight or suppress features of a transient event and alter the certainty of its detection. As the optimum pipeline variant is usually not known a priori and is typically selected manually, a tool that automates this search helps make the discovery process more efficient and scalable.
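As an illustration of how such variant generation can be automated, the following minimal Python sketch enumerates concrete pipeline configurations from a synoptic model. All stage names and parameter grids here are hypothetical placeholders, not the system's actual configuration:

```python
from itertools import product

# A hypothetical synoptic model: each stage lists alternative implementations
# and, for each alternative, a small grid of admissible parameter values.
SYNOPTIC_MODEL = {
    "detrend": [("linear", {}), ("seasonal+linear", {})],
    "denoise": [("kalman", {"tau": [60, 120], "sigma2": [2, 4]}),
                ("median", {"window": [31, 61, 91]})],
    "detect":  [("pca_projection", {"n_components": [1, 2]})],
}

def expand(choice):
    """Expand one (name, parameter-grid) alternative into concrete configs."""
    name, grid = choice
    keys = sorted(grid)
    for values in product(*(grid[k] for k in keys)):
        yield (name, dict(zip(keys, values)))

def pipeline_variants(model):
    """Enumerate every concrete pipeline the synoptic model allows."""
    per_stage = [[cfg for alt in alternatives for cfg in expand(alt)]
                 for alternatives in model.values()]
    for combo in product(*per_stage):
        yield dict(zip(model.keys(), combo))

variants = list(pipeline_variants(SYNOPTIC_MODEL))
print(len(variants), "candidate pipelines")   # 2 * (4 + 3) * 2 = 28
print(variants[0])
```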
Additionally, once configured, these software pipelines can be reused for other data sets, either to test and reproduce results from past studies or to detect new or previously unknown events.

The article is organized as follows. Section 2 introduces the different geodetic data sets used in this study and briefly outlines the geographic region of interest. Section 3 describes the idea of a computer-aided framework and the specific implementation developed and utilized for this study. Section 4 presents the detection and validation of three transient deformation events, whose physical characteristics are discussed in Section 5, along with a discussion of the requirements and advantages of the discovery pipeline. Finally, Section 6 concludes with the contributions of the developed computer-aided system and suggestions for further applications and refinements.

Continuous GPS position measurements ("time series" data) have become a particularly useful source of geodetic data in the past several decades, offering millimeter-scale resolution and wide-scale deployment. Here, we analyze daily GPS time series data obtained by averaging 24 hours of recorded measurements at PBO stations in Alaska between 2005 and 2015. The GPS data are obtained from the PBO archives as a Level 2 product describing GPS station time series positions. This product is generated by the Geodesy Advancing Geosciences and EarthScope GPS analysis centers at Central Washington University and New Mexico Institute of Mining and Technology, and combined into a final derived product by the analysis center coordinator at Massachusetts Institute of Technology. Locations of Alaskan volcanoes, along with logs of monitored volcanic activity, are obtained from the Alaska Volcano Observatory (AVO; http://avo.alaska.edu) and used to define regions of interest that bound subsets of the PBO GPS stations.

In addition to the GPS data, we also utilize snow cover data from the National Snow and Ice Data Center (NSIDC; http://nsidc.org) to address possible data corruption due to accumulated snow on GPS antennas, which can be significant, as shown in Figure [fig:fig1]. This novel fusion of the GPS time series data with the NSIDC National Ice Center's Interactive Multisensor Snow and Ice Mapping System daily Northern Hemisphere snow and ice analysis 4 km resolution data product helps identify stations with corrupted data resulting from seasonally occurring errors.
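To make the fusion step concrete, here is a minimal sketch of the masking idea with synthetic stand-in data; the real products sample the 4 km IMS grid at each station's coordinates, and all column names and values below are hypothetical:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical stand-ins for the two products: daily PBO GPS positions and
# the IMS snow/ice flag sampled at one station's location
days = pd.date_range("2005-01-01", "2014-12-31", freq="D")
gps = pd.DataFrame({"east_mm": rng.normal(0, 1, len(days)).cumsum() * 0.1},
                   index=days)
snow = pd.Series(days.month.isin([11, 12, 1, 2, 3]).astype(int),
                 index=days, name="snow_flag")   # 1 = snow cover present

# Fuse into a single product and mask positions on snow-covered days,
# removing seasonally correlated corruption before later analyses
fused = gps.join(snow)
fused.loc[fused["snow_flag"] == 1, "east_mm"] = np.nan
print(f"masked {int(fused['east_mm'].isna().sum())} of {len(fused)} days")
```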
To reduce the computational time and complexity, we apply a preprocessing step that combines the NSIDC snow mapping data with the PBO GPS data into a single data product, which we use in later analyses. This data fusion approach takes a first step towards solving the challenging problem of resolving data corrupted by complex snow conditions.

We demonstrate the utility of our approach for volcanology by analyzing GPS measurements of displacements near and at volcanoes in Alaska between 2005 and 2015. We evaluate 137 volcanoes to determine the sites with a sufficient number of GPS stations and sufficient time coverage, and then analyze the selected volcanoes for potential transient signals indicative of inflation events. We find 3 transient signals, consistent with volcanic inflation, between 1/1/2005 and 1/1/2015, at Akutan, Westdahl, and Shishaldin volcanoes in the Alaskan Aleutian Islands. As validation of our approach, we compare our detected inflation event at Akutan with prior work.

Complications in continuous GPS position measurements necessitate preprocessing and conditioning of the data. In particular, the extensive spatial scale of the PBO network makes manual inspection of volcanoes overly time consuming, while the GPS instruments themselves suffer from spatially and temporally correlated noise that can mask signals of interest. The computer-aided discovery approach helps overcome these inherent challenges in analyzing the GPS time series data by providing a framework for uniform preprocessing that reduces temporally correlated noise and removes secular drifts and seasonal trends. Our framework also supports independent and parallelizable runs examining multiple points of interest, first excluding large numbers of sites without events and then presenting a more tractable set of points of interest for further examination. These parallel runs can be distributed locally as multiple processes on a multi-core machine or offloaded to servers in the Amazon Web Services cloud.

The computer-aided discovery system utilized here is an implementation under continuing development. The overall approach is to create configurable data processing frameworks that then generate a specific analysis pipeline configuration. Figure [fig:fig2](a) exemplifies our pipeline synoptic model, which is a meta-model summarizing the possible pipelines that make scientific sense for our particular GPS data and analysis goals. Choices for the processing stages and the parameter ranges for each stage can then be selected as desired for a particular pipeline instance. For example, the notation at stage "(4) denoising" means that the general denoising step in the pipeline can be implemented by choosing either a Kalman filter or a median filter. The notation for an "alternative" choice is based on the notation commonly used in the generative programming community. If a Kalman filter is chosen, then its three parameters can be chosen from specified intervals.

The heatmap in Figure 6 is generated from multiple executions of the pipeline using the configurations listed below, where the filtering/smoothing stage was perturbed. For the Kalman filter stage container, the three parameters are $\tau$ (the correlation time in days), $\sigma^2$ (the variance of the correlated noise), and $R$ (the measurement noise).
For the median filter, the single parameter is the window length (the number of days in the window over which the median is calculated). To give two examples from our 25 configurations, configuration 0 employs a Kalman filter with a $\tau$ of 120, a $\sigma^2$ of 4, and an $R$ of 1, and configuration 2 employs a median filter with a specified window length (in days).

Configuration 0: KalmanFilter1[120, 4, 1]
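As a hedged illustration of how these two stage alternatives behave, the sketch below applies a median filter and a simplified first-order Gauss-Markov Kalman filter (a stand-in for the system's actual correlated-noise filter, whose exact formulation is not reproduced here) to a synthetic transient. The Kalman parameters follow configuration 0; the 61-day median window is an arbitrary choice, since configuration 2's value is not reproduced:

```python
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(0)
t = np.arange(1500)                            # days
truth = 5.0 / (1 + np.exp(-(t - 700) / 40.0))  # a smooth transient, in mm
y = truth + rng.normal(0, 1.5, t.size)         # noisy daily solutions

# Median-filter alternative (window length in days, hypothetical value)
median_out = medfilt(y, kernel_size=61)

# Kalman alternative with tau=120, sigma2=4, R=1 (configuration 0), shown as
# a simplified scalar Gauss-Markov state filter
def kalman_gm(y, tau=120.0, sigma2=4.0, r=1.0):
    phi = np.exp(-1.0 / tau)        # daily decay of the correlated state
    x, p, out = 0.0, 1.0, np.empty_like(y)
    for k, z in enumerate(y):
        x, p = phi * x, phi * p * phi + sigma2 / tau   # predict
        g = p / (p + r)                                # Kalman gain
        x, p = x + g * (z - x), (1 - g) * p            # update
        out[k] = x
    return out

kalman_out = kalman_gm(y)
print("median rmse:", np.sqrt(np.mean((median_out - truth) ** 2)))
print("kalman rmse:", np.sqrt(np.mean((kalman_out - truth) ** 2)))
```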
Analysis of transient deformation events in time series data observed via networks of continuous Global Positioning System (GPS) ground stations provides insight into the magmatic and tectonic processes that drive volcanic activity. Typical analyses of the spatial positions originating from each station require careful tuning of algorithmic parameters and careful selection of the temporal and spatial regions of interest in order to observe possible transient events. This iterative, manual process is tedious when attempting to make new discoveries and does not easily scale with the number of stations. Addressing this challenge, we introduce a novel approach based on a computer-aided discovery system that facilitates the discovery of such potential transient events. The advantages of this approach are demonstrated by actual detections of transient deformation events at volcanoes selected from the Alaska Volcano Observatory database, using data recorded by GPS stations from the Plate Boundary Observatory network. Our technique successfully reproduces the analysis of a transient signal detected in the first half of 2008 at Akutan volcano, and it is also directly applicable to 3 additional volcanoes in Alaska, with the new detection of 2 previously unnoticed inflation events: one in early 2011 at Westdahl and one in early 2013 at Shishaldin. This study also discusses the benefits of our computer-aided discovery approach for volcanology in general. Advantages include the rapid analysis, at multiple scales of resolution, of transient deformation events at a large number of sites of interest, and the capability to enhance reusability and reproducibility in volcano studies.

Keywords: Alaska; volcanoes; transient inflation events; GPS; computer-aided discovery
During the last decades, revenues from indirect taxes have become increasingly important in many economies, and substantial attention has been devoted to the evasion of indirect taxes. It is well known that indirect tax evasion, especially the evasion of VAT, may erode a substantial part of tax revenues [2], [4], [5]. In [3] a model with tax evasion is presented. The authors consider n firms which enter the market with a homogeneous good. These firms have to pay an ad valorem sales tax, but may evade a certain amount of their tax duty. The aim of the firms is to maximize their profits. The equilibrium point is determined and an economic analysis is made.

Based on [1], [3], [7], [8], [10], in this paper we present three economic models with tax evasion: the static model of Cournot duopoly with tax evasion in Section 2, the dynamic model of Cournot duopoly with tax evasion in Section 3, and the dynamic model with tax evasion and time delay in Section 4. In Section 2, in the static model, the purpose of the firms is to maximize their profits. We determine the firms' outputs and declared revenues which maximize the profits, as well as the conditions on the model's parameters under which the maximum profits are obtained. Using Maple 11, the orbits of the variables are displayed. In Section 3, the dynamic model describes the variation in time of the firms' outputs and declared revenues. We study the local stability of the stationary state and the conditions under which it is asymptotically stable. In Section 4, we formulate a new dynamic model, based on the model from Section 3, in which a time delay is introduced. That is, the two firms do not enter the market at the same time: one of them is the leader firm, the other is the follower firm, and the follower knows the leader's output at the previous moment. Using classical methods [6], [9], we investigate the local stability of the stationary state by analyzing the corresponding transcendental characteristic equation of the linearized system. By choosing the delay as a bifurcation parameter, we show that this model exhibits a limit cycle. Finally, numerical simulations, some conclusions and future research possibilities are offered.

The static model of Cournot duopoly is described by an economic game in which two firms enter the market with a homogeneous consumption product. The elements that describe the model are: the quantities $x_1$, $x_2$ which the two firms bring to the market; the declared revenues $z_1$, $z_2$; the inverse demand function $p$ (a differentiable function with $p'<0$); the penalty function $f$ (a differentiable function with $f'>0$); and the cost functions $c_i$, $i=1,2$ (differentiable functions with $c_i'>0$). The government levies an ad valorem tax on each firm's sales at the rate $t_1$, and $q\in[0,1]$ is the probability with which tax evasion is detected. The true tax base of firm $i$ is $x_i\,p(x_1+x_2)$, while firm $i$ declares $z_i$ as its tax base to the tax authority. Accordingly, the evaded revenues of firm $i$ are given by $x_i\,p(x_1+x_2)-z_i$. With probability $1-q$ the tax evasion remains undetected, and the tax bill of firm $i$ amounts to $t_1 z_i$; the tax authority detects the tax evasion of firm $i$ with probability $q$.
In case of detection, firm $i$ has to pay taxes on the full amount of its revenues and, in addition, a penalty. The penalty is increasing and convex in the evaded revenues. Moreover, it is assumed that $f(0)=0$, namely law-abiding firms go unpunished. The profit functions of the two firms are given by:

$$P_i = (1-q)\left[x_i\,p(x_1+x_2) - t_1 z_i - c_i(x_i)\right] + q\left[(1-t_1)\,x_i\,p(x_1+x_2) - c_i(x_i) - f\big(x_i\,p(x_1+x_2)-z_i\big)\right], \quad i=1,2.$$

The first bracketed term equals the profit of firm $i$ if the evasion activities remain undetected. The second term represents the profit of firm $i$ in case the tax evasion is detected. The firm's aim is to maximize $P_i$ with respect to its output $x_i$ and declared revenue $z_i$. This aim represents a mathematical optimization problem. Under the hypotheses on the functions $p$, $f$ and $c_i$ stated above, the solution of this problem is given by the solution of the following system:

$$\begin{aligned}
\frac{\partial P_i}{\partial x_i} &= \left[1 - q t_1 - q f'\big(x_i\,p(x_1+x_2)-z_i\big)\right]\left[p(x_1+x_2) + x_i\,p'(x_1+x_2)\right] - c_i'(x_i) = 0,\\
\frac{\partial P_i}{\partial z_i} &= -(1-q)\,t_1 + q f'\big(x_i\,p(x_1+x_2)-z_i\big) = 0, \quad i=\overline{1,2}.
\end{aligned}$$

In what follows, we consider a quadratic penalty function $f(u)=\frac{s t_1}{2}u^2$ (so that $f'(u)=s t_1 u$, consistent with the expressions used below), linear cost functions, and a linear inverse demand function. From the second equation we can deduce that the evaded revenues are pinned down by $x_i\,p(x_1+x_2)-z_i=\frac{1-q}{q s}$; substituting this into the first equation gives $(1-t_1)\left[p(x_1+x_2)+x_i\,p'(x_1+x_2)\right]=c_i'(x_i)$, which, provided $t_1<1$, determines the solution of the system. For given values of the parameters, the variations of the variables $x_1$, $x_2$, $z_1$, $z_2$ and the profits $P_1$, $P_2$ are displayed in the corresponding figures.
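A minimal numerical sketch of the static equilibrium follows, under the functional forms just stated; since the paper's concrete parameter values are not reproduced, all numbers below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative parameters (the source values are not reproduced)
a, b = 10.0, 1.0           # inverse demand p(X) = a - b X
c1, c2 = 1.0, 1.5          # constant marginal costs
t1, q, s = 0.2, 0.3, 2.0   # tax rate, detection probability, penalty slope

def foc(v):
    """First-order conditions dP_i/dx_i = 0 and dP_i/dz_i = 0, i = 1, 2,
    with p(X) = a - b X, c_i(x) = c_i x and f(u) = s*t1*u**2/2."""
    x1, x2, z1, z2 = v
    p = a - b * (x1 + x2)
    eqs = []
    for xi, zi, ci in ((x1, z1, c1), (x2, z2, c2)):
        evaded = xi * p - zi
        eqs.append((1 - q * t1 - q * s * t1 * evaded) * (p - b * xi) - ci)
        eqs.append(-(1 - q) * t1 + q * s * t1 * evaded)
    return eqs

x1, x2, z1, z2 = fsolve(foc, [2.0, 2.0, 1.0, 1.0])
print(f"outputs: x1={x1:.3f}, x2={x2:.3f}")
print(f"declared revenues: z1={z1:.3f}, z2={z2:.3f}")
# the z-condition pins down evasion: x_i p - z_i = (1-q)/(q s) for both firms
```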
) = z_{i0},\text { } % i=1,2 .\notag\end{aligned}\ ] ] system has the stationary state given by let , , , by expanding in a taylor series around the stationary state and neglecting the terms of higher order than the first order , we have the following linear approximation of system : : the characteristic equation associated to is given by:: a necessary and sufficient condition as equation has all roots with negative real part is given by routh - hurwitz criterion : from the routh - hurwitz criterion we have : the stationary state of system is asymptotically stable if and only if conditions hold .in [ 7 ] and [ 8 ] we have studied the rent seeking games with time delay and distributed delay . in the present sectionwe analyze the rent seeking games with tax evasion and delay .for we obtain the model from [ 3 ] . for and obtain the model from [ 1 ] .we consider the model from section 3 where we introduce the time delay .we suppose the first firm is the leader and the second firm is the follower .the follower knows the quantity of the leader firm , which entered the market at the moment the differential system which describes this model is given by: \cdot \notag \\ & & \left [ p\left ( x_{1}\left ( t\right ) + x_{2}\left ( t\right ) \right ) + x_{1}\left ( t\right ) p^{\prime } \left ( x_{1}\left ( t\right ) + x_{2}\left ( t\right ) \right ) \right ] -c_{1}\ } \notag \\ \overset{\cdot } { x}_{2}\left ( t\right ) & = & k_{2}\{\left [ 1-qt_{1}-qst_{1}% \left ( x_{2}\left ( t\right ) p\left ( x_{1}\left ( t-\tau \right ) + x_{2}\left ( t\right ) \right ) -z_{2}\left ( t\right ) \right ) \right ] \cdot \notag \\ & & \left [ p\left ( x_{1}\left ( t-\tau \right ) + x_{2}\left ( t\right ) \right ) + x_{2}\left ( t\right ) p^{\prime } \left ( x_{1}\left ( t-\tau \right ) + x_{2}\left ( t\right ) \right ) \right ] -c_{2}\ } \\\overset{\cdot } { z}_{1}\left ( t\right ) & = & h_{1}\left [ -\left ( 1-q\right ) t_{1}+qst_{1}\left ( x_{1}\left ( t\right ) p\left ( x_{1}\left ( t\right ) + x_{2}\left ( t\right ) \right ) -z_{1}\left ( t\right ) \right ) \right ] \notag \\\overset{\cdot } { z}_{2}\left ( t\right ) & = & h_{2}\left [ -\left ( 1-q\right ) t_{1}+qst_{1}\left ( x_{2}\left ( t\right ) p\left ( x_{1}\left ( t\right ) + x_{2}\left ( t\right ) \right ) -z_{2}\left ( t\right ) \right ) \right ] \notag \\x_{1}\left ( \theta \right ) & = & \varphi \left ( \theta \right ) , \text { } \theta \in \left [ -\tau , 0\right ] , \text { } x_{2}\left ( 0\right ) = x_{20},\text { } % z_{i}\left ( 0\right ) = z_{i0},\text { } k_{i}>0,\text { } h_{i}>0,\text { } i=1,2 .\notag\end{aligned}\]]for the stationary state of system is given by with respect to the transformation , , , and by expanding in a taylor series around the stationary state and neglecting the terms of higher order than the first order , we obtain the following linear approximation of system : where are given by the corresponding characteristic equation of is : the roots of depend on considering as parameter , we determine so that is a root of substituting into equation we obtain: from the above equation we have: if is a positive root of then there is a hopf bifurcation and the value of is given by : we can conclude with the following theorem : \(i ) if is a positive root of and where is given by , then a hopf bifurcation occurs at the stationary state as passes through \(ii ) if conditions ( 11 ) hold and , then the stationary state is asymptotically stable for any .for the numerical simulation we use maple 11 and the following data : , , , , , , , 
The stationary state is then computed from these data. For this first parameter set the Routh-Hurwitz conditions are satisfied, so the stationary state of the undelayed system is stable. Equation (15) has a positive root, which yields the critical delay $\tau_0$: for $\tau < \tau_0$ the stationary state is asymptotically stable, for $\tau > \tau_0$ it is unstable, and at $\tau = \tau_0$ a Hopf bifurcation occurs. For the second parameter set the stationary state again satisfies the Routh-Hurwitz conditions and is stable, but equation (15) has no positive root, so the stationary state is asymptotically stable for any delay $\tau$.

In the static model with tax evasion, the audit and penalty parameters characterize the behavior of the firms with respect to evasion, and the figures presented allow the analysis of the declared revenues and the profits with respect to these parameters. For the dynamic model with tax evasion, the Routh-Hurwitz criterion gives the conditions under which the stationary state is asymptotically stable. For the dynamic model with tax evasion and time delay, using the delay as a bifurcation parameter, we have shown that a Hopf bifurcation occurs when the delay passes through a critical value. The direction of the Hopf bifurcation and the stability and period of the bifurcating periodic solutions will be analyzed in a future paper. The findings of the present paper can be extended to the oligopoly case.

[1] C. Chiarella, F. Szidarovszky, _On the asymptotic behavior of dynamic rent-seeking games_, Southwest Journal of Pure and Applied Mathematics, Issue 2: 17-27, 2000.

[7] M. Neamtu, C. Chilarescu, _On the asymptotic behavior of dynamic rent-seeking games with distributed time_, Proceedings of the Sixth International Conference on Economic Informatics, Bucharest, 8-11 May, Ed. Economica, 216-223, 2003.
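The stability checks reported above are easy to reproduce. A minimal Python sketch (the paper itself used Maple 11) tests the Routh-Hurwitz conditions for a quartic characteristic polynomial $\lambda^4 + a_1\lambda^3 + a_2\lambda^2 + a_3\lambda + a_4 = 0$, the form arising from the four-dimensional linearized system; the coefficients below are illustrative stand-ins, not the paper's data.

```python
import numpy as np

def routh_hurwitz_quartic(a1, a2, a3, a4):
    """Routh-Hurwitz conditions for p(s) = s^4 + a1 s^3 + a2 s^2 + a3 s + a4.
    All roots lie in the open left half-plane iff every condition holds."""
    return (a1 > 0
            and a1 * a2 - a3 > 0
            and (a1 * a2 - a3) * a3 - a1**2 * a4 > 0
            and a4 > 0)

# Illustrative coefficients standing in for the linearization of the
# tax-evasion duopoly model at its stationary state (not the paper's data).
a1, a2, a3, a4 = 2.0, 3.5, 2.0, 0.5
print("Routh-Hurwitz says stable:", routh_hurwitz_quartic(a1, a2, a3, a4))

# Cross-check against the explicit eigenvalues.
roots = np.roots([1.0, a1, a2, a3, a4])
print("max Re(root):", max(r.real for r in roots))  # negative iff stable
```

For a stable parameter set all four conditions hold and every root has negative real part; the delay analysis then asks when a pair of roots crosses the imaginary axis.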
We consider static and dynamic models of a Cournot duopoly with tax evasion. In the dynamic model we introduce a time delay and analyze the local stability of the stationary state. There is a critical value of the delay at which a Hopf bifurcation occurs.

_Mathematics Subject Classification: 34K18, 47N10; JEL Classification: C61, C62, H26_
It is widely known that quantum computation enormously promotes computational efficiency by exploiting several basic and purely physical features of quantum mechanics, such as coherent superposition, quantum entanglement and measurement collapse. The speedup offered by quantum computation is related to quantum entanglement and the tensor product structure, which are essential for the computational resource to grow exponentially with the number of qubits. Yet practical quantum computation is difficult to realize because of restrictions on quantum system controllability, decoherence and measurement randomness [knill, nielsen2, browne, kok].

The classical simulation of quantum systems, especially of quantum entanglement, has been under investigation for a long time [cerf, massar, spreeuw]. In addition to being easy to implement, research on classical simulations can help in understanding some fundamental concepts of quantum mechanics. However, several researchers have pointed out that the classical simulation of quantum systems requires physical resources that scale exponentially with the number of quantum particles. In ref. , an optical analogy to quantum systems was introduced in which the number of light beams and optical components required grows exponentially with the number of qubits. In ref. , a classical protocol that efficiently simulates any pure-state quantum computation is presented, yet the amount of entanglement involved is restricted. In ref. , it is elucidated that in classical theory the state space of a composite system is the direct product of those of its subsystems, whereas in quantum theory it is the tensor product. It is generally accepted that the essential distinction between direct and tensor products is precisely the phenomenon of quantum entanglement, and this is regarded as the origin of the limitation of any classical system. Recently, several works have proposed realizing classical entanglement in classical optical fields by introducing a new degree of freedom, such as orbital angular momentum, to reproduce the tensor product structure of quantum entanglement. However, that method cannot provide enough orthogonal degrees of freedom, so its scalability is doubtful.

In this paper, we propose an optical parallel computation, similar to quantum computation, that can be realized by introducing pseudorandom phase sequences into optical fields with two orthogonal modes. The two orthogonal modes (polarization or transverse) of the optical field are encoded as optical analogies of the qubit states $\left\vert 0\right\rangle$ and $\left\vert 1\right\rangle$. In wireless and optical communications, orthogonal pseudorandom sequences have been widely applied in code division multiple access (CDMA) technology as a way to distinguish different users. A set of pseudorandom sequences guided by a Galois field GF($q$) is generated by a linear feedback shift register and satisfies the orthogonality, closure and balance properties. In phase shift keying (PSK) communication technology, information is encoded in the phase of classical optical/electromagnetic fields, with phase values in $[0,2\pi)$ for $q$-ary communication. Combining these two communication technologies, we introduce pseudorandom phase sequences in our scheme.
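To fix ideas before the formal development, the following minimal Python sketch shows how a 4-ary phase sequence is impressed on a complex carrier in the PSK style just described; the amplitude, slot length and example sequence are illustrative choices, not values from the paper.

```python
import numpy as np

# 4-ary PSK alphabet used for the pseudorandom phase sequences (q = 4).
PHASES = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])

def modulate(symbols, amplitude=1.0, samples_per_slot=8):
    """Impress a 4-ary phase sequence on a complex baseband carrier.
    Each symbol k in {0,1,2,3} contributes one slot of constant extra
    phase PHASES[k], i.e. E(t) = A * exp(i * lambda_k) on that slot."""
    phase = np.repeat(PHASES[np.asarray(symbols)], samples_per_slot)
    return amplitude * np.exp(1j * phase)

# Example: one period of a hypothetical phase sequence.
seq = [1, 0, 0, 1, 0, 1, 1, 0]
field = modulate(seq)
print(field[:8])  # first slot: constant phase pi/2, i.e. values ~ 0+1j
```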
Guaranteed by the orthogonality property, optical/electromagnetic fields modulated with different pseudorandom phase sequences can be transmitted simultaneously in one communication channel without crosstalk, and can easily be distinguished by coherent demodulation. Unlike other schemes, the pseudorandom phase sequences employed here provide not only scalable degrees of freedom supporting an arbitrary-dimensional tensor product structure, but also a theoretical framework, the phase ensemble model, similar to the concept of a quantum ensemble. Using the ensemble model, we can demonstrate an inseparable correlation between optical fields carrying different pseudorandom phase sequences, similar to quantum entanglement. Interestingly, the Hilbert space that can be spanned by the optical fields is larger than that spanned by the same number of quantum particles. The resulting problem for our scheme is therefore not a lack of resources but a redundancy of resources. To reduce this redundancy, we introduce a sequential cycle permutation mechanism based on coherent demodulation to realize a bijective imitation of certain quantum states. Optical analogies of some typical quantum states are also discussed, including the Bell states and the GHZ and W states. For better fault tolerance, each orthogonal mode of the optical fields is measured and assigned a discrete value, which means that our scheme provides a discrete computation model. Furthermore, we propose a gate array model to imitate quantum computation based on four kinds of mode control gates. As examples, we demonstrate imitations of Shor's algorithm, Grover's algorithm and the quantum Fourier algorithm [nielsen]. To verify feasibility, we numerically simulate our scheme using the widely used optical communication simulation software OptiSystem.

The paper is organized as follows. In section [sec ii], we introduce some preliminary material required later in this paper. In section [sec iii], a theoretical framework, the phase ensemble model, and the optical analogies of several typical quantum states are discussed. In section [sec iv], a gate array model to imitate quantum computation is proposed. Finally, we summarize our conclusions in section [sec v].

In this section, we introduce some notation and basic results required later in the paper. We first introduce pseudorandom phase sequences (PPSs) and their properties. Then we describe the scheme for modulating and demodulating optical fields with PPSs. Finally, we discuss the similarities between an optical field and a single-particle quantum state.

As is well known, orthogonal pseudorandom sequences have been widely applied in CDMA communication technology as a way to distinguish different users. A set of pseudorandom sequences generated from a shift register guided by a Galois field GF($q$) satisfies the orthogonality, closure and balance properties. The orthogonality property ensures that the sequences of the set are independent and distinguishable from each other, with excellent correlation properties. The closure property ensures that any linear combination of the sequences remains in the same set. The balance property ensures that every non-zero element occurs equally often in each sequence, and that the number of zero elements is exactly one less than that of each of the other elements. These three properties are checked numerically in the sketch below.
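A minimal sketch (binary GF(2) case for brevity; the paper's sequences are $q$-ary), using the linear feedback shift register generator that the next paragraph introduces:

```python
import numpy as np

def lfsr_msequence(taps, state, length):
    """Binary Fibonacci LFSR. `taps` are 1-indexed register positions
    XORed to form the feedback; taps=(3, 1) with a 3-bit register yields
    a maximal-length (m-)sequence of period 2^3 - 1 = 7."""
    state = list(state)
    out = []
    for _ in range(length):
        out.append(state[-1])          # output the last register cell
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]      # shift right, feed back on the left
    return np.array(out)

seq = lfsr_msequence((3, 1), [1, 0, 0], 7)
print("m-sequence:", seq)              # [0 0 1 1 1 0 1]
print("balance: ones =", seq.sum(), ", zeros =", 7 - seq.sum())  # 4 vs 3

# Two-valued autocorrelation of the +/-1 version: 7 at shift 0, -1 else.
s = 1 - 2 * seq
for k in range(7):
    print(k, int(np.dot(s, np.roll(s, k))))
```

The balance count (ones = zeros + 1) and the two-valued autocorrelation are exactly the properties that make shifted copies of a single m-sequence usable as distinct, mutually distinguishable codes, as described above.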
one famous generator of pseudorandom sequences is linear feedback shift register ( lfsr ) , which can produce a maximal period sequence , called m - sequence golomb .we consider an m - sequence of period ( ) generated by a primitive polynomial of degree over gf( ) .since the correlation between different shifts of an m - sequence is almost zero , they can be used as different codes with their excellent correlation property . in this regard ,the set of m - sequences of length can be obtained by cyclic shifting of a single m - sequence . in this paper , we employ ppss with -ary phase shift modulation .although the phases should uniformly distribute in ] . for better understanding our scheme ,the ppss in the cases of modulating and optical fields are illustrated below .an m - sequence of length is generated by a primitive polynomial of the lowest degree over , which is ] .then we obtain the set that includes ppss of length : , where the ppss are shown as follows \times \pi /2 , \label{2 } \\ \lambda ^{\left ( 2\right ) } & = & \left [ \begin{array}{cccccccc } 1 & 1 & 0 & 0 & 1 & 0 & 1 & 0% \end{array}% \right ] \times \pi /2 , \nonumber \\ \lambda ^{\left ( 3\right ) } & = & \left [ \begin{array}{cccccccc } 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0% \end{array}% \right ] \times \pi /2 , \nonumber \\ \lambda ^{\left ( 4\right ) } & = & \left [ \begin{array}{cccccccc } 0 & 1 & 1 & 1 & 0 & 0 & 1 & 0% \end{array}% \right ] \times \pi /2 , \nonumber \\ \lambda ^{\left ( 5\right ) } & = & \left [ \begin{array}{cccccccc } 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0% \end{array}% \right ] \times \pi /2 , \nonumber \\\lambda ^{\left ( 6\right ) } & = & \left [ \begin{array}{cccccccc } 0 & 1 & 0 & 1 & 1 & 1 & 0 & 0% \end{array}% \right ] \times \pi /2 , \nonumber \\\lambda ^{\left ( 7\right ) } & = & \left [ \begin{array}{cccccccc } 0 & 0 & 1 & 0 & 1 & 1 & 1 & 0% \end{array}% \right ] \times \pi /2 , \nonumber \\\lambda ^{\left ( 8\right ) } & = & \left [ \begin{array}{cccccccc } 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0% \end{array}% \right ] \times \pi /2 .\nonumber\end{aligned}\ ] ] further , we define a map on the set of , and obtain a new sequence set .the map corresponds to the phase modulations of the ppss on optical fields . according to the properties of m - sequence, we can obtain following properties of the set , ( 1 ) the closure property : the product of any sequences remains in the same set , in additional phase contributed by the power of a sequence ; ( 2 ) the balance property : in exception to , any sequences of the set satisfy , (3 ) the orthogonal property : any two sequences satisfy the following normalized correlation in conclusion , according to the properties above , the optical fields modulated with different ppss become independent and distinguishable in any case . in this section, we mainly focus on the modulation and demodulation of optical fields with single polarization mode .we first consider the modulation process of an optical field with a pps from the set .for example , we choose to modulate the optical field labeled as the signal optical ( so ) field , its electric field component is are the amplitude and frequency of the optical field respectively , and is the phase unit of at the -th time slot . in order to perform the demodulation of pps , we design a coherent detection scheme as shown in fig .[ fig3 ] that has been widely used in the coherent communication . in the detection scheme , the local optical ( lo )beam and the so beam interfere with each other through a beam coupler ( bc ) . 
in order to ensure the coherence of them , the two beams can be split from the same optical source through a beam splitter ( bs ) .the lo field can be expressed as can be an arbitrary sequence of the set and the amplitude assumed . after the coherent superposition through the bc, the output fields can be expressed as , the output electric signals of photodetectors and is proportional to , \label{8 } \\d_{2 } & = & \mu \left\vert e_{2}\left ( t\right ) \right\vert ^{2}=\mu \left\vert a_{s}\right\vert ^{2}\left [ 1+\sin \left ( \lambda _ { k}^{\left ( 1\right ) } -\lambda _ { k}^{\left ( n\right ) } \right ) \right ] , \nonumber\end{aligned}\]]where is the parameter related to the sensitivity of photodetectors . finally , after correlation analysis of the two electric signals, we can obtain as follow = \left\ { \begin{array}{c } 8\mu ^{2}\left\vert a_{s}\right\vert ^{4}\delta t,\ n=1 \\4\mu ^{2}\left\vert a_{s}\right\vert ^{4}\delta t,\ n\neq 1% \end{array}% \right . , \label{9}\]]where is the pps time slot .in addition to a constant , the result satisfies the orthogonality of pps . to verify the above scheme, we utilize the software optisystem to numerically simulate it .[ fig4 ] shows the electric signals of two photodetectors within a sequence period .[ fig5 ] shows the correlation analysis results of the so field and the lo field modulated with different ppss. we can find out the correlation result is the largest when the so and lo fields modulated with the same pps .hence , the orthogonality of ppss can be used to distinguish the optical fields with different ppss . and : photodetectors , : multiplier and : integrator ( integrate over entire sequence period).,width=279,height=173 ] ( a ) and (b ) are shown when the sequence of the lo field is .,width=353,height=194 ] and the lo fields with ( corresponding to ) , where ( a ) is the correlation intergrals vary with the sequence time slots , and ( b ) is the final correlation results.,width=381,height=184 ] we note the similarities between maxwell equation and schrdinger equation .in fact , some properties utilized in quantum information are wave properties , where the wave might not be a quantum wave spreeuw .analogously to quantum states , optical fields also obey a superposition principle , and can be transformed to any superposition state by unitary transformations .those analogous properties make possible the analogies to quantum states using * * * * polarization or transverse modes of optical fields **. * * we first consider two orthogonal modes ( polarization or transverse ) of optical fields , as the optical analogies to quantum bits ( qubits ) and , any quantum state of a single particle can be imitated by the mode superposition of optical field as follow .obviously , all the mode superposition can also span a hilbert space .we can transform a mode state to any other state using the unitary transformation as follow are real - valued . the modes and can be transformed to any mode superpositions by using , respectively , as follows now , we consider some optical devices with one input and two outputs , such as a beam splitter or a mode splitter , which split one input field into two output fields and . for the case of the beam splitter ,the output fields are and with an arbitrary power ratio between the output beams , where are the additional phases due to the splitter . 
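The discrimination that eqs. (8)-(9) achieve with two photodetectors can be sketched more compactly with a single complex correlation. This is a simplification of the detection scheme above, with i.i.d. uniform phase units standing in for the balanced PPS units and unit amplitudes assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
PHASES = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])

def pps(length):
    """Stand-in pseudorandom phase sequence (i.i.d. here; the paper uses
    m-sequence-derived PPSs, which share the same balance property)."""
    return rng.choice(PHASES, size=length)

L = 4096
lam_so = pps(L)            # PPS on the signal optical (SO) field
lam_lo_same = lam_so       # matched local oscillator (LO)
lam_lo_diff = pps(L)       # unmatched LO

so = np.exp(1j * lam_so)   # unit-amplitude baseband SO field

def correlate(so, lam_lo):
    """Coherent demodulation: average the SO field against the LO
    reference. Orthogonality makes this ~1 for a matched LO, ~0 else."""
    return abs(np.mean(so * np.exp(-1j * lam_lo)))

print("matched LO:  ", correlate(so, lam_lo_same))   # ~1.0
print("unmatched LO:", correlate(so, lam_lo_diff))   # ~0, O(1/sqrt(L))
```

A matched local oscillator returns a correlation near the field amplitude while any other reference averages to zero, which is the content of eq. (9) up to constants. We now return to the one-input, two-output devices introduced above.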
for the case of the mode splitter ,the output fields are and , where are also the additional phases .conversely , the devices can act as a beam coupler or a mode coupler in which beams or modes from two inputs are combined into the one output .in this section , we discuss optical analogies to multiparticle quantum states using optical fields modulated with ppss .we first demonstrate that optical fields modulated with different ppss can span a dimensional hilbert space that contains a tensor product structure .then , we introduce a phase ensemble model to imitate the quantum ensemble .further , by performing coherent demodulation scheme , we can obtain a mode status matrix of the optical fields .based on the mode status matrix , we propose a sequential cycle permutation mechanism ( scpm ) to imitate some typical quantum states , such as the product state , bell states , ghz state and w state . in , an effective simulation of quantum entanglement using optical fields modulated with ppsswas discussed . in this paper , we will promote this proposal further . referring from the concept of quantum ensemble, we propose a new concept of a pseudorandom phase ensemble model .a phase ensemble is defined as a large number of same optical fields modulated with different ppss , which are labeled by the phase units of .a phase ensemble is discrete if the phase unit is a uniformly distributed discrete value within $ ] .then we can define that a phase ensemble is complete if finite phase units are ergodic . according to m - sequence theory , the occurrence of each value in a sequence is the same .the phase ensemble is obviously ergodic in finite length .clearly , we can conclude that optical fields modulated with ppss constitute a complete discrete phase ensemble .an ensemble average is defined as weighted average of any sequence within a sequence period as follow is a sequence unit labled by the phase unit .a normalized correlation for two sequences and is defined as are the sequence units of labled by the phase unit , respectively .now we discuss a hilbert space spanned by optical fields modulated with ppss .there are two orthogonal modes ( polarization or transverse ) of the optical field , which are denoted by and , respectively .thus , a qubit state can be expressed by the mode superposition , where hilbert space . choosing any ppss from the set to modulate optical fields ,we can obtain the states expressed as follows to the properties of ppss and hilbert space , we can define the inner product of any two fields and .we obtain the orthogonal property as follow are the -th units of and , respectively .the orthogonal property supports the tensor product structure of the multiple fields .a formal product state for the optical fields is defined as being a direct product of , to the definition , optical fields of eq .( [ 16 ] ) can be expressed as the following state . 
by using an array of several mode transformation gates, the optical field can be transformed from eq .( [ 16 ] ) to the following general state , the formal product state can be written as , we can obtain each item of the superposition of as follows \left\vert 00\cdots 0\right\rangle , \\c_{00\cdots 1}\left\vert 00\cdots 1\right\rangle = \left [ \left ( \sum\limits_{i=1}^{n}\alpha _ { 1}^{\left ( i\right ) } e^{i\lambda ^{\left ( i\right ) } } \right ) \left ( \sum\limits_{i=1}^{n}\alpha _ { 2}^{\left ( i\right ) } e^{i\lambda ^{\left ( i\right ) } } \right ) \cdots \left ( \sum\limits_{j=1}^{n}\beta _ { n}^{\left ( j\right ) } e^{i\lambda ^{\left ( j\right ) } } \right ) \right ] \left\vert 00\cdots 1\right\rangle , \\ \vdots \\c_{11\cdots 1}\left\vert 11\cdots 1\right\rangle = \left [ \left ( \sum\limits_{j=1}^{n}\beta _ { 1}^{\left ( j\right ) } e^{i\lambda ^{\left ( j\right ) } } \right ) \left ( \sum\limits_{j=1}^{n}\beta _ { 2}^{\left ( j\right ) } e^{i\lambda ^{\left ( i\right ) } } \right ) \cdots \left ( \sum\limits_{j=1}^{n}\beta _ { n}^{\left ( j\right ) } e^{i\lambda ^{\left ( j\right ) } } \right ) \right ] \left\vert 11\cdots 1\right\rangle , % \end{array } \label{22}\]]where . according to the closure property ,the ppss of remain in the set , which means .therefore , we obtain the following conclusion that the formal product state can be expressed a linear superposition in the hilbert space with the basis as follows , \label{23}\]]where denotes a total of coefficients . apparently , these fields span the dimensional hilbert space .the nonlocality correlation of quantum entanglement is demonstrated as the inseparability of any two - party quantum states .the correlation depends on the ensemble summaries of many measurement results .similarly , we discuss the state inseparability demonstrated in optical fields based on the phase ensemble framework . in order to research the properties , we first classify the subsets of the formal product state according to the ppss .a consensus pps sub - state ( cpss ) is defined as being items with the same pps in the formal product state .a single pps sub - state ( spss ) is defined as being each of the items , except all consensus pps sub - states , in the formal product state .for example , we assume that the pps corresponds to the first cpss set , ... , the pps corresponds to the -th cpss set , and other spsss in the formal product state .thus can be expressed as , and are the distinct ppss and are the superposition coefficients of the cpsss and the spsss , respectively .further , we introduce the definition of the density matrix can be simplified to denote all coefficients of the cpsss and spsss .noteworthy , the last three items must retain the ppss due to the distinct ppss and their closure property . according to the definition of phase ensemble average eq .( [ 14 ] ) , an ensemble - averaged density matrix ( eadm ) can be defined and obtained as follow to all ppss satisfying .note that all off - diagonal elements of the eadm are contributed from the cpsss after ensemble averaged .also , it shows that the eadm might not be expressed in terms of direct products of the states due to only non - diagonal term contributed from cpsss remaining , simialr to the case of quantum entanglement states . 
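The surviving off-diagonal structure of the EADM can be checked numerically. A minimal sketch for the two-field, mode-exchanged configuration follows; i.i.d. uniform phase units are used as a stand-in for the balanced PPS units, and the $1/\sqrt{2}$ normalization is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(7)
PHASES = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])

def field_pair():
    """One ensemble member: two fields sharing phase units l1, l2, with
    the modes exchanged on the second field (the Bell-type construction
    of the text)."""
    l1, l2 = rng.choice(PHASES, 2)
    e1 = np.array([np.exp(1j * l1), np.exp(1j * l2)]) / np.sqrt(2)  # H, V
    e2 = np.array([np.exp(1j * l2), np.exp(1j * l1)]) / np.sqrt(2)
    return np.kron(e1, e2)  # formal product state in basis 00,01,10,11

rho = np.zeros((4, 4), complex)
N = 20000
for _ in range(N):
    psi = field_pair()
    rho += np.outer(psi, psi.conj())
rho /= N  # ensemble-averaged density matrix

np.set_printoptions(precision=2, suppress=True)
print(rho.real)
# Diagonal ~ 1/4 each; the only surviving off-diagonal elements are
# <00|rho|11> = <11|rho|00> ~ 1/4, as for the Bell-state projector.
```

The computed matrix has a uniform diagonal and a single pair of surviving off-diagonal elements connecting $\left\vert 00\right\rangle$ and $\left\vert 11\right\rangle$, mirroring the Bell-type inseparability claimed above.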
in the phase ensemble ,the expectation value of an arbitrary operator can be defined as follow , according to the exchange of summation and matrix trace , the expectation value can be simplified to = tr\left ( \tilde{\rho}\hat{p}\right ) \label{29 } \\ & = & \sum\limits_{n=1}^{2^{n}}\left\vert c_{n}\right\vert ^{2}\left\langle x_{n}\right\vert \hat{p}\left\vert x_{n}\right\rangle + \sum\limits_{m=1}^{p}\sum\limits_{i\neq i^{\prime } = 1}^{n_{m}}\left ( c_{i}^{\left ( m\right ) } c_{i^{\prime } } ^{\left ( m\right ) \ast } \left\langle s_{i^{\prime } } ^{\left ( m\right ) } \right\vert \hat{p}\left\vert s_{i}^{\left ( m\right ) } \right\rangle + c_{i^{\prime } } ^{\left ( m\right ) } c_{i}^{\left ( m\right ) \ast } \left\langle s_{i}^{\left ( m\right ) } \right\vert \hat{p}\left\vert s_{i^{\prime } } ^{\left ( m\right ) } \right\rangle \right ) .\nonumber\end{aligned}\ ] ] it is worth noting that the inseparability is demostrated due to only off - diagonal items contributed from cpsss remaining in the correlation measurement . using this property, we can imitate a quantum state that formally agrees with cpsss in the formal product state under the phase ensemble framework . in the phase ensemble model , we are interested in the simplest model that requires minimal resources to be constructed . we define that a minimum complete phase ensemble has the least cpss set , in which the state has only one cpss set . by the definition , the state in eq .( [ 16 ] ) is a type of the minimal complete state .the cpsss of the minimum complete state correspond to one and only one pps , which is the sum of all used ppss .the simplest case is each field modulated with a different pps as follow , we can express a minimum complete state as follow correspond to all spsss . according to the analysis in the last subsection, the eadm can be obtained in conclusion , the minimum complete state satisfies the necessary conditions for analogies to quantum states . considering all possible combinations, we can obtain nonredundant combinations of optical fields and ppss . andall combinations can be obtained in the same form .therefore , in the formal product state , a cpss has equivalent direct product decompositions .there is a very important problem for our scheme that is not the lack of resources but the redundancy of resources . in order to reduce the redundancy, we have to introduce a simple and unique mechanism : a sequential cycle permutation mechanism ( scpm ) to realize the bijection imitation of certain quantum states .the sequential cycle permutation is shown as follows is clear that the sequential cycle permutation is a subset of the sequential full permutations . according to the definition ,the imitation state obtained by using the sequential cycle permutation is obvious a minimum complete state . according to eq .( [ 33 ] ) , each state corresponds to the sequential cycle permutation as follows , each sequential cycle permutation provides a subset of the minimum complete states .quantum entanglement is only defined for the hilbert spaces that have a rigorous tensor product structure in terms of subsystems . in quantum mechanics ,quantum entanglement can not be expressed in terms of direct products , but only is characterized by the correlation measurements .the nonlocal correlations decided with bell s inequality and ghz s equality criteria are the most fundamental property of quantum entanglement chsh , ghz . here, it is necessary to introduce the correlation analysis analogy to quantum measurement . 
in order to introduce correlation analysis ,a measurement operator locally performed on is given convenience , coefficients are equal to , yielding .further we generalize to the case of optical fields according to eq .( [ 29 ] ) , we obtain the correlation analysis for the state of eq .( [ 31 ] ) using and the density matrix as follow = tr\left [ \tilde{% \rho}\hat{p}(\theta _ { 1},\ldots , \theta _ { n})\right ] \label{41 } \\ & = & \sum_{n=1}^{2^{n}}|c_{n}|^{2}\left\langle x_{n}\right\vert \hat{p}% \left\vert x_{n}\right\rangle + \sum_{i\neq i^{\prime } = 1}^{n^{\prime } } \left ( c_{i}c_{i^{\prime } } ^{\ast } \left\langle x_{i^{\prime } } \right\vert \hat{p}\left\vert x_{i}\right\rangle + c_{i^{\prime } } c_{i}^{\ast } \left\langle x_{i}\right\vert \hat{p}\left\vert x_{i^{\prime } } \right\rangle \right ) .( [ 41 ] ) shows that only non - diagonal terms contributed from cpsss remain . for convenience ,we first consider two optical fields modulated with the ppss .chosen any two ppss of and from the set , two fields modulated with the ppss can be expressed as follows and are assumed to be two orthogonal polarization modes , respectively .here we assume are equal to .the direct product state of the two fields can be expressed as follows remains in the set due to the closure property . by using a polarization beam splitter ,the modes of and are exchanged as shown in fig .. then we obtain the following fields the relative phase sequences ( rpss ) , and .the state is obtained .\label{43}\]]due to and , we obtain the eadm as follow the eadm can not be decomposed into the direct products due to only non - diagonal term remaining , which is similar to the bell state . )to imitate one of bell states is shown , where pbs is beam splitter.,width=306,height=167 ] then we obtain the results and for the measurement operators and locally performed on the fields and , where are the -th units of the rpss and , respectively . then the correlation function is is the normalization coefficient .the fields in eq .( [ 42 ] ) are considered to be the optical analogy to the bell state . by substitutingthe above correlation functions into bell inequality ( chsh inequality ) and are and , respectively , bell s inequality is maximally violated .bell state differs from by phase .similarly , the optical analogy to the bell state is expressed as performing the transformation on of the state , we obtain the optical analogy to the bell state expressed as of expressed as their correlation functions are obtained . to substitute the correlation functions into eq .( [ 45 ] ) , we also obtain the maximal violation of bell s inequality .the violation of bell s criterion demonstrates the nonlocal correlation of the two optical fields in our scheme , which results from shared randomness of the ppss .the nonlocality of the multipartite entangled ghz states can in principle be manifest in a new criterion and need not be statistical as the violation of bell inequality . preparing three fields and similar to eq .( [ 35 ] ) , and cyclically exchanging the modes of the fields , we obtain as follows the rpss and .the fields are considered to be the optical analogy to ghz state .we obtain the local measurement results for the fields and , respectively , and the correlation function can be obtained is the normalized coefficient . if . if . 
by using ghz state , the family of simple proofs of bell s theorem without inequalitiescan be obtained , which is different from the criterion of chsh inequality .the sign of the correlation function can be also treated as the criterion , such as the negative correlation for nonlocal and the positive correlation for local when .we also obtain the negative correlation using eq .( [ 47 ] ) .the results are similar to the quantum case of ghz states .further , the optical analogy to ghz state could be generalized to the case of particles . by preparing optical fields similar to eq .( [ 35 ] ) and cyclically exchanging the modes , the fields can be obtained as follows the rpss satisfy .we can obtain the correlation function are the local measurement results of the optical fields , and is the normalized coefficient . using the same notion , we can obtain optical analogy results of other quantum entanglement states . it should be pointed out that the phase randomness provided by ppss is different from the case of quantum mixed states .quantum mixed states result from decoherence and all coherent superposition items ( non - diagonal terms ) disappear . in contract to the decoherence , some coherent superposition items remain in the optical analogy state due to the constraints of the rpss , such as for the analogies to bell states and ghz state , respectively .these remaining items make it possible to imitate quantum entangled pure states . herewe numerically simulate the optical analogy to quantum entanglement by using the software optisystem .first we propose the scheme to produce the product state of eq .( [ 35 ] ) shown in fig .[ fig6 ] . in the scheme , we choose two sequences and from the set to modulate the optical fields , and obtain and denote the amplitudes of two orthogonal polarization modes and , respectively . : polarization rotators.,width=251,height=152 ] after mode exchanged as shown in fig .[ fig7 ] , the optical fields can be written as follows numerical simulation scheme of the correlation measurement is shown in fig .[ fig7a ] .first , two modes of and are modulated with phase differences and , respectively .the fields can be written as follows the fields and are split two beams by the polarization beamsplitters at angles and input the photodetectors and , respectively .the differential signals of photodetectors are proportional to are the output electric signals of the photodetectors , assumed .then we calculate the correlation function , where is the normalization coefficient . finally , using the software optisystem, we obtain the numerical results of the normalized correlation function as shown in fig .[ fig8 ] ( detailed simulation data and optisystem models will be provided in the supplementary material ) . by substituting the above correlation results into bell inequality ( chsh inequality ) : where and are and , respectively . and are the modulators to produce phase differences between two polarization modes ; pbs and pbs polarization beam splitters at and respectively ; pr@ : polarization rotators.,width=433,height=199 ] we have discussed the coherent demodulation process in sec . [ sec ii.b ] . 
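The CHSH evaluation just described reduces to a few lines once the correlation function is in hand. A schematic check, assuming the $\cos 2(\theta_1-\theta_2)$ form of the polarization correlation derived above; the analyzer angles below are the standard CHSH-optimal choice, an assumption rather than values taken from the paper:

```python
import numpy as np

def E(t1, t2):
    # Correlation of the Bell-imitation fields; the cos(2*delta) form is
    # assumed, matching polarization correlations measured with
    # polarizing beam splitters rotated by t1 and t2.
    return np.cos(2 * (t1 - t2))

# Standard CHSH-optimal analyzer angles (an assumption).
a, ap = 0.0, np.pi / 4
b, bp = np.pi / 8, 3 * np.pi / 8

S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print("CHSH S =", S)   # 2*sqrt(2) ~ 2.828 > 2
```

This gives $S = 2\sqrt{2} > 2$, the maximal violation quoted above.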
herewe discuss how to imitate quantum states with the help of the coherent demodulation .first , we consider the general form of optical fields modulated with ppss chosen from the set , and the states can be expressed as eq .( [ 20 ] ) .it is noteworthy that although multiple ppss are superimposed on two orthogonal modes of the fields , all of the ppss can be demodulated and discriminated by respectively performing the coherent demodulations on the two orthogonal modes , which has already been verified by many actual communication systems .now we propose a scheme , as shown in fig .[ fig9a ] , to perform the coherent demodulation introduced in sec .[ sec ii.b ] . in the scheme ,the coherent demodulations are performed on each so field and lo field modulated with reference ppss .thus a mode status matrix , as shown in fig .[ fig9 ] , can be obtained by performing coherent demodulations on each so field , where are the mode status of the orthogonal polarization modes and , respectively . for better fault tolerance , the mode status output discrete values through the threshold discrimination and binarization of the measurement results . of the matrix , each element represents the mode status of the optical field when the reference pps is , and takes one of four possible discrete values : or , denoting that exists only mode , only mode , both and , neither nor , respectively .thus we obtain a one - to - one correspondence relationship between the optical fields and the matrix .thus we consider the matrix as a bridge to connect the optical fields and the quantum states . :multipliers and : integrators ( integrate over entire sequence period).,width=347,height=214 ] .,width=454,height=177 ] now we discuss how to construct the imitation states based on the matrix . in the subsection [ sec iii.a.3 ] , we propose the scpm to reduce the redundancy of sequential permutation . herewe apply the scpm to imitate quantum states based on the matrix . in order to clearly present the scpm in the matrix , as shown in fig .[ fig10 ] , the matrix elements belonging to the same permutation are labeled with the same color , such as the red color corresponding to , the blue color corresponding to , etc . considering each permutation corresponding to equivalent direct product decompositions ,thus the imitated quantum state must be a direct product of the elements belonging to the same and then superposition of equivalent direct products corresponding to all permutations .therefore we obtain is the mode discrete status obtained from the matrix .related to the optical field and the reference pps , in which the mode status with the same color for the same sequence permutation.,width=373,height=349 ] it is noteworthy that the scpm to reduce the redundancy is one of the feasible ways to imitate quantum states based on the matrix .other mechanisms might also work , as long as a sequential ergodic ensemble can be obtained . in order to prove the feasibility of the scheme , in the next subsection, we will discuss the imitations of several typical quantum states , including the product states , bell states , ghz states and w states . 
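A small sketch of the readout logic just described may be helpful. The encoding of the four discrete mode statuses and the example matrices are illustrative assumptions; the cyclic-shift rule implements the SCPM pairing of field $i$ with reference PPS $i+m \pmod{n}$:

```python
from itertools import product

# Mode status matrix: status[i][n] is the demodulation result of field i
# against reference PPS n. '0' = only the H mode, '1' = only the V mode,
# '01' = both, '' = neither (the four discrete values of the text).
bell_status = [
    ['0', '1'],   # field 1: H on lambda1, V on lambda2
    ['1', '0'],   # field 2: V on lambda1, H on lambda2
]

def scpm_readout(status):
    """Read out the imitation state via sequential cyclic permutations:
    for each cyclic shift m, pair field i with reference PPS (i+m) mod n
    and form the direct product of the resulting mode statuses."""
    n = len(status)
    terms = set()
    for m in range(n):
        picks = [status[i][(i + m) % n] for i in range(n)]
        if any(p == '' for p in picks):
            continue  # this permutation contributes nothing
        for combo in product(*picks):  # expand '01' entries
            terms.add('|' + ''.join(combo) + '>')
    return sorted(terms)

print(scpm_readout(bell_status))   # ['|00>', '|11>']

ghz_status = [['0', '1', ''],
              ['', '0', '1'],
              ['1', '', '0']]
print(scpm_readout(ghz_status))    # ['|000>', '|111>']
```

For the Bell-type matrix only the terms $\left\vert 00\right\rangle$ and $\left\vert 11\right\rangle$ survive, and for the GHZ-type matrix one cyclic selection contributes nothing, just as described for the imitation states below.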
in this subsection, we discuss optical analogies to several typical quantum states and construct their imitation states applying the scheme proposed in the last subsection , including the product state , bell states , ghz state and w state .[ [ the - product - state ] ] the product state + + + + + + + + + + + + + + + + + first , we discuss the optical analogy to the product state of qubit .the optical fields are shown as follows employing the scheme as shown in fig .[ fig10 ] , we obtain the matrix demonstrates that each optical field is the superposition of two orthogonal modes and no entanglement is involved . according to eq .( [ 53 ] ) , we obtain the imitation state as follow is same as the quantum product state expect a normalization factor .[ [ bell - states ] ] bell states + + + + + + + + + + + now we discuss the optical analogy to one of the four bell states , which contains two optical fields as eq .( [ 42 ] ) . by employing the scheme as shown in fig .[ fig10 ] , we obtain the matrix to the scpm , we obtain that and .based on the matrix , for the selection of , we obtain the modes of the fields are and ; for the selection of , we obtain the modes are and . if we randomly choose or , we can randomly obtain the mode status of or , which is similar to quantum measurements for the bell state . according to eq .( [ 53 ] ) , we obtain the imitation state as follow is same as the bell state expect a normalization factor .we discuss the optical analogy to another bell state , which contains two optical fields as eq .( [ 45b ] ) .we then obtain the matrix we randomly choose or , we can also randomly obtain the mode results or , which is similar to quantum measurement for the bell state . we can obtain the imitation state is same as the bell state expect a normalization factor . [ [ ghz - state ] ] ghz state + + + + + + + + + for tripartite systems there are only two different classes of genuine tripartite entanglement , the ghz class and the w class .first we discuss the optical analogy to ghz state , which contains three optical fields as eq .( [ 46 ] ) , where the superscripts of the ppss become .we obtain the matrix to the scpm , we obtain that , and . based on the matrix , for the selection of , we obtain the mode status are ; for the selection of , we obtain the mode status are ; for the selection of , we obtain nothing .thus we can obtain the imitation state for quantum particles , we can obtain the analogy to ghz state , which contains optical fields as eq .( [ 47a ] ) . performing the scheme as shown in fig .[ fig10 ] , we obtain the matrix we can easily obtain the imitation state same as ghz state expect a normalization factor . in the following ,we discuss any unitary transformation states of ghz state . assuming the unitary transformation simplified from eq .( [ 13 ] ) , employ it on ghz state of quantum particles and obtain as follow .\label{70a}\]]similarly , we employ the transformation on the optical analogy fields as follows obtain the matrix the imitation state , we can obtain the result completely similar to quantum states . 
for a simple case , we discuss the not unitary transformation as follows the transformed imitation state can be expressed as follow to quantum computation , we can obtain a new state by using only one step instead of steps as classical computation .this is a simple example to demonstrate parallel computing capability .[ [ w - state ] ] w state + + + + + + + now we discuss the optical analogy to w state , which contains three optical fields as follows the same scheme in fig .[ fig10 ] , we obtain the matrix to the scpm , we use , and again . based on the matrix , we obtain the mode status of , , for the selection of , , , respectively .we find an interesting fact that if we need to lock the mode status for the first field , must be selected .this will lead to the mode status must be obtained from the other two fields .otherwise if the state status of the first field is obtain , or can be selected .this will lead to the other two fields are still in the state similar to bell state .this fact is quite similar to the case of quantum measurement and the collapse phenomenon for w state in quantum mechanics .we obtain the imitation state as follow for quantum particles , we can obtain the analogy to w state , which contains optical fields as follows we discuss a transformed state of w state . applying the not unitary transformation to , we can obtain the transformed optical fields as follows the transformed imitation state can be expressed as to quantum computation , we can obtain a new state by using only one step instead of steps as classical computation .this is another example to demonstrate parallel computing capability . in last subsection, we demonstrate the optical analogies of quantum states , which are the product state , bell states , ghz state and w state . in this subsection , we discuss numerical simulations of two optical analogies using the software optisystem . to construct the product state , we first choose three ppss and to modulate the optical fields , and obtain to eq .( [ 46 ] ) , the optical analogies to ghz state can be written as follows can be realized by mode exchange of the produce state by using polarization beam splitters , as shown in fig .. then we can express the optical analogies to w state as follows can be realized by using the beam coupler and splitter , as shown in fig .[ fig12 ] . : polarization rotators.,width=338,height=166 ] : polarization rotators , pr@ : polarization rotators.,width=339,height=170 ] further , we make use of the coherent demodulation method mentioned in section [ sec ii ] to obtain the matrix . because each field of the imitations of ghz state and w state has two orthogonal polarization modes , the coherent demodulation scheme need two lo fields with same orthogonal modes as the so fields , as shown in fig .[ fig9a ] . by using the software optisystem, we construct the numerical simulation model as shown in fig .fig13 for the coherent demodulation as mensioned in fig .[ fig9a ] . in fig .[ fig14 ] , the electric signals of pds are shown when the so field is of eq .( [ 81 ] ) and the lo fields are modulated with ppss , , respectively . finally , by performing correlation analysis as mentioned in section [ sec ii.b ] , we can obtain the results for the three fields of eq .( [ 81 ] ) , as shown in fig .fig15 . after disposing of constant and normalization, we can express the measurement result as the matrix mentioned in eq .( [ 66 ] ) . for the imitations of w state , the same results are shown in fig . [ fig16 ] and fig . [ fig17 ] . 
After subtracting the constant part and normalizing, we can likewise express the result as the matrix mentioned in eq. ([75]).

[Figure: numerical simulation model of the coherent demodulation, where PBS: polarization beam splitters and BC: beam couplers.]
[Figure: electric signals of the photodetectors for the GHZ imitation state with LO fields modulated with the three PPSs; panels (a) and (b) represent the two orthogonal modes.]
[Figure: correlation results for the three fields of the GHZ imitation state against the LO sequences.]
[Figure: electric signals of the photodetectors for the W imitation state with LO fields modulated with the three PPSs; panels (a) and (b) represent the two orthogonal modes.]
[Figure: correlation results for the three fields of the W imitation state against the LO sequences.]

In this section, we propose a gate array model to imitate quantum computation. In quantum computation, any quantum state can be obtained from an initial state by a gate array built from the universal CNOT gate and single-qubit gates. Similarly, we can construct gate array models that produce imitations of all kinds of quantum states, such as the GHZ and W states, and even very sophisticated states like the output of Shor's algorithm. We consider that such gate array models can be employed as imitations of quantum computation. Based on this understanding, we construct gate array models to imitate Shor's algorithm, Grover's algorithm and the quantum Fourier algorithm. In ref. , a construction pathway for the imitation states is shown. Here we use the same model, as shown in fig. [fig18]; however, the gate array model does not always realize unitary transformations, unlike quantum computation. We now discuss some basic units of the gate array model besides the unitary transformation mentioned in eq. ([12]) and the mode exchanger shown in fig. [fig7].

(1) Combiner and splitter. Unlike a quantum state, an optical field can conveniently be combined and split by an optical coupler/splitter device, whose principles are discussed in sec. [sec ii.c]. The two basic devices are shown in fig. [fig19] (a) and (b), respectively.

(2) Mode control gates. Further, we define four kinds of mode control gates as selective mode-transit devices with one input and one output, as shown in fig. [fig20].

We now discuss one of the basic structures of gate array models. According to sec. [sec iii], we can in principle imitate all quantum states by using the SCPM. Similar to a field programmable gate array (FPGA), we propose a simple gate array structure satisfying the SCPM. A gate array built from these basic units can transform an input state so as to realize a given sequential cycle permutation; it is easy to see that realizing a sequential cycle permutation requires a corresponding minimum number of combiner devices and control gates. We believe that many imitation states can be constructed with this structure, which will be strictly proved in a future paper. Finally, we illustrate two gate array models that transform the product state into the GHZ state and the W state, as shown in fig. [fig22].

Shor's algorithm is the most famous quantum algorithm for integer factorization, running in polynomial time on a quantum computer [shor]. Specifically, it takes time and quantum gates of polynomial order (using fast multiplication), demonstrating that the integer factorization problem can be solved efficiently on a quantum computer and is thus in the complexity class of bounded-error quantum polynomial time (BQP).
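Before turning to the optical imitation of Shor's algorithm, we pin down the mode control gates introduced above with a minimal sketch. The particular assignment, transmit both modes, transmit only $H$, transmit only $V$, or block both, is an assumption chosen to mirror the four discrete mode statuses of the demodulation matrix, not a definition taken from the paper:

```python
import numpy as np

# A field is a 2-vector of complex amplitudes on the orthogonal modes
# (H, V). Each mode control gate is modeled as a selective mode-transit
# element acting diagonally on those amplitudes (an assumed reading).
GATES = {
    "pass_both": np.diag([1, 1]),
    "pass_H":    np.diag([1, 0]),   # transmits H, blocks V
    "pass_V":    np.diag([0, 1]),   # transmits V, blocks H
    "block":     np.diag([0, 0]),
}

def apply_gate(name, field):
    return GATES[name] @ field

field = np.array([1.0 + 0j, 1.0 + 0j]) / np.sqrt(2)  # H + V superposition
for name in GATES:
    print(name, "->", apply_gate(name, field))
```

Note that three of these four gates are non-unitary, consistent with the remark above that the gate array model does not always realize unitary transformations.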
In this section, we propose a gate array model, employing optical fields, that produces optical analogies imitating the result state of Shor's algorithm for factoring a small integer $N$. First, we choose a random number $a$ coprime with $N$ and define the function $f(x) = a^x \bmod N$; the key step of Shor's algorithm is to obtain the period $r$ satisfying $a^r \equiv 1 \pmod{N}$. To construct $f$, we prepare optical fields modulated with PPSs and express the product state as in eq. ([88]). Further, we construct the gate array model shown in fig. [fig23]. After passing through the gate array, the optical fields take the forms given above. Using coherent demodulation we obtain the mode status matrix, and with the scheme mentioned in sec. [sec iii.c] we obtain the imitated state, in which the argument $x$ is represented by one group of optical fields and the value $f(x)$ by the remaining fields. There are four kinds of superposition terms, classified by the values carried by the last four fields in the imitation state, which means the period of $f$ can be read off directly. It is worth noting that, different from quantum computing, we might obtain the expected period of $f$ without performing a quantum Fourier transformation, at least for relatively small integer factorizations. The remaining task is much easier: from the period the factors follow by computing greatest common divisors. After further research on the relation between the factorized integer and the gate array, we believe this might become a true scheme imitating quantum Shor's algorithm.

Now the computational cost of the model is analyzed. The operation steps needed to obtain the imitation state include beam splitting operations, polarization mode operations and beam coupling operations. The number of operations grows linearly with the number of optical fields; considering that each PPS has a fixed number of phase units, the total number of operations scales accordingly, so the model takes time and gates of polynomial order.

In this subsection, we discuss the numerical simulation of the gate model of fig. [fig23] using the software OptiSystem. The schematic diagram of the numerical simulation is shown in fig. [fig24]. First, the initial state is prepared by the method shown in fig. [fig6]: the optical fields are modulated with their PPSs, the polarization of each field is rotated, and the fields evolve into the states of eq. ([88]). After numerically simulating the complex gate array, we obtain the output fields, and the electric signals of the photodetectors after interference between each field and the LO fields. Finally, the correlation results are obtained, subtracted by the constant part and normalized, as shown in fig. [fig25]. After threshold discrimination and binarization of the results, we can express the measurement results as the matrix of eq. ([91]) and the imitated state of eq. ([92]). (Detailed numerical calculations, derivations and OptiSystem models will be provided in the supplementary materials.) Using the SCPM mentioned in sec. [sec iii.c], we can obtain the imitated state shown in eq. ([90]).
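For orientation, the classical post-processing that accompanies the period read off the imitation state goes as follows; the modulus $N = 15$ and base $a = 7$ are a hypothetical running example, not necessarily the values used in the paper.

```python
from math import gcd

def find_period(a, N):
    """Classically find the order r of a modulo N (the quantity the
    optical gate array encodes in the imitation state)."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

# Hypothetical running example: factor N = 15 with the coprime base a = 7.
N, a = 15, 7
r = find_period(a, N)            # r = 4, an even period
f1 = gcd(a ** (r // 2) - 1, N)   # gcd(48, 15) = 3
f2 = gcd(a ** (r // 2) + 1, N)   # gcd(50, 15) = 5
print(f"period r = {r}; factors: {f1} x {f2} = {N}")
```

Once the period $r$ is even and $a^{r/2} \not\equiv -1 \pmod{N}$, the two gcds deliver the nontrivial factors.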
[Figure: schematic diagram of the numerical simulation of the Shor gate array using the software OptiSystem.]
[Figure: correlation results, panels (a) and (b) for the two orthogonal modes, for the output optical fields against the reference PPSs.]

Grover's algorithm is the quantum algorithm for searching an unsorted database of $N$ entries in $O(\sqrt{N})$ time and $O(\log N)$ storage space, faster than any classical computation. In fact, its time complexity is asymptotically the fastest possible for searching an unsorted database in the linear quantum model; however, it provides only a quadratic rather than an exponential speedup over its classical counterparts. There are two key requirements for searching an unsorted database: (1) encoding the data into the superposition states of qubits to form a database; (2) verifying the existence of a specified number in the superposition state by measurement. In quantum computation, the first condition is very easy to satisfy, but the second is difficult to achieve, because the collapse of the superposition state to a specified state is completely uncontrollable in a quantum measurement. In Grover's algorithm, a specified transformation is applied repeatedly to the superposition state to increase the probability of the specified state; after $O(\sqrt{N})$ repetitions the probability of measuring it is close to 1.

We now discuss the optical imitation of Grover's algorithm. In our scheme, we use optical fields modulated with PPSs, and in principle we can encode any data as a superposition state of the optical fields to form a database. Unlike quantum computation, we can control the superposition state to output any specified state by using a mode control gate array related to the specified state. Let the superposition state of the optical fields encode the data; for example, we take a superposition state of random numbers, choose optical fields modulated with PPSs and, after passing them through a suitable gate array, obtain the corresponding form. Owing to the SCPM of sec. [sec iii], any imitated state must correspond to a certain sequential cycle permutation. Therefore, the problem of determining whether a given number exists in the database becomes that of searching for the corresponding sequential cycle permutation. For example, to search whether a given number is in the database, the optical fields of the state first pass through the gate array controlled by that number, as shown in the figure below. We then obtain the mode status matrix by coherent demodulation, and it is easy to search for the corresponding sequence permutation within operation steps. If the chosen number is absent, the resulting matrix contains no corresponding sequential cycle permutation, and we conclude that the number does not exist in the database. Different from Grover's algorithm, this algorithm searches an unsorted database with entries in operation steps and using storage space.

[Figure: the gate array, controlled by the searched number, used to test membership in the database.]

The quantum Fourier algorithm is one of the most important tools of quantum computation, and one of the algorithms that can bring about an exponential speedup. Shor's algorithm, the hidden subgroup problem and the solving of systems of linear equations all make use of the quantum Fourier algorithm. The quantum Fourier algorithm exploits the superposition of quantum states, whereby the time and space required for the computation can be notably reduced. Hence, the implementation of the quantum Fourier algorithm is crucial to exponential speedup in quantum computation.
in this section ,we first propose an optical fourier algorithm to imitate quantum fourier algorithm , then investigate the required computational resources , and at last demonstrate the algorithm applying to three optical fields as examples to verify its feasibility .generally , quantum fourier transform takes as input a vector of complex numbers , , and output a new vector of complex numbers as follow this calculation involves the additions and multiplications of complex numbers , leading to an increase of computational complexity with the increase of the number of vector components .classically , the most effective algorithm , fast fourier transform is in time . on the contrary, the quantum fourier transform can be defined as a unitary transformation on qubits , which is , the quantum fourier transform of arbitrary state can be expressed as\left\vert k\right\rangle , \nonumber\end{aligned}\]]where .then , we expand into the coefficients satisfy the following equation in quantum fourier transform , after the hadamard gate and controlled - phase gate , we can obtain the final state of quantum fourier transform are hadamard gates and controlled - phase gates on qubit registers , which means the quantum fourier transform takes basic gate operations .nevertheless , the quantum fourier transform can not output precise result of final states directly , but the probability of every state by repeated measurements , which can output the final result of fourier transform at a certain accuracy .as mentioned in sec .[ sec iii ] , a general form of for fields can be constructed from eq .( [ 20 ] ) by using a gate array model, .then , the formal product state eq .( [ 21 ] ) can be written as further , we can obtain each item of the superposition of as follows formal product state is expressed as follow fourier transform , this state evolves into according to the definition eq .( [ 99 ] ) , the relation between the coefficients and of these two states have to be satisfied as eq .( [ 103 ] ) . to obtain the relation between these coefficient ,we design the following algortithm : \(1 ) selected a basis state of ; \(2 ) applying the following controlled - phase transformation on every field of according to the specific value of bits in the selected basis state , we obtain \(3 ) applying hadamard gates on these fields , we obtain \(4 ) applying the mode control gates on these fields according to the specific values in , the mode of every field is identical to the corresponding value in , e.g. , if , becomes , otherwise if , becomes , and so on ; \(5 ) applying the coherent demodulation on these fields and obtain the matrix , we can obtain the corresponding coefficient using the method mentioned in sec .[ sec iii.c ] .the above algorithm can be summarized as the following block diagram in fig .[ fig27 ] . at last, we can analysis the computational complexity : there are optical fields in after controlled - phase gates , hadamard gates , mode selection operations and finally measurements in the coherent demodulation .hence , the total number of operations is in , which is the same as that in quantum fourier algorithm .however , the result obtained in the optical algorithm is with certainty values but not with probability like quantum fourier algorithm . in the phase ensemble, we utilize the characteristic of pps to define the eadm as mentioned in sec .[ sec iii.a.2 ] . 
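Before turning to the ensemble-averaged form of the transform, a compact numerical reference for the target unitary may be useful. The sketch builds the $N \times N$ Fourier matrix directly and applies it to a GHZ-type amplitude vector (an illustrative input):

```python
import numpy as np

def fourier_matrix(n_qubits):
    """The unitary of the quantum Fourier transform on n qubits:
    F[j, k] = omega^(j*k) / sqrt(N), with omega = exp(2*pi*i / N)."""
    N = 2 ** n_qubits
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

F = fourier_matrix(3)                            # 8 x 8, three fields/qubits
print(np.allclose(F.conj().T @ F, np.eye(8)))    # unitarity check: True

# Transform of the GHZ-type amplitude vector (|000> + |111>)/sqrt(2).
psi = np.zeros(8, complex)
psi[0] = psi[7] = 1 / np.sqrt(2)
print(np.round(F @ psi, 3))
```

The five-step optical algorithm above is meant to recover exactly these coefficients, one basis state at a time, via the controlled-phase and Hadamard operations followed by coherent demodulation.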
In the phase ensemble, we utilize the characteristic of PPS to define the EADM as mentioned in Sec. [sec iii.a.2]. As mentioned in Sec. [sec iii.c], the imitation states can be constructed by using the SCPM corresponding to the minimum complete phase ensemble. Actually, the imitation states can be defined as the ensemble-averaged state given by the sum over all PPSs used for the optical fields. We then discuss the Fourier transform of the ensemble-averaged states. From Eq. ([103]) we obtain the conditions that the coefficients of the Fourier transform satisfy; in these equations the combinations of phase factors satisfy the relations of Eq. ([121]). Obviously, these terms also satisfy the balance property of PPS. Hence, the ensemble-averaged state can also be used in the Fourier transform, and we can obtain its Fourier transform directly. Last, we show the equivalence of the ensemble-averaged states under the Fourier transform with three-field examples.

According to Sec. [sec iii.c.1], three optical fields with PPSs are required to implement the imitations of quantum states consisting of three particles. Modulated with these PPSs, the three optical fields can be expressed explicitly, and using the gate array models mentioned in Sec. [sec iv.a] we can obtain arbitrary quantum states. According to the algorithm in Sec. [sec iv.c.3], we proceed as follows:

(1) apply controlled-phase gates on the three optical fields respectively;

(2) apply the Hadamard transformation;

(3) calculate the coefficients for the four combinations of the selected bit values, obtaining the corresponding coefficients in each case.

At last, we obtain the transform matrix of all coefficients; the result is completely analogous to the quantum Fourier algorithm. We now apply the above algorithm to some imitation states of three optical fields as examples:

(1) The product state. In quantum mechanics, the product state of three particles is the simplest case. We can express the three fields as in Eq. ([123]), up to a normalization constant. According to the definition of the ensemble-averaged state, Eq. ([119]), we obtain the imitation state. Using the above algorithm, we can easily obtain the Fourier transform coefficients, while all the other terms vanish. The ensemble-averaged state we then obtain is identical to the quantum Fourier transform of the product state, up to the normalization constant.

(2) The GHZ state. In quantum mechanics, the GHZ state is the maximally entangled state of three particles. According to Sec. [sec iii.c.1], we can obtain the corresponding form of the three optical fields, and the formal product state can be written out. According to the definition Eq. ([119]), we obtain the ensemble-averaged state; up to a normalization constant and an overall phase factor, the state is identical to the GHZ state.
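Both three-field examples can be checked against a direct matrix computation. The sketch below, assuming the standard transform convention and integer labels for the three bits, verifies the expected transforms of the product and GHZ states:

```python
import numpy as np

# Check the three-field examples: Fourier transform of the product state
# and of the GHZ state, with integer label |j> encoding the basis |j2 j1 j0>.
N = 8
j, k = np.meshgrid(np.arange(N), np.arange(N))
F = np.exp(2j*np.pi*j*k/N) / np.sqrt(N)

product = np.zeros(N); product[0] = 1.0          # |000>
ghz = np.zeros(N); ghz[[0, 7]] = 1/np.sqrt(2)    # (|000> + |111>)/sqrt(2)

print(np.round(F @ product, 3))   # uniform superposition: all 1/sqrt(8)
print(np.round(F @ ghz, 3))       # nonzero only where 1 + omega**(7k) != 0
```

The ensemble-averaged coefficients of the optical algorithm agree with these values up to the normalization constants and overall phases noted in the text.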
Using the above algorithm, we can easily obtain the Fourier transform coefficients; according to the definition Eq. ([119]), we then obtain the ensemble-averaged state. In conclusion, the resulting state is the Fourier transform of the imitation of the GHZ state.

(3) The W state. In quantum mechanics, the W state is the most robust entangled state. According to Sec. [sec iii.c.1], we can obtain the expressions of the three fields. The corresponding formal product state is a superposition of the groups of basis states $\left\vert 100\right\rangle+\left\vert 010\right\rangle+\left\vert 001\right\rangle$, $\left\vert 011\right\rangle+\left\vert 110\right\rangle+\left\vert 101\right\rangle$, $\left\vert 111\right\rangle$ and $\left\vert 000\right\rangle$, weighted by combinations of phase factors of the form $e^{i(\lambda^{(j)}-\lambda^{(k)})}$ and $e^{i(2\lambda^{(j)}-\lambda^{(k)}-\lambda^{(l)})}$ built from the three PPSs $\lambda^{(1)},\lambda^{(2)},\lambda^{(3)}$. According to the definition Eq. ([119]), we obtain the ensemble-averaged state; up to a normalization constant and an overall phase factor, the state is identical to the W state. Using the above algorithm, we can obtain the eight Fourier transform coefficients; each is a lengthy combination of terms of the form $e^{i(2\lambda^{(j)}+\lambda^{(k)})}$ and $e^{3i\lambda^{(j)}}$ weighted by powers of $\omega$. According to the definition Eq. ([119]), we obtain the ensemble-averaged state; in conclusion, it is also the Fourier transform of the imitation of the W state.

Based on the phase ensemble model, we have thus proposed an optical Fourier algorithm similar to the quantum Fourier algorithm. The computational resources required for this algorithm are in $O(n^2)$, also similar to the quantum Fourier algorithm, which means an exponential speedup compared with the classical Fourier algorithm.

In this paper, we have discussed a new approach to imitate quantum states using optical fields modulated with PPSs. We demonstrated that optical fields modulated with different PPSs can span a high-dimensional Hilbert space that contains a tensor product structure similar to that of quantum systems. It is noteworthy that a classical optical field is the most similar to a quantum state, especially for coherent superposition states. This is why the space spanned by optical fields can imitate a quantum system, whereas the space of probability distributions of classical coins, which also contains a tensor product structure, cannot. In this paper we have only built a simple framework for this approach; there are still many problems that need further study, such as the imitation forms of all quantum states, more general algorithms, universal unitary gates like CNOT, etc. It is particularly interesting to apply this approach to simulate higher-dimensional real quantum systems, such as qutrits, higher-dimensional Hilbert spaces, even quantum fields. The greatest benefit of this approach is that an arbitrary-dimensional Hilbert space can be provided using linearly growing resources. Finally, we look forward to verifying the feasibility of the approach through the relevant experiments. We believe the experiments are not difficult to achieve, because all the required technologies are already applied in mature optical communication systems.

I would like to thank those who have supported me for ten years, including Dr. Shuo Sun who took part in the discussion of Shor's algorithm, Dr. Xutai Ma who took part in the discussion of the gate array model, Prof. Xunkun Wu who helped me to revise the English, Prof. Wei Fang who took part in the discussion of the quantum Fourier algorithm, and Mr. Yongzheng Ye who took part in the discussion of the software OptiSystem.

S. L. Braunstein et al., Phys. Rev. Lett. 83, 1054 (1999); N. Linden and S. Popescu, Phys. Rev. Lett. 87, 047901 (2001); R. Jozsa et al., Proc. R. Soc. A 459, 2011 (2003); G. Vidal, Phys. Rev. Lett. 91, 147902 (2003). A. J. Viterbi, CDMA: Principles of Spread Spectrum Communication (Addison-Wesley Wireless Communications Series, Addison-Wesley, 1995). A. J. Viterbi, Principles of Coherent Communication (McGraw-Hill, 1966). P. Shor, in Proc. 35th Annu. Symp. on the Foundations of Computer Science (ed. Goldwasser, S.), 124-134 (IEEE Computer Society Press, Los Alamitos, California, 1994). P. Shor, SIAM J. Comput. 26, 1484 (1997).
We propose an optical parallel computation similar to quantum computation that can be realized by introducing pseudorandom phase sequences into classical optical fields with two orthogonal modes. Based on the pseudorandom phase sequences, we first propose a theoretical framework, the phase ensemble model, drawing on the concept of the quantum ensemble. Using the ensemble model, we further demonstrate an inseparability of the fields similar to quantum entanglement. It is interesting that the Hilbert space spanned by $n$ optical fields is larger than that spanned by $n$ quantum particles. This leads to a problem for our scheme that is not a lack of resources but a redundancy of resources. In order to reduce the redundancy, we propose a special sequence permutation mechanism to efficiently imitate certain quantum states, including the product state, Bell states, the GHZ state and the W state. For better fault tolerance, we further arrange for each orthogonal mode of the optical fields to be measured and assigned discrete values. Finally, we propose a generalized gate array model to imitate some quantum algorithms, such as Shor's algorithm, Grover's algorithm and the quantum Fourier algorithm. Research on optical parallel computation might be important, for it not only has potential beyond quantum computation, but also provides useful insights into fundamental concepts of quantum mechanics.
Mathematical models based on the experimentally measured biophysical properties of neurons generally consist of complicated sets of differential equations derived from the historical Hodgkin-Huxley (HH, 1952) model. Extending the HH formalism to branching neurons requires a large number of parameters that must be determined to obtain a realistic neuronal model. The techniques previously employed to measure these parameters involve either linear admittance (or impedance) measurements or ad hoc extrapolations from voltage clamp experiments with poor space clamp control. Thus, it is important to consider more refined theories from nonlinear analysis, such as the nonlinear dynamics of neurons or nonlinear system identification. A goal of nonlinear analysis is not just a refinement of the linear systems approach, but the development of a fundamental insight into how neurons process information. In a seminal paper, FitzHugh derived the equations of the nonlinear response for a single sinusoidal voltage clamp. This approach has been extended to the quadratic response for a multi-sinusoidal voltage clamp and developed as a matrix theory termed quadratic sinusoidal analysis (QSA). The proposed experimental approach is essentially based on QSA, requiring stimulus amplitudes that evoke mainly linear and quadratic responses. In addition, QSA provides the mathematical tools for a model-independent analysis of quadratic nonlinearities and an innovative way to quantitatively describe real neurons and their models. The measurement of nonlinearities in neurons under normal physiological conditions is clearly important in order to understand how they process synaptic inputs, which typically evoke 5-10 mV post-synaptic nonlinear responses. Two types of neurons of the rat prepositus hypoglossi nucleus (PHN) were investigated. Both types clearly manifest nonlinearities at multiple subthreshold step levels. The type D neurons are known to show marked spontaneous, voltage dependent and irregular oscillatory properties. By contrast, type B neurons, the majority in this nucleus, are non-oscillatory and have regular spontaneous activity that is highly dependent on a significant persistent sodium (gNaP) conductance. In this paper, the novel QSA method has been used to investigate the quadratic response to time varying voltage clamped stimuli and to establish a quantitative characterization of the nonlinear behavior, in order to understand neuronal responses elicited by normal physiological synaptic inputs. It will be shown that at physiological levels of stimulation, neurons and their models can generate significant responses at harmonic and interactive frequencies that are not present in the input signal. Thus, the nonlinear frequency responses contain more frequencies, over a wider frequency band, than the input signal.
As a consequence they provide significant amplification at dynamically changing membrane potentials. The use of stimuli with multiple input frequencies allows one to probe neuronal function and characterize it by a matrix of quadratic interactions, namely the QSA matrix. It is then possible to extract information about active membrane properties from this matrix by eigendecomposition. Finally, biologically realistic simulations have been implemented using neuronal models based on vestibular neuronal experimental data. These simulations suggest that the nonlinear responses in voltage clamp are dominated by active dendritic structures. This paper is both a theoretical and an experimental nonlinear approach to neuronal function that adds to previous steady state linear analyses. It provides a quantitative assessment of quadratic responses of both data recorded from individual neurons and their corresponding biophysical models.

Experiments were carried out on male Wistar rats (25- to 52-days-old) supplied by Centre d'Elevage Roger Janvier (Le Genest Saint Isle, France). All efforts were made to minimize animal suffering as well as the number of animals used. All experiments followed the guidelines on the ethical use of animals from the European Communities Council Directive of 24 November 1986 (86/609/EEC). Brain dissections were performed as described elsewhere (Idoux et al., 2008). Briefly, after decapitation under deep anesthesia, the brain was quickly removed and placed in ice-cold, phosphate/bicarbonate-buffered artificial cerebro-spinal fluid (ACSF), which included (in mM) 225 sucrose, 5 KCl, 1 NaH2PO4, 26 NaHCO3, 0.25 CaCl2, 1.3 MgCl2, 11 glucose, and was bubbled with 95% O2 - 5% CO2 (pH 7.4). Four or five 250 μm thick coronal slices containing the PHN were cut from the brainstem with a microslicer (Leica, Rueil-Malmaison, France) and transferred into an incubating vial filled with a regular ACSF containing (in mM) 124 NaCl, 5 KCl, 1 NaH2PO4, 26 NaHCO3, 2.5 CaCl2, 1.3 MgCl2, 11 glucose, bubbled with 95% O2 and 5% CO2 (pH 7.4). Slices were then placed one at a time in the recording chamber maintained at 32-34 °C, where the slice was superfused with regular ACSF at a constant flow rate of 3 ml min-1. Patch-clamp pipettes were pulled from borosilicate glass tubing to a resistance of 5-8 MΩ. The control internal solution contained (in mM) 140 K-gluconate, 2 MgCl2, 10 HEPES, 0.1 EGTA, 4 Na2ATP, and 0.4 NaGTP (adjusted to pH 7.3 with KOH). The junction potential for this internal solution was not subtracted for the potential measurements or the model simulations. PHN neurons were visualized with a Nomarski optic microscope under infrared illumination. Recordings were made with an Axoclamp 2B amplifier (Axon Instruments, Union City, CA, USA) or a Multiclamp 700B (Molecular Devices, Sunnyvale, CA, USA). The spontaneous discharge was first recorded in the current-clamp mode for 8 to 10 minutes once a stable level had been reached, and the recorded PHN neuron was classified as B or D type (see Idoux et al., 2008). PHN neurons that had a resting membrane potential more negative than -50 mV and a spike amplitude > 45 mV were selected for the voltage clamp experiments. Type D and B neurons from the prepositus hypoglossi nucleus were measured at different stimulus amplitudes and membrane potentials. Based on the criteria for time invariance discussed in the rationale section, five type D and six type B neurons were selected for detailed analysis. All measurements were made with stimuli applied for twice the duration used in the analysis; only the last half of the record was used, to ensure that a steady state condition was reached. At voltage clamp potentials near threshold, transient currents due to uncontrolled action potentials occasionally occurred in the non-analyzed initial part of the recording; however, they were completely inactivated by the maintained depolarization, with no firing during the latter, analyzed part of the record. In some experiments 25-50 μM NMDA (Sigma, St Quentin Fallavier, France) was applied in the presence or absence of 2 μM TTX (Tocris, Bristol, UK). The data acquisition was done with a PC-compatible computer running Windows XP, using Matlab scripts (Matlab 7.0, MathWorks, Natick, MA, USA). Recordings were low-pass filtered at 2 kHz and digitized at 5 kHz (BNC-2090 + PCI-6052E, National Instruments, Austin, TX, USA). The one-sided p-value of MeanTest[x-y] for paired differences, mean values and standard deviations (SD) were calculated with the HypothesisTesting package of Mathematica 7.0 (Wolfram Research, Champaign, IL, USA).

PHN neurons were analyzed with QSA, which specifically selects harmonic and intermodulation frequencies, as follows. If a double sinusoidal input has frequencies $f_1$ and $f_2$, then the linear response will have exactly the same frequencies $f_1$ and $f_2$. However, the quadratic response will include the additional harmonics $2f_1$ and $2f_2$ as well as the intermodulation products $f_1+f_2$ and $|f_1-f_2|$. This principle can be generalized to a multi-sinusoidal input, in which case the quadratic response will include twice each input frequency as well as the sum and difference of each pair of distinct input frequencies. A quadratic response can generate frequency overlaps when distinct combinations of input frequencies generate the same output frequency. For instance, a multi-sinusoidal input with frequencies 1, 2, 3, 4 (in Hertz) would generate many frequency overlaps, such as $1+4=2+3=5$ Hz, and so on. In the presence of frequency overlaps it is not possible to unambiguously measure the nonlinear frequency interactions. In the previous example, the measurement at 5 Hz is ambiguous because one is unable to distinguish between the contributions of $1+4$ and $2+3$. In order to avoid this problem, the QSA was used with a flexible algorithm generating incommensurable frequencies. This approach is based on a practical measurement technique, namely harmonic probing of Volterra kernels. Since harmonic and intermodulation responses also exist for nonlinearities of higher degrees, for example third order intermodulation products, it is important to ensure that the neurons mainly manifest quadratic nonlinearities, otherwise the results would be significantly contaminated.
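The appearance of these extra spectral lines, and the higher-order contamination just mentioned, can be illustrated with a memoryless polynomial nonlinearity. The sketch below is purely illustrative: the frequencies and coefficients are synthetic choices, not fitted to any neuron.

```python
import numpy as np

# Two-tone probe of a memoryless nonlinearity y = a*x + b*x^2 + c*x^3.
# The quadratic term creates lines at 2*f1, 2*f2, f1+f2, |f1-f2|;
# the cubic term adds further products (e.g. 2*f1-f2, 3*f1).
fs, T = 5000.0, 4.0                      # sampling rate (Hz), duration (s)
t = np.arange(0, T, 1/fs)
f1, f2 = 2.0, 10.25                      # two stimulus frequencies
x = np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t)
y = 1.0*x + 0.2*x**2 + 0.02*x**3

Y = np.fft.rfft(y) / len(y)
freqs = np.fft.rfftfreq(len(y), 1/fs)
peaks = freqs[np.abs(Y) > 1e-3]
print(peaks[peaks < 40])   # DC, f1, f2, 2f1, 2f2, f1+/-f2, plus cubic products
```

The cubic products at $2f_1-f_2$ and $2f_1+f_2$ illustrate exactly the contamination that small stimulus amplitudes are meant to suppress.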
For this, a necessary but not sufficient condition consists of using relatively small stimulus amplitudes, such that only linear and quadratic responses are significant. This approach was used in our previous piecewise linear analysis, hence the term piecewise quadratic analysis can be used to describe the QSA extension. The influence of the input amplitude on the harmonic response has been investigated previously. In order to ensure that the stimulus amplitudes were sufficient to overcome spontaneous noise and avoid significant higher order responses, several algorithms described elsewhere have been implemented in Matlab to verify that the experimental traces are time invariant for both linear and quadratic outputs, and that the signal can be adequately reconstructed by quadratic analysis.

It will be shown that the oscillatory type D neurons of the PHN have quadratic responses over a range of subthreshold membrane potentials, namely they convert limited amplitude and bandwidth input signals into wider bandwidth and more complex output responses, as mentioned above for nonlinear responses. Under normal physiological conditions of current clamp, a depolarization activates the gNaP conductance, which in turn increases the impedance and consequently reduces the electrotonic length. Type B neurons show similar effects and, in addition, have a significant increase in electrotonic length at hyperpolarized membrane potentials because of the activation of a hyperpolarization-activated conductance. Voltage clamp experiments were done to partially control the oscillatory and bistable responses of PHN neurons in order to analyze the nonlinear membrane properties of both the somatic and dendritic regions of these neurons. Indeed, an important advantage of a quantitative voltage clamp analysis of central neurons is the exploitation of the space clamp problem as a way to separate somatic and dendritic responses: the voltage clamp current measured from the voltage controlled soma is generally dominated by the unclamped voltage responses of the dendrites. Thus, this current can be used as a measure of both linear and nonlinear dendritic potential responses while the somatic membrane potential is voltage clamped. These additional currents can flow because of a potential difference between the soma and the rest of the dendrite; in fact, they reflect the behavior of the dendritic membrane potential, which can be taken into account with multi-compartmental models. Previous voltage clamp measurements have been done using signals of small amplitude to obtain steady state linear responses at different membrane potentials. These measurements were done over a potential range to obtain a quantitative description of the voltage dependent conductances and have allowed the construction of neuronal multi-compartmental models of both type B and D PHN neurons. These models allow an estimation of both somatic and dendritic membrane properties from somatic voltage clamp experiments that probe the soma and all regions of the dendritic structure. When the stimulus amplitudes are sufficiently small to elicit linear responses, both the voltage clamp and the current clamp generate equivalent linear results; however, this is not the case for nonlinear responses. The present series of voltage clamp experiments is a quadratic extension of the steady state linear analysis. Voltage clamped neurons show two kinds of nonlinearities: first, space clamped somatic ionic currents, and second, ionic currents in the soma due to an unclamped dendritic membrane. However, in a similar current clamp experiment, the nonlinear behaviors measured from the soma are caused by the voltage responses of both the somatic and the dendritic membranes. Due to this asymmetry between voltage clamp and current clamp, there is no obvious way to predict the voltage clamp nonlinear response from the current clamp nonlinear response, nor the converse. It is well known in the linear case that the admittance from voltage clamp, $Y(f)$, is the inverse of the impedance from current clamp, that is $Y(f)=I(f)/V(f)=1/Z(f)$, where $I(f)$ and $V(f)$ refer to the Fourier transforms of I and V, respectively. In the nonlinear case, with nonoverlapping frequencies, the quadratic response for the voltage clamp is defined by QSA as a matrix $B_{vc}$ relating pairs of voltage components to the current response, and for the current clamp by the analogous matrix $B_{cc}$, together with a symmetry factor. In general $B_{vc}$ and $B_{cc}$ are not reciprocally equivalent, because of the asymmetry discussed above. Hence, this is an important conceptual difference between linear and nonlinear analysis, which also plays a role in the interpretation of current versus voltage clamp experiments. The terms $B(f_1,f_2)$, where $f_1$ and $f_2$ are either positive or negative frequencies, refer to all complex values of the quadratic response at the interactive and harmonic quadratic frequencies. The quadratic response function can then be represented as magnitude and phase plots versus $f_1$ and $f_2$. Since $B$ is difficult to interpret, it is convenient to reduce it to a diagonal matrix through eigendecomposition methods. Similar methods have been used in quantitative neuronal analyses, for example singular value decomposition or principal component analysis. To make this reduction, it is important to note that the matrix obtained from $B$ by row flipping is actually Hermitian. As a consequence, it can be reduced to a diagonal matrix $\Lambda$ such that $B=U\Lambda U^{\dagger}$, where $U$ is a unitary matrix and $U^{\dagger}$ its complex conjugate transpose. The unitary matrix contains no information about the magnitude and can be viewed as a kind of generalization of the phase; the magnitude is entirely encoded in the diagonal matrix. The elements of $\Lambda$ are called eigenvalues. Each column of the matrix $U$ is a special vector called an eigenvector, whose coordinates are expressed relative to the stimulus frequencies. The amplitude of each eigenvalue indicates the relative contribution of the corresponding eigenvector to the quadratic response. Practically, the eigendecomposition can be interpreted as a reduction of the quadratic neuronal function to a set of quadratic filters in which the eigenvalues play the role of amplitudes. The responses at the stimulus frequencies are shown in Figure [fig:nfig1a]a as interpolated black points. The color coded points illustrate the smaller amplitude current responses at the quadratic frequencies. At low frequencies both the linear and nonlinear responses are quite comparable, despite the fact that the standard deviation of the imposed voltage clamp stimulus was only 2.85 mV. In Figure [fig:nfig1a]b, each cell of the QSA matrix encodes the magnitude of the corresponding quadratic interaction. Informally, the QSA matrix can be viewed as a quadratic generalization of the admittance; thus it is an intrinsic characterization of the measured neuronal response. The interpolations were performed by the Matlab command griddata (linear method) in order to represent the responses in 3D color plots over a continuous range of frequencies.
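A minimal numerical sketch of this reduction, with a synthetic Hermitian matrix standing in for the (row-flipped) QSA matrix measured from a neuron:

```python
import numpy as np

# Sketch of the QSA matrix reduction: build a small Hermitian matrix of
# quadratic interactions and eigendecompose it. The entries are synthetic
# stand-ins for measured B(f1, f2) values, not experimental data.
rng = np.random.default_rng(0)
n = 5
B = rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n))
B = (B + B.conj().T) / 2                 # enforce the Hermitian structure

evals, U = np.linalg.eigh(B)             # B = U diag(evals) U^dagger
assert np.allclose(U @ np.diag(evals) @ U.conj().T, B)

# A "dominant eigenvalue" means one |eigenvalue| much larger than the rest;
# its eigenvector gives the dominant quadratic filter.
order = np.argsort(-np.abs(evals))
print(evals[order])
```

Note that a Hermitian matrix has real eigenvalues, which is what allows the sign of the dominant eigenvalue to be interpreted physiologically later in the paper.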
As a characteristic function, the QSA matrix is independent of the stimulus amplitude; however, the current responses become proportionally insignificant as the stimulus approaches zero. The responses of higher orders are insignificant for stimulation amplitudes below 3 mV. The QSA matrix, $B$, is the core mathematical object which encodes the total effect of the pairwise interactions at the interactive frequencies. Each matrix cell is located at the intersection of two input frequencies. In this way, the one dimensional admittance function is generalized to a two dimensional quadratic function. The QSA matrix, along with the linear analysis, can be used to reconstruct the signal from the linear and quadratic responses. Figure [fig:nfig1a]b shows that the maximum amplitude for the interactive frequencies occurs at the intersection of 2 Hz and 10.4 Hz. The eigendecomposition of these data strongly suggests that depolarized type D neurons are dominated by a single eigenvalue, as illustrated in Figure [fig:nfig1a]c. Even when two eigenvalues are required to adequately describe the response, the eigendecomposition provides a remarkably compact representation of the nonlinear response, which otherwise can only be quantitatively described by complex differential equations involving large numbers of dendritic compartments. It would appear that the neuronal function becomes more complex, in the sense of information processing, as the number of significant eigenvalues increases. The previous matrix reduction has the great advantage of being reversible, such that from $U$ and $\Lambda$ one can exactly recover $B$. There exists a coarser simplification obtained by summing the QSA matrix by columns to obtain a vector indexed by the stimulation frequencies, $R(f_j)=\sum_k B(f_j,f_k)$. The values of R are illustrated in Figure [fig:nfig1a]d. The advantage of the R functions is that they can be presented as classical Bode plots. Moreover, each R function can be intuitively interpreted as a measure of the influence of each individual stimulation frequency on the nonlinear responses involving its interaction with all other stimulation frequencies. Hence, matrix summation is especially well suited to superimpose and compare the piecewise quadratic analysis for different steady state responses. Figure [fig:methods1a]a shows a reconstruction of the measured currents with the first and second order responses. The contribution of the nonlinear frequencies can be quantitatively expressed by comparing the ratios of the spectral energy of the linear versus linear + quadratic responses to the total spectral energy over the entire range of frequencies. Since the maximum stimulus frequency was 17.8 Hz, a frequency range of 36 Hz (> 2 x 17.8 Hz) was selected for the total sum.
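A compact sketch of the R summation and the spectral energy ratio just defined; the matrix and spectra below are synthetic placeholders standing in for measured quantities:

```python
import numpy as np

# Column summation of the QSA matrix into the vector R, and the spectral
# energy ratio used to compare linear vs linear+quadratic reconstructions.
rng = np.random.default_rng(1)
n = 5
B = rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n))
B = (B + B.conj().T) / 2                 # synthetic QSA-like matrix

R = B.sum(axis=0)                        # one complex value per stimulus frequency
print(np.abs(R))                         # magnitudes, suitable for a Bode-style plot

Ytot = rng.normal(size=64) + 1j*rng.normal(size=64)   # "measured" spectrum
Ylin = 0.8*Ytot                                       # "linear" reconstruction
ratio = np.sum(np.abs(Ylin)**2) / np.sum(np.abs(Ytot)**2)
print(ratio)                             # 0.64 -> 64% of the spectral energy
```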
Clearly, the quadratic reconstruction is much more accurate than the linear one, which is confirmed by evaluating the signal energy: 61% for the linear analysis against 96% for the quadratic analysis. The discrepancy between the second order ratio and 100% shows how well a second order approximation fits the total response, and gives an indication of the presence of higher order responses in the frequency range selected. Higher order responses clearly become more prominent with depolarization, due to the augmentation of the nonlinear behavior of both second and higher order responses. Since the second order responses have relatively low amplitudes, it is essential to ensure that the contributions of other noise sources, such as synaptic events or membrane ion channel fluctuations, do not significantly contribute to the observed responses at the harmonic and interacting frequencies. The synchrony and reproducibility of the data are shown by superimposing first order (Figure [fig:methods1a]b, delta1) and second order (Figure [fig:methods1a]c, delta2) computed responses for four sequential measurements. A time invariance correlation function was used to determine an optimal stimulus amplitude, large enough to overcome the spontaneous noise of the neuron but not so great as to evoke significant higher order responses. Finally, the usual linear impedance, $Z(f)$, is shown in Figure [fig:methods1a]d. It is difficult to get accurate measurements of the quadratic responses in current clamp for the type D neurons, mainly due to uncontrolled spontaneous oscillations. Thus, voltage clamp experiments were done to control the oscillations and measure the negative current associated with the persistent sodium conductance gNaP. The nonlinear responses evoked by 5-10 mV voltage clamp stimuli can also be blocked (not shown) by riluzole, as described in an earlier steady state piecewise linear analysis of gNaP. Figure [fig:dneuron]a shows the increased linear impedance magnitude evoked by progressive depolarizations. Since gNaP can be seen as a negative conductance, its activation at depolarized membrane potentials reduces the total conductance of the cell, which leads to an increase of impedance. The nonlinear responses are indicated by the eigenvalues (Figure [fig:dneuron]b) and the R summation (Figure [fig:dneuron]c), whose magnitudes clearly increase with depolarization. For this neuron, the spectral energy analysis led to ratios of 99% and 100% at -60 mV, and 77% and 98% at -50 mV, for the linear versus linear + quadratic responses, respectively. It has been shown previously that type B neurons have a prominent gNaP, which often leads to a net inward current carried by Na+ over a limited range of voltage clamped depolarized membrane potentials. The voltage clamped data in Figure [fig:bneuron] illustrate that type B nonlinear responses are significantly enhanced by depolarization and, in addition, often show a resonant enhancement of the impedance. The impedance shows a maximum at an intermediate depolarization and a shift to a higher resonance frequency with further depolarization. In contrast, Figures [fig:bneuron]b and c show that both the eigenvalues and the R summation values increase monotonically with depolarized membrane potentials. Interestingly, the number of significant eigenvalues required to describe the nonlinear response is generally two or more, unlike the single eigenvalue usually needed for type D neurons.
For this neuron, the spectral energy analysis led to ratios of 99% and 100% at -60 mV, and 82% and 96% at -41 mV, for the linear versus linear + quadratic responses, respectively. Nonlinear responses are likely to be enhanced by the activation of dendritic NMDA receptors, which would occur during synaptic activity of the neural integrator network. NMDA activation could trigger dendritic bistable responses that contribute to maintaining a particular firing rate after an input impulse. Figures [fig:nmdaexperiment]b and c show that the addition of NMDA clearly enhances the nonlinear response of a type D neuron, consistent with the trigger hypothesis. The mean value of R and the maximum eigenvalue for all neuronal types increased significantly with the addition of NMDA. TTX reduces the nonlinear effects of NMDA to control values or less; however, NMDA is still capable of inducing potential oscillations in current clamp (not shown). Control neurons (with gNaP, without NMDA) versus NMDA + TTX treated neurons (without gNaP, with NMDA) did not show a significant p-value. Thus, NMDA and gNaP are synergistic in their action, with combined effects generally greater than either alone. Since the normal physiological activation of NMDA receptors is due to transient synaptic currents, the non-inactivation of gNaP and membrane potential bistability contribute to the maintenance of a depolarized potential. In conclusion, gNaP appears to be essential for sustained nonlinear effects induced by NMDA activation and thus would be critical for the operation of the neural integrator. In order to provide an interpretation of these experimental results, numerical simulations have been done using previously published models of type D and B rat PHN neurons. These models have been constructed from a piecewise linear analysis used to fit the parameters of nonlinear differential equations in voltage clamp. Unless otherwise indicated, both models had three uniformly distributed voltage dependent conductances, gK, gNaP and gH, with a soma and eight dendritic compartments. Typically the gH of type D neurons is quite small and could be neglected. Simulations done with the published average parameter values were consistent with the voltage clamp data; however, some appropriate modifications of the average parameters were made for the comparison with the data for the individual neurons of Figures [fig:dneuron] and [fig:bneuron]. Four potentials are shown to cover the range observed in the experiments. The type D data and model generally show a dominant eigenvalue (see -50 mV in Table [tab:table1] and Figure [fig:dmodel]). The simulations of the type B model in Figure [fig:bmodel] show behavior similar to the data of Figure [fig:bneuron], with two or more significant eigenvalues compared to the essentially single dominant eigenvalue of the type D model (Figure [fig:dmodel]). However, Tables [tab:table1] and [tab:table2] indicate that the number of significant eigenvalues for both neuronal types depends on both the membrane potential and the specific parameter values. For example, at -40 mV with or without NMDA, Figure [fig:nmdaexperiment]b shows a type D neuron with multiple significant eigenvalues (also see Table [tab:table1]).
In addition, the monotonic increase of the R summation of the type B model nonlinear responses with depolarization contrasts with the peaking of the linear impedance, similar to that observed in the type B neuron of Figure [fig:bneuron]. In general, the impedance increase with depolarization is caused by the activation of the gNaP negative conductance, which balances the other positive conductances. An impedance maximum, often with a resonance, occurs because the impedance decreases with further depolarization due to an increased gK. The nonlinear responses of type B model simulations, in contrast to type D, show that the quadratic response is greatly reduced in magnitude at -60 mV, in part because the type B model has a greater density of voltage dependent gNaP conductances on a relatively more compact dendritic structure than found for type D neurons. In addition, the nonlinear responses in the type B model, as in the data of Figure [fig:bneuron], are enhanced at hyperpolarized potentials directly due to the gH conductance. The use of nonoverlapping frequencies is required for the construction of QSA matrices from experimental measurements. Nevertheless, it is possible to do a coarse interpolation for other frequencies, as plotted in Figure 8. The interpolated QSA matrices for type D and B neurons show striking differences, which also occur in their mathematical models. Figures [fig:qsa-plots]a and [fig:nfig1a]b show local peaks in the QSA plots at two membrane potentials for the same type D neuron, which are comparable to the type D model simulation in Figure [fig:qsa-plots]b. By contrast, the type B neuron and model (Figure [fig:qsa-plots]c, d) show a prominent low frequency peak distinctly different from that observed in type D neurons. The QSA matrix clearly provides significantly more information about the neuronal behavior than linear parameters, and it has the possibility of being extended to all frequencies of interest. In order to test the hypothesis that the dendritic conductances are mainly responsible for the nonlinear responses, simulations were done with gNaP reduced in the soma and unchanged in the dendrite, or the converse, as indicated in Tables [tab:table1] and [tab:table2]. In type D neurons, decreasing the somatic gNaP has a lesser effect (79%) on the R summation compared with decreasing the dendritic gNaP (13%). Simulations of type B model neurons showed a lesser effect of a gNaP reduction in the more compact dendrite (34%). Such simulations support the hypothesis that the nonlinear responses measured under voltage clamp conditions are dominated by dendritic responses, while the linear responses are determined by both the soma and the dendrite. Since the somatic region has good voltage space clamp control, one would expect a dominant linear current response from the somatic conductances at all membrane potentials; however, the significantly larger dendritic membrane area is not space clamped, leading to uncontrolled voltage excursions and greater nonlinearities.
In conclusion, these simulations suggest that nonlinear responses in voltage clamped neurons are dominated by active dendritic structures when their electrotonic lengths are above 0.3 (see Table [tab:table1]). The neurons of the prepositus hypoglossi nucleus (PHN) provide a useful system to investigate nonlinear behavior, such as the persistent activity that maintains eye position. The oscillatory character of some of these neurons is similar to that observed in the stellate neurons in layer IV of the entorhinal cortex, which are also involved in the processing of orientational information. PHN neurons are part of the brainstem circuit that receives head velocity signals and integrates them to control eye position for the stabilization of an image at the center of the visual field during head rotation. This specific processing is called neural integration, by analogy with integration in mathematical calculus. In order to understand how the PHN neural network can perform neural integration, it is important to understand the biophysical properties of the individual neurons involved in the circuitry. Single neurons of the PHN show oscillatory and bistable nonlinear properties that are likely to be involved in the operation of the neural integrator. Since models based on recurrent excitation, even including lateral inhibition, are not sufficiently robust, a number of theoretical papers have suggested that the nonlinear properties of individual neurons are essential for the network behavior of the neural integrator. Thus, the finding that oscillatory nonlinear behavior is clearly present in the neurons involved in the eye movement circuitry lends strong support to these theoretical notions. Dendritic characteristics are potentially critical for the function of these neurons, and they have also been shown to differ between the two main classes of PHN neurons. Thus, type D oscillatory neurons have bistability properties, which are consistent with neural integrator models that rely on remote dendritic processing. The experiments and analysis in this paper strongly support the hypothesis that type D neurons have persistent sodium channels in their dendrites, which would promote remote bistable potential shifts leading to persistent activity. Type B neurons have an even higher density of persistent sodium channels, and their passive electrotonic length is less than that of type D. As a consequence, type B neurons at moderate depolarizations would have a more uniform potential throughout the less isolated dendritic tree. The gNaP conductance in the dendrites can easily be activated by the synaptic stimulation of the NMDA receptors likely to occur during normal physiological activity. In addition, NMDA activation enhances the total nonlinearities, as shown here for a type D neuron and also observed in type B neurons (see Figure [fig:nmdaexperiment]). In current clamp conditions, both types of neurons show marked potential oscillations in the presence of 25-50 μM NMDA. Clearly the level of NMDA activation expected during synaptic activation would be much less; however, it is likely to be sufficient to evoke significant nonlinear responses due to the gNaP in both type B and D neurons.
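The negative slope conductance contributed by gNaP, invoked throughout this discussion, can be made concrete with a steady-state I-V sketch. All parameters below are hypothetical round numbers, not fitted PHN values:

```python
import numpy as np

# Illustrative persistent-sodium I-V curve with a negative-slope region.
gbar, ENa = 1.0, 50.0                    # peak conductance (nS), reversal (mV)
Vh, kslope = -50.0, 5.0                  # half-activation and slope (mV)

V = np.linspace(-70.0, -30.0, 401)
m_inf = 1.0 / (1.0 + np.exp(-(V - Vh)/kslope))
I = gbar * m_inf * (V - ENa)             # steady-state persistent Na current

dIdV = np.gradient(I, V)                 # slope conductance
print(V[dIdV < 0][[0, -1]])              # voltage range with negative slope
```

Over the subthreshold range the steeply rising activation multiplies a large negative driving force, so dI/dV is negative there; this is the "negative conductance" that boosts the impedance and, as discussed next, shapes the sign of the dominant eigenvalue.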
In this regard, a published neural integrator model (2002) depends on NMDA synapses that activate gNaP dependent bistable states in dendritic compartments. The quantitative measurement of the biophysical properties of intact neurons is seriously compromised by the inherent inability to voltage clamp the electrotonic structure of the dendritic tree. In general, whole cell measurements are restricted to patch clamp electrodes placed in the soma, which makes it difficult to infer the remote properties of the dendrites. Previous piecewise linear analyses have permitted the development of realistic neuronal models; however, it has been difficult to separate the properties of the dendrites from those of the soma unless patch clamp electrodes can be placed in the dendrites. The new approach described in this paper takes advantage of the space clamp problem associated with voltage clamping neurons. The quadratic analysis has shown that it is possible to characterize the nonlinear behavior of the uncontrolled dendritic membrane voltage responses while maintaining voltage clamp control of the somatic membrane. When the dendritic electrotonic structure is remote, with a relatively large surface area compared to the soma, the nonlinear behavior of the dendritic membrane dominates that of the soma. This is especially the case in a voltage clamp experiment, because of the lack of dendritic potential control. It was found that responses in the range of 5-10 mV could be well described by quadratic nonlinearities, suggesting that nonlinearities of higher degrees only add marginal improvement. Thus, the quadratic response is likely to capture most of the nonlinear behavior of neuronal systems except for extremely large synaptic inputs. The quadratic functions are quite sensitive to the mean membrane potential and appear to be valid for a range of sinusoidal inputs. This behavior significantly extends the validity of the quantitative quadratic description. The quadratic functions are computed on particular sets of nonoverlapping frequencies, which can be interpolated over a continuous range of frequencies, as illustrated in Figures [fig:nfig1a]b and [fig:qsa-plots]. Thus, they provide a remarkably concise description of the neuronal behavior and could potentially be used as computational devices independent of nonlinear differential equations. Practically, this could be an alternative approach to large scale neural network simulations. The estimation of the parameters of both the voltage dependent conductances and the electrotonic structure has shown quantitative differences between type B and D neurons. In addition, the nonlinear analysis in this paper suggests that the number of significant eigenvalues is greater for type B than for type D neurons and their individual models. Thus, the measured nonlinearities seem to be structurally different between types B and D; namely, the two corresponding types of quadratic functions are intrinsically different (Figure [fig:qsa-plots]).
In general the dominant eigenvalue was negative, which is related to the negative slope conductance due to gNaP. Type D model simulations at large depolarizations (see Table [tab:table1] at -40 mV) show that the maximum eigenvalue is positive, consistent with a positive slope conductance due to an increased outward potassium current. In contrast, the greater gNaP of the type B model maintains a negative maximum eigenvalue at -40 mV (Table [tab:table2]). Both type D and B models show positive eigenvalues if gNaP is totally removed from the soma and dendrite. In conclusion, the work described in this paper provides a novel way to concisely quantify the fundamental nonlinearities of individual neurons. By allowing rigorous comparison of any neuronal model with the behavior of real neurons, it makes it possible to show that nonlinear responses in voltage clamp are dominated by the active dendritic structure. A determination of the molecular basis of the eigenvalue analysis should provide a better understanding of how neurons use their remarkable nonlinear properties in information processing.

Erchova I, Kreck G, Heinemann U, Herz AVM (2004) Dynamics of rat entorhinal cortex layer II and III cells: characteristics of membrane potential resonance at rest predict oscillation properties near threshold. J Physiol 560. Idoux E, Serafin M, Fort P, Vidal PP, Beraneck M, Vibert N, Muehlethaler M, Moore L (2006) Oscillatory and intrinsic membrane properties of guinea pig nucleus prepositus hypoglossi neurons in vitro. J Neurophysiol 96.
The nonlinear properties of the dendrites of prepositus hypoglossi neurons are involved in the maintenance of eye position. The biophysical properties of these neurons are essential for the operation of the vestibular neural integrator that converts a head velocity signal into one that controls eye position. A novel method named QSA (quadratic sinusoidal analysis) for voltage clamped neurons was used to quantify nonlinear responses that are dominated by dendrites. The voltage clamp currents were measured at harmonic and interactive frequencies using specific stimulation frequencies, which act as frequency probes of the intrinsic nonlinear neuronal behavior. These responses to paired frequencies form a matrix that can be reduced by eigendecomposition, providing a very compact piecewise quadratic analysis at different membrane potentials of behavior that is otherwise usually described by complex differential equations involving a large number of parameters and dendritic compartments. Moreover, the QSA matrix can be interpolated to capture most of the nonlinear neuronal behavior, like a Volterra kernel. The interpolated quadratic functions of the two major prepositus hypoglossi neuron types, namely B and D, are strikingly different. A major part of the nonlinear responses is due to the persistent sodium conductance, which appears to be essential for sustained nonlinear effects induced by NMDA activation and thus would be critical for the operation of the neural integrator. Finally, the dominance of the nonlinear responses by the dendrites supports the hypothesis that persistent sodium conductance channels and NMDA receptors act synergistically to dynamically control the influence of individual synaptic inputs on network behavior.
Elasticity theory in homogeneous materials is a well developed subject. Much less is known about inhomogeneous materials, where the solution of the basic equations of elasticity becomes very involved. In this paper we focus on a material which consists of one finite area (of arbitrary shape) in which a material with given elastic properties is embedded in an infinite sheet of material with different elastic properties. This situation is known as an "elastic inhomogeneity" and it appears in a variety of solid mechanical contexts. Previous studies have concentrated on solving this problem for the relatively symmetric case of an ellipse, where it was solved analytically. In other cases the problem was solved for small perturbations of the circle. Mathematically the problem is set as follows (see Fig. [patch]). A patch of material of type 1 occupies an area $\Omega$ and is delineated by a sharp boundary which will be denoted $\partial\Omega$. The rest of the infinite plane is made of material of type 2. The material is subjected to forces at infinity (see below for the precise boundary conditions) and is therefore deformed. Before the deformation each point of the material is assigned a point in the two-dimensional plane; the forces at infinity result in a displacement of the material points to a new equilibrium position. The displacement field is defined as $u_i(x,y)\equiv x_i'-x_i$, and the strain field is defined accordingly as $\epsilon_{ij}=\left[\partial u_i/\partial x_j+\partial u_j/\partial x_i\right]/2$. In the context of linear elasticity in isotropic materials one then introduces the stress field according to Hooke's law, $\sigma_{ij}=\lambda\delta_{ij}\epsilon_{kk}+2\mu\,\epsilon_{ij}$, where the Lame coefficients $\lambda$ and $\mu$ take on different values, $\lambda_1,\mu_1$ in $\Omega$ and $\lambda_2,\mu_2$ in the rest of the material. In equilibrium the stress tensor should be divergenceless at each point in the sheet, $\partial_j\sigma_{ij}=0$. By defining the stress (or Airy) potential U through $\sigma_{xx}=\partial^2 U/\partial y^2$, $\sigma_{yy}=\partial^2 U/\partial x^2$ and $\sigma_{xy}=-\partial^2 U/\partial x\partial y$, the former equation for the stress tensor becomes a partial differential equation for the stress potential: $\Delta^2 U=0$. This equation, which is known as the bi-Laplace or biharmonic equation, is conveniently solved by a non-analytic combination of analytic functions. To this aim we introduce the complex notation $z=x+iy$ and note the general solutions of Eq. ([bilaplace]) in the form $U=\re\left[\bar z\,\tilde\varphi(z)+\chi(z)\right]$, where $\tilde\varphi$ and $\chi$ are any two holomorphic functions. What remains to be done in any particular problem is to find the unique analytic functions such that the stress tensor satisfies the boundary conditions. This stress tensor is determined by the two holomorphic functions (with $\eta\equiv\chi$) as: $\sigma_{yy}(x,y)=\re[2\tilde\varphi'(z)+\bar z\tilde\varphi''(z)+\eta''(z)]$, $\sigma_{xx}(x,y)=\re[2\tilde\varphi'(z)-\bar z\tilde\varphi''(z)-\eta''(z)]$, $\sigma_{xy}(x,y)=\im[\bar z\tilde\varphi''(z)+\eta''(z)]$. We define for convenience $\tilde\psi(z)\equiv\eta'(z)$, and then: $\sigma_{yy}(x,y)=\re[2\tilde\varphi'(z)+\bar z\tilde\varphi''(z)+\tilde\psi'(z)]$, $\sigma_{xx}(x,y)=\re[2\tilde\varphi'(z)-\bar z\tilde\varphi''(z)-\tilde\psi'(z)]$, $\sigma_{xy}(x,y)=\im[\bar z\tilde\varphi''(z)+\tilde\psi'(z)]$. Note that the stress tensor is determined by derivatives of the holomorphic functions, and not by the functions themselves. This leaves us with some freedom, since the functions can be changed with the following gauge: $\tilde\varphi(z)\to\tilde\varphi(z)+iCz+\gamma$ and $\tilde\psi(z)\to\tilde\psi(z)+\gamma'$, with $C$ real and $\gamma,\gamma'$ complex constants. As we shall see below, not all these gauge freedoms are true freedoms once we introduce the boundary and continuity conditions.
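As a quick numerical check of these formulas, the following sketch evaluates the stress components for simple illustrative potentials (a uniform far-field load), not for the inclusion problem itself:

```python
import numpy as np

# Evaluate the stress components from the two holomorphic potentials using
# the formulas above; phi_p, phi_pp, psi_p are phi'(z), phi''(z), psi'(z).
def stresses(z, phi_p, phi_pp, psi_p):
    s_yy = np.real(2*phi_p(z) + np.conj(z)*phi_pp(z) + psi_p(z))
    s_xx = np.real(2*phi_p(z) - np.conj(z)*phi_pp(z) - psi_p(z))
    s_xy = np.imag(np.conj(z)*phi_pp(z) + psi_p(z))
    return s_xx, s_yy, s_xy

# Linear potentials phi(z) = G*z, psi(z) = G2*z give a uniform stress field:
# here sigma_xx = 2*G - G2 = 1, sigma_yy = 2*G + G2 = 0, sigma_xy = 0,
# i.e. uniaxial tension along x.
G, G2 = 0.25, -0.5
z = np.array([1+2j, -3+0.5j])
print(stresses(z, lambda z: G + 0*z, lambda z: 0*z, lambda z: G2 + 0*z))
```

The same evaluation routine applies unchanged once the series coefficients of the interior and exterior potentials have been determined.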
Since the elastic properties are different inside and outside, the potential functions will be different in the two regions: $\tilde\varphi^{(1)},\tilde\psi^{(1)}$, which are defined on $\Omega$, and $\tilde\varphi^{(2)},\tilde\psi^{(2)}$, which are defined on the rest of the plane. Nevertheless we will demand continuity of the physical fields. In particular, the normal force and the displacement must be continuous across the interface (by Newton's third law) in the absence of surface tension. The continuity condition for the stress can be rewritten, after integrating, as $\tilde\varphi^{(1)}(z)+z\overline{\tilde\varphi'^{(1)}(z)}+\overline{\tilde\psi^{(1)}(z)}=\tilde\varphi^{(2)}(z)+z\overline{\tilde\varphi'^{(2)}(z)}+\overline{\tilde\psi^{(2)}(z)}+A$ on $\partial\Omega$, where $A$ is a complex constant of integration. The condition for the displacement becomes $\frac{1}{\mu_1}[\kappa_1\tilde\varphi^{(1)}(z)-z\overline{\tilde\varphi'^{(1)}(z)}-\overline{\tilde\psi^{(1)}(z)}]=\frac{1}{\mu_2}[\kappa_2\tilde\varphi^{(2)}(z)-z\overline{\tilde\varphi'^{(2)}(z)}-\overline{\tilde\psi^{(2)}(z)}]$, where $\kappa_i$ is the Kolosov constant of each material ($\kappa=3-4\nu$ in plane strain). In addition to these continuity conditions on $\partial\Omega$ we need to specify boundary conditions at infinity; we choose a given uniform remote stress. The problem of finding the stress field outside a given domain using conformal maps was described, for example, in earlier work; here we need to solve for the stress field both inside and outside the given domain. In the following we assume that the center of coordinates is inside $\Omega$ and the point at infinity is outside. Since the stress functions are holomorphic in their domains of definition, we can expand them in the appropriate Laurent series, which for the functions with superscript (1) contain only non-negative powers of $z$, i.e. we have no poles at the origin. For the outside domain (functions with superscript (2)) the most general expansions in agreement with the boundary conditions ([bcinfty]) contain, in addition to the negative powers, a term linear in $z$, i.e. we have a pole of order 1 at infinity. Accordingly, the leading terms of Eqs. ([laurent2]) are determined by the boundary conditions. We now use one of the gauge freedoms to eliminate the imaginary part of the coefficient of the linear term of $\tilde\varphi^{(2)}$. The standard way to proceed would be to substitute the series expansions into the continuity conditions and find the linear equations that determine all the coefficients by equating terms of the same order in $z$. However, this cannot be done in general, since the powers of $z$ are not orthogonal on arbitrary contours. To overcome this, one maps the two regions into the interior and exterior of the unit circle, respectively. That is, we need two holomorphic, invertible (and thus conformal) functions: one, $\Phi(\omega)$, which maps the exterior of the unit circle onto the exterior region, and the other, $\phi(\omega)$, which maps the unit disk onto $\Omega$. Since they are both invertible they have inverse functions, which we denote $\Phi^{-1}$ and $\phi^{-1}$. Now we express the stress functions in terms of these maps and then expand them on the boundary of the unit circle. This expansion will be a Fourier series in the powers of $\omega=e^{i\theta}$, which satisfy the orthogonality relation $\oint\omega^n\bar\omega^m\,\frac{d\theta}{2\pi}=\delta_{nm}$. The orthogonality allows us to equate the coefficients of the series term by term. We define $\varphi^{(1)}(\omega)\equiv\tilde\varphi^{(1)}(\phi(\omega))$ and $\psi^{(1)}(\omega)\equiv\tilde\psi^{(1)}(\phi(\omega))$, and analogously for the exterior functions with $\Phi$; we can then expand these in terms of $\omega$ and $\bar\omega$ on the unit circle. Since the original functions were holomorphic and meromorphic in the original domains, the composed functions are holomorphic inside and outside the unit disc, respectively, and can therefore be expanded in powers of $\omega$. We now assume that the map of the exterior domain, $\Phi$, maps the point at infinity to infinity; that is, it has a Laurent series of the form $\Phi(\omega)=R\,\omega+\sum_{n\ge 0}c_n\omega^{-n}$. From this we get relations between the leading expansion coefficients (after substituting and taking the limit $\omega\to\infty$). We can also use the last two freedoms to set the remaining free constants of the exterior functions to zero.
In the interior domain, the functions $\varphi^{(1)}$ and $\psi^{(1)}$ also have five gauge freedoms. However, the requirement of continuity of the displacement field across the boundary removes three of these freedoms. This continuity was expressed by eq. ([condis]). Applying the apparent gauge freedoms on the LHS of that equation and then subtracting the resulting equation from eq. ([condis]), we find three conditions on the gauge constants. Using the remaining two freedoms, we can eliminate the constant term in the expansion of $\varphi^{(1)}$. Note that this is possible only when the interior map is chosen appropriately; we may always define our mapping such that this is satisfied. In terms of the conformal maps we transform the boundary conditions into the conformal-plane form of the stress condition (eq. [bccrack1conf]) and the displacement condition $$\frac{1}{\mu_1}\left[\kappa_1\varphi^{(1)}(\omega)\!-\!\frac{\phi(\omega)}{\overline{\phi'(\omega)}}\overline{\varphi'^{(1)}(\omega)}\!-\!\overline{\psi^{(1)}(\omega)}\right]=\frac{1}{\mu_2}\left[\kappa_2\varphi^{(2)}(\omega)\!-\!\frac{\phi(\omega)}{\overline{\phi'(\omega)}}\overline{\varphi'^{(2)}(\omega)}\!-\!\overline{\psi^{(2)}(\omega)}\right] \ . \label{bccrack2conf}$$ At this point we need to substitute the expansions ([laurentp1]), ([laurentp2]), ([laurent]) and an expansion similar to ([laurent]) for $\psi^{(2)}$ into the equations ([bccrack1conf]) and ([bccrack2conf]) and solve for the coefficients. To understand how to do this in principle, we write the expanded equations in an abstract form (eq. [basic]) in which one side is a power series in the boundary parameterization of the inner map and the other side is a power series in that of the outer map; the coefficients on the two sides are linear combinations of the unknown expansion coefficients. As this equation stands we cannot use the orthogonality relation eq. ([ortho]). Therefore we expand moments of one parameterization in terms of the other, in the form of eq. ([momentexpansion]). We now insert this expression in eq. ([basic]) and equate the coefficients of the same powers to achieve a set of linear equations for the unknown coefficients. The actual algebraic manipulations that are involved in reaching a _finite_ set of linear equations are presented in the appendix. Needless to say, when we derive a finite set of equations we lose precision. To see this we note that, to get the right number of equations for the number of unknowns (see appendix), we need to truncate the summations on the LHS and the RHS of eq. ([doublef]) at the same finite $N$. For a precisely circular inclusion this truncation introduces no loss of information: for this particular shape the expansion eq. ([momentexpansion]) has only one term. Obviously, when the inclusion shape deviates from the circle, the representation of the moments in Fourier space deviates from a delta function and becomes more spread. An example of this phenomenon is presented in fig. [t1t2] for an inclusion in the form of an ellipse with aspect ratio of about 1.5. The upper panel shows the parameterization of the outer mapping as a function of that of the inner one. In the lower panel we show the power spectrum of the moments of the function in the upper panel. If we truncate the expansion at the dashed line, we lose high-frequency information for the higher moments. This loss of information will lead to stress field calculations which are less accurate.
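The spreading of moments in Fourier space can be reproduced with a simple numerical experiment. The sketch below is our own illustration, using the standard exterior map of an ellipse, $\Phi(\zeta)=R(\zeta+m/\zeta)$, rather than the maps of the actual calculation:

```python
import numpy as np

# Exterior map of an ellipse: Phi(zeta) = R*(zeta + m/zeta) sends
# |zeta| > 1 to the exterior of an ellipse with semi-axes R*(1+m), R*(1-m).
# For a circle (m = 0) the k-th moment Phi(zeta)^k is the single harmonic
# zeta^k; as the shape deviates from a circle, the spectral weight of each
# moment spreads over many harmonics, which is what makes the truncated
# linear system lose accuracy for strongly non-circular shapes.
N, k = 512, 6
zeta = np.exp(2j*np.pi*np.arange(N)/N)

for m in (0.0, 0.2, 0.5):              # aspect ratios 1, 1.5 and 3
    spec = np.abs(np.fft.fft((zeta + m/zeta)**k)/N)
    leak = 1.0 - spec[k]**2/np.sum(spec**2)
    print(f"m = {m}: spectral weight outside the leading harmonic: {leak:.3f}")
```

Note that $m=0.2$ corresponds to an aspect ratio of 1.5, the case shown in fig. [t1t2].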
To see the difficulty in a pictorial way we can consider the field lines of the conformal mappings for an inclusion that is elongated in shape; see for example fig. [figfield]. The external field lines concentrate at the convex parts of the inclusion, whereas the internal field lines concentrate on the concave parts. It becomes increasingly difficult to match field lines, since they make a large discontinuous jump when we go from the interior to the exterior domain. Similarly, for the ellipse in fig. [t1t2], when we increase the aspect ratio of the ellipse, the slope in the steep parts of the reparameterization becomes even larger, requiring higher order frequencies in our expansions. Eventually, for large aspect ratios, our method will break down. In all the calculations we assumed that the conformal maps $\phi$ and $\Phi$ are available. For arbitrary inclusion shapes this is far from obvious, and special methods are necessary to obtain these maps. An efficient method to obtain the conformal map from the exterior of the unit circle to the exterior of an arbitrary given shape has been discussed in great detail elsewhere. In the present case we use a slightly different method, namely the geodesic algorithm. This method, like the former one, is based on iterations of a generic conformal map defined by a set of parameters. We then construct the conformal map to an arbitrary shape by an appropriate choice of parameters. In the geodesic algorithm, we discretize the interface of the inclusion by a sequence of points, which appear sequentially in the positive direction of the interface. We now briefly summarize how to construct the conformal map. First we construct iteratively the inverse map that brings the interior of the inclusion to the upper half-plane and the interface to the real axis. The conformal map to the shape then follows directly from the inverse. The construction is done in three steps. In the first step, we move one point of the interface to infinity and another to the center of coordinates, using a Möbius transformation. In the next step we find a map that connects the next point to the real axis by a semicircular arc; the inverse of this mapping brings that point to the real axis. Iteratively, we apply this elementary mapping to all the remaining points. In the third and last step we unfold the remaining part of the interior to the whole upper half-plane. The conformal map from the upper half-plane to the interior domain, and from the lower half-plane to the exterior domain, is then given by composing the inverses of these elementary maps. From this conformal map we easily construct the map from the unit circle to the inclusion; in fig. [mapping] we illustrate how this is done. In order to check the validity of our method we calculated the stress fields created by inhomogeneities with two different geometries: an ellipse with aspect ratio of about 1.5 and a smoothed triangular curve (see fig. [triangle]). In the case of the elliptical inhomogeneity we compared our method to the known analytical solution, which was first obtained by Hardiman in 1954. In the example below, the boundary conditions at infinity were set to a uniform remote load, the shear moduli of the inhomogeneity and of the matrix were different, and the Poisson ratio was taken to be the same for both inhomogeneity and matrix. In figs. [figsxx] and [figsyy] we can see the components of the stress field calculated outside the ellipse.
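Before turning to the comparison, here is a sketch of the "unzipping" idea on which such iterative map constructions rest. The geodesic algorithm proper uses circular-arc slits; the simpler vertical-slit variant shown below is our own toy implementation, chosen because its elementary map has a closed form, but it shares the same compositional structure:

```python
import numpy as np

def slit_step(w, p):
    """Conformal map of the upper half-plane minus a vertical slit from the
    real axis up to p = x + i*y back onto the upper half-plane; the tip p
    lands on the real axis. Writing f(w) = (w - x) * sqrt(1 + (y/(w-x))^2)
    selects the branch with f(w) ~ w - x at infinity."""
    u = w - p.real
    return u*np.sqrt(1.0 + (p.imag/u)**2)

# Unzip a discretized interface point by point: after step k, the points
# z_1..z_k lie on the real axis; composing the inverse elementary maps in
# reverse order gives the conformal map from the half-plane to the shape.
pts = np.array([1 + 0.2j, 1.2 + 0.6j, 1.1 + 1.0j, 0.7 + 1.3j])  # toy data
images = pts.copy()
for k in range(len(pts)):
    images = slit_step(images, images[k])
print(np.round(images.imag, 6))  # imaginary parts -> 0 after unzipping
```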
The blue line is the stress calculated using Hardiman's solution and the red spots correspond to the values obtained by our method. Similarly, we have calculated the stress field outside the triangular-like inhomogeneity (figs. [trigxx] and [trigyy]). In comparing our approach to other available algorithms, for example finite element approximations to the equations of linear elasticity, we should stress that our approach works equally well for compressible and incompressible materials; there is no problem in taking the incompressible limit as the Poisson ratio approaches 1/2. This is not the case for finite element methods. While the examples shown above worked out very well, indicating that the proposed algorithm is both elegant and numerically feasible, unfortunately it deteriorates very quickly when the shape of the inhomogeneity deviates strongly from circular symmetry. The difficulty in matching the two conformal maps is significant, as can be gleaned from figs. 2 and 3. One could think that the problem could be overcome in principle by increasing the numerical accuracy, but in practice, when the inhomogeneity has horns, spikes or deep fjords, the difficulties become insurmountable. Similar difficulties in another guise are, however, expected when any other analytic or semi-analytic method is used, leaving very contorted inhomogeneities as a remaining challenge for elasticity theory.

This work had been supported in part by the Israel Science Foundation, the German Israeli Foundation and the Minerva Foundation. We thank Eran Bouchbinder, Felipe Barra, Anna Pomyalov and Charles Tresser for some very useful discussions.

Starting from ([bccrack1conf]) and ([bccrack2conf]) we write the series expansions in a compact form, where we have eliminated the zero-order terms because of the gauge freedom. Substituting the expansions and changing the indices of summation leads to relations between the coefficients. In order to find these relations, we need both sides to be expressed in the same Fourier harmonics. Therefore we need to expand the moments of the boundary reparameterization in a Fourier series; the condition for this expansion to exist is that the reparameterization be integrable on the unit circle (i.e. on the segment $[0,2\pi]$). Using the linear independence of the harmonics with respect to the Fourier integral, and cutting the infinite series at some number $N$, we get a set of linear equations of the form of eq. ([matform]), in which the unknowns are the vector of coefficients (the $a_n$'s and $b_n$'s), multiplied by a matrix of constants, with the known terms on the right-hand side. Next, we substitute the expansions in the continuity equation for the displacement: $$\frac{1}{\mu_1}\left[\kappa_1\varphi^{(1)}(\omega)-\frac{\phi(\omega)}{\overline{\phi'(\omega)}}\overline{\varphi'^{(1)}(\omega)}-\overline{\psi^{(1)}(\omega)}\right]=\frac{1}{\mu_2}\left[\kappa_2\varphi^{(2)}(\omega)-\frac{\phi(\omega)}{\overline{\phi'(\omega)}}\overline{\varphi'^{(2)}(\omega)}-\overline{\psi^{(2)}(\omega)}\right] \ . \label{bccrack}$$ Substituting the expansions, changing the indices of summation, expanding the moments as before and equating coefficients, and cutting the infinite series in the same way, we get again a matrix equation (with the same dimensions) of the form of eq. ([matform2]). Combining equations ([matform]) and ([matform2]) we get a single matrix equation for the complete vector of unknown coefficients.

N. J. Hardiman, Quart. J. Mech. and Applied Math. VII, Pt. 2 (1954).
H. Gao, Int. J. Solids Structures (1991).
L. D. Landau and E. M. Lifshitz, _Theory of Elasticity_, 3rd ed. (Pergamon, London, 1986).
N. I. Muskhelishvili, _Some Basic Problems of the Mathematical Theory of Elasticity_ (Noordhoff, 1953).
F. Barra, M. Herrera and I. Procaccia, Europhys. Lett. *63*, 708 (2003); E. Bouchbinder, J. Mathiesen and I. Procaccia, Phys. Rev. E *69*, 026127 (2004).
A classical problem in elasticity theory involves an inhomogeneity embedded in a material of given stress and shear moduli. The inhomogeneity is a region of arbitrary shape whose stress and shear moduli differ from those of the surrounding medium. In this paper we present a new, semi-analytic method for finding the stress tensor for an infinite plate with such an inhomogeneity. The solution involves two conformal maps, one from the inside and the second from the outside of the unit circle to the inside, and respectively the outside, of the inhomogeneity. The method provides a solution by matching the conformal maps on the boundary between the inhomogeneity and the surrounding material. This matching converges well only for relatively mild distortions of the unit circle, for reasons which are discussed in the article. We provide a comparison of the present results to previously known results.
Yeasts are single-celled fungi of which more than one thousand different species have been identified. The most commonly used yeast is _Saccharomyces cerevisiae_, which has been utilized for the production of bread, wine and beer for thousands of years. Biologists in a wide variety of fields use _S. cerevisiae_ as a model organism. A common experimental method for observing biochemical processes involved in yeast growth is that of continuous cultivation in a chemostat. The cell growth takes place in a vessel that is continuously stirred. A nutrient-containing fluid is pumped into the vessel and cell culture flows out of the vessel at the same rate, ensuring that the volume of culture in the reaction vessel remains constant. The rate of flow in and out divided by the culture volume is called the _dilution rate_. Quantities such as concentrations of chemicals can be measured in a variety of ways; see the literature for methods used in _S. cerevisiae_ experiments. As a continuous culture experiment is carried out, it is common for the system to reach a steady state. At the steady state, the rate of cell division in the culture is equal to the dilution rate. However, experimentalists in the late 1960s observed that, instead of settling to a steady state, continuous culture experiments with _S. cerevisiae_ could in some cases produce stable oscillations. Von Meyenburg discovered in subsequent experiments that these oscillations only occur in an intermediate range of values of the dilution rate. Much work has since been done to understand the cause of such oscillations. _S. cerevisiae_ has three metabolic pathways for glucose: fermentation, ethanol oxidation and glucose oxidation. The model of Jones and Kompala hypothesizes that the competing metabolic pathways of the growing yeast cells create feedback responses that produce stable oscillations. It assumes that micro-organisms will utilize the available substrates in a manner that maximizes their growth rate at all times. To enforce this optimization a "maximum function" is introduced in the model equations; as a result, the model is an example of a piecewise-smooth, continuous dynamical system. Piecewise-smooth systems are characterized by the presence of codimension-one phase-space boundaries, called _switching manifolds_, on which smoothness is lost. Such systems have been utilized in diverse fields to model non-smooth behavior, for example vibro-impacting systems and systems with friction, switching in electrical circuits, economics, and biology and physiology. The interaction of invariant sets with switching manifolds often produces bifurcations that are forbidden in smooth systems. For instance, though period-doubling cascades are a common mechanism for the transition to chaos in smooth systems, in piecewise-smooth systems periodic orbits may undergo direct transitions to chaos. These so-called _discontinuity induced bifurcations_ can be nonsmooth analogues of familiar smooth bifurcations or can be novel bifurcations unique to piecewise-smooth systems. A bifurcation in the latter category that is simple in appearance (for example the transition from a stable period-1 solution to a stable period-3 solution in a piecewise-smooth map) often corresponds to a combination of, or countable sequence of, smooth bifurcations.
In this situation, arguably, the piecewise-smooth system describes the dynamics more succinctly than any smooth system is able to. Alternatively, bifurcations in piecewise-smooth systems may be extremely complicated; see for instance the relevant literature and references within. A piecewise-smooth, continuous system is one that is everywhere continuous but nondifferentiable on switching manifolds. In such a system, the collision of a mathematical equilibrium (i.e. steady state, abbreviated to equilibrium throughout this paper) with a switching manifold may give rise to a _discontinuous bifurcation_. As the equilibrium crosses the switching manifold, its associated eigenvalues generically change discontinuously. This may produce a stability change and bifurcation. In two-dimensional systems, all codimension-one discontinuous bifurcations have been classified, but in higher dimensions there are more allowable geometries and no general classification is known; see for instance recent investigations into three-dimensional systems. In this paper we present an analysis of discontinuity induced bifurcations in the eight-dimensional _S. cerevisiae_ model of Jones and Kompala. The model equations are stated in [sec:model]. In [sec:bifset] we illustrate a two-parameter bifurcation set indicating parameter values at which stable oscillations occur. The bifurcation set also shows curves corresponding to codimension-one discontinuous bifurcations. These bifurcations are analogous to saddle-node and Andronov-Hopf bifurcations in smooth systems. Bifurcations relating to stable oscillations are described in [sec:oscil]. We observe period-adding sequences over small regions in parameter space. In [sec:codim2] we provide rigorous unfoldings of codimension-two scenarios seen in the bifurcation set from a general viewpoint. Finally, [sec:concl] presents conclusions. Jones and Kompala give the model equations as a system of eight coupled ODEs (eq. [eq:alldot]). Four variables represent the concentrations of cell mass, glucose, ethanol (in g l$^{-1}$) and dissolved oxygen (in mg l$^{-1}$) in the culture volume. Three further variables represent the intracellular mass fractions of key enzymes, one for each metabolic pathway, and the final variable represents the intracellular carbohydrate mass fraction. The subscripts correspond to the three pathways, fermentation, ethanol oxidation and glucose oxidation, respectively, with a growth rate associated with each pathway. Formulas for the growth rates and other functions are given in appendix [sec:param]. Details concerning the derivation of the model are found in the original paper and expanded in the M.S. thesis of Jones.
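The basic chemostat balance underlying such models, that at steady state the specific growth rate equals the dilution rate, can be seen in a minimal single-substrate caricature. The following sketch is our own (Monod kinetics with made-up parameter values); it is not the eight-dimensional model above:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal single-substrate chemostat with Monod kinetics.
# X: cell mass, S: substrate; D: dilution rate; S_f: feed concentration.
mu_max, K_s, Y = 0.5, 0.2, 0.5     # illustrative parameter values
D, S_f = 0.3, 10.0

def chemostat(t, y):
    X, S = y
    mu = mu_max*S/(K_s + S)        # Monod growth rate
    return [(mu - D)*X,            # growth minus washout
            D*(S_f - S) - mu*X/Y]  # feed minus consumption

sol = solve_ivp(chemostat, (0, 200), [0.1, 10.0], rtol=1e-8)
X, S = sol.y[:, -1]
print(mu_max*S/(K_s + S))          # -> approaches D = 0.3 at steady state
```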
Returning to the full model: the enzyme equations are introduced to model the regulation of enzyme activity by numerous biochemical mechanisms. Each growth rate is a smooth function except at points where the two largest growth rates are equal; at these points each has discontinuous derivatives with respect to some of the variables. All eight differential equations have at least one term containing a growth rate and thus display the same lack of smoothness. Therefore the model is a piecewise-smooth, continuous system. Our goal is to understand the effects that this nonsmoothness has on the resulting dynamics. This paper focuses on variations in the values of two parameters, namely the dilution rate (in h$^{-1}$) and the dissolved oxygen mass transfer coefficient (in h$^{-1}$). Values for all other parameters are given in appendix [sec:param] and were kept constant throughout the investigations. A key property of the system ([eq:alldot]) is that the positive hyper-octant is forward invariant. That is, if the values of all variables are initially positive, they will remain positive for all time. This is, of course, a property that is required for any sensible model, since negative values of the variables are not physical. Furthermore, within the positive hyper-octant all trajectories are bounded forward in time. In other words, solutions always approach some attracting set, which could be an equilibrium, a periodic orbit, or a more complicated and possibly chaotic attractor.

In this section we describe a numerically computed bifurcation set for the system ([eq:alldot]); see fig. [fig:bifset]. We find a single, physically meaningful equilibrium (steady state) except in small windows of parameter space between saddle-node bifurcations. For small values of the dilution rate, this equilibrium is stable. If we fix the value of the mass transfer coefficient and increase the dilution rate, the first bifurcation encountered is an Andronov-Hopf bifurcation (labelled in fig. [fig:bifset]). Slightly to the right of it, the equilibrium is unstable and solutions approach a periodic orbit or complicated attractor (see [sec:oscil]). As the dilution rate is increased further, a second Hopf bifurcation is encountered that restores stability to the equilibrium (except in the cases discussed below).

[Fig. [fig:bifset]: two-parameter bifurcation set. HB = Hopf bifurcation, SN = saddle-node bifurcation. The parameter space is divided into three regions, within each of which a different metabolic pathway is preferred at equilibrium.]

[Fig. [fig:bsmag]: enlargements of fig. [fig:bifset]. HB = Hopf bifurcation, SN = saddle-node bifurcation. The curves of discontinuity are labelled by their corresponding bifurcations: NB = no bifurcation, DHB = discontinuous Hopf bifurcation (with criticality indicated), DSN = discontinuous saddle-node bifurcation. The dotted curves correspond to the grazing of a Hopf cycle with a switching manifold; there are three such curves, emanating from the points (a), (c) and (d) (the curve that emanates from (a) is barely distinguishable from the Hopf locus). In panel a, one point on a locus of saddle-node bifurcations of a Hopf cycle is shown with a red cross.]
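The kind of nonsmoothness that a maximum function induces can be illustrated with a toy system (our own construction, not the model equations): the vector field below involves max of two "pathway" rates, so it is continuous but nondifferentiable where the rates are equal, and a parameter sweep moves the equilibrium from one smooth branch to the other.

```python
from scipy.integrate import solve_ivp

# Toy piecewise-smooth continuous ODE mimicking the structure of the model:
# the right-hand side involves max(r1, r2), hence it is continuous but
# nondifferentiable on the switching manifold r1 = r2.
def rhs(t, y, eta):
    x1, x2 = y
    r1 = 1.0 - x1            # 'pathway 1' rate
    r2 = eta + 0.5*x2        # 'pathway 2' rate
    r = max(r1, r2)          # cybernetic-style selection of the pathway
    return [r - x1, r - 2.0*x2]

for eta in (0.2, 0.8):       # equilibrium on either side of r1 = r2
    sol = solve_ivp(rhs, (0, 50), [0.5, 0.5], args=(eta,), rtol=1e-9)
    x1, x2 = sol.y[:, -1]
    # sign of r1 - r2 at the equilibrium: positive for eta = 0.2,
    # negative for eta = 0.8, i.e. the preferred 'pathway' has switched
    print(eta, 1.0 - x1 - (eta + 0.5*x2))
```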
Recall that the system is piecewise-smooth and continuous as a result of a maximum function in the rate coefficients ([eq:v_i]). Since the switching manifold is codimension-one, it is a codimension-one phenomenon for the equilibrium to lie precisely on a switching manifold. Though an analytical formula for the equilibrium seems difficult to obtain, we have been able to numerically compute curves in the two-dimensional parameter space along which this codimension-one situation occurs: the black curves in fig. [fig:bifset]. We refer to these as _curves of discontinuity_. The curves of discontinuity divide parameter space into three regions, in each of which one of the growth rates is larger than the other two at the equilibrium. They may also correspond to discontinuous bifurcations, as described below. Physically, crossing a curve of discontinuity corresponds to a change in the preferred metabolic pathway at equilibrium. Fig. [fig:bsmag] shows an enlargement of fig. [fig:bifset] near two points on a curve of discontinuity. In panel a, along the curve of discontinuity, below the point (a) and above the point (c), there is no bifurcation. Between (a) and (c), numerically we have observed that a periodic orbit is created when the equilibrium crosses the switching manifold. Between (a) and (b), the orbit is unstable and emanates to the right of the curve of discontinuity. Between (b) and (c) the orbit is stable and emanates to the left. We refer to these bifurcations as subcritical and supercritical _discontinuous Hopf bifurcations_, respectively. The codimension-two point (b) is akin to a Hopf bifurcation at which the constant determining criticality vanishes. We expect that a locus of saddle-node bifurcations will emanate from this point, in a manner similar to that in smooth systems. Two of the loci of smooth Hopf bifurcations collide with the curve of discontinuity at (a) and (c). Near these points these Hopf bifurcations are subcritical. Unstable periodic orbits emanate from the Hopf bifurcations and are initially of sufficiently small amplitude to not intersect a switching manifold. However, as we move away from the Hopf bifurcations, the amplitudes of the Hopf cycles grow and they graze the switching manifold along the dotted curves in fig. [fig:bsmag].
No bifurcation occurs at the grazing because the system is continuous. As we will show in theorem [th:c2hb], the grazing curves intersect the Hopf loci tangentially. The unstable cycles persist beyond grazing until they collide with a stable cycle in a saddle-node bifurcation. Loci of saddle-node bifurcations of periodic orbits are not shown in the figures because we have not been able to accurately numerically compute more than a single point on these curves (the point shown in fig. [fig:bsmag]a), due to the stiffness, non-smoothness and high dimensionality of the system ([eq:alldot]). We expect one such curve to emanate from (c) and lie extremely close to the upper grazing curve, as has recently been shown for two-dimensional systems. Fig. [fig:bsmag]b shows a second magnification of fig. [fig:bifset], near the points (d) and (e). Loci of Hopf bifurcations and saddle-node bifurcations of the equilibrium have endpoints at (d) and (e) that lie on a curve of discontinuity. We will show in [sec:codim2] that bifurcations and dynamical behavior in neighborhoods of (d) and (e) are predicted by theorems [th:c2hb] and [th:c2sn], respectively. To the left of the point (d), no bifurcation occurs along the curve of discontinuity. To the right of (e), points on the curve of discontinuity act as saddle-node bifurcations, hence we refer to these as _discontinuous saddle-node bifurcations_. From the point (d) to very close to (e), the curve of discontinuity corresponds to a supercritical discontinuous Hopf bifurcation. A Takens-Bogdanov bifurcation occurs at (f), where the Hopf locus terminates at the saddle-node locus, and the point (g) corresponds to a cusp bifurcation. Near the points (e) and (f) we believe there are a variety of additional bifurcations that we have not yet identified. The system exhibits stable oscillations in the region between the smooth Hopf bifurcations and discontinuous Hopf bifurcations, but to our knowledge, oscillations in this parameter region have not been observed experimentally.

[Fig. [fig:bif150]: one-parameter bifurcation diagram at a fixed value of the mass transfer coefficient. The equilibrium is colored blue when stable and red otherwise. Black [magenta] dots correspond to local maxima [minima] that stable oscillations obtain. Two Hopf bifurcations are indicated by circles. The asterisk corresponds to a point where two growth rates are equal at equilibrium. The period of the oscillations is also indicated.]

The experimentally observed oscillations correspond to the region between the two Hopf loci in fig. [fig:bifset]. In this section, we will discuss the dynamics in this region in more detail. Fig. [fig:bif150] shows a one-parameter bifurcation diagram of the system ([eq:alldot]) at a fixed value of the mass transfer coefficient. The black [magenta] dots represent local maxima [minima] on the stable oscillating cycle. For dilution rates in an intermediate range, all three pathways are at some time preferred over one period of the solution; see fig. [fig:ts150]. In particular, very soon after the preferred pathway changes from glucose oxidation to fermentation (green to cyan in fig. [fig:ts150]), the concentration of dissolved oxygen rebounds slightly before continuing to decrease. Thus local maxima appear below the equilibrium value in fig. [fig:bif150]. For larger values of the dilution rate, still to the left of the rightmost Hopf bifurcation, fermentation is no longer a preferred pathway at any point on the stable solution, and the lower local maximum is lost.
Also, the absolute maximum undergoes two cusp catastrophes.

[Fig. [fig:ts150]: time series at two parameter values, in panels a and b. The solution is colored cyan, magenta or green according to which pathway is preferred.]

[Fig. [fig:dots3d]: bifurcation diagrams of the system ([eq:alldot]) at six different values of the mass transfer coefficient, with the same color scheme as fig. [fig:bif150]. A curve of discontinuity and the Hopf loci shown in fig. [fig:bifset] are also included.]

Different values of the mass transfer coefficient yield similar bifurcation diagrams; we show a collection in fig. [fig:dots3d]. As a general rule there is a rapid change from a stable equilibrium to a large amplitude orbit near the leftmost Hopf bifurcation, and as the dilution rate is increased the amplitude and period of the orbit decrease. The behavior near the leftmost Hopf bifurcation is actually quite complex, as indicated in fig. [fig:dots150z], which is a magnification of fig. [fig:bif150].

[Fig. [fig:dots150z]: magnification of fig. [fig:bif150] near the leftmost Hopf bifurcation. Local minima are not shown.]

Here the Hopf bifurcation is supercritical, giving rise to a stable orbit which then undergoes a period-doubling cascade to chaos over an extremely small interval; the solution appears chaotic very soon after the first period-doubling. At a slightly larger dilution rate the attractor suddenly explodes in size and the oscillation amplitude grows considerably. As the dilution rate decreases toward this point we observe a period-adding sequence. Period-adding sequences are characterized by successive jumps in the period in a manner that forms an approximately arithmetic sequence. Such sequences have been observed in models of many physical systems. To our knowledge period-adding sequences are not completely understood, but they seem to arise when periodic solutions interact with an invariant manifold of a saddle-type equilibrium, giving rise to a Poincaré map that is piecewise-smooth and often discontinuous. Period-adding in one-dimensional piecewise-smooth maps has been the subject of recent research. Dynamical behavior between period-adding windows (intervals of the bifurcation parameter within which the period undergoes no sudden change) is determined by the types and order of various local bifurcations. In fig. [fig:dots150z], the period appears to go to infinity in the period-adding sequence. Within the extremely small regions between windows, we have identified period-doubling bifurcations and complicated attractors, although these attractors deviate only slightly from the observed periodic orbits.

This section studies dynamics near two of the codimension-two, discontinuous bifurcation scenarios that were identified in [sec:bifset]. Adopting a general viewpoint, we will first unfold the simultaneous occurrence of a saddle-node bifurcation and a discontinuous bifurcation; our results are summarized in fig. [fig:bssch]a. The tangency illustrated in this figure matches our numerically computed bifurcation set, specifically point (e) of fig. [fig:bsmag]b. Secondly we will unfold the simultaneous occurrence of a Hopf bifurcation and a discontinuous bifurcation, see fig. [fig:bssch]b. This theoretical prediction also matches numerical results, specifically the points (a), (c) and (d) of fig. [fig:bsmag].

[Fig. [fig:bssch]: schematic bifurcation sets for theorems [th:c2sn] and [th:c2hb]. SN = saddle-node bifurcation, HB = Hopf bifurcation. Along the curve labelled "grazing", the Hopf cycle intersects the switching manifold at one point. Along one parameter axis, an equilibrium lies on the switching manifold.]
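The period-adding phenomenon mentioned above is easiest to reproduce in a one-dimensional piecewise-smooth map. The following toy example is our own (a contracting circle map, not derived from the flow); its mode-locking windows are ordered by the Farey sequence, and as the parameter decreases toward the border of the period-1 region, the attractor period runs through 2, 3, 4, ... and diverges:

```python
# Period adding in a piecewise-smooth (here discontinuous) 1-D map:
# the contracting circle map f(x) = a*x + mu (mod 1), with 0 < a < 1.
a = 0.5

def attractor_period(mu, n_transient=5000, n_max=400, tol=1e-10):
    x = 0.123
    for _ in range(n_transient):        # settle onto the attractor
        x = (a*x + mu) % 1.0
    x0, n = x, 0
    while n < n_max:
        x = (a*x + mu) % 1.0
        n += 1
        if abs(x - x0) < tol:
            return n
    return None                          # aperiodic or period > n_max

for mu in (0.70, 0.60, 0.55, 0.52, 0.512, 0.505):
    print(f"mu = {mu}: period {attractor_period(mu)}")   # -> 2, 3, 4, 5, 6, 7
```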
The results of this section are presented formally in theorems [th:c2sn] and [th:c2hb], proofs of which are given in appendix [sec:proofs]. Proofs of lemmas [le:c2hb2d] and [le:c2hb2dgen] are described elsewhere. Throughout this section we use arbitrary parameters $\mu$ and $\eta$ that do not relate to particular parameters of ([eq:alldot]). To clarify our notation, $O(k)$ denotes terms that are of order $k$ or greater in _every_ variable and parameter on which the given expression depends. In the neighborhood of a single switching manifold, an $N$-dimensional, piecewise-smooth, continuous system may be written in terms of two sufficiently smooth (at least $C^2$) half-systems that agree on the manifold. The switching manifold is a parameter-dependent set defined by the vanishing of a smooth scalar function; if the gradient of this function is non-zero at the origin, then locally the switching manifold is an $(N-1)$-dimensional manifold intersecting the origin. Via coordinate transformations similar to those in the literature, we may assume, to order $O(2)$, that the scalar function is simply equal to the first coordinate $x_1$. The higher order terms do not affect our analysis below; thus for simplicity, in what follows we will assume the function is identically equal to $x_1$. The switching manifold is then the plane $x_1 = 0$, and we will refer to the system governing $x_1 \le 0$ as the _left-half-system_ and that governing $x_1 \ge 0$ as the _right-half-system_. We assume that when $\mu = 0$, the origin is an equilibrium. Since the origin lies on the switching manifold and the system ([eq:genflow]) is continuous, it is an equilibrium of both left and right half-systems. We will assume that zero is not an eigenvalue of the linearization about the origin when $\mu = 0$. Then by the implicit function theorem the half-system has an equilibrium, $x^{*}(\mu,\eta)$, with $x^{*}(0,0)=0$, that depends upon the parameters as a $C^2$ function in some neighborhood of the origin. As is generically the case, we may assume that the distance of the equilibrium from the switching manifold varies linearly with some linear combination of the parameters; without loss of generality we may assume $\mu$ is a suitable choice. In this case, the implicit function theorem implies that, after a reparameterization, the equilibrium lies on the switching manifold exactly when $\mu = 0$. By performing a nonlinear change of coordinates we may factor $\mu$ out of the constant term in the system, writing it as $\mu\,b(\mu,\eta)$ with $b$ a $C^1$ function; notice the transformation does not alter the switching manifold. The system is now $$\dot x = \begin{cases} f^{(L)}(x;\mu,\eta)\ , & x_1 \le 0\ , \\ f^{(R)}(x;\mu,\eta)\ , & x_1 \ge 0\ , \end{cases} \label{eq:theflow}$$ with $$f^{(i)}(x;\mu,\eta) = \mu\,b(\mu,\eta) + A_i(\mu,\eta)\,x + g^{(i)}(x;\mu,\eta)\ , \label{eq:fiform}$$ where $A_L$ and $A_R$ are $N \times N$ matrices that are functions of $\mu$ and $\eta$, and $g^{(i)}$ collects the terms nonlinear in $x$. Since ([eq:theflow]) is continuous, the matrices $A_L$ and $A_R$ have matching elements in all but possibly their first columns. It directly follows that the _adjugate_ matrices (if $A$ is non-singular, then $\mathrm{adj}(A) = \det(A)\,A^{-1}$) of $A_L$ and $A_R$ share the same first row, $$\xi^{\sf T} \equiv e_1^{\sf T}\,\mathrm{adj}(A_L) = e_1^{\sf T}\,\mathrm{adj}(A_R)\ . \label{eq:xidef}$$ To understand the role of the vector $\xi$, consider equilibria of ([eq:theflow]). When $\mu=0$, the origin is an equilibrium. For small non-zero $\mu$, each half-system has an equilibrium, $x^{*(i)}(\mu,\eta)$, with first component $$x_1^{*(i)} = -\mu\,\frac{\xi^{\sf T} b}{\det(A_i)} + O(\mu^2)\ ,$$ provided that $\det(A_i) \ne 0$. Notice that our non-degeneracy assumption ([eq:transvcond]) is satisfied if $\xi^{\sf T} b \ne 0$. If $x_1^{*(L)} \le 0$, then $x^{*(L)}$ is an equilibrium of the piecewise-smooth system ([eq:theflow]) and is said to be _admissible_; otherwise it is _virtual_.
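The two algebraic facts just used can be checked numerically. The following sketch (our own) verifies that matrices differing only in their first columns have adjugates with identical first rows, and that the first component of the equilibrium of the linear part obeys the formula above:

```python
import numpy as np

# Check: (i) if A_L and A_R differ only in their first columns, the first
# rows of their adjugates coincide; (ii) the equilibrium of
# dx/dt = mu*b + A x has first component x1* = -mu * (xi . b) / det(A).
rng = np.random.default_rng(1)

def adjugate(A):
    return np.linalg.det(A)*np.linalg.inv(A)

A_L = rng.normal(size=(4, 4))
A_R = A_L.copy()
A_R[:, 0] = rng.normal(size=4)          # change only the first column
xi_L, xi_R = adjugate(A_L)[0], adjugate(A_R)[0]
print(np.allclose(xi_L, xi_R))          # -> True

mu, b = 0.01, rng.normal(size=4)
x_star = -mu*np.linalg.solve(A_R, b)    # equilibrium of the linear part
print(np.isclose(x_star[0], -mu*(xi_R @ b)/np.linalg.det(A_R)))  # -> True
```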
Similarly, $x^{*(R)}$ is admissible if and only if $x_1^{*(R)} \ge 0$. Finally, notice that if $\det(A_L)$ and $\det(A_R)$ are of the same sign, then the two equilibria are admissible for different signs of $\mu$, whereas if the determinants have opposite signs, the two equilibria are admissible for the same sign of $\mu$. The former case is known as _persistence_; the latter is known as a _non-smooth fold_. The following theorem describes dynamical phenomena near this codimension-two point.

+ Consider the system ([eq:theflow]) with ([eq:fiform]) and the setup above. Suppose that near the origin, $A_L$ has a zero eigenvalue with associated eigenvector $v$. In addition, suppose that the zero eigenvalue is of algebraic multiplicity 1 and is the only eigenvalue of $A_L$ with zero real part. Then $v$ has a non-zero first component and the magnitude of $v$ may be scaled such that $\xi^{\sf T} v = 1$. Finally, suppose that three non-degeneracy conditions hold (inequalities on the constant, linear and quadratic terms; they enter the proof in appendix [sec:proofs]). Then there exists a unique $C^1$ function $\mu(\eta)$, with vanishing value and derivative at $\eta = 0$, such that in a neighborhood of the origin the curve $\mu = \mu(\eta)$ corresponds to a locus of saddle-node bifurcations of equilibria of ([eq:theflow]) that are admissible on one side. [th:c2sn]

A proof of theorem [th:c2sn] is given in appendix [sec:proofs]. The theorem implies a bifurcation diagram like that depicted in fig. [fig:bssch]a; in particular the curve of saddle-node bifurcations is tangent to the parameter axis as shown. The second theorem, theorem [th:c2hb], describes dynamical phenomena near the origin when $A_L$ has a purely imaginary complex eigenvalue pair. The method of proof is essentially a standard dimension reduction by restriction to the center manifold of the left-half-system; the dynamics of the resulting planar system are determined by the following two lemmas. Lemma [le:c2hb2dgen] provides a transformation of the planar system to observer canonical form. Lemma [le:c2hb2d] describes local dynamics of the planar system in this canonical form.

+ Consider the two-dimensional ($N = 2$) system $$\left[\begin{array}{c} \dot x \\ \dot y \end{array}\right] = \left[\begin{array}{c} f(x,y;\mu,\eta) \\ g(x,y;\mu,\eta) \end{array}\right] = \left[\begin{array}{c} 0 \\ -\mu \end{array}\right] + \left[\begin{array}{cc} \eta & 1 \\ -\delta(\mu,\eta) & 0 \end{array}\right] \left[\begin{array}{c} x \\ y \end{array}\right] + O(|x,y|^2) + O(k) \ . \label{eq:2dflowhb}$$ Suppose that (i) $\delta > 0$ for small $\mu$ and $\eta$, and (ii) the criticality coefficient, evaluated at the origin, is non-zero. Then, near the origin, there is a unique equilibrium given by $C^1$ functions of the parameters, and there exist $C^1$ functions $\eta_{\rm HB}(\mu)$ and $\eta_{\rm g}(\mu)$, both vanishing at $\mu = 0$, such that for small $\mu$ the curve $\eta = \eta_{\rm HB}(\mu)$ corresponds to Andronov-Hopf bifurcations of the equilibrium and the curve $\eta = \eta_{\rm g}(\mu)$ corresponds to the associated Hopf cycles intersecting the $y$-axis tangentially at one point. The Hopf bifurcations are supercritical if the criticality coefficient is negative and subcritical if it is positive. The Hopf cycle lies entirely in the left half-plane if and only if $\mu > 0$ and $\eta$ lies between $\eta_{\rm HB}(\mu)$ and $\eta_{\rm g}(\mu)$. [le:c2hb2d]

+ Consider the system ([eq:theflow]) with ([eq:fiform]) for $N = 2$ and the setup above. Suppose $A_L$ has a complex conjugate pair of eigenvalues that is purely imaginary and non-zero at the origin and depends transversally on the parameters. Then there is a nonlinear transformation, not altering the switching manifold, such that the left-half-flow of ([eq:theflow]) is given by ([eq:2dflowhb]). All conditions in lemma [le:c2hb2d] will be satisfied except possibly the non-degeneracy condition on the criticality coefficient. [le:c2hb2dgen]

+ Consider the system ([eq:theflow]) with ([eq:fiform]) and the setup above. Suppose that near the origin, $A_L$ has a complex conjugate pair of eigenvalues with associated eigenvectors. Suppose (1) the pair is purely imaginary and non-zero at the origin, and $A_L$ has no other eigenvalues on the imaginary axis, (2) and (3) further transversality conditions hold,
and (4) either the real or the imaginary part of the relevant eigenvector component is non-zero. Then, in the extended coordinate system (with the parameters appended as trivial variables), there exists a four-dimensional center manifold for the left-half-system that passes through the origin and is not tangent to the switching manifold at this point. Furthermore, there exists a coordinate system $(\hat x_1, \hat x_2)$ on the center manifold in which the left-half-flow of ([eq:theflow]) restricted to the manifold is given by ([eq:2dflowhb]) (with "hatted" variables), and all conditions in lemma [le:c2hb2dgen] will be satisfied. [th:c2hb]

See appendix [sec:proofs] for a proof. Theorem [th:c2hb] implies that an $N$-dimensional system near a codimension-two point with a simultaneous Hopf and discontinuous bifurcation will have a bifurcation diagram like that shown in fig. [fig:bssch]b. In particular, the curves of grazing and Hopf bifurcations are tangent to one another at the origin. For this scenario in two dimensions it is known that a curve of saddle-node bifurcations of the Hopf cycle may exist very close to the grazing curve. We have not been able to extend this result to higher dimensions.

In this paper we investigated the onset of stable oscillations and more complex behavior in a model of _S. cerevisiae_ growth taken from Jones and Kompala. The model assumes an instantaneous switching between competing metabolic pathways, resulting in a piecewise-smooth, continuous system of ODEs. In this paper we identified a variety of discontinuity induced bifurcations. The model exhibits stable oscillations that arise from Andronov-Hopf bifurcations for intermediate values of the dilution rate; these have also been observed experimentally. As the dilution rate grows, the oscillation amplitude suddenly jumps to a much larger value just slightly beyond the Hopf bifurcation. We do not have a detailed explanation for this sudden amplitude change. As the dilution rate is increased further, the resulting stable orbits undergo a complex sequence of bifurcations causing their periods and amplitudes to decrease, until a second Hopf bifurcation occurs, resulting again in a stable equilibrium.
For the model ([eq:alldot]), a change in the preferred metabolic pathway at an equilibrium results in the loss of differentiability for orbits in its neighborhood. The result is often a discontinuity induced bifurcation, and we have identified discontinuous saddle-node and Hopf bifurcations. The system also exhibits codimension-two bifurcations that correspond to simultaneous discontinuous saddle-node and Hopf bifurcations. We have provided a rigorous unfolding of these scenarios from a general viewpoint. While the behaviors that we studied are specific to piecewise-smooth, continuous models, a model in which this simplification is relaxed should still exhibit much the same behavior. For example, if the relative strength of two competing pathways reverses, then exponential growth will lead to the dominance of one over the other over a small parameter range and a short timescale. Though the bifurcations generic to smooth systems are restricted relative to those of discontinuous systems, a rapid sequence of these bifurcations over a small range of parameters may lead to the same behavior on a rougher scale as the discontinuous one. This can be seen, for example, in the simplest models, such as a smoothed one-dimensional tent map.

* Proof of theorem [th:c2sn] *
Recall that for any matrix $A$, $\mathrm{adj}(A)\,A = \det(A)\,I$. Putting $A = A_L$, for which the determinant vanishes, multiplying on the left by $e_1^{\sf T}$ and using ([eq:xidef]) we obtain $\xi^{\sf T} A_L = 0$; thus $\xi$ is a left eigenvector of $A_L$ for the zero eigenvalue. Consequently we may indeed choose the length of the right eigenvector $v$ such that $\xi^{\sf T} v = 1$. Now we show that $v$ has a non-zero first element. Suppose for a contradiction that it vanishes. Consider the matrices formed by removing the first column and one row from $A_L$: each such matrix annihilates the vector of remaining components of $v$ and is therefore singular. Hence each element in the first column of the cofactor matrix of $A_L$ is zero, so $\xi = 0$, which is a contradiction. Therefore the first component of $v$ is non-zero. Let $v = v^{(1)}, v^{(2)}, \ldots, v^{(N)}$ be generalized eigenvectors of $A_L$ that form a basis, and let $V = [v^{(1)} \cdots v^{(N)}]$. We introduce the linear change of coordinates $x = V\hat x$ and let $$\left[\begin{array}{c} V^{-1} f^{(L)}(V\hat x;\mu,\eta) \\ 0 \\ 0 \end{array}\right] \ , \label{eq:fproof}$$ denote the extended left-half-flow (with the parameters appended as trivial variables) in the basis of generalized eigenvectors. The Jacobian of ([eq:fproof]) at the origin ([eq:dfproof]) has a three-dimensional nullspace, spanned by the eigenvector direction and the two parameter directions $e_{N+1}$ and $e_{N+2}$. The local center manifold is tangent to this nullspace; thus on it the remaining coordinates are given by a graph with no linear part over the central ones. Restricted to the center manifold, the dynamics ([eq:fproof]) become a scalar equation ([eq:x1hatdot]) for $\hat x_1$, in which the remaining terms are nonlinear in the central variables. By expanding each term in ([eq:x1hatdot]) to second order we obtain the truncated equation ([eq:x1hatdot2]). Let $\hat x_1^{*}$ be an equilibrium of ([eq:x1hatdot2]). Since the relevant quadratic coefficient is non-zero by hypothesis, the implicit function theorem yields a unique $C^1$ function describing the equilibrium, and the linearization about the equilibrium has an associated eigenvalue of zero exactly when the scalar condition ([eq:derivxhatdot]) is satisfied. Since the corresponding transversality condition holds by hypothesis, the implicit function theorem again implies that there exists a unique $C^1$ function $\mu(\eta)$ such that ([eq:derivxhatdot]) is satisfied when $\mu = \mu(\eta)$. We now show that saddle-node bifurcations occur for the left-half-flow on this curve, when $\eta$ is small, by verifying the three conditions of the saddle-node bifurcation theorem (see for instance standard texts): (1) by construction, the linearization has a zero eigenvalue of algebraic multiplicity one, and there are no other eigenvalues with zero real part when $\eta$ is sufficiently small; (2) the constant term is non-degenerate with respect to the left eigenvector; and (3) the quadratic coefficient is non-zero.
Finally, notice that on the bifurcation curve and by ([eq:hhat]), the first component of the equilibrium is non-positive for the relevant sign of $\eta$; thus the equilibrium at the saddle-node bifurcation is admissible there.

* Proof of theorem [th:c2hb] *
Since the real and imaginary parts of the eigenvectors, $u^{(1)}$ and $u^{(2)}$, are linearly independent, there exists a non-singular $2 \times 2$ matrix $U$ built from the relevant components; for the remainder of this proof we fix such a choice without loss of generality. Define two new vectors by $$\left[v^{(1)}~v^{(2)}\right] = \left[u^{(1)}~u^{(2)}\right] U^{-1} \ , \label{eq:v1v2def}$$ let $V$ be the matrix whose first two columns are $v^{(1)}$ and $v^{(2)}$, completed to a basis, and introduce the new coordinate system $x = V\hat x$. Note that the inclusion of the matrix $U$ in ([eq:v1v2def]) allows for simplification below. The extended left-half-flow in the new coordinates is given by ([eq:fproof]) as before. The Jacobian ([eq:dfproof]) has a four-dimensional linear center manifold, spanned by the two eigenvector directions and the two parameter directions $e_{N+1}$ and $e_{N+2}$; it is not tangent to the switching manifold, by condition (iv) of the theorem. On the local center manifold the remaining coordinates are given by a graph $h(\hat x_1,\hat x_2;\mu,\eta)$ with no linear part, and the dynamics on the manifold are described by $$\left[\begin{array}{c} \dot{\hat x}_1 \\ \dot{\hat x}_2 \end{array}\right] = \left[\begin{array}{c} e_1^{\sf T} \\ e_2^{\sf T} \end{array}\right] \Big( V^{-1}\mu b + V^{-1} A_L V\, h(\hat x_1,\hat x_2;\mu,\eta) + V^{-1} g^{(L)}(V h(\hat x_1,\hat x_2;\mu,\eta);\mu,\eta) \Big) \ ,$$ where the last term collects all contributions nonlinear in the central variables. By using ([eq:v1v2def]) and $A_L\left[u^{(1)}~u^{(2)}\right] = \left[u^{(1)}~u^{(2)}\right] D$, where $D$ is the $2\times 2$ real block of the imaginary eigenvalue pair, we obtain $$\left[\begin{array}{c} \dot{\hat x}_1 \\ \dot{\hat x}_2 \end{array}\right] = \mu\,\hat b(\mu,\eta) + \hat A_L(\mu,\eta) \left[\begin{array}{c} \hat x_1 \\ \hat x_2 \end{array}\right] + O(2) \ ,$$ where $$\hat A_L = \left[\begin{array}{c} e_1^{\sf T} \\ e_2^{\sf T} \end{array}\right] V^{-1} A_L V \left[e_1~e_2\right] = U D U^{-1} \ ,$$ and $$\hat b = \left[\hat A_L~0 \cdots 0\right] A_L^{-1} b \ .$$ Finally, it is easily verified that the resulting planar system has the form ([eq:2dflowhb]), which completes the proof.

The following is a complete list of functions that are present in the model (appendix [sec:param]).

A. Fiechter and H. K. von Meyenburg, Regulatory properties of growing cell populations of Saccharomyces cerevisiae in a continuous culture system, in A. Kockova-Kratochvilova, editor, _Proceedings of the 2nd Symposium on Yeast held in Bratislava, 16-21 July 1966_, Slovenskej Akademie Vied, pages 387-398, 1968.
H. K. von Meyenburg, Stable synchrony oscillations in continuous cultures of Saccharomyces cerevisiae under glucose limitation, in B. Chance, E. K. Pye, T. K. Ghosh and B. Hess, editors, _Biological and Biochemical Oscillators_, pages 411-417, Academic Press, New York, 1973.
We perform a bifurcation analysis of the mathematical model of [K.D. Jones and D.S. Kompala, Cybernetic model of the growth dynamics of _Saccharomyces cerevisiae_ in batch and continuous cultures, _J. Biotech._, 71:105-131, 1999]. Stable oscillations arise via Andronov-Hopf bifurcations and exist for intermediate values of the dilution rate, as has been noted from experiments previously. A variety of discontinuity induced bifurcations arise from a lack of global differentiability. We identify and classify discontinuous bifurcations, including several codimension-two scenarios. Bifurcation diagrams are explained by a general unfolding of these singularities.
Understanding information in classical and quantum physics has helped us shed light on the fundamental nature of these theories. Indeed, it has even been suggested that quantum theory could be more naturally formulated in terms of its information-theoretic properties. Yet, we have barely scratched the surface of understanding the role of information in the natural world. To gain a deeper understanding of information in physical systems, and to help explain _why_ nature is quantum, it is sometimes instructive to take a step back and view quantum mechanics in a much broader context of possible physical theories. Many examples are known which indicate that if our world were only slightly different, our ability to perform information processing tasks could change dramatically. However, before we can hope to really investigate general theories from the perspective of information processing, we first need to find a way to quantify information. In a quantum and classical world, this can be done using the von Neumann and Shannon entropy respectively, which capture our notions of information and uncertainty in an intuitive way. These quantities have countless practical applications, and have played an important role in understanding the power of such theories with respect to information processing. Here, we propose a measure of information that applies to _any_ physical theory which admits the minimal notions of finite physical _systems_, their _states_, and the probabilistic outcomes of _measurements_ performed on them. Many such theories have been suggested, each of which shares some aspects with quantum theory, yet has important differences. For example, we might consider quantum mechanics itself with a limited set of allowed measurements, quantum mechanics in a real Hilbert space, generalized probabilistic theories, general C*-algebraic theories, box world (a theory admitting all non-signalling correlations, previously called generalized non-signalling theory), classical theories with an epistemic restriction, or theories derived by relaxing uncertainty relations. We propose an entropic measure of information that can be used in any such theory in section [sec:entropydef]. We will show that our measure reduces to the von Neumann and Shannon entropy in the quantum and classical setting respectively. In addition, we show that it shares many of their appealing intuitive properties. For example, we show that the quantity is always positive and bounded for the finite systems we consider. This provides us with the notion that each system has some maximum amount of information that it can contain. Furthermore, we might expect that mixing increases entropy, i.e.,
that the entropy of a probabilistic mixture of states cannot be less than the average entropy of its components. This is indeed the case for our entropic quantity. Another property that is desirable of a useful measure of information is that it should take on a similar value for states which are close, in the sense that there exists no way to tell them apart very well. This is the case for the von Neumann and Shannon entropy, and also for our general entropic quantity, given one extra minor assumption. Finally, when considering two different systems, one may consider how the entropy of the joint system relates to the entropy of the individual systems. It is intuitive that our uncertainty about the entire system should not exceed the sum of our uncertainties about the two parts individually. This property is known as subadditivity and is obeyed by our measure of entropy, given one additional reasonable assumption on the physical theory. Our entropic quantity thus behaves in very intuitive ways. Yet, we will see that there exist physical theories for which it is not strongly subadditive, unlike in quantum mechanics. Of course, there are multiple ways to quantify information, and we discuss our choice by examining some alternatives and possible extensions, such as notions of accessible information and relative entropy, as well as Rényi entropic quantities, in sections [sec:renyi] and [sec:decomp]. Clearly, it is also desirable to capture our uncertainty about some system _conditioned_ on the fact that we have access to another system. This is captured by the _conditional_ entropy, for which we provide two definitions in section [sec:conditionaldef], which are both interesting and useful in their own right. Based on such definitions we also define notions of mutual information which allow us to quantify the amount of information that two systems hold about each other. Our first definition of conditional entropy is analogous to the quantum setting, and indeed reduces to the conditional von Neumann entropy in a quantum world. This is an appealing feature, and opens the possibility of interesting operational interpretations of this quantity, as in a quantum setting. Yet, we will see that there exists a theory (called box world) for which not only is the subadditivity of the conditional entropy violated, but where conditioning also _increases_ entropy. Intuitively, we would not expect to grow more uncertain when given additional information, which we could always choose to ignore. We will hence also introduce a second definition of conditional entropy, which does not reduce to the von Neumann entropy in the quantum world. However, it has the advantage that in _any_ theory conditioning _reduces_ our uncertainty, as we would intuitively expect when taking an operational viewpoint. Nevertheless, even our second definition of the conditional entropy violates subadditivity. Naturally, one might ask whether the fact that both our definitions of the conditional entropy violate subadditivity is simply a shortcoming of our definitions. In section [sec:general] we therefore examine what properties any 'reasonable' measure of conditional entropy can have in principle.
By reasonable here we mean that if, given access to one system, we have no uncertainty about some classical information, then the quantity is 0, and otherwise it is positive (or even non-zero). We show that under this simple assumption there exists _no_ measure of conditional entropy in box world that is subadditive or obeys a chain rule. To give some intuition about how our entropies can be used outside of quantum theory, we examine a very simple example in box world in section [sec:boxentropy], which illustrates all the peculiar properties our entropies can have. This is based on a task in which Alice must produce an encoding of a string, such that Bob can retrieve any bit of his choosing with some probability (known as a random access encoding). It is known that superstrong random access codes exist in box world, leading to a violation of the quantum bound for such encodings. A similar game was used to argue that one of the defining characteristics that sets the quantum world apart from other possibilities (and particularly box world) is that communication of $n$ classical bits causes information gain of at most $n$ bits, a principle called 'information causality'. In section [sec:infocausality], we examine this statement using our entropic quantity. We notice that it is the failure of subadditivity of the conditional entropy in box world that leads to a violation of the inequality quantifying information causality. We conclude our examples by discussing the definition of 'information causality' more generally. In the classical, as well as the quantum setting, the Shannon and von Neumann entropies have appealing operational interpretations, as they capture our ability to compress information. In section [sec:codingtheorem], we show that our quantity has a similar interpretation for some physical theories. When defining entropy we have chosen to restrict ourselves to a minimal set of assumptions, only assuming that a theory has some notion of states and measurements. To consider compressing a state, or indeed decoding it again, however, we need to know a little more about our theory. In particular, we first have to define a notion of 'size' for any compression procedure to make sense. Second, we need to consider what kind of encoding and decoding operations we are allowed to perform. Given these ideas, and several additional assumptions on our physical theory, we prove a simple coding theorem. In section [sec:assumptions], we introduce a framework for describing states, measurements and transformations in general physical theories, followed in section [sec:examples] by some examples. In section [sec:entropy] we then define our entropic measures of information that can be applied in any theory. Examples of how these entropies can be applied in box world can be found in section [sec:boxentropy].
In section [sec:general] we examine what properties we can hope to expect from a conditional entropy in box world. Section [sec:infocausality] investigates the notion of information causality in our framework, and finally we show a coding theorem for many theories in section [sec:codingtheorem]. We conclude with many open questions in section [sec:openquestions]. We now present a simple framework, based on minimal operational notions (such as systems, states, measurements and probabilities), that encompasses both classical and quantum physics, as well as more novel possibilities (such as 'box world'). Our approach is similar to that of generalized probabilistic theories; however, it is slightly more general, as it does not assume that all measurements that are mathematically well-defined are physically implementable, or that joint systems can be characterised by local measurements. Firstly, we will assume that there is a notion of discrete physical _systems_. With each system we associate a set of allowed _states_, which may differ for each system. We furthermore assume that we can prepare arbitrary mixtures of states (for example by tossing a biased coin, and preparing a state dependent on the outcome), and therefore take the state space to be a convex set, with $p\,s_1 + (1-p)\,s_2$ denoting the state that is the mixture of $s_1$ with probability $p$ and $s_2$ with probability $1-p$. To characterize when two states are the same, or close to each other, we first need to introduce the notion of measurements. Secondly, we thus assume that on each system we can perform a certain set of allowed measurements. If the system is clear from context, we will omit the subscripts and simply write $\mathcal{S}$ for the state space and $\mathcal{M}$ for the measurements. With each measurement we associate a set of outcomes, which for simplicity of exposition we take to be finite. When a particular measurement is performed on a system, the probability of each outcome should be determined by its state. We therefore associate each possible outcome with a functional (an effect) $e: \mathcal{S} \to [0,1]$, where $e(s)$ is the probability of that outcome on the state $s$. Note that the unit effect corresponds to the functional that is equal to 1 on all states. Normalisation of measurements therefore requires that the effects in each measurement sum to the unit effect. In classical probability theory, the states are probability distributions, all normalised measurements are allowed, and transformations correspond to stochastic maps. In quantum theory, the convex set of states is the set of density operators (trace-1 positive operators), and effects correspond to linear functionals of the form $e(\rho) = \mathrm{tr}(E\rho)$, where $E$ is a positive operator. All measurements satisfying the normalisation constraint are allowed, and the fine-grained measurements are those for which all the $E$ are rank-1 operators. The allowed transformations represent completely positive trace-preserving maps. Note that, unlike other approaches, our framework also encompasses real Hilbert space quantum mechanics. Furthermore, because we do not assume that all well-defined operations are physically realizable, it can be used to study quantum or classical theory with a restricted set of states, measurements and transformations (for an interesting example in the classical case, consider Spekkens' toy model). The entropies we would assign in such cases would differ from the standard von Neumann entropy, and may be interesting to study.
in box world , the state of a single system corresponds to a conditional probability distribution $p ( a | x )$ , where $x$ and $a$ are elements of a finite set of ` inputs ' and ` outputs ' respectively . the intuition is that there is a special set of measurements on each system , labelled by the inputs ( referred to as _ fiducial _ measurements ) , and that any probability distribution for these measurements corresponds to an allowed state . we represent a system with a given number of possible inputs and outputs by a box in the diagrams . in the special case in which there is only one possible input , the conditional probability distribution reduces to the standard unconditional probability distribution , and we omit the input line to the box in the diagram . thus box world contains classical probability theory as a special case , and we will use such _ classical boxes _ to represent classical information in our treatment of information - theoretic protocols in box world . a multi - partite state in box world corresponds to a joint conditional probability distribution with a separate input and output for each system . aside from the usual constraints of normalisation and positivity , the allowed states must also satisfy the non - signalling conditions : the marginal probability distribution of any subset of the systems , obtained by summing over the outputs of the remaining systems , must be independent of the inputs of those remaining systems . this means that the other parties can not learn anything about a distant party 's measurement choice from their own measurement results . a bipartite state of particular interest is the pr - box state , for which all inputs and outputs are binary , and the probability distribution is $p ( a b | x y ) = 1/2$ if $a \oplus b = x y$ and $0$ otherwise , where $\oplus$ denotes addition modulo 2 . this state is ` more entangled ' than any quantum state , yielding correlations that achieve the maximum possible value of 4 for the clauser - horne - shimony - holt ( chsh ) expression , compared to $2 \sqrt{2}$ for quantum theory ( tsirelson 's bound ) , and $2$ for classical probability theory . we represent entanglement between systems in box world by a zigzag line between them , and classical correlations ( i.e. separable but non - product states ) by a dotted line . in box world , we allow all mathematically well - defined measurements and transformations to be physically implemented . all effects take the form of weighted sums of fiducial measurement effects , where the weights can be taken to be positive ; the effect corresponding to performing joint fiducial measurements and obtaining particular results is the elementary building block . because of this positivity , any effect can be expressed as a weighted sum of such fiducial measurement effects . it follows that a measurement is fine - grained if and only if each of its effects is proportional to some fiducial measurement effect , and that products of fine - grained measurements are themselves fine - grained . the shannon entropy and von neumann entropy are extremely useful tools for analyzing information processing in a classical or quantum world . here , we would like to define an analogous entropy for general probabilistic theories which reduces to the shannon and von neumann entropies for classical probability theory and quantum theory respectively . we would also like our new entropy to retain as many of the mathematical properties of the shannon and von neumann entropy as possible . not only will this help our new entropy conform to our intuitive notions , but it will make it easier to prove general results using these quantities , and transfer known results to the general case . note that although we can use any base for the logarithm in the definition of the shannon and von neumann entropies ( as long as we are consistent ) , in what follows we will use base 2 ( i.e. $\log \equiv \log_2$ ) throughout .
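to make these statements concrete , the following short sketch ( the function and variable names are ours , not from any reference implementation ) enumerates the pr - box distribution , checks the non - signalling condition numerically , and evaluates the chsh expression , recovering the maximal value 4 quoted above .

```python
import itertools
import math

# PR-box: p(a, b | x, y) = 1/2 if a XOR b equals x AND y, and 0 otherwise.
def pr_box(a, b, x, y):
    return 0.5 if (a ^ b) == (x & y) else 0.0

# CHSH value: sum of correlators E(x, y), with the sign flipped for x = y = 1,
# where E(x, y) = sum_{a,b} (-1)^(a+b) p(a, b | x, y).
def chsh(p):
    total = 0.0
    for x, y in itertools.product([0, 1], repeat=2):
        corr = sum((-1) ** (a + b) * p(a, b, x, y)
                   for a, b in itertools.product([0, 1], repeat=2))
        total += -corr if (x, y) == (1, 1) else corr
    return total

# No-signalling check: Alice's marginal must not depend on Bob's input y.
for a, x in itertools.product([0, 1], repeat=2):
    marginal = {y: sum(pr_box(a, b, x, y) for b in [0, 1]) for y in [0, 1]}
    assert marginal[0] == marginal[1]

print(chsh(pr_box))        # 4.0, the maximal no-signalling value
print(2 * math.sqrt(2))    # ~2.83, Tsirelson's bound for quantum theory
```

the same routine applied to any local deterministic box returns a value of at most 2 , which is the classical bound mentioned above .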
we now give a concrete definition of entropy for any physical theory , which satisfies the above desiderata . other definitions are certainly possible , and we will consider one alternative ( based on mixed state decomposition ) in section [ sec : decomp ] . however , the following definition has many appealing properties . given any state $\omega$ , we define its entropy as $h ( \omega ) = \inf_{e} h \big ( \{ e_x ( \omega ) \}_x \big )$ , where the infimum is taken over all fine - grained measurements $e$ on the state space and $h ( \cdot )$ is the shannon entropy of the probability distribution over possible outcomes of $e$ . this has an intuitive operational meaning as the minimal output uncertainty of any fine - grained measurement on the system . note that for information - gathering purposes , the best measurements are always fine - grained , and without restricting to this subset the unit measurement would always be optimal ( giving zero outcome uncertainty ) . furthermore , note that trivial refinements of a measurement always generate a higher output entropy , so it is sufficient to only consider measurements in the infimum that have no parallel effects . in appendix [ app : entropy ] , we prove that this entropy retains several important properties of the shannon and von neumann entropy . in particular , we show : 1 . ( _ reduction _ ) it reduces to the shannon entropy for classical probability theory , and the von neumann entropy for quantum theory . 2 . ( _ positivity and boundedness _ ) suppose that the minimal number of outcomes for a fine - grained measurement is $n$ . then for all states $\omega$ , $0 \le h ( \omega ) \le \log n$ . 3 . ( _ concavity _ ) for any $0 \le p \le 1$ and any mixed state : $h \big ( p \, \omega_1 + ( 1 - p ) \, \omega_2 \big ) \ge p \, h ( \omega_1 ) + ( 1 - p ) \, h ( \omega_2 )$ . 4 . ( _ limited subadditivity _ ) consider a theory with the additional property that products of fine - grained measurements remain fine - grained for composite systems . this is true in quantum theory , classical theory , and box world . when this holds , then for any bipartite state and its reduced states , $h ( \omega_{ab} ) \le h ( \omega_a ) + h ( \omega_b )$ . 5 . ( _ limited continuity _ ) consider a system for which all allowed measurements have at most $n$ outcomes , or for which restricting the allowed measurements to have at most $n$ outcomes does not change the entropy of any state . this is true in quantum theory , and also in box world and classical theory . then we can prove an analogue of the fannes inequality , which says that the entropies of two states which are close to one another do not differ by too much . we will also see in section [ sec : codingtheorem ] that the entropy has an appealing operational interpretation as a measure of compressibility for some theories . however , one property of the von neumann entropy that does not carry over is strong subadditivity . in particular , we will see in section [ sec : boxentropy ] that there exists a tripartite state in box world such that $h ( \omega_{abc} ) + h ( \omega_{b} ) > h ( \omega_{ab} ) + h ( \omega_{bc} )$ . based on the entropy , we can also define a notion of conditional entropy . in analogy to the von neumann entropy , we define the conditional entropy of a general bipartite state with reduced states $\omega_a$ and $\omega_b$ by $h ( a | b ) = h ( \omega_{ab} ) - h ( \omega_{b} )$ . this has the nice property that for quantum or classical systems it reduces to the conditional von neumann and shannon entropies respectively .
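the reduction of this infimum to the von neumann entropy can be checked numerically ; the following sketch ( a toy illustration under our own naming conventions ) samples random rank-1 projective measurements on a random quantum state and confirms that none of them beats the output entropy of the eigenbasis measurement .

```python
import numpy as np

rng = np.random.default_rng(0)

def shannon(p):
    p = p[p > 1e-12]
    return -np.sum(p * np.log2(p))

# A random mixed state on a 3-dimensional Hilbert space.
a = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = a @ a.conj().T
rho /= np.trace(rho).real

# Von Neumann entropy = Shannon entropy of the eigenvalue distribution.
s_vn = shannon(np.linalg.eigvalsh(rho))
print("von Neumann entropy:", s_vn)

# Output entropy of random rank-1 projective (fine-grained) measurements:
# every sample is >= s_vn, so the infimum is attained in the eigenbasis.
best = np.inf
for _ in range(2000):
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
    probs = np.array([(q[:, k].conj() @ rho @ q[:, k]).real for k in range(3)])
    best = min(best, shannon(probs))
print("best sampled measurement entropy:", best)
```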
in some theories ( including quantum theory but not classical probability theory ) , $h ( a | b )$ can be negative , which is strange , but opens the way for an appealing operational interpretation as in the quantum setting . however , unlike in quantum theory , we will see that this quantity has the counterintuitive property that it can _ decrease _ when ` forgetting ' information in some probabilistic theories . in particular , the violation of strong subadditivity in box world implies that it is possible to obtain $h ( a | b c ) > h ( a | b )$ , and that the conditional entropy is not subadditive . these properties will motivate us to consider an alternative definition of the conditional entropy below . however , we will show that no ` reasonable ' entropy in box world can have all the appealing properties of the conditional von neumann entropy . in analogy to the quantum case , we can also define the _ mutual information _ via $i ( a \! : \! b ) = h ( \omega_a ) + h ( \omega_b ) - h ( \omega_{ab} )$ . this quantity will be positive whenever subadditivity holds , and reduces to the usual mutual information in the quantum and classical case . similarly , we may define a notion of _ accessible information _ analogous to the quantum setting , as the supremum of the classical mutual information of the outcomes over measurements on the two systems . given the problems observed with the previous definition in some theories , we now define a second form of conditional entropy , based on measured outcomes , which sometimes captures our intuitive notions about information in a nicer way . for any bipartite state with reduced states we define $\hat{h} ( a | b ) = \inf_{e} \sum_{b} p_b \, h ( \omega_{a | b} )$ , where the infimum is taken over all measurements $e$ on the second system , and $\omega_{a | b}$ is the reduced state of the first system conditioned on obtaining measurement outcome $b$ when performing $e$ on the second system . this definition has the appealing property that conditioning on more systems always reduces the entropy , that is , $\hat{h} ( a | b c ) \le \hat{h} ( a | b )$ ( see appendix [ app : condentropy ] , lemma [ lem : conditioningreducesentropy ] ) , and it reduces to the conditional shannon entropy in the classical case . note , however , that $\hat{h} ( a | b )$ does not reduce to the conditional von neumann entropy in the quantum setting , as it is always positive . furthermore , we will see in section [ sec : general ] that it is not subadditive , and does not obey the usual chain rule ( even though a limited form of the chain rule holds in box world , as we show in appendix section [ sec : boxchain ] ) . nevertheless $\hat{h}$ seems quite a natural entropic quantity , and its corresponding quantum version has found an interesting application in the study of quantum correlations . we can also define a corresponding information quantity via $\hat{i} ( a \! : \! b ) = h ( \omega_a ) - \hat{h} ( a | b )$ , which is always positive . however , unlike $i ( a \! : \! b )$ , this definition is not symmetric and hence it can not really be considered ` mutual information ' . instead , $\hat{i} ( a \! : \! b )$ captures the amount of information that $b$ holds about $a$ . for cryptographic purposes , such as in the setting of device - independent security for quantum key distribution , it is useful to define the following rényi entropic variants of $\hat{h}$ : we define $\hat{h}_{\alpha}$ by replacing the shannon entropy above with the rényi entropy of order $\alpha$ ( with $\hat{h}_{\infty}$ obtained by taking the limit $\alpha \rightarrow \infty$ ) . these quantities can also be useful in order to bound the value of $\hat{h}$ itself , since the rényi entropies of order $\alpha > 1$ lower bound the shannon entropy . to define a notion of relative entropy , we adopt a purely operational viewpoint . suppose we are given $n$ copies of a state $\omega_1$ or a state $\omega_2$ . classically , as well as quantum mechanically , the relative entropy captures our ability to distinguish $\omega_1$ from $\omega_2$ for large $n$ . note that to distinguish the two cases , it is sufficient to coarse grain any measurement to a two outcome measurement , where without loss of generality we associate the outcome ` 1 ' with the state $\omega_1$ and ` 2 ' with $\omega_2$ .
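before turning to the distinguishing errors , it is worth checking the classical behaviour of these conditional quantities directly ; in the sketch below ( the joint distribution is an assumed toy example ) the unmeasured form $h ( a b ) - h ( b )$ and the measured form coincide , as claimed .

```python
import numpy as np

def shannon(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Joint distribution p(a, b) on two bits (an assumed toy example).
p_ab = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_b = p_ab.sum(axis=0)

# Unmeasured form: H(A|B) = H(AB) - H(B).
h_cond = shannon(p_ab) - shannon(p_b)

# Measured form: sum_b p(b) H(A | B=b); the only measurement on a classical
# system simply reads out its value, so the two definitions coincide.
h_meas = sum(p_b[b] * shannon(p_ab[:, b] / p_b[b]) for b in range(2))

print(h_cond, h_meas)   # both ~0.722 bits
```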
then $\alpha_n$ denotes the probability that we conclude that the state was $\omega_2$ , when really we were given $\omega_1$ . similarly , $\beta_n$ denotes the probability that we falsely conclude that the state was $\omega_1$ . in what is called asymmetric hypothesis testing , we wish to minimize the error $\beta_n$ while simultaneously demanding that $\alpha_n$ is bounded from above by a parameter $\epsilon$ . here we fix $\epsilon$ . we therefore want to determine the asymptotic behavior of the minimal $\beta_n$ . in a quantum setting , it has been shown that the quantum relative entropy is directly related to this quantity via the quantum stein 's lemma , which states that $\lim_{n \rightarrow \infty} - \frac{1}{n} \log \beta_n = d ( \omega_1 \| \omega_2 )$ . this is a deep result giving a clear operational interpretation to the relative entropy , telling us that in the large $n$ limit the probability of making the error decreases exponentially with $n$ . furthermore , as it is expressed in operational terms , we can simply adopt ( [ eq : rel_ent ] ) as our definition of relative entropy in any theory for which the limit is well defined . thus we recover the usual value in the quantum ( and classical ) case , and in all other theories we still capture the same operational interpretation . note also that our choice of $\epsilon$ was quite arbitrary , and one may consider a family of relative entropies , one for each choice of $\epsilon$ . in quantum theory , these are all equivalent , but they may yield different values in other theories . although the entropy defined above has several appealing properties , and seems quite intuitive , it is nevertheless interesting to consider alternative notions of entropy for general theories . one seemingly natural alternative is the decomposition entropy , which measures the mixedness of a state . there is a special subset of states which can not be obtained by mixing other states : these form the extreme points of the state space and are referred to as _ pure states _ ( with the remaining states being _ mixed _ ) . suppose that any state can be decomposed into a finite sum of pure states . then we can define the entropy of a state by the minimal shannon entropy of its decompositions into pure states , where a decomposition of a state is a probability distribution over the set of pure states that is non - zero for only a finite set of states . for our coding theorem we will additionally require measurements that disturb the measured state only weakly , in a sense quantified by constants depending on the particular theory . for example , suitable constants exist for projective measurements in quantum theory . any projective measurement in quantum theory fulfills these conditions , but these conditions alone do not define projective measurements , hence the slightly different name of pseudo - projective measurements . in quantum theory , the weak disturbance property can be understood as an instance of the gentle measurement lemma . furthermore , in order to prove our simple coding theorem , we will need to make some additional assumptions on the states and the measurements that achieve the minimal output entropy in our theory . in particular , we assume that for all states , the minimal output entropy can be attained by a pseudo - projective measurement . that is , we assume that for every state there exists some pseudo - projective measurement achieving the infimum in the definition of the entropy . we further assume that for all such measurements , the corresponding product measurement on many copies is fine - grained and pseudo - projective , and that coarse grainings of it can also be made pseudo - projective . lastly , we make an assumption on the dimension of the state space .
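returning to the operational definition of the relative entropy , the classical case can be worked out exactly ; in the sketch below ( the two distributions and the bound $\epsilon = 0.05$ are assumed toy values ) the optimal test is a threshold on the outcome counts , and the error exponent of $\beta_n$ slowly approaches the relative entropy , as stein 's lemma predicts .

```python
import numpy as np
from scipy.stats import binom

p, q = 0.7, 0.5    # Pr[outcome 1] under omega_1 and omega_2 (toy values)
D = p*np.log2(p/q) + (1-p)*np.log2((1-p)/(1-q))   # relative entropy, ~0.119 bits

# Optimal test: count the 1s among n i.i.d. outcomes and accept omega_1 when
# the count k is at least t; t is chosen so that alpha_n <= epsilon = 0.05.
for n in [50, 200, 800, 3200]:
    t = binom.ppf(0.05, n, p)            # alpha_n = Pr[k < t | p] <= 0.05
    beta = binom.sf(t - 1, n, q)         # beta_n  = Pr[k >= t | q]
    print(n, beta, -np.log2(beta) / n)   # exponent slowly approaches D

print("D =", D)
```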
these assumptions are all true in the classical and quantum case ( where the optimal measurement is projective ) . we will see in appendix [ sec : codingtheoremproof ] that this is all we will need to show the following simple coding theorem , following the steps taken by shannon and schumacher . we consider a source that emits one of a set of states with corresponding probabilities , chosen independently at random in each time step . when considering $n$ time steps , we hence obtain a sequence of states , where each sequence occurs with the product of the corresponding probabilities . a compression scheme consists of an encoding and decoding procedure . the encoding procedure maps each possible sequence into a state on a smaller space . in turn , the decoding procedure maps these states back to states on the original state space . in analogy with the quantum case , we say that the _ compression scheme has rate _ $r$ if the dimension of the smaller space grows as $2^{n r}$ . note that in order for a compression scheme to be useful , the rate must be smaller than that of the original space . a compression scheme is called _ reliable _ if we can recover the original state ( almost ) perfectly , in the sense that the average distance between the original and the reconstructed state can be made arbitrarily small for sufficiently large $n$ . note that the output of the source can be described as a mixed state in each time step , and a product of $n$ copies of this mixed state over the course of $n$ time steps . we then obtain the following theorem ( see appendix section [ sec : codingapp ] ) in terms of the entropy of the source . note that in order to establish that the entropy truly characterizes our ability to compress information , we would also like to have a converse stating that below the entropy rate there exists no reliable compression scheme . in quantum theory , it is not hard to prove the converse of the above theorem since the theory admits a strong duality between states and measurements , which may also hold for other theories . here , however , we explicitly tried to avoid introducing any such strong assumptions . at the core of our little coding theorem lies an observation about typical sequences analogous to the classical and quantum setting . define the set of $\delta$-typical outcomes when measuring the optimal measurement on each of the $n$ copies of the source state as those outcome sequences whose probability is close to $2^{ - n h }$ . since we assumed that any theory contains arbitrary coarse - grainings of measurements , we can consider the two - outcome measurement that groups together the typical and the atypical outcomes , which by assumption we can make pseudo - projective . we refer to the corresponding subspaces as the typical and atypical subspaces respectively . if we observe outcome ` t ' for this measurement , we conclude that the state lies in the typical subspace associated with the typical set . otherwise , we conclude that the state lies in the atypical subspace . note that by assumption this is a fine - grained measurement . for all states in the typical subspace , only outcomes in the typical set will occur . hence we have that the dimension of the typical subspace satisfies $\dim \le 2^{ n ( h + \delta ) }$ , and for any $\epsilon > 0$ and sufficiently large $n$ the probability of the typical outcome is at least $1 - \epsilon$ . given this statement about typical sequences , we can now complete the proof of theorem [ thm : codingtheorem ] : recall that the source emits each sequence of states with the corresponding product probability .
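before completing the argument , the typical - set counting that drives it can be reproduced for a classical source ; in the sketch below ( the source bias and the window $\delta$ are assumed toy values ) the typical set carries nearly all of the probability while its size stays below the bound $2^{ n ( h + \delta ) }$ .

```python
import numpy as np
from math import lgamma, log2

def log2_comb(n, k):
    # log2 of the binomial coefficient, stable for large n
    return (lgamma(n+1) - lgamma(k+1) - lgamma(n-k+1)) / np.log(2)

p1 = 0.1                                    # source emits '1' w.p. 0.1 (toy value)
H = -(p1*log2(p1) + (1-p1)*log2(1-p1))      # ~0.469 bits per symbol

delta = 0.05
for n in [200, 1000, 5000]:
    log_size, prob = -np.inf, 0.0
    for k in range(n + 1):
        logp = k*log2(p1) + (n-k)*log2(1-p1)    # log2-prob of one such sequence
        if abs(-logp/n - H) <= delta:           # the delta-typical window
            log_size = np.logaddexp2(log_size, log2_comb(n, k))
            prob += 2.0 ** (log2_comb(n, k) + logp)
    print(n, log_size, n*(H + delta), prob)  # log2|T| <= n(H+delta); prob -> 1
```

with this picture in mind , the compression step proceeds as follows .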
to compress the state we perform this pseudo - projective typical - subspace measurement . if we obtain outcome ` t ' ( corresponding to the typical subspace ) we output the post - measurement state , which by the weak disturbance property is close to the original state up to constants given by a particular theory . we then note that the inequality in the last line of the resulting estimate follows from the typical subspace theorem . as $\epsilon$ can be chosen to be arbitrarily small , this concludes our proof . we introduced entropic measures to quantify information in any physical theory that admits minimal notions of systems , states and measurements . even though these measures necessarily have some limitations , we nevertheless showed that they also exhibit many intuitive properties , and for some theories have an appealing operational interpretation , quantifying our ability to compress states . most of the problems we encountered with the conditional entropy seem to arise due to a violation of strong subadditivity . it is an interesting question whether quantum and classical theories are the only ones in which the entropy is strongly subadditive , or whether this is true for other theories . indeed , it would be an exciting question to turn things around and start by demanding that our entropic measures _ do _ satisfy these properties , and determine how this restricts the set of possible theories . above , we defined a natural entropic quantity $\hat{h}$ which differs from the conditional von neumann entropy in quantum theory , and whose quantum version has been used to study quantum correlations . it would be interesting to study whether this quantity can shed any further light on quantum phenomena , or if an alternative conditional entropy can be defined that behaves like $\hat{h}$ in box world , but still reduces to the conditional von neumann entropy in the quantum case . whereas we have proved some intuitive properties of our quantities , it is interesting to see whether other properties of the von neumann or shannon entropy carry over to this setting . in particular , it would be interesting to prove bounds on the mutual and accessible information analogous to holevo 's theorem when none of the systems are classical . another interesting question is whether one can find a closed form expression for the relative entropy in general theories . in quantum theory , we can define the mutual information ( and indeed the entropy itself ) in terms of the relative entropy : the mutual information of a bipartite state is the relative entropy between the state and the product of its marginals , and the entropy of a state is ( minus ) the relative entropy between the state and the identity operator . hence such an approach may also yield an alternative definition of other entropic quantities for general theories . the non - local game used in our example above was discovered in collaboration with andrew doherty , whom we thank for the kind permission to use it here . the authors also thank sergio boixo , matthew elliot and jonathan oppenheim for interesting discussions , and matt leifer and ronald de wolf for comments on an earlier draft . sw is supported by nsf grants phy-04056720 and phy-0803371 . ajs is supported by a royal society urf , and in part by the eu qap project ( ct-015848 ) . part of this work was done while ajs was visiting caltech ( pasadena , usa ) . s. popescu and d. rohrlich , `` causality and nonlocality as axioms for quantum mechanics , '' in _ proceedings of the symposium of causality and locality in modern physics and astronomy : open questions and possible solutions _ , 1997 . consider two states . clearly , the distance between them inherits non - negativity from the classical statistical distance , and equality to zero holds iff the two states are identical , by definition of the state space .
it remains to show that the distance obeys a triangle inequality . let $e$ be the optimal measurement to distinguish the two outer states . we then have a chain of inequalities in which the second follows from the fact that the classical statistical distance itself obeys the triangle inequality . we now show that the entropic quantity reduces to the shannon and von neumann entropy in the classical and quantum settings respectively . for the relation to the von neumann entropy , we will need the following little lemma . our goal will be to show that for any fine - grained measurement , the shannon entropy of the outcome distribution is always at least as large as that of the distribution obtained by measuring in the eigenbasis of the state . first of all , note that we can always extend a distribution over a given number of elements to a distribution over a larger number of elements by assigning zero probability to the additional elements ; clearly this leaves the shannon entropy unchanged . second , note that the completeness of the measurement effects immediately yields a relation between the two outcome distributions . consider the matrix determined by the corresponding entries , which allows us to write the one distribution as the image of the other . note that , by the normalisation of the measurement and of the eigenbasis , this matrix is doubly stochastic . using birkhoff 's theorem ( see e.g. standard references ) , we may thus write it as a convex combination of permutation matrices , with weights forming a probability distribution over the group of permutations . using the concavity of the shannon entropy we obtain the claimed inequality . as we can always measure a state in its eigenbasis , it follows that the infimum is attained there , and it is easy to see that the entropy then coincides with the von neumann entropy . _ boundedness : _ the existence of a fine - grained measurement with $n$ outcomes , combined with the fact that the shannon entropy is maximized by the uniform probability distribution , ensures boundedness . _ concavity : _ to see that the entropy is concave , suppose first that the infimum in its definition is achieved by some measurement . as effects are linear maps , the outcome distribution of a mixture of states is the corresponding mixture of the outcome distributions . hence , by the concavity of the shannon entropy , the claim follows . on the other hand , if the infimum is not achievable , then for all sufficiently small $\epsilon > 0$ we can find a measurement whose output entropy comes within $\epsilon$ of the infimum . using the same argument as before , and noting that this holds for all sufficiently small $\epsilon$ , the result follows . _ limited subadditivity : _ given an additional reasonable assumption , we can prove that the entropy is subadditive .
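as a quick sanity check before the argument , the inequality can be verified numerically in the quantum case , where the entropy reduces to the von neumann entropy ; the sketch below ( our own helper functions , randomly generated two - qubit states ) confirms it on a handful of samples .

```python
import numpy as np

rng = np.random.default_rng(2)

def vn_entropy(rho):
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]
    return -np.sum(vals * np.log2(vals))

def random_state(d):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def partial_trace(rho, d1, d2, keep):
    # reshape to indices (a, b, a', b') and trace out one subsystem
    r = rho.reshape(d1, d2, d1, d2)
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

for _ in range(5):
    rho_ab = random_state(4)                  # a random two-qubit state
    rho_a = partial_trace(rho_ab, 2, 2, 0)
    rho_b = partial_trace(rho_ab, 2, 2, 1)
    lhs = vn_entropy(rho_ab)
    rhs = vn_entropy(rho_a) + vn_entropy(rho_b)
    assert lhs <= rhs + 1e-9                  # subadditivity holds in quantum theory
    print(lhs, rhs)
```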
we first assume that there exist measurements on the two subsystems that each achieve the corresponding infimum . by assumption , the product of these measurements is a fine - grained measurement on the joint system . thus , by the subadditivity of the shannon entropy , the claim follows . now suppose that the infimum for one or both of the subsystems is not achieved . then for all sufficiently small $\epsilon > 0$ we can find measurements coming within $\epsilon$ of the respective infima , and as this holds for all sufficiently small $\epsilon$ the result follows . note that if the two systems are in a product state , and the theory only allows product measurements on the joint system , then equality holds . however , given that we allow an arbitrary set of joint measurements , equality need not hold for product states in all possible probabilistic theories ( consider the case in which the local entropies are large , but there exists a fine - grained measurement on the joint system with only 2 outcomes ) . _ limited continuity : _ here we prove an analogue of the fannes inequality , given the additional reasonable assumption that we can restrict to measurements with at most $n$ outcomes without changing the entropy of a system . suppose without loss of generality that the first state has the larger entropy . initially , we also suppose that the infimum in the definition of the entropy is achieved for some measurement . we can then bound the entropy difference by a chain of inequalities , where the first follows from the choice of measurement , and the second from fannes ' inequality applied to the classical case . if the infimum is not achieved , then for all sufficiently small $\epsilon > 0$ there nevertheless exists a measurement coming within $\epsilon$ of it ; following the same procedure as before , the result follows . the first inequality of the preceding lemma follows by choosing the unit measurement in the infimum in the definition of $\hat{h}$ ; the second comes from restricting the infimum in the definition of the entropy to measurements of product form . we now prove a very restricted form of chain rule in box world . this will allow us to show that for our notions of entropy the mutual information about any classical information given an arbitrary state in box world can never increase by more than $m$ bits when transmitting $m$ bits of information . to show our simple chain rule , we will use the fact that in box world , when considering a composite of a classical system and an arbitrary system , the only allowed measurements on the composite system take the form of first performing the only allowed measurement on the classical system , followed by a choice of measurement on the other system that may depend on the outcome of the measurement on the classical system . since classical systems in box world admit exactly one measurement ( possibly followed by some classical post - processing ) , we simply write the resulting entropy without specifying the measurement .
for simplicity , we only examine the case where the infimum is attained ; the other case can again be obtained by taking the appropriate limit . since the only measurements on such a composite system are as described above , we clearly have a chain of relations in which the first equality follows from the definition of $\hat{h}$ and the fact that the conditioning system is classical , the second from the definition of the conditional shannon entropy , the third from the chain rule for the conditional shannon entropy , and the final inequality from the definition of $\hat{h}$ , the fact that the two conditional entropies coincide for classical systems , and the fact that conditioning reduces the shannon entropy . we now see that , consistent with the no - signalling principle , the transmission of an $m$-bit message causes the mutual information about a classical system given access to some arbitrary box information to increase by at most $m$ bits . note that a corresponding statement holds for our alternate definition of conditional entropy and mutual information . first , note that any state can by definition be written as a mixture in a pure state decomposition , where the weights are a set of probabilities and , in the quantum case , the pure states are density operators . the von neumann entropy of such a mixture is at most the shannon entropy of the weights , with equality if and only if the states have support on orthogonal subspaces . note that pure states have zero von neumann entropy . hence , for any pure state decomposition , the shannon entropy of the weights upper bounds the von neumann entropy of the state . furthermore , considering an eigendecomposition of the state , it is easy to see that this bound is attained , and hence the decomposition entropy reduces to the von neumann entropy in quantum theory . first consider a single box with binary input / output . for clarity , we will represent its state by giving its probability distribution in vector form . now consider two states that can both be optimally decomposed into two equally weighted pure states , so that they have equal decomposition entropies . however , the mixture of the two has a smaller decomposition entropy than the average of their entropies , so in this case we violate concavity . to obtain a violation of subadditivity we consider a bipartite state in which each system has a binary input / output , represented in the form of a matrix , and choose a particular allowed state . it is known that in this case there are exactly 24 pure states for the bipartite binary input / output case ( 16 product states and 8 entangled states ) . by demanding that the weight assigned to each pure state be positive , we find that any decomposition must place a minimum weight on an entangled state , which bounds the decomposition entropy from below . in fact we can construct an explicit decomposition in terms of an entangled state and three product states ( all equally weighted ) , attaining this bound . the marginal states , on the other hand , have small decomposition entropies , and hence we obtain a violation of subadditivity .
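the weight that any decomposition must assign to the entangled extremal boxes can be found by linear programming over the 24 pure states ; the sketch below ( the 80% noisy pr box is an assumed stand - in for the state above , and all names are ours ) minimizes the total weight on the 8 entangled boxes subject to reproducing the target state .

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Enumerate the 24 extremal no-signalling boxes for binary inputs/outputs:
# 16 local deterministic boxes and 8 PR-box variants (standard parametrisation).
def box_vec(p):
    return np.array([p(a, b, x, y) for x, y, a, b
                     in itertools.product([0, 1], repeat=4)])

extremals, n_local = [], 0
for a0, a1, b0, b1 in itertools.product([0, 1], repeat=4):   # local deterministic
    extremals.append(box_vec(lambda a, b, x, y, A=(a0, a1), B=(b0, b1):
                             1.0 if (a == A[x] and b == B[y]) else 0.0))
    n_local += 1
for al, be, ga in itertools.product([0, 1], repeat=3):       # PR-box variants
    extremals.append(box_vec(lambda a, b, x, y, al=al, be=be, ga=ga:
                             0.5 if (a ^ b) == (x & y) ^ (al & x) ^ (be & y) ^ ga
                             else 0.0))
V = np.array(extremals).T                                    # 16 x 24 matrix

# Target: a noisy PR box, 80% PR plus 20% uniform noise (an assumed example).
target = 0.8 * V[:, n_local] + 0.2 * np.full(16, 0.25)

# LP: among all valid decompositions, minimise the weight on the nonlocal boxes.
c = np.concatenate([np.zeros(n_local), np.ones(8)])
res = linprog(c, A_eq=np.vstack([V, np.ones((1, 24))]),
              b_eq=np.concatenate([target, [1.0]]), bounds=(0, 1))
print("minimal nonlocal weight:", res.fun)   # 0.6 here, strictly positive
```

the reported minimum is strictly positive , which is the mechanism behind the lower bound on the decomposition entropy used above .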
information plays an important role in our understanding of the physical world . we hence propose an entropic measure of information for _ any _ physical theory that admits systems , states and measurements . in the quantum and classical world , our measure reduces to the von neumann and shannon entropy respectively . it can even be used in a quantum or classical setting where we are only allowed to perform a limited set of operations . in a world that admits superstrong correlations in the form of non - local boxes , our measure can be used to analyze protocols such as superstrong random access encodings and the violation of ` information causality ' . however , we also show that in such a world _ no _ entropic measure can exhibit all properties we commonly accept in a quantum setting . for example , there exists _ no _ ` reasonable ' measure of conditional entropy that is subadditive . finally , we prove a coding theorem for some theories that is analogous to the quantum and classical setting , providing us with an appealing operational interpretation .
coupling of electromagnetic radiation into enclosures or cavities through apertures , both electrically small and large , has attracted the interest of the electromagnetic community for many years . full solutions of this problem are particularly complicated because of the mathematical complexity in the solution of the boundary - value problem and because of the sensitivity of the solution to the detail of the enclosure 's dimensions , content , and the frequency spectrum of the excitation . these difficulties have motivated the formulation of a statistical description ( known as the random coupling model , rcm ) of the excitation of cavities . the model predicts the properties of the linear relation between voltages and currents at ports in the cavity , when the ports are treated as electrically small antennas . in this paper , we formulate and investigate the rcm as it applies to cases in which the ports are apertures in cavity walls . the aperture is assumed to be illuminated on one side by a plane electromagnetic wave . we then distinguish between the radiation problem , where the aperture radiates into free space , and the cavity problem , where the aperture radiates into a closed electromagnetic ( em ) environment . the solution of the problem in the cavity case is then given in terms of the free - space solution and a fluctuation matrix based on random matrix theory ( rmt ) . thus , there is a clear separation between the system specific aspects of the aperture , in terms of the radiation admittance , and the cavity , in terms of the fluctuation matrix . we illustrate our method by focusing on the problem of an electrically narrow aperture , for which the radiation admittance can be easily calculated numerically . we then generalize our result to the interesting case where a resonant mode of the aperture is excited . in this case the statistical properties of the aperture - cavity system can be given in a general universal form . our results build on previous work on apertures . in particular , rectangular apertures have a very long research tradition in em theory , and continue to be a topic of interest . the first self - consistent treatments were carried out by bethe , bouwkamp and , later , by schwinger in aperture scattering , and by roberts . subsequent work on apertures is due to ishimaru , cockrell , harrington , and rahmat - samii , among other investigators . our results are of interest for the physical characterization of the radiation coupled into complex cavities such as reverberation chambers ( rcs ) , which is known to be an extremely complicated problem that challenges even classical emc techniques , for understanding interference in metallic enclosures , as well as for modeling and predicting radiated emissions in complicated environments . the paper is organized as follows . in sec . ii we introduce the general model for the cavity - backed aperture , and we describe the way the rcm models the cavity . in sec . iii we apply the formulation of sec . ii to large aspect ratio , rectangular apertures , evaluating the elements of the admittance matrix and computing the power entering a cavity with a rectangular aperture . in this section we also develop a simple formula for the power entering a low loss cavity with isolated resonances . sec . iv
describes an extension of the model that accounts for the coupling of power through an aperture , into a cavity , and then to an antenna in the cavity . simple formulas are developed for the high - loss case and for the low - loss case with isolated resonances . the random coupling model was originally formulated to model the impedance matrix of quasi - two - dimensional planar cavities with single and multiple point - like ports . in this section , we develop the model for three - dimensional _ irregular _ enclosures excited through apertures . as depicted in fig . [ fig : cav_geo ] , we consider a cavity with a planar aperture in its wall through which our complex electromagnetic system is accessed from the outside . the size and shape of the aperture are important in our model ; their specification constitutes `` system specific '' information that is needed to implement the model . the cavity that we consider in our studies is an electrically large enclosure with an irregular geometry . the irregularity is assumed to be such that ray trajectories within the cavity are chaotic throughout . generally speaking , typical cavities have this feature , particularly those with curved walls and/or with contents that scatter radiation in multiple directions . the consequence of assuming that ray trajectories are chaotic is that the spectra of modes of the cavity have universal statistical properties that are modeled by random matrix theory ( rmt ) . experiments have been carried out to test the predictions of the rcm for quasi - two - dimensional complex enclosures coupled through electrically small ( point - like ) ports . experiments have also been carried out in three - dimensional enclosures , and with electrically large ports . when apertures are considered , the port approximation invoked in the original derivation of the rcm is no longer valid ; hence we need to consider the field distribution on the aperture surface . we consider a planar aperture , i.e. , one that is not subject to boundary curvature . this is consistent with real - world situations such as 3d reverberation chambers ( rcs ) , where the aperture is generally in a planar boundary . we first treat the aperture as if it existed in a metal plate separating two infinite half spaces . we refer to this as the `` free - space '' situation or `` radiation '' case . suppose the port is treated as an aperture in a planar conductor whose surface normal is parallel to the z - axis . the components of the fields transverse to the normal in the aperture can be expressed as a superposition of modes ( an example is the set of modes of a waveguide with the same cross - sectional shape as the aperture ) , where each basis mode ( having only transverse fields ) is suitably normalized , and the normal is taken to be in the $z$-direction . in the radiation case we solve maxwell 's equations in the half space subject to the boundary condition that the transverse electric field vanishes on the conducting plane except at the aperture , where it is given by ( [ eqn : et_rad_aperture ] ) . we do this by removing the conducting plane , adding a magnetic surface current density to faraday 's law , and solving maxwell 's equations in the whole space in the fourier domain .
for this problem , the transverse components of the electric field are odd functions of $z$ , with a jump equal to twice ( [ eqn : et_rad_aperture ] ) at the location of the aperture , thus satisfying the boundary condition for the half - space problem . we then evaluate the transverse components of the magnetic field on the plane , and project them on to the basis at the aperture to find the magnetic field amplitudes in ( [ eqn : ht_rad_aperture ] ) . the result is a matrix relation between the magnetic field amplitudes and the electric field amplitudes in the aperture , in which the admittance depends on the frequency of excitation , and where we have adopted the phasor convention . here , the radiation admittance matrix is determined from the fourier transform solution for the fields , and is given in terms of a three dimensional integral over wave numbers , where the dyadic tensor is responsible for coupling two arbitrary modes of the aperture and the fourier transform of the aperture mode enters . the elements of the radiation admittance are complex quantities . the residue at the pole in ( [ eqn : yrad_aperture ] ) gives the radiation conductance , in which the integration runs over the two - dimensional solid angle of the wave vector , and in which there appears a modified dyadic tensor containing the term $\left [ \left ( \textbf{k } \times \hat{n } \right ) \left ( \textbf{k } \times \hat{n } \right ) / k_{\perp}^2 \right ]$ . the radiation conductance is frequency dependent through the wave number . we note that there is an implicit dependence through the fourier transforms of the aperture modes , evaluated at the excitation wave number in ( [ eqn : es_fourier ] ) and ( [ eqn : deltagrad ] ) . the remaining part of ( [ eqn : yrad_aperture ] ) gives the radiation susceptance . part of this can be expressed as a principal part integral of the radiation conductance . however , there is an additional inductive contribution ( which we term the magnetostatic contribution ) coming from the last term in the parentheses in ( [ eqn : yrad_kern ] ) , which contains a factor that cancels the resonant denominator in ( [ eqn : yrad_aperture ] ) . the reactive response of the aperture can be expressed in terms of the cauchy principal value of the radiation conductance ( [ eqn : yrad_aperture ] ) and the previously mentioned inductive contribution , yielding an expression in which the magnetostatic conductance appears , defined through a dyadic scaled by $1 / k^2$ . we next consider the excitation of the aperture by an incident plane wave ; in this case the minus sign in the reflected field accounts for the mirror symmetry , and the factor of two multiplying the incident field comes from the addition of the incident and specularly reflected magnetic fields . projecting the two magnetic field expressions on the aperture basis , and equating the amplitudes , gives ( [ eqn : radiation_aperture ] ) , in which the fourier transform of the aperture electric field is defined in ( [ eqn : es_fourier ] ) . equation ( [ eqn : radiation_aperture ] ) can be inverted to find the vector of voltages , and then the net power passing through the aperture is given by ( [ eqn : power_back ] ) , in which the admittance can be either the radiation ( free - space ) admittance ( [ eqn : yrad_aperture ] ) or the cavity admittance matrix ( [ eqn : ycav_fluct ] ) . we consider rectangular apertures and select a basis for representation of the tangential fields in the aperture in ( [ eqn : et_rad_aperture ] ) and ( [ eqn : ht_rad_aperture ] ) . one choice for the basis is the set of modes of a waveguide with a rectangular cross section . these can be written either as a sum of te and tm modes , or simply as a fourier representation of the individual cartesian field components .
in emc studies , narrow apertures are often considered , as they frequently occur in practical em scenarios . the aperture of fig . [ fig : ap_geo ] is elongated and thin : it has only one electrically large dimension , and a single dominant transverse field component . hence , in the particular case of a narrow rectangular aperture , the field will be dominated by modes of the form $\textbf{e}_n \propto \sin \left [ n \pi \left ( x + l / 2 \right ) / l \right ] \hat{y}$ for $n = 1 , 2 , \ldots$ , where $l$ is the electrically large dimension . once the aperture field basis is specified , the procedure for calculating the cavity admittance matrix ( [ eqn : yrad_aperture ] ) is given by the following steps . first , calculate the fourier transform of the aperture modes ( [ eqn : es_fourier ] ) . second , use it to calculate the radiation conductance ( [ eqn : grad_aperture ] ) . third , use it to calculate the magnetostatic susceptance ( [ eqn : bms_aperture ] ) . fourth , use the above quantities to form the radiation susceptance . finally , use the so - formed radiation admittance to generate the cavity admittance ( [ eqn : ycav_fluct ] ) . the conductance and susceptance have the property that off - diagonal terms vanish if one mode index is odd and the other is even , due to the even and odd parity of the basis modes . further , in the limit of high frequency the components approach their free - space limits ; thus , at high frequencies the admittance matrix is diagonal and equal to the free - space conductance , as would be expected when the radiation wavelength is much smaller than the aperture size . we have evaluated the elements of the radiation conductance matrix as functions of frequency for a rectangular aperture . plots of these appear in figs . [ fig : gnn_1_7 ] and [ fig : gnn_1_7_inset ] . for the diagonal elements the conductance increases nonmonotonically from zero and asymptotes to its free - space value . for the off - diagonal components ( not shown ) the conductance first rises and then falls to zero with increasing frequency . combining numerical evaluations of the two contributions in ( [ eqn : brad_aperture ] ) yields the total susceptance . this is plotted for several diagonal elements in fig . [ fig : brad_1_5 ] ; these have the feature that for some elements the susceptance passes through zero at a particular frequency . at these frequencies the conductances ( fig . [ fig : gnn_1_7_inset ] ) also tend to be small . this indicates the presence of resonant modes of the aperture . at the frequencies of these modes the aperture will allow more power to pass through the plane of the conductor than would be expected based on the aperture area . our next step is to evaluate the power passing through the aperture when it is illuminated by a plane wave . this involves solving ( [ eqn : voltage ] ) for the vector of voltages , and inserting these in ( [ eqn : power_back ] ) for the power transmitted through the aperture . this will be done a number of times ; first with the radiation admittance , to determine the power passing through the aperture in the radiation case ; then again with the cavity admittance , to determine the power through the aperture when it is backed by a cavity . in the cavity case a number of realizations of the cavity admittance matrix will be considered , modeling cavities with different distributions of resonant modes . figure [ fig : power_rad_oblique ] shows the frequency behavior of the transmitted power defined in ( [ eqn : power_back ] ) , computed with the radiation admittance , for an obliquely incident plane wave , with angles defined according to the coordinate frame of fig . [ fig : ap_geo ] .
in fig . [ fig : power_rad_oblique ] , the net power is parameterized by the number of aperture modes included in the calculation . as expected , the higher the frequency the greater the number of modes required to achieve an accurate prediction of the transmitted power . interestingly , we notice the presence of a sharp peak at the slit resonance frequency , at which the slot length equals half the free - space wavelength , and other broader resonances at higher frequencies . at normal incidence , the peak of the resonance next to the sharp peak is reduced by a significant factor . this is confirmed by previous studies based on the transmission - line model of a narrow aperture . having computed the radiation conductance and susceptance for a narrow slit , we can investigate the effect of a wave - chaotic cavity backing the aperture by replacing the radiation admittance with the cavity admittance and exploiting the statistical model ( [ eqn : ycav_fluct ] ) . we first calculate distributions of the cavity admittance elements from the rcm by using a monte carlo technique and the radiation admittance of a rectangular aperture . numerical calculations of ( [ eqn : yrad_aperture ] ) and monte carlo simulation of ( [ eqn : universal_fluct ] ) allow for generating an ensemble of cavity admittances of the form ( [ eqn : ycav_fluct ] ) . in particular , we use the statistical method described in earlier rcm work , with a finite number of aperture and cavity modes and a loss factor simulating a chaotic cavity with high losses , to create the bare fluctuation matrix ( [ eqn : universal_fluct ] ) . here , we repeat the simulation of ( [ eqn : universal_fluct ] ) many times to create an ensemble of fluctuation matrices . by virtue of its construction , the average cavity admittance equals the radiation admittance matrix . further , in the high - loss limit , fluctuations in the cavity matrix become small and the cavity admittance approaches the radiation admittance . for finite losses , the character of the fluctuations in the elements of the cavity admittance matrix changes from lorentzian at low loss to gaussian at high loss . we now consider the net power coupled through an aperture that is backed by a wave - chaotic cavity . this involves evaluating ( [ eqn : voltage ] ) and ( [ eqn : power_back ] ) with the cavity admittance evaluated according to ( [ eqn : ycav_fluct ] ) . when this is done , the frequency dependence of the net power acquires structure that is dependent on the density of modes in the cavity . this is illustrated in figs . [ fig : power_cavity_a1 ] and [ fig : power_cavity_a0p1 ] , where the net power through the rectangular aperture is plotted versus frequency for a few realizations of the fluctuating admittance matrix . in both cases the reference frequency in ( [ eqn : k_0_deviation ] ) and the mean spacing between modes are held fixed . figure [ fig : power_cavity_a1 ] corresponds to a moderate loss case , and fig . [ fig : power_cavity_a0p1 ] to a low loss case . clearly , in the low loss case the peaks in power associated with different resonances are more distinct and extend to higher power . also plotted in figs . [ fig : power_cavity_a1 ] and [ fig : power_cavity_a0p1 ] are the average over realizations of the power coupled into the cavity and the power coupled through the aperture in the radiation case . notice that both of these curves are smooth functions of frequency , and that in the moderate loss case of fig . [ fig : power_cavity_a1 ] the power averaged over many realizations is only slightly less than the transmitted power in the radiation case .
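the monte carlo construction just described can be sketched in a few lines of python ; the normalization of the goe spectrum and the form of the fluctuation matrix below follow the standard rcm prescription ( up to sign conventions , which vary in the literature ) , and the scalar radiation conductance and susceptance are placeholders rather than the computed slit values .

```python
import numpy as np

rng = np.random.default_rng(3)

def rcm_xi(n_modes, k0_tilde, alpha, n_ports):
    # One realization of the fluctuation matrix; we assume the normalized form
    # xi = (1/pi) * sum_n w_n w_n^T / (alpha + 1j*(k0_tilde - k_n)), which is
    # the standard RCM expression up to sign conventions.
    a = rng.normal(size=(n_modes, n_modes))
    goe = (a + a.T) / np.sqrt(2.0)
    # GOE eigenvalues follow the semicircle law; near the band center the mean
    # spacing is pi/sqrt(n_modes), so this rescaling gives unit mean spacing.
    k = np.linalg.eigvalsh(goe) * np.sqrt(n_modes) / np.pi
    w = rng.normal(size=(n_modes, n_ports))      # Gaussian coupling vectors
    return (1.0 / np.pi) * (w.T / (alpha + 1j * (k0_tilde - k))) @ w

# Scalar example: one aperture mode with placeholder radiation values.
g_rad, b_rad, alpha = 1.0, 0.3, 1.0
xi = np.array([rcm_xi(300, 0.0, alpha, 1)[0, 0] for _ in range(200)])
y_cav = 1j * b_rad + np.sqrt(g_rad) * xi * np.sqrt(g_rad)
print(np.mean(y_cav.real), np.mean(y_cav.imag))  # ~ g_rad and ~ b_rad
print(np.std(y_cav.real))   # fluctuations shrink as the loss factor grows
```

by construction , the ensemble mean of the realized cavity admittance approaches the radiation values , in line with the statement above .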
in the low loss case of fig . [ fig : power_cavity_a0p1 ] the average power coupled into the cavity is about half the value of the radiation case . the features of the coupled power in figs . [ fig : power_cavity_a1 ] and [ fig : power_cavity_a0p1 ] can be captured by a simple model . we consider the frequency range in fig . [ fig : power_rad_oblique ] where the resonances in the aperture are isolated . in this case , in a given narrow frequency range the aperture fields are dominated by a single mode , and we can replace the matrix equations ( [ eqn : radiation_aperture ] ) - ( [ eqn : power_back ] ) with their scalar versions , in which the admittance is the radiation or the cavity value depending on whether a cavity is present . here , the cavity quantities are defined as before , following ( [ eqn : universal_fluct ] ) , and the coupling coefficients are a set of independent and identically distributed gaussian random variables with zero mean and unit variance . the remaining quantities are properties of the aperture , and will be discussed below . the power transmitted through the aperture in the absence of a cavity is given by ( [ eqn : pt_rad_cav ] ) . the aperture resonance occurs at the frequency where the aperture susceptance vanishes , corresponding to a peak in fig . [ fig : power_rad_oblique ] , and the coupled power is maximal at this peak . we can then express the frequency dependence of the power coupled through the aperture for frequencies near the resonance in the form of a lorentzian resonance function , with an effective quality factor for the aperture . when a cavity backs the aperture , the coupled power becomes a random frequency - dependent function through the fluctuating cavity admittance . this quantity is frequency dependent through the denominators in ( [ eqn : xi_single_mode ] ) and is random due to the random vectors of coupling coefficients and eigenvalues . the frequency scale for variation of this quantity is determined by the frequency spacing of modes of the cavity . in the typical case the frequency spacing of cavity modes is much smaller than that of aperture modes , as depicted in figs . [ fig : power_cavity_a1 ] and [ fig : power_cavity_a0p1 ] . the behavior of the coupled power as a function of frequency will thus follow the envelope of the radiation case , with fluctuations on the frequency scale of the separation between cavity modes . this behavior can be captured in the simple model if we assume the frequency is close to one of the poles of ( [ eqn : xi_single_mode ] ) . specifically , we write the fluctuating quantity as in ( [ eqn : xi_complex ] ) , where one term has the form of ( [ eqn : xi_single_mode ] ) with the resonant contribution removed . since the loss factor is assumed to be small , it can be neglected in the nonresonant term ( making it purely real ) , whereas it is retained in the resonant term in ( [ eqn : xi_complex ] ) , since we consider frequencies such that the detuning from the cavity resonance is small and comparable to the loss . the statistical properties of the nonresonant sum were detailed by hart _ et al . _ if we express it in a suitably normalized form , its probability density function takes a universal form .
for now , since we are focusing on the behavior of individual realizations , we leave its value unspecified . having assumed the cavity response is dominated by a single resonance , we can manipulate ( [ eqn : pt_single_mode ] ) into the general form ( [ eqn : pt_alpha_alpha_n ] ) , whose variables have the following interpretations . the first is the `` external '' loss factor describing the damping of the cavity mode due to the aperture . note that it is added to the internal loss factor in the denominator of ( [ eqn : pt_alpha_alpha_n ] ) . it is a statistical quantity , mainly through the gaussian random variables , and is responsible for the variation in the height of the peaks in fig . [ fig : power_cavity_a0p1 ] . the second quantity represents the modification of the aperture resonance function ( [ eqn : delta_aperture ] ) by the reactive fields of the nonresonant cavity modes . note that it affects the external damping factor , which is largest when the cavity mode is near the aperture resonant frequency . finally , the third quantity determines the shifted cavity mode frequency ; that is , using definition ( [ eqn : k_0_deviation ] ) , the resonant cavity mode frequency acquires a shift . equation ( [ eqn : pt_alpha_alpha_n ] ) implies that the power coupled into the cavity at a given frequency is bounded above by the power that can be transmitted through the aperture at the aperture resonance . these powers are equal if the mode is resonant and the cavity is critically coupled , i.e. , the external and internal loss factors are equal . note , however , that the coupled power in the cavity case ( [ eqn : pt_alpha_alpha_n ] ) can exceed the radiation case ( [ eqn : pt_rad_cav ] ) at the same frequencies if the aperture is off resonance . this is evident at the peaks of the coupled power in fig . [ fig : power_cavity_a0p1 ] . basically , what is happening is that the cavity susceptance , which alternates in sign as frequency varies on the scale of the cavity modes , cancels the aperture susceptance , thus making the aperture resonant for frequencies away from the natural resonance . when the cavity loss parameter in ( [ eqn : pt_alpha_alpha_n ] ) is small , as in fig . [ fig : power_cavity_a0p1 ] , there are large variations in the coupled power as frequency is varied . a broad band signal would average over these variations . we can treat this case by computing the power coupled through the aperture averaged over realizations of the random variable defined in ( [ eqn : xi_single_mode ] ) . such averages are shown in figs . [ fig : power_cavity_a1 ] and [ fig : power_cavity_a0p1 ] based on monte carlo evaluations of the full system ( [ eqn : power_back ] ) . we perform this average in our simple model . a plot of a numerical evaluation of the averaged power from ( [ eqn : pt_alpha_alpha_n ] ) as a function of the loss parameter appears in fig . [ fig : univ_factor_power ] . interestingly , a moderately large loss parameter is sufficient to make the average power entering the cavity a substantial fraction of that passing through an unbacked aperture .
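the single - mode model lends itself to a compact numerical illustration ; in the sketch below ( all frequencies , admittances and spacings are placeholder values in arbitrary units ) the cavity - backed power exceeds the radiation - case power at many off - resonant frequencies , yet never exceeds the critically coupled aperture - resonance maximum , exactly as ( [ eqn : pt_alpha_alpha_n ] ) implies .

```python
import numpy as np

rng = np.random.default_rng(4)

# Frequency grid and aperture parameters (placeholder values, arbitrary units).
f = np.linspace(0.95, 1.05, 4001)
g_a = 1.0
b_a = 40.0 * (f - 1.0)          # aperture susceptance, zero at the resonance f=1
y_rad = g_a + 1j * b_a

# Toy cavity admittance: isolated modes at random frequencies, mean spacing
# ~0.005, loss factor alpha; same normalized form as in the previous sketch.
f_n = np.sort(rng.uniform(0.95, 1.05, size=20))
w2 = rng.normal(size=20) ** 2
alpha = 0.1
xi = (1.0 / np.pi) * np.sum(w2 / (alpha + 1j * (f[:, None] - f_n) / 0.005), axis=1)
y_cav = 1j * b_a + g_a * xi

# A unit-amplitude incident wave drives the aperture: the free-space side
# presents y_rad, the far side presents either y_rad or y_cav.
power = {}
for name, y_back in [("radiation", y_rad), ("cavity", y_cav)]:
    v = 2.0 / (y_rad + y_back)
    power[name] = 0.5 * np.abs(v) ** 2 * y_back.real

print(power["radiation"].max(), power["cavity"].max())   # both bounded by 0.5
print(np.mean(power["cavity"] > power["radiation"]))     # cavity often exceeds
```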
the contents of the cavity were treated as a distributed loss characterized by a single parameter . we will now extend this model to treat the case in which we identify a second port to the cavity that we will treat as an electrically small antenna . this port could be an actual port where a connection is made to the outside world , as illustrated in fig . [ fig : cav_geo ] , or the port could represent the pin of a circuit element on which the voltage is of interest . we consider the configuration of fig . [ fig : cav_geo ] , where we have the joint presence of apertures and ports . we have previously considered the case of fields excited by a current distribution expressed as a superposition of basis functions used to represent the current distribution in terms of a set of amplitudes which we call the port currents . the corresponding port voltages were defined in ref . such that the power entering the cavity through the ports is the product of port voltages and currents . in analogy to our treatment of the aperture , we consider two cases : one in which the current distribution radiates into free space and one in which the current distribution radiates into a cavity . the linear relationship between the port voltages and the port currents is then characterized by impedance matrices for the radiation and cavity cases respectively . for the radiation case it was shown that the impedance is given by a wave - number integral ( [ eqn : zrad_multiport ] ) involving the fourier transforms of the basis functions and a dyadic kernel . the radiation impedance matrix can be decomposed as $\ushortdw{z}^{rad } = \ushortdw{r}^{rad } + i \, \ushortdw{x}^{rad }$ , where $\ushortdw{r}^{rad }$ is the residue from the pole in ( [ eqn : zrad_multiport ] ) ( the radiation resistance ) and $\ushortdw{x}^{rad }$ is the reactive contribution . for the cavity case it was shown that $\ushortdw{z}^{cav } = i \, \ushortdw{x}^{rad } + \left [ \ushortdw{r}^{rad } \right ] ^{1/2 } \cdot \ushortdw{\xi } \cdot \left [ \ushortdw{r}^{rad } \right ] ^{1/2 }$ , where the fluctuating matrix $\ushortdw{\xi }$ is defined in ( [ eqn : universal_fluct ] ) . we are now in a position to describe a statistical model for a cavity including both an aperture and a localized current distribution . in this case we construct an input column vector that consists of the aperture voltages and port currents , and an output vector that consists of the aperture currents and port voltages . these are then related by a hybrid matrix $\ushortdw{t }$ of the form $\ushortdw{t } = i \, \ushortdw{u } + \left [ \ushortdw{v } \right ] ^{1/2 } \cdot \ushortdw{\xi } \cdot \left [ \ushortdw{v } \right ] ^{1/2 }$ . here the matrices $\ushortdw{u }$ and $\ushortdw{v }$ are block diagonal , with blocks corresponding to the aperture and port contributions . in the random coupling model the cavity eigenfunctions are expressed as superpositions of random plane waves , with the direction of propagation uniformly distributed over the solid angle and the polarization uniformly distributed in angle in the plane perpendicular to the propagation direction . except as mentioned , all random variables characterizing each plane wave are independent . a similar expression can be made for the scalar potential generating the magnetostatic modes .
with eigenfunctions expressed as a superposition of random plane waves , each factor appearing in ( [ eqn : zcav_multiport ] ) becomes a zero mean gaussian random variable . the correlation matrix between two such factors can then be evaluated by forming the product of two terms , averaging over the random variables parameterizing the eigenfunctions , and taking the limit of a large number of plane waves . for the electromagnetic modes we find an expectation value ( [ eqn : corr_em_mode_ampl ] ) given by an integral over the spherical solid angle of the wave vector of a dyadic sandwiched between the aperture mode transforms $\tilde{\textbf{e}}_{s }$ and $\tilde{\textbf{e}}_{s^{ ' } }$ , normalized by the volume of the cavity . a similar analysis of the magnetostatic modes gives an expression of the same form . the connection between the cavity case ( [ eqn : zcav_multiport ] ) and the radiation case ( [ eqn : yrad_aperture ] ) is now apparent . specifically , we note that the factors are zero mean gaussian random variables with a correlation matrix given by ( [ eqn : corr_em_mode_ampl ] ) . we can express the product in terms of uncorrelated zero mean , unit width gaussian random variables by diagonalizing the correlation matrix . we again introduce matrix notation and represent the elements of the product as $\left \{ \left [ \ushortdw{g}^{rad } \right ] ^{1/2 } \cdot \ushortw{w}_{n } \ushortw{w}^{t}_{n } \cdot \left [ \ushortdw{g}^{rad } \right ] ^{1/2 } \right \}_{s s^{ ' } }$ , where $\ushortdw{g}^{rad }$ is the radiation conductance matrix ( [ eqn : grad_aperture ] ) , $\ushortw{w}_{n }$ is a vector of zero mean , unit variance gaussian random variables , and the normalization involves the mean separation between resonant wave numbers for electromagnetic modes in a cavity of the given volume . substituting ( [ eqn : corr_em_mode_ampl ] ) into ( [ eqn : zcav_multiport ] ) , we obtain a sum over the electromagnetic modes of resonant terms proportional to $\left \{ \left [ \ushortdw{g}^{rad } \right ] ^{1/2 } \cdot \ushortw{w}_{n } \ushortw{w}^{t}_{n } \cdot \left [ \ushortdw{g}^{rad } \right ] ^{1/2 } \right \}_{s s^{ ' } }$ , plus the magnetostatic contribution $b^{ms}_{s s^{ ' } } \left ( k_0 \right )$ , where in the limit of a large cavity we have approximated the sums of pairs of gaussian random variables representing the magnetostatic contribution to the cavity admittance by their average values and , using the mode density for magnetostatic modes , converted the sum to an integral . equation ( [ eqn : ycav_el ] ) is the random coupling model prediction for the cavity admittance . the last steps are to replace the exact spectrum of eigenvalues by a spectrum produced by random matrix theory , and to insert a loss term . we introduce a reference frequency and associated wave number , and we assume that the cavity is filled with a uniform dielectric with loss tangent .
under these assumptions , for frequencies close to the reference frequency the frequency dependent fraction appearing in ( [ eqn : ycav_el ] ) can be expressed as $\left [ \pi \left ( \tilde{k}_{0 } - \tilde{k}_{n } \right ) + i \alpha \right ] ^{-1 }$ , where $\tilde{k}_{0 }$ measures the deviation in frequency from the reference frequency in units of the mean spacing in resonant frequencies . the resonant wave numbers are now represented as a set of dimensionless values $\tilde{k}_{n }$ which by their definition have mean spacing of unity . these are then taken to be the eigenvalues of a random matrix from the gaussian orthogonal ensemble , normalized to have mean spacing unity . finally , the loss factor $\alpha$ is defined in terms of the loss tangent and the mean mode spacing . we note that the zeros of the denominator then acquire an imaginary part set by $\alpha$ , which determines the widths of the resonances . this results in an expression analogous to those obtained previously for the impedance matrix , $\ushortdw{y}^{cav } = i \, \ushortdw{b}^{rad } + \left [ \ushortdw{g}^{rad } \right ] ^{1/2 } \cdot \ushortdw{\xi } \cdot \left [ \ushortdw{g}^{rad } \right ] ^{1/2 }$ . thus , we have seen that in cases of ports described by planar apertures , we can express the model cavity admittance in terms of the corresponding radiation impedance or admittance and a universal statistical matrix . c. butler , y. rahmat - samii , and r. mittra , `` electromagnetic penetration through apertures in conducting surfaces , '' _ antennas and propagation , ieee transactions on _ , vol . 26 , no . 1 , pp . 82 - 93 , jan 1978 . g. gradoni , j .- h . yeh , b. xiao , t. m. antonsen , s. m. anlage , and e. ott , `` predicting the statistics of wave transport through chaotic cavities by the random coupling model : a review and recent progress , '' _ wave motion _ , vol . 51 , no . 4 , pp . 606 - 621 , 2014 , innovations in wave modelling . t. wang , r. harrington , and j. mautz , `` electromagnetic scattering from and transmission through arbitrary apertures in conducting bodies , '' _ antennas and propagation , ieee transactions on _ , vol . 38 , no . 11 , pp . 1805 - 1814 , nov 1990 . d. hill , m. ma , a. ondrejka , b. riddle , m. crawford , and r. johnk , `` aperture excitation of electrically large , lossy cavities , '' _ electromagnetic compatibility , ieee transactions on _ , vol . 36 , no . 3 , pp . 169 - 178 , aug 1994 . j. ladbury , t. lehman , and g. koepke , `` coupling to devices in electrically large cavities , or why classical emc evaluation techniques are becoming obsolete , '' in _ electromagnetic compatibility , 2002 . ieee international symposium on _ , vol . 2 , aug 2002 , pp . 648 - 655 . d. fedeli , g. gradoni , v. primiani , and f. moglie , `` accurate analysis of reverberation field penetration into an equipment - level enclosure , '' _ electromagnetic compatibility , ieee transactions on _ , vol . 51 , no . 2 , pp . 170 - 180 , may 2009 . x. zheng , s. hemmady , t. m. antonsen , s. m. anlage , and e. ott , `` characterization of fluctuations of impedance and scattering matrices in wave chaotic scattering , '' _ phys . rev . e _ , vol . 73 , p. 046208 , apr 2006 . s. hemmady , x. zheng , j. hart , t. m. antonsen , e. ott , and s. m. anlage , `` universal properties of two - port scattering , impedance , and admittance matrices of wave - chaotic systems , '' _ phys . rev . e _ , vol . 74 , p. 036213 , sep 2006 . s. hemmady , t. antonsen , e. ott , and s. anlage , `` statistical prediction and measurement of induced voltages on components within complicated enclosures : a wave - chaotic approach , '' _ electromagnetic compatibility , ieee transactions on _ , vol . 54 , no . 4 , pp . 758 - 771 , aug 2012 . z. drikas , j. gil gil , s. hong , t. andreadis , j .- h . yeh , b. taddese , and s.
Anlage, "Application of the random coupling model to electromagnetic statistics in complex enclosures," IEEE Transactions on Electromagnetic Compatibility, vol. 56, no. 6, pp. 1480-1487, Dec. 2014.
C. Holloway, D. Hill, J. Ladbury, G. Koepke, and R. Garzia, "Shielding effectiveness measurements of materials using nested reverberation chambers," IEEE Transactions on Electromagnetic Compatibility, vol. 45, no. 2, pp. 350-356, May 2003.
G. Gradoni and L. Arnaut, "Higher order statistical characterization of received power fluctuations for partially coherent random fields," IEEE Transactions on Electromagnetic Compatibility, vol. 51, no. 3, pp. 583-591, 2009.
Z. Khan, C. Bunting, and M. D. Deshpande, "Shielding effectiveness of metallic enclosures at oblique and arbitrary polarizations," IEEE Transactions on Electromagnetic Compatibility, vol. 47, no. 1, pp. 112-122, Feb. 2005.
V. Rajamani, C. Bunting, M. D. Deshpande, and Z. Khan, "Validation of modal/MoM in shielding effectiveness studies of rectangular enclosures with apertures," IEEE Transactions on Electromagnetic Compatibility, vol. 48, no. 2, pp. 348-353, May 2006.
L. Warne and K. Chen, "Slot apertures having depth and losses described by local transmission line theory," IEEE Transactions on Electromagnetic Compatibility, vol. 32, no. 3, pp. 185-196, Aug. 1990.
T. M. Antonsen, G. Gradoni, S. Anlage, and E. Ott, "Statistical characterization of complex enclosures with distributed ports," in Proceedings of the IEEE International Symposium on EMC, Long Beach, CA (USA), Aug. 2011.
J.-H. Yeh, J. A. Hart, E. Bradshaw, T. M. Antonsen, E. Ott, and S. M. Anlage, "Experimental examination of the effect of short ray trajectories in two-port wave-chaotic scattering systems," Phys. Rev. E, vol. 82, p. 041114, Oct. 2010.
In this paper, a statistical model for the coupling of electromagnetic radiation into enclosures through apertures is presented. The model gives a unified picture bridging deterministic theories of aperture radiation and the statistical models necessary for capturing the properties of irregularly shaped enclosures. A Monte Carlo technique based on random matrix theory is used to predict and study the power transmitted through the aperture into the enclosure. Universal behavior of the net power entering the aperture is found. The results are of interest for predicting the coupling of external radiation through openings in irregular enclosures and for reverberation chambers. Index terms: statistical electromagnetics, cavities, aperture coupling, admittance matrix, chaos, reverberation chamber.
Computed tomography (CT) is widely used for clinical diagnosis. In classical CT applications it is assumed that projection data are obtained over the full angular range: an exact reconstruction requires a consecutive 180° scan for parallel-beam geometry, or a 180°-plus-fan-angle scan for fan-beam geometry. In limited-angle scans, however, the projection data cover a smaller angular range. Limited-angle scans are used because of large object sizes, large-pitch helical CT, or restricted scanning geometries; in interventional imaging with C-arm CT, limited-angle acquisition is a commonly used protocol due to hardware limitations. Since limited-angle scanning provides only a small subset of the complete projection data, a conventional filtered back-projection (FBP) algorithm generally produces images with heavy directional artifacts. Many methods have therefore been proposed to compensate for artifacts in limited-angle CT images. Several iterative reconstruction algorithms, based for example on singular value decomposition, wavelet decomposition, and reprojection, have been developed. Inspired by the success of compressed sensing (CS) in sparse-view CT, many CS-based methods have also been proposed for limited-angle CT; a representative CS-based algorithm is POCS with total variation minimization. Although these algorithms are good at restoring localized artifacts or disturbances, they have not been successful in correcting the globally distributed artifact patterns of CT images from limited-angle acquisitions. In addition, CS-based methods require a large amount of iterative computation, so both their image quality and their reconstruction time need to be improved. Recently, deep learning algorithms using convolutional neural networks (CNNs) have shown successful results in computer vision applications, including image classification, denoising, and segmentation, and several studies have applied deep learning to image reconstruction problems. In the CT area, Kang et al. proposed a wavelet-domain deep learning network for low-dose CT denoising and showed promising results by winning second place in the AAPM low-dose grand challenge. Jin et al. and Han et al. independently proposed multi-scale residual learning networks for sparse-view CT reconstruction. Recently, the first applications of CNNs to limited-angle tomography appeared; however, the CNN used there had only three layers and was not explicitly designed to correct the directional artifacts of limited-angle tomography. Extending our prior work, we here propose a novel multi-scale wavelet-domain residual learning network for limited-angle CT reconstruction. In particular, the network is designed in a directional wavelet transform domain to exploit the directional property of the limited-angle artifacts. In addition, instead of directly learning the artifact-free image, our network is designed as a residual network that estimates the artifacts directly in the wavelet domain.
To take into account the globally distributed artifacts, a multi-resolution network architecture inspired by U-Net was used. Once the wavelet-domain residual is estimated, artifact-free wavelet coefficients are obtained by subtracting the residuals, after which the wavelet recomposition is performed to obtain the full-resolution image. Numerical results confirm that the proposed wavelet-domain deep residual learning exceeds the existing methods in both image quality and reconstruction time. [Figure: artifact images from limited-angle CT and (b) their corresponding spectra.] [Figure: limited-angle scanning geometry and the associated vectors; (b) the corresponding missing frequency region.] Figure [fig:spectrum] shows a spectral analysis of limited-angle CT artifact images, i.e., of the differences between the full-view and limited-view reconstructions. Specifically, Fig. [fig:spectrum](a) shows the limited-angle artifact images corresponding to the scanning trajectory in Fig. [fig:angle](a), and Fig. [fig:spectrum](b) shows their corresponding spectra. Despite different image contents, the spectral components of the artifacts have similar directional characteristics, since these components are determined by the missing angular range. This phenomenon can be analyzed using the spectrum of the limited-angle reconstruction. The Fourier slice theorem identifies the missing frequency bands for parallel-beam geometry; for general scanning geometries, the Katsevich formula can be used instead. Here, we consider a 2-D fan-beam scanning geometry for objects within a field of view. Then, for a given filtering direction and x-ray source angles running over the scanned arc, the Katsevich integral can be written as a weighted integral of the Fourier transform of the image (eq: sigma_fourier), with a weight that depends on the vector connecting the x-ray source position and the reconstruction position (see Fig. [fig:angle](a)); the filtering direction is usually fixed by convention. The integral is identical to the inverse Fourier transform if the weight is nonzero for every frequency; this can be achieved in a full scanning geometry, because for any frequency one can find a source angle and filtering direction for which the weight does not vanish. With limited-angle scanning, however, there exist frequencies for which the weight vanishes, producing a missing frequency region. Given the limited scanning geometry in Fig. [fig:angle](a), for the reconstruction of a pixel at the isocenter the vectors between the red and blue dashed lines yield a vanishing weight. This corresponds to the missing frequency region described in Fig. [fig:angle](b), resulting in artifacts elongated along the y-direction. The missing frequency region depends on the reconstruction position, which is why additional components appear in Fig. [fig:spectrum](b); nevertheless, the general directional characteristics of the artifacts are similar. This suggests design guidelines for the deep network. First, in limited-angle tomography the artifacts are similar to each other even when the artifact-free images differ drastically, which means that learning the artifacts is easier than learning the artifact-free original images. Second, the artifacts have a strong directional characteristic, so learning in a directional wavelet domain is preferable to learning in the original image domain. Finally, as shown in Fig. [fig:spectrum](a), the artifacts are distributed globally, so the network should have a large receptive field. Inspired by these observations, we propose a multi-scale wavelet-domain residual learning network for limited-angle tomography.
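The directional character of the missing frequency region can be reproduced with a minimal numerical experiment. The sketch below (Python/NumPy) removes a wedge of the 2-D spectrum of a toy image, standing in for the unmeasured limited-angle frequencies; the wedge half-angle and the disc phantom are arbitrary illustrative choices, not the paper's experimental setup.

```python
import numpy as np

def wedge_mask(n, half_angle_deg):
    """Boolean mask removing a symmetric wedge of 2-D frequencies around
    the ky axis (the region not measured by a limited-angle scan)."""
    f = np.fft.fftfreq(n)
    kx, ky = np.meshgrid(f, f, indexing="xy")
    theta = np.arctan2(ky, kx)  # direction of each frequency component
    lost = np.abs(np.abs(theta) - np.pi / 2) < np.deg2rad(half_angle_deg)
    return ~lost

def limited_angle_artifact(img, half_angle_deg=30.0):
    """Return (band-limited image, artifact image) after wedge removal."""
    spec = np.fft.fft2(img)
    spec_lim = spec * wedge_mask(img.shape[0], half_angle_deg)
    img_lim = np.real(np.fft.ifft2(spec_lim))
    return img_lim, img - img_lim

# Toy phantom: a disc.
n = 256
y, x = np.mgrid[:n, :n]
phantom = ((x - n / 2) ** 2 + (y - n / 2) ** 2 < (n / 4) ** 2).astype(float)

recon, artifact = limited_angle_artifact(phantom)
# The artifact energy concentrates along one direction, as in fig. [fig:spectrum].
print("artifact energy fraction:", np.sum(artifact**2) / np.sum(phantom**2))
```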
Specifically, the CT reconstruction images are first decomposed using a non-decimated, redundant directional wavelet transform, the contourlet transform. The contourlet transform performs a multi-scale directional decomposition of an image: the image is divided into high-frequency and low-frequency components, and directional filter banks are applied to the divided frequency components. The number of directional channels is set by the number of multi-scale decomposition stages; we used four stages of multi-scale decomposition, which fixes the total number of channels in the wavelet domain. We then developed a wavelet-domain deep residual network by modifying U-Net; the CNN architecture is shown in Fig. [fig_architecture]. A basic operation (a blue arrow in Fig. [fig_architecture]) consists of convolutions followed by a rectified linear unit (ReLU) and batch normalization. The U-Net architecture consists of a contracting path and an expansive path. In the contracting path, a max pooling operation follows each pair of basic operations, and after each max pooling the number of channels is doubled. In the expansive path, an average unpooling operation is used instead of max pooling. A skip-and-concatenation operation, represented by violet arrows in Fig. [fig_architecture], directly concatenates the results of the contracting path with those of the expansive path. This multi-scale structure of U-Net increases the receptive field, so the network can effectively capture globally distributed artifacts such as the limited-angle artifacts. Once the artifact is estimated using the contourlet-domain U-Net, the artifact-free wavelet coefficients are obtained by subtracting the residuals, after which the wavelet recomposition is performed to obtain the full-resolution image. We used nine sets of real projection data from the AAPM low-dose CT grand challenge. The provided data sets were acquired with helical CT, so we rebinned the projection data from helical CT to full angular-scan fan-beam CT; the artifact-free images were reconstructed using the full-angle fan-beam projection data. In the limited-angle experiments, only 120° or 150° angular scans were used to reconstruct the limited-angle CT images, and every reconstruction used the traditional FBP algorithm. Eight of the nine data sets were used for network training; the remaining set was used for validation. We implemented the proposed network with the MatConvNet toolbox in the MATLAB R2015a environment; a GTX 1080 graphics processing unit and an Intel Core i7-4790 central processing unit were used for this research. The network was trained on randomly selected 256 x 256 patches, and the learned filters were then used to process 512 x 512 images at the test phase. Training took about 24 hours. Stochastic gradient descent was used to train the network; the number of epochs was 150, and the initial learning rate was gradually decreased during training, with an additional regularization parameter. To investigate the optimality of the proposed network, we compared the proposed method with two different neural networks. The first is a single-resolution residual learning CNN that removes the artifact in the image domain, using the single-resolution architecture in Fig. [fig_ref_networks](a). The second is a multi-resolution residual learning CNN, shown in Fig. [fig_ref_networks](b), which directly uses the CT image instead of the decomposed wavelet coefficients.
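Before turning to the results, the residual-learning idea just described can be sketched compactly. The code below is written in PyTorch rather than the authors' MatConvNet setup, and the two-level depth, channel counts, nearest-neighbor upsampling, and the pre-decomposed 16-channel input standing in for contourlet coefficients are all illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    # "basic operation": conv -> batchnorm -> relu, applied twice
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class ResidualUNet(nn.Module):
    """Two-level U-Net that predicts the *artifact* in a multi-channel
    (e.g. directional-wavelet) representation; the clean coefficients
    are the input minus the predicted residual."""
    def __init__(self, channels=16, width=64):
        super().__init__()
        self.enc1 = block(channels, width)
        self.enc2 = block(width, 2 * width)
        self.pool = nn.MaxPool2d(2)
        # nearest-neighbor upsampling as a stand-in for average unpooling
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.dec1 = block(3 * width, width)   # skip concat: width + 2*width
        self.out = nn.Conv2d(width, channels, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([e1, self.up(e2)], dim=1))
        residual = self.out(d1)
        return x - residual                   # artifact-free coefficients

net = ResidualUNet()
coeffs = torch.randn(1, 16, 64, 64)           # stand-in wavelet coefficients
print(net(coeffs).shape)
```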
The reconstruction results of FBP, TV, the baseline networks, and the proposed method for 120° scanning are shown in Fig. [fig_results120]. The single-resolution CNN performs better than the TV approach, but the internal and outer areas are incorrectly recovered. The multi-resolution CNN is significantly better than the single-resolution CNN; however, ghost artifacts remain beneath the patient bed, and the internal structures are reconstructed blurrily. The proposed multi-resolution wavelet-domain residual network yields a clear reconstruction and compensates the CT artifacts more effectively than the other methods. Over the entire image, the global artifacts are removed and the boundary of the body is clearly recovered by the proposed method. The enlarged images (the yellow boxes in Fig. [fig_results120]) show that the proposed method eliminates small local artifacts and reconstructs the detailed structures. It is remarkable that the proposed method successfully preserves the detailed edge structures while compensating for heavy directional and blurring artifacts. Moreover, the computation time of the proposed method is about 0.34 s per slice, roughly ten times faster than the TV method. For the 150° scanning experiments in Fig. [fig_results150], similar reconstruction behavior is observed, although the visual difference between the multi-resolution CNN and the multi-resolution wavelet-domain CNN is not as significant as in the 120° case. The enlarged images (the yellow boxes in Fig. [fig_results150]) show that the wavelet-domain multi-resolution CNN has fewer artifacts than the image-domain multi-resolution CNN; for small missing angles, however, both multi-resolution residual networks exhibit excellent subjective performance. In Tables [tab_methods] and [tab_methods2], a numerical analysis of the image quality of the different methods, averaged over 488 slices, is arranged. All deep learning approaches show better performance than the TV method, but the proposed method achieves the highest peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), and the lowest normalized root mean square error (NRMSE). The numerical analysis and the reconstructed images confirm that the proposed method exceeds the previous methods for limited-angle CT reconstruction. [Figure: results for the 120° scanning geometry.] [Figure: results for the 150° scanning geometry.] [Table: comparison of various methods for limited-angle CT compensation.]
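For reference, two of the reported figures of merit have short standard definitions; a minimal NumPy sketch is given below, taking the peak in the PSNR as the maximum of the reference image (one common convention among several). SSIM is more involved; an implementation is available, for example, as skimage.metrics.structural_similarity in scikit-image.

```python
import numpy as np

def nrmse(x, ref):
    """Normalized root mean square error with respect to a reference image."""
    return np.linalg.norm(x - ref) / np.linalg.norm(ref)

def psnr(x, ref):
    """Peak signal-to-noise ratio in dB; peak taken as max of the reference."""
    mse = np.mean((x - ref) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

ref = np.random.rand(64, 64)
noisy = ref + 0.05 * np.random.randn(64, 64)
print(f"nrmse={nrmse(noisy, ref):.4f}, psnr={psnr(noisy, ref):.2f} dB")
```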
In this work, we proposed a novel deep residual learning network for CT reconstruction from limited-angle projection data. The proposed network uses a U-Net structure to increase the receptive field, and, since the typical artifacts of limited-angle CT show globalized patterns such as directional streaks and blurs, the network is trained in a directional wavelet transform domain. Compared to the FBP and TV methods, the proposed network shows excellent results. The authors would like to thank Dr. Cynthia McCollough, the Mayo Clinic, the American Association of Physicists in Medicine (AAPM), and grants EB01705 and EB01785 from the National Institute of Biomedical Imaging and Bioengineering for providing the low-dose CT grand challenge data set. This work was supported by the Korea Science and Engineering Foundation, grant number NRF-2016R1A2B3008104.
T. Wu, A. Stewart, M. Stanton, T. McCauley, W. Phillips, D. B. Kopans, R. H. Moore, J. W. Eberhard, B. Opsahl-Ong, L. Niklason et al., "Tomographic mammography using a limited number of low-dose cone-beam projection images," Medical Physics, vol. 30, no. 3, pp. 365-380, 2003.
L. Li, Z. Chen, L. Zhang, and K. Kang, "An exact reconstruction algorithm in variable pitch helical cone-beam CT when PI-line exists," Journal of X-ray Science and Technology, vol. 14, no. 2, pp. 109-118, 2006.
H. Gao, L. Zhang, Y. Xing, Z. Chen, J. Zhang, and J. Cheng, "Volumetric imaging from a multisegment straight-line trajectory and a practical reconstruction algorithm," Optical Engineering, vol. 46, no. 7, p. 077004, 2007.
M. Rantala, S. Vanska, S. Jarvenpaa, M. Kalke, M. Lassas, J. Moberg, and S. Siltanen, "Wavelet-based reconstruction for limited-angle x-ray tomography," IEEE Transactions on Medical Imaging, vol. 25, no. 2, pp. 210-217, 2006.
E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489-509, 2006.
E. Y. Sidky and X. Pan, "Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization," Physics in Medicine and Biology, vol. 53, no. 17, p. 4777, 2008.
O. Ronneberger, P. Fischer, and T. Brox, "U-Net: convolutional networks for biomedical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2015, pp. 234-241.
H. Zhang, L. Li, K. Qiao, L. Wang, B. Yan, L. Li, and G. Hu, "Image prediction for limited-angle tomography via deep learning with convolutional neural network," arXiv preprint arXiv:1607.08707, 2016.
A. Vedaldi and K. Lenc, "MatConvNet: convolutional neural networks for MATLAB," in Proceedings of the 23rd ACM International Conference on Multimedia, ACM, 2015.
Limited-angle computed tomography (CT) is often used in clinical applications such as C-arm CT for interventional imaging. However, CT images from limited angles suffer from heavy artifacts due to incomplete projection data. Existing iterative methods require extensive calculations but cannot deliver satisfactory results. Based on the observation that the artifacts from limited angles have a directional property and are globally distributed, we propose a novel multi-scale wavelet-domain residual learning architecture that compensates for the artifacts. Experiments show that the proposed method effectively eliminates the artifacts, thereby preserving the edge and global structures of the image.
The analysis of social networks has been, for a few years now, a very attractive topic that many researchers have taken interest in. As one particular type of social network, mobile phone graphs have been widely studied in the past decade, and researchers have explored the potential of such data as, among others, sensors to uncover trends in human behavior, mobility habits, or social interactions. However, those studies are inherently based on partial data, often covering a subset of the population, a specific time frame, or a single country, as the data are mostly provided by a single telecommunication company. Moreover, the covered subset of the population may be biased, as some providers are more popular across, for example, a given age group, a given scale of revenue, or users preferring voice calls to text messages, to name only a few of the potential sources of bias. Mobile telecommunication companies often do not have a monopolistic position, due to market regulation by the authorities, and the data and the analyses of such data are therefore subject to inherent bias, due to the partial (yet significant) coverage of the population by the telecom operator in the country of interest. In particular, the network of mobile phone users of a given operator is subject to changes over time: users can join or leave the network for various reasons, for example by subscribing to a contract with a different operator. The datasets studied in research always cover a given period of time, during which some users appeared and others disappeared from the network. Thus, instead of analyzing the behavior of a fixed group of users during the given time period, studies observe only a partial view of the network, and some users are only observed during part of the observation time window. Furthermore, even though we know that this bias exists, it is difficult to remove: it is often very difficult to distinguish between a user who is simply inactive for a few days and a user who has permanently left the network. Most of the literature on mobile phone dataset analysis, however, is based on the assumption that the network of users is not biased. Only few studies discuss the potential bias of their data, and very little is known about the qualitative and quantitative effects that a biased sampling of the users of a country could have on the results of the analyses of a mobile phone network. Moreover, besides this inherent bias of the available dataset, many researchers start by preprocessing the data, often called "cleaning", removing links or nodes that are not active enough. For example, Onnela et al. remove all links that are non-reciprocated; that is, if one user called another, but the second user never called back, then the link is removed from the network. Going even further, Lambiotte et al. impose that a pair of nodes communicated at least six times in each direction for the link to be taken into account in the analyses. These apparently innocuous filtering methods, applied before the analysis of the network, generate additional biases that are often overlooked.
In this paper, we show that if we only take into account the users that remain in the network during the whole observation period, thus removing all users that joined or left the network during that time frame, the degree distribution changes from a DPLN to a lognormal, both in empirical and theoretical scenarios. The limits of validity of this observation are not fully understood, and we formulate assumptions to that effect. In that regard, we point to an observation that, far from settling the case of cleaning-induced biases, opens a line of research. One of the first things that come to mind when studying social networks is to count the number of acquaintances of each user in the network, referred to as the user's degree. Different studies have shown that the size and shape of the distribution of degrees in a mobile phone network can vary depending on many parameters. In one of the first studies on mobile phone data, looking at one day of data, Aiello et al. observed that the network exhibited a power-law (or Pareto) degree distribution, corresponding to a random graph model with a power-law probability distribution. This observation was later confirmed by studies on different mobile phone datasets. However, in a later study by Seshadri et al., this time using a longer period of one month of data, the authors observed that the mobile call graph had a degree distribution corresponding to a double Pareto lognormal (DPLN). DPLN distributions are composed of two Pareto distributions, one for small values and one characterizing the tail, joined by a smooth transition, and can be derived as a mixture of lognormal distributions. In this paper, we analyze two large databases of mobile phone communications. Data for Belgium were recorded over a 6-month period, from October 1, 2006 to March 31, 2007, by one large provider in Belgium whose market share is around 30%. The database contains the communications (SMS and voice calls) of about 3.3M users geographically spread over the whole country. For each call, the information contained in the call detail records (CDRs) is the caller and callee anonymized IDs, the date and time of the communication, whether it is a voice call or SMS, and the duration in the case of a voice call. This dataset has already been used in several research projects addressing different questions. Data for Portugal were recorded over a 15-month period, from April 1, 2006 to June 30, 2007, but with a gap where data are missing between September 16 and October 31, 2006. The data contain all voice calls between clients of the same provider, but SMS are not recorded. The dataset contains information on about 1.9M users, which represent approximately 20% of the population of the country.
For each call, the information recorded is the caller and callee anonymized IDs, the date and time of the communication, and the ID of the cell tower that recorded the call. This dataset has been used before to study mobility patterns in Portugal. We study the distribution of degrees in the network of mobile phone communications. We draw a link between two users as soon as they have communicated at least once, thus without applying any filter on the links. We observe the degree distribution of two networks, namely the network of all users present in the database (hereafter referred to as the full network), and a network in which only users that are active already at the beginning of the observation period and are still active at the end are taken into account (hereafter referred to as the stable network). To this end, in this second network we only consider users that are active at least once in the first four weeks and once in the last four weeks of the observed time period. In the Belgian dataset, the nodes of the stable network represent about 70% of the nodes of the full network; in the Portuguese dataset, the nodes that remain active throughout the observation period represent 45% of the nodes of the whole network. The detailed numbers of nodes are given in Table [tab:ts_numnodes], along with the numbers of links after 26 weeks of observation, and after 50 weeks in the case of Portugal. Interestingly, we observe two different degree distributions in these networks: the degree distribution of the full network seems to follow a double Pareto lognormal distribution (DPLN), while the stable network shows a lognormal degree distribution. Figure [fig:ts_degdistfits] shows the degree distributions of the two networks and the fitted DPLN and lognormal curves for the Belgian and Portuguese datasets. First, let us notice that the two datasets give qualitatively similar results. The distributions for both full networks correspond well to a DPLN distribution, while both distributions for the stable networks present lognormal behavior, indicating that this result may be universal across datasets. Furthermore, the fitted parameters of the distributions are of the same order of magnitude in both datasets. Small variations of the parameters were to be expected, since the samplings of the two datasets do not cover exactly the same proportion of the population, and since the two datasets come from different countries and were recorded with different methods; their characteristics may therefore differ sufficiently to induce small discrepancies in the parameters.
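The sampling operation that defines the two networks is simple to express on generic CDR data. The sketch below (Python, using pandas and networkx) is an illustrative reconstruction: the column names and the in-memory representation are assumptions, not the actual processing pipeline used for the proprietary datasets.

```python
import pandas as pd
import networkx as nx

def build_networks(cdr: pd.DataFrame):
    """cdr: one row per call, with columns ['caller', 'callee', 'timestamp'].
    Returns (full network, stable network) as undirected graphs."""
    t0, t1 = cdr["timestamp"].min(), cdr["timestamp"].max()
    four_weeks = pd.Timedelta(weeks=4)

    g_full = nx.from_pandas_edgelist(cdr, "caller", "callee")

    # Users active at least once in the first AND the last four weeks.
    early = cdr[cdr["timestamp"] <= t0 + four_weeks]
    late = cdr[cdr["timestamp"] >= t1 - four_weeks]
    users = lambda df: set(df["caller"]) | set(df["callee"])
    stable_users = users(early) & users(late)

    g_stable = g_full.subgraph(stable_users).copy()
    return g_full, g_stable
```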
It is hardly surprising that the degree distributions of the two networks are different. However, it is interesting to notice that there is a very close relationship between those two distributions, and that this relationship seems to be universal across different datasets. This relationship could explain some of the discrepancies between the observations published previously on networks built from mobile phone datasets. We now give a short description of the DPLN distribution and of a process that produces it; for more details we refer the interested reader to the full paper on DPLN distributions. In the standard parameterization of Reed and Jorgensen, the probability density function of the DPLN distribution is $f(x) = \frac{\alpha\beta}{\alpha+\beta}\left[ e^{\alpha\nu + \alpha^2\tau^2/2}\, x^{-\alpha-1}\, \Phi\!\left(\frac{\ln x - \nu - \alpha\tau^2}{\tau}\right) + e^{-\beta\nu + \beta^2\tau^2/2}\, x^{\beta-1}\, \Phi^c\!\left(\frac{\ln x - \nu + \beta\tau^2}{\tau}\right)\right]$, where $\Phi$ and $\Phi^c$ represent the cumulative distribution function of the standard normal distribution and its complementary distribution function, respectively. One way of generating a DPLN distribution is to observe a geometric Brownian motion process with an initial condition drawn from a lognormal distribution, but to stop the observation of this process at a random time. Indeed, suppose a variable $X$ evolves according to a geometric Brownian motion, $dX = \mu X\, dt + \sigma X\, dB(t)$, where $t$ is the time variable and $B(t)$ is a Brownian motion, and suppose that $X(0)$ is drawn from a lognormal distribution. Then for any constant time $t = T$, $X(T)$ is distributed as a lognormal whose parameters depend linearly on $T$: $\ln X(T)$ is normal with mean $\nu_0 + (\mu - \sigma^2/2)T$ and variance $\tau_0^2 + \sigma^2 T$. If, on the other hand, $T$ is drawn from an exponential distribution, then $X(T)$ exhibits a DPLN distribution. In our case, to apply this construction to a mobile phone network, we let the variable $X$ correspond to the degree of a user, growing with time as the user makes new contacts, and we let $T$ correspond to the time between the first and last observations of activity of the user, representing the time during which the degree of the user could grow and be observed in our dataset. The degree distribution we observe in the full network then corresponds to the distribution of $X(T)$; when $T$ is fixed, i.e., when we observe users over a time window of fixed length, we observe a lognormal. We surmise that this process may explain the discrepancy between the degree distributions of the full and stable networks: the lognormal distribution corresponds to a network where no user entered or left the database during the period under study, so that all users in the system have the same age and are at the same stage of the degree growth process. This observation is in line with other studies on different kinds of data, such as the number of citations or the number of votes in proportional elections. However, if we also consider users for which we have only partial information, that is, users that left or entered the network during the observation period, then we observe a degree distribution corresponding to a DPLN, emerging as a mixture of the lognormal processes of many users of different ages in the system, taken at different stages. Empirically, when we look at a fixed number of users during a fixed length of time, the degree distribution obtained is a lognormal (recall Figs. [fig:ts_deglognfit] and [fig:ts_pordeglognfit]). Furthermore, the lognormal distribution that fits the degree distribution evolves with the length of the observation time window, which suggests that the observed process may correspond to the evolving process generating a DPLN.
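This generative mechanism is straightforward to simulate. The sketch below (Python/NumPy) draws observation times from an exponential distribution and samples the geometric Brownian motion at those times using its exact lognormal conditional law; the parameter values are arbitrary illustrative choices, not values fitted to the CDR data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# GBM parameters (illustrative values).
mu, sigma = 0.4, 0.5          # drift and volatility
nu0, tau0 = 0.0, 0.3          # lognormal initial condition of X(0)
lam = 1.0                     # rate of the exponential observation time

T = rng.exponential(1.0 / lam, size=n)

# Given T, X(T) is lognormal:
# log X(T) ~ N(nu0 + (mu - sigma^2/2) T, tau0^2 + sigma^2 T).
m = nu0 + (mu - 0.5 * sigma**2) * T
s = np.sqrt(tau0**2 + sigma**2 * T)
x = np.exp(rng.normal(m, s))

# The log-density should decay linearly on both sides (power-law tails
# in x), the signature of a DPLN.
hist, edges = np.histogram(np.log(x), bins=80, density=True)
print(hist[:5], hist[-5:])
```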
Figure [fig:ts_musig] shows the maximum likelihood parameters of the lognormal distribution fitting the degree distribution of the stable network, for observation time windows ranging from one week to 26 weeks for Belgium and to 57 weeks for Portugal. After a transient behavior due to the start of the observation period, we observe that the parameters continue evolving with time. As the Portuguese dataset covers a longer period, the observations are consistent with the hypothesis that the degree distribution is well approximated by a lognormal whose parameters evolve linearly with time, as in the process generating DPLN distributions described above. In that construction, this evolution is the consequence of a geometric Brownian motion; in our analyses, we can neither validate nor reject the hypothesis that the process corresponds to a geometric Brownian motion, but we observe directly the evolution of the parameters of the lognormal. Now, if instead of looking at a fixed set of people we let people enter and leave the network during the observation window, this is equivalent to observing the whole population, but during a time window that is different for each individual. Indeed, the time when a user switches to or from another network determines the time frame during which this user was observed. Furthermore, when a new user enters the network, they start with a degree equal to zero, whereas the degree of a user is frozen as soon as they leave the network and are no longer observed. Therefore, if we assume that every user has a degree evolving according to a lognormal distribution with parameters evolving with time (as empirically validated above), then the degree distribution that we observe when taking all users into account corresponds to a mixture of lognormal distributions with various parameters, yielding the distribution that we observe for the full network, which is well fitted by a DPLN. Let us note that for a time window so short that users have not yet had the time to enter or leave the network, the distribution for the full network still corresponds to a lognormal distribution. However, the generating process described in the literature supposes that the distribution of the times during which users have been observed is exponential. Unfortunately, this is not the case in the networks under study here. The distribution of elapsed time (in days) between the first and last activity is depicted in Fig. [fig:ts_timeinthenet], for Belgium (top) and Portugal (bottom). We observe that these distributions are strongly influenced by the maximum time determined by the size of the dataset (182 and 401 days of data, respectively). Furthermore, even if we only take into account the left part of the distribution in the Belgian dataset (between 0 and 50 days), the distribution is still too broad to correspond to an exponential curve; in the case of Portugal, the distribution of observation times shows an even less pronounced peak for small times. These results suggest that the generating process of a DPLN distribution may be more robust than the exponential-time mechanism indicates. We therefore suggest a new conjecture: the generating process of the DPLN distribution admits the relaxation of the hypothesis that the time of observation is distributed exponentially. To strengthen our hypothesis, we conduct several Monte Carlo experiments, reproducing the process described above with different distributions for choosing the observation time.
For each experiment, we randomly select 100,000 values of the observation time from a chosen distribution (different from the exponential), and then select a value of the degree from a lognormal distribution whose parameters depend on the selected time and grow linearly with it. We then observe the resulting distribution of synthetic degrees. Figure [fig:ts_variousdpln] shows the results of our simulations for two different distributions of the observation time: a combination of uniform distributions, and a sum of exponential distributions. These distributions were chosen because they reproduce some of the features observed in the empirical distributions from the mobile phone data: a peak for very short times and a peak at the maximum time (as limited by the observation time window, see Fig. [fig:ts_timeinthenet]). We observe that the resulting distributions are better approximated by DPLN distributions than by lognormal distributions, especially for small values. Furthermore, a test for normality of the logarithm of the samples rejected the null hypothesis that those samples correspond to lognormal distributions (with very small associated p-values). These results suggest that DPLN distributions may arise as a result of mixing lognormal distributions with time-evolving parameters, and that the hypothesis of an exponentially distributed observation time may not be necessary for a DPLN distribution to appear. We have presented an observation from the analysis of two large databases of mobile phone communications in Belgium and Portugal. We showed that the measurement of the degree distribution of a time-evolving social network may be more complicated to analyze than initially thought, as seemingly benign choices of data cleaning, such as the inherent sampling of the data, can lead to radically different conclusions. We also showed that the double Pareto lognormal distribution, observed in several studies as a characteristic of social networks, does not represent the distribution of the underlying social network, but instead appears as an effect of the sampling of users and of the limited time window; the actual degree distribution corresponds rather to a lognormal. Moreover, we showed that this effect appears in two different datasets, suggesting the tempting hypothesis, calling for further verification, that this is a universal characteristic of mobile phone datasets. Furthermore, we presented results indicating that the double Pareto lognormal distribution can arise from a process that does not exactly correspond to those studied before, opening the question of the robustness of this process for further study.
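The Monte Carlo experiment is a one-line variation of the exponential case sketched earlier: only the law of the observation time changes. Below, the time is drawn from a 50/50 mixture of two uniform distributions, an arbitrary stand-in for the observed "peak at short times plus peak at the maximum time" shape, not the exact mixture used for Fig. [fig:ts_variousdpln].

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 100_000

# Mixture of uniforms: short observation times and near-maximal ones.
t_max = 182.0                            # days, as in the Belgian dataset
pick = rng.random(n) < 0.5
T = np.where(pick, rng.uniform(0, 20, n), rng.uniform(t_max - 40, t_max, n))

# Lognormal parameters growing linearly with T (illustrative rates).
m = 0.01 * T
s = np.sqrt(0.09 + 0.004 * T)
x = np.exp(rng.normal(m, s))

# Normality test on log(x): a rejection indicates that the mixture is
# *not* lognormal, as reported for the synthetic degree distributions.
stat, p = stats.normaltest(np.log(x))
print(f"normaltest p-value: {p:.3g}")
```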
Additionally, we observe that the degree distributions present similar characteristics whether we look at one or many weeks of data: up to a scaling parameter, all curves correspond to the same distribution; see Fig. [fig:ts_degdists]. Interestingly, we observe that the average degree is an appropriate estimator of this scaling parameter. This scaling property had already been observed on the network of all users of Belgium; here, we extend this observation to another dataset, and to the network of active users presenting a lognormal degree distribution. These observations suggest that the degree distribution of a mobile phone network may be governed by a universal law, represented by a lognormal distribution in the case of a constant observed population, and by a DPLN in the case of an evolving observed population. Independently of the dataset, and of the length of the time window (provided that it is longer than one week), we observe that the degree distributions remain qualitatively the same, up to a scaling parameter. This may not be valid, however, for very short time windows of the order of days, as was observed by Krings et al. Nevertheless, this degree distribution law appears to be fairly robust to the particularities of specific datasets, as we have reached the same conclusions studying two different datasets, which cover neither the same population nor the same country. These results provide new insights into the particular characteristics of mobile phone datasets and into the dynamics of the construction of a social network based on mobile communications. To overcome some of the privacy issues related to the use of mobile phone datasets, recent work has focused on creating synthetic data reproducing the characteristics of empirical data. The observations presented in this paper may help in the future to create better synthetic datasets offering a closer correspondence with empirical data, as these results reveal characteristics of mobile phone datasets that may have been overlooked in the past. The present study opens the door to many questions regarding the effects of the sampling of a dataset. Furthermore, the sources of bias in a dataset are numerous. We have discussed the sampling of the dataset, which is determined by which operator provided the data and which part of the population has chosen that operator. If the operator is more popular across certain groups, determined by, for example, age, revenue, or occupation, the coverage of the dataset may be biased. Moreover, this bias is very difficult to remove without access to a dataset with perfect coverage, which does not exist. The question of the impact of sampling on the results of the analyses remains open and would require further work. Beyond the choice of provider and its associated sampling, additional sources of bias in the data include the behavior of users. For example, "flashing" techniques consist in letting a relative's phone ring a couple of times and waiting for them to call back.
Such a technique ensures that it is always the same person who pays for a communication, but links between users applying this technique will be removed from the dataset if the filtering imposes reciprocity. Another example is given by people who prefer voice calls to text messages, or the opposite: if only one type of communication is recorded in a dataset, some links appear imbalanced because the two nodes have different preferences. In addition, let us note that a single SIM card may be shared between several people, while other users possess more than one phone number, and detecting such users is a difficult task. Depending on the specific filtering methods used prior to the analysis of the data, this type of user behavior may induce additional bias in the extracted social network. However, so far it is not clear how to evaluate the sources and the extent of the biases present in a given dataset, and researchers must pay attention to the interpretation of their results, bearing in mind that the data analyzed are far from perfect. AD acknowledges funding from the Fonds National de la Recherche Scientifique (F.R.S.-FNRS). This research was made possible with the support of Orange. We acknowledge support from a grant "Actions de Recherche Concertées: Mining and Optimization of Big Data Models" of the Communauté Française de Belgique, and from the Belgian network DYSCO (Dynamical Systems, Control, and Optimization), funded by the Interuniversity Attraction Poles Programme, initiated by the Belgian State, Science Policy Office. We also acknowledge support from Innoviris in the context of the BRU-NET project.
B. Csáji, A. Browet, V. A. Traag, J.-C. Delvenne, E. Huens, P. Van Dooren, Z. Smoreda, and V. D. Blondel, "Exploring the mobility of mobile phone users," Physica A: Statistical Mechanics and its Applications, vol. 392, no. 6, pp. 1459-1473, 2013.
S. Isaacman, R. Becker, R. Cáceres, M. Martonosi, J. Rowland, A. Varshavsky, and W. Willinger, "Human mobility modeling at metropolitan scales," in Proceedings of the 10th International Conference on Mobile Systems, Applications, and Services, pp. 239-252. ACM, 2012.
M. Karsai, M. Kivelä, R. K. Pan, K. Kaski, J. Kertész, A.-L. Barabási, and J. Saramäki, "Small but slow world: how network topology and burstiness slow down spreading," Physical Review E, vol. 83, no. 2, p. 025102, 2011.
L. Kovanen, K. Kaski, J. Kertész, and J. Saramäki, "Temporal motifs reveal homophily, gender-specific patterns, and group talk in call sequences," Proceedings of the National Academy of Sciences, vol. 110, no. 45, pp. 18070-18075, 2013.
G. Krings, F. Calabrese, C. Ratti, and V. D. Blondel, "Scaling behaviors in the communication network between cities," in 2009 International Conference on Computational Science and Engineering, pp. 936-939. IEEE, 2009.
G. Krings, M. Karsai, S. Bernhardsson, V. D. Blondel, and J. Saramäki, "Effects of time window size and placement on the structure of an aggregated communication network," EPJ Data Science, vol. 1, no. 4, pp. 1-16, 2012.
D. J. Mir, S. Isaacman, R. Cáceres, M. Martonosi, and R. N. Wright, "DP-WHERE: differentially private modeling of human mobility," in Big Data, 2013 IEEE International Conference on, pp. 580-588. IEEE, 2013.
F. Radicchi, S. Fortunato, and C. Castellano, "Universality of citation distributions: toward an objective measure of scientific impact," Proceedings of the National Academy of Sciences, vol. 105, no. 45, pp. 17268-17272, 2008.
C. Ratti, S. Sobolevsky, F. Calabrese, C. Andris, J. Reades, M. Martino, R. Claxton, and S. H. Strogatz, "Redrawing the map of Great Britain from a network of human interactions," PLoS ONE, vol. 5, no. 12, p. e14248, 2010.
M. Seshadri, S. Machiraju, A. Sridharan, J. Bolot, C. Faloutsos, and J. Leskovec, "Mobile call graphs: beyond power-law and lognormal distributions," in Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 596-604. ACM, 2008.
Mobile phone data have been extensively used in recent years to study social behavior. However, most of these studies are based on partial data whose coverage is limited both in space and time. In this paper, we point out that the bias due to the limited coverage in time may have an important influence on the results of the analyses performed. In particular, we observe significant differences, both qualitative and quantitative, in the degree distribution of the network depending on the way the dataset is pre-processed, and we present a possible explanation for the emergence of double Pareto lognormal (DPLN) degree distributions in temporal data.
The agents in the model are banks, firms, and households; for details of the implementation, see the SI. At every timestep, each firm approaches its main bank with a request for a loan, whose size is a random number drawn from a uniform distribution. Banks try to provide the requested firm loan. If they have enough cash reserves available, the loan is granted; if they do not have enough, they approach other banks in the interbank (IB) market and try to obtain the missing amount from them at the IB interest rate. Not every bank has business relations with every other bank. Interbank relations are recorded in the IB relation network: if two banks are willing to borrow from each other, the corresponding entry of the network is one; if they have no business relations, it is zero. We model the IB relation network with random graphs and scale-free networks (see SI). If a bank does not have enough cash and cannot raise the full amount for the requested firm loan in the IB market, it does not pay out the loan. If the bank does pay out a loan, the firm transfers some of the cash to the households as "investments" for future payoffs (wages, investment in new machines, etc.). Loans from previous timesteps are paid back after a fixed number of timesteps, with interest. The fraction of the loan not used to pay back outstanding loans ends up at the households (for details see SI). Households use the money received from firms (1) to deposit a certain fraction at a bank, for which they receive deposit interest, or (2) to consume goods produced by other firms (details in SI). This money flows back to the firms (the firms' profits) and is used by them to repay loans; if firms run a surplus, they deposit it in their bank accounts, receiving deposit interest as well. The two actions of the households effectively lead to a re-distribution and re-allocation of funds at every timestep. For simplicity we model the households as a single (aggregated) agent that receives cash from firms (through firm loans) and re-distributes it randomly among banks (household deposits) and among other firms (consumption). Specifically, at each timestep a bank-firm pair is chosen randomly, and the following actions take place:
* banks and firms repay loans issued in earlier timesteps
* firms realize profits or losses (consumption)
* banks pay interest to households
* firms request loans
* households re-distribute cash obtained from firms
* liquidity management of banks in the IB market, including IB re-payments, firm-loan requests, defaulted firms, and re-distribution effects from households
* firms pay salaries and make investments
* firms or banks default if equity or liquidity problems arise
A new bank-firm pair is picked until all are updated (random sequential update); then the next timestep follows. During the simulation, firms and banks may become unable to pay their debts and thus insolvent. Firms are declared bankrupt if they are either insolvent or if their equity capital falls below some negative threshold. Banks are declared bankrupt if they are insolvent or have equity capital below zero. If a firm goes bankrupt, its bank writes off the respective outstanding loans as defaulted credits and realizes the losses. If the bank does not have enough equity capital to sustain these losses, it goes bankrupt as well. The bankruptcy of a bank triggers a default event for all its IB creditors, which may set off a cascade of bank defaults. For simplicity, there is no recovery for IB loans. A cascade of bankruptcies happens within one timestep; after the last bankruptcy is taken care of, the simulation is stopped.
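A compressed sketch of the update logic is given below (Python). The balance-sheet variables, parameter values, and the handling of partially raised liquidity are simplified stand-ins for the full SI specification; household redistribution, repayments, and default cascades are omitted for brevity.

```python
import random

class Bank:
    def __init__(self, cash=100.0, equity=20.0):
        self.cash, self.equity = cash, equity
        self.neighbors = []      # banks linked in the IB relation network

    def raise_liquidity(self, amount, order=None):
        """Borrow `amount` from IB neighbors. `order` decides whom to ask
        first: random in the normal mode, least systemically risky first
        in the transparent mode."""
        lenders = order(self.neighbors) if order else \
            random.sample(self.neighbors, len(self.neighbors))
        raised = 0.0
        for lender in lenders:
            take = min(lender.cash, amount - raised)
            lender.cash -= take
            raised += take
            if raised >= amount:
                break
        self.cash += raised      # (partial amounts kept here for simplicity)
        return raised

def timestep(banks, loan_request=lambda: random.uniform(0.0, 10.0)):
    """One sweep of the random sequential update (firm loans only)."""
    for bank in random.sample(banks, len(banks)):
        request = loan_request()               # the firm's loan request
        shortfall = max(0.0, request - bank.cash)
        if shortfall and bank.raise_liquidity(shortfall) < shortfall:
            continue                           # loan not paid out
        bank.cash -= request                   # loan granted to the firm

banks = [Bank() for _ in range(10)]
for b in banks:
    b.neighbors = [x for x in banks if x is not b][:3]
timestep(banks)
print([round(b.cash, 1) for b in banks])
```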
We model a closed system of banks, firms, and households, meaning that there are no in- or out-flows of cash. In the normal mode the model captures current market practice, where banks follow a simple strategy to manage their liquidity. If a bank needs additional liquidity (to provide a requested firm loan, or for its own re-payments of IB loans), it contacts the banks it is connected with in the IB relation network and asks them for IB loans; in the normal mode, it asks its neighbors in random order. If a contacted bank can provide only a fraction of the requested IB loan, the borrowing bank takes it and continues to ask its other neighbors (in random order) until its liquidity requirements are satisfied. A simple modification that improves the stability of the system is to avoid borrowing from banks with a large systemic impact; for this, a minimum level of transparency of the IB market is necessary. For all banks we compute systemic risk metrics, based on the interbank liability network and the equity of the banks at each timestep (details in SI); in particular we compute the DebtRank and, for comparison, the Katz rank (see Methods). The most risky bank has rank 1, the least risky has the highest rank (see Methods). In contrast to the normal mode, in the transparent mode a bank orders its neighbors according to their inverse DebtRank or Katz rank before asking them for IB loans; it then asks its neighboring banks in the order of their inverse rank, i.e., it first asks the least risky, then the next risky, and so on. The rank is computed at the beginning of each timestep. In this way low-risk banks are favored, because the likelihood of obtaining (profitable) IB deals is much higher for them than for risky banks, which are at the end of the list and will practically never be asked. In reality this implies that banks know the DebtRank of each of their neighboring banks. This transparency is not available in the present banking system; note, however, that in many countries central banks have all the necessary data to compute the DebtRank. A possible way to implement such an incentive scheme in reality is presented in the discussion. We also implement a version of the transparent IB market in which the DebtRank is computed after every transaction in the IB market, instead of at the beginning of the day; this version we refer to as the fast mode. We simulate the above model, with the parameters given in the SI, for 500 timesteps. Results are averages over 10,000 identical simulations; fit parameters to the distribution functions below are collected in SI Table 2. In Fig. [hist_loss](a) we show the distribution of losses to banks for the normal mode (red), where the selection of counterparties for IB loans is random, and for the transparent mode (blue), where banks sort their potential counterparties according to their inverse DebtRank and approach the least risky neighbor first; the fast mode is shown in green. The normal mode shows a heavy tail in the loss distribution, which completely disappears in the transparent and fast modes, where there are no losses higher than 50 and 40 units, respectively.
Of course, losses do not entirely disappear in the transparent scheme, since the credit risk that firms bring to the banking system cannot be completely eliminated. The fast mode appears to be slightly safer than the transparent mode; fits to all curves are given in the SI. The distribution of cascade sizes of defaulting banks is shown in Fig. [hist_loss](b). Again the normal mode shows a heavy tail, meaning that in a non-negligible number of events the default of a single bank triggers a cascade of liquidity and equity problems through the system; in some cases up to 80% of the banks collapse. In the transparent mode the likelihood of contagion is greatly reduced, and the maximum cascade size is limited to about 40 banks, and to about 30 in the fast mode. In Fig. [hist_loss](c) we show the transaction volume in the IB market for the three modes: normal (red), transparent (blue), and fast (green). The transparent and fast modes show a higher transaction volume, indicating a more efficient IB market, in which liquidity from banks with excess funds is channeled more effectively to those without. We verified that the ratio of requested to provided firm loans, the efficiency, is essentially the same irrespective of the mode. In Fig. [katzdistribution] we show the normalized DebtRank for all individual banks, for the normal (red) and the transparent (blue) scheme. Banks are rank-ordered according to their DebtRank, so that the most risky bank is found at the very left and the safest at the very right. It is clear that the systemic risk impact in the transparent mode is spread much more evenly throughout the system, whereas in the normal mode some banks are far more dangerous to the system. [Figure: normalized DebtRank for individual banks in the normal (red) and the transparent (blue) mode; banks are ordered according to their DebtRank, the most risky at the very left, the safest at the very right. The distribution is an average over simulation runs with an ER network.] In Fig. [katz_debt] we compare the losses for the DebtRank (red) and the Katz rank (blue); the performance of the two definitions is hardly distinguishable. The other systemic risk measures also show no noticeable difference, for cascade-size and transaction-volume distributions (see SI). Figure [nw_effects] shows the distribution of losses for (a) the normal and (b) the transparent mode, as computed with an ER contact network (red) and a scale-free BA network (see SI) with the same average connectivity. In both modes the SF network leads to a slightly riskier situation. The situation for cascade sizes and transaction volume is depicted in SI Fig. 2, and the effects of connectivity on the three measures are shown and discussed in SI Fig. 3. Finally, we compute the distribution of the time to first default for the normal and the transparent modes. Both distributions are practically Gaussian (negligible kurtosis and skewness). This is expected, since the first default is typically triggered by a firm default, which is (to first order) independent of the situation in the IB market and depends only on the parameters describing the firms and the households (see SI).
[Figure: losses for the DebtRank-based (red) and Katz-rank-based (blue) transparent modes; both rank definitions provide practically identical results. Same simulation parameters as in the previous figure.] DebtRank is a recursive method suggested in the literature to determine the systemic relevance of nodes in financial networks. It is a number measuring the fraction of the total economic value in the network that is potentially affected by a node or a set of nodes. Let $L_{ij}$ denote the IB liability network at a given moment (the loans bank $i$ has extended to bank $j$), and let $C_i$ be the capital of bank $i$ (see SI). If bank $j$ defaults and cannot repay its loans, bank $i$ loses the loans $L_{ij}$; if $i$ does not have enough capital available to cover the loss, $i$ also defaults. The impact of bank $i$ on bank $j$ (in case of a default of $i$) is therefore defined as $W_{ij} = \min[1, L_{ji}/C_j]$. Given the total outstanding loans of bank $i$, $L_i = \sum_j L_{ij}$, its economic value is defined as $v_i = L_i / \sum_j L_j$. The impact of bank $i$ on its neighbors is the value-weighted sum of its direct impacts. To take into account the impact of nodes at distance two and higher, the impact has to be computed recursively, with a damping factor attenuating the contribution of each additional step. If the network contains cycles, this recursively computed impact can exceed one. To avoid this problem an alternative was suggested, in which two state variables, $h_i$ and $s_i$, are assigned to each node: $h_i$ is a continuous distress variable between zero and one, and $s_i$ is a discrete state variable with three possible states, undistressed, distressed, and inactive. Initially, $h_i = \psi$ for the nodes in the initially distressed set and $h_i = 0$ otherwise, with $s_i$ set accordingly to distressed or undistressed (the parameter $\psi$ quantifies the initial level of distress). At each step, every node that is not inactive increases its distress by the impact-weighted distress of its currently distressed neighbors, $h_i \to \min[1, h_i + \sum_{j:\, s_j = D} W_{ji} h_j]$, and a distressed node becomes inactive after propagating its distress once, so that each node propagates distress at most once. When no distressed node remains, the DebtRank of the initially distressed set is the total value-weighted distress at the end of the dynamics minus the initial value-weighted distress. To quantify the efficiency of the banking system we use the ratio of the sum of loans requested by the firms to the sum of loans actually paid out to firms at a given time; the efficiency of the system is then the time average of this ratio over the simulation. As another measure of efficiency we use the transaction volume in the IB market at a particular time in a typical simulation run, composed of two terms: the new IB loans issued at that timestep and the loans that are repaid. We showed that the risk endogenously created in a financial network by the inability of banks to carry out correct risk estimates of their counterparties can be drastically reduced by introducing a minimum level of transparency. Systemic risk is significantly reduced by introducing an incentive that makes borrowers more prone to borrow from systemically safe lenders. This philosophy was implemented by making a centrality measure such as the DebtRank available to all nodes in the network at each point in time. We could show that the efficiency of the financial network with respect to the real economy is not affected by the proposed regulation mechanism (in any of the modes). This is possible since the regulation only re-distributes risk in order to avoid the emergence of risky agents that might threaten the system, and does not reduce the trading volume in the IB network. Risky nodes, i.e., those ranked as systemically most dangerous, are barred from the possibility of lending their excess reserves to others. This deprives them of profits on IB loans, but also reduces their risk of being hit by defaulted credits: they only receive payments and do not issue more risk, meaning that over time they become less risky. Less risky banks, in turn, are allowed to take more risk (lend more) and make more profits. The proposed mechanism makes the system safer in a self-organized critical manner.
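The two-state dynamics described above is compact enough to implement directly. The sketch below (Python/NumPy) follows that description, taking the impact matrix and the economic values as inputs; the toy network at the end is random, and the final lines illustrate the transparent-mode rule of asking potential lenders in order of increasing systemic riskiness.

```python
import numpy as np

def debtrank(W, v, seed, psi=1.0):
    """DebtRank of the set `seed` of initially distressed banks.

    W[i, j] : impact of bank i's default on bank j (in [0, 1]).
    v       : relative economic values, summing to 1.
    """
    n = len(v)
    h = np.zeros(n)                    # distress levels
    s = np.full(n, "U")                # U(ndistressed), D(istressed), I(nactive)
    h[list(seed)] = psi
    s[list(seed)] = "D"
    initial = h @ v

    while np.any(s == "D"):
        distressed = np.flatnonzero(s == "D")
        # distress received from the currently distressed nodes
        dh = h[distressed] @ W[distressed, :]
        h = np.minimum(1.0, h + dh)
        s[distressed] = "I"            # each node propagates only once
        s[(h > 0) & (s == "U")] = "D"
    return h @ v - initial

# Transparent-mode ordering: ask the least systemically risky lenders first.
rng = np.random.default_rng(3)
W = 0.3 * rng.random((5, 5))
np.fill_diagonal(W, 0.0)
v = np.full(5, 0.2)
risk = np.array([debtrank(W, v, {i}) for i in range(5)])
neighbors = [0, 2, 4]
order = sorted(neighbors, key=lambda j: risk[j])
print(risk, order)
```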
however, this risk of credit default is not necessarily of systemic relevance. lending to a bank with a large systemic risk can have relatively little consequence for the systemic importance of the lender, or for the systemic risk of the system as a whole. in contrast, if a bank borrows from a systemically dangerous node, the borrower inherits part of this risk and increases the overall systemic risk. these facts are conveniently incorporated in the definitions of debtrank and katz rank. we found that the performance of the method is surprisingly insensitive to the choice of the particular centrality measure, and to the actual topology of the ib network (scale-free or random). the average connectivity of the network is also not relevant, as long as it remains in a sensible region. this suggests that the essence of the proposed scheme is that risk is spread more evenly across the network, which practically eliminates cascading failures. a possible way to implement the proposed transparency in reality would be for central banks to regularly compute the debtrank and make it available to all banks. to enforce the regulation, the cb monitors the ib loans in its central credit register and severely punishes borrowers who fail to seek out less risky lenders first. the design of a more market-based mechanism that yields the same self-organized critical regulation dynamics is the subject of further investigation.
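to make the recursive debtrank scheme defined earlier concrete, the following is a minimal sketch of the two-state dynamics in python. the array layout, function name and stopping rule are our own choices rather than the original implementation, so it should be read as an illustration of the technique, not as the model code.

import numpy as np

def debtrank(L, C, seed, psi=1.0):
    """two-state debtrank of a single seed bank.
    L[i, j] : loan of bank i to bank j; C[i] : capital of bank i;
    psi     : initial level of distress of the seed, in [0, 1]."""
    n = len(C)
    # impact of a default of i on j: j loses its loan L[j, i],
    # measured in units of j's capital and capped at one
    W = np.minimum(1.0, L.T / C[np.newaxis, :])  # W[i, j] = min(1, L[j,i]/C[j])
    v = L.sum(axis=1) / L.sum()                  # economic value of each bank
    h = np.zeros(n)                              # distress level in [0, 1]
    s = np.zeros(n, dtype=int)                   # 0 = U, 1 = D, 2 = I
    h[seed], s[seed] = psi, 1
    initially_distressed = (h * v).sum()
    while (s == 1).any():
        d = np.where(s == 1)[0]                  # currently distressed nodes
        h = np.minimum(1.0, h + h[d] @ W[d, :])  # propagate distress one step
        s[d] = 2                                 # distressed nodes go inactive
        s[(h > 0) & (s == 0)] = 1                # newly distressed nodes
    return (h * v).sum() - initially_distressed  # fraction of value affected

ranking the banks by this number at every timestep is all that the transparent mode described below requires; nodes change state monotonically from undistressed to distressed to inactive, so the loop always terminates.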
banks in the interbank network can not assess the true risks associated with lending to other banks in the network, unless they have full information on the riskiness of _all_ the other banks. these risks can be estimated by using network metrics (for example debtrank) of the interbank liability network, which is available to central banks. with a simple agent-based model we show that the systemic risk in the financial network can be drastically reduced by increasing transparency, i.e. by making the debtrank of individual nodes (banks) visible to all nodes, and by imposing a simple incentive scheme that reduces interbank borrowing from systemically risky nodes. this incentive scheme is an effective regulation mechanism that does not reduce the efficiency of the financial network, but fosters a more homogeneous distribution of risk within the system, in a self-organized critical way. we show that the reduction of systemic risk is to a large extent due to the massive reduction of cascading failures in the transparent system. an implementation of this minimal regulation scheme in real financial networks should be feasible from a technical point of view. since the beginning of banking, the possibility for a lender to assess the riskiness of a potential borrower has been essential. in a rational world, the result of this assessment determines the terms of a lender-borrower relationship (risk premium), including the possibility that no deal would be established in case the borrower appears to be too risky. when a potential borrower is a node in a lending-borrowing network, the node's riskiness (or creditworthiness) depends not only on its financial conditions, but also on those who have lending-borrowing relations with that node. the riskiness of these neighboring nodes depends on the conditions of their neighbors, and so on. in this way the concept of risk loses its local character between a borrower and a lender, and becomes _systemic_. the assessment of the riskiness of a node turns into an assessment of the entire financial network. such an exercise can only be carried out with information on the asset-liability network. this information is, up to now, not available to individual nodes in that network. in this sense financial networks, the interbank market in particular, are opaque. this opacity makes it impossible for individual banks to make rational decisions on lending terms in a financial network, which leads to a fundamental principle: opacity in financial networks rules out the possibility of rational risk assessment, and consequently, transparency, i.e. access to system-wide information, is a necessary condition for any systemic risk management. the banking network is a fundamental building block of our globalized society. it provides a substantial part of the funding and liquidity for the real economy. the real economy, the ongoing process of invention, production, distribution, use, and disposal of goods and services, is inherently risky. this risk originates in the uncertainty of payoffs from investments in business ideas, which might not be profitable, or simply fail. this type of risk can not be eliminated from an evolving economic system; however, it can be spread, shared, and diversified. one of the roles of the financial system is to distribute the risk generated by the real economy among the actors in the financial network. the financial network can be seen as a service to share the burden of economic risk.
by no means should this service by itself produce additional systemic risk endogenously. neither should the design and regulation of financial networks introduce mechanisms that leverage or inflate the intrinsic risk of the real economy. as long as systemic risk is endogenously generated within the financial network, this system is not yet properly designed and regulated. in this paper we show that unless a certain level of transparency is introduced in financial networks, systemic risk will be endogenously generated within the financial network. this systemic risk is hard to reduce with traditional regulation schemes. by introducing a minimum level of transparency in financial networks, endogenous risk can be drastically reduced without negative effects on the efficiency of the financial services for the real economy. in most developed countries interbank loans are recorded in the `central credit register' of central banks, which reflects the asset-liability network of a country. the capital structure of banks is available through standard reporting to central banks. payment systems record financial flows with a time resolution of one second. several studies have been carried out on historical data of asset-liability networks, including overnight markets and financial flows. given this data, it is possible (for central banks) to compute network metrics of the asset-liability matrix in real time, which, in combination with the capital structure of banks, makes it possible to define a systemic risk rating of banks. in the following, a systemically risky bank is a bank whose default would have a substantial impact (losses due to failed credits) on other nodes in the network. the idea of network metrics is to systematically capture the fact that by borrowing from a systemically risky bank, the borrower also becomes systemically more risky, since its default might tip the lender into default. these metrics are inspired by pagerank, where a web page that is linked to a famous page gets a share of the `fame'. a metric similar to pagerank, the so-called debtrank, has recently been used to capture systemic risk levels in financial networks. in this paper we present an agent-based model of the interbank network that allows us to estimate the extent to which systemic risk can be reduced by introducing transparency on the level of the debtrank. for computational efficiency we propose a measure based on katz centrality, which we refer to as katz rank. both are closely related to the concept of eigenvector centrality. betweenness centrality has been used to determine systemic financial risk before. to demonstrate the risk-reduction potential of feeding information on the debtrank back into the system, we use a simple toy model of the financial and real economy, which is described in the next section. interbank models of similar design have been used before in different contexts. the central idea of this paper is to operate the financial network in two modes. the first reflects the situation today, where banks do not know about the systemic impact of other banks, and where all interbank (ib) credits are traded at the same interest rate, the so-called `interbank offer rate'. we call this scenario the _normal mode_. the second mode introduces a minimum regulation scheme, where banks choose their ib trading partners based on their debtrank.
the philosophy of this scheme comes from the fact that borrowing from a systemically dangerous node can make the borrower also dangerous, since she inherits part of the risk and thereby increases overall systemic risk. note that a default of a borrower from a systemically dangerous bank affects not only the lender, but possibly also all other nodes from which the lender has borrowed. the idea is to reduce systemic risk in the ib network by not allowing borrowers to borrow from risky nodes. in this way systemically risky nodes are punished, and an incentive is established for nodes to be low in systemic riskiness. note that lending _to_ a systemically dangerous node does _not_ increase the systemic riskiness of the lender. we implement this scheme by making the debtrank of all banks visible to those banks that want to borrow. the borrower sees the debtrank of all its potential lenders, and is required (that is the regulation part) to ask the lenders for ib loans in the order of their inverse debtrank. in other words, it has to ask the least risky bank first, then the second least risky one, etc. in this way the most risky banks are cut off from (profitable) lending opportunities, until they reduce their liabilities over time, which makes them less risky. only then will they find lending possibilities again. this mechanism has the effect of distributing risk homogeneously through the network, and prevents the emergence of systemically risky nodes in a self-organized critical way: risky nodes reduce their credit risk because they are blocked from lending, while non-risky banks can become more risky by lending more. we call this mode the _transparent mode_.
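a sketch of the borrowing rule in the transparent mode might look as follows in python; the liquidity bookkeeping is deliberately simplified and the function and variable names are ours, not taken from the model code.

def request_interbank_loan(borrower, lenders, debtrank, amount):
    """transparent-mode borrowing: approach the least systemically
    risky lenders first.  `debtrank` maps bank id -> current debtrank,
    `lenders` maps bank id -> available excess liquidity.
    returns the list of (lender, amount) actually borrowed."""
    loans = []
    # regulation: potential lenders must be asked in order of
    # increasing debtrank (safest first)
    for lender in sorted(lenders, key=lambda b: debtrank[b]):
        if amount <= 0:
            break
        if lender == borrower or lenders[lender] <= 0:
            continue
        granted = min(amount, lenders[lender])
        lenders[lender] -= granted
        amount -= granted
        loans.append((lender, granted))
    return loans

the design point is that a high-debtrank bank is only reached once all safer lenders are exhausted, which is precisely what deprives risky nodes of lending opportunities over time.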
many randomly deployed networks, such as wireless sensor networks, are properly characterized by random geometric graphs (rggs). given a specified norm on the space under consideration, an rgg is usually obtained by placing a set of vertices independently at random according to some spatial probability distribution and connecting two vertices by an edge if and only if their distance is less than a critical cutoff. topological properties of rggs are comprehensively summarized in the monograph of penrose; see also the survey of iyer and manjunath in the context of wireless networks. although extensive simulations and empirical studies have been performed on dynamical rggs, analytical treatments of topological properties have so far been given only for static rggs. a recent paper is a remarkable exception, in which the authors conduct the first analytical study of the connectivity of a mobile rgg on the torus. in this paper we also present analytical results, and consider a one-dimensional exponential rgg process evolving with time, where vertices are randomly placed along a semi-infinite line. one-dimensional exponential rggs have recently been investigated by several authors; they offer a significant variant of the familiar uniform case. in our model the inter-nodal spacings are exponentially distributed and evolve from one timestep to the next according to an exponential ar(1) recursion. also, the conditional density function of the spacings in the connected network can be written in closed form. denote by $E_A$ the event indexed by a subset $A$ of the gaps; then ([3]) follows by noting that $p(\mathcal{C}_{t+1}\,|\,\mathcal{D}_t)=\sum_{A}p(\mathcal{C}_{t+1}|E_A)\cdot p(E_A)/p(\mathcal{D}_t)$. the chain is time-reversible by standard results on markov chains. _proof of proposition 1:_ since the chain is an irreducible finite markov chain, both states are positive recurrent; since they are also aperiodic, both are ergodic states. define the first hitting probabilities in the standard way (setting them to zero when the defining expression is empty). by a standard result, an irreducible ergodic markov chain has a unique stationary distribution, which in the present case is given explicitly, and ([4]) then follows easily. _proof of proposition 2:_ in the stated parameter range the right-hand side of expression ([6]) lies in the open unit interval, hence it tends to 0 with time, in view of ([2]). consequently the connectivity probability tends to 0, by the binomial theorem and ([3]). in this case one state is transient and the other is absorbing and positive recurrent. by a standard result, the corresponding stationary distribution exists and is unique; direct calculation gives it explicitly, and it is straightforward to verify the stated limit as the number of vertices tends to infinity. the theorem is thus concluded by exploiting the relation between the two chains. in this section we present a refinement of the stochastic process above. to be precise, let the state record the number of components of the graph at time $t$; this defines a homogeneous markov chain on a finite state space. denote the corresponding events by $A$ and $B$. to evaluate ([7]) we still need one more probability, but it is also at hand already, since the remaining gaps contribute the factor $\prod_{l\notin A\cup B}p(y_l^{t+1}<r\,|\,y_l^t<r)$. the second and fourth terms in the expression were obtained in the proof of theorem 1, and we now arrive at the main result. _theorem 2:_ the transition probability matrix of the refined chain is given by ([7]). since the chain is irreducible and ergodic, it has a unique stationary distribution, which may be deduced analogously to section iii.a.
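before turning to hitting times, we note that the chain just constructed is straightforward to simulate; here is a sketch in python, under the assumption that the gaps follow the standard ear(1) recursion of lawrance and lewis cited below (our reading of the model), with illustrative parameter values.

import numpy as np

rng = np.random.default_rng(0)

def ear1_step(y, p, lam):
    """one ear(1) step: y' = p*y + eps, where eps = 0 with probability p
    and eps ~ Exp(lam) with probability 1-p; the Exp(lam) marginal of
    the gaps is preserved by this recursion."""
    n = len(y)
    eps = rng.exponential(1.0 / lam, size=n) * (rng.random(n) > p)
    return p * y + eps

def connected(y, r):
    """the 1d rgg is connected iff every inter-nodal gap is below r."""
    return np.all(y < r)

n, p, lam, r, T = 50, 0.5, 1.0, 0.5, 10000
y = rng.exponential(1.0 / lam, size=n - 1)   # initial gaps, Exp(lam)
states = np.empty(T, dtype=bool)
for t in range(T):
    states[t] = connected(y, r)
    y = ear1_step(y, p, lam)
print("empirical p(connected):", states.mean())

tracking the boolean connectivity indicator over time gives an empirical version of the two-state markov chain analyzed above.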
suppose the network is connected at a given time, and denote by $\tau$ the first subsequent time at which it becomes disconnected; $\tau$ is thus the hitting time for disconnectivity. we may obtain the expectation of $\tau$ from the transition probabilities derived in section iii.a by a routine approach; in this section we will instead describe an algorithm for obtaining the distribution of $\tau$ directly. the event that $\tau$ exceeds a given number of steps is equivalent to all inter-nodal gaps remaining below the cutoff at every intermediate step, and in view of ([5]) we can interpret this as a system of inequalities. conditioned on the initial configuration, the probability that these inequalities hold simultaneously is given by the nested integrals in ([8]), with the integrand as in the proof of theorem 1. denoting the innermost integrals of ([8]) by auxiliary functions, these can be evaluated recursively; for instance, one of the terms at step $k$ reads $-\big(1-e^{-\lambda_l\left(r-u_l^{t+k-2}\right)/(1-p)}\big)\,e^{-\lambda_l r/(1-p)}\,1_{[v_l^{t+k-1}=0]}$. in general we can proceed with this recursive formula by induction and integration by parts. consequently, given the initial state, the probability that all the constraints hold simultaneously follows, and we can state our result, whose proof is straightforward at this stage. _theorem 3:_ with the hitting time $\tau$ defined as above, its distribution and expectation are given by the recursion above; in principle, by truncation, we may approximate the expectation arbitrarily closely. for fixed $t$ we consider the static case, which can be regarded as a snapshot of the dynamical process, and we typically omit the superscript $t$. let $p(\mathcal{C})$ denote the probability that the snapshot is connected. we have the following result regarding connectivity, whose easy proof is omitted. _proposition 3:_ if the cutoff grows sufficiently fast with the number of vertices, the snapshot is asymptotically connected. let $p_k$ denote the probability that the snapshot consists of $k$ components, and correspondingly the probability that each of the $k$ components has a prescribed size. _proposition 4:_ under a uniform bound on the rates, for any fixed $k$ these probabilities vanish as the number of vertices tends to infinity. _proof:_ mimicking the proofs of theorems 3 and 4 for the static exponential rgg yields the result. in figure 2 we plot the connectivity probability as a function of the number of vertices for different parameter values; observe that the convergence to the asymptotic value 0 is very fast. we may thus conclude that this static network is almost surely divided into an infinite number of finite clusters, an observation first made by a different approach. _theorem 4:_ in the static graph the degree distribution can be divided into three classes, distinguishing the two boundary vertices from the interior vertices and the complementary parameter regime. _proof:_ with independent spacings, the degree of a vertex is determined by how many of the consecutive spacings on its two sides sum to less than the cutoff; using an equivalent definition of the gamma distribution and treating the boundary and interior vertices separately concludes the proof. define the connectivity distance as the smallest cutoff making the snapshot connected, and the largest nearest-neighbor distance accordingly. we derive asymptotically tight bounds for the former and a strong law of large numbers for the latter as the number of vertices tends to infinity. _theorem 5:_ (i) asymptotically tight almost-sure bounds hold for the connectivity distance; (ii) a strong law of large numbers holds for the largest nearest-neighbor distance. _proof:_ (i) the union (boole) bound gives a tail estimate; choosing an appropriate sequence of cutoffs and summing, the borel-cantelli lemma yields the upper bound almost surely, and choosing a smaller sequence and applying the borel-cantelli lemma again yields the lower bound a.s. (ii) by the independence of the spacings we obtain the corresponding tail estimate; the borel-cantelli lemma then gives one direction almost surely, and arguing similarly as in (i) gives the other. this completes the proof.
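as a numerical cross-check of theorem 3, the distribution of $\tau$ can also be sampled directly; a minimal monte carlo sketch in python, under the same ear(1) assumption as above (the initial configuration is drawn from independent truncated exponentials, our stand-in for "stationary and connected"):

import numpy as np

rng = np.random.default_rng(1)

def hitting_time_disconnect(n, p, lam, r, t_max=100000):
    """first time at which some gap exceeds r, starting from gaps
    drawn from Exp(lam) conditioned to be below r (inverse-cdf
    sampling of the truncated exponential)."""
    u = rng.random(n - 1)
    y = -np.log(1.0 - u * (1.0 - np.exp(-lam * r))) / lam
    for t in range(1, t_max):
        eps = rng.exponential(1.0 / lam, size=n - 1) * (rng.random(n - 1) > p)
        y = p * y + eps                  # ear(1) update of the gaps
        if np.any(y >= r):
            return t                     # first disconnection
    return t_max

samples = [hitting_time_disconnect(20, 0.5, 1.0, 1.0) for _ in range(2000)]
print("mean hitting time:", np.mean(samples))

a histogram of `samples` can then be compared with the recursion of theorem 3, truncated at a matching horizon.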
this paper dealt with random geometric graphs in one dimension in which the vertex positions evolve with time. the critical assumption was that this evolution is modeled by an evolution equation for the change in the inter-nodal spacings. we studied dynamical as well as static properties, and results were given both for a fixed total number of vertices and for the number of vertices tending to infinity. it is worth pointing out that this paper is only a preliminary step in the investigation of exponential rgg process models. the idea of considering spacings may be extended to higher dimensions in the following way: deploy the first vertex according to a probability density, then place the next one with the same probability density, substituting the location of the previous vertex for the coordinate origin, and so forth. we deem that this growing scheme would be an important alternative to the typical binomial and poisson cases. other interesting directions include the examination of ``multiple spacings'' and reinforcing the 1-step memory to finite-step or even infinite memory, which could possibly result in power-law degree distributions. since we only treat the limit regime for a constant cutoff, how to deal with cutoffs approaching infinity is left for future research. m. d. penrose, _random geometric graphs_, oxford university press, 2003. s. k. iyer and d. manjunath, ``topological properties of random wireless networks,'' _sādhanā_, no. 2, pp. 117-139, april 2006. j. díaz, d. mitsche, and x. pérez-giménez, ``on the connectivity of dynamic random geometric graphs,'' _proceedings of the 19th annual acm-siam symposium on discrete algorithms_, san francisco, 2008, pp. 601-610. b. gupta, s. k. iyer, and d. manjunath, ``topological properties of the one dimensional exponential random geometric graph,'' _random structures and algorithms_, no. 2, pp. 181-204, march 2008. n. karamchandani, d. manjunath, and s. k. iyer, ``on the clustering properties of exponential random networks,'' _ieee proceedings of 6th wowmom_, 2005, pp. 177-182. n. karamchandani, d. manjunath, d. yogeshwaran, and s. k. iyer, ``evolving random geometric graph models for mobile wireless networks,'' _ieee proceedings of the 4th wiopt_, boston, 2006, pp. 1-7. s. csörgő and w.-b. wu, ``on the clustering of independent uniform random variables,'' _random structures and algorithms_, no. 4, pp. 396-420, december 2004. e. godehardt and j. jaworski, ``on the connectivity of a random interval graph,'' _random structures and algorithms_, vol. 9, no. 1, pp. 137-161, august 1996. y. shang, ``connectivity in a random interval graph with access points,'' _information processing letters_, no. 9, pp. 446-449, april 2009. v. kurlin, l. mihaylova, and s. maskell, ``how many randomly distributed wireless sensors are enough to make a 1-dimensional network connected with a given probability?'' arxiv:0710.1001v1 [cs.it], 2007. a. j. lawrance and p. a. w. lewis, ``a new autoregressive time series model in exponential variables (near(1)),'' _advances in applied probability_, no. 4, pp. 826-845, december 1981. s. m. ross, _introduction to probability models_, academic press, 2006. e. seneta, _non-negative matrices and markov chains_, springer-verlag, 1981. y. c. cheng and t. robertazzi, ``critical connectivity phenomena in multihop radio models,'' _ieee transactions on communications_, vol. 37, no. 7, pp. 770-777, 1989. o. dousse, p. thiran, and m. hasler, ``connectivity in ad hoc and hybrid networks,'' _proceedings of ieee infocom_, new york, 2002, pp. 1079-1088. d. miorandi and e.
altman, ``connectivity in one-dimensional ad hoc networks: a queueing theoretical approach,'' _wireless networks_, no. 5, pp. 573-587, october 2006. s. muthukrishnan and g. pandurangan, ``the bin-covering technique for thresholding random geometric graph properties,'' _proceedings of the 16th annual acm-siam symposium on discrete algorithms_, vancouver, 2005, pp. 989-998. f. chung, s. handjani, and d. jungreis, ``generalizations of polya's urn problem,'' _annals of combinatorics_, vol. 7, no. 2, pp. 141-153, june 2003. k. k. jose and r. n. pillai, ``geometric infinite divisibility and its applications in autoregressive time series modeling,'' in: v. thankaraj (ed.), _stochastic process and its applications_, wiley eastern, new delhi, 1995. v. seetha lekshmi and k. k. jose, ``autoregressive processes with pakes and geometric pakes generalized linnik marginals,'' _statistics and probability letters_, no. 3, pp. 318-326, february 2006.
in this paper we consider a one-dimensional random geometric graph process with the inter-nodal gaps evolving according to an exponential ar(1) process, which may serve as a model of a mobile wireless network. the transition probability matrices and stationary distributions are derived for the markov chains describing network connectivity and the number of components. we describe an algorithm for the distribution of the hitting time to disconnectivity. in addition, we study topological properties of static snapshots: we obtain the degree distributions as well as asymptotically tight bounds and a strong law of large numbers for the connectivity threshold distance and the largest nearest-neighbor distance, among other results. both closed-form results and limit theorems are provided. random geometric graph; autoregressive process; component; connectivity; mobile network.
for a chaotic system with a normalized invariant density, pesin's theorem states the identity between the sum of positive lyapunov exponents and its kolmogorov-sinai (ks) entropy (see for conditions). in certain weakly chaotic systems the dynamics still remains quasi-random, though the entropy and lyapunov exponents are zero. standard chaotic concepts based on exponential separation of nearby trajectories are replaced in these systems with new methods which have drawn considerable attention. previously, a generalization of the pesin identity was suggested for systems whose invariant densities are not absolutely continuous along expanding directions, and for systems such as the logistic map at the edge of chaos, based on tsallis entropy. motivated by a question posed by r. klages, we found a pesin-type identity in intermittent weakly chaotic maps. our work shows that the krengel entropy must be used instead of the ks entropy, which is zero for such systems, and that infinite ergodic theory is the mathematical basis for this identity. for weakly chaotic maps, in particular maps with marginally unstable fixed points (see details below), the infinite invariant measure is an essential tool. even though this density is not normalizable, it can still be used to describe statistical properties of the dynamical system, thus replacing the usual normalizable invariant density. however, apart from a few exceptions, the invariant density is unknown. here we provide a simple numerical approach for its estimation. further, the standard scenario of statistical physics is that for chaotic motion the density of an ergodic system will tend to a normalizable invariant measure in the long time limit, for example the boltzmann measure for canonical systems. for applicability of the statistical approach, the invariant density must be reached starting from wide classes of initial conditions. do we have similar behavior for systems with an infinite invariant measure? namely, will the density approach an infinite measure starting from different types of initial conditions? here we investigate this issue numerically, and show that different initial states yield the same estimate for the infinite invariant density. this means that we have a simple method to find the infinite invariant density, at least on a computer. the numerical estimation of the infinite invariant density is important for many applications, in particular for the estimation of the krengel entropy and hence, according to our generalized pesin identity, the averaged separation of trajectories. finally, we demonstrate that our numerical infinite invariant density is in excellent agreement with the exact analytical infinite invariant density found by thaler for a specific map. using this exact analytical infinite invariant density we corroborate the validity of our generalized pesin identity without fitting. this leaves no room for speculations and doubts about our results. recently saa and venegeroles (sv) proposed a pesin identity for the same class of dynamics investigated in our work (e.g. the
pomeau-manneville map) for individual trajectories. we note that the average of this identity over initial conditions is exactly our previously obtained result. we remove misleading statements; for example, sv claimed to have ``corrected'' our results. we clarify the notation used, and show that the core of the misunderstanding is a trivial constant multiplying the infinite invariant density. this is a simple matter of definition of the infinite density, an issue which does not arise for normal systems with a finite invariant density, since there the normalization condition uniquely determines the multiplicative constant in front of the equilibrium density. as in our earlier work, we study one-dimensional maps with one or more unstable fixed points, in the parameter regime where the system has an infinite invariant density (soon to be defined). the discrete-time dynamics is governed by $x_{t+1}=M(x_t)$. a prominent example is the pomeau-manneville (pm) map, eq. ([maneq]), $x_{t+1}=x_t+a\,x_t^{z}\;\mathrm{mod}\;1$ with $a>0$, $z>1$. a second example, studied by thaler, is the map ([man2]); notice that this map is asymmetric with respect to its fixed points. the first map has one unstable fixed point at $x=0$, while the second has two such points, at $x=0$ and $x=1$. a third map introduced by thaler is eq. ([thalereq]), a map involving an exponent $-1/(z-2)$ and taken mod 1 for $z\ge1$. this map is similar to the pm map in the sense that it has one unstable fixed point located at the origin, near which it has the same power-law behavior. this map is important since thaler has obtained its exact analytical infinite invariant density. for all models we are interested in the regime where the usual lyapunov exponent and ks entropy are zero. when they are positive, the invariant measure is normalizable and the usual pesin identity holds; in that case the distribution of finite-time lyapunov exponents provides additional information about the behavior of the maps. for a single trajectory the generalized lyapunov exponent is defined as $\lambda=\lim_{t\to\infty}t^{-\alpha}\sum_{i=0}^{t-1}\ln|M'(x_i)|$ (eq. ([eq1])), where $\alpha=1/(z-1)$ for $z>2$ while $\alpha=1$ for $z<2$. the generalized lyapunov exponent ([eq1]) is a random variable even in the long time limit. therefore, averaging over initial conditions, we focused among other things on the averaged generalized lyapunov exponent $\langle\lambda\rangle$ for $z>2$. we defined the infinite invariant density according to $\bar\rho(x)=\lim_{t\to\infty}t^{1-\alpha}\rho(x,t)$, eq. ([eq3]) (see also the appendix of our earlier work for more discussion of the mathematical aspects of this definition), where $\rho(x,t)$ is the density of particles normalized to unity. we later check numerically (see figure [fig1]) that $\bar\rho$ is unique, in the sense that it is independent of the choice of initial conditions. since the map conserves normalization, $\rho(x,t)$ is normalized for any $t$; still, the integral over the limiting value $\bar\rho$ diverges. the conditions and rigorous proof that $\bar\rho$ is indeed an invariant density can be found in thaler's work for a class of maps with a single unstable fixed point at the origin. we adopt this result and use it to find the infinite invariant density numerically. using a simple continuous-time stochastic model proposed previously, we analytically find the approximation ([g_t]) for the normalized density of the pm map ([maneq]) with one unstable fixed point, valid separately for small and large $x$ at long times; the crossover between the two regimes goes to zero as time increases. from definition ([eq3]) we then obtain the approximate infinite invariant density ([rho1]) for the pm map, with the constant given by ([eqsvb]). according to ([rho1]), the slow escape of trajectories from the vicinity of the unstable fixed point accumulates density there, and the interesting feature is that this divergence is so strong that $\bar\rho$ is non-normalizable.
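a minimal numerical sketch of these definitions in python follows; the pm form $x_{t+1}=x_t+a\,x_t^z \bmod 1$ is as above, while the parameter values, ensemble size and simulation times are illustrative choices of ours.

import numpy as np

def pm_step(x, a=1.0, z=3.0):
    """pomeau-manneville map: x -> x + a*x**z mod 1."""
    return (x + a * x**z) % 1.0

def gen_lyapunov(x0, t, a=1.0, z=3.0):
    """finite-time generalized lyapunov exponent of ([eq1]):
    lambda = t**(-alpha) * sum_i ln|M'(x_i)|, alpha = 1/(z-1) for z > 2."""
    alpha = 1.0 / (z - 1.0) if z > 2 else 1.0
    x, s = x0, 0.0
    for _ in range(t):
        s += np.log(abs(1.0 + a * z * x**(z - 1.0)))  # ln|M'(x)|
        x = pm_step(x, a, z)
    return s / t**alpha

# average over an ensemble of uniform initial conditions
rng = np.random.default_rng(2)
vals = [gen_lyapunov(x0, 10**5) for x0 in rng.random(200)]
print("<lambda> ~", np.mean(vals))

the single-trajectory values fluctuate even at long times, as stated above, which is why the ensemble average is the natural object for the identity.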
importantly, thaler's theorem shows that ([eqsvb]) is valid for a large class of maps with a single unstable fixed point at the origin which behave like the pm map near it. figure [fig1] demonstrates that at long times equations ([rho1]), ([eqsvb]) describe the infinite invariant density for the pm map. for finite time and small $x$ we see deviations, in agreement with ([g_t]). since our theory works for small $x$, not surprisingly ([rho1]) does not work perfectly far from the fixed point, though the deviations seem small to the naked eye. for the map ([thalereq]) thaler has found an exact analytical expression for the infinite invariant density, eq. ([thalerinf]). hence, unlike the pm map, for which we do not have an exact expression for the infinite density, for the map ([thalereq]) we can compare simulations with theory over the whole interval. since, as we mentioned, this map has the same behavior as the pm map near the origin, the constant is again given by ([eqsvb]); note that the multiplicative constant is tied to our working definition ([eq3]) (see further discussion below). in figure [fig2new1] we see that the scaled density slowly converges towards the theoretical infinite density, apart from the mentioned deviations close to the fixed point; as we increase the measurement time, the domain where deviations from the asymptotic theory are observed shrinks. in figure [fig2new3] we plot the scaled density divided by the analytical expression, showing that the ratio converges to the constant predicted in ([thalerinf]). this implies that our method of estimating the infinite density works well, and hence we are confident it can also be used for maps where no exact expression for the invariant density is available. we will use thaler's analytical expression ([thalerinf]) to corroborate the generalization of the pesin identity below; but first we briefly review the generalized pesin identity. [figure [fig1]: scaled density, eq. ([eq3]), for the pm map at increasing times and for three different initial conditions (solid line, circles, squares), illustrating that the infinite density is not sensitive to the choice of initial conditions; in the long-time limit the curves approach the infinite invariant density (dashed line), in good agreement with ([rho1]) and ([eqsvb]) without any fitting. as follows from ([g_t]), equation ([rho1]) works above the crossover, while below it the finite-time density is correctly described by the second line of ([g_t]) (horizontal dotted lines, no fitting); the crossover moves to zero as time grows, so the system approaches a non-normalizable state.] pesin's theorem, valid when the lyapunov exponent is positive, asserts the equality $\lambda=h_{ks}$, where the kolmogorov-sinai entropy is given in terms of the normalizable invariant density $\rho(x)$ by $h_{ks}=\int_0^1\rho(x)\ln|M'(x)|\,dx$. pesin's identity provides a deep relation between chaotic and statistical quantities of the system. for $z>2$ a different behavior is found. from ([eq2]) and ([eq3]) we suggested the generalization of pesin's identity in the form ([eq5]), in which the krengel entropy $h_{kr}=\int_0^1\bar\rho(x)\ln|M'(x)|\,dx$ appears (eq. ([eq6])). note that r. zweimüller has shown the relation between the krengel entropy and complexity, so already at this point our work indirectly relates the latter to the separation of trajectories. below we calculate the complexity using the well-known compression algorithm of lempel and ziv, and relate it to the krengel entropy and $\langle\lambda\rangle$.
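the word-counting step of such a complexity calculation is easy to sketch in python; the binary partition of the interval below is our own illustrative choice, and the original calculation may use a different coarse-graining and lempel-ziv variant.

def lz_word_count(symbols):
    """number of distinct words in a lempel-ziv (lz78-style) parsing of
    a symbolic sequence; a common normalization of the information
    content is then m*log(m) for m words."""
    words, current = set(), ""
    for s in symbols:
        current += s
        if current not in words:
            words.add(current)
            current = ""
    return len(words)

def symbolic_trajectory(x0, t, a=1.0, z=3.0, threshold=0.5):
    """binary coarse-graining of a pm trajectory; the partition point
    (here 0.5) is an assumption for illustration."""
    seq, x = [], x0
    for _ in range(t):
        seq.append("1" if x > threshold else "0")
        x = (x + a * x**z) % 1.0
    return seq

m = lz_word_count(symbolic_trajectory(0.3, 10**5))
print("lz words:", m)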
lempel-ziv complexity for weakly chaotic maps was studied previously. [figure [fig2new1]: scaled density, eq. ([eq3]), for the thaler map ([thalereq]) at increasing simulation times, starting from a uniform initial density; in the long-time limit the curves approach the analytical infinite invariant density, eqs. ([thalerinf]) and ([eqsvb]) (dashed line, no fitting). bottom: comparison of the scaled density for the thaler map, calculated at a long time, with the analytical infinite invariant density (dashed line, ([thalerinf])); notice the excellent agreement over a large range of $x$.] [figure [fig2new3]: scaled density divided by the analytical expression; the apparent convergence to a constant (dashed line given by equation ([eqsvb]) without fitting) validates our numerical scheme. noisy results at large times and large $x$ are due to statistical errors.] the generalized pesin identity ([eq5]) follows from the definitions of the generalized lyapunov exponent ([eq2]) and of the infinite invariant density ([eq3]) (which points out that, with respect to the generalization of the pesin identity, these two definitions are tied to each other). using ([eq2]) we have $\langle\lambda\rangle\simeq t^{-\alpha}\sum_{i=0}^{t-1}\langle\ln|M'(x_i)|\rangle$, where the averaging is over initial conditions distributed according to a smooth initial density. since we are interested in the long time limit, we replace the summation by an integral and average over the density function, $\langle\lambda\rangle\simeq t^{-\alpha}\int_0^t dt'\int_0^1 dx\,\rho(x,t')\ln|M'(x)|$, eq. ([l_mean_1]), which is valid in the long time limit. using ([eq3]) we write the density as $\rho(x,t')\simeq(t')^{\alpha-1}\bar\rho(x)$. substituting this expression into ([l_mean_1]) and performing the integration over time, we arrive at the generalization of the pesin identity ([eq5]), ([eq6]), since the remaining spatial integral is krengel's entropy. notice that for $z>2$ the divergence of $\bar\rho(x)$ at the fixed point is canceled by the vanishing of $\ln|M'(x)|\sim a z x^{z-1}$ there, so that the entropy integral is finite. the prefactor in ([eq5]) stems from the integration over time, where a constant lower cutoff regularizes the time integral (since we consider only long times). this prefactor is a direct consequence of our definitions of the generalized lyapunov exponent and of the infinite invariant density, and of course it can be absorbed into either definition for aesthetics. we however stay with ([eq5]), ([eq6]), since the usual lyapunov exponent is zero and the prefactor serves as a reminder that we are dealing with a weakly chaotic system. to clarify, and to avoid further confusion, the same constant will appear in other averages; an important example is the complexity considered by zweimüller.
using the ratio ergodic theorem for the complexity, we get the complexity in our notation, where the prefactor stems again from the summation (integration) over time, similar to what was done in ([l_mean_1]). we emphasize, as usual, that this relation is valid for the definition ([eq3]). now we use the exact analytical infinite invariant density (for the thaler map) to corroborate the generalization of the pesin identity. we calculate the generalized lyapunov exponent numerically according to ([eq2]), starting from a uniform ensemble. using mathematica we calculate the integral of the exact analytical infinite invariant density ([thalerinf]), with the constant given by ([eqsvb]). figure [fig2new4] fully corroborates our assertions with good accuracy. we note that the convergence of the numerics depends strongly on the parameters: for large $z$ (corresponding to small $\alpha$) and near the borderline case the convergence of the numerical results slows down, as shown in figure [fig2new5]; for intermediate parameters, however, our generalized pesin identity is fully supported by the numerics. we note that here no stochastic approximation was employed whatsoever. [figure [fig2new4]: test of the generalized pesin identity ([eq5]), ([eq6]) for several parameters of the thaler map ([thalereq]) (from bottom to top in the figure); the generalized lyapunov exponent is calculated numerically according to ([eq2]) starting from a uniform ensemble, and the dashed limiting lines correspond to the integral of the exact analytical infinite invariant density ([thalerinf]), calculated with mathematica, with the constant given by ([eqsvb]).] clearly, our work provides the sought-after elegant connection between entropy and separation of trajectories, at least for systems with an infinite invariant density. operationally, however, what is our work about? it states that, starting with a reasonable initial condition, e.g. particles uniformly distributed (but not a delta-function initial condition), the density of particles evolves according to the transformation, and in the long time limit one can deduce the infinite invariant density from it using ([eq3]). this can be done on a computer rather easily, or semi-analytically as we showed in our work (see also equations ([rho1]) and ([eqsvb])). note that in this procedure we need not gather full information on the paths, only on their positions after a long time. on the other hand, one can follow the trajectories of the particles and evaluate the sub-exponential separation using ([eq2]). thus, two numerical protocols are used: one finds the density of particles, the other follows trajectories and measures the generalized lyapunov exponent. with the infinite invariant density obtained in the first protocol, we can evaluate the krengel entropy by performing an integral. this entropy is then used to evaluate the average separation. of course our theory is testable, in the sense that one can evaluate the infinite invariant density and with it predict the average separation, i.e. follow two different numerical protocols and check the validity of our results (see below). we also found the fluctuations of the separation, our work being consistent with the aaronson-darling-kac theorem. importantly, we discussed previously the connection between the krengel entropy and lempel-ziv complexity, thus showing the deep relation between sub-exponential separation of trajectories and algorithmic complexity for weakly chaotic systems (see also the discussion below).
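the first protocol plus the quadrature step are compact enough to sketch in python (pm form of the map as above; all parameter values and the histogram resolution are illustrative, and the overall prefactor of the identity is left symbolic):

import numpy as np

def pm_map(x, a=1.0, z=3.0):
    return (x + a * x**z) % 1.0

def estimate_h_krengel(t=10**4, n_traj=10**5, a=1.0, z=3.0, bins=400):
    """protocol 1: estimate rho_bar via ([eq3]) and integrate
    h_kr = int rho_bar(x) * ln|M'(x)| dx.  the integrand is finite at
    x = 0, since the divergence of rho_bar is canceled by ln|M'|."""
    alpha = 1.0 / (z - 1.0)
    rng = np.random.default_rng(5)
    x = rng.random(n_traj)                    # smooth initial density
    for _ in range(t):
        x = pm_map(x, a, z)
    hist, edges = np.histogram(x, bins=bins, range=(0, 1), density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    rho_bar = t**(1.0 - alpha) * hist         # scaled density, eq. ([eq3])
    log_deriv = np.log(np.abs(1.0 + a * z * centers**(z - 1.0)))
    return np.sum(rho_bar * log_deriv) * (centers[1] - centers[0])

h_kr = estimate_h_krengel()
# protocol 2 measures <lambda> directly (see the earlier sketch);
# the identity predicts <lambda> = c_alpha * h_kr, with c_alpha fixed
# by the conventions ([eq2]), ([eq3])
print("h_kr =", h_kr)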
[figure [fig2new5]: slow convergence near the borderline case; the generalized lyapunov exponent is calculated numerically according to ([eq2]) starting from a uniform ensemble of trajectories, for three increasing simulation times (squares, circles, triangles). the dashed line corresponds to the integral of the exact analytical infinite invariant density ([thalerinf]), calculated with mathematica, with the constant given by ([eqsvb]).] one may claim that instead of using the definition ([eq3]) for the infinite invariant density one could have used another definition; one may suggest $\bar\rho_b(x)=b\lim_{t\to\infty}t^{1-\alpha}\rho(x,t)$ with some constant $b$. in our work we choose $b=1$, which is consistent with the usual choice of the normalized invariant density (the case $\alpha=1$). the essence of the criticism of our work by sv is that we could have chosen a different constant; however, that would amount only to a trivial multiplication of our final results by a constant. our results are valid as long as one pays attention to our definitions; in particular, one should not ignore equation ([eq3]). by not informing their readers that we use equation ([eq3]), sv have distorted the context of our work. clearly, as we discussed above, our generalized pesin identity depends also on the definition of the generalized lyapunov exponent ([eq2]); similarly to the infinite invariant density, it can be defined with some constant, which we choose to be unity. however, as follows from the derivation after ([eq5]), ([eq6]), choosing another constant in the definition of the generalized lyapunov exponent would result in the multiplication of our generalized pesin identity by a trivial constant. the definition of these constants becomes important when considering averages: the generalized lyapunov exponent converges to a constant whose value can not depend on an arbitrary multiplicative freedom, so the infinite density entering the identity must be determined precisely. indeed, exactly for this reason the working definition ([eq3]) should not be ignored. one could of course absorb the constant in ([l2]) into the definition of the entropy, but as mentioned this is a matter of choice which does not influence the predictive power of the theory. in their equation (16), sv claim to correct our result ([eq5]); this boils down to another choice of the constant, and hence in our opinion is not a correction at all. in their work sv use an infinite invariant measure for the pm map with an undefined constant (see their equation (2), and compare, in our notation, to equations ([rho1]), ([eqsvb])). since they do not fix the constant, their measure is not the same as ours (there is freedom in the choice of the multiplicative constant). their pesin-type identity ([eqsv]) involves the krengel entropy with respect to their invariant measure. notice that by fixing the constant as in our equation ([eqsvb]), the sv relation ([eqsv]) boils down to our generalized pesin identity ([eq5]). also notice that the sv presentation is specific to the pm map, which has one unstable fixed point. of course, more generally we can have two or more unstable fixed points. the invariant density will then reveal singularities next to the unstable fixed points, all of the same order (i.e. the same exponent), located at the fixed points.
for example, there are two such fixed points for the map ([man2]), and the infinite invariant density is then a sum of two singular contributions. moreover, the map ([man2]) is asymmetric, and therefore the infinite invariant density is also asymmetric, as shown in figure [fig1a]. the identity suggested by sv does not describe this situation, since it is restricted to a single unstable fixed point, and hence is not general. in contrast, our pesin-type identity is general and valid for maps with any number of unstable fixed points. so we suggest sticking with our original identity ([eq5]), which is general, and at the same time not distorting its meaning by ignoring equation ([eq3]). [figure [fig1a]: scaled density, eq. ([eq3]), for the map ([man2]) at two times (dashed and solid lines) with a uniform initial density; similar results are obtained for other initial conditions (circles, squares).] in the second part of their work sv try to fix their claims. they write: _a closer inspection of our work (see in particular their eq. (10)) shows that they, when dealing with the continuous time stochastic linear model proposed there, tacitly choose_ a particular relation between the constants. this statement is misleading. in our work we explicitly give examples of maps with two unstable fixed points, where such a relation is obviously wrong. in these maps one has two singularities in the infinite density, so the infinite density is a sum of two singular terms with two constants which, as we discussed above, are non-identical for asymmetric maps. so choosing one specific constant does not make sense at all. we emphasize that our results are general: they are not sensitive to such a choice, nor are they specific to one particular map. rather, we use equation ([eq3]), which makes our results general, while equation ([eqsv]) is not. [figure [fig2]: the generalized lyapunov exponent for the pomeau-manneville map, calculated with three different initial conditions (solid, dashed and dotted lines); the dashed-dotted line is the prediction with the entropy calculated by numerical integration of ([eq6]).] [figure [fig4]: the same for the map ([man2]), calculated with three different initial conditions; the dashed-dotted line is the prediction with the entropy found using ([eq6]).] finally, in our work we corroborated the gaspard and wang result that the sum of the logarithms of the slopes is proportional to the number of injections into the vicinity of the unstable fixed points. we further showed that this sum, and hence the generalized lyapunov exponent, are random variables with a mittag-leffler distribution, in accordance with the aaronson-darling-kac theorem. on this issue sv wrote: _it is interesting to notice that this quantity is also considered as a mittag-leffler random variable by using renewal theory in a different manner, but its relation to the entropy is not stated._ the second part of this sentence is problematic. in the abstract of our earlier work we wrote: _we show that it is equal to the krengel entropy and to the complexity calculated by the lempel-ziv compression algorithm_. in figure [figlz] we repeat our numerical calculation of the average information content obtained by the lempel-ziv compression algorithm, for different parameter values and for longer times. we then calculate the lempel-ziv complexity from the word count. the results of figure [figlz] are fully consistent with our earlier ones. in this proposition we suggest an estimator of the complexity; in equation (20) an exact relation between the complexity and the entropy is given.
since the complexity is not directly computable, replacing it in equation (20) with the estimator yields equation (28), which is not rigorous, and so far is supported by numerical evidence only. the proposition is motivated by the fact that in the non-zero-entropy limit the lempel-ziv complexity is a good estimator. clearly more work in this direction is needed. [figure [figlz]: average information content for the map ([man2]), obtained from the number of words in a trajectory, calculated by the lempel-ziv algorithm for three parameter values (from bottom to top); dashed lines correspond to the prediction with the entropy found using ([eq6]); each curve is averaged over initial conditions.] to go beyond general relations, and for the sake of specific predictions, we need estimates of the infinite invariant density. that goal is in principle rather simple: we start the evolution with initial conditions whose density does not contain a delta function, e.g. uniform initial conditions, and after a long measurement time estimate $\bar\rho$ using ([eq3]). we now demonstrate numerically that the infinite invariant density defined by ([eq3]) does not depend on the choice of initial conditions. namely, we assume a normalizable initial density not containing singularities (like delta functions). three initial conditions are considered, supported on different subintervals. for these choices we evolve the density and then, using eq. ([eq3]), estimate $\bar\rho$. supported by simulations, we see that at long times we get the same result for $\bar\rho$, independent of the initial state. we have checked this for the two maps ([maneq]) and ([man2]), which have one and two unstable fixed points respectively; the results are shown in figures [fig1], [fig1a]. we see that $\bar\rho$ is independent of the initial state. this implies that one can obtain an estimate of the infinite invariant density rather easily, though more rigorous work is needed to give estimates of the convergence rate. our work also shows that the generalized lyapunov exponent can be estimated starting from different initial conditions. of course, at short times the estimates vary from one initial condition to another; however, as shown in figures [fig2] and [fig4], different initial states give the same estimate for $\langle\lambda\rangle$, in perfect agreement with the generalized pesin identity. in the mathematical literature the infinite invariant density is defined up to an arbitrary multiplicative constant. we followed william of occam's economical philosophy and fixed the constant to unity. more practically, to test the predictions of a theory we need estimates of the infinite invariant density, which we obtain from theory or numerics. it is therefore useful to define the infinite density precisely, as we did in equation ([eq3]), and not leave it defined up to an arbitrary multiplicative constant. this operational definition is useful since, as we demonstrated, it can be used to estimate the infinite density. with the infinite density we may calculate averaged observables, and here we focused on $\langle\lambda\rangle$, which is a measure of sub-exponential separation. here we used the exact expression for the infinite density to obtain a prediction which perfectly matches simulations.
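the independence of ([eq3]) from the initial state is easy to check along the lines just described; a sketch in python (pm form of the map; the three initial supports are illustrative stand-ins for those used in the figures):

import numpy as np

def scaled_density(x0_sample, t, a=1.0, z=3.0, bins=100):
    """evolve an ensemble and return t**(1-alpha) * rho(x, t), eq. ([eq3])."""
    alpha = 1.0 / (z - 1.0)
    x = x0_sample.copy()
    for _ in range(t):
        x = (x + a * x**z) % 1.0
    hist, _ = np.histogram(x, bins=bins, range=(0, 1), density=True)
    return t**(1.0 - alpha) * hist

rng = np.random.default_rng(6)
n, t = 10**5, 10**4
initial_states = {
    "uniform on (0,1)":   rng.random(n),
    "uniform on (0,1/2)": 0.5 * rng.random(n),
    "uniform on (1/2,1)": 0.5 + 0.5 * rng.random(n),
}
estimates = {k: scaled_density(v, t) for k, v in initial_states.items()}
# at long times the three estimates should coincide, illustrating that
# the scaled density is insensitive to the initial density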
unfortunately, exact expressions for the infinite invariant density are scarce, and hence we believe our numerical approach is useful. we showed that the criticism recently posed against our generalization of pesin's identity for weakly chaotic systems is unjustified. we propose to stay with our identity because of its testability and broad validity, beyond the single unstable fixed point case. this work is supported by the israel science foundation.
weakly chaotic maps with unstable fixed points are investigated in the regime where the invariant density is non-normalizable. we propose that the infinite invariant density of these maps can be estimated using $\bar\rho(x)=\lim_{t\to\infty}t^{1-\alpha}\rho(x,t)$, in agreement with earlier work of thaler; here $\rho(x,t)$ is the density of particles for smooth initial conditions. this definition uniquely determines the infinite density and is a valuable tool for numerical estimation. we use this density to estimate the sub-exponential separation of nearby trajectories. for a particular map introduced by thaler we use an analytical expression for the infinite invariant density to calculate the averaged separation exactly, which perfectly matches simulations without fitting. a misunderstanding which recently appeared in the literature is removed. _keywords_: dynamical processes (theory)
recently there has been interest in relating observed characteristics of global energy transport in space plasmas to ``sandpile'' models which dissipate energy by means of avalanches (consolini, 1997; chapman _et al._, 1999). when such models exhibit scale-free, inverse power law statistics in the probability distributions of energy released by avalanches, and of avalanche length and duration, they are candidates for description in terms of self-organised criticality (soc) (bak _et al._, 1987, 1988; lu, 1995); see also jensen (1998) and references therein. the power spectra may also have an inverse power law (``1/f'') signature, and soc was introduced to explain the ubiquity of such spectra and of fractality in nature. chang's suggestion (chang, 1992, 1998a, 1998b) that the magnetosphere is in an soc state has motivated the application of avalanche models to the solar wind-magnetosphere-ionosphere system. observational motivation includes the sporadic nature of energy release events within the magnetotail (``bursty bulk flow'' events (angelopoulos _et al._, 1996)), power law in-situ magnetic field power spectra (hoshino _et al._, 1994), and power law features of magnetospheric index data, notably ae, which is an indicator of energy dissipated by the magnetosphere into the ionosphere. tsurutani _et al._ (1990) described a broken power law ae spectrum; this is indicative of soc but not conclusive, as power law power spectra are not unique to soc systems (jensen, 1998). consolini (1999) has recently used ae data taken over a ten year period to construct the distribution of a burst measure, extending the result obtained for one year in (consolini, 1997). this work strongly suggests that inverse power law burst statistics are a robust feature of the ae data, albeit with an exponential tail and some evidence of an additional lognormal component. it is currently less clear that the power law is solely of intrinsic magnetospheric origin; it may in fact be related to the behaviour of the solar wind energy input (freeman _et al._, 1999). since the global reconfigurations of the magnetotail (substorm events) appear to have occurrence statistics with a well defined mean, chapman _et al._ (1998) demonstrated a simple avalanche model (another example is (pinho and andrade, 1998)) that in principle has relevance for the magnetosphere, in that it yields systemwide avalanches whose statistics have a well defined mean (intrinsic scale), whereas the internal avalanche statistics are scale free. an additional consideration for space plasmas is the means by which the conjecture of scale-free statistics can be tested. we wish to test the hypothesis that the probability distributions of energy dissipated, length scales and duration of avalanches are of power law form, as seen in the original observations of soc (jensen, 1998) in a slowly driven sandpile. since to test for power law statistics we need to maximise the range of event sizes, the required statistical experimental evidence demands long runs of data.
in the magnetospheric system this implies that both the instantaneous value and the smoothed local mean of the loading rate (the solar wind) will show strong variation. in this paper we illustrate some robust features of the avalanche statistics of the simple avalanche model (chapman _et al._, 1998, 1999) which are needed for application of such a model to space plasma data. we investigate to what extent the model gives inverse power law avalanche statistics under slow loading, and we show how these statistics are modified under strong and/or variable loading. we shall also see that two distinct regimes for energy transport, both with power law avalanche statistics but with different slopes, emerge from the sandpile algorithm, depending on the size of the system. in addition we show how these may be characterized in terms of the interval distribution of events on different spatial scales. sandpile algorithms generally include an array of nodes, at each of which there is a variable amount (height) of sand; a critical gradient (difference in height between neighbouring nodes) which, if exceeded by the actual gradient, triggers local redistribution of sand; and algorithms for redistribution and fuelling. the main measured output is the statistics of the emergent avalanche distribution. an early classification of such models was given in 1989. the relationship of the models to experimental sandpiles, and to the ideal concept of soc, remain topics of active research (see for example jensen (1998), also dendy and helander (1997)). the sandpile model used here is described in more detail in (chapman _et al._, 1999; helander _et al._, 1999). we have a one-dimensional grid of equally spaced cells one unit apart, each with sand at height $h_j$ and local gradient $z_j$. there is a repose gradient below which the sandpile is always stable, and with respect to which heights and gradients are measured. each cell is assigned a critical gradient $z_j^c$; if the local gradient exceeds this, the sand is redistributed to neighbouring cells, and iteration produces an avalanche. the critical gradients at the nodes are selected at random, with uniform probability, from a top-hat distribution. with the angle of repose normalized to zero, the time evolution is by systematic growth as sand is added, interspersed with systemwide avalanches, where the energy falls back to zero, and internal avalanches, where the energy is reduced to some nonzero value. the statistics of the energy released in internal and systemwide avalanches, for two longer runs of this sandpile under more realistic conditions of fluctuating input, are shown in figure 2, with the normalized probability distribution for both internal and systemwide avalanches plotted as a single population. as in all sandpile runs here, the populations comprise a large number of internal and systemwide avalanches. the two runs in figure 2 are for slow (diamonds) and fast (circles) mean loading rates, giving an indication of the expected behaviour of a system with strong variation in the driver, such as the solar wind driven magnetosphere (see also watkins _et al._, 1999).
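for orientation before discussing figure 2 in detail, the model class is compact enough to sketch in python. note that the redistribution rule below (flattening an unstable pair to its mean) is a simplified stand-in for the rule of chapman et al., avalanches are resolved instantaneously relative to the fuelling, and all parameter values are illustrative, so the listing illustrates the algorithm class rather than reproducing the published runs.

import numpy as np

rng = np.random.default_rng(4)

def run_sandpile(n=512, g=0.01, steps=200000):
    """minimal 1d sandpile in the spirit of the model described above.
    sand is added at the first cell at rate g; a cell whose gradient
    exceeds its critical gradient relaxes by flattening with its
    downhill neighbour (our simplified choice).  critical gradients
    are redrawn from a top-hat distribution (mean 1) after each
    relaxation, and sand reaching the last cell leaves the system.
    returns (timestep, size, extent) for every avalanche."""
    h = np.zeros(n)
    zc = rng.uniform(0.5, 1.5, size=n)
    record = []
    for step in range(steps):
        h[0] += g
        size, extent = 0, 0
        while True:
            grad = h[:-1] - h[1:]
            idx = np.where(grad > zc[:-1])[0]
            if len(idx) == 0:
                break
            for j in idx:                    # relax unstable cells
                h[j] = h[j + 1] = 0.5 * (h[j] + h[j + 1])
                zc[j] = rng.uniform(0.5, 1.5)
                size += 1
                extent = max(extent, j + 1)
            h[-1] = 0.0                      # open boundary
        if size:
            record.append((step, size, extent))
    return record

histogramming the sizes and extents in `record` gives the analogues of the avalanche distributions discussed below.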
for both values of the inflow rate the internal avalanches show distinct inverse power law regions with a turndown at small energies, whereas the systemwide events, which have a characteristic mean (chapman _et al._, 1998, 1999), appear as a bump at the high energy end. the distinct behaviour of the systemwide avalanches, independent of the inflow rate, shown here is a necessary condition for applicability to the magnetosphere (chapman _et al._, 1998). the internal avalanches show different behaviour under slow and fast loading. essentially, the smaller events are destroyed as we increase the average loading rate, making larger events more probable; hence the normalized probability of larger events shows an increase on the plot. importantly, their power law slope is preserved and is thus a robust feature that should be apparent in observations under variable loading. we now discuss the internal avalanches in more detail. two lines are drawn on figure 2 (and all subsequent figures); their slopes are an approximate fit through the points. under slow loading the sandpile exhibits two distinct regimes. in the case of a constant critical gradient, the sandpile evolution with time has been found analytically and, if the system is normalized to have total length unity, the form of the avalanche distribution can be derived (helander _et al._, 1999). the arguments of helander _et al._ (1999) lead us to expect a region of power law behaviour in the distribution for a sandpile whose critical-gradient distribution has finite width. we might also anticipate that as the width of this distribution is decreased, more of the total range would be characterized by that power law index, but surprisingly this is not so. the distributions for four sandpile runs are superposed in figure 3, differing only by the choice of the critical-gradient distribution. for three of these the same mean critical slope but three different widths have been used. the fourth also has the smallest width, but a different mean; in this latter case we have rescaled the energies since, on average, the heights of sand needed for instability will be smaller by an order of magnitude, so that (5) will yield values that are on average smaller by two orders of magnitude. figure 3 suggests that all features of the probability distribution are robust against the choice of the critical-gradient distribution, which effectively represents the local condition for instability. avalanches dissipating smaller amounts of energy might be expected to extend over smaller length scales, as illustrated in figures 4-6, where we replot the data shown in figure 3, showing only the contribution from successively longer avalanches. independent of the details of the critical gradients, we see that the first power law index corresponds to avalanches that extend over fewer than 64 cells. figure 4 then shows that the drop at the lowest energies in figure 3 corresponds to avalanches that are one cell in length. the sandpile thus has three distinct regimes in its statistics: single cell avalanches that (as expected) are not power law; avalanches smaller than 64 cells, with the first power law index, which may reflect the discrete nature of the grid; and avalanches longer than 64 cells and up to the system size, with the second power law index, which may approach a continuous limit for the system.
in figures 7 - 10 we show a series of interval distributions from a single sandpile run with constant slow fuelling and with a top hat probability distribution for the critical gradients . the figures show the time intervals between successively larger avalanches , that is , between all avalanches exceeding a given length . since sand is always added at cell 1 , and hence instability always occurs first at cell 1 , a plot of the interval distribution for all avalanches ( not shown ) simply reproduces the distribution of times between successive avalanches , all of which are triggered at the first cell . as we selectively plot the distributions of time intervals between longer avalanches , which have propagated further down the sandpile , we see the effect of the interaction of more cells in the sandpile . figure 7 shows intervals between all avalanches that have propagated beyond cell 1 ( i.e. , of length greater than 1 ) . here we see two characteristic timescales , corresponding to avalanches that stop at cell 2 and those propagating beyond cell 2 . time is normalised to the inflow rate ( such that unit sand is added to the sandpile in unit time ) , so that avalanches that reach cell 2 and stop will only occur after sufficient sand has been added to exceed the critical gradient at cell 1 , which has mean value 1 ; this sets the minimum time interval in this case . as we increase the minimum avalanche length considered , the minimum time interval also increases correspondingly . the detailed behaviour becomes complex for lengths greater than 1 but less than 64 , that is , when avalanches which dissipate energy according to the first power law index are included . the general trend however is for an increasing number of characteristic time intervals to appear as we consider only avalanches of increasing length . when we exclude avalanches that dissipate energy according to the first power law index , by considering only avalanches of length greater than 64 ( figure 10 ) , the interval distribution becomes continuous with a cutoff at time interval 65 , as we would expect . the large scale avalanches , identified as those dissipating energy with a probability distribution given by the second power law index , therefore correspond to this continuous limit . unlike laboratory plasmas ( see chapman _ et al ._ , 1999 ) , these large scale events are expected to be relevant to astrophysical systems and are expected to be the robust observable in the case of the magnetosphere , which has strong , variable loading . a simple one dimensional sandpile model has been developed with two distinct characteristics in the probability distribution of energy discharges . for internal reorganisation there are two distinct inverse power law regimes , whilst for systemwide discharges ( flow of sand out of the system ) the probability distribution has a sharply - defined mean . our model may be applied to magnetospheric dynamics ( chapman _ et al ._ , 1998 ) , for example in reconciling the apparent paradox of power law indices in internal dynamics with substorm event statistics which have peaked distributions . under slow loading the internal dynamics exhibits two regimes which have inverse power law statistics with two distinct indices , corresponding to reconfigurations on distinct length scales . short length scales may arise from the discrete nature of the grid , while we also see longer scales , up to the system size , that effectively approach a continuous limit of the model . we find a transition between these regimes at avalanche lengths of about 64 cells .
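as a complement to the summary above , the interval analysis of figures 7 - 10 can be sketched as follows ( python ; assumes the avalanche occurrence times and lengths recorded during the run , with time normalised to the inflow rate , and an illustrative binning ) :

import numpy as np

def interval_distribution(times, lengths, lmin, nbins=60):
    """distribution of waiting times between successive avalanches
    of length >= lmin."""
    t = np.sort(np.asarray(times)[np.asarray(lengths) >= lmin])
    dt = np.diff(t)
    counts, edges = np.histogram(dt, bins=nbins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, counts / counts.sum()

# selecting successively longer events, as in figures 7 - 10:
# for lmin in (2, 8, 32, 64):
#     tau, p = interval_distribution(times, lengths, lmin)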
for space plasma systems , observations taken over long periods are required to test for possible inverse power law statistics . the loading of the system ( in the case of the magnetosphere , the solar wind ) is often characterised by both strong variability about a mean and a large dynamic range of mean energy input . the inverse power law form of the statistics of large internal avalanches has been shown to be robust under fast loading . the effect of large loading rates is to exclude events which dissipate small amounts of energy , which in our model results in a single inverse power law regime with a downturn at lower energies . we expect such inverse power law avalanche distributions to be a persistent feature in long runs of data that include " fast " inflow conditions if the underlying system is governed by soc . acknowledgements : the authors thank r. o. dendy for stimulating discussions and giuseppe consolini for a preprint of ( consolini , 1999 ) . scc was supported by a particle physics and astronomy research council lecturer fellowship .

angelopoulos v. , coroniti f. v. , kennel c. f. , kivelson m. g. , walker r. j. , russell c. t. , mcpherron r. l. , sanchez e. , meng c. i. , baumjohann w. , reeves g. d. , belian r. d. , sato n. , friis - christensen e. , sutcliffe p. r. , yumoto k. and harris t. ( 1996 ) multipoint analysis of a bursty bulk flow event on april 11 , 1985 . _ journal of geophysical research _ * 101 * , 4967 - 4989 .

chang t. s. ( 1992 ) low dimensional behaviour and symmetry breaking of stochastic systems near criticality - can these effects be observed in space and in the laboratory ? _ ieee transactions on plasma science _ * 20 * , 691 - 694 .

chang t. s. ( 1998a ) multiscale intermittent turbulence in the magnetotail . in _ substorms-4 _ , eds . s. kokubun and y. kamide , pp . 431 - 436 . terra scientific publishing company / kluwer academic publishers , tokyo .

chang t. s. ( 1998b ) sporadic localised reconnections and multiscale intermittent turbulence in the magnetotail . in _ geospace mass and energy flow : results from the international solar terrestrial physics program _ , eds . j. l. horwitz , d. l. gallagher and w. k. peterson , pp . 193 - 199 . geophysical monograph 104 , american geophysical union , washington , d.c .

consolini g. ( 1997 ) sandpile cellular automata and magnetospheric dynamics . in _ proceedings volume 58 , " cosmic physics in the year 2000 " _ , eds . s. aiello , n. iucci , g. sironi , a. treves and u. villante , pp . 123 - 126 . societa italiana di fisica , bologna , italy .

tsurutani b. , sugiura m. , iyemori t. , goldstein b. e. , gonzalez w. d. , akasofu s .- i . and smith e. j. ( 1990 ) the nonlinear response of ae to the imf driver : a spectral break at 5 hours . _ geophysical research letters _ * 17 * , 279 - 282 .

watkins n. w. , chapman s. c. , dendy r. o. and rowlands g. ( 1999 ) robustness of collective behaviour in a strongly driven avalanche model : magnetospheric implications . _ geophysical research letters _ * 26 * , 2617 - 2620 .
recently , the paradigm that the dynamic magnetosphere displays sandpile - type phenomenology has been advanced , in which energy dissipation is by means of avalanches which do not have an intrinsic scale . this may in turn imply that the system is in a self organised critical ( soc ) state . indicators of internal processes are consistent with this ; examples are the power law dependence of the power spectrum of auroral indices , and in - situ magnetic field observations in the earth's geotail . an apparent paradox is that , rather than power laws , substorm statistics exhibit probability distributions with characteristic scales . here we discuss a simple sandpile model which , for energy discharges due to internal reorganization , yields a probability distribution that is a power law , whereas systemwide discharges ( flow of " sand " out of the system ) form a distinct group whose probability distribution has a well defined mean . we analyse the model over a wide dynamic range , whereupon two regimes having different inverse power law statistics emerge , corresponding to reconfigurations over two distinct scaling regions : short scale sizes sensitive to the discrete nature of the sandpile model , and long scale sizes up to the system size which correspond to the continuous limit of the model . the latter are anticipated to correspond to large scale systems such as the magnetosphere . since the energy inflow may be highly variable , we examine the response of the model under strong or variable loading and establish that the power law signature of the large scale internal events persists . the interval distribution of these events is also discussed .
numerous cosmological observations have implied a new paradigm for the cosmic expansion history , i.e. , an accelerated expansion of the universe . to explain the accelerated expansion , an additional energy component called ` dark energy ' is usually added to both the friedmann equation and the friedmann lemaître acceleration equation , where general relativity is assumed to be correct . in particular , models which assume cold dark matter ( cdm ) and a cosmological constant have been suggested as an elegant description of the accelerated expansion . ( for other models , see refs . and references therein . ) recently , easson , frampton , and smoot have proposed that an extra driving term should be added to the friedmann lemaître acceleration equation . the additional entropic - force term can explain the accelerated expansion of the late universe and the inflation of the early universe , without introducing new fields . in the entropic - force scenario , called ` entropic cosmology ' , the additional driving term is derived from the usually neglected surface terms on the horizon of the universe in the gravitational action , assuming that the horizon has an entropy and a temperature . ( in fact , the entropy and temperature are related to the bekenstein entropy and the hawking temperature of black holes on an event horizon . ) since then , many researchers have extensively examined entropic cosmology from various viewpoints . ( the possibility that the entropic force on the horizon can explain the accelerating universe should be distinguished from the idea that gravity itself is an entropic force . ) in entropic cosmology , since an entropy on the horizon is assumed , this entropy can increase during the evolution of the universe . therefore , it is possible to consider that the evolution of the universe is a kind of non - adiabatic process , unlike in standard cosmology , in which an adiabatic ( isentropic ) expansion is assumed . nevertheless , such a non - adiabatic - like expansion of the universe has not yet been extensively investigated in entropic cosmology and has been considered in only a few studies . therefore , it is important to examine the non - adiabatic - like ( hereafter non - adiabatic ) process , to acquire a deeper understanding of entropic cosmology , especially from a thermodynamics viewpoint . also , after the discovery of black hole thermodynamics , the entropy of the universe was examined by many researchers . in particular , since the late 1990s , the entropy of the universe has been extensively discussed for a universe undergoing accelerated expansion . however , the evolution of the entropy has not been studied in entropic cosmology , although entropy plays an important role there . in this context , we examine a non - adiabatic expansion of the universe and discuss the evolution of the entropy in entropic cosmology . for this purpose , we derive the continuity ( conservation ) equation from the first law of thermodynamics , taking into account the non - adiabatic process caused by the entropy and the temperature on the horizon . if the modified friedmann and friedmann lemaître acceleration equations are used , the continuity equation can be derived from the two equations without using the first law of thermodynamics , because two of the three equations are independent . however , in this study , we derive the continuity equation from the first law of thermodynamics , since the first law is the fundamental conservation law .
using the obtained continuity equation, we formulate the generalized friedmann and friedmann lematre acceleration equations .in addition , we propose a simple model based on the formulation .it should be noted that we do not discuss entropic inflation in the early universe , since we focus on the late universe to examine the fundamental properties of the universe in entropic cosmology .the present paper is organized as follows . in sec .ii , we give a brief review of the two modified friedmann equations in entropic cosmology . in this section, we examine the properties of the single - fluid dominated universe . in sec .iii , we derive the modified continuity equation from the first law of thermodynamics , assuming a non - adiabatic expansion of the universe .we also discuss generalized formulations of entropic cosmology and propose a simple model . in sec .iv , we compare the simple model with the observed supernova data and several models . finally , in sec .v , we present our conclusions .koivisto _ et al . _ have summarized two modified friedmann equations to examine the entropic cosmology proposed by easson __ . in this study, we employ the two modified friedmann equations .we do not derive the two modified friedmann equations in the present paper , since the theoretical derivation has been described in refs . . in sec .[ modification of the friedmann equations ] , we first give a brief review of the two modified friedmann equations .[ solutions for single fluid ] and [ properties ] , we examine the solutions and the properties of the single - fluid dominated universe in entropic cosmology .we consider a homogeneous , isotropic , and spatially flat universe , and examine the scale factor at time in the friedmann lematre robertson walker metric . in entropic cosmology , the two modified friedmann equations are summarized as and where the hubble parameter is defined by and are the gravitational constant and the mass density of cosmological fluids , respectively . note that we neglect high - order terms for quantum corrections , since we focus on the late universe . in eq .( [ eq : mfrw02(h4=0 ) ] ) , represents the equation of state parameter for a generic component of matter , which is given as where and are the speed of light and the pressure of cosmological fluids . for non - relativistic matter ( or the matter - dominated universe ) and relativistic matter ( or the radiation - dominated universe ) , is and , respectively . in eqs .( [ eq : mfrw01(h4=0 ) ] ) and ( [ eq : mfrw02(h4=0 ) ] ) , the four coefficients , , , and are dimensionless constants .the - and -terms with the dimensionless constants correspond to the additional driving terms , which take into account the entropy and temperature on the horizon of the universe due to the information holographically stored there . in this study ,( [ eq : mfrw01(h4=0 ) ] ) and ( [ eq : mfrw02(h4=0 ) ] ) are called the modified friedmann equation and the modified ( friedmann lematre ) acceleration equation , respectively .[ equation ( [ eq : mfrw01(h4=0 ) ] ) corresponds to energy conservation . ]easson _ et al ._ have derived the modified acceleration equation , i.e. , eq .( [ eq : mfrw02(h4=0 ) ] ) . in their paper, the dimensionless constants were expected to be bounded by and .typical values for a better fitting were and .it was argued that the extrinsic curvature at the surface was likely to result in something like and . 
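the displayed forms of the two modified equations did not survive extraction here ; a reconstruction consistent with the surrounding definitions , in which the names \alpha_1 , \alpha_2 , \beta_1 and \beta_2 for the four dimensionless constants are our assumption , is

\[ H^2 = \left( \frac{\dot{a}}{a} \right)^2 = \frac{8 \pi G}{3} \rho + \alpha_1 H^2 + \alpha_2 \dot{H} , \]

\[ \frac{\ddot{a}}{a} = - \frac{4 \pi G}{3} ( 1 + 3w ) \rho + \beta_1 H^2 + \beta_2 \dot{H} , \qquad w = \frac{p}{\rho c^2} , \]

where the \alpha - and \beta - terms are the entropic - force terms and the high - order ( quantum correction ) terms are neglected .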
of course , it is difficult to derive four unknown dimensionless constants from first principles .we now examine the two modified friedmann equations .coupling [ eq .( [ eq : mfrw01(h4=0 ) ] ) ] with [ eq .( [ eq : mfrw02(h4=0 ) ] ) ] and rearranging , we obtain the above equation can be rewritten as where from eq .( [ eq : dhc1 ] ) , is calculated as accordingly , eq .( [ eq : dhda_c1h ] ) is arranged as where is defined by as discussed in ref . , the two modified friedmann equations can be arranged as a simple equation , eq .( [ eq : dhdn(h4=0 ) ] ) . in the next subsection, we solve eq .( [ eq : dhdn(h4=0 ) ] ) , assuming a single - fluid dominated universe . we can solve eq .( [ eq : dhdn(h4=0 ) ] ) analytically , when is constant .in fact , as shown in eq .( [ eq : c1 ] ) , is constant when , , and are constant values .( here represents and and represents and . ) therefore , to solve eq .( [ eq : dhdn(h4=0 ) ] ) , we assume that and are constant . in addition , for a constant , we assume the single - fluid dominated universe . concretely speaking , is and for the matter- and radiation - dominated universes , respectively . when is constant , eq .( [ eq : dhdn(h4=0 ) ] ) can be integrated as this solution is given by and therefore we find where and are integral constants . dividing eq .( [ eq : solve1 ] ) by , we have where and are the present values of the hubble parameter and the scale factor .we obtain the above simple solution , since we assume the single - fluid dominated universe and neglect high - order terms for quantum corrections .equation ( [ eq : h / h0 ] ) indicates that is an important parameter for discussing the universe in the present entropic cosmology .we can determine from eq .( [ eq : c1 ] ) .for example , if , then for the radiation - dominated universe ( ) is , while for the matter - dominated universe ( ) is .when , the two modified friedmann equations are the standard friedmann and acceleration equations , respectively .accordingly , eq . ( [ eq : h / h0 ] ) for and agrees with the standard formula for the radiation- and matter - dominated universes , respectively .note that the universe for corresponds to the -dominated universe , as discussed later . to observe the properties of the single - fluid dominated universe , we examine three properties : the scale factor ( in sec . [ scale factor ] ) , the luminosity distance ( in sec .[ luminosity distance ] ) , and the entropy on the hubble horizon ( in sec .[ entropy on the hubble horizon ] ) . in this subsection, we consider as a non - negative free parameter .the first important property we examine is the scale factor . 
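for orientation , combining the two reconstructed equations above gives the reduction quoted in the text ; in the same hedged notation ,

\[ \frac{dH}{dN} = - C_1 H , \qquad N \equiv \ln \frac{a}{a_0} , \qquad C_1 = \frac{ 1 + \frac{1 + 3w}{2} ( 1 - \alpha_1 ) - \beta_1 }{ 1 - \frac{1 + 3w}{2} \alpha_2 - \beta_2 } , \]

so that , for constant C_1 ,

\[ \frac{H}{H_0} = \left( \frac{a}{a_0} \right)^{-C_1} , \qquad q \equiv - \frac{\ddot{a} a}{\dot{a}^2} = C_1 - 1 . \]

for \alpha_i = \beta_i = 0 this reduces to C_1 = 3 ( 1 + w ) / 2 , i.e. , C_1 = 2 , 3/2 and 0 for the radiation- , matter- and \lambda - dominated cases , and C_1 = 1 for the empty universe , consistent with the special values quoted in the text .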
to this end , eq .( [ eq : h / h0 ] ) is arranged as where and are defined by multiplying eq .( [ eqa : h / h0 ] ) by , we obtain where is calculated as integrating eq .( [ eqa : dadt ] ) and replacing by , we finally have & ( c_{1 } = 0 ) ,\\ \end{cases } \label{eqa : a - t}\ ] ] where represents the present time .note that the integral constants are calculated from at , where is set to be .typical results are given by & ( c_{1 } = c_{1,\lambda } = 0 ) .\end{cases}\ ] ] when , the above three results correspond to the scale factor for the radiation- , matter- , and -dominated universes , respectively .[ t ] now we consider as a non - negative free parameter , to observe the properties of the single - fluid dominated universe .time evolutions of the normalized scale factor with various are plotted in fig .[ fig - a - t_c1 ] .the results for , , , and are consistent with those for the radiation- , matter- , empty- , and -dominated universes , respectively .( entropic - force terms become more dominant as decreases . ) as shown in fig .[ fig - a - t_c1 ] , for , the increase of the scale factor tends to be faster as decreases .that is , at late times , the expansion of the universe increases with decreasing .in fact , an accelerated expanding universe is observed when , e.g. , and .we can confirm the accelerated expansion from the ` deceleration parameter ' , which is used to discuss the expansion of the universe .the deceleration parameter is defined by substituting eq .( [ eq : dhc1 ] ) into eq .( [ eq : mfrw02(h4=0 ) ] ) , we have and arranging this gives accordingly , substituting eq .( [ eq : ddota_a_c1 ] ) into eq .( [ eq : q_def ] ) , the deceleration parameter is given as note that we do not assume the single - fluid dominated universe to calculate shown in eq .( [ eq : q0c1 ] ) . from eq .( [ eq : q0c1 ] ) , we find that is negative when .the negative deceleration corresponds to the acceleration .that is , when , the accelerating universe can be mimicked by entropic - force terms .note that eq .( [ eq : q0c1 ] ) is different from for models , which we will discuss in sec .[ comparison ] .as mentioned previously , we assumed the single - fluid dominated universe to calculate the scale factor .however , the universe for is different from the so - called single - component universe , such as the radiation- , matter- , empty- , and -dominated universes appearing in the standard cosmology .this is because , in entropic cosmology , the entropic - force terms affect the properties of the universe .the luminosity distance obtained from the observation data has been widely used to study the accelerated expansion of the universe .therefore , we examine the luminosity distance of the single - fluid dominated universe in entropic cosmology .the luminosity distance is given as where the integrating variable and the function are given by and and is the redshift defined by substituting eq .( [ eq : h / h0 ] ) into eq .( [ eqa : dl - def2 ] ) , and using , we obtain as substituting eq .( [ eqa : fyc1 ] ) into eq .( [ eqa : dl - def1 ] ) , and integrating , we have & ( c_{1 } \neq 1 ) , \\ ( 1+z)\ln ( 1+z ) & ( c_{1 } = 1 ) , \end{cases }\label{eqa : dlc1}\ ] ] where corresponds to the empty - dominated universe .typical results are given by & ( c_{1 } = c_{1,m } = 3/2 ) , \\( 1+z)z & ( c_{1 } = c_{1,\lambda } = 0 ) .\end{cases}\ ] ] [ t ] the luminosity distance for various is shown in fig .[ fig - dl - z_c1 ] , where is considered as a non - negative free parameter . 
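the closed - form luminosity distance quoted above is easily evaluated ; a minimal numerical sketch ( python ; the value of H_0 is illustrative ) :

import numpy as np

C_KMS = 2.998e5        # speed of light [km/s]
H0    = 70.0           # hubble constant [km/s/Mpc] (illustrative value)

def dl(z, c1):
    """luminosity distance in Mpc for H/H0 = (1 + z)**c1, flat universe."""
    z = np.asarray(z, dtype=float)
    if np.isclose(c1, 1.0):                                  # empty universe
        y = (1.0 + z) * np.log1p(z)
    else:
        y = (1.0 + z) * ((1.0 + z) ** (1.0 - c1) - 1.0) / (1.0 - c1)
    return (C_KMS / H0) * y

# radiation (c1 = 2), matter (3/2), empty (1) and lambda (0) cases:
for c1 in (2.0, 1.5, 1.0, 0.0):
    print(c1, dl([0.5, 1.0, 2.0], c1))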
as shown in fig .[ fig - dl - z_c1 ] , the luminosity distance increases with decreasing , especially in higher regions .this indicates that an accelerated expanding universe appears when is small .in fact , the lines for , e.g. , and , correspond to the accelerating universe , as discussed in sec .[ scale factor ] .for example , the line for is equivalent to the luminosity distance for the -dominated universe . on the other hand , as non - accelerating universes , the lines for , , and are consistent with those for the radiation- , matter- , and empty - dominated universes , respectively .it is clearly shown that affects the properties of the single - fluid dominated universe in the present entropic cosmology . in sec .[ comparison ] , we will discuss the luminosity distance including the observed supernova data and models .in entropic cosmology , we assume that the hubble horizon has an associated entropy . therefore , as the third property , we examine the entropy on the hubble horizon .the hubble horizon ( radius ) and the entropy on the hubble horizon are given as and where , , and are the boltzmann constant , the reduced planck constant , and the surface area of the sphere with the hubble radius , respectively .the reduced planck constant is defined by , where is the planck constant .substituting into eq .( [ eq : s0_h ] ) , and using shown in eq . ( [ eq : rh_00 ] ) , we obtain the entropy as where is a positive constant given by for example , the entropy on the hubble horizon can be evaluated as : the entropy on the hubble horizon is far larger than the total of the other entropies of the matter within the horizon .multiplying eq .( [ eq : sh ] ) by and substituting eq .( [ eq : h / h0 ] ) into this , we have where the single - fluid dominated universe is assumed since eq .( [ eq : h / h0 ] ) is employed .substituting eq .( [ eqa : a - t ] ) into eq .( [ eq : h0sh ] ) and rearranging , we obtain the entropy on the hubble horizon : [ t ] to observe the entropy , we consider as a non - negative free parameter .the time evolution of the entropy on the hubble horizon for various is plotted in fig .[ fig - s - t_c1 ] .the entropy for does not depend on time .we can confirm this from both fig .[ fig - s - t_c1 ] and eq .( [ eq : sh - t ] ) .this is because depends on as shown in eq .( [ eq : sh ] ) , and the hubble parameter is constant when .( when , eq .( [ eq : dhc1 ] ) indicates a constant because . ) in contrast , the entropy increases with time for .the increase of entropy is likely consistent with the second law of thermodynamics . in sec .[ comparison ] , we will discuss the entropy on the hubble horizon , including models .( strictly speaking , the other entropies should be taken into account , to examine the generalized second law of thermodynamics as studied in refs . in the present paper, we do not discuss the generalized second law . )[ t ] we now examine the influence of on the entropy . to this end, we observe the dependence of the entropy on the normalized scale factor , which is given by eq .( [ eq : h0sh ] ) . in fig .[ fig - s - a_c1 ] , is varied from to with steps of , while is varied from to with steps of . as shown in fig .[ fig - s - a_c1 ] , the entropy rapidly increases with decreasing , especially for small , e.g. , , corresponding to early times .( note that we focus on the late universe in this study . ) in contrast , for larger corresponding to late times , e.g. 
, , the entropy increases slowly as decreases .in fact , we have observed that the expansion of the universe further accelerates as decreases , as shown in eq .( [ eq : q0c1 ] ) and in figs .[ fig - a - t_c1 ] and [ fig - dl - z_c1 ] .therefore , at late times , the entropy increases slowly as decreases ( or as the entropic - force terms are further dominant ) , while the expansion of the universe accelerates .we can expect the above relationship between the entropy and the expansion , because , which is obtained from eqs .( [ eq : sh ] ) and ( [ eq : hubble ] ) . in this section ,we have employed the two modified friedmann equations to study the universe in entropic cosmology .we have solved the equations and examined the properties of the single - fluid dominated universe . through the parameter related to entropic - force terms, we can summarize the properties of the universe systematically . the universe with a specific ( e.g. , , , , and ) is consistent with the single - component universe appearing in the standard cosmology .in the previous section , we examined the properties of the universe described by the modified friedmann and acceleration equations . in this section, we consider a non - adiabatic - like expansion process caused by an entropy and a temperature on the hubble horizon . in sec .[ modified continuity equation ] , we derive the continuity equation from the first law of thermodynamics , assuming non - adiabatic expansion of the universe . in sec .[ consistency of modified equations ] , using the continuity equation , we formulate the generalized friedmann and acceleration equations , and propose a simple model .it should be noted that several researchers have discussed similar modified continuity equations for entropic cosmology .for example , cai _ et al ._ derived the improved continuity equation from the first law of thermodynamics using double holographic screens , while qiu __ and casadio _ et al . _ derived the modified continuity equation from the modified friedmann and acceleration equations .danielsson examined the sourced acceleration equation using extra source terms , and discussed the modified continuity equation . in this subsection , we derive the modified continuity equation from the first law of thermodynamics , assuming non - adiabatic expansion of the universe caused by the entropy and temperature on the hubble horizon . to this end , we first review the continuity equation , according to the textbook by ryden . 
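the horizon quantities assumed in the derivation below can be collected compactly ( a reconstruction from the definitions cited above ; \gamma is the dimensionless temperature parameter of the text ) :

\[ dQ = T \, dS , \qquad S = \frac{k_B c^3}{\hbar G} \frac{\mathcal{A}_H}{4} = \frac{\pi k_B c^5}{\hbar G} \frac{1}{H^2} , \qquad T = \gamma \, \frac{\hbar H}{2 \pi k_B} , \]

with \mathcal{A}_H = 4 \pi r_H^2 and r_H = c / H .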
from the first law of thermodynamics , the heat flow across a region is given by where and are changes in the internal energy and volume of the region , respectively .this equation can be rewritten as let us consider a sphere of co - moving radius expanding along with the universal expansion so that its proper radius is given by the volume of the sphere is and therefore , the rate of change of the sphere s volume is given as the internal energy of the sphere is given by where the internal energy - density is differentiating eq .( [ eq : e(t ) ] ) with respect to , and substituting eq .( [ eq : dotv ] ) into this equation , the rate of change of the sphere s internal energy is given as substituting eqs .( [ eq : dotv ] ) and ( [ eq : dote ] ) into , and using eq .( [ eq : varepsilon ] ) , we calculate as v \notag \\ & = \left [ \dot{\rho } + 3 \frac{\dot{a}}{a } \left ( \rho + \frac{p}{c^2 } \right ) \right ] c^2 v .\label{eq : dotepdotv } \end{aligned}\ ] ] finally , substituting eqs .( [ eq : v(t ) ] ) and ( [ eq : dotepdotv ] ) into eq .( [ eq : firstt2 ] ) , we obtain the first law of thermodynamics in an expanding ( or contracting ) universe : c^2 v dt \notag \\ & = \left [ \dot{\rho } + 3 \frac{\dot{a}}{a } \left ( \rho + \frac{p}{c^2 } \right ) \right ] c^2 \left ( \frac{4 \pi}{3 } r_{s}^3 \right ) dt .\label{eq : dq0}\end{aligned}\ ] ] if we assume adiabatic ( and isentropic ) processes , then is : that is , , where and represent the entropy and the temperature . in this case, we obtain the continuity equation for the adiabatic ( isentropic ) process : . however , in this paper , we examine the universe in entropic cosmology .that is , the horizon is assumed to have an entropy and a temperature , and therefore , the entropy on the horizon can increase during evolution of the universe . in summary , we assume a non - adiabatic process given by to calculate , we employ the hubble radius as the preferred screen , since the apparent horizon coincides with the hubble radius in the spatially flat universe .the hubble radius is given as we assume that the hubble horizon has an associated entropy and an approximate temperature .the entropy shown in eq .( [ eq : s0_h ] ) is written as and the temperature is given by we emphasize that the temperature considered here is obtained from multiplying the so - called horizon temperature , , by . in this study , is a non - negative free parameter and is of the order of , typically or .in fact , corresponds to a parameter for the screen temperature discussed in refs .proposed that cosmological observations constrain the undetermined coefficient .( easson _ et al ._ suggested a similar modified coefficient for the temperature . )the temperature on the horizon can be evaluated as .\ ] ] the temperature is lower than the temperature of our cosmic microwave background ( cmb ) radiation , $ ] .accordingly , strictly speaking , the universe considered here is in thermal non - equilibrium states . in the present paper ,we assume a non - adiabatic expansion in thermal equilibrium states , using a single holographic screen .( thermal equilibrium states in entropic cosmology have been previously discussed using double holographic screens . ) from eqs .( [ eq : s0 ] ) and ( [ eq : t0 ] ) , we calculate as where is . the first law of thermodynamics can be written as therefore , substituting eqs .( [ eq : dq0 ] ) and ( [ eq : tds0 ] ) into eq . 
( [ eq : dq_depdv_tds ] ) , we have c^2 \left ( \frac{4 \pi}{3 } r_{h}^3 \right ) dt = \gamma \frac{c^4}{g } \dot{r}_{h } dt , \ ] ] where the proper radius shown in eq .( [ eq : dq0 ] ) is replaced by the hubble radius . arranging the above equation and substituting eq .( [ eq : ra ] ) into the equation , we obtain this is the modified continuity equation derived from the first law of thermodynamics , assuming non - adiabatic expansion of the universe . the right - hand side of eq .( [ eq : fluid0 ] ) is related to the non - adiabatic process . if is or if is constant , eq .( [ eq : fluid0 ] ) is the continuity equation for adiabatic ( isentropic ) processes .we will discuss this in the next subsection .( a similar improved continuity equation for entropic cosmology has been examined in refs .note that we have derived the modified continuity equation from the first law of thermodynamics , neglecting the entropy for high - order corrections . ) as shown in eq .( [ eq : fluid0 ] ) , the modified continuity equation has the so - called non - zero term on the right - hand side , as if it were a non - adiabatic process .therefore , in the present paper , we call this the non - adiabatic process .( as discussed later , the non - zero term can be cancelled in appearance . ) in fact , it has been known that a similar non - zero term is included in the continuity equation for other cosmological models .accordingly , we introduce two typical models in the following .the first model is ` bulk viscous cosmology ' , in which a bulk viscosity of cosmological fluids is assumed . because of the bulk viscosity , a similar non - zero term is included in the continuity equation .( for bulk viscous cosmology , see , e.g. , the work of barrow . ) usually , the only bulk viscosity can generate a classical entropy in homogeneous and isotropic cosmologies .however , in this study , we assume an entropy on the horizon of the universe , instead of the classical entropy .therefore , in appendix [ appendix_bulk ] , we discuss similarities and differences between bulk viscous cosmology and entropic cosmology .the second model is ` energy exchange cosmology ' , in which the transfer of energy between two fluids is assumed ; e.g. , the interaction between matter and radiation , matter creation , interacting quintessence , the interaction between dark energy and dark matter , dynamical vacuum energy , etc .. in energy exchange cosmology , two continuity equations have a similar non - zero term on each right - hand side .note that the two non - zero right - hand sides are totally cancelled , since the total energy of the two fluids is conserved .for example , using a dynamical vacuum term , the continuity equations for matter ` ' and vacuum energy ` ' should be arranged as and , respectively .the continuity equation for matter is equivalent to eq .( [ eq : fluid0 ] ) , if is given by , where is a positive constant .however , in entropic cosmology , we do not assume a second fluid appearing in energy exchange cosmology .this is because , in entropic cosmology , an effective continuity ( conservation ) equation can be obtained from an effective description of the equation of state , without using a second fluid . 
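collecting the relation displayed at the start of this passage with r_H = c / H and \dot{r}_H = - c \dot{H} / H^2 , the modified continuity equation takes the form ( our reconstruction ; the overall sign follows from the algebra , the source term being positive for \dot{H} < 0 )

\[ \dot{\rho} + 3 \frac{\dot{a}}{a} \left( \rho + \frac{p}{c^2} \right) = \gamma \, \frac{3 c^2}{4 \pi G} \frac{\dot{r}_H}{r_H^3} = - \gamma \, \frac{3}{4 \pi G} H \dot{H} , \]

which vanishes for \gamma = 0 or \dot{H} = 0 , as stated above . equivalently , the right - hand side can be absorbed into an effective pressure p' = p + \gamma c^2 \dot{H} / ( 4 \pi G ) , giving \dot{\rho} + 3 H ( \rho + p' / c^2 ) = 0 .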
in this sense , the effective description is likely similar to ( single - fluid ) bulk viscous cosmology rather than energy exchange cosmology , since an effective pressure is employed in bulk viscous cosmology , as shown in appendix [ appendix_bulk ] .that is , it is possible to obtain an effective continuity ( conservation ) equation in appearance , if we employ such an effective description .for example , when the effective pressure is given by , eq .( [ eq : fluid0 ] ) is arranged as . in appendix[ appendix_f ] , we discuss the effective description for entropic cosmology .of course , the non - zero right - hand side of eq .( [ eq : fluid0 ] ) may be interpreted as the interchange of energy between the bulk ( the universe ) and the boundary ( the horizon of the universe ) , as if it were energy exchange cosmology .therefore , it is important to examine a relationship between entropic cosmology and energy exchange cosmology in more detail . we leave this for the future research . in the above discussion, we consider shown in eq .( [ eq : fluid0 ] ) as a free parameter for the temperature .however , parameters for the entropy , such as tsallis entropic parameter , may be required for calculating .this is because nonextensive entropy , e.g. , tsallis entropy or renyi s entropy , has been suggested for generalized entropy of self - gravitating systems and has been extensively examined from astrophysical viewpoints .therefore , not only but another parameter for the entropy may be required for the modified continuity equation .we have three modified equations , i.e. , the modified friedmann and acceleration equations [ eqs .( [ eq : mfrw01(h4=0 ) ] ) and ( [ eq : mfrw02(h4=0 ) ] ) ] and the modified continuity equation [ eq . ( [ eq : fluid0 ] ) ] .two of the three equations are independent . in this subsection , using the modified continuity equation , we formulate the generalized friedmann and acceleration equations . for this purpose , we select the modified friedmann equation , eq .( [ eq : mfrw01(h4=0 ) ] ) , and the modified continuity equation , eq .( [ eq : fluid0 ] ) , as independent equations .this is because the two equations are related to the conservation law .( the modified friedmann equation corresponds to energy conservation . ) as the two independent equations , the generalized friedmann equation is given by and the modified continuity equation is written as here , in eq .( [ eq : mfrw01(f ) ] ) is a general function related to entropic - force terms including high - order corrections .danielsson has examined a similar acceleration equation using an extra source term .( an additional term corresponding to is not included in the friedmann equation for bulk viscous cosmology . see appendix [ appendix_bulk ] . )we now derive the generalized acceleration equation from eqs .( [ eq : mfrw01(f ) ] ) and ( [ eq : fluid0r ] ) . to this end , multiplying eq .( [ eq : mfrw01(f ) ] ) by , we have differentiating this equation with respect to gives dividing eq .( [ eq:2aa ] ) by gives multiplying eq .( [ eq : fluid0r ] ) by ( ) and arranging , we have where is as shown in eq .( [ eq : w ] ) . 
accordingly , substituting eq .( [ eq : rho_a_da ] ) into eq .( [ eq : dda_a ] ) , and using , we obtain + \frac{1}{2 } \dot{f } \frac{a}{\dot{a } } + f \notag \\ & = - \frac { 4\pi g } { 3 } ( 1 + 3w)\rho + \left ( f + \frac{1}{2 } \frac { \dot{f } } { h } - \gamma \dot{h } \right ) .\label{eq : accel}\end{aligned}\ ] ] equation ( [ eq : accel ] ) is the generalized acceleration equation , which is derived from eqs .( [ eq : mfrw01(f ) ] ) and ( [ eq : fluid0r ] ) . before proceeding further , in this paragraph, we discuss a spatially non - flat ( ) universe , where is a curvature constant . to this end , we add to the right - hand side of the generalized friedmann equation , eq .( [ eq : mfrw01(f ) ] ) , as described in appendix a of ref . . in the spatially non - flat universe , the apparent horizon , ,does not coincide with the hubble horizon , , because of .accordingly , we employ the apparent horizon as the preferred screen rather than the hubble horizon .consequently , on the right - hand sides of eqs .( [ eq : fluid0r ] ) and ( [ eq : accel ] ) should be replaced by .in other words , the modified continuity equation and the generalized acceleration equation should include a curvature term .of course , it was argued that the extrinsic curvature at the surface was likely to result in something like and .note that we consider a spatially flat ( ) universe in the present study .next , we discuss a simple model . for this purpose , we consider only -terms as entropic - force terms of the generalized friedmann equation .that is , in eq . ([ eq : mfrw01(f ) ] ) is set to be where is a constant .( we consider only -terms , since -terms of the modified friedmann equation are .the details are summarized in appendix [ appendix_f ] . ) substituting eq .( [ eq : fh2 ] ) into eq .( [ eq : accel ] ) , we have in fact , easson _ et al . _first proposed that the entropic - force terms are or , i.e. , -terms are not included in the entropic - force terms .accordingly , we determine so that the -term in eq .( [ eq : accel2 ] ) is cancelled . in other words , for the simple model , we select as in this case , we can obtain the simple self - consistent equations .the simple modified friedmann , acceleration , and continuity equations are summarized as the entropic - force term in eq .( [ eq : mfrw01(f)3 ] ) is the same as the term in eq .( [ eq : accel3 ] ) . the above two modified friedmann equations , i.e. , eqs .( [ eq : mfrw01(f)3 ] ) and ( [ eq : accel3 ] ) , correspond to eqs .( [ eq : mfrw01(h4=0 ) ] ) and ( [ eq : mfrw02(h4=0 ) ] ) for and .( satisfies the constraint , as is concluded in ref .therefore , we can easily calculate the properties of the single - fluid dominated universe , as examined in sec .[ properties ] . in the present study, we do not assume a cosmological constant and dark energy , since additional driving terms can be derived from the entropic - force on the hubble horizon .however , if is fixed as a constant , the above three equations are equivalent to models .( the right - hand side of eq .( [ eq : fluid0r3 ] ) is when is constant , because . ) for example , if we consider the matter - dominated universe with for and , then is from eq .( [ eq : c1 ] ) .the universe for corresponds to the -dominated universe in the standard cosmology . in this section ,we have derived the modified continuity equation from the first law of thermodynamics , assuming a non - adiabatic expansion of the universe . 
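for reference , the simple model derived above can be written out in the reconstructed notation : with f(H) = \alpha H^2 and the choice \gamma = \alpha that cancels the \dot{H} - term ,

\[ H^2 = \frac{8 \pi G}{3} \rho + \alpha H^2 , \qquad \frac{\ddot{a}}{a} = - \frac{4 \pi G}{3} ( 1 + 3w ) \rho + \alpha H^2 , \]

\[ \dot{\rho} + 3 \frac{\dot{a}}{a} ( 1 + w ) \rho = - \alpha \, \frac{3}{4 \pi G} H \dot{H} , \]

so that C_1 = \frac{3 ( 1 + w )}{2} ( 1 - \alpha ) ; in particular , C_1 = \frac{3}{2} ( 1 - \alpha ) for the matter - dominated case .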
using the obtained continuity equation ,we have formulated the generalized friedmann and acceleration equations , i.e. , eqs .( [ eq : mfrw01(f ) ] ) and ( [ eq : accel ] ) . moreover , as a possible model , we have proposed the simple model given by eqs . ( [ eq : mfrw01(f)3 ] ) , ( [ eq : accel3 ] ) , and ( [ eq : fluid0r3 ] ) .we will discuss the properties of the simple model in the next section .note that cai _ , qiu _ et al . _ , casadio _ et al . _ , and danielsson have discussed similar modified continuity equations .the above obtained equations are related to their works .in this section , we examine evolution of the late universe using the simple model given by eqs .( [ eq : mfrw01(f)3 ] ) , ( [ eq : accel3 ] ) , and ( [ eq : fluid0r3 ] ) . to this end, we first examine the luminosity distance because , through , we can easily compare the simple model not only with models but also with the observed supernova data .for the present simple model , we consider the matter - dominated universe since , in entropic cosmology , we do not assume the cosmological constant and dark energy .( the influence of radiation is extremely small in the late universe , as discussed later . )the matter - dominated universe is given by in the simple model , the four parameters shown in eqs .( [ eq : mfrw01(h4=0 ) ] ) and ( [ eq : mfrw02(h4=0 ) ] ) , i.e. , , , , and , are set to be and where is assumed to be substituting the above equations into eq .( [ eq : c1 ] ) , is calculated as from eq .( [ eqa : dlc1 ] ) , we can calculate the luminosity distance for the simple model .of course , we accept that should be a free parameter .however , we determine as shown in eq .( [ eq : gamma32pi ] ) .( the coefficient was anticipated from the surface term order , while the coefficient was expected from the hawking temperature description , as described in ref . . ) in fact , the properties of the universe for are almost the same as the properties for , since the difference of between and is small , as shown by eq .( [ eq : c1_gamma ] ) .therefore , for the present simple model , we will observe the properties of the universe for , i.e. , . for models , the luminosity distance of the spatially flat universeis given as ^{-1/2 } , \label{eq : dl(cdm)}\end{aligned}\ ] ] where and . and represent the density parameters for the matter and the cosmological constant , respectively . represents the critical density , while is the density for matter which includes baryon and dark matter . for the flat universe , is given as .here , we neglect the density parameter for the radiation , since is extremely small , e.g. , . as typical universes, is set to be , , and .note that the notation is simplified as , , and .the universes for and correspond to the matter- and -dominated universes , respectively .that is , the universes for and are equivalent to the universes for and , as discussed in the previous section .the universe for is a fine - tuned standard model , which takes into account the recent wmap best fit values .we numerically calculate for the standard model , since we can not analytically solve eq .( [ eq : dl(cdm ) ] ) , except for special cases .[ t ] figure [ fig - dl - z ] shows the luminosity distance for the present simple model , supernova data points , and several models .we find that the simple model agrees well with supernova data points and the fine - tuned standard model , i.e. 
, .it is successfully demonstrated that the simple model can mimic the present accelerating universe without adding the cosmological constant and dark energy . in entropic cosmology , at least , it is possible to discuss why has such a value , unlike for the model .for example , may be estimated from a derivation of surface terms or the hawking temperature description .note that easson _have suggested an alternate acceleration equation for a better fitting , where the entropic - force term is .in fact , we have confirmed that the present simple model is better than the alternate acceleration equation , i.e. , the present model agrees well with the standard model , compared with the alternate acceleration equation suggested in ref . .next , we examine the entropy on the hubble horizon , using the present simple model and models .the entropy for the simple model is calculated from eq .( [ eq : sh - t ] ) with for , while for the model is calculated from as shown in eq .( [ eq : sh ] ) . since the entropies for and are equivalent to the entropies for and , we can calculate them analytically .on the other hand , we numerically compute the entropy for the fine - tuned standard model , i.e. , .for the standard model , we employ the following equation : we first integrate eq .( [ eq : a_cdm ] ) numerically , to obtain time evolution of the scale factor .the hubble parameter is numerically calculated from .therefore , we can obtain the entropy for the standard model . in the present study, we assume a spatially flat universe , i.e. , , and neglect the influence of radiation , i.e. , .now , we observe the time evolution of the entropy on the hubble horizon . as shown in fig .[ fig - s - t_cdm ] , the entropies for both the simple model and the fine - tuned standard model increase with time until the present time , . in this sense , the simple model is similar to the fine - tuned standard model , i.e. , .however , we can observe the difference between them clearly .for example , the increase of the entropy for the simple model is likely uniform , even after the present time , i.e. , .in contrast , the increase of the entropy for the standard model tends to become gradually slower , especially after the present time , as if the present time were a special time .this is because the cosmological constant is very dominant in the standard model . to examine the entropy more closely ,we focus on the rate of the change of the entropy . as shown in fig .[ fig - s - t_cdm ] , for the simple model is always positive , while for the standard model is negative except for the early stage . from eq .( [ eq : sh - t ] ) , we can confirm a positive for the present simple model , because for , where is a positive constant given by eq .( [ eq : k - def ] ) .[ t ] as discussed above , the increase in entropy for the present simple model is uniform , while that for the fine - tuned standard model becomes gradually slower , especially after the present time . 
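the comparison just described can be sketched numerically : the horizon entropy scales as S \propto H^{-2} , so S / S_0 = ( H_0 / H )^2 ; for the simple model H / H_0 = ( a / a_0 )^{-C_1} is analytic , while for the \lambda cdm case H(a) follows from the friedmann equation . a minimal python sketch , in which the density parameters and \gamma = 3 / ( 2 \pi ) are the values assumed in the text and the integration runs forward from the present only :

import numpy as np
from scipy.integrate import solve_ivp

OM, OL = 0.27, 0.73                       # assumed density parameters
C1 = 1.5 * (1.0 - 3.0 / (2.0 * np.pi))    # simple model: matter, gamma = 3/(2 pi)

def h_lcdm(a):
    """H/H0 for the flat lambda-cdm case (radiation neglected)."""
    return np.sqrt(OM * a ** -3 + OL)

t = np.linspace(0.0, 3.0, 301)            # time in units of 1/H0; t = 0 is today
sol = solve_ivp(lambda t, a: a * h_lcdm(a), (t[0], t[-1]), [1.0],
                t_eval=t, rtol=1e-8)
a_lcdm = sol.y[0]

a_simple = (1.0 + C1 * t) ** (1.0 / C1)   # analytic solution for C1 != 0

s_lcdm   = 1.0 / h_lcdm(a_lcdm) ** 2      # S/S0 = (H0/H)^2
s_simple = a_simple ** (2.0 * C1)         # since H/H0 = a^{-C1}

# s_simple grows without bound (dS/dt > 0 throughout), while s_lcdm
# saturates at 1/OL as the cosmological constant dominates.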
in other words , the standard model implies that the present time is a special time . in fact , the entropy of the accelerated expanding universe has been extensively discussed from various viewpoints . many of the earlier works suggest that the increase of the entropy tends to become gradually slower , especially after the present time , because the cosmological constant dominates the standard model . however , the simple model considered here predicts that the present time is not a special time , unlike the prediction of the standard model . [ we have confirmed that the scale factor for the simple model increases uniformly even after the present time , while that for the fine - tuned standard model increases rapidly after the present time . ( the figure is not shown . ) this result is consistent with the result for the entropy shown in fig . [ fig - s - t_cdm ] , because the entropy is calculated from the hubble parameter , as shown in eq . ( [ eq : sh ] ) . ] finally , we examine the deceleration parameter . we can calculate the deceleration parameter for the present simple model by substituting its value of the entropic parameter into eq . ( [ eq : q0c1 ] ) . on the other hand , the deceleration parameter for the standard model can be calculated from the density parameters . therefore , at the present time , the acceleration for the simple model is slower than that for the standard model . this is because the density parameter for the cosmological constant is dominant in the standard model ; in contrast , in the simple model we assume the matter - dominated universe in entropic cosmology , without adding the cosmological constant and dark energy . we have examined the non - adiabatic expansion of the late universe and discussed the evolution of the entropy on the hubble horizon , to study entropic cosmology from a thermodynamics viewpoint . for this purpose , we have employed the two modified friedmann equations , i.e. , the modified friedmann equation and the modified acceleration equation . first of all , based on the two equations for entropic cosmology , we have examined the properties of the single - fluid dominated universe , neglecting high - order terms for quantum corrections . consequently , we can systematically summarize the properties of the late universe through a parameter related to the entropic - force terms . it is found that , at late times , the entropy on the hubble horizon increases slowly as this parameter decreases ( or as the influence of the entropic - force terms increases ) , while the expansion of the universe accelerates . we have also derived the continuity equation from the first law of thermodynamics , assuming a non - adiabatic expansion of the universe . using the obtained continuity equation , we have formulated the generalized friedmann and acceleration equations , and have proposed a simple model as one possible model . through the luminosity distance , it is successfully shown that the simple model can explain the present accelerating universe and agrees well with both the supernova data and the fine - tuned standard model . on the other hand , the increase of the entropy for the simple model is uniform , whereas the increase of the entropy for the standard model becomes gradually slower , especially after the present time . in other words , the simple model implies that the present time is not a special time , in contrast to the prediction of the standard model ; the present simple model thus predicts a future different from that of the standard model . the present study has revealed the fundamental properties of the non - adiabatic expanding universe in entropic cosmology .
as one of several possible scenarios , the generalized formulation and the simple model considered here will help in understanding the accelerated expansion of the universe . of course , it is difficult to determine the parameter related to the temperature on the hubble horizon ; however , at least in principle , it is possible to discuss the accelerating universe quantitatively by means of the present entropic cosmology . further discussions and observation data will be required to examine the present and future of the universe . the modified continuity equation examined here has a so - called non - zero term on the right - hand side ; therefore , throughout the present paper , we have called this the non - adiabatic process . if we employ an effective description similar to bulk viscous cosmology , the non - zero term on the right - hand side can be cancelled in appearance . alternatively , the non - zero term may be interpreted as the interchange of energy between the bulk ( the universe ) and the boundary ( the horizon of the universe ) . accordingly , it will be necessary to study entropic cosmology from various viewpoints . in the present study , we consider a homogeneous and isotropic universe , and assume an entropy on the horizon of the universe for entropic cosmology . usually , however , only a bulk viscosity can generate an entropy in a homogeneous and isotropic universe . such a cosmological model is called ` bulk viscous cosmology ' and has been extensively investigated . ( for bulk viscous cosmology see , e.g. , the work of barrow . ) in this appendix , we discuss similarities and differences between bulk viscous cosmology and entropic cosmology . in bulk viscous cosmology , a bulk viscosity of cosmological fluids is assumed , and an effective pressure is given in terms of the bulk viscosity . substituting the effective pressure of eq . ( [ eq_bulk_p ] ) into the continuity equation , eq . ( [ eq_bulk_con0 ] ) , and arranging , we obtain eq . ( [ eq_bulk_con1 ] ) , which is similar to eq . ( [ eq : fluid0 ] ) because of its non - zero right - hand side . the right - hand side of eq . ( [ eq_bulk_con1 ] ) is related to a classical entropy generated by bulk viscous stresses . on the other hand , in entropic cosmology , we assume an entropy ( and a temperature ) on the horizon of the universe , instead of the classical entropy . accordingly , the right - hand side of eq . ( [ eq : fluid0 ] ) is related to the entropy on the horizon , unlike in bulk viscous cosmology . in fact , davies and barrow have suggested a total entropy , i.e. , the sum of the entropy on the horizon and the classical entropy generated by the bulk viscous stresses , to discuss the generalized second law of event horizon thermodynamics . that is , in the present entropic cosmology , we focus on the entropy on the horizon , neglecting the classical entropy discussed above . we now examine the friedmann equation . in bulk viscous cosmology , the friedmann equation is given by eq . ( [ eq_bulk_frw1 ] ) , which does not include an additional term such as a cosmological constant .
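the bulk - viscous relations referred to in this appendix can be written in their standard form ( \zeta denotes the bulk viscosity ) :

\[ p' = p - 3 \zeta H , \qquad \dot{\rho} + 3 H \left( \rho + \frac{p}{c^2} \right) = \frac{9 \zeta H^2}{c^2} , \qquad H^2 = \frac{8 \pi G}{3} \rho , \]

and the corresponding acceleration equation , discussed next , picks up the driving term

\[ \frac{\ddot{a}}{a} = - \frac{4 \pi G}{3} ( 1 + 3w ) \rho + \frac{12 \pi G \zeta}{c^2} H . \]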
on the other hand , in entropic cosmology , the friedmann and acceleration equations have additional driving terms , which are derived fromthe usually neglected surface terms on the horizon .however , an additional term appears in the acceleration equation for bulk viscous cosmology .for example , the acceleration equation can be arranged as where the last term , , corresponds to the additional driving term .the additional term can explain the accelerated expansion of the universe .entropic - force terms of the modified friedmann and acceleration equations include four dimensionless constants , , , and . in this appendix, we determine the dimensionless constants and examine an effective description for entropic cosmology . to this end , we first derive the modified continuity equation from the modified friedmann and acceleration equations , although we have already derived the modified continuity equation from the first law of thermodynamics . this is because we can determine most of the dimensionless constants using the two continuity equations . the modified friedmann and acceleration equations , i.e. , eqs . ( [ eq : mfrw01(h4=0 ) ] ) and ( [ eq : mfrw02(h4=0 ) ] ) , can be written as and where and are given by and the four coefficients , , , and are dimensionless constants .equation ( [ eq : a_mfrw01 ] ) is the same formulation as eq .( [ eq : mfrw01(f ) ] ) . therefore , as shown in eq .( [ eq : dda_a ] ) , arranging eq .( [ eq : a_mfrw01 ] ) gives substituting eq .( [ eq : a_dda_a ] ) into eq .( [ eq : a_mfrw02 ] ) , and arranging this using , we have when and are general functions , eq . ( [ eq : a_drho ] ) represents the generalized continuity equation .however , in this appendix , and are given by eqs .( [ eq : a_f ] ) and ( [ eq : a_g ] ) , respectively .differentiating eq .( [ eq : a_f ] ) with respect to gives accordingly , substituting eqs .( [ eq : a_f ] ) , ( [ eq : a_g ] ) , and ( [ eq : a_df ] ) into eq .( [ eq : a_drho ] ) , we obtain .\label{eq : a_fluid}\end{aligned}\ ] ] equation ( [ eq : a_fluid ] ) is the modified continuity equation derived from the modified friedmann and acceleration equations .we now determine the dimensionless constants as many as possible . for this purpose, we compare eq .( [ eq : a_fluid ] ) with the modified continuity equation derived from the first law of thermodynamics , i.e. , eq .( [ eq : fluid0r3 ] ) : the two modified continuity equations , i.e. , eqs .( [ eq : a_fluid ] ) and ( [ eq : a_fluid_thermo ] ) , must be consistent with each other . therefore , three dimensionless constants can be determined when , , and are not .the three constants in eqs .( [ eq : a_mfrw01 ] ) and ( [ eq : a_mfrw02 ] ) are given by note that and should be determined from a different viewpoint , as mentioned previously .consequently , the modified self - consistent equations ( i.e. , the modified friedmann , acceleration , and continuity equations ) are summarized as entropic - force terms of the modified friedmann equation are only -terms , as shown in eq .( [ eq : a_mfrw01_4 ] ) . in the present study , we have selected and , to set up a simple model .we can confirm that the selection is consistent with eqs .( [ eq : a_const1 ] ) , ( [ eq : a_const2 ] ) , and ( [ eq : a_const3 ] ) . of course, the simple model is consistent with eqs .( [ eq : a_mfrw01_4 ] ) , ( [ eq : a_accel4 ] ) , and ( [ eq : a_fluid4 ] ) , since is selected as . 
if eqs .( [ eq : a_mfrw01 ] ) and ( [ eq : a_mfrw02 ] ) are used for the modified friedmann and acceleration equations , we can propose the above self - consistent equations to examine a non - adiabatic expansion of the late universe in entropic cosmology .finally , we examine an effective description for entropic cosmology discussed in sec . [ modified continuity equation ] . this is because it is possible to obtain an effective continuity ( conservation ) equation , when we employ an effective pressure similar to bulk viscous cosmology . in this study ,the effective pressure is given by and the equation of state parameter for the effective description is where is different from of eq .( [ eq : w ] ) .we can arrange eqs .( [ eq : a_accel4 ] ) and ( [ eq : a_fluid4 ] ) , using eqs .( [ eqa : pprime ] ) and ( [ eqa : wprime ] ) . as a result ,the self - consistent equations based on the effective description are summarized as as shown in eqs .( [ eqa : accel3e ] ) and ( [ eqa : fluid0r3e ] ) , the effective description helps to simplify the formulas of entropic cosmology .in particular , as shown in eq .( [ eqa : fluid0r3e ] ) , the so - called non - zero term on the right - hand side of eq .( [ eq : a_fluid4 ] ) is cancelled in appearance .eq . ( [ eqa : fluid0r3e ] ) may be suitable for discussing the continuity equation .however , through the present paper , we employ eq .( [ eq : a_fluid4 ] ) , to make the non - zero term clear .[ it should be noted that not only eq .( [ eq : a_fluid4 ] ) but also eq .( [ eqa : fluid0r3e ] ) is different from the continuity ( conservation ) equation discussed by easson _similarly , our dimensionless constants determined in this study are expected to be different from their suggested constants . ]s. perlmutter _et al . _ ,nature * 391 * , 51 ( 1998 ) .s. perlmutter _et al . _ ,j. * 517 * , 565 ( 1999 ) ; arxiv : astro - ph/9812133v1 .a. g. riess _. j. * 116 * , 1009 ( 1998 ) ; arxiv : astro - ph/9805201v1 . a. g. riess __ , astrophys .j. * 607 * , 665 ( 2004 ) ; arxiv : astro - ph/0402512v2 .a. g. riess _et al . _ ,j. * 659 * , 98 ( 2007 ) ; arxiv : astro - ph/0611572v2 .http://braeburn.pha.jhu.edu/~ariess/r06/sn_sample .m. tegmark _et al . _ , astrophys . j. * 606 * , 702 ( 2004 ) .d. n. spergel _et al . _ ,. ser . * 170 * , 377 ( 2007 ) .w. m. wood - vasey _ et al . _ ,j. * 666 * , 694 ( 2007 ) ; arxiv : astro - ph/0701041v1 . m. kowalski _et al . _ ,j. * 686 * , 749 ( 2008 ) .m. hicken _et al . _ ,j. * 700 * , 1097 ( 2009 ) .e. komatsu _ et al ._ , astrophys. ser . * 180 * , 330 ( 2009 ) ; arxiv:1001.4538v3 [ astro-ph.co ] .m. fukugita _et al . _ ,j. * 361 * , l1 ( 1990 ) .s. m. carroll and w. h. press , annu .. astrophys . * 30 * , 499 ( 1992 ) .s. weinberg , _ cosmology _ , oxford university press , 2008 .j. b. hartle , _ gravity : an introduction to einstein s general relativity _, pearson education , inc . , publishing as addison wesley , 2002. b. ryden , _ introduction to cosmology _ , pearson education , inc . , publishing as addison wesley , 2002 ._ , _ cosmology i _ , modern astronomy series 2 , edited by k. sato and t. futamase , nippon hyoron sha co. , 2008 , _ in japanese_. g. f. r. ellis , r. maartens , and m. a. h. maccallum , _relativistic cosmology _ , cambridge university press , 2012 .et al . _ ,phys . * 56 * , 525 ( 2011 ) .et al . _ ,space sci .* 342 * , 155 ( 2012 ) .d. a. easson , p. h. frampton , and g. f. smoot , phys .b * 696 * , 273 ( 2011 ) ; arxiv:1002.4278v3 [ hep - th ] .d. a. easson , p. h. frampton , and g. f. 
smoot , int .a * 27 * , 1250066 ( 2012 ) : arxiv:1003.1528v3 [ hep - th ] .j. d. bekenstein , phys .d * 7 * , 2333 ( 1973 ) ; phys .d * 9 * , 3292 ( 1974 ) ; phys .d * 12 * , 3077 ( 1975 ) .s. w. hawking , phys .* 26 * , 1344 ( 1971 ) ; nature * 248 * , 30 ( 1974 ) ; commun .* 43 * , 199 ( 1975 ) ; phys .d * 13 * , 191 ( 1976 ) .y. f. cai , j. liu , and h. li , phys .b * 690 * , 213 ( 2010 ) . y. f. cai and e. n. saridakis , phys .b * 697 * , 280 ( 2011 ) .t. qiu and e. n. saridakis , phys .d * 85 * , 043504 ( 2012 ) .r. casadio and a. gruppuso , phys .d * 84 * , 023503 ( 2011 ) .y. s. myung , astrophys .space sci . * 335 * , 553 ( 2011 ) ; arxiv:1005.2240v2 [ hep - th ] .f. e. m. costa , j. a. s. lima , and f. a. oliveira , arxiv:1204.1864v1 [ astro-ph.co ] .s. basilakos , d. polarski , and j. sol , phys .d * 86 * , 043010 ( 2012 ) : arxiv:1204.4806v1 [ gr - qc ] .p. c. w. davies , _ the physics of time asymmetry _, surrey university press - university of california press , 1974 .p. c. w. davies , rep .. phys . * 41 * , 1313 ( 1978 ) ; nature * 301 * , 398 ( 1983 ) .p. c. w. davies , class .quantum grav . * 4 * , l225 ( 1987 ) ; ann .henri poincar * 49 * , 297 ( 1988 ) .s. frautschi , science * 217 * , 593 ( 1982 ) .j. d. barrow , nature * 272 * , 211 ( 1978 ) .j. d. barrow , phys .lett . * 46 * , 963 ( 1981 ) .d. sugimoto , y. eriguchi , and i. hachisu , prog .supplement * 70 * , 154 ( 1981 ) .p. c. w. davies and t. m. davis , foundations of physics , * 32 * , 1877 ( 2003 ) .t. m. davis , p. c. w. davies , and c. h. lineweaver , class .quantum grav .* 20 * , 2753 ( 2003 ) .t. m. davis , ph .d. theses , university of new south wales , 2003 .w. buchmller and j. jaeckel , arxiv : astro - ph/0610835v1 .b. wang , y. gong , and e. abdalla , phys .d * 74 * , 083520 ( 2006 ) .y. gong , b. wang , and a. wang , journal of cosmology and astroparticle physics , * 01 * , 024 ( 2007 ) ; arxiv : gr - qc/0610151v2 .i. brevik , phys .d * 65 * , 127302 ( 2002 ) .i. brevik and o. gorbunova , gen .. gravit . * 37 * , 2039 ( 2005 ) .s. nojiri and s. d. odintsov , phys .d * 72 * , 023003 ( 2005 ) . j. ren and x .- h .meng , phys .b * 633*,1 ( 2006 ) . j. c. fabris , s. v. b. goncalves , and r. de s ribeiro , gen .relativ . gravit . *38 * , 495 ( 2006 ) .r. colistete , jr ._ et al . _ ,d * 76 * , 103516 ( 2007 ) .b. li and j. d. barrow , phys .d * 79 * , 103521 ( 2009 ) .meng and x. dou , commun .. phys . * 52 * , 377 , ( 2009 ) ; arxiv:0812.4904v1 [ astro - ph ] .a. avelino and u. nucamendi , journal of cosmology and astroparticle physics * 04 * , 006 ( 2009 ) ; arxiv:0811.3253v2 [ gr - qc ]. w. s. hipolito - ricaldi , h. e. s. velten , and w. zimdahl , journal of cosmology and astroparticle physics * 06 * , 016 ( 2009 ) ; arxiv:0902.4710v2 [ astro-ph.co ] . a. avelino and u. nucamendi , journal of cosmology and astroparticle physics * 08 * , 009 ( 2010 ) ; arxiv:1002.3605v2 [ gr - qc ] .o. f. piattella , j. c. fabris , and w. zimdahl , journal of cosmology and astroparticle physics * 05 * , 029 ( 2011 ) ; arxiv:1103.1328v1 [ astro-ph.co ] . x. dou and x .- h .meng , advances in astronomy * 2011 * , 829340 ( 2011 ) .w. davidson , mon . not .. soc . * 124 * , 79 ( 1962 ) .m. szydlowski , phys .b * 632 * , 1 ( 2006 ) .j. a. s. lima , a. s. m. germano , and l. r. w. abramo , phys .d * 53 * , 4287 ( 1996 ) .j. a. s. lima , s. basilakos , and f. e. m. costa , phys .d * 86 * , 103534 ( 2012 ) .l. amendola , phys .d * 62 * , 043511 ( 2000 ) .w. zimdahl , d. pavn , and l. p. 
chimento , phys .b * 521 * , 133 ( 2001 ) .b. wang , y. gong , and e. abdalla , phys .b * 624 * , 141 ( 2005 ) .b * 662 * , 1 ( 2008 ) .h. a. borges and s. carneiro , gen .. gravit . * 37 * , 1385 ( 2005 ) .s. carneiro , c. pigozzo , and h. a. borges , phys .d * 74 * , 023532 ( 2006 ) . c. pigozzo _ et al ._ , journal of cosmology and astroparticle physics * 08 * , 022 ( 2011 ) ; arxiv:1007.5290v2 [ astro-ph.co ] . j. s. alcaniz _ et al . _ ,b * 716 * , 165 ( 2012 ) .r. s. mendes and c. tsallis , phys . lett .a * 285 * , 273 ( 2001 ) .v. latora , a. rapisarda , and c. tsallis , phys .e * 64 * , 056134 ( 2001 ) .p. h. chavanis , astron . astrophys . * 386 * , 732 ( 2002 ) . a. taruya and m. sakagami , phys .lett . * 90 * , 181101 ( 2003 ) .a. nakamichi and m. morikawa , physica a * 341 * , 215 ( 2004 ) .b. liu and j. goree , phys .* 100 * , 055003 ( 2008 ) . n. komatsu , s. kimura , and t. kiwata , phys .e * 80 * , 041107 ( 2009 ) ; journal of physics : conference series , * 201 * , 012009 ( 2010 ) .n. komatsu , t. kiwata , and s. kimura , phys .e * 82 * , 021118 ( 2010 ) ; phys .e * 85 * , 021132 ( 2012 ) .
in 'entropic cosmology', instead of a cosmological constant, an extra driving term is added to the friedmann equation and the acceleration equation, taking into account the entropy and the temperature on the horizon of the universe. by means of the modified friedmann and acceleration equations, we examine a non-adiabatic-like accelerated expansion of the universe in entropic cosmology. in this study, we consider a homogeneous, isotropic, and spatially flat universe, focusing on a single-fluid (single-component) dominated universe at late times. to examine the properties of the late universe, we solve the modified friedmann and acceleration equations, neglecting higher-order corrections relevant to the early universe. we derive the continuity (conservation) equation from the first law of thermodynamics, assuming non-adiabatic expansion caused by the entropy and temperature on the horizon. using the continuity equation, we formulate the generalized friedmann and acceleration equations and propose a simple model. through the luminosity distance, it is demonstrated that the simple model agrees well both with the observed accelerated expansion of the universe and with a fine-tuned standard (lambda cold dark matter) model. however, we find that the increase of the entropy in the simple model is likely uniform, whereas the increase of the entropy in the standard model tends to slow down gradually, especially after the present time. in other words, the simple model predicts that the present time is not a special time, unlike the prediction of the standard model.
a number of studies have been conducted on the prediction of financial time series with neural networks since rumelhart developed the back propagation algorithm in 1986, which is the most commonly used algorithm for supervised neural networks. with this algorithm the network learns its internal structure by updating its parameter values when it is given training data containing inputs and outputs. we can then use the network with the updated parameters to predict future events containing inputs the network has never encountered. the algorithm is applied in many fields, such as robotics and image processing, and it shows a good performance in the prediction of financial time series. relevant papers on the application of neural networks to financial time series include , , and . in these papers the authors are concerned with the prediction of time series and do not pay much attention to actual investing strategies, although prediction is obviously important in designing practical investing strategies. a forecast of tomorrow's price does not immediately tell us how much to invest today. in contrast to these works, in this paper we directly consider investing strategies for financial time series based on neural network models and ideas from the game-theoretic probability of shafer and vovk (2001). in the game-theoretic probability established by shafer and vovk, various theorems of probability theory, such as the strong law of large numbers and the central limit theorem, are proved by considering the capital processes of betting strategies in various games, such as the coin-tossing game and the bounded forecasting game. in game-theoretic probability a player 'investor' is regarded as playing against another player 'market'. in this framework the investing strategies of investor play a prominent role. prediction is then derived from strong investing strategies (cf. defensive forecasting in ). recently, in , we proposed sequential optimization of the parameter values of a simple investing strategy in multi-dimensional bounded forecasting games and showed that the resulting strategy is easy to implement and performs well in comparison to well-known strategies such as the universal portfolio developed by thomas cover and his collaborators. in this paper we propose sequential optimization of the parameter values of investing strategies based on neural networks. neural network models give a very flexible framework for designing investing strategies. with simulation and with data from the tokyo stock exchange we show that the proposed strategy performs well. the organization of this paper is as follows. in section [sec:sosnn] we propose the sequential optimizing strategy with neural networks. in section [sec:compare] we present some alternative strategies for the purpose of comparison. in section [subsec:back-propagation] we consider an investing strategy using a supervised neural network with the back propagation algorithm. the strategy is closely related to, and reflects, existing research on stock price prediction with neural networks. in section [subsec:markovian] we consider markovian proportional betting strategies, which are much simpler than the strategies based on neural networks. in section [sec:sim] we evaluate the performances of these strategies by monte carlo simulation.
in section[ sec : real ] we apply these strategies to stock price data from tokyo stock exchange .finally we give some concluding remarks in section [ sec : remarks ] .here we introduce the bounded forecasting game of shafer and vovk in section [ subsec : bounded - forecasting - game ] and network models we use in section [ subsec : design - of - network ] . in section [ subsec : gradient ]we specify the investing ratio by an unsupervised neural network and we propose sequential optimization of parameter values of the network .we present the bounded forecasting game formulated by shafer and vovk in 2001 . in the bounded forecasting game , investor s capital at the end of round written as ( ) and initial capital is set to be . in each round investor first announces the amount of money he bets ( ) and then market announces her move ] .+ + end for + we can rewrite investor s capital as , where is the ratio of investor s investment to his capital after round .we call the investing ratio at round .we restrict as in order to prevent investor becoming bankrupt .furthermore we can write as taking the logarithm of we have the behavior of investor s capital in ( [ eq : log - capital ] ) depends on the choice of . specifying a functional form of regarded as an investing strategy .for example , setting to be a constant for all is called the -strategy which is presented in . in this paperwe consider various ways to determine in terms of past values of and and seek better in trying to maximize the future capital , .let denote past values of and let depend on and a parameter : . then is the best parameter value until the previous round . in our sequential optimizing investing strategy, we use to determine the investment at round : for the function we employ neural network models for their flexibility , which we describe in the next section .we construct a three - layered neural network shown in figure [ fig:1 ] .the input layer has neurons and they just distribute the input to every neuron in the hidden layer . also the hidden layer has neurons and we write the input to each neurons as which is a weighted sum of s .as seen from figure [ fig:1 ] , is obtained as where is called the weight representing the synaptic connectivity between the neuron in the input layer and the neuron in the hidden layer .then the output of the neuron in the hidden layer is described as as for activation function we employ hyperbolic tangent function . in a similar way , the input to the neuron in the output layer , which we write , is obtained as where is the weight between the neuron in the hidden layer and the neuron in the output layer .finally , we have which is the output of the network . in the following argument we use as an investment strategy .thus we can write where investor s capital is written as we need to specify the number of inputs and the number of neurons in the hidden layer .it is difficult to specify them in advance .we compare various choices of and in section [ sec : sim ] and section [ sec : real ] .also in we can include any input which is available before the start of round , such as moving averages of past prices , seasonal indicators or past values of other economic time series data .we give further discussion on the choice of in section [ sec : remarks ] . 
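to make the construction concrete, here is a minimal sketch of such a strategy in python. it is an illustration rather than the authors' code: the variable names are ours, bias terms are omitted (as in the description above), and a general-purpose optimizer stands in for the annealed gradient descent employed in the next section; the tanh output keeps the investing ratio inside (-1, 1), so investor can never go bankrupt.

```python
import numpy as np
from scipy.optimize import minimize

def investing_ratio(x_past, W1, W2):
    """three-layered network: K inputs -> hidden tanh units -> one tanh output.

    x_past : the K most recent normalized price changes
    W1     : (L, K) weights between input and hidden layer
    W2     : (L,) weights between hidden layer and output neuron
    """
    hidden = np.tanh(W1 @ x_past)      # outputs of the hidden neurons
    return np.tanh(W2 @ hidden)        # investing ratio alpha_t in (-1, 1)

def log_capital(x, W1, W2, K):
    """log of investor's capital when f(x_{t-K..t-1}; theta) is used each round."""
    total = 0.0
    for t in range(K, len(x)):
        alpha = investing_ratio(x[t - K:t], W1, W2)
        total += np.log1p(alpha * x[t])    # log(1 + alpha_t x_t), with x_t in [-1, 1]
    return total

def best_weights(x_hist, K, L, seed=0):
    """theta*_{t-1}: weights maximizing the log capital over the rounds seen so far."""
    rng = np.random.default_rng(seed)
    w0 = rng.normal(scale=0.1, size=L * K + L)
    unpack = lambda w: (w[:L * K].reshape(L, K), w[L * K:])
    obj = lambda w: -log_capital(x_hist, *unpack(w), K)
    return unpack(minimize(obj, w0).x)
```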
in this sectionwe propose a strategy which we call sequential optimizing strategy with neural networks ( sosnn ) .we first calculate that maximizes this is the best parameter values until the previous round .if investor uses as the investing ratio , investor s capital after round is written as for maximization of ( [ eq : maximize ] ) , we employ the gradient descent method . with this method , the weight updating algorithm of with the parameter ( called the learning constant )is written as where and the left superscript to indexes the round .thus we obtain similarly , the weight updating algorithm of is expressed as where thus we obtain here we summarize the algorithm of sosnn at round . 1 .given the input vector and the value of , we first evaluate and then . also we set the learning constant .we calculate and then with and of the previous step .then we update weight with the weight updating formula and .3 . go back to step 1 replacing the weight with updated values .after sufficient times of iteration , in ( [ eq : maximize ] ) converges to a local maximum with respect to and and we set and , which are elements of . then we evaluate investor s capital after round as .here we present some strategies that are designed to be compared with sosnn . in section [ subsec : back - propagation ] we present a strategy with back - propagating neural network .the advantage of back - propagating neural network is its predictive ability due to `` learning '' as previous researches show . in section [ subsec : markovian ]we show some sequential optimizing strategies that use rather simple function for than sosnn does .in this section we consider a supervised neural network and its optimization by back propagation .we call the strategy nnbp .it decides the betting ratio by predicting actual up - and - downs of stock prices and can be regarded as incorporating existing researches on stock price prediction .thus it is suitable as an alternative to sosnn . for supervised network , we train the network with the data from a training period , obtain the best value of the parameters for the training period and then use it for the investing period .these two periods are distinct . for the training periodwe need to specify the desired output ( target ) of the network for each day .we propose to specify the target by the direction of market s current price movement .thus we set note that this is the best investing ratio if investor could use the current movement of market for his investment .therefore it is natural to use as the target value for investing strategies .we keep on updating by cycling through the input - output pairs of the days of the training period and finally obtain after sufficient times of iteration . throughout the investing period we use and investor s capital after round in the investing periodis expressed as back propagation is an algorithm which updates weights and so that the error function decreases , where is the desired output of the network and is the actual output of the network .the weight of day is renewed to the weight of day as where also weight is renewed as where at the end of each step we calculate the training error defined as where is the length of the training period .we end the iteration when the the training error becomes smaller than the threshold , which is set sufficiently small . herelet us summarize the algorithm of nnbp in the training period . 1. we set .2 . 
given the input vector and the value of , we first evaluate and then .also we set the learning constant .we calculate and then with and of the previous step .then we update weight with the weight updating formula and .4 . go back to step 2 setting and while . when we set and and continue the algorithm until the training error becomes less than .in this section we present some sequential optimizing strategies that are rather simple compared to strategies with neural network in section [ sec : sosnn ] and section [ subsec : back - propagation ] .the strategies of this section are generalizations of markovian strategy in for coin - tossing games to bounded forecasting games .we present these simple strategies for comparison with sosnn and observe how complexity in function increases or decreases investor s capital processes in numerical examples in later sections .consider maximizing the logarithm of investor s capital in ( [ eq : log - capital ] ) : we first consider the following simple strategy of in which we use , where in this paper we denote this strategy by mkv0 . as a generalization of mkv0consider using different investing ratios depending on whether the price went up or down on the previous day .let when was positive and when it was negative .we denote this strategy by mkv1 . in the betting on the day we use and , where and here denotes the indicator function of the event in .the capital process of mkv1 is written in the form of ( [ eq : log - capital ] ) as we can further generalize this strategy considering price movements of past two days .let and let we denote this strategy by mkv2 .we will compare performances of the above markovian proportional betting strategies with strategies based on neural networks in the following sections .in this section we give some simulation results for strategies shown in section [ sec : sosnn ] and section [ sec : compare ] .we use two linear time series models to confirm the behavior of presented strategies .linear time series data are generated from the box - jenkins family , autoregressive model of order 1 ( ar(1 ) ) and autoregressive moving average model of order 2 and 1 ( arma(2,1 ) ) having the same parameter values as in .ar(1 ) data are generated as and arma(2,1 ) data are generated as where we set .after the series is generated , we divide each value by the maximum absolute value to normalize the data to the admissible range ] . in sosnn, we use the first values of as initial values and the iteration process in gradient descent method is proceeded until and with the upper bound of steps . as for the learning constant , we use learning - rate annealing schedules which appear in section 3.13 of . with annealing schedule called the search - then - converge schedule we put at the step of iteration as where and are constants and we set and . in nnbp , we train the network with five different training sets of observations generated by ( [ eq : ar1data ] ) and ( [ eq : armadata ] ) .we continue cycling through the training set until the training error becomes less than and we set with the upper bound of steps .also we check the fit of the network to the data by means of the training error for some different values of , and . in markovian strategies ,we again use the first values of as initial values .we also adjust the data so that the betting is conducted on the same data regardless of in sosnn and nnbp or different number of inputs among markovian strategies . 
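as a concrete illustration of this setup, the sketch below generates an ar(1) series, normalizes it to the admissible range [-1, 1], and runs the mkv0 strategy by re-optimizing the constant investing ratio at every round. the coefficients `phi` and `sigma` are placeholders, since the paper takes its parameter values from the cited reference, and the function names are ours; mkv1 would apply the same one-dimensional optimization separately to the rounds following an up move and a down move, and the arma(2,1) case is analogous.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def ar1(n, phi=0.5, sigma=1.0):
    """x_t = phi * x_{t-1} + eps_t with gaussian noise (placeholder coefficients)."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(scale=sigma)
    return x / np.max(np.abs(x))        # normalize to the admissible range [-1, 1]

def mkv0_ratio(past, eps=1e-6):
    """best constant investing ratio for the rounds seen so far (the mkv0 strategy)."""
    obj = lambda a: -np.sum(np.log1p(a * past))
    return minimize_scalar(obj, bounds=(-1 + eps, 1 - eps), method='bounded').x

x = ar1(1000)
capital, burn_in = 1.0, 20              # the first 20 values serve only as initial data
for t in range(burn_in, len(x)):
    alpha = mkv0_ratio(x[:t])
    capital *= 1.0 + alpha * x[t]
print('log capital of mkv0:', np.log(capital))
```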
in table [sim] we summarize the results of sosnn, nnbp, mkv0, mkv1 and mkv2 under ar(1) and arma(2,1). the values presented are averages of the results of five different simulation runs of ([eq:ar1data]) and ([eq:armadata]). as for sosnn, we simulated fifty cases (combinations of and ), but we only report the cases of and some choices of , because the purpose of the simulation is to test whether works better than other choices of under ar(1) and whether works better under arma(2,1). for nnbp we only report the result for one case, since fitting the parameters to the data is quite a difficult task due to the characteristics of the desired output (target). also, once we obtained values of , and with a training error less than the threshold, we found that the network had successfully learned the input-output relationship, so we did not test other choices of these parameters (see the appendix for more details). we set , and in the simulation with the ar(1) model, and , and in the simulation with the arma(2,1) model. investor's capital process for each choice of and in sosnn, nnbp and each markovian strategy is shown in three rows, corresponding to rounds , , of the betting (excluding the initial 20 rounds in sosnn and the markovian strategies). the fourth row of each result for nnbp shows the training error after learning in the training period. the best value among the choices of and in sosnn, or among the markovian strategies, is written in bold and marked with an asterisk, and the second best value is written in bold and marked with two asterisks. calculation results written with `` '' are cases in which the simulation did not give proper values for some reason.

table [sim]. log capital processes of sosnn, nnbp, mkv0, mkv1 and mkv2 under ar(1) and arma(2,1). [table omitted]

in figure [fig:2] we show the movements of the closing prices of each company during the investing period. in figures [fig:3]-[fig:5] we show the log capital processes of the results shown in table [real] to compare the performance of each strategy: figure [fig:3] is for sony, figure [fig:4] is for nomura holdings, and figure [fig:5] is for ntt. for sosnn we plotted the result for the and that gave the best performance at (the bottom row of the three rows) in table [real]. as we see from the above figures, nnbp, which shows a competitive performance for the two linear models in section [sec:sim], gives the worst result. it is thus clear that the network failed to capture the trend in the betting period even though it fits well in the training period. also, the results are favorable to sosnn if appropriate values of and are adopted. we proposed investing strategies based on neural networks which directly consider investor's capital process and are easy to implement in practical applications. we also presented numerical examples for simulated and actual stock price data to show the advantages of our method. in this paper we only adopted normalized values of past market moves as the input, although any data available before the start of round can be used as part of the input, as mentioned in section [subsec:design-of-network]. let us summarize other possibilities considered in existing research on financial prediction with neural networks. the simplest choice is to use raw data without any normalization as in , in which time series of the athens stock index are analyzed to predict the future daily index.
in adopt price of faz - index ( one of the german equivalents of the american dow - jones - index ) , moving averages for 5 , 10 and 90 days , bond market index , order index , us - dollar and 10 successive faz - index prices as inputs to predict the weekly closing price of the faz - index .also in they use 12 technical indicators to predict the s`&`p 500 stock index one month in the future . from theseresearches we see that for longer prediction terms ( such as monthly or yearly ) , longer moving averages or seasonal indexes become more effective .thus those long term indicators may not have much effect in daily price prediction which we presented in this paper . on the other hand , adopting data which seems to have a strong correlation with closing prices of tokyo stock exchange such as closing prices of new york stock exchange of the previous day may increase investor s capital processes presented in this paper . since there are numerical difficulties in optimizing neural networks , it is better to use small number of effective inputs for a good performance . another important generalization of the method of this paper is to consider portfolio optimization .we can easily extend the method in this paper to the betting on multiple assets .let the output layer of the network have neurons as shown in figure [ fig:6 ] and the output of each neuron is expressed as , .then we obtain a vector of outputs . the number of neurons refers to the number of different stocks investor invests . investor s capital after round written as where thus also in portfolio cases we see that our method is easy to implement and we can evaluate investor s capital process in practical applications .here we discuss training error in the training period of nnbp . in this paperwe set the threshold for ending the iteration to , while the value commonly adopted in many previous researches is smaller , for instance , .we give some details on our choice of .let us examine the case of nomura holdings in section [ sec : real ] . in figure [ fig:6 ]we show the training error after each step of iteration in the training period calculated with ( [ eq : te ] ) .while the plotted curve has a typical shape as those of previous researches , it is unlikely that the training error becomes less than .also in figure [ fig:7 ] we plot for each calculated with parameter values after learning .we observe that the network fails to fit for some points ( actually days out of days ) but perfectly fits for all other days .it can be interpreted that the network ignores some outliers and adjust to capture the trend of the whole data .99 e. m. azoff ._ neural network time series forecasting of financial markets ._ wiley , chichester , 1994 .g. p. e. box and g. m. jenkins ._ time series : analysis forecasting and control_. holden - day , san francisco , 1970 . t. m. cover .universal portfolios ._ mathematical finance _ , * 1 * , no.1 , 129 , 1991 .c. darken , j. chang and j. moody . learning rate schedules for faster stochastic gradient search ._ ieee second workshop on neural networks for signal processing _ , 312 , 1992 . b. freisleben .stock market prediction with backpropagation networks ._ industrial and engineering applications of artificial intelligence and expert system 5th international conference _ , 451460 , 1992 . m. hanias , p. curtis and j. thalassinos .prediction with neural networks : the athens stock exchange price indicator ._ european journal of economics , finance and administrative sciences _ , * 9 * , 2127 , 2007 . s. s. 
haykin ._ neural networks and learning machines_. 3rd ed . , prentice hall , new york , 2008 . n. l. d. khoa , k. sakakibara and i. nishikawastock price forecasting using back propagation neural networks with time and profit based adjusted weight factors ._ sice - icase international joint conference _ , 54845488 , 2006 . m. kumon , a. takemura and k. takeuchi .sequential optimizing strategy in multi - dimensional bounded forecasting games .arxiv:0911.3933v1 , 2009 .d. e. rumelhart , g. e. hinton and r. j. williams .learning internal representation by backpropagating errors ._ nature _ , * 323 * , 533536 , 1986 . g. shafer and v. vovk ._ probability and finance : it s only a game!_. wiley , new york , 2001 . k. takeuchi , m. kumon and a. takemura . multistep bayesian strategy in coin - tossing games and its application to asset trading games in continuous time .arxiv:0802.4311v2 , 2008 .conditionally accepted to _stochastic analysis and applications_. v. vovk , a. takemura and g. shafer .defensive forecasting ._ proceedings of the 10th international workshop on artificial intelligence and statistics _ ( r. g. cowell and z. ghahramani editors ) , 365372 , 2005 .y. yoon and g. swales .predicting stock price performance : a neural network approach ._ proceedings of the 24th annual hawaii international conference on system _ , * 4 * , 156162 , 1991 . g. p. zhang .an investigation of neural networks for linear time - series forecasting ._ computers ` & ` operations research _ , * 28 * , no.12 , 11831202 , 2001 .
in this paper we propose an investing strategy based on neural network models combined with ideas from the game-theoretic probability of shafer and vovk. our proposed strategy uses the parameter values of a neural network with the best performance up to the previous round (trading day) for deciding the investment in the current round. we compare the performance of our proposed strategy with various strategies, including a strategy based on supervised neural network models, and show that our procedure is competitive with other strategies.
a few decades ago c. anfinsen [ ] showed that the structural information of naturally occurring proteins is entirely encoded by the corresponding amino acid sequence . since then many biologists , chemists and physicists have spent their efforts in trying to identify and simulate the mechanisms through which a given sequence reaches its stable , native conformation ( protein folding ) [ ] .the inverse problem , also known as protein design , has similarly resisted the efforts of an ever - growing number of researchers who are now tackling it with an arsenal of techniques including _ ab initio _ molecular dynamics and concepts of theoretical physics [ ] .the complexity of the problem is enormous because , in principle , it entails an exhaustive comparison of the native states of all sequences in search for the one(s ) matching the desired target structure [ ] .this problem has been recently formulated into a general mathematical form appropriate for numerical implementation [ ] which shows that solving the design problem for a structure , , amounts to the identification of the amino acid sequence , , that maximizes the occupation probability , , where is the boltzmann factor , is the energy of the sequence over the structure and the sum in the denominator is taken over all possible structures , having the same length of .the winning sequence will maximize at all temperatures below the folding transition temperature ( where the occupation probability of the native state is macroscopic ) .the occupation probability has the following simple physical interpretation .the quantity implicitly defined in ( [ eqn : occprob ] ) corresponds to the free energy of sequence .below the folding transition the dominant contribution to comes from the ground state(s ) of .hence , maximising the functional at low temperature is equivalent to identifying the sequences for which corresponds to the ground state energy and the ground state degeneracy is the lowest possible .several obstacles need to be overcome to implement eq .( [ eqn : occprob ] ) .first it is necessary to know the form of , i.e. the amino acid interaction potentials .second the calculation of for a given sequence entails , in principle , a complete exploration of the conformation space .finally , the quantities need to be calculated for all sequences in order to find the ones maximising eq .( [ eqn : occprob ] ) .the exploration of the sequence space is rather easy to carry out and may simply involve a generation of random sequences ; furthermore the dimension of the sequence space is often restricted by grouping the 20 different amino acids occurring in nature into a reduced number of classes according to their chemical similarities .instead , the sum over alternative conformations , in ( [ eqn : occprob ] ) requires the generation of physically stable structures that compete significantly with the true native state of to be occupied at low temperature .these give the most significant contribution to below the folding transition temperature .this problem is usually circumvented by neglecting the free energy contribution in eq .( [ eqn : occprob ] ) [ ] . 
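for small lattice systems, the functional in eq. ([eqn:occprob]) can be evaluated exactly by brute force. the sketch below uses hypothetical inputs of our own choosing: `energies[s, c]` holds the energy of sequence s mounted on compact structure c, and the occupation probability of a target structure is its boltzmann-weighted share of the full structural ensemble.

```python
import numpy as np

def occupation_probability(energies, target, beta=1.0):
    """p(target | sequence) for every sequence.

    energies : (n_sequences, n_structures) array of E(sequence, structure)
    target   : column index of the target structure
    """
    boltz = np.exp(-beta * energies)
    return boltz[:, target] / boltz.sum(axis=1)   # denominator: sum over all structures

# toy ensemble: 3 sequences, 4 compact structures (hypothetical energies)
E = np.array([[-3.0, -1.0, -1.0, 0.0],
              [-2.0, -2.0, -1.0, 0.0],
              [-1.0, -3.0, -2.0, 0.0]])
p = occupation_probability(E, target=0, beta=2.0)
print(p.argmax(), p)   # the sequence maximizing p solves the design problem for structure 0
```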
maximising corresponds to identifying the sequence with the lowest possible energy on . this procedure is, in principle, not guaranteed to yield the correct answer. in fact, it may well be that the sequence having the smallest energy on has an even lower energy on a different structure. despite the fact that the method is not rigorous, it has encountered some favour due to its simplicity of use. the aim of the present paper is to investigate how the efficiency and reliability of this procedure vary as 1. the number of classes into which the amino acids are subdivided is increased while the peptide length is held fixed, 2. the length is changed while the number of classes is fixed, 3. the amino acids in some positions are kept quenched while the others are chosen to minimize the total energy. these questions will be formulated in lattice contexts, where an impartial and rigorous assessment of design techniques can be carried out with the aid of a computer. in particular we will resort to exhaustive enumeration whenever computational resources allow it. we will limit our structural ensemble to compact structures on a square lattice. for lengths for which exact enumeration is feasible (up to a few dozen residues), the square lattice can yield a ratio of exposed to buried residues much closer to the real case than its cubic counterpart. from the analysis presented here it will appear that design techniques based on energy minimization encounter growing limitations as the number of amino acid classes and the peptide length increase to approach realistic values. we will also examine approximations to the free energy that turn out to be more efficient than energy minimization procedures while taking up the same amount of computational time. throughout this study we will adopt the following hamiltonian where denotes the residue at position , is the interaction matrix and is equal to 1 if and are neighbouring residues that are not consecutive along the chain and zero otherwise. design techniques based on energy minimization were first introduced by e. i. shakhnovich in the context of lattice models [ ]. this procedure has been justified within the approximation of the discrete random energy model [ ] in reference [ ]. one of the most stringent tests of this method was carried out in a competition between the research teams of harvard and san francisco [ ]. the goal was to design 10 three-dimensional compact structures of 48 beads within the hp framework. the hp model consists of only two classes of amino acids, hydrophobic [h] and polar [p]. a favourable contact energy, , was assigned to two non-consecutive residues which are one lattice spacing apart, while the other interactions, , , were set equal to zero. these values are defined up to a sufficiently negative additive constant that guarantees that the ground states are all compact. this model favours the collapse of a hydrophobic core and is thought to mimic the main driving force of protein folding [ ]. the design strategy followed by the harvard group was to consider only sequences with the same number of h and p residues (equal composition) and then to identify, among them, the sequences having minimum energy on each target structure. disappointingly, a correct answer was found in only one of ten cases [ ].
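the hamiltonian above takes a particularly compact form for the hp model. the sketch below uses our own data layout (a conformation as a list of lattice coordinates); a contact contributes only when two residues are nearest neighbours on the lattice, are not consecutive along the chain, and are both hydrophobic.

```python
def hp_energy(sequence, coords, e_hh=-1.0):
    """energy of an hp sequence mounted on a 2d lattice conformation.

    sequence : string over {'H', 'P'}, e.g. 'HPHPPH...'
    coords   : list of (x, y) lattice points, coords[i] = position of residue i
    """
    energy = 0.0
    n = len(sequence)
    for i in range(n):
        for j in range(i + 2, n):                  # chain neighbours never count
            dx = abs(coords[i][0] - coords[j][0])
            dy = abs(coords[i][1] - coords[j][1])
            if dx + dy == 1:                       # geometrical contact on the lattice
                if sequence[i] == 'H' and sequence[j] == 'H':
                    energy += e_hh                 # only h-h contacts are favourable
    return energy

# a 2x2 square conformation of length 4 (toy example): one h-h contact
print(hp_energy('HHPH', [(0, 0), (1, 0), (1, 1), (0, 1)]))   # -1.0
```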
without the restriction that solutions have equal number of h and p residues , ,the energy minimization procedure would have yielded sequences with large values of ( so that a couple of residues was present in correspondence of each geometrical contact of the target structure ) .these solutions , like their counterpart where all residues are assigned as polar , correspond to trivial answers to the design problem since they have an enormous ground state degeneracy .a correct solution to all 10 harvard - san francisco problems was found recently by a careful treatment of the free energy in ( [ eqn : occprob ] ) [ ] .the study also showed that , to a good extent , the free energy of sequences with the same composition is approximately constant . in this case , for a given composition , maximising ( [ eqn : occprob ] ) corresponds to minimising the energy .therefore , provided that solutions exist at a given composition , energy minimization techniques may be apt to find them .one question that arises naturally is : which fraction of solutions having a given concentration can be found using energy minimization techniques ?we will answer this and other questions by considering first the case where the amino acids are subdivided into two classes .two different interaction matrices will be used in order to identify the qualitative features of the energy minimization procedures that do not depend on the details of the model .the first choice of parameters corresponds to the standard hp model [ ] while , for the second one , an interaction matrix previously adopted by the nec group will be used [ ] . in order to collect good statisticsit was decided to perform the design study on the most encodable compact structures , i.e. compact structures that are designed by the highest number of sequences [ ] . such structures have been shown to display a high degree of geometrical regularity mimicking that found in real proteins . as a by - product of our analysiswe found that the encodability property is robust against changes of the model like energy interactions or number of amino acid classes and confirm that the property of encodability has mainly a geometrical origin [ ] . for example we found that the most designable structures of length 16 and 25 ( see fig .[ fig : encod ] ) remained the same when using the the hp parameters or the nec ones .the main tool used for the analysis was a double backtracking algorithm , which mounted every sequence of length with ( they are nearly 13000 ) on each of the 69 compact conformations .working at fixed concentration , we then calculated the number , , of sequences that admitted as their unique ground state and have energy , as well as the total number of sequences, , which attain an energy when mounted on , irrespective of what their ground state is .the behaviour of is shown in the upper plot of fig .[ fig : hp16 ] , while in fig .[ fig : hp16]b ) is sketched the ratio .interestingly has the shape of a bell and shows that only a small fraction of solutions , 3 out of 100 , have minimum energy since the overall number of sequences having minimum energy is four so the fraction of them which are a correct solution to the problem is 0.75 . 
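the tabulation of the total number of sequences attaining each energy can be sketched as follows (an illustration with a hypothetical contact list, not the authors' double backtracking code). enumerating all sequences of length 16 with eight h residues indeed yields binomial(16, 8) = 12870, the 'nearly 13000' quoted above; computing the companion quantity of unique-ground-state counts would additionally require mounting every sequence on all 69 compact conformations.

```python
from itertools import combinations
from collections import Counter

def energy_histogram(contacts, n=16, n_h=8, e_hh=-1.0):
    """number of fixed-composition sequences attaining each energy on a target.

    contacts : pairs (i, j) in geometrical contact on the target structure
               (hypothetical input; |i - j| > 1 is assumed)
    """
    hist = Counter()
    for h_sites in combinations(range(n), n_h):    # all sequences with n_h h's
        h_set = set(h_sites)
        e = sum(e_hh for (i, j) in contacts if i in h_set and j in h_set)
        hist[e] += 1
    return dict(sorted(hist.items()))

# toy target with three contacts
print(energy_histogram([(0, 5), (2, 9), (4, 13)]))
```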
in fig .[ fig : hp25 ] we have represented analogous results for the case .the sequences were constrained to have ; there are approximately such sequences , while the number of compact structures is 1081 .analogous enumerations were carried out for the same sequence lengths and compositions but adopting the nec interaction parameters , , , .the results were qualitatively similar to those of the pure hp model , a part from the fact that the overall number of design solutions was greatly enhanced .for these reasons we present only the results for length 25 and , as shown in fig .[ fig : nec25 ] .it can be seen that presents an oscillatory behaviour .this is due to the fact that there is a small number of amino acid classes and that the entries of the interaction matrix have similar strength .in fact , two closely spaced energy levels could be obtained using very different sets of contact pairs ; the ocillations of reflect the fact that the number of sequences contributing to the two sets of contacts may vary significantly . finally the case where the amino acids are subdivided in 3 and 4 classes was addressed .it is often remarked that the shortcomings of energy minimization routines observed in hp - like contexts are due to the artificially large ground state degeneracy [ ] . introducing more classes of amino acids will tipically remove this artifact and may possibly lead to an improved performance of energy - minimization schemes . due to the large increase of sequence - space volume an exhaustive enumerationis feasible only for chains of length 16 .our interaction matrix for the 3 classes case was the following , the entries in ( [ eqn:3c ] ) were chosen so that the segregation principle is satisfied . for symmetric interactionsthis corresponds to requiring that a property that is satisfied to a large extent by extracted potentials like miyazawa - jernigan [ ] or maiorov - crippen [ ] .the matrix ( [ eqn:3c ] ) may be regarded as an extension of the nec one , since the submatrice corresponding to the interaction between the first two types of residues is equal , a part from a scaling factor of , to that used in reference [ ] .the requirement to use a fixed concentration entails the subdivision of sequences into 153 distinct bins .the performance of design algorithms based on energy minimization is not uniform across bins ; in particular , for some concentrations , the method may fail to find solutions . for two classes of amino acidsthis occurs , for example , for , and nec parameters , where no correct solution can be identified with the energy minimization despite the existence of 190 solutions with that length and composition .the optimal bin for the analysis were chosen so that they contained the highest number of solutions .this insures the collection of the best possible statistics on the behaviour of and . for three classes , one of the most populated bins corresponded to nearly equal composition : , where denotes the number of residues of type .the results are shown in fig .[ fig:3c16 ] .finally , for the 4 types case we extended the matrix ( [ eqn:3c ] ) to and chose to work with sequences with . 
at this composition thereexist nearly solutions ; the behaviour of and is represented in fig .[ fig:4c16 ] .the results presented so far show that energy minimization procedures can be effective in selecting correct ground states of model proteins of reasonable length and number of amino acids .this may come as a surprise since , in principle , the method is not guaranteed to work .part of successes observed here were undoubtedly due to the choice of highly encodable target structures . given that there exist many sequences that are solution to the design problem ( hundreds to thousands according to length , number of classes etc . )it is plausible that a handful of them will have very low values of energy. these will be the ( only ) ones selected by energy minimization procedures .this interpretation is corroborated by the fact that , when dealing with target structures that are poorly encodable ; ( e.g. structures with 20 or fewer design solutions ) , no correct answer to design can be normally found through energy minimization schemes .this fact also sheds some light on the failure of the harvard attempts to solve the harvard - san francisco problems .in fact , the target structures used on that occasion were chosen at random and not according to designability criteria . as argued in the original solution to the problem this was also the reason why only intermediate ( degenerate ) solutions could be found to all 10 harvard - san francisco problems . in the rest of this sectionwe will comment on the limitations that affect energy minimization schemes even when they are adopted in very favourable circumstances such as on designable compact structures .the first limitation regards how much the curve is `` squeezed '' against the minimum energy boundary . in optimization proceduresapplied to realistic off - lattice contexts where the number of classes and peptide lengths is too large to allow a thorough search of the whole sequence space , the minimization procedure will tipically come close to the lowest possible energy but without reaching it .it is then paramount to examine `` how close '' it is necessary to get to the ground state energy to ensure that a significant fraction of the sequences having that energy is a solution to the problem . as a quantitative measure we introduce the parameter where and are respectively the minimum and maximum energies for which solutions to the design problem exists and is the energy below which a randomly picked sequences has more than chance to be a solution to the design problem .thus , the lower the value of ( ) the worst is the performance of the method .since the curves typically do not show a smooth behaviour , is determined with the aid of a high - order polynomial function interpolating . for the results of fig .[ fig : hp16 ] we have . upon increasing the chain length the shoulder of the curveis shifted closer to the minimum energy edge ; in fact , from fig .[ fig : hp25 ] we have . finally , we considered the most encodable structure of length 36 and considered sequences with the same length and . since it is not feasible to mount all the sequences with this composition on each of the 57337 compact structures we resorted to a random sampling of sequences .we obtained .the values of and were found not to change appreciably when using a different composition provided that there exist a significant number of solutions . a similar trend can be observed by increasing the number of amino acid classes . 
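since the success-ratio curves are squeezed against the minimum energy boundary, the parameter introduced above can be computed from the tabulated counts. its definition is partly garbled in the source, so the sketch below adopts our own convention, lambda = (e_star - e_min) / (e_max - e_min), where e_star is the largest energy at which the smoothed success ratio still exceeds the chosen threshold; this matches the stated reading that a lower value signals a worse performance, and the smoothing polynomial mirrors the interpolation mentioned in the text.

```python
import numpy as np

def reliability_lambda(E, ratio, threshold=0.5, deg=7):
    """lambda = (e_star - e_min) / (e_max - e_min), under the convention stated above.

    E     : numpy array of energies at which design solutions exist (ascending)
    ratio : fraction of sequences at each energy that are true design solutions
    """
    deg = min(deg, len(E) - 1)
    smooth = np.polyval(np.polyfit(E, ratio, deg), E)   # high-order interpolating polynomial
    above = np.where(smooth >= threshold)[0]            # energies with reliable design
    e_star = E[above.max()] if above.size else E[0]
    return (e_star - E.min()) / (E.max() - E.min())
```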
for the results of figs .[ fig:3c16 ] and [ fig:4c16 ] one has and showing a steady decrease as a function of the number of classes ( also remember that for two classes we had ) .this shows that the demand on computational efficiency grows rapidly as a function of length and classes . forrealistic design on proteins with a few hundred residues and 20 types of amino acids the required efficiency may fall beyond the reach of computational techniques .nevertheless , even if it were possible to find the sequence(s ) with minimum energy , other issues need to be addressed .in particular , a limitation having far reaching consequences is that the number of correct solutions that can be identified is only a tiny fraction of the existing ones .for example , for , the fraction is 3/100 when using hp parameters and 3/207 for nec ones .these figures drop respectively to 24/1971 and 21/3978 for length 25 , .the proportion decreases more dramatically when considering more than two classes of amino acids , as can be seen in the plots of fig .[ fig:3c16 ] and fig .[ fig:4c16 ] . in designing realistic off - lattice proteinsthis feature is likely to pose severe limitations to the reliability of the method .in fact , it is expected that residues in naturally occurring sequences were not selected on mere energetic considerations but also on structural and biological functionality .these solutions would be correctly identified when maximising the low temperature occupation probability ( [ eqn : occprob ] ) which , as said before , has 100 % efficiency throughout the energy range .on the contrary they could be missed easily by energy - minimization schemes since sequences with minimum energy on a given protein backbone may not be the most suitable ones as far as biological function is concerned .yet energy minimization approaches could , in principle , still be salvaged by arguing that the active sites of a protein are a small fraction of the total residues and , when known , may be fixed _ a priori_. in an attempt to `` improve on nature '' ( e.g. to increase the thermodynamic stability of the native fold ) the rest of the residue could be found subsequently by energy minimization . in our lattice studieswe have found considerable evidence against this picture .we considered solutions to the design problem with intermediate energy and selected a small number of residues .then we performed an energy minimization procedure over sequences that _ a ) _ had the same concentration as the reference sequence and _ b ) _ were equal to the reference sequence in correspondence of the selected residues . in our attempts we found that very frequently the putative solution was wrong .this is best illustrated with a simple example given for chains of length 16 and two classes of amino acids interacting via the nec potentials . in fig .[ fig : test]a ) a ground state conformation having energy e= -13.2 is shown . by quenching two residues at position 4 and 14 and minimizing the energy at constant composition one obtains the sequence hphpppphphhhhpph .the energy of this sequence on the original structure is e= -13.5 but its native state is the structure shown in fig . 
[fig:test]b) with energy e = -14.5. this example is not an exceptional case, since nearly one in 8 random attempts to quench two residues of intermediate sequences and then minimize the energy failed to give good solutions. this is a significant failure rate, since it must be borne in mind that nearly one half of the residues occupy ``hot'' positions that constrain them to be of a well-defined type (e.g. p residues at corners) for most solutions. when quenching two residues at cold positions, the failure rate could be over 50%. quenching three key residues in chains of length 25 may lead to design failure rates as high as 30%. this shows that introducing more constraints on the designed sequence (besides its overall composition) severely impairs the design method instead of making it more reliable. while energy minimization methods appear to be unsatisfactory from both the theoretical and the numerical point of view, they are typically simple and fast to implement. on the other hand, resorting to rigorous techniques that take proper account of the free energy term in eq. ([eqn:occprob]) may, in proportion, require much more cpu time, since each trial solution needs to be mounted on alternative conformations (see [eqn:occprob]). the obvious payoff is that _all_ solutions to design problems can be identified successfully throughout the whole energy range [ ]. there are, however, several design procedures based on approximate treatments of the free energy that, while having the same speed as energy minimization methods, are much more efficient [ ]. these procedures were first developed in the attempt to design real proteins [ ]. in that case, contrary to lattice models, it was impossible to generate alternative off-lattice structures competing with the target one (doing so would amount to being able to perform a direct folding). rather, it was decided to exploit the fact that formally depends only on the sequence, and to expand it as a function of composition and other sequence parameters. contrary to energy minimization techniques, this method did not require any external intervention to fix the correct composition. remarkably, the correct ratio of was nevertheless observed in the optimal solutions [ ]. design strategies based on a functional approach to are not only reliable, but may even be used to determine the best (unknown) amino acid interactions given a two-body (or higher-order) parametrization of the hamiltonian. details on the use of these methods can be found elsewhere [ ]; in the rest of this section we will concentrate on yet another free-energy-based design method, originally proposed by deutsch and kurowsky [ ]. the dk strategy is based on building a table of the relative frequency of geometrical contacts between two residues at sequence positions and , collected over all compact structures. hence, for a given sequence , the free energy can be approximated as ; this may be regarded as an average energy attained by the sequence on compact structures.
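the dk construction lends itself to a compact sketch (our data layout, not the authors' code): the table g stores the relative frequency with which chain positions i and j are in contact across an ensemble of compact structures, and the score below averages the contact energies of a sequence against these frequencies. sequences whose energy on the target lies as far as possible below this average are then retained as putative solutions.

```python
import numpy as np

def contact_frequency(structures, n):
    """g[i, j]: relative frequency of a contact between chain positions i and j,
    collected over an ensemble of compact structures (each given as a contact list)."""
    g = np.zeros((n, n))
    for contacts in structures:
        for (i, j) in contacts:
            g[i, j] += 1.0
            g[j, i] += 1.0
    return g / len(structures)

def dk_score(seq, g, eps):
    """average energy of `seq` over compact structures, in the spirit of the dk method.

    seq : residue class indices; eps : interaction matrix, eps[a][b]
    """
    n = len(seq)
    return sum(eps[seq[i]][seq[j]] * g[i, j]
               for i in range(n) for j in range(i + 2, n))

# toy usage: two structures of length 4, hp-like interactions
g = contact_frequency([[(0, 3)], [(0, 2), (1, 3)]], n=4)
eps = [[-1.0, 0.0], [0.0, 0.0]]
print(dk_score([0, 1, 0, 0], g, eps))    # -1.0
```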
requiring that is minimized selects the sequence(s ) whose energy on lies as low as possible with respect to .this method can be used without constraining the sequence composition to a particular value and its efficiency , albeit not equal to 100 % , is much higher than what is obtainable by energy minimization in analogous circumstances .this was assessed by ranking the sequences according to their normalized dk score : and isolating those that stayed below a pre - assigned threshold .for example , for the case of length 25 , nec parameters , we chose the threshold value as 0.25 .the selected sequences represented our putative solutions and were ordered according to their ground state energy .the efficiency was calculated as the ratio of correct solutions versus putative ones as a function of energy . while , at low energies , the number of correct solutions was approximately the same for the dk method and the energy minimization one , at higher ones the efficiency of the dk procedure could be 100 times higher than the energy minimization procedure .the cumulative efficiency of the dk approach compared to energy - minimization routines over the whole range of energy for which solutions existed was 20:1 ( this range is easily identified with free - energy based methods , but impossible to find out with the energy minimization ) .the same proportion of efficiency was observed for length 36 , where a random sampling of sequences was performedwe have carried out extensive enumeration studies to assess the performance of design techniques based on energy minimization .it is found that these techniques can be effective in selecting correct ground states .nevertheless , it turns out that the overwhelming majority of solutions do not possess minimum energy and hence can not be identified .it was also shown that practical implementations of energy - based design strategies need to be more and more efficient in finding the lowest energy solutions on increasing the sequence length and number of amino acid classes .finally it was found that these methods become unreliable when additional requirements are imposed on the properties of putative solutions , which possibly suggests that the method may be unsuitable to design realistic proteins with desired biological functionality .design techniques incorporating appropriate treatments of the free energy do not suffer these shortcomings and can , in principle , lead to 100 % design success at the expenses of considerable cpu time . in the last section of this paperwe discuss some approximations to the free energy that while being as fast as energy - based methods , appear to be more efficient .furthermore , they do not require any prior fixing of a correct amino acid concentration and could be used effectively to replace energy - minimization techniques when designing realistic proteins. acknowledgments .we thank j. r. banavar , r. dima , jort van mourik and r. zecchina for useful discussions .we are indebted to flavio seno for illuminating suggestions and a careful reading of the manuscript .miyazawa , s. and jernigan r. l. , ( 1985 ) estimation of effective interresidue contact energies from protein crystal - structures - quasi - chemical approximation , _ macromolecules _ , * 18 * , 534 - 552 ; ( 1996 ) , residue - residue potentials with a favorable contact pair term and an unfavorable high packing density term , for simulation and threading _ j. mol biol . _ ,623 - 644 ponder , j. w. & richards , f. m. 
(1987), tertiary templates for proteins - use of packing criteria in the enumeration of allowed sequences for different structural classes. _j. mol. biol._ *193*, 775-791. quinn, t. p., tweedy, n. b., williams, r. w., richardson, j. s. & richardson, d. c. (1994), beta-doublet: de novo design, synthesis, and characterization of a beta-sandwich protein. _proc. natl. acad. sci. usa_ *91*, 8747-8751.
we present a detailed study of the performance and reliability of design procedures based on energy minimization. the analysis is carried out for model proteins where exact results can be obtained through exhaustive enumeration. the efficiency of design techniques is assessed as a function of protein length and of the number of classes into which the amino acids are coarse grained. it turns out that, while energy minimization strategies can identify correct solutions in most circumstances, it may be impossible for numerical implementations of design algorithms to meet the efficiency required to yield correct solutions in realistic contexts. we also investigated how the design efficiency varies when putative solutions are required to obey some external constraints, and found that a restriction of the sequence space impairs the design performance rather than boosting it. finally, some alternative design strategies based on a correct treatment of the free energy are discussed. these are shown to be significantly more efficient than energy-based methods while requiring nearly the same cpu time.
nanoparticles ( nps ) are tiny particles of matter with diameters typically ranging from a few nanometers to a few hundred nanometers which possess distinctive properties. these particles , larger than typical molecules but too small to be considered bulk solids , can exhibit hybrid physical and chemical properties which are absent in the corresponding bulk material .the particles in their nano regime exhibit special properties which are not found in the bulk properties , for example , catalysis [ ] , electronic properties [ ] and size and shape dependent optical properties [ ] , which have potential ramifications in medicinal applications and optical devices [ ] .the current challenge is to develop capabilities to understand and synthesize materials at the nano stage , instead of the bulk stage . amongthe various nps studied , colloidal gold ( au ) nps were found to have tremendous importance due to their unique optical , electronic and molecular - recognition properties [ and ] .for example , selective optical filters , bio - sensors , are among the many applications that use optical properties of gold nps related to surface plasmon resonances which depend strongly on the particle shape and size [ ] .moreover , there is an enormous interest in exploiting gold nps in various biomedical applications since their scale is similar to that of biological molecules ( e.g. , proteins , dna ) and structures ( e.g. , viruses and bacteria ) [ ] . in recent years it has become possible to investigate the dependency of chemical and physical properties on size and shape of nps , due to transmission electron microscopy ( tem ) images . and , respectively , showed size and shape dependence of synthesis and catalysis reaction where they observed different rates .they also observed that circular gold nps are better catalysts compared to triangular nps for a specific reaction .the development of new pathways for the systematic manipulation of size and shape over different dimensions is thus critical for obtaining optimal properties of these materials . in this paperwe develop novel , model - based image analysis tools that classify and characterize the images of the nps which provide their morphological characteristics to enable a better understanding of the underlying physical and chemical properties .once we are able to accurately characterize the shapes of nps by using this method , we can develop different techniques to control these shapes to extract useful material properties .substantial work in estimating the closed contours of objects in an image has been done by , among others .imaging processing tools , especially for cell segmentation , also exist ; for instance , imagej [ ] is a tool recommended by the national institute of health ( nih ) .however , the features of the data we are dealing with are quite different from those considered in the literature reviewed , as there are various degrees of overlapping of the nps differing in shapes and sizes , as well as a significant number of nps lying along the image boundaries .high - level statistical image analysis techniques model an image as a collection of discrete objects and are used for object recognition [ ] . 
in images with object overlapping, bayesian approaches have been preferred over maximum likelihood estimators (mle). unrestricted mle approaches tend to produce clusters of identical objects, allowing one object to sit on top of another, whereas bayesian approaches mitigate this problem by penalizing the overlapping as part of the prior specification [ ], offering flexibility in controlling the overlapping or touching. in [ ], a bayesian approach using a prior which forbids objects from overlapping completely is proposed to capture predetermined shapes (mushrooms, circular in shape). inference is carried out by finding the maximum a posteriori (map) estimates, and the prior parameters are chosen by simulation experience, in effect fixing the parameters that define the penalty terms. [ ] also used a similar framework to handle the unknown number of objects but introduced polygonal templates to model the objects. however, their application is restricted to cell detection problems, where the objects do not overlap but merely touch each other, and the method works more like a segmentation technique than a classification technique. moreover, the success of this approach depends on prior parameters, which are assumed known throughout the simulation. [ ] used the same model except that they considered elliptical templates instead of polygonal templates and applied their method to similar cell images. all the above methods take advantage of the marked point process (mpp), in particular the area interaction process prior (aipp), or some other prior that penalizes overlapping or touching, which we explain later in the paper. since the structure of the data we are analyzing is different from that in the literature, we adapt the object representation strategies discussed above to the problem at hand. when we refer to a shape, we refer to a family of geometrical objects which share certain features; for example, an isosceles and a right triangle both belong to the triangle family. there are five types of possible shapes of the nps in our problem. the scientific reason is that the final shape of the particle is dominated by the potential energy and the growth kinetics. there is a balance between surface energy and bulk energy once a nucleus is formed. the arrangement of atoms in a crystal determines those energies such that only one of these specified shapes can be formed. we use similar scientific reasoning to construct shape templates. these templates are determined by parameters which vary from shape to shape. since the degree of overlapping differs from image to image, we assume that the parameters of the aipp are unknown and ought to be inferred. this leads to a hierarchical model setting where the prior distribution has an intractable normalizing constant. as a result, the posterior is doubly intractable, and we use the markov chain monte carlo (mcmc) framework to carry out the inference. simulating from distributions with doubly intractable normalizing constants has received much attention in the recent literature, but most of these methods consider the normalizing constant in the likelihood and not in the hierarchical prior; see [ ], among others.
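the role of the overlap-penalizing prior can be illustrated with a toy computation. the sketch below, under the simplifying assumption that each object silhouette is a disc, evaluates an unnormalized area-interaction-style log density that penalizes the total pairwise overlap area; the actual aipp is defined through the union of the objects, so this is only a stand-in, and both function names are hypothetical.

```python
import numpy as np

def circle_overlap_area(c1, r1, c2, r2):
    """Exact intersection area of two discs, used here as a stand-in
    for the overlap of two object silhouettes."""
    d = np.hypot(*(np.asarray(c1, float) - np.asarray(c2, float)))
    if d >= r1 + r2:
        return 0.0                       # disjoint discs
    if d <= abs(r1 - r2):
        return np.pi * min(r1, r2) ** 2  # one disc inside the other
    a1 = r1**2 * np.arccos((d**2 + r1**2 - r2**2) / (2 * d * r1))
    a2 = r2**2 * np.arccos((d**2 + r2**2 - r1**2) / (2 * d * r2))
    tri = 0.5 * np.sqrt((-d + r1 + r2) * (d + r1 - r2)
                        * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - tri

def log_area_interaction_prior(centers, radii, log_beta, log_gamma):
    """Unnormalized log density of an area-interaction-style prior:
    n * log_beta - log_gamma * (total pairwise overlap area) + const.
    A positive log_gamma penalizes overlapping configurations."""
    n = len(radii)
    overlap = sum(circle_overlap_area(centers[i], radii[i],
                                      centers[j], radii[j])
                  for i in range(n) for j in range(i + 1, n))
    return n * log_beta - log_gamma * overlap
```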
in this paper we borrow the idea of [ ], a modified version of the reweighting mixtures given in [ ], which can deal with doubly intractable normalizing constants in the hierarchical prior as well. the mcmc algorithm used can be described as a two-step mcmc algorithm. we first sample the parameters from the pseudo posterior distribution, which is the part of the posterior that does not contain the aipp normalizing constant, and then apply an additional monte carlo metropolis-hastings (mcmh) step that accounts for this normalizing constant. sampling from the pseudo posterior distribution is also quite challenging. inferring the unknown number of objects with undetermined shapes is a complex task. we propose reversible jump mcmc (rj-mcmc) moves to handle both tasks [ ]. birth, death, split and merge moves have been designed based on the work of [ ]. we also propose rj-mcmc moves to swap (switch) the shape of an object. using the above computational scheme, we obtain the posterior distributions for all the parameters which characterize the nps: number, shape, size, center, rotation, mean intensity, etc. owing to the model specification and the computational engine for inferring the model parameters, our approach extracts the morphological information of nps, detects nps lying on the boundaries, quantifies uncertainty in shape classification, and successfully deals with object overlapping, where most of the existing shape analysis methods fail. the rest of the paper is organized as follows: section [sec2] describes the tem images, section [sec3] deals with the object specification procedure, section [sec4] describes the model specification, section [sec5] describes the mcmc algorithm, section [sec6] describes a simulation study and section [sec7] applies the method to the real data. conclusions are presented in section [sec8]. in this paper we analyze a mixture of gold nps in a water solution. in order to analyze the morphological characteristics, nps are sampled from this solution onto a very thin layer of carbon film. after the water evaporates, the two-dimensional morphology of the nps is measured using electron microscopy such as tem. in our case, a jeol high-resolution tem operating at an accelerating voltage of kv, with a point resolution of nm, was used. the tem shoots a beam of electrons onto the materials embedded with nps and captures the electron wave interference using a detector on the other side of the material specimen, resulting in an image. the electrons cannot penetrate the nps, resulting in a darker area in that part of the image. the output is an eight-bit grayscale image where darker parts indicate the presence of a nanoparticle. the grayscale intensity varies as an integer between and . refer to figure [fig1] for examples of tem images. [figure fig5(b)]
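the two-step construction can be illustrated on a toy one-dimensional example. the sketch below assumes a prior h(x; theta) whose normalizing constant z(theta) is treated as unknown; the mcmh step replaces the intractable ratio of normalizing constants in the acceptance probability by an importance-sampling estimate based on auxiliary draws. this is a simplified single-step caricature of the scheme described above, and all names (log_unnorm_prior, mcmh_step, log_post_no_z) are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_unnorm_prior(x, theta):
    """Unnormalized log prior h(x; theta) whose normalizer z(theta)
    is treated as intractable (toy 1-d example: a gaussian kernel)."""
    return -theta * x**2

def mcmh_step(theta, x_data, log_post_no_z, prop_sd=0.2, n_aux=200):
    """One monte carlo metropolis-hastings step for a parameter that
    enters a doubly intractable prior.  The unknown normalizer ratio
    z(theta_new)/z(theta) is replaced by an importance-sampling
    estimate based on auxiliary draws from h(.; theta)."""
    theta_new = theta + prop_sd * rng.standard_normal()
    if theta_new <= 0:
        return theta
    # auxiliary draws from h(.; theta); in this toy case exact sampling
    # is possible since h is a gaussian kernel with sd sqrt(1/(2 theta))
    aux = rng.normal(0.0, np.sqrt(0.5 / theta), size=n_aux)
    # E_theta[ h(x; theta_new) / h(x; theta) ] = z(theta_new) / z(theta)
    log_w = log_unnorm_prior(aux, theta_new) - log_unnorm_prior(aux, theta)
    log_ratio_z = np.log(np.mean(np.exp(log_w)))
    # pseudo-posterior ratio, corrected by the estimated normalizer ratio
    log_alpha = (log_post_no_z(theta_new, x_data)
                 - log_post_no_z(theta, x_data)
                 - log_ratio_z)
    return theta_new if np.log(rng.uniform()) < log_alpha else theta
```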
similarly, we show the split move in action using example 2. snapshots taken at the and the mcmc iterations for example 2 are given in figure [fig6](a) and (b). not only has an obvious split step occurred, but we can also see the different deviations of the boundaries, which are related to the object representation parameters. all the simulations and algorithms were implemented in matlab, running on a xeon dual-core processor clocked at ghz with gb of ram. mcmc chains are initialized using classical image processing tools. all five templates are randomly assigned to complete the template specification. the simulation time for the two examples is approximately two hours for iterations. convergence of the chains was observed within the first iterations. we point out, however, that the computational time of the proposed method depends on the size of the image, the number of objects, the complexity of the overlapping, and the burn-in time, which strongly depends on the initial state of the chain. to accelerate mixing, we take advantage of several classical image processing tools, notably watershed image segmentation and certain morphological-operator-based image filtering techniques such as erosion and dilation [ ]. for example, we use watershed segmentation to decompose the image into subimages that have approximately non-overlapping regions (in terms of objects). repeated application of the erosion operator on the subimages, in conjunction with connected-component analysis and the dilation operation, gives us an approximate count of the number of objects and their morphological aspects. such information can be used to initialize the chains and to construct proposal distributions required by the mcmc sampler. in addition, the region-based approach allows one to exploit distributed and parallel computing concepts to reduce simulation time and make the algorithm scalable. further details are not presented here, since morphological preprocessing is not the subject of the present work; these choices affect simulation time and may improve mixing, but are otherwise not necessary for our proposed method to work. in addition, the simulation time and effort required by the mcmc method are relatively small compared to the time, effort and resources required to produce the nps and obtain the tem images, which can exceed weeks. using the mcmc samples, we can obtain the distribution of the particle size, which is characterized by the area of the nanoparticle, and the distribution of the particle shape. the aspect ratio, defined as the length of the perimeter of a boundary divided by the area of the same boundary, can be derived from the combination of the size, shape and pure parameters. the statistics of size, shape and aspect ratio are widely adopted in nano science and engineering to characterize the morphology of nps, and are believed to strongly affect the physical or chemical properties of the nps [ ]. for example, the aspect ratio is considered an important parameter relevant to certain macro-level material properties because physical and chemical reactions are believed to occur frequently on the surface of molecules, so that as the aspect ratio of a nanoparticle gets larger, those reactions are more active.
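the aspect-ratio computation is straightforward once posterior boundary samples are available. the following sketch computes perimeter and area with the shoelace formula and maps each mcmc draw of a particle boundary to an aspect-ratio sample; the function names are illustrative, not taken from the paper.

```python
import numpy as np

def polygon_perimeter_area(boundary):
    """Perimeter and enclosed area of a closed boundary given as an
    (n, 2) array of points; the area uses the shoelace formula."""
    d = np.diff(np.vstack([boundary, boundary[:1]]), axis=0)
    perimeter = np.sum(np.hypot(d[:, 0], d[:, 1]))
    x, y = boundary[:, 0], boundary[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    return perimeter, area

def aspect_ratio_posterior(boundary_samples):
    """Posterior sample of the aspect ratio (perimeter / area, as
    defined in the text) for one particle, from mcmc boundary draws."""
    return np.array([p / a for p, a in map(polygon_perimeter_area,
                                           boundary_samples)])
```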
we apply our method to three different tem images. the maximum a posteriori (map) parameter estimates obtained from the mcmc output are presented in detail. our classification results were verified by collaborators with domain expertise; such manual verification appears to be the only valid option for the time being. more than of the nps in those images are classified correctly. this count includes particles on the boundary as well as particles with overlapping regions. for completely observed objects, the classification is almost always correct. we start our application with the image in figure [fig1](a). morphological image processing operations, such as the watershed transformation and erosion, can be used to get an approximate count of the number of nps in the model [ ]. they can also be used to initialize the mcmc chains and to construct the proposal distributions required by the mcmc sampler. the morphological image processing used in this work has the following steps: (1) image filtering and segmentation, (2) determining the number of objects, (3) estimating location, size and rotation parameters. we first transform the image from grayscale to binary and then apply the watershed transformation to partition the image into subimages. in each binary subimage we apply erosion and dilation operations to find initial values for the parameters within each subimage. because this morphological processing is not the subject of the present work, it is not presented in more detail. after the initial values are obtained from the preprocessing step, all five templates are randomly assigned as starting template specifications. from the mcmc sampler described in section [postmcmc] we obtain a random sample of the posterior distribution for all the parameters which characterize the nps, namely the _shape_, the _size_, the _rotation_, the _random pure parameter_, the _mean_ intensity and the _variance_. we use this posterior sample for inferring the model parameters and extracting the morphological information of the nps, with uncertainty in shape, size and classification. to better present our results, we chose to work with the maximum a posteriori (map) estimates of these parameters. [figure fig7 caption: out of nanoparticles in the image lie on the boundaries] [figure fig8: (a) scale, (b) foreground intensity] in figure [fig7] we show the tem image and the map estimates of the parameters from the mcmc sample. in figure [fig8] we present the parameters corresponding to the map estimate of the number of objects. summary statistics of the shape parameters are given in table [tabl1]. from the table and the histogram it is clear that the mean intensity differs from nanoparticle to nanoparticle, justifying our assumption of different means in ([equ3]). we also obtain the posterior probability of the classification for each of the objects. this probability depends on the complexity of the shape of the object. for example, one object has been classified as an ellipse with probability , whereas another has been classified as an ellipse with probability (circle with probability ). in table [tabl1] (and in all the following tables of this paper) we present the classification with the highest posterior probability for some of the nanoparticles. in this example we successfully deal with object overlapping and with objects lying on the boundaries.
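a minimal version of this preprocessing pipeline, assuming the standard scipy/scikit-image routines and illustrative parameter values (e.g., the min_distance peak separation), might look as follows; it is a sketch of the initialization only, not the authors' matlab implementation.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def initial_object_estimates(image):
    """Rough initial values for the mcmc chain: binarize the image
    (particles are dark), split touching blobs with a distance-transform
    watershed, and return per-object centroids and equivalent radii."""
    binary = image < threshold_otsu(image)            # dark = foreground
    distance = ndi.distance_transform_edt(binary)
    peaks = peak_local_max(distance, min_distance=5, labels=binary)
    markers = np.zeros(image.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = watershed(-distance, markers, mask=binary)
    objects = []
    for k in range(1, labels.max() + 1):
        ys, xs = np.nonzero(labels == k)
        if len(xs) == 0:
            continue
        area = len(xs)
        objects.append({"center": (xs.mean(), ys.mean()),
                        "size": np.sqrt(area / np.pi)})  # equal-area radius
    return labels, objects
```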
table [tabl1]:

object  shape  center  size   rotation  pure param  mean intensity
1       e      ( )     51.49  -0.21     1.14        50.64
2       e      ( )     49.41   1.41     1.22        74.67
3       e      ( )     47.20   1.36     1.12        62.55
4       e      ( )     28.86   0.61     1.15        71.58
5       e      ( )     49.98   0.83     1.13        64.58
6       c      ( )     51.82   --       --          73.76

our second application deals with a more complex image, shown in figure [fig9]. in this image at least overlapping areas and at least nanoparticles lying on the boundary are observed. more specifically, several nanoparticles lie on the boundary of the image, while the pairs 2-4, 3-4, 9-10, 10-11, 17-18, and 10-12 overlap. in this example the overlapping is more complex, and existing methods fail to represent the real situation. a number of nanoparticles overlap together, forming groups such as nanoparticles 9-10-11-12. map estimates for all the parameters are obtained after mcmc iterations. complex shapes have been classified accurately; see figure [fig9]. for example, one nanoparticle has an incomplete image and has been classified as a circle with posterior probability . the map estimates of the parameters drawn from the mcmc, namely shape, size, rotation, random pure parameter and mean intensity, are presented for the first six objects in table [tabl2]. in this application, out of the objects are ellipses (e), are circles (c) and one is a triangle (tr). we also present the histograms of the map estimates of the parameters in figure [fig10]. summary statistics of various shape parameters are given in table [tabl2]. we see from the table that our proposed algorithm captures triangles, circles, etc. quite accurately.

table [tabl2]:

object  shape  center  size   rotation  pure param  mean intensity
1       e      ( )     37.48  -1.51     1.2960      39.185
2       c      ( )     41.04   --       na          42.969
3       e      ( )     38.02  -0.29     1.2175      52.569
4       e      ( )     47.44  -1.17     1.1591      60.605
5       e      ( )     44.33  -0.36     1.1612      51.080
6       e      ( )     49.63  -1.76     1.1621      44.617

[figure fig10: (a) scale, (b) foreground intensity] our next application deals with an image containing nanoparticles of shapes; see figure [fig1](b). in this image a few objects have overlapping areas and at least nanoparticles lie on the boundary. some objects do not have a very clear shape. different shapes are captured with different templates by the proposed method. in addition to the circles and ellipses which were successfully captured in the previous images, the triangles and squares are also captured accurately. some nanoparticles are classified correctly even though they have vague shapes; see figure [fig11]. in this example, out of the nanoparticles, are classified as circles, as ellipses, as triangles and as squares. distributions of the various parameters of the identified objects are shown in figure [fig12]. in table [tabl3] we present all the triangular shapes in order to compare the pure parameter. as we can see from the table, some of the triangular nanoparticles are closer to the equilateral triangle, with pure-parameter value close to that of an equilateral triangle, while others have wider sides, as reflected in their pure-parameter values.
[figure fig12: (a) scale, (b) foreground intensity]

table [tabl3]:

object  shape  center  size   rotation  pure param  mean intensity
1       e      ( )     12.43  -1.57     1.29        66.27
4              ( )     25.82   1.38     2.32        49.33
12             ( )     28.73   0.35     2.31        79.59
28      e      ( )     24.09   1.53     1.14        68.19
51             ( )     24.61  -1.46     2.25        63.29
57             ( )     25.25   0.25     2.01        70.49

in this image more than percent of the nanoparticles have the same shape, circular or slightly tilted ovals. in shape-controlled synthesis such particles are normally called nanospheres or circular nanoparticles. the roughly five to ten percent of particles with other shapes or slight deviations are usually neglected, because in solution-synthesis routes it is very difficult to synthesize particles of identical size and shape. however, when the mechanism of shape evolution or the statistical analysis of different shapes is considered critically, these small differences should be taken into account. we classify this particular example as spherical gold nanoparticles having almost the same size and shape. as part of the verification process, we compare the accuracy of our method with that of the current practice used in nanoscience. in brief, the current practice is largely a manual process supported by image processing tools such as the imagej particle analyzer (http://rsbweb.nih.gov/ij) and axiovision (http://www.zeiss.com/), which have been popular in biomedical image processing. the results are shown in figures [fig13] and [fig14]. [figure fig13 caption: particles recognized; recognition %] [figure fig14 caption: particles recognized; recognition %] the manual counting process, supported by the above imaging tools, is necessitated by the low accuracy of the autonomous procedures. for three tem images with overlaps among particles, our procedure recognized of the total particles, compared with the 20-50% recognition rate of imagej. considering the frequent occurrence of overlaps in tem images of nanoparticles, the existing software cannot serve as more than a supporting tool. we have also applied our method to other images with the same success, which encourages its applicability. we have adopted a bayesian approach that performs image classification and segmentation simultaneously and applied it to tem images of gold nanoparticles. the merit of our development is to provide a tool for nanotechnology practitioners to recognize the majority of the nanoparticles in a tem image so that morphology analysis can be performed subsequently. this makes it possible to evaluate how well the synthesis process of nanoparticles is controlled, and may even be used to explain or design certain material properties. several factors, such as kinetic and thermodynamic parameters, the flux of growing material, the structure of the support, and the presence of defects and impurities, can affect the morphology of nps. in the future we plan to perform a factorial-type experiment to identify the significant factors for morphological study. these significant factors can then be properly controlled to develop nps of required shapes.
from the experimental point of view, several improvements of existing techniques will be helpful in characterizing the shape of the nps. one is tem tomography, which allows one to image an object in three dimensions by automatically taking a series of pictures of the same particle at different tilt angles [ ]. another improvement of tem is environmental hrtem, which is able to image nanoparticles, with atomic lattice resolution, at various temperatures and pressures [ ]. from the modeling point of view, we used a marked point process to represent the nps in the image, where points represent the locations of nps and marks represent their geometrical features. more specifically, we treated the nps in the image as objects, wherein the geometrical properties of an object were largely determined by templates and the interaction between the objects was modeled using the area interaction process prior. by varying the template parameters and applying operators such as scaling, shifting and rotation to the template, we modeled different shapes very realistically. in our current applications, we chose the circle, triangle, square and ellipse as our templates. other templates can also be constructed in the same framework. to overcome the intractability of the posterior distribution, we proposed a complex markov chain monte carlo (mcmc) algorithm which involves reversible jump, metropolis-hastings and gibbs sampling steps, together with a monte carlo metropolis-hastings (mcmh) step for the intractable normalizing constants in the prior. the first steps deal with simulating from a pseudo posterior distribution without involving the random normalizing constant. a generalized metropolis-within-gibbs sampler with a reversible jump step is used to simulate from the pseudo posterior distribution given the number of objects. additionally, a reversible jump mcmc with birth-death and merge-split moves is invoked for moves between states with different numbers of objects. finally, we account for the intractable normalizing constant using monte carlo metropolis-hastings, where the corresponding factor in the acceptance ratio of the sample taken from the pseudo posterior is estimated by simulating an auxiliary variable. we reported the posterior summary statistics of the shapes and the number of objects in the image. we successfully applied this algorithm to real tem images, outperforming conventional tools aided by manual screening. our proposed methodology can help practitioners associate morphological characteristics with physical and chemical properties of the nps, and aid in synthesizing materials that have potential applications in optics and medical electronics, to name a few. two new variables are introduced to make it clear that the pure parameters have a different meaning from template to template. for all the shapes, we provide a general algorithm: let the current state and the proposed state be given, with the notation of the parameters differing from the previous sections to show the dependence of the parameters on the model (or template). if the parameter dimensions differ, generate auxiliary variables from the prior distribution and consider a bijection between the two parameter spaces; from this bijection it is clear that the jacobian is equal to the identity matrix. in summary, the rj-mcmc algorithm is as follows: * select a model with the corresponding proposal probability. * generate the auxiliary variables from their proposal distribution. * set the proposed state via the bijection. * compute the m-h ratio, which involves the jacobian of the transformation. * accept the proposed state with the corresponding probability, and retain the current state otherwise. let the probabilities of proposing a birth, death, split or merge move be fixed, respectively.
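as an illustration of the generic recipe above, the sketch below implements a template-swap move under the simplifying assumptions that the shape proposal is uniform over the other templates (hence symmetric) and that the redrawn pure parameter is proposed from its prior, so its prior and proposal densities cancel and the jacobian is the identity. the state object and its with_object method are hypothetical stand-ins for whatever representation is used in practice.

```python
import numpy as np

rng = np.random.default_rng(1)

def swap_shape_move(state, log_lik, log_shape_prior, sample_pure, shapes):
    """Hypothetical rj-mcmc 'swap' move: propose a new template for one
    randomly chosen object.  The object keeps its center, size and
    rotation; only the template-specific 'pure' parameter is redrawn
    from its prior, so the bijection between parameter spaces has unit
    jacobian and the redrawn parameter cancels in the m-h ratio."""
    k = rng.integers(len(state.objects))
    obj = state.objects[k]
    new_shape = rng.choice([s for s in shapes if s != obj.shape])
    proposal = state.with_object(k, shape=new_shape,
                                 pure=sample_pure(new_shape))
    # shape-choice proposal is symmetric (uniform over the other shapes),
    # so only the likelihood and the model prior over shapes remain
    log_alpha = (log_lik(proposal) - log_lik(state)
                 + log_shape_prior(new_shape) - log_shape_prior(obj.shape))
    return proposal if np.log(rng.uniform()) < log_alpha else state
```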
in the birth step a new object is proposed with a randomly assigned center. in this step we increase the dimension of the parameter space by the number of parameters which describe the proposed object. all these new parameters are sampled from the prior distributions of the parameters. the introduction of auxiliary variables of this kind again leads to a jacobian equal to the identity, and the m-h ratio is given by ([equ9]). the death proposal chooses one object at random and removes it from the configuration; the m-h ratio for this move is similar to ([equ9]). the details of the split and merge moves are more complicated than the move types described above. first, we restrict our attention to the case where we merge two neighboring objects or split one object into two neighbors. the distance between the two neighbors can be approximated by a function of their individual sizes. when we move from one state to another, we require that the proposed objects have the same total area as the existing ones. in order for the markov chain to be reversible, we must ensure that every jump step can be reversed. the acceptance rate of these moves can be improved with other proposal algorithms, for example [ ], but that is beyond the scope of this paper. to simplify the presentation, we denote by bold characters the current state in every move, and separately the current state values without the objects involved in the move. _merge step_: let us suppose we have two objects with given parameters. in the merge step, we move to a new object whose parameters are determined as follows. an equation links the sizes of the old objects with the new one, and the new center is chosen to represent the "weighted middle" point, taking into account the size of each object. all the other parameters are chosen from one of the "parent" objects or at random. in order to match the two dimensions, we introduce six auxiliary variables which not only enable us to move from state to state but are also interpretable: one expresses the distance between the centers of the two neighboring objects, one is the angle created by the line joining the two centers, and the remaining ones are chosen to match the other parameters. the acceptance ratio in this case is the minimum of one and the usual m-h expression, in which the determinant of the jacobian of the transformation enters together with the split and merge proposal probabilities. _split step_: in the split step, we make the reverse move from a single-object state to a two-object state. in order to make this move possible, we introduce six proposal distributions for the auxiliary variables: we propose from the prior of the size parameter, from the prior of the rotation parameter, and from the priors of the remaining parameters, respectively. in order for this move to be reversible, we again use the same transform that was used in the merge step. with the same setting we can compute the m-h acceptance ratio. we thank professor faming liang for providing the preprint of his work on simulating from posterior distributions with doubly intractable normalizing constants. we thank all the reviewers for their useful comments, with special thanks to the reviewer who helped us improve the algorithm presented above.
the properties of materials synthesized with nanoparticles (nps) are highly correlated with the sizes and shapes of the nanoparticles. the transmission electron microscopy (tem) imaging technique can be used to measure the morphological characteristics of nps, which can be simple circles or more complex irregular polygons with varying scales and sizes. a major difficulty in analyzing the tem images is the overlapping of objects having different morphological properties, with no specific information about the number of objects present. furthermore, objects lying along the image boundaries render automated image analysis much more difficult. to overcome these challenges, we propose a bayesian method based on the marked point process representation of the objects. we derive models both for the marks, which parameterize the morphological aspects, and for the points, which determine the locations of the objects. the proposed model constitutes an automatic image segmentation and classification procedure which simultaneously detects the boundaries and classifies the nps into one of the predetermined shape families. we carry out the inference by sampling the posterior distribution, which is doubly intractable, using a markov chain monte carlo (mcmc) scheme. we apply our novel method to several tem imaging samples of gold nps, producing the needed statistical characterization of their morphology.
control theory has developed into a very broad and interdisciplinary subject. one of its major concerns is how to design the dynamics of a given system to steer it to a desired target state, and how to stabilize the system in that state. assuming that the evolution of the controlled system is described by a differential equation, many control methods have been proposed, including optimal control, geometric control and feedback control. quantum control theory is about the application of classical and modern control theory to quantum systems. the effective combination of control theory and quantum mechanics is not trivial for several reasons. in classical control, feedback is a key element of the control design, and there has been a strong emphasis on robust control of linear control systems. quantum control systems, on the other hand, cannot usually be modelled as linear control systems, except when both the system and the controller are quantum systems and their interaction is fully coherent or quantum-mechanical. this is not the case for most applications, where we usually desire to control the dynamics of a quantum system through interaction with fields produced by what are effectively classical actuators, whether these be control electrodes or laser pulse-shaping equipment. moreover, feedback control for quantum systems is a nontrivial problem, as feedback requires measurements, and any observation of a quantum system generally disturbs its state and often results in a loss of quantum coherence that can reduce the system to mostly classical behavior. finally, even if measurement backaction can be mitigated, quantum phenomena often take place on sub-nanosecond (in many cases femto- or attosecond) timescales and thus require ultrafast control, making real-time feedback unrealistic at present. this is not to say that measurement-based quantum feedback control is unrealistic: there are various interesting applications, e.g., in the area of laser cooling of atomic motion, or for deterministic quantum state reduction and stabilization of quantum states, to mention only a few, and progress in technology will undoubtedly lead to new applications. nonetheless, there are many applications of open-loop hamiltonian engineering in diverse areas from quantum chemistry to quantum information processing. even in the area of open-loop control, many control design strategies, both geometry- and optimization-based, utilize some form of model-based feedback. a particular example is lyapunov control, where a lyapunov function is defined and feedback from a model is used to generate controls that minimize its value. although there have been several papers discussing the application of lyapunov control to quantum systems, the question of when, i.e., for which systems and objectives, the method is effective, and when it is not, has not been answered satisfactorily. several early papers on lyapunov control for quantum systems, such as [ ], considered only control of pure-state systems, and target states that are eigenstates of the free hamiltonian and therefore fixed points of the dynamical system. for target states that are not eigenstates of $h_0$, i.e.
, evolve with time, the control problem can be reformulated either in terms of asymptotic convergence of the system's actual trajectory to that of the time-dependent target state, or as convergence to the orbit of the target state (or, more precisely, its closure). such cases have been discussed in several papers, but except for [ ], the problem was formulated using the schrödinger equation and state vectors, which can only represent a pure state. to give a complete discussion of lyapunov control, it is desirable to utilize the density operator description, as it is suitable for both mixed-state and pure-state systems and can be generalized to open quantum systems subject to environmental decoherence or measurements, including feedback control. in [ ], lyapunov control for mixed-state quantum systems was considered, but the notion of orbit convergence used is rather weak compared to trajectory convergence, the lasalle invariant set was only shown to contain certain critical points and was not fully characterized, and a stability analysis of the critical points was missing, in addition to other issues such as the assumption of periodicity of orbits. furthermore, while an attempt was made to establish sufficient conditions to guarantee convergence to a target orbit, the effectiveness of the method for realistic systems was not considered. in this paper we address these issues. we consider the problem of steering a quantum system to a target state using lyapunov feedback as a trajectory tracking problem for a bilinear hamiltonian control system defined on a complex manifold, where the trajectory of the target state is generally non-periodic, and analyze the effectiveness of the lyapunov method as a function of the form of the hamiltonian and the initial value of the target state. in sec. [sec:basics] the control problem and the lyapunov function are defined, and some basic issues such as different notions of convergence and reachability of target states are briefly discussed. in sec. [sec:lasalle] the controlled quantum dynamics is formulated as an autonomous dynamical system defined on an extended state space, and lasalle's invariance principle is applied to obtain a characterization of the lasalle invariant set. this characterization shows that even for ideal systems satisfying the strongest possible conditions on the hamiltonian, the invariant set is generally large, and the invariance principle alone is therefore not sufficient to conclude asymptotic stability of the target state. noting that the invariant set must contain the critical points of the lyapunov function, we characterize the latter in sec. [sec:critical].
in sec. [sec:conv_ideal] we give a detailed analysis of the convergence behaviour of the lyapunov method for finite-dimensional quantum systems under an ideal control hamiltonian, based on the characterization of the lasalle invariant set and our stability analysis. the discussion is divided into three parts: control of pseudo-pure states, generic mixed states, and other mixed states. the result is that, for this ideal choice of hamiltonian, lyapunov control is effective for most (but not all) target states. finally, in sec. [sec:conv_real] we relax the unrealistic requirements on the hamiltonian imposed in sec. [sec:conv_ideal], and show that this leads to a much larger lasalle invariant set and significantly diminished effectiveness of lyapunov control. according to the basic principles of quantum mechanics, the state of an $n$-level quantum system can be represented by an $n \times n$ positive hermitian operator $\rho$ with unit trace, called a density operator, and its evolution is determined by the liouville-von neumann equation \[ \dot{\rho}(t) = -i[h, \rho(t)], \] where $h$ is the system hamiltonian, a hermitian operator. if we are considering a sub-system that is not closed, i.e., one that interacts with an external environment, additional terms are required to account for dissipative effects, although in principle we can always consider the hamiltonian dynamics on an enlarged hilbert space, and we shall restrict our discussion here to hamiltonian systems. we shall say a density operator represents a pure state if it is a rank-one projector, and a mixed state otherwise. we further define the special class of pseudo-pure states, i.e., density operators with two distinct eigenvalues, one occurring with multiplicity $1$ and the other with multiplicity $n-1$, and generic mixed states, i.e., density operators with $n$ distinct eigenvalues. in the following we consider the bilinear hamiltonian control system \[ \dot{\rho}(t) = -i[h_0 + f(t)h_1, \rho(t)], \] where $f(t)$ is an admissible real-valued control field and $h_0$ and $h_1$ are the free evolution and control interaction hamiltonians, respectively, both of which will be assumed to be time-independent. we have chosen units such that the planck constant $\hbar = 1$ and can be omitted for convenience. the general control problem is to design a control function $f(t)$ such that the system state $\rho(t)$ with $\rho(0) = \rho_0$ converges to the target state $\rho_d$. since the evolution of a hamiltonian system is unitary, the spectrum of $\rho(t)$ is time-invariant, or equivalently \[ \tr[\rho(t)^m] = \tr[\rho_0^m], \quad \forall m \in \nn. \] hence, for the target state to be reachable, $\rho_0$ and $\rho_d$ must have the same spectrum, or entropy in physical terms. if $\rho_0$ and $\rho_d$ do _not_ have the same spectrum, we can still attempt to minimize the distance between them, but it will always be non-zero if we are restricted to hamiltonian engineering. for the following analysis we shall assume that the initial and the target state of the system have the same spectrum. if this is the case and the system is density-matrix controllable, or pure-state controllable if the initial state of the system is pure or pseudo-pure, then we can conclude that the target state is reachable, although a particular target state may clearly be reachable even if the system is not controllable. assuming that $\rho_0$ and $\rho_d$ have the same spectrum, the quantum control problem can be characterized by the spectrum of the target state.
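to make the closed-loop dynamics concrete, here is a small simulation sketch for a qubit, using the model-based feedback law f(rho) = kappa * tr([-i h1, rho] rho_d) that appears later in the text; the specific hamiltonians, initial state and step size are our own illustrative choices. the printed lyapunov function v = 1 - tr(rho rho_d) decreases monotonically along the simulated trajectory.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])

h0, h1 = sz, sx                         # drift and control hamiltonians
rho_d = np.diag([1.0 + 0j, 0.0])        # stationary target: [h0, rho_d] = 0
rho = 0.5 * (np.eye(2) + sy)            # initial state with the same spectrum

kappa, dt = 1.0, 0.01
for step in range(4001):
    # model-based feedback f = kappa * tr([-i h1, rho] rho_d)
    f = kappa * np.real(np.trace((-1j) * (h1 @ rho - rho @ h1) @ rho_d))
    u = expm(-1j * dt * (h0 + f * h1))  # piecewise-constant propagation
    rho = u @ rho @ u.conj().T
    if step % 1000 == 0:
        v = 1.0 - np.real(np.trace(rho @ rho_d))
        print(f"t = {step * dt:5.1f}   v = {v:.4f}   f = {f:+.4f}")
```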
if $\rho_d$ is pure, the problem is called a pure-state control problem. analogously, we can define pseudo-pure-state control and generic-state control problems. pure-state control problems are often represented in terms of hilbert space vectors or wavefunctions evolving according to the schrödinger equation. for pure states this wavefunction description is equivalent to the density operator description, since any rank-one projector can be written as $\rho = \ket{\psi}\bra{\psi}$ for some hilbert space vector $\ket{\psi}$, but it does not generalize to mixed states, and we shall not use this formalism here. since the free hamiltonian can usually not be turned off, it is natural to consider non-stationary target states evolving according to \[ \dot{\rho}_d(t) = -i[h_0, \rho_d(t)]. \] it is easy to see that $\rho_d$ is stationary if and only if it commutes with $h_0$, $[h_0, \rho_d] = 0$. requiring the feedback function and all its time-derivatives to vanish along solutions in the invariant set gives a necessary condition for the invariant set: \[ \tr([\rho, \rho_d]\,\ad^m_{-ih_0}(-ih_1)) = 0, \qquad \forall m \in \nn_0, \] where $\ad_x(y) = [x, y]$ and $\ad^m_x$ denotes its $m$-fold application. since $h_0$ is hermitian, we can choose a basis such that $h_0 = d = \diag(d_1, \ldots, d_n)$ with real eigenvalues $d_k$, which we may assume to be arranged in non-decreasing order. let $b = (b_{k\ell})$ be the matrix representation of $h_1$ in the eigenbasis of $h_0$, and let $\omega_{k\ell} = d_k - d_\ell$ be the transition frequency between energy levels $k$ and $\ell$ of the system. the lie algebra can be decomposed into an abelian part $\t$, called the cartan subalgebra, and an orthogonal subalgebra, which is a direct sum of root spaces $\t_{k\ell}$ spanned by pairs of generators $\lambda_{k\ell}$, $\bar{\lambda}_{k\ell}$. for instance, we can choose generators [eq:lambda] built from the elementary matrices $e_{k\ell}$, whose $(k,\ell)$ entry equals $1$, for $1 \le k < \ell \le n$. expanding $-ih_1$ with respect to these generators and noting the commutation relations [eq:lambda_comm] \[ [d, \lambda_{k\ell}] = +i(d_k - d_\ell)\,\bar{\lambda}_{k\ell}, \qquad [d, \bar{\lambda}_{k\ell}] = -i(d_k - d_\ell)\,\lambda_{k\ell}, \] with the commutators of $d$ with the diagonal generators vanishing, shows that $b_m = \ad^m_{-ih_0}(-ih_1)$ takes the form [eq:bm] \[ b_{2m} = \sum_{k=1}^{n-1}\sum_{\ell=k+1}^{n} (-1)^m \omega_{k\ell}^{2m}\,[\re(b_{k\ell})\lambda_{k\ell} - \im(b_{k\ell})\bar{\lambda}_{k\ell}], \] with the odd terms $b_{2m+1}$ of analogous form involving $\omega_{k\ell}^{2m+1}$ and the roles of $\lambda_{k\ell}$ and $\bar{\lambda}_{k\ell}$ interchanged. expanding $[\rho, \rho_d]$ with respect to the same generators, eq. ([eq:trace-cond1]) becomes a linear system in the root-space components of $[\rho, \rho_d]$ whose coefficient matrix has a vandermonde structure in the frequencies $\omega_{k\ell}$; if $h_0$ is strongly regular and the transition graph of $b$ is fully connected, all these components must therefore vanish, i.e., $[\rho, \rho_d]$ must be diagonal. we have proved the necessary part. for the sufficient part, note that \[ [\rho_1(t), \rho_2(t)] = e^{-ih_0t}[\rho_1, \rho_2]e^{ih_0t} \] and $e^{-ih_0t}$ is diagonal. thus if $[\rho_1, \rho_2] = \diag(c_1, \ldots, c_n)$ then the commutator remains diagonal for all times, and the pair remains in the invariant set. thus we have fully characterized the invariant set for systems with strongly regular $h_0$ and an interaction hamiltonian with a fully connected transition graph. the result also shows that even under the most stringent assumptions about the system hamiltonians, the invariant set is generally much larger than the desired solution. therefore, the invariance principle alone is not sufficient to establish convergence to the target state. in this section we show that the invariant set always contains at least the critical points of the lyapunov function, and we classify the stability of the critical points. we start with the case where $\rho_d$ is a fixed stationary state. in this case the lyapunov function $v$ is effectively a function on the state space. since any $\rho$ with the same spectrum as $\rho_d$ can be written as $u\rho_d u^\dagger$ for some $u$ in the special unitary group, $v$ can also be considered a function $\hat{v}(u)$ on the group. it is easy to see that the critical points of $v$ correspond to those of $\hat{v}$, and since $\tr[\rho^2] = c$ is constant along the flow, minimizing the distance to $\rho_d$ is equivalent to maximizing $\tr(\rho\rho_d)$. let $\{\sigma_m\}$ be an orthonormal basis for the lie algebra, consisting of orthonormal off-diagonal generators such as $\lambda_{k\ell}$, $\bar{\lambda}_{k\ell}$, with $\lambda_{k\ell}$ as in eq. ([eq:lambda]), and orthonormal diagonal generators.
in a neighborhood of a critical point, any $u$ near the identity can be written as $u = \exp(\sum_m x_m\sigma_m)$, where $x_m$ is the coordinate of the generator $\sigma_m$, and any $u$ in the neighborhood of $u_0$ can be parameterized as $u = \exp(\sum_m x_m\sigma_m)u_0$. thus eq. ([eq:j]) becomes a function of the coordinates $x_m$. at a critical point $u_0$, stationarity implies that for all $m$ \[ 0 = \tr(\sigma_m[u_0\rho_d u_0^\dagger, \rho_d]). \] since the commutator $[u_0\rho_d u_0^\dagger, \rho_d]$ lies in $\su(n)$ and $\{\sigma_m\}$ is a basis of this algebra, it must vanish. hence, for a given $\rho_d$, the critical points of $\hat{v}$ are such that $[u_0\rho_d u_0^\dagger, \rho_d] = 0$, and the condition is invariant under a common unitary transformation, since for $\rho_1 = u_1\rho_d u_1^\dagger$ and $\rho_2 = u_2\rho_d u_2^\dagger$ \[ [u_1\rho_d u_1^\dagger, u_2\rho_d u_2^\dagger] = [\rho_1, \rho_2]. \] thus we have the following: [thm:crit:1] for a given $\rho_d$, the critical points of the lyapunov function are the states $\rho$ with the same spectrum as $\rho_d$ such that $[\rho, \rho_d] = 0$. a case analysis shows, moreover, that a diagonal commutator already implies a vanishing one, so the corresponding density operators in fact commute, $[\rho, \rho_d] = 0$. for pure states this can be seen explicitly: since $\rho_1$ and $\rho_2$ are rank-one projectors, $\rho_i = \ket{\psi_i}\bra{\psi_i}$, where the $\ket{\psi_i}$ are unit vectors in the hilbert space, and we have \[ [\rho_1, \rho_2] = \ket{\psi_1}\langle\psi_1|\psi_2\rangle\bra{\psi_2} - \ket{\psi_2}\langle\psi_2|\psi_1\rangle\bra{\psi_1}. \] for this commutator to be diagonal we require all off-diagonal elements to vanish. there are two cases: (a) $\langle\psi_1|\psi_2\rangle = 0$, in which case $[\rho_1, \rho_2] = 0$ immediately, and (b) $\langle\psi_1|\psi_2\rangle \neq 0$, in which case the vanishing of the off-diagonal elements again forces the commutator to vanish. when $\rho_d$ is a stationary state, eq. ([eqn:auto]) can be reduced to a dynamical system on the state space, [eqn:auto1] \[ \dot{\rho}(t) = -i[h_0 + f(\rho)h_1, \rho(t)], \qquad f(\rho) = \tr([-ih_1, \rho(t)]\rho_d), \] and the lasalle invariant set can be reduced to \[ e = \{\rho : [\rho, \rho_d] = \diag(c_1, \ldots, c_n)\} \] according to theorem [thm:lasalle:4]. if $\rho_d$ is generic and diagonal, then the off-diagonal components of $[\rho, \rho_d]$ vanish only if $\rho$ itself is diagonal in the eigenbasis of $\rho_d$, and since the commutator of two diagonal matrices vanishes, the invariant set in this case reduces to the set of all states that commute with the stationary target state, i.e., the invariant set not only contains the set of critical points of the lyapunov function but coincides with it. in summary we have: [thm:generic:crit] if $\rho_d$ is a generic stationary target state then the invariant set contains exactly the critical points of the lyapunov function, i.e., the stationary states that commute with $\rho_d$ and have the same spectrum. these critical points are the only stationary solutions, and all other solutions must converge to _one_ of these points. however, we still cannot conclude that all or even most solutions converge to the target state. in fact, we shall see that not all solutions converge to $\rho_d$ even in this ideal case. however, the target state is the only hyperbolic sink of the dynamical system, and all other critical points are hyperbolic saddles or sources; therefore most (almost all) initial states will converge to the target state, as desired. we note that theorem [thm:crit:2] guarantees that for a given generic stationary state the critical points of the lyapunov function are hyperbolic. thus, if the dynamical system were the gradient flow of $v$, asymptotic stability of these fixed points could be derived directly from the associated index of the morse function. however, since the dynamical system ([eqn:auto1]) is _not_ the gradient flow, further analysis of the linearization of the dynamics near the critical points is necessary.
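such a linearization analysis can also be carried out numerically. for the qubit example sketched earlier, the closed-loop flow on the bloch ball and a finite-difference jacobian at the two critical points give eigenvalues with negative real parts at the target (a hyperbolic sink) and positive real parts at the antipodal point (a source), in line with the discussion; the bloch-vector form of the feedback law below is derived under the same illustrative choices as before.

```python
import numpy as np

kappa = 1.0

def bloch_flow(r):
    """Closed-loop qubit dynamics on the bloch ball for h0 = sigma_z,
    h1 = sigma_x and target r_d = (0, 0, 1): rdot = 2 a x r with
    a = (kappa * r_y, 0, 1), since f(rho) reduces to r_y here."""
    a = np.array([kappa * r[1], 0.0, 1.0])
    return 2.0 * np.cross(a, r)

def jacobian(flow, r, eps=1e-6):
    """Finite-difference jacobian of the flow at the point r."""
    j = np.zeros((3, 3))
    for k in range(3):
        dr = np.zeros(3)
        dr[k] = eps
        j[:, k] = (flow(r + dr) - flow(r - dr)) / (2.0 * eps)
    return j

for point, name in [((0.0, 0.0, 1.0), "target"),
                    ((0.0, 0.0, -1.0), "antipode")]:
    eig = np.linalg.eigvals(jacobian(bloch_flow, np.array(point)))
    print(name, np.round(eig, 3))
# target:   two eigenvalues with negative real part (sink on the sphere)
#           plus one zero mode in the radial, off-sphere direction
# antipode: two eigenvalues with positive real part (source) plus the
#           same zero mode
```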
for this analysis we require a real representation of our complex dynamical system. a natural choice is the bloch vector (sometimes also called stokes tensor) representation, where a density operator is represented by the real vector with components $s_m = \tr(\rho\sigma_m)$, with $\{\sigma_m\}$ the orthonormal basis of the lie algebra defined in the proof of theorem [thm:crit:2]. in this representation the linearization at a critical point decomposes into a drift part and a feedback part. if $[\rho, \rho_d]$ is diagonal then, since diagonal matrices commute, it lies in $i\t$, and one can show that the restriction of the drift linearization to the non-cartan subspace is invertible and hence has only non-zero eigenvalues. [lemma:generic:3] if $i\beta$ is a purely imaginary eigenvalue of the full linearization then it must be an eigenvalue of the drift part, i.e., $\beta = \pm\omega_{k\ell}$ for some pair $(k,\ell)$, and either the associated eigenvector must be an eigenvector of the drift part with the same eigenvalue, or the restriction of the feedback term to the corresponding subspace must vanish. indeed, if $i\beta$ is not an eigenvalue of the drift part then the shifted drift is invertible, and the matrix determinant lemma, after noting that the relevant matrix is block-diagonal with blocks indexed by the pairs $(k,\ell)$, leads to a condition of the form \[ 0 = \sum_{k,\ell}\vec{u}^{(k,\ell)t}\,c_\beta^{(k,\ell)}\,\vec{u}^{(k,\ell)} = \frac{-i\beta}{2}\sum_{k,\ell}\frac{\delta_{k\ell}\,\delta_{\tau(k)\tau(\ell)}}{\omega_{k\ell}^2 - \beta^2}\,|b_{k\ell}|^2, \] with $\tau$ the permutation associated with the critical point; since all terms in the sum are real, this is a contradiction. thus if $i\beta$ is a purely imaginary eigenvalue of the linearization then it must be an eigenvalue of the drift part, which means $\beta = \pm\omega_{k\ell}$ for some pair. without loss of generality assume $\beta = \omega_{12}$ and let $\vec{u}$ be the associated eigenvector. manipulating the resulting pair of equations ([eq:bx]) and using ([eq:b0inv]) shows that the coupling would have to have no support in the root space $\t_{12}$, which contradicts the assumption that $b$ is fully connected and $h_0$ has non-degenerate transition frequencies; on the other hand, if $\vec{u}$ is an eigenvector of the drift part with eigenvalue $i\omega_{12}$ and $h_0$ is strongly regular, then the projection of $\vec{u}$ onto the corresponding subspace is proportional to the drift eigenvector and zero elsewhere, which again leads to a contradiction. simulations suggest that lyapunov control is ineffective, i.e., fails to steer $\rho$ to $\rho_d$, or even to the orbit of $\rho_d$, in such cases. however, it is difficult to give a rigorous proof of this observation, as we lack a constructive method to ascertain asymptotic stability near a non-stationary solution. in the special case where the target trajectory is periodic there are tools such as poincaré maps, but it is difficult to write down an explicit form of the poincaré map for general periodic orbits. moreover, as observed earlier, the orbits of non-stationary target states under $h_0$ are periodic only in some exceptional cases. fortunately, though, we shall see that the set of target states for which the invariant set contains points with $[\rho, \rho_d] \neq 0$ is small. to make this precise, consider the stokes representation of the commutator map as a real matrix, with rows split into the parts corresponding to the cartan and non-cartan subspaces, and consider the block of rows whose image is the cartan part. for a generic $\rho_d$ the invariant set contains points with nonzero commutator if and only if this block has a kernel of larger than expected dimension. it suffices to show that if the kernel has the expected dimension, then any point with diagonal commutator in fact has vanishing commutator, so that the invariant set reduces to $\{\rho : [\rho, \rho_d] = 0\}$. in order to show that the kernel has the expected dimension, we recall that a diagonal commutator must vanish, and we can easily verify that if the diagonal elements of $\rho_d$ are not all equal then the relevant determinant is a non-trivial polynomial, i.e.
, can only have a finite set of zeros. hence we have: the invariant set for a generic $\rho_d$ contains points with nonzero commutator only if either $\rho_d$ has some equal diagonal elements or this polynomial vanishes. therefore, the set of target states $\rho_d$ for which the invariant set contains points with nonzero commutator has measure zero with respect to the state space. hence, if we choose a generic target state randomly then, with probability one, it will be such that the invariant set is $\{\rho : [\rho, \rho_d] = 0\}$; in that case any solution converges to one of the finitely many stationary states commuting with $\rho_d$, and all of these except the target state, which is stable, are unstable. for any solution there exists a sequence of times along which the state converges to a limit point. if the invariant set only contains pairs that commute, then we can choose an orthonormal basis such that both the limit point and $\rho_d$ are diagonal, and since they have the same spectrum, the diagonal elements of the limit point must be a permutation of those of $\rho_d$. thus the limit is one of the critical points, and if another subsequence had a different positive limit point, it would similarly be a critical point; since $v$ is non-increasing along the trajectory, the two limiting values must agree. therefore the result ([eqn:1]) holds for any subsequence. to see that all solutions except those converging to the target are unstable, we consider the dynamics in the interaction picture. let $\bar{\rho}(t) = e^{ih_0t}\rho(t)e^{-ih_0t}$ and $\bar{h}_1(t) = e^{ih_0t}h_1e^{-ih_0t}$; the dynamical system becomes [eqn:inter] \[ \dot{\bar{\rho}}(t) = -i\bar{f}(t)[\bar{h}_1(t), \bar{\rho}(t)], \qquad \bar{f}(t) = \tr([-i\bar{h}_1(t), \bar{\rho}(t)]\bar{\rho}_d), \] where $\bar{\rho}_d$ is now constant. thus the original autonomous dynamical system, in which $\rho_d$ is not stationary, has been transformed into a non-autonomous system, in which $\bar{\rho}_d$ is a fixed point. according to theorems [thm:generic:crit] and [thm:generic:hyperbolic], for a given $\rho_d$ there are finitely many hyperbolic critical points of the function $v$, one for each permutation of the eigenvalues, with the minimum and maximum attained at the target state and its 'antipode', respectively. they are also the fixed points of the dynamical system ([eqn:inter]). if the invariant set is $\{\bar{\rho} : [\bar{\rho}, \bar{\rho}_d] = 0\}$, simulations suggest that almost all solutions converge to $\bar{\rho}_d$, which is consistent with the theorem. however, unlike in the stationary case, we cannot conclude that the solutions converging to the saddles between the maximum and minimum constitute a measure-zero set. we can still show, though, that in principle there exist solutions starting very close to a saddle that converge to the target state, so the region of asymptotic stability of the target is at least not confined to a local neighborhood of it. in the interaction picture ([eqn:inter]), for any saddle point not all nearby solutions can converge to it: from the topological structure near a hyperbolic saddle point we know that the set of points converging to it is a lower-dimensional stable manifold, so we can choose an initial state arbitrarily close to the saddle whose solution is not stationary and leaves any small neighborhood of the saddle at some finite time. we have shown that for 'ideal' systems, lyapunov control is mostly effective for both pseudo-pure and generic states, which covers the largest and most important classes of states. finally, we show that if $\rho_d$ is stationary but has degenerate eigenvalues then there may be large critical manifolds, but we can still derive a result similar to the asymptotic stability of the target in the discussion of generic stationary states. [thm:degenerate:1] if $\rho_d$ is a stationary state with degenerate eigenvalues then $\rho_d$ is a hyperbolic critical point of the function $v$ and it is isolated from the other critical points. choose a basis such that $\rho_d$ is diagonal, and let $n_1, \ldots, n_r$ denote the multiplicities of the distinct eigenvalues, with $n_1 + \cdots + n_r = n$. using the same notation as in theorem [thm:crit:2], $\rho_d$ achieves the maximal value of $\tr(\rho\rho_d)$.
to show that it is a hyperbolic maximum of $\tr(\rho\rho_d)$ (hence minimum of $v$), we need to find independent directions along each of which it is a local maximum, their number being the dimension of the manifold. as in the proof of theorem [thm:crit:2], we note that for the curves generated by the off-diagonal generators, the conjugate action of $\lambda_{k\ell}$ or $\bar{\lambda}_{k\ell}$ on the critical point swaps the $k$-th and $\ell$-th diagonal elements. counting the swaps that decrease the value of $\tr(\rho\rho_d)$ therefore shows that $\rho_d$ is a hyperbolic point of $\hat{v}$, hence of $v$. since the critical values of $v$, as shown in eq. ([eqn:v]), are isolated and the value at $\rho_d$ is the unique minimum, $\rho_d$ must also be isolated from the other critical points, which completes the proof. furthermore, we can show that $\rho_d$ is also a hyperbolic fixed point of the dynamical system ([eqn:auto1]): [thm:degenerate:2] if $\rho_d$ is a stationary state with degenerate eigenvalues then $\rho_d$ is a hyperbolic sink of the dynamical system ([eqn:auto1]). as in theorem [thm:generic:hyperbolic], we need to analyze the eigenvalues of the linearization matrix. in order to show that $\rho_d$ is hyperbolic, it suffices to show that the eigenvalues corresponding to eigenvectors in the tangent space of the state manifold at $\rho_d$ have nonzero real parts. following a similar argument as in lemma [lemma:generic:3], it is easy to see that for pairs $(k,\ell)$ for which the corresponding eigenvalues of $\rho_d$ coincide, the eigenelements of the drift part are also eigenelements of the full linearization, and they correspond to directions orthogonal to the tangent space. by the same arguments as in the proof of theorem [thm:generic:hyperbolic], it is then easy to show that the remaining eigenvalues, whose eigenvectors correspond to directions in the tangent space, must have non-zero real parts. a simple counting argument gives the number of these eigenvalues, and thus $\rho_d$ is a hyperbolic point. since $\rho_d$ achieves the minimum of $v$, these eigenvalues must have negative real parts, i.e., $\rho_d$ must be a sink. hence any solution near $\rho_d$ will converge to it as $t \to \infty$, which establishes local asymptotic stability of $\rho_d$. the next question is whether this asymptotic convergence holds on a larger domain, as in the case of a stationary non-degenerate target. in order to answer this, we need to investigate the lasalle invariant set.
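membership in the invariant set can also be probed numerically. the sketch below evaluates the reconstructed necessary conditions tr([rho, rho_d] ad^m_{-i h0}(-i h1)) = 0 for a strongly regular h0 and a fully connected h1, confirming that a state commuting with rho_d satisfies them while a random same-spectrum state generically does not; the example matrices and function names are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def ad(x, y):
    return x @ y - y @ x

def invariant_set_residuals(rho, rho_d, h0, h1, m_max=8):
    """Evaluate |tr([rho, rho_d] ad^m_{-i h0}(-i h1))| for m = 0..m_max;
    these must all vanish for the pair to lie in the lasalle invariant
    set (necessary conditions as reconstructed in the text)."""
    c = ad(rho, rho_d)
    b = -1j * h1
    out = []
    for _ in range(m_max + 1):
        out.append(np.trace(c @ b))
        b = ad(-1j * h0, b)
    return np.abs(np.array(out))

h0 = np.diag([0.0, 1.0, 2.5])                     # strongly regular drift
h1 = np.ones((3, 3)) - np.eye(3)                  # fully connected coupling
rho_d = np.diag([0.6, 0.3, 0.1]).astype(complex)  # generic stationary target

rho_comm = np.diag([0.3, 0.6, 0.1]).astype(complex)  # commutes with rho_d
print(invariant_set_residuals(rho_comm, rho_d, h0, h1).max())   # ~ 0

x = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
q, _ = np.linalg.qr(x)
rho_rand = q @ rho_comm @ q.conj().T              # same spectrum, rotated
print(invariant_set_residuals(rho_rand, rho_d, h0, h1).max())   # > 0
```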
for a stationary $\rho_d$ with degenerate eigenvalues as in eq. ([eqn:degenerate]) there are finitely many distinct diagonal states satisfying $[\rho, \rho_d] = 0$; the question is what else the invariant set contains when the coupling is not fully connected. the condition ([eq:trace-cond1]) says that $[\rho, \rho_d]$ must be orthogonal to the subspace spanned by the sequence $b_m$. comparison with ([eq:bm]) shows that if a coefficient $b_{k\ell}$ vanishes then none of the generators $b_m$ have support in the root space $\t_{k\ell}$ of the lie algebra, and it is easy to see that the subspace generated by the $b_m$ is the direct sum of all root spaces $\t_{k\ell}$ with $b_{k\ell} \neq 0$. thus, in our example with $b_{13} = 0$, a necessary condition for $\rho$ to be in the invariant set is $[\rho, \rho_d] \in \t_{13}\oplus\c$, where $\c$ denotes the cartan part, i.e., $\rho$ must be of the form [eq:rho13] \[ \rho = \begin{pmatrix} \alpha_{11} & 0 & \alpha_{13} \\ 0 & \alpha_{22} & 0 \\ \alpha_{13}^* & 0 & \alpha_{33} \end{pmatrix}. \] furthermore, if $\rho$ is of type ([eq:rho13]) then \[ u_0(t)\,\rho\,u_0(t)^\dagger = \begin{pmatrix} \alpha_{11} & 0 & e^{i\omega_{13}t}\alpha_{13} \\ 0 & \alpha_{22} & 0 \\ e^{-i\omega_{13}t}\alpha_{13}^* & 0 & \alpha_{33} \end{pmatrix}, \] with $u_0(t) = e^{-ih_0t}$, which again has the form ([eq:rho13]). therefore states with $[\rho, \rho_d] \in \c\oplus\t_{13}$ remain of this type under the free evolution and belong to the invariant set. at the target the feedback contributions in the $\t_{13}$ block vanish, and the linearization has a pair of purely imaginary eigenvalues whose eigenspaces span the root space $\t_{13}$, together with four eigenvalues with non-zero real parts, which must be negative since $\rho_d$ is locally stable by the lyapunov construction. however, the existence of two purely imaginary eigenvalues means that the target state is no longer a hyperbolic fixed point: there is a centre manifold of dimension two. from centre manifold theory, the qualitative behavior near the fixed point is determined by the qualitative behavior of the flow on the centre manifold. therefore the next step is to determine the centre manifold. for dimensions greater than two this is generally a hard problem if we do not know the solution of the system. however, since we know the tangent space of the centre manifold, if we can find an invariant manifold that has this tangent space at the fixed point, then it is a centre manifold. in our case the solutions in the invariant set form a manifold that is diffeomorphic to the bloch sphere of a qubit system, with a natural embedding which maps the relevant states of the qutrit to points on the bloch sphere, and the two tangent vectors of the centre manifold at $\rho_d$ to the two tangent vectors of the bloch sphere at the corresponding point. thus this manifold is the required centre manifold at $\rho_d$. on the centre manifold $\rho_d$ is a centre, with the nearby solutions cycling around it. the hartman-grobman theorem of centre manifold theory proved by carr shows that all solutions outside the centre manifold converge exponentially to solutions on the centre manifold, while the solutions actually converging to $\rho_d$ itself constitute only a set of measure zero. therefore, when $h_1$ is not fully connected, the trajectories for most initial states will not converge to the target state (or another critical point of $v$) but to other trajectories, which are not in the orbit of $\rho_d$ either. next let us consider systems with fully connected $h_1$ but $h_0$ not strongly regular, e.g., with equally spaced energy levels. in order to determine the subspace spanned by the $b_m$ [see ([eq:bm])], we note that the characteristic vandermonde matrix ([eq:vandermonde]) of the system has rank two, as only the first two rows are linearly independent. we find that in this case the invariant set is characterized by $[\rho, \rho_d] \in \c\oplus\span\{\mu, \bar{\mu}\}$ for a suitable pair of generators $\mu$, $\bar{\mu}$; the remaining conditions fix all other components, so the corresponding states form a two-dimensional manifold with coordinates determined by the $\mu$ and $\bar{\mu}$ components of the commutator. as we are interested in the local dynamics near the target state, we again study the linearization at the fixed point for the case of a generic stationary state, i.e.
, $\rho_d$ diagonal with non-degenerate eigenvalues. using the same notation as before, the restriction of the drift linearization to the relevant subspace has six non-zero eigenvalues, one of which occurs with multiplicity two, and it follows that the full linearization also has six non-zero eigenvalues. however, two of these are purely imaginary, as can easily be checked, with explicitly computable eigenvectors. moreover, we know that all other eigenvalues must have negative (non-zero) real parts. analogous to the last subsection, we can show that the invariant set forms a centre manifold near $\rho_d$ with $\rho_d$ as a centre. thus, by the hartman-grobman theorem of centre manifold theory, we can again infer that most of the solutions near $\rho_d$ will not converge to $\rho_d$. we have presented a detailed analysis of the lyapunov method for the problem of steering a quantum system towards a stationary target state, or tracking the trajectory of a non-stationary target state under free evolution, for finite-dimensional quantum systems governed by a bilinear control hamiltonian. although our results are partially consistent with previously published work in the area, our analysis suggests a more complicated picture than previously described. first, to allow proper application of the lasalle invariance principle, we transform the original control problem into an autonomous dynamical system defined on an extended state space. characterization of the lasalle invariant set for this system shows that it always contains the full set of critical points of the distance-like lyapunov function defined on the extended state space, whose natural domain is the appropriate flag manifold for the density operators. consistent with previous work, we show that the critical points of $v$ are the only points in the invariant set for ideal systems, i.e., systems with a strongly regular drift hamiltonian and a fully connected control hamiltonian, and stationary target states. however, we also show that the invariant set is larger for non-stationary target states or non-ideal systems, the main difference being that for ideal systems there is only a measure-zero set of target states for which the invariant set is larger than the set of critical points, while for non-ideal systems the invariant set is always significantly larger. this observation is important because numerical simulations suggest that lyapunov control design is mostly effective if the invariant set is limited to the critical points of $v$, but likely to fail otherwise. our analysis for the various cases explains why.
for a generic target state (stationary or not) there is always a finite set of critical points of , and it can be shown using stability analysis that all of these critical points, except the target state, are unstable. specifically, for a stationary generic target state we can show that all the critical points are hyperbolic critical points of and hyperbolic critical points of the dynamical system, with the target state being the only hyperbolic sink. all the other critical points are hyperbolic saddles, except for one hyperbolic source corresponding to the global maximum. although this picture is somewhat similar to that presented in , our dynamical systems analysis shows that the other critical points, referred to as antipodal points in , are unstable but, except for the global maximum, not repulsive. in fact, all the hyperbolic saddles have stable manifolds of positive dimension. thus, the set of initial states that do not converge to the target state, even in this ideal case, is larger than the (finite) set of antipodal points itself, although for ideal systems and generic stationary target states it is a measure-zero subset of the state space. for stationary states with degenerate eigenvalues (non-generic states) the set of critical points is much larger, forming a collection of multiple critical manifolds. however, for ideal systems we can show that even in this case the target state is still the only hyperbolic sink of the dynamical system and asymptotically stable. thus, in general we can still conclude that most states will converge to the target state, although it is non-trivial to show that the set of states that converge to points on the other critical manifolds has measure zero, except for the class of pseudo-pure states. this class is special since the set of critical points in this case has only two components: a single isolated point corresponding to the global minimum of , which is a hyperbolic sink of the dynamical system, and a critical manifold homeomorphic to for , on which assumes its global maximum value. thus, although the points comprising the critical manifold are not repulsive, since is decreasing as a function of , no initial states outside this manifold can converge to it. we note that this argument was employed in to argue that the critical points other than the target state are `repulsive', but our analysis shows that it works only for the class of pseudo-pure states. thus, although our analysis suggests, e.g., that there are initial states other than the antipodal points that will not converge to the target state even for ideal systems, the set of states for which lyapunov control fails is small, except for a measure-zero set of target states for which the invariant set contains non-critical points. for ideal systems one could therefore conclude that the lyapunov method is overall an effective control strategy. however, most physical systems are not ideal, and the hamiltonians and are unlikely to satisfy the very stringent conditions of strong regularity and full connectedness, respectively. for instance, these assumptions rule out all systems with nearest-neighbour coupling only, as well as any system with equally spaced or degenerate energy levels, despite the fact that most of these systems can be shown to be completely controllable as bilinear hamiltonian control systems. in fact, the requirements for complete controllability are very low.
any system with a strongly regular drift hamiltonian whose transition graph is not disconnected, for instance, is controllable, and in many cases even much weaker requirements suffice. in practice, a bilinear hamiltonian system can generally fail to be controllable only if it is decomposable into non-interacting subsystems or has certain (lie group) symmetries ensuring that, e.g., the dynamics is restricted to a subgroup such as the symplectic group. unfortunately, our analysis shows that the picture changes drastically for non-ideal systems: the target state ceases to be a hyperbolic sink of the dynamical system and becomes a centre on a centre manifold contained in a significantly enlarged invariant set. using results from centre manifold theory, we must conclude that most of the solutions converge to solutions on the centre manifold other than the target state. this result casts serious doubt on the effectiveness of the lyapunov method for realistic systems; in fact, it strongly suggests that lyapunov control design is an effective method only for a very small subset of controllable quantum systems. these results appear to be in conflict with some recently published results on lyapunov control, which suggest that when the hamiltonian and target state satisfy a certain algebraic condition then any state that is not an `antipodal' point of asymptotically converges to the orbit of the target state, and which claim that the `antipodal' points are repulsive. since the notion of orbit convergence used in is weaker than the notion of convergence in the sense of trajectory tracking that we have used, one might conjecture this to be the source of the discrepancy, and since orbit tracking may be quite adequate for many control problems that do not require precise phase control, for instance, this could mean that lyapunov control might still be an effective control strategy for many quantum control problems. however, this does not appear to be the case here. for instance, the notions of orbit and trajectory tracking are identical for stationary target states, but even for ideal systems and stationary generic target states that satisfy the conditions in , our analysis suggests that the antipodal points, except for one global maximum, are hyperbolic saddle points and hence unstable but not repulsive. furthermore, careful analysis of our results shows that for ideal systems convergence of to the orbit of implies , except for a measure-zero set of target states.

xw is supported by the cambridge overseas trust and an elizabeth cherry major scholarship from hughes hall, cambridge. sgs acknowledges uk research council funding from an advanced research fellowship and additional support from the and hitachi. she is currently also a marie curie fellow under the european union knowledge transfer programme mtdk-ct-2004-509223. we sincerely thank peter pemberton-ross, tsung-lung tsai, christopher taylor, jack waldron, jony evans, dan jane, yaxiang yuan, jonathan dawes, lluis masanes, rob spekkens and ivan smith for interesting and fruitful discussions.
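before leaving this example, note that the two ideal-system conditions invoked throughout — strong regularity of the drift hamiltonian and full connectedness of the control hamiltonian — are straightforward to test numerically. a sketch follows; the example hamiltonians are illustrative and not taken from the paper, and the control is assumed to be expressed in the eigenbasis of the drift for the connectedness test.

```python
import numpy as np
from itertools import combinations

def is_strongly_regular(H0, tol=1e-9):
    """Drift check: all transition frequencies E_k - E_j (j < k) are distinct."""
    E = np.sort(np.linalg.eigvalsh(H0))
    freqs = [E[k] - E[j] for j, k in combinations(range(len(E)), 2)]
    return all(abs(a - b) > tol for a, b in combinations(freqs, 2))

def is_fully_connected(H1, tol=1e-9):
    """Control check: every off-diagonal element (in the drift eigenbasis) nonzero."""
    n = H1.shape[0]
    return all(abs(H1[j, k]) > tol for j in range(n) for k in range(n) if j != k)

# illustrative 3-level example (not from the paper)
H0 = np.diag([0.0, 1.0, 2.5])                                   # gaps 1.0, 1.5, 2.5
H1 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)   # fully connected
H1_nn = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # nearest-neighbour only

print(is_strongly_regular(H0), is_fully_connected(H1), is_fully_connected(H1_nn))
# -> True True False: nearest-neighbour coupling violates full connectedness
```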
we present a detailed analysis of the convergence properties of lyapunov control for finite-dimensional quantum systems, based on the application of the lasalle invariance principle and on stability analysis from dynamical systems and control theory. convergence results are derived under an ideal choice of the hamiltonians, and the effectiveness of the method when these ideal conditions are relaxed is discussed.
we study the optimal insurance demand problem of an agent whose wealth is subject to shocks produced by some marked point process. such a problem was formulated by briys in continuous time with poisson shocks. gollier studied a similar problem where shocks are not proportional to wealth. an explicit solution to the problem is provided by briys by writing the associated hamilton-jacobi-bellman (hjb in short) equation. briys and gollier modeled the insurance premium by an affine function of the insurance strategy } ]. we impose a constraint of non-bankruptcy on the wealth process of the agent for all . the objective of the agent is to maximize the expected utility of the terminal wealth over all admissible strategies and to determine the optimal policy of insurance. + in mnif , we studied the latter stochastic control problem with state constraint by duality methods. duality methods were introduced by karatzas et al. and cox and huang . we characterized the dual value function, by a pde approach, as the unique solution of the associated hjbvi. in this paper, we determine numerically the optimal strategy of investment and the optimal reserve process. usually, the optimal strategy is determined in feedback form by using the primal approach and solving the associated hjb equation. the originality of this work is that, thanks to a verification theorem, the optimal reserve process is related to the derivative of the dual value function with respect to the dual state variable. when the shocks are modeled by a poisson process, we can obtain an explicit expression of the optimal strategy of insurance in terms of the dual value function. the paper is organized as follows. section 2 describes the model. in section 3, we formulate the dual optimization problem and we derive the associated hjbvi for the value function. in section 4, we prove a verification theorem: we show that if there exists a solution to the hjbvi then, subject to some regularity conditions, it is the value function of the dual problem. the optimal insurance strategy can be characterized completely by the value function of the dual problem. section 5 is devoted to a numerical analysis of the hjbvi: the hjbvi is discretized by using finite difference schemes and solved by using an algorithm based on the ``howard algorithm'' (policy iteration). numerical results are presented; they provide the optimal insurance strategy and the optimal wealth process of the agent.

let be a complete probability space. we assume that the claims are generated by a compound poisson process. more precisely, we consider an integer-valued random measure with compensator . we assume that where is a probability distribution on the bounded set and is a positive constant. in this case, the integral, with respect to the random measure, is simply a compound poisson process: we have , where is a poisson process with intensity and is a sequence of random variables with common distribution which represent the claim sizes. + let be a finite time horizon. we denote by the filtration generated by the random measure. + by definition of the intensity, the compensated jump process : is such that \times b),0\leq t\leq t\} ] is then given by : we assume that , which means that the premium rate received by the agent is lower than the premium rate paid to the insurer.
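for intuition, here is a minimal monte carlo sketch of the insured wealth dynamics described above. the premium rule, the parameter values and the crude clamping used for the non-bankruptcy constraint are all illustrative assumptions, since the exact coefficients of the model are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative parameters (not the paper's): income rate alpha, insurance
# premium rate beta, Poisson claim intensity lam, uniform claim sizes on (0, c]
alpha, beta, lam, c = 1.0, 1.2, 0.5, 1.0
T, n_steps = 10.0, 2000
dt = T / n_steps

def simulate_wealth(x0, k):
    """Euler scheme for dX_t = (alpha - (1-k)*beta) dt - k dC_t, where C is a
    compound Poisson claim process and k in [0,1] is the retained (uninsured)
    fraction of each claim -- a constant proportional strategy for simplicity."""
    x, path = x0, [x0]
    for _ in range(n_steps):
        x += (alpha - (1.0 - k) * beta) * dt   # premium in, insurance premium out
        if rng.random() < lam * dt:            # a claim arrives in this step
            x -= k * rng.uniform(0.0, c)       # agent pays the retained share
        x = max(x, 0.0)                        # crude stand-in for non-bankruptcy
        path.append(x)
    return np.array(path)

for k in (0.0, 0.5, 1.0):                      # full insurance ... no insurance
    print(k, simulate_wealth(x0=2.0, k=k)[-1].round(3))
```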
in the literature , this problem is known as a proportional reinsurance one .the agent is an insurer who has to pay a premium to the reinsurer .we impose that the insurance strategy satisfies : \,\,\ , \mbox { a.s . for all } t\leq s\leq t. \end{aligned}\ ] ] we also impose the following non - bankruptcy constraint on the wealth process : given an initial wealth at time , an admissible policy is a predictable stochastic process , such that conditions ( [ bornekm ] ) and ( [ contraintem ] ) are satisfied .we denote by the set of all admissible policies and .+ our agent has preferences modeled by a utility function .[ hypu ] we assume that the agent s utility is described by a crra utility function i.e. , where .we denote by the inverse of and we introduce the conjugate function of defined by a straightforward calculus shows that where and for all .+ the objective of the agent is to find the value function which is defined as we introduce some notations .let and ] .+ by girsanov s theorem , the predictable compensator of an element under is : we deduce from lemma 2.1 of fllmer and kramkov that and the upper variation process of is : from the non - decreasing property of , we have ,\end{aligned}\ ] ] where .mnif and pham gave the following dual characterization of the set \leq x\nonumber,\end{aligned}\ ] ] where is the subset of elements such that is bounded and is the set of all stopping times valued in ] defined as follows : \times ( 0,\infty)):=\big\{\displaystyle{f:[0,t ] * \times ( 0,\infty)\rightarrow\r}\mbox { such that } , \\ \,\sup_{y>0 } \frac{| f(t , y)|}{y+y^{-\gamma}}<\infty \mbox { and } \ ,\sup_{x>0,y>0 } \frac{|f(t , x)-f(t , y)|}{|x - y|(1+x^{-(\gamma+1)}+y^{-(\gamma+1)})}<\infty\big\}.\end{aligned}\ ] ]the main result of this section is the following verification theorem .it characterizes the optimal wealth process .when we model the jump by a poisson process , the optimal insurance strategy is expressed in terms of the hjbvi solution .our stochastic control problem is unusual , in the sense that , the control is unbounded predictable process and , given by , is also unbounded . for technical reason ,we need to add the following integrability conditions that we will check later in the case of poisson process .[ hyprhol1 ] we fix ] , + ( ii ) there exist two borel functions , such that \times c , \end{aligned}\ ] ] and .the following lemma states the growth condition of the dual value function .[ ub ] the dual value function is locally bounded and satisfies * proof . * see appendix .+ + [ verificationm ] suppose that there exists a solution to the hjbvi ( [ hjb2 ] ) , denoted by with terminal condition such that is continuously differentiable w.r.t and , is continuously differentiable w.r.t and in the no jump region and satisfies the growth condition .+ suppose that assumption [ hyprhol1 ] holds .suppose further that there exist a borel function , a process , ] stays in the no jump region almost surely .the process might have jumps in the region but reaches immediately the region .[ rq2 ] hypothesis means that the process regulates the process and decreases only when the wealth process hits zero .if all the shocks have the same size denoted by , then the optimal insurance strategy is given by .\end{aligned}\ ] ] from definition of ( see assumption [ hypothese2 ] ) , decreases only on the set or on this set , we have and so . 
by its lemma we obtain using hypothesis , the regularity on the function and it s lemma , we have plugging into and using , we obtain and so is the optimal insurance strategy . if all the shocks have the same size denoted by , then the set is given by predictable process : , a.s . , and = 1\} ] into \times(0,1) ] [ dmp ] the introduction of the function ( see equality ) , insures that the matrix is diagonally dominant . to solve equation ( [ eqdiscret1 m ] ) , we use the howard algorithm ( see lapeyre sulem and talay ) , also named policy iteration .+ it consists on computing two sequences and , ( starting from ) defined by : * step . to associated another strategy * step . to the strategy ,we compute a partition such that the solution is obtained by solving two linear systems : and * if , , stop , otherwise , go to step .the convergence the howard algorithm is obtained heuristically .we have no theoretical result for the convergence .the matrix arising after the discretization of the hjbvi does not satisfy the discrete maximum principle which is a sufficient condition for the convergence of such algorithm .after the numerical resolution of variational inequality ( [ dombornem ] ) , we compute the optimal strategy of insurance and the wealth process . from the verification theorem, we need to evaluate and to construct the process . + the optimal insurance strategy and the wealth process are given by formulas and .we describe below the algorithm . +* first step * : given an initial wealth , * we compute s.t and , + where , and * we compute . *second step * : let .for to , we construct the process as follows : * we compute and we select the nearest point of the grid to . this point will be denoted by .* we determine the optimal control which is obtained by howard algorithm at point .we denote this control by . *we evaluate .we take . *we compute ( resp ) and we select the point of the grid which is the nearest to ( resp ) .this point will be denoted by ( resp ) . *we make the following instruction : while , we decrease the process .we denote by the new point of the grid .* the optimal insurance strategy and the optimal wealth process are given by equation ( [ dominfinim ] ) is solved by using the howard algorithm .numerical tests are performed with the parameters given in table [ parametresn ] ..values for the model s parameters [ cols="^,^,^,^,^,^,^",options="header " , ] we suppose that there are two claims at times and .we first choose a uniform discretization step in the state coordinate .it is equal to .for the discretization step in time , we take .we compute the value of and the corresponding index .then we choose two discretization steps in the state coordinate .if \cup[(j_0 + 2)p,1) ] , the discretization step is equal to .we mention that the operation of choosing the point nearest to the grid is delicate which oblige us to reduce the discretization step in the zone ] lie in , we have where is a constant . + let be a minimizing sequence of . from the definition of these minimizing sequences , there exist and such that when and for all , we have \nonumber\\ & + & y e\left[\int_t^t z^n_ud^n_u ( \alpha -\beta + ( \beta -\int_c\rho^n_u(z)\,z\ , \pi(dz))_{+})du \right]-\epsilon_n.\end{aligned}\ ] ] since when , there exists such that for all , we have .we recall that and so since . 
using the boundedness of , jensen s inequality and the martingale property of , we have : &\geq & \tilde u(ye\left[z^n_t\right])\nonumber\\ & \geq & \tilde u(y).\end{aligned}\ ] ] for the second term of the r.h.s of inequality , since for all ] and .+ * first step * : we show that .\end{aligned}\ ] ] let .let applying the generalized it^ o s formula , we have and so we have since is a classical solution of the variational inequality , we have .\end{aligned}\ ] ] taking expectation in , we have , \end{aligned}\ ] ] for all .it remains to show that we consider the function , will be chosen later , . by using it^ o s formula andsince the function is a power utility function , we have the solution of is given by the dolans - dade exponential formula where } ] , we have taking expectation under and using assumption [ hyprhol1](ii ) , we obtain & \leq & 2\big(1+\int_t^s\int_c| z^{\rho}_{1u}|^2\left ( \rho_u(z)^{-\gamma p}-1 \right)^2\pi(dz)du\big)\\ & \leq&2\big(1+e\int_t^s |z^{\rho}_{1u}|^2du\big).\end{aligned}\ ] ] by fubini s theorem and gronwall s lemma , we have \leq c_1\end{aligned}\ ] ] from inequalities and , we obtain that \leq c_1 g ( \tilde u(y)),\end{aligned}\ ] ] and so <\infty.\end{aligned}\ ] ] similarly , one can prove that <\infty. ] .we consider the processes and and the positive number such that and hold .then , we have .\end{aligned}\ ] ] let taking expectation in , we have .\end{aligned}\ ] ] since the family is uniformly integrable under , equation implies and so is the solution of the dual problem . + * third step * : we show that defined by , ] .a. tourin and t. zariphopoulou , viscosity solutions and numerical schemes for investment consumption models with transaction costs , numerical methods in finance , edited by l.c.g .rogers and d. talay , 1997 .
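the howard (policy iteration) scheme used in section 5 alternates a linear solve for the value of the frozen policy with a pointwise policy improvement. a generic sketch for a discretized stationary equation of the form min_a [ a(a) v - b(a) ] = 0 follows; the matrices are illustrative, diagonally dominant stand-ins for the finite-difference operators of the hjbvi, not the paper's discretization.

```python
import numpy as np

def howard(A_of, b_of, controls, n, tol=1e-10, max_iter=100):
    """Policy iteration for min_a [ A(a) v - b(a) ] = 0 on n grid points.

    A_of(a) -> (n, n) matrix and b_of(a) -> (n,) vector for control value a.
    Each sweep: (1) solve the linear system row-wise assembled from the
    current policy, (2) improve the policy pointwise over the residuals.
    """
    policy = np.full(n, controls[0])
    v = np.zeros(n)
    for _ in range(max_iter):
        A = np.array([A_of(policy[i])[i] for i in range(n)])   # value step
        b = np.array([b_of(policy[i])[i] for i in range(n)])
        v_new = np.linalg.solve(A, b)
        residuals = np.stack([A_of(a) @ v_new - b_of(a) for a in controls])
        policy = np.array([controls[residuals[:, i].argmin()]  # improvement step
                           for i in range(n)])
        if np.max(np.abs(v_new - v)) < tol:
            return v_new, policy
        v = v_new
    return v, policy

# toy 1d problem: diffusion plus controlled drift and a quadratic running cost
n = 50
controls = np.linspace(0.0, 1.0, 11)
I = np.eye(n)
D2 = np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
D1 = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / 2
A_of = lambda a: D2 + a * D1 + 0.1 * I      # diagonally dominant for a in [0, 1]
b_of = lambda a: np.ones(n) + 0.5 * a**2

v, policy = howard(A_of, b_of, controls, n)
print(v[:5].round(4), policy[:5])
```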
this paper deals with the numerical solution of the problem of maximizing expected utility from terminal wealth under a non-bankruptcy constraint. the wealth process is subject to shocks produced by a general marked point process. the problem of the agent is to derive the optimal insurance strategy which allows ``lowering'' the level of the shocks. this optimization problem is related to a suitable dual stochastic control problem in which the delicate boundary constraints disappear. in mnif , the dual value function is characterized as the unique viscosity solution of the corresponding hamilton-jacobi-bellman variational inequality (hjbvi in short). we characterize the optimal insurance strategy by the solution of the variational inequality, which we solve numerically by using an algorithm based on policy iterations. * key words :* optimal insurance ; stochastic control ; duality ; dynamic programming principle ; howard algorithm . * msc classification (2000) :* 93e20 , 60j75 , 65n06 .
the decay of a discontinuity separating two constant initial states ( _ riemann problem _ ) has played a very important role in the development of numerical hydrodynamic codes in classical ( newtonian ) hydrodynamics after the pioneering work of godunov . nowadays , most modern high - resolution shock - capturing methods are based on the exact or approximate solution of riemann problems between adjacent numerical cells and the development of efficient riemann solvers has become a research field in numerical analysis in its own ( see , e.g. , ) .riemann solvers began to be introduced in numerical relativistic hydrodynamics at the beginning of the nineties and , presently , the use of high - resolution shock - capturing methods based on riemann solvers is considered as the best strategy to solve the equations of relativistic hydrodynamics in nuclear physics ( heavy ion collisions ) and astrophysics ( stellar core collapse , supernova explosions , extragalactic jets , gamma - ray bursts ) .this fact has caused a rapid development of riemann solvers for both special and general relativistic hydrodynamics .the main idea behind the solution of a riemann problem ( defined by two constant initial states , and , at left and right of their common contact surface ) is that the self - similarity of the flow through rarefaction waves and the rankine - hugoniot relations across shocks allow one to connect the intermediate states ( ) with their corresponding initial states , .the analytical solution of the riemann problem in classical hydrodynamics rests on the fact that the normal velocity in the intermediate states , , can be written as a function of the pressure in that state ( and the flow conditions in state ) .thus , once is known , and all other unknown state quantities of can be calculated . in the case of relativistic hydrodynamicsthe same procedure can be followed , the major difference with classical hydrodynamics stemming from the role of tangential velocities . while in the classical case the decay of the initial discontinuity does not depend on the tangential velocity ( which is constant across shock waves and rarefactions ) , in relativistic calculations the components of the flow velocity are coupled through the presence of the lorentz factor .the equations of relativistic hydrodynamics admit a conservative formulation which has been exploited in the last decade to implement high - resolution shock - capturing methods . in minkowski space timethe equations in this formulation read _ t * u * + _ i * f*^(i ) = 0 [ cl ] where and ( ) are , respectively , the vectors of conserved variables and fluxes = ( d , s^1,s^2,s^3,)^t ^(i ) = ( d v^i , s^1 v^i + p ^1i , s^2 v^i + p ^2i , s^3 v^i + p ^3i , s^i - d v^i)^t .the conserved variables ( the rest - mass density , , the momentum density , , and the energy density ) are defined in terms of the _ primitive variables_, , according to where is the lorentz factor and the specific enthalpy . in the followingwe shall restrict our discussion to an ideal gas equation of state with constant adiabatic exponent , , for which the specific internal energy is given by = .choosing the surface of discontinuity to be normal to the -axis , rarefaction waves are self - similar solutions of the flow equations depending only on the combination . 
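the map from primitive variables to the conserved variables defined above is purely algebraic. the sketch below implements it for the ideal-gas equation of state, using the standard srhd definitions with c = 1; the variable names follow common usage and are not necessarily those of the paper.

```python
import numpy as np

GAMMA = 5.0 / 3.0   # adiabatic exponent of the ideal-gas equation of state

def prim_to_cons(rho, v, p):
    """(rho, v, p) -> (D, S, tau) for special relativistic hydrodynamics.

    v is the 3-velocity; eps = p / ((GAMMA - 1) rho) is the specific internal
    energy, h = 1 + eps + p / rho the specific enthalpy, and
    W = (1 - v.v)^(-1/2) the Lorentz factor.
    """
    v = np.asarray(v, dtype=float)
    v2 = v @ v
    assert v2 < 1.0, "speeds must be subluminal"
    W = 1.0 / np.sqrt(1.0 - v2)
    h = 1.0 + p / ((GAMMA - 1.0) * rho) + p / rho
    D = rho * W                     # rest-mass density
    S = rho * h * W**2 * v          # momentum density
    tau = rho * h * W**2 - p - D    # energy density minus rest-mass density
    return D, S, tau

print(prim_to_cons(1.0, [0.9, 0.0, 0.0], 1.0))
```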
getting rid of all the terms with and derivatives in equations ( [ cl ] ) and substituting the derivatives of and in terms of the derivatives of , the system of equations can be reduced to just one ordinary differential equation ( ode ) and two algebraic conditions with constrained by }}{1-v^2c_s^2 } , \label{xi6 } \end{aligned}\ ] ] because non - trivial similarity solutions exist only if the wronskian of the original system vanish .we have denoted by the speed of sound , provided by the equation of state .the plus and minus sign correspond to rarefaction waves propagating to the right ( ) and left ( ) , respectively .the two solutions for correspond to the maximum and minimum eigenvalues of the jacobian matrix associated to , generalizing the result found for vanishing tangential velocity . from equations ( [ rar2 ] ) and ( [ rar3 ] )it follows that constant , i.e. , the tangential velocity does not change direction across rarefaction waves .notice that , in a kinematical sense , the newtonian limit ( ) leads to , but equations ( [ rar2 ] ) and ( [ rar3 ] ) do not reduce to the classical limit constant , because the specific enthalpy still couples the tangential velocities .thus , even for slow flows , the riemann solution presented in this paper must be employed for thermodynamically relativistic situations ( ) .the same result can be deduced from the rankine - hugoniot relations for shock waves ( see next section ) . using ( [ xi6 ] ) andthe definition of the sound speed , the ode ( [ ode ] ) can be written as = [ ode2 ] where is the absolute value of the tangential velocity and g(_,v^x , v^t)=. considering that in a riemann problem the state ahead of the rarefaction wave is known , equation ( [ ode2 ] ) can be integrated with the constraint constant , allowing to connect the states ahead ( ) and behind ( ) the rarefaction wave .the solution is only a function of and can be stated in compact form as v^x_b = r^a_(p_b ) .the rankine - hugoniot conditions relate the states on both sides of a shock and are based on the continuity of the mass flux and the energy - momentum flux across shocks .their relativistic version was first obtained by taub ( see also ) . considering the surface of discontinuity as normal to the -axis , the invariant mass flux across the shockcan be written as j w_s d_a ( v_s - v^x_a ) = w_s d_b ( v_s - v^x_b ) .[ mflux ] where is the coordinate velocity of the hyper - surface that defines the position of the shock wave and is the correspondent lorentz factor , . according to our definition , is positive for shocks propagating to the right . in terms of the mass flux , ,the rankine - hugoniot conditions are = - , [ vx ] = , [ p ] = 0 , [ rhvy ] = 0 , [ rhvz ] = .[ vp ] equations ( [ rhvy ] ) and ( [ rhvz ] ) imply that the quantity is constant across a shock wave and , hence , that the orientation of the tangential velocity does not change .the latter result also holds for rarefaction waves ( see 3 ) .equations ( [ vx ] ) , ( [ p ] ) and ( [ vp ] ) can be manipulated to obtain as a function of , and . using the relation and after some algebra, one finds v^x_b = ( h_a w_a v^x_a + ) ( h_a w_a + ( p_b - p_a ) ( + ) ) ^-1 . 
the final step is to express and as a function of the post - shock pressure .first , from the definition of the mass flux we obtain where ( ) corresponds to shocks propagating to the right ( left ) .second , from the rankine - hugoniot relations and the physical solution of obtained from the taub adiabat ( the relativistic version of the hugoniot adiabat ) , that relates only thermodynamic quantities on both sides of the shock , the square of the mass flux can be obtained as a function of . using the positive ( negative ) root of for shock waves propagating towards the right ( left ) , the desired relation between the post - shock normal velocity and the post - shock pressure is obtained . in a compact waythe relation reads v^x_b = s^a_(p_b ) .we refer to the interested reader to references for further details .the time evolution of a riemann problem can be represented as : i l_l _ * r_*_r where denotes a simple wave ( shock or rarefaction ) , moving towards the initial left ( ) or right ( ) states . between them , two new states appear , namely and , separated from each other through the third wave , which is a contact discontinuity . across the contact discontinuity pressure and normal velocityare constant , while the density and the tangential velocity exhibits a jump . as in the newtonian case ,the compressive character of shock waves ( density and pressure rise across the shock ) allows us to discriminate between shocks ( ) and rarefaction waves ( ) : _ ( ) = \ { rcl r _ ( ) & , & p_b p_a + s _ ( ) & , & p_b > p_a . [ shandrar ] where is the pressure and subscripts and denote quantities ahead and behind the wave . for the riemann problem and for and , respectively .the solution of the riemann problem consists in finding the positions of the waves separating the four states and the intermediate states , and .the functions and allow one to determine the functions and , respectively . the pressure and the flow velocity in the intermediate statesare then given by the condition v^x_r*(p _ * ) = v^x_l*(p _ * ) = v^x_*. [ vp0 ] which is an implicit algebraic equation in and can be solved by means of an iterative method .when and have been obtained , the equation of state gives the specific internal energy and the remaining state variables of the intermediate state can be calculated using the relations between and the respective initial state given through the corresponding wave .notice that the solution of the riemann problem depends on the modulus of , but not on the direction of the tangential velocity .figure [ fig3 ] shows the solution of a particular riemann problem for different values of the tangential velocity .the crossing point of any two lines in the upper panel gives the pressure and the normal velocity in the intermediate states .whereas the pressure in the intermediate state can take any value between and , the normal flow velocity can be arbitrarily close to zero in the case of an extremely relativistic tangential flow . 
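the implicit equation ( [ vp0 ] ) can be solved with any bracketing root finder once the wave curves are available. the sketch below uses scipy's brentq with monotone placeholder curves; in an actual solver these callables would be built from the rarefaction relation r and the shock relation s derived above, switching between them according to ( [ shandrar ] ).

```python
from scipy.optimize import brentq

def solve_pstar(vx_left_star, vx_right_star, p_lo, p_hi):
    """Find the intermediate pressure p* such that the normal velocities
    behind the left- and right-moving waves coincide (eq. [vp0]).

    vx_left_star(p), vx_right_star(p): post-wave normal velocity as a
    function of the trial pressure p behind the corresponding wave.
    """
    f = lambda p: vx_left_star(p) - vx_right_star(p)
    return brentq(f, p_lo, p_hi)

# placeholder wave curves (monotone toy functions, NOT the SRHD relations)
vxl = lambda p: 0.9 - 0.5 * p    # left curve: velocity drops as p* grows
vxr = lambda p: -0.1 + 0.3 * p   # right curve: velocity rises with p*

p_star = solve_pstar(vxl, vxr, 1e-8, 10.0)
print(p_star, vxl(p_star))       # intermediate pressure and normal velocity
```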
to study the influence of tangential velocities on the solution of a riemann problem, we have calculated the solution of a standard test involving the propagation of a relativistic blast wave produced by a large jump in the initial pressure distribution, for different combinations of tangential velocities. although the structure of the solution remains unchanged for different tangential velocities, the values in the constant state may change by a large amount.

we have obtained the exact solution of the riemann problem in special relativistic hydrodynamics with arbitrary tangential velocities. unlike in newtonian hydrodynamics, the tangential velocities are coupled with the rest of the variables through the lorentz factor, which is present in all terms of all equations. it strongly affects the solution, especially for ultra-relativistic tangential flows. in addition, the specific enthalpy also acts as a coupling factor and modifies the solution for the tangential velocities in thermodynamically relativistic situations (energy density and pressure comparable to or larger than the proper rest-mass density), rendering the classical solution incorrect in slow flows with very large internal energies. our solution has interesting practical applications. first, it can be used to test the different approximate relativistic riemann solvers and the multi-dimensional hydrodynamic codes based on directional splitting. second, it can be used to construct multi-dimensional relativistic godunov-type hydro codes. as an example, we have simulated a relativistic tube in a cartesian grid, where the initial discontinuity was located along a main diagonal. the initial states were , , , , , , and the adiabatic index is . spatial accuracy was set to second order by means of a monotonic piecewise linear reconstruction procedure, and second order in time is obtained by using a runge-kutta method for time advancing. the exact solution of the riemann problem is used at every interface to calculate the numerical fluxes. the results are shown in figure 2 and are comparable to those obtained with other hrsc methods: profiles of all variables are stable and discontinuities are well resolved without excessive smearing. an efficient implementation of this exact riemann solver in the context of multidimensional relativistic ppm is in progress and will be reported elsewhere.
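for reference, a first-order godunov update built on such an exact riemann solver reduces to a few lines. the smoke test below uses scalar advection as a stand-in for the srhd flux, and the boundary treatment is deliberately naive; in the test above the placeholder callable would be replaced by the flux of the exact riemann solution at each interface.

```python
import numpy as np

def godunov_step(U, flux_at_interface, dt, dx):
    """One first-order Godunov update for du/dt + df/dx = 0.

    U: (n_cells, n_vars) array of conserved variables.
    flux_at_interface(uL, uR) -> flux of the Riemann solution at x/t = 0.
    """
    F = np.array([flux_at_interface(U[i], U[i + 1]) for i in range(len(U) - 1)])
    U_new = U.copy()
    U_new[1:-1] -= dt / dx * (F[1:] - F[:-1])   # interior cells; boundaries frozen
    return U_new

# smoke test on scalar advection with unit speed: the exact Riemann solution
# at the interface is simply the left (upwind) state
adv_flux = lambda uL, uR: uL
U = np.where(np.arange(100) < 50, 1.0, 0.0)[:, None]
for _ in range(40):
    U = godunov_step(U, adv_flux, dt=0.4, dx=1.0)   # CFL number 0.4
print(U[45:55, 0].round(2))   # the discontinuity has moved to the right
```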
we have generalised the _ exact _ solution of the riemann problem in special relativistic hydrodynamics to arbitrary tangential flow velocities. the solution is obtained by solving the jump conditions across shocks plus an ordinary differential equation arising from the self-similarity condition along rarefaction waves, in a similar way as for purely normal flow. the dependence of the solution on the tangential velocities is analysed. this solution has been used to build up an _ exact riemann solver _ implemented in a multidimensional relativistic (godunov-type) hydro-code. d'astronomia i astrofísica , u. valència , 46100 burjassot , spain + mpi für astrophysik , karl-schwarzschild-str. 1 , 85748 garching , germany e-mail : jose.a.pons.es
in order to establish the well - posedness of the mean values of quantum observables represented by unbounded operators , we investigate the regularity of solutions of quantum master equations ( with unbounded coefficients ) in stationary and transient regimes . for this purpose ,we use classical stochastic analysis . in many open quantum systems , the states of a small quantum system with hamiltonian according to the operator equation where ( see , e.g. , ) . here, is a separable complex hilbert space , are given linear operators in satisfying on suitable common domain and the unknown density operator is a nonnegative operator in with unit trace .the operators describe the weak interaction between the small quantum system and a heat bath .the measurable physical quantities of the small quantum system are represented by self - adjoint operators in , which are called observables .very important observables are unbounded , like position , momentum and kinetic energy operators . in the schrdinger picture ,the mean value of the observable at time is given by , the trace of .in the heisenberg picture , the initial density operator is fixed .using , for instance , ( [ 3 ] ) we obtain the following equation of motion for the observable : where ; see , for example , .the expected value of at time time is given by .the evolution of the state of a quantum system conditioned on continuous measurement is governed ( see , e.g. , ) by the stochastic evolution equation on . here , and are real valued independent wiener processes .[ exmeasurement ] set .let be defined by and . in ( [ 5 ] ) , take , and , with , and . for all , fix .example [ exmeasurement ] with describes the simultaneous monitoring of position and momentum of a linear harmonic oscillator ; see , for example , .taking instead and we get a well - studied model for the continuous measurement of position of a free particle ; see , for example , and references therein .our main tool for studying ( [ 3 ] ) and ( [ 41 ] ) is the following linear sse on : where are real valued independent wiener processes on a filtered complete probability space .in fact , the basic assumption of this paper is : there exists a nonnegative self - adjoint operator in such that : ( i ) is relatively bounded with respect to ; and ( ii ) ( [ 2 ] ) has a unique -solution for any initial condition satisfying . here , a strong solution of ( [ 2 ] ) is called -solution if , and the function is uniformly bounded on compact time intervals ; see definition [ definicion2 ] for details .the law of with respect to coincides with the law of for all ] , and consequently we can not apply directly it s formula to .we overcome this difficulty in section [ subsecteorema3 ] by applying it s formula to a regularized version of ; the resulting stochastic integrals ( similar to those in ) are only local martingales , and so we have to use stopping times .section [ secadqme ] addresses the existence and uniqueness of solutions for the adjoint quantum master equation , as well as its probabilistic representation .section [ secprob - rep ] deals with the probabilistic interpretations of regular density operators . in section [ secqme ]we construct schrdinger evolutions by means of stochastic schrdinger equations and study the regularity of solutions to ( [ 3 ] ) . section [ secstatsol ] focusses on the existence of regular stationary solutions for ( [ 3 ] ) . in section [ secoscillator ]we apply our results to a quantum oscillator .section [ secproofs ] is devoted to proofs . 
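a minimal simulation of the linear sse ( [ 2 ] ) for the simultaneous position/momentum monitoring of example [ exmeasurement ] can be written with plain euler–maruyama in a truncated fock basis. the truncation dimension, the coupling strengths and the step size below are illustrative choices, not values from the paper, and the scheme is not structure-preserving.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 30                                            # Fock truncation (assumption)
a = np.diag(np.sqrt(np.arange(1, N)), 1)          # annihilation operator
q = (a + a.conj().T) / np.sqrt(2)                 # position quadrature
p = -1j * (a - a.conj().T) / np.sqrt(2)           # momentum quadrature
H = a.conj().T @ a + 0.5 * np.eye(N)              # harmonic oscillator Hamiltonian
Ls = [0.3 * q, 0.3 * p]                           # L1 ~ q, L2 ~ p (0.3 assumed)

G = -1j * H - 0.5 * sum(L.conj().T @ L for L in Ls)   # drift generator of the SSE

def euler_maruyama(psi0, T=2.0, n_steps=4000):
    """dX_t = G X_t dt + sum_k L_k X_t dW^k_t (linear; the norm is NOT preserved
    pathwise, only its expectation is conserved for the exact dynamics)."""
    dt = T / n_steps
    psi = psi0.astype(complex)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=len(Ls))
        psi = psi + G @ psi * dt + sum(w * (L @ psi) for w, L in zip(dW, Ls))
    return psi

psi0 = np.zeros(N, dtype=complex)
psi0[1] = 1.0                                     # start in the first excited state
psi = euler_maruyama(psi0)
print(np.vdot(psi, psi).real)   # squared norm; close to 1 up to scheme error
```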
throughout this paper ,the scalar product is linear in the second variable and anti - linear in the first one .we write for the borel -algebra on .suppose that is a linear operator in .then denotes the adjoint of .if has a unique bounded extension to , then we continue to write for the closure of .let , be normed spaces .we write for the set of all bounded operators from to ( together with norm ) .we abbreviate to , if no misunderstanding is possible , and define .by we mean the subset of all nonnegative trace - class operators on .let be a self - adjoint positive operator in .then , for any we set and . as usual, stands for the set of all square integrable random variables from to .we write for the set of all satisfying a.s . and . the function is defined by if and whenever . in the sequel , the letter denotes generic constants .we begin by presenting in detail the notion of -solution to ( [ 2 ] ) .[ hipn3 ] suppose that is a self - adjoint positive operator in such that is a subset of the domains of and the maps are measurable .[ definicion2 ] let hypothesis [ hipn3 ] hold .assume that is either or ] * -a.s .[ notx ] the symbol will be reserved for the strong -solution of ( [ 2 ] ) with initial datum .[ notamedibilidad ] suppose that is a self - adjoint positive operator in , together with .then is measurable whenever is equipped with its borel -algebra ( see , e.g. , for details ) .we now make more precise our basic assumptions , that is hypothesis ( h ) .[ hipn5 ] suppose that hypothesis [ hipn3 ] holds .in addition , assume : the operator belongs to . for all , .let be -measurable .then for all , ( [ 2 ] ) has a unique strong -solution on ] .[ teorema3 ] suppose that hypothesis [ hipn5 ] holds .let belong to .then , for every nonnegative real number there exists a unique in such that for all in , moreover , any -solution of ( [ 41 ] ) with initial datum coincides with , and for all . the proofs fall naturally into lemmata [ lema41 ] and [ lema42 ] . as a by - product of our proof of the existence of solutions to ( [ 3 ] ) , we `` construct '' a solution to ( [ 41 ] ) , and so theorem [ teorema3 ] leads to theorem [ teorema10 ] .[ teorema10 ] let hypothesis [ hipn5 ] hold .suppose that and that is as in theorem [ teorema3 ] . then is the unique -solution of ( [ 41 ] ) with initial datum .lemmata [ lema26 ] and [ lema20 ] shows that is a -solution of ( [ 41 ] ) with initial datum .theorem [ teorema3 ] now completes the proof . in , c. m. mora developed the existence and uniqueness of the solution to ( [ 41 ] ) with unbounded , as well as its probabilistic representation . thus taking , corollary 14 of established the statement of theorem [ teorema10 ] under assumptions including the existence of an orthonormal basis of that satisfies , for example , and for all , where is the orthogonal projection of over the linear manifold spanned by . in theorem [ teorema10 ]we remove this basis , extending the range of applications . [ nota7 ] suppose that for all .let be the infinitesimal generator of a -semigroup of contractions .define the sequence of linear contractions on by where , , and .a. m. chebotarev proved that picard s successive approximations converge as to a quantum dynamical semigroup which is a weak solution to ( [ 41 ] ) ; see , for example , .holevo developed the probabilistic representation of under restrictions , including that and are the infinitesimal generators of -semigroup of contractions . 
from chebotarev and fagnola we have that for any , provided that there exists a self - adjoint positive operator in and a linear manifold which is a core for such that : ( i ) the semigroup generated by leaves invariant ; and ( ii ) for some , for all ; see also .this implies the uniqueness ( in the semigroup sense ) of the solution to ( [ 41 ] ) with bounded ; see , for example , .in addition to its proof , the main novelty of theorem [ teorema10 ] is that we do not assume properties like are the infinitesimal generators of a semigroup and condition ( i ) , which involves the study of invariant sets for . the latter is not an easy problem in general .the following notion of a regular density operator was introduced by chebotarev , garca and quezada to investigate the identity preserving property of minimal quantum dynamical semigroups .[ def2 ] let be a self - adjoint positive operator in .an operator belonging to is called -regular iff for some countable set , summable nonnegative real numbers and family of elements of , which together satisfy .we write for the set of all -regular density operators .we next formulate the concept of -regular operators in terms of random variables .this characterization of complements those given in using operator theory ; see also .[ teorema4 ] suppose that is a self - adjoint positive operator in .let be a linear operator in . then is -regular if and only if for some .moreover , can be interpreted as a bochner integral in both and .the proof is divided into lemmata [ lema6 ] and [ lema53 ] . by the following theorem ,the mean values of a large number of unbounded observables are well posed when the density operators are -regular .theorem [ teorema8 ] also provides probabilistic interpretations of these expected values .[ teorema8 ] suppose that is a self - adjoint positive operator in , and fix with .then : the range of is contained in and .consider , and let be a densely defined linear operator in such that .then is densely defined and bounded .the unique bounded extension of belongs to and is equal to , where is a well defined bochner integral in both and .moreover , deferred to section [ subsecteorema8 ] .we first deduce that ( [ i4 ] ) defines a density operator .[ teor7 ] let hypothesis [ hipn5 ] hold .then , for every there exists a unique operator such that for each -regular operator , where is an arbitrary random variable in satisfying . here is the strong -solution of ( [ 2 ] ) with initial datum , and we can interpret as a bochner integral in as well as in .moreover , for all . deferred to section [ subsecteor7 ] .[ notrho ] from now on , stands for the operator given by ( [ 31 ] ) .the next theorem says that the expected value commutes with the action of on random -regular pure density operators .[ teor9 ] assume that hypothesis [ hipn5 ] holds .let , with . 
then for all .deferred to section [ subsecteor9 ] .we now summarize some relevant properties of the family of linear operators .[ teor8 ] adopt hypothesis [ hipn5 ] .then is a semigroup of contractions such that , , and for all , the proof is divided into lemmata [ lema14 ] , [ lema24 ] and [ lema25 ] .the analysis outlined in section [ subsecexistence ] leads to our first main theorem , which asserts that satisfies ( [ 3 ] ) in both senses , integral and -weak , whenever is -regular .[ hipn1 ] the operators are closable .[ teor10 ] let hypotheses [ hipn5 ] and [ hipn1 ] hold .suppose that is -regular .then for all , where we understand the above integral in the sense of the bochner integral in .moreover , for any and , deferred to section [ subsecteor10 ] .let be densely defined .then hypothesis [ hipn1 ] is equivalent to saying that are densely defined .the second main theorem of this paper establishes that under hypothesis [ hipn5 ] , is the unique solution of ( [ 312 ] ) in the semigroup sense .its proof is based on arguments given in section [ subsecuniqueness ] .[ defsemigroupsol ] a semigroup of bounded operators on is called semigroup -solution of ( [ 3 ] ) if and only if : for each nonnegative real number , } } \vert\widehat{\rho}_{t}\vert_{\mathfrak{l } ( \mathfrak{l}_{1 } ( \mathfrak{h } ) ) } < \infty ] provided .we say that is a -solution of ( [ 5 ] ) with initial distribution on if and only if : * is a sequence of real valued independent brownian motions on the filtered complete probability space .* is an -valued process with continuous sample paths such that the law of coincides with and for all .* for every -a.s . and } \mathbb{e}_{\mathbb{q}}\vert cy_{s}\vert ^{2}<\infty ] defined by where and . since the range of is contained in , condition ( a ) of definition [ definicion3 ] yield with . according to conditions ( a ) , ( b ) of definition [ definicion3 ] , we have that is continuous for all , and so combining with hypothesis [ hipn5 ] we get the uniform continuity of on bounded subsets of \times\mathfrak{h } \times\mathfrak{h} ] : and the function is equal to .we next establish the martingale property of .for all ] is a martingale .the same conclusion can be drawn for and so } ] for ( see , e.g. , theorem 4.2.5 of ) , using the dominated convergence theorem , together with the continuity of , we get applying again the dominated convergence theorem yields , and hence letting in ( [ 415 ] ) we deduce that \\[-8pt ] & & \qquad= \mathbb{e } \int_{0}^{t } \bigl ( - g ( s , r_n x_{s } ( x ) , r_n x_{s } ( y ) ) + g_{n } ( s , x_{s } ( x ) , x_{s } ( y ) ) \bigr ) \,ds.\nonumber\end{aligned}\ ] ] finally , we take the limit as in ( [ 414 ] ) .since and tends pointwise to as , the dominated convergence theorem yields for any , .by , using the dominated convergence theorem gives thus , letting in ( [ 414 ] ) we obtain ( [ 45 ] ) .we begin by examining the properties of the bochner integral when .[ lema51 ] suppose that and belong to .then defines an element of , which moreover , is given by for all . here, is well defined as a bochner integral with values in both and .in addition , .we first get .since the image of lies in the set of all rank - one operators on , takes values in . 
applying parseval s equality yields hence is -measurable because the dual of is formed by all maps with .let .the absolute value of the operator is equal to the operator in case , and coincides with the null operator otherwise .therefore combining with ( [ 55 ] ) gives , and so the bochner integral is well defined in the separable banach space .we now turn to work in .the application from to is continuous , and in consequence the measurability of and implies that is -measurable .thus using we deduce that is bochner -integrable in ; see , for example , for a treatment of the bochner integral in banach spaces which , in general , are not separable .since is continuously embedded in , either of the interpretations of given above refers to the same operator .finally , for any belonging to , the linear function is continuous as a map from to .this gives ( [ 53 ] ) .similarly , ( [ 58 ] ) yields , because . under the assumptions of lemma [ lema51 ], can also be interpreted as a bochner integral in the pointwise sense ; see , for example , . to prove theorem [ teorema8 ] , we need the following lemma .[ lema52 ] let be a self - adjoint positive operator in .suppose that and .then belongs to . since -a.s ., from remark [ notamedibilidad ] we deduce that is strongly measurable .thus .proof of theorem [ teorema8 ] we start by proving statement ( a ) .let and let .using lemma [ lema51 ] yields in lemma [ lema52 ] we take to obtain .thus , lemma [ lema51 ] implies , and so . then and , which is our assertion .part ( a ) yields , and so is densely defined .we next prove that coincides with on . for this purpose , we approximate by , where is the yosida approximation of .suppose that and .as in the proof of lemma [ lema42 ] we consider , and so for any . therefore , and hence lemma [ lema51 ] gives since and commutes with , .using the dominated convergence theorem we obtain since is densely defined , is a closed operator .remark [ nota1 ] now shows that , and so applying lemma [ lema52 ] gives . combining ( [ 57 ] ) with lemma [ lema51 ]we get .since the closure of is equal to , we complete the proof of statement ( b ) by using lemma [ lema51 ] .first , we easily construct a random variable that represents a given -regular operator .[ lema6 ] let , with self - adjoint positive operator in .then there exists such that and a.s . in case , we take . otherwise , consider that is written as in definition [ def2 ] .then , we choose , and for any we define and .second , we use part ( a ) of theorem [ teorema8 ] , together with lemma [ lema51 ] , to establish the sufficient condition of theorem [ teorema4 ] .let be a self - adjoint positive operator in .suppose that , with .then is -regular .lemma [ lema51 ] shows that , hence , where is a countable set , are summable positive real numbers and is a orthonormal family of vectors of .using statement ( a ) of theorem [ teorema8 ] yields for all .we can extend to an orthonormal basis of formed by elements of .from parseval s equality we obtain and so . combining lemma [ lema51 ] with parseval s equalitywe now get this gives .we first establish , in our framework , the well - known relation between heisenberg and schrdinger pictures . [ lema10 ] suppose that hypothesis [ hipn5 ] holds , together with .let be as in theorem [ teorema3 ] .then for all , fix , and define the function by if , and otherwise . using the markov property of , which can be obtained by techniques of well - posed martingale problems , we get where for all . 
we will take the limit as in ( [ n311 ] ) .the dominated convergence theorem leads to combining ( [ 62 ] ) with ( [ 42 ] ) yields whenever . since , according to the dominated convergence theorem , we have as .then , letting in ( [ n311 ] ) we get by ( [ 62 ] ) , and so theorem [ teorema8 ] leads to ( [ 35 ] ) .we next check that is well defined by ( [ 31 ] ) .[ lema13 ] let hypothesis [ hipn5 ] hold and consider such that . then .let .using lemma [ lema10 ] yields hence ; see , for example , proposition 9.12 of .we now address the contraction property of the restriction of to .[ lema11 ] let hypothesis [ hipn5 ] hold .if are -regular , then since , according to lemma [ lema10 ] we have therefore , and so theorem [ teorema3 ] leads to ( [ n312 ] ) .the following lemma helps us to extend to all .[ lema12 ] suppose that is a self - adjoint positive operator in .then is dense in with respect to the trace norm .let .then there exists a sequence of orthonormal vectors for which , with and . for any we have andso is a -dense subset of since is dense in .now , the lemma follows from .proof of theorem [ teor7 ] combining theorem [ teorema4 ] with lemma [ lema13 ] we obtain that ( [ 31 ] ) defines unambiguously a linear operator for any and .lemma [ lema12 ] guarantees the uniqueness of the operator belonging to for which ( [ 31 ] ) holds .we next extend to a bounded linear operator in by means of density arguments .suppose that .by lemma [ lema12 ] , there exists a sequence of -regular operators for which .we define to be the limit in of as ; according to lemma [ lema11 ] this limit exists and does not depend on the choice of .recall that every has a unique decomposition of the form , with and self - adjoint operators in .for each we set where , denotes , respectively , the positive and negative parts of the self - adjoint operator ; see , for example , for details . we will verify that .let , with for any .since , lemma [ lema10 ] yields the construction of now implies for all .consider two -regular operators and . by definition [ def2 ], belongs to . if , then applying lemma [ lema10 ] we obtain therefore , and so lemma [ lema12 ] leads to for any .careful algebraic manipulations now show the linearity of .let us first prove the continuity of the map . [ lema16 ]assume that hypothesis [ hipn5 ] holds .let and , with , be random variables in satisfying .then converges in to as .let . combining ( [ 31 ] ) with the linearity of ( [ 2 ] ) we get in the last inequality we used that for .proof of theorem [ teor9 ] there exits a sequence of -valued random variables with finite ranges such that converges monotonically to ; see , for example , . by lemma [ lema16 ], converges to in .since is linear , an easy computation shows that , hence we will prove that converges to in as , which together with ( [ n37 ] ) implies . from lemma [ lema16 ]we obtain . for any we have , and so lemma [ lema11 ] yields therefore our proofis divided into three lemmata .the first two deal with the semigroup property of .[ lema14 ] let hypothesis [ hipn5 ] hold , and let be -regular .then for all , belongs to and whenever .since , combining theorem [ teorema4 ] with ( [ 31 ] ) gives .we will establish the semigroup property of the restriction of to . consider satisfying , and fix . for all we define if , and otherwise . 
using the markov property of deduce that where for all , .let .applying the dominated convergence theorem gives hence .then , and so theorem [ teor9 ] leads to by ( [ 38 ] ) , in ( [ 331 ] ) we replace by and by to obtain thus , letting in ( [ 38 ] ) , we get by ( [ 331 ] ) .[ lema24 ] under hypothesis [ hipn5 ] , is a semigroup of contractions which leaves invariant . by theorem [ teor7 ] , .since is positive whenever is -regular , using lemma [ lema12 ] yields for any and .suppose that , where are -regular operators . applying ( [ 31 ] )gives , and lemma [ lema14 ] asserts that for any .then , combining lemma [ lema12 ] with density arguments , we deduce that is a semigroup .we now examine the continuity of the map when is -regular .[ lema25 ] adopt hypothesis [ hipn5 ] , together with .then the map from to is continuous .consider such that .theorem [ teorema8 ] yields for all , and so combining theorem [ teorema8 ] with the cauchy schwarz inequality yields since } \vert x_s ( \xi ) \vert^{2 } ) < \infty ] we get that converges to .thus , and so .hence converges to weakly in .second , we show that the probabilistic representation of the right - hand side of ( [ 312 ] ) is continuous as a function from to .[ lema18 ] let hypothesis [ hipn5 ] hold .fix and .then , the function that maps each in to the complex number , is continuous .let be a sequence of nonnegative real numbers such that converges to .since } \vert x_s ( \xi ) \vert^{2 } ) < \infty ] with the dominated convergence theorem gives then , letting first in ( [ 131 ] ) and then using fubini s theorem , we get by condition ( h2.2 ) , the dominated convergence theorem leads to and so lemma [ lema51 ] yields , hence since the dual of consists in all linear maps with , lemma [ lema61b ] implies that is measurable as a function from to . furthermore , using lemma [ lema61 ] we get that is a bochner integrable -valued function on bounded intervals. then ( [ 63 ] ) , together with ( [ 61 ] ) , gives ( [ 136 ] ) .we are in position to show ( [ 311 ] ) and ( [ 312 ] ) with the help of hypothesis [ hipn1 ] .proof of theorem [ teor10 ] by theorem [ teorema4 ] , for certain .theorem [ teorema8 ] now gives .applying hypothesis [ hipn1 ] we get that are densely defined and , coincide with the closures of respectively ; see , for example , theorem iii.5.29 of . theorem [ teorema8 ] yields and .therefore where is as in lemma [ lema61 ] . combining ( [ 64 ] ) with lemma [ lema23] we get ( [ 311 ] ) , and so for all . using the continuity of obtain ( [ 312 ] ) .we first obtain the existence of a solution of ( [ 3 ] ) in the semigroup sense , without hypothesis [ hipn1 ] .[ lema26 ] under hypothesis [ hipn5 ] , is a semigroup -solution of ( [ 3 ] ) . by theorem [ teor8 ] , is a semigroup of bounded operators on that satisfies property ( i ) of definition [ defsemigroupsol ] .fix , with .thus is a -regular operator , and so ( [ 310 ] ) leads to property ( ii ) .finally , using lemmata [ lema61b ] and [ lema23 ] we get property ( iii ) .we next make it legitimate to use in our context the duality relation between quantum master equations and adjoint quantum master equations .[ lema20 ] let hypothesis [ hipn5 ] hold .suppose that and that is a semigroup -solution of ( [ 3 ] ) .then is a -solution of ( [ 41 ] ) with initial datum , where is the adjoint semigroup of ( see , e.g. 
, ) , that is , is the unique semigroup of bounded operators on such that for all and , using ( [ 315 ] ) we get that for all vectors whose norm is , we conclude from ( [ 55 ] ) that , hence that , and finally that applying property ( i ) of definition [ defsemigroupsol ] gives property ( b ) of definition [ definicion3 ] . in order to verify property ( a ), we will prove the continuity of for any .as in the proof of lemma [ lema42 ] , we define for . according to ( [ 315 ] )we have since , property ( ii ) of definition [ defsemigroupsol ] implies the continuity of the function . by ( [ 316 ] ) , using we deduce that the map is continuous , so is by the polarization identity .assume that .by ( [ 315 ] ) , combining with property ( iii ) of definition [ defsemigroupsol ] yields & & \qquad = \lim_{s\rightarrow0+}\frac{1}{s}\bigl ( \operatorname{tr } ( \widehat{\rho } _ { s } ( \vert x\rangle\langle x\vert ) \widehat{\rho}_{t}^{\ast } ( a ) ) -\operatorname{tr } ( \vert x\rangle\langle x\vert\widehat{\rho } _ { t}^{\ast } ( a ) ) \bigr)\\[-2pt ] & & \qquad= \mathcal{l } ( \widehat{\rho}_{t}^{\ast } ( a),x ) \end{aligned}\ ] ] with .thus from ( [ 316 ] ) and condition ( h2.2 ) we get that is uniformly convergent on bounded intervals , and so is continuous , and hence the application is continuous .therefore is continuously differentiable ( see , e.g. , section 2.1 of ) .property ( a ) of definition [ definicion3 ] now follows from ( [ 325 ] ) .we are in position to show our second main theorem .proof of theorem [ teorema9 ] let be a semigroup -solution of ( [ 3 ] ) .consider the adjoint semigroup of , and let be given by theorem [ teorema3 ] .combining lemma [ lema20 ] with theorem [ teorema3 ] we obtain for all and .if and , then applying ( [ 315 ] ) and lemma [ lema10 ] yields and so .lemma [ lema12 ] now implies that for all belonging to , hence .finally , lemma [ lema26 ] completes the proof . from have that hypothesis [ hipn5 ] holds with . hence theorem [ teorema9 ]yields our first assertion .suppose that or .using , for instance , the spectral theorem , we deduce the existence of a sequence of bounded self - adjoint operators in such that for all we have and . applying theorems [ teorema8 ] and [ teor10 ] ( or better lemmata [ lema61b ] and [ lema23 ] ) gives where with . by the dominated convergence theorem , letting we obtain let . using = -i i ] and f a = q a = p ] , we choose if and otherwise ; let {s } ] is a -solution of ( [ 5 ] ) with initial law . by remark[ nota6 ] , ( [ 5 ] ) has a unique -solution with initial distribution .therefore the distribution of with respect to coincides with the distribution of under . from have that } ] , applying ( [ 31 ] ) and the polarization identity gives .let be the -solution of ( [ 5 ] ) with initial distribution ; see remark [ nota6 ] .choose .then , theorem [ corolario2 ] shows that for all . 
as in the proof of theorem 3 of , applying techniques of well - posed martingale problems we obtain the markov property of the -solutions of ( [ 5 ] ) under hypothesis [ hipn5 ] . hence for any and } ( \vert\langle x , y_{t}\rangle\vert^{2})\bigr)= \mathbb{e}\biggl ( \int_{\mathfrak{h}}\mathbf{1 } _ { [ 0,\vert x\vert^{2 } ] } ( \vert\langle x , y\rangle \vert^{2 } ) p_{t } ( y_{0},dy ) \biggr).\ ] ] on the other hand , using ( [ im1 ] ) we deduce that } ( \vert\langle x , y_{0}\rangle\vert^{2 } ) \bigr ) = \int_{\mathfrak{h}}\biggl ( \int_{\mathfrak{h}}\mathbf{1 } _ { [ 0,\vert x\vert^{2 } ] } ( \vert\langle x , y\rangle\vert^{2 } ) p_{t } ( z , dy ) \biggr ) \gamma ( dz ) .\ ] ] now , combining with } ( \vert\langle x , y\rangle\vert^{2 } ) p_{t } ( z , dy ) ) \gamma ( dz ) = \mathbb{e } ( \int_{\mathfrak{h}}\mathbf{1 } _ { [ 0,\vert x\vert^{2 } ] } ( \vert\langle x , y\rangle \vert^{2 } ) p_{t } ( y_{0},dy ) ) $ ] we get . this gives and so . since , from remark [ nota3 ] we have that is a closable operator satisfying . fix such that is equal to for all except a finite number . an easy computation shows that is equal to the sum of and where is a -degree polynomial whose coefficients depend on with . hence satisfies hypothesis [ hipn4 ] whenever . from follows that fulfills condition ( h2.3 ) of hypothesis [ hipn5 ] , and so theorems [ teor10 ] and [ teorema9 ] lead to statement ( i ) . from theorem 8 of we have the existence of an invariant probability measure for ( [ 5 ] ) that satisfies the properties given in hypothesis [ hip2 ] with . using theorem [ teorema7 ] yields statement ( ii ) . the author wishes to express his gratitude to the anonymous referees , whose suggestions and constructive criticisms led to substantial improvements in the presentation . moreover , i thank franco fagnola and roberto quezada for helpful comments .
applying probabilistic techniques we study regularity properties of quantum master equations ( qmes ) in the lindblad form with unbounded coefficients ; a density operator is regular if , roughly speaking , it describes a quantum state with finite energy . using the linear stochastic schrödinger equation we deduce that solutions of qmes preserve the regularity of the initial states under a general nonexplosion condition . to this end , we develop the probabilistic representation of qmes , and we prove the uniqueness of solutions for adjoint quantum master equations . by means of the nonlinear stochastic schrödinger equation , we obtain the existence of regular stationary solutions for qmes under a lyapunov - type condition .
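as a concrete companion to the lindblad dynamics studied above , a minimal finite - dimensional sketch of a qme in lindblad form for a single qubit ( the hamiltonian and lowering operator here are assumed toy choices ; the paper itself treats unbounded coefficients , which a bounded toy case cannot capture ) :

import numpy as np

# euler integration of the lindblad qme
#   d rho / dt = -i [ h , rho ] + l rho l^dag - ( l^dag l rho + rho l^dag l ) / 2
sx = np.array([[0., 1.], [1., 0.]], dtype=complex)
H = 0.5 * sx                                        # assumed toy hamiltonian
L = np.array([[0., 1.], [0., 0.]], dtype=complex)   # assumed lowering operator
rho = np.diag([0., 1.]).astype(complex)             # initial state |1><1|
dt = 1e-3
for _ in range(5000):
    comm = H @ rho - rho @ H
    diss = L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    rho = rho + dt * (-1j * comm + diss)
print(np.trace(rho).real)   # the scheme conserves the trace exactly , up to rounding

the generator has zero trace , so trace preservation holds step by step ; regularity ( finite energy ) of the evolved state is exactly the kind of property the probabilistic representation above is designed to control .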
quantum - state tomography ( qst ) is a standard tool used to characterize , validate , and verify the performance of quantum information processors . unfortunately , qst is a demanding experimental task , partly because the number of free parameters of an arbitrary quantum state scales quadratically with the dimension of the system . to overcome this difficulty , one can study qst protocols which include prior information about the system and effectively reduce the number of free parameters in the model . in this work we study qst under the prior information that the state of the system is close to a pure state , and more generally , that it is close to a bounded - rank state ( a density matrix with rank less than or equal to a given value ) . indeed , in most quantum information processing applications the goal is not to manipulate arbitrary states , but to create and coherently evolve pure states . when the device is performing well , and there are only small errors , the quantum state produced will be close to a pure state , and the density matrix will have a dominant eigenvalue . one can use other techniques , e.g. randomized benchmarking , to gain confidence that it is operating near this regime . this important prior information can be applied to significantly reduce the resources required for qst . we study different aspects of informational completeness that allow for efficient estimation in this scenario , and robust estimation in the face of noise or when the state is full rank but still close to a bounded - rank state . bounded - rank qst has been studied by a number of previous workers , and has been shown to require fewer resources than general qst . one approach is based on the compressed sensing methodology , where certain sets of randomly chosen measurements guarantee a robust estimation of low - rank states with high probability . other schemes , not related to compressed sensing , construct specific measurements that accomplish bounded - rank qst , and some of these protocols have been implemented experimentally . in addition , some general properties of such measurements have been derived . when considering bounded - rank qst a natural notion of informational completeness emerges , referred to as _ rank- completeness _ . a measurement ( a povm ) is rank- complete if the outcome probabilities uniquely distinguish a state with rank from any other state with rank . a rigorous definition is given below . the set of quantum states with rank , however , is not convex , and in general we cannot construct efficient estimators based on rank- complete measurements that will yield a reliable state reconstruction in the presence of experimental noise . this poses a concern for the practicality of such measurements for qst . the purpose of this contribution is twofold : ( i ) we develop the significance of a different notion of informational completeness that we denote as _ rank- strict - completeness _ . we prove that strictly - complete measurements allow for robust estimation of bounded - rank states in the realistic case of noise and experimental imperfections by solving essentially any convex program . because of this , strictly - complete measurements are crucial for the implementation of bounded - rank qst . ( ii ) we study two different types of strictly - complete measurements and show that they require fewer resources than general qst . the first is a special type of measurement called `` element - probing '' povm ( ep - povm ) . for example , the measurements proposed in refs . are ep - povms .
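before the formal definitions given below , the basic object can be made concrete : a short numpy sketch of the measurement map that sends a psd matrix to its vector of outcome probabilities ( the povm and state here are assumed toy examples , and the helper name is ours ) :

import numpy as np

def measurement_vector(povm, rho):
    # the map a[.] = [ tr(e_1 .), ..., tr(e_m .) ] applied to a psd matrix
    return np.array([np.trace(E @ rho).real for E in povm])

# toy usage : a random pure state and the computational - basis povm
d = 4
psi = np.random.randn(d) + 1j * np.random.randn(d)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())
povm = [np.diag(e) for e in np.eye(d)]     # projectors |k><k| , summing to identity
print(measurement_vector(povm, rho).sum())  # = tr(rho) = 1 , since the elements sum to identity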
in this context , the problem of qst translates to the problem of density matrix completion , where the goal is to recover the entire density matrix when only a few of its elements are given .the formalism we develop here entirely captures the underlying structure of all ep - povms and solves the problem of bounded - rank density matrix completion .the second type of strictly - complete povm we study is the set of haar - random bases .based on numerical evidence we argue that measurement outcomes of a few random bases form a strictly - complete povm and that the number of bases required to achieve strict completeness scales weakly with the dimension and rank . the remainder of this article is organized as follows . in sec .[ sec : info complete ] we establish the different definitions of informational completeness and in sec . [ sec : power of strict ] we demonstrate the power of strictly - complete povms for practical tomography .we show how such povms allow us to employ convex optimization tools in quantum state estimators , and how the result is robust to experimental noise . in sec .[ sec : constructions ] we establish a complete theory of rank- complete and strictly - complete povms for the case of ep - povms and explore numerically how measurements in random orthogonal bases yield a strictly - complete povm .we also demonstrate the robustness of strictly - complete measurements with numerical simulations of noisy measurements in sec .[ sec : numerics ] .we summarize and conclude in sec .[ sec : conclusions ] .completeness . * the measurement record , distinguishes the rank state from any other rank psd matrix. however , there generally will be infinitely many other states , with rank greater than , that are consistent with the measurement record . *( b ) rank- strict - completeness . *the measurement record distinguishes the rank state from any other psd matrix .thus it is unique in the convex set of psd matrices .] qst has two basic ingredients , states and measurements , so it is important to define these precisely .a quantum state in a -dimensional hilbert space , , is a density matrix , , that is positive semidefinite ( psd ) and normalized to unit trace .a quantum measurement with possible outcomes ( events ) is defined by a positive - operator valued measure ( povm ) with elements , .a povm then has an associated map=[{{\rm tr}}(e_1\cdot),\ldots,{{\rm tr}}(e_m\cdot)] ] , which we refer to this as the measurement vector . "a povm is _ fully informationally complete _ if the measurement vector , , distinguishes the state from all other states .a fully informationally complete povm must have linearly independent elements .we commonly think of povms as acting on quantum states , but mathematically we can apply them , more generally , on psd matrices . in this workwe discuss povms acting on psd matrices as it highlights the fact that our definitions and results are independent of the trace constraint of quantum states , and only depend on the positivity property .to accomplish this we treat the map , ] where and .the second expression shows that since , by definition , the povm elements sum to the identity , the povm always measures the trace of the matrix .it is also useful to define the kernel of a povm , ={\bf 0}\} ] be the corresponding measurement vector of a rank- strictly - complete povm . 
then , the solution to =\bm{p}\ , \ , { \rm and } \ , \ , x \geq 0,\ ] ] or to -\bm{p}\vert\;\ ;{ \rm s.t.}\ ; x \geq 0,\ ] ] where is a any convex function of , and is any norm function , is unique : .+ _ proof : _ this is a direct corollary of the definition of strict - completeness . since , by definition , the probabilities of rank- strictly - complete povm uniquely determine from within the set of all psd matrices , its reconstruction becomes a feasibility problem over the convex set =\bm{p},x\geq0\} ] .in particular , in the context of qst with idealized noiseless data , the constraint would be redundant ; the reconstructed state would necessarily be properly normalized .this corollary implies that strictly - complete povms allow for the reconstruction of bounded - rank states via convex optimization even though the set of bounded - rank states is nonconvex .moreover , all convex programs over the feasible solution set , i.e. , of the form of eqs . and , are equivalent for this task . for example , this result applies to maximum-(log)likelihood estimation where .corollary 1 does not apply for states in the povm s failure set , if such set exists .it is also essential that the estimation protocol be robust to noise and other imperfections . in any real experimentthe measurement vector necessarily contains noise due to finite statistics and systematic errors .moreover , any real state assignment should have full rank , and the assumption that the state has rank is only an approximation .producing a robust estimate in this case with rank- complete measurements is generally a hard problem since the set of bounded - rank states in not convex .strict - completeness , however , together with the convergence properties of convex programs , ensure a robust state estimation in realistic experimental scenarios .this is the main advantage of strictly - complete measurements and is formalized in the following corollary . +* corollary 2 ( robustness ) : * let be the state of the system , and let +{\bm e} ] for some quantum state with , then the solution to -\bm{f}\vert\leq\epsilon\ , \ , { \rm and } \ , \ , x \geq 0,\ ] ] or to -\bm{f}\vert\;\ ; { \rm s.t.}\ ; x \geq 0,\ ] ] where is a any convex function of , is robust : , and , where is any -norm , and is a constant which depends only on the povm . + +the proof , given in appendix [ app : proof ] , is derived from lemma v.5 of ref . where it was proved for the particular choice . in ref . this was also studied in the context of compressed sensing measurements .this corollary assures that if the state of the system is close to a bounded - rank density matrix and is measured with strictly - complete measurements , then it can be robustly estimated with any convex program , constrained to the set of psd matrices . in particular , it implies that all convex estimators perform qualitatively the same for low - rank state estimation .this may be advantageous especially when considering estimation of high - dimensional quantum states .as in the noiseless case , the trace constraint is not necessary for corollary 2 , and in fact leaving it out allows us to make different choices for , as was done in ref . . however , for a noisy measurement vector , the estimated matrix is generally not normalized , .the final estimation of the state is then given by . in principle, we can consider a different version of eqs . 
and where we include the trace constraint , andthis may have implications for the issue of bias " in the estimator .this will be studied in more details elsewhere .so far , we have shown that strictly - complete measurements are advantageous because of their compatibility with convex optimization ( corollary 1 ) and their robustness to statistical noise and to state preparation errors ( corollary 2 ) .we have not , however , discussed how to find such measurements or the resources required to implement them . in this sectionwe answer these questions with two different approaches .first , we describe a general framework that can be used to construct strictly - complete ep - povms , and we explicitly construct two examples of such povms in appendix [ app : construction ] .second , we numerically study the number of random bases that corresponds to strictly - complete povms for certain states rank and dimension . in both caseswe find that the number of povm elements required is , implying that strictly - complete measurements can be implemented efficiently .ep - povms are special types of povms where the measurement probabilities determine a subset of the total matrix elements , referred to as the `` measured elements . '' with ep - povms , the task of qst is to reconstruct the remaining ( unmeasured ) density matrix elements from the measured elements , and thus , in this case , qst is equivalent to the task of density matrix completion .similar work was carried out in ref . to study the problem of psd matrix completion .examples of ep - povms were studied by flammia _ et al . _ , and more recently , by goyeneche _ et al . _ , and shown to be rank-1 complete .we briefly review them here since we use them as canonical examples for the framework we develop .flammia _ et al . _ introduced the following povm , ,\end{aligned}\ ] ] with and chosen such that .they showed that the measurement probabilities and can be used to reconstruct any -dimensional pure state , as long as . under the assumption , , we find that .the real and imaginary parts of , , are found through the relations and , respectively .the povm in eq .is in fact an ep - povm where the measured elements are the first row and column of the density matrix .the probability can be used to algebraically reconstruct the element , and the probabilities and can be used to reconstruct the elements and , respectively .further details of this construction are given in appendix [ app : examples_full ] .a second ep - povm that is rank-1 complete was studied by goyeneche _ et al . _ . in this scheme fourspecific orthogonal bases are measured , goyeneche _ et al . _ outlined a procedure to reconstruct the pure state amplitudes but we omit it here for brevity .similar to the povm in eq ., the procedure fails when certain state - vector amplitudes vanish .more details are given in appendix [ app : construction ] .this povm is an ep - povm as well . here , the measured elements are the elements on the first diagonals ( the diagonals above and below the principal diagonal ) of the density matrix . denoting , and , we obtain , ] be the measurement record of some quantum state , . if \vert\leq \epsilon ] , we have , where depends only on the povm . + the proof of this lemma can be found in . to prove corollary 2 ,we first show that .the convex programs of eqs . and in the main text look for a solution that minimizes some convex function on the set -{\bmf}\vert\leq \epsilon , x\geq0\} ] , and according to the lemma . 
therefore we have + for convenience one parameter , , is used to quantify the various bounds .however it is straightforward to generalize this result to the case where the bounds are quantified by different values .the framework we developed in sec .[ ssec : ep ] allows us to construct rank- strictly - complete povms .we present here two such constructions .a rank- density matrix has free parameters .the first rank- strictly - complete povm we form has elements , and is a generalization of the povm in eq . .the povm elements are , ,\end{aligned}\ ] ] with and chosen such that .the probability can be used to calculate the density matrix element , and the probabilities and can be used to calculate the density matrix elements and .thus , this is an ep - povm which reconstruct the first rows and first columns of the density matrix .given the measured elements , we can write the density matrix in block form corresponding to measured and unmeasured elements , where is a submatrix and , , and are composed of measured elements .suppose that is nonsingular .given that , using the rank additivity property of schur complement and that , we obtain . therefore , we conclude that .thus we can reconstruct the entire rank- density matrix . following the arguments for the povm in eq ., it is straight forward to show that this povm is in fact rank- strictly - complete .the failure set of this povm corresponds to states for which is singular .the set is dense on a set of states of measure zero .the second rank- strictly - complete povm we construct corresponds to a measurement of orthonormal bases , which is a generalization of the four basis in eq . .we consider the case that the dimension of the system is a power of two .since a measurement of mutually unbiased bases is fully informationally complete , this construction is relevant as long as .we first assess the case of , which is the measurement proposed by goyeneche _ et al . _ but without adaptation .in this case there are five bases , the first is the computational basis , , and the other four are given in eq . .goyeneche _ et al . _ showed that the last four bases are rank-1 complete . here , we show these five bases are rank-1 strictly - complete with the techniques introduced above .we label the upper - right diagonals to , where the diagonal is the principal diagonal and the diagonal is the upper right element . each diagonal , except the , has a corresponding hermitian conjugate diagonal ( its corresponding lower - left diagonal ) .thus , if we measure the elements on a diagonal , we also measure the elements of its hermitian conjugate .the computational basis corresponds to measuring the diagonal . in sec .[ ssec : ep ] we showed measuring the last four bases corresponds to measuring the elements on the first diagonals . to show that the measurement of these five bases is rank-1 complete, we follow a similar strategy outlined in sec .[ ssec : ep ] .first , choose the leading principal submatrix , where , hereafter , the elements in bold font are the unmeasured elements . by applying a unitary transformation , which switches the first two rows and columns , we can move into the canonical form , from eq .we can solve for the bottom block of if . the set of states with corresponds to the failure set .note that the diagonal elements of the bottom block , and , are also measured .we repeat this procedure for the set of principal submatrices , , , for each , the upper - right and the lower - left corners elements and are unmeasured . 
using the same procedure as above we reconstruct these elements for all values of and thereby reconstruct the 2nd diagonals .we repeat the entire procedure again choosing a similar set of principal submatrices and reconstruct the 3rd diagonals and so on for the rest of the diagonals until all the unknown elements of the density matrix are reconstructed .since , we have reconstructed all diagonal elements of the density matrix and used the assumption that these five bases correspond to rank- complete povm . the first basis measures the 0th diagonalso by proposition 1 the measurement is rank-1 strictly - complete .the failure set corresponding to is when for .additionally , the five bases provide another set of submatrices to reconstruct .this set of submatrices results from also measuring the elements and , which were not used in the construction of .the failure set for is the same as the failure set of but since we gain additional robustness .when we consider both sets of submatrices the total failure set is and for and .this is the exact same set found by goyeneche _ et al . _ .we generalize these ideas to measure a rank- state by designing orthonormal bases that correspond to a rank- strictly - complete povm .the algorithm for constructing these bases , for dimensions that are powers of two , is given in algorithm [ alg : rankr_gmb ] .technically , the algorithm produces unique bases for but , as mentioned before , since mutually unbiased bases are informationally complete , for one may prefer to measure the latter .the corresponding measured elements are the first diagonals of the density matrix .given the first diagonals of the density matrix , we can reconstruct a state with a similar procedure as the one outlined for the five bases .first , choose the leading principle submatrix , .the unmeasured elements in this submatrix are and . by applying a unitary transformation we can bring into canonical form , and by using the rank condition from eq .we can solve for the unmeasured elements .we can repeat the procedure with the set of principle submatrices for for and from we can reconstruct the elements , which form the diagonal .we then repeat this procedure choosing the set of principle submatrices to reconstruct the diagonal and so on until all diagonals have been reconstructed .this shows the measurements are rank- complete and by proposition 1 , since we also measure the computational bases , the povm is also rank- strictly - complete .the failure set corresponds to the set of states with singular principal submatrix for .this procedure also has robustness to this set since , as in the case of , there is an additional construction .the total failure set is then when is singular for and is singular for . 1 .construction of the first basis : 2 .the choice of the first basis is arbitrary , we denote it by .this basis defines the representation of the density matrix . measuring this basis corresponds to the measurement of the all the elements on the 0th diagonal of .3 . construction of the other orthonormal bases : 4 . 
* for * $ ] , * do * * label the elements in the diagonal of the density matrix by where and .* for each element on the and diagonal , , associate two , two - dimensional , orthonormal bases , for allowed values of and .* arrange the matrix elements of the diagonal and diagonal into a vector with elements * find the largest integer such that is an integer .* group the elements of into two vectors , each with elements , by selecting elements out of in an alternative fashion , * * for * * do * * * each element of has two corresponding bases and from eq . . * * union all the two - dimensional orthonormal -type bases into one basis union all the two - dimensional orthonormal -type bases into one basis the two bases and are orthonormal bases for the -dimensional hilbert space . ** end for * * by measuring and for ( four bases in total ) , we measure all the elements on the and off - diagonals of the density matrix . 5 .* end for *
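the block - completion step used repeatedly in the constructions above admits a compact numerical sketch : for a rank - r psd matrix with a nonsingular measured leading block , the rank additivity of the schur complement forces the complement to vanish , which determines the unmeasured block . the helper name and the toy check below are ours :

import numpy as np

def complete_rank_r(R, B):
    # rho = [[R, B], [B^dag, C]] with R (r x r) and B (r x (d-r)) measured ;
    # rank(rho) = rank(R) + rank(C - B^dag R^{-1} B) = r forces the schur
    # complement to vanish , so C = B^dag R^{-1} B (assumes R nonsingular)
    C = B.conj().T @ np.linalg.solve(R, B)
    return np.block([[R, B], [B.conj().T, C]])

# toy check on a random rank-2 state in d = 5
d, r = 5, 2
G = np.random.randn(d, r) + 1j * np.random.randn(d, r)
rho = G @ G.conj().T
rho /= np.trace(rho).real
print(np.allclose(complete_rank_r(rho[:r, :r], rho[:r, r:]), rho))  # True

consistently with the text , the reconstruction breaks down exactly when the leading block is singular , which is the failure set described above .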
we consider the problem of quantum - state tomography under the assumption that the state is pure , and more generally that its rank is bounded by a given value . in this scenario two notions of informationally complete measurements emerge : rank- complete measurements and rank- strictly - complete measurements . whereas in the first notion , a rank- state is uniquely identified from within the set of rank- states , in the second notion the same state is uniquely identified from within the set of all physical states , of any rank . we argue , therefore , that strictly - complete measurements are compatible with convex optimization , and we prove that they allow robust quantum state estimation in the presence of experimental noise . we also show that rank- strictly - complete measurements are as efficient as rank- complete measurements . we construct examples of strictly - complete measurements and give a complete description of their structure in the context of matrix completion . moreover , we numerically show that a few random bases form such measurements . we demonstrate the efficiency - robustness property for different strictly - complete measurements with numerical experiments . we thus conclude that only strictly - complete measurements are useful for practical tomography .
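as a concrete companion to corollaries 1 and 2 above , a minimal sketch of one admissible convex estimator using the cvxpy modeling package ( the least - squares objective is just one choice among the equivalent convex programs considered ; the function name is ours , and an installed semidefinite - capable solver is assumed ) :

import numpy as np
import cvxpy as cp

def estimate_state(povm, f):
    # minimize the data misfit over the psd cone ; any convex objective over
    # { x >= 0 : |a[x] - f| <= eps } yields the same estimate for
    # strictly - complete povms , per corollaries 1 and 2
    d = povm[0].shape[0]
    X = cp.Variable((d, d), hermitian=True)
    probs = cp.hstack([cp.real(cp.trace(E @ X)) for E in povm])
    cp.Problem(cp.Minimize(cp.norm(probs - f, 2)), [X >> 0]).solve()
    # trace normalization of the (generally unnormalized) minimizer
    return X.value / np.real(np.trace(X.value))

note that , as in the text , the trace constraint is omitted and the final estimate is normalized afterwards .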
most of the tasks the semantic web is eventually supposed to fulfill rely on the availability of ontologies . however , the creation and maintenance of ontologies is difficult because a number of domain experts most of which are not familiar with formal languages have to agree on a conceptualization of the respective domain .for that reason , it is crucial for the future of the semantic web to provide tools that make the creation of ontologies easy for everybody .acewiki tackles this problem by combining semantic wikis with controlled natural language .the goal of acewiki is to enable ordinary people with no background in formal languages to create expressive ontologies in a collaborative and intuitive way without the need of installing an application .there are several existing semantic wiki systems , see e.g. for a brief survey .unfortunately , most of those wikis do not support expressive ontology languages in a general way .furthermore , they are often hard to understand for people who are not familiar with the technical terms .attempto controlled english ( ace ) is the controlled natural language that is used for acewiki .being a subset of english , ace looks completely natural . restrictions of the syntax and the definition of a small set of interpretation rules make it a formal language that is automatically translatable into first - order logic .ace covers a large part of natural english : singular and plural noun phrases , active and passive voice , relative phrases , anaphoric references , existential and universal quantifiers , negation , and much more .ace has been used as a natural language front - end to owl with a bidirectional mapping of ace to owl .acewiki uses this for translating ace sentences into owl .the same work also introduces a protg plugin called `` ace view '' which enables to manage ontologies in ace within the protg environment .in acewiki , the ontological entities are represented by natural language words and phrases .proper names ( e.g. `` zurich '' , `` switzerland '' , `` europe '' ) are interpreted as individuals , nouns ( e.g. `` city '' , `` country '' ) are interpreted as classes , and transitive verbs ( e.g. `` borders '' ) , _ of_-constructs ( e.g. `` part of '' ) , and transitive adjectives ( e.g. `` located - in '' ) are interpreted as binary relations . using those words together with the predefined function words of ace ( e.g. `` a '' , `` every '' , `` if '' , `` then '' , `` and '' , `` not '' , `` is '' , `` that '' ) , ontological statements are expressed as ace sentences : + + as those examples show , the formal statements are easily readable and understandable by any english speaking person . in order to enable easy creation and modification of ace sentences , acewiki integrates a predictive editor that shows step - by - step the words that are syntactically possible at a given position in the sentence .figure [ fig : editor ] shows a screenshot of this editor .each of the ontological entities gets its own wiki article .figure [ fig : window ] shows an example .every article consist of ace sentences most of which can be translated into owl , e.g : + + ace is more expressive than owl , and thus we can write statements that go beyond the semantic expressivity of owl ( e.g. 
rule - like statements ) . such statements are marked with a red triangle ( and are currently ignored by the reasoner ) : + + furthermore , questions can be used to query the knowledge base , e.g. : + + thus , ace is an ontology language , a rule language , and a query language at the same time . acewiki uses the owl reasoner pellet to perform reasoning tasks over the sentences of the wiki that are owl - compliant . in order to ensure the consistency of the ontology , every new sentence is checked immediately after its creation to determine whether it would introduce a contradiction . if this is the case , then the sentence is not included in the ontology and is displayed in red font : + + the reasoner is also used to infer the class memberships of individuals . the results are presented in ace again : + + the same is done for class hierarchies : + + this shows that not only asserted but also inferred knowledge is represented in ace . finally , the reasoner is also used to answer questions : + + in general , we can say that acewiki communicates with its users on a very natural level . no knowledge about formal languages is required to work with acewiki . in our previous work , we conducted a user experiment which showed that ordinary people with no background in logic and ontologies are able to deal with acewiki . the participants , without being instructed how to interact with the interface , were asked to add knowledge to acewiki . about 80% of the created sentences were correct and sensible . remarkably , more than 60% of those sentences were complex in the sense that they contained an implication or a negation . acewiki shows how ontologies can be created and modified in a natural way within a wiki . it demonstrates that semantic wikis using controlled natural language can be expressive and easy to use at the same time . our previous evaluation showed that acewiki is indeed easy to learn .
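to illustrate the kind of mapping involved ( a schematic example of our own ; the exact translation rules are those of the ace - to - owl mapping cited above ) , an ace sentence such as `` every country that borders switzerland is located - in europe '' corresponds to a first - order formula of the form

\forall x \, \bigl( \mathit{country}(x) \wedge \mathit{borders}(x , \mathit{switzerland}) \rightarrow \mathit{located\_in}(x , \mathit{europe}) \bigr) ,

which in turn is expressible as an owl subclass axiom between the class of countries bordering switzerland and the class of things located in europe .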
we demonstrate acewiki , a semantic wiki that uses the controlled natural language attempto controlled english ( ace ) . the goal is to enable easy creation and modification of ontologies through the web . texts in ace can automatically be translated into first - order logic and other languages , for example owl . a previous evaluation showed that ordinary people are able to use acewiki without being instructed .
[ sec : intro ] we consider in this paper the stokes eigenvalue problem which arises in stability analysis of the stationary solution of the navier - stokes equations : where is the flow velocity , is the pressure , is the laplacian operator , is the flow domain and denotes the boundary of the flow domain .let us introduce the stream function such that .then we derive an alternative formulation for - : where is the unit outward normal to the boundary . is also referred to as the biharmonic eigenvalue problem for plate buckling .the naturally equivalent weak form of - reads : find such that where the bilinear forms and are defined by there are various numerical approaches to solving - .mixed finite element methods introduce the auxiliary function to reduce the fourth - order equation to a saddle point problem and then discretize the reduced second order equations with ( - ) continuous finite elements .however , spurious solutions may occur in some situations .the conforming finite element methods including argyris elements and the partition of unity finite elements , require globally continuously differentiable finite element spaces , which are difficult to construct and implement .the third type of approaches use non - conforming finite element methods , such as adini elements , morley elements and the ordinary -interior penalty galerkin method .their disadvantage lies in that such elements do not come in a natural hierarchy .both the conforming and nonconforming finite element methods are based on the naturally equivalent variational formulation , and usually involve low order polynomials and guarantee only a low order of convergence .in contrast , it is observed in that the spectral method , whenever it is applicable , has tremendous advantage over the traditional -version methods . in particular , spectral and spectral element methods using highorder orthogonal polynomials for fourth - order equations result in an exponential order of convergence for smooth solutions . in analogy to the argyris finite element methods, the conforming spectral element method requires globally continuously differentiable element spaces , which are extremely difficult to construct and implement on unstructured ( triangular or quadrilateral ) meshes .this is exactly the reason why -conforming spectral elements are rarely reported in literature except those on rectangular meshes .hence , the spectral methods using globally smooth basis functions are naturally suitable choices in practice for on some fundamental regions including rectangles , triangles and polar geometries . to the best of our knowledgethere are few reports on spectral - galerkin approximation for the stokes eigenvalue problem by the stream function formulation in polar geometries .the polar transformation introduces polar singularities and variable coefficients of the form in polar coordinates , which involves intricate pole conditions thus brings forth severe difficulties in both the design of approximation schemes and the corresponding error analysis .the aim of the current paper is to propose and analyze an efficient spectral - galerkin approximation for the stream function formulation of the stokes eigenvalue problem in polar geometries . 
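for orientation , in standard notation the stream - function ( buckling ) eigenvalue problem referred to above reads : find a nonzero function and an eigenvalue such that

\Delta^{2}\psi = -\lambda\,\Delta\psi \quad \text{in } \Omega , \qquad \psi = \frac{\partial\psi}{\partial n} = 0 \quad \text{on } \partial\Omega ,

whose naturally equivalent weak form is : find $( \lambda , \psi ) \in \mathbb{R} \times H_{0}^{2}(\Omega)$ , $\psi \neq 0$ , such that

a(\psi , \phi) = \lambda \, b(\psi , \phi) \quad \forall\,\phi \in H_{0}^{2}(\Omega) , \qquad a(\psi , \phi) = \int_{\Omega} \Delta\psi \, \Delta\bar{\phi} \,\mathrm{d}x , \quad b(\psi , \phi) = \int_{\Omega} \nabla\psi \cdot \nabla\bar{\phi} \,\mathrm{d}x .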
as the first step , we use the separation of variables in polar coordinates to reduce the original problem in the unit disk to an equivalent infinite sequence of one - dimensional eigenvalue problems which can be solved individually in parallel . the rigorous pole conditions involved are a prerequisite for the equivalence of the original problem and the sequence of one - dimensional eigenvalue problems , and thus play a fundamental role in our further study . it is worth noting , however , that the pole conditions derived for the fourth - order source problems in the open literature ( such as ) are inadequate for our eigenvalue problems , since they would inevitably induce improper / spurious computational results . based on the pole conditions , suitable approximation spaces are introduced and spectral - galerkin schemes are proposed . a rigorous analysis of the optimal error estimate in certain properly introduced weighted sobolev spaces is carried out for each one - dimensional eigenvalue problem by using the minimax principle . finally , we extend our spectral - galerkin method to the stream function formulation of the stokes eigenvalue problem in an elliptic region . owing to its non - separable nature , this problem poses another challenge both in computation and in analysis . a brief explanation of the implementation of the approximation scheme is first given , and an optimal error estimate is then presented in cartesian coordinates under the framework of babuška and osborn . the rest of this paper is organized as follows . in the next section , the dimension reduction scheme for the stokes eigenvalue problem is presented . in section 3 , we derive the weak formulation and prove the error estimates for a sequence of equivalent one - dimensional eigenvalue problems . we also describe the details of an efficient implementation of the algorithm . in section 4 , we extend our algorithm to the case of an elliptic region . we present several numerical experiments in section 5 to demonstrate the accuracy and efficiency of our method . finally , in section 6 we give some concluding remarks . before coming to the main body of this section , we would like to introduce some notations and conventions which will be used throughout the paper . let be a generic positive weight function on a bounded domain , which is not necessarily in . denote by the inner product of whose norm is denoted by . we use and to denote the usual weighted sobolev spaces , whose norms are denoted by . in cases where no confusion would arise , ( if ) and may be dropped from the notation . let ( resp . ) be the collection of nonnegative integers ( resp . integers ) . for , we denote by the collection of all algebraic polynomials on with total degree no greater than . we denote by a generic positive constant independent of any function and of any discretization parameters . we use the expression to mean that . in the current section , we restrict our attention to the unit disk . we shall employ a classical technique , separation of variables , to reduce the problem to a sequence of equivalent one - dimensional problems .
throughout this paper, we shall use the polar coordinates for points in the disk that .we associate any function in cartesian coordinates with its partner in polar coordinates .if no confusion would arise , we shall use the same notation for and .we now recall that , under the polar coordinates , then the bilinear forms and in become \big[\frac{\partial^2 \overline{\phi}}{\partial r^2 } + \frac{1}{r } \frac{\partial \overline{\phi}}{\partial r } + \frac{1}{r^2 } \frac{\partial^2 \overline{\phi}}{\partial \theta^2}\big ] \d \theta , \\ & \mathcal{b}(\psi , \phi ) = \int_{0}^1r \d r \int_0^{2\pi } \big [ \frac{\partial \psi}{\partial r } \frac{\partial \overline{\phi}}{\partial r } + \frac{1}{r^2 } \frac{\partial \psi}{\partial \theta } \frac{\partial \overline{\phi}}{\partial \theta } \big ] \d \theta.\end{aligned}\ ] ] denote and define the bilinear forms for functions on , \big [ \overline{v } '' + \frac{\overline{v}'}{r } -\frac{m^2}{r^2}\overline{v } \big ] r \d r , \\ & \mathcal{b}_m(u , v ) = \int_0 ^ 1 \big(r u ' \overline{v } ' + \frac{m^2}{r }u \overline{v } \big ) \d r.\end{aligned}\ ] ] further let us assume by the orthogonality of the fourier system , one finds that for the well - posedness of and , the following pole conditions for ( and the same type of pole conditions for ) should be imposed , = ( 1-m^2 ) \psi_m'(0 ) = 0,\end{aligned}\ ] ] which can be further simplified into the following three categories , it is worthy to note that our pole condition for is a revision of the pole condition in ( 4.8 ) of . a concrete example to support the absence of reads , also ,this absence of in is also confirmed by .the boundary conditions on states for all integer .meanwhile , together with implies .it is then easy to verify that ( resp . ) induces a sobolev norm for any function on which satisfies the boundary condition ( resp . ) and the pole condition ( resp . ) .we now introduce two non - uniformly weighted sobolev spaces on , which are endowed with energy norms in the sequel , is reduced to a system of infinite one - dimensional eigen problems : to find such that and we now conclude this section with the following lemma on and . for , \d r. \end{split}\end{aligned}\ ] ] specifically , \d r , \\ \label{auv2 } & \mathcal{a}_{\pm 1}(u , v)= \displaystyle \int_{0}^1\big [ ru '' \overline{v } '' + \frac{3}{r } \big(u'\frac{u}{r } \big ) \big ( \overline{v}'-\frac { \overline{v}}{r } \big ) \big ] \d r , \\ \label{auv3 } & \mathcal{a}_m(u , v ) = \displaystyle \int_{-1}^1\big [ ru '' \overline{v } '' + \frac{2m^2 + 1}{r } u ' \overline{v } ' + \frac{m^4 - 4m^2}{r^3 } u \overline{v}\big ] \d r .\end{aligned}\ ] ] by integration by parts and the pole condition , one verifies that which gives .next , one readily checks that as a result , \big[\big(\overline{v}'\mp\frac{m}{r } \overline{v}\big ) ' + \frac{1\pm m}{r}\big(\overline{v}'\mp\frac{m}{r }\overline{v}\big ) \big ] r\d r \\ = & \int_{0}^1\big [ r \big(u'\mp\frac{m}{r } u\big ) ' \big(\overline{v}'\mp\frac{m}{r } \overline{v}\big ) ' + \frac{(1\pm m)^2}{r } \big(u'\mp\frac{m}{r } u\big ) \big(\overline{v}'\mp\frac{m}{r } \overline{v}\big)\big]\d r \\&+\int_{0}^1 ( 1\pm m ) \big[\big(u'\mp\frac{m}{r } u\big)\big(\overline{v}'\mp\frac{m}{r } \overline{v}\big)\big]'\d r .\end{aligned}\ ] ] meanwhile , the pole conditions - states that both and vanish at the two endpoints of .thus the last integral above is zero , and is now proved . and- are corrected versions of ( 4.10 ) in . 
\d r \\ & \,+ ( 1-m^2 ) \int_{0}^1(u'v ' ) ' \d r + m^2\int_{0}^1 \big [ u '' \big ( v ' -\frac{v}{r}\big ) + v '' \big ( u ' -\frac{u}{r}\big ) \big ] \d r \\ & \,+ m^2(1-m^2)\int_{0}^1 \frac{1}{r}\big [ u ' \big ( v ' -\frac{v}{r}\big ) + v ' \big ( u ' -\frac{u}{r}\big ) \big ] \d r\end{aligned}\ ] ] \d r \\ = \int_{0}^1 \big [ \big ( u ' -\frac{u}{r}\big ) ' \big ( v ' -\frac{v}{r}\big ) + \big ( v ' -\frac{v}{r}\big ) ' \big( u ' -\frac{u}{r}\big ) \big ] \d r \\+ \int_{0}^1\frac{1}{r } \big[\big ( u ' -\frac{u}{r}\big ) \big ( v ' -\frac{v}{r}\big ) + \big ( v ' -\frac{v}{r}\big ) \big ( u ' -\frac{u}{r}\big ) \big ] \d r\end{aligned}\ ] ] \d r = m^2(1-m^2)\int_{0}^1 \frac{1}{r}\big [ \big ( u ' -\frac{u}{r}\big ) \big ( v ' -\frac{v}{r}\big ) + \big ( u ' -\frac{u}{r}\big ) \big ( v ' -\frac{v}{r}\big ) \big ] \d r\\ + m^2(1-m^2 ) \int_{0}^1\big [ \frac{uv'+vu'}{r^2 } -\frac{2uv}{r^3 } \big ] \d r = m^2(1-m^2 ) \int_{0}^1 \frac{1}{r}\big [ \big ( u ' -\frac{u}{r}\big ) \big ( v ' -\frac{v}{r}\big ) + \big ( u ' -\frac{u}{r}\big ) \big ( v ' -\frac{v}{r}\big ) \big ] \d r\end{aligned}\ ] ] the proof is now completed .let be the space of polynomials of degree less than or equal to on , and setting .then the spectral galerkin approximation scheme to is : find such that and due to the symmetry properties and , we shall only consider from now on in this section . to give the error analysis , we will use extensively the minimax principle .let denote the eigenvalues of ( [ weak2 ] ) and be any -dimensional subspace of .then , for , there holds see theorem 3.1 in .let denote the eigenvalues of ( [ weak2 ] ) and be arranged in an ascending order , and define where is the eigenfunction corresponding to the eigenvalue .then we have see lemma 3.2 in .it is true that the minimax principle is also valid for the discrete formulation ( [ weak3 ] ) ( see ) ._ let denote the eigenvalues of ( [ weak3 ] ) , and be any -dimensional subspace of .then , for , there holds _define the orthogonal projection such that [ th : diseigen ] _ let be obtained by solving ( [ weak3 ] ) as an approximation of , an eigenvalue of ( [ weak2 ] ) .then , we have _ according to the coerciveness of and we easily derive . since , from ( [ e3.3 ] ) and ( [ e3.6 ] ) we can obtain .let denote the space spanned by .it is obvious that is a -dimensional subspace of . from the minimax principle , we have since from and the non - negativity of , we have thus , we have the proof of theorem [ th : diseigen ] is completed .denote by the jacobi weight function of index , which is not necessarily in .define the -orthogonal projection such that further , for , define recursively the -orthogonal projections such that (r)= \int_{0}^r \big[\pi_{n-1}^{1-k,1-k } u'\big](t ) \d t + u(0).\ ] ] for any nonnegative integers , define the sobolev space [ errpiab ] suppose with being real numbers or negative integers .then for sufficiently large , and for , we now extend the definition of the projection operator to such that where one readily finds , for , that next , for any nonnegative integers , define the sobolev space now we have the following error estimate on . 
is a legendre tau approximation of such that (0)=\partial_r^lu(0 ) , \quad \partial_r^l\big[\pi_n^{-k ,-k}u](1)=\partial_r^lu(1 ) , \qquad 0\le l\le k-1 , \\ \label{tau } & ( \pi_n^{-k ,- k}u - u , v ) = 0 , \qquad v\in \pp_{n-2k}.\end{aligned}\ ] ] further suppose with .then for , [ lm : pi ] suppose and with and .then for , define the differential operator and then set (t ) \d t.\end{aligned}\ ] ] we shall first prove . by , we find that (t ) \d t = \int_0 ^ 1 t^{m}\big [ \mathcal{d}_mu\big](t ) \d t = \int_0 ^ 1 \partial_t\big[t^{m } u(t)\big]\d t = 0,\quad n\ge m+3 , m\neq 0,\end{aligned}\ ] ] where the last equality sign is derived from the boundary condition .moreover , (t ) \dt = -r^{m}\big [ \pi^{-1,-1}_{n-1}\mathcal{d}_mu\big](r).\end{aligned}\ ] ] as a result , and (t ) \d t = 0 , \qquad m\neq 0.\ ] ] further , implies (1)= 0,\quad m\in \zz ; \qquad \big[\mathcal{d}_mu\big](0)= 0 , \quad m\neq 1,\end{aligned}\ ] ] which , together with the property of , gives (1)= 0,\quad m\in \zz ; \qquad \big[\pi^{-1,-1}_{n-1}\mathcal{d}_mu\big](0)= 0 , \quad m\neq 1.\end{aligned}\ ] ] in the sequel , we deduce that if and . in summary , we conclude that .next by and , we have \,\big\|\partial_r^{s-1}\mathcal{d}_mu\big\|^2_{\omega^{s-2,s-2}}.\end{aligned}\ ] ] finally , is an immediate consequence of the projection theorem , the proof is now completed .denote by the jacobi weight function of index , which is not necessarily in in .define the -orthogonal projection such that for any nonnegative integers , define the sobolev space [ errpiab ] suppose with being real numbers or negative integers .then for sufficiently large , and for , for any , denote then it is easy to verify that we now extend the definition of the projection operator to such that one readily finds that for any nonnegative integers , define the sobolev space thanks to , and lemma [ errpiab ] , we readily arrive at the following lemma .[ lm : pi ] for any with , it holds that [ lm : pi ] for any with , it holds that it is clear that if .thus by the projection theorem , cauchy - schwartz equality , poincar inequality , and , one derives that \\ & \ , \le 3 \big [ \|(u-\pi_{n}^{-2,-2}u)''\|_{\omega^{0,1},i}^2 + ( m^4 + 1 ) \| ( u-\pi_{n}^{-2,-2}u)'\|_{\omega^{0,-1},i}^2 \big ] \\ & \,\lesssim ( n^2 + 1 + m^4 ) n^{2 - 2s } \|\partial_r^su\|^2_{\omega^{s-2,s-2},i } , \end{split}\end{aligned}\ ] ] which gives the proof .[ eq : pkred1 ] & & ( _ r^k ( p^k_n u - u ) , v)=(_r^k-1 ( p^k-1_n-1 _ r u-_r u ) , v ) & & = = ( p_n - k _r^k u-_r^k u , v)=0,v _ n - k .meanwhile , for any , [ eq : rmp1 ] ( p_n^ku - u , v)=(-1)^k ( _ r^k(p_n^ku - u ) , _ r^-k v)=0 , where .so actually is a legendre tau approximation of .moreover , for any , [ eq : pkorth ] ( _ r^k ( p_n^ku - u ) , _ r^k v)=0 .thus is also an orthogonal projection in with respect to the semi - norm .[ th : pkacy ] if and , then [ eq : pkacy ] _r^l(p_n^k u - u)_w_l - kc n^l - r _r^r u_w_r - k , 0lk .let is the -th approximate eigenvalue of .if with , then we have , it can be represented by ; we then have meanwhile , by the variational form , the definition of , cauchy - schwarz inequality and theorem [ lm : pi ] , we have as a result , we have the following estimate for , for sufficiently large , . thus andwe finally deduce from theorem [ th : diseigen ] that the proof is now completed .we describe in this section how to solve the problems efficiently . to this end, we first construct a set of basis functions for . 
let where is the jacobi polynomial of degree .it is clear that define if and otherwise .our basis functions lead to the penta - diagonal matrix {n_m\le i , j\le n} ] instead of the hepta- and hendecagon - diagonal ones in .[ expan ] for , \\\label{phid1r } \begin{split } & \quad=\tfrac{(i-3)i^2}{4(2i-3)(2i-1 ) } j^{0,1}_{i-1}(2r-1 ) + \tfrac{(i-1)(i-3)(2i^2 - 7i+2)}{2(2i-5)(2i-1)(2i-3 ) } j^{0,1}_{i-2}(2r-1 ) -\tfrac{(i-2)(i-3)}{(2i-3)(2i-5 ) } j^{0,1}_{i-3}(2r-1 ) \\ &\qquad -\tfrac{(2i^2 - 9i+6)(i-3)^2}{2(2i-7)(2i-5)(2i-3 ) } j^{0,1}_{i-4}(2r-1 ) -\tfrac{(i-3)(i-4)^2}{4(2i-5)(2i-7 ) } j^{0,1}_{i-5}(2r-1 ) , \end{split } \\\label{phid0 } & \frac{\phi_i(r)}{r } = r\big [ \tfrac { i-3}{2(2i-3)}j^{0,1}_{i-2}(2r-1 ) -\tfrac { 2 ( i-3 )( i-2 ) } { ( 2i-3 ) ( 2i-5 ) } j^{0,1}_{i-3}(2r-1 ) + \tfrac { i-3}{2(2i-5 ) } j^{0,1}_{i-4}(2r-1)\big ] \\ \label{phid0r } \begin{split } & \quad = \tfrac{i(i-3)}{4(2i-1)(2i-3 ) } j^{0,1}_{i-1}(2r-1 ) -\tfrac{(i-1)(i-3)}{(2i-1)(2i-3)(2i-5 ) } j^{0,1}_{i-2}(2r-1 ) -\tfrac{(i-2)(i-3)}{2(2i-3)(2i-5 ) } j^{0,1}_{i-3}(2r-1 ) \\ & \qquad + \tfrac{(i-3)^2}{(2i-7)(2i-3)(2i-5 ) } j^{0,1}_{i-4}(2r-1 ) + \tfrac{(i-3)(i-4)}{4(2i-5)(2i-7 ) } j^{0,1}_{i-5}(2r-1 ) , \end{split } \end{aligned}\ ] ] and \\ & \qquad = \tfrac{3}{20}j^{0,1}_{2}(2r-1)+\tfrac{1}{10 } j^{0,1}_{1}(2r-1 ) -\tfrac{1}{4 } j^{0,1}_{0}(2r-1 ) , \end{split } \\\label{phi31 } \begin{split } & \phi_3 ^ 1{}'(r ) - \frac { \phi_3 ^ 1(r)}{r } = \frac{r}{3}\big [ j^{0,1}_{1}(2r-1)- j^{0,1}_{0}(2r-1 ) \big ] \\ & \qquad = \tfrac{1}{10}j^{0,1}_{2}(2r-1)+\tfrac{1}{15 } j^{0,1}_{1}(2r-1 ) -\tfrac{1}{6 } j^{0,1}_{0}(2r-1 ) .\end{split } \end{aligned}\ ] ] thus for , \frac { ( i-2 ) ( i-3 ) ( 4{i}^{4}-4{m}^{4}-24{i}^{3}+50{i}^{2}+10{m}^{2}-42 i+9 ) } { 4 ( 2i-5 ) ( 2i-1 ) ( 2i-3 ) } + ( \frac{3}{20}\delta_{m,0}+\frac14\delta_{m,1 } ) \delta_{i,3 } , & j = i+1,\\[0.3em ] \frac { ( i-3 ) ( i-2+m ) ( i-2-m ) ( i - m ) ( i+m ) } { 8 ( 2i-1 ) ( 2i-3 ) } + \frac{3}{40}\delta_{m,0 } \delta_{i,3 } , & j = i+2,\\[0.3em ] 0 , & j\ge i+3 , \end{cases } \end{aligned}\ ] ] and \frac { ( i-2 ) ( i-3 ) ( 4i^4 - 24{i}^{3}+43{i}^{2}+6{m}^{2}-21i-26 ) } { 16 ( 2i-7 ) ( 2i-5 ) ( 2i+1 ) ( 2i-1 ) ( 2i-3 ) } + ( \frac{1}{140}\delta_{m,0}+\frac{1}{84}\delta_{m,1 } ) \delta_{i,3 } , & j = i+1,\\[0.3em ] -\frac { ( i-1 ) ^{2 } ( i-3 ) ( { i}^{2}+{m}^{2}-2i-4 ) } { 8 ( 2i-5 ) ( 2i+1 ) ( 2i-1 ) ( 2i-3 ) } - \frac{3}{560}\delta_{m,0 } \delta_{i,3 } , & j = i+2,\\[0.3em ] -\frac { i ( i-3 ) ( 4 { i}^{4}-8{i}^{3}-13{i}^{2}+6{m}^{2}+17i-6 ) } { 16 ( 2i-5 ) ( 2i-3 ) ( 2i-1 ) ( 2i+3 ) ( 2i+1 ) } - ( \frac{3}{280}\delta_{m,0}+\frac{1}{120}\delta_{m,1})\delta_{i,3 } , & j = i+3,\\[0.3em ] -\frac { ( i-3 ) i ( i+1 ) ( i - m ) ( i+m ) } { 32 ( 2i-3 ) ( 2i-1 ) ( 2i+3 ) ( 2i+1 ) } - ( \frac{1}{280}\delta_{m,0 } + \frac{1}{315}\delta_{m,1 } ) \delta_{i,3 } , & j = i+4 , \\[0.3em ] 0 , & j\ge i+5 .\end{cases } \end{aligned}\ ] ] we postpone the proof to appendix [ app : b ] . we shall look for now , plugging the expression of in , and taking through all the basis functions in , we will arrive at the following algebraic linear eigenvalue system : with which can be efficiently solved .we first note that if the function is analytic at , the continuity of the function and its derivatives demands have -th order zeros at . for ,define the polynomials it is then easy to see from that specifically , we denote the following lemma indicates that are sobolev orthogonal polynomials in . 
for , it holds that .\end{aligned}\ ] ] we now focus on the construction of the sobolev orthogonal polynomials in . in view of and , it suffices to find actually , by , and , one finds further by and , one derives that for and we define define the approximation spaces then our novel spectral method for reads : find such that and as an immediate consequence of the reduction above and lemma 2 in , we have the following lemma .it holds that = \dfrac{2k+2m-2}{(k-2)r^{m+1}}\partial_r j^{-1,-2m-2}_{k+2m}(2r-1 ) \\ & = \frac{2(k+m-1)}{r^{m+1 } } j^{0,-2m-1}_{k+2m-1}(2r-1 ) = 2(k+m-1 ) j^{0,2m+1}_{k-2}(2r-1 ) r^m \end{aligned}\ ] ] \\ & = \frac{1}{r^{m-1}}\partial_r \frac{1}{r } \partial_r \big[\frac{1}{k } j^{-2,-2m}_{k+2m}(2r-1 ) + \frac{k+2m-1}{(k-2)(k-1 ) } j^{-2,-2m}_{k+2m-1}(2r-1)\big ] \\ & = \frac{1}{r^{m-1}}\partial_r \frac{1}{r } \big[\frac{k-1}{k } j^{-1,1 - 2m}_{k+2m-1}(2r-1 ) + \frac{k+2m-1}{k-1 } j^{-1,1 - 2m}_{k+2m-2}(2r-1)\big ] \\ & = \frac{1}{r^{m+1 } } \big [ ( k-1)rj^{0,-2m}_{k+2m-2}(2r-1 ) + ( k+2m-1 ) rj^{0,-2m}_{k+2m-3}(2r-1 ) \\ & -\frac{k-1}{k } j^{-1,1 - 2m}_{k+2m-1}(2r-1 ) - \frac{k+2m-1}{k-1 } j^{-1,1 - 2m}_{k+2m-2}(2r-1)\big ] \\ & = ( k-1)j^{0,2m}_{k-2}(2r-1)r^m + ( k+2m-1 ) j^{0,2m}_{k-3}(2r-1 ) r^m \\ & -\frac{k-1}{k+2m-1 } j^{-1,2m-1}_{k}(2r-1 ) r^{m-2 } - \frac{k+2m-1}{k+2m-2 } j^{-1,2m-1}_{k-1}(2r-1)r^{m-2}\end{aligned}\ ] ] = = = = = = = = \dfrac{2k+2m-3}{(k-2)r^{m+1}}\partial_r [ j^{-1,-2m-1}_{k+2m-1}(2r-1 ) r ] \\ & = \dfrac{1}{r^{m+1}}\partial_r [ \frac{k-2}{k-3}j^{-1,-2m-2}_{k+2m-1}(2r-1 ) + \frac{k+2m}{k-2}j^{-1,-2m-2}_{k+2m}(2r-1 ) ] \\ & = \frac{k-2}{r^{m+1 } } j^{0,-2m-1}_{k+2m-2}(2r-1 ) + \frac{k+2m}{r^{m+1 } } j^{0,-2m-1}_{k+2m-1}(2r-1 ) = \frac{2k+2m-3}{r^m}j^{0,-2m}_{k+2m-2}(2r-1 ) \\ & = ( 2k+2m-3)j^{0,2m}_{k-2}(2r-1 ) r^m\end{aligned}\ ] ] for any , define the orthogonal projection such that owing to the orthogonality , as a result , = r^m \partial_r \frac{1}{r^m } \big[j^{-2,2m-1}_k(2r-1 ) r^m\big ] = ( k+2m-2)j^{-1,2m}_{k-1}(2r-1 ) r^m \end{aligned}\ ] ] let be the -th smallest eigenvalue of .then let be an eigenfuction corresponding to , then there exists a constant such that the section , we extend our algorithm and numerical analysis from a circular disk to an elliptic domain , where and are the semi - major axis and the semi - minor axis , respectively , i.e. , .let us make the polar transformation , which maps the rectangle in polar coordinates onto the ellipse in cartesian coordinates . for ,we denote , which is equipped with the norm .if no confusion would arise , we shall also use the notation for its correspondence on .we now revisit the gradient and laplacian in cartesian coordinates .it is readily checked that . 
\end{split}\label{a5.2}\end{aligned}\ ] ] specifically , for any function it holds that &=\big ( \frac{1}{2a } \big ( u_m ' -\frac{m}{r } u_m \big ) \e^{\i ( m+1 ) \theta}+ \frac{1}{2a } \big ( u_m ' + \frac{m}{r } u_m \big ) \e^{\i ( m-1 ) \theta } , \\ & -\frac{\i}{2b } \big ( u_m ' -\frac{m}{r } u_m \big ) \e^{\i ( m+1 ) \theta } + \frac{\i}{2b } ( u_m ' + \frac{m}{r } u_m \big ) \e^{\i ( m-1 ) \theta}\big)^{\tr } , \end{split } \\ \label{deltaue } \begin{split } \delta [ u_m(r ) \e^{\i m\theta } ] & = \frac12\big ( \frac{1}{a^2}+\frac{1}{b^2}\big ) \mathcal{l}_m u_m(r ) \e^{\i m\theta } \\ & + \frac14\big(\frac{1}{a^2}-\frac{1}{b^2}\big ) \big [ \mathcal{k}_m u_m(r ) \e^{\i ( m+2)\theta } + \mathcal{k}_{-m } u_m(r ) \e^{\i ( m-2)\theta } \big ] , \end{split}\end{aligned}\ ] ] where and are differential operators defined by to make the ] meaningful at the origin , one requires that which , as before , can be further simplified into the following three categories , in view of , we have the following the lemma .[ equivlem ] for any function on , we write it then holds that ,\qquad u\in h^1(\omega ) , \end{split } \\\label{equiv2 } \begin{split } \|a^2\partial_x^2u+&b^2\partial_y^2u\|^2 = \big\|\partial_r^2 u+\frac{1}{r}\partial_ru + \frac{1}{r^2}\partial_{\theta}^2u \big\|_{\omega^{0,1},r}^2 = 2\pi \sum_{m=-\infty}^{\infty } \big\|\mathcal{l}_m u_m \big\|_{\omega^{0,1},i}^2 , \quad u\in h^2(\omega ) . \end{split}\end{aligned}\ ] ] define \big)^{\frac12 } = \big(2\pi \sum_{m=-\infty}^{\infty } \big\|u_m\big\|^2_{1,m , i}\big)^{\frac12 } , \\ \label{n2d } & \|u\|_{2,*,r } = \big(2\pi \sum_{m=-\infty}^{\infty } \big\|\mathcal{l}_m u_m \big\|_{\omega^{0,1},i}^2 \big)^{\frac12}= \big(2\pi \sum_{m=-\infty}^{\infty } \big\|u_m\big\|^2_{2,m , i}\big)^{\frac12}.\end{aligned}\ ] ] then it readily checked that and are equivalent norms of and , respectively .\big|^2 \d \theta r \d r \\ & + \frac{1}{4b^2 } \int_{0}^1 \!\!\ ! \int_{0}^{2\pi } \big| \sum_{m=-\infty}^{\infty } \big [ \big ( u_m ' -\frac{m}{r } u_m \big ) \e^{\i ( m+1 ) \theta}- \big ( u_m ' + \frac{m}{r } u_m \big ) \e^{\i ( m-1 ) \theta } \big]\big|^2\d \theta r \d r \\ \le & \big(\frac{1}{2a^2}+\frac{1}{2b^2}\big ) \int_{0}^{2\pi}\!\ ! 
\int_{0}^1 \big [ \big| \sum_{m=-\infty}^{\infty } \big ( u_m ' -\frac{m}{r } u_m \big ) \e^{\i ( m+1 ) \theta } \big|^2 + \big| \sum_{m=-\infty}^{\infty } \big ( u_m ' + \frac{m}{r } u_m \big ) \e^{\i ( m-1 ) \theta}\big|^2 \big ] \d \theta r \d r \\ = & \big(\frac{\pi}{a^2}+\frac{\pi}{b^2}\big ) \sum_{m=-\infty}^{\infty } \big [ \big\|u_m'-\frac{mu_m}{r}\big\|^2_{\omega^{0,1},i } + \big\|u_m'+\frac{mu_m}{r}\big\|^2_{\omega^{0,1},i } \big ] \\\le & \big(\frac{2\pi}{a^2}+\frac{2\pi}{b^2}\big ) \sum_{m=-\infty}^{\infty } \big [ \big\|u_m'\big\|^2_{\omega^{0,1},i } + m^2 \big\|u_m\big\|^2_{\omega^{0,-1},i } \big].\end{aligned}\ ] ] on the other hand , .\end{aligned}\ ] ] + 3\pi m^2\big(\frac{1}{a^2}-\frac{1}{b^2}\big)^2\big\| u_m'-\frac{u_m}{r}\big\|^2_{\omega^{0,-1},i}\end{aligned}\ ] ] define the approximation spaces , then the spectral - galerkin approximation to reads : find such that and we now give a brief explanation on how to solve the problems efficiently .define the matrices with their entries , \\& a^{m , m+2}_{j , k } = a^{m+2,m}_{k , j } = \frac{\pi}{4}\big(\frac{1}{a^4}-\frac{1}{b^4}\big ) \big [ \big ( \mathcal{l}_{m+2}\phi^{m+2}_k , \mathcal{k}_m \phi^m_j \big ) _ { \omega^{0,1},i } + \big(\mathcal{k}_{-m-2}\phi^{m+2}_k , \mathcal{l}_m \phi^m_j \big)_{\omega^{0,1},i } \big ] , \\ & a^{m , m+4}_{j , k } = a^{m+4,m}_{k , j } = \frac{\pi}{8}\big(\frac{1}{a^2}-\frac{1}{b^2}\big)^2 \big(\mathcal{k}_{-m-4}\phi^{m+4}_k , \mathcal{k}_m \phi^m_j \big)_{\omega^{0,1},i } , \end{aligned}\ ] ] and , \\ & b^{m , m+2}_{j , k } = b^{m+2,m}_{k , j } = \frac{\pi}{2 } \big(\frac{1}{a^2}+\frac{1}{b^2}\big ) \big ( \big ( \partial_r\phi^{m+2}_k + \frac{m+2}{r}\big)\phi^{m+2}_k , \big(\partial_r-\frac{m}{r}\big ) \phi^m_j \big)_{\omega^{0,1},i } .\end{aligned}\ ] ] in view of - , all the nontrivial matrices are penta - diagonal , and their nonzero entries can all be evaluated analytically .further suppose we arrive at the following algebraic eigenvalue problem : where is the unknown vector and and are block hepta - diagonal and block penta - diagonal matrices , respectively , {-n/2\le m , n\le n/2}\quad \text { and } \quad { \boldsymbol{b } } = \big [ b^{m , n } \big]_{-n/2\le m , n\le n/2}.\end{aligned}\ ] ] and is the unknowns . 
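once the block matrices are assembled from the entries above , the resulting generalized algebraic eigenvalue problem can be handed to standard routines ; a minimal sketch of the solve step ( assembly is assumed , and the function name is ours ) :

import numpy as np
from scipy.linalg import eigh

def smallest_eigenpairs(A, B, k=4):
    # generalized symmetric - definite solver for a x = lam b x ;
    # eigh returns the eigenvalues in ascending order
    lam, V = eigh(A, B)
    return lam[:k], V[:, :k]

# for large truncation orders one would instead exploit the banded block
# structure , e.g. scipy.sparse.linalg.eigsh(A, k=4, M=B, sigma=0)
# in shift - invert mode to target the smallest eigenvalues

since the stiffness and mass matrices are penta - and hepta - diagonal by blocks , the dominant cost is the banded factorization rather than a dense solve .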
By the recurrence relations collected in the appendix (their reference numbers were lost in extraction), the nonzero entries admit explicit rational closed forms. For the first family (the left-hand side of this expansion was lost in extraction),
$$\begin{cases}
\frac{[4m^4-(4i+6)m^3+(12i-22)m^2-(4i^3-30i^2+82i-96)m+4i^4-36i^3+116i^2-156i+63]\,(i-3)(i-4)}{4(2i-7)(2i-5)(2i-3)}, & j=i-1,\\[0.3em]
\frac{3(i-3)(i-2)(i^4-m^4-8i^3+4m^3+24i^2-2m^2-32i-4m+18)}{4(2i-5)(i-1)(2i-3)}, & j=i,\\[0.3em]
\frac{[4m^4+(4i-22)m^3-(12i-26)m^2+(4i^3-18i^2+34i-8)m+4i^4-28i^3+68i^2-68i+15]\,(i-3)(i-2)}{4(2i-5)(2i-1)(2i-3)}, & j=i+1,\\[0.3em]
\frac{(i-3)(i-m)(i+m)(i+m-2)(i-4+m)}{8(2i-1)(2i-3)}, & j=i+2,
\end{cases}$$
and for the second family,
$$\begin{cases}
\frac{[\cdots]\,(i-3)(i-6)}{(2i-11)(2i-9)(2i-7)(2i-5)(2i-3)}, & j=i-3,\\
-\frac{8(i-5)(i-3)[(-i+3)m^2+(i^2-4i+6)m+(i-6)(i-2)^2]}{(2i-9)(2i-7)(2i-5)(2i-3)}, & j=i-2,\\
\frac{4(i-4)(i-3)[-6m^2-(12i^3-90i^2+174i-72)m+(i-2)(4i^3-20i^2+9i-3)]}{(2i-9)(2i-7)(2i-5)(2i-1)(2i-3)}, & j=i-1,\\
\frac{4(i-2)(5i^2-3m^2-20i+6m+8)(i-3)^2}{(2i-7)(2i-5)(2i-1)(2i-3)}, & j=i,\\
\frac{4(i-2)(i-3)[-6m^2+(12i^3-54i^2+30i+48)m+(i-2)(4i^3-28i^2+41i+31)]}{(2i-7)(2i-5)(2i+1)(2i-1)(2i-3)}, & j=i+1,\\
-\frac{8(i-1)(i-3)[(1-i)m^2-(i^2-4i+6)m+(i+2)(i-2)^2]}{(2i-5)(2i+1)(2i-1)(2i-3)}, & j=i+2,\\
-\frac{4[-6m^2+2(i-1)(2i^2-i-12)m+(i-2)(4i^3-4i^2-15i+9)]\,(i-3)i}{(2i-5)(2i+3)(2i+1)(2i-1)(2i-3)}, & j=i+3,\\
-\frac{2(i-3)i(i+m)(i+1)(i-2+m)}{(2i+3)(2i+1)(2i-1)(2i-3)}, & j=i+4,
\end{cases}$$
where the numerator of the $j=i-3$ entry was partially lost in extraction.

We now conduct the error analysis of the scheme using the standard theory of Babuška and Osborn. To this end, we first define a weighted semi-norm on the reference interval, built from the weight function $\omega^{\alpha,\beta}$ (its explicit expression was lost in extraction), and state the corresponding approximation result. We first note that
$$\|\Delta u\|^2 = \int_\Omega\big[\partial_x^2u+\partial_y^2u\big]^2\,\mathrm{d}x\,\mathrm{d}y = \|\partial_x^2u\|^2+2\|\partial_x\partial_yu\|^2+\|\partial_y^2u\|^2,\qquad u\in H_0^2(\Omega),$$
where the second equality follows by integration by parts. Owing to the linear mapping from $\Omega$ onto the unit disk, it suffices to prove the estimate for $\Omega$ being the unit disk. To this end, we further decompose the projection error; the claim then follows from a known error estimate on polynomial approximations (Theorem 4.3 of the cited reference). The proof is now completed.

Let $\pi_M$ be the Fourier orthogonal projection of $u$. Then it is easy to see that
$$\Delta\big[(\pi_M-I)u\big] = \sum_{m=-M}^{M}\Big(\partial_r^2+\frac{1}{r}\partial_r-\frac{m^2}{r^2}\Big)\big[(\pi_N^{-2,-2}-I)\hat u_m\big](r)\,e^{\mathrm{i}m\theta},$$
and thus the desired bound follows, where the last inequality is derived from the classic error estimate of the Fourier approximation, e.g., Theorem 2.1 in the cited reference.
In the sequel, this establishes the projection error bound. By the approximation theory of Babuška and Osborn on the Ritz method for self-adjoint, positive-definite eigenvalue problems, we now arrive at the following main theorem.

Let the discrete eigenvalues be ordered non-decreasingly, repeated according to their multiplicities. Further, let $\lambda$ be an eigenvalue of the continuous problem with given geometric multiplicity, and assume the corresponding eigenfunctions have the stated regularity. Then there exists a constant such that the discrete eigenvalues converge to $\lambda$ at the optimal rate; for each eigenfunction corresponding to $\lambda$ there exists a constant such that the discrete eigenfunctions approximate it at the same rate; and, conversely, for each discrete eigenfunction there exist a constant and a nearby continuous eigenfunction. (The explicit rates were lost in extraction.)

We now perform a sequence of numerical tests to study the convergence behavior and show the effectiveness of our algorithm. Our programs are run in MATLAB 2015b.

We first turn to the spectral decomposition on the unit disk. Under polar coordinates, the stream-function eigenproblem is reformulated as
$$\big[\Delta^2+\lambda\Delta\big]\psi(r,\theta) = 0.$$
We next expand $\psi$ in a Fourier series in $\theta$; the problem is then reduced to
$$\big[\mathcal{L}_m^2+\lambda\mathcal{L}_m\big]\psi_m(r) = 0,\qquad\forall m\in\mathbb{Z}.$$
Making the variable transformation $\rho=\sqrt{\lambda}\,r$ and setting $\phi_m$ accordingly (the precise substitution was lost in extraction), we further simplify this to
$$\big[\rho^2\partial_\rho^2-3\rho\partial_\rho+(4-m^2)\big]\big[\rho^2\partial_\rho^2+\rho\partial_\rho+(\rho^2-m^2)\big]\phi_m(\rho)
= \big[\rho^2\partial_\rho^2-3\rho\partial_\rho+(\rho^2+4-m^2)\big]\big[\rho^2\partial_\rho^2+\rho\partial_\rho-m^2\big]\phi_m(\rho) = 0,$$
which, together with the pole conditions, implies that $\phi_m$ is a combination of the monomial of degree $m$ and the $m$-th order Bessel function of the first kind, and admits a general solution compatible with the pole conditions. Meanwhile, the boundary conditions imply a homogeneous linear system with a nontrivial solution; as a result, the determinant condition reads
$$-\lambda^{\frac{m+1}{2}}\,J_{m+1}(\sqrt{\lambda}) = 0,$$
where the second equality sign is derived from the recurrence relation (4) on page 45 of the cited treatise. In return, the fundamental solution determines the corresponding eigenfunction, of the form $[\,\cdot\,]\,e^{\mathrm{i}m\theta}$. Finally, we note that the nontrivial roots of $J_{m+1}(\sqrt{\lambda})=0$ define a sequence of increasingly ordered eigenvalues, which are exactly the eigenvalues of the second-order equation stemming from the Laplacian eigenvalue problem on the unit disk.

We take the unit disk as our first example. The numerical results for the first four eigenvalues for different modes and different $N$ are listed in Tables 5.1-5.4.

Table 5.1. The first four eigenvalues for $m=0$ and different $N$ in the unit disk (the values of $N$ were lost in extraction; each row corresponds to one $N$):

    14.6819706421365   49.2184567483993   103.5024835613828   177.6009453441972
    14.6819706421239   49.2184563216945   103.4994538951366   177.5207668138042
    14.6819706421239   49.2184563216945   103.4994538951365   177.5207668138044
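Since the exact eigenvalues on the unit disk are the squared positive zeros of $J_{m+1}$, the tables can be cross-checked independently. A minimal scipy sketch (the mode values $m=0,1,2$ for Tables 5.1-5.3 are inferred here from the tabulated numbers themselves):

```python
# Cross-check of Tables 5.1-5.3: on the unit disk, the Stokes eigenvalues of
# mode m are the squared positive zeros of the Bessel function J_{m+1}.
from scipy.special import jn_zeros

for m in (0, 1, 2):
    lam = jn_zeros(m + 1, 4) ** 2   # first four eigenvalues of mode m
    print(m, lam)

# Expected output (matching the converged rows of Tables 5.1-5.3):
# 0 [ 14.68197064  49.21845632 103.49945390 177.52076681]
# 1 [ 26.37461643  70.84999892 135.02070887 218.92018915]
# 2 [ 40.70646582  95.27757254 169.39544983 263.20085426]
```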
Table 5.2. The first four eigenvalues for $m=1$ and different $N$ in the unit disk:

    26.3746164271634   70.8499989190960   135.0207088659703   218.9201891456649
    26.3746164271634   70.8499989190958   135.0207088659704   218.9201891456624
    26.3746164271634   70.8499989190957   135.0207088659696   218.9201891456631
    26.3746164271634   70.8499989190957   135.0207088659700   218.9201891456630

Table 5.3. The first four eigenvalues for $m=2$ and different $N$ in the unit disk:

    40.7064658182003   95.2775725440372   169.3954498260997   263.2008542550081
    40.7064658182002   95.2775725440371   169.3954498260988   263.2008542550071
    40.7064658182004   95.2775725440372   169.3954498260995   263.2008542550078
    40.7064658182003   95.2775725440370   169.3954498260994   263.2008542550076

We see from Tables 5.1-5.3 that the numerical eigenvalues achieve at least fourteen-digit accuracy already at moderate $N$ (the exact thresholds were lost in extraction). If we take the numerical solutions at the largest $N$ as reference solutions, the errors of the approximate eigenvalues for different $N$ are plotted in Figures 1-3. It is worth noting that, when imposing the pole conditions in the naive form, one necessarily obtains spurious eigenvalues even for large $N$, which can only serve as upper bounds of the exact ones; for instance, the first computed eigenvalue in that case is far away from the reference one (its numerical value was lost in extraction).

[Figures 1-3: eigenvalue approximation errors versus $N$ on the unit disk.]

We next take an elliptic domain as our example (its aspect ratio was lost in extraction). The numerical data for the first four eigenvalues are listed in Table 5.4. We see that the eigenvalues achieve at least fourteen-digit accuracy at moderate $N$. Taking the most resolved solutions as reference, the errors of the approximate eigenvalues for different $N$ are plotted in Figure 4.

Table 5.4. The first four eigenvalues for different $N$ in the elliptic domain:

    9.96633619654313   11.0706597920227   13.1630821009849   15.6448857440637
    9.9663343484475    11.0706554383893   13.1627539459867   15.6437495616386
    9.96633434844728   11.0706554383168   13.1627539455290   15.6437494538630
    9.96633434844729   11.0706554383167   13.1627539455291   15.6437494538630
    9.96633434844726   11.0706554383166   13.1627539455290   15.6437494538630

[Figure 4: eigenvalue approximation errors versus $N$ on the elliptic domain.]

Before concluding this section, we present plots of the real parts of the eigenfunctions corresponding to the smallest eight eigenvalues. [Figures: eigenfunctions for the eight smallest eigenvalues.]

We presented a rigorous error analysis for our proposed spectral-Galerkin methods for solving the Stokes eigenvalue problem under the stream-function formulation in polar geometries. We derived the essential pole condition and reduced the problem to a sequence of one-dimensional eigenvalue problems that can be solved individually in parallel. Spectral accuracy is achieved by properly designed non-polynomial basis functions, and the exponential rate of convergence is established by introducing a suitable weighted Sobolev space, all based on the correct pole condition.
To the best of our knowledge, this pole condition and this use of weighted Sobolev spaces and basis functions appear here for the first time in the literature. Our spectral-Galerkin method also extends to the stream-function formulation of the Stokes eigenvalue problem on an elliptic region, which indicates the capability of the method for fourth-order equations on other smooth domains. The numerical experiments in the last section validate the theoretical results and algorithms. As we can see, on special domains such as circular disks and elliptic regions, with fewer than 50 degrees of freedom, the proposed spectral method achieves 14-digit accuracy for the first few eigenvalues of the Stokes problem; this is far superior to traditional methods such as finite element and finite difference methods.

The classical Jacobi polynomials $J_n^{\alpha,\beta}$, with $\alpha,\beta>-1$, are mutually orthogonal with respect to the Jacobi weight function $\omega^{\alpha,\beta}(\zeta)=(1-\zeta)^\alpha(1+\zeta)^\beta$ on $(-1,1)$:
$$\int_{-1}^{1}J_m^{\alpha,\beta}(\zeta)\,J_n^{\alpha,\beta}(\zeta)\,\omega^{\alpha,\beta}(\zeta)\,\mathrm{d}\zeta = \gamma_n^{\alpha,\beta}\,\delta_{mn},$$
where $\delta_{mn}$ is the Kronecker delta, and, for $a\in\mathbb{R}$ and $j\in\mathbb{N}_0$, $(a)_j$ denotes the Pochhammer symbol. The classical Jacobi polynomials possess an important hypergeometric representation which symbolically furnishes the extension of $J_n^{\alpha,\beta}$ to arbitrary $\alpha$ and $\beta$. Generalized Jacobi polynomials preserve most of the essential properties of the classical Jacobi polynomials, among which several identities are of importance in the current paper. In particular, the generalized Jacobi polynomials with $\alpha$ and/or $\beta$ being negative integers are of greatest interest to us:
$$J_n^{\alpha,\beta}(\zeta) = \begin{cases}
h_n^{\alpha,\beta}\big(\tfrac{\zeta-1}{2}\big)^{-\alpha}\,J_{n+\alpha}^{-\alpha,\beta}(\zeta), & \alpha\in\mathbb{Z},\ n+\alpha\in\mathbb{N}_0,\\[0.4em]
h_n^{\alpha,\beta}\big(\tfrac{\zeta+1}{2}\big)^{-\beta}\,J_{n+\beta}^{\alpha,-\beta}(\zeta), & \beta\in\mathbb{Z},\ n+\beta\in\mathbb{N}_0.
\end{cases}$$
The generalized Jacobi polynomials with negative indices not only simplify the numerical analysis of spectral approximations of differential equations, but also lead to very efficient numerical algorithms. Finally, it is worth pointing out that a reduction of the polynomial degree occurs if and only if the indices fall in the degenerate range (the precise condition was lost in extraction). At first, the basic identities are trivial consequences of the Jacobi expansion.
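The classical orthogonality relation above is easy to verify numerically; a short scipy sketch (the index values are illustrative choices), using Gauss-Jacobi quadrature so that the products are integrated exactly:

```python
# Sanity check of the classical Jacobi orthogonality relation: integrate
# J_m^{a,b} J_n^{a,b} against the weight (1-x)^a (1+x)^b with Gauss-Jacobi
# quadrature; off-diagonal Gram entries should vanish to machine precision.
import numpy as np
from scipy.special import roots_jacobi, eval_jacobi

a, b = -0.5, 1.0                 # example indices with a, b > -1
x, w = roots_jacobi(40, a, b)    # 40-point rule, exact up to degree 79

G = np.zeros((5, 5))
for m in range(5):
    for n in range(5):
        G[m, n] = np.sum(w * eval_jacobi(m, a, b, x) * eval_jacobi(n, a, b, x))

off = np.max(np.abs(G - np.diag(np.diag(G))))
print("max off-diagonal:", off)  # ~1e-16: numerically orthogonal
```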
By the identities of Appendix A (their reference numbers were lost in extraction), one finds that, for admissible $i$ (the left-hand sides of the following expansions were also lost in extraction),
$$\;=\;\frac{8i}{i-2}\,J^{-2,-2}_{i}(t)+8(1-\delta_{i,4})\,J^{-2,-2}_{i-1}(t).$$
One then derives
$$\begin{aligned}
&= 2i(i-3)J^{0,0}_{i-2}(t)+2(i-4)(i-3)J^{0,0}_{i-3}(t)\\
&= \frac{2i(i-3)}{2i-3}\big((i-1)J^{0,1}_{i-2}(t)+(i-2)J^{0,1}_{i-3}(t)\big)+\frac{2(i-4)(i-3)}{2i-5}\big((i-2)J^{0,1}_{i-3}(t)+(i-3)J^{0,1}_{i-4}(t)\big)\\
&= \frac{2(i-3)(i-1)i}{2i-3}J^{0,1}_{i-2}(t)+\frac{8(i-3)^2(i-2)(i-1)}{(2i-3)(2i-5)}J^{0,1}_{i-3}(t)+\frac{2(i-4)(i-3)^2}{2i-5}J^{0,1}_{i-4}(t),
\end{aligned}$$
and
$$\begin{aligned}
&= \frac{1}{t+1}\Big[\frac{4i(i-3)}{i-2}J^{-1,-1}_{i-1}(t)+4(i-4)J^{-1,-1}_{i-2}(t)\Big]
= \frac{2i(i-3)}{i-2}J^{-1,1}_{i-2}(t)+\frac{2(i-4)(i-3)}{i-2}J^{-1,1}_{i-3}(t)\\
&= \frac{2(i-3)i}{2i-3}J^{0,1}_{i-2}(t)-\frac{12(i-3)(i-2)}{(2i-3)(2i-5)}J^{0,1}_{i-3}(t)-\frac{2(i-4)(i-3)}{2i-5}J^{0,1}_{i-4}(t),
\end{aligned}$$
which give the first two required formulas immediately. Further,
$$\begin{aligned}
&= \frac{4i(i-3)}{i-2}J^{-1,-1}_{i-1}(t)+4(i-4)J^{-1,-1}_{i-2}(t)\\
&= \frac{2(i-3)i^2}{(2i-3)(2i-1)}J^{0,1}_{i-1}(t)+\frac{4(i-1)(i-3)(2i^2-7i+2)}{(2i-5)(2i-1)(2i-3)}J^{0,1}_{i-2}(t)
-\frac{8(i-2)(i-3)}{(2i-3)(2i-5)}J^{0,1}_{i-3}(t)\\
&\quad-\frac{4(2i^2-9i+6)(i-3)^2}{(2i-7)(2i-5)(2i-3)}J^{0,1}_{i-4}(t)-\frac{2(i-3)(i-4)^2}{(2i-5)(2i-7)}J^{0,1}_{i-5}(t),
\end{aligned}$$
which, together with a companion expansion (lost in extraction), leads to the remaining formulas, respectively. In the same way,
$$\begin{aligned}
\pm(1-t)^2(1+t)\,J^{2,1}_{i-4}(t) &= \frac{2(i-3)i(i\pm m)}{(2i-3)(2i-1)}J^{0,1}_{i-1}(t)+\frac{4(i-1)(i-3)(2i^2-7i+2\mp 2m)}{(2i-5)(2i-1)(2i-3)}J^{0,1}_{i-2}(t)\\
&\quad-\frac{4(i-2)(i-3)(2\pm m)}{(2i-3)(2i-5)}J^{0,1}_{i-3}(t)-\frac{4(2i^2-9i+6\mp 2m)(i-3)^2}{(2i-5)(2i-7)(2i-3)}J^{0,1}_{i-4}(t)\\
&\quad-\frac{2(i-3)(i-4)(i-4\mp m)}{(2i-5)(2i-7)}J^{0,1}_{i-5}(t).
\end{aligned}$$

We first note that the basis spans all polynomials on the unit disk and thus forms a complete system in the relevant space. Denote by $J_n^{\alpha,\beta}$ the generalized Jacobi polynomials as described in Appendix [app:a]; the derived family then forms a complete system as well. Let $K$ be a non-empty set. By an algebra of functions on $K$ we shall mean a set of functions $\mathcal{A}$ such that if $f,g\in\mathcal{A}$ then $f+g\in\mathcal{A}$ and $fg\in\mathcal{A}$, and if $\lambda$ is a number then $\lambda f\in\mathcal{A}$. Further, we say that $\mathcal{A}$ separates points of $K$ if, given distinct points $p,q\in K$, there exists a function $f\in\mathcal{A}$ such that $f(p)\ne f(q)$. If $\mathcal{A}$ is an algebra of complex-valued functions, we say that $\mathcal{A}$ is self-conjugate if whenever $f\in\mathcal{A}$ the conjugate function $\bar f$ is also in $\mathcal{A}$.

[th:s-w] Let $K$ be a compact set and $\mathcal{A}$ an algebra (over $\mathbb{C}$) of complex-valued continuous functions on $K$. Assume that $\mathcal{A}$ separates points, contains the constants, and is self-conjugate. Then the uniform closure of $\mathcal{A}$ is equal to the algebra of all complex-valued continuous functions on $K$.

By the preceding identities one further derives
$$\begin{aligned}
&= \frac{2k+2m-2}{(k+2m)\,r^{m+1}}\,\partial_r\big[r^{2m+2}J^{-1,2m+2}_{k-2}(2r-1)\big]
= \frac{2k+2m-2}{(k-2)\,r^{m+1}}\,\partial_rJ^{-1,-2m-2}_{k+2m}(2r-1)\\
&= \frac{2k+2m-2}{r^{m+1}}\,J^{0,-2m-1}_{k+2m-1}(2r-1)
= (2k+2m-2)\,J^{0,2m+1}_{k-2}(2r-1)\,r^m.
\end{aligned}$$
As a result, one concludes the claimed formula. The proof is completed.
In this paper we propose and analyze spectral-Galerkin methods for the Stokes eigenvalue problem based on the stream-function formulation in polar geometries. We first analyze the stream-function formulated fourth-order equation under polar coordinates, then derive the pole conditions and reduce the problem on a circular disk to a sequence of equivalent one-dimensional eigenvalue problems that can be solved in parallel. The novelty of our approach lies in the construction of suitably weighted Sobolev spaces according to the pole conditions, based on which the optimal error estimates for the approximate eigenvalues of each one-dimensional problem can be obtained. Further, we extend our method to the non-separable Stokes eigenvalue problem in an elliptic domain and establish the optimal error bounds. Finally, we provide some numerical experiments to validate our theoretical results and algorithms.

Keywords: Stokes eigenvalue problem, polar geometry, pole condition, spectral-Galerkin approximation, optimal error analysis
Cloud computing is a disruptive technology that has already changed the way many people live and conduct business. Clearly, individuals, organizations, and businesses in developing countries are also adopting services such as reliable data storage, webmail, or online social networks. However, according to a report by the United Nations Conference on Trade and Development (UNCTAD), the rate at which this adoption takes place is much slower. In Western countries the increasing demand by users is met by creating ever-larger data centers and upgrading and extending high-speed communication networks. In developing countries the picture looks different. The UNCTAD report goes on to say that missing infrastructure is a major obstacle for the uptake of cloud computing in these regions: "whereas there were in 2011 more than 1000 secure data servers per million inhabitants in high-income economies, there was only one such server per million in [the least developed countries (LDCs)]."

We argue that trying to copy the infrastructure of Western countries is neither feasible nor meets the requirements of developing countries, since setting up and maintaining a number of large data centers can easily cost hundreds of millions of dollars. Just using the infrastructure provided by the large multinational corporations that create such data centers is not an ideal situation either. Handing over data to foreign entities raises all kinds of issues, ranging from privacy and data protection (the local laws and regulations may not match those of the hosting country) to national security and industrial espionage. Additionally, due to network outages it may be difficult to establish reliable access to these remote servers, which is an even more acute problem for rural areas: moving to the cloud to gain a reliable storage solution would be negated by not being able to access it at all times. Consequently, we advocate the use of small, distributed data centers that can be adapted to the local needs in a bottom-up, community-based fashion. These can serve as hubs and relays for (large-scale) remote cloud services or as facilities for local community services.
Pushing some of the data storage and computation to the client side is known as a cloudlet architecture, and is done to improve the availability and durability of stored data, as well as to lower network latencies for mobile devices. Figure [fig:cloudlet] shows a typical cloudlet architecture: the clients are not directly connected to the cloud but go through cloudlet servers. It is important that the design of cloudlet data centers considers the specific requirements of developing countries. First of all, it has to use low-cost components that are readily available. Second, low power consumption is a crucial criterion: due to unreliability in the provision of electricity, the data center may have to run on solar power or batteries for considerable stretches of time. Third, we need a robust and sustainable system that can be operated in a harsh environment (in terms of temperature and weather). Finally, the data center should be based on open platforms and standards and avoid proprietary technology as much as possible. This will make it easier to deploy, maintain, and repair the hardware; additionally, an open system allows users to adapt, extend, and scale it to their particular needs. Interestingly enough, even a big player like Facebook is advocating the use of open hardware in their Open Compute Project initiative.

Here we illustrate how single-board computers, such as the Raspberry Pi (RPi), can serve as building blocks for computing platforms that meet the requirements described above. Originally developed to spark the interest of school children in computer science, the Raspberry Pi has also been discovered by hobbyists worldwide, who use it for a wide range of projects. We have been building Raspberry Pi clusters and experimenting with them for more than two years now. A while ago we started using them for teaching and training purposes and even as a testbed for research. We believe that this technology has a lot of potential and can make an impact on the lives of many more people by serving as a low-cost data storage and computing platform.

The remainder of the paper is structured as follows. We discuss related work in the next section and then describe our hardware design in Section [sec:hardware], followed by a brief discussion of software aspects in Section [sec:software]. Section [sec:discussion] highlights and explains some of our design decisions in more detail and also sketches use cases and application domains. Finally, in Section [sec:concl] we conclude with a brief summary.

There are many studies investigating the adoption of cloud computing; for overviews and surveys, see the cited literature. However, most of these studies highlight the topic from the point of view of well-developed countries. While some inhibitors, such as security and privacy concerns, hold universally, others are predominantly found in developing countries and play almost no role in Western countries. Predominant among these are unstable power grids and inadequate internet connectivity, translating into more or less frequent outages. This calls for a different approach to cloud computing: before cloud services can take off, a reliable infrastructure has to be put in place. There are numerous publications on connectivity, communication networks, and bandwidth.
While the networking aspect is very important, the storage and computational aspects should not be neglected, especially since getting the infrastructure for widespread broadband connectivity into place will still take considerable time. Consequently, there is a need for local data centers that can bridge the communication gaps. Tesgera et al. propose a cloudlet-based approach to tackle network issues in emerging regions, but do so very briefly and on a very abstract level, identifying research challenges but not proposing any implementation. We agree with Hosman and Baikie that the Western "bigger is better" approach to building large-scale data centers is bound to fail in developing countries, due to the particular constraints. Furthermore, in a study Hosman identifies the main challenges faced by hardware deployers in these regions: energy consumption, cost, environment-related issues, connectivity, and maintenance and support. These are all crucial aspects we cover by relying on inexpensive, low-power, rugged platforms, such as micro-computers, to build micro-data centers. There are various groups already working on developing Raspberry Pi clusters and similar architectures; the challenge is to adapt these designs to the conditions found in developing countries.

In the following we give an overview of the hardware employed in our design: in particular the casing, network architecture, electronic components, power supply, and cooling. At the core of our design are modular, stackable wooden boxes of size 40 x 40 x 25 cm (shown in Figure [fig:box]), offering several advantages. The boxes can be assembled and disassembled with a simple screwdriver, and for transport the individual parts can even be stored in (hand) luggage. Once set up, the contents of a box can be accessed without the need for any tools: the front plate is fixed with wing screws, which can be opened with bare hands. The boxes also feature handles, which allows their transport in the assembled state. Additionally, the electronic components are not soldered to the casing, but fixed on shelves with screws. Consequently, the devices inside a box can easily be accessed, repaired, substituted, maintained, and updated, even by a person with minimal technical skill. For the casing we have chosen wood, because it is an environmentally friendly material that can be found around the world; however, it can be replaced with whatever material is readily available locally.

The default network architecture consists of two subclusters connected via two switches, which in turn can be connected to an external network. Figure [fig:network] shows a schematic diagram of the architecture. (We use 8-port switches with two additional uplink ports; PoE stands for Power over Ethernet and will be discussed later.) A subcluster, which fits into one box, comprises two control and five data-storage/computational Raspberry Pis, to which solid-state disks are connected. By running two subclusters, each containing multiple RPis, we avoid a single point of failure: in case of a breakdown, the data center can be kept up and running, albeit at reduced performance. This design is also scalable: we do not need to connect both switches to the external network; one of them can also be connected to one or more other subclusters.
In our current design we use the new Raspberry Pi 2, Model B, with a 900 MHz quad-core ARMv7 CPU and 1 GByte of RAM, as it exhibits higher performance than its predecessor, the Raspberry Pi B+. However, it has a slightly higher power consumption than the B+ model, even in single-core mode. Thus, if power consumption is a crucial issue, the Pi 2 can be swapped for the Pi B+. In terms of cost, the difference between the two models is small: about $35 for the Pi 2 versus $25 for the B+ model. In fact, we are not restricted to Raspberry Pi boards; other single-board computers, such as the Banana Pi, ODROID, or any other device with a CPU of the ARM family, would also work. With some tinkering it would even be possible to make use of old smartphones.

Using the Power over Ethernet (PoE) technology simplifies the assembly, as we do not need separate sets of cables for communication and power supply. Additionally, it eliminates the need for separate power-supply sockets for the Raspberry Pis, although a PoE adapter is necessary on the RPi side. The main advantage, though, is the provision of smart power management and monitoring techniques integrated into the system. For example, when using the HP 2530-8-PoE+ switch, PoE operation can be disabled and enabled (at different levels of priority) for each individual port. Clearly, this configuration has a price tag attached to it: a solution relying on PoE will add to the costs of the system. We can make a marginal saving by connecting the two controller RPis of a subcluster via a single PoE adapter for supplying them with power.

The power supply, which requires a separate box, comprises four 12 V lithium or lead-acid batteries configured in series with a total capacity of 200 Ah. The batteries, whose task is to provide a constant source of energy, can be charged via solar panels or any other source of electricity that is available. The lead batteries have the advantage of being cheaper than the lithium ones; however, the lithium batteries are much better suited to higher temperatures and degrade much more slowly under these conditions. The lead batteries should not be ruled out completely, though: in a high-altitude environment, such as the Andes or the Himalayas, we may run into problems with lithium batteries and low temperatures, as they cannot be charged at temperatures below freezing. Thus, lead batteries would be a better choice for these regions. The power management is controlled by a power inverter and a battery-charging regulator. In our prototype, a 220 V AC 300 W sine-wave power inverter (Studer AJ-400-48) and a lithium battery charger are used. When using lead batteries, we can replace this with the AJ-400-48-S model, which also integrates a 10 A 48 V DC lead-acid battery-charging regulator. The AC output socket from the power inverter can be extended by a three-way power strip to supply a laptop or another external diagnosis tool.

An important design decision is to use only passive cooling techniques, so that we do not incur any additional power consumption and there are no moving parts that can break down. We created a set of holes on two opposite sides of the casing to make use of the stack effect for cooling. One set of holes is found in the upper part of one side, while the opposite set is located in the lower part. The warm air flows out through the upper set, drawing in cooler outside air through the lower set.
For protection, the holes can be covered with anti-insect and anti-dust nets. Depending on the placement of the boxes, additional cooling mechanisms can be put into place, such as solar chimneys, windcatchers, or making use of the cooler temperature underground by installing additional heat sinks.

In our group we developed tools to set up and maintain the operating system installed on the individual nodes of the data center. We do this in an efficient and automated manner with scripts that install a minimal version of Debian or Arch Linux ARM on a node, register the node in the cluster, and then update the node to fully integrate it into the cluster. We can also monitor the activity of each node by installing our monitoring panel. This covers the basic infrastructure, but is not enough yet: a cluster in which software is deployed in a bare-metal fashion on individual nodes is not attractive for potential users; some form of middleware is needed. However, most off-the-shelf solutions, such as OpenStack or similar frameworks, are usually too heavyweight for Raspberry Pi clusters. The storage manager component of OpenStack, Swift, is relatively lightweight compared to other components, such as Nova (computing) and Neutron (networking). We have successfully deployed it on a Raspberry Pi cluster, making the cluster usable as a data storage platform with data replication, meaning that we do not lose data in case of (partial) hardware failure. Currently, we are working on extending this solution to other aspects of cloud computing.

In the following we provide more details and motivate some of the particular design decisions we made; we also discuss application domains of our server architecture. While we were designing our data center, we realized that it would be crucial to anticipate some of the future requirements or changes that users would make to the system to adapt it to their needs. That is the reason why we went for a flexible design around the core of wooden boxes housing the electronic devices. Depending on the specific requirements in terms of power consumption, redundancy, and computational power, a suitable number of boxes containing electronics and power supplies can be selected and stacked on top of each other. In a test run, we measured the power consumption of one box, i.e., one subcluster with seven Raspberry Pis, a switch, and five SSDs. Generating a stress test under Arch Linux with two CPU processes, one I/O load, and one RAM load with 128 MB malloc as a benchmark, one subcluster consumed 48 W running the benchmark. We specified the capacity of the batteries assuming a load that continuously consumes 96 W (for the two subclusters). The goal was to keep the discharge rate at 50% for a duration of twelve hours; a back-of-the-envelope sizing check is sketched below.
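The following minimal sketch works through the sizing arithmetic under the stated figures (constant 96 W draw, 12 h autonomy, at most 50% depth of discharge, four 12 V batteries in series giving a 48 V bank). Note that the "200 Ah total" rating quoted above is ambiguous in the source (per-battery versus whole bank); under either reading it leaves a comfortable margin over the computed requirement:

```python
# Back-of-the-envelope battery sizing for the micro-data center.
load_w = 96.0           # two subclusters under stress
hours = 12.0            # overnight autonomy target (no solar input)
max_dod = 0.5           # keep depth of discharge at or below 50%
bank_voltage = 4 * 12.0 # four 12 V batteries in series

energy_needed_wh = load_w * hours / max_dod       # usable capacity required
capacity_needed_ah = energy_needed_wh / bank_voltage

print(f"required bank capacity: {energy_needed_wh:.0f} Wh "
      f"(= {capacity_needed_ah:.0f} Ah at {bank_voltage:.0f} V)")
# -> required bank capacity: 2304 Wh (= 48 Ah at 48 V)
```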
Regularly discharging down to low levels has a detrimental effect on the life of the batteries. A duration of twelve hours was chosen to be able to run the data center overnight, during which there is no sunlight for solar panels. Nevertheless, the capacity of the batteries can be adapted to the local circumstances.

Our goal is not to compete with the high-tech environment usually found in large cities and metropolitan areas, but to provide an alternative for more rural areas. The proposed approach allows users to set up local servers which can be used as components of a private or community cloud network. This is especially important when it comes to handling sensitive data in domains such as health care and governance, as it gives full ownership of the data and services to the persons running the data center. Other important use cases are education and research. As we have experienced with our own students, it is an ideal platform for teaching practical skills in the area of distributed systems; the acquired knowledge ranges from hardware all the way to protocols for synchronizing nodes in a network. Due to its mobility, it can also be used to set up field labs for processing data generated, for example, by sensor networks monitoring the environment. Beyond these applications, the platform may even help in sparking entrepreneurial activity, as the initial investment is not large and the system can be scaled out when and if the need arises. Moreover, upgrading the system with more efficient boards as they become available can be done gradually, i.e., this does not have to be done in one go, making it possible to achieve the upgrade with a number of smaller investments rather than one big one.

Even though cloud computing and similar services are also expanding in developing countries, they are still far from being widespread. While there are also discussions about factors inhibiting the adoption of cloud computing in Western countries, some factors are unique to the developing world, so trying to apply the Western approach to cloud computing is very likely bound to fail. Two crucial factors, which were also identified in a report by the World Economic Forum, are the lack of both infrastructure and a skilled workforce. We believe that our micro-data-center architecture can start filling these gaps by providing an open, inexpensive, adaptable, and extendable platform with a low power consumption, empowering communities to take matters into their own hands. Due to these features, it can also be rolled out in schools and universities to teach and grow the next generation of engineers and computer scientists. We hope that in the near future the digital divide will not widen, as it is currently doing in the area of cloud computing, but become narrower. However, we believe that in order for this to happen, it is not sufficient for developing countries to just import information technology; ultimately, a lot of the needed infrastructure should be developed and produced in the countries themselves. In that light, we see our cluster as a starting point for many more creative and innovative solutions.
Instead of relying on huge and expensive data centers for rolling out cloud-based services to rural and remote areas, we propose a hardware platform based on small single-board computers. The role of these micro-data centers is twofold. On the one hand, they act as intermediaries between cloud services and clients, improving availability in the case of network or power outages. On the other hand, they run community-based services on local infrastructure. We illustrate how to build such a system without incurring high costs, high power consumption, or single points of failure. Additionally, we opt for a system that is extendable and scalable as well as easy to deploy, relying on an open design.
An interesting and important problem of geometry and mathematical analysis is the exact answer to the question: what is the surface area of the ellipsoid immersed in Euclidean space? Despite the simplicity of the question and the fact that the roots of the problem can be traced back to the 19th century, there has been only partial progress towards its solution, because the closed-form solution had evaded the efforts of previous researchers and scholars. The first serious investigation was performed by Legendre, who obtained an equation for the surface area of the ellipsoid in terms of formal integrals. At this point we note that a nice and critical review of the mathematical literature, summarizing the attempts of various mathematicians at solving the problem from the period of Legendre until 2005, can be found in the cited references.

There is also a practical interest in an exact solution for the ellipsoidal surface area in various fields of science, of which we mention a few: 1) in biology, the human cornea as well as chicken erythrocytes are realistically described by an ellipsoid, and in the latter case the area is important for the determination of the permeabilities of the cells; 2) in cosmology and the physics of rotating black holes; 3) in the geometry of hard ellipsoidal molecules and their virial coefficients, where the surface area appears in the expression for the pressure of the ellipsoidal molecules. We also mention the relevance of the surface area of the ellipsoid for the investigation and measurement of capillary forces between sediment particles and an air-water interface, and for applications to medicine we refer the reader to the cited literature.

On the other hand, there are two further important aspects related to the geometry of the ellipsoid awaiting a full analytic solution, with many important applications: first, the calculation in closed analytic form of the capacitance of a conducting ellipsoid, and second, the exact analytic calculation of the demagnetizing factors of a magnetized ellipsoid. In the former case, the geometry of the ellipsoid is complex enough to serve as a promising avenue for modeling arbitrarily shaped conducting bodies; capacitance modulation has recently been suggested as a method of detecting microorganisms such as E. coli present in water. Despite its importance in theory and applications, no exact analytic solution for the capacitance of the ellipsoid had been derived by previous authors; there was only a formula in terms of formal integrals. In the latter case, the magnetic susceptibility of a body determined in an ambient magnetic field is influenced by the shape and dimensions of the body. Thus the measured (apparent) magnetic susceptibility should be corrected for this shape effect to obtain the shape-independent true susceptibility; the relation between the true and apparent volume susceptibility involves the so-called demagnetizing factors. The first attempts at calculating the demagnetizing factors of the ellipsoid only derived expressions in terms of formal integrals. In this paper, we derive for the first time the closed-form solution for the three demagnetizing factors of the ellipsoid, in terms of the first hypergeometric function of Appell of two variables.
A fundamental application of our work will be in the determination of asteroidal magnetic susceptibility and its comparison to those of meteorites, in order to establish a meteorite-asteroid match. Another interesting application of our solution for the demagnetizing factors of the ellipsoid is in the field of microrobots: an external magnetic field can induce a torque on a ferromagnetic body, so the use of external magnetic fields has strong advantages in microrobotics and biomedicine, such as wireless controllability and safe use in clinical applications. Thus, there is a certain demand from pure and applied mathematics for the closed-form solutions of the above geometric problems. It is the purpose of our paper to produce such novel and useful exact analytic solutions for all three problems described above. We report our findings in what follows.

We consider an ellipsoid centred at the coordinate origin, with rectangular Cartesian coordinate axes along the semi-axes. We begin our exact analytic calculation for the infinitesimal surface area using the formula for the surface segment
$$\mathrm{d}S = \sqrt{1+z_x^2+z_y^2}\;\mathrm{d}x\,\mathrm{d}y,\qquad(\text{gensurfarea})$$
where $z = c\sqrt{1-x^2/a^2-y^2/b^2}$ on the upper half of the ellipsoid. Substituting, we get
$$\mathrm{d}S = \sqrt{\frac{1-\big(1-\frac{c^2}{a^2}\big)\frac{x^2}{a^2}-\big(1-\frac{c^2}{b^2}\big)\frac{y^2}{b^2}}{1-\frac{x^2}{a^2}-\frac{y^2}{b^2}}}\;\mathrm{d}x\,\mathrm{d}y
= \sqrt{\frac{1-\delta\frac{x^2}{a^2}-\varepsilon\frac{y^2}{b^2}}{1-\frac{x^2}{a^2}-\frac{y^2}{b^2}}}\;\mathrm{d}x\,\mathrm{d}y,$$
where we define $\delta := 1-\frac{c^2}{a^2}$ and $\varepsilon := 1-\frac{c^2}{b^2}$. Consequently the octant surface area is given by
$$\mathcal{A}_{oct} = \int_0^a\Big\{\int_0^{b\sqrt{1-x^2/a^2}}\omega\,\sqrt{\frac{1-\mu^2y^2}{1-\lambda^2y^2}}\;\mathrm{d}y\Big\}\mathrm{d}x,$$
with
$$\omega := \sqrt{\frac{1-\delta x^2/a^2}{1-x^2/a^2}},\qquad \mu^2 := \frac{\varepsilon}{b^2(1-\delta x^2/a^2)},\qquad \lambda^2 := \frac{1}{b^2(1-x^2/a^2)}.$$
We define a new variable
$$y_1 := \frac{y}{b\sqrt{1-x^2/a^2}}\implies\mathrm{d}y_1 = \frac{\mathrm{d}y}{\eta},\qquad\eta := b\sqrt{1-x^2/a^2},$$
so that the inner integral becomes a standard Euler-type integral, where $F_1$ denotes the first generalized hypergeometric function of Appell in two variables,
$$F_1(\alpha,\beta,\beta',\gamma;x,y) = \sum_{m,n=0}^{\infty}\frac{(\alpha)_{m+n}(\beta)_m(\beta')_n}{(\gamma)_{m+n}\,m!\,n!}\,x^m y^n,$$
whose double series converges absolutely for $|x|<1$, $|y|<1$. Thus we obtain
$$\begin{aligned}
\mathcal{A}_{oct} &= \int_0^a b\sqrt{1-\frac{\delta x^2}{a^2}}\;\frac12\,\frac{\Gamma(1/2)\Gamma(1)}{\Gamma(3/2)}\,F_1\Big(\frac12,-\frac12,\frac12,\frac32;\mu'^2,1\Big)\,\mathrm{d}x\\
&= \int_0^a b\sqrt{1-\frac{\delta x^2}{a^2}}\;\frac12\,\frac{\Gamma(1/2)\Gamma(1)}{\Gamma(3/2)}\,\frac{\Gamma(3/2)\Gamma(1/2)}{\Gamma(1)\Gamma(1)}\,F\Big(\frac12,-\frac12,1;\mu'^2\Big)\,\mathrm{d}x\\
&= \int_0^a b\sqrt{1-\frac{\delta x^2}{a^2}}\;\frac{\pi}{2}\,F\Big(\frac12,-\frac12,1;\frac{1-x^2/a^2}{1-\delta x^2/a^2}\big(1-c^2/b^2\big)\Big)\,\mathrm{d}x.
\end{aligned}$$
In the transition from the first to the second line we made use of the property of Appell's hypergeometric function according to which, if one of its two variables is set to the value 1, the function reduces to the ordinary hypergeometric function of Gauss:
$$F_1(\alpha,\beta,\beta',\gamma;x,1) = \frac{\Gamma(\gamma)\Gamma(\gamma-\alpha-\beta')}{\Gamma(\gamma-\alpha)\Gamma(\gamma-\beta')}\,F(\alpha,\beta;\gamma-\beta';x).$$
It is also valid, with $x_1 = x/a$, that
$$\mathcal{A}_{oct} = ab\int_0^1\sqrt{1-\delta x_1^2}\;\frac{\pi}{2}\,F\Big(\frac12,-\frac12,1;\frac{1-x_1^2}{1-\delta x_1^2}\big(1-c^2/b^2\big)\Big)\,\mathrm{d}x_1.$$
We now apply a further transformation (its explicit form was lost in extraction), which yields
$$\begin{aligned}
\mathcal{A}_{oct} &= \frac{ab}{\sqrt{1-\delta}}\int_0^1\frac{1}{\big[1+\frac{\delta\eta^2}{1-\delta}\big]^2}\,\frac{\pi}{2}\,F\Big(\frac12,-\frac12,1;(1-\eta^2)\varepsilon\Big)\,\mathrm{d}\eta\\
&= \frac{ab}{\sqrt{1-\delta}}\int_0^1\sqrt{1-\varepsilon\eta^2}\int_0^{\pi/2}\frac{\mathrm{d}\phi}{\big[1+\frac{\delta(1-\eta^2)}{1-\delta}\cos^2\phi\big]^2}\,\mathrm{d}\eta
= \frac{ab}{\sqrt{1-\delta}}\int_0^1\sqrt{1-\varepsilon\eta^2}\;\frac{\pi}{4}\,\frac{2+\frac{\delta(1-\eta^2)}{1-\delta}}{\big[1+\frac{\delta(1-\eta^2)}{1-\delta}\big]^{3/2}}\,\mathrm{d}\eta.
\end{aligned}$$
The total surface area is therefore
$$\mathcal{A} = \frac{8ab}{\sqrt{1-\delta}}\Big\{\int_0^1\sqrt{1-\varepsilon\eta^2}\;\frac{\pi}{2}\,\frac{\mathrm{d}\eta}{\big[1+\frac{\delta(1-\eta^2)}{1-\delta}\big]^{3/2}}
+ \int_0^1\frac{\pi}{4}\,\frac{\delta(1-\eta^2)}{1-\delta}\,\frac{\sqrt{1-\varepsilon\eta^2}}{\big[1+\frac{\delta(1-\eta^2)}{1-\delta}\big]^{3/2}}\,\mathrm{d}\eta\Big\},$$
while using
$$1+\frac{\delta(1-\eta^2)}{1-\delta} = \frac{1-\delta+\delta(1-\eta^2)}{1-\delta} = \frac{1-\delta\eta^2}{1-\delta},$$
we obtain
$$\mathcal{A} = \frac{8ab}{\sqrt{1-\delta}}\Big\{\frac{\pi}{4}\,\frac{\Gamma(1/2)\Gamma(1)}{\Gamma(3/2)}\,(1-\delta)^{3/2}\,F_1\Big(\frac12,\boldsymbol{\beta}_\epsilon,\frac32;\varepsilon,\delta\Big)
+ \frac{\pi}{4}\,\frac{\delta}{1-\delta}\,(1-\delta)^{3/2}\,\frac12\,\frac{\Gamma(1/2)\Gamma(2)}{\Gamma(5/2)}\,F_1\Big(\frac12,\boldsymbol{\beta}_\epsilon,\frac52;\varepsilon,\delta\Big)\Big\},\qquad(\text{area51ell13kraniotis})$$
where we defined the 2-tuple of parameters $\boldsymbol{\beta}_\epsilon := \big({-\tfrac12},\tfrac32\big)$. Equation (area51ell13kraniotis) is our closed-form result for the surface area of the ellipsoid. We believe it constitutes the first complete exact analytic solution of the problem, and it is of a certain mathematical beauty. Thus, we have proved the theorem:

[georgiosvkraniotis] The surface area of the general ellipsoid in closed analytic form is given by the equation
$$\mathcal{A}^{ellipsoid}_{scalene} = 4\pi ab\left(\frac{c^2}{a^2}\,F_1\Big(\frac12,-\frac12,\frac32,\frac32;\epsilon,\delta\Big)+\frac13\Big(1-\frac{c^2}{a^2}\Big)F_1\Big(\frac12,-\frac12,\frac32,\frac52;\epsilon,\delta\Big)\right),$$
while in the degenerate spheroidal limits this reduces to known closed forms involving $\sqrt{1-c^2/a^2}\,\big/\,\big(1-\sqrt{1-c^2/a^2}\big)$ inside a logarithm (equation (gvktom13); its full expression was lost in extraction).

Table: comparison of the present closed-form evaluations with literature values (the table layout was lost in extraction; surviving entries include 0.0133953, 0.973133, 0.0613072/0.06108, 0.876021/0.87611, 0.235445/0.23555, and 0.5256/0.5256).
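Both the theorem and the $F_1\to{}_2F_1$ reduction used in the derivation can be sanity-checked numerically. The following sketch (semi-axis values are illustrative; mpmath's `appellf1` evaluates the Appell series, and the brute-force comparison uses the standard parametric area element, not anything specific to this paper):

```python
# Numerical cross-check of the closed-form ellipsoid surface area:
#   S = 4 pi a b ( (c/a)^2 F1(1/2,-1/2,3/2;3/2; eps, delta)
#                  + (1/3)(1-(c/a)^2) F1(1/2,-1/2,3/2;5/2; eps, delta) ),
# with delta = 1 - c^2/a^2 and eps = 1 - c^2/b^2.
import numpy as np
from mpmath import appellf1, gamma, hyp2f1, mp
from scipy.integrate import dblquad

mp.dps = 30
a, b, c = 3.0, 2.0, 1.0
delta, eps = 1 - c**2 / a**2, 1 - c**2 / b**2

closed = 4 * np.pi * a * b * (
    (c / a) ** 2 * float(appellf1(0.5, -0.5, 1.5, 1.5, eps, delta))
    + (1.0 / 3.0) * (1 - (c / a) ** 2)
      * float(appellf1(0.5, -0.5, 1.5, 2.5, eps, delta)))

def area_element(theta, phi):
    # |r_theta x r_phi| for r = (a sin t cos p, b sin t sin p, c cos t)
    st, ct = np.sin(theta), np.cos(theta)
    return st * np.sqrt((b * c * st * np.cos(phi)) ** 2
                        + (a * c * st * np.sin(phi)) ** 2
                        + (a * b * ct) ** 2)

brute, _ = dblquad(area_element, 0.0, 2.0 * np.pi, 0.0, np.pi)
print(closed, brute)   # both ~ 48.9 for semi-axes (3, 2, 1)

# Check of the F1 -> 2F1 reduction at y = 1 used in the derivation:
x = 0.37
lhs = appellf1(0.5, -0.5, 0.5, 1.5, x, 1)
rhs = gamma(1.5) * gamma(0.5) / (gamma(1.0) * gamma(1.0)) \
      * hyp2f1(0.5, -0.5, 1.0, x)
print(lhs, rhs)        # expect equality
```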
[Figure: capacitance of the conducting ellipsoid versus the axis ratio.] [Figures: demagnetizing factors $L$, $M$, $N$ of the magnetized ellipsoid versus the ratio $c/a$ for various values of the ratio $b/a$.]

Using Theorem [magnetization] we can also write the potential in closed form. We have solved analytically a number of problems related to the geometrical and physical properties of the theory of the ellipsoid. In particular, we solved in closed analytic form for the capacitance of a conducting ellipsoid immersed in Euclidean space; the exact solution is expressed in terms of Appell's first hypergeometric function and is given by Theorem [xwritikotita]. We also computed exactly the capacitance of a conducting ellipsoid in $n$ dimensions; the resulting exact analytic solution is expressed in terms of Lauricella's fourth hypergeometric function of $n-1$ variables, see Theorem [condenserngvk]. We subsequently solved analytically for the demagnetizing factors of a magnetized ellipsoid immersed in Euclidean space; the resulting solutions are expressed elegantly in terms of Appell's first hypergeometric function, as stated in Theorem [magnetization]. Finally, we derived the closed-form solution for the geometrical entity of the surface area of the ellipsoid immersed in Euclidean space; our analytic solution in this case is given in Theorem [georgiosvkraniotis], eqn. (area51ell13kraniotis).

We believe that the useful exact analytic theory of the ellipsoid developed in this work will find many applications in various scientific fields. We have already outlined in the introduction possible multidisciplinary applications of our theory in science: a scientific multidisciplinarity that ranges from physics, biology, and chemistry to micromechanics, space science, and astrobiology. A fundamental mathematical generalization of our theory would be to investigate the immersion of an ellipsoid in curved spaces and to solve for the corresponding geometrical and physical properties of such an object; however, such a project is beyond the scope of the present paper and will be the subject of a future publication. The generalization of Theorem [xwritikotita] for the capacitance of the ellipsoid in $n$ dimensions involves the analytic computation of a hyperelliptic integral. Hyperelliptic integrals, which are involved in the solution of timelike and null geodesics in Kerr and Kerr-(anti-)de Sitter black hole spacetimes, have been computed analytically in terms of the multivariable Lauricella hypergeometric function; the idea is to bring a hyperelliptic integral, by appropriate transformations, onto the integral representation that this function admits.

[condenserngvk] The closed-form solution for the capacitance of a conducting ellipsoid in $n$ dimensions is given in terms of the fourth hypergeometric function $F_D$ of Lauricella of $n-1$ variables. Applying the transformation ([metasximatismos]) to the hyperelliptic integral yields:
$$\begin{aligned}
\int_0^1\frac{x^{n-3}\,\mathrm{d}x}{\sqrt{(1-\mu_2x^2)(1-\mu_3x^2)\cdots(1-\mu_nx^2)}}
&\overset{x^2=\xi}{=}\frac{1}{2a_1^{n-2}}\int_0^1\frac{\xi^{\frac{n-2}{2}-1}\,\mathrm{d}\xi}{\sqrt{(1-\mu_2\xi)(1-\mu_3\xi)\cdots(1-\mu_n\xi)}}\\
&= \frac{1}{2a_1^{n-2}}\,\frac{\Gamma\big(\frac{n-2}{2}\big)\Gamma(1)}{\Gamma\big(\frac{n}{2}\big)}\,
F_D\Big(\frac{n-2}{2},\underbrace{\frac12,\frac12,\ldots,\frac12}_{n-1},\frac{n}{2};x_2,x_3,\ldots,x_n\Big),
\end{aligned}$$
where the identifications of the variables $x_i$ with the $\mu_i$ were lost in extraction. Applying our closed-form analytic formula, eqn. ([gvkcapacitynd]), for a particular dimension and choice of semi-axes (the values were lost in extraction), we derive the capacitance of this particular higher-dimensional ellipsoid.

B. T. Bulliman and P. W. Kuchel, A series expression for the surface area of an ellipsoid and its application to the computation of the surface area of avian erythrocytes, J. Theor. Biol. 134 (1988) 113-123.
T. H. Shumpert, Capacitance calculations for satellites, part I: isolated capacitances of ellipsoidal shapes with comparisons to some other simple bodies, Sensor and Simulation Notes, Note 157 (1972) 1-30.
S. R. Keller, On the surface area of the ellipsoid, Mathematics of Computation 33 (1979) 310-314.
L. R. M. Maas, On the surface area of an ellipsoid and related integrals of elliptic integrals, Journal of Computational and Applied Mathematics 51 (1994) 237-249.
G. V. Kraniotis, Frame dragging and bending of light in Kerr and Kerr-(anti) de Sitter spacetimes, Class. Quantum Grav. 22 (2005) 4391-4424.
G. V. Kraniotis, Periapsis and gravitomagnetic precessions of stellar orbits in Kerr and Kerr-de Sitter black hole spacetimes, Class. Quantum Grav. 24 (2007) 1775-1808.
We derive the closed-form solutions for the surface area, the capacitance, and the demagnetizing factors of the ellipsoid immersed in Euclidean space. The exact solutions for these geometrical and physical properties of the ellipsoid are expressed elegantly in terms of the generalized hypergeometric functions of Appell of two variables. Various limiting cases of the theorems for the exact surface area, the demagnetizing factors, and the capacitance of the ellipsoid are derived; they agree with the known solutions for the prolate and oblate spheroids and the sphere. Possible applications of the results, in various fields of science such as physics, biology, and space science, are briefly discussed.
Diffusion-weighted magnetic resonance imaging (DW-MRI) is able to quantify the anisotropic diffusion of water molecules in biological tissues such as human brain white matter. The great success of DW-MRI comes from its capability to accurately describe the geometry of the underlying microstructure: DW-MRI captures the average diffusion of water molecules, which probes the structure of the biological tissue at scales much smaller than the imaging resolution. New dMRI techniques for high angular resolution diffusion imaging (HARDI) are now able to recover one or more directions of fiber populations at each imaging voxel and thus overcome some of the limitations of diffusion tensor imaging (DTI) in regions of complex fiber configurations where fibers cross, branch, and kiss.

Most current classification techniques are based on the DTI measures of Westin et al. For example, a recent paper classifies white-matter voxels into single and crossing fibers simply by hard-thresholding the linear, planar, and spherical measures computed from the eigenvalues of the diffusion tensor. Other techniques have been developed to better handle fiber crossings, using apparent diffusion coefficient modeling from HARDI and other HARDI model representations based on a spherical harmonics (SH) decomposition. It was previously suggested, using automatic Bayesian relevance determination, that between 1/3 and 2/3 of WM voxels contain crossings. Jeurissen et al. have recently shown that this number is an underestimation and that WM crossings can occupy up to 90% of WM voxels, using maxima extracted from a robust SH-based fiber orientation distribution estimated via constrained spherical deconvolution. The first attempts to classify HARDI voxels using machine learning techniques used the space of SH coefficients from q-ball imaging; these techniques are based on diffusion maps and spectral clustering but were never applied to neurodegenerative datasets. More recently, Schnell et al. designed a classification process based on support vector machines, also based on the SH representation of the dMRI signal, and compared their results with the classical classification using Westin's measures.

The main contribution of this paper is the creation of a system that automatically classifies voxels of HARDI data in order to segment the white matter into different classes. The proposed method goes deeper than previous approaches because it takes into account the neighbourhood of the voxels by using convolved data. Using the spherical harmonic (SH) representation of each voxel, we want to be able to classify it as: (a) white matter with a single fiber bundle (WMSF), (b) white matter with crossing fiber bundles (WMCF), or (c) non-white matter (N-WM) of type gray matter (GM) or cerebrospinal fluid (CSF). Section [sec_apparatus] presents the dataset. We use a support vector machine (SVM) classifier to obtain the label of a voxel. The SVM tries to find the best separating hyperplane between two classes; as we work with several classes, a one-against-one approach is used (one binary classifier is trained for each pair of classes). We can easily see that using only the SH information may not be accurate enough, as there is no knowledge of the neighborhood while classifying the voxel.
To address this problem, we apply a 2D convolution to each slice of the brain against a kernel. This allows us to work with a voxel representation that is a weighted sum of the neighboring voxels. The convolution kernels are chosen in order to obtain the best accuracy (see Section [sec_selection]). Say we have a selected feature space of cardinality $n$ (i.e., each voxel is represented by a vector of size $n$; $n$ depends on the SH order) and $n$ convolution kernels, one per dimension of the feature space. We apply the convolution kernel $K_j$ to the $j$-th feature map of each slice:
$$\tilde F_j = F_j * K_j,\qquad j = 1,\ldots,n,$$
where $F_j$ is the image formed by the $j$-th feature of all voxels in the slice (the source's exact indexing notation was lost in extraction).

We apply a 6-fold stratified cross-validation: we obtain 6 subsets in which each label is represented at the same ratio, so that the recognition difficulty is comparable across subsets. Each subset serves once as the testing set while the other subsets serve as the training set (the following procedure is thus applied 6 times). The mean and standard deviation of the feature vectors of the training set are computed in order to apply a z-score normalization to the training and testing sets. The normalized training samples serve to train an SVM with a Gaussian kernel. To evaluate a candidate kernel set quickly, we do not try to search for the best SVM parameters, so $C$ and $\gamma$ are left at libsvm's default values. The learned SVM is used to predict the labels of the testing samples.

There is a large disparity in the number of samples per class, so we do not simply compute an averaged classification error rate. Instead we compute the following errors: (i) missed WM ratio (MWMR): the ratio of WMSF and WMCF voxels recognized as N-WM (CSF or GM) voxels; (ii) exchanged WM ratio (EWMR): the ratio of WMSF voxels recognized as WMCF voxels and of WMCF voxels recognized as WMSF voxels; (iii) imagined WM ratio (IWMR): the ratio of N-WM (CSF and GM) voxels recognized as WMSF or WMCF voxels. The final error score is then computed as
$$E = w_1\,\mathrm{MWMR} + w_2\,\mathrm{EWMR} + w_3\,\mathrm{IWMR},$$
where the numerical weights used in this experiment were lost in extraction; they give a low weight to the EWMR and larger weights to the IWMR and MWMR. Although we try to differentiate them, we do not care about exchanging CSF and GM.
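The pipeline above is straightforward to reproduce; the following sketch (not the authors' code) illustrates it with scipy and scikit-learn. Array shapes, the label encoding, and the weights `w` are assumptions made for the illustration, since the source's values were lost:

```python
# Illustrative per-feature convolution plus SVM pipeline, with the weighted
# error score built from the three ratios defined above.
import numpy as np
from scipy.ndimage import convolve
from sklearn.svm import SVC

def convolve_features(volume, kernels):
    """volume: (slices, H, W, n_features); kernels: (n_features, w, w)."""
    out = np.empty_like(volume)
    for j, k in enumerate(kernels):
        for s in range(volume.shape[0]):        # slice-wise 2D convolution
            out[s, :, :, j] = convolve(volume[s, :, :, j], k)
    return out

def error_score(y_true, y_pred, w=(2.0, 1.0, 2.0)):
    """Weighted error; labels assumed: 0=WMSF, 1=WMCF, 2=N-WM."""
    wm = (y_true == 0) | (y_true == 1)
    mwmr = np.mean(y_pred[wm] == 2) if wm.any() else 0.0
    ewmr = np.mean(y_pred[wm] == (1 - y_true[wm])) if wm.any() else 0.0
    iwmr = np.mean(y_pred[~wm] != 2) if (~wm).any() else 0.0
    return w[0] * mwmr + w[1] * ewmr + w[2] * iwmr

def fit_predict(X_tr, y_tr, X_te):
    """z-score with training statistics, then a default RBF SVM
    (scikit-learn's SVC uses a one-against-one strategy internally)."""
    mu, sd = X_tr.mean(axis=0), X_tr.std(axis=0) + 1e-12
    clf = SVC(kernel='rbf')
    clf.fit((X_tr - mu) / sd, y_tr)
    return clf.predict((X_te - mu) / sd)
```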
As the search space is very large for each feature set, we use a genetic algorithm to search for the best convolution kernels. Each kernel is an array of size $w \times w$, with $w$ the width of the kernel. The genome consists of the $n$ kernels stored in a 1D array of values restricted to a fixed interval (meaning that the values inside the kernels stay in this interval), so it has a size of $n \cdot w^2$. We do not require a kernel to sum to one, as we do not care if the data obtained after convolution is not in the same metric as the original data (we do not manipulate or visualize the convolved data; it only serves for classification). The population contains 500 individuals. The initial population contains:

* one individual built with Gaussian kernels, which are expected to be good since they compute a weighted average based on the distance around the voxel of interest;
* 250 individuals created by modification of the first individual (mean of the Gaussian-kernel-based individual and a random one);
* 249 individuals generated totally randomly.

The procedure stops after 100 generations or 3 days of computing. The cross-over rate is set at 0.9 and uses 2 points. The mutation rate is set at 0.1 (a higher mutation rate implied neither a faster convergence nor better results) and applies Gaussian mutations. The number of elites is set at 20. The fitness value of a chromosome is the error score defined in ([eq_fitness]), and the genetic algorithm seeks to reduce this score; a minimal sketch of the evolution loop is given below.

Ground truth datasets and validation are among the biggest challenges of the dMRI community. Hence, there is an important effort to build ex-vivo phantoms that produce realistic datasets, more realistic than simulated synthetic data. This is the case of the FiberCup data, mimicking a coronal slice of the brain. It is a simple 3D dMRI dataset, but it is quite unique because it reproduces complex fiber crossing configurations, similar to configurations in the centrum semiovale and U-fibers of the brain. The underlying ground truth is known and will thus serve as our learning/testing dataset.
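Returning to the evolution procedure, the following is a minimal sketch of the genetic loop under the stated rates (two-point cross-over at 0.9, Gaussian mutation at 0.1, 20 elites); the fitness call, value interval and mutation scale are simplified placeholders.

    # Minimal sketch of the genetic search over convolution kernels
    # (placeholder fitness; the real fitness is the cross-validated
    # SVM error score of equation [eq_fitness]).
    import numpy as np

    rng = np.random.default_rng(42)
    N_FEAT, WIDTH, POP, ELITES = 15, 5, 500, 20
    GENOME = N_FEAT * WIDTH * WIDTH

    def fitness(genome):
        return float(np.sum(genome ** 2))  # placeholder, lower is better

    def two_point_crossover(a, b):
        i, j = sorted(rng.integers(0, GENOME, size=2))
        child = a.copy()
        child[i:j] = b[i:j]
        return child

    pop = rng.uniform(-1.0, 1.0, size=(POP, GENOME))  # value interval assumed
    for generation in range(100):
        scores = np.array([fitness(g) for g in pop])
        elites = pop[np.argsort(scores)[:ELITES]]
        children = []
        while len(children) < POP - ELITES:
            a, b = pop[rng.integers(0, POP, size=2)]
            child = two_point_crossover(a, b) if rng.random() < 0.9 else a.copy()
            if rng.random() < 0.1:
                child += rng.normal(0.0, 0.05, size=GENOME)  # Gaussian mutation
            children.append(child)
        pop = np.vstack([elites, np.array(children)])
    best = pop[np.argmin([fitness(g) for g in pop])].reshape(N_FEAT, WIDTH, WIDTH)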
For this paper, we have focused on the isotropic, 64-direction acquisition of the FiberCup phantom. The phantom and ground truth fibers are illustrated in fig. [fig:fibercup]. We thus use a good compromise between purely synthetic data and in vivo data, for which it is almost impossible to have ground truth. The phantom border represents GM, fibers in one direction (resp. several directions) are WMSF (resp. WMCF), and the rest is CSF. (Fig. [fig:fibercup]: phantom dataset with ground truth fibers.)

From this HARDI dataset, we provide one input among the following choices to the SVM: i)-ii) an SH order 4 or 8 representation of the raw signal (SH4, SH8 respectively), or iii) the eigenvalues of the diffusion tensor (EIG). Results of the proposed method are compared to a baseline classifier. This classifier does not use convolution operations, but it works with far more information. Each voxel is classified by several SVM classifiers, each using a different feature space (SH4, SH4 rotation insensitive, SH8, SH8 rotation insensitive, eigenvalues, SH4 after deconvolution, ODF4, ODF8). Each classifier is trained using the best SVM parameters ($C$, $\gamma$), obtained through a grid search with 10-fold cross validation. A final SVM classifier operates a fusion of the results of the previous classifiers in order to give the label of the voxel (so as to obtain better results than a single SVM). Note that this SVM fusion gives better results than a majority vote and than each individual classifier.

(Table [tab_perf_results]: recognition performance (fitness value / global classification error rate), using the best convolution kernels, and for the baseline classifiers.)

Using a computer with 4 GB of RAM and a 4-core processor, scripts written in Python and time-consuming tasks compiled using Cython, it takes around 24 hours to run the evolution procedure for most couples of feature / kernel width (we need to evaluate classification tasks on a high quantity of voxels for each couple of selected feature and kernel width). As the feature space is smaller, results are obtained more quickly for SH4 data than for SH8. The evolution procedure was not able to find a better individual than the standard kernel matrices for some configurations. This may be because the search space is too big in comparison to the number of individuals involved in the procedure.

Table [tab_perf_results] presents the performances obtained using the best filter sets for each couple of features (SH4, SH8) and convolution kernel widths (5, 7, 9), as well as the performances of the baseline classifiers. Fig. [fig_comp_performance] visually presents the recognition performance using our method (SH8, kernel width of 5) against the best baseline classifier. Baseline classifiers mainly make mistakes by detecting crossing fibers as single fibers. If we use the global classification error rates, they seem to perform well; however, if we use the fitness value or look at fig. [fig_class_base], we understand that they perform badly. This may be explained by the fact that they are trained to reduce the global error rate regardless of the location of the errors. The proposed convolution-based classifier performs better. Most errors are misrecognitions of the border of the phantom, which can be explained by the fact that the fitness function does not try to minimize this error. Most of the other errors are located at the frontier between two different classes.
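For reference, the baseline's grid search and SVM fusion described above can be sketched as follows; the parameter grids and the stacking of decision values are illustrative assumptions (in particular, a careful implementation would stack held-out predictions rather than training outputs).

    # Sketch of the baseline: per-feature-space SVMs tuned by grid search,
    # fused by a final SVM (grids and stacking scheme are assumptions).
    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    labels = rng.integers(0, 4, size=500)
    feature_spaces = {                      # placeholder feature spaces
        "sh4": rng.normal(size=(500, 15)),
        "sh8": rng.normal(size=(500, 45)),
        "eig": rng.normal(size=(500, 3)),
    }

    grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1.0]}
    stacked = []
    for name, X in feature_spaces.items():
        search = GridSearchCV(SVC(kernel="rbf"), grid, cv=10)
        search.fit(X, labels)
        stacked.append(search.best_estimator_.decision_function(X))

    fusion = SVC(kernel="rbf").fit(np.hstack(stacked), labels)
    print(fusion.predict(np.hstack(stacked)[:5]))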
If we ignore the recognition error between CSF and GM (because it may not be important in our context), the error rate drops from 14.65% to 6.61% (SH8 / window of size 5). The huge amount of time involved in the experiment is not representative of a real use of the system, as a lot of classifiers are learned and tried during the optimisation procedure. To give a more accurate representation of the duration of the computation, we have computed the mean time taken to learn the classifier and the mean time taken to classify the voxels, using the best kernels for each configuration of features and kernel width. It takes on average less than 11 s to learn the classifier and a bit more than 1.50 s to classify the voxels. In a real life application, only the classification process is used, so we think it is fast enough to be used in a real world application, even if the processing time would be slightly larger with more voxels. We can approximate an overestimate of the classification duration of a whole brain by $T \approx \bar{t}\, V$, with $V$ the number of voxels to classify and $\bar{t}$ the mean classification time per voxel. It is an overestimate, as it does not take into account a potential reduction of the number of voxels to classify after having applied a manual or automatic localisation of the brain. With a typical whole-brain acquisition matrix, the classification duration would be of the order of ten minutes, which seems a reasonable amount of time.

We have proposed a new SVM classifier taking into account a local spatial neighbourhood to classify each voxel of HARDI data according to its water diffusion phenomenon. We can successfully classify voxels containing single fiber bundles, voxels with several directions of diffusion reflecting crossing fiber populations, and random isotropic voxels without fiber bundles. We believe that this opens many possible perspectives in quantitative white matter analysis in healthy subjects and in patients with neurodegenerative diseases. Future experiments could use large margin filtering in order to optimize the SVM parameters while optimizing the convolution matrices. It is also important to apply the method to real brain datasets manually labelled by neurologists or neurosurgeons. Morphological operators could also decrease the recognition error rate.

P. Fillard, M. Descoteaux, A. Goh, S. Gouttard, B. Jeurissen, J. Malcolm, A. Ramirez-Manzanares, M. Reisert, K. Sakaie, F. Tensaouti, T. Yo, J.-F. Mangin, and C. Poupon. Quantitative evaluation of 10 tractography algorithms on a realistic diffusion MR phantom. NeuroImage, 56(1):220-234, 2011.

B. Jeurissen, A. Leemans, J.-D. Tournier, D. K. Jones, and J. Sijbers. Investigating the prevalence of complex fiber configurations in white matter tissue with diffusion magnetic resonance imaging. Human Brain Mapping, in press, 2012.
The understanding of neurodegenerative diseases undoubtedly passes through the study of human brain white matter fiber tracts. To date, diffusion magnetic resonance imaging (dMRI) is the unique technique able to obtain information about the neural architecture of the human brain, thus permitting the study of white matter connections and their integrity. However, a remaining challenge of the dMRI community is to better characterize complex fiber crossing configurations, where diffusion tensor imaging (DTI) is limited but high angular resolution diffusion imaging (HARDI) now brings solutions. This paper investigates the development of both an identification and a classification process of the local water diffusion phenomenon based on HARDI data, to automatically detect imaging voxels where there are single and crossing fiber bundle populations. The technique is based on knowledge extraction processes and is validated on a dMRI phantom dataset with ground truth.
The words "quantum gravity" have become associated by physicists over the years with "difficult problem". Attempts to quantize general relativity started almost immediately after quantum mechanics was established (see the article by Rovelli for a concise history of the field). Among the people involved are the most stellar names in physics. Indeed, one should expect problems when attempting to apply the rules of quantum mechanics to general relativity. Although general relativity was developed before quantum mechanics, the latter was introduced in the context of Newtonian physics. Already the incorporation of special relativity required some effort and, in fact, the introduction of quantum field theory, which in many ways is an extension of quantum mechanics. General relativity, however, is a much more radical revision of physics than special relativity. It is a theory of space-time itself, as opposed to a theory of entities living in a space-time. Quantum mechanics was firmly based on the latter viewpoint. This key element separates general relativity from almost all other physical theories. The invariance of the theory under coordinate redefinitions, which is more clearly viewed as invariance of the variables under diffeomorphisms, is not present in any other significant physical theory. In fact, we have only learnt how to apply the rules of quantum mechanics to theories invariant under diffeomorphisms relatively recently. And the theories in question, like BF theory or Chern-Simons theory, are remarkably simpler than general relativity. These theories are only superficially field theories, in that they are described in terms of fields, but the true degrees of freedom of the theories are finite in number. They are, in fact, mechanical systems instead of field theories. Solutions of the equations of motion are given by fields that are "trivial", the only non-trivialities coming from possible topological features of the manifolds the theories live on. Once one realizes this fact, it should not be surprising that their quantization becomes relatively straightforward.

There is a strong sociological element involved in quantum gravity as well. After the many successes of quantum field theory following World War II, it could only be expected that the application of the same powerful techniques to general relativity should finally conquer the problem. But this was not so. The application of perturbation theory to general relativity taught many interesting lessons, consolidating the ideas of gauge and ghosts. But it ultimately appeared to fail: general relativity appears to be perturbatively non-renormalizable. The practitioners of quantum field theory became so discouraged by this fact that they adopted the point of view that general relativity should be abandoned as a fundamental physical theory. It is not that the successes of the theory in explaining the classical world are in question; it is the fundamental nature of the theory. In this point of view, general relativity would play the role of an effective theory. The Lagrangian and field equations of general relativity should be viewed as, for instance, those of the Navier-Stokes theory of fluids: a highly successful and useful theory, but not one that anyone would care to quantize, for instance, to describe a quantum fluid. Just like in the case of quantum fluids one quantizes a theory underlying the Navier-Stokes one, one would quantize a theory underlying general relativity.
A theory that reproduces general relativity only in certain regimes, but is richer in other regimes. Another analogy that comes to mind is the Fermi theory of weak interactions, non-renormalizable and just a low energy manifestation of a richer theory, the electroweak theory. This point of view led to the development of supersymmetric theories, supergravity, Kaluza-Klein theories, superstrings and M-theory.

Is this point of view the only one? Strictly speaking, the answer is yes. General relativity only describes gravity, and therefore a richer theory should come into play in order to have a unified picture of all interactions, so general relativity indeed should be the limit of a larger theory. But even if one ignores all other interactions, are we completely sure that general relativity cannot be quantized? This appears as an academic question. After all, if we know we need a larger theory, why bother with determining if general relativity can be quantized? The reason this question is, in the view of some people, not of purely academic interest, is that general relativity has features that we would all desire in a unified description of nature, most notably the fact that space-time is dynamical and invariant under diffeomorphisms. In a sense, general relativity is perhaps the simplest theory incorporating these features with non-trivial content. Lessons learnt from attempting to quantize it should therefore be very valuable at the time of quantizing a richer theory. These are the reasons propelling a small but non-trivial minority of physicists (statistics can be found in the literature) to study the quantization of general relativity.

But isn't the fact that the theory is non-renormalizable an indictment of this program? How could one quantize such a theory? To understand this we need to separate an intrinsic question (is the theory quantizable or not) from a procedural question (can we quantize it using perturbation theory). These two questions are different. In fact, we know of examples of theories that admit a quantum description and that we do not know how to treat perturbatively. De Witt's group studied sigma models that have this feature. But more striking is the example of general relativity in $2+1$ dimensions. In dimensions lower than four, the Einstein equations just state that the metric is flat. General relativity in such a situation is only an apparent field theory, since the only solution to the equations is "constant". One can have degrees of freedom if the topology of the manifold is non-trivial. Yet, when perturbative quantization of general relativity in $2+1$ dimensions was attempted, the theory appeared to be non-renormalizable more or less in the same way the four dimensional version was. It was only when Witten noticed that one should be able to perform a non-perturbative quantization, and carried it out, that people realized one could find ways to treat the theory perturbatively.

The $2+1$ dimensional general relativity example exhibits in a dramatic way the pitfalls awaiting anyone attempting to quantize these kinds of theories. The lesson is that the symmetries of these theories are far more elaborate than usual. In the case of $2+1$ gravity, the symmetry group is so large that the theory is rendered trivial by it. Quantization schemes that do not take this into account fail.
In $2+1$ dimensions, unravelling the symmetry was easy because one has full control of the theory. The general exact solution of the equations of motion is not only known in closed form, but a good intuitive handle on its meaning is available. Nothing like this occurs in $3+1$ dimensions. Learning how to gain a comparable handle is the task at hand. It is obviously a difficult task. We will never have the general solution of the Einstein equations in four dimensions in closed form. That may prevent us from ever getting the kind of intuitive handle on the theory that is needed in order to quantize it.

The non-perturbative quantization of gravity was pioneered by DeWitt in the 60's, following the early efforts of Dirac and Bergmann. An immediate problem that was encountered is that the kind of variables in terms of which gravity is usually described is very different from the ones used in the successful quantum field theories of particle physics. In the latter, the fundamental variable is a connection. This made many of the techniques that had been developed for handling particle physics theories not applicable to general relativity. A change in this situation occurred when Ashtekar introduced a formulation of general relativity in terms of a connection that had a very elegant and simple canonical structure. In fact, the theory resembled a Yang-Mills theory and opened the possibility of introducing into general relativity the techniques so successfully used in that context. These lectures will give glimpses onto some of the results that have arisen ever since.

These notes are based on lectures. Due to the finite lecture time (further compressed by a two day plane delay!) and the lack of expertise of the author in some areas, it was not possible to cover many topics. A big, broad topic I missed is spin foam approaches to the path integral. This will be covered soon by a forthcoming review paper by Perez. I will not discuss the beautiful results on black hole entropy. These are of great importance, since they are precise calculations that do not shy away from taking into account the full dynamics of the theory. The paper by Ashtekar et al. has references to all the early literature. Very recent work by Varadarajan and others, showing a connection between the loop representation and more traditional Fock pictures, and the work of Thiemann's group on semi-classical states could not be covered. The beauty of these results requires a level of detail that was not possible in the format of the lectures. Finally, although the notes attempt to guide a newcomer to the literature, they have not been prepared carefully enough as to attempt to be a comprehensive review. Loll, Rovelli and Thiemann have review articles covering lattice approaches, loop quantization and canonical quantum gravity respectively. Carlip has a superb review on quantum gravity in general, giving the essentials of all approaches.
For a lighter reading, Smolin's recent book covers several aspects of quantum gravity. Detailed discussions of the early results are found in Ashtekar's books; other topics can also be seen in the book we wrote with Gambini. Baez and Muniain have an introductory book on knot theory with applications to gravity.

Canonical quantization is the oldest and most conservative approach to quantization. It demands that one control the theory well, not allowing one to bypass several detailed questions, namely: what is the space of states of the theory, what is the inner product, what is observable. Every physicist has performed a few canonical quantizations in courses on quantum mechanics. To canonically quantize, one roughly follows these steps: a) one picks a Poisson algebra of classical quantities that is large enough to span the physics of interest (in ordinary mechanics, $x$ and $p$ for instance); b) one represents these quantities as operators acting on a space of wavefunctions, and the Poisson algebra as an algebra of commutators; c) if the theory has constraints, that is, quantities that vanish identically classically, one has to impose that they vanish quantum mechanically as operators; d) an inner product has to be introduced on the space of wavefunctions that are annihilated by the constraints; e) predictions for the expectation values of observables (quantities that have vanishing Poisson brackets with the constraints) can be worked out. Notice that several of these steps involve choices. For instance, there is no unique way to choose an inner product, or to choose a certain set of classical quantities to be promoted to operators.

For general relativity one has to start by casting the theory in canonical form. This was done by Dirac and Bergmann in the 50's and 60's (for a more modern discussion see the literature). The fundamental canonical variable is the metric of a spatial surface, $q_{ab}$ (people normally use $q_{ab}$ instead of $g_{ab}$ to avoid confusion with the space-time metric). Its canonically conjugate momentum is usually denoted $\tilde{\pi}^{ab}$, where the tilde denotes that it is a density, as momenta usually are. The momenta are closely related to the extrinsic curvature, which in turn is closely related to the time derivative of the three-metric. The theory has four constraints. These are relationships among the variables at a given instant of time. Three of them form a vector and are called the "vector" or diffeomorphism constraint. When one has constraints in canonical theories, it is due to the presence of symmetries. The diffeomorphism constraint can be shown to be associated with the invariance of general relativity under spatial diffeomorphisms. The remaining constraint is associated with the invariance of general relativity under diffeomorphisms off the spatial surface. It is usually called the scalar or Hamiltonian constraint.
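For orientation, in the ADM variables just introduced these constraints take, schematically (with constants and signature conventions omitted, so this is a sketch rather than the precise expressions), the form
\[
\mathcal{C}_a = -2\, D_b\, \tilde{\pi}^b{}_a \approx 0, \qquad
\mathcal{C} = \frac{1}{\sqrt{q}} \left( \tilde{\pi}^{ab} \tilde{\pi}_{ab} - \tfrac{1}{2}\, \tilde{\pi}^2 \right) - \sqrt{q}\; {}^{(3)}R \approx 0,
\]
where $D_a$ is the covariant derivative compatible with $q_{ab}$, $\tilde{\pi} = q_{ab}\tilde{\pi}^{ab}$, and ${}^{(3)}R$ is the scalar curvature of the spatial metric.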
Unfortunately, the canonical treatment breaks the symmetry between space and time in general relativity, and the resulting algebra of constraints is not the algebra of four-dimensional diffeomorphisms: the Hamiltonian constraint is singled out and behaves differently. The algebra of constraints is closed in the sense that Poisson brackets of constraints are proportional to constraints, but the Poisson bracket of two Hamiltonian constraints is proportional to a diffeomorphism constraint through a function of the canonical variables. This, as we will see, will cause difficulties at the time of quantization. If one performs a Legendre transform, one finds that the Hamiltonian of general relativity is just a combination of the above mentioned constraints. That is, the Hamiltonian of the theory vanishes. This is due to the fact that the notion of time introduced in order to set up the canonical theory is a fiducial, arbitrarily introduced time. The canonical formalism "knows" that relativity does not really single out a preferred time, and responds back by saying that the Hamiltonian associated with any artificial time vanishes. Physical time can only be retrieved in the canonical theory through an elaborate process with many difficulties (Kuchar's article contains a detailed discussion of the "problem of time").

One can attempt a canonical quantization by considering wavefunctions of the metric, $\Psi[q_{ab}]$. One can represent the metric as a multiplicative operator and its canonically conjugate momentum as a functional derivative (schematically, $\hat{\tilde{\pi}}^{ab} = -i\hbar\, \delta/\delta q_{ab}$). One can then attempt to promote the constraints to operatorial equations. The diffeomorphism constraint, which is linear in the momenta, is relatively simple to implement. It implies that the wavefunctions are really only functions of the diffeomorphism invariant content of $q_{ab}$ and not of $q_{ab}$ itself. This is natural and elegant, but it is also problematic: we do not know how to encode in a simple way the diffeomorphism invariant information of a metric. Therefore the solution to the constraint just presented is natural but also quite formal; we cannot write it or handle it in an explicit way. A worse situation arises when one considers the Hamiltonian constraint. The latter is a non-polynomial function of the canonical variables that requires regularization. Most regularizations used in particle physics depend on the presence of a background (c-number) metric, which we do not have available in quantum gravity. No satisfactory treatment of the Hamiltonian constraint in this context has ever been found. Worse, the lack of control on the space of functions considered also implies that we do not know any kind of useful inner product to be introduced.

Finally, in the last step we were supposed to compute expectation values of the observables of the theory (see the literature for references on the observables problem). Observables have to be quantities that have vanishing Poisson brackets with the constraints. This implies that they are invariants under the symmetries of the theory, and that as quantum expressions they will act upon the space of physical states (solutions to the constraints) in such a way as to keep us within that space. Unfortunately, no such quantities are known for general relativity in a generic situation. If one reduces the theory by introducing additional symmetries, sometimes observables can be found, for instance in cosmological models or if the space-times considered are asymptotically flat. What is happening here is that, since the Hamiltonian of the theory is a combination of constraints,
finding observables is tantamount to finding the constants of motion of the theory. But the constants of motion can be seen as re-expressing the initial conditions of a given solution of the equations of motion as functions of phase space. This, of course, requires solving the equations of motion, something we cannot do for general relativity in closed form unless we have symmetries present. This has suggested the possibility that the problem could perhaps be tackled in an approximate form. Progress has recently been made on this issue in the new variables context as well.

The introduction of Ashtekar's new variables generated the new momentum that has invigorated the field in the last 18 years. For pedagogical reasons, the new variables are best introduced in a two stage process. The first stage is to use, instead of the metric of space as a fundamental variable, a set of triads $\tilde{E}^a_i$ (the tilde denotes a density weight, introduced for convenience; $a$ is a spatial index and $i$ labels the three frame fields). People had considered using the triad as a canonical variable before. The description closest to the notation used these days is given by Barbero. Extra constraints arise, since the theory is now invariant under frame field transformations (rotations) as well. The Hamiltonian is still a complicated non-polynomial function of the canonical variables, so the introduction of triads per se is not too helpful. The real breakthrough was the realization by Ashtekar that the Sen connection could be used as a canonically conjugate momentum to the triad. Usually the canonically conjugate momentum considered for the triad is proportional to the extrinsic curvature. The Sen connection adds a piece given by the spin connection of the triad. Actually, the sum of these two terms can be taken while multiplying the extrinsic curvature by a constant (this constant is called the Immirzi parameter), yielding a one-parameter family of possible canonically conjugate variables (all members of the family are related by a canonical transformation). We call this the generalized Sen connection.

If one rewrites the constraints in terms of the triads and the generalized Sen connection, several things happen. The set of constraints introduced due to the symmetry of the theory under triad rotations now takes the form "divergence of the triad equal to zero", where the divergence is taken with respect to the generalized Sen connection. If one writes the connection as $A_a^i$ and the triads as $\tilde{E}^a_i$, the resulting equation is exactly a Gauss law, $\partial_a \tilde{E}^a_i + \epsilon_{ijk} A_a^j \tilde{E}^a_k = 0$, like the one present in Yang-Mills theory (the $\epsilon_{ijk}$ arises due to the symmetry under rotations of the triads). The diffeomorphism constraint takes a form that resembles a Poynting vector, $\tilde{E}^b_i F_{ab}^i$. This is nice, since the latter is clearly associated with the momentum of the fields, and fields without net linear momentum are the only ones invariant under diffeomorphisms. Finally, the Hamiltonian constraint is still a complicated non-polynomial function of the variables. However, if one chooses the Immirzi parameter equal to the imaginary unit (this is if one considers a Lorentzian signature space-time; for a Euclidean signature the Immirzi parameter should be chosen equal to one), the non-polynomialities cancel out. One is left with a Hamiltonian constraint that takes a simple, polynomial (in fact at most quadratic) form in terms of the canonical variables. Another appealing aspect is that, written in terms of these new variables, general relativity appears as a Yang-Mills theory
with a set of extra constraints (and with a different Hamiltonian). This opened the possibility of introducing into general relativity tools that were used in Yang-Mills theory for its quantization. One of these tools is the use of loops.

We could now attempt a canonical quantization of the theory we just discussed. One could pick wavefunctions that are functionals of the connection, $\Psi[A]$. Given such a wavefunction, one can alternatively think of the coefficients $\Psi(\gamma)$ of its expansion in the Wilson loop basis ($\gamma$ is a loop and the Wilson loop is the trace of the holonomy of the connection around $\gamma$). Representing the functions in this way is what is known as the "loop representation". In this representation, wavefunctions are functions of loops and operators are geometric operators that act on loops. Such a representation was first proposed for Yang-Mills theory by Gambini and Trias, and for general relativity in terms of the new variables by Rovelli and Smolin. A caveat is that the basis provided by Wilson loops is really an overcomplete basis. The coefficients in the expansion are therefore constrained by certain relations, known as the Mandelstam identities. For a function of a loop to be admissible as a wavefunction in the loop representation, it should satisfy these identities. This can be challenging to achieve. As we will see, a solution to this problem was eventually found in terms of spin networks. Since one is automatically considering gauge invariant functions only when one works in the loop representation, Gauss' law identically vanishes. We are therefore left with the diffeomorphism and Hamiltonian constraints only.

The beauty of the loop representation lies in the natural action that the diffeomorphism constraint acquires in this representation: it acts on wavefunctions by shifting the loop infinitesimally. Therefore it is immediate to solve the diffeomorphism constraint. One simply has to consider wavefunctions of loops that are invariant under deformations of the loops. Such functions are studied by the branch of mathematics called knot theory and are known as knot invariants. We therefore see that we can solve the diffeomorphism constraint in terms of a set of functions on which there is a lot of mathematical knowledge.

A further surprise that the loop representation yielded was that it appeared to also help in solving the Hamiltonian constraint. In retrospect, this result appears to be of quite limited importance, but it provided a quite significant boost of interest in the subject at the time it was found, so we will review it here. Let us go back briefly to the connection representation. Suppose we want to promote the Hamiltonian constraint to a quantum operator. We choose a factor ordering such that the triads are to the right. One needs to regularize the operator. Let us choose a simple-minded point splitting, putting the two triads and the curvature at slightly different points. The triads operating on the Wilson loop produce a result that is proportional to the tangent to the loop at the point where they act (one can see this simply by noting that it is the only vector present at that point). That means that the two functional derivatives, viewed as a tensor, produce as a result a symmetric tensor, since the result is proportional to a vector times itself. This tensor is contracted with the curvature $F_{ab}^k$, which is antisymmetric in $ab$. Therefore the result vanishes.
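In symbols, the mechanism just described can be sketched as follows (a schematic sketch, with constants, traces and regulator details omitted):
\[
\hat{\tilde{E}}^a_i(x)\, W_\gamma[A] \;\propto\; \oint_\gamma ds\; \dot{\gamma}^a(s)\, \delta^3\big(x, \gamma(s)\big)\, (\cdots), \qquad
\hat{\mathcal{H}}\, W_\gamma \;\propto\; F^k_{ab}\, \dot{\gamma}^a \dot{\gamma}^b\, (\cdots) = 0,
\]
since $\dot{\gamma}^a\dot{\gamma}^b$ is symmetric in $ab$ while $F^k_{ab}$ is antisymmetric.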
Notice that this result does not depend on the details of the regularization. This result was first noticed by Jacobson and Smolin. One caveat is that, for it to be true, the loops in question have to be smooth. If a loop is not smooth, it contains points where there is more than one tangent, and the constraint is not automatically zero, since the two functional derivatives could be proportional to different vectors and then do not yield a symmetric tensor anymore. If we now go back to the loop representation, the previous result suggests that if we consider knot invariants that have support on smooth loops only, one would appear to solve all the constraints of quantum gravity. This result by Rovelli and Smolin generated a lot of excitement. But there are problems in attempting to solve the constraints in such a way. First of all, generic knot invariants with support on smooth loops only fail to satisfy the Mandelstam identities. This can be fixed by using spin networks, as we will see later. But more importantly, smooth loops appear to be too simple to carry interesting physics. As we will see, operators like the volume of space vanish identically unless one has intersections. Therefore this space of solutions of the constraints is very likely a "degenerate" subspace that is not large enough to do meaningful physics. It was, however, of great historical importance.

Another early result of interest in the loop representation was based on an observation due to Kodama. The observation is that if one considers the exponential of the integral over space of the Chern-Simons form built from the Sen connection,
\[
\Psi_{\rm K}[A] = \exp\left( c \int_\Sigma \mathrm{tr}\left( A\wedge dA + \tfrac{2}{3}\, A\wedge A\wedge A \right) \right),
\]
it automatically solves the Hamiltonian constraint in the connection representation if one introduces a cosmological constant. The cosmological constant produces an extra term in the Hamiltonian, proportional to the determinant of the triad. To see that the Kodama state solves the constraint, we only need to know that when one acts with the triad operator on it, one gets something proportional to the magnetic field built from the Sen connection; this one can see through a calculation. To put it in simpler terms, for this state "$\tilde{E}^a_i \propto \tilde{B}^a_i$", that is, the triad is proportional to the magnetic field built from the Sen connection. Therefore the two terms in the Hamiltonian constraint, which can be schematically written as "$\tilde{E}\tilde{E}F + \Lambda\,\tilde{E}\tilde{E}\tilde{E}$", can be made to cancel each other by a choice of the constant $c$ in the Kodama state. The Kodama state has been found to have connections, in the cosmological context, with the Hartle-Hawking and Vilenkin vacua. Of interest as well is its expression in the loop representation. To transform this state to the loop representation, we wish to find its expansion in the basis of Wilson loops, that is, the coefficients
\[
\Psi_{\rm K}(\gamma) = \int \mathcal{D}A\; W_\gamma[A]\, \Psi_{\rm K}[A].
\]
This integral has been extensively studied in a different context, that of Chern-Simons theory.
That is, consider a theory whose action is the Chern-Simons one, $S = \frac{k}{4\pi}\int \mathrm{tr}\left(A\wedge dA + \tfrac{2}{3}\, A\wedge A\wedge A\right)$. Then the integral can be viewed as the computation of the expectation value of the Wilson loop in a Chern-Simons theory. The result is a function of a loop that is diffeomorphism invariant, that is, it is a knot invariant. This invariant is the Kauffman bracket, which is closely related to the Jones polynomial, a knot invariant of great interest in the mathematical literature. Since this invariant is the transform of a state that is annihilated by the Hamiltonian constraint in the connection representation, it should be annihilated by the Hamiltonian constraint in the loop representation. This has been checked explicitly. It is remarkable that this invariant from the mathematical literature, which was developed in a fashion completely independent from any idea related to the gravitational field, manages to solve the quantum Einstein equations.

As we discussed in the previous section, the early years after the introduction of the new variables and the loop representation were ripe with intriguing and promising results that appeared to be a significant step forward in the construction of a canonical theory of quantum gravity. However, many of the results were of a formal nature only. There was not a good control on the space of states that one was operating on. Here we will mention some formal developments that took place in the mid 90's that helped gain better control on the calculations being performed.

As we discussed before, Wilson loops are an overcomplete basis for gauge invariant functions. They are constrained by a set of nonlinear identities called Mandelstam identities. The simplest such identity comes from the following identity of matrices in the fundamental representation of $SU(2)$: $\mathrm{Tr}(A)\,\mathrm{Tr}(B) = \mathrm{Tr}(AB) + \mathrm{Tr}(AB^{-1})$. In terms of Wilson loops this would read $W_{\gamma_1}\, W_{\gamma_2} = W_{\gamma_1 \circ \gamma_2} + W_{\gamma_1 \circ \gamma_2^{-1}}$, with $\circ$ denoting loop composition. Now, it is clear that these identities stem from the fact that we are working in the fundamental representation of $SU(2)$ to construct the Wilson loops. One does not need to do so. Consider a diagram like the one in the figure. One could construct holonomies along all the links in the diagram, in principle with a different representation of $SU(2)$ on each link (the representations are labeled by an integer). At each of the vertices one would use invariant tensors in the group to "tie up" the holonomies in such a way as to have, at the end of the process, a gauge invariant quantity. Such a quantity is a natural generalization of the Wilson loop. Diagrams like the one in the figure are called spin networks. Since the invariants so constructed do not depend on any particular representation, there are no relations between them as when we were working in the fundamental representation only.
Therefore the Mandelstam identities are automatically solved. This was first realized by Rovelli and Smolin. Spin networks also allow one to do calculations in a natural and simple way, as we will see in the following sections.

Calculations like the ones needed to transform states to the loop representation require a measure of integration on the space of connections modulo gauge transformations. Such integrations are also of interest for constructing inner products. These are really functional integrals in spaces of infinite dimensionality, and there is little experience on how to construct such integrals. Ashtekar and Lewandowski, among others, have pioneered the construction of measures of integration in such spaces. They begin by choosing a given set of functions that will be integrable. The functions chosen are "cylindrical functions". These are functionals on the infinite dimensional space that really only depend on certain "directions" or "projections" of the space against a set of Schwarz test functions. The projection is achieved through the use of Wilson loops or, even more easily, spin networks. It might appear that cylindrical functions are too simple to be able to capture interesting physics. But for the case of a scalar field, for instance, the Fock measure can be constructed with cylindrical functions. In the case of spin networks the resulting measure of integration is really simple: it just states that spin networks are actually an orthogonal basis, i.e., $\langle s | s' \rangle = \delta_{s,s'}$, where the delta on the right hand side is one if the spin networks are equal and zero otherwise. The measures constructed are naturally diffeomorphism invariant. If one considers the class of spin networks related by diffeomorphisms with $s$ and the class related with $s'$, one can construct an inner product on the classes simply by demanding that it be zero if no member of the class of $s$ coincides with at least one member of the class of $s'$. The subject of measures of integration has several mathematical subtleties. Physicists can get a very readable, succinct account in the paper by Ashtekar, Marolf and Mourao.

Most quantities of physical interest will involve products of the fundamental fields and therefore will require regularization. The latter is a non-trivial subject in quantum gravity. Most regularization procedures used in particle physics require the use of metric information. In particle physics the metric is a c-number, but this is not the case in gravity. If one insists on using regularization procedures that involve the metric, one should consider it an operator. This can complicate quite a bit the task of regularizing expressions. Alternatively, one can introduce an external c-number metric into the formalism and hope that, after one regularizes, there is no trace left of this artificial element in the construction. It is of interest to notice that some operators of (limited) physical interest can indeed be computed that are well defined in spite of the use of regularizations. These operators represent the area of a surface and the volume of a region of space.
At first it appears that these operators will be difficult to regularize. The classical expression for the area of a surface is
\[
A(S) = \int_S d^2\sigma\, \sqrt{\tilde{E}^a_i n_a\, \tilde{E}^b_i n_b},
\]
where $n_a$ is the normal to the surface. The presence of the square root might at first suggest that regularization will be problematic. However, partitioning the area into small elements, one can quickly see that spin network states are actually eigenstates of the quantity inside the square root. The end result for the main portion of the spectrum of the area operator is
\[
A = 8\pi\gamma\, \ell_{\rm P}^2 \sum_I \sqrt{j_I (j_I + 1)},
\]
where the sum is over all links of the spin network that pierce the area and $j_I$ is the spin (half the integer valence) of the link. We see that areas are quantized in terms of the Planck length squared. This was first noticed by Rovelli and Smolin. Later, Ashtekar and Lewandowski did a comprehensive analysis of the spectrum of the area operator. The quantization of the area reveals another surprise. Most people expected areas to be quantized, but the expectation was that the spectrum would be equally spaced, i.e., $A_n \propto n\, \ell_{\rm P}^2$. It is not, and this has consequences. Bekenstein and Mukhanov have shown that assuming that the spectrum of the area is equally spaced has serious implications for the validity of the thermal spectra of black holes. Rovelli and collaborators have shown that these problems can be solved by considering the correct spectrum. For the volume operator, results are quite similar. The volume operator acting on a spin network gives a nonzero result only if within the region considered there are intersections of valence equal to or larger than four. One gets a picture of spin networks in which each link carries "quanta of area" and each intersection of valence four or larger carries "quanta of volume". Both operators are finite and well defined without reference to any background metric structure, in spite of the fact that they had to be regularized. Several calculations of the spectrum of the volume can be found in the literature.

It is clear that one cannot really discuss any physics emerging from quantum gravity until one has dealt with the Hamiltonian constraint. Attempting to do so would be equivalent to trying to do physics after handling two of the three components of Gauss' law in Yang-Mills theories. One can attempt to do some calculations "at the kinematical level" (i.e., ignoring the constraints) in the hope that some of the basic features of the calculations will persist when the constraints are enforced, but this is not guaranteed. It is important to preface the discussion of this section with these caveats, since they very much apply to what we will discuss.

It took everyone by surprise when Amelino-Camelia and collaborators at CERN argued that in the detection of gamma ray bursts one could find traces of quantum gravity phenomenology. For years it had been common lore that quantum gravity required energies so high that it could only have relevant effects in the big bang or inside black holes. The possibility of detecting quantum gravity effects via gamma ray bursts goes as follows: the gamma rays that arrive on Earth have travelled a very long distance, since gamma ray bursts are expected to be cosmological. That distance appears even larger if one measures it in terms of the number of wavelengths of a gamma ray. If one assumes that when a wave propagates through the "quantum foam" each wavelength gets disturbed by an amount of the order of the Planck length, then the smallness of this number can be compensated by the huge number of wavelengths involved in traveling from the burst to Earth.
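In the phenomenological literature this is usually encoded in a modified dispersion relation; the generic parametrization below is quoted as an assumption of this sketch (the coefficient $\xi$ and exponent $\alpha$ are model dependent), not as the exact expression of the original papers:
\[
E^2 \simeq p^2 c^2 \left[ 1 + \xi \left( \frac{E}{E_{\rm QG}} \right)^{\alpha} \right], \qquad
\Delta t \sim \xi\, \frac{L}{c} \left( \frac{E}{E_{\rm QG}} \right)^{\alpha},
\]
where $L$ is the distance travelled, so two photons of different energies arrive at slightly different times whenever $\xi \neq 0$.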
If one inputs numbers, it turns out that detectable dispersions (differences in times of arrival) in gamma rays of different energies, like those detected by the BATSE experiment, imply that quantum gravity has to happen at energies larger than about $10^{16}$ GeV in order not to be visible. This is only three orders of magnitude away from the Planck scale! This led to a lot of interest in these observations. Within loop quantum gravity, some calculations have been performed to attempt to estimate these effects at a kinematical level. The calculations require a number of simplifying assumptions; otherwise one would have to deal with Einstein-Maxwell theory and work out a semiclassical limit. In general, the assumptions have been that the electromagnetic field is treated classically and that only the Maxwell part of the Hamiltonian constraint is considered. One finds modified Maxwell equations that imply that there is birefringence in the propagation of waves. Similar results can be found for neutrinos. The birefringence for photons has been severely constrained experimentally. A much more careful recalculation of the effects done recently confirms several of the general features of the original calculations. The main problem with these predictions is that, in order to have a non-vanishing effect at the lowest order in the energy of the gamma rays, one needs to introduce rather unnatural assumptions about the quantum state considered (otherwise one could not generate a birefringence, which is tantamount to a parity violation). If one does not make these assumptions, then the effects only arise at the next order in the Planck length and are completely undetectable. In terms of the original work of Amelino-Camelia et al., they postulated a non-standard dispersion relation whose correction to the usual one is suppressed by a power of the energy over the quantum gravity scale, and the effects would be observable if the coefficient of the lowest-order correction is non-vanishing. A non-vanishing coefficient of this type implies a fractional power in the dispersion relation, which is unusual. More importantly, all these calculations imply that one is violating Lorentz invariance. This is a huge step to take, and there is significant discussion of its implications in the current literature.

One of the initial encouragements that the new variables provided was that the Hamiltonian constraint appears as a polynomial function of the fundamental variables. This suggested that one could perhaps promote it to a quantum operator, and several attempts to regularize it were carried out. However, there is an obvious fundamental flaw in attempting this. The Hamiltonian constraint is quadratic in the triads. The triads are densities of weight one, meaning that the constraint is a density of weight two. More precisely, the version of the constraint that is nice and polynomial is a density of weight two. One could turn it into a density of weight one by dividing by the determinant of the metric, but then the resulting operator would be complicated and non-polynomial. Why is it a problem that it is a density of weight two? Suppose we wished to promote it to an operator in the loop or spin network representation. What could such an operator be? We have at our disposal a manifold and a set of lines in it. We have available a density of weight one, the Dirac delta, which is naturally defined on any manifold. But we do not have a density of weight two. And we cannot multiply Dirac deltas.
Therefore, if one found a regularization of the doubly densitized Hamiltonian constraint, what has to be happening is that one provided the extra density weight via the regulator, and therefore the imprint of the regulator will not disappear upon regularization. All these difficulties were bridged when Thiemann discovered how to handle the single-densitized Hamiltonian constraint. The expression for the constraint is
\[
H = \epsilon^{ijk} F_{ab}^k\, \frac{\tilde{E}^a_i \tilde{E}^b_j}{\sqrt{\det \tilde{E}}},
\]
and Thiemann noticed the identity ([thiemid]),
\[
\frac{\epsilon^{ijk}\epsilon_{abc}\, \tilde{E}^b_j \tilde{E}^c_k}{\sqrt{\det \tilde{E}}} \;\propto\; \{ A_a^i, V \},
\]
where $V$ is the volume of the three-manifold. The Hamiltonian constraint can therefore be written as
\[
H \;\propto\; \epsilon^{abc}\, \mathrm{tr}\big( F_{ab}\, \{ A_c, V \} \big).
\]
When we first discussed the Hamiltonian constraint with the new variables, we noted that it was important to take the Immirzi parameter to be the imaginary unit. That made certain non-polynomial terms disappear, but at the price of making the variables complex. Thiemann noted that, through a similar use of identities as the one we discussed, these non-polynomial terms could also be re-expressed in terms of Poisson brackets. Therefore there is no need anymore to take the Immirzi parameter to be imaginary, and from now on one can work with variables that are completely real.

Thiemann proposed a quantization for the above mentioned Hamiltonian constraint. The procedure consists in introducing a lattice. He chooses an irregular (tetrahedral) lattice. In terms of this lattice, he approximates the expression for the classical Hamiltonian constraint using holonomies. Omitting many details, the idea is that the "$F_{ab}$" term is represented by a closed loop going around a triangle on one of the faces of the elementary tetrahedron, and the "$\{A_c, V\}$" term is represented by a line holonomy that is retraced to recover gauge invariance. The classical Hamiltonian constraint discretized on the lattice is therefore only a function of holonomies and of the volume of the manifold. The attractive aspect of this is that both holonomies and the volume of the manifold can be represented by well defined, finite operators in the spin network representation. Therefore producing a well defined, finite Hamiltonian constraint is tantamount to "putting hats on the classical expression", since all the ingredients can be naturally quantized without divergences!

There are a couple of caveats that need to be noted. The "$F_{ab}$" can be constructed from many different kinds of elementary loops; as long as they shrink to a point when the lattice is refined, they all represent the curvature properly. There is therefore a huge ambiguity in how to define the operator. An additional ambiguity is the valence of the holonomy that represents the curvature. Moreover, a crucial element for the Hamiltonian to be well defined is that it act on diffeomorphism invariant states. On such states, the details of how the holonomy that represents the curvature is placed with respect to the spin network are immaterial. This, in turn, ensures that the resulting quantum theory is consistent. If one acts with two Hamiltonian constraints, the two loops added are indistinguishable from each other, and therefore if one acts in the opposite order the final result is the same. The Hamiltonian constraint therefore commutes with itself. Now, the classical Poisson algebra of constraints stated that the Poisson bracket of two Hamiltonians should be proportional to a diffeomorphism.
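Schematically, smeared with lapse functions $N$ and $M$, this classical relation reads (a standard sketch, with conventions and constants omitted)
\[
\{ H(N),\, H(M) \} = \mathcal{C}\big( q^{ab} ( N \partial_b M - M \partial_b N ) \big),
\]
where $\mathcal{C}(\vec{v})$ denotes the diffeomorphism constraint smeared with the vector $v^a$; the appearance of the metric $q^{ab}$ on the right hand side is the "function of the canonical variables" mentioned earlier.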
If one promotes this to a quantum operatorial expression and acts on diffeomorphism invariant states, the right hand side will give zero, since such states are annihilated by diffeomorphisms. Therefore the commutator of two Hamiltonians should vanish as well. Thiemann goes on to show that similar constructions can be carried out for general relativity coupled to fundamental matter fields: Yang-Mills, Higgs, fermions. This achievement is quite remarkable. We are in the presence of the first finite, well defined, anomaly-free, non-trivial theory of quantum gravity ever presented. The theory fulfills the promise of acting as a "natural regulator of matter fields", in the sense that no divergences are present when the theory is coupled to matter. Is quantum gravity finally achieved? The answer is still not known. What has been found is a theory (more precisely, infinitely many theories, due to the ambiguities) that is well defined. Although this is no small feat in this context, we do not know if any of these theories contains the correct physics of gravity. This will only be confirmed or contradicted when a semiclassical approximation is worked out, so we can make contact with more familiar results. Active investigations along these lines are being pursued by Thiemann and collaborators and by Ashtekar and collaborators.

There are some aspects of Thiemann's construction that appear somewhat troubling. The same construction can be worked out in $2+1$ dimensional gravity. If one studies the solutions of the quantum Hamiltonian constraint, one finds many more states than the ones allowed by Witten's theory. However, if one demands that they be normalized with the inner product we discussed, the Witten sector is all that is left. This can be seen as a positive result (after all, we get the correct theory) or as a negative one (the correct theory is only recovered after carefully choosing an inner product). It appears that in $3+1$ dimensions Thiemann's Hamiltonian also admits too many solutions. The fact that the constraint algebra can only be recovered on diffeomorphism invariant states, where it is only Abelian, is also troubling, though again, there is no genuine interest in states that are not diffeomorphism invariant. Other worries were expressed in a paper by Smolin. The general consensus at the moment is that there appear to be worries that the Hamiltonian does not capture the correct physics, but no one can make a theorem out of the worries to prove that Thiemann's Hamiltonian is wrong. The verdict will come when further explorations of the semiclassical approximation are worked out.

The idea of exploring quantum gravity effects in the simplified context of cosmological models has held appeal over the years.
Yet, due to the lack of a theory of quantum gravity, the approach traditionally taken was rather bizarre. People would consider general relativity, then reduce the classical theory to only cosmological metrics (which, in the case of homogeneous cosmologies, reduces the equations to ordinary differential equations, losing the field theoretical nature of general relativity). The resulting theory was then quantized and some interpretations were attempted. The main criticism that was levied against this kind of investigation is that "imposing a symmetry and then quantizing" does not have to agree with "a sector of the quantum theory with a given symmetry". That is, there is no guarantee that what one sees in quantum cosmology will appear at all when one gets a handle on the full theory and studies cosmological situations. With Thiemann's introduction of viable theories for quantum gravity, it therefore became possible to attempt to study quantum cosmology "properly", that is, to study the sector of the full quantum theory of gravity that approximates homogeneous cosmologies. This is what Bojowald set out to do. He finds that in isotropic and homogeneous quantum cosmology, states reduce to "spin networks with only one link". This is understandable, since everything happens "at a single point" in a homogeneous model. The presence of the link is needed to make sense of the operators involving connections (one needs more than a point to have a notion of a connection!). The quantum states are therefore labeled by an integer, which corresponds to the valence of the single link of the spin network. Bojowald constructs a version of Thiemann's Hamiltonian acting on these states. He also finds a well defined version of the volume operator.

One of the long held beliefs in quantum cosmology is that quantum effects will eliminate the big bang singularity. In Bojowald's case this is actually realized in practice. Considering the case of a flat Robertson-Walker metric, he finds that he can construct a finite, well defined expression for the operator representing the inverse scale factor, $1/a$. This is done through identities similar to those that led to equation ([thiemid]). Since the resulting operator is finite, it suggests that the singularity can be avoided. Remarkably, if one analyzes the relationship of this operator with the volume operator (classically one should have $1/a = V^{-1/3}$), such a relationship does hold quantum mechanically when the universe is "large". But when the universe becomes of the size of a few Planck volumes, the relationship is broken: the volume goes to zero but the inverse scale factor remains finite. The avoidance of the singularity can be implemented concretely in this approach through discrete equations of motion that actually never become singular. And the theory can be coupled to various matter sources without introducing singular behaviors, through the use of the inverse scale factor operator so defined to implement the couplings. Bojowald goes on to introduce a notion of time for these cosmologies. Since everything is discrete, his notion of time is discrete too. The evolution equations are recursion relations, and he shows that for large universes they reproduce the results of the usual Wheeler-DeWitt-based quantum cosmology.
The avoidance of the singularity can be implemented concretely in this approach through discrete equations of motion that never become singular. And the theory can be coupled to various matter sources without introducing singular behavior, through the use of the inverse scale factor operators defined to implement the couplings. Bojowald goes on to introduce a notion of time for these cosmologies. Since everything is discrete, his notion of time is discrete too. The evolution equations are recursion relations, and he shows that for large universes they reproduce the results of usual Wheeler-DeWitt-based quantum cosmology. These evolution equations also allow one to evolve non-singularly through the point where one classically expects the singularity. Remarkably, even an argument for inflation being generated by quantum gravity can be found in this context. The fact that the cosmological reduction of Thiemann's Hamiltonian appears to give the correct physics of quantum cosmology is considered by some as an indication that it contains the right physics of gravity. One should be aware, however, that Bojowald's construction implies a limiting procedure. Quantum states peaked on homogeneous cosmologies are really distributions, and one therefore needs to extend the operators defined for other states to them. This extension is non-trivial, and there might be ambiguities in it that allow one to "correct" things in order to get the right physics. Although this is not what Bojowald set out to do, it might have happened inadvertently. Moreover, part of the worries about Thiemann's Hamiltonian have to do with the constraint algebra. In homogeneous cosmologies, since everything takes place "at a point", there is only one Hamiltonian constraint, which is therefore trivially abelian; this agrees with Thiemann's general result. Nevertheless, it is striking that detailed, attractive predictions in the cosmological context can be extracted from the proposed Hamiltonian constraint.

When we discussed Thiemann's Hamiltonian constraint, we mentioned that he started from a given classical theory in the continuum and introduced a lattice to discretize the Hamiltonian constraint. The lattice Hamiltonian is then naturally promoted to a quantum operator. The idea of using lattices to regularize gravity is not new. The novelty is the use of the recently acquired knowledge about well defined operators and states. Lattice approaches, however, are plagued by difficulties to which Thiemann's approach may not be immune. The difficulty has to do with the fact that, in the case of general relativity, lattice regularization breaks the symmetry of the theory under diffeomorphisms (by contrast, in the Yang-Mills case, lattice gauge theory has the advantage of providing a gauge invariant regularization). The theories one obtains on the lattice therefore have a considerably different structure from the theory they attempt to approximate in the continuum. It is the personal impression of the author that, at the time of quantizing the discrete theories, one needs to take their structure seriously. In particular, most discrete approximations that one constructs for general relativity end up being inconsistent (their equations do not admit any solution). This is well known, for instance, in numerical relativity. When one wishes to integrate the Einstein equations on a computer, they are approximated by finite difference equations. Whereas in the continuum theory, if one solves the constraint equations initially, the evolution equations guarantee that the constraints hold at all times, this is not true of the discrete equations. There is therefore no way to satisfy simultaneously the constraint equations (at all times) and the evolution equations. Most people in numerical relativity use "free evolution", that is, they accept the failure to satisfy the constraints at later times as part of the numerical error of the solution.
In a quantization the last argument does not work. If a theory is inconsistent, there is little sense in attempting a quantization. Most attempts at lattice quantum gravity suffer from this problem. For instance, when one discretizes the Hamiltonian constraint, the discrete constraints fail to close an algebra (this is reasonable, since algebras are associated with infinitesimal symmetries and nothing can be infinitesimal in a discrete theory). If the constraints do not close an algebra, one can generate further constraints by taking Poisson brackets. If one is not careful, one ends up with too many constraints and the theory has no solutions. This is the canonical manifestation of the inconsistency. These kinds of problems are very basic and are present even in very simple systems. For instance, if one considers a Newtonian particle and discretizes Newton's equations, it is a well known fact that energy fails to be conserved. In fact, astronomers who wish to follow planetary motion have known this for a long time, and construct special discretizations of Newton's laws that automatically conserve energy and angular momentum. Our goal is to find something similar in the gravitational context, that is, a discretization scheme that automatically preserves the constraints. Lee proposed a way to fix the problems of the Newtonian particle that can be easily translated to the gravitational case. The idea consists in enforcing the constraints through a suitable choice of Lagrange multipliers. In the case of general relativity, one chooses the lapse and shift in such a way that at the next step the constraints are satisfied. This has no counterpart in the continuum theory. One has four constraints to enforce and four quantities to solve for (the equations to be solved are coupled, non-linear algebraic equations). We have recently worked out the canonical theory for such consistent discretizations, applied it to Yang-Mills and BF theory, and presented the prescription for the gravitational case. In this approach, since the constraints are automatically satisfied, most of the conceptually hard problems that arose from attempting to impose the constraints disappear.
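A minimal caricature of this idea in code (ours: a pendulum in place of gravity, with the single "constraint" H = E0 enforced at each step by solving for one multiplier; the real construction instead solves four coupled nonlinear equations for the lapse and shift):

    import numpy as np

    def H(q, p):
        # pendulum energy, playing the role of the constraint to preserve
        return 0.5 * p**2 - np.cos(q)

    def constrained_step(q, p, dt, E0):
        # predictor: plain explicit Euler, which by itself drifts off H = E0
        qe = q + dt * p
        pe = p - dt * np.sin(q)
        # corrector: solve for a multiplier lam so that the corrected point
        # satisfies the constraint exactly (projection along grad H)
        gq, gp = np.sin(qe), pe
        def g(lam):
            return H(qe + lam * gq, pe + lam * gp) - E0
        lam, h = 0.0, 1e-7
        for _ in range(50):  # simple Newton iteration on g(lam) = 0
            dg = (g(lam + h) - g(lam - h)) / (2 * h)
            lam -= g(lam) / dg
        return qe + lam * gq, pe + lam * gp

    q, p = 1.0, 0.0
    E0 = H(q, p)
    for _ in range(10000):
        q, p = constrained_step(q, p, 1e-2, E0)
    print("constraint violation:", abs(H(q, p) - E0))

Plain explicit Euler drifts off the energy shell at every step; solving for the multiplier keeps the constraint satisfied to round-off, which is the discrete analogue of choosing the Lagrange multipliers so that the next step satisfies the constraints.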
Quantum gravity becomes a conceptually clear, yet computationally challenging, problem. There are many attractive features in this approach. Since the initial data fix the lapse and the shift, and quantum mechanically one generically has a superposition of initial data, one automatically has a superposition of discretizations. The goal of "averaging out" over all discretizations is implemented naturally. We have applied these ideas to a cosmological model. Classically, one finds that if one runs the model backwards, no singularity is present unless one fine tunes the initial data. This is natural: generically, the singular point will not fall on the lattice. When one quantizes, however, the fact that the singularity classically occurs only for a set of measure zero of the possible initial data implies that the singularity is not present. We see a remarkable agreement with Bojowald's prediction, although the details and motivation are different. Much more will have to be explored to see if the consistent discretizations are a viable route for quantization. In particular, we have little experience with the complicated non-linear equations that fix the lapse and the shift. What if they quickly generate negative lapses or singularities? It is imperative that experience be gained, particularly in midi-superspace examples where there are field theoretic degrees of freedom. Even these cases are computationally challenging, given the complexity of the algebraic, non-linear, coupled equations that determine the lapse and the shift.

The last 18 years have seen a renaissance of canonical quantum gravity. The field has been brought to a completely new level in terms of mathematical sophistication and of possibilities for discussing physical consequences and applications. It is becoming more evident by the day not only that it is unclear that general relativity has a problem at the time of its quantization, but that the quantization of Einstein's theory actually has a lot to teach us about physics. This physics might be of interest in the context of pure general relativity, or in the context of proposed unified theories of all interactions, like string theory.

I am grateful to A. Pérez, M. Bojowald, J. Lewandowski and T. Thiemann for comments on the manuscript. I also wish to thank the organizers of the Brazilian School for the invitation to participate. This paper was completed at the Erwin Schrödinger Institute in Vienna. My work in this field has for many years been in collaboration with Rodolfo Gambini. This work was supported by grant NSF-PHY0090091, funds from the Horace Hearne Jr. Institute for Theoretical Physics, and the Fulbright Commission in Montevideo.

References:
C. Rovelli, "Notes for a brief history of quantum gravity," arXiv:gr-qc/0006061.
See D. Birmingham, M. Blau, M. Rakowski and G. Thompson, Phys. Rept. 209, 129 (1991), for a review.
C. Rovelli, "Strings, loops and others: a critical survey of the present approaches to quantum gravity," arXiv:gr-qc/9803024.
J. de Lyra et al., Phys. Rev. D 46, 2538 (1992).
E. Witten, Nucl. Phys. B 311, 46 (1988).
S. Deser, J. G. McCarthy and Z. Yang, Phys. Lett. B 222, 61 (1989).
A. Perez, Class. Quant. Grav. (to appear).
A. Ashtekar, J. C. Baez and K. Krasnov, Adv. Theor. Math. Phys. 4, 1 (2000) [arXiv:gr-qc/0005126].
M. Varadarajan, Phys. Rev. D 64, 104003 (2001); 66, 024017 (2002) [arXiv:gr-qc/0104051, gr-qc/0204067]; A. Ashtekar and J. Lewandowski, Class. Quant. Grav. 18, L117 (2001);
A. Ashtekar, S. Fairhurst and J. L. Willis, "Quantum gravity, shadow states, and quantum mechanics," arXiv:gr-qc/0207106.
T. Thiemann, "Complexifier coherent states for quantum general relativity," arXiv:gr-qc/0206037; H. Sahlmann and T. Thiemann, "Towards the QFT on curved spacetime limit of QGR. I: A general scheme," arXiv:gr-qc/0207030; "II: A concrete implementation," arXiv:gr-qc/0207031; H. Sahlmann, T. Thiemann and O. Winkler, Nucl. Phys. B 606, 401 (2001) [arXiv:gr-qc/0102038], and references therein.
R. Loll, Living Rev. Rel. 1, 13 (1998) [arXiv:gr-qc/9805049].
C. Rovelli, Living Rev. Rel. 1, 1 (1998) [arXiv:gr-qc/9710008].
T. Thiemann, arXiv:gr-qc/0110034.
S. Carlip, Rept. Prog. Phys. 64, 885 (2001) [arXiv:gr-qc/0108040].
L. Smolin, "Three Roads to Quantum Gravity," Weidenfeld & Nicolson, London, UK (2000).
A. Ashtekar (notes prepared in collaboration with R. Tate), "Lectures on Non-Perturbative Canonical Gravity," Advanced Series in Astrophysics and Cosmology Vol. 6, World Scientific, Singapore (1991); "New Perspectives in Canonical Quantum Gravity," Bibliopolis, Naples, Italy (1988).
R. Gambini and J. Pullin, "Loops, Knots, Gauge Theories and Quantum Gravity," Cambridge University Press (1996).
J. Baez and J. P. Muniain, "Gauge Fields, Knots and Gravity," World Scientific, Singapore (1994), 465 p., Series on Knots and Everything, Vol. 4.
K. Kuchař, "Time and interpretations in quantum gravity," in Proceedings of the 4th Canadian Conference on General Relativity and Relativistic Astrophysics, University of Winnipeg, 16-18 May 1991, G. Kunstatter, D. E. Vincent and J. G. Williams (eds.), World Scientific, Singapore (1992).
C. Rovelli, Phys. Rev. D 65, 044017 (2002) [arXiv:gr-qc/0110003].
R. Gambini and J. Pullin, Phys. Rev. Lett. 85, 5272 (2000) [arXiv:gr-qc/0008031]; Class. Quant. Grav. 17, 4515 (2000) [arXiv:gr-qc/0008032].
J. F. Barbero, Phys. Rev. D 51, 5507 (1995) [arXiv:gr-qc/9410014].
A. Sen, Phys. Lett. B 119, 89 (1982).
R. Giles, Phys. Rev. D 24, 2160 (1981).
R. Gambini and A. Trias, Nucl. Phys. B 278, 436 (1986).
C. Rovelli and L. Smolin, Phys. Rev. Lett. 61, 1155 (1988); Nucl. Phys. B 331, 80 (1990).
S. Mandelstam, Phys. Rev. D 19, 2391 (1979).
T. Jacobson and L. Smolin, Nucl. Phys. B 299, 295 (1988).
H. Kodama, Phys. Rev. D 42, 2548 (1990).
E. Witten, Commun. Math. Phys. 121, 351 (1989).
B. Brügmann, R. Gambini and J. Pullin, Phys. Rev. Lett. 68, 431 (1992).
A. Ashtekar and J. Lewandowski, J. Math. Phys. 36, 2170 (1995) [arXiv:gr-qc/9411046]; also in "Quantum Gravity and Knots," J. Baez (ed.), Oxford Univ. Press (1993).
C. Rovelli and L. Smolin, Phys. Rev. D 52, 5743 (1995) [arXiv:gr-qc/9505006].
A. Ashtekar, D. Marolf and J. Mourão, "Integration on the space of connections modulo gauge transformations," arXiv:gr-qc/9403042.
C. Rovelli and L. Smolin, Nucl. Phys. B 442, 593 (1995) [Erratum: ibid. B 456, 753 (1995)] [arXiv:gr-qc/9411005].
A. Ashtekar and J. Lewandowski, Class. Quant. Grav. 14, A55 (1997) [arXiv:gr-qc/9602046]; Adv. Theor. Math. Phys. 1, 388 (1998) [arXiv:gr-qc/9711031].
J. D. Bekenstein and V. F. Mukhanov, Phys. Lett. B 360, 7 (1995) [arXiv:gr-qc/9505012].
M. Barreira, M. Carfora and C. Rovelli, Gen. Rel. Grav. 28, 1293 (1996) [arXiv:gr-qc/9603064].
A. Ashtekar and J. Lewandowski, Adv. Theor. Math. Phys. 1, 388 (1998) [arXiv:gr-qc/9711031]; T. Thiemann, J. Math. Phys. 39, 3347 (1998) [arXiv:gr-qc/9606091]; R. Loll, Nucl. Phys. B 460, 143 (1996) [arXiv:gr-qc/9511030]; R. Loll, Phys. Rev. Lett. 75, 3048 (1995) [arXiv:gr-qc/9506014]; J. Lewandowski, Class. Quant. Grav. 14, 71 (1997) [arXiv:gr-qc/9602035].
G. Amelino-Camelia, J. R. Ellis, N. E. Mavromatos, D. V. Nanopoulos and S. Sarkar, Nature 393, 763 (1998) [arXiv:astro-ph/9712103].
R. Gambini and J. Pullin, Phys. Rev. D 59, 124021 (1999) [arXiv:gr-qc/9809038]; J. Alfaro, H. A. Morales-Técotl and L. F. Urrutia, Phys. Rev. D 65, 103509 (2002) [arXiv:hep-th/0108061].
J. Alfaro, H. A. Morales-Técotl and L. F. Urrutia, Phys. Rev. Lett. 84, 2318 (2000) [arXiv:gr-qc/9909079].
R. J. Gleiser and C. N. Kozameh, Phys. Rev. D 64, 083007 (2001) [arXiv:gr-qc/0102093].
J. Magueijo and L. Smolin, arXiv:gr-qc/0207085.
T. Thiemann, Class. Quant. Grav. 15, 839 (1998) [arXiv:gr-qc/9606089].
M. Gaul and C. Rovelli, Class. Quant. Grav. 18, 1593 (2001) [arXiv:gr-qc/0011106].
T. Thiemann, Class. Quant. Grav. 15, 1207 (1998) [arXiv:gr-qc/9705017].
T. Thiemann, Class. Quant. Grav. 15, 1281 (1998) [arXiv:gr-qc/9705019].
T. Thiemann, Class. Quant. Grav. 15, 1249 (1998) [arXiv:gr-qc/9705018].
L. Smolin, "The classical limit and the form of the Hamiltonian constraint in non-perturbative quantum general relativity," arXiv:gr-qc/9609034.
M. Bojowald, Class. Quant. Grav. 17, 1489 (2000) [arXiv:gr-qc/9910103]; Class. Quant. Grav. 17, 1509 (2000) [arXiv:gr-qc/9910104]; Class. Quant. Grav. 18, 1055 (2001) [arXiv:gr-qc/0008052]; M. Bojowald, Phys. Rev. Lett. 86, 5227 (2001) [arXiv:gr-qc/0102069].
M. Bojowald, Class. Quant. Grav. 18, L109 (2001) [arXiv:gr-qc/0105113].
M. Bojowald, arXiv:gr-qc/0206054.
T. D. Lee, in "How Far Are We from the Gauge Forces," Antonino Zichichi (ed.), Plenum Press (1985).
R. Gambini and J. Pullin, arXiv:gr-qc/0206055; C. Di Bartolo, R. Gambini and J. Pullin, arXiv:gr-qc/0205123.
This is a summary of the lectures presented at the Xth Brazilian School on Cosmology and Gravitation. The style of the text is that of a lightly written, descriptive summary of ideas with almost no formulas, with pointers to the literature. We hope this style can encourage new people to take a look into these results. We discuss the variables that Ashtekar introduced 18 years ago, which gave new momentum to this field, the loop representation, spin networks, measures on the space of connections modulo gauge transformations, the Hamiltonian constraint, applications to cosmology and the connection with potentially observable effects in gamma-ray bursts, and conclude with a discussion of consistent discretizations of general relativity on the lattice.
Inertial particles play an important role in various applications in science and engineering. Examples include planet formation, particle aggregation in rotating flows, atmosphere/ocean science (in particular, rain initiation), and chemical engineering. The prominent role that inertial particles play in various scientific and industrial applications has triggered many theoretical investigations; see, for example, the references therein. The starting point for many theoretical investigations concerning inertial particles is Stokes' law, which says that the force exerted by the fluid on the particle is proportional to the difference between the background fluid velocity and the particle velocity. Various extensions of this basic model have been considered in the literature, in particular by Maxey and collaborators. In this work we restrict ourselves to the analysis of particles subject to a force of this form, together with additional molecular bombardment. In principle, the fluid velocity satisfies either the Euler or the Navier-Stokes equations, and it is obtained through direct numerical simulations (DNS). The solution of a Newtonian particle governed by this force law, coupled to either the Euler or Navier-Stokes equations, is analytically difficult to study and computationally expensive. It is hence useful to take the velocity field to be a given random field which mimics some of the features of velocity fields obtained from DNS; one can consider, for example, random fields whose energy spectrum is consistent with that of velocity fields obtained from DNS. The qualitative study of Newtonian particles governed by such force laws, for given (random) velocity fields, is very similar to the theory of turbulent diffusion, which has been developed primarily in the passive tracer case of vanishing inertia. However, relatively little is known about the properties of solutions in the inertial case. It is important, therefore, to consider simplified models for the velocity field which render the particle dynamics amenable to rigorous mathematical analysis and careful numerical investigations.
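For concreteness, the elided display equations can be reconstructed as follows (our notation, standard in this literature; the original constants may differ). The Stokes drag and the resulting model with molecular bombardment read

\[
F = \frac{1}{\tau}\big( v(x,t) - \dot{x} \big),
\qquad
\tau\, \ddot{x} = v(x,t) - \dot{x} + \sigma\, \dot{W},
\]

where v is the background fluid velocity, τ is the particle relaxation time (the Stokes number after non-dimensionalization), σ is the molecular diffusion coefficient, and W is a standard Brownian motion modeling the molecular bombardment.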
A model for the motion of inertial particles in two dimensions was introduced and analyzed in a series of papers. This model consists of motion in a force field comprised of a contribution from Stokes' law together with molecular bombardment; the velocity field is a Gaussian, Markovian, divergence free random field. This gives the equations of motion ([e:motion]). The parameter τ is the Stokes number, which is a non-dimensional measure of the particle inertia (essentially, it is the particle relaxation time). The molecular diffusion coefficient is denoted by σ; the particle is driven by white noise, the velocity field is driven by space-time Gaussian white noise, and the operators entering its Ornstein-Uhlenbeck dynamics are appropriate positive, self-adjoint operators. Gaussian velocity fields of this form have been considered by various authors in the past, in particular in the context of passive advection; see, for example, the references therein. The usefulness of random velocity fields of this form in simulations is that, by choosing the operators appropriately, we can generate random velocity fields with a given energy spectrum, thus creating caricatures of realistic turbulent flows. Generalizations to arbitrary dimension are also possible. Various qualitative properties of the system have been studied, such as existence and uniqueness of solutions and existence of a random attractor. Furthermore, various limits of physical interest have been studied: rapid decorrelation in time (Kraichnan) limits, and diffusive scaling limits (homogenization) for time independent, periodic-in-space velocity fields; in the latter case the model is obtained from the one above by keeping only a steady velocity field. In these three papers it was shown that the rescaled process converges in distribution, in the limit as the scale parameter ε tends to zero, to a Brownian motion with a nonnegative definite effective diffusivity. Various properties of the effective diffusivity, in particular its dependence on the parameters of the problem, were studied by means of formal asymptotics and extensive numerical simulations. The purpose of this paper is to carry out a similar analysis for the model problem where the velocity field is time dependent. When τ = 0, i.e. when the particle inertia is negligible, the equation of motion becomes that of a passive tracer. This equation has been studied extensively in the literature. The homogenization problem for the tracer with velocity fields of this form was studied previously; there it was shown that the rescaled process, with the velocity field a finite dimensional truncation of the Ornstein-Uhlenbeck field, converges in distribution to a Brownian motion with a positive definite covariance matrix, the effective diffusivity. In this paper we show that a similar result holds for the inertial particles problem. That is, we consider the diffusive rescaling ε x(t/ε²) for solutions of the equations of motion, with the velocity field a Galerkin truncation of the infinite dimensional model. We show, first with the aid of formal multiple scale expansions and then rigorously, that the rescaled process converges to a Brownian motion, and we derive a formula for the effective diffusion tensor. We study various properties of the effective diffusivity, as well as some scaling limits of physical interest. Furthermore, we analyze the dependence of the effective diffusivity on the various parameters of the problem through numerical simulations.
In particular, we show that the effective diffusivity depends on the Stokes number in a very complicated, highly nonlinear way; this leads to various interesting, physically motivated questions. The generator of the Markov diffusion process corresponding to the inertial particle dynamics is not a uniformly elliptic operator, as in the case of passive tracers, but a degenerate, hypoelliptic operator. This renders the proof of the homogenization theorem quite involved, since rather sophisticated tools from the spectral theory of hypoelliptic operators are required; see the appendix.

The rest of the paper is organized as follows. In Section [sec:model] we introduce the exact model that we analyze and present some of its properties. In Section [sec:mult_sc] we use the method of multiple scales to derive the homogenized equation. In Section [sec:small_delta] we study simultaneously the problems of homogenization and rapid decorrelation in time. In Section 5 we present the results of numerical simulations. Section 6 is reserved for conclusions. The rigorous homogenization theorem is stated and proved in the appendix.

We study the following model for the motion of an inertial particle, written schematically (the displayed equations are lost in this extraction) as τ ẍ = v(x, t) − ẋ + σ Ẇ, where Ẇ is a standard white noise process, i.e. a mean zero generalized Gaussian process. The velocity field is of the form v(x, t) = V(x) η(t), where, for each fixed t, V(·) is a matrix-valued function, smooth and periodic in x, and η(t) is a stationary generalized Ornstein-Uhlenbeck process driven by a standard Gaussian white noise that is independent of Ẇ; the matrices entering its drift and diffusion (call them A and Λ) are positive definite. The parameter δ controls the correlation time of the Ornstein-Uhlenbeck process. We remark that one can construct a velocity field of this type through a finite dimensional truncation of an infinite dimensional model. Notice, however, that we do not assume that the velocity field is incompressible, as such an assumption is not needed for the analysis. We will, however, restrict ourselves to incompressible velocity fields when studying the problem numerically in Section [sec:numerics], as this case is physically interesting. It is sometimes more convenient for the subsequent analysis to consider the rescaled OU process. Written in terms of it, the equations that govern the motion of inertial particles become ([e:motion_resc]). The velocity field appearing there is a mean zero stationary Gaussian random field with correlation time δ. It is possible to show that in the limit as δ → 0 (the rapid decorrelation in time limit) the solution converges pathwise to the solution driven by a Kraichnan-like velocity field, which is mean zero, Gaussian, and delta correlated in time. We will refer to the former as the colored velocity field model and to the latter as the white velocity field model. In this paper we are mostly concerned with the diffusive limit of solutions to these two models. That is, we consider the rescaled process ε x(t/ε²) and study the limit as ε → 0. A natural question is whether the homogenization (ε → 0) and rapid decorrelation in time (δ → 0) limits commute. We answer this question in the affirmative, using formal asymptotics as well as numerical investigations.

In this section we derive the homogenized equation which describes the motion of inertial particles at large length and time scales, for both the colored and the white noise velocity fields. The derivation of the homogenized equation is based on multiscale/homogenization techniques; we refer to the literature for a recent pedagogical introduction to such methods. We start by rescaling the equations of motion according to the diffusive space-time scaling introduced above.
Using the fact that, for any white noise process, rescaling time by ε² rescales the noise by ε in law, we obtain the rescaled equations ([e:color_mult_resc]). We now introduce two new variables, the particle velocity and the driving OU variable, and write the above equations as a first order system ([e:sde_resc]), driven by standard Brownian motions of the appropriate dimensions. The SDEs clearly exhibit the two time scales: fast for the velocity and the driving process, slow for the position. Our purpose is to homogenize over the fast variables to obtain a closed equation which governs the evolution of the position and is valid for small ε. Let z denote the solution of the system starting at a given point, and let f be a smooth function. Then the observable u^ε satisfies the backward Kolmogorov equation associated with the rescaled process:

\[
\frac{\partial u^{\epsilon}}{\partial t}
= \left( \frac{1}{\epsilon^{2}}\,\mathcal{L}_{0} + \frac{1}{\epsilon}\,\mathcal{L}_{1} \right) u^{\epsilon},
\qquad u^{\epsilon}\big|_{t=0} = f .
\]

Here the operators are defined in ([e:oper_defn]). We use the subscript OU on two of the constituent operators to emphasize that they are the generators of Ornstein-Uhlenbeck processes in the velocity and driving variables, respectively. The operator L0 is the generator of the fast Markov process ([e:fast_proc]). In the appendix we prove that this fast Markov process is geometrically ergodic. Hence, there exists a unique invariant measure with smooth density, which is the solution of the stationary Fokker-Planck equation; the operators appearing there are the formal adjoints of L0 and L1, respectively. The invariant density is a periodic function of the spatial variable and decays rapidly in the remaining variables. In the appendix we also prove that the operator L0 (equipped with the boundary conditions described above) has compact resolvent in the appropriate function space. Consequently, Fredholm theory applies: the null space of the generator is one dimensional and consists of constants (see the appendix). Moreover, the associated Poisson equation has a unique (up to constants) solution if and only if its right hand side averages to zero with respect to the invariant measure. We assume that the average of the velocity with respect to the invariant density vanishes. This is natural because it removes any effective drift, making a purely diffusive scaling natural. A simple identity shows that this centering condition can be rewritten equivalently as a condition on the invariant density alone.

Let us now proceed with the derivation of the homogenized equation. We look for a solution of the backward Kolmogorov equation in the form of a power series in ε, with each term depending on both the slow and the fast variables. We substitute this into ([e:kolm_resc]) and obtain the sequence of equations ([e:e0e1e2]). From the first equation we deduce that the leading term of the expansion is independent of the fast variables. The second equation can be solved using separation of variables; it is easy to show, however, that the part depending only on time does not affect the homogenized equation, and for simplicity we set it equal to zero. We are left with the cell problem, which is posed on the torus in the spatial variable times the whole space in the remaining variables. The centering assumption implies that the right hand side of this equation is centered with respect to the invariant measure of the fast process; hence the equation is well posed (see the appendix). The boundary conditions for this PDE are that the solution χ is periodic in the spatial variable and belongs to the appropriate weighted space, which implies sufficiently fast decay at infinity. We now proceed to the third equation and apply the solvability condition to obtain the backward Kolmogorov equation which governs the dynamics on large scales, with an effective diffusivity expressed in terms of χ and the invariant measure (⊗ below denotes the tensor, or outer, product).
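Spelled out, the formal hierarchy obtained by equating powers of ε reads (a schematic reconstruction in our notation; the precise prefactors are those of the original displays):

\[
\mathcal{O}(\epsilon^{-2}):\ \mathcal{L}_{0} u_{0} = 0, \qquad
\mathcal{O}(\epsilon^{-1}):\ \mathcal{L}_{0} u_{1} = -\mathcal{L}_{1} u_{0}, \qquad
\mathcal{O}(1):\ \mathcal{L}_{0} u_{2} = \frac{\partial u_{0}}{\partial t} - \mathcal{L}_{1} u_{1},
\]

and, with χ the solution of the cell problem −L₀χ = h (h being the centered drift of the slow variable), the solvability condition for the last equation yields an effective diffusivity of Green-Kubo type,

\[
\mathcal{K} = \int h \otimes \chi \; d\mu ,
\]

where μ is the invariant measure of the fast process.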
Notice that only the symmetric part of the effective diffusivity is relevant in the homogenized equation. However, the effective diffusivity itself is non-symmetric in general, and we define its symmetric part accordingly. The homogenized equation is the backward Kolmogorov equation corresponding to a pure Brownian motion. We have the following result: for small ε, the observable of the rescaled process is approximated by the solution of the homogenized equation, with the limiting dynamics given by a Brownian motion with the effective covariance. A theorem justifying the formal approximation leading to this result is proved in the appendix, using the martingale central limit theorem.

In this subsection we show that the effective diffusivity is non-negative; this implies that the homogenized equation is well posed. We can show this by using the Dirichlet form (Theorem 6.12 of the cited reference), which shows that the quadratic form associated with the generator is non-negative for every sufficiently smooth function. Now let χ be the solution of the cell problem and define the scalar field χ_e = χ · e, where e is an arbitrary unit vector. The field χ_e satisfies the Poisson equation obtained by projecting the cell problem along e. Combining the Dirichlet form identity with this equation, we calculate that e · K e ≥ 0, since the diffusion matrix of the fast process is positive definite. Thus the following result holds: the effective diffusivity matrix is positive semi-definite, and the limiting backward Kolmogorov equation is therefore well posed.

It is not entirely straightforward to check whether the centering condition is satisfied or not, as we do not have a formula for the invariant measure of the fast process; we only know that it exists. It is possible, however, to identify some general classes of flows which satisfy it by using symmetry arguments. Consider, for example, the case of a parity invariant flow, i.e. a flow that is odd under the parity transformation of the state variables. It follows that the invariant density inherits the corresponding symmetry, and it is then easy to see that the centering condition is satisfied. Hence, the centering condition is satisfied for velocity fields that are odd functions of the spatial variable. [rem:drift] Even if the centering condition is not satisfied, the large-scale, long-time dynamics of the inertial particle is still governed by an effective Brownian motion, provided that we study the problem in a frame co-moving with the mean flow. Indeed, if we denote by the mean flow the average of the velocity against the invariant measure, then the rescaled process, recentered by the mean motion, converges in distribution to a Brownian motion with covariance matrix (the effective diffusivity) given by the analogous formula with the drift replaced by its fluctuation about the mean.

We can use the same multiscale techniques to study the diffusive scaling for the white velocity field model. After a calculation similar to the colored noise problem, we find that the backward Kolmogorov equation which governs the dynamics on large scales is again a pure diffusion, with the effective diffusivity expressed in terms of the solution of the corresponding cell problem; the relevant operator is defined in the text, and we use angle brackets to denote averaging with respect to the invariant distribution. Hence we have the following result: for small ε, the observable for the white noise model is approximated by the solution of a pure diffusion equation driven by a standard Brownian motion. This result can be justified rigorously by means of the martingale central limit theorem, as is done for the colored noise case in the appendix.
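Schematically, the positivity argument used here and in the next paragraph runs as follows (our paraphrase of the standard computation). For a generator with drift b and nonnegative diffusion matrix Q, stationarity of the invariant measure μ gives the Dirichlet form identity

\[
-\int f \,\mathcal{L} f \, d\mu \;=\; \int \nabla f \cdot Q \,\nabla f \, d\mu \;\geq\; 0 ,
\]

and applying this with f = χ_e, together with the Green-Kubo form of the effective diffusivity, gives e · K e = −∫ χ_e L χ_e dμ ≥ 0 for every unit vector e.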
As in the case of the colored velocity field, the covariance matrix of the limiting Brownian motion is nonnegative definite. Indeed, let e be an arbitrary unit vector, define the projected cell solution as before, and use the Dirichlet form (Theorem 6.12 of the cited reference). Thus we have the following result: the effective diffusivity matrix of the white noise model is positive semi-definite, and the associated backward Kolmogorov equation is well posed. An important observation is that the centering condition for the white noise problem is always satisfied. Indeed, using the structure of the generator together with integrations by parts, one deduces that the average of the drift against the invariant measure vanishes; this suffices for solvability of the cell problem. Hence, the long time, large scale behavior of solutions to the white noise model is always diffusive. This is to be contrasted with the case of the colored velocity field, where an additional condition has to be imposed to ensure diffusive large scale dynamics.

Consider the rescaled equation and denote its solution by x_ε(t). It is clear that if we first take the limit δ → 0 and then the limit ε → 0, then x_ε converges to a Brownian motion with covariance matrix given by the white noise formula, without having to impose any centering condition. A natural question arises as to what happens if we interchange the order in which we take the limits. In this section we show that the two limits commute, under the additional assumption that the centering condition is satisfied. In particular, we have the following result: [r:inter] Let the drift and diffusion matrices of the Ornstein-Uhlenbeck process (A and Λ in our notation) be positive definite matrices that commute, and assume that the centering condition is satisfied. Then, for small δ, the effective diffusivity of the colored noise model admits an asymptotic expansion whose leading term is the effective diffusivity of the white noise model, with corrections vanishing as δ → 0.

The derivation is based on singular perturbation analysis of the cell problem and of the stationary Fokker-Planck equation. We start by writing the generator of the fast process as a leading order part, of size 1/δ, plus lower order terms; the same decomposition applies to its formal adjoint. Note that the leading operator is the generator of a finite dimensional OU process. Hence, it has a one dimensional null space consisting of constants. Furthermore, the process it generates is geometrically ergodic, and its invariant measure is Gaussian. Since A and Λ commute, the density of the unique Gaussian invariant measure (i.e., the solution of the stationary equation) is explicit, up to a normalization constant. Let χ^δ be the solution of the colored cell problem; as before, we consider its projection along an arbitrary unit vector. We now need to calculate the small-δ asymptotics of the cell solution and of the invariant density. We look for a solution of the cell problem in the form of a power series in the small parameter. We substitute this expansion into the equation to obtain the sequence of equations ([e:phie0e1e2]). From the first equation we get that the leading term is independent of the fast OU variable. In order for the second equation to be well posed, it is necessary that its right hand side be orthogonal to the null space of the leading operator, i.e.
that it averages to zero, which is satisfied, since the term to be averaged is linear in the OU variable and the invariant density is a mean zero Gaussian. The second equation can then be solved explicitly, and the solvability condition for the third equation gives, after using the form of the leading term, precisely the cell problem for the white noise velocity field, projected along the chosen direction. Hence, the leading term of the small-δ expansion of the colored cell solution is the solution of the white noise cell problem. We treat the invariant density in the same way: we look for a solution of the stationary Fokker-Planck equation in the form of a power series in the small parameter, substitute this expansion, and equate equal powers to obtain the sequence of equations ([e:rhoe0e1e2]). From the first equation we deduce that, abusing notation, the leading term factorizes with a Gaussian factor in the OU variable. The solvability condition for the second equation is satisfied, and its solution can be computed. The solvability condition for the third equation, after substituting the expressions for the lower order terms, shows that the remaining factor satisfies precisely the stationary equation of the white noise problem; consequently, the leading term of the expansion of the invariant density is the white noise invariant density times the Gaussian factor.

In the previous subsections we expanded the cell solution and the invariant density and identified their leading terms. Now, if we substitute these series expansions into ([e:ef_dif_alpha]), we obtain the expansion of the effective diffusivity; since the leading term does not depend on the OU variable, we can integrate that variable out. Thus we have shown that, for small δ, the effective diffusivity of the colored noise problem is approximately equal to that arising from the white noise problem, up to terms that vanish with δ, provided the centering condition is satisfied. It is straightforward to show that exactly the same result holds even when the centering condition is not satisfied; in this case the asymptotic analysis is done for the recentered equations, and the effective drift vanishes in the limit.

In this section we study the dependence of the effective diffusivity on the various parameters of the problem (Stokes number, molecular diffusivity, etc.) by means of numerical experiments. We study the equations of motion in two dimensions, with the velocity field being the Taylor-Green flow modulated in time by a one dimensional OU process. Note that we consider the original equations rather than the rescaled version. In the white noise model the driving noises are independent Gaussian white noise processes of the appropriate dimensions. Our aim is to study the dependence of the effective diffusivity on the parameters of the problem. The Taylor-Green flow satisfies the parity invariance condition, and consequently the centering condition is satisfied. Furthermore, the symmetry properties of the Taylor-Green flow imply that the two diagonal components of the effective diffusivity are equal, while the off diagonal components vanish. For the rest of the section we denote the common diagonal value simply by K and refer to it as the effective diffusivity. We calculate the effective diffusivity using Monte Carlo simulations, rather than by solving the Poisson (cell) equations; the numerical solution of degenerate Poisson equations of this form is an interesting problem which we leave for future study. We solve the equations of motion numerically for different realizations of the noise and compute the effective diffusivity using its Lagrangian definition,

\[
\mathcal{K} = \lim_{t \to \infty} \frac{\big\langle \left( x(t) - x(0) \right) \otimes \left( x(t) - x(0) \right) \big\rangle}{2 t},
\]

where the angle brackets denote an ensemble average over all driving Brownian motions. In practice, of course, we approximate the ensemble average by a finite number of ensemble members. We solve equations ([e:motion_num_col]) and ([e:motion_num_white]) using the Euler-Maruyama method for the particle variables and the exact solution for the Ornstein-Uhlenbeck process. The Euler scheme for the colored noise problem has strong order of convergence 1, since the noise is additive in this case; in the white noise case this reduces to order 1/2, since the noise is then multiplicative.
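As a concrete illustration, here is a minimal sketch of such a Lagrangian Monte Carlo estimate (entirely our own: the scalar OU modulation of the Taylor-Green cell, the parameter values and the ensemble size are assumptions for illustration, not the paper's exact setup):

    import numpy as np

    rng = np.random.default_rng(0)
    tau, sigma = 0.1, 0.1              # Stokes number, molecular diffusivity (assumed)
    alpha, lam, delta = 1.0, 1.0, 1.0  # OU drift, OU noise, correlation-time parameter

    def taylor_green(x):
        # v = (d psi/dy, -d psi/dx) with stream function psi = sin(x1) sin(x2)
        return np.stack([np.sin(x[:, 0]) * np.cos(x[:, 1]),
                         -np.cos(x[:, 0]) * np.sin(x[:, 1])], axis=1)

    N, T, dt = 1000, 500.0, 1e-2       # particles, time horizon, time step
    x = np.zeros((N, 2))               # positions (fixed, non-random start)
    y = np.zeros((N, 2))               # particle velocities
    eta = np.zeros(N)                  # scalar OU modulation of the flow

    a = np.exp(-alpha * dt / delta)    # exact one-step OU propagator
    s = np.sqrt(lam / alpha * (1.0 - a * a))
    for _ in range(int(T / dt)):
        v = taylor_green(x) * eta[:, None]
        # Euler-Maruyama for tau * dy = (v - y) dt + sigma dW
        x = x + y * dt
        y = y + (v - y) * (dt / tau) \
              + (sigma / tau) * np.sqrt(dt) * rng.standard_normal((N, 2))
        eta = a * eta + s * rng.standard_normal(N)

    # Lagrangian estimate K ~ <|x(T) - x(0)|^2> / (2 d T), with d = 2
    print("K estimate:", np.mean(np.sum(x * x, axis=1)) / (4.0 * T))

For δ → 0 one would instead simulate the white noise model directly; comparing the two estimates for decreasing δ probes numerically whether the two limits commute, as discussed above.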
We use 3000 particles with fixed, non-random initial conditions. The initial velocity of the inertial particles is always taken to be zero. We integrate over 10000 time units with a fixed time step. First, we investigate the dependence of the effective diffusivity on the Stokes number τ for the Taylor-Green flow, for fixed values of the remaining parameters. Our results are presented in Figures [fig:k_tau_sig10_3] and [fig:k_tau_sig10_2]. For comparison, we also plot the diffusion coefficient of the free particle. We observe that for small molecular diffusivity the effective diffusivity is several orders of magnitude greater than the molecular diffusivity, both for the colored and the white noise case. Furthermore, the dependence of K on τ differs between small and large τ, with a crossover occurring at intermediate values. On the other hand, the enhancement of the diffusivity becomes much less pronounced when the molecular diffusivity is not very small, and essentially disappears as it increases; see Figure [fig:k_tau_sig10_2]. This is to be expected, of course.

We now fix the Stokes number and investigate the dependence of K on the molecular diffusivity, for various values of the remaining parameters. Our results are presented in Figures [thal] and [thal1], where for comparison we also plot the diffusion coefficient of the free particle. In Figure [thal] we plot the effective diffusivity of the colored noise problem in the cases of inertial particles (τ > 0) and of passive tracers (τ = 0). In both cases the effective diffusivity is enhanced in comparison with that of the free particle problem; however, the presence of inertia enhances the diffusivity further. This phenomenon has been observed before in the case where the velocity field was again the Taylor-Green field, but with no time dependence. In Figure [thal1] we plot the effective diffusivity of the white noise problem as a function of the molecular diffusivity, for inertial particles (τ > 0) and passive tracers (τ = 0). The enhancement occurs in both cases, but again the presence of inertia enhances the diffusivity further. As expected, for large molecular diffusivity the effective diffusivities of both inertial particles and passive tracers converge to the bare molecular value. [Figure: effective diffusivity for the colored noise problem.] [Figure: effective diffusivity for the white noise problem.]

In this subsection we investigate the dependence of K on the parameters α and λ of the driving OU process, for fixed values of the other parameters. In the limit as either α → ∞ or λ → 0, the OU process converges to zero. It is expected, therefore, that in either of these two limits the solution of the Stokes-law equation converges to the solution of the equation without a velocity field and, consequently, that in these limits the effective diffusivity is simply the molecular diffusion coefficient. This result can be derived using techniques from, e.g., Chapter 9 of the cited monograph.
On the other hand, when either α → 0 or λ → ∞, the OU process dominates the behavior of solutions to the Stokes-law equation and, consequently, the effective diffusivity is controlled by the OU process. The above intuition is supported by the numerical experiments presented in Figure [fig:k_vs_alpha_lambda]. In particular, the effective diffusivity converges to the molecular value when α becomes large or λ becomes small, and becomes unbounded in the opposite limits.

In this subsection we study the effect of δ on the effective diffusivity of the colored noise problem. Our results are plotted in Figure [thala]. The values of α and λ are set equal to 1, while the remaining parameters are kept fixed. We expect that as δ → 0 the colored noise problem should approach the white noise problem. This is indeed what we see in Figure [thala]: for sufficiently small δ, the effective diffusivity for the colored noise problem is almost the same as that for the white noise problem. The rate at which the effective diffusivity for the colored noise problem converges to that for the white noise problem depends on the values of the other parameters. Indeed, as we already saw in Subsection 5.1, for small values of those parameters there is a significant difference between the two diffusivities when δ is not small.

The problem of homogenization for inertial particles moving in a time dependent random velocity field was studied in this paper. It was shown, by means of formal multiscale expansions as well as rigorous mathematical analysis, that the long-time, large-scale behavior of the particles is governed by an effective Brownian motion. The covariance of the limiting Brownian motion can be expressed in terms of the solution of an appropriate Poisson equation. The combined homogenization/rapid-decorrelation-in-time limit for the velocity field was also studied, and it was shown that the two limits commute. Our theoretical findings were augmented by numerical experiments in which the dependence of the effective diffusivity on the various parameters of the problem was investigated; various limits of physical interest were studied as well. The results of our numerical experiments suggest that the effective diffusivity depends on the various parameters of the problem in a very complicated, highly nontrivial way. There are still many questions that remain open. We list some of them.

* Rigorous study of the dependence of the effective diffusivity on the various parameters of the problem. This problem has been studied quite extensively in the context of passive tracers. Apart from being an interesting problem from the point of view of the physics, it also leads to some very interesting issues related to the spectral theory of degenerate, nonsymmetric second order elliptic operators.

* Numerical experiments for more complicated flows. It is expected that the amount of enhancement of the diffusivity will depend sensitively on the detailed properties of the incompressible, time dependent flow.

* Proof of a homogenization theorem for infinite dimensional OU processes, i.e. for the full model.
In this setting, questions such as the dependence of the effective diffusivity on the energy spectrum and on the regularity of the flow can be addressed.

Let x(t) be the solution to the SDE τ ẍ = v(x, t) − ẋ + σ Ẇ (schematically; the displayed equations are lost in this extraction), where W is a standard Brownian motion. Furthermore, the field is given by v(x, t) = V(x) η(t), where, for each fixed t, V(·) is matrix valued, smooth and periodic as a function of x. Also, η is the solution of an Ornstein-Uhlenbeck equation driven by a standard Brownian motion independent of W, with positive definite drift and diffusion matrices A and Λ (in our notation). Our goal is to prove that the rescaled process ε x(t/ε²) converges weakly to a Brownian motion with covariance given by the effective diffusivity. We rewrite the dynamics as the system of first order SDEs ([e:syst]); this is a Markov process on the full state space. Since the coefficients are periodic in x, we may view the fast variables as a Markov process with the position component living on the torus.

[thm:homog] Let z(t) be the Markov process defined through the solution of the first order system, and assume that the process is stationary. Assume that the relevant drift vector field has zero expectation with respect to the invariant measure of the Markov process. Then the rescaled process converges weakly to a Brownian motion with covariance matrix expressed in terms of Φ, the unique (up to additive constants) solution of the associated Poisson equation.

The assumptions on A and Λ are made merely for notational simplicity; it is straightforward to extend the proof presented below to the case where they are not diagonal matrices, provided that they are positive definite. In the case where the centering condition is not satisfied, to leading order the particles move ballistically with an effective velocity, and a central limit theorem of the form of Theorem [thm:homog] provides information on the fluctuations around the mean deterministic motion; see also Remark [rem:drift]. It is not necessary to assume that the process is started in its stationary distribution, as it approaches this distribution exponentially fast. Indeed, as we prove in Proposition [prop:inv_meas] below, the fast process is geometrically ergodic; this implies that, for every function which does not grow too fast at infinity, the expectation along the process converges exponentially fast to the average against the unique invariant measure, with explicit constants. We make the stationarity assumption to avoid some technical difficulties.

As is usually the case with theorems of this form, the proof is based on the central limit theorem for additive functionals of Markov processes: we apply the Itô formula to the solution of the Poisson equation to decompose the rescaled process into a martingale part and a remainder; we then employ the martingale central limit theorem (Ch. 7 of the cited monograph) to prove a central limit theorem for the martingale part, and we show that the remainder becomes negligible in the limit as ε → 0. In order to obtain these two results, we need to show that the fast process is ergodic, that the solution of the Poisson equation exists and is unique in an appropriate class of functions, and that it satisfies certain a priori estimates. In order to prove that the fast process is ergodic in a sufficiently strong sense, we use results from the ergodic theory of hypoelliptic diffusions. In order to obtain the necessary estimates on the solution of the Poisson equation, we use results on the spectral theory of hypoelliptic operators. Our overall approach is similar to one developed previously. For the proof of the homogenization theorem we will need the following three technical results, which we prove in Appendix [sec:estim].

[prop:inv_meas] Let L be the operator defined above, and assume that the standing positivity assumptions hold.
Then the process generated by L is geometrically ergodic.

[prop:estim_inv_meas] Under the same assumptions, let μ be the invariant measure of the process generated by L. Then, for every admissible weight, the density of μ can be represented through a function in the Schwartz space (the space of smooth functions with fast decay at infinity) satisfying the corresponding bounds.

[prop:boundphi] Let h be smooth, with all derivatives growing at most polynomially (for every multi-index and every polynomial weight), and assume further that h averages to zero with respect to the invariant measure μ of the process. Then there exists a solution Φ of the Poisson equation −LΦ = h. Moreover, for every polynomial weight, the function Φ satisfies the corresponding a priori estimate, and Φ is unique (up to an additive constant) in this class.

Proof of Theorem [thm:homog]. We have already shown that the centering assumption on the velocity field is equivalent to the centering of the drift. Moreover, the drift clearly satisfies the smoothness and fast decay assumptions of Proposition [prop:boundphi]. Proposition [prop:boundphi] applies to each component of the Poisson equation, and we conclude that there exists a unique smooth vector valued function Φ which solves the cell problem and whose components satisfy the a priori estimate. We now apply the Itô formula to Φ(z(t)), with Φ solving the Poisson equation, and obtain the decomposition of the rescaled process into a boundary term and a martingale part. The boundary term clearly vanishes in the limit; indeed, the stationarity assumption, together with Propositions [prop:estim_inv_meas] and [prop:boundphi], implies that it converges to zero. Consider now the martingale parts. According to the martingale central limit theorem (Thm. 7.1.4 of the cited monograph), in order to prove convergence of a martingale to a Brownian motion, it is enough to prove convergence of its quadratic variation in L¹ to a deterministic limit, which is then the variance of the limiting Brownian motion. This follows from Propositions [prop:estim_inv_meas] and [prop:boundphi], together with the ergodic theorem for additive functionals of ergodic Markov processes; in particular, the quadratic variation of each martingale converges. Similarly, we combine the above with the expression for the covariance and use the fact that the noise matrices are diagonal to conclude the proof of the theorem.

With a bit of extra work one can also obtain estimates on the rate of convergence to the limiting Brownian motion in the Wasserstein metric, as was done previously for the case of a time independent velocity field. To accomplish this, we need appropriate pathwise estimates on the rescaled particle velocity and the Ornstein-Uhlenbeck process. We also need to introduce an additional Poisson equation and apply the Itô formula to its solution. This second Poisson equation plays the role of a higher order cell problem from the theory of homogenization; see, e.g., the proof of an error estimate using higher order cell problems in the PDE setting. The argument (Thm. 2.1 of the cited work) is essentially a pathwise version of the PDE argument. We leave the details of this quantitative error bound to the interested reader.

In this section we prove that the operator L generates a geometrically ergodic Markov process. This means that there exists a unique invariant measure of the process, which has a smooth density with respect to Lebesgue measure, and that, furthermore, the exponential convergence estimate holds. In addition, we prove some regularity properties of the invariant density, and existence and uniqueness of solutions, together with a priori estimates, for the Poisson equation. The proof of Proposition [prop:inv_meas] follows the lines of earlier work; the proofs of Propositions [prop:estim_inv_meas] and [prop:boundphi] are based on results from the spectral theory of hypoelliptic operators. The proof of Proposition [prop:inv_meas] is based upon three lemmas.
In the first lemma we show that the transition probability has a smooth density with respect to Lebesgue measure. In the second we show that this density is everywhere positive. In the third we show that there exists a Lyapunov function. These three lemmas imply that the fast process is geometrically ergodic (Thm. 2.8 of the cited reference).

We write the system with the notational understanding introduced above; the Jacobian of the drift can be written down explicitly. In order to prove that our system is hypoelliptic, we need to span the full tangent space through the noise vectors and their Lie commutators with the drift. We study two cases. The first is the case σ > 0, in which the noise provides a set of coordinate directions directly; the missing directions are then obtained by taking commutators with the drift, and we obtain a family of vectors that spans the whole space. We briefly remark on the case σ = 0. In this case, since there is no noise in the equations describing the motion of the position and velocity variables, we need to span the space using only the noise vectors of the Ornstein-Uhlenbeck variable and their iterated commutators with the drift. A short computation shows that the only way to fail to obtain the missing vectors is if a certain degeneracy equation holds; if it does not hold, we obtain the rest of the vectors in exactly the same way as in the case σ > 0.

The proof of the positivity lemma is based on a controllability argument. We start by writing the system compactly in the form of a drift plus controlled forcing (schematically, ż = F(z) + G u̇) and consider the associated control problem: for any initial and final states and any time horizon, we can find a smooth control such that the controlled equation is satisfied together with the prescribed endpoint conditions. To see this, consider first the equation for the Ornstein-Uhlenbeck variable separately, since it does not involve any of the other state variables. Choose its trajectory to be a smooth path connecting the prescribed endpoints; since the noise matrix is positive definite, and hence invertible, the corresponding control components are defined by substitution and are as smooth as the path itself, and their initial value can be taken to be zero. The equations for the remaining variables are handled in the same way: choose a smooth path for the position connecting the prescribed endpoints, and define the remaining control components by substitution, using the invertibility of the relevant matrix. Now note that the event that the Brownian motion stays uniformly close to the chosen control occurs with positive probability, since the Wiener measure of any tube around a smooth path is positive. So the Brownian motion controls the corresponding components of equation ([e:zyeta]). Combining these two results, it is possible to deduce the required open set irreducibility.

Using results from the spectral theory of hypoelliptic operators, we can also derive regularity estimates for the invariant density. In addition, we can show that the operator L*, the formal adjoint of L, has compact resolvent, and hence Fredholm theory applies.
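The bracket computation underlying the hypoellipticity lemma above can be checked symbolically in a stripped-down caricature (ours: a two dimensional Langevin-type system with noise acting only on the velocity, not the actual inertial-particle system, but exhibiting the same mechanism by which one commutator with the drift supplies the missing direction):

    import sympy as sp

    x, y = sp.symbols('x y')
    state = sp.Matrix([x, y])

    X0 = sp.Matrix([y, -sp.sin(x) - y])  # drift: dx = y dt, dy = (-sin x - y) dt
    X1 = sp.Matrix([0, 1])               # noise direction: only on the velocity

    def lie_bracket(A, B):
        # [A, B] = (DB) A - (DA) B, Jacobians taken w.r.t. the state
        return B.jacobian(state) * A - A.jacobian(state) * B

    X2 = lie_bracket(X1, X0)
    print(X2.T)                              # [1, -1]: the missing x-direction appears
    print(sp.Matrix.hstack(X1, X2).rank())   # rank 2: Hormander's condition holds

In the σ = 0 case of the text, the same game has to be played starting from the noise in the Ornstein-Uhlenbeck variable alone, which is why an extra non-degeneracy condition appears there.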
Proof of Proposition [prop:estim_inv_meas]. The proof of this result is similar to an earlier one (Thm. 3.1 of the cited work), which in turn follows the lines of the spectral-theoretic approach. Denote the (random) flow generated by the solutions of the SDE, and the semigroup it induces on finite measures. By Lemma [lem:hormand], this semigroup maps every measure into a measure with a smooth density with respect to Lebesgue measure. It can therefore be restricted to a positivity preserving contraction semigroup on L¹, whose generator is the formal adjoint of L. We now define an auxiliary operator by closing a suitably weighted version of this adjoint; the weights are required to make the relevant coefficients strictly positive. We can rewrite the resulting expression in Hörmander's "sum of squares" form. Since the spatial variable lives on the torus, it can be checked that the assumptions of Theorem 5.5 of the cited reference are satisfied. Combining this with Theorem 5.6 of the same reference, we see that there exists an exponent such that, for every weight, there exists a positive constant for which the corresponding subelliptic estimate holds for every function in the Schwartz space. Clearly, the auxiliary operator has compact resolvent. This, together with the subelliptic estimate and Theorem 5.9 of the same reference, implies that the weighted adjoint has compact resolvent. Notice now that the invariant density is the solution of the corresponding homogeneous equation. The compactness of the resolvent implies that there exists a function solving it; the subelliptic estimate, together with a simple approximation argument, implies the required bounds for every weight, and therefore the solution belongs to the Schwartz space. Furthermore, an argument given, for example, in Prop. 3.6 of the cited work shows that it must be positive. After normalization, it follows that the invariant density satisfies the claimed estimate.

We start with the proof of existence. Fix the right hand side, consider the operator defined above, and define an auxiliary weighted function; it is clear that if the weighted problem admits a solution, then it yields a solution of the original Poisson equation. Consider the weighted operator: by the considerations in the proof of Proposition [prop:estim_inv_meas], it has compact resolvent. Furthermore, its kernel is equal to the kernel of the adjoint problem, which in turn, by Lemma [lem:kernel], is equal to the span of a single function. Define the orthogonal complement of this kernel, and define the restriction of the operator to it. Since the operator has compact resolvent, it has a spectral gap, and the restriction is invertible. Furthermore, the weighted right hand side lies in the complement; therefore the restricted problem is solvable, and this leads to a solution of the Poisson equation. Since the solution satisfies a bound similar to the subelliptic estimate, and since the weights can be taken arbitrary, the a priori bound follows as in Proposition [prop:estim_inv_meas]. The uniqueness of the solution, in the class of functions under consideration, follows immediately from Lemma [lem:kernel].

References:
R. Carmona and F. Cerou, "Transport by incompressible random velocity fields: simulations & mathematical conjectures," in Stochastic Partial Differential Equations: Six Perspectives, Math. Surveys Monogr., Vol. 64, pp. 153-181, Amer. Math. Soc., Providence, RI, 1999.
G. C. Papanicolaou, D. W. Stroock and S. R. S. Varadhan, "Martingale approach to some limit theorems," in Papers from the Duke Turbulence Conference (Duke Univ., Durham, N.C., 1976), Paper No. 6, Duke Univ., Durham, N.C., 1977.
G. A. Pavliotis, A. M. Stuart and L. Band, "Monte Carlo studies of effective diffusivities for inertial particles," in Monte Carlo and Quasi-Monte Carlo Methods 2004, pp. 431-441, Springer, Berlin, 2006.
D. W. Stroock and S. R. S. Varadhan, "On the support of diffusion processes with applications to the strong maximum principle," in Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability (Univ. California, Berkeley, Calif., 1970/1971), Vol. III: Probability Theory, pp. 333-359, Univ. of California Press, Berkeley, Calif., 1972.
We study the problem of homogenization for inertial particles moving in a time dependent random velocity field and subject to molecular diffusion. We show that, under appropriate assumptions on the velocity field, the large scale, long time behavior of the inertial particles is governed by an effective diffusion equation for the position variable alone. This is achieved by the use of a formal multiple scales expansion in the scale parameter. The expansion relies on the hypoellipticity of the underlying diffusion. An expression for the diffusivity tensor is found, and various of its properties are studied. The results of the formal multiscale analysis are justified rigorously by the use of the martingale central limit theorem. Our theoretical findings are supported by numerical investigations, where we study the parametric dependence of the effective diffusivity on the various non dimensional parameters of the problem.
research funders increasingly require data management plans prior to applications for funding . similarly , infrastructures and policies are arising regarding the archiving , documentation and sharing of research data . since scientists are increasingly being evaluated and funded according to quantitative measures , e.g. citations , it is relevant to ask whether a citation advantage exists that is related to the activity of sharing data , similar to the debated citation advantage related to open access ( e.g. kurtz et al . 2005 and kurtz et al . 2007 ) . we present here a simple study of astrophysical publications to investigate a possible increased citation impact resulting from linking to data , using the nasa astrophysics data system , henceforth ads ( cf . kurtz et al . 2000 ) . this work is an extension of an initial study concerning data links in papers published in the journal _ apj _ during 2000 - 2010 , presented in an unpublished working paper by dorch ( 2012 ) . the ads , launched by nasa in 1992 , is hosted by the harvard - smithsonian center for astrophysics . ads is an online publication database of millions of astronomy and physics papers , receiving abstracts or tables of contents from hundreds of journal sources . the ads also lists citations for each paper . the ads search engine is tailor - made for searching astronomical abstracts and can be queried for author names , astronomical object names , title words and abstract text , and results can be filtered according to a number of criteria ( cf . eichhorn et al . 2000 ) . for each publication record in ads , a number of links are possible , including data links to online data , e.g. at external data centers . links of this type are abbreviated `` d '' , also known as d links ( cf . accomazzi & eichhorn 2004 and eichhorn et al . 2007 ) . therefore , it is possible to limit ads queries to publications with or without d links . in part of the work presented here , we also invoke a secondary source of publication data , the inspec database from the institution of engineering and technology ( formerly the iee ) , and a source of citation data , the web of science ( wos ) science citation index from thomson reuters . like wos , but unlike ads , inspec is a commercial major indexing database of scientific and technical literature . in this study , we perform two analyses : 1 . we investigate the number of papers and citations for papers with or without d links during the period 2000 - 2014 for _ apj _ , _ a&a _ and _ mnras _ using nasa ads . 2 . we investigate the number of papers and citations for experimental and theoretical papers respectively during 2010 for _ apj _ , _ a&a _ and _ aj _ using inspec and nasa ads . firstly , _ ( a ) _ we limit the study to papers published in major astrophysical journals during the 15-year period 2000 - 2014 , cf . table [ tab1 ] . furthermore , we define the citation advantage of papers that link to data as the ratio of the number of citations per year to papers with links to data to the number of citations per year to papers without such links . publication data and derivatives for _ apj _ are illustrated in fig . [ fig1 ] left and right . secondly , _ ( b ) _ it is relevant to investigate whether we introduce a bias in selecting articles with data links , e.g.
whether experimental papers more often link to data , and whether experimental papers are cited more than theoretical papers . to test this possibility , we apply the feature _ treatment type _ that the inspec database assigns to all indexed papers : * _ theoretical or mathematical _ is assigned when the subject matter is generally of a theoretical or mathematical nature . * _ experimental _ is used for documents describing an experimental method , observation or result , including apparatus for use in experimental work and calculations on experimental results . articles from the three journals _ apj _ , _ a&a _ and _ aj _ are downloaded into the reference handling program endnote in order to extract dois for further processing . the relevant dois are then entered into inspec and the articles are separated into two tiers : classified as either theoretical or experimental work . the few articles classified as both experimental and theoretical are discarded from the analysis . finally , we apply , in this case , wos in order to extract the number of citations , because dois are not searchable in ads . _ apj _ as registered by ads includes letters as well as the supplement series , but the articles published in those categories are not fully included in wos and we discard them from the present analysis . the number of articles with or without data links ( as well as citation data from wos ) is then downloaded directly from ads . [ table [ tab1 ] : data for the four journals _ apj _ , _ a&a _ , _ mnras _ and _ aj _ : journal impact factor 2013 ( jf ) , the average number of papers published per year during 2000 - 2014 , the average fraction of papers with d links , the average fraction of citations resulting from papers with d links , and the average d - link citation advantage during 2000 - 2014 . ] a statistical analysis was performed as appropriate to test for significance in mean citation counts between articles with and without data links , as well as between theoretical and experimental articles . f tests were used to test for equal variance ; two - tailed t - tests were then run for unequal and equal variances as appropriate to test for significant differences between mean total citations per paper . our focus is only on articles published in 2010 ; this ensures time to accumulate a sufficiently large number of citations . the papers with d links received , in total , fewer citations per year on average relative to the papers without d links ( by approximately a factor of two ) . however , there being fewer papers with links to data , it turns out that these papers on average received more citations per paper , i.e. during the examined period the d - link papers in _ apj _ on average receive 28% more citations per paper per year than the papers without d links . since 2009 that fraction is higher , and in the case of _ apj _ more like 50% more citations , cf . fig . [ fig1 ] ( right ) . next , we look at the journals and papers in terms of their experimental or theoretical content . in the case of papers published in _ apj _ , the number of experimental papers is only slightly above the number of theoretical ones . the difference between the mean numbers of citations obtained by the two groups is small as well . the situation is different when considering papers with or without data links . in the case of d - link papers , the number of experimental papers is much larger than the number of theoretical papers , while the latter have the largest mean number of citations .
on the other hand , the number of theoretical non - link papers is above the corresponding number of experimental papers , but still the theoretical articles obtain the most citations . the same pattern is observed for the two other journals , _ a&a _ and _ aj _ : the theoretical papers with data links obtain the highest number of citations . the difference is most pronounced for papers published in _ aj _ , but this conclusion is based on rather few papers in the data . we have examined the statistical confidence level of our conclusions : in the case of _ apj _ and _ a&a _ it is evident , although only at the 5% significance level in the case of _ apj _ ( ) , that papers with d links obtain the largest numbers of citations . in the case of _ aj _ , a value well above indicates that the citation advantage is not statistically well founded . in a similar fashion , a significant advantage for obtaining citations has been observed for theoretical d - link papers compared to experimental d - link papers for all three journals . on the other hand , it can only be proven at the confidence level , partly due to a low number of papers and scatter in the citation data . our simple study indicates a clear tendency for papers with links to data to receive more citations per year on average than papers that do not link to data . however , there are several biases that could be studied further , e.g. whether longer papers , papers with more authors etc . display generically different citation patterns . also of potential importance is whether some subjects that `` naturally '' link to data have a higher citation impact than other fields , e.g. papers based on space missions or telescope data . henneken & accomazzi ( 2011 ) performed an analysis restricting publication data using a set of 50 keywords , looking at cumulative citations to papers after a 10-year period . their report demonstrated a 20% increase in citation count for papers with d links , compared to those without . thus , evidence is mounting that linking to data enabling sharing does indeed benefit those who do so . this evidence thereby also supports initiatives furthering the development of data infrastructure .
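the citation - advantage ratio and the variance - matched t - test described above reduce to a few lines of code ; the per - paper citation counts below are hypothetical placeholders , since the actual ads / wos data are not reproduced here .

```python
import numpy as np
from scipy import stats

# hypothetical per-paper citation counts; the real data come from ads/wos
cites_dlink = np.array([12, 40, 7, 55, 23, 18, 31])    # papers with d links
cites_nolink = np.array([9, 15, 4, 22, 11, 6, 13, 8])  # papers without

years = 5.0  # assumed citation window
advantage = (cites_dlink.mean() / years) / (cites_nolink.mean() / years)
print(f"d-link citation advantage: {advantage:.2f}")

# f test for equal variances, then the matching two-tailed t test,
# mirroring the procedure described above
f = cites_dlink.var(ddof=1) / cites_nolink.var(ddof=1)
df1, df2 = len(cites_dlink) - 1, len(cites_nolink) - 1
p_f = 2 * min(stats.f.cdf(f, df1, df2), stats.f.sf(f, df1, df2))
t, p_t = stats.ttest_ind(cites_dlink, cites_nolink, equal_var=(p_f > 0.05))
print(f"t = {t:.2f}, p = {p_t:.3f}")
```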
we present here evidence for the existence of a citation advantage within astrophysics for papers that link to data . using simple measures based on publication data from the nasa astrophysics data system , we find a citation advantage for papers with links to data , which receive on average significantly more citations per paper than papers without links to data . furthermore , using the inspec and web of science databases , we investigate whether papers of an experimental or theoretical nature display different citation behavior .
the method of the generalized alignment indices ( galis ) was originally introduced in as an efficient chaos detection method . to date , the gali method has been successfully applied to a wide variety of conservative dynamical systems for the discrimination between regular and chaotic motion , as well as for the detection of regular motion on low - dimensional tori . in the present paper we extend and complete the study of the gali method , focusing on its behavior for the special case of periodic orbits and their neighborhood in conservative dynamical systems . the detection of periodic orbits and the determination of their stability are fundamental approaches for the study of nonlinear dynamical systems , since they provide valuable information on the structure of their phase space . in particular , stable periodic orbits are associated with regular motion , since they are surrounded by tori of quasiperiodic motion , while in the vicinity of unstable periodic orbits chaotic motion occurs . the gali method is related to the evolution of several deviation vectors from the studied orbit , and is therefore influenced by the characteristics of the system 's tangent space . the main goal of the paper is to determine the usefulness of the method for probing the local dynamics of periodic orbits with different stability types . we achieve this goal by deriving theoretical predictions for the behavior of the galis for stable and unstable periodic orbits . we also verify numerically the validity of these predictions by studying the evolution of the galis for periodic orbits of several hamiltonian flows and symplectic maps , clarifying also the connections between these two classes of dynamical systems . in addition , we show how the properties of the index can be used to locate stable periodic orbits , and to understand the dynamics in the vicinity of unstable ones . the paper is organized as follows : in the first two introductory sections we recall the definition of the gali , describing also its behavior for regular and chaotic orbits ( sect . [ sec : gali ] ) , and recall the several stability types of periodic orbits in conservative systems ( sect . [ sec : stability ] ) . in sect . [ sec : g_po ] we first study theoretically the behavior of the index for stable and unstable orbits , and then present applications of the gali to particular orbits of hamiltonian flows and symplectic maps . sect . [ sec : near ] is devoted to the dynamics in the neighborhood of periodic orbits , while sect . [ sect : perp_dev_vec ] is dedicated to the relation between the galis of stable periodic orbits for flows and maps . finally , in sect . [ sec : conclusions ] , we summarize our results . let us briefly recall the definition of the galis and their behavior for regular and chaotic motion in conservative dynamical systems .
consider an autonomous hamiltonian system of degrees of freedom ( ) , described by the hamiltonian , where and , are the generalized coordinates and conjugate momenta respectively . an orbit in the -dimensional phase space of this system is defined by a vector , with , , . the time evolution of this orbit is governed by hamilton 's equations of motion , while the time evolution of an initial deviation vector from the solution of eqs . ( [ eq : hameq ] ) obeys the variational equations , where is the jacobian matrix of . let us also consider a discrete - time conservative dynamical system defined by a -dimensional ( ) symplectic map . the evolution of an orbit in the -dimensional space of the map is governed by the difference equation . in this case , the evolution of a deviation vector with respect to a reference orbit is given by the corresponding _ tangent map _ . for hamiltonian flows and maps the generalized alignment index of order ( gali ) is determined through the evolution of initially linearly independent deviation vectors . to avoid overflow problems , the resulting deviation vectors are continually normalized , but their directions are kept intact . then , according to , gali is defined as the volume of the -parallelepiped having as edges the unit deviation vectors , determined through the wedge product of these vectors , with denoting the usual norm . from this definition it is evident that if at least two of the deviation vectors become linearly dependent , the wedge product in eq . ( [ eq : gali ] ) becomes zero and the gali vanishes . in the -dimensional phase space of a hamiltonian flow or a map , regular orbits lie on -dimensional tori , with for hamiltonian flows and for maps . for such orbits , all deviation vectors tend to fall on the -dimensional tangent space of the torus on which the motion lies . thus , if we start with general deviation vectors , these will remain linearly independent on the -dimensional tangent space of the torus , since there is no particular reason for them to become linearly dependent . as a consequence , gali remains practically constant and different from zero for . on the other hand , gali tends to zero for , since some deviation vectors will eventually have to become linearly dependent . in particular , the generic behavior of gali for regular orbits lying on -dimensional tori is given by eq . ( [ eq : gali_order_all ] ) . note that these estimates are valid only when the conditions stated above are exactly satisfied . for example , in the case of 2d maps , where the only possible torus is a 1-dimensional invariant curve , the tangent space is 1-dimensional . thus , the behavior of gali ( which is the only possible index in this case ) is given by the third branch of eq . ( [ eq : gali_order_all ] ) , i.e. gali , since the first two cases of eq . ( [ eq : gali_order_all ] ) are not applicable .
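before proceeding , a minimal numerical sketch of the quantities just defined may help . the code below evolves unit deviation vectors of the 2d standard map with its tangent map , renormalizing at each step , and measures the volume they span via singular values ( the evaluation technique discussed further below ) . the map , the parameter k and the sample orbits are assumptions chosen only for illustration ; they are not taken from the text .

```python
import numpy as np

K = 1.5  # map parameter, an assumption for illustration

def standard_map(x, p):
    """one iteration of the 2d standard map (a stand-in symplectic map)."""
    p_new = (p + (K / (2 * np.pi)) * np.sin(2 * np.pi * x)) % 1.0
    return (x + p_new) % 1.0, p_new

def jacobian(x, p):
    c = K * np.cos(2 * np.pi * x)
    return np.array([[1.0 + c, 1.0],
                     [c,       1.0]])   # det = 1: area preserving

def gali_k(x, p, k=2, n_iter=2000, seed=0):
    """evolve k unit deviation vectors with the tangent map, renormalizing
    at every step, and return the volume they span: the product of the
    singular values of the matrix whose rows are the unit vectors."""
    w = np.linalg.qr(np.random.default_rng(seed).standard_normal((2, k)))[0]
    for _ in range(n_iter):
        w = jacobian(x, p) @ w          # tangent map acts on deviations
        x, p = standard_map(x, p)       # map acts on the orbit itself
        w /= np.linalg.norm(w, axis=0)  # keep directions, unit lengths
    return np.prod(np.linalg.svd(w.T, compute_uv=False))

print(gali_k(0.54, 0.0))  # likely regular orbit: slow power-law decay
print(gali_k(0.01, 0.0))  # likely chaotic orbit: exponential collapse
```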
from eq . ( [ eq : gali_order_all ] ) we deduce that , for the usual case of regular orbits lying on an -dimensional torus , the behavior of gali is given by eq . ( [ eq : gali_order_all_n ] ) . on the other hand , for a chaotic orbit all deviation vectors tend to become linearly _ dependent _ , aligning themselves in the direction defined by the maximum lyapunov characteristic exponent ( mlce ) , and hence , in that case , gali tends to zero _ exponentially _ , following the law \[ \mathrm{gali}_k(t) \propto e^{ -\left [ ( \sigma_1 - \sigma_2 ) + ( \sigma_1 - \sigma_3 ) + \cdots + ( \sigma_1 - \sigma_k ) \right ] t } , \label{eq : gali_chaos} \] where \sigma_1 , \ldots , \sigma_k are the first largest lyapunov characteristic exponents ( lces ) of the orbit . the gali is a generalization of a similar indicator called the smaller alignment index ( sali ) , which has been used successfully for the detection of chaos in several dynamical systems . the generalization consists in the fact that the galis use information from _ more than two _ deviation vectors from the reference orbit , leading to a faster and clearer distinction between regular and chaotic motion compared with the sali . in practice , the sali is equivalent to gali ( see appendix b of for more details ) . for the numerical computation of the galis we consider the matrix having as rows the coordinates of the unit deviation vectors with respect to the usual orthonormal basis of the -dimensional phase space . thus , gali can be evaluated as the square root of the sum of the squares of the determinants of all possible submatrices of . here the sum is performed over all possible combinations of indices out of , denotes the determinant , and the explicit dependence of all quantities on time is omitted for simplicity . equation ( [ eq : norm ] ) is ideal for the theoretical determination of the asymptotic behavior of the galis for chaotic and regular orbits . it has been used in for the derivation of eqs . ( [ eq : gali_order_all ] ) and ( [ eq : gali_chaos ] ) , and will be applied later on in sect . [ sec : theory ] for the determination of the galis ' behavior for periodic orbits . however , from a practical point of view the application of eq . ( [ eq : norm ] ) for the numerical evaluation of gali is not very efficient , as it might require the computation of a large number of determinants . in , a more efficient numerical technique for the computation of gali , based on the singular value decomposition of the matrix , was presented . in particular , it has been shown that gali is equal to the product of the singular values of , where denotes the transpose matrix . now , consider a -periodic orbit ( i.e. an orbit satisfying ) of a hamiltonian flow or of a symplectic map . its linear stability is determined by the eigenvalues of the so - called monodromy matrix , which is obtained from the solution of the variational equations over one period ( see for example sect . 3.3 of , and chapts . 4 , 5 of ) . the monodromy matrix is symplectic , i.e. it satisfies the condition m^{t} j m = j , so its eigenvalues come in pairs ( \lambda , 1/\lambda ) , ordered as |\lambda_1| \geq |\lambda_2| \geq \cdots \geq |\lambda_{2n}| , and the lces of the periodic orbit are given by \sigma_i = \ln |\lambda_i| / t ( eq . ( [ eq : lce_po ] ) ) . in the case of unstable periodic orbits , where at least |\lambda_1| > 1 , the deviation vectors grow proportionally to , since determinants with more than one column proportional to the same direction are identically zero . thus , we conclude that for .
for , is a square matrix with constant determinant , since time appears only through multiplications with the first columns of , and so . summarizing , the time evolution of gali for stable periodic orbits of hamiltonian systems is given by eq . ( [ eq : gali_po_ham ] ) . it is worth mentioning that eq . ( [ eq : gali_po_ham ] ) can be retrieved from eq . ( [ eq : gali_order_all ] ) by assuming motion on an -dimensional torus , i.e. on a 1-dimensional curve , which is the stable periodic orbit . note that for , only the last two branches of eq . ( [ eq : gali_order_all ] ) are meaningful . stable periodic orbits of symplectic maps correspond to stable fixed points of the map , which are located inside islands of stability . any deviation vector from the stable periodic orbit performs a rotation around the fixed point . this , for example , can be easily seen in the case of 2d maps , where the islands in the vicinity of a stable fixed point can be represented , through linearization , by ellipses ( see for instance sect . 3.3.b of ) . thus , any set of initially linearly independent unit deviation vectors will rotate around the fixed point , keeping on average the angles between them constant . this means that the volume of the -parallelepiped having these vectors as edges will remain practically constant , exhibiting some fluctuations , since the rotation angles are constant only on average . so , in the case of stable periodic orbits of maps we have eq . ( [ eq : gali_po_maps ] ) . to verify the validity of the theoretical predictions of eqs . ( [ eq : gali_chaos_upo ] ) and ( [ eq : gali_po_ham ] ) , we now compute the galis for some representative hamiltonian systems with different numbers of degrees of freedom . first we consider the well - known 2dof hénon - heiles model . in our study we keep the value of the hamiltonian fixed at . fig . [ 2d_ham_hh1](a ) shows the pss of the system defined by , . we consider two stable periodic orbits ( whose stability type is according to eq . ( [ eq : sta_type ] ) ) : an orbit of period 5 ( i.e. an orbit intersecting the pss at the 5 points denoted by blue crosses in fig . [ 2d_ham_hh1](a ) ) with initial condition , and an orbit of period 7 ( red squares in fig . [ 2d_ham_hh1](a ) ) with initial condition . the time evolution of gali for these two orbits , for a random choice of initial orthonormal deviation vectors , is shown in figs . [ 2d_ham_hh1](b ) and ( c ) respectively . for both orbits the indices show a power - law decay to zero , in accordance with the theoretical prediction of eq . ( [ eq : gali_po_ham ] ) . [ fig . 2d_ham_hh1 : ( a ) the pss of the system , with the intersection points of stable periodic orbits of period 5 ( blue crosses ) and 7 ( red squares ) and of an unstable orbit of period 5 ( green circles ) , together with the line of initial conditions discussed in sect . [ sec : near ] ; ( b)-(d ) time evolution of gali ( red curves ) , gali ( green curves ) and gali ( blue curves ) for these three orbits , with plotted lines proportional to the power laws in ( b ) , ( c ) and to the exponential laws ( [ eq : gali_chaos_upo ] ) in ( d ) ; ( e ) time evolution of the finite - time mlce estimate , whose limit is the mlce of the unstable periodic orbit ( horizontal dotted line ) . ] in order to check the validity of eq . ( [ eq : gali_chaos_upo ] ) , we consider an unstable periodic orbit ( of type ) of period 5 ( green circles in fig . [ 2d_ham_hh1](a ) ) with initial condition . the theoretically expected value of this orbit 's mlce is estimated from eq . ( [ eq : lce_po ] ) to be , while because the hamiltonian function is an integral of motion .
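eq . ( [ eq : lce_po ] ) used above is simply the statement that the lces of a t - periodic orbit are the logarithms of the moduli of the monodromy eigenvalues divided by the period ; a minimal sketch with purely illustrative numbers :

```python
import numpy as np

# eq. ([eq:lce_po]) in practice: lces of a T-periodic orbit from the
# eigenvalue moduli of its monodromy matrix.  the numbers below are
# illustrative placeholders, not those of the orbit studied above.
# a symplectic spectrum comes in pairs (lam, 1/lam); for a hamiltonian
# flow two eigenvalues are equal to 1.
T = 5.0
monodromy_eigenvalues = np.array([3.2, 1.0, 1.0, 1.0 / 3.2])
lces = np.log(np.abs(monodromy_eigenvalues)) / T
print(lces)   # a positive leading value flags an unstable periodic orbit
```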
in fig . [ 2d_ham_hh1](d ) the time evolution of the corresponding galis is plotted . from these results we conclude that the computed values of the galis are well approximated by eq . ( [ eq : gali_chaos_upo ] ) for and , at least up to . after that time we observe a change in the exponential decay of gali . this happens because the numerically computed orbit deviates from the unstable periodic orbit , due to computational inaccuracies , and enters the surrounding chaotic domain , which is characterized by different lces . this behavior is also evident from the evolution of the finite - time mlce ( fig . [ 2d_ham_hh1](e ) ) , which has as its limit for the mlce of the computed orbit ( for more details on the computation of the mlce the reader is referred to sect . 5 of ) . for an initial time interval , well approximates the mlce of the unstable periodic orbit , but later on , due to the divergence of the computed orbit from the periodic trajectory , it tends to a different value , which is the mlce of the chaotic domain around the unstable periodic orbit . let us now investigate the behavior of the galis for a 3dof hamiltonian system , where different types of unstable periodic orbits can appear . in particular , we consider a system of three harmonic oscillators with nonlinear coupling , described by the hamiltonian . the harmonic frequencies of the oscillators are determined by the parameters , , , and the strengths of the nonlinear couplings by and . this system was introduced as a crude description of the inner parts of distorted 3-dimensional elliptic galaxies . detailed studies of its basic families of periodic orbits were performed in . following these works , we fix , , and , and vary and in order to study periodic orbits of different stability types . in fig . [ spoupo](a ) we plot the time evolution of the galis for a stable ( ) periodic orbit with initial condition for and . the 3dof system has a 6-dimensional phase space , and so 5 different galis , with , are defined . all galis decay to zero following the power - law predictions given by eq . ( [ eq : gali_po_ham ] ) . [ fig . spoupo : time evolution of the galis for ( a ) a stable , ( b ) an unstable , ( c ) an unstable and ( d ) an unstable periodic orbit of the 3dof hamiltonian system ( [ 3dham ] ) ; plotted lines correspond to the power laws ( [ eq : gali_po_ham ] ) in ( a ) and to the exponential laws ( [ eq : gali_chaos_upo ] ) in ( b ) and ( c ) ; ( e ) time evolution of the finite - time estimates of the two largest lces of the unstable periodic orbit , with the theoretically estimated value denoted by a horizontal line . ] let us now study representative cases of all the different types of unstable periodic orbits that can appear in a general 3dof system . in particular , we consider an periodic orbit with initial condition for , ( fig . [ spoupo](b ) ) , an periodic orbit with initial condition for , ( fig . [ spoupo](c ) ) , and a periodic orbit with initial condition for and ( figs . [ spoupo](d ) and ( e ) ) . using eq . ( [ eq : lce_po ] ) we estimated the lces to be , and , for the and the unstable periodic orbits respectively . using these values as good approximations of the actual lces , we see in figs . [ spoupo](b ) and ( c ) that the evolution of the galis is well reproduced by eq . ( [ eq : gali_chaos_upo ] ) . an eigenvalue of the monodromy matrix of the unstable periodic orbit is numerically found to be , while the remaining three ( apart from the two unit ones ) are , and . then , from eq . ( [ eq : lce_po ] ) we estimated the three largest lces of the periodic orbit to be , . the evolution of the galis for this orbit is shown in fig . [ spoupo](d ) . although the periodic orbit is unstable , gali does not decay to zero but remains constant until . this happens because , according to eq . ( [ eq : gali_chaos_upo ] ) , gali , but in this case . however , due to unavoidable inaccuracies in the numerical integration , the computed orbit eventually diverges from the unstable periodic one and enters a chaotic domain characterized by different lces with . this divergence is also evident from the evolution of the quantities , in fig . [ spoupo](e ) , whose limits at are and respectively ( see for more details on the computation of and ) . in particular , we get for , while later on the two quantities attain different values . consequently , for gali starts to decay exponentially to zero . on the other hand , all other galis in fig . [ spoupo](d ) show an exponential decay , even when gali remains constant , since the corresponding exponents in eq . ( [ eq : gali_chaos_upo ] ) do not vanish .
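the finite - time estimates of the largest lces used above ( the quantities plotted in fig . [ spoupo](e ) ) can be computed with the standard renormalization procedure ; a schematic sketch , with the integrator routines left as user - supplied assumptions :

```python
import numpy as np

def finite_time_mlce(orbit_step, tangent_step, state, t_final, dt, seed=0):
    """running estimate of the mlce, sigma_1(t) = (1/t) * sum of the log
    norm growths of a continually renormalized deviation vector.
    orbit_step(state, dt) and tangent_step(state, w, dt) are assumed to
    be supplied by the user's integrator of the equations of motion and
    of the variational equations, respectively."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(len(state))
    w /= np.linalg.norm(w)
    log_sum, t, history = 0.0, 0.0, []
    while t < t_final:
        w = tangent_step(state, w, dt)   # evolve deviation along the orbit
        state = orbit_step(state, dt)    # then advance the orbit itself
        g = np.linalg.norm(w)
        log_sum += np.log(g)
        w /= g                           # renormalize to avoid overflow
        t += dt
        history.append(log_sum / t)      # sigma_1(t), as plotted in (e)
    return history
```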
finally , we turn to a multi - dimensional hamiltonian system representing a 1-dimensional chain of 5 identical particles with nearest - neighbor interactions , given by the fpu-β hamiltonian , where is the displacement of the particle from its equilibrium position and is the corresponding conjugate momentum . in our study , we set and impose fixed boundary conditions on the system , so that we always have . let us consider two particular members of a family of periodic orbits studied in , which have initial conditions of the form , , , . we compute the galis of a stable periodic orbit ( figs . [ figfpu:2](a ) and ( b ) ) with initial condition for , and of an unstable periodic orbit ( fig . [ figfpu:2](c ) ) with initial condition for . from fig . [ figfpu:2 ] we see again that the behavior of the galis is well reproduced by eq . ( [ eq : gali_po_ham ] ) in the case of the stable orbit , and by eq . ( [ eq : gali_chaos_upo ] ) for , , , which are the values obtained from eq . ( [ eq : lce_po ] ) for the unstable orbit . [ fig . figfpu:2 : time evolution of ( a ) gali , gali , gali and ( b ) gali , gali , gali for a stable periodic orbit , and ( c ) gali , gali , gali for an unstable periodic orbit of the 5dof hamiltonian system ( [ fpu_hamiltonian_2 ] ) ; plotted lines correspond to the power laws ( [ eq : gali_po_ham ] ) in ( a ) , ( b ) and to the exponential laws ( [ eq : gali_chaos_upo ] ) in ( c ) . ] according to the theoretical arguments of sect . [ sec : theory ] , the galis of unstable periodic orbits of maps should exhibit the same behavior as in the case of hamiltonian flows , i.e. they should tend exponentially to zero following eq . ( [ eq : gali_chaos_upo ] ) . on the other hand , we have argued that the galis of stable periodic orbits should remain constant , according to eq . ( [ eq : gali_po_maps ] ) , exhibiting a different behavior with respect to hamiltonian systems . to verify these predictions , we now proceed to study some periodic orbits in a 2d and a 4d symplectic map . first we consider the 2d hénon map , where is a real positive constant . the phase space of this map for is plotted in fig . [ 2dhm:1](a ) . we consider two periodic orbits of period 5 ( i.e. orbits returning to their initial point after 5 iterations of the map ) : a stable orbit ( blue stars in fig . [ 2dhm:1](a ) ) with initial condition , and an unstable one ( red crosses in fig . [ 2dhm:1](a ) ) with initial condition . fig . [ 2dhm:1](b ) shows that the gali of the stable periodic orbit oscillates around a constant positive value , in accordance with eq . ( [ eq : gali_po_maps ] ) . we have also verified that the gali of the unstable periodic orbit decays exponentially to zero , following eq . ( [ eq : gali_chaos_upo ] ) with .
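since eq . ( [ 2dhenmap ] ) is not reproduced above , the following sketch assumes the usual area - preserving form of the 2d hénon map with rotation parameter a ; treat the form and the parameter value as assumptions . it provides the map and tangent - map ingredients one would plug into a gali routine like the one sketched earlier .

```python
import numpy as np

a = 1.33  # hypothetical value of the (positive) map parameter

def henon_map(x, p):
    # area-preserving hénon map (assumed form, not verified from the text)
    return (x * np.cos(a) - (p - x**2) * np.sin(a),
            x * np.sin(a) + (p - x**2) * np.cos(a))

def henon_jacobian(x, p):
    # tangent map: jacobian of the assumed map; its determinant is 1
    s, c = np.sin(a), np.cos(a)
    return np.array([[c + 2 * x * s, -s],
                     [s - 2 * x * c,  c]])
```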
[ fig . 2dhm:1 : ( a ) phase space of the 2d hénon map ( [ 2dhenmap ] ) for , with the points of a stable ( blue stars ) and an unstable ( red crosses ) periodic orbit of period 5 ; two particular points of the unstable orbit , discussed in sect . [ sec : near ] , are marked by the letters a and b . ( b ) time evolution of gali for the stable orbit . ] let us now consider the 4d symplectic map \[ \begin{array}{rcl} x_{1 } ' & = & x_{1 } + x_{2}'\\ x_{2 } ' & = & x_{2 } + \frac{k_1}{2\pi}\sin(2\pi x_{1})-\frac{\beta}{2\pi}\sin[2\pi(x_{3}-x_{1})]\\ x_{3 } ' & = & x_{3 } + x_{4}'\\ x_{4 } ' & = & x_{4 } + \frac{k_2}{2\pi}\sin(2\pi x_{3})-\frac{\beta}{2\pi}\sin[2\pi(x_{1}-x_{3})] \end{array } \quad ( \mbox{mod}\ 1 ) , \] which consists of two coupled standard maps , with real parameters , and . in fig . [ 4dsm:1 ] we plot the evolution of the galis for a stable periodic orbit of period 7 with initial condition for and . as in the case of the 2d map ( [ 2dhenmap ] ) , gali , gali and gali remain constant , oscillating around non - zero values , in accordance with eq . ( [ eq : gali_po_maps ] ) . [ fig . 4dsm:1 : time evolution of gali ( red curve ) , gali ( green curve ) and gali ( blue curve ) for a stable periodic orbit of period 7 of the 4d map ( [ 4dmap ] ) . ] we now turn our attention to the dynamics in the _ vicinity _ of periodic orbits , studying initially the neighborhood of stable periodic orbits in hamiltonian systems . as a first example we consider the 2dof hénon - heiles system ( [ 2dhh ] ) , and in particular the stable periodic orbit of period 5 studied in sect . [ sect:2dof ] . in fig . [ 2d_ham_hh1](b ) we have seen that gali , gali and gali , in accordance with eq . ( [ eq : gali_po_ham ] ) . we expect that small perturbations of this trajectory will lead to regular motion on 2-dimensional tori surrounding the periodic orbit . for this kind of motion , eq . ( [ eq : gali_order_all_n ] ) predicts gali constant , gali and gali . thus , only for gali is a different evolution expected between the periodic orbit and its neighborhood . this is indeed the case , as we see in fig . [ nearspo](a ) , where the time evolution of gali is plotted for the stable periodic orbit ( red curves ) and two nearby orbits whose initial conditions result from a ( green curves ) and ( blue curves ) perturbation . the gali of the neighboring orbits initially follows a gali evolution similar to that of the periodic orbit , but later on stabilizes to a non - zero value , as eq . ( [ eq : gali_order_all_n ] ) predicts . from fig . [ nearspo](a ) we see that the closer the orbit is to the periodic trajectory , the longer the initial phase of gali lasts , and the smaller is the final non - zero value to which the index tends .
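the sensitivity just described is what the more global scans of the following paragraphs exploit ; schematically , such a scan is just a loop over initial conditions . the routine gali_at and the classification thresholds below are hypothetical placeholders , not the values used in the text .

```python
import numpy as np

def gali_chart(gali_at, y_values, t_final=500.0):
    """schematic version of the scans that follow: tabulate gali along a
    line of initial conditions on the pss.  gali_at(y, t_final) is a
    hypothetical routine returning the gali value of the orbit started
    at the pss point with coordinate y, after time t_final."""
    values = np.array([gali_at(y, t_final) for y in y_values])
    # thresholds are illustrative only; the text quotes its own cutoffs
    labels = np.where(values > 1e-4, "regular",
                      np.where(values < 1e-8, "chaotic / near unstable po",
                               "sticky"))
    return values, labels
```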
[ fig . nearspo : ( a ) time evolution of gali , gali and gali for three orbits of the hénon - heiles system ( [ 2dhh ] ) : the stable periodic orbit of period 5 studied in sect . [ sect:2dof ] ( red curves ) and two nearby orbits whose initial conditions result from ( green curves ) and ( blue curves ) perturbations of the periodic orbit ; the curves of gali and gali overlap each other . ( b ) the gali values at for orbits with initial conditions on the line of the pss of fig . [ 2d_ham_hh1](a ) , as a function of the coordinate of the initial condition . ( c ) regions of different gali values on the plane of the hénon - heiles system ( [ 2dhh ] ) ; each point corresponds to an orbit in the neighborhood of a family of periodic orbits ( white curve ) and is colored according to the value computed at ; the black filled circle denotes the stable periodic orbit of fig . [ 2d_ham_hh1](b ) . ] let us now perform a more global study of the dynamics of the hénon - heiles system . first , we consider orbits whose initial conditions lie on the line of the pss of fig . [ 2d_ham_hh1](a ) . in particular , we use 7000 equally spaced initial conditions on this line and compute their gali values , using for each of them the same set of initial ( random and orthonormal ) deviation vectors . in fig . [ nearspo](b ) we plot the gali values at as a function of . the regions where gali has large values ( ) correspond to regular motion on 2-dimensional tori . regions where gali has very small values ( ) correspond to chaotic or unstable periodic orbits , while domains with intermediate values ( ) correspond to sticky , chaotic orbits . we also distinguish narrow regions where gali decreases abruptly to values . these correspond to domains of regular motion around the main stable periodic orbits of the system , as e.g.
in the vicinity of which corresponds to the stable periodic orbit in the center of the main island of stability in the pss of fig .[ 2d_ham_hh1](a ) .this behavior appears because gali at stable periodic orbits decays following a power law and reaches values smaller than the ones obtained for the neighboring regular orbits , where gali tends to constant non - zero values , as we have seen in fig .[ nearspo](a ) .this information can be directly used to identify the location of stable periodic orbits . in fig .[ nearspo](c ) we show a color plot of the parametric space of the hnon - heiles system ( [ 2dhh ] ) .each point corresponds to an initial condition and is colored according to its value computed at .chaotic orbits are characterized by very small gali values and are located in the purple colored domains .the deep orange colored `` strip '' corresponds to the vicinity of a family of stable periodic orbits ( this family is denoted by a white curve ) for which gali attains smaller ( but not too small ) values with respect to the surrounding light orange colored region , where regular motion on 2-dimensional tori takes place .we note that , as increases , the periodic orbit changes its stability and becomes unstable for .the point , , denoted by a black filled circle in fig .[ nearspo](c ) , corresponds to the stable periodic orbit of fig .[ 2d_ham_hh1](b ) .the galis of chaotic orbits in the vicinity of unstable periodic orbits can exhibit a remarkable oscillatory behavior .such an example is shown in fig . [ neighupo ] for a chaotic orbit of the 2d map ( [ 2dhenmap ] ) with initial condition ( point denoted by ` 0 ' in fig .[ neighupo ] ( b ) ) , which is located very close to the unstable periodic orbit of period 5 discussed in sect .[ sect:2dhm ] ( point a in fig .[ 2dhm:1](a ) and fig .[ neighupo ] ( b ) ) . in fig .[ neighupo](a ) we see that the gali of this orbit decreases exponentially , reaching very small values ( gali ) , since the two initially orthonormal deviation vectors tend to align ( fig .[ neighupo](b ) ) due to the chaotic nature of the orbit .the evolution of these vectors is strongly influenced by the stable and unstable manifolds of the nearby unstable periodic orbit .in particular , as the chaotic orbit moves away from point a along a direction parallel to the unstable manifold ( green curve in fig . [neighupo](b ) ) , both deviation vectors are stretched in this direction , and shrunk in the direction of the stable manifold ( blue curve in fig .[ neighupo](b ) ) .so , after a few hundreds of iterations , while the orbit remains in the proximity of point a ( note the tiny intervals in both axes of fig .[ neighupo](b ) ) , the evolved unit deviation vectors become almost identical , and consequently gali decreases significantly .nevertheless , the angle between the two vectors does not vanish , and starts to grow again when the orbit approaches point b of fig .[ neighupo](c ) , which is the next consequent of the unstable periodic orbit ( see also fig . [ 2dhm:1](a ) ) .the chaotic orbit approaches point b moving parallel to the stable manifold of point b ( blue curve in fig .[ neighupo](c ) ) . now the deviation vectors start to shrink along this manifold , while they expand along the direction of the unstable manifold of point b ( green curve in fig .[ neighupo](c ) ) .this leads to a significant increase of the angle between the two unit vectors , as we see in fig .[ neighupo](c ) , and consequently to an increase of the gali values ( fig .[ neighupo](a ) ) . 
[ fig . neighupo : ( a ) time evolution of gali for a chaotic orbit of the 2d map ( [ 2dhenmap ] ) with initial condition close to the unstable periodic orbit discussed in sect . [ sect:2dhm ] ; the blue curve shows the coordinate of the orbit in arbitrary units . ( b ) , ( c ) consequents of this orbit and of two unit deviation vectors from it in the neighborhood of points a and b of the unstable periodic orbit of fig . [ 2dhm:1](a ) , together with the stable and unstable manifolds of these points ; the points of the chaotic orbit are labeled according to their iteration number . ] this oscillatory behavior is repeated as the chaotic orbit visits all consequents of the unstable periodic orbit , and is clearly seen in fig . [ neighupo](a ) , where the coordinate of the chaotic orbit is plotted in arbitrary units ( blue curve ) together with the gali values . the horizontal segments of this curve correspond to the time intervals that the orbit spends close to the fixed points of the unstable periodic orbit .
during the first part of these intervals the chaotic orbit approaches a fixed point , the two deviation vectors become different and gali increases , while afterwards , as the chaotic orbit moves away from the fixed point , the two deviation vectors tend to align and gali decreases . gali reaches its lowest values during the short transition intervals between the neighborhoods of two successive points of the unstable periodic orbit , which correspond to the short connecting segments between the plateaus of the blue curve in fig . [ neighupo](a ) . these oscillations of gali can last for quite long time intervals , but eventually the chaotic orbit will escape from the strong influence of the homoclinic tangle of the unstable periodic orbit and gali will rapidly tend to zero . it is worth mentioning that abrupt changes in the values of the sali ( which practically is gali ) by many orders of magnitude were also reported in for chaotic orbits of planetary systems . up to now , we have described these oscillations of gali in detail for the 2d map ( [ 2dhenmap ] ) , because there they can be easily explained and the deviation vectors themselves can be visualized in the 2-dimensional phase space of the map . interestingly , this remarkable behavior occurs in higher - dimensional systems as well . in fig . [ ham_neighupo ] we show two such examples . in particular , we consider a chaotic orbit of the 2dof hamiltonian system ( [ 2dhh ] ) , whose initial condition is located close to an unstable periodic orbit of period 7 with initial condition ( fig . [ ham_neighupo](a ) ) , and an orbit of the 3dof system ( [ 3dham ] ) whose initial condition is near the periodic orbit presented in sect . [ sect:3dof ] ( fig . [ ham_neighupo](b ) ) . in both panels of fig . [ ham_neighupo ] we observe an oscillatory behavior of gali similar to the one shown in fig . [ neighupo](a ) . we also point out that in both cases all other galis show similar oscillatory behaviors . [ fig . ham_neighupo : time evolution of gali for chaotic orbits of ( a ) the 2dof hénon - heiles system ( [ 2dhh ] ) and ( b ) the 3dof hamiltonian system ( [ 3dham ] ) ; blue curves show in arbitrary units the coordinate of the studied orbits on ( a ) the pss , of system ( [ 2dhh ] ) , and ( b ) the pss , of system ( [ 3dham ] ) . ] in sect . [ sec : stability ] we discussed the dynamical equivalence between hamiltonian systems and maps , as the latter can be interpreted as appropriate psss of the former . we have also seen that the galis behave differently for flows and maps . in particular , as was shown in sect . [ sec : g_po ] , they remain constant for stable periodic orbits of maps ( see eq . ( [ eq : gali_po_maps ] ) ) and decrease to zero for flows , according to eq .
( [ eq : gali_po_ham ] ) . the fact that maps can be considered as psss of flows , however , is the key to understanding this difference . computing the restriction of the galis on the pss of a hamiltonian system , or more generally on spaces perpendicular to the flow , should therefore lead to behaviors of the indices similar to the ones obtained for maps . this approach has actually already been successfully applied to other chaos indicators related to the evolution of deviation vectors , by considering only the components of these vectors which are orthogonal to the flow . using deviation vectors orthogonal to the flow , we indeed obtain the same gali behavior for stable periodic orbits of flows and maps : the galis of these vectors remain constant , as we see from figs . [ perpen:1](a ) and ( b ) , where they are plotted for the stable periodic orbits of figs . [ 2d_ham_hh1](b ) and [ spoupo](a ) respectively . these behaviors differ , however , from the ones shown in figs . [ 2d_ham_hh1](b ) and [ spoupo](a ) , where the galis of the usual deviation vectors were computed . we note that when vectors orthogonal to the flow are used , the gali of a hamiltonian system is by definition equal to zero , because the projected vectors are linearly dependent on a -dimensional space . for this reason gali and gali are not displayed in figs . [ perpen:1](a ) and ( b ) respectively . [ fig . perpen:1 : time evolution of ( a ) the galis for the stable periodic orbit of the 2dof system ( [ 2dhh ] ) presented in fig . [ 2d_ham_hh1](b ) , and ( b ) the galis for the stable periodic orbit of the 3dof system ( [ 3dham ] ) presented in fig . [ spoupo](a ) , when the components of the deviation vectors orthogonal to the flow are used . ] in this paper , we have explored in more detail the properties of the gali method by using it to study the local dynamics of periodic solutions of conservative dynamical systems . to this end , we have : a ) theoretically predicted and numerically verified the behavior of the method for periodic orbits , b ) summarized the expected behaviors of the indices , and c ) clarified the connection between the behavior of the galis for dynamical systems of continuous ( hamiltonian flows ) and discrete ( symplectic maps ) time . more specifically , we showed that for stable periodic orbits the galis tend to zero following particular power laws for hamiltonian flows ( eq . ( [ eq : gali_po_ham ] ) ) , while they fluctuate around non - zero values for symplectic maps ( eq . ( [ eq : gali_po_maps ] ) ) . in addition , the galis of unstable periodic orbits tend exponentially to zero , both for flows and maps ( eq .
( [ eq : gali_chaos ] ) ) . finally , we examined the usefulness of the indices in helping us better understand the dynamics in the vicinity of periodic solutions of such systems . we explained how the fact that the galis attain larger values _ near _ stable periodic orbits than _ on _ the periodic orbits themselves can be used to identify the location of these orbits . we also observed a remarkable oscillatory behavior of the galis associated with the dynamics close to unstable periodic orbits , and explained it in terms of the stable and unstable manifolds of the periodic orbit , showing how the influence of these manifolds can lead to variations of the gali values by many orders of magnitude . the authors thank t. bountis for many valuable suggestions and comments on the content of the manuscript . s. would like to thank a. celletti and a. ponno for useful discussions , and t. m. the max planck institute for the physics of complex systems in dresden , germany , for its hospitality during his visit in may - june 2009 , where a significant part of this work was performed . this work was partly supported by the european research project `` complex matter '' , funded by the gsrt of the ministry of education of greece under the era - network complexity program . a. was also supported by the pai 2007 - 2011 `` nosy - nonlinear systems , stochastic processes and statistical mechanics '' ( fd9024cu1341 ) contract of ulb .
fermi , e. , pasta , j. & ulam , s. [ 1955 ] `` studies of nonlinear problems '' , _ los alamos document la-1940 _ . see also : `` nonlinear wave motion '' [ 1974 ] , _ amer . math . soc . , providence _ , * 15 * , lectures in appl . math . , ed . newell , a. c.
fouchard , m. , lega , e. , froeschlé , ch . & froeschlé , c. [ 2002 ] `` on the relationship between the fast lyapunov indicator and periodic orbits for continuous flows '' , _ celest . mech . dyn . astron . _ * 83 * , 205 - 222 .
gerlach , e. , eggl , s. & skokos , ch . [ 2011 ] `` efficient integration of the variational equations of multi - dimensional hamiltonian systems : application to the fermi - pasta - ulam lattice '' , _ int . j. bif . chaos _ ( in press , eprint arxiv:1104.3127 ) .
hadjidemetriou , j. [ 2006 ] `` periodic orbits in gravitational systems '' , in _ chaotic worlds : from order to disorder in gravitational n - body dynamical systems , proceedings of the advanced study institute _ , b. a. steves , a. j. maciejewski and m. hendry ( eds . ) , pp . 43 - 79 .
manos , t. , skokos , ch . & bountis , t. [ 2008a ] `` application of the generalized alignment index ( gali ) method to the dynamics of multi - dimensional symplectic maps '' , in _ chaos , complexity and transport : theory and applications . proceedings of the cct07 _ , c. chandre , x. leoncini and g. zaslavsky ( eds . ) , pp . 356 - 364 .
manos , t. , skokos , ch . , athanassoula , e. & bountis , t. [ 2008b ] `` studying the global dynamics of conservative dynamical systems using the sali chaos detection method '' , _ nonlin . phenom . complex syst . _ * 11 * , 171 - 176 .
manos , t. , skokos , ch . & bountis , t. [ 2009 ] `` global dynamics of coupled standard maps '' , in _ chaos in astronomy _ , g. contopoulos and p. a. patsis ( eds . ) , astrophysics and space science proceedings , pp . 367 - 371 .
manos , t. & ruffo , s. [ 2010 ] `` scaling with system size of the lyapunov exponents for the hamiltonian mean field model '' , _ transport theory and statistical physics _ ( in press , eprint arxiv:1006.5341 ) .
manos , t. & athanassoula , e. [ 2011 ] `` regular and chaotic orbits in barred galaxies - i.
as originally formulated , the generalized alignment index ( gali ) method of chaos detection has so far been applied to distinguish quasiperiodic from chaotic motion in conservative nonlinear dynamical systems . in this paper we extend its realm of applicability by using it to investigate the local dynamics of _ periodic _ orbits . we show theoretically and verify numerically that for _ stable _ periodic orbits the galis tend to zero following particular power laws for hamiltonian flows , while they fluctuate around non - zero values for symplectic maps . by comparison , the galis of _ unstable _ periodic orbits tend exponentially to zero , both for flows and maps . we also use the galis to investigate the dynamics in the neighborhood of periodic orbits , and show that for chaotic solutions influenced by the homoclinic tangle of unstable periodic orbits , the galis can exhibit a remarkable oscillatory behavior during which their amplitudes change by many orders of magnitude . finally , we use the gali method to elucidate further the connection between the dynamics of hamiltonian flows and symplectic maps . in particular , we show that , using for the computation of the galis the components of deviation vectors orthogonal to the direction of motion , the indices of stable periodic orbits behave for flows as they do for maps .
the simple random walker is one of the most important conceptual tools in statistical physics . it lies behind our understanding of diffusive motion in thermal equilibrium , and almost every statistical estimate makes use of its properties based on the central limit theorem . it is also useful in the biological context , because it explains many behavioral aspects of micro - organisms swimming in a viscous liquid . recent experiments report another biological application of random walks , performed by repair proteins along one - dimensional dna sequences . the experimental results show the following : first , the net displacements are distributed symmetrically around the starting point . second , the mean - squared displacement is found to increase linearly with time . these are exactly the characteristics of the simple random walk . such a diffusive process is found to be efficient both in energy and time : the proteins slide along dna without requiring atp because the process is driven by thermal energy , and the group of repair proteins in a cell would check the entire genome sequence within 3 minutes even if this one - dimensional diffusion were the only scanning mechanism . there are also interesting variations of the simple random walk , one of which is the persistent random walk . the persistent random walker has a ` memory ' or ` momentum ' in the sense that it takes a step in the same direction as the previous one with some fixed probability , and in the opposite direction with the complementary probability . such dynamics introduces correlation in the walker 's displacements . however , when we talk about its ` memory ' , it should be understood in a rather loose sense , because this model can still be described by a second - order markovian process , which is essentially memoryless . the precise way to introduce persistence depends on the detailed mechanism that we are to describe . in this work , we consider a slightly different type of persistent random walker , which has an external state , its position , together with an internal state that prescribes its direction . the walker receives as an input a binary string composed of 1 's and 0 's . the former bit acts on the external state , moving the walker by one discrete step in the prescribed direction . the latter , on the other hand , acts only on the internal state , flipping the direction . the point is that no displacement occurs in the latter case , in contrast to the conventional persistent random walk . therefore , in terms of the displacement , there are three possibilities , i.e. , -1 , 0 , and + 1 , at each time step . such persistence as considered in this work , due to the separation of internal and external variables , is actually possible in some transport phenomena , for example if the walker has a ratchet which forces the motion in a particular direction but is controllable by inputs from outside . if the input string is random so that it contains 1 with probability $p$ and 0 with probability $q = 1-p$ , our model can be analyzed by solving a second - order markovian process with the generating - function method . in addition , it is also possible to obtain the generating function for the returning probability in a closed form at $p = q = 1/2$ . this work is organized as follows : in the next section , we present analytic results for the movement of this random walker by using the generating - function method . how it returns to the starting point will be discussed in sec . [ sec : return ] . after comparing the analytic results with numerical ones , we conclude this work .
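before turning to the analytic treatment , a minimal monte carlo sketch of the dynamics just described may be helpful ( this is illustrative code , not the simulation used in the paper ; the function name and parameters are our own choices ) :

```python
import numpy as np

def walk(p, t_max, rng):
    """simulate the zero-one-only walker: a 1-bit (probability p) moves
    the walker one step along its current direction, while a 0-bit
    (probability q = 1 - p) only flips the direction, giving no
    displacement."""
    x, direction = 0, +1            # start at the origin, pointing "+"
    positions = [x]
    for bit in rng.random(t_max) < p:
        if bit:                      # input 1: external state changes
            x += direction
        else:                        # input 0: internal state flips
            direction = -direction
        positions.append(x)
    return np.array(positions)

rng = np.random.default_rng(0)
sample_path = walk(p=0.5, t_max=100, rng=rng)
```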
suppose that the walker wanders along the one - dimensional line from $-\infty$ to $+\infty$ . its position is represented by an integer $x$ . we assume that the walker starts from the origin , i.e. , $x = 0$ , at time $t = 0$ . at every time step , the walker reads a bit from the input string , each bit being 1 with probability $p$ and 0 with probability $q$ . the probability for the walker to occupy position $x$ at time $t$ is denoted as $p_x^{\pm}(t)$ , where the superscript means the direction , $+$ or $-$ , of the walker . our initial condition is such that the walker is located at the origin with the positive direction , as expressed by $p_x^+(0) = \delta_{x,0}$ and $p_x^-(0) = 0$ . at time $t+1$ , we have \[ p_x^+(t+1) = p\, p_{x-1}^+(t) + q\, p_x^-(t) , \label{eq : p+} \] where the first term corresponds to the case with an input bit 1 , which is equivalent to shifting the walker by one lattice spacing . the second term corresponds to the other case with an input bit 0 , which amounts to reverting the direction . by the same logic , we have another recursion relation : \[ p_x^-(t+1) = p\, p_{x+1}^-(t) + q\, p_x^+(t) . \label{eq : p-} \] let us define \[ q^{\pm} ( x , t ) \equiv \sum_{j} p_j^{\pm} ( t )\, x^j , \label{eq : q+-} \] with the initial condition reexpressed as $q^+ ( x , 0 ) = 1$ and $q^- ( x , 0 ) = 0$ . in terms of eq . ( [ eq : q+- ] ) , we write eqs . ( [ eq : p+ ] ) and ( [ eq : p- ] ) as \[ \begin{pmatrix} q^+ ( x , t+1 ) \\ q^- ( x , t+1 ) \end{pmatrix} = \begin{pmatrix} p x & q \\ q & p x^{-1} \end{pmatrix} \begin{pmatrix} q^+ ( x , t ) \\ q^- ( x , t ) \end{pmatrix} . \label{eq : q2} \] the eigenvalues of the matrix are $\lambda_1$ and $\lambda_2$ with $\lambda_{1,2} = [\, p ( x + x^{-1} ) \mp \sqrt{d}\, ] / 2$ and $d = p^2 ( x + x^{-1} )^2 - 4 ( p^2 - q^2 )$ . a general expression for the generating function of the total occupation probability , $q ( x , t ) \equiv q^+ ( x , t ) + q^- ( x , t )$ , is obtained by diagonalizing the matrix in eq . ( [ eq : q2 ] ) . considering the initial condition at $t = 0$ , we find that \[ q ( x , t ) = \frac{1}{2} \left\{ \left[ 1 - \frac{2q + p ( x - x^{-1} )}{\sqrt{d}} \right] \lambda_1^t + \left[ 1 + \frac{2q + p ( x - x^{-1} )}{\sqrt{d}} \right] \lambda_2^t \right\} . \label{eq : q+t} \] the mean and variance of the position are obtained as \[ \langle x_t \rangle = \frac{p\, [ 1 - ( p - q )^t ]}{2q} \label{eq : mean} \] and \[ \sigma_t^2 = \frac{ 4 p q^2 t + 4 p^2 q ( t - 1 ) - 2 p^2 [ ( p - q ) - ( p - q )^t ] - p^2 [ 1 - ( p - q )^t ]^2 }{4q^2} . \label{eq : variance} \] in the limit of large $t$ , we can approximate $( p - q )^t \approx 0$ for $0 < p < 1$ . if $p = q = 1/2$ , in particular , the mean position is obtained as $\langle x_t \rangle = 1/2$ , and the variance is $\sigma_t^2 = t - 3/4$ at arbitrary $t \ge 1$ .

fig . [ fig : moments ] : mean and variance of the position as time varies . the symbols are averages over random samples , while the dotted lines represent the analytic predictions in eqs . ( [ eq : mean ] ) and ( [ eq : variance ] ) . error bars are shown but not larger than the symbol size .

figure [ fig : moments ] depicts numerical results from our monte carlo calculation over many random samples , compared with eqs . ( [ eq : mean ] ) and ( [ eq : variance ] ) , respectively . the calculated mean and variance fully coincide with the analytic predictions .
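the closed forms for the moments are easy to check against simulation . the sketch below ( illustrative code with arbitrary sample size and seed , not the paper 's program ) vectorizes the walk over many samples and compares the empirical moments with eq . ( [ eq : mean ] ) and , in the symmetric case , with $\sigma_t^2 = t - 3/4$ :

```python
import numpy as np

def check_moments(p=0.3, t_max=100, n_samples=200_000, seed=1):
    rng = np.random.default_rng(seed)
    q, r = 1.0 - p, 2.0 * p - 1.0                 # r = p - q
    bits = rng.random((n_samples, t_max)) < p
    # the direction before step t is "+" iff an even number of 0-bits
    # has been read so far (the walker starts pointing "+")
    zeros_before = np.cumsum(~bits, axis=1) - (~bits)
    direction = np.where(zeros_before % 2 == 0, 1, -1)
    x = np.cumsum(np.where(bits, direction, 0), axis=1)
    t = np.arange(1, t_max + 1)
    mean_theory = p * (1.0 - r**t) / (2.0 * q)
    print(np.abs(x.mean(axis=0) - mean_theory).max())    # -> small
    if p == 0.5:
        print(np.abs(x.var(axis=0) - (t - 0.75)).max())  # variance t - 3/4
```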
let us now consider the probability of return to the origin at time $t$ and denote it as $u_t$ , with the convention $u_0 = 1$ . if $t$ is odd , i.e. , if $t = 2m+1$ with a non - negative integer $m$ , $u_t$ is then given as \[ u_{2m+1} = q \sum_{k=0}^m \binom{m}{k} \binom{m}{k}\, p^{2m-2k} q^{2k} = q\, p^{2 m}\, { } _ 2f_1 ( -m , -m ; 1 ; q^2 / p^2 ) , \label{eq : utpq} \] where ${ } _ 2f_1$ is a hypergeometric function . the reason behind eq . ( [ eq : utpq ] ) is roughly explained as follows : the summand can be interpreted as pairing two strings of length $m$ , for each of which there are $k$ bits of 0 and the other $m - k$ bits of 1 . the probability $q$ in front of the right - hand side of eq . ( [ eq : utpq ] ) means that for every such pair , one finds a proper place to insert a 0 so as to bring the walker back to the origin . if we define $b_t$ as the probability to occupy the origin at time $t$ for a walker that has just been shifted one site by its first bit , we can establish the following relation by conditioning on the first bit : \[ u_{t+1} = q\, u_t + p\, b_t . \label{eq : ut+1} \] the probability $b_t$ turns out to be closely related to the narayana number , which describes the number of possibilities to have $m$ pairs of correctly matched parentheses with $k$ distinct nestings . if $t = 2m+1$ , in particular , we find an explicit expression for $b_t$ as \[ b_{2m+1} = p \sum_{k=1}^m \binom{m}{k} \binom{m}{k-1}\, p^{2m-2k} q^{2k} = m\, p^{2m-1} q^2\, { } _ 2f_1 ( 1-m , -m ; 2 ; q^2 / p^2 ) . \label{eq : bt} \] plugging eqs . ( [ eq : utpq ] ) and ( [ eq : bt ] ) into eq . ( [ eq : ut+1 ] ) , we have an expression for $u_t$ when $t$ is even , too . equation ( [ eq : utpq ] ) simplifies for $p = q = 1/2$ due to the following identity in combinatorics : \[ \sum_{k=0}^m \binom{m}{k}^2 = \binom{2m}{m} . \] we restrict ourselves to this specific case of $p = q = 1/2$ henceforth . then , the return probability found above obeys the recursion relation of eq . ( [ eq : return ] ) . note that $u_1 = q = 1/2$ , because one should get a 0-bit to stay at the origin . together with $u_0 = 1$ , one can find $u_t$ recursively by using eq . ( [ eq : return ] ) . however , it is more useful to introduce the generating function for $u_t$ as $u ( x ) \equiv \sum_{t \ge 0} u_t x^t$ ; eq . ( [ eq : return ] ) then reduces to an ordinary differential equation , which is solved to yield the closed form of eq . ( [ eq : ux ] ) , from which one can extract $u_t$ at arbitrary $t$ .

fig . [ fig : return ] : probability $u_t$ to return to the origin , and probability $f_t$ for the first return at time $t$ . we have fixed the parameter at $p = 1/2$ to compare the numerical results over random samples ( symbols ) with the analytic ones ( dotted ) from eqs . ( [ eq : ux ] ) and ( [ eq : fx ] ) . error bars are shown but smaller than the symbol size .

we may furthermore define $f_t$ as the probability of the first return to the origin at time $t$ , setting $f_0 = 0$ . it is a standard exercise to decompose $u_t$ according to the first return time . denoting the generating function for $f_t$ as $f ( x ) \equiv \sum_{t \ge 0} f_t x^t$ , we may rewrite eq . ( [ eq : dec ] ) as $u ( x ) = 1 + f ( x )\, u ( x )$ . we thus obtain \[ f ( x ) = 1 - \frac{1}{u ( x )} , \label{eq : fx} \] from which one can read off $f_t$ at any $t$ . note that $f ( x )$ approaches one as $x \to 1$ , which means that this walk is recurrent . note also that eq . ( [ eq : fx ] ) is an odd function , because the first return after leaving the origin is impossible when $t$ is even . we have checked the expressions for $u_t$ and $f_t$ with a monte carlo calculation and drawn the results in fig . [ fig : return ] .
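as a concrete check of the odd - time formula , the following sketch ( again illustrative , with an arbitrary sample size ) estimates $u_t$ by direct simulation at $p = 1/2$ and compares it with eq . ( [ eq : utpq ] ) :

```python
import numpy as np
from math import comb

def check_return_probability(p=0.5, t_max=15, n_samples=500_000, seed=2):
    rng = np.random.default_rng(seed)
    q = 1.0 - p
    bits = rng.random((n_samples, t_max)) < p
    zeros_before = np.cumsum(~bits, axis=1) - (~bits)
    direction = np.where(zeros_before % 2 == 0, 1, -1)
    x = np.cumsum(np.where(bits, direction, 0), axis=1)
    u_mc = (x == 0).mean(axis=0)                 # u_t for t = 1, ..., t_max
    for m in range((t_max - 1) // 2 + 1):        # odd times t = 2m + 1
        u_exact = q * sum(comb(m, k)**2 * p**(2*m - 2*k) * q**(2*k)
                          for k in range(m + 1))
        print(2*m + 1, u_exact, u_mc[2*m])       # column 0 holds t = 1
    return u_mc
```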
there is another useful expression for $f_t$ , written as \[ f_t = \frac{ [\, 1 - ( -1 )^t\, ]\, \gamma\!\left ( \frac{t}{2} \right ) }{ 4\sqrt{\pi}\, \gamma\!\left ( \frac{t}{2} + \frac{3}{2} \right ) } , \label{eq : ft} \] whereby we can obtain an important quantity , $n_t$ , defined as the total number of returns to the origin by time $t$ , summed over all input strings . we decompose $n_t$ according to the first return time : if the first return occurs at time $s \le t$ , every trajectory hitting the origin at $s$ contributes one return there , and may contain more returning events during the remaining $t - s$ time steps , which can be expressed as a convolution by the definition of $n_t$ . to sum up , $n_t$ and $f_t$ are related by eq . ( [ eq : conv ] ) . let us consider the corresponding generating function , where the first term can be evaluated by using the general expression in eq . ( [ eq : ft ] ) and the second term can be expressed by convolution as in eq . ( [ eq : conv ] ) . after some algebra , we arrive at a closed form which yields \[ n ( x ) = x + 3x^2 + 8x^3 + 19x^4 + 44x^5 + \ldots . \] for example , we find three returning events by time $t = 2$ , because the trajectory generated by the input 00 visits the origin twice and another trajectory from 01 does it once , while the other two are kept away from the origin . the singular point of $n ( x )$ at $x = 1/2$ is of particular interest , because $n_t / 2^t$ corresponds to the average number of returns among all paths by time $t$ . rewriting $n ( x )$ in a form involving the derivative $u' ( x )$ , where the prime denotes differentiation , we find that the singularity is dominated by the first term on the right - hand side . as a result , if $t \gg 1$ , the average number of returns scales as $\sqrt{t}$ , which is similar to the case of the simple random walk . in summary , we have considered a variant of the persistent random walk and calculated its statistical properties by using the generating - function method . in the limit of $t \gg 1$ , the distribution of the position approaches the gaussian function with the mean and variance obtained above . we have also obtained the generating functions for return events at $p = q = 1/2$ in closed forms . the resulting analytic predictions are fully confirmed by numerical simulation . before concluding this work , let us briefly mention how to deal with the periodic boundary condition . we may assume that there are $l$ sites , i.e. , $x = 0 , 1 , \ldots , l-1$ , with the periodic boundary condition such that $x + l \equiv x$ . the generating - function method is made compatible with the boundary condition if we introduce $x_n = e^{2\pi i n / l}$ with $n = 0 , 1 , \ldots , l-1$ . then , $q ( x_n , t )$ corresponds to the discrete fourier transform of the occupation probability $p_x ( t )$ . the inverse transform therefore gives us \[ p_x ( t ) = \frac{1}{l} \sum_{n=0}^{l-1} x_n^{-x}\, q ( x_n , t ) . \label{eq : ift} \] we have already obtained the expression in eq . ( [ eq : q+t ] ) , so we need only to use $x_n$ instead of $x$ and then plug the resulting $q ( x_n , t )$ into eq . ( [ eq : ift ] ) . as $t \to \infty$ , only the lowest mode with $n = 0$ survives , which corresponds to $x_0 = 1$ . since the conservation of total probability automatically implies $q ( 1 , t ) = 1$ , we immediately see from eq . ( [ eq : ift ] ) that $p_x ( t ) \to 1/l$ , that is , a uniform distribution as indicated by our intuition .
the investigation of random walks is central to a variety of stochastic processes in physics , chemistry , and biology . to describe a transport phenomenon , we study a variant of the one - dimensional persistent random walk , which we call a zero - one - only process . it makes a step in the same direction as the previous step with probability $p$ , and stops to change the direction with probability $q = 1-p$ . by using the generating - function method , we calculate its characteristic quantities such as the statistical moments and the probability of the first return .
in a group formation game , each player can be a member of one and only one group , and individual payoffs depend , directly or indirectly , on group structure . many difficult and pressing economic problems fall into this category , including rent - seeking , resource management , contract bidding , volunteer organization , problem solving , and political lobbying . depending on the application , the collection of individuals may be called a group , club , or coalition . however , apart from this basic semantic difference , all of these problems share the same basic characteristics : they all create incentives , such as economies of scale , risk - sharing , skill aggregation , and social capital accumulation , that make working within a group ( coalition or club ) more attractive than working alone . in this paper , i combine two recent areas of interest , dynamic coalition formation games and social network constraints , to explore how groups form when individuals move dynamically and face social , spatial , and institutional constraints on group membership . traditionally , group formation has been modeled statically , that is , players make their group membership decisions simultaneously ; see , for instance , hart and kurz ( 1983 ) , nitzen ( 1991 ) , yi and shin ( 1997 ) , konishi and weber ( 1997 ) , and heintzelman _ et al _ ( 2006 ) . these models are advantageous because of their analytical simplicity and clarity . however , as i will show in section [ sec : static - coalition - formation ] , static group formation games often have multiple equilibria , suggesting that there are potentially large gains to clarifying the process by which groups form . the presence of multiple equilibria raises a new , more complicated set of questions . which of these equilibria can we realistically expect to reach given a dynamic group formation process ? will the equilibrium outcome reached be efficient ? what characteristics of the problem affect that outcome ? these questions can not be addressed using a static model , and thus researchers have increasingly turned towards dynamic models of the coalition formation process . recent work spans many different subfields , including industrial organization , political economy , rent - seeking , and local public good provision , and includes ( among many others ) bloch ( 1996 ) , yi ( 2000 ) , arnold and schwalbe ( 2002 ) , konishi and ray ( 2003 ) , macho - stadler _ et al _ ( 2004 ) , arnold and wooders ( 2005 ) , and page and wooders ( 2007 ) . however , these dynamic group formation models largely still assume that players are unencumbered by social , spatial , or institutional barriers to group membership . that is , the players interact freely with all other individuals in the game and can join any of the groups in the game without regard to the composition of the group . this is a reasonable assumption in some contexts ; however , in many other cases , individuals face substantial barriers in making their group membership decisions . the nature of these barriers will differ depending on the context of the specific problem being considered . barriers to group membership may be social ( eg : an individual can only join a group that contains someone he knows ) or spatial ( eg : an individual can only join a group with close neighbors ) .
some of these barriers are explicit ( eg : a requirement that a current member `` vouch '' for the applicant ) but others are implicit ( eg : a social norm against attending a party composed only of strangers ) .the barriers may either limit actions ( an individual is unable to join a particular group ) or information ( an individual does not know about the group ) .however , by modeling these constraints explicitly , we can look beyond the more superficial of these differences and ask a whole new set of questions . how do characteristics of the underlying constraint affect the eventual group structure ?are individuals better or worse off when they are constrained more heavily ?how do constraints of different types affect social welfare ? in this paper ,i model the constraints faced by individuals via a network of connections a player can only join a group if she is connected to a current member .this method allows me to use machinery from the burgeoning networks literature , which explores how social , spatial and institutional networks affect individual behavior .this literature encompasses a wide range of subfields that ( as noted in jackson ( 2005 ) ) have only recently started to interact .one branch of the literature has developed tools used to identify communities within existing social networks ( see , for instance , girvan and newman ( 2002 ) , newman and girvan ( 2004 ) , and copic _ et al _ ( 2007 ) ) ; another branch examines how limiting interactions between individuals can affect strategic behavior ( see , for instance , galeotti _ et al _ ( 2007 ) and charness and jackson ( 2006 ) ) ; and a third branch explores the dynamics of how social networks form ( see jackson ( 2005 ) for a survey of this work ) . by combining elements of these two emerging literatures ,i am able to illustrate the importance of both dynamics and network constraints in the group formation process . as a baseline for comparison ,i start with a static game in which individuals are completely unconstrained in their choice of groups and show that this game has multiple equilibria .i then allow individuals to move sequentially , and solve explicitly for the set of nash equilibria of this game .i show that the dynamics act as an equilibrium refinement . 
however , the equilibrium reached in the dynamic game is highly suboptimal : the negative externality imposed by entering individuals drives groups to be much too large , relative to the social optimum . i then compare the grouping behavior of the unconstrained individuals to the behavior of individuals constrained by an exogenous network of connections . the network limits a player 's action set to those groups containing individuals she is connected to . i show that the network constraint mitigates the tendency for groups to get too large . the efficiency of the outcome depends on the topological characteristics of the network constraint : social welfare is higher when the network is sparse and highly ordered . this result has the surprising implication that informational , institutional , and geographic barriers to group membership may actually improve social welfare by restricting groups from becoming too large . finally , i consider optimal institutional design and show that the optimal membership rule also depends on network topology : when a network is dense or random , the exclusive membership rule ( which allows a group to reject members who do not improve the group 's welfare ) is always optimal . however , when the network is sparse or highly ordered , the exclusive membership rule can lead to highly suboptimal results . the structure of the paper is as follows . in section [ sec : basic - model ] , i introduce a static coalition formation model , in which individuals choose their group membership simultaneously . i characterize the set of nash equilibria of that game , and show that only one is optimal . in section [ sec : seq unconstrained ] , i transform the static model by allowing individuals to make their group membership decisions sequentially over time . this defines a dynamic game similar to that of arnold and wooders ( 2005 ) . i characterize the set of nash equilibria of this game , and show that a single , highly suboptimal equilibrium survives . in section [ sec : seq network constraint ] , i introduce the network constraint . i first characterize the set of nash equilibria of the constrained static game . i then move to the dynamic game and show how network topology affects social welfare . in section [ sec : opt inst design ] , i consider optimal institutional design and show that the optimal membership rule depends on the topology of the network constraint . in section [ sec : extensions and conclusions ] , i conclude and discuss extensions to the model . before considering the behavior of individuals who face a constraint , i will first consider a game in which individuals are unconstrained . this game is actually a special case of the constrained game ( ie : one in which all individuals are connected ) and thus provides a good baseline for comparison between this game and the existing unconstrained literature . consider a group formation game with $n$ homogeneous individuals . an individual can be a member of one and only one group ; thus , the group structure at any time is a partition of the players into groups .
note that the number of groups is determined endogenously , and thus may vary from one period to the next . although all of the games defined in this paper could be played using a generalized payoff function , in the following i will assume that the players have identical payoff functions that depend only on the size of the player 's own group . thus , individual payoffs are given by $u_i = f ( s_i )$ , where $f$ is the common payoff function and $s_i$ is the size of player $i$ 's group . i also assume that $f$ is single - peaked , with its maximum attained at the group size $s^*$ . the assumption that payoffs depend only on own group size obviously does not allow for externalities between groups . nor does it allow players to have preferences over group composition . however , this is an appropriately simple starting point for dynamic analysis ; to the extent that inter - coalition externalities muddy behavior , they are best left to future extensions . the single - peakedness assumption is useful because individual and social preferences are aligned : the individuals all want to be in groups of size $s^*$ , and social welfare is highest when this occurs . i will show that the equilibrium reached is suboptimal , _ despite this alignment_. the assumption that payoffs are single - peaked also covers nearly all cases that we might encounter ; generalizing further would add considerable complication without yielding much useful insight . however , extensions to more general payoff forms are obviously important , and are of interest for future studies . section [ sec : extensions and conclusions ] includes a discussion of these generalizations . define $\bar g$ to be the smallest group size $g$ such that $f ( g + 1 ) < f ( 2 )$ . that is , $\bar g$ is the largest group that will form before an individual forms a new group of size 2 . if no such $g$ exists , then a new group will never form , and for convenience , i will define $\bar g = n$ in these cases . figure [ illustrating gbar ] illustrates an example of $\bar g$ .

fig . [ illustrating gbar ] : an illustration of $\bar g$ , the smallest $g$ such that $f ( g + 1 ) < f ( 2 )$ .

note that since individuals in this game are homogeneous , the exact arrangement of the players in the groups is not as important as the sizes of the groups . thus , i will often find it convenient to refer to the vector of group sizes resulting from a particular partition of the individuals into coalitions , rather than referring to the partition itself . to that end , define the _ group size vector _ of a partition to be the vector listing the sizes of its groups . note that the mapping from partitions to group size vectors is many - to - one , and thus the mapping from equilibrium partitions to equilibrium group size vectors will be as well .
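the threshold $\bar g$ is simple to compute for any payoff function . the sketch below is a hypothetical illustration : the paper does not spell out its logistic payoff , so the function $f$ here is an assumed single - peaked form chosen to reproduce the running example 's values $s^* = 10$ and $\bar g = 17$ :

```python
import numpy as np

def g_bar(f, n):
    """smallest g with f(g + 1) < f(2): the largest group that forms
    before an individual prefers starting a fresh pair; n if none."""
    for g in range(2, n):
        if f(g + 1) < f(2):
            return g
    return n

# assumed single-peaked payoff (peak at s = 10, slightly asymmetric
# so that f(18) < f(2) <= f(17), giving g_bar = 17 as in the example)
def f(s):
    return np.exp(-(s - 10.0)**2 / (50.0 if s <= 10 else 40.0))

print(g_bar(f, 100))   # -> 17
```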
ultimately , the process of group formation is a dynamic one : individuals join , leave , and form new groups over time . however , the assumption that moves are made simultaneously may be accurate in some instances , and since dynamics add a good deal of analytical complication to the model , it is reasonable to ask whether making the model dynamic adds to our understanding of the problem . to that end , i will first examine a static group formation model . i show that when payoffs are single - peaked in group size , there are often multiple nash equilibria . in section [ sec : seq unconstrained ] , i allow individuals to choose their group membership sequentially , and show that the dynamics of the game refine the set of equilibria , leaving a single equilibrium group size vector . this indicates that adding model dynamics can yield insights beyond those gained from static models . consider a static group formation game with $n$ players and a payoff function $f$ single - peaked in group size . individuals choose their group membership simultaneously . we can think of the individuals as choosing a `` location '' , and all of the individuals who jump to the same location are then members of the same group ; thus , an individual 's behavior strategy consists of a choice of coalition . the pair $( n , f )$ defines the static coalition formation game . a nash equilibrium of this game is a partition of the players into coalitions such that no individual wishes to deviate unilaterally . let the set of nash equilibrium coalition size vectors induced by those equilibrium partitions be denoted by $e$ . in the following , i characterize $e$ . this characterization reveals several interesting aspects of group formation with single - peaked utility , and also establishes the need for equilibrium refinement . lemmas [ lem : at most one small]-[lem :- odd size ] establish several characteristics that an equilibrium of the static game will have : 1 ) the coalitions will mostly be larger than the social optimum ( at most one will be smaller ) , and 2 ) all of the groups larger than the optimum will be approximately the same size . theorem [ thm : static eq ] assembles these conditions into a complete characterization of $e$ . finally , theorem [ thm : number of static eq ] puts a lower bound on the number of equilibria in the set , showing that the static game will often have multiple equilibria . lemma [ lem : at most one small ] states that in equilibrium , most groups will be larger than the social optimum ; at most one group can be too small . [ lem : at most one small ] let $( n , f )$ be a static group formation game with single - peaked $f$ . if players are unconstrained in their choice of group , then there is no equilibrium containing two groups smaller than $s^*$ . that is , in equilibrium at most one group will be smaller than the social optimum . towards a contradiction , suppose there are two groups , 1 and 2 , of sizes $s_1 \le s_2 < s^*$ . $f$ is strictly increasing in that range , so $f ( s_2 + 1 ) > f ( s_1 )$ . but then players in group 1 have an incentive to move to group 2 , so this can not be an equilibrium . lemma [ lem : at most one small ] implies that in characterizing $e$ , we need consider only two cases : either all of the groups are larger than the socially optimal size , or exactly one group is small . the following two lemmas address the sizes of the groups in these two different cases . lemma [ lem : all same size ] shows that in any equilibrium where all groups are larger than the social optimum , the groups must be approximately the same size . lemma [ lem :- odd size ] sets a more restrictive condition in the case where one group is smaller than the social optimum . [ lem : all same size ] let $( n , f )$ be a static group formation game with single - peaked $f$ . if players are unconstrained in their choice of group , then for all groups $j$ and $k$ of sizes at least $s^*$ , $| s_j - s_k | \le 1$ . that is , in equilibrium , all groups larger than the social optimum must be the same size , up to integer constraints . towards a contradiction , suppose there are two such groups , 1 and 2 , with $s_1 > s_2 \ge s^*$ and $s_1 - s_2 \ge 2$ .
$f$ is strictly decreasing in this range , so $f ( s_2 + 1 ) > f ( s_1 )$ ; then players in group 1 have an incentive to move to group 2 , so this can not be an equilibrium . note that this result extends a result in nitzen ( 1991 ) to the case of single - peaked utility . arnold and wooders ( 2005 ) prove a similar result for a sequential game . the following lemma extends that result to the case where one group is smaller than the social optimum . the nash equilibrium requires a slightly stronger restriction on the size of the groups . [ lem :- odd size ] let $( n , f )$ be a static group formation game with single - peaked $f$ . if players are unconstrained in their choice of group , then in every equilibrium in which one group ( call it group 1 ) is smaller than the social optimum , both of the following must be true : 1 . $f ( s_j + 1 ) \le f ( s_1 )$ and $f ( s_1 + 1 ) \le f ( s_j )$ for every large group $j$ ; 2 . all of the groups larger than the optimum are exactly the same size . part 1 : consider group 1 ( the small coalition ) and an arbitrary large group $j$ . note that $s_1 < s^*$ and $s_j > s^*$ . if $f ( s_j + 1 ) > f ( s_1 )$ , then players in group 1 would move to group $j$ . similarly , if $f ( s_1 + 1 ) > f ( s_j )$ , then players in group $j$ would move to group 1 . together , these inequalities imply part 1 . part 2 : consider two arbitrary large groups , $j$ and $k$ , such that $s_j \ge s_k$ . lemma [ lem : all same size ] indicates that $s_j - s_k \le 1$ . towards a contradiction , suppose $s_j = s_k + 1$ , so that $f ( s_j ) = f ( s_k + 1 )$ . by part 1 , $f ( s_k + 1 ) \le f ( s_1 )$ , and since we assumed $s_j = s_k + 1$ , this implies that $f ( s_j ) \le f ( s_1 )$ . but since $f$ is strictly increasing to the left of the optimum , $f ( s_1 + 1 ) > f ( s_1 ) \ge f ( s_j )$ , meaning that players in coalition $j$ would move to coalition 1 . thus , it must be that $s_j = s_k$ exactly . theorem [ thm : static eq ] combines the insights from lemmas [ lem : at most one small]-[lem :- odd size ] to fully characterize $e$ . [ thm : static eq ] let $( n , f )$ be a static group formation game with single - peaked payoff function $f$ . if the individuals are unconstrained in their choice of group , the set of nash equilibria of that game , $e$ , is the union of two sets : 1 . the group size vectors in which every group is larger than the social optimum and any two groups differ in size by at most one ; 2 . the group size vectors with exactly one group smaller than the social optimum , satisfying the conditions of lemma [ lem :- odd size ] . [ exa : static game ] a static group formation game with logistic utility : the implications of lemmas [ lem : at most one small]-[lem :- odd size ] and theorem [ thm : static eq ] can best be illustrated through a specific example . consider a static group formation game with 100 players and a logistic payoff function . this function is single - peaked with maximum at $s^* = 10$ and with $\bar g = 17$ . it is illustrated in figure [ fig : logistic function ] .

fig . [ fig : logistic function ] : individual payoff function for example [ exa : static game ] . note that the players enjoy the highest payoff in a coalition of size 10 .

in broad terms , this game could represent any of a number of different applications . it might , for example , represent a simple rent - seeking game , in which players compete for a single rent . there are many incentives for individuals to pool their efforts and compete as a group : risk - averse individuals may be willing to trade some rent in expectation for a more consistent income stream ; a complicated rent - seeking task may require a range of skills ; and economies of scale may make larger groups more likely to win . however , individuals must weigh these advantages against the disadvantages of higher maintenance costs , free - rider problems , and the division of rents . many of these incentives come down to a decision over different size groups . thus , we might think of this example as one in which individuals have determined that a rent - seeking group of size 10 maximizes their expected utilities by mitigating risk and taking advantage of economies of scale . beyond that ideal size , the losses from managing a larger group , free - riding off of others ' efforts , and division of rents between a larger number of people reduce the expected utility of the individuals in those groups . however , they still prefer that larger group to pursuing the rents alone .
when a group has more than 17 members , the costs of maintaining that group make pursuing rents alone more attractive than staying in the group . lemma [ lem : at most one small ] indicates that in any nash equilibrium of this game , at most one coalition will be smaller than the socially optimal group size , $s^* = 10$ . lemma [ lem : all same size ] indicates that all of the groups larger than the social optimum will be approximately the same size . using these two facts , one can show that there are 5 nash equilibria of this static game : $( 17,17,17,17,16,16 )$ , $( 15,15,14,14,14,14,14 )$ , $( 13,13,13,13,12,12,12,12 )$ , $( 12,11,11,11,11,11,11,11,11 )$ , and $( 10 , \ldots , 10 )$ . note that $( 20,20,20,20,20 )$ is _ not _ an equilibrium , because an individual in a group of size 20 is better off striking out as an individual . this example illustrates several difficulties with the static game . first of all , the set of stable coalition configurations is highly sensitive to the particular parameters used . for example , with 100 individuals , the game illustrated above does not have an equilibrium with a small group . however , if we change the game slightly , so that there are 101 individuals , there will be an `` odd - sized '' equilibrium containing one group smaller than the optimum . secondly , most games will have multiple stable group size configurations . in fact , it is possible to put a lower bound on the number of equilibria for a given game . theorem [ thm : number of static eq ] does just that . [ thm : number of static eq ] let $( n , f )$ be a static group formation game . then the number of equilibrium group size vectors is at least $\lfloor n / s^* \rfloor - \lceil n / \bar g \rceil + 1$ . i will set the lower bound by enumerating the equilibria in which all groups are larger than the social optimum ( ie : the first set in theorem [ thm : static eq ] ) . note that since all groups are approximately the same size , each equilibrium with all large groups is entirely characterized by the _ number _ of groups . the largest possible group is $\bar g$ and the smallest possible group is $s^*$ . thus , there should be one equilibrium for each integer in the interval $[ n / \bar g , n / s^* ]$ ( strictly speaking , the count must be adjusted when $\bar g$ or $s^*$ divides $n$ evenly , but including that complication only adds more equilibria , keeping the lower bound accurate , albeit a bit lower than is strictly necessary ) . since the lower bound in theorem [ thm : number of static eq ] is usually greater than 1 , the static game will usually have multiple equilibria . however , the static game provides no insight into which of those equilibria is most likely to occur . are they all equally likely , or is there a distribution over equilibria ? does that distribution depend in a predictable way on the observable elements of a particular problem ? these questions are particularly important because some of the equilibria lead to considerably higher social welfare than others . the following section shows that a more dynamic model of the group formation process , in which individuals make their decisions sequentially over time , provides an equilibrium refinement . as i will show , not all of the equilibria characterized in theorem [ thm : static eq ] are attainable when players start the game as individuals . surprisingly , the surviving equilibrium group size vector is the worst possible of the static equilibria .
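the all - large family of equilibria in the example can be enumerated mechanically . the sketch below ( illustrative code ; the payoff $f$ is the same assumed stand - in for the paper 's unspecified logistic function as above ) checks , for each possible number of groups , whether the balanced partition of 100 players is a nash equilibrium :

```python
import numpy as np

def balanced_partition(n, k):
    """split n players into k groups whose sizes differ by at most one."""
    base, extra = divmod(n, k)
    return [base + 1] * extra + [base] * (k - extra)

def is_nash(sizes, f):
    """no player gains by joining another group or striking out alone."""
    for i, s in enumerate(sizes):
        alternatives = [f(1)] + [f(t + 1) for j, t in enumerate(sizes) if j != i]
        if max(alternatives) > f(s):
            return False
    return True

f = lambda s: np.exp(-(s - 10.0)**2 / (50.0 if s <= 10 else 40.0))
for k in range(1, 101):
    sizes = balanced_partition(100, k)
    if is_nash(sizes, f):
        print(sizes)   # prints the five all-large equilibria, k = 6 .. 10
```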
in this section , i show that allowing the players to move sequentially refines the set of equilibria from the static game .when the players start the game as individuals , they will always reach an equilibrium group structure with the same coalition size vector .furthermore , it is not the socially optimal one when individuals make their group formation decisions dynamically , they end up in groups that are much too large , despite a clear alignment between individual and social welfare . in the unconstrained sequential group formation game ,individuals are able to join any currently existing group , or alternatively they can strike off as an individual , forming a group of size 1 . thus , individual s action set at time can be denoted by , where denotes the action of striking out as an individual . in the following , i will assume that individuals make their group membership decisions myopically that is , they decide which group will maximize their return , given only the _ current _ group structure .this defines a behavior strategy that simply maps the current group partition , , to the individual s action set as defined above : .this myopic behavior strategy is identical to that used in arnold and schwalbe ( 2002 ) and arnold and wooders ( 2005 ) .the myopia assumption is convenient because it makes the analysis more tractable .however , in the case of sequential coalition formation games , it is also behaviorally more reasonable than perfect foresight .the sequential nature of this game induces an explosion in number of possible states , making the sequential coalition formation game more like chess or go than tic - tac - toe .moreover , as i will mention later , one can show that myopia is not the sole cause of the observed behavior , making the assumption relatively innocuous . defines an unconstrained sequential coalition formation game , where is a particular order of motion for the players .it is worth noting that i obtain dramatically different results using the nash equilibrium than arnold and wooders do using the nash club and k - remainder equilibria . ]an equilibrium of this dynamic game is a partition of the players into groups , such that is , a group configuration is an equilibrium if no individual wishes to deviate unilaterally .let represent the set of equilibrium coalition size vectors resulting from those partitions .note that any nash equilibrium of the dynamic game must be a stable group configuration in the static game , and therefore .theorem [ thm : number seq eq ] shows that the sequential game has a unique equilibrium up to the symmetry of the players .furthermore , theorem [ thm : worst possible eq ] shows that when players start the game as individuals , this equilibrium is always the worst possible stable group configuration from the standpoint of social welfare , despite the alignment between social and individual preferences .[ thm : number seq eq]let be an unconstrained sequential group formation game with single - peaked . then there is a unique nash equilibrium group size vector , , which is a function of the number of players and the payoff function alone .[ thm : worst possible eq]let be an unconstrained sequential group formation game with single - peaked .let be the static coalition formation game with the same number of players and payoff function . 
if the group size vector reached by the sequential game is its ( unique ) nash equilibrium , then it is the element of $e$ that minimizes social welfare . that is , the nash equilibrium of the sequential game is the worst element of the static equilibrium set from the standpoint of social welfare . let an arbitrary sequential group formation game be given , and let $( n , f )$ be the corresponding static group formation game . the equilibrium of the static game that yields the lowest social welfare is the equilibrium with groups of the largest size , or conversely , the equilibrium with the smallest number of groups . i will show that regardless of the order of motion , the players always reach a configuration with the minimum number of groups , and thus the lowest possible social welfare value . if $f$ is strictly increasing or decreasing , then the result follows trivially . so suppose $f$ is unimodal . unimodality implies $f ( 2 ) > f ( 1 )$ ; thus , the first individual will always want to start a new group . since $f ( s + 1 ) \ge f ( 2 )$ for group sizes $s < \bar g$ , subsequent individuals will prefer joining the existing group to forming a new group of size 2 . in fact , it is only worthwhile to create a second group of size 2 when the existing group is of size $\bar g$ . if $\bar g = n$ , then a second group never forms : the individuals ultimately form one large group of size $n$ and the result follows trivially . so suppose $\bar g < n$ . for the sake of clarity , let $\bar g$ divide $n$ evenly ( this need not be the case , but the assumption simplifies the exposition ) . thus , the equilibrium in $e$ with the lowest social welfare is the equilibrium with $n / \bar g$ groups . regardless of the order of motion , a new group forms only if all existing groups have reached size $\bar g$ . thus , the final group forms only once there are $n / \bar g - 1$ groups of size $\bar g$ . this implies that the unique equilibrium of the sequential game will have $n / \bar g$ groups . the basic insights of the proof can best be appreciated via a specific example : an unconstrained sequential group formation game with logistic utility . note that while a static group formation game often has multiple equilibrium group size vectors , only one is efficient . recall from example [ exa : static game ] that while the game has 5 nash equilibria , only the equilibrium with groups of size 10 is efficient . now , consider the sequential group formation game with the same parameters and an arbitrary order of play . according to theorem [ thm : number seq eq ] , this sequential game has a unique equilibrium . moreover , theorem [ thm : worst possible eq ] indicates that the equilibrium will be the stable group structure with the lowest possible social welfare : in this case , the configuration with coalitions of size 16 and 17 . note that this equilibrium is inefficient , because all players are better off in groups of size 10 . the following analysis shows how players wind up in this suboptimal group structure . the players start the game as individuals , so the first player to move faces a choice between remaining as an individual and forming a group of size 2 . the player is myopic , so she chooses the group of size 2 because it gives her higher utility in the next period ( figure [ fig : one to two ] ) .

fig . [ fig : one to two ] : the first individual to move joins another individual to form a group of size 2 .

the second player to move faces a similar choice : she must decide whether to join the existing large group to form a group of 3 , or join another individual to form a second group of 2 . the group of 3 gives her higher utility , so she joins that group ( figure [ fig : one to three ] ) .

fig . [ fig : one to three ] : the second individual to move must choose between forming a new group of size 2 or joining the existing group of 3 ; he will choose the group of 3 , since it gives him higher utility than the group of 2 .
a new group only forms when $f ( s + 1 ) < f ( 2 )$ , where $s$ is the size of the existing large group . the smallest such $s$ is obviously $\bar g$ : in this case , a group of 17 ( figure [ fig : new group forms ] ) .

fig . [ fig : new group forms ] : a new group forms when the large group is of size 17 , because the individual is better off in a new group of size 2 .

this is true regardless of how many `` large '' groups ( groups with more than one individual ) there are . thus , the second group forms when there are 83 individuals and one group of 17 , the third group forms when there are 66 individuals and two groups of size 17 , and so on . the last group forms when there are 15 individuals and five groups of size 17 . this sixth group is the final group that will ever form . individuals may ( and indeed , will ) move between the existing groups , but no new group will ever form . the individuals will stop moving when all six groups are approximately the same size : namely , in the configuration with two groups of size 16 and four groups of size 17 . as predicted by theorem [ thm : worst possible eq ] , this is the stable group arrangement with the lowest possible social welfare value . note also that at no point did we specify the order of play : thus , the players will reach this arrangement regardless of their order of motion . one might be tempted to attribute the behavior detailed in theorems [ thm : number seq eq ] and [ thm : worst possible eq ] to the players ' myopia . however , it is possible to show that even perfectly forward - looking players will form groups that are larger than the socially optimal size . since even forward - looking agents reach a suboptimal equilibrium , it is clear that myopia is not all that is at work in this result . the cause of the observed behavior is the externality that joining a group imposes on existing group members . when the group is smaller than the social optimum , that externality is positive ; however , when the group is at the optimal size , the externality is a negative one . the entering member is obviously made better off by the change ( otherwise , he would not move ) , but the rest of the group is made worse off . the negative externality causes individuals to enter a group that does not benefit from the extra member , which then drives groups to become too large . obviously , in the real world , groups of individuals will sometimes make their membership decisions together . if we allow a subgroup of individuals to move as a group , then any equilibrium that exists will necessarily have smaller groups . however , the sets of equilibria that are stable to such coalitional deviations are largely empty ( see arnold and wooders ( 2005 ) for a discussion of this problem ) . more importantly , when we move on to games with a network constraint , as in the following section , it becomes less clear what is meant by a configuration that is stable to `` coalitional deviations '' . analysis of more complicated , network - specific equilibrium concepts is obviously a venue for future work . additionally , there is empirical evidence that groups tend to be too large : many institutions exist that constrain the size of coalitions , a measure that would be unnecessary if individuals found themselves in groups of ideal size .
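the walkthrough above is easy to reproduce computationally . the following sketch of the myopic dynamics ( an illustration , not the paper 's code ; the payoff $f$ is again the assumed stand - in with $s^* = 10$ and $\bar g = 17$ ) starts all 100 players as singletons and iterates strictly improving moves in a random order until no one wants to move :

```python
import numpy as np

def sequential_game(f, n, seed=0):
    """myopic best-response dynamics for the unconstrained sequential
    game: each player may join any existing group or strike out alone."""
    rng = np.random.default_rng(seed)
    group_of = np.arange(n)                      # everyone starts alone
    for _ in range(10_000):                      # sweep cap for safety
        moved = False
        for i in rng.permutation(n):
            sizes = np.bincount(group_of, minlength=n)
            g = group_of[i]

            def value(h):                        # i's payoff if i ends up in h
                if h == g:
                    return f(sizes[g])           # stay put
                return f(1) if sizes[h] == 0 else f(sizes[h] + 1)

            best = max(range(n), key=value)
            if value(best) > value(g):           # only strictly improving moves
                group_of[i] = best
                moved = True
        if not moved:
            break
    sizes = np.bincount(group_of)
    return sorted(sizes[sizes > 0], reverse=True)

f = lambda s: np.exp(-(s - 10.0)**2 / (50.0 if s <= 10 else 40.0))
print(sequential_game(f, 100))   # expected: [17, 17, 17, 17, 16, 16]
```

with this payoff the dynamics should terminate in the six - group configuration regardless of the random order , matching the example .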
in the following section , i consider the effects of social and spatial constraints on individual behavior and show that such constraints can improve total social welfare . section [ sec : opt inst design ] continues this discussion by exploring the effect of the network constraint on optimal institutional design . the analysis of the previous section ( as well as much of the existing literature ) assumes that individuals are free to join any existing group , regardless of its current composition . however , the cases where individuals are completely unconstrained are relatively few ; in most instances , individuals face social , spatial , and information constraints when making their membership decisions . consider , for example , a set of farmers forming water management groups along the banks of a river . although it is conceivable that the farmers would organize into groups at random , they are more likely to join farmers who are adjacent to them on the river than those in a distant location . similarly , research groups are more likely to be composed of colleagues than strangers , and an individual is unlikely to attend a party unless he already knows someone who is attending . the most natural way of modeling these constraints is via a network of connections . i give each individual an exogenous network of connections to other people . an individual can only join a group if it contains a person she is connected to on the network . more formally , adding a network constraint to the sequential group formation game means specifying an exogenous , unchanging matrix of connections between individuals : the entry for a pair of players is 1 if they are connected on the network and 0 otherwise . in the constrained game , an individual 's action set is restricted to include only those groups she is connected to . clearly such a matrix of connections can model any set of constraints faced by individual agents , making the network formulation of this problem extremely general . however , there is an additional advantage of using a network constraint : namely , it allows us to draw conclusions about general `` classes '' of constraints that seem similar , without getting caught up in the details of a particular case . for instance , we might want to determine how individuals on a spatial network behave differently than individuals on a social network , without getting tied up in the details of a particular network structure . fortunately , by varying only a few parameters , we can obtain a natural spectrum of network structures that correspond nicely to the types of networks that we would observe in the real world .
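in the sketches above , adding the constraint amounts to shrinking each player 's action set . a minimal illustrative helper ( the representations here are our own choices ) :

```python
import numpy as np

def constrained_actions(i, group_of, adjacency):
    """groups that player i may join under the network constraint: those
    containing at least one of i's neighbors, plus striking out alone."""
    neighbors = np.flatnonzero(adjacency[i])
    reachable = {int(group_of[j]) for j in neighbors}
    return reachable | {"alone"}
```

replacing the unconstrained search over all groups in the myopic dynamics with this restricted set yields the constrained sequential game .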
for my analysis of network topology , i will use a watts - strogatz network , which has only two parameters . the first is the average degree , which enumerates the average number of connections each individual in the network has . the second is the watts - strogatz rewiring parameter , which allows us to examine a spectrum of different network types : when no links are rewired , the network is regular and approximates a spatial network ; when every link is rewired , the individuals are connected at random ; for intermediate values of the rewiring parameter , the network has a `` small world '' structure , which approximates that of a social network . a pair of these two parameters describes a family of networks with similar topological characteristics . note that when the network is fully connected , every player knows someone in every group , and the constrained game therefore coincides with the unconstrained one . thus , the unconstrained static and sequential games considered previously are a special case of the constrained game : namely , one where the average degree is at a maximum . i will first use the static game to illustrate the effects of the network constraints on individual behavior given a particular network , and then i will show how social welfare is affected by the network constraint . as in the unconstrained case , i will start by looking at individual behavior when players make their group membership decisions simultaneously . this section generalizes the results of section [ sec : static - coalition - formation ] to the constrained case . an equilibrium of the static game with a network constraint is a partition of the players into groups that is both feasible and individually rational : a nash equilibrium of the game is a partition of the players into groups such that no individual can strictly improve her payoff either by striking out alone or by moving to a group containing one of her network neighbors . one result of adding a network constraint is that the analogue to lemma [ lem : all same size ] need not be true . that is , when players are constrained , there may exist stable group structures in which groups are of different sizes : for a given static group formation game , there may exist a nash equilibrium group structure containing two large groups whose sizes differ by more than one . as an illustration of this claim , consider a game with 12 players on a ring , as pictured in figure [ fig : ring ] .

fig . [ fig : ring ] : a game with 12 players arranged on a ring .

further suppose $s^* = 2$ and $\bar g = 6$ , so that all individuals want to be in a group of size 2 , and will never form a group larger than size 6 . figure [ fig : ring with new eq ] illustrates a stable coalition structure of the static game with uneven group sizes .

fig . [ fig : ring with new eq ] : an example of an equilibrium group configuration on a ring . note that this would not be an equilibrium on the fully - connected network , because the players in group c would move to group a .
it is obvious from figure [ fig : ring with new eq ] how the ring affects the stability of this configuration . the individuals in group c would like to join group a , but they are unable to , because they are not connected to that group on the social network . if the network were fully connected , the individuals in group c would move to group a , and the configuration would not be stable . note that the constraint of the ring could represent either a constraint on actions ( the players would move if they could ) or on information ( the players would move if they knew ) . it could also equally well represent an explicit constraint ( a legal constraint ) , an implicit constraint ( a social norm ) , or a functional constraint ( a geographic coincidence ) . this result is significant because while existing models predict that groups will be the same size in equilibrium , real - world groups are seldom identical in size . this analysis indicates that if individuals are constrained , group sizes need not be the same . by exploiting the fact that any two connected individuals form a fully connected subgraph , we can extend the results in theorem [ thm : static eq ] to the current case . to do so , we need one final definition : given a network constraint , i call two groups _ connected _ if some member of one group is linked on the network to some member of the other . [ thm : static constrained equilibria ] let a static group formation game with single - peaked payoff function and a network constraint be given . then in any equilibrium , every pair of connected groups must satisfy the size conditions of the unconstrained characterization : either 1 . both groups are larger than the social optimum and their sizes differ by at most one , or 2 . one of the two groups is smaller than the social optimum and the pair satisfies the conditions of lemma [ lem :- odd size ] . simply note that any pair of connected groups contains a pair of connected agents , who form a fully connected subgraph of the original graph . the result above follows immediately from theorem [ thm : static eq ] . as in the special case of a fully connected network , the set of equilibria of the dynamic game is a subset of the equilibria of the static game . however , claim [ cla : not like unconstrained ] indicates that theorem [ thm : number seq eq ] need not carry over : [ cla : not like unconstrained ] a sequential coalition formation game with a network constraint need not have a unique equilibrium ; furthermore , the set of equilibria of this game may depend on the order of play . examples illustrating this claim can be found in appendix a. since the outcome of the constrained group formation game may depend on the order of play and on random moves , and there is no strong theoretical foundation for a particular order of play or set of random choices , i must somehow deal with this multiplicity of equilibria . one method would be to determine the distribution of outcomes combinatorially and calculate the expected social welfare exactly . however , this method would yield results that are overly narrow , applying only to the specific networks considered . as discussed earlier , i would like to draw conclusions about a `` class '' of networks with similar topologies . to that end , i will rely on the computational version of the combinatorial argument above : i will average social welfare over a large number of similar games .
in this case , i use a watts - strogatz network , which is constructed as follows . we start with a regular network , i.e. , a network in which every individual is connected to the same number of nearest neighbors on each side . we then rewire each of the links in the regular graph with the rewiring probability . a link is rewired by disconnecting one end and reconnecting it to a different , random node in the network . thus , the pair of parameters ( average degree and rewiring probability ) describes a family of networks with a similar topology . as an illustration of the effect of varying these two parameters , consider a coalition formation game with 12 players . figure [ fig : degree graphs ] depicts four networks with different degree .

fig . [ fig : degree graphs ] : four different networks connecting 12 players . the degree of the network decreases from left to right . the first panel depicts a fully connected network . as edges are removed at random , average degree declines and players potentially become more constrained in their choice of groups .

in the first panel , every player is connected to every other player . this is called a fully - connected graph , and represents the special case examined in sections [ sec : static - coalition - formation ] and [ sec : seq unconstrained ] . the subsequent panels depict the same network with random links removed . obviously , as the degree of the network decreases , the individuals within that network have fewer choices of groups to join ( the size of an individual 's action set is bounded above by the number of neighbors she has ) . this parameter potentially has a different meaning in different types of networks : in a spatial network , the average degree specifies how far an individual can `` see '' in all directions , whereas in a social network , the average degree varies inversely with the `` familiarity '' required for membership .

fig . [ fig : watts strogatz networks ] : three different networks connecting 12 players . in the first panel , the players are connected to their two nearest neighbors on each side . this is called a regular network , and is often used to represent arrangements of individuals in space . in the second panel , a small number of the links in the regular network are rewired at random . the result is called a small world network , and is a simple model of a social network . in the last panel , all of the links in the original network are rewired at random . the result is a random network , similar to those depicted in figure [ fig : degree graphs ] . random networks are easily analyzed , but a poor approximation of social connections .

as noted earlier , the rewiring parameter allows us to examine networks with different topologies . when none of the links are rewired , we have a regular network such as that pictured in the first panel of figure [ fig : watts strogatz networks ] . these networks have a high clustering coefficient ( the average probability that two of a node 's neighbors are connected ) and a high network diameter ( the largest minimum path length between two nodes ) , and are a good model for spatial networks . on the other hand , when all of the links in the network are rewired at random , the result is a completely random network , pictured in the last panel of figure [ fig : watts strogatz networks ] . these networks have a low clustering coefficient and low diameter . although easy to work with statistically , random networks are unfortunately relatively rare empirically .
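a compact sketch of this construction ( a standard watts - strogatz generator written for illustration ; an off - the - shelf routine such as networkx 's watts_strogatz_graph would serve equally well ) :

```python
import numpy as np

def watts_strogatz(n, degree, rewire_prob, rng):
    """ring of n nodes, each linked to degree/2 neighbors on each side
    (degree should be even); every link is then rewired with probability
    rewire_prob."""
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for k in range(1, degree // 2 + 1):
            j = (i + k) % n                     # regular-lattice neighbor
            if rng.random() < rewire_prob:      # rewire the far end
                j = int(rng.integers(n))
                while j == i or adj[i, j]:      # avoid self-loops, duplicates
                    j = int(rng.integers(n))
            adj[i, j] = adj[j, i] = True
    return adj

rng = np.random.default_rng(3)
adjacency = watts_strogatz(100, degree=4, rewire_prob=0.1, rng=rng)
```

feeding such an adjacency matrix into the constrained action sets of the earlier sketches , and averaging the resulting welfare over many seeds , reproduces the kind of comparisons reported below .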
by rewiring a small but non - zero fraction of the links , we obtain a small world network , pictured in the second panel of figure [ fig : watts strogatz networks ] . a small world network has a high clustering coefficient but low diameter , and is a reasonable first - order approximation of a social network . in the following analysis , i average social welfare over 100 games with random order of play and networks with the same pair . note that with the exception of the regular networks ( ) , the network structure will differ from one run to the next , even as the parameters remain the same . this allows me to average over a number of networks with the same parameters , which gives the results greater generality . since i hope to isolate the effects of network topology on outcomes , the non - network elements of the game remain the same . all results in this section use a game with 100 players and a logistic utility function . define efficiency to be the ratio of actual social welfare to the maximum possible social welfare . this plot shows average efficiency over 100 runs of a sequential coalition formation game with and . the network constraints are random ( ) . as the degree of the network constraint decreases , social welfare increases . social welfare increases because the constraint binds more heavily , mitigating the tendency for groups to get too large . ] figure [ fig : efficiency and degree ] shows that , holding the watts - strogatz parameter constant , social welfare declines in the degree of the network constraint . , i used a random graph ( ) . the results are qualitatively similar for other values of . ] since the size of an individual s action set is bounded above by her degree on the network , degree provides a rough measure of how constraining the network is on individual behavior . as the degree of the network decreases , the individuals are more constrained in their choice of groups , which mitigates the tendency for groups to get too large . the fact that social welfare increases as individuals are more constrained is consistent with the hypothesis that groups are too large because of a negative externality . define efficiency to be the ratio of actual social welfare to the maximum possible social welfare . this plot shows average efficiency for 100 runs of a sequential coalition formation game . for all runs , and . holding degree constant ( at 2 , 4 , 6 ) , average social welfare declines in the watts - strogatz parameter ; that is , social welfare is higher when the network is ordered than when it is random . ] figure [ fig : efficiency watts strogatz ] shows the effects of the watts - strogatz parameter on social welfare . , i used networks of degree 2 , 4 , and 6 . the results are qualitatively similar for networks of different degree . obviously , as degree increases , the drop in social welfare from the regular graph to the random graph becomes less dramatic . ] as the graph moves from regular , to small world , to random , social welfare declines . one possible reason for this trend is that as the watts - strogatz parameter increases , the clustering coefficient decreases . the clustering coefficient is the probability that two of a node s neighbors are connected .
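the efficiency measure used in these figures can be made concrete ; below is a minimal sketch in which the single - peaked payoff is a placeholder of our own choosing , since the paper s exact logistic utility function is not reproduced here .

```python
import math

def payoff(s, s_star=5.0):
    # placeholder single-peaked payoff in group size s, peaking at s_star;
    # the paper's actual logistic utility function is not reproduced here
    return s * math.exp(-((s - s_star) ** 2) / (2.0 * s_star))

def efficiency(group_sizes, s_star=5.0):
    # ratio of realized social welfare to the maximum possible welfare,
    # i.e. every player in a group of the ideal size (an upper bound
    # when the number of players is not divisible by s_star)
    n = sum(group_sizes)
    welfare = sum(s * payoff(s, s_star) for s in group_sizes)
    return welfare / (n * payoff(s_star, s_star))
```

averaging this ratio over 100 runs with the same pair yields the curves discussed here ; the clustering coefficient defined above drives the topology effect examined next .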
as the clustering coefficient decreases , the probability that an individual knows more than one person in a group decreases , and the expected size of the action set increases . thus , as the clustering coefficient decreases , the network becomes less constraining and average social welfare declines . there is considerable evidence that real - world groups do tend to be too large . the most compelling evidence is that groups have developed institutions to artificially restrict membership , a measure that i argue would not be required if individuals self - organized optimally . in this section , i examine how network topology affects the optimal choice of membership rule . when individuals are homogeneous , there are only two possible membership rules : the `` open membership rule '' ( no restriction on group membership ) and the `` exclusive membership rule '' ( groups can reject a member ) . when individuals are unconstrained , the exclusive membership rule is always preferable to the open membership rule . in other words , the coalition members should never allow the group to get larger than . one might think that the exclusive membership rule would _ always _ be preferable . however , figure [ fig : leaf example ] illustrates that when individuals are restricted in their choice of groups via a social network , the exclusive membership rule can sometimes result in group configurations with extremely low social welfare values . an even simpler example uses a ring network . suppose 20 individuals are arranged on a ring as shown in figure [ fig : ring bad outcome ] . if , then the exclusive membership rule may cause individuals to be `` isolated '' between groups of the ideal size . both of these examples highlight why the exclusive membership rule is less beneficial when individuals are constrained in their choice of groups . in the unconstrained case , all of the individuals who were excluded from other groups could band together . when individuals are restricted , they no longer have that option , and there is a much greater chance of individuals being forced into low utility outcomes . an example illustrating how the exclusive membership rule can be detrimental on a ring . this is a game with 20 individuals arranged on a ring and . because the large groups can prevent them from joining , the isolated individuals must accept a lower payoff . ] once again , more general results can be obtained by averaging over many runs on topologically similar networks . figure [ fig : opt inst design degree ] shows that when degree is low , the exclusive membership rule is no longer the clear optimal choice . similarly , figure [ fig : opt inst design w - s ] shows that when the graph is very ordered , the exclusive membership rule is , on average , less beneficial . given the possibility of extremely poor outcomes , such as those pictured in figures [ fig : leaf example ] and [ fig : ring bad outcome ] , the open membership rule may be more desirable when the network constraint is low degree or highly ordered .
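both the degree and the clustering arguments above can be checked directly on generated networks ; a sketch of the clustering coefficient computation , reusing the edge set produced by the watts_strogatz sketch earlier , is given below .

```python
def clustering_coefficient(edges, n):
    # average, over nodes with at least two neighbors, of the probability
    # that two neighbors of a node are themselves connected
    nbrs = [set() for _ in range(n)]
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    vals = []
    for i in range(n):
        k = len(nbrs[i])
        if k < 2:
            continue
        links = sum(1 for u in nbrs[i] for v in nbrs[i] if u < v and v in nbrs[u])
        vals.append(2.0 * links / (k * (k - 1)))
    return sum(vals) / len(vals) if vals else 0.0
```

on watts - strogatz networks this quantity falls as the rewiring probability grows , consistent with the trend invoked above .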
] ] the model i have introduced in this paper lends itself to numerous interesting extensions . although i have chosen a very simple payoff function for this initial work , the basic game structure of the model is easily generalized to model any problem in which payoffs depend on coalition structure . for instance , by making payoffs a function of the entire coalition size vector ( ) , we can include inter - coalitional externalities . this would allow us to model ( among other things ) a sequential version of the traditional 2-period rent - seeking game ( such as that in nitzan ( 1991 ) ) or resource - allocation game ( such as that in heintzelman _ et al _ ( 2006 ) ) and ask whether inter - coalitional externalities are affected by the structure of the underlying network constraint . for example , we might ask whether social welfare is higher when resource management groups are formed according to geography ( eg : water management groups on a river ) or social / family ties ( eg : fisheries on a bay ) . if we allow heterogeneity of players , then the payoff function might depend on the _ composition _ of the groups , as well as their size ( ) , which would allow us to explore another set of problems . if individuals are heterogeneous in ability or tool sets , then we can ask whether self - organized teams contain an optimal level of diversity for problem - solving . if individuals are heterogeneous in ideology , we can explore the formation of lobbyist organizations and political parties . more abstractly , if individuals are heterogeneous in an arbitrary characteristic , we can model discriminatory behavior . finally , making the network structure endogenous may add some insight into network formation ( see jackson ( 2005 ) for an excellent survey of this growing literature ) . this might be accomplished either by making links weighted , or by allowing links to be added and lost over time . the process of group formation is one that has attracted increasing interest in the past decade . in this paper , i use a simple extension of a static coalition formation game to illustrate the importance of dynamics and membership constraints in the coalition formation process . i show that in the sequential coalition formation game , individuals tend to form groups that are too large , especially when players are unconstrained . real world groups often implement membership restrictions , indicating that without such restrictions , the groups would tend to be too large . i also show that if individuals are constrained by a network , they tend to form groups that are closer to the ideal size , without the addition of membership restrictions . in fact , constraining group membership by requiring a social or spatial connection can effectively substitute for the institutional constraint of a membership rule . arnold , t. , wooders , m. , 2005 . dynamic coalition formation with coordination . working paper . arnold , t. , schwalbe , u. , 2002 . dynamic coalition formation and the core . _ journal of economic behavior and organization , 49 , _ 363 - 380 . bloch , f. , 1996 . sequential formation of coalitions in games with externalities and fixed payoff division . _ games and economic behavior , 14 , _ 90 - 123 . charness , g. , jackson , m. , 2006 . group play and the role of consent in network formation . forthcoming , _ journal of economic theory . _ copic , j. , jackson , m. , kirman , a. , 2005 . identifying community structures from network data . working paper . galeotti , a. , goyal , s. , jackson , m. , vega - redondo , f.
, yariv , l. , 2006 . network games . working paper . girvan , m. , newman , mej . , 2002 . community structure in social and biological networks . _ proceedings of the national academy of sciences , 99 , 12 , _ 7821 - 7826 . hart , s. , kurz , m. , 1983 . endogenous formation of coalitions . _ econometrica , 51 , 4 , _ 1047 - 1064 . jackson , m. , 2005 . a survey of models of network formation : stability and efficiency . in : demange , g. , wooders , m. ( eds . ) , _ group formation in economics _ . cambridge , uk / cambridge university press . konishi , h. , le breton , m. , weber , s. , 1997 . pure strategy nash equilibrium in a group formation game with positive externalities . _ games and economic behavior , 21 , _ 161 - 182 . konishi , h. , ray , d. , 2003 . coalition formation as a dynamic process . _ journal of economic theory , 110 , _ 1 - 41 . konishi , h. , weber , s. , 1997 . free mobility equilibrium in a local public goods economy with congestion . _ research in economics , 51 , _ 19 - 30 . macho - stadler , i. , perez - castrillo , d. , porteiro , n. , 2004 . sequential formation of coalitions through bilateral agreements . working paper . newman , mej . , girvan , m. , 2004 . finding and evaluating community structure in networks . _ physical review e , 69 , 2 , _ 69 - 84 . nitzan , s. , 1991 . collective rent dissipation . _ the economic journal , 101 , 409 , _ 1522 - 1534 . page , f. , wooders , m. , 2007 . networks and clubs . _ journal of economic behavior and organization , 64 _ , 406 - 425 . heintzelman , m. , salant , s. , schott , s. , 2006 . putting free - riding to work : a partnership solution to the common - property problem . working paper . watts , d. , strogatz , s. , 1998 . collective dynamics of small - world networks . _ nature , 393 , _ 440 - 442 . yi , s .- s . stable coalition structures with externalities . _ games and economic behavior , 20 , _ 201 - 237 . yi , s .- s . , shin , h. , 2000 . endogenous formation of research coalitions with spillovers . _ international journal of industrial organization 18 , 2 , _ 229 - 256 . i will illustrate the second half of this claim first : that the order of play can affect the set of nash equilibria . consider a game with 12 players arranged in a ring , as shown in figure [ fig : ring ] . further , suppose the payoff function is such that and . for the first case , suppose that the players proceed in order around the ring , that is , . figure [ fig : seq in order around ring ] shows game play leading to an equilibrium coalition structure with two groups of size six . in this game , 12 individuals are arranged in a ring . the payoff function , , has maximum and . the individuals , moving in order around the ring , wind up in two groups of size 6 . in fact , is the only equilibrium group size configuration of the game . figure [ fig : seq groups of 2 ] shows the same game with a different order of play .
] because of the order of play , the individuals are always choosing between joining an existing large group , forming a new group of two , or remaining as an individual . this choice is much the same as the choice players face in the unconstrained game with the same payoff function , pictured in figure [ fig : fully connected 2 groups 6 ] . thus , it should be unsurprising that the players reach the same equilibrium coalition structure as they would in the unconstrained game : . in fact , this is the only equilibrium coalition size vector possible in this particular coalition formation game . now consider a second game with the same number of players , network constraint , and payoff function , but a different order of play . figure [ fig : seq groups of 2 ] shows one possible sequence of game play , given . this game is identical to the game presented in figure [ fig : seq in order around ring ] except for the order of motion : . this figure shows a particular sequence of moves , which leads to groups of the ideal size : . note that is not an equilibrium of the game presented in figure [ fig : seq in order around ring ] , proving that the set of equilibria may depend on the order of play . ] because the first few players to move are separated from the existing large groups , they are unable to impose themselves on the groups that have already formed , as they did in the previous example . the result is an equilibrium coalition structure with four groups of the ideal size : . since is in but not in , it is clear that the order of motion does affect the set of equilibria . of course , the outcome pictured in figure [ fig : seq groups of 2 ] is not the only possible equilibrium of the game with order of play . many players in this game are forced to make random choices . figure [ fig : seq groups of 3 ] shows that if some of those players make different choices , then the players will find themselves in a different configuration , in this case , . this game is identical to that presented in figure [ fig : seq groups of 2 ] . note , in particular , that the order of play is the same : . however , the players have made different random choices , leading to a different equilibrium outcome : . this shows that when players are sufficiently constrained , there need not be a unique equilibrium coalition size configuration . ] this is an illustration of the first half of claim [ cla : not like unconstrained ] , which states that a coalition formation game with a network constraint need not have a unique equilibrium coalition size configuration .
in this paper , i illustrate the importance of both dynamics and network constraints in the group formation process . i introduce a class of games called sequential group formation games , in which players make their group membership decisions sequentially over time , and show that the dynamics act as an equilibrium refinement . however , the resulting equilibrium is highly suboptimal : groups tend to be much too large , relative to the social optimum . i then introduce a network constraint , which limits a player s action set to those groups that she is connected to on an exogenous network . the network constraint mitigates the tendency for groups to get too large , and social welfare is higher when the network is sparse and highly ordered . this result has the surprising implication that informational , institutional , and geographic barriers to group membership may actually improve social welfare by restricting groups from becoming too large .
over the last decades , scientists pioneered the field of laser - interferometric gravitational wave ( gw ) detection , culminating in the establishment of a worldwide network of large - scale gravitational wave detectors . the design and construction of a second generation of gw observatories is well underway , and observation with ten times improved sensitivity is expected to start in about 5 years . triggered by the einstein gw telescope ( et ) design study within the european fp7 framework , research has started on design options for a third generation gw observatory , aiming for a sensitivity 100 times better than that of current instruments and thus allowing us to scan a one million times larger fraction of the universe for astrophysical gw sources . in addition to improved sensitivity , a key feature of observatories such as et will be their strongly expanded bandwidth , covering the range from 1hz to 10khz . especially the extension of the detection band towards the lower frequency end will increase the number and signal - to - noise ratio of observable gravitational wave signals and therefore significantly enhance the astrophysical impact of third generation observatories . as we will show in section [ sec : benefit ] , for achieving the immense bandwidth envisaged for instruments such as et , it might be highly beneficial , if not even technically unavoidable , to split the detection band into several optimized detectors of moderate bandwidth , forming altogether a so - called _ xylophone _ interferometer covering the full detection band . in section [ sec : example ] we present for the first time a potential design for a third generation xylophone configuration , consisting of a low - power , cryogenic interferometer optimized for the low - frequency band and a higher - power , room - temperature interferometer covering the high - frequency band . spanning the detection band over four orders of magnitude in frequency , as is required for third generation gw observatories such as et , is technically extremely challenging : different noise types dominate the various frequency bands and often show opposite responses to different tunings of the same design parameter .
a well - known example of such a behavior is the correlation of the two quantum noise components : photon shot noise ( psn ) and photon radiation pressure noise ( prpn ) . in order to improve the psn limited sensitivity at high frequencies , one needs to increase the circulating optical power of the gw detector , which at the same time increases the prpn and therefore worsens the low frequency sensitivity . vice versa , lowering the circulating power reduces the prpn and improves the low frequency sensitivity , while the psn contribution will rise and reduce the high frequency sensitivity ( a toy illustration of this scaling is sketched below ) . this dilemma can be resolved by following the path of electromagnetic astronomy , where telescopes are built for a specific , rather narrow - banded detection window ( visible , infrared etc ) and later on the data from different frequency bands is combined to cover the desired bandwidth . building two or more gw detectors , each optimised for reducing the noise sources at one specific frequency band , can form a xylophone observatory providing substantially improved broadband sensitivity . the xylophone concept was first suggested for advanced ligo , proposing to complement the standard broadband interferometers with an interferometer optimized for lower frequency , thus enhancing the detection of high - mass binary systems . the concept was then taken forward for underground observatories . in this article we extend the xylophone concept for application in third generation gw observatories . one may think that a xylophone might significantly increase the required hardware and its cost , i.e. building more than one broadband instrument . however , such an argument does not take into account the technical simplifications that a xylophone would allow , the better reliability of simpler instruments , and the more extensive scientific reach it would enable . for example , splitting a third generation observatory into a low - power , low - frequency and a high - power , high - frequency interferometer not only has the potential to resolve the above mentioned conflict of psn and prpn , but also allows us to avoid the combination of high optical power and cryogenic test masses . to reduce thermal noise to an acceptable level in the low frequency band , it is expected that cryogenic suspensions and test masses are required . even though tiny , the residual absorption of the dielectric mirror coatings deposits a significant amount of heat in the mirrors , which is difficult to extract without spoiling the performance of the seismic isolation systems , thus limiting the maximum circulating power of a cryogenic interferometer . starting from the single - detector et configuration described in , we developed a 2-band xylophone detector configuration to resolve the high - power low - temperature problem of a single band et observatory . table [ tab : summary ] gives a brief overview of the main parameters of the analysed low - frequency ( et - lf ) and high - frequency ( et - hf ) detector . the high - frequency interferometer , et - hf , is an up - scaled but otherwise only moderately advanced version of a second generation interferometer : we considered an arm length of 10 km and a circulating light power of 3mw . in order to achieve the aimed high frequency sensitivity , we also assumed the implementation of squeezed light as well as tuned signal recycling ( sr ) , which allows us to simultaneously extract both signal sidebands .
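the psn / prpn trade - off described at the beginning of this section can be illustrated with a toy scaling model ; this is not the et noise budget , and the prefactors below are arbitrary placeholders , but it shows why two detectors operated at different powers can beat a single broadband instrument .

```python
import numpy as np

def quantum_noise(f, power, a_shot=1.0, a_rp=1.0):
    # toy strain noise: shot noise scales as 1/sqrt(power) and is flat in f,
    # radiation pressure noise scales as sqrt(power)/f**2; prefactors arbitrary
    shot = a_shot / np.sqrt(power)
    rp = a_rp * np.sqrt(power) / f**2
    return np.sqrt(shot**2 + rp**2)

f = np.logspace(0, 4, 200)             # 1 hz .. 10 khz
low_p = quantum_noise(f, 1e4)          # low power: better at low frequencies
high_p = quantum_noise(f, 1e6)         # high power: better at high frequencies
xylophone = np.minimum(low_p, high_p)  # envelope a two-band design aims at
```

the envelope of the two curves is exactly the kind of combined sensitivity a xylophone configuration is designed to approximate .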
to reduce the thermal noise contributions limiting the medium frequency range without resorting to cryogenic temperatures , we considered increasing the beam size to the maximal technically feasible value of about 12 cm beam radius , as well as changing the beam shape from the currently used tem to mesa beams or a higher order laguerre gauss ( lg ) mode . using the lg mode , the coating brownian and the substrate brownian noise are reduced by factors of 1.61 and 1.40 , respectively . please note that the suspension system of et - hf is identical to that of a second generation gw observatory , but scaled up to cope with the higher mirror mass of 200 kg , required to manage the larger beams with a feasible mirror aspect ratio . the sensitivity curve and the noise budget of et - hf are shown in the right hand plot of figure [ fig : noise_budget ] . summary of the most important parameters of the 2-band xylophone detector shown in figure [ fig : noise_budget ] . [ tab : summary ] unlike et - hf , the low frequency xylophone interferometer , et - lf , will require several innovative techniques , well beyond the scope of first and second generation gw interferometers . in order to reduce seismic noise , we assumed an extremely long suspension system , composed of 5 stages , each 10 m tall , in addition to the reduced seismic level of an underground location . even though the reduced seismic excitation of an underground site decreases the gravity gradient noise significantly , a further reduction by a factor of 50 is required from subtraction of gravity gradient noise . the main feature of the lf detector is that all thermal noise sources are significantly reduced by using cryogenic test masses , which is made possible by the reduced optical power of only 18kw , comparable to that of a first generation gw detector . sapphire and silicon have been proposed as test mass materials for a cryogenic gw detector . however , material costs and material properties , as well as the available boule dimensions ( which would not significantly change the sensitivity of et - lf , see figure [ fig : noise_budget ] ) , seem to slightly favor silicon . therefore , we considered silicon test masses cooled to a temperature of 10k in this article . the most important material parameters used in our analysis are the young s modulus of 164gpa for silicon at 10k and the loss angles of and for the low and high refraction coating materials , respectively . unfortunately , the available measurements indicate higher loss angles for the coating materials at cryogenic temperatures than at room temperature . however , since research on cryogenic coatings has just started , we optimistically assumed that by the time construction of third generation instruments starts , coatings will be available featuring the same loss angles as current coatings at room temperature ( even if the loss angles were to increase by about a factor of 2 , this would yield only a very minor decrease of the et - lf sensitivity ) . the resulting thermal noise contributions of a single cryogenic silicon test mass are shown in figure [ fig : single_cryo_mass ] . using silicon mirrors also implies changing the laser wavelength from 1064 nm to 1550 nm , where silicon is highly transmissive and has very low absorption .
changing the laser wavelength has an impact on coating brownian noise and quantum noise . due to the fact that for 1550 nm light the mirror coatings have to be about 1.5 times thicker , the overall coating brownian noise is increased by a factor . in addition , the psn is also increased by a factor of 1.2 , while the prpn is improved by a factor of 1.2 . the resulting noise budget of et - lf , limited by gravity gradient noise at low frequencies and quantum noise at all other frequencies , is shown in the left hand plot of figure [ fig : noise_budget ] . please note that we omitted suspension thermal noise from our analysis of et - lf , as this is the subject of ongoing research and so far no mature noise estimate exists . however , it appears likely that , given the low loss characteristics of crystalline fibers at low temperature , suspension thermal noise may not be a limitation above the gravity gradient level . the overall strain sensitivity of the proposed xylophone configuration is shown in figure [ fig : h_summary ] and compared to the sensitivity of the single broadband et described in . the resulting inspiral ranges of the xylophone , 3200mpc and 38000mpc for binary neutron stars ( bns ) and binary black holes ( bbh ) respectively , are significantly larger than the ones for the et single configuration ( bns range = 2650mpc , bbh range = 25000mpc ) . the sensitivity of the xylophone in the intermediate frequency range ( 50 to 300hz ) is slightly worse than the one of et - single , but the overall inspiral ranges improve due to the strongly increased sensitivity around 10hz . while the et - single interferometer is limited by prpn between 2 and 30hz , et - lf can make use of a narrow - band detuned signal recycling to further decrease the quantum noise . we presented an initial design of a xylophone interferometer for a third generation gw observatory , composed of a high - power , high - frequency interferometer complemented by a cryogenic low - power , low - frequency interferometer . the xylophone concept provides a feasible alternative ( decoupling the requirements of high - power laser beams and cryogenic mirror cooling ) compared to a single broadband interferometer ( et - single ) and is found to potentially give significantly improved sensitivity . future efforts will focus on investigating the prospects of additional xylophone interferometers , either to improve the peak sensitivity around 100hz or to push the low frequency wall further down in frequency . this work has been supported by the science and technology facilities council ( stfc ) , the european gravitational observatory ( ego ) , the centre national de la recherche scientifique ( cnrs ) , the united states national science foundation ( nsf ) and the seventh framework programme ( grant agreement 211743 ) of the european commission .
achieving the demanding sensitivity and bandwidth envisaged for third generation gravitational wave ( gw ) observatories is extremely challenging with a single broadband interferometer . very high optical powers ( megawatts ) are required to reduce the quantum noise contribution at high frequencies , while the interferometer mirrors have to be cooled to cryogenic temperatures in order to reduce thermal noise sources at low frequencies . to resolve this potential conflict of cryogenic test masses with high thermal load , we present a conceptual design for a 2-band xylophone configuration for a third generation gw observatory , composed of a high - power , high - frequency interferometer and a cryogenic low - power , low - frequency instrument . featuring inspiral ranges of 3200mpc and 38000mpc for binary neutron star and binary black hole coalescences , respectively , we find that the potential sensitivity of xylophone configurations can be significantly wider and better than what is possible with a single broadband interferometer .
be a one - dimensional ( 1d ) signal and an orthonormal transform matrix , where is the set of real numbers .if and there are only spikes ( nonzero entries ) in , we say that is -sparse in domain .we sample by ( ) to get , where .if obeys the order- restricted isometry property ( rip ) and has low coherence with , then ( and in turn ) can be effectively recovered from .many algorithms have been proposed to recover from its random sample , e.g. linear programming ( lp ) and orthogonal matching pursuit ( omp ) . for a detailed overview on recovery algorithms, please refer to . in practice , many signals , e.g. image , video , etc , are two - dimensional ( 2d ) .a straightforward implementation of 2d compressive sampling ( cs ) is to stretch 2d matrices into 1d vectors .however , such direct stretching increases exponentially the complexity and memory usage at both encoder and decoder .an alternative to 1d stretching is to sample rows and columns of 2d signals independently by using separable operators .through 2d separable sampling , encoding complexity is exponentially reduced . however , as the recovery problem is converted into a standard 1d -minimization problem , decoding complexity is still very high . as a representative sparse signal recovery algorithm ,the omp achieves good performance with low complexity .the omp is originally designed for 1d signal recovery . to reduce the complexity of 2d signal recovery , this paper extends the 1d - omp to obtain the 2d - omp .we prove that with 2d separable sampling , 2d - omp is in fact equivalent to 1d - omp , so that both algorithms will output exactly the same results .however , the complexity and memory usage of 2d - omp is much lower than that of 1d - omp .thus , 2d - omp can be used as an alternative to 1d - omp in 2d sparse signal recovery .this paper is arranged as follows .section [ sec:1d_omp ] first briefly reviews the principles of 2d separable sampling and 1d - omp , and then makes a detailed analysis on the complexity of 1d - omp . in section [ sec:2d_omp ] , we deduce the 2d - omp algorithm , reveal the equivalence of 2d - omp to 1d - omp , and compare the complexity and memory usage of 2d - omp with that of 1d - omp . in section [ sec : results ] , simulation results are reported .finally , section [ sec : conclusion ] concludes this paper .the principle of 2d separable sampling is as follows .let be a 2d signal which is -sparse in domain , i.e. and there are only spikes in , where denotes the transpose . for simplicity, we use the same operator to sample the rows and columns of independently to get .let be the 1d stretched vector of and the 1d stretched vector of .it was proved that where denotes the kronecker product .it is easy to prove , hence .now this is just a standard 1d sparse signal recovery problem which can be attacked by the omp .[ alg:1d_omp ] * input * : * : sampling matrix * : sample * : sparsity level * output * : * : reconstruction of the ideal signal * auxiliary variables * : * : residual * : set of the indices of atoms that are allowed to be selected in the future * initialization * : * * let , where is the -th column of .we call the _ dictionary _ and an _atom_. the main idea of 1d - omp is to represent as a weighted sum of as few atoms as possible .algorithm [ alg:1d_omp ] gives main steps of 1d - omp . to implement 1d - omp, we need two auxiliary variables . 
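the loop of algorithm [ alg:1d_omp ] , whose steps are analyzed next , can be condensed into a short reference implementation ; this is a minimal sketch with variable names of our own choosing , after which the two auxiliary variables are described .

```python
import numpy as np

def omp_1d(Phi, y, K):
    # orthogonal matching pursuit: greedily select K atoms of Phi that best
    # match the residual, re-fitting the weights by least squares each time
    m, n = Phi.shape
    norms = np.linalg.norm(Phi, axis=0)
    residual, support = y.astype(float).copy(), []
    for _ in range(K):
        proj = np.abs(Phi.T @ residual) / norms  # project residual onto atoms
        proj[support] = -np.inf                  # never reselect an atom
        support.append(int(np.argmax(proj)))
        w, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ w       # renew the residual
    x = np.zeros(n)
    x[support] = w
    return x
```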
first , to avoid atom reselection , set is defined to record the indices of those atoms that are allowed to be selected in the future ( excluding those already selected atoms ) .second , vector is defined to hold the residual after removing the selected atoms from .initially , is set to . then at each iteration, the decoder picks from the dictionary the atom that best matches the residual and then renews the weights for all the already selected atoms via the least squares .we decompose the iteration of 1d - omp into the following steps and analyze its complexity step by step .the projection of residual onto atom is , where denotes the inner product between two vectors and denotes the -norm of a vector .let , then this step can be implemented by , where denotes dot division .the complexity of this step is dominated by matrix - vector multiplication .hence the complexity of this step is .this step selects from unselected atoms the atom with the maximal absolute value of projection . as there are atoms and , the complexity of this step is approximately , negligible compared with the step .let , then .according to linear algebra , where and . the complexity to calculate and depends on . for ,the complexity of this step is negligible compared with the step .the complexity of this step depends on . for ,the complexity of this step is negligible compared with the step .based on the above analysis , we conclude that the complexity of 1d - omp is dominated by the step and its complexity is .this section develops the 2d - omp algorithm whose main idea is to represent 2d signal as a weighted sum of 2d atoms that are selected from an over - complete dictionary .we first redefine the concepts of atom , dictionary , and projection for 2d signals .then we give the 2d - omp algorithm .we reveal the equivalence of 2d - omp to 1d - omp and compare the complexity and memory usage of 2d - omp with that of 1d - omp .let be a 2d signal that is -sparse in domain and .let , where is the -th column of .we redefine dictionary , atom , and projection as follows . in the 2d - omp, the dictionary contains atoms and each atom is an matrix . let be the ( , )-th atom , then is the outer product of and now can be represented by the weighted sum of , i.e. the projection of onto is where and is the frobenius norm of , i.e. [ alg:2d_omp ] * input * : * : sampling matrix * : sample * : sparsity level * output * : * : reconstruction of the ideal signal * variable * : * : residual * , : set of the coordinates of atoms that are allowed to be selected in the future , for row indices and for column indices * initialization * : * * , algorithm [ alg:2d_omp ] gives main steps of 2d - omp . to implement the 2d - omp algorithm , we also need two auxiliary variables .first , to avoid atom reselection , set is defined to record the coordinates of those atoms that are allowed to be selected in the future ( excluding those already selected atoms ) , where for row indices and for column indices .second , is defined to hold the residual after removing the selected atoms from .initially , is set to . then at each iteration ,the decoder first searches for the best matched atom in the dictionary and then renews the weights for all the already selected atoms via the least squares . 
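the search step just described never needs the kronecker dictionary explicitly : all projections can be obtained with two small matrix products , and the result can be checked against the 1d view on the stretched vectors . a minimal sketch , with names of our own choosing :

```python
import numpy as np

def project_2d(Phi, R):
    # projections of the residual R onto every 2d atom phi_i phi_j^T at once:
    # H[i, j] = phi_i^T R phi_j, normalized by the atom's frobenius norm
    H = Phi.T @ R @ Phi
    norms = np.linalg.norm(Phi, axis=0)
    return H / np.outer(norms, norms)

# consistency check against the 1d form on the stretched residual
rng = np.random.default_rng(0)
Phi = rng.standard_normal((8, 16))
R = rng.standard_normal((8, 8))
H = project_2d(Phi, R)
D = np.kron(Phi, Phi)                       # 1d dictionary of stretched atoms
proj_1d = (D.T @ R.reshape(-1)) / np.linalg.norm(D, axis=0)
assert np.allclose(H.reshape(-1), proj_1d)  # the same atom is selected either way
```

the two matrix products cost far less than correlating the stretched residual with all stretched atoms , which is the source of the savings quantified below .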
the weighted sum of selected atoms constructs an approximation to .let we model the problem as finding the optimal that minimizes the frobenius norm of , which is in fact equivalent to the least squares problem .as , where is the trace of a matrix , the problem is equivalent to using ( [ eq : r ] ) and , we have where and when takes the minimum , there must be hence it is easy to get let , then similarly , for , because we have obviously , the -th atom of is compared with ( [ eq:2d_atom ] ) , it can be found that is just the 1d stretched vector of .hence , the frobenius norm of will equal the -norm of .then we prove that the projection of onto equals the projection of onto .obviously , hence it means : at each iteration of 1d - omp and 2d - omp , the same atom will be selected . finally , as is the 1d stretched vector of , .hence , the least squares in 1d - omp and 2d - omp will output exactly the same results ( in fact , it can be easily proved that and ) .based on the above analysis , we draw the conclusion that 2d - omp is equivalent to 1d - omp .below we analyze the complexity of 2d - omp step by step .let be an matrix whose -th element is .then this step can be implemented by .the complexity of this step is dominated by matrix - matrix multiplication and matrix - matrix multiplication .as , the complexity of this step is . since there are atoms and , the complexity of this step is approximately , negligible compared with the step . from ( [ eq : h ]) , it can be seen that the complexity to calculate is . since , the complexity to calculate is .the complexity to calculate and depends on . for ,the complexity of this step is negligible compared with the step .the complexity of this step depends on . for ,the complexity of this step is negligible compared with the step .based on the above analysis , we draw the conclusion that the complexity of 2d - omp is , roughly of that of 1d - omp . for 1d - omp ,an matrix is needed to hold , so the memory usage is , while for 2d - omp , is replaced by , so the memory usage is reduced to .we have written both 2d - omp and 1d - omp algorithms in matlab .we present herein the results under three typical settings , i.e. , , and . for each setting, we increase sparsity level from 8 to 16 .the transform matrix is 2d discrete cosine transform ( dct ) matrix . the sensing matrix is formed by sampling independent and identically - distributed ( i.i.d . )entries from standard gaussian distribution by using the function .2d sparse signal is obtained by using the function with density .we run the matlab codes on intel(r ) core(tm ) i7 cpu with 12 gb memory and collect the total running time of trials for 1d - omp and 2d - omp respectively . because two algorithms output exactly the same results , only the speedup of 2d - omp over 1d - omp with respect to is reported in fig .[ fig : speedup ] . from fig .[ fig : speedup ] , we can draw two conclusions : 1 . as increases , the speedup of 2d - omp over 1d - omp descends gradually .this is because for small , the complexity of 2d - omp and 1d - omp is dominated by the _ project _ step . as increases , the complexity of other steps will weight heavier . especially , at the _ renew weights _step , the complexity of matrix inverse is , which will ascend quickly as increases .2 . 
as increases , the speedup of 2d - omp over 1d - omp becomes more significant . when , the speedup of 2d - omp over 1d - omp ranges from 10 to 11 times , while when , the speedup of 2d - omp over 1d - omp ranges from 32 to 35 times . this is because the speedup of 2d - omp over 1d - omp comes mainly from the _ project _ step , while at other steps 2d - omp shows little superiority over 1d - omp . as increases , the complexity of the _ project _ step weighs heavier , which explains the above phenomenon . for 2d sparse signal recovery , this paper develops the 2d - omp algorithm . we prove that 2d - omp is equivalent to 1d - omp , but it reduces recovery complexity and memory usage . hence , 2d - omp can be used as an alternative to 1d - omp in such scenarios as compressive imaging , image compression , etc . following the deduction in this paper , the extension of 2d - omp to higher dimensional omp is straightforward . for example , by utilizing 3d separable sampling , 3d - omp can be obtained by defining each atom as a 3d matrix . then at each iteration , the decoder projects the 3d sample matrix onto 3d atoms to select the best matched atom , and then renews the weights for all the already selected atoms via the least squares . 3d - omp can find its use in hyperspectral image compression .
recovery algorithms play a key role in compressive sampling ( cs ) . most current cs recovery algorithms are originally designed for one - dimensional ( 1d ) signals , while many practical signals are two - dimensional ( 2d ) . by utilizing 2d separable sampling , the 2d signal recovery problem can be converted into a 1d signal recovery problem , so that ordinary 1d recovery algorithms , e.g. orthogonal matching pursuit ( omp ) , can be applied directly . however , even with 2d separable sampling , the memory usage and complexity at the decoder are still high . this paper develops a novel recovery algorithm called 2d - omp , which is an extension of 1d - omp . in the 2d - omp , each atom in the dictionary is a matrix . at each iteration , the decoder projects the sample matrix onto 2d atoms to select the best matched atom , and then renews the weights for all the already selected atoms via the least squares . we show that 2d - omp is in fact equivalent to 1d - omp , but it reduces recovery complexity and memory usage significantly . what s more important , by utilizing the same methodology used in this paper , one can even obtain higher dimensional omp ( say 3d - omp , etc . ) with ease . compressive sampling , 2d sparse signal , recovery algorithm , orthogonal matching pursuit .
neural networks are not constant structures . modifications in neural nets lead to changes in the mapping of input signals to outputs . the most explored type of neural plasticity is synaptic plasticity . synaptic plasticity deals with modifications of the connection strength between neurons . it is an activity dependent process , and synaptic efficacy modifications depend on the activity of postsynaptic and presynaptic neurons . the second type of plasticity is known as structural plasticity . structural plasticity deals with the anatomical structure of neurons and the connections between neurons . the anatomical structures of neurons are subject to variation , and new connections between neurons can be established or deleted in the course of development of the neural net . structural plasticity , as well as synaptic plasticity , is a permanent and activity dependent process . because it is an activity dependent process , it can be the basis of learning . activity - dependent modifications of neural circuits lead to changes in the activity pattern of the whole network . the geometrical properties of neurons should be considered in theoretical investigations of the structural plasticity of neural networks . the network must be considered as a system of neurons in a three dimensional neuropil where neurons communicate with each other , the geometrical properties of the neurons and of the neuropil shaping how neurons become constituent elements of networks . the neural networks generate activity patterns which depend on the intrinsic state of individual neurons and external influences on the system . this activity influences the structural plasticity process . an external signal changes the activity pattern and leads to the creation of new connections , i.e. , the network will be modified according to external information . the neurons without connections can be considered a system of neurons in three dimensional space . at the beginning , the neurons interact only by emission of chemicals . even if no connections between neurons exist , it may still be considered a system . with time , new connections between neurons appear , the system will possess new properties , and neurons can influence each other s activity .
in vivo and in vitro , neurons self - organize into networks . mature networks can be considered a result of an activity - dependent dynamical wiring process . single neurons have properties that drive their assembly into networks . calcium plays the most important role in the wiring process . neuronal activity , via voltage - dependent calcium channels , provides an influx of calcium through the membrane . intracellular calcium activates and regulates different intracellular processes which influence growth cone movement , axon elongation , neurotrophin release and synaptogenesis , among other molecular mechanisms . the molecular mechanisms of most of these processes remain unclear and are the subject of many experimental studies . the wiring process is controlled by intrinsic neuronal activity , and neural activity is caused by sensory experience ( external signals ) . external signals regulate neuronal activity and lead to the formation of wiring between neurons . this adaptation leads to different neural activity even under constant sensory input , enabling the building of more complex representations and leading to progressive cognitive development . three levels of neuronal response to external signals can be considered : ( i ) induced spikes , ( ii ) synaptic plasticity and ( iii ) structural plasticity . these processes have different time scales , namely : spikes , milliseconds ; synaptic plasticity , hours ; structural plasticity , days . in the present paper we take into account the first and third levels of consideration . in the future we plan to include in our model the second level . in this paper we present a mathematical model of the neural activity underlying the development of neural networks . our diffusion model is grounded in experimentally supported physiological and anatomical data . numerical simulations show the neural network growth , and how neural activity controls this process . the results may be used for experimental verification of the neural network growth and for designing the conditions of new experiments . some parameters of our model have no experimental basis ( e.g. , the dependence of the amount of agm released by neurons on activity ) and may need to be verified in future experiments . models of axon guidance have been considered in detail by hentschel and van ooyen . three types of diffusible molecules have been considered : a chemoattractant released by target cells , a chemoattractant released by the axonal growth cones and a chemorepellant released by axonal growth cones . two cases were considered , namely ( i ) diffusible signals only , and ( ii ) contact interactions with diffusible signals . it was shown that the target - derived chemoattractant controls axon guidance , while the axon - derived chemoattractant and chemorepellant control bundling and debundling . the dynamics of the chemical concentrations are described by a standard diffusion equation . every chemical has its own diffusion constant , release rate and degradation parameter . the release rate constants of the chemicals have no dependence on the state of the cell which releases them .
in the framework of the model , the growth cones respond to the concentration gradients of chemicals , and the total response of the growth cone is the result of two attractive and one repulsive concentration gradients . our model , in the part concerning agm s diffusion and growth cone movement , is based on the above mentioned models . for simplicity we consider only one type of chemoattractant . the movement of the growth cone is described by more complicated equations . the main difference is that the release of chemoattractant is controlled by the activity of the target cells , and the growth rate of the growth cones depends on the activity state of the neuron with the growing axon . our model can be considered , with respect to neuron activity , as a generalization of the model presented in . some models of activity - dependent neural network development have already been considered , first of all the model suggested by van ooyen and collaborators . the model consists of initially disconnected neurons , modeled as neuritic fields , which are organized into a network under the influence of their internal activity . the growth of neurites is connected with the ca concentration inside the cell . therefore , the growth of neurites depends on their own level of activity , and the neurons become connected when their fields overlap . according to this model , a high level of activity causes neurites to retract , whereas a low level allows further outgrowth . from a mathematical point of view , they used a system of coupled differential equations for neural activity and connection strength . they showed that the spatial distribution of the cells can create connectivity patterns in which hysteresis appears and complex periodic behavior takes place . segev and collaborators presented a model that incorporates stationary units representing the cells somata and communicating walkers representing the growth cones . the dynamics of the walker s internal energy is controlled by the soma , and the walkers migrate in response to chemorepulsive and chemoattractive glues emitted by the somata and communicate with each other and with the soma by means of chemotactic feedback . our model is based on axon guidance by extracellular signals released by other neurons . our consideration is based on the diffusion of the agm . we have already considered this approach before in a simple form , with binary neurons and without detailed consideration of the process of diffusion . also , the parameters of that model had no connection with reality and were chosen from a mathematical point of view to obtain suitable results . the main novelty in our approach is that we consider the activity - dependence of the processes underlying neural network growth in more detail . axon growth requires the interplay of many processes : producing cytoplasmic and membrane elements , shipping these building blocks to the right compartment , inserting them into the growing axon , and coordinating all these processes .
the tips of growing axons are equipped with a very specialized structure , called the growth cone , which is specialized for generating forward tension on the elongating axon . the growth cone s cytoskeleton consists of microtubules , mostly located in the central domain of the growth cone , and actin filaments , located in the lamellipodia and finger - like structures ( filopodia ) . actin monomers in the peripheral domain undergo constitutive filament assembly , elongating the lamellipodia and filopodia and pushing the growth cone membrane in the forward direction . simultaneously , actin filaments are dragged back into the central domain by myosin - like motors , where the actin filaments depolymerize . the advance of the peripheral domain of the cone is determined by the balance of anterograde polymerization and retrograde retraction of actin . if the balance is shifted toward forward protrusion , the decrease in the retrograde flow of actin filaments is accompanied by microtubule polymerization into the peripheral domain , moving the central domain of the growth cone forward and elongating the axon . a great variety of extracellular signals have been found to regulate axon growth . extracellular guidance signals can either attract or repel growth cones , and can operate either at close range or over a distance . in the literature , a family of chemicals has been found which in mammals includes nerve growth factor ( ngf ) , brain - derived neurotrophic factor ( bdnf ) , neurotrophins , netrins , slits , semaphorins and ephrins . diffusible cues are netrins , neurotrophins , ngf and bdnf , among others . the neuronal growth cone uses surface receptors to sense these cues and to transduce guidance information to the cellular machinery that mediates growth and turning responses . we will call these guidance factors axon guidance molecules ( agm ) . recent studies have shown that electrical activity is required for growing axons to reach their appropriate target area .
in neurons in culture , the changes in growth cone motility after electrical stimulation are accompanied by an influx of calcium through voltage - sensitive calcium channels . the effects of electrical activity and increases in intracellular calcium concentration on growth cone morphology are not the same for all neurons . some growth cones collapse , some show greater motility and others do not respond at all , depending on the type of neuron , the type of neurite ( axon versus dendrite ) and environmental factors . it has been found that there are several different signals and signal transduction mechanisms that ultimately result in alterations of the cytoskeletal structure and growth cone motility . modern experimental investigations show that ca signals play an essential role in controlling growth cone guidance . the ca concentration in growth cones is controlled by various channels , pumps and buffers . guidance cues cause the opening of plasma membrane calcium channels . one of the best studied plasma membrane channel types on growth cones is the voltage - operated calcium channel . guidance of axons to their targets probably involves at least three ca dependent effects on motility , in particular growth promotion , growth inhibition or collapse , and directional steering ( turning ) . global ca signals can regulate membrane dynamics and cytoskeletal elements to control elongation , whereas localized ca signals can cause asymmetric activation of downstream effector proteins to steer the growth cone . a small ca gradient produced by modest ca influx or release induces repulsion , whereas a larger ca gradient produced by greater ca influx in combination with release induces attractive turning . a set of experimental results leads to the `` ca set - point '' hypothesis : normal growth cone motility depends on an optimal range of [ ca ] , and neurite growth stops above or below this optimal range . therefore , ca regulation of growth cone motility depends on both the spatio - temporal patterns of ca signals and the internal state of the neuron , which is modulated by other signals received by the neuron . thus , a rise in [ ca ] in the growth cone activates numerous target proteins ( cam , camkii , myosin , calpain , calcineurin etc .
) and cellular machinery which regulates actin and microtubule dynamics to provide growth cone extension and steering . when a growth cone guided by agm reaches an appropriate target cell , the synaptogenesis and synapse refinement processes start . synapse formation is controlled by dynamic interactions between various genes and their encoded proteins , and occurs throughout development to generate synapse specificity . the modern version of dale s principle suggests that all of a particular neuron s terminals release the same set of neurotransmitters . nevertheless , it is well known that a neuron can store and presumably release different sets of transmitters from individual axon endings . the neurotransmitter choice of the neurons depends on programmed and environmental factors , and this process is neither limited by a critical period nor restricted by their insertion in a network . calcium transient patterns play a key role in the differentiation of neural precursor cells , and their frequency may specify neuronal morphology and the acquisition of the neurotransmitter phenotype . neuronal activity also plays a main role in the neuronal connection establishment and refinement processes . the electrical activity of neurons can regulate the choice of neurotransmitter in cultured neurons through calcium influx and can differentially affect the regulation of transmitter expression . certain neurons choose the neurotransmitter which they use in an activity - dependent manner , and different trophic factors are involved in this phenotype differentiation during development . regulation of transmitter expression occurs in a homeostatic manner . suppression of activity leads to an increased number of neurons expressing excitatory transmitters and a decreased number of neurons expressing inhibitory transmitters , and vice versa . based on the above discussion , we assume that each neuron s axon can release different neurotransmitters and can establish different types of synaptic connections ( inhibitory or excitatory ) . the type of synapse can be determined by the state of the presynaptic and / or postsynaptic neuron . for simplification , we assumed that the type of a synaptic connection between cells depends on the state of the postsynaptic cell during the synaptogenesis process . the release of some neurotrophic factors can be triggered by external stimulation and the neuron s electrical activity . the activity dependent release of agm s is a key assumption in our model . we doubt there is complete proof of activity dependent agm release , and we consider this point a hypothesis . let us proceed to the description of the model adopted here . the concentration of agm , , at point at the moment , released by the -th neuron at point , can be found as the solution of the equation here , and are the agm diffusion and degradation coefficients in the intercellular medium , respectively . the source is the amount of agm per unit time . it is well - known that the solution of this equation has the following form , where is the initial distribution of the concentration , is the dimension of the problem and the green function has the standard gaussian form . we suppose that at the initial time there is no agm and the process is managed by the source , which is concentrated at the -th neuron . the parameter describes the amount of agm released per unit second . the describes the activity of the neuron and .
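the displayed formulas in this passage did not survive extraction ; from the verbal description ( diffusion with degradation , a point source at the neuron , a gaussian green function in dimensions , and a source proportional to activity ) , a presumable reconstruction , with symbol names of our own choosing , is $$\frac{\partial c_i(\mathbf{r},t)}{\partial t}=D\,\nabla^{2}c_i(\mathbf{r},t)-\gamma\,c_i(\mathbf{r},t)+S_i(t)\,\delta(\mathbf{r}-\mathbf{r}_i),\qquad G(\mathbf{r},t)=\frac{1}{(4\pi D t)^{d/2}}\,\exp\!\left(-\frac{|\mathbf{r}|^{2}}{4Dt}-\gamma t\right),$$ so that , with zero initial concentration and a source proportional to the neuron s activity , $S_i(t)=q\,a_i(t)$ , the concentration takes the convolution form $c_i(\mathbf{r},t)=q\int_{0}^{t}a_i(t')\,G(\mathbf{r}-\mathbf{r}_i,t-t')\,dt'$ .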
Using this information we obtain the following explicit form of the concentration,
$$c_{j}(\mathbf{r},t)=q\int_{0}^{t}a_{j}(t')\,G(\mathbf{r}-\mathbf{r}_{j},t-t')\,dt'.$$
For the description of neural electrical activity several models have been developed. For simplicity we take the activity to be subject to the equation
$$\tau\,\frac{da_{i}}{dt}=-a_{i}+H\!\left(\sum_{j=1}^{N}w_{ij}\,a_{j}+I_{i}(t)\right),$$
where the functional $H$ has step form, with $H$ a step function, and $N$ is the number of neurons. The matrix with elements $w_{ij}$ describes the influence of the $j$-th neuron on the $i$-th neuron; $w_{ij}>0$ means an excitatory and $w_{ij}<0$ an inhibitory connection, and for $w_{ij}=0$ there is no influence. The functions $I_{i}(t)$ describe the external sources which excite the $i$-th neuron. Now we define the vector $\mathbf{r}_{i}^{a}(t)$ which describes the tip of the axon that started to grow from the $i$-th neuron. It is subject to the equation
$$\frac{d\mathbf{r}_{i}^{a}}{dt}=\lambda\,\Theta(a_{th}-a_{i})\,\nabla c(\mathbf{r}_{i}^{a},t),$$
where the gating functional $\Theta$ has the form of a step function, too. This functional is, in fact, a smooth function of activity; in our model we adopt the simplest form, a step function with threshold parameter $a_{th}$. It means that the axon is quiescent if the activity of its neuron is greater than the threshold value. The parameter $\lambda$ is a coefficient describing the axon's sensitivity and motility. At the initial moment we set the matrix $w_{ij}=0$, which means no connections between neurons. Then we solve the above equations ([eqaxon]), ([eqactiv]), ([eqconc]), assuming that some neurons are excited by an external force, that is, assuming that some of the $I_{i}$ are not zero. We obtain the position of the axon's tip at each moment. If the $i$-th axon makes a connection with the $j$-th neuron, we set $w_{ji}\neq 0$.

In this section we present the results of the numerical simulation of the model considered above. We simulate two- and three-dimensional realizations of the model. For simplicity we consider a network of neurons in the form of a lattice with a fixed increment (the distance between neurons). Each neuron has a single growth cone, which we consider as its axon. Initially all growth cones are located near their somata, and all synaptic weights are equal to zero ($w_{ij}=0$), which means no connections between neurons. The system of differential equations ([eqaxon]), ([eqactiv]), ([eqconc]) was integrated simultaneously by the Euler method. The parameters used in the model were taken from different experimental findings, and we list them below.

1. The AGM diffusion coefficient $D$.
2. The amount of AGM released per unit time, $q$.
3. The relaxation time of activity, $\tau$.
4. The coefficient $\lambda$ describing the axon's sensitivity and motility.

The threshold parameter $a_{th}$ is exclusive to our model, and for this reason there is no experimental value that can be assigned to it. We set the threshold parameter to obtain good agreement with experimental observations of network growth. The degradation coefficient $k$ may be found, in principle, from specific experiments. Unfortunately, there is no information about this parameter, and we chose $k$ to be in agreement with the observed network growth. This parameter regulates the rate of the axon's growth: the greater $k$, the smaller the rate of axon growth. There is another threshold parameter of activity, $a^{*}$, which defines the sort of connection, excitatory or inhibitory. For homeostasis we take this parameter to be fixed: if the activity of the postsynaptic neuron is greater than $a^{*}$ we set an inhibitory connection, and vice versa. This is a crude approximation, but it is enough to describe the development of networks of neurons.
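To illustrate how the three coupled pieces (activity, AGM release, growth cone motion) interact, here is a compact simulation sketch in the spirit of the Euler scheme described above. It is a sketch under stated assumptions: the lattice size, thresholds and all numerical values are placeholders rather than the paper's calibrated parameters, the step-function drive and the homeostatic sign rule follow the conventions just described, and the release-history truncation is a purely practical shortcut.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter values, not the paper's calibrated ones.
D, k, q = 1.0, 0.1, 1.0        # AGM diffusion, degradation, release rate
tau_a, lam = 1.0, 0.5          # activity relaxation time, growth cone motility
a_th, a_star = 0.5, 0.5        # growth threshold a_th, synapse-sign threshold a*
dt, n_steps = 0.01, 2000

side = 4                                        # 4x4 lattice of somata
soma = np.array([[i, j] for i in range(side) for j in range(side)], float)
N = len(soma)
tip = soma.copy()                               # each axon tip starts at its soma
a = np.zeros(N)                                 # neuronal activities
w = np.zeros((N, N))                            # synaptic weights (0 = none)
I = np.zeros(N); I[0] = 1.0                     # external drive to one neuron
connected = np.zeros(N, bool)                   # each axon wires at most once
releases = []                                   # AGM release history: (time, activities)

def grad_c(r, t_now):
    """Gradient of the total AGM concentration at point r (2D Green function)."""
    g = np.zeros(2)
    for t_rel, acts in releases[-200:]:         # truncate old, decayed releases
        s = t_now - t_rel
        d = r - soma                            # vectors soma -> r, shape (N, 2)
        r2 = np.sum(d * d, axis=1)
        G = np.exp(-r2 / (4 * D * s) - k * s) / (4 * np.pi * D * s)
        g += q * dt * ((-1.0 / (2 * D * s)) * (acts * G)) @ d
    return g

for step in range(1, n_steps + 1):
    t = step * dt
    releases.append((t - dt, a.copy()))
    # step-function (threshold) activity dynamics
    a += dt / tau_a * (-a + np.heaviside(w @ a + I, 0.0))
    for i in range(N):
        if a[i] < a_th and not connected[i]:    # quiescent axons grow
            tip[i] += dt * lam * grad_c(tip[i], t)
            j = int(np.argmin(np.sum((soma - tip[i]) ** 2, axis=1)))
            if j != i and np.sum((soma[j] - tip[i]) ** 2) < 0.01:
                w[j, i] = -1.0 if a[j] > a_star else 1.0  # homeostatic sign rule
                connected[i] = True
```

With one externally driven neuron, the quiescent neighbors climb the AGM gradient toward it, which mirrors the qualitative behavior reported below for Fig. [d2]a.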
As an example we present in Figs. [d2] and [d3] the neural networks which were obtained by using the following training pattern sequence. (We show snapshots of the dynamic process; the full animation may be found at http://neurowiring.narod.ru/video.html.)

Neuron A was active during an initial time period, while the activity of the other neurons in this period was zero. In Fig. [d2]a we show the snapshot of the system at the end of this period. We observe that the growth cones of the two nearest neurons grow toward the active neuron A. After some time the growth cones reached the active neuron A and synaptic connections appeared. The growth cones of the distant neurons do not respond, because the concentration of the AGM far from the active neuron is negligible. In the next interval the central neuron was active, and neuron B was active in the interval after that. In Fig. [d2]b we reproduce the snapshot of the system at the end of the second interval. We note that the growth cones of the neurons close to the central neuron had not yet reached it when neuron B became active; the consequence of this is the curved form of the axons. From the following moment neuron C plays the leading role due to its activity (see Figs. [d2]c and [d2]d, which show the snapshots of the system at two later moments). We observe that new connections to neuron B appeared, and the growth cones which did not reach neuron B turned toward the active neuron C. The growth cone of the active neuron C, which started to grow at the earliest moment, is quiescent during the period of its activity (see Figs. [d2]c and [d2]d); mathematically this is described by the threshold function ([threshold]). We observe that the topology of the neural network strongly depends on the sequence of neuronal activity, which here is determined by the external influence. In a real system the growth of the connections in a complex connection structure will depend on internal oscillations of activity rather than on external signals; the external influence is most important at the initial time, during the early development of the network.

In the three-dimensional case the situation is close to the two-dimensional case. The first neuron, in the upper corner, was active in the initial period. In the next period neuron A was active; the snapshot for this moment is shown in Fig. [d3]a. In the following period neuron B was active (see Fig. [d3]b), then neuron C (see Fig. [d3]c), and finally neuron D became active (see Fig. [d3]d). We observe network development close to the two-dimensional case: the topology of the network depends on the sequence of neuronal activity.

In the sections above we developed a new theoretical approach to describe the growth of a network of neurons. The model is based on the diffusion of the AGM, whose dynamics satisfies the diffusion equation. The rate of the growth cone movement is proportional to the concentration gradient, as it should be. The peculiarity of the model is that the activity of a neuron governs the release of the AGM, which in turn influences axon growth. This process leads to the appearance of new connections in the system and to network development. In the framework of the model, with realistic parameters obtained from experiments, we obtain a correct picture of network growth (see Figs. [d2] and [d3]) on the relevant temporal scale. In particular, Fig.
2 shows that some neurons have only long-range connections, without any local connections. This shows how nested neural networks can be formed in the cerebral cortex. Our expectations about the network topology and the direction of axon growth were realized within the framework of the model.

We confined ourselves to small networks for two reasons. First of all, we would like to consider in detail the dynamics of axon growth in dependence on the neurons' activity. Second, real systems contain a huge number of neurons, and the present calculations are limited by the power of our computers. In the future we intend to carry out numerical simulations with more realistic numbers of neurons and axons.

Real cortical networks have a more complex structure compared with that obtained in the framework of our model. Neural network development is controlled by many factors which are beyond the scope of our model, namely cell adhesion molecules, multiple guidance factors, etc. In the model presented here we considered only one of them, axon guidance by a single diffusible factor, which is, in fact, the most important factor in activity-dependent development. In our model axons have only one branch and a single growth cone; axon branching is not taken into account. Cortical neurons' axons have richly branched structures, and different branches of a single axon grow toward different target neurons. In the model each neuron can make a connection with only one target neuron. Alongside the guidance of growth cones by chemoattraction, the process is also controlled by chemorepellents. Different types of neurons have different properties, and the cortex consists of a large number of different neurons; in the model presented here we take into account only one type of neuron. Furthermore, the guidance of a particular growth cone to its target is not controlled by a single AGM: in different parts of the growth cone trajectory, different AGMs take part in axon guidance. Growth cones can travel long distances, resulting in long-range intra-cortical and cortico-cortical connections. Real neurons also possess strongly branched dendritic structures, and dendrite growth is a very complex process governed by many factors. In our model the axons' growth cones make synaptic connections directly onto a soma, whereas most synaptic connections in the cortex are on dendrites, which are absent in our model. To obtain a more precise picture of cortex development we should also take into account the morphological properties of neurons. We plan to include dendrites and the morphological properties of neurons in our model in future investigations. The theoretical framework developed here can be used to describe the development of a particular set of neurons constituting a neural system.

In this paper we show that the chemotactic guidance of growth cones by AGM released by neurons can be a basis for the growth, topology and development of neural networks. The concentration of AGM released by individual cells provides a basis for correct axon guidance at an appropriate rate. All parameters describing the model are taken from different real experiments. This model can be used to describe the growth of a developing neural network. It is well known that the wiring of neural networks relies on chemotaxis-based axon guidance. The connection structure between neurons is very complex in real networks; such a complex structure can appear only through the switching on and off of the chemical signals that regulate the growth of axons.
Using this model we conclude that these processes can form a basis for learning, because the creation of new connections leads to an increase in the structural complexity of the network. Another application of this model is in the treatment of damaged neural tissue by stem cells: using the model we may describe the processes of integration of stem cells into an existing network. The model can also be used for understanding the processes which take place during deep brain stimulation by electrical current. We showed that the electrical stimulation of individual cells leads to alterations in AGM release and in growth cone guidance, and deep brain stimulation can therefore change the network structure. Another application of our model can be in the modeling of the imprinting of memories in cultured neural networks.

Sossin WS, Sweet-Cordero A, Scheller RH, Dale's hypothesis revisited: different neuropeptides derived from a common prohormone are targeted to different processes, Proc. Natl. Acad. Sci. USA 87:4845-4848, 1990.
Ciccolini F, Collins TJ, Sudhoelter J, Lipp P, Berridge MJ, Local and global spontaneous calcium events regulate neurite outgrowth and onset of GABAergic phenotype during neural precursor differentiation, J. Neurosci. 23:103-111, 2003.
It is currently accepted that cortical maps are dynamic constructions that are altered in response to external input. Experience-dependent structural changes in cortical microcircuits lead to changes of activity, i.e. to changes in the information encoded. Specific patterns of external stimulation can lead to the creation of new synaptic connections between neurons. The calcium influxes controlled by neuronal activity regulate the release of neurotrophic factors by neurons, growth cone movement and synapse differentiation in developing neural systems. We propose a model for the description and investigation of the activity-dependent development of neural networks. The dynamics of the network parameters (activity, diffusion of axon guidance chemicals, growth cone position) is described by a closed set of differential equations. The model presented here describes the development of neural networks under the assumption of activity-dependent release of axon guidance molecules. Numerical simulation shows that morphless neurons compromise the development of cortical connectivity.
We analyse the strong numerical approximation of an Itô stochastic partial differential equation defined on a domain $\Lambda$. Boundary conditions on the domain are typically Neumann, Dirichlet or some mixed conditions. We consider
$$dX=\left(AX+F(X)\right)dt+B(X)\,dW,\qquad X(0)=X_{0},\qquad t>0,$$
which we refer to as ([adr]), in a Hilbert space $H$. Here $A$ is the generator of an analytic semigroup, not necessarily self-adjoint. The functions $F$ and $B$ are nonlinear functions of $X$, and the noise term $W$ is a $Q$-Wiener process, defined on a filtered probability space, that is white in time. The noise can be represented as a series in the eigenfunctions of the covariance operator, given by
$$W(x,t)=\sum_{i}\sqrt{q_{i}}\,e_{i}(x)\,\beta_{i}(t),$$
where $q_{i}$, $e_{i}$ are the eigenvalues and eigenfunctions of the covariance operator $Q$ and the $\beta_{i}$ are independent and identically distributed standard Brownian motions. Precise assumptions on $A$, $F$, $B$ and $W$ are given in Section [scheme], and under these types of technical assumptions it is well known (see the references) that the unique mild solution of ([adr]) is given by
$$X(t)=e^{tA}X_{0}+\int_{0}^{t}e^{(t-s)A}F(X(s))\,ds+\int_{0}^{t}e^{(t-s)A}B(X(s))\,dW(s).$$
Typical examples of the above type of equation are stochastic (advection) reaction-diffusion equations arising, for example, in pattern formation in physics and mathematical biology. We illustrate our work with both a simple reaction-diffusion equation, where we can construct an exact solution, and a stochastic advection-reaction-diffusion equation, where $D$ is the diffusion tensor, $\mathbf{q}$ is the Darcy velocity field and the reaction coefficient is a constant depending on the reaction function.

The study of numerical solutions of SPDEs is an active research area and there is an extensive literature on numerical methods for SPDEs of the type ([adr]). For temporal discretizations the linear implicit Euler scheme is often used; spatial discretizations are usually achieved with the finite element, finite difference or spectral Galerkin methods. In the special case with additive noise, new schemes using linear functionals of the noise have recently been considered; the finite element method is used for the spatial discretization in some of these works and the spectral Galerkin method in others. Our schemes here are based on using the finite element method (or finite volume method) for space discretization, so that we gain the flexibility of these methods to deal with complex boundary conditions, and we can apply well developed techniques such as upwinding to deal with advection. One of our schemes is the non-diagonal version of a stochastic scheme presented previously, and the other is the extension of deterministic exponential time differencing of order one to a stochastic exponential scheme. Compared to the schemes presented previously for additive noise, the results here are more general, since the linear operator does not need to be self-adjoint and we do not need information about the eigenvalues and eigenfunctions of the linear operator. Furthermore, we examine here convergence for Itô multiplicative noise for the exponential integrators, which has not so far been considered for SPDEs for these integrators. As in previous work, the schemes presented here are based on exponential matrix computation, which is a notorious problem in numerical analysis. However, new developments for both Leja point and Krylov subspace techniques have led to efficient methods for computing matrix exponentials. The convergence proof given below is similar to one for a finite element discretization in space and a backward Euler based method in time. The paper is organised as follows.
In Section [scheme] we present the two numerical schemes based on exponential integrators and our assumptions on the data. We also present and comment on our convergence results. Section [proof] contains the proofs of our convergence theorems. We conclude in Section [simulation] by presenting some simulations and discussing the implementation of these methods.

Let us start by presenting briefly the notation for the main function spaces and norms that we use in the paper. We denote by $\|\cdot\|$ the norm associated with the inner product of the Hilbert space $H$. For Banach spaces $V$ and $W$ we denote by $\|\cdot\|_{V}$ the norm of $V$, by $L(V,W)$ the set of bounded linear mappings from $V$ to $W$, and we abbreviate $L(V)=L(V,V)$. Let $Q$ be a trace class operator. We introduce the spaces and notation we need to define the $Q$-Wiener process. An operator $T$ is Hilbert-Schmidt if $\|T\|_{HS}^{2}=\sum_{i}\|Te_{i}\|^{2}<\infty$, where $(e_{i})$ is an orthonormal basis in $H$; the sum is independent of the choice of the orthonormal basis in $H$. We denote the space of Hilbert-Schmidt operators from $Q^{1/2}(H)$ to $H$ by $L_{2}^{0}$ and the corresponding norm by $\|T\|_{L_{2}^{0}}=\|TQ^{1/2}\|_{HS}$. For a predictable $L_{2}^{0}$-valued process $\phi$ we have the following equality, using the Itô isometry:
$$\mathbf{E}\left\Vert\int_{0}^{t}\phi(s)\,dW(s)\right\Vert^{2}=\int_{0}^{t}\mathbf{E}\,\Vert\phi(s)\Vert_{L_{2}^{0}}^{2}\,ds.$$
Let us give some assumptions required both for the existence and uniqueness of the solution of equation ([adr]) and for our convergence proofs below.

[assumptionn] The operator $A$ is the generator of an analytic semigroup $e^{tA}$. In the Banach space $\mathcal{D}((-A)^{\alpha/2})$, $\alpha\in\mathbb{R}$, we use the notation $\|v\|_{\alpha}=\|(-A)^{\alpha/2}v\|$. We recall some basic properties of the semigroup generated by $A$.

[prop1] *[Smoothing properties of the semigroup]* Let $\alpha>0$ and $0\leq\gamma\leq 1$; then there exists $C>0$ such that $\|(-A)^{\alpha}e^{tA}\|_{L(H)}\leq Ct^{-\alpha}$ and $\|(-A)^{-\gamma}(I-e^{tA})\|_{L(H)}\leq Ct^{\gamma}$ for $t>0$. In addition, $(-A)^{\alpha}e^{tA}=e^{tA}(-A)^{\alpha}$ on $\mathcal{D}((-A)^{\alpha})$.

We now describe in detail the assumptions that we make on the nonlinear terms $F$ and $B$ and on the noise.

[assumption1] *[Assumption on the drift term]* There exists a positive constant $L>0$ such that $F$ is continuous and satisfies the Lipschitz condition $\|F(u)-F(v)\|\leq L\|u-v\|$. As a consequence, there exists a constant $C>0$ such that $\|F(u)\|\leq C(1+\|u\|)$.

[assumption2] *[Assumption on the noise and the diffusion term]* The covariance operator $Q$ is of trace class, i.e. $\mathrm{tr}(Q)<\infty$, and there exists a positive constant $L>0$ such that $B$ is continuous and satisfies $\|B(u)-B(v)\|_{L_{2}^{0}}\leq L\|u-v\|$. As a consequence, there exists a constant $C>0$ such that $\|B(u)\|_{L_{2}^{0}}\leq C(1+\|u\|)$.

[existth] *[Existence and uniqueness]* Assume that the initial value $X_{0}$ is an $\mathcal{F}_{0}$-measurable random variable and that Assumption [assumption1] and Assumption [assumption2] are satisfied. Then there exists a mild solution of ([adr]), unique up to equivalence among the processes, satisfying the following: for any $p\geq 2$ there exists a constant $C>0$ such that
$$\underset{t\in[0,T]}{\sup}\,\mathbf{E}\,\Vert X(t)\Vert^{p}\leq C\left(1+\mathbf{E}\,\Vert X_{0}\Vert^{p}\right),$$
and there exists a constant $C_{1}>0$ such that
$$\mathbf{E}\,\underset{t\in[0,T]}{\sup}\,\Vert X(t)\Vert^{p}\leq C_{1}\left(1+\mathbf{E}\,\Vert X_{0}\Vert^{p}\right).$$
The following theorem gives a regularity result for the mild solution of ([adr]).

[newtheo] Assume that Assumption [assumption1] and Assumption [assumption2] hold, and let $X$ be the mild solution of ([adr]) given in ([eq1]). If $X_{0}\in L_{2}(\mathbb{D},\mathcal{D}((-A)^{\beta/2}))$, then $X(t)\in L_{2}(\mathbb{D},\mathcal{D}((-A)^{\beta/2}))$ for all $t\in[0,T]$. More results about the regularity of the mild solution can be found in the references.

We assume that the domain $\Lambda$ has a smooth boundary or is a convex polygon. In the sequel of this paper, for convenience of presentation, we take $A$ to be a second order operator, as this simplifies the convergence proof. More precisely, we consider the general second order semilinear parabolic stochastic partial differential equation given by ([adr]) (see
Section 4 of the references). Notice that, by the definitions of the operator $A$ and of $F$, for $u\in H$ we have $F(u)(x)=f(x,u(x))$, where $f$ is the associated Nemytskii operator. We introduce two spaces $\mathbb{H}$ and $V$, with $\mathbb{H}\subset V$, that depend on the choice of the boundary conditions, for the domain of the operator and the corresponding bilinear form. For Dirichlet boundary conditions we let $V=\mathbb{H}=H_{0}^{1}(\Lambda)$, and for Robin boundary conditions, Neumann boundary conditions being a special case, we take $V=H^{1}(\Lambda)$; see the references for details. The corresponding bilinear form $a(\cdot,\cdot)$ of $-A$ is given by the usual integral of the diffusion and advection contributions for Dirichlet and Neumann boundary conditions, and contains an additional boundary integral for Robin boundary conditions. According to Gårding's inequality (see the references), there exist two positive constants $c_{0}$ and $\lambda_{0}$ such that
$$a(v,v)\geq\lambda_{0}\,\|v\|_{V}^{2}-c_{0}\,\|v\|^{2},\qquad v\in V.$$
By adding and subtracting $c_{0}X\,dt$ on the right hand side of ([adr]), we have a new operator, which we still call $A$, corresponding to a new bilinear form, which we still call $a$, such that the following coercivity property holds:
$$a(v,v)\geq\lambda_{0}\,\|v\|_{V}^{2},\qquad v\in V.$$
Note that the expression of the nonlinear term has changed, as we include the term $c_{0}u$ in a new nonlinear term that we still denote by $F$. The coercivity property ([ellip]) implies that $A$ is sectorial on $L^{2}(\Lambda)$, i.e. there exist $C_{1}>0$ and $\theta\in(\tfrac{\pi}{2},\pi)$ such that $\|(\lambda I-A)^{-1}\|\leq C_{1}/|\lambda|$ for $\lambda$ in the corresponding sector (see the references). Then $A$ is the infinitesimal generator of a bounded analytic semigroup on $L^{2}(\Lambda)$ such that
$$e^{tA}=\frac{1}{2\pi i}\int_{\mathcal{C}}e^{t\lambda}(\lambda I-A)^{-1}\,d\lambda,$$
where $\mathcal{C}$ denotes a path that surrounds the spectrum of $A$. Functions in $\mathcal{D}(A)$ satisfy the boundary conditions, and with this in hand we can characterize the domain of the operator and have the norm equivalence $\|v\|_{1}\equiv\|v\|_{H^{1}(\Lambda)}$ for $v\in\mathcal{D}((-A)^{1/2})$.

We consider the discretization of the spatial domain by a finite element triangulation. Let $\mathcal{T}_{h}$ be a set of disjoint intervals of $\Lambda$ (for $d=1$), a triangulation of $\Lambda$ (for $d=2$) or a set of tetrahedra (for $d=3$), with maximal length $h$. Let $V_{h}\subset V$ denote the space of continuous functions that are piecewise linear over the triangulation $\mathcal{T}_{h}$. To discretize in space we introduce the $L^{2}$ projection $P_{h}$ from $H$ onto $V_{h}$, defined for $u\in H$ by $(P_{h}u,\chi)=(u,\chi)$ for all $\chi\in V_{h}$. The discrete operator $A_{h}:V_{h}\to V_{h}$ is defined by $(A_{h}\varphi,\chi)=-a(\varphi,\chi)$ for all $\varphi,\chi\in V_{h}$. Like the operator $A$, the discrete operator $A_{h}$ is also the generator of an analytic semigroup. The semi-discrete in space version of problem ([adr]) is to find the process $X^{h}(t)\in V_{h}$ such that $X^{h}(0)=P_{h}X_{0}$ and
$$dX^{h}=\left(A_{h}X^{h}+P_{h}F(X^{h})\right)dt+P_{h}B(X^{h})\,dW,\qquad t\in(0,T].$$
We take $C$ to be a generic constant that may depend on $T$ and other parameters, but not on $h$ or $\Delta t$. Our result is a strong convergence result in the root mean square norm for the schemes SETDM1 and SETDM0.

[th1] Let $X(t_{M})$ be the mild solution of equation ([adr]) at time $t_{M}$, represented by ([eq1]), and let $X_{M}^{h}$ be the numerical approximation through the scheme (for scheme SETDM1 or for scheme SETDM0). Assume that $X_{0}\in\mathcal{D}((-A)^{\beta/2})$ and that $\Delta t$ is small enough. Then strong error estimates hold, with rates in $h$ and $\Delta t$ determined by the regularity parameter $\beta$, for $\Delta t$ small enough.

In the proof of Theorem [th1], weaker assumptions on $F$ and $B$ are enough; we only need the Lipschitz conditions stated above.

[lemme1] Let $t\in[0,T]$ and suppose the assumptions above hold; then we have the following estimate.

*Proof* Consider the difference to be estimated and split it into separate terms, so that we may estimate each of the terms in turn. The estimation of the first terms is similar to the one in [Lemma 3.2] for additive noise.
Using Proposition [prop1] as in the references yields the first bounds. For the stochastic term, using the Itô isometry property, Assumption [assumption2] and Proposition [prop1] yields the desired estimate. Let us estimate the remaining term: the Itô isometry again, with the boundedness of the semigroup and Assumption [assumption2], yields the bound. Hence combining our estimates ends the proof of the lemma.

*Proof of Theorem [th1]* Recall the mild solution representation; we examine the error and follow the approach in the references. Let us estimate the first, deterministic term. Using the definition of the scheme, this term can be expanded. For the first contribution, if $\beta\geq 1$, Lemma [lemme1] yields the bound, and if $\beta<1$ we obtain the corresponding bound. For the next contribution, using Assumption [assumption1], the triangle inequality, the fact that the discrete semigroup and $P_{h}$ are bounded operators, together with Fubini's theorem, yields the bound. Once again using the Lipschitz condition, the triangle inequality and the boundedness of these operators, but now with Lemma [lemme2], yields the estimate; thus, for $\Delta t$ small enough, we obviously have the claimed bound by taking the appropriate exponent in Lemma [lemme2]. For the remaining contributions, using Lemma [lemme1] yields the bounds for $\Delta t$ small enough, both for $\beta\geq 1$ and for $\beta<1$. Thus combining the previous estimates yields the deterministic part of the error bound, for each range of $\beta$ and for $\Delta t$ small enough.

Let us estimate the stochastic term; we follow the same approach as in the references. Note that in the case of additive noise the estimation is straightforward, and smooth noise improves the accuracy (see the references and Figure [fig0022] in Section [simulation]). For multiplicative noise we estimate each term in turn. Using the Itô isometry, the boundedness of the discrete semigroup and $P_{h}$, and Assumption [assumption2] yields the first bound. For the next term, using Lemma [lemme2] and Assumption [assumption2] yields the bound; taking the appropriate exponent, with $\Delta t$ small enough, yields the estimate. For the following term, by Itô's isometry and Lemma [lemme1] we have the bound; indeed, using Lemma [lemme1] and Assumption [assumption2], for $\Delta t$ small enough, the required inequality holds. Since $A_{h}$ is the discrete form of $A$, we therefore have the analogous estimate in the discrete setting. The remaining terms follow using Lemma [lemme1] and our assumption on the noise. Thus combining the estimates related to the stochastic term yields the claimed bounds, for $\beta$ in the respective ranges and for $\Delta t$ small enough. Combining the estimates of the deterministic and stochastic parts and applying the discrete Gronwall lemma ends the proof.

We just give a sketch of the main steps for the second scheme. Recalling its definition, we can put the estimation of the error in the form of equation ([eq:iiiiii]). The estimates of the corresponding terms are the same as in Theorem [th1], with the extra term estimated as in Theorem 2.6 of the references; the estimation of the remaining term is the same as for the scheme SETDM0.

Efficient implementation of the schemes can be achieved by either the real fast Leja points technique or the Krylov subspace technique. In the first example we apply the scheme to a linear problem where we can construct the exact solution for the truncated noise. The finite element method is used for space discretization, and in this example we use the real fast Leja points technique to compute the exponential functions. We use noise with exponential correlation (see below), which is obviously a trace class noise. In the second example we apply the scheme to a nonlinear stochastic flow with multiplicative noise in a heterogeneous medium. To deal with the high Péclet number flow, we use the finite volume method for the space discretization. In this case we use the Krylov subspace technique to compute the exponential functions, implemented in the Matlab functions expv.m and phiv.m of the package Expokit. We compute the exponential matrix functions with the Krylov subspace technique with a fixed subspace dimension and absolute tolerance.
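For intuition about how such a time step is implemented, here is a minimal sketch of a stochastic exponential Euler step for a 1D semilinear equation with homogeneous Dirichlet conditions. This is a sketch under stated assumptions, not the paper's SETDM1/SETDM0 code: a finite difference Laplacian stands in for the finite element or finite volume operator $A_h$, the reaction and diffusion functions are illustrative, the noise increment is sampled from a truncated eigenexpansion with assumed sine eigenfunctions and algebraically decaying eigenvalues (so that $Q$ is trace class), and scipy's `expm_multiply` plays the role of the Krylov routine expv.m of Expokit.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

# 1D Laplacian with homogeneous Dirichlet conditions on (0,1); a stand-in
# for the finite element / finite volume operator A_h of the paper.
m = 200
h = 1.0 / (m + 1)
A = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(m, m)) / h**2
x = np.linspace(h, 1.0 - h, m)

F = lambda u: u - u**3          # illustrative reaction term
B = lambda u: 0.1 * u           # illustrative multiplicative diffusion term

rng = np.random.default_rng(1)

def q_wiener_increment(dt, M=100, decay=2.01):
    """dW ~ N(0, dt*Q) from a truncated expansion, assuming sine
    eigenfunctions e_i(x) = sqrt(2) sin(i*pi*x) and eigenvalues
    q_i = i**(-decay); decay > 1 makes Q trace class on (0,1)."""
    i = np.arange(1, M + 1)
    q = i ** (-decay)
    e = np.sqrt(2.0) * np.sin(np.pi * np.outer(x, i))
    return e @ (np.sqrt(q * dt) * rng.standard_normal(M))

dt, n_steps = 1e-3, 100
X = np.sin(np.pi * x)           # initial condition

for _ in range(n_steps):
    dW = q_wiener_increment(dt)
    # One stochastic exponential Euler step:
    #   X_{n+1} = e^{dt*A}(X_n + dt*F(X_n) + B(X_n)*dW_n)
    X = expm_multiply(dt * A, X + dt * F(X) + B(X) * dW)
```

In practice one would reuse or precompute the action of $e^{\Delta t A_h}$, as the Leja point and Krylov techniques cited above do; the point here is only the shape of the update.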
In the legends of our graphs, "SETDM1" denotes results from the SETDM1 scheme, "SETDM0" denotes results from the SETDM0 scheme, and "implicit" denotes results from the standard semi-implicit Euler-Maruyama scheme.

As a simple example we consider a reaction-diffusion equation with additive noise on the time interval $[0,T]$, for which we can construct the exact solution of the truncated problem. In the second example the Darcy velocity field $\mathbf{q}=-\frac{k}{\mu}\nabla p$ is obtained from the pressure $p$, with boundary conditions $p=1$ on $\{0\}\times[0,1]$ and $p=0$ on $\{1\}\times[0,1]$, where $p$ is the pressure, $\mu$ is the dynamical viscosity and $k$ is the permeability of the porous medium. We have assumed that rock and fluids are incompressible and that sources or sinks are absent; thus the equation $\nabla\cdot\mathbf{q}=0$ comes from mass conservation. As in the references, we take decaying values for the eigenvalues in the representation ([eq:w]); note that to have a trace class noise the decay must be fast enough, and in our simulation we use such a choice. To deal with high Péclet number flows we discretize in space using finite volumes; the semi-discrete finite volume discretization of ([adr]) can be written in the same abstract form (see the references).

Figure [fig022a] shows the convergence of the SETDM0, SETDM1 and semi-implicit schemes for the homogeneous porous medium. The scheme SETDM1 seems to be more accurate for large time steps, while for small time steps it has the same order of accuracy as the semi-implicit and SETDM0 schemes. We used 200 realizations, and the observed temporal convergence orders of the SETDM1, SETDM0 and semi-implicit schemes are close to the predicted order of convergence in Theorem [th1]. A sample of the "true" solution is shown in Figure [fig022b], while the mean of the "true" solution over 200 realizations is shown in Figure [fig022c]. Figure [fig027a] shows the convergence of the SETDM0 and SETDM1 schemes for the heterogeneous porous medium; it also shows that SETDM1 is more accurate than the SETDM0 scheme for large time step sizes. Figure [fig027c] shows the streamlines of the velocity field. A sample of the "true" solution is shown in Figure [fig027b], while the mean of the "true" solution over 200 realizations is shown in Figure [fig027d].

G. J. Lord and A. Tambue. A modified semi-implicit Euler-Maruyama scheme for finite element discretization of SPDEs.
G. J. Lord and A. Tambue. Stochastic exponential integrators for finite element discretization of SPDEs with additive noise. http://arxiv.org/abs/1005.5315, 2010.
C. Moler and C. Van Loan. Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later. SIAM Review, 45(1), pp. 3-49, 2003.
M. Caliari, L. Bergamaschi and M. Vianello. The ReLPM exponential integrator for FE discretizations of advection-diffusion equations. In: M. Bubak, G. D. van Albada, P. Sloot (eds.), Lecture Notes in Computer Science, volume 3039, Springer Verlag, Berlin Heidelberg, pp. 434-442, 2004.
Galerkin finite element methods for stochastic parabolic partial differential equations. 43(4) (2005) 1363-1384.
E. Hausenblas. Approximation for semilinear stochastic evolution equations. 18(2) (2003) 141-186.
M. Kovács, S. Larsson, and F. Lindgren. Strong convergence of the finite element method with truncated noise for semilinear parabolic stochastic equations with additive noise. 53 (2010) 309-320.
P. Kloeden, G. J. Lord, A. Neuenkirch and T. Shardlow. The exponential integrator scheme for stochastic partial differential equations: pathwise error bounds. 2009.
G. J. Lord and T.
Shardlow. Postprocessing for stochastic parabolic partial differential equations. 45(2) (2007) 870-889.
D. J. Higham. An algorithmic introduction to numerical simulation of stochastic differential equations. SIAM Review, 43(3) (2001) 525-546.
S. Larsson. Nonsmooth data error estimates with applications to the study of the long-time behavior of finite element solutions of semilinear parabolic problems.
A. Jentzen and M. Röckner. Regularity analysis for stochastic partial differential equations with nonlinear multiplicative trace class noise. 2010.
G. Da Prato and J. Zabczyk. Second order partial differential equations in Hilbert spaces. 2002.
We consider the numerical approximation of a general second order semilinear parabolic stochastic partial differential equation (SPDE) driven by space-time noise, for both multiplicative and additive noise. We examine the convergence of exponential integrators for multiplicative and additive noise. We consider noise that is of trace class and give a convergence proof in the root mean square norm. We discretize in space with the finite element method, and in our implementation we examine both the finite element and the finite volume methods. We present results for a linear reaction-diffusion equation in two dimensions as well as a nonlinear example of a two-dimensional stochastic advection-diffusion-reaction equation motivated by realistic porous media flow. Parabolic stochastic partial differential equation, finite element, exponential integrators, strong numerical approximation, multiplicative noise, additive noise.
One of my main joys in teaching is helping students draw connections between what they know and what they are learning. I incorporate projects in most of my classes to effectively engage students. Such projects are most naturally done in applied courses, but can be integrated into theoretical courses as well. Positive correlations between student learning and nontraditional teaching techniques, such as project-based learning and problem-focused group work in class, are drawn from the literature in Subsection [sec:literature]. My most effective projects have covered the full spectrum of mathematical modeling, where students are involved in data collection, processing, developing model equations, and evaluation of their model based upon the results and outside data. When first implementing projects in my classes, my students and I dealt with several logistical issues that needed to be smoothed out for the projects to be beneficial. I share my implementation difficulties, the resources I have utilized, and a generalization of these best practices in Section [sec:logistics]. Section [sec:implementation] summarizes specific project implementations, chosen topics, and student feedback for classes in which I have implemented major modeling projects. I have found that these best practices depend upon the class size and topic. For example, a project in differential equations with 43 students is best spread over 2 weeks in groups of 4, while a project in advanced linear algebra with 14 students is best spread over 5 weeks in groups of 2. Detailed project prompts, class sizes, and grading rubrics can be found in the appendix.

Implementing projects in class increases student engagement in class, collaboration outside of class, and new perspectives with which to learn the material. Long-term studies have shown that interactive teaching styles result in significantly higher understanding of concepts. Project-based learning is a teaching methodology that utilizes student-centered projects, often over extended periods of time, to facilitate student learning. Coupled with smaller class sizes (about 40 students), short-term projects have been shown to improve student attitude and achievement. Further, long-term problem-driven approaches in a large class (about 75 students) similarly improve student learning and achievement. Effective collaboration and motivational open-ended questions, two key attributes of project-based learning, are helpful guides for developing class projects. Specifically, group collaboration that involves student choice, communication, writing, revision, and presentation is most effective at increasing student learning. Additionally, group projects provide stimulating discussions and have spurred ideas for individual research projects later on.

Modeling-focused projects are helpful in providing a big-picture perspective, especially at the beginning of a course. For example, on the first day of mathematical modeling, I give pairs of students a hazelnut, a jar, and a ruler and ask them to estimate how many hazelnuts fit in the jar. Given 10 minutes, students estimate the number by comparing the volumes of a hazelnut and the jar while ignoring packing loss. After group reflection in class, they are prepared to research packing efficiencies and come back to the next class with much better estimates. This project exemplifies a core principle of project-based learning: students developing mastery and becoming self-
directed learners. Additionally, by incorporating parameter estimation from collected data, as in this example, students achieve a more realistic view of the world, which helps them utilize their learning later on. Mooney and Swift frame the modeling process as creating an idealized replica of the real world, called the model world, through simplifying assumptions which are helpful and necessary due to available resources. The solution to the idealized problem is then evaluated in the real world, and the model is improved and re-solved as necessary. Starting with a quick visual guess, then volume measurements, and finally accounting for packing efficiency, my students could see the improvement with each successive re-solution of their model. This simple activity demonstrates how and why we solve problems in the idealized model world, as well as how many assumptions surround even simple calculations. The room buzzed with anticipation as I counted out the hazelnuts from a packed jar. The way their estimates spread about the exact value (which one student estimated exactly) gave a great segue into statistical measures.

A motivation for these key attributes of the modeling process can be sparked in every project through follow-up questions such as "Why was this a good way to set up the real question mathematically?", "Is this a good mathematical technique for solving the problem?", "How well can you trust your answer?" and "What do you do with the solution?" Not knowing how to solve the problem exactly, students may feel overwhelmed with doubt in giving estimated solutions to problems they framed themselves, but encouraging further justification, such as a logical argument developing their model and statistical tests of the data, can help them gain confidence. Facing such open-ended questions in a supportive learning environment helps prepare them for collaborative work in their jobs, where they will have to explain and defend their results.

There are many logistical issues that can make implementing projects difficult for both professor and student. These can be minimized through open dialog with students and by utilizing available resources. The main logistical issue for implementing projects is the additional professor workload. Projects are an important way to engage students in a different learning style and encourage positive collaboration. Yet projects can end up displacing important content and requiring much more time to prepare and grade. Further, it feels risky to dedicate an entire class period to group activities. Other logistical issues revolve around individual student learning. Unequal division of workload is common when group members do not consciously divide up the work. The grade should reflect the distribution of the workload as well as the quality of the project as a whole. As a learning tool, each student needs to know the content of the whole project, not just their individual component.
Also, groups who do not plan out their project work end up completing the project at the last minute. I have utilized several different tools in my classes, both in preparing projects and in implementing them, to minimize the impact of these logistical issues. Databases of prepared projects and ideas save prep time. Collaboration tools and software lessen the difficulty of student coordination on the project. Course management tools house project work in one place for ease of presenting results, collecting student work, and grading it online.

There are many resources available with outlines and prepared class projects. Toews gives a broad overview of how modeling can be used across the mathematics curriculum, while others cater to specific courses such as calculus, differential equations, numerical methods, and a course specifically in mathematical modeling with emphasis on writing. A repository of modeling activities specifically for differential equations, called _SIMIODE: Systematic Initiative for Modeling Investigations and Opportunities in Differential Equations_, is maintained by a community of educators and directed by Brian Winkel. These activities are peer-reviewed to provide clear instructions for student investigation and instructor facilitation. A similar community-supported repository specifically for calculus is _Project MOSAIC_. Spurred by the 2013 Mathematics for Planet Earth initiative, the Center for Discrete Mathematics and Theoretical Computer Science sponsored the development of several sustainability modules in the same vein as SIMIODE activities, applied to calculus, differential equations, discrete math, statistics, and liberal arts mathematics courses. Further, individual projects covering a wide spectrum of topics and classes can be found in the literature through journals such as PRIMUS.

Coordinated file management can help implement projects, even in a small class. A central database for web submission, display, and grading of projects has been a major time saver for me. File submission, forum setup, and wiki creation are supported by most course management systems (my university uses Moodle), and can also be accessed separately through Google Drive (drive.google.com), Dropbox (dropbox.com), and others. A private wiki, accessed through a course management system like Moodle (www.moodle.com) or separately like PBworks (www.pbworks.com), is a web environment where students can create and link together multiple web pages, similar to Wikipedia in a more controlled environment. Having students complete their project on the wiki keeps everything in one place to make it easier for me to grade. I can also visually check on group work progress and give directed reminders to those trailing behind. I often require students to self-report their individual contributions on the wiki to help them stay accountable and to allow me to distribute points according to contribution efficiently. In addition, the fact that students are self-reporting their contributions encourages them to consciously divide up the work evenly at the beginning. The wiki environment also allows for quick transitions between the oral presentations of multiple groups, since they all link back to the page with the project's prompt.
For collaborative data analysis, Google Sheets was effective in providing access to data collection, processing, and visualization. This saved me and the students the hassle of transferring files, and it made grading easier by keeping all of my students' work in the same document. Because they were all working on the same document, I created a tab for each group to work on and put instructions and example formatting in the first tab. Both the wiki and Google Sheets are handy in group presentations, as they decrease the amount of down time spent transferring files.

To help coordinate group work, I originally encouraged students to use mass emails to myself and their group. Mass emailing is helpful because the emails can be sorted together and a record of the conversation is kept, but it does clutter my email inbox, and email does not transfer mathematical work well. Instead, I now encourage students to post their work on the private wiki, which allows posting of figures and mathematical typesetting in HTML. In addition to the benefits mentioned above, the wiki environment allows students to immediately see group updates in the project as a whole, add comments for revisions, and track changes to hold students accountable for contributing their share of the work.

Optimal implementation of a project varies by class size and topic, but I have noticed some best practices that apply in general. Incremental notification and implementation of projects is key to merging project work with standard classwork. In preparing your class, seek a good rhythm and balance of skills for placing the project; a very structured project fits best at the beginning and a more open-ended project in the latter part of the course. Embed extensions of lecture content in the project so that class lecture and homework problems feed into the project. Encourage students to better communicate in groups through assigning individual roles, and emphasize the need to review each other's work as it relates to the project as a whole. Most of all, include opportunities for students to buy into the project through selecting partners, group role, and sometimes topic. Remind students of their share of the workload and keep them accountable through monitoring individual contributions and, if desired, asking students to evaluate their group.

To minimize prep time it is important to start with just one project that you have tried once yourself, preferably one prepared and tested by another source such as those mentioned in Subsection [sec:resources]. In working through projects myself, I determine the prerequisite skills and schedule them accordingly. Before I assign a project, I will go over the expectations and demonstrate the technology the students will be using, whether it is a calculating web applet, a wiki environment, or a program they must compile themselves. Leading by example helps avoid many technical issues for students. To keep from suddenly jarring students out of lecture mode, I notify them of a project the week before and have them form groups the class period before I assign the project, so they are ready to dive in together.

How do you get students into groups they like without the groups becoming too lopsided? I started assigning balanced groups myself, but found that students did not work well together. To balance student choice and more diverse groups, I now assign groups from pairs of students, where students pair up themselves.
Increasing student choice has led to groups bonding more quickly and working together more smoothly. Diversifying the groups in terms of major, gender, ethnicity, work ethic, etc., helps students in large classes get to know more of their classmates and break out of isolating cliques. From my past experience, the most successful group sizes were powers of 2, formed from the self-selected pairs, with up to 12 groups per class for reduced grading: 2 for fewer than 24 students, 4 for 24-48 students, and 8 for 48-96 students. At a small private university, my classes range from 7 to 43 students with a mean of 34, but earlier in my career I found that groups of 8 (paired groups of 4) worked well for classes of 75 (pre-calculus) and 97 (linear algebra). In addition, projects two weeks or longer give a thorough team-building experience through coordinated communication and scheduling of group work.

Above all, projects should support student learning in the course as a whole and enhance its breadth, depth, and content accessibility. Encourage students through the project requirements to connect to other disciplines and to reference other researchers' work or data. To add depth, students should be accountable for their own work and communicate it well in writing and verbally. To improve accessibility, students should use technology (such as a Moodle wiki and Google Sheets) in presenting visualizations of their work as well as in posting reports online.

This section gives summaries and student feedback for example projects from three lower division and three upper division mathematics courses: liberal arts mathematics, discrete mathematics, differential equations, numerical methods, mathematical modeling, and advanced linear algebra. For open-ended projects, I keep a list of topics from which I either assign or have students choose. See Section [sec:appendix] for detailed project prompts. These project ideas come primarily from textbook resources or the online community repositories mentioned above: SIMIODE and Project MOSAIC. To broaden simple projects, I often generalize parameters of the model, have students collect or analyze a dataset, or extend the analysis of their solution.

Student feedback can be a helpful evaluative tool, but getting timely and constructive feedback is difficult. To this end, I stagger different types of feedback collection throughout the term. Along with online course evaluations at the end, I offer anonymous midterm evaluations online, which guide students' reflection on the course goals and their learning successes and difficulties so far. Project-specific feedback from 2012-2015 course evaluations is grouped into themes in Table [table:feedback], noting the semesters in which each theme occurred more than once, while additional course-specific feedback is listed below each project summary. The main themes are grouped by hindrances: (H1) timing of projects with respect to lecture topics, (H2) coordinating schedules for group work outside of class, (H3) technical problems; and benefits: (B1) abstract topics became more tangible, (B2) topics made more sense and at a deeper level, (B3) good rhythm of individual homework and group projects. Tracking student feedback by semester, I noticed an increase over time in the volume of comments about the benefits of projects and a decrease in the volume of comments about project difficulties.
As I repeated classes and had more experience implementing projects, the effect of logistical issues diminished and the benefits increased. This shows that students are less able to benefit from projects when dealing with logistical difficulties, and that the benefits to student learning increase once logistical issues are reduced.

Table: occurrence of project-specific feedback themes from 2012-2015 (Sp = spring, Fa = fall).

Because differential equations are one of the most natural ways to express a dynamical model, this class provides ample opportunity for modeling projects. I assign four 2-week projects in this 15-week course. The first (from SIMIODE) and fourth (extended from the textbook) projects are shown below.

Resources:
* dorm floor plan; bag of beans
* collaborative spreadsheet (Google Sheets)
* collaborative private wiki (Moodle)

Expectations:
* simulate the spread of the common cold by shaking beans onto the floor plan and tracking infected rooms
* develop a model and use the data to estimate parameters
* complete a wiki report with a 5 minute presentation

Students were enthusiastic about this hands-on activity and appreciated seeing how a differential equation model develops.

Resources:
* collaborative private wiki (Moodle)
* pplane applet (http://math.rice.edu/~dfield/dfpp.html)

Expectations:
* research qualitative and quantitative data on realistic predator and prey species
* develop a predator-prey model and estimate parameters
* analyze the model theoretically and via pplane simulation
* submit a wiki report with a 10 minute presentation

Most groups choose standard predator-prey pairs such as orca-sea lion or cheetah-gazelle, which have been fun ways to study such animal behavior around the globe. Every semester, however, I have a couple of groups with more imaginative selections such as zombies-humans, mutants-humans, and sith-jedi. These groups collect data from novels or games, on which they base their parameters, and compare their results. While less quantitative in nature, such a project provides a creative outlet for students. After this project in fall 2013, a student who had been dragging her feet through the class so far exclaimed, "I never knew math could be fun!" It is exciting to see student attitudes impacted so positively by projects.

Covering graph theory, analysis of algorithms, and various other topics, discrete mathematics is brimming with applications but scattered over many topics. Thus, I chose one culminating graph theory project on game analysis spanning 4 weeks. In groups of 4, students choose games from a list and develop individual strategies which they test against each other.
To space out this long-term project, I have groups teach each other their games halfway through the project, before completing their analysis and presenting to the class.

Resources:
* collaborative wiki (Moodle)

Expectations:
* choose a board/card game to summarize and analyze
* develop and test individual strategies against each other
* develop a board state evaluation function and demonstrate it on a limited game tree
* complete a wiki report and a 10 minute presentation

Students shared that this was their favorite part of the course. I also noticed improvement in exam scores after this project.

The liberal arts mathematics course at my university, _The World of Mathematics_, emphasizes applications of mathematical concepts in areas such as consumer finance, probability, and statistics. The collaborative spreadsheet can be viewed and copied from the link provided.

Provided resources:
* collaborative discussion forum (Moodle)
* collaborative spreadsheet for posting information (Google Sheets): https://docs.google.com/spreadsheets/d/17nwc4l2olfm4aufgcpvirqd0ext9-nica4mbpm7mg2e/edit#gid=1685348171
* salary statistics (www.salary.com)
* cost of living (costofliving.salary.com)
* mortgage payments (www.zillow.com)
* budget estimates (www.learnvest.com)
* TEDx talk on budgeting: www.tedxwallstreet.com/alexa-von-tobel-one-life-changing-class-you-never-took-2/

Expectations:
* collect data for a chosen job, location, and house
* complete a budget and compare it to the ideal 50-20-30 rule
* compare the future value of retirement contributions for several initial amounts (simple sensitivity analysis)
* choose a best budget from these results and compare within the group

Students were surprised at how short their "paid vacation" is in retirement, and overall liked how this project connected course topics to their lives.

After several short technique-learning projects in numerical methods, the final 5-week project is open-ended so that students have the flexibility of applying methods covered in class to topics of their choice. Long-term coordination for this project is most easily done in pairs, which works well for this upper division class with fewer than 20 students. I set up a schedule of one-on-one meetings and weekly update reports on their project to keep them accountable. Summaries of the schedule and components of this open-ended project are listed below.

Resources:
* mathematical software (Matlab)
* mathematical typesetting software (www.lyx.org)
* collaborative forum (Moodle)

Expectations:
* choose a topic to analyze numerically
* (other project objectives)
* 10-page report typeset in TeX
* handout and 15 minute presentation

Some students use this opportunity to enhance projects in one of their other courses, such as _simulating a helical solenoid_ and _identifying predictors of voting registration_, while others extended topics introduced in class, like _computing land area from a sample of longitude and latitude_ and _investigating fractal behavior of various iteration functions_.
I set up the entire mathematical modeling course as a series of 2-week group projects. Though this was a smaller upper division course (12 students), I chose to have groups of 4 to adequately deal with the depth of these projects. Below is a summary of two of these projects, estimating hazelnuts in a jar and modeling the spread of the common cold on campus using an SIR model, both extensions of problems in the textbook. The collaborative spreadsheet can be viewed and copied from the link provided.

Resources:
* ruler, hazelnut, and jar dimensions

Expectations:
* measure the volume of a hazelnut and estimate how many fit in the given jar
* research optimal packing efficiencies and improve the initial group estimate
* submit a report summarizing the chosen simplifying assumptions, solution method, and final estimate

Students enjoyed this project as a way to see how many individual decisions and various approaches can go into solving such a simple problem.

Resources:
* mathematical software (Matlab)
* collaborative spreadsheet (Google Sheets): https://docs.google.com/spreadsheets/d/18wqiqb7lu1m-pf_jsp32dey5vkucbcql8avpqyseka8

Expectations:
* among the people living near you, track how many of them show signs of a cold every day for a week (40-50 people per group)
* record the data in the spreadsheet and compute average transmission probabilities
* form the Markov model with the collected transmission probabilities and simulate deterministic and stochastic versions
* evaluate the coefficient of determination, F-ratio, and t-test statistic for each model
* submit a report evaluating your model, with a 15 minute presentation

Students really enjoyed this chance to collect data and be a part of the dataset themselves.

For this small (10-20 student) upper division theoretical class, I assign a 5-week project analyzing and extending a research article in groups of 2. This is a simple extension of the modeling process to evaluating and improving upon current (undergraduate-level) research.

Resources:
* repository of undergraduate-level math research (College Mathematics Journal)
* mathematical typesetting (www.lyx.org)

Expectations:
* choose an article on linear algebra and replicate its findings
* extend this article by generalizing a theorem or applying the given theory in a new way
* submit a report summarizing the article and detailing your contribution in extending it
* prepare a 15-minute conference-style presentation

In addition to positive feedback about this project, several of my students have presented their research at a regional conference.

In sum, the best practices I use in incorporating projects reduce the impact of logistical issues through incremental notification and guidance of the project. In preparing classes, place a project according to both its prerequisite skills and level of student investigation, and embed course content in the project. Encourage students to better communicate in groups through assigning individual roles, and emphasize the need to review each other's work as it relates to the project as a whole.
include opportunities for students to buy into the project through selecting partners , group role , and sometimes topic .remind students of their share of the workload and keep them accountable through monitoring individual contributions .once the logistical issues are minimized , projects can connect students to course content in new ways and motivate them to learn at a deeper level .0 ambrose , sa et .al . 2010 ._ how learning works : 7 research - based principles for smart teaching_. san francisco , ca . :jossey - bass .aris , rutherford , _ mathematical modelling techniques _ ,dover publication , 1994 .bliss , k.m . ,fowler , k.r . , and galluzzo , b.j ._ math modeling : getting started and getting solutions _ , society for industrial and applied mathematics , 2014 .claus - mcgahan , elly ( 2007 ) modeling projects in a differential equations course , _ primus : problems , resources , and issues in mathematics undergraduate studies _ , 8(2 ) , 137 - 149 , doi : 10.1080/10511979808965890 .cline , kelly s. ( 2007 ) numerical methods through open - ended projects , _ primus : problems , resources , and issues in mathematics undergraduate studies _ , 15(3 ) , 274 - 288 , doi : 10.1080/10511970508984122 .cullinane , michael j. ( 2011 ) helping mathematics students survive the post - calculus transition , _ primus : problems , resources , and issues in mathematics undergraduate studies _ , 21(8 ) , 669 - 684 , doi : 10.1080/10511971003692830 .center for discrete mathematics and theoretical computer science .accessed on 29 may 2015 .epstein , jerome ( 2013 ) the calculus concept inventory measurement of the effect of teaching methodology in mathematics , _ notices of the ams _, 60 ( 8) .hestenes , d. , wells , m. , and swackhamer , g. ( 1992 ) force concept inventory , _ the physics teacher _ , 30 , 141 - 158 .karaali , gizem ( 2011 ) an evaluative calculus project : applying bloom s taxonomy to the calculus classroom , _ primus : problems , resources , and issues in mathematics undergraduate studies _ , 21(8 ) , 719 - 731 , doi : 10.1080/10511971003663971 .knoll , m. ( 1997 ) the project method : its vocational education origin and international development ._ journal of industrial teacher education _ , 34(3 ) , 59 - 80 .linhart , jean m. ( 2014 ) teaching writing and communication in a mathematical modeling course , _ primus : problems , resources , and issues in mathematics undergraduate studies _ , 24(7 ) , 594 - 607 , doi : 10.1080/10511970.2014.895459 .mergendoller , j. r. , and maxwell , n. l. ( 2006 ) the effectiveness of problem - based instruction : a comparative study of instructional methods and student characteristics . _ the interdisciplinary journal of problem - based learning _ , 1(2 ) , 49 - 69 .mooney , douglas , and swift , randall ._ a course in mathematical modeling _ , the mathematical association of america , 1999 .kaplan , daniel et .al _ project mosaic_. http://mosaic - web.org/. accessed on 29 may 2015 .olson , jo c. 
, cooper , sandy , and lougheed , tom ( 2011 ) influences of teaching approaches and class size on undergraduate mathematical learning , _ primus : problems , resources , and issues in mathematics undergraduate studies _ , 21(8 ) , 732 - 751 , doi : 10.1080/10511971003699694 .toews , carl ( 2012 ) mathematical modeling in the undergraduate curriculum , _ primus : problems , resources , and issues in mathematics undergraduate studies _ , 22(7 ) , 545 - 563 ,doi : 10.1080/10511970.2011.648003 .winkel , brian ( 2010 ) parameter estimates in differential equation models for chemical kinetics , _ international journal of mathematical education in science and technology _, 42(1 ) , 37 - 51 , doi : 10.1080/0020739x.2010.500806 .winkel , brian ( 2013 ) computers have taken us to the brink in mathematics and we have balked , _ computers in the schools : interdisciplinary journal of practice , theory , and applied research _, 30(1 - 2 ) , 148 - 171 , doi : 10.1080/07380569.2013.768940 .winkel , brian ( director ) simiode : systematic initiative for modeling investigations and opportunities with differential equations , https://www.simiode.org/. accessed 29 may 2015 .this appendix lists the ( abridged ) project prompts and rubrics ( when given ) for each of the projects summarized in section [ sec : implementation ] .use the given links to estimate monthly expenses as if you were starting a new job today .see collaborative spreadsheet for a template .\1 ) estimate monthly salary from salary.com and compute taxes withheld ( use 11% fed + 9% state ( adjust accordingly ) ) to determine your take - home pay .\2 ) select a home on zillow to estimate 20% down due and mortgage payments .note : it is more realistic that you will rent for a while when you start your job , but it is surprising how quickly you will be interested in buying a house , so it is good to plan ahead for it. how long will it take you to save up enough money for the 20% down payment ( you will be charged extra if you do not pay 20% down so this is a good amount to plan for ) .\3 ) check the rate on your student loans and compute the monthly payments needed after you graduate .\4 ) choose how much to put aside for retirement .use the spreadsheet to calculate how much you will have saved up for retirement in 35 years using an average interest rate of 8% , and then divide this number by your starting salary s future value ( assuming 3% inflation ) to see how many years of living on paid vacation " you will have earned .note : your retirement savings may be augmented by social security and retirement accounts your employer sets up for you .\5 ) estimate other personal expenses ( yellow colored cells ) using the example values as a starting point .\6 ) visualize your distribution of fixed costs , financial goals , and flexible spending .how similar is it to the 50 - 20 - 30 rule ? in what areas could you trim costs to better fit the rule ?\7 ) alter the amount put into retirement and compare years of retirement and how it impacts your monthly spending .choose a best value and explain why it is better than the others you checked .compare with your group members and discuss your differences .this is an extension of the activity you did on the first day of class . 
in your same groups of 4do the following .\a ) record 10 simulations of brandt outbreak model with different number of starting infected in this google sheet .\b ) derive differential equation model ( with unknown parameters ) and find its general solution .summarize why your model fits .explain in detail the main steps needed in solving your model .show the model and solution .\c ) for each simulation , estimate the fitting parameter ( growth rate ) by graphing the general solution with each data set . show 2 - 3 example graphs of your best fitting models over the range of initial infected .\d ) summarize all 10 growth rates , their average and range .show graph of all 10 data sets together .discuss your results : was it a good model ?were your parameter values close to the average ? how could this model be used ? how could this model be improved ? in groups of 4 , choose a dominant predator , a lesser predator , and a common prey to research .we will be using qualitative ( graphs ) and quantitative ( average growth / death rates ) data to set up a predator - predator - prey model and analyze the differences between 1 and 2 predators in the system .create a set of links to ( a ) describe your chosen scenario with citations , ( b ) analyze the 2x2 model , and ( c ) analyze the 3x3 model .create a group wiki . give a 10 minute summary of your wiki page and models in class next friday .each group member must contribute to both the wiki and the presentation .rubric : 10 pts : wiki setup of pred - prey scenario and citations 10 pts : 2x2 model analysis 10 pts : 3x3 model analysis 10 pts : presentation choose a topic which can be investigated through a numerical method .previous topics include electric field of helical solenoid , statistical analysis of voter skepticism , julian sets ( fractals ) , computing area on a sphere , and heat analysis of air fins .planning : sign up for a 30-minute time slot to meet to nail down a detailed plan for your individual project .come prepared with your topic and several specific things you would like to investigate about it .initialization : explain your topic , why you are interested in it , and what kinds of numerical methods would be helpful in investigating your topic ( named programs i have given you or ones you have written for this class that would be helpful ) .outline three objectives for your project : easy ( takes 1 - 2 hrs ) , medium ( takes 1 week ) , hard ( takes weeks).for each objective , outline a program to complete them in its own matlab file .upload to foxtale with this report .update : for each of your individual project objectives , continue the list of what you have accomplished , what you still plan to do , and what part(s ) you are stuck on / having trouble with .you should be wrapping things up for your final report next thursday .list any questions you have for me .report : describe the question you investigated and give some background information on your topic ( cite references for all results used ) .describe the process used in analyzing the given problem and developing your programs .list all programs used and describe how they are used ( including inputs and outputs ) show and describe what each result represents .then discuss the significance of your results in terms of your investigation and list any avenues that could be interesting to explore further in the future .presentation : 10-minute visual presentation of your final project .have a 1-pg flyer printed for everyone in the class .summarize / demonstrate any code used 
. estimate the number of hazelnuts that can fit ( densely packed ) in the jam jar that i brought them in .jar is modeled as a large lower cylinder with a smaller upper cylinder on top where the edge moves in to fit the lid .lower cylinder has diameter 2 + 9/16 in .( 6.6 cm ) and height 2 + 3/8 in .( 6.5 cm ) .upper cylinder has diameter 2 + 5/16 in ( 5.9 cm ) and height 3/4 in .( 1.6 cm ) . use the model representation of each hazelnut of your choice and the measurements your group madelast tuesday to estimate the maximum number of hazelnuts that can fit ( without crushing ) inside the jar .share your measurements with your group or you can remeasure a hazelnut in my office ( no liquids ) .prizes for best estimate(s ) .write a summary explaining your assumptions and approximations used in representing the hazelnut .\a ) daily track evidence of having a cold amongst a group living near you for a week .track 10 people per person , including yourself , and tabulate the number transitioning between states on the common cold spreadsheet \b ) compute average transmission probabilities between states from your sample data , form the markov model , and write a matlab file to compute the deterministic model : .\c ) use these same probabilities to program a stochastic markov model , , using the current undergrad population at fox as your total population .\d ) compute values for each of your models to your s , i , r group data , group_data " .note , since you just computed averages for your group data ( not actual regression ) , this actually uses your markov model populations for not the coefficients .this should be six values , three comparing to the model and three comparing to the model : for example the first is the deterministic model s in (1,1:n ) and y is the s data from group_data(1,1:n ) , where n is the length of your data vector .\e ) use the coldsir regression updated file to compute the estimate markov probabilities for the class_data using multilinear regression .compare to your group s average values .\f ) compute the f - ratio and t - test statistic of the multilinear regression ( b ) and evaluate the significance of the trend line and coefficients for the class_data .\g ) write up a report and present your findings ( 10 min ) in class .you can do this inside your matlab file using % % as section headers and use the publish command to construct a report with your comments , code , computed results and graphs .see posted coldsir regression file as an example for structuring sections to publish a report . in a group of 4 , you will choose a simple game for analysis .suggestions : planar graph game , competing knight s tour game , competing n - queens game , nim , sim edge - coloring game , pipe layer , mu torere . for your chosen simple game ,1 ) provide links for further reading and visualizations on this wiki to explain and demonstrate your game to the class .\2 ) each person will write up their own strategy ( as an algorithm outline ) for playing the game in this wiki .\3 ) compare strategies through competing multiple times within your group and recording who won the game and in how many moves ( or how many possible moves left whichever is easier ) .summarize your results on this wiki .does the first player always win in perfect play or are there conditions on them winning ?\4 ) lay out a game tree for a 3 move end - game scenario and show how the perfect game is selected . 
develop an evaluative function which would be a good heuristic for a fixed depth search and demonstrate it .\5 ) what mathematical concepts are this game based upon ? your linear algebra project will be done in groups of two people and will be presented in class the first week of march .your project must be based upon an idea presented by a mathematical journal article ( e.g. college mathematics journal ) and must include a piece of original work that you add. the topics must be relevant to linear algebra in a theoretical nature .for example , just using matrices in computation is not enough .suggested topics : vector space , bases / basis , linear transformation , isomorphism , dual space , eigenvalue / eigenvector , matrix limits , norm , inner product , linear operator , adjoint operator , normal operator , hermitian operator , orthogonal operator , and canonical forms .report : typeset professionally in lyx ( or another tex writer ) following the posted template with at least 6 full pages ( 2 large figures included ) .demonstrate your knowledge of and work on a topic chosen from a peer - reviewed research article .include necessary definitions , theorems with proofs ( at least one ) , and applications / examples to help explain the topic .rubric : 100 pts : ( 25 ) typesetting and organization , ( 25 ) understanding of background information , ( 25 ) personal research contribution , ( 25 ) follows outlined research proposal presentation : slide presentation in lyx ( beamer class ) or powerpoint or google slides . a lyx template with beamer is posted .summarize your contributions / work on your chosen topic . include title page ( title / names / professor / school ) , outline page , intro / background ,results , examples / applications , conclusions / future work , citations / thank you page .each group member should contribute equally to the oral presentation : ( 10 - 15 minutes ) .start by introducing yourself .end by thanking the audience .moderator will ask for questions .* r. corban harwood * earned his ph.d . in mathematics from washington state university in 2011 andis currently assistant professor of mathematics at george fox university in newberg , or . in teaching, he loves drawing connections between mathematics and different disciplines like music , biology , and philosophy . on occasion ,corban does mathematical modeling consulting ranging from testing financial phone apps to designing optimal battery chemistries for hybrid - electric vehicles .his research interests include numerical partial differential equations , semilinear operator splitting methods , and modeling reaction - diffusion phenomena . on the side ,he enjoys cycling through the willamette valley and hiking to waterfalls with his family .
projects provide tangible connections to course content and can motivate students to learn at a deeper level. this article focuses on the implementation of projects in both lower- and upper-division mathematics courses in which students develop and analyze mathematical models of a problem based upon known data and real-life situations. logistical pitfalls and insights are highlighted, as well as several key implementation resources. student feedback demonstrates a positive correlation between the use of projects and an enhanced understanding of the course topics when the impact of logistics is reduced. best practices learned over the years are given, along with example project summaries.
one of the most surprising results of the last decades in the field of stochastic processes has been the discovery that fluctuation terms (loosely called _noise_) can actually induce some degree of order in a large variety of non-linear systems. the first example of such an effect is that of _stochastic resonance_, by which a bistable system responds better to an external signal (not necessarily periodic) in the presence of fluctuations, either in the intrinsic dynamics or in the external input. this phenomenon has been shown to be relevant for some physical and biological systems described by nonlinear dynamical equations. other examples in purely temporal dynamical systems include phenomena such as noise-induced transitions, noise-induced transport, coherence resonance, etc. in extended systems, noise is known to induce a large variety of ordering effects, such as pattern formation, phase transitions, phase separation, spatiotemporal stochastic resonance, noise-sustained structures, and doubly stochastic resonance, amongst many others. all these examples have in common that some sort of _order_ appears only in the presence of the right amount of noise. there has also been some recent interest in the interplay between chaotic and random dynamics. some counterintuitive effects, such as coherence resonance or the appearance of quasi-periodic behavior in a chaotic system in the presence of noise, have been found recently. the role of noise in standard synchronization of chaotic systems has been considered in , as well as the role of noise in synchronizing non-chaotic systems. in this paper we address the different issue of synchronization of chaotic systems by a common random noise source, a topic that has attracted much attention recently. the accepted result is that, for some chaotic systems, the introduction of the same noise in independent copies of the system can lead (for large enough noise intensity) to a common collapse onto the same trajectory, independently of the initial condition assigned to each of the copies. this synchronization of chaotic systems by the addition of random terms is a remarkable and counterintuitive effect of noise, and although some clarifying papers have appeared recently, contradictory results still exist on the existence of this phenomenon of noise-induced synchronization. it is the purpose of this paper to give further analytical and numerical evidence that chaotic systems can synchronize under such circumstances, and to analyze the structural stability of the phenomenon. moreover, the results presented here clarify the issue, thus opening directions to obtain such synchronization in electronic circuits, for example for encryption purposes. common random noise codes have been used in spread-spectrum communication for a long time. the main idea is to mix the information data with a noisy code.
at the receiver, the information is recovered using a synchronized replica of the noise code. more recently, the use of a common noise source has also been proposed as a useful technique to improve the encryption of a key in a communication channel. the issue of the ordering effect of noise in chaotic systems was considered already at the beginning of the 80s by matsumoto and tsuda, who concluded that the introduction of noise could actually make a system less chaotic. later, yu, ott and chen studied the transition from chaos to non-chaos induced by noise. synchronization induced by noise was considered by fahy and hamman, who showed that particles in an external potential, when driven by the same random forces, tend to collapse onto the same trajectory, a behavior interpreted as a transition from chaotic to non-chaotic behavior. the same system has been studied numerically and analytically. pikovsky analyzed the statistics of deviations from this noise-induced synchronization. a paper that generated a lot of controversy was that of maritan and banavar. these authors analyzed the logistic map in the presence of noise: $x_{n+1} = 4 x_n (1 - x_n) + \xi_n$, where $\xi_n$ is the noise term, considered to be uniformly distributed in a symmetric interval $[-w, +w]$. their claim that two trajectories driven by the same noise sequence synchronize for large enough $w$ generated the controversy mentioned above. a useful diagnostic is the lyapunov exponent computed along the noisy trajectory for a fixed noise realization, $$\lambda = \lim_{n\to\infty} \frac{1}{n} \sum_{k=0}^{n-1} \ln |f'(x_k)| . \label{lyapunov2}$$ slopes $f'(x_k)$ in the interval $(-1, 1)$ contribute to $\lambda$ with negative values, indicating trajectory convergence; slopes larger than $1$ or smaller than $-1$ contribute with positive values, indicating trajectory divergence. since the deterministic and noisy maps satisfy $\partial x_{n+1} / \partial x_n = f'(x_n)$, one is tempted to conclude that the lyapunov exponent is not modified by the presence of noise. however, there is noise-dependence through the trajectory values $x_n$. in the absence of noise, $\lambda$ is positive, indicating trajectory separation. when synchronization is observed, the lyapunov exponent is negative, as required by the argument in . notice that this definition of the lyapunov exponent assumes a fixed realization of the noise terms, and it is the relevant one to study the synchronization phenomena addressed in this paper. one could use alternative definitions. for instance, if one considers the coupled system of both the variable $x$ and the noise generator producing $\xi_n$, then the largest lyapunov exponent of the composed system is indeed positive (and very large for a good random number generator). this simply tells us that there is a large sensitivity to the initial condition of the composed system, as shown by the fact that a change of the seed of the random number generator completely changes the sequence of values of both $\xi_n$ and $x_n$. we consider in this paper the situation described by definition ([lyapunov2]) with a fixed noise realization. by using the definition of the _invariant measure on the attractor_, or _stationary probability distribution_ $p_{st}(x)$, the lyapunov exponent can also be calculated as $$\lambda = \int dx\, p_{st}(x) \ln |f'(x)| .$$ here we see clearly the two contributions to the lyapunov exponent: although the derivative $f'(x)$ does not change when including noise in the trajectory, the stationary probability does change (see fig. 4), thus producing the observed change in the lyapunov exponents. synchronization, then, can be a general feature in maps, such as ([eq:3]), which have a large region in which the derivative is smaller than one. noise will be able to explore that region and yield, on the average, a negative lyapunov exponent. this is, basically, the argument developed in .
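this mechanism is straightforward to check numerically. the sketch below iterates two copies of the noisy logistic map under one fixed noise realization and accumulates the fixed-realization lyapunov exponent of eq. ([lyapunov2]). the reflecting boundary used to keep $x$ in $[0,1]$ is one possible choice and, as the controversy recalled above shows, how the boundaries are handled can bias the effective noise, so treat the output as illustrative rather than as settling the debate.

```python
# two copies of the noisy logistic map under one common, fixed noise
# realization, with the lyapunov exponent of eq. ([lyapunov2])
# accumulated along the trajectory. the reflection at the boundaries
# is an illustrative choice, not the prescription of any cited paper.
import numpy as np

def f(x):
    return 4.0 * x * (1.0 - x)           # the deterministic logistic map

def fprime(x):
    return 4.0 - 8.0 * x                 # its derivative

def step(x, xi):
    y = f(x) + xi
    y = abs(y)                           # reflect at 0
    return 2.0 - y if y > 1.0 else y     # reflect at 1

def run(w, n_steps=100_000, seed=1):
    rng = np.random.default_rng(seed)
    xi = rng.uniform(-w, w, n_steps)     # one fixed noise realization
    x, y = 0.3, 0.8                      # two different initial conditions
    lam, dist = 0.0, []
    for n in range(n_steps):
        lam += np.log(abs(fprime(x)))    # slope along the noisy trajectory
        x, y = step(x, xi[n]), step(y, xi[n])
        dist.append(abs(x - y))
    return lam / n_steps, np.mean(dist[-1000:])

for w in (0.0, 0.1, 0.2):
    lam, d = run(w)
    print(f"w = {w:.1f}: lyapunov = {lam:+.3f}, late |x - y| = {d:.2e}")
```

for $w = 0$ the run reproduces the deterministic value $\lambda = \ln 2$; whether $\lambda$ actually turns negative at a given noise amplitude depends on the map and on the boundary handling, which is precisely the point under debate for the logistic map.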
in order to make some analytical calculation that can obtain in a rigorous way the transition from a positive to negative lyapunov exponent , let us consider the map given by eq .( [ eq:2 ] ) and + with .this particular map , based in the tent map , has been chosen just for convenience .the following arguments would apply to any other map that in the absence of noise takes most frequently values in the region with the highest slopes , but which visits regions of smaller slope when noise is introduced .this is the case , for example , of the map ( 3 ) . in the case of ( [ twotents ] ) , the values given by the deterministic part of the map , after one iteration from arbitrary initial conditions , fall always in the interval is the region with the highest slope . in the presence of noisethe map can take values outside this interval and , since the slopes encountered are smaller , the lyapunov exponent can only be reduced from the deterministic value . to formally substantiate this point ,it is enough to recall the definition of lyapunov exponent ( [ lyapunov2 ] ) : an upper bound for is , so that a bound for is immediately obtained : .equality is obtained for zero noise .the interesting point about the map( [ twotents ] ) and similar ones is that one can demonstrate analytically that can be made negative .the intuitive idea is that it is enough to decrease in order to give arbitrarily small values to the slopes encountered outside , a region accessible only thanks to noise . to begin with ,let us note that if , and if , so that an upper bound to ( [ lyapunov2 ] ) can be written as and are the proportion of values of the map inside and outside this interval , respectively , and we have used that as they converge to and , the invariant measure associated to and to the rest of the real line , respectively ( ) .a sufficient condition for to fall outside is that .thus , , where we have used the gaussian character of the noise . in consequence , one finds from ( [ bound ] ) the important point is that is independent on the map parameters , in particular on .thus , ( [ rebound ] ) implies that by decreasing the value of can be made as low as desired . by increasing such that , will be certainly negative .thus we have shown analytically that strong enough noise will always make negative the lyapunov exponent of the map ( [ twotents ] ) and , accordingly , it will induce yield `` noise - induced synchronization '' in that map .in this section we give yet another example of noise induced synchronization .we consider the well known lorenz model with additional random terms of the form : + is white noise : a gaussian random process of mean zero , and delta correlated , .we have used , and which , in the deterministic case , are known to lead to a chaotic behavior ( the largest lyapunov exponent is ) . 
as stated in the introduction ,previous results seem to imply that synchronization is only observed for a noise with a non zero mean .however , our results show otherwise .we have integrated numerically the above equations using the stochastic euler method .specifically , the evolution algorithm reads : \nonumber \\y(t+\delta t ) & = & y(t ) + \delta t \left [ -x(t ) z(t ) + r x(t ) -y(t ) \right ] \label{eq : euler}\\ & + & \epsilon \sqrt{\delta t } g(t ) \nonumber \\z(t+\delta t ) & = & z(t)+ \delta t \left [ x(t ) y(t ) -b z(t)\right ] \nonumber\end{aligned}\ ] ] + the values of are drawn at each time step from an independent gaussian distribution of zero mean and variance one and they have been generated by a particularly efficient algorithm using a numerical inversion technique .the time step used is and simulations range typically for a total time of the order of ( in the dimensionless units of the lorenz system of equations ) .the largest lyapunov exponent has been computed using a simultaneous integration of the linearized equations . for the deterministic case , trajectories starting with different initial conditionsare completely uncorrelated , see fig .this is also the situation for small values of .however , when using a noise intensity the noise is strong enough to induce synchronization of the trajectories .again , the presence of the noise terms forces the largest lyapunov exponent to become negative ( for it is ) . as in the examples of the maps , after some transient time , two different evolutions which have started in completely different initial conditions synchronize towards the same value of the three variables ( see fig .( 5b ) for the coordinate ) . therefore , these results prove that synchronization by common noise in the chaotic lorenz system does occur for sufficiently large noise intensity .this result contradicts previous ones in the literature .the main difference with these papers is in the intensity of the noise : it has to be taken sufficiently large , as here , in order to observe synchronization .notice that although the noise intensity is large , the basic structure of the butterfly " lorenz attractor remains present as shown in fig .( 6 ) . again, this result shows that , although the noise intensity used could be considered large , the synchronization is rather different from what would be obtained from a trivial common synchronization of both systems to the noise variable by neglecting the deterministic terms .an important issue concerns the structural stability of this phenomenon , in particular how robust is noise synchronization to small differences between the two systems one is trying to synchronize . 
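the integration scheme of eq. ([eq:euler]) above is short enough to sketch directly. in the sketch below the parameter values, the time step and the noise intensity are assumptions chosen for illustration, since the text does not reproduce the exact numbers; replacing $r$ by $1.01\,r$ in one of the two copies turns the same code into the parameter-mismatch experiment discussed next.

```python
# a sketch of the stochastic euler scheme of eq. ([eq:euler]) for two
# copies of the noisy lorenz system driven by one common noise
# realization. sigma=10, r=28, b=8/3, dt=1e-3 and eps=40 are
# illustrative assumptions; for sufficiently large eps the distance
# |u1 - u2| is expected to collapse toward zero, as described above.
import numpy as np

sigma, r, b = 10.0, 28.0, 8.0 / 3.0   # standard chaotic parameters
dt, eps, steps = 1e-3, 40.0, 1_000_000

def euler_step(u, g):
    """one euler step; the noise term enters the y equation only."""
    x, y, z = u
    return np.array([
        x + dt * sigma * (y - x),
        y + dt * (-x * z + r * x - y) + eps * np.sqrt(dt) * g,
        z + dt * (x * y - b * z),
    ])

rng = np.random.default_rng(7)
u1 = np.array([1.0, 1.0, 1.0])        # two different initial conditions
u2 = np.array([-5.0, 0.0, 20.0])
for n in range(steps):
    g = rng.standard_normal()         # the *common* gaussian kick
    u1, u2 = euler_step(u1, g), euler_step(u2, g)
    if n % 200_000 == 0:
        print(f"t={n * dt:7.1f}  |u1 - u2| = {np.linalg.norm(u1 - u2):.3e}")
# replacing r by 1.01 * r in one copy gives the mismatch experiment.
```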
whether or not the synchronization of two trajectories of the same noisy lorenz system ( or of any other chaotic system ) observed here , equivalent to the synchronization of two identical systems driven by a common noise , could be observed in the laboratory , depends on whether the phenomenon is robust when allowing the two lorenz systems to be not exactly equal ( as they can not be in a real experiment ) .if one wants to use this kind of stochastic synchronization in electronic emitters and receivers ( for instance , as a means of encryption ) one should be able to determine the allowed discrepancy between circuits before the lack of synchronization becomes unacceptable .additional discussions on this issue may be found in .we consider the following two maps forced by the same noise : linearizing in the trajectory difference , assumed to be small , we obtain we have defined , and we are interested in the situation in which the two systems are just slightly different , for example , because of a small parameter mismatch , so that will be small in some sense specified below .iteration of ( [ linearized ] ) leads to the formal solution : we have defined , and .an upper bound on ( [ solution ] ) can be obtained : the first term in the r.h.s . is what would be obtained for identical dynamical systems . we knowthat as , where is the largest lyapunov exponent associated to ( [ second ] ) .we are interested in the situation in which , for which this term vanishes at long times .further analysis is done first for the case in which is a bounded function ( or is a bounded trajectory with continuous ) . in this situation, there is a real number such that .we then get : an unequality valid for large .let us now define , the maximum slope of the function .a trivial bound is now obtained as : this can be further improved in the case , where we can write : as a consequence , differences in the trajectories remain bounded at all iteration steps . since , according to the definition ( [ lyapunov2 ] ) , is also an upper bound for the lyapunov exponent for all values of and , in particular , for the noiseless map , , this simply tells us that if the deterministic map is non chaotic , then the addition of a common noise to two imperfect but close replicas of the map will still keep the trajectory difference within well defined bounds .the situation of interest here , however , concerns the case in which a negative lyapunov exponent arises only as the influence of a sufficiently large noise term , i.e. the deterministic map is chaotic and . in this case , the sum in eq .( [ bound3 ] ) contains products of slopes which are larger or smaller than .it is still true that the terms in the sum for large value of can be approximated by and , considering this relation to be valid for all values of , we would get : and , thus , at large : it can happen , however , that the product defining contains a large sequence of large slopes . 
these terms ( statistically rare ) will make the values of to violate the above bound at sporadic times .analysis of the statistics of deviations from synchronization was carried out in .although for the most probable deviation is close to zero , power - law distributions with long tails are found , and indeed its characteristics are determined by the distribution of slopes encountered by the system during finite amounts of time , or finite - time lyapunov exponents , as the arguments above suggest .therefore , we expect a dynamics dominated by relatively large periods of time during which the difference between trajectories remains bounded by a small quantity , but intermittently interrupted by bursts of large excursions of the difference .this is indeed observed in the numerical simulations of the maps defined above .this general picture is still valid even if is not explicitly bounded .we have performed a more quantitative study for the case in which two noisy lorenz systems with different sets of parameters , namely : + and + are forced by the same noise . in order to discern the effect of each parameter separately ,we have varied independently each one of the three parameters , , while keeping constant the other two .the results are plotted in fig .7 . in this figurewe plot the percentage of time in which the two lorenz systems are still synchronized with a tolerance of 10% .this means that trajectories are considered synchronized if the relative difference in the variable is less than 10% .according to the general discussion for maps , we expect departures from approximate synchronization from time to time .they are in fact observed , but from fig .7 we conclude that small variations ( of the order of 1% ) still yield a synchronization time of more than 85% . in fig .8 we show that the loss of synchronization between the two systems appears in the form of bursts of spikes whose amplitude is only limited by the size of the attractor in the phase space . moreover , it can be clearly seen in the same figure that large ( but infrequent ) spike amplitudes appear for arbitrarily small mismatch . in the realm of synchronization of chaotic oscillators , two different types of analogous intermittent behaviorhave been associated also to the fluctuating character of the finite - time conditional lyapunov exponents as above .one is on - off intermitency where the synchronization manifold is sligthly unstable on average but the finite time lyapunov exponent is negative during relatively long periods of time . in the other one , named bubbling ,the synchronization is stable on average but the local conditional lyapunov exponent becomes occasionally positive .while in the former case bursting always occurs due to the necessarily imperfect initial synchronization , in the latter it is strictly a consequence of the mismatch of the entraining systems . in this sense , the behavior reported in the preceding paragraph should be considered as a manifestation of bubbling in synchronization by common noise .in this paper we have addressed the issue of synchronization of chaotic systems by the addition of common random noises .we have considered three explicit examples : two 1-d maps and the lorenz system under the addition of zero mean , gaussian , white noise . 
while the map examples confirm previous results in similar maps , and we have obtained with them analytical confirmation of the phenomenon , the synchronization observed in the lorenz system contradicts some previous results in the literature .the reason is that previous works considered noise intensities smaller than the ones we found necessary for noise - synchronization in this system . finally , we have analyzed the structural stability of the observed synchronization . in the lorenz system ,synchronization times larger than 85% ( within an accuracy of 10% ) can still be achieved if the parameters of the system are allowed to change in less than 1% .it is important to point out that noise - induced synchronization between identical systems subjected to a common noise is equivalent to noise induced order , in the sense that the lyapunov exponent defined in ( 4 ) becomes negative in a single system subjected to noise .one can ask whether the state with negative lyapunov exponent induced by noise may be still be called ` chaotic ' or not .this is just a matter of definition : if one defines chaos as exponential sensibility to initial conditions , and one considers this _ for a fixed noise realization _ , then the definition of lyapunov exponent implies that trajectories are not longer chaotic in this sense .but one can also consider the extended dynamical system containing the forced one _ and _ the noise generator ( for example , in numerical computations , it would be the computer random number generator algorithm ) .for this _ extended system _ there is strong sensibility to initial conditions in the sense that small differences in noise generator seed leads to exponential divergence of trajectories .in fact , this divergence is at a rate given by the lyapunov exponent of the noise generator , which approaches infinity for a true gaussian white process .trajectories in the noise - synchronized state are in fact more irregular than in the absence of noise , and attempts to calculate the lyapunov exponent just from the observation of the time series will lead to a positive and very large value , since it is the _ extended _ dynamical system the one which is observed when analyzing the time series ( typically such attempts will fail because the high dimensionality of good noise generators , ideally infinity , would put them out of the reach of standard algorithms for lyapunov exponent calculations ) . again , whether or not to call such irregular trajectories with just partial sensibility to initial conditions ` chaotic ' is just a matter of definition .more detailed discussion along these lines can be found in .there remain still many open questions in this field .they involve the development of a general theory , probably based in the invariant measure , that could give us a general criterion to determine the range of parameters ( including noise levels ) for which the lyapunov exponent becomes negative , thus allowing synchronization . 
in this work and similar ones ,the word synchronization is used in a very restricted sense , namely : the coincidence of asymptotic trajectories .this contrasts with the case of interacting periodic oscillations where a more general theory of synchronization exists to explain the phenomenon of non trivial phase locking between oscillators that individually display very different dynamics .indications of the existence of analogue non trivial phase locking have been reported for chaotic attractors .there a phase " with a chaotic trajectory defined in terms of a hilbert transform is shown to be synchronizable by external perturbations in a similar way as it happens with periodic oscillators . whether or not this kind of generalized synchronization can be induced by noise is , however , a completely open question .last , but not least , it would be also interesting to explore whether analogs of the recently reported synchronization of spatio - temporal chaos may be induced by noise .r. toral , c. mirasso , e. hernndez - garca and o. piro , in _ unsolved problems on noise and fluctuations , upon99 _ , abbot and l. kiss , eds .511 , p. 255 - 260 , american institute of physics , melville ( ny ) ( 2000 ) .m. san miguel , r. toral , _ stochastic effects in physical systems _ , instabilities and nonequilibrium structures vi , eds .e. tirapegui , j. martnez and r. tiemann , kluwer academic publishers 35 - 130 ( 2000 ) .p. ashwin , j. buescu , and i. stewart , phys .a193 * 126 ( 1994 ) ; s.c .venkataramani , b.r . hunt , e. ott , d.j .gauthier , and j.c .bienfang , phys .* 77 * 5361 ( 1996 ) ; j.f .heagy , t.l .carroll , and l.m .pecora , phys . rev . *e52 * r1253 ( 1995 ) ; d.j . gauthier and j.c .bienfang , phys .lett . * 77 * 1751 ( 1996 ) .l. yu , e. ott , and q. chen , phys . rev .lett . * 65 * 2935 ( 1990 ) ; n. platt , e.a .spiegel , and c. tresser , phys .lett . * 70 * 279 ( 1993 ) ; n. platt , s.m .hammel , and j.f .heagy , phys .* 72 * 3498 ( 1994 ) ; j.f .heagy , n. platt , and s.m .hammel , phys . rev . *e 49 * 1140 ( 1994 ) ; y.h .yu , k. kwak , and t.k .lim , phys .lett . * a 198 * 34 ( 1995 ) ; h.l .yang , and e.j .ding , phys . rev . *e 54 * 1361 ( 1996 ) .
we study the effect that the injection of a common source of noise has on the trajectories of chaotic systems, addressing some contradictory results present in the literature. we present particular examples of 1-d maps and the lorenz system, both in the chaotic region, and give numerical evidence showing that the addition of a common noise to different trajectories, which start from different initial conditions, eventually leads to their perfect synchronization. when synchronization occurs, the largest lyapunov exponent becomes negative. for a simple map we are able to show this phenomenon analytically. finally, we analyze the structural stability of the phenomenon. * the synchronization of chaotic systems has been the subject of intensive research in recent years. besides its fundamental interest, the study of the synchronization of chaotic oscillators has a potential application in the field of chaos communications. the main idea resides in the hiding of a message within a chaotic carrier generated by a suitable emitter. the encoded message can be extracted if an appropriate receiver, one which synchronizes to the emitter, is used. one of the conditions to be fulfilled in order to achieve synchronization is that the receiver and the emitter have very similar device parameters, hence making it very difficult to intercept the encoded message. although the usual way of synchronizing two chaotic systems is by injecting part of the emitted signal into the receiver, the possibility of synchronization using a common random forcing has also been suggested. however, there have been some contradictory results in the literature on whether chaotic systems can indeed be synchronized using such a common source of noise, and the issue has begun to be clarified only very recently. in this paper we give explicit examples of chaotic systems that become synchronized by the addition of gaussian white noise of zero mean. we also analyze the structural stability of the phenomenon, namely, the robustness of the synchronization against a small mismatch in the parameters of the chaotic sender and receiver. *
the interstellar medium (ism) is turbulent, as known from observations of non-thermal cloud velocities, and also expected on theoretical grounds because of the very large reynolds numbers (defined as the ratio of inertial to viscous forces). understanding turbulence is crucial for understanding many physical processes, including energy injection into the ism, non-photoelectric heating of the ism, star formation, propagation of cosmic rays, and heat transport (see review by , and references therein). obtaining properties of ism turbulence from observations is a long-standing problem (see review by , and references therein). the statistics of the random velocity field is essential for describing the turbulence. to obtain the velocity statistics, so-called velocity centroids have frequently been attempted. however, the separation of the velocity and density contributions to the velocity centroids has always been a problem. therefore the relation between the statistics of velocity and velocity centroids is frequently claimed to be trustworthy only when the density fluctuations are negligible (see ). to remedy this situation a statistical technique termed velocity channel analysis, or in short vca, was developed in . it has been successfully used to obtain the spectra of turbulence in the small magellanic cloud and the milky way (see , ). however, the vca requires turbulence to be supersonic and to follow power laws (see ). this paper will identify a technique that can be reliably used with turbulence even if the turbulence is subsonic and/or does not obey a power law. the latter case can occur when, for instance, self-gravity modifies the turbulence on small scales. in pursuing this goal we introduce modified velocity centroids (mvcs) and derive an analytical relation between the statistics of mvcs and the underlying statistics of the 3d turbulent velocity. there has been substantial progress in the understanding of compressible magnetohydrodynamical (mhd) turbulence (see a review by ) that will guide us in adopting the model for this paper. numerical simulations in revealed anisotropic spectra of alfvén and slow waves, as well as an isotropic spectrum of the fast waves. the anisotropies are elongated along the local direction of the magnetic field. however, since this local direction changes from one place to another, the anisotropy in the system of reference related to the mean magnetic field is rather modest, i.e. set by the dispersion $\delta b$ of the random magnetic field relative to the mean field. for typical ism conditions, where the two are comparable, the expected anisotropy is modest. therefore it is possible to characterize the turbulence using isotropic statistics (see testing in and in section 4). only z-components of the velocity are available through doppler shift measurements. in what follows we shall assume that the velocity and density of the gas can be presented as sums of a mean value and a fluctuating part: $v_z = v_0 + v$, $\rho = \rho_0 + \rho'$, where the fluctuating components $v$ and $\rho'$ satisfy $\langle v \rangle = 0$, $\langle \rho' \rangle = 0$. henceforth we use $\langle \dots \rangle$ to denote ensemble averaging. the correlation function of the z-components of the velocity at two points $\mathbf{x_1}$ and $\mathbf{x_2}$, separated by a distance $r$ (with $\mathbf{r} = \mathbf{x_1} - \mathbf{x_2}$), denoted by subscripts 1 and 2 ($v_1$, $v_2$), is $b_z(\mathbf{r}) = \langle v_1 v_2 \rangle$,
andit is related to the component of the 3d spectrum through a fourier transform ( see ) : where is the projection of the spectral tensor , expressed through the transversal and longitudinal ( normal and parallel to , respectively ) 3d spectra of the velocity field , which are functions of the wavenumber amplitude .similarly , the two point density statistics can be given by the structure function .in what follows we consider the emissivity of our media proportional to the first power of density line . for recombination radiation ( such as h for instance )the emissivity is proportional to the square of the density , and modifications of the present technique are necessary . ] and no absorption present . in this case , the intensity , , is proportional to the column density , where is the density of emitters in the position - position velocity ( ppv ) space ( see ) .that is , , where , is the density of atoms , is a proportionality constant , denotes integration along the line of sight , and is a two dimensional vector in the plane of the sky .the structure function of mvcs consists of two terms. it will be clear from the discussion below that it may be safe to use ordinary centroids when the second term is much smaller than the first term .in particular , ^ 2 - \langle v^2 \rangle \left[i(\mathbf{x_1})- i(\mathbf{x_2})\right]^2 \right\rangle \label{m}\ ] ] where , is a `` unnormalized centroid '' of the z - component of velocity , and . to simplify our notationswe shall omit the and -subscripts ( also for eq.([m ] ) ) .we do not discuss the effect of separately because for the technique presented here it acts in the same way as regular motion , and one can formally introduce . , that is , at a given position , integrating along the line of sight in the expression for and may be advantageous . in terms of use of may be preferable . ] . from an observational standpoint, the velocity dispersion can be obtained using the second moment of the spectral lines : to express the 2d statistics of the modified centroids through the underlying 3d velocity and density statistics we shall have to make several elementary transformations .first of all , the difference of integrals that enter the expression for the modified centroids should be presented as a double integral . then using an elementary identity $ ] it is possible to find ( see ) , \label{s2}\ ] ] where . using the millionshikov hypothesis ( see ) to relate the fourth moments of the fields with the second moments , namely that we get sufficient simplifications of the expression : where denote the cross terms arising from correlations between the fluctuations of the velocity and density the correlations between velocity and density fluctuations have been studied earlier and found not to have a strong impact on the vca . in particular , the first term in eq .( [ cr ] ) was directly obtained in , and numerical integration of the corresponding fluctuations provides fluctuations in the centroid value that does not exceed 8% of the mvcs values at small . in this paperwe will neglect the cross terms and test this assumption by analyzing synthetic maps of compressible mhd simulations . 
a more detailed analysis , with more sets of data , and the cross terms ,will be included in a forthcoming paper .( [ d ] ) depends on both density and velocity statistics .the dominant contribution arising from density fluctuations is the first term of eq .( [ d ] ) .integration of this term yields the second term in eq .( [ m ] ) , as result the expression for the mvcs is , \label{m2}\ ] ] where if then eq.([b ] ) recovers the statistics of the velocity field .this is certainly the case if the turbulence is subsonic , where the amplitude of the density fluctuations is small , or when the density spectrum is steep ( i.e. most of the power residing at large scales ) given that we are interested in the fluctuations at small . with . if the spectrum is _ shallow _( see ) .] however , if the amplitude fluctuations of density at small scales is large ( i.e. shallow spectrum or highly supersonic turbulence ) , the contribution could be important .if we omit the density term in eq.([b ] ) and take a 2d fourier transform to resulting distribution of mvcs we have : where we have used again the notation of using capital letters to distinguish 2d vectors from the 3d counterparts . substituting eqs .( [ m2 ] ) , ( [ corr - spect ] ) , and ( [ spect ] ) into eq .( [ p ] ) one gets where and are constants ( , where is the extent of the emitting region ) and the second term provides only a function contribution at . thus eq . ( [ final ] ) provides a relation ) but implicitly assumed that ( see also ) . ] between the spectrum of the modified centroids and the transversal spectrum of the 3d velocity field .we should note that the relation in eq .( [ final ] ) does not depend on a particular form ( power - law is a common assumption ) of the underlying spectra . if the turbulent velocity field is mostly solenoidal ( see ) , the isotropic spectrum that is used for describing hydrodynamic turbulence is uniquely defined through , namely , .in what follows we will include the normalized centroids in their usual form ( see ) intuitively it is clear that the power spectrum of the centroids given by eq .( [ c ] ) should provide a better fit to the power spectrum of the velocity ( the spectral index , not the amplitude ) than that of .indeed , the contributions of density fluctuations are mitigated by the division over the column density in eq .( [ c ] ) .the testing was done using the files obtained through simulations of compressible mhd turbulence .the description of the code and the numerical simulations can be found in and in .we use the same data cube that we used in .namely , the dimension of the cube is , and the mach number .the result of the mhd simulations are velocity , density and magnetic field data .we use density and velocity to produce spectral line data cubes , the procedure is described in .we calculated , , and from synthetic spectral line data cubes .the spectra were obtained by applying a fast fourier transform to the 2d distribution of the centroids and column density values .the power spectrum of the mvcs was calculated subtracting times the column density power spectrum , from the power spectrum of ( see eq .[ m ] ) . for the original data cubes ( with a steep density spectrum ) , the differences among all three types of centroids were marginal for the dynamical range studied , although unnormalized centroids systematically show more deviations from the velocity power spectrum . 
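the pipeline just described is compact enough to sketch. the code below computes unnormalized, normalized, and modified centroids from a synthetic cube and ring-averages their 2d power spectra; the random fields stand in for the mhd data cubes, so only the procedure, not the numbers, mirrors the text.

```python
# a sketch of the centroid pipeline described above, applied to random
# synthetic fields standing in for the mhd simulation data (numbers are
# illustrative; only the procedure mirrors the text).
import numpy as np

rng = np.random.default_rng(3)
nz, ny, nx = 64, 128, 128
rho = np.exp(rng.normal(0.0, 0.3, (nz, ny, nx)))  # positive "density"
v = rng.normal(0.0, 1.0, (nz, ny, nx))            # z-velocity field

I = rho.sum(axis=0)                 # column density map
S = (v * rho).sum(axis=0)           # unnormalized centroid map
C = S / I                           # normalized centroid map
v2 = np.mean(v**2)                  # <v^2>; observationally, from the
                                    # second moment of the spectral lines

def ring_spectrum(field, nbins=40):
    """isotropically (ring-) averaged 2d power spectrum vs |K|."""
    f = np.fft.fftshift(np.fft.fft2(field - field.mean()))
    p = np.abs(f) ** 2
    ky, kx = np.indices(p.shape)
    k = np.hypot(ky - p.shape[0] // 2, kx - p.shape[1] // 2)
    bins = np.linspace(1, min(p.shape) // 2, nbins + 1)
    which = np.digitize(k, bins)
    return np.array([p[which == i].mean() for i in range(1, nbins + 1)])

# mvc spectrum: subtract <v^2> times the column-density power spectrum
p_mvc = ring_spectrum(S) - v2 * ring_spectrum(I)
p_norm = ring_spectrum(C)
print(p_mvc[:5])
print(p_norm[:5])
```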
to enhance the effect of density, we made the density spectrum shallow by changing the amplitudes of the fourier components of the density, but, in order to preserve the density-velocity correlations, we kept the phase information (see ). the results are shown in fig. 1. it is clear that the three measures provide a good fit at large scales, but at small scales normalized and unnormalized centroids tend to be density dominated. an additional advantage of using mvcs over the conventional normalized centroids is that mvcs provide the correct amplitude of the spectrum, while normalized centroids at best give the correct index (logarithmic slope). [fig. 1 (fig:comp_mhd): power spectra of the three centroid measures. the dotted line corresponds to the spectrum of density modified by changing the amplitudes of the fourier components of the original spectrum, shallow over the inertial range; the vertical position of the normalized centroids (triangles) is shifted vertically for visual purposes.] it is also clear that for a shallow density spectrum, the improvement from unnormalized centroids to normalized centroids is marginal. our numerical study shows that mvcs are advantageous when the underlying spectrum of density is dominated by fluctuations at small scales. moreover, the main advantage of mvcs is that they allow an explicit statistical description of the procedures involved, which makes the analysis more reliable. our study revealed that the mvcs can be successfully used to obtain the velocity statistics from synthetic observational data. another technique, namely the vca, is complementary to the mvcs. vca is more robust to density-velocity correlations. for instance, it was shown in that vca can provide correct statistics of velocity fluctuations even in the case when the velocity and density are correlated at the maximum allowed by the cauchy-schwarz inequality (see ). on the contrary, if we assume such a maximal correlation for the mvcs we will not get the correct scaling. comparing results obtained with the mvcs and the vca, one can get a better handle on what the velocity statistics are. it is advantageous to combine the vca and mvcs techniques. we can apply vca to the largest scales, where turbulence is supersonic, and study mvcs at all scales, including those where the turbulence becomes subsonic. the correspondence of the spectral slopes obtained with the two different techniques would substantially increase the reliability of the result. there is a more subtle, but important, point. mvcs provide the spectrum of solenoidal motions, while vca is sensitive to both potential and solenoidal motions. this may be used to separate potential and solenoidal motions in order to estimate the role of compressibility. in addition, because the thermal velocity acts in the vca analysis the same way as the thickness of the channel maps, measurements of the spectrum with the mvcs can provide a means of estimating the thermal velocity. it is essential to combine the advantages of the techniques. our analysis of numerical data in section 4 shows that the worries about the velocity centroids (see ), although justified, may be somewhat exaggerated.
if we _ assume _ that the criterion for the use of the centroids is satisfied ( that the second term in eq .( [ m ] ) is small compared to the first one ) for earlier data sets , the analysis of observational data in provides a range of power - law indexes .their results obtained with structure functions if translated into spectra are consistent with , where with a standard deviation of .the kolmogorov index falls into the range of the measured values .l1228 exhibits exactly the kolmogorov index as the mean value , while other low mass star forming regions l1551 and hh83 exhibit indexes close to those of shocks , i.e. . the giant molecular cloud regions show shallow indexes in the range of ( see ) .it worth noting that obtained somewhat more shallow indexes that are closer to the kolmogorov value using autocorrelation functions .those may be closer to the truth as in the presence of absorption in the center of lines , minimizing the regular velocity used for individual centroids might make the results more reliable ( see also first footnote in 3 ) .repeating of the study using mvcs and taking care of the absorption effects may be advantageous .the emergence of the kolmogorov index for the compressible magnetized gas may not be so surprising . for a range of mach numbers reported the existence of distinct alfvenic , slow , and fast wave cascades .the alfvn and slow waves result in kolmogorov - type scaling .the application of the vca to small magellanic cloud resulted in the velocity spectrum consistent with those predictions .however , other researchers ( e.g. ) reported indexes that varied with the mach number of the simulations .it is clear that more theoretical and observational work is necessary .\1 . we derived a criterion when the velocity centroids may reflect the actual underlying velocity statistics .we introduced mvcs that may recover velocity statistics from spectral line data in cases when the traditional centroid analysis fails .our numerical tests show that both the mvcs and the normalized velocity centroids ( see eq .( [ c ] ) ) successfully recover the velocity information from data cubes obtained via numerical simulations of the compressible mhd turbulence if the density spectrum is steep .mvcs and the vca are complementary independent techniques .obtaining the same power spectrum with both of them enhances confidence in the result .mvcs are most useful when turbulence is subsonic or / and does not follow the power law . additional information on the solenoidal versus potential motions , temperatures of gas etc .may be obtained combining these two techniques .+ + we thank the referee anthony minter for valuable suggestions that improved this work .the research by a.l . is supported by the nsf grant ast-0125544 .a.e . acknowledges financial support from conacyt ( mexico ) .fruitful communications with volker ossenkopf are acknowledged .
we address the problem of studying interstellar turbulence using spectral line data . we find a criterion for when the velocity centroids may provide trustworthy velocity statistics . to enhance the scope of centroid applications , we construct a measure that we term `` modified velocity centroids '' ( mvcs ) and derive an analytical solution that relates the 2d spectra of the modified centroids with the underlying 3d velocity spectrum . we test our results using synthetic maps constructed with data obtained through simulations of compressible magnetohydrodynamical ( mhd ) turbulence . we show that the modified velocity centroids ( mvcs ) are complementary to the velocity channel analysis ( vca ) technique . employed together , they make the determination of the velocity spectral index more reliable and applicable to a wider variety of astrophysical situations .
computer hardware is increasingly being shared between multiple , potentially untrusted , programs . examples of such sharing range from cloud services , where a single computer may share workloads of multiple clients , via mobile phones that run multiple apps , each authored by a different developer , to web browsers displaying pages from different sites . to protect confidential or private information that some of these programs may access , the system imposes a _ security policy _ that prevents the dissemination of such information . one threat to the security of the system comes from _ covert channels _ , which allow colluding programs to bypass the security policy by transferring information over media that are not controlled by the system . a typical scenario includes two programs : a _ trojan _ program , which has access to sensitive information but is confined by the security policy ( i.e. , prevented from sending information to arbitrary destinations ) , and a _ spy _ process that does not have access to the sensitive information but can communicate with fewer restrictions . using a covert channel , the trojan can send the sensitive information to the spy , which can then exfiltrate it from the system . covert channels are often classified as either _ storage _ or _ timing _ channels . storage channels exploit the ability of one program to store data that the other program can read . timing channels , in contrast , exploit timing variations for transferring information . past research has demonstrated the possibility of completely eliminating storage channels . for timing channels , the picture is not that clear . some classes of timing channels can be eliminated by ensuring deterministic timing of any externally visible effects of programs , and mitigation strategies are often suggested for published microarchitectural channels . however , there is currently no known method that guarantees the absence of timing channels on shared hardware . in this paper we examine the degree to which it is possible to prevent timing channels on modern hardware .
specifically , we look at intra - core channels , which exploit hardware state for signalling between processes or vms that time - share a processor core . this means we not only ignore channels between cores , but also between concurrent executions on a single core ( hyperthreading ) ; channels between hyperthreads are well - documented and understood and are probably impossible to close . this is thus a fairly restricted scenario and one would expect that all channels could be trivially ( albeit expensively ) closed by flushing all cached state on a context switch , using the appropriate hardware mechanisms , so the main challenge would seem to be how to minimise the cost of the defence . however , we show reality to be different : we demonstrate that on recent arm as well as x86 processors there are channels that resist all attempts to close them by flushing the state they exploit . specifically , we implement several covert - channel techniques , including the prime+probe attack on the l1 data cache and the l1 instruction cache . we also implement new attacks targeting the translation lookaside buffer ( tlb ) , the branch predictor unit ( bpu ) , and the branch target buffer ( btb ) . we measure the channels created by these techniques , first without mitigations , to demonstrate the existence of the channel , and then with the use of mitigation techniques , to measure the remaining channel . our results show that some channels remain even after activating all of the available mitigation techniques . in particular , we note that the x86 does not support any instruction or documented method for clearing the state of the bpu . consequently , the branch prediction channel remains open . we further note that , popular belief notwithstanding , invalidating the contents of the caches does not close all cache - based channels and that , at least on intel x86 , flushing the tlb has negligible effect on the tlb channel . in summary , we make the following contributions : * we identify a limited scenario for investigating microarchitectural - timing - channel elimination . * we implement multiple persistent - state microarchitectural covert channels , some of which have previously only been speculated about but never implemented , identify existing mitigation techniques available in existing processors , and measure the channels with and without those mitigation techniques . our results show that on present hardware , intra - core channels remain even when using all hardware - supported flush operations . we begin by describing the relevant components of modern processors , and how they can be leveraged for timing channels . the _ instruction set architecture _ ( isa ) is the hardware - software contract for a processor family , such as x86 or arm cortex - a . the isa specifies the functional operation of the processor , including the instructions that the processor can execute , their encoding , and the available registers and their functions . while there may be some minor variations in feature support between processors in a family , the core of the isa remains invariant , allowing seamless software support across the family . the isa abstracts over a processor s implementation , which is made up of a large number of components , including functional units , caches , buses and interfaces , collectively called the _ microarchitecture _ .
while functionally transparent , details of the microarchitecture affect the timing of operations . much of this is the result of the processor caching information in many places , in order to improve average - case execution speed . any such caches make the latency of operations dependent on execution history and thus create the potential for timing channels . we now describe the relevant components . * cpu caches * these are , in terms of their effect on timing , the most noticeable components . the caches bridge the speed gap between the processor and the much slower memory , by holding recently accessed data or instructions . a cache is a bank of high - speed memory , typically using static random access memory ( sram ) technology , which is faster albeit more expensive than the dynamic random access memory ( dram ) technology commonly used in the main memory . faster technology and greater proximity to the processing core ( enabled by smaller size ) mean that access to the cache is much faster than to the main memory . caches utilise the spatial and temporal locality of programs for reducing average access time . * cache organisation * the cache is organised in _ lines _ of a fixed , power - of - two size , typically ranging from 32 to 128 bytes . the line is the unit of allocation and transfer to memory , i.e. , at any time a line either is invalid or caches a size - aligned block of memory . the lowest - order bits of the address of a data item or instruction are the line offset , i.e. , they determine where the item is located within the line . caches are generally _ set associative _ , meaning that a fixed number of lines are grouped into a _ set _ ; this number is the associativity of the cache , and the lines of a set are also often called _ ways _ . cache content is located by hashing the address onto a set number . in most cases the hash is just the low - order address bits after stripping the offset bits . within the set , the correct line is found by associative lookup , comparing the address bits with a _ tag _ stored in each line . if none of the tags match , the item is not in the cache ( i.e. , a cache miss ) . * cache hierarchy * as the speed gap between processor and memory spans orders of magnitude , modern processors have a hierarchy of caches . closest to the core is the l1 cache , generally split into separate instruction and data caches , i- and d - cache , and always private to a core . further levels are unified , larger and slower , down to the _ last - level cache _ ( llc ) , which is generally shared between cores . * cache addressing * l1 caches are frequently _ virtually addressed _ , i.e. , lookup is by the virtual address of the item . all other caches are _ physically addressed _ . on recent intel processors , the llc lookup uses a more complex hash rather than just the low - order address bits . the hash function is unspecified , but has been reverse - engineered . * translation lookaside buffer * the mapping from virtual to physical addresses is specified in the page table .
to avoid frequent lookups , the processor caches translations in the _ tlb _ . it is usually organised as a two - level cache . intel processors generally feature set - associative tlbs , while on many other architectures they are fully associative ( single set ) or a mixture ( e.g. , a fully - associative first - level and a set - associative second - level tlb on arm cortex a9 ) . intel processors also feature a separate cache for page directory entries . * branch prediction * to avoid pipeline stalls while processing branch instructions , processors feature a _ bpu _ , which predicts the target of branches . this allows the processor to speculatively fetch instructions following the branch . in the case of a misprediction , the speculative execution is rolled back and processing continues on the correct path . a typical bpu consists of at least two subunits : the _ btb _ and the _ history buffer _ . * history buffer * the history buffer aims to predict the outcome of conditional branches , i.e. , whether the branch is taken or not . prediction is typically based on the history of the specific branch , possibly in combination with the outcomes of branches leading to it . the history buffer maintains a state machine for each branch ( or a combination of a branch and branching history ) . in the common two - bit predictor , the predictor needs to mispredict twice in order for the prediction to change . * branch target buffer * the btb caches destination addresses of unconditional and taken conditional branches . details are generally not specified by the manufacturer , but can frequently be reverse - engineered . * prefetching * modern processors increase the effectiveness of the cache by predicting which memory locations will be accessed in the near future , and pre - loading these locations into the cache , a process called _ prefetching _ . this works best for constant - stride accesses , but modern prefetchers can deal with more complex access patterns . the exact operation of the prefetcher is generally unspecified . * microarchitectural state * the microarchitectural components described above have in common that they maintain some state which is based on prior computation , either by caching raw data or instructions , or by implementing state machines that capture recent execution history . this state is functionally transparent , i.e. , it does not affect the results or outcomes of the programs . however , because this state is used to improve the performance of the program , it affects operation timing and is therefore visible through variations in the timing of program executions . * timing channels * whenever this state is shared between different program executions there is a potential timing channel , as the timing of one program may depend on the execution history of another .
in general , a channel will exist unless the state is either strictly partitioned between programs ( e.g. , if the hardware tags the state with a program id ) or flushed when switching the processor between programs ( context switch ) . * covert channels * if those conditions are not met , then a trojan can , through its own execution , force the hardware into a particular state and a spy can probe this state by observing the progress of its own execution against real time . this constitutes a covert channel , i.e. , an information flow bypassing the system s security policy . for example , the trojan can modulate its cache footprint , encoding data into the number of cache lines accessed . the spy can read out the information by observing the time taken to access each cache line . or the trojan can force the branch predictor state machine into a particular state , which the spy can sense by observing the latency of branch instructions ( and thus whether they are predicted correctly ) . the actual implementations of covert channels depend on the details of the particular microarchitectural feature they exploit . a large number of such implementations have been described , as surveyed by ge et al . historically , covert channels were mostly discussed within the scope of multilevel security ( mls ) systems . such systems have users with different classification levels and the system is required to ensure that a user with a high security clearance , e.g. , a top secret classification , does not leak information to users with a lower clearance . the advent of modern software deployment paradigms , including cloud computing and app stores , increased the risk of covert channels and shifted some of the focus from military - grade systems to commercial and personal environments . covert channels break the isolation guarantees of cloud environments where workloads of several users are deployed on the same hardware . similarly , mobile devices rely on the system s security policy to ensure privacy whilst executing software of multiple , possibly untrustworthy developers . * side channels * the threat of microarchitectural channels is not restricted to environments compromised by trojans . a _ side channel _ is a special case of a covert channel , which does not depend on a colluding trojan , but instead allows a spy program to recover sensitive information from a non - colluding _ victim _ . where they exist , side channels pose a serious threat to privacy and can be used to break encryption . in general , collusion allows better utilisation of the underlying hardware mechanism and hence covert channels tend to have much higher bandwidth than side channels based on the same microarchitectural feature ; the capacity of the covert channel is the upper bound of the corresponding side channel capacity . this means that closing covert channels implicitly eliminates side channels . for that reason we focus on covert channels in this work , as we aim to establish to which degree microarchitectural timing channels can be eliminated . we note that any timing channel that allows the spy to obtain address information from the trojan ( i.e. ,
which data or instructions it accesses ) can potentially establish a side channel . * prime+probe * prime+probe is a specific and commonly used technique for exploiting set - associative caching elements as cache - based timing channels . it has been applied to the l1 d - cache , l1 i - cache , and the llc . using the technique , the spy primes the cache by filling some of the cache sets with its own data . the trojan uses each of the sets that the spy primes to transmit one bit . for clear bits , the trojan leaves the spy s data in the set . for set bits , the trojan replaces the spy s data with its own . the spy then probes the cache state to receive the information : it measures the time it takes to access the data it originally stored in the sets . a long access time indicates that some data in the cache set was replaced between the prime and the probe stages , and therefore the corresponding bit is set . a short access time indicates that the data is still cached , and thus the corresponding bit is clear . a microarchitectural timing channel can be eliminated if the underlying hardware state is either strictly partitioned or flushed . partitioning is possible , e.g. , in physically - addressed caches , using memory colouring . this utilises the fact that in an associative cache , any particular memory location can only be cached in a specific set . the os can allocate physical memory to security domains such that they cannot compete for the same cache sets . where partitioning is not possible , e.g. , in virtually - addressed caches , such as the tlb , or where it is not possible to associate state with domains , as may be the case in the state machines used in the branch predictor or prefetcher , state must be flushed on a context switch . architectures generally provide instructions for flushing caches , but not for all of the other state . a frequently suggested defence is injecting noise , e.g. , via random perturbations of the state . while the approach reduces the usable capacity of channels , it cannot eliminate the signal ( unless the noise anti - correlates with the transmitted signal ) and the cost of reducing the signal - to - noise ratio quickly becomes prohibitive . furthermore , there are sophisticated channel implementations that are robust against noise , and covert channels have been demonstrated to work in noisy environments . another countermeasure frequently suggested is to fuzz the clock or to reduce its resolution . we note that fuzzing the clock just introduces noise into the system , and thus has the same limitations as other ways of adding noise . covert channels have been implemented even in the absence of high - resolution clocks . hence , these methods cannot completely eliminate covert channels and are therefore not suitable for our purposes . our aim is not to minimise the cost of channel mitigation , but rather to establish whether the channels can be closed at all . we therefore use the ( costly ) brute - force approach of flushing any state that the hardware allows us to flush on each context switch . covert channels exploit shared hardware features to provide information flow between programs running in different security domains . as such they are independent of the operating system ( os ) or hypervisor separating the domains . the only role the os or hypervisor plays is in applying mitigations , trying to close the channels . therefore , the actual os or hypervisor used on the platform is of little importance , other than that it must allow implementation of the mitigations .
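the prime+probe protocol described earlier in this section can be illustrated with a toy simulation of the set - eviction logic . real attacks time actual memory accesses ; here the cache geometry , the per - line access costs and all names are our own illustrative assumptions , not measurements of any particular processor .

```python
import random

SETS, WAYS = 64, 8            # hypothetical l1 d-cache geometry
HIT, MISS = 1, 10             # hypothetical per-line access costs

def trojan_send(cache, symbol):
    # encode `symbol` by refilling the first `symbol` sets with trojan lines
    for s in range(symbol):
        cache[s] = ["trojan"] * WAYS

def spy_probe(cache):
    # re-access the priming data; every evicted set costs a miss per line
    return sum(HIT * WAYS if all(l == "spy" for l in lines) else MISS * WAYS
               for lines in cache)

cache = [["spy"] * WAYS for _ in range(SETS)]   # prime: spy fills every set
symbol = random.randrange(SETS + 1)             # trojan's input symbol
trojan_send(cache, symbol)                      # encode
probe_time = spy_probe(cache)                   # probe
decoded = (probe_time - SETS * WAYS * HIT) // ((MISS - HIT) * WAYS)
print(symbol, decoded)                          # the two values agree
```

the decoding step simply inverts the linear relation between the number of evicted sets and the total probe time ; in a real measurement the spy instead compares noisy per - set timings against a threshold .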
for our experiments we use the sel4 microkernel , for a number of reasons . first , sel4 is a small ( about 10,000 lines of code ) and simple system , which makes it relatively easy to implement mitigations , compared to a large , complex system such as linux . furthermore , sel4 is specifically designed for use in security - critical systems , and has undergone comprehensive formal verification , with proofs of implementation correctness and security enforcement . in particular , sel4 has been proved free of storage channels , which means any remaining channels must be timing channels , simplifying the analysis of results . sel4 can be used as the basis of a general - purpose os , as a separation kernel or as a hypervisor . in this work we use it as a separation kernel . this means that our setup contains the minimum amount of software , with our attack code running in a minimal environment directly on top of sel4 , with no actual os services . this avoids any interference from other software components . note that using sel4 as a hypervisor would expose the same microarchitectural channels , and possibly more . demonstrating channels in the separation - kernel setup implies more generality , as the same channels will exist in the virtualisation setup . in our threat model we assume that the adversary manages to execute a trojan program within a secure environment . for example , the adversary may compromise a service within the secure environment and inject the trojan s code . alternatively , the adversary may be , or may control , a corrupt developer that inserts malicious code into a software product used in the secure environment , e.g. , an app used to process private data on a smartphone . executing within the secure environment gives the trojan access to sensitive data which the adversary wants ; however , the security policies of the secure environment prevent data exfiltration by the trojan . additionally , the adversary controls a spy program which executes on the same computer as the trojan , for example , in a different virtual machine . the spy is not executing within the same secure environment and consequently can communicate freely with the adversary , but it does not have access to the sensitive data . the adversary s aim is to exploit a microarchitectural covert channel in the shared hardware . if such a channel exists and is not controlled by the system , the trojan can use the channel to send the sensitive data to the spy , which can then send the data to the adversary . in this work we check whether the system can prevent the adversary from exploiting such covert channels . as indicated earlier , we focus in this work on channels that can be exploited by a trojan and spy time - sharing a processor core . this allows us to ignore _ transient - state _ channels , i.e. , those that exploit the limited bandwidth of processor components . transient - state channels rely on concurrent execution of the trojan and the spy and are therefore automatically excluded in the time - sharing scenario . we thus only need to handle _ persistent - state _ channels , which rely on exhausting the storage capacity of processor elements .
in a typical persistent - state channel , the spy sets the targeted component to a known state . the trojan executes , modifying the state based on the data to transmit . the spy then uses operations whose timing depends on the state of the component to measure the modifications the trojan made and recover the data . because persistent - state channels require storage in the targeted element , they often target caching elements . examples of targeted elements include the data caches , instruction caches , tlb , and bpu . as we are exploring microarchitectural channels , we ignore timing channels that are controlled by software . for example , the trojan could create a timing channel by varying its execution time before yielding the processor . we note that the system can protect against such channels by padding the execution time of the trojan following a yield . moreover , because we investigate the processor s ability to close the channel , we only investigate channels within the processor itself . external channels , such as the dram open - row channel , are outside the scope of this work . in this work we examine the level of support that manufacturers provide for eliminating microarchitectural timing channels in their processors . for this purpose , we implement multiple covert channels , identify the processor instructions and available information that can be used for mitigating the channels , and measure the capacity of the channel with and without the mitigation techniques . these steps are described in greater detail below . following cock et al . , we view a channel as a pipe into which a sender ( the trojan ) places _ inputs _ drawn from some set and which a receiver ( the spy ) observes as _ outputs _ from a set . both the inputs and the outputs depend on the specific covert channel used . we implement four channels , each designed to target a specific microarchitectural component ; note , however , that as these components are not isolated from the rest of the processor , the channels are affected by components other than those targeted . we target the following channels : * l1 data cache * this channel uses the prime+probe attack technique described in on the l1 d - cache . the input symbols consist of numbers between 0 and the number of sets in the cache . to send a symbol , the trojan reads enough data to completely fill that many cache sets . the spy performs the prime+probe attack by first filling the whole cache with its own data and then measuring the time to read the data from each cache set . the output symbol is the sum of the read measurements . for the implementation we adapt the l1 prime+probe attack of the mastik toolkit to the processors we use . note that we could use a more sophisticated encoding of the input symbols to increase capacity . however , the point is not to establish the maximum channel capacity , but to investigate whether it can be closed . we therefore keep things as simple as possible . * l1 instruction cache * here we use the prime+probe attack on the l1 i - cache . the approach is identical to the l1 d - cache channel , except that instead of reading data , the programs execute code in memory locations that map to specific cache sets . the implementation also uses an adaptation of the mastik code . * translation lookaside buffer * to implement a tlb - based channel , our trojan sends an input symbol consisting of a number between 0 and 128 ( the size of the arm tlb and twice the size of the x86 tlb ) .
to send a symbol , the trojan reads a single integer from each of that many pages . the spy measures the time to access a number of pages . in order to reduce self - contention in the spy , it only accesses half of the tlb ( 64 or 32 pages , respectively ) . a more sophisticated design would take into account the structure of the tlb and aim to target the individual associative sets . as before , we opted for simplicity rather than capacity . the only prior implementation of a tlb - based channel is , which uses an intra - process tlb side channel to bypass the protection of kernel address space layout randomisation ( kaslr ) . we are not aware of any prior implementation of inter - process tlb channels , and past work considers such channels infeasible because the tlb is flushed on context switch . * branch prediction * the branch prediction channel exploits the processor s history buffer . in each time slice , the trojan sends a single - bit input symbol . both the trojan and the spy use the same channel code for sending and receiving . the code , shown in , consists of a sequence of conditional forward branches that are always taken ( line 8 ) . these set the history to a known state . the next code segment ( lines 10 - 17 ) measures the time it takes to perform a branch ( line 13 ) that conditionally skips over 256 ` nop ` instructions ( line 14 ) . the branch outcome depends on the least significant bit of register ` % edi ` . the return value of the code ( register ` % eax ` ) is the measured time . ( a toy illustration of the two - bit predictor dynamics underlying this channel is sketched below , before we continue . ) in some cases the explanation for the remaining channels is straightforward . for example , intel architectures do not support any method of clearing the state of the bpu . consequently , the branch prediction channel remains unchanged even when we enable all of the protection provided by the processor . in other cases the story is more intricate . we now look at some examples . we now continue the investigation of the arm cortex a9 l1 i - cache channel that we started in . recall that shows that even when we flush the caches , we can see a horizontal transition between two distinct distributions , indicating that a channel exists . in the arm cortex a9 , the distance between two addresses that map to the same cache set , known as the cache _ stride _ ( and equal to the cache size divided by associativity ) , is 8 kib . clearly , the transition occurs at 4 kib , which matches the page size . this may indicate that the channel originates not from the cache itself , but from some part of the virtual memory unit , for example , from the tlb . hence , clearing the tlb can , potentially , eliminate this channel . ( figures [ f : cm - a9-l1i - all ] and [ f : avg - a9-l1i - all ] . ) when applying all of the countermeasures available on the arm cortex a9 processor , including flushing the caches , btb and tlb and disabling the prefetcher , we get the channel matrix in . the channel is still significant , as is clearly evident from . input values smaller than 14 result in below - average output symbols , whereas symbols in the range 15 - 50 produce above - average output . while the channel matrix demonstrates the existence of a channel , it does not show the cause of the channel . one possible explanation for the channel is that the processor maintains some state that is not cleared by flushing the caches and is not deactivated when disabling the prefetcher . an alternative explanation is that the state is maintained outside the processor , for example resulting from a dram open - row channel .
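as an aside , the history - buffer mechanism that the branch - prediction channel exploits can be modelled with a single two - bit saturating counter shared between trojan and spy . this is our own illustrative sketch , not the measured attack code : the miss penalty and the training loop length are assumptions .

```python
class TwoBitPredictor:
    """one two-bit saturating counter: states 0,1 predict not-taken; 2,3 taken."""
    def __init__(self):
        self.state = 0

    def branch(self, taken):
        mispredicted = (self.state >= 2) != taken
        self.state = min(self.state + 1, 3) if taken else max(self.state - 1, 0)
        return mispredicted

MISS_PENALTY = 20   # hypothetical extra cycles on a misprediction

for secret_bit in (0, 1):
    bpu = TwoBitPredictor()            # state survives the context switch
    for _ in range(4):                 # trojan trains the predictor
        bpu.branch(taken=bool(secret_bit))
    # the spy times one always-taken branch; its latency reveals the training
    latency = MISS_PENALTY if bpu.branch(taken=True) else 1
    print(secret_bit, latency)         # prints "0 20" and "1 1"
```

without an architectural way to reset such a counter on a context switch , the spy s branch latency remains a function of the trojan s behaviour , which is exactly what the measurements in this section show .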
to investigate further , we look at different indicators : we use the performance monitoring unit ( pmu ) to count the l1 i - cache refill operations executed during the probe . as we have the prefetcher disabled , these refills should be independent of timing . the figure shows the results . comparing with , we see consistent variations in the range below an input value of 50 . as these refills are triggered by internal processor state , we can preclude external effects , such as dram latency . without countermeasures , the channel that exploits this cache behaves as expected , as is evident from . ( figure [ f : cm - sb - l1i - none ] . ) because this is a cache channel , we can reasonably expect that invalidating the caches on the context switch would eliminate the channel . as in the case of the arm cortex a9 processor , the channel matrix with this countermeasure shows a much reduced channel . however , as on the arm cortex a9 , the channel matrix still shows horizontal variation , indicating a small but definite remaining channel . ( figure [ f : cm - sb - l1i - wbinvd ] . ) we further evaluate the channel when flushing the tlb and disabling the prefetcher , but a distinct channel still remains , as shown in . ( figures [ f : cm - sb - l1i - all ] and [ f : cm - hw - l1i - none ] . ) on the haswell microarchitecture , the l1 i - cache channel is comparatively small , see . surprisingly , disabling the prefetcher _ increases _ the capacity of the channel . it seems that between sandy bridge and haswell , intel modified the prefetcher and possibly the branch predictor , leading to better masking of l1 - cache latencies , thus reducing the effectiveness of the attack . ( figure [ f : cm - hw - l1i - pref ] . ) enabling all mitigations fails to close the channel , but still _ decreases _ it , from a capacity of 0.65 b to 0.25 b . the channel is clearly evident in . ( figure [ f : cm - hw - l1i - all ] . ) we now turn our attention to the branch prediction channel . recall that in the channel implementation there are only two potential input values , 0 and 1 , corresponding to a branch taken and not taken . shows the distribution of output values for each of these input values , both without and with mitigations . we first note that in both cases the distributions of the output symbols for inputs 0 and 1 are clearly distinct . for the non - mitigated case , the median output value for input 0 is 36 cycles , whereas the median output for input 1 is 60 . mitigation changes the access times because code now needs to be brought from memory , rather than from the l1 i - cache . however , the output values for inputs 0 and 1 are even further apart than in the case of no mitigation , with median output values being 172 and 244 , respectively . like the sandy bridge processor , the haswell processor shows clearly distinct output distributions for the different input symbols . unlike the sandy bridge processor , on haswell we do not see such a large difference between the output values for the mitigated and the non - mitigated cases . we believe that the haswell processor prefetches soon - to - be - executed instructions even when prefetching is disabled . ( figure [ f : cm - sb - tlb - none ] . ) the last channel we investigate uses the tlb .
on the sandy bridge architecture , we see a very distinct channel , despite the non - global tlb entries being flushed on the context switch due to updating the ` cr3 ` register ( we run in 32 - bit mode , where the tlb is untagged and flushed by the hardware on each context switch ) . this is in contrast to the common belief that the tlb channel is not a threat to virtualised environments because of the mandatory flush . for good measure , we explicitly flush global entries as well , but the effect is minimal , as shown in . surprisingly , invalidating the cache does remove most of the channel , leaving only a small residual channel , as shown in . ( figures [ f : cm - sb - tlb - tlb ] and [ f : cm - sb - tlb - wbinvd ] . ) as we can see from the results , deploying all of the available methods of processor state sanitisation still leaves high - capacity channels . the countermeasures we deployed are often suggested for mitigating the exact channels we use . yet , in contrast with the popular belief , we find that , despite some being prohibitively inefficient , these countermeasures fail at eliminating the channels . we further find that none of the channels is completely eliminated even when we deploy all of the available countermeasures . the capacity of the residual channels may be small , but they still exist . ( figure [ f : cm - sb - l1d - all ] . ) as an example , the capacity of the residual intel sandy bridge l1 - d channel is 0.038 bits per symbol , with a potential error of up to 0.025 . that means that on average , a computationally unbounded adversary will require between 26 and 40 input symbols to transfer a single bit . with a transfer rate of 500 symbols per second , the bandwidth of the channel is at most 19 bits per second . while such a capacity may seem small and insignificant , we note that , as we indicated earlier , we did not build the channel to achieve high capacity . consequently , further engineering is likely to better exploit the underlying source of leakage and achieve a much higher capacity . moreover , for high - security systems , even channels with capacities below one bit per second may pose a threat . for example , the orange book recommends that channels with a bandwidth above 0.1 bits per second be audited . the main issue with these low - capacity residual channels is that we do not understand them . evidently , there is some state within the processor , but we do not know what this state is or how to manage it .
consequently , there is a real possibility that better understanding of this state will enable higher - bandwidth exploits . the only way to rule out such a possibility is through understanding the root cause of the channel . in this work we investigate intra - core covert channels in modern cpus . we implemented five different covert channels and measured their capacity on two microarchitectural implementations of each of the two most popular isas , x86 and arm . we identified processor tools to mitigate these covert channels , but demonstrated that these tools are not sufficient . we find that high - capacity channels remain in every architecture , even when implementing the most drastic ( and expensive ) countermeasures . it goes without saying that even if we were able to fully close a channel , this would not guarantee that there is no other hardware state that could be exploited , or that more sophisticated exploits of the same state would not succeed . we therefore have to conclude that , in the absence of improved architectural support for covert channel mitigation , these modern processors are not suitable for security - critical uses where a processor core is time - multiplexed between different security domains . this work only explores the tip of the iceberg . we have limited ourselves to intra - core channels in a time - sharing scenario . in doing that we ignored all transient - state covert - channel attacks and all attacks that rely on state outside the processor . the inevitable conclusion is that security is a losing game until the hardware manufacturers get serious about it and provide the right mechanisms for securely managing shared processor state . this will require additions to the isa that allow any shared state to be either partitioned or flushed . we would like to thank dr stephen checkoway , who helped uncover documentation on processor functionality .
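as a back - of - envelope check of the capacity figures quoted in the discussion above , the following few lines reproduce the reported numbers . the relations are standard ( bandwidth = capacity x symbol rate ; symbols per bit = 1 / capacity ) ; reading the quoted error as bounding the capacity below at roughly 0.025 bits per symbol is our interpretation .

```python
capacity = 0.038   # bits per symbol: residual sandy bridge l1-d channel
rate = 500         # symbols per second

print("bandwidth   : %.1f bits/s" % (capacity * rate))  # 19.0 bits/s
print("symbols/bit : %.0f" % (1 / capacity))            # ~26
# a lower capacity bound of ~0.025 bits per symbol gives the other
# end of the quoted 26-40 range:
print("worst case  : %.0f" % (1 / 0.025))               # 40
```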
we investigate how different categories of microarchitectural state on recent arm and x86 processors can be used for covert timing channels , and how effective architecture - provided mechanisms are in closing them . we find that in recent intel processors there is no effective way of sanitising the state of the branch prediction unit and that , contrary to often - held belief , flushing the translation lookaside buffer on intel processors does nothing to mitigate attacks based on this component . we further show that in both arm and x86 architectures flushing all the hardware caches is not sufficient to close cache - based timing channels . the implication of this is that secure sharing of a processor core in these architectures is not possible , irrespective of cost .
enthusiasm for the use of big data in the improvement of health service is huge , but there is a concern that without proper attention to some specific challenges the mountain of big data efforts will bring forth a mouse . now , there is no technical problem with `` big '' in healthcare . electronic health records include hundreds of millions of outpatient visits and tens of millions of hospitalizations , and these numbers grow exponentially . the main problem is the quality of data . `` big data '' very often means `` dirty data '' and the fraction of _ data inaccuracies _ increases with data volume growth . human inspection at the big data scale is impossible and there is a desperate need for intelligent tools for accuracy and believability control . the second big challenge of big data in healthcare is _ missed information _ . there may be many reasons for data incompleteness . one of them is health service `` fragmentation '' . this problem can be solved partially by the national and international unification of the electronic health records ( see , for example , health level seven international ( hl7 ) standards or discussion of the template for uniform reporting of trauma data ) . however , some fragmentation is unavoidable due to the diverse structure of the health service . in particular , the modern tendency for personalization of medicine can lead to highly individualized sets of attributes for different patients or patient groups . there are several universal technologies for the handling of missing data . nevertheless , the problem of handling missed values in large healthcare datasets is certainly not completely solved . it continues to attract the efforts of many researchers ( see , for example , ) because the popular universal tools can lead to bias or loss of statistical power . for each system , it is desirable to combine various existing approaches for the handling of missing data ( or to invent new ones ) to minimize the damage to the results of data analysis . for the best possible solution , we have to take into account the peculiarities of each database and to specify the further use of the cleaned data ( it is desirable to understand in advance how we will use the preprocessed data ) . in our work we analyze missed values in the tarn database . we use the preprocessed data for : * the evaluation of the risk of death , * the identification of the patterns of mortality , * approaching several old problems like the trunkey hypothesis about the trimodal distribution of trauma mortality . the ` two stage lottery ' non - stationary markov model developed in the sequel can be used for the analysis of missing outcomes in a much wider context than the tarn database and could be applied to the handling of data gaps in healthcare datasets which experience the problem of transferred and lost patients and missing outcomes . in this paper we analyze the unknown outcomes . the next task will be the analysis of missed data in the most common `` input '' attributes . there are more than 200 hospitals which send information to tarn ( tarn hospitals ) .
this network is gradually increasing . participation in tarn is recommended by the royal college of surgeons of england and the department of health . more than 93% of hospitals across england and wales submit their data to tarn . tarn also receives data from dublin , waterford ( eire ) , copenhagen , and bern . we use tarn data collected from 01.01.2008 ( start of treatment ) to 05.05.2014 ( date of discharge ) . the database contains 192,623 records and more than 200 attributes . sometimes several records correspond to the same trauma case because the patients may be transferred between tarn hospitals . we join these records . the resulting database includes data of 182,252 different trauma cases with various injuries . 16,693 records correspond to patients who arrived at ( were transferred from other institutions to ) tarn hospitals later than 24 hours after injury . this sample is biased ; for example , the fraction of dead outcomes ( fod ) for this sample is 3.34% , while fod for all data is 6.05% . this difference is very significant for such a big sample . ( if all the outcomes in a group of the trauma cases are known then we use the simple definition of fod in the group : the ratio of the number of registered deaths in this group to the total number of patients there . such a definition is not always applicable . the detailed and more sophisticated analysis of this notion follows in the next section . ) we remove these 16,693 trauma cases from analysis but use them later for validation of the `` mortality after transfer '' model . among them , there are 15,437 patients who arrived at a tarn hospital within 30 days after injury . we call this group ` in30 ' for short ( fig . [ maingroups ] ) . as a result we have 165,559 records for analysis ( ` main group ' ) . this main group consists of two subgroups : 146,270 patients from this group approached tarn during the first day of injury and remained in tarn hospitals or were discharged to a final destination during the first 30 days after injury . we call this group the ` available within 30 days after injury ' cases ( or ` available w30d ' for short ) . the other 19,289 patients were transferred within 30 days after injury to a hospital or institution that did not return data to the tarn system ( or to an unknown destination ) . we call them ` transferred out of tarn within 30 days after injury ' or just ` out30 ' ( fig . [ maingroups ] ) . the patients with the non - final discharge destinations ` other acute hospital ' and ` other institution ' were transferred from a tarn hospital to a hospital ( institution ) outside tarn and did not return to the tarn hospitals within 30 days after injury . the database includes several indicators for evaluation of the severity of the trauma case , in particular , the abbreviated injury scale ( ais ) , injury severity score ( iss ) and new injury severity score ( niss ) .
for a detailed description and comparison of the scores we refer readers to reviews . the comparative study of the predictive ability of different scores has a long history . the scores are used for mortality predictions and are tested on different datasets . the widely used definition of the endpoint outcome in trauma research is survival or death within 30 days after injury . a substantial number of tarn in - hospital deaths following trauma occur after 30 days : there are 957 such cases ( or 8% of tarn in - hospital deaths ) among 11,900 cases with ` mortuary ' discharge destination . this proportion is practically the same in the main group ( 165,559 cases ) : 894 deaths after 30 days in hospital ( or 7.9% ) among 11,347 cases with ` mortuary ' discharge destination . death later than 30 days after injury may be considered as caused by co - morbidity rather than the direct consequence of the injury . these later deaths are not very interesting from the perspective of an acute trauma care system ( as we cannot influence them ) , but they might be very interesting from the perspective of a geriatric rehabilitation centre or of an injury prevention program for elderly patients . on the other hand , when `` end of acute care '' is used as an outcome definition , a significant portion of deaths remains unnoticed . for example , among the 3332 trauma cases treated in the ulleval university hospital ( oslo , norway , 2000 - 2004 ) , 18% of deaths occurred after discharge from the hospital . the question of whether it is possible to neglect trauma - caused mortality within 30 days after trauma for the patients with the discharge destination ` home ' , ` rehabilitation ' and other ` recovery ' outcomes is not trivial . moreover , there are two questions : * how do we collect all the necessary data after discharge within 30 days after trauma ( a technical question ) ? * how do we classify the death cases after discharge within 30 days after trauma : are they consequences of the trauma or should they be considered as comorbidity with some additional reasons ? the best possible answer to the first question requires a special combination of technical and business processes to integrate data from different sources . the recent linkage from tarn to the office for national statistics ( ons ) gives the possibility to access the information about the dates of death in many cases . it is expected that the further data integration process will recover many gaps in the outcome data . the second question is far beyond the scope of data management and analysis and may be approached from different perspectives . whether or not the late deaths are important in a model depends on the question being asked . from the data management perspective , we have to give a formal definition of the outcome in terms of the available database fields . it is impossible to use the standard definition of survival or death within 30 days after injury because these data are absent . we define the outcome ` alive w30d ' for the tarn database to be as close to the standard definition as possible . in the tarn database the discharge destinations ` home ( own ) ' , ` home ( relative or other carer ) ' , ` nursing home ' , and ` rehabilitation ' are considered as final . if we assume that these trauma cases have the outcome ` alive w30d ' then we lose some cases of death . from the acute care perspective these cases can be considered as irrelevant . let us accept this definition . there still remain many cases with unknown outcome .
for analysis of these cases we introduce the outcome category ` transferred ' . in this category we include the cases which left the tarn registry to a hospital or other institution outside tarn , or to an unknown destination , within 30 days . the relations between the discharge destinations and these three outcomes are presented in table [ table:2 ] . it may be convenient to have formulas for estimation of fod . this smoothed fod is found as a linear combination ( [ sfod ] ) . for some severities the simple formulas do not make much sense and we have to use a refined model with the inclusion of age ( sec . [ sec : refine ] ) ; however , the number of cases is not sufficient for a good approximation for this extended model . for other severities the number of cases is also not sufficient , and we use three bins for trauma severities marked by the values of a coarse - grained variable ( with 48 , 53 , and 38 cases , respectively ) . the smoothed fod is presented as a quadratic function of this coarse - grained variable . all the coefficients are estimated using the weighted least squares method . the weight of a severities combination is defined as the sum of weights of the corresponding trauma cases . * medical commentary * the complete outcome dataset derived from this work allows all patients to be included in the analysis of the effect of combined injuries . the counter - intuitive results from this analysis ( some combinations of injuries seem to have better outcomes than a single injury of the same severity ) provide a fertile area for further work . it may be that the explanation is technical , within the way that the continuum of human tissue destruction from trauma is reduced to a simple 5 - point scale . each point on the scale is actually a band that covers a range of tissue damage . there might also be a true physiological explanation for the lower lethality of combined injuries , as each injury absorbs some of the force of impact . the same concept is used in formula 1 , where the cars are designed to break into pieces , with each piece absorbing some of the impact . in humans there is a well - known concept that the face can act as a ` crumple zone ' and mitigate the effect of force on the brain . the effect of injury combinations shown in table 6 is a novel finding that requires further analysis . in the early 1980s a hypothetical statement was published that the deaths from trauma have a trimodal distribution with the following peaks : immediate , early and late death . this concept was clearly articulated in a popular review paper in scientific american . the motivation for this hypothesis is simple : trunkey explains that the distribution of death is the sum of three peaks : `` the first peak ( _ ` immediate deaths ' _ ) corresponds to people who die very soon after an injury ; the deaths in this category are typically caused by lacerations of the brain , the brain stem , the upper spinal cord , the heart or one of the major blood vessels . the second peak ( _ ` early deaths ' _ ) corresponds to people who die within the first few hours after an injury ; most of these deaths are attributable to major internal hemorrhages or to multiple lesser injuries resulting in severe blood loss . the third peak ( _ ` late deaths ' _ ) corresponds to people who die days or weeks after an injury ; these deaths are usually due to infection or multiple organ failure . '' strictly speaking , the _ sum of three peaks does not have to be a trimodal distribution _ .
many groups have published refutations of trimodality : they did not find the trimodal distribution of death . in 1995 , sauaia et al reported a `` greater proportion of late deaths due to brain injury and lack of the classic trimodal distribution '' . wyatt et al could not find this trimodal distribution in data from the lothian and borders regions of scotland between 1 february 1992 and 31 january 1994 . they hypothesised that this may be ( partly ) due to improvements in care . recently , more data has become available and many such reports have been published . the suggestion that the improvement in care has led to the destruction of the second and third peaks has been advanced a number of times . in 2012 , clark et al performed an analysis of the distribution of survival times after injury using interval - censored survival models . they considered the trimodal hypothesis of trunkey as an artifact and provided arguments that the second peak observed in some works is a result of differences in the definition of death . k. soreide et al analysed the time distribution from injury to death stratified by cause of death . they demonstrated that the trimodal structure may , probably , be extracted from data but its manifestation is model dependent ( see fig . 6 in ) . there were several discussion papers published : `` trimodal temporal distribution of fatal trauma - fact or fiction ? '' . the trimodal hypothesis was tested on tarn data . it was demonstrated that `` the majority of in hospital trauma deaths occur soon after admission without further peaks in mortality '' . we reproduce the same results , indeed . but the tarn database , the largest european trauma database , allows us to make a _ stratified analysis of mortality _ , and the preliminary results demonstrate the richness of the possible patterns of death . let us test the famous trunkey hypothesis . in fig . [ fig : mortalitycoeff ] the daily mortality coefficients are presented for low severities ( a ) ( niss severities 1 - 8 , 27,987 cases in the database , 508 deaths in tarn , 3,983 patients transferred from tarn within 30 days after injury ) , and for the whole database ( b ) . for the prediction of death in the ` out30 ' group we used the model with retarded transfer . the non - monotonicity and peaks in the mortality for low severities of injury are illustrated in fig . [ fig : mortalitycoeff ] . further analysis of these patterns should involve other attributes such as the age of the patient and the type and localization of the injury . * medical commentary * it has been widely accepted that the trunkey trimodal distribution was a theoretical concept designed to illustrate the different modes of dying following injury . previous analysis of trauma data has looked at all patients and has not shown any mortality peaks ; however , this new analysis shows that there are peaks ( patterns ) if subgroups are studied . the underlying clinical or patient factors are not immediately obvious , but future analysis giving a better understanding of patterns of death could act as a stimulus to look for the clinical correlates of these patterns , with the potential to find modifiable factors . the pattern of death in various subgroups as shown in figure 7 is a novel finding that requires further analysis . handling of data with missed outcomes is one of the first data cleaning tasks .
for many healthcare datasets , the problem of lost patients and missed outcomes ( in 30 days , in six months or any other period of interest ) is important . there are two main approaches for solving this problem : 1 . to find the lost patients in other national and international databases ; 2 . to recover the distribution of the missed outcomes and all their correlations using statistical methods , data mining and stochastic modelling . without any doubt the first approach is preferable if it is available : it is better to have complete information when it is possible . nevertheless , there may be various organizational , economic and informational restrictions . it may be too costly to find the necessary information , or this information may be unavailable or may not even exist in databases . if there is only a small number of lost cases ( dozens or even hundreds ) then they may be sought individually . however , if there are thousands of losses then we need either a data integration system with links to appropriate databases like the whole nhs and ons data stores ( with the assumption that the majority of the missed data may be taken from these stores ) or a system of models for the handling of missed data , or both , because we might not expect all missed data to be found in other databases . in the tarn dataset , which we analyse in this paper , the outcome is unavailable for 19,289 patients . the available case study paradigm cannot be applied to deal with missed outcomes because they are not missed ` completely at random ' . non - stationary markov models of missed outcomes allow us to correct the fraction of death . two naive approaches give 7.20% ( available case study ) or 6.36% ( if we assume that all unknown outcomes are ` alive ' ) . the corrected value is 6.78% ( refined model with retarded transfer ) . the difference between the corrected and naive models is significant , whereas the difference between different markov corrections is not significant despite the large dataset . non - stationary markov models for unknown outcomes can utilize any scheme of predictive models using any set of available attributes . we demonstrate the construction of such models using the maximal severity model , the binned niss model and the binned niss model supplemented by the age structure at low severities . we use weighting adjustment to compensate for the effect of unknown outcomes . the large tarn dataset allows us to use this method without significant damage to the statistical power . analysis of mortality for a combination of injuries gives an unexpected result . if we consider the three maximal severities of injury in a trauma case , then the expected mortality ( fod ) is not a monotone function of the two lower severities for a given highest severity . for example , the expected fod first decreases when the lower severities grow from 0 to 1 - 2 and then increases when they approach the highest severity . following the seminal trunkey paper , multimodality of the mortality curves is a widely discussed problem . for the complete tarn dataset the coefficient of mortality monotonically decreases in time , but stratified analysis of the mortality gives a different result : for lower severities fod is a non - monotonic function of the time after injury and may have maxima at the second and third weeks after injury . perhaps this effect may be ( partially ) related to geriatric traumas . we found that the age distribution of trauma cases is strongly multimodal ( fig . [ fig : mortality ] ) . this is important for healthcare planning .
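the relation between the two naive estimates and the markov - corrected fraction of death quoted above can be checked with a short back - of - envelope computation . the group sizes and percentages come from the text ; the implied death rate among the ` out30 ' group is our own derived quantity and is affected by rounding of the reported percentages .

```python
n_available = 146270      # outcome known within 30 days ('available w30d')
n_out30 = 19289           # transferred out of tarn, outcome unknown
n_total = n_available + n_out30   # 165,559 cases in the main group

deaths_known = 0.0720 * n_available   # from the available-case fod of 7.20%

# naive estimate: every unknown outcome is assumed 'alive'
print("all-alive fod : %.4f" % (deaths_known / n_total))   # ~0.0636

# the markov-corrected overall fod of 6.78% implies an effective
# death rate among the 'out30' group of roughly:
implied = (0.0678 * n_total - deaths_known) / n_out30
print("implied 'out30' fod : %.3f" % implied)              # ~0.036
```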
The next step should be the handling of missed values of input attributes in the TARN database. Firstly, we should follow the "guidelines for reporting any analysis potentially affected by missing data", report the number of missing values for each variable of interest, and try to "clarify whether there are important differences between individuals with complete and incomplete data". Preliminary analysis of the patterns in the distribution of the missed input data in the TARN dataset already demonstrates that the gaps in the data are highly correlated and need further careful analysis. Secondly, we have to test and compare various methods of handling missing input attributes in the TARN database. It is not necessary to analyse all attributes in the database for mortality prediction and risk evaluation. It has been demonstrated that there may exist an optimal set of input attributes for mortality prediction in emergency medicine, and additional variables may even reduce the value of predictors. Therefore, before the analysis of imputation efficiency, it is necessary to select the set of most relevant variables of interest. The models developed in this case study can be generalized in several directions. Firstly, for trauma datasets, different attributes could be included in the 'state' for the non-stationary Markov models (Figs. [fig:mortalitymodelbefore], [fig:mainmarkovmodelafter]). We did not explore all such possibilities but have studied just simple models of the maximal severity and binned NISS. An example of model refinement with inclusion of age in the state variable is presented in Section [sec:refine]. Secondly, the 'two-stage lottery' non-stationary Markov model could be used as a general solution applicable to any health dataset where 'transfer in' or 'transfer out' is a feature. Transfer between hospitals is common in healthcare; therefore, we expect that models of this type will be useful for all large healthcare data repositories. The Trauma Audit and Research Network (TARN) has collected the largest European trauma database. Our main findings are as follows. 1. We have analysed 192,623 cases from the TARN database. We excluded from the analysis 16,693 patients (8.67%) who arrived at TARN hospitals later than 24 hours after injury. The remaining 146,270 patients (75.94%) approached TARN during the first day of injury and remained in TARN or were discharged to a final destination within 30 days of injury; 19,289 patients (13.19%) from this group were transferred from TARN to another hospital or institution (or unknown destination) within 30 days of injury, and for this subgroup the outcome is unknown. 2. Analysis of the missed outcomes demonstrated that they cannot be considered as missed 'completely at random'. Therefore, the analysis of available cases is not applicable for the TARN database, and special efforts are needed to handle data with missed outcomes. 3. We have developed a system of non-stationary Markov models for the handling of missed outcomes and validated these models on the data arising from patients who moved to TARN (and were excluded from the model fitting). 4. We have analysed mortality in the TARN database using the Markov models which we have developed and validated. The results of the analysis were used for weighting adjustment in the available cases database (reweighting of the death cases). The database with adjusted weights can be used for further data mining tasks and will keep the proper fraction of deaths.
5. The age distribution of trauma cases is essentially multimodal, which is important for healthcare planning. 6. Our analysis of the mortality coefficient in the TARN database demonstrates that (i) for complex traumas the fraction of death is not a monotone function of all severities of injuries, and (ii) for lower severities the fraction of death is not a monotonically decreasing function of time after injury and may have intermediate peaks in the second and third weeks after injury. 7. The approach developed here can be applied to various healthcare datasets which have the problems of lost patients, inter-hospital transfer and missing outcomes.

O. Bouamra, A. Wrotchford, S. Hollis, A. Vail, M. Woodford, F. Lecky, A new approach to outcome prediction in trauma: a comparison with the TRISS model, Journal of Trauma - Injury, Infection, and Critical Care 61 (3) (2006), 701-710.
T. Brockamp, M. Maegele, C. Gaarder, J.C. Goslings, M.J. Cohen, R. Lefering, P. Joosse, P.A. Naess, N.O. Skaga, T. Groat, S. Eaglestone, M.A. Borgman, P.C. Spinella, M.A. Schreiber, K. Brohi, Comparison of the predictive performance of the BIG, TRISS, and PS09 score in an adult trauma population derived from multiple international trauma registries, Critical Care 17 (2013), R134, http://ccforum.com/content/17/4/r134
H.R. Champion, W.S. Copes, W.J. Sacco, C.F. Frey, J.W. Holcroft, D.B. Hoyt, J.A. Weigelt, Improved predictions from A Severity Characterization Of Trauma (ASCOT) over Trauma and Injury Severity Score (TRISS): results of an independent evaluation, Journal of Trauma - Injury, Infection, and Critical Care 40 (1) (1996), 42-49.
D. Demetriades, B. Kimbrell, A. Salim, G. Velmahos, P. Rhee, C. Preston, G. Gruzinski, L. Chan, Trauma deaths in a mature urban trauma system: is "trimodal" distribution a valid concept? Journal of the American College of Surgeons 201 (3) (2005), 343-348.
Dolin, L. Alschuler, S. Boyer, C. Beebe, F.M. Behlen, P.V. Biron, A. Shabo (Shvo), HL7 Clinical Document Architecture, Release 2, Journal of the American Medical Informatics Association 13 (1) (2006), 30-39.
G. Fuller, O. Bouamra, M. Woodford, T. Jenks, H. Patel, T.J. Coats, P. Oakley, A.D. Mendelow, T. Pigott, P.J. Hutchinson, F. Lecky, The effect of specialist neurosciences care on outcome in adult severe head injury: a cohort study, Journal of Neurosurgical Anesthesiology 23 (3) (2011), 198-205.
G. Fuller, O. Bouamra, M. Woodford, T. Jenks, S. Stanworth, S. Allard, T.J. Coats, K. Brohi, F. Lecky, Recent massive blood transfusion practice in England and Wales: view from a trauma registry, Emergency Medicine Journal 29 (2) (2012), 118-123.
G. Fuller, O. Bouamra, M. Woodford, T. Jenks, H. Patel, T.J. Coats, P. Oakley, A.D. Mendelow, T. Pigott, P.J. Hutchinson, F. Lecky, Temporal trends in head injury outcomes from 2003 to 2009 in England and Wales, British Journal of Neurosurgery 25 (3) (2011), 414-421.
Gabbe, F.E. Lecky, O. Bouamra, M. Woodford, T. Jenks, T.J. Coats, P.A. Cameron, The effect of an organized trauma system on mortality in major trauma involving serious head injury: a comparison of the United Kingdom and Victoria, Australia, Annals of Surgery 253 (1) (2011), 138-143.
gill , w.b .long , t.c .mcaslan , two prognostic indices for the trauma patient , computers in biology and medicine 7 ( 1 ) ( 1977 ) , 2125 .guly , o. bouamra , m. spiers , p. dark , t. coats , f.e .lecky , vital signs and estimated blood loss in patients with major trauma : testing the validity of the atls classification of hypovolaemic shock , resuscitation 82 ( 5 ) ( 2011 ) , 556559 . c. de knegt , s.a.g .meylaerts , l.p.h .leenen , applicability of the trimodal distribution of trauma deaths in a level i trauma centre in the netherlands with a population of mainly blunt trauma , injury 39 ( 9 ) ( 2008 ) , 9931000 .a. lavoie , l. moore , n. lesage , m. liberman , j.s .sampalis , the new injury severity score : a more accurate predictor of in - hospital mortality than the injury severity score .journal of trauma - injury , infection , and critical care 56 ( 2004 ) , 13121320 .lowe , h.l .gately , j. r. goss , c.l .frey , c.g .peterson , patterns of death , complication , and error in the management of motor vehicle accident victims : implications for a regional system of trauma care , journal of trauma - injury , infection , and critical care 23 ( 6 ) ( 1983 ) , 503509 .ringdal , t.j .coats , r. lefering , s. di bartolomeo , p.a .steen , o. rise , l. handolin , h.m .lossius , and utstein tcd expert panel , the utstein template for uniform reporting of data following major trauma : a joint revision by scantem , tarn , dgu - tr and ritg , scandinavian journal of trauma , resuscitation and emergency medicine 16 ( 1 ) ( 2008 ) , 7 .r. rutledge , t. osler , s. emery , s. kromhout - schiro , the end of the injury severity score ( iss ) and the trauma and injury severity score ( triss ) : iciss , an international classification of diseases , ninth revision - based prediction tool , outperforms both iss and triss as predictors of trauma patient survival , hospital charges , and hospital length of stay , journal of trauma - injury , infection , and critical care 44 ( 1 ) ( 1998 ) , 4149 .sacco , j.w .jameson , w.s .copes , m.m .lawnick , s.l .keast , h.r .champion , progress toward a new injury severity characterization : severity profiles , computers in biology and medicine 18 ( 6 ) ( 1988 ) , 419429 .sacco , a.v .milholland , w.p .ashman , c.l .swann , l.m .sturdivan , r.a .cowley , h.r .champion , w. gill , w.b .long , t. c. mcaslan , trauma indices , computers in biology and medicine 7 ( 1 ) ( 1977 ) , 920 .a. sauaia , f.a .moore , e.e .moore , k.s .moser , r. brennan , r.a .read , p.t .pons , epidemiology of trauma deaths : a reassessment , journal of trauma injury , infection , and critical care 38 ( 2 ) ( 1995 ) , 185193 .w.c . shoemaker , d.s .bayard , c.c.j . wo , a. botnen , l.s .chan , l - c .chien , k. lu , d. demetriades , h. belzberg , r.w .jelliffe , stochastic model for outcome prediction in acute illness , computers in biology and medicine 36 ( 6 ) ( 2006 ) , 585600 .k. sreide , a.j .krger , a. line vrdal , c.l .ellingsen , e. sreide , h.m .lossius , epidemiology and contemporary patterns of trauma deaths : changing place , similar pace , older face , world journal of surgery 31 ( 11 ) ( 2007 ) , 20922103 .sterne , i.r .white , j.b .carlin , m. spratt , p. royston , m.g .kenward , a.m. wood , j.r .carpenter , multiple imputation for missing data in epidemiological and clinical research : potential and pitfalls , british journal of medicine 338 ( 2009 ) , b2393 .t. sullivan , a. haider , s.m .dirusso , p. nealon , a. shaukat a , m. 
T. Sullivan, A. Haider, S.M. DiRusso, P. Nealon, A. Shaukat, M. Slim, Prediction of mortality in pediatric trauma patients: New Injury Severity Score outperforms Injury Severity Score in the severely injured, Journal of Trauma - Injury, Infection, and Critical Care 55 (2003), 1083-1087.

Evgeny Mirkes (Ph.D., Sc.D.) is a research fellow at the University of Leicester. He worked for the Russian Academy of Sciences, Siberian Branch, and Siberian Federal University (Krasnoyarsk, Russia). His main research interests are biomathematics, data mining and software engineering, neural networks and artificial intelligence. He led and supervised many medium-sized projects in data analysis and the development of decision-support systems for computational diagnosis and treatment planning.

Timothy J. Coats (FRCS(Eng), MD, FCEM) is a professor of Emergency Medicine at the University of Leicester. Chair of the FAEM research committee 2000-2009, chair of the Trauma Audit and Research Network (TARN), chair of the NIHR Injuries and Emergencies national specialist group. Research interests: diagnostics and monitoring in emergency care, coagulation following injury, predictive modeling of outcome following injury.

Jeremy Levesley (Ph.D., FIMA) is a professor in the Department of Mathematics at the University of Leicester. His research area is kernel-based approximation methods in high dimensions, in Euclidean space and on manifolds. He is interested in developing research at the interface of mathematics and medicine, and sees the interpretation of medical data sets as a key future challenge for mathematics.

Alexander N. Gorban (Ph.D., Professor) has held a personal chair in applied mathematics at the University of Leicester since 2004. He worked for the Russian Academy of Sciences, Siberian Branch (Krasnoyarsk, Russia), and ETH Zürich (Switzerland), and was a visiting professor and research scholar at the Clay Mathematics Institute (Cambridge, MA), IHES (Bures-sur-Yvette, Île-de-France), the Courant Institute of Mathematical Sciences (New York), and the Isaac Newton Institute for Mathematical Sciences (Cambridge, UK). His main research interests are the dynamics of systems of physical, chemical and biological kinetics; biomathematics; data mining and model reduction problems.
Handling of missed data is one of the main tasks in data preprocessing, especially in large public service datasets. We have analysed data from the Trauma Audit and Research Network (TARN) database, the largest trauma database in Europe. For the analysis we used 165,559 trauma cases. Among them, there are 19,289 cases (13.19%) with unknown outcome. We have demonstrated that these outcomes are not missed 'completely at random' and, hence, it is impossible just to exclude these cases from analysis despite the large amount of available data. We have developed a system of non-stationary Markov models for the handling of missed outcomes and validated these models on the data of 15,437 patients who arrived at TARN hospitals later than 24 hours but within 30 days from injury. We used these Markov models for the analysis of mortality. In particular, we corrected the observed fraction of death. Two naïve approaches give 7.20% (available case study) or 6.36% (if we assume that all unknown outcomes are 'alive'). The corrected value is 6.78%. Following the seminal paper of Trunkey (1983), the multimodality of mortality curves has become a much discussed idea. For the whole analysed TARN dataset the coefficient of mortality monotonically decreases in time, but the stratified analysis of the mortality gives a different result: for lower severities the coefficient of mortality is a non-monotonic function of the time after injury and may have maxima at the second and third weeks. The approach developed here can be applied to various healthcare datasets which experience the problem of lost patients and missed outcomes.

Keywords: missed data, big data, data cleaning, mortality, Markov models, risk evaluation
The Lambert W function is the solution $w = W(z)$ of $w e^w = z$, for complex $w$ and $z$. It can be considered as the multi-branch inverse of the conformal map $w \mapsto w e^w$ between the complex $w$-plane and the complex $z$-plane. When $w$ and $z$ are restricted to having real values, the graph of the Lambert W function is as shown in Figure [fig-lamw-graph]. For further background regarding the Lambert W function, see . Some problem situations, for instance the modeling of current and voltage in diodes or solar cells, reduce to an implicit equation which can be solved explicitly by means of the Lambert W function. As a simple example, consider an implicit equation in which the two real variables, current and voltage, are related through an exponential term involving a few positive real model parameters. (This is a considerable simplification, in order to have an example in mind; the typical solar cell model has four or five parameters. See , page 14, for instance.) The corresponding explicit solution for one variable as a function of the other can be written using the principal branch of the natural logarithm function and the principal branch of the Lambert W function. A typical task might be, given values of the model parameters, to draw a graph of one variable as a function of the other. For that task, it is computationally efficient to have, instead of the implicit equation ([eqimplicit]), the explicit solution ([eqexplicit]) in terms of the other variable and the model parameters. Another task might be to estimate the model parameter values which best fit experimental observations of current-voltage pairs. For that task, one wants an understanding of how varying the model parameters will affect the current-voltage curves. Still another task might be to determine the relationships among the model parameters which correspond to having an extremum of a function of the variables. This task also requires an understanding of how varying the model parameters will affect the curves, but it will also be helpful if the formula utilized has partial derivatives with respect to all the parameters. The expression in equation ([eqexplicit]) is analytic, so it is well behaved with respect to its argument and the model parameters. It can be repeatedly differentiated, its extrema lie at stationary points, and so on. Because one is working with real values of the variables and positive real model parameters, the argument of the Lambert W function evaluation in equation ([eqexplicit]) is positive, so the principal branch of the Lambert W function is being used. Moreover, one is using the principal branch of the natural logarithm function. Everything is single valued in this expression. However, there can be a numerical difficulty in performing computations with equation ([eqexplicit]). Making suitable substitutions (coordinate changes), the computations involve an evaluation of the function $g(x) = \ln(W_0(e^x))$. One difficulty which can arise, depending upon the computer programming language used, is numerical underflow or overflow, related to the evaluation of an exponential of $x$ where $x$ (negative or positive) has a large magnitude. The value of $x$ must therefore be restricted to the set of logarithms of floating point numbers whose exponents can be accurately represented in the arithmetic facility of the computer language. A second difficulty which can arise is that the best computer programming language for the rest of one's problem solution may not have a built-in Lambert W function evaluator.
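To make the difficulty concrete, here is what a naive evaluation looks like in Python (assuming SciPy, whose scipy.special.lambertw provides the Lambert W function): the exponential overflows double precision long before the composite function itself becomes unrepresentable.

    import math
    from scipy.special import lambertw   # SciPy's multi-branch Lambert W

    def g_naive(x):
        """Direct evaluation of ln(W0(exp(x))): fine only for moderate x."""
        return math.log(lambertw(math.exp(x)).real)

    print(g_naive(10.0))                  # works: about 2.07
    try:
        print(g_naive(800.0))             # exp(800) exceeds double precision
    except OverflowError as err:
        print("overflow:", err)

The function value g(800) is a perfectly ordinary number near ln(800), but the intermediate exp(800) cannot be represented, which is precisely the problem the note addresses.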
The purpose of this note is to address those two difficulties. We will describe a simple procedure, which can be implemented in any programming language with floating point arithmetic, for the robust calculation of the function $g(x) = \ln(W_0(e^x))$. The procedure is valid for essentially any real value of $x$ which is representable in the programming language. We may consider the function $g$ as a transformation of the Lambert W function, with a change of representation or coordinate space. For clarity, we will restrict to real arguments. We can think of $W_0$, the principal branch of the Lambert W function, as a mapping of the positive real line to itself. The function $\exp$ maps the whole real line to the positive real line, and its inverse, the principal branch of the natural logarithm, maps the positive real line to the whole real line. In this interpretation, the function $g = \ln \circ W_0 \circ \exp$ is a composition of functions, and it maps the whole real line to the whole real line. Suppose that $y = g(x) = \ln(W_0(e^x))$. Taking exponentials (i.e., applying the $\exp$ function to both sides of the equation) gives $e^y = W_0(e^x)$; that is, using the definition of the Lambert W function, $e^y e^{e^y} = e^x$, or, taking logarithms (i.e., applying the $\ln$ function to both sides), $$y + e^y = x.$$ Equation ([eqyey]) is a simple equation structure, as simple as the Lambert W defining equation structure $$w e^w = z.$$ In fact, equation ([eqyey]) is just equation ([eqwew]) in another coordinate system. When we are evaluating $y$ as the solution to equation ([eqyey]), we are just evaluating the Lambert W function. There is an important difference, however: the evaluation of $y + e^y$ does not involve much risk of underflow or overflow in the numerical representation of the computer language. Since $y$ satisfies $y + e^y = x$, the first derivative of $g$ satisfies $$g'(x) = \frac{1}{1 + e^y},$$ or, equivalently, $g'(x) = 1/(1 + x - y)$. The second derivative satisfies $$g''(x) = \frac{-e^y}{(1 + e^y)^3}.$$ Figure [fig-logwexp1] shows the function $g$ for moderate values of the argument, and Figure [fig-logwexp2] shows the same function for larger values of the argument. One can see from these graphs that the function $g(x)$ behaves like $x$ when $x$ is much less than 0, and behaves like $\ln x$ when $x$ is much more than 0. For values of $x$ near 0, there is a smooth blend between the two behaviors, with $g(0) = \ln(W_0(1)) = -\Omega \approx -0.5671$, where $\Omega$ is the omega constant. The function $g$ is strictly monotonic increasing, as it has a positive first derivative. It curves downward, as it has a negative second derivative. One can further see, from Figure [fig-logwexp2], that when the range of the argument is large, the graph of $g$ looks like it has a sharp corner at the origin. Actually, as Figure [fig-logwexp1] illustrates, the graph does not really have a sharp corner. Nonetheless, at a suitable distance (large scale), one has in $g$ a useful smooth function for representing a function which has a step in its derivative. In the terminology of H. Kuki (, page 23), the function $g$ is contracting. That is, $0 < g'(x) < 1$ for all values in its argument domain, so $|g(x_2) - g(x_1)| < |x_2 - x_1|$. That means the task of finding an estimate for $y$ given $x$ is relatively stable: a slight change in $x$ (noise in the input) will produce only a slight change in $y$. The only challenges in developing a formula to estimate $y$, given $x$, are finding an appropriate algorithm, coding the sequence of calculations to avoid unwanted cancellation, and being reasonably efficient in the number of computations performed. A suitable algorithm can be an initial estimate, followed by some number of iterations of a refinement. Halley's method is used to perform the refinements because it has cubic convergence, and the derivatives involved can be calculated efficiently.
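As a quick sanity check of the derivative formula above, one can compare a finite-difference slope of g against 1/(1 + e^y) at a moderate argument (again assuming SciPy for a reference Lambert W evaluation):

    import math
    from scipy.special import lambertw

    def g(x):
        return math.log(lambertw(math.exp(x)).real)

    x = 1.3
    y = g(x)
    slope_fd = (g(x + 1e-6) - g(x - 1e-6)) / 2e-6   # centered finite difference
    print(slope_fd, 1.0 / (1.0 + math.exp(y)))      # both approximately 0.464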
Given any fixed real number $x$, we wish to find a real number $y$ such that $f(y) = y + e^y - x$ is zero. The first and second derivatives of $f$ are needed for Halley's method. They are $$f'(y) = 1 + e^y, \qquad f''(y) = e^y,$$ and hence are particularly easy to calculate: once one has $e^y$ from the calculation of $f(y)$, the derivatives are also at hand. It is also necessary, in order to use Halley's method, that the first derivative be non-zero; that is the case here, since $f'(y) = 1 + e^y > 1$. As an initial estimate, we choose to use $y_0 = x$ for $x$ below the transition region, and $y_0 = \ln x$ for $x$ above it; in between, we linearly interpolate between the two values. This is an extraordinarily crude initial estimate, but it is sufficient, since Halley's method is very robust and rapidly convergent in this application. The general iteration formula for Halley's method is $$y_{n+1} = y_n - \frac{f(y_n)}{f'(y_n) - \dfrac{f(y_n)\, f''(y_n)}{2\, f'(y_n)}}.$$ In this particular case $f'(y_n) = 1 + e^{y_n}$ and $f''(y_n) = e^{y_n}$, and the iteration formula becomes $$y_{n+1} = y_n - \frac{(y_n + e^{y_n} - x)\,(1 + e^{y_n})}{(1 + e^{y_n})^2 - \tfrac{1}{2}\,(y_n + e^{y_n} - x)\, e^{y_n}}.$$ The details of coding depend upon the computer language. It will be efficient to evaluate $e^{y_n}$ only once per iteration; all other computations are straightforward arithmetic. When evaluating the denominator of the adjustment in the iteration equation ([eqiter]), there is little risk of cancellation resulting from the subtraction, as the first term in the denominator is larger than the second term. In practice, just a few iterations suffice to give a good result; for arguments in the tested range, four iterations of Halley's method reduce the absolute error to a negligible level. The actual coding can use a convergence criterion, based upon the desired maximum error in the estimate of the function value, to determine how many iterations to perform. Alternatively, if the precision is fixed by the computer language's arithmetic representation or by the needs of the application situation, then one can determine in advance how many iterations of the refinement will suffice, and perform only that number, omitting the final redundant iteration which merely verifies the convergence. This technique, due to H. Kuki as seen in his algorithms for computing the square root (see , pages 49-50 and 135-136), probably deserves a name. Perhaps it should be called Kuki's "convergence non-test". The author had the good fortune and honor to work for Hirondo Kuki in 1968-69, and thanks him for his guidance, support and friendship. He also thanks S.R. Valluri for an introduction to the Lambert W function and the many interesting problems associated with its properties, and in particular for a stimulating discussion of the topic of this note. He thanks Mark Campanelli for suggesting Halley's method for iterative refinement in solar cell calculations.
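Putting the pieces together, a minimal Python sketch of the whole procedure is given below. The blend used for the initial estimate in the middle region is our own crude choice rather than the note's exact interpolation, and the stopping rule uses a tolerance test instead of Kuki's fixed iteration count; both choices are immaterial to the result because Halley's method converges so quickly here.

    import math

    def log_w_exp(x, tol=1e-15, max_iter=8):
        """g(x) = ln(W0(exp(x))), computed by solving f(y) = y + e^y - x = 0
        with Halley's method. f'(y) = 1 + e^y and f''(y) = e^y, so e^y is the
        only transcendental evaluation needed per iteration."""
        # Crude initial estimate: g(x) ~ x for x << 0, g(x) ~ ln x for x >> 0;
        # the blend in the middle region is a rough choice of our own.
        if x < 0.0:
            y = x
        elif x > 2.0:
            y = math.log(x)
        else:
            y = 0.5 * x - 0.567143      # passes near g(0) = -Omega
        for _ in range(max_iter):
            ey = math.exp(y)
            f = y + ey - x
            fp = 1.0 + ey               # f'(y); always > 1, so Halley is safe
            # Halley step; the first denominator term dominates, so the
            # subtraction carries little risk of cancellation.
            step = f / (fp - 0.5 * f * ey / fp)
            y -= step
            if abs(step) <= tol * max(1.0, abs(y)):
                break
        return y

    print(log_w_exp(-800.0), log_w_exp(0.0), log_w_exp(800.0))

Note that log_w_exp(-800.0) returns -800 and log_w_exp(800.0) returns a value near ln(800): both arguments are handled without overflow, unlike the direct composition shown earlier.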
The function $g(x) = \ln(W_0(e^x))$, where $W$ denotes the Lambert W function, is the solution to the equation $y + e^y = x$. It appears in various problem situations, for instance the calculation of current-voltage curves for solar cells. A direct calculation of $g$ may be inaccurate because of arithmetic underflow or overflow. We present a simple algorithm for calculating $g(x)$ that is robust, in that it will work for almost all values $x$ which are representable in the arithmetic of one's chosen computer language. The algorithm does not assume that the chosen computer language implements the Lambert W function.

A robust approximation to a Lambert-type function
Ken Roberts
April 8, 2015
Biological systems are inherently adaptive and have evolved to survive stressors and randomness. Beyond being robust to stressors, they are in fact evolutionarily designed to 'gain' from them. This property of systems has been referred to as 'antifragility'. Antifragile systems benefit from stressors or noise. Here, by 'stressors' one refers to unfavorable abiotic changes in the external milieu such as temperature, pH, pressure, etc. A system's reaction to any external stressor or stimulus can be enumerated through a measurable property that reflects gain or loss in response to the stressor. It has been argued that a system having a convex response would benefit from the addition of noise. Many biological systems have convex responses due to the evolutionary selection that they have undergone. Modeling the nature of such system responses would help in better understanding the design principles that allow biological systems to thrive under stressful circumstances. Human-engineered systems, by contrast, are designed to cope with stable signals. An electrical system is designed assuming an unvarying electrical signal, and structural systems are designed for the absence of severe seismic disturbances. Thus, signal variability reflects 'noise' and is harmful to the system. Classically, physiological systems are thought to be designed to reduce variability and to attain homeostasis. In contrast, the signals of a wide variety of physiological systems, such as the human heartbeat or the brain's electrical activity, fluctuate in a complex manner. In fact, a defining feature of a living organic system is adaptability, the capacity to respond to unpredictable stimuli. The lung is a physiological system that serves respiration, critical for oxidative processes in organisms. Human breathing is driven by pressure generated through spontaneous muscular action, and the lungs are ventilated in response to this pressure. The lung volume representing the normal volume of air displaced between normal inspiration and expiration, when no extra effort is applied, is referred to as 'tidal volume'. The pressure-volume response curves for the lung are known as 'static compliance curves'. Figure [scc] shows the standard static compliance curve for normal human lung ventilation. Mathematical modeling of this process results in a sigmoidal equation of the form $$V = a + \frac{b}{1 + e^{-(P - c)/d}},$$ where $V$ is volume and $P$ is pressure. This equation has been shown to best fit the curve in both the convex and the concave region. It not only comprehensively characterizes the P-V curve but also provides various parameters essential for clinical experimentation and studies. The four parameters $a$, $b$, $c$, $d$ in the equation are fitting parameters. The parameter $a$ is the lower asymptote volume, and $b$ corresponds to the difference between the lower and higher asymptotes. The parameter $c$ depicts the pressure at the inflection point, at which the volume equals $a + b/2$. Finally, $d$ is proportional to the pressure range within which most of the volume change takes place, i.e.,
it is the index of linear compliance, the compliance at the inflection point taking the value $b/(4d)$. When lungs are incapable of ventilating spontaneously, mechanical ventilators are used. Historically, mechanical ventilators have been designed to deliver equal-sized breaths. It has been observed that such conventional, monotonously regular ventilation has negative consequences for critically ill patients. For instance, patients suffering from acute respiratory distress syndrome (ARDS) can be negatively affected by this conventional mode of mechanical ventilation due to alveolar collapse and airway damage. Natural healthy ventilation is characterized by its variability. Biologically variable ventilation emulates healthy variation and has been shown to prevent deterioration of gas exchange and to increase arterial oxygenation, and, in general, is reported to improve respiratory mechanics under various lung pathologies. The reasons for the advantageous effects of variable ventilation are not entirely clear and need to be explored further. In this study, we explore whether, by adding suitably designed noise to the ventilation pressure of mechanical ventilators, one can obtain better tidal volume without increasing the mean airway pressure. Apart from ventilation pressure distributed as a uniform distribution, we studied Gaussian, log-normal, linear and power-law distributions. In contrast to the uniform distribution, which gives equal weightage to the convex and concave parts of the static compliance curve, these distributions preferentially span the curve. This allowed us to focus on different parts of the curve. For example, the Gaussian distribution gives more emphasis to the central part of the curve. The log-normal and linear distributions stress the latter half of the applied pressure range, whereas the power law emphasizes the first half of the pressure range while introducing a few high-pressure spikes. Studying various variable ventilation strategies could provide insight into the best possible method for operating mechanical ventilators so as to benefit the most from the convexity of the static compliance curve. Jensen's inequality provides the central argument to explain the benefits of convexity and can be used to identify conditions under which the addition of noise will be beneficial. Jensen's inequality states that if $f$ is a real-valued convex function on an interval $[p_1, p_2]$ and $X$ is a random variable taking values in that interval, then $$f(E[X]) \le E[f(X)].$$
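The following sketch illustrates the argument numerically: with an illustrative parameterization of the sigmoidal compliance curve (not fitted clinical values), a uniformly distributed pressure with the same mean as a fixed pressure yields a larger mean volume whenever the operating point lies on the convex part of the curve.

    import math
    import random

    def volume(p, a=0.0, b=1.0, c=10.0, d=2.0):
        """Sigmoidal static compliance curve V(P) = a + b / (1 + exp(-(P-c)/d)).
        Illustrative parameter values, not fitted clinical ones."""
        return a + b / (1.0 + math.exp(-(p - c) / d))

    def mean_volume(pressures):
        return sum(volume(p) for p in pressures) / len(pressures)

    random.seed(1)
    p_mean = 7.0                                  # below c, i.e. on the convex part
    fixed = [p_mean] * 10000
    variable = [random.uniform(p_mean - 3.0, p_mean + 3.0) for _ in range(10000)]
    print(mean_volume(fixed), mean_volume(variable))
    # By Jensen's inequality E[V(P)] > V(E[P]) on the convex region, so the
    # variable strategy delivers a larger mean tidal volume at equal mean pressure.

Replacing random.uniform with draws from Gaussian, log-normal, linear or power-law distributions reproduces the comparison of strategies studied in this paper.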
Mechanical ventilation is used for patients with a variety of lung diseases. Traditionally, ventilators have been designed to monotonously deliver equal-sized breaths. While it may seem intuitive that lungs may benefit from an unvarying and stable ventilation pressure strategy, recently it has been reported that variable lung ventilation is advantageous. In this study, we analyze the mean tidal volume in response to different 'variable ventilation pressure' strategies. We found that uniformly distributed variability in pressure gives the best tidal volume as compared to that of normal, scale-free, log-normal and linear distributions.

Cite as: R. Yadav, M. Ghatge, K. Hiremath and G. Bagler, Chapter 26, pp. 299-306, Systems Thinking Approach for Social Problems, Lecture Notes in Electrical Engineering, 327, 2015.
Like resource-bounded universal Turing machines, efficiently constructed universal circuits capture the hardness of languages computed by circuits in a given circuit class. As a result, the study of the existence and complexity of universal circuits for quantum circuit classes provides insight into the computational strength of such circuits, as well as their limits. There is both a theoretical and a practical aspect to this study. The existence of a universal circuit family for a complexity class defined by resource bounds (depth, size, gate width, etc.) provides an upper bound on the resources needed to compute any circuit in that class. It also opens up possibilities for proving lower bounds on the hard languages in the class, as such bounds would follow from a lower bound proof for the language computed by a universal circuit family for the circuit class. More precisely, the specific, efficient construction of a universal circuit for a class of circuits yields, for a fixed input size, a single circuit which can be used to carry out the computation of every circuit (with that same input size) in that family: basically a chip or processor for that class of circuits. The more efficient the construction of the universal circuit, the smaller the processor for that class. Furthermore, the universal circuit is in a sense a compiler for all possible computations of all circuits in this family. It can be used to efficiently program all possible computations capable of being carried out by circuits in this circuit class, and in doing so automatically acts as a general-purpose simulator, with as little loss of efficiency as possible. In the case of quantum circuits there are particular issues relating to the requirements that computations be clean and reversible which come into play, and to an extent complicate the classical methods. Still, much of our motivation for this work originates with classical results due to Cook, Valiant, and others. Cook and Hoover considered depth universality and described a depth-universal uniform circuit family for circuits of logarithmic depth. Valiant studied size universality and showed how to construct universal circuits of size $O(s \log s)$ to simulate any circuit of size $s$. (See Section [sec:other-work].) Fix $n$ and let $\mathcal{C}$ be a collection of quantum circuits on $n$ qubits. A quantum circuit $U$ on $n + m$ qubits is universal for $\mathcal{C}$ if, for every circuit $C \in \mathcal{C}$, there is a string $e \in \{0,1\}^m$ (the encoding) such that, for all strings $x \in \{0,1\}^n$ (the data), $$U\left(|x\rangle \otimes |e\rangle\right) = \left(C|x\rangle\right) \otimes |e\rangle,$$ that is, $U$ applied to the data together with the encoding produces $C|x\rangle$ on the data register, leaving the encoding register intact. The circuit collections we are interested in are usually defined by bounding various parameters such as the size (number of gates), depth (number of layers of gates acting simultaneously on disjoint sets of qubits), or palette of allowed gates (e.g., Hadamard, $T$, CNOT). As in the classical case, we also want our universal circuits to be efficient in various ways. For one, we restrict them to using the same gate family as the circuits they simulate. We may also want to restrict their size or the number of qubits they use for the encoding. We are particularly concerned with the depth of universal circuits. Fix a family $\mathcal{G}$ of unitary quantum gates. A family $\{U_{n,d}\}$ of quantum circuits is depth-universal over $\mathcal{G}$ if: 1. $U_{n,d}$ is universal for $n$-qubit circuits with depth $d$ using gates from $\mathcal{G}$; 2. $U_{n,d}$ only uses gates drawn from $\mathcal{G}$;
3. $U_{n,d}$ has depth $O(d)$; and 4. the number of encoding qubits of $U_{n,d}$ is polynomial in $n$ and $d$. Depth-universal circuits are desirable because they can simulate any circuit within a constant slow-down factor. Thus they are as time-efficient as possible. Our first result, presented in Section [sec:depth-univ], shows that depth-universal quantum circuits exist for the gate families $\{H, T, F_n\}$ and $\{H, T, T_n\}$, where $H$ and $T$ are the Hadamard and $T$ ($\pi/8$) gates, respectively, and $F_n$ and $T_n$ are the $n$-qubit fanout and $n$-qubit Toffoli gates, respectively (see Section [sec:prelims]). [thm:depth-univ] Depth-universal quantum circuits exist over $\{H, T, F_n\}$ and over $\{H, T, T_n\}$. Such circuits use a number of qubits polynomial in $n$ and $d$ and can be built log-space uniformly in $n$ and $d$. Note that the results for the two circuit families are independent, because it is not known whether $n$-qubit Toffoli gates can be implemented exactly in constant depth using single-qubit gates and fanout gates, although they can be approximated this way. It would be nice to find depth-universal circuits over families of bounded-width gates such as $\{H, T, \mathrm{CNOT}\}$. Depth-universal circuits with bounded-width gates, if they exist, must have depth $\Omega(\log n)$ and thus can only depth-efficiently simulate circuits with depth $\Omega(\log n)$. This can be easily seen as follows: suppose all you wanted was a universal circuit for depth-1 circuits on $n$ qubits that use CNOT gates only. Since any pair of the qubits could potentially be connected with a CNOT gate, that pair must be connected somehow (indirectly perhaps) within the universal circuit. Thus any data input qubit can potentially affect any of the other data output qubits. Since the universal circuit only has constant-width gates, the number of qubits affected by any given data input increases by only a constant factor per layer, and so the circuit must have $\Omega(\log n)$ layers. One can therefore only hope to find depth-universal circuits for circuits of depth $\Omega(\log n)$ over bounded-width gates. Although such circuits exist in the classical case (see below), we are unable to construct them in the quantum case (see Section [sec:open]). The study of quantum circuit complexity was originated by Yao. The basic definitions and first results in this research area can be found in Nielsen and Chuang. Most of the research on universal quantum circuit classes deals with finding small, natural, universal sets of gates which can be used in quantum circuits to efficiently simulate any quantum computation. Our problem and point of view here are quite different. We have the goal of constructing, for a natural class of quantum circuits, a single family of quantum circuits which can efficiently simulate all circuits in the class. In this paper we consider classes which have significant resource bounds (small or even constant depth, or fixed size) and ask that the corresponding universal circuit family have similar depth or size bounds. Cook and Hoover considered the problem of constructing general-purpose classical (Boolean) circuits using gates with fanin two. They asked whether, given $n$, there is a circuit of polynomial size and logarithmic depth that can simulate any $n$-input circuit of polynomial size and logarithmic depth.
Cook and Hoover constructed a depth-universal circuit of logarithmic depth and polynomial size, but one which takes as input a nonstandard encoding of the simulated circuit; they also presented a circuit of logarithmic depth to convert the standard encoding of a circuit to the required encoding. Valiant looked at a similar problem, trying to minimize the size of the universal circuit. He considered classical circuits built from fanin-2 gates (but with unbounded fanout) and embedded the circuit in a larger universal graph. Using switches at key vertices of the universal graph, any graph (circuit) can be embedded in it. He managed to create universal graphs for different types of circuits and showed how to construct an $O(s \log s)$-size, $O(s)$-depth universal circuit. He also showed that his constructions have size within a constant multiplicative factor of the information-theoretic lower bound. For quantum circuits, Nielsen and Chuang (in ) considered the problem of building generic universal circuits, or programmable universal gate arrays as they call them. Their universal circuits work on two quantum registers, a data register and a program register. They do not consider any size or depth bound on the circuits and show that simulating every possible unitary operation requires completely orthogonal programs in the program register. Since there are infinitely many possible unitary operations, any universal circuit would require an infinite number of qubits in the program register. This shows that it is not possible to have a generic universal circuit which works for all circuits of a certain input length. However, they showed that it is possible to construct an extremely weak type of probabilistic universal circuit with size linear in the number of inputs to the simulated circuit. Sousa and Ramos considered a similar problem of creating a universal quantum circuit to simulate any quantum gate. They constructed a basic building block which can be used to implement any single-qubit or CNOT gate on $n$ qubits by switching certain gates on and off, and showed how to combine several of these building blocks to implement any $n$-qubit quantum gate. For the rest of the paper, we will use $U$ to denote the universal circuit and $C$ to denote the circuit being simulated. We define the quantum gates we will use in Section [sec:prelims]. The construction of depth-universal circuits is in Section [sec:depth-univ]. We briefly describe the construction of almost-size-universal quantum circuits in Section [sec:size-univ]. We mention a couple of miscellaneous results in Section [sec:misc]. We assume the standard notions of quantum states, quantum circuits, and quantum gates described in , in particular $H$ (Hadamard), $T$ ($\pi/8$), $S$ (phase), and CNOT (controlled-NOT). We will also need some additional gates, which we now motivate. The depth-universal circuits we construct require the ability to feed the output of a single gate to many other gates. While this operation, commonly known as fanout, is common in classical circuits, copying an arbitrary quantum state unitarily is not possible in quantum circuits due to the no-cloning theorem. It turns out that we can construct our circuits using a classical notion of the fanout operation, defined as the fanout gate $$F_{n+1} : |b\rangle|x_1\rangle \cdots |x_n\rangle \mapsto |b\rangle|x_1 \oplus b\rangle \cdots |x_n \oplus b\rangle$$ for any of the standard basis states $|b\rangle$ (the control) and $|x_1\rangle, \ldots, |x_n\rangle$ (the targets), and extended linearly to other states.
$F_{n+1}$ can be constructed in depth $O(\log n)$ using CNOT gates. We need to use unbounded fanout gates to achieve full depth universality. We also use the unbounded Toffoli gate $T_n$, which flips its target qubit exactly when all of its control qubits are 1. We reserve the term "Toffoli gate" to refer to the (standard) Toffoli gate, which is defined on three qubits. In addition to the fanout gate, our construction requires us to use controlled versions of the gates used in the simulated circuit. For most of the commonly used basis sets of gates (e.g., Toffoli gate, Hadamard gate, and phase gate), the gates themselves are sufficient to construct their controlled versions (e.g., a controlled Hadamard gate can be constructed using a Toffoli gate and Hadamard and phase gates). Depth or size universality requires that the controlled versions of the gates be constructible, using the gates themselves, within the proper depth or size, as required. A set of quantum gates $\mathcal{G}$ is said to be closed under controlled operation if, for each $G \in \mathcal{G}$, the controlled version of the gate can be implemented in constant depth and size using the gates in $\mathcal{G}$. Here, the control is a single qubit, and $G$ could be a single-qubit or a multi-qubit gate. Note that CNOT $= F_2$, and given $H$, $S$, and CNOT we can implement the Toffoli gate via a standard constant-size circuit. We can implement the phase gate as $S = T^2$, and since $S^2 = Z$, we can implement $Z$ and $X = HZH$. A generalized $Z$ gate, which we will hereafter refer to simply as a $Z$ gate, is an extension of the single-qubit Pauli $Z$ gate to multiple qubits: $$Z_n\, |x_1 \cdots x_n\rangle = (-1)^{x_1 \wedge \cdots \wedge x_n}\, |x_1 \cdots x_n\rangle.$$ A $Z$ gate can be constructed easily (in constant depth and size) from a single unbounded Toffoli gate (and vice versa) by conjugating the target qubit of the unbounded Toffoli gate with $H$ gates (i.e., placing $H$ on both sides of the Toffoli gate on its target qubit). Similarly, a $Z$-fanout gate applies the single-qubit $Z$ gate to each of $n$ target qubits if the control qubit is set: $$Z^{\mathrm{fan}}_{n+1}\, |b\rangle|x_1 \cdots x_n\rangle = (-1)^{b \cdot (x_1 + \cdots + x_n)}\, |b\rangle|x_1 \cdots x_n\rangle.$$ A $Z$-fanout gate can be constructed from a single fanout gate and vice versa in constant depth (although not constant size) by conjugating each target with $H$ gates. So, in our depth-universal circuit construction, we can use either or both of these types of gates; similarly for unbounded Toffoli versus $Z$ gates. $Z$ gates and $Z$-fanout gates are important because they only change the phase, leaving the values of the qubits intact (they are represented by diagonal matrices in the computational basis). This allows us to use a trick due to Høyer and Špalek and run all possible gates for a layer in parallel. In this section, we prove Theorem [thm:depth-univ], i.e., that depth-universal circuits exist for each of the gate families $\{H, T, F_n\}$ and $\{H, T, T_n\}$. We first give the proof for the former, then show how to modify it for the latter. The depth-universal circuit $U$ we construct simulates the input circuit $C$ layer by layer, where a layer consists of the collection of all gates at a fixed depth. $C$ is encoded in a slightly altered form, however. First, all the fanout gates in $C$ are replaced with $Z$-fanout gates on the same qubits, with $H$ gates conjugating the targets. At worst, this may roughly double the depth of $C$ (adjacent $H$ gates cancel). Each layer of the resulting circuit is then separated into three adjacent layers: the first having only the $H$ gates of the original layer, the second only the $T$ gates, and the third only the $Z$-fanout gates. Then $U$ simulates each layer of the modified $C$ by a constant number of its own layers. We describe next how these layers are constructed.
Simulating single-qubit gates. The circuit to simulate an $n$-qubit layer of single-qubit gates of a given type $A$ consists of a layer of controlled-$A$ gates where the control qubits are fed from the encoding and the target qubits are the data qubits. Figure [fig:single-qubit] shows a layer of controlled-$H$ gates, implemented using $H$, $S$, CNOT, and Toffoli gates. To simulate $A$ gates on a subset of the qubits, set the control qubits corresponding to that subset to $|1\rangle$ and the rest of the control qubits to $|0\rangle$.

Simulating $Z$-fanout gates. The circuit to simulate a $Z$-fanout layer is shown in Figure [fig:z-fanout]. The top $n$ qubits are the original data qubits; the rest are ancilla qubits. The ancilla qubits are arranged in blocks of $n$ qubits per block, one block per possible control qubit. Each subcircuit $A_i$ looks like Figure [fig:a-i-z-fanout]. The $c$-qubits are encoding qubits. The large gate between the two columns of Toffoli gates is a $Z$-fanout gate with its control on the ancilla corresponding to block $i$'s own qubit and targets on all the other ancillae of the block. The state evolution (suppressing, in the ket labels, the qubits and ancillae internal to the subcircuits) proceeds as follows: a first layer of fanout gates copies the data values into the blocks, so that after the first layer of fanouts each qubit of block $i$ carries the value of the corresponding data qubit; each block then applies its phase; and a final layer of fanout gates uncomputes the copies. To simulate some $Z$-fanout gate $G$ of $C$ whose control is on the $i$th qubit, say, we do this in block $i$ by setting $c_i$ to $|1\rangle$ and setting $c_j$ to $|1\rangle$ for every $j$ such that the $j$th qubit is a target of $G$. All the other $c$-qubits in block $i$ are set to $|0\rangle$. We can do this in separate blocks for multiple $Z$-fanout gates on the same layer, because no two gates can share the same control qubit. Any $c$-qubits in unused blocks are set to $|0\rangle$.

Simulating unbounded Toffoli gates. We can modify the construction above to accommodate unbounded Toffoli gates (the gate family $\{H, T, T_n\}$), or equivalently $Z$ gates, by breaking each layer of $C$ into four adjacent layers, the first three being as before, and the fourth containing only $Z$ gates. The top-level circuit to simulate a layer of $Z$ gates looks just as before (Figure [fig:z-fanout]), except that now each subcircuit $A_i$ looks a bit different and is shown in Figure [fig:a-i-z], where the central gate is a $Z$ gate connecting the ancillae. As before, the $c$-qubits are encoding qubits.
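Both constructions lean on the constant-depth equivalence, noted in the preliminaries, between fanout and Z-fanout under Hadamard conjugation of the targets. A small numpy check of that equivalence is below; the convention of placing the control on the top (most significant) qubit is an indexing choice made for this sketch.

    import numpy as np

    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)

    def fanout(n):
        """Fanout on 1 control + n targets: |b, x> -> |b, x XOR bb...b>.
        A basis-state permutation; the control is the top qubit."""
        dim, mask = 2 ** (n + 1), (1 << n) - 1
        U = np.zeros((dim, dim))
        for s in range(dim):
            b, x = s >> n, s & mask
            U[(b << n) | (x ^ (b * mask)), s] = 1.0
        return U

    def z_fanout(n):
        """Z-fanout: phase (-1)^(b * (x_1 + ... + x_n)) on basis states."""
        dim, mask = 2 ** (n + 1), (1 << n) - 1
        phases = [(-1.0) ** ((s >> n) * bin(s & mask).count("1"))
                  for s in range(dim)]
        return np.diag(phases)

    n = 2
    Hn = np.eye(2)                       # identity on the control ...
    for _ in range(n):
        Hn = np.kron(Hn, H)              # ... Hadamard on every target
    assert np.allclose(Hn @ fanout(n) @ Hn, z_fanout(n))
    print("fanout and Z-fanout agree up to H-conjugation of the targets")

The same check, with H placed on the single target of an unbounded Toffoli gate, verifies the Toffoli/Z-gate equivalence used in the second construction.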
Within each subcircuit $A_i$, the Toffoli gates have no effect on the overall phase; they prepare, for each $j$, an ancilla in the state $|y_j\rangle$ with $y_j = x_j$ if the encoding qubit $c_j$ is set and $y_j = 1$ otherwise. When the $Z$ gate of $A_i$ is applied, its contact points are in exactly these states. Note that $y_1 \wedge \cdots \wedge y_n = 1$ if $x_j = 1$ for every $j$ with $c_j = 1$, and $0$ otherwise. The $Z$ gate thus multiplies the overall phase by $(-1)^{\bigwedge_{j : c_j = 1} x_j}$, and the second column of Toffoli gates uncomputes the ancillae; the state therefore evolves exactly as in the $Z$-fanout case, with this conjunction-dependent phase. To simulate some $Z$ gate $G$ of $C$ whose first qubit is the $i$th qubit, say, we do this in block $i$ by setting $c_i$ to $|1\rangle$ and setting $c_j$ to $|1\rangle$ for every $j$ such that the $j$th qubit is part of $G$. All the other $c$-qubits in block $i$ are set to $|0\rangle$. As before, we can do this in separate blocks for multiple $Z$ gates on the same layer, because no two gates can share the same first qubit. Any $c$-qubits in unused blocks are set to $|0\rangle$, and it is easy to check that this makes such a block have no net effect.

Similar to a depth-universal circuit, a size-universal circuit is a universal circuit with the same order of number of gates as the circuit it is simulating. Formally, a family of universal circuits $U_{n,s}$ for $n$-qubit circuits of size $s$ is size-universal if $|U_{n,s}| = O(s)$. A simple counting argument shows that it is not possible to obtain a completely size-universal circuit for fanin-2 circuits. Consider all circuits with $g$ fanin-2 gates in which one input of each gate is the first qubit; there are $n^{\Omega(g)}$ such circuits. Then consider similar circuits in which no gate has an input on the first qubit, and continue recursively. Thus the number of possible fanin-2 circuits is $2^{\Omega(g \log n)}$. Since all the encoding bits have to be connected to some of the fanin-2 gates of the universal circuit, it must have $\Omega(g \log n)$ gates. We use Valiant's idea of universal graphs to construct a universal family of fanin-2 circuits that comes very close to the aforementioned lower bound. As before, we would like to simulate $C$ using the same set of gates used in $C$. Our construction works for any circuit using unbounded Toffoli gates and any set of single-qubit and 2-qubit gates closed under the controlled operation. First we define a universal directed acyclic graph with special vertices (called poles) in which we can embed any circuit with $g$ gates (considering the inputs also as gates). The embedding maps the wires of the circuit to paths in the graph. An edge-embedding of a graph $G_1$ into a graph $G_2$ maps the vertices of $G_1$ one-to-one to vertices of $G_2$, and maps each edge of $G_1$ to a directed path in $G_2$, such that distinct edges are mapped to edge-disjoint paths. The graph of any circuit of size $g$ can be represented as a directed acyclic graph on vertices $v_1, \ldots, v_g$ such that there is no edge from $v_j$ to $v_i$ for $i < j$, and each vertex has fanin and fanout at most 2. Let $\Gamma(g)$ be the set of all such graphs. A graph is edge-universal for $\Gamma(g)$ if it has distinct poles $p_1, \ldots, p_g$ such that any graph in $\Gamma(g)$ can be edge-embedded into it with each vertex $v_i$ mapped to the pole $p_i$. Valiant then shows how to construct a universal graph: there is a constant $c$ such that for every $g$ there exists an acyclic graph that is edge-universal for $\Gamma(g)$, has at most $c\, g \log g$ vertices, and has fanin and fanout at most 2 at every vertex. It is fairly easy to construct a universal circuit from the universal graph. In fact, the universal circuit for circuits with $n$ inputs and $g$ gates is built from an edge-universal graph for $\Gamma(n + g)$. Consider any such edge-universal graph; it has $N = O((n+g)\log(n+g))$ vertices, comprising the $n + g$ fixed poles and $N - n - g$ non-pole vertices. Create a quantum circuit whose gates (including the inputs and outputs) are connected as the graph describes. For the first $n$ poles, remove their incoming edges and replace the vertices by the inputs, as shown in Figure [fig:size-univ-input-gate]. Replace each remaining pole with a subcircuit that can apply any of the single- or 2-qubit gates to its inputs, where the gate to apply is controlled by the
encoding. E.g., Figure [fig:size-univ-pole] shows the gates at a pole vertex in a universal circuit simulating, say, $H$ and CNOT gates. For a non-pole vertex, replace it with a subcircuit that either swaps the incoming and outgoing wires (i.e., the first input is connected to the second output and the second input to the first output) or directly connects them (i.e., the first input is connected to the first output, and similarly for the second input). Again, the subcircuit is controlled by the encoding, which determines whether to swap or directly connect (see Figure [fig:size-univ-non-pole]). The edge-disjointness property guarantees that wires in the embedded circuit are mapped to paths in the universal graph which can share a vertex but cannot share any edge. To simulate any fanin-2 circuit $C$ with $g$ gates acting on $n$ qubits, construct the edge-universal graph for $\Gamma(n+g)$. Embed the graph of $C$ into it such that the input nodes of $C$ are mapped to the first $n$ poles. Now, for each gate of the circuit, consider the pole to which it was mapped, and set bits in the encoding to denote the type of the gate at that pole. For the non-pole vertices, set a bit in the encoding to specify whether the two input values should be swapped or mapped directly to the two output values. The size of the encoding is $O((n+g)\log(n+g))$, which is polynomial in $n$ for polynomial-size circuits. This construction gives us a universal circuit with a logarithmic blow-up in size: there is a constant $c$ and a family of universal circuits $U_{n,g}$ that can simulate every circuit with $g$ gates acting on $n$ qubits, such that $|U_{n,g}| \le c\,(n+g)\log(n+g)$. We can use a similar idea for circuits with unbounded-fanin gates. First we decompose the unbounded-fanin gates into bounded-fanin gates (fanin 3 in this case). This is doable for most of the common unbounded-fanin gates. For example, an unbounded Toffoli gate on $n$ qubits can be constructed using $O(n)$ successive Toffoli gates of size 3, which can in turn be implemented using Hadamard, phase, $T$, and CNOT gates. So any circuit of size $s$ consisting of Hadamard, $T$, and unbounded Toffoli gates can be transformed into an equivalent circuit of size $O(ns)$ consisting of these single-qubit gates and Toffoli gates. The rest of the construction follows as before: there is a family of universal circuits that can simulate quantum circuits of size $s$ on $n$ qubits consisting of Hadamard, $T$, and unbounded Toffoli gates, with only a logarithmic blow-up over the size of the decomposed circuit.

Circuit encoding. We have been mostly concerned with the actual simulation of a quantum circuit by the universal circuit. It is possible, however, to hide some complexity of the simulation in the description of the circuit itself. Usually, the description of a classical circuit describes the underlying graph of the circuit and specifies the gates at each vertex. We can similarly describe a quantum circuit by its graph structure. That description is extremely compact, with size proportional to the size of the circuit. However, we use a description that is more natural for quantum circuits and especially suitable for simulation. The description stores the grid structure of the circuit; the rows of the grid correspond to the qubits, and the columns correspond to the different layers of the circuit. This description is not unique for any given circuit, and its size is $O(nd)$, where $n$ is the number of qubits and $d$ is the depth of the circuit. A graph-based description can be easily converted to this grid-based description in polynomial time.
Depth-universal classical circuits. The techniques of Section [sec:depth-univ] can be easily adapted to build depth-universal circuits for a variety of classical (Boolean) circuit classes with unbounded gates, e.g., AC, ACC, and TC circuits. The key reason is that these big gates are all "self-similar" in the sense that fixing some of the inputs can yield a smaller gate of the same type. We will present these results in the full paper. A number of natural, interesting open problems remain. Fanout gates are used in our construction of a depth-universal circuit family. Is the fanout gate necessary in our construction? We believe it is. In fact, we do not know how to simulate depth-$d$ circuits over $\{H, T, \mathrm{CNOT}\}$ universally in depth $O(d)$ without using fanout gates, even assuming that the circuits being simulated have depth $\Omega(\log n)$. The shallowest universal circuits with bounded-width gates we know of have an $O(\log n)$ blow-up factor in the depth, obtained just by replacing the fanout gates with logarithmic-depth circuits of CNOT gates. Our results apply to circuits with very specific gate sets. How much can these gate sets be generalized? Are similar results possible for any countable set of gates containing Hadamard, unbounded Toffoli, and fanout gates? We showed how to construct a universal circuit with a logarithmic blow-up in size. The construction is within a constant factor of the minimum possible size for polynomial-size, bounded-fanin circuits. However, for constant-size circuits, we believe the lower bound can be tightened to match the proven upper bound. For unbounded-fanin circuits, we construct a universal circuit whose size is significantly larger than the bounded-fanin lower bound. We think that a better lower bound is possible for the unbounded-fanin case. We thank Michele Mosca and Debbie Leung for insightful discussions. The second author is grateful to Richard Cleve and IQC (Waterloo) and to Harry Buhrman and CWI (Amsterdam) for their hospitality.
We define and construct efficient depth-universal and almost-size-universal quantum circuits. Such circuits can be viewed as general-purpose simulators for central classes of quantum circuits and can be used to capture the computational power of the circuit class being simulated. For depth, we construct universal circuits whose depth is of the same order as that of the circuits being simulated. For size, there is a logarithmic factor blow-up in the universal circuits constructed here. We prove that this construction is nearly optimal.
In this work, we consider two-user interference channels with an external eavesdropper. Without the secrecy constraints, the interference channel has been studied extensively in the literature. However, the capacity region is still not known except for some special cases. The interference channel with confidential messages has recently been studied as well. Nonetheless, the external eavesdropper scenario has not been addressed extensively in the literature yet. In fact, the only relevant work regarding the security of interference channels with an external eavesdropper is the study of the secure degrees of freedom (DoF) in $K$-user Gaussian interference channels under frequency-selective fading models, where it is shown that positive secure DoFs are achievable for each user in the network. In this work, we propose the cooperative binning and channel prefixing scheme for (discrete) memoryless interference channels with an external eavesdropper. The proposed scheme allows for cooperation in adding randomness to the channel in two ways: (i) cooperative binning: the random binning technique of is cooperatively exploited at both users; (ii) channel prefixing: the users exploit the channel prefixing technique of in a cooperative manner. The proposed scheme also utilizes the message-splitting technique of , and partial decoding of the interfering signals is made possible at the receivers. The achievable secrecy rate region with the proposed scheme is given. For the Gaussian interference channel, the channel prefixing technique is exploited to inject artificially generated noise samples into the network, where we also allow power control at the transmitters to enhance the security of the network. The proposed scheme is closely related to earlier schemes for the relay-eavesdropper and multiple-access wire-tap channels. One line of work considered the relay-eavesdropper channel and proposed the noise-forwarding scheme, in which the relay node sends a codeword from an independently generated codebook to add randomness to the network in order to enhance the security of the main channel. Another considered Gaussian multiple-access wire-tap channels and proposed the cooperative jamming scheme, in which users either transmit their codewords or add randomness to the channel by transmitting noise samples, but not both. The approach in this sequel, when specialized to the Gaussian multiple-access channel with an external eavesdropper, generalizes and extends the achievable regions given in , due to the implementation of simultaneous cooperative binning and jamming at the transmitters together with more general time-sharing approaches. Simultaneous transmission of secret messages and noise samples from the transmitters was considered in work on multi-transmit-antenna wire-tap channels, where the authors proposed artificially generated noise injection schemes in which the superposition of a secrecy signal and artificially generated noise is transmitted from the transmitter, the noise transmission only degrading the eavesdropper's channel.
for the single transmit antenna case , wire - tap channels with helper nodes have been considered , in which helper nodes transmit artificially generated noise samples in order to degrade the eavesdropper s channel . remarkably , the exploitation of the channel prefixing technique was transparent in these previous studies . the proposed scheme in this work shows that the benefit of the cooperative jamming scheme of and the noise injection scheme of originates from the channel prefixing technique . in addition , compared to , the proposed scheme allows for cooperation via _ both _ binning and channel prefixing techniques , whereas in , one of the transmitters is allowed to generate and transmit noise together with the secret signal , and cooperation among network users as considered in this sequel was not implemented for the confidential message scenario . the rest of this work is organized as follows . section ii introduces the system model . in section iii , the main result for discrete memoryless interference channels is given . section iv is devoted to some examples of the proposed scheme for gaussian channels . finally , we provide some concluding remarks in section v. we consider a two - user interference channel with an external eavesdropper ( ic - ee ) , comprised of two transmitter - receiver pairs and an additional eavesdropping node . the discrete memoryless ic - ee is denoted by for some finite sets . here the symbols are the channel inputs and the symbols are the channel outputs observed at the decoder , decoder , and at the eavesdropper , respectively . the channel is memoryless and time - invariant : , where we omit the if , i.e. , . random variables are denoted with capital letters ( ) , and random vectors are denoted as bold capital letters ( ) . again , we drop the for . lastly , [ x ]^+ \triangleq \max\{0 , x\ } . we assume that each transmitter has a secret message which is to be transmitted to the respective receiver in channel uses and to be secured from the external eavesdropper . in this setting , a secret codebook has the following components : 1 ) the secret message sets for transmitter . 2 ) the encoding function at transmitter which maps the secret messages to the transmitted symbols , i.e. , for each for . 3 ) the decoding function at receiver which maps the received symbols to an estimate of the message : for . the reliability of the transmission of user is measured by , where is the event that is transmitted from the transmitters . for the secrecy requirement , the level of ignorance of the eavesdropper with respect to the secured messages is measured by the equivocation rate . we say that the rate tuple is achievable for the ic - ee if , for any given , there exists a secret codebook such that , and for sufficiently large . the secrecy capacity region is the closure of the set of all achievable rate pairs and is denoted as . the gaussian interference channel in standard form is given in . we use the same transformation here for the gaussian interference channel with an external eavesdropper ( gic - ee ) model . we remark that the channel capacity will remain the same as the transformations are invertible . we represent the average power constraints of the transmitters as , where codewords should satisfy for . here the input - output relationship , i.e. , , changes to the following : where for as depicted in fig . the secrecy capacity region of the gic - ee is denoted as . in this section , we introduce the proposed cooperative binning and channel prefixing scheme for the ic - ee model .
with this scheme , transmitters design their secrecy codebooks using the random binning technique . this binning structure in the codebook lets a transmitter add randomness to its own signals . however , the price of adding extra randomness to secure the transmission appears as a rate loss in the achievable rate expressions . in our scenario , the proposed strategy allows for cooperation in the design of these binning codebooks , and allows for cooperation in prefixing the channel as we utilize the channel prefixing technique of at both users . hence , users of the interference channel will add only a _ sufficient _ amount of randomness , as the other user will help to increase the randomness seen by the eavesdropper . the achievable secure rate region with this scheme is described below . first consider auxiliary random variables , , , , , , and defined on arbitrary finite sets , , , , , , and , respectively . now , let be the set of all joint distributions of the random variables , , , , , , , , , , , and that factors as . here , the variable serves as a time - sharing parameter . see , for example , for a discussion on time - sharing parameters . the variable is used to construct the _ common _ secured signal of transmitter that has to be decoded at both receivers , where the random binning technique of is used for this construction . the variable is used to construct the _ self _ secured signal that has to be decoded at receiver but not at receiver , where the random binning technique of is used for this construction . the variable is used to construct the _ other _ signal of transmitter that has to be decoded at receiver but not at receiver , where the conventional random codebook construction , see for example , is used for this signal , i.e. , no binning is implemented . similarly , , , and are utilized at user . finally , it is important to remark that the channel prefixing technique of is exploited with this construction as we transformed the channel to using the prefixes and . to ease the presentation , we first state the following definitions . we define , , , , , and corresponding rates and . note that we choose below . also , we define . is the set of all tuples satisfying for a given joint distribution . is the set of all tuples satisfying for a given joint distribution . is the set of all tuples satisfying for a given joint distribution . is the closure of all satisfying and for a given joint distribution . we now state the main result of the paper . the achievable secrecy rate region using the cooperative binning and channel prefixing scheme is as follows . [ thm : r ] . the proof is omitted and will be provided in the journal version of this work . in this section , we provide some examples of the proposed coding scheme for gaussian interference channels and show that the proposed scheme provides gains in securing the network by exploiting cooperative binning , cooperative channel prefixing , and time - sharing techniques . firstly , we describe how the channel prefixing can be implemented in this gaussian scenario . here , one can independently generate and transmit noise samples for each channel use from the transmitters ( without constructing a codebook and sending one of its messages ) to enhance the security of the network .
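to give a rough feeling for the gain from this kind of noise injection , here is a small numerical sketch ( ours , not the rate region of the theorem ) : it evaluates the familiar gaussian wiretap - type secrecy rate [ c_main - c_eve ]^+ for one user s codeword when the other transmitter spends power on i.i.d . gaussian jamming ; all channel gains and powers are assumed values .

```python
import numpy as np

def secrecy_rate(P1, Pj, g11, g21, ge1, ge2):
    """[log2(1+SINR at intended receiver) - log2(1+SINR at eavesdropper)]^+
    for user 1's codeword when the other user spends power Pj on i.i.d.
    Gaussian jamming; unit-variance receiver noise, assumed channel gains."""
    c_main = 0.5 * np.log2(1.0 + g11 * P1 / (1.0 + g21 * Pj))
    c_eve = 0.5 * np.log2(1.0 + ge1 * P1 / (1.0 + ge2 * Pj))
    return max(c_main - c_eve, 0.0)

P1, g11, g21, ge1, ge2 = 10.0, 1.0, 0.3, 0.8, 1.0     # assumed values
for Pj in [0.0, 1.0, 5.0, 10.0]:
    rs = secrecy_rate(P1, Pj, g11, g21, ge1, ge2)
    print(f"jamming power {Pj:5.1f} -> secrecy rate {rs:.3f} bits/use")
```

the sweep shows the qualitative effect exploited in the scheme : the injected noise degrades both receivers , but when the eavesdropper s channel is hit harder , the achievable secrecy rate increases .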
as there is no design of a codebook at the interfering user for this noise transmission , the receivers and the eavesdropper can only consider this transmission as noise . accordingly , transmitter uses power for the construction of its ( binning ) codewords , which are explained in the previous section , and obtains , somehow , the signal . in addition , it uses power for its jamming signal and generates i.i.d . noise samples represented by , where we choose . then , it sends to the channel , instead of just sending . now , we can use the scheme proposed in the previous section for the design of the signals . below we will use superposition coding to construct this signal . but first , for a rigorous presentation , we provide some definitions . let denote the set of all tuples satisfying and . now , we define a set of joint distributions as follows . , , , , , , , , , , where the gaussian model given in ( [ eq : gic - ee ] ) gives . then , the following region is achievable for the gaussian interference channel with an external eavesdropper . [ thm : rg ] . we emphasize the way of implementing the channel prefixing technique of ( see the lemma therein ) : is chosen by . with this choice , we are able to implement simultaneous binning and jamming at the transmitters together with a power control . we now present a computationally simpler region . consider [ thm : rg2 ] . we also provide a sub - region of that will be used for numerical results . define a set of joint distributions . [ thm : rg3 ] . it is important to note that we use the convex closure of the rate regions instead of using a time - sharing parameter in these subregions . we have already given the more general region above and we conjecture that it is possible to extend these achievable subregions by a different choice of channel prefixing or by using a time - sharing approach . accordingly , we consider a tdma - like approach , which will show that even a simple type of time - sharing is beneficial . here we divide the channel uses into two intervals of lengths represented by and , where and is assumed to be an integer . the first period , of length , is dedicated to secure transmission for user . during this time , transmitter generates binning codewords using power and jams the channel using power ; and transmitter jams the channel using power . for the second period the roles of the users are reversed , where users use powers , , and . we call this scheme cooperative tdma ( c - tdma ) and obtain the following region in this case . [ thm : rc - tdma ] , where the two rates are given by the corresponding ^+ expressions . note that we only consider adding randomness by noise injection for the cooperative tdma scheme above . however , our coding scheme presented in the previous section allows for an implementation of more general cooperation strategies , in which users can add randomness to the channel in two ways : adding randomness via cooperative binning and adding randomness via cooperative channel prefixing . a user , by implementing _ both _ of these approaches , can help the other one in a time - division setting . we again remark that the proposed cooperative binning and channel prefixing scheme allows even more general approaches such as having more than two time - sharing periods . in this section we provide numerical results for the following subregions of the achievable region given by corollary [ thm : rg ] . 1 ) : this region is provided above , where we utilize both cooperative binning and channel prefixing .
2 ) : here we utilize either the cooperative binning or the channel prefixing scheme at a transmitter , but not both . 3 ) : here we only utilize cooperative binning . accordingly , the jamming powers are set to zero . 4 ) : this region is an example of utilizing both time - sharing and cooperative channel prefixing . no cooperative binning is used . 5 ) : here we do not allow transmitters to jam the channel during their dedicated time slots and call this case no self channel prefixing ( nscp ) . 6 ) : here no channel prefixing is implemented . this case refers to the conventional tdma scheme , in which users are allowed to transmit during only their assigned slots . hence , this scheme only utilizes time - sharing . numerical results are provided in fig . and fig . the first scenario depicted in fig . shows the benefits of the cooperative binning technique . also , cooperative channel prefixing does not help to enlarge the secure rate region in this scenario . secondly , in fig . , we consider an asymmetric scenario , in which the first user has a weak channel to the eavesdropper but the second user has a strong channel to the eavesdropper . here , the second user can help the first one to increase its secrecy rate . however , channel prefixing and time - sharing do not help the second user , as it cannot achieve a positive secure rate without an implementation of cooperative binning . remarkably , the cooperative binning technique helps the second user to achieve a positive secure transmission rate in this case . these observations suggest the implementation of all three techniques ( cooperative binning , cooperative channel prefixing , and time - sharing ) as considered in our general rate region , i.e. , . it can be shown that the proposed scheme reduces to the noise forwarding scheme of for the discrete memoryless relay - eavesdropper channel . remarkably , the channel prefixing technique can be exploited in this scenario to increase the achievable secure rates . for example , for the gaussian channel , injecting i.i.d . noise samples can increase the achievable secure transmission rates as shown in . our result here shows that the gain resulting from the noise injection comes from the exploitation of the channel prefixing technique . in addition , the proposed scheme , when specialized to a gaussian multiple - access scenario , results in an achievable region that generalizes and extends the proposed regions given in , due to the implementation of simultaneous cooperative binning and channel prefixing at the transmitters together with more general time - sharing approaches . in this work , we have considered two - user interference channels with an external eavesdropper . we have proposed the cooperative binning and channel prefixing scheme that utilizes random binning , channel prefixing , and time - sharing techniques and allows transmitters to cooperate in adding randomness to the channel . for gaussian interference channels , the channel prefixing technique is exploited by letting users inject independently generated noise samples into the channel . the most interesting aspect of our results is , perhaps , the unveiling of the role of interference in cooperatively adding randomness to the channel to increase the secrecy rates of multi - user networks . v. s. annapureddy and v. v. veeravalli , `` gaussian interference networks : sum capacity in the low interference regime and new outer bounds on the capacity region , '' _ ieee trans . inform . theory _ , submitted for publication . y. liang , a. somekh - baruch , h. v.
poor , s. shamai ( shitz ) , and s. verdu , `` cognitive interference channels with confidential messages , '' in _ proc . 45th annual allerton conference on communication , control and computing _ , monticello , il , sept . 2007 . r. liu , i. maric , p. spasojevic , and r. d. yates , `` discrete memoryless interference and broadcast channels with confidential messages : secrecy rate regions , '' _ ieee trans . inform . theory _ , vol . 54 , no . 6 , pp . 2493 - 2507 , june 2008 . o. o. koyluoglu , h. el gamal , l. lai , and h. v. poor , `` on the secure degrees of freedom in the k - user gaussian interference channel , '' in _ proc . 2008 ieee international symposium on information theory ( isit08 ) _ , toronto , on , canada , july 2008 . e. tekin and a. yener , `` the general gaussian multiple - access and two - way wiretap channels : achievable rates and cooperative jamming , '' _ ieee trans . inform . theory _ , vol . 54 , no . 6 , pp . 2735 - 2751 , june 2008 . x. tang , r. liu , p. spasojevic , and h. v. poor , `` the gaussian wiretap channel with a helping interferer , '' in _ proc . 2008 ieee international symposium on information theory ( isit08 ) _ , toronto , on , canada , july 2008 .
this paper studies interference channels with security constraints . the existence of an external eavesdropper in a two - user interference channel is assumed , where the network users would like to secure their messages from the external eavesdropper . the cooperative binning and channel prefixing scheme is proposed for this system model , which allows users to cooperatively add randomness to the channel in order to degrade the observations of the external eavesdropper . this scheme allows users to add randomness to the channel in two ways : 1 ) users cooperate in their design of the binning codebooks , and 2 ) users cooperatively exploit the channel prefixing technique . as an example , the channel prefixing technique is exploited in the gaussian case to transmit a superposition signal consisting of binning codewords and independently generated noise samples . gains obtained from the cooperative binning and channel prefixing scheme compared to the single user scenario reveal the positive effect of interference in increasing the network security . remarkably , interference can be exploited to cooperatively add randomness into the network in order to enhance the security .
johansen et al . developed a model ( referred to below as the jls model ) of financial bubbles and crashes , which is an extension of the rational expectation bubble model of blanchard and watson . in this model , a crash is seen as an event potentially terminating the run - up of a bubble . a financial bubble is modeled as a regime of accelerating ( super - exponential power law ) growth punctuated by short - lived corrections organized according to the symmetry of discrete scale invariance . the super - exponential power law is argued to result from positive feedback resulting from noise trader decisions that tend to enhance deviations from fundamental valuation in an accelerating spiral . the jls model has proved to be a very powerful and flexible tool to detect financial bubbles and crashes in various kinds of markets , such as the 2006 - 2008 oil bubble , the chinese index bubble in 2009 , the real estate market in las vegas , the south african stock market bubble and the us repurchase agreement market . recently , the jls model has been extended to detect market rebounds and to infer the fundamental market value hidden within observed prices . also , new experiments in ex - ante bubble detection and forecast have been performed in the financial crisis observatory at eth zurich . here , we present an extension of the jls model , which is in the spirit of the approach developed by zhou and sornette to include additional pricing factors . the literature on factor models is huge and we refer e.g. to ref . and references therein for a review of the literature . one of the most famous factor models , now considered as a standard benchmark , is the three - factor fama - french model augmented by the momentum factor . recently , the concept of the zipf factor has been introduced . the key idea of the zipf factor is that , due to the concentration of the market portfolio when the distribution of the capitalization of firms is sufficiently heavy - tailed , as is the case empirically , a risk factor generically appears in addition to the simple market factor , even for very large economies . malevergne et al . proposed a simple proxy for the zipf factor as the difference in returns between the equal - weighted and the value - weighted market portfolios . malevergne et al . have shown that the resulting two - factor model ( market portfolio plus the new factor termed `` zipf factor '' ) is as successful empirically as the three - factor fama - french model . specifically , tests of the zipf model with size and book - to - market double - sorted portfolios as well as industry portfolios find that the zipf model performs practically as well as the fama - french model in terms of the magnitude and significance of pricing errors and explanatory power , despite the fact that it has only two factors instead of three .
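as a concrete illustration of this proxy , here is a minimal sketch ( our own , with assumed variable names ) that computes the difference in returns between the equal - weighted and the value - weighted market portfolios from a panel of prices and shares outstanding :

```python
import numpy as np

def zipf_factor_proxy(prices, shares):
    """Time series of (equal-weighted minus value-weighted) portfolio
    returns, the proxy for the Zipf factor proposed by Malevergne et al.
    `prices` and `shares` are (T, n) arrays for n stocks over T days."""
    rets = prices[1:] / prices[:-1] - 1.0          # simple daily returns
    ew = rets.mean(axis=1)                         # equal-weighted return
    caps = prices[:-1] * shares[:-1]               # start-of-day capitalizations
    w = caps / caps.sum(axis=1, keepdims=True)     # value weights
    vw = (w * rets).sum(axis=1)                    # value-weighted return
    return ew - vw

rng = np.random.default_rng(0)
prices = 100.0 * np.exp(np.cumsum(0.01 * rng.standard_normal((250, 50)), axis=0))
shares = np.full((250, 50), 1.0e6)
print(zipf_factor_proxy(prices, shares)[:5])
```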
in the present paper , we would like to introduce a new model by combining the zipf factor with the jls model . the new model keeps all the dynamical characteristics of a bubble described in the jls model . in addition , the new model can also provide information about the concentration of stock gains over time from the knowledge of the zipf factor . this new information is very helpful to understand the risk diversification and to explain the investors behavior during the bubble generation . the paper is constructed as follows . section [ sec : model ] describes the definition of the zipf factor as well as the new model . the derivation of the model is presented in this section and the appendix . section [ sec : calibration ] introduces the calibration method of this new model . then we test the new model with two famous chinese stock bubbles in history in section [ sec : application ] and discuss the role of the zipf factor in these two bubbles . section [ sec : conclusion ] concludes . we introduce the new model in this section . our goal is to combine the zipf factor with the jls model of the bubble dynamics . to be specific , we introduce the following definitions . * definition 1 * : _ the zipf factor is defined as proportional to the difference between the returns of the capitalization - weighted portfolio and the equal - weighted portfolio for the last time step ( [ hjyjuj4u ] ) , where ( respectively ) is the price of the capitalization - weighted ( respectively equal - weighted ) portfolio , and . the weights of the portfolios are normalized so that their two prices are identical at the day preceding the beginning time of the time series : . _ * definition 2 * : _ the integrated zipf factor is obtained by taking the integral of the zipf factor defined by expression ( [ hjyjuj4u ] ) . _ by definition , the zipf factor describes the exposure to a lack of diversification due to the concentration of the stock market on a few very large firms . the dynamics of stock markets during a bubble regime is then described by equation ( [ eq : dynamic ] ) , where is the portfolio price , is the drift ( or trend ) whose accelerated growth describes the presence of a bubble ( see below ) , is the factor loading on the zipf s factor and is the increment of a wiener process ( with zero mean and unit variance ) . the term represents a discontinuous jump such that before the crash and after the crash occurs . the loss amplitude associated with the occurrence of a crash is determined by the parameter . the assumption of a constant jump size is easily relaxed by considering a distribution of jump sizes , with the condition that its first moment exists . then , the no - arbitrage condition is expressed similarly with replaced by its mean . each successive crash corresponds to a jump of by one unit . the dynamics of the jumps is governed by a crash hazard rate h(t ) . since h(t ) dt is the probability that the crash occurs between t and t + dt conditional on the fact that it has not yet happened , we have { \rm e_t}[dj ] = 1 \times h(t ) dt + 0 \times ( 1 - h(t ) dt ) , where { \rm e_t}[\cdot ] denotes the expectation operator . this leads to { \rm e_t}[dj ] = h(t ) dt ( [ theyjytuj ] ) . noise traders exhibit collective herding behaviors that may destabilize the market in this model . we assume that the aggregate effect of noise traders can be accounted for by the following dynamics of the crash hazard rate ( [ eq : hazard ] ) . the intuition behind this specification has been presented at length by johansen et al . , and further developed by sornette and johansen , ide and sornette and zhou and sornette .
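the relation e_t [ dj ] = h(t ) dt can be checked directly by simulation ; the following sketch ( ours , with an assumed pure power - law hazard ; the log - periodic modulation of ( [ eq : hazard ] ) can be added in the same way ) draws the crash indicator as a bernoulli variable at each time step :

```python
import numpy as np

def simulate_jumps(h, t_grid, dt, n_paths, seed=0):
    """Draw dj = 1 with probability h(t)*dt at each time step, so that
    the empirical mean of dj estimates E_t[dj] = h(t)*dt."""
    rng = np.random.default_rng(seed)
    H = np.array([h(t) for t in t_grid])
    dj = rng.random((n_paths, len(t_grid))) < H * dt
    return dj.mean(axis=0), H * dt

tc, m = 1.0, 0.5                              # assumed critical time and exponent
h = lambda t: 0.2 * (tc - t) ** (m - 1.0)     # power-law part of the hazard
t_grid = np.arange(0.0, 0.9, 0.01)
emp, theo = simulate_jumps(h, t_grid, dt=0.01, n_paths=200_000)
print(np.abs(emp - theo).max())               # small: the two agree
```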
in a nutshell , the power law behavior embodies the mechanism of positive feedback posited to be at the source of the bubbles . if the exponent m < 1 , the crash hazard may diverge as t approaches a critical time t_c , corresponding to the end of the bubble . the cosine term in the r.h.s . of ( [ eq : hazard ] ) takes into account the existence of a possible hierarchical cascade of panic acceleration punctuating the course of the bubble , resulting either from a preexisting hierarchy in noise trader sizes and/or from the interplay between market price impact inertia and nonlinear fundamental value investing . we assume that all the investors of the market have already taken the diversification risk into account , so that the no - arbitrage condition reads { \rm e_t}[dp ] = 0 . together with { \rm e_t}[dj ] = h(t ) dt , this yields ( [ tjyj4n ] ) . this result expresses that the return is controlled by the risk of the crash quantified by its crash hazard rate . the excess return is the remuneration that investors require to remain invested in the bubbly asset , which is exposed to a crash risk . now , conditioned on the fact that no crash occurs , equation ( [ eq : dynamic ] ) is simply where the zipf factor is given by expression ( [ hjyjuj4u ] ) . its conditional expectation leads to = \kappa h(t ) dt . substituting the expression ( [ eq : hazard ] ) for and ( [ hjyjuj4u ] ) for , and integrating , yields the log - periodic power law ( lppl ) formula as in the jls model , but here augmented by the presence of the zipf factor , which adds a term proportional to the zipf factor loading : { \rm e}[\ln p(t ) ] = \gamma \zeta(t ) + a + b ( t_c - t)^m + c ( t_c - t)^m \cos(\omega \ln ( t_c - t ) - \phi ) ( [ eq : lppl ] ) , where \zeta(t ) is defined by expression ( [ heyjujkuj5 ] ) and the r.h.s . of ( [ eq : lppl ] ) is the primitive of expression ( [ eq : hazard ] ) , so that and . this expression ( [ eq : lppl ] ) describes the average price dynamics only up to the end of the bubble . the same structure as equation ( [ eq : lppl ] ) is obtained using a stochastic discount factor following the derivation of zhou and sornette , as shown in the appendix . the jls model does not specify what happens beyond t_c . this critical time t_c is the termination of the bubble regime and the transition time to another regime . this regime could be a big crash or a change of the growth rate of the market . the merrill lynch emu ( european monetary union ) corporates non - financial index in 2009 provides a vivid example of a change of regime characterized by a change of growth rate rather than by a crash or rebound . for t < t_c , the crash hazard rate accelerates up to t_c , but its integral up to t , which controls the total probability for a crash to occur up to t , remains finite and less than 1 for all times . it is this property that makes it rational for investors to remain invested knowing that a bubble is developing and that a crash is looming . indeed , there is still a finite probability that no crash will occur during the lifetime of the bubble . the condition that the price remains finite at all times , including t_c , imposes that m > 0 . within the jls framework , a bubble is qualified when the crash hazard rate accelerates .
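for concreteness , here is a direct transcription ( ours ) of the augmented lppl expectation ( [ eq : lppl ] ) , with the zipf term \gamma \zeta(t ) added to the standard log - periodic power law ; the parameter values below are arbitrary illustrations , and in a calibration the four linear parameters would be slaved to the nonlinear ones as described in the next section :

```python
import numpy as np

def lppl_zipf(t, zeta, tc, m, omega, phi, A, B, C, gamma):
    """E[ln p(t)] = gamma*zeta(t) + A + B*(tc-t)**m
                    + C*(tc-t)**m * cos(omega*ln(tc-t) - phi), for t < tc."""
    tau = tc - t
    return (gamma * zeta + A + B * tau**m
            + C * tau**m * np.cos(omega * np.log(tau) - phi))

t = np.linspace(0.0, 0.95, 200)
zeta = np.zeros_like(t)            # integrated Zipf factor (here: switched off)
y = lppl_zipf(t, zeta, tc=1.0, m=0.5, omega=8.0, phi=1.0,
              A=6.0, B=-0.6, C=0.05, gamma=0.0)
print(y[0], y[-1])                 # B < 0, 0 < m < 1: accelerating growth towards tc
```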
according to ( [ eq : hazard ] ) , this imposes and , hence , since by the condition that the price remains finite . we thus have a first condition for a bubble to occur . by definition , the crash rate should be non - negative , which imposes a second condition . there are eight parameters in this lppl model augmented by the introduction of the zipf s factor , four of which are the linear parameters ( a , b , c and \gamma ) . the other four ( t_c , m , \omega and \phi ) are nonlinear parameters . we first slave the linear parameters to the nonlinear ones . the method here is the same as that used by johansen et al . the detailed equations and procedure are as follows . we rewrite eq . ( [ eq : lppl ] ) as : { \rm e}[\ln p(t ) ] = \gamma \zeta(t ) + a + b f(t ) + c g(t ) : = rhs(t ) ( [ eq : lppl2sgw ] ) , where we have also defined f(t ) = ( t_c - t)^m and g(t ) = ( t_c - t)^m \cos(\omega \ln ( t_c - t ) - \phi ) . the bounds of the search space are : t_c \in [ t_2 , t_2 + ( t_2 - t_1)/3 ] ( [ eq : rangetc ] ) , m \in [ 10^{-5 } , 1 - 10^{-5 } ] ( [ eq : rangem ] ) , \omega \in [ 0.01 , 40 ] , and \phi \in [ 0 , 2\pi - 10^{-5 } ] , where [ t_1 , t_2 ] denotes the fitted time window . we choose these bounds because m has to be between 0 and 1 according to the discussion before ; the log - angular frequency \omega should be bounded away from zero , and the upper bound of 40 is large enough to catch high - frequency oscillations ( though we later discard fits with frequencies near this bound ) ; the phase \phi should be between 0 and 2\pi ; and the predicted critical time t_c should be after the end of the fitted time series . finally , the upper bound of the critical time should not be too far away from the end of the time series , since predictive capacity degrades far beyond . jiang et al . have found empirically that a reasonable choice is to take the maximum horizon of predictability to extend to about one - third of the size of the fitted time window , which motivates the upper bound on t_c . we use the shanghai composite index as the market proxy to test the jls model augmented with the zipf factor . the shanghai composite index is a capital - weighted measure of stock market performance . on december 19 , 1990 , the base value of the shanghai composite index was fixed to 100 . we note the base date as . denoting by the total market capitalization of the firms entering in the shanghai composite index on december 19 , 1990 , the value of the shanghai composite index at any later time is given by where is the current total market capitalization of the constituents of the shanghai composite index . here , time is counted in units of trading days . calling ( respectively ) the share price ( respectively total number of shares ) of firm at time , we have the total capitalization of firm at time and the total market capitalization at time where is the number of the stocks listed in the index at time . at the time when the calibrations were performed , the ssec market included 884 active stocks . since december 19 , 1990 , 36 firms were delisted and another 11 were temporarily stopped . based on the rule of the index calculation , the terminated stocks are deleted from the total market capitalization after the termination is executed , while the last active capitalization of the temporarily stopped stocks is still included in the total market capitalization . the equal - weighted price entering in the definition of the zipf factor is constructed according to the formula ( [ rtjuki8klo8k ] ) , where is the beginning of the fitted window and is the trading day immediately preceding . we use this measure of to make sure that the equal - weighted price and the value - weighted price are identical at . this implies that is set to be ( recall that is defined by expression ( [ heyjujkuj5 ] ) ) . the return is defined by expression ( [ rheyju6h ] ) below .
in expression ( [ rheyju6h ] ) , is the total capitalization value of firm at time and is the number of the stocks which are listed in the index at both time and . formula ( [ rheyju6h ] ) together with ( [ rtjuki8klo8k ] ) means that the zipf factor is a portfolio that puts an equal amount of wealth at each time step ( by a corresponding dynamical reallocation depending on the relative performance of the stocks as a function of time ) on each of the stocks entering in the definition of the shanghai composite index , so that the zipf portfolio is maximally diversified ( neglecting here the impact of cross - correlations between the assets ) . putting expression ( [ rheyju6h ] ) inside ( [ rtjuki8klo8k ] ) yields expression ( [ rtjuki8argwreklo8k ] ) . when the number of the stocks remains unchanged from one time step to the next , expression ( [ rtjuki8argwreklo8k ] ) can be simplified , up to a constant factor , to [ \prod_{i=1}^{m } c_i(t ) ]^{1/m } ( [ rtjuki8argwreklo8ksimp ] ) , where c_i(t ) is the capitalization of firm i , showing that the equal - weighted price is the geometrical mean of the capitalizations of the stocks constituting the shanghai composite index , as compared with the index itself , which is proportional to the arithmetic mean of the firm capitalizations . the shanghai composite index had two famous bubbles in recent history , as described in table [ tb : ssecbubbles ] . both of them are tested in this paper . the time series are fitted with both the original jls model and the new model . the 10 best initial guesses from the heuristic search algorithm are kept . the results are shown in figs . [ fg : fit_ssec_zipf1 ] - [ fg : fit_ssec_zipf2 ] . ( table [ tb : ssecbubbles ] : information on the tested bubbles of ssec . ) in contrast , the integrated zipf factor remained negative over the lifetime of bubble 2 , as shown in fig . 2 , implying that the gains of the shanghai index were more driven by small and medium size firms . the factor load is -0.028 for the best fit shown in fig . [ fg : fit_ssec_zipf2 ] and the mean value of for bubble 2 is small and negative ( see tab . [ tb : gamma ] ) . the overall contribution of the zipf factor to the stock change is therefore small and positive ( due to the product of a negative integrated zipf factor by a negative factor loading ) , which makes the remuneration of investors due to their exposure to the diversification risk still positive but small . at the time when bubble 2 started , the world economy had been seriously shaken by the developing subprime crisis . the demand for chinese product exports decreased dramatically .
to compensate for the loss from collapsing exports , the chinese government launched a 4 trillion chinese yuan stimulus with the aim of boosting the domestic demand . small companies , which are usually more vulnerable to a lack of access to capital , profited proportionally more than their larger counterparts from this injection of capital into the economy . this is reflected in the relatively better performance of small and medium size firms in the stock market , leading to a slightly negative value of the integrated zipf factor during the development of bubble 2 . although the small companies benefited more , the stimulus was designed to boost the whole economy . the diversification risk turned out to be relatively minor at that time , explaining the small value of the zipf factor load . we have introduced a new model that combines the zipf factor embodying the risk due to lack of diversification with the johansen - ledoit - sornette model of rational expectation bubbles with positive feedbacks . the new model keeps all the dynamical characteristics of a bubble described in the jls model . in addition , the new model can also provide information about the concentration of stock gains over time from the knowledge of the zipf factor . this new information is very helpful to understand the risk diversification and to explain the investors behavior during the bubble generation . we have applied this new model to two famous chinese stock bubbles and found that the new model provides a sensible explanation for the diversification risk observed during these two bubbles . a. johansen , d. sornette , critical crashes , risk 12 ( 1 ) ( 1999 ) 91 - 94 . a. johansen , d. sornette , o. ledoit , predicting financial crashes using discrete scale invariance , journal of risk 1 ( 4 ) ( 1999 ) 5 - 32 . a. johansen , o. ledoit , d. sornette , crashes as critical points , international journal of theoretical and applied finance 3 ( 2 ) ( 2000 ) 219 - 255 . o. blanchard , m. watson , bubbles , rational expectations and speculative markets , in : wachtel , p. , eds . , crisis in economic and financial structure : bubbles , bursts , and shocks . lexington books : lexington . d. sornette , discrete scale invariance and complex dimensions , physics reports 297 ( 5 ) ( 1998 ) 239 - 270 . d. sornette , r. woodard , w .- x . zhou , the 2006 - 2008 oil bubble : evidence of speculation and prediction , physica a 388 ( 2009 ) 1571 - 1576 . z .- q . jiang , w .- x . zhou , d. sornette , r. woodard , k. bastiaensen , p. cauwels , bubble diagnosis and prediction of the 2005 - 2007 and 2008 - 2009 chinese stock market bubbles , journal of economic behavior and organization 74 ( 2010 ) 149 - 162 . w .- x . zhou , d. sornette , analysis of the real estate market in las vegas : bubble , seasonal patterns , and prediction of the csw indexes , physica a 387 ( 2008 ) 243 - 260 . w .- x . zhou , d. sornette , a case study of speculative financial bubbles in the south african stock market 2003 - 2006 , physica a 361 ( 2006 ) 297 - 308 . w. yan , r. woodard , d. sornette , leverage bubble , http://arxiv.org/abs/1011.0458 . w. yan , r. woodard , d. sornette , diagnosis and prediction of market rebounds in financial markets , http://arxiv.org/abs/1001.0265 . w. yan , r. woodard , d. sornette , inferring fundamental value and crash nonlinearity from bubble calibration , http://arxiv.org/abs/1011.5343 . d. sornette , r. woodard , m. fedorovsky , s. reimann , h.
woodard , w .- x . zhou , the financial bubble experiment : advanced diagnostics and forecasts of bubble terminations ( the financial crisis observatory ) , http://arxiv.org/abs/0911.0454 . d. sornette , r. woodard , m. fedorovsky , s. reimann , h. woodard , w .- x . zhou , the financial bubble experiment : advanced diagnostics and forecasts of bubble terminations volume ii master document , http://arxiv.org/abs/1005.5675 . w .- x . zhou , d. sornette , fundamental factors versus herding in the 2000 - 2005 united states stock market and prediction , physica a 360 ( 2006 ) 459 - 483 . j. knight , s. satchell , linear factor models in finance , butterworth - heinemann ( 2005 ) . e. f. fama , k. r. french , the cross - section of expected stock returns , journal of finance 47 ( 1992 ) 427 - 465 . e. f. fama , k. r. french , common risk factors in the returns on stocks and bonds , journal of financial economics 33 ( 1993 ) 3 - 56 . e. f. fama , k. r. french , size and book - to - market factors in earnings and returns , journal of finance 50 ( 1995 ) 131 - 155 . e. f. fama , k. r. french , multifactor explanations of asset pricing anomalies , journal of finance 51 ( 1996 ) 55 - 84 . m. carhart , on persistence of mutual fund performance , journal of finance 52 ( 1997 ) 57 - 82 . y. malevergne , d. sornette , a two - factor asset pricing model and the fat tail distribution of firm sizes , eth zurich preprint ( 2007 ) http://papers.ssrn.com/sol3/papers.cfm?abstract_id=960002 . y. malevergne , p. santa - clara , d. sornette , professor zipf goes to wall street , nber working paper no . 15295 ( 2009 ) http://ssrn.com/abstract=1458280 . d. sornette , a. johansen , significance of log - periodic precursors to financial crashes , quantitative finance 1 ( 4 ) ( 2001 ) 452 - 471 . k. ide , d. sornette , oscillatory finite - time singularities in finance , population and rupture , physica a 307 ( 2002 ) 63 - 106 . d. sornette , a. johansen , large financial crashes , physica a 245 n3 - 4 ( 1997 ) 411 - 422 . d. sornette , r. woodard , m. fedorovsky , s. reimann , h. woodard , w .- x . zhou , the financial bubble experiment : advanced diagnostics and forecasts of bubble terminations ( the financial crisis observatory ) , ( www.er.ethz.ch/fco/fbe_report_may_2010 ) ( 2010 ) . g. v. bothmer , c. meister , predicting critical crashes ? a new restriction for the free variables , physica a 320c ( 2003 ) 539 - 547 . d. cvijovic , j. klinowski , taboo search : an approach to the multiple minima problem , science 267 ( 5188 ) ( 1995 ) 664 - 666 . k. levenberg , a method for the solution of certain non - linear problems in least squares , quarterly of applied mathematics ii 2 ( 1944 ) 164 - 168 . d. w. marquardt , an algorithm for least - squares estimation of nonlinear parameters , journal of the society for industrial and applied mathematics 11 ( 2 ) ( 1963 ) 431 - 441 . we present another derivation of the model using the theory of the stochastic pricing kernel . our derivation follows and adapts that presented by zhou and sornette . under this theory , the no - arbitrage condition is presented as follows . the product of the stochastic pricing kernel ( stochastic discount factor ) and the value process , of any admissible self - financing trading strategy implemented by trading on a financial asset , should be a martingale : { \rm e_t } [ p(t ' ) d(t ' ) ] = p(t ) d(t ) , ~ \forall t ' > t ( [ eq : pmmartingale ] ) .
let us assume that the dynamics of the stochastic pricing kernel is formulated as in ( [ eq : sdf ] ) , where is the interest rate and is the zipf factor defined in ( [ hjyjuj4u ] ) . the process denotes the market price of risk , as measured by the covariance of asset returns with the stochastic discount factor , and represents all other stochastic factors acting on the stochastic pricing kernel . by definition , is independent of at any time : { \rm e_t}[dw \, d\hat{w } ] = { \rm e_t } [ dw ] \cdot { \rm e_t } [ d\hat{w } ] = 0~ , \forall t \geq 0 . we further use the standard form of the price dynamics in the jls model , where is the same brownian motion as in ( [ eq : sdf ] ) . the term represents the jump process , valued 0 when there is no crash and 1 when the crash occurs . the dynamics of the jumps is governed by the crash hazard rate defined in ( [ eq : hazard ] ) , with { \rm e_t}[dj ] = h(t ) dt . according to the stochastic pricing kernel theory , should be a martingale . taking the future time in ( [ eq : pmmartingale ] ) as the increment of the current time , we obtain { \rm e } \left [ \frac{(p(t)+dp)(d(t)+dd)-p(t)d(t)}{p(t)d(t)}\right ] = { \rm e } \left [ \frac{p(t)dd+d(t)dp + dd\,dp}{p(t)d(t)}\right ] = { \rm e } \left[ \frac{dd}{d } + \frac{dp}{p}+\frac{dd\,dp}{pd}\right ] = 0~. to satisfy this equation , the coefficient of should be zero , that is . this yields ( [ eq : appendixprice ] ) . when there is no crash ( ) , the expectation of the price process is obtained by integrating ( [ eq : appendixprice ] ) : { \rm e}[\ln p(t ) ] = \int ( \gamma z(t)+\kappa h(t)+r(t)+\sigma(t)\lambda(t))dt~. for and , we obtain : { \rm e}[\ln p(t ) ] = \int ( \gamma z(t)+\kappa h(t))dt = \gamma \zeta(t ) + \int \kappa h(t ) dt = \gamma \zeta(t ) + a + b(t_c - t)^m + c(t_c - t)^m\cos(\omega\ln ( t_c - t ) - \phi)~ , which recovers ( [ eq : lppl ] ) . ( figure captions , figs . [ fg : fit_ssec_zipf1 ] and [ fg : fit_ssec_zipf2 ] : the time when the crash started is marked by the vertical magenta dot - dashed line . the historical close prices are shown as blue full circles . the best 10 fits of the original jls model are shown as the green dashed lines and the best 10 fits of the new factor model are shown as the red solid lines . ( lower panel ) the corresponding zipf factor ( magenta solid line with ` x ' symbol ) and the function ( blue dot - dashed line ) during this period . )
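as a small consistency check ( ours ) of the statement that the r.h.s . of ( [ eq : lppl ] ) is the primitive of the crash hazard rate , the following sketch compares the numerically integrated hazard \kappa \int_0^t h(s ) ds , for an assumed pure power - law hazard and \gamma = 0 , with the closed form a + b ( t_c - t)^m :

```python
import numpy as np

kappa, beta, tc, m = 0.5, 0.3, 1.0, 0.5     # assumed parameters, gamma = 0
h = lambda s: beta * (tc - s) ** (m - 1.0)  # pure power-law crash hazard

def integrated_hazard(t, n=200_001):
    """kappa * integral_0^t h(s) ds via the trapezoid rule."""
    s = np.linspace(0.0, t, n)
    y = h(s)
    return kappa * np.sum(y[:-1] + y[1:]) * (s[1] - s[0]) / 2.0

a = kappa * beta / m * tc**m                # closed form: a + b*(tc - t)**m
b = -kappa * beta / m
for t in [0.2, 0.5, 0.8]:
    print(t, integrated_hazard(t), a + b * (tc - t) ** m)   # near-identical
```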
we present an extension of the johansen - ledoit - sornette ( jls ) model to include an additional pricing factor called the `` zipf factor '' , which describes the diversification risk of the stock market portfolio . keeping all the dynamical characteristics of a bubble described in the jls model , the new model provides additional information about the concentration of stock gains over time . this allows us to understand better the risk diversification and to explain the investors behavior during the bubble generation . we apply this new model to two famous chinese stock bubbles , from august 2006 to october 2007 ( bubble 1 ) and from october 2008 to august 2009 ( bubble 2 ) . the zipf factor is found highly significant for bubble 1 , corresponding to the fact that valuation gains were more concentrated on the large firms of the shanghai index . it is likely that the widespread acknowledgement of the 80 - 20 rule in the chinese media and discussion forums led many investors to discount the risk of a lack of diversification , therefore enhancing the role of the zipf factor . for bubble 2 , the zipf factor is found marginally relevant , suggesting a larger weight of market gains on small firms . we interpret this result as the consequence of the response of the chinese economy to the very large stimulus provided by the chinese government in the aftermath of the 2008 financial crisis .
stochastic many - body processes have long been of interest to physicists , largely from applications in condensed matter and chemical physics , such as surface growth , the aggregation of structures , reaction dynamics or pattern formation in systems far from equilibrium . through these studies ,statistical physicists have acquired a range of analytical and numerical techniques along with insights into the macroscopic phenomena that arise as a consequence of noise in the dynamics .it is therefore not surprising that physicists have begun to use these methods to explore emergent phenomena in the wider class of _ complex _ systems which in addition to stochastic interactions might invoke a _ selection _ mechanism . in particular , this can lead to a system adapting to its environment . the best - known process in which selection plays an important part is , of course , biological evolution .more generally , one can define an evolutionary dynamics as being the interplay between three processes .in addition to selection , one requires _ replication _ ( e.g. , of genes ) to sustain a population and _ variation _( e.g. , mutation ) so that there is something to select on .a generalized evolutionary theory has been formalized by biologist and philosopher of science david hull that includes as special cases both biological and cultural evolution .the latter of these describes , for example , the propagation of ideas and theories through the scientific community , with those theories that are `` fittest '' ( perhaps by predicting the widest range of experimental results ) having a greater chance of survival . within this generalized evolutionary framework, a theory of language change has been developed which we examine from the point of view of statistical physics in this paper .since it is unlikely that the reader versed in statistical physics is also an expert in linguistics , we spend some time in the next section outlining this theory of language change . then , our formulation of a very simple _ mathematical _ model of language change that we define in sec .[ moddef ] should seem rather natural . as this is not the only evolutionary approach that has been taken to the problem of language change , we provide again , for the nonspecialist reader a brief overview of relevant modeling work one can find in the literature .the remainder of this paper is then devoted to a mathematical analysis of our model .a particular feature of this model is that all speakers continuously vary their speech patterns according to utterances they hear from other speakers . since in our model ,the utterances produced represent a finite - sized sample of an underlying distribution , the language changes over time even in the absence of an explicit selection mechanism .this process is similar to _ genetic drift _ that occurs in biological populations when the individuals chosen to produce offspring in the next generation are chosen entirely at random .our model also allows for language change by selection as well as drift ( see sec .[ moddef ] ) . 
for this reason, we describe the model as the `` utterance selection model '' .as it happens , the mathematics of our model of language change turn out to be almost identical to those describing classical models in population genetics .this we discover from a fokker - planck equation for the evolution of the language , the derivation of which is given in sec .consequently , we have surveyed the existing literature on these models , and by doing so obtained a number of new results which we outline in sec . [ singlespeaker ] and whose detailed derivation can be found elsewhere . since in the language context, these results pertain to the rather limiting case of a single speaker which is nevertheless nontrivial because speakers monitor their own language use we extend this in sec .[ multispeaker ] to a wider speech community . in all cases we concentrate on properties indicative of change , such as the probability that certain forms of language fall into disuse , or the time it takes for them to do so . establishing these basic facts is an important step towards realizing our future aims of making a meaningful comparison with observational data .we outline such scope for future work and discuss our results in the concluding section .in order to model language change we focus on _ linguistic variables _ , which are essentially `` different ways of saying the same thing '' .examples include the pronunciation of a vowel sound , or an ordering of words according to their function in the sentence . in order to recognize change when it occurs, we will track the frequencies with which distinct _variants _ of a particular linguistic variable are reproduced in _utterances _ by a language s _ speakers_. let us assume that amongst a given group of speakers , one particular variant form is reproduced with a high frequency .this variant we shall refer to as the _ convention _ among that group of speakers . now , it may be that , over time , an unconventional possibly completely new variant becomes more widely used amongst this group of speakers .clearly one possibility here is that by becoming the most frequently used variant , it is established as the new convention at the expense of the existing one .it is this competition between variant forms , and particularly the propagation of innovative forms across the speech community , that we are interested in .we have so far two important ingredients in this picture of language change : the speakers , and the utterances they produce .the object relating a speaker to her utterances we call a _grammar_. more precisely , a speaker s grammar contains the entirety of her knowledge of the language .we assume this to depend on the frequencies she has heard particular variant forms used within her speech community . in turn , grammars govern the variants that are uttered by speakers , and how often .clearly , a `` real - world '' grammar must be an extremely complicated object , encompassing a knowledge of many linguistic variables , their variant forms and their suitability for a particular purpose .however , it is noticed that even competent speakers ( i.e. 
, those who are highly aware of the various conventions among different groups ) might use unconventional variants if they have become _ entrenched _ .for example , someone who has lived for a long time in one region may continue to use parts of the dialect of that region after moving to a completely new area .this fact will impact on our modeling in two ways .first , we shall assume that a given interaction ( conversation ) between two speakers has only a small effect on the established grammar .second , speakers will reinforce their own way of using language by keeping a record of their own utterances .another observed feature of language use is that there is considerable variation , not just from speaker to speaker but also in the utterances of a single speaker .there are various proposals for the origin of this variation . on the one hand, there is evidence for certain variants to be favored due to universal forces of language change .for instance articulatory and acoustic properties of sounds , or syntactic processing factors which are presumed common to all speakers favor certain phonetic or syntactic changes over others .these universals can be recognized through a high frequency of such changes occurring across many speech communities .on the other hand , variation could reflect the wide range of possible intentions a speaker could have in communicative enterprise .for example , a particular non - conventional choice of variant might arise from the desire not to be misunderstood , or to impress , flatter or amuse the listener .nevertheless , in a recent analysis of language use with a common goal , it was observed that variation is present in nearly all utterances .it seems likely , therefore , that variation arises primarily as a consequence of the fact that no two situations are exactly alike , nor do speakers construe a particular situation in exactly the same way . hence there is a fundamental indeterminacy to the communicative process . as a result, speakers produce variant forms for the same meaning being communicated .these forms are words or constructions representing possibly novel combinations , and occasionally completely novel utterances . given the large number of possible sources of variation and innovation , we feel it appropriate to model these effects using a stochastic prescription . in order to complete the evolutionary description , we require a mechanism that selects an innovative variant for subsequent propagation across the speech community . in the theory of ref . it is proposed that social forces play this role .this is based on the observation that speakers want to identify with certain subgroups of a society , and do so in part by preferentially producing the variants produced by members of the emulated subgroup .that is , the preference of speakers to produce variants associated with certain social groups acts as a selection mechanism for those variants .this particular evolutionary picture of language change ( see sec . [compare ] for contrasting approaches ) places an emphasis on utterances ( perhaps more so than on the speakers ) .indeed , in ref . 
the utterance is taken as the linguistic analog of dna . as speakers reproduce utterances , linguistic structures get passed on from generation to generation ( which one might define as a particular time interval ) . for this reason , the term _ lingueme _ has been coined in to refer to these structures , and to emphasize the analogy with genetics . one can then extend the analogy to identify linguistic variables with a particular _ gene locus _ and variant forms with _ alleles _ . we stress , however , that the analogy between this evolutionary formulation of language change and biological evolution is not exact . the distinction is particularly clear when one views the two theories in the more general framework of hull . the two relevant concepts are _ interactors _ and _ replicators _ , whose roles are played in the biological system by individual organisms and genes respectively . in biology , a replicator ( a gene ) `` belongs to '' an interactor ( an organism ) , thereby influencing the survival and reproductive ability of the interactor . this is then taken as the dominant force governing the make - up of the population of replicators in the next generation . the survivability of a replicator is not due to an inherent `` fitness '' : it is the organism whose fitness leads to the differential survival or extinction of replicators . also , the relationship between genotype and phenotype is indirect and complex . nevertheless , there is a sufficient correlation between genes and phenotypic traits of organisms such that the differential survival of the latter causes the differential survival of the former ( this is hull s definition of `` selection '' ) , but the correlation is not a simple one . in the linguistic theory outlined here , the interactors ( speakers ) and replicators ( linguemes ) have quite different relationships to one another . the replicators are uttered by speakers , and there is no one - to - one relationship between a replicator ( a lingueme ) and the speaker who produces it . nevertheless , hull s generalized theory of selection can be applied to the lingueme as replicator and the speaker as interactor . linguemes and lingueme variation are generated by speaker intercourse , just as new genotypes are generated by sexual intercourse . the generation process is replication , that is , speakers are replicating sounds , words and constructions they have heard before . finally , the differential survival of the speakers , that is , their social `` success '' , causes the differential survival of the linguemes they produce , and so the social mechanisms underlying the propagation of linguistic variants conform to hull s definition of selection . in short , we do not suppose that the language uttered by an interactor has any effect on its survival , believing the dominant effects on language change to be social in origin . that is , the survivability of a replicator is not due to any inherent _ fitness _ , but arises instead from the social standing of individuals associated with the use of the corresponding variant form . it is therefore necessary that in formulating a mathematical model of language change , one should not simply adapt an existing biological theory , but start from first principles . this is the program we now follow . the utterance selection model comprises a set of rules that govern the evolution of the simplest possible language viewed from the perspective of the previous section . this language has a single lingueme with a restricted number of variant forms .
at present we simply assume the existence of multiple variants of a lingueme : modeling the communicative process and the means by which indeterminacy in communication ( see sec . [ framework ] ) leads to the generation of variation is left for future work . in the speech community we have individuals , each of whose knowledge of the language ( the grammar ) is encoded in a set of variables . in a manner shortly to be defined precisely , the variable reflects speaker s ( ) perception of the frequency with which lingueme variant ( ) is used in the speech community at time . at all times these variables are normalized so that the sum over all variants for each speaker is unity . for convenience , we will sometimes use a vector notation to denote the entirety of speaker s grammar . the state of the system at time is then the aggregation of grammars . after choosing some initial condition ( e.g. , a random initial condition ) , we allow the system to evolve by repeatedly iterating the following three steps in sequence , each iteration having duration . _ 1 . social interaction . _ a pair of speakers is chosen with a ( prescribed ) probability . there is no notion of an ordering of a particular pair of speakers in this model , and so we implicitly have , normalized such that the sum over _ distinct _ pairs . see fig . [ fig - society ] . speakers in the society interact with different frequencies ( shown here schematically by different thicknesses of lines connecting them ) . the pair of speakers is chosen to interact with probability . ] _ 2 . reproduction . _ both the speakers selected in step 1 produce a set of _ tokens _ , i.e. , instances of lingueme variants . each token is produced independently and at random , with the probability that speaker utters variant equal to the _ production probability _ , which will be determined in one of two ways ( see below ) . the numbers of tokens of each variant are then drawn from the multinomial distribution where , , , and where we have dropped the explicit time dependence to lighten the notation . speaker produces a sequence of tokens according to the same prescription , with the obvious replacement . the randomness in this step is intended to model the observed variation in language use that was described in the previous section . the first , and simplest , possible prescription for obtaining the reproduction probabilities is simply to assign . since the grammar is a function of the speaker s experience of language use ( the next step explains precisely how ) , this reproduction rule does not invoke any favoritism towards any particular variants on behalf of the speaker . we therefore refer to this case as _ unbiased _ reproduction , depicted in fig . [ fig - unbiased ] . both speakers and produce an utterance , with particular lingueme variants appearing with a frequency given by the value stored in the utterer s grammar when no production biases are in operation . in this particular case three variants are shown ( and ) and the number of tokens , , is equal to 6 . ] we shall also study a _ biased _ reproduction model , illustrated in fig . [ fig - biased ] . here , the reproduction probabilities are a linear transformation of the grammar frequencies , i.e.
in which the matrix $M$ must have column sums of unity so that the production probabilities are properly normalized. This matrix is common to all speakers, which would be appropriate if one is considering the effects of universal forces (such as articulatory considerations) on language. Furthermore, in contrast to the unbiased case, this reproduction model admits the possibility of innovation, i.e., the production of variants that appear with zero frequency in a speaker's grammar. (Figure [fig-biased]: in the biased reproduction model, the probability of uttering a particular variant is a linear combination of the values stored in the grammar.)

_3. Retention._ The final step is to modify each speaker's grammar to reflect the actual language used in the course of the interaction. The simplest approach here is to add to the existing speaker's grammar additional contributions which reflect both the tokens produced by her and by her interlocutor. The weight given to these tokens, relative to the existing grammar, is given by a parameter $\lambda$. Meanwhile, the weight, relative to her own utterances, that speaker $i$ gives to speaker $j$'s utterances is specified by $H_{ij}$. This allows us to implement the social forces mentioned in the previous section. These considerations imply that
\[ x_{iv}(t + \delta t) = \frac{1}{1 + \lambda (1 + H_{ij})} \left[ x_{iv}(t) + \lambda \left( \frac{n_{iv}}{T} + H_{ij} \frac{n_{jv}}{T} \right) \right] \]
for speaker $i$, and the same rule for speaker $j$ after exchanging all $i$ and $j$ indices. Fig. [fig-retain] illustrates this step. The parameter $\lambda$, which affects how much the grammar changes as a result of the interaction, is intended to be small, for reasons given in the previous section. (Figure [fig-retain]: after the utterances have been produced, both speakers modify their grammars by adding to them the frequencies with which the variants were produced in the conversation; note that each speaker retains both her own utterances as well as those of her interlocutor, albeit with different weights.) The prefactor in the update rule ensures that the normalization ([normalize]) is maintained.

Although we have couched this model in terms of the grammar variables, we should stress that these are not observable quantities. Really, we should think in terms of the population of utterances produced in a particular generation, e.g., a time interval as indicated in fig. [fig-generation]. However, since the statistics of this population can be derived from the grammar variables (indeed, in the absence of production biases they are the same), we shall in the following focus on the latter. (Figure [fig-generation]: a generation of a population of utterances in the utterance selection model could be defined as the set of tokens produced by all speakers in a macroscopic time interval.)

Evolutionary modeling has a long history in the field of language change and development. Indeed, at a number of points in _The Origin of Species_, Charles Darwin draws parallels between the changes that occur in biological species and in languages. In particular, he used the everyday observation that languages tend to change slowly and continuously over time to challenge the then prevailing view that biological species were fixed entities, occupying immovable points in the space of all possible organisms. As evolutionary theories of biology have become more formalized, it is not surprising that there have been a number of attempts to apply more formal evolutionary ideas to language change (see, e.g., ). In this section we describe a few of these studies in order that the reader can see how our approach differs from others one can find in the literature.
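Before doing so, it may help to fix ideas with a minimal simulation sketch of the three-step dynamics defined above. This is our own illustration rather than code from the original formulation: the parameter values are arbitrary, production is taken to be unbiased, and the interaction probabilities $G_{ij}$ are uniform.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative parameters: N speakers, V variants, T tokens
# per utterance, retention weight lambda, interlocutor weight h.
N, V, T = 10, 3, 5
lam, h = 0.01, 0.1

# Random initial grammars x[i, v], each row normalized to sum to one.
x = rng.random((N, V))
x /= x.sum(axis=1, keepdims=True)

for _ in range(50_000):
    # Step 1: social interaction -- choose an unordered pair (uniform G_ij).
    i, j = rng.choice(N, size=2, replace=False)

    # Step 2: utterance production -- unbiased, so x'_iv = x_iv;
    # token counts are multinomial with T trials.
    n_i = rng.multinomial(T, x[i])
    n_j = rng.multinomial(T, x[j])

    # Step 3: retention -- own tokens weighted by lambda, the
    # interlocutor's by lambda * h; the denominator keeps sum_v x_iv = 1.
    x[i] = (x[i] + lam * (n_i / T + h * n_j / T)) / (1 + lam * (1 + h))
    x[j] = (x[j] + lam * (n_j / T + h * n_i / T)) / (1 + lam * (1 + h))

print(x.round(3))
```

In the absence of production biases, repeated runs of this kind end with every grammar concentrated on a single common variant, the fixation behavior analyzed later in this paper.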
One area in which biological evolution plays a part is the development of the capacity to use language (see, e.g., for a brief overview). Although this is in itself an interesting topic to study, we do not suppose that this (presumably) genetic evolution is strongly related to language change, since the latter occurs on much shorter timescales. For example, the FOXP2 gene (which is believed to play a role in both language production and comprehension) became fixed around 120,000 years ago, whereas the patterns in the use of linguistic variables can change over periods as short as tens of years.

Given an ability to use language, one can ask how the various linguistic structures (such as particular aspects of grammar or syntax) come into being. Here, evolutionary models that place particular emphasis on language _learning_ are often employed. Some aspects of this type of work are reviewed in . Here we remark that, in order to see the emergence of grammatical rules, one must model a grammar at a much finer level than we have done here. Indeed, we have left aside the (nevertheless interesting) question of _how_ an innovation is recognized as "a different way of saying the same thing" by all speakers in the community. Instead, we assume that this agreement is always reached, and concentrate on the fate of new variant forms.

Similar kinds of assumptions have been used in a learning-based context by Niyogi and Berwick to study language change. In learning-based models in general, the mechanism for language change lies in speakers at an early stage of their life having a (usually finite) set of possible grammars to choose from, and using the data presented to them by other speakers to hypothesize the grammar being used to generate utterances. Since these data are finite, there is the possibility that a child listening to language in use infers a grammar that differs from his parents'; this grammar becomes fixed once the speaker reaches maturity. Our model of continuous grammatical change, as a consequence of exposure to other speakers at all stages in a speaker's life, is quite different to learning-based approaches. In particular, it assumes an inductive model of language acquisition, in which the child entertains hypotheses about sets of words and grammatical constructions rather than about entire discrete grammars. Thus, our model does not assume that a child has in her mind a large set of discrete grammars.

The specific model in assigns grammars (languages) to a proportion of the population of speakers in a particular generation. A particular learning algorithm then implies a mapping of the proportions of speakers using a particular language from one generation to the next. Since one is dealing with nonlinear iterative maps, one can find familiar phenomena such as bifurcations and phase transitions in the evolution of the language. Note, however, that the dynamics of the population of utterances and speakers are essentially the same in this model, since the only thing distinguishing speakers is grammar. In the utterance selection model, we have divorced the population dynamics of speakers and utterances, and allow the former to be distinguished in terms of their social interactions with other speakers (which is explicitly ignored in ). This has allowed us to take a _fixed_ population of speakers without necessarily preventing the population of utterances from changing.
In other words, language change may occur if the general structure of a society remains intact as individual speakers are replaced by their offspring, or even during a period of time when there is no change in the makeup of the speaker population; both of these possibilities are widely observed.

An alternative approach to language change in the learning-based tradition is not to have speakers attempt to infer the grammatical rules underpinning their parents' language use, but to select a grammar based on how well it permits them to communicate with other members of the speech community. This path has been followed most notably by Nowak and coworkers in a series of papers (including ), as well as by members of the statistical physics community. This thinking allows one to borrow the notion of _fitness_ from biological evolutionary theories: the more people a particular grammar allows you to communicate with, the fitter it is deemed to be. In order for language use to change, speakers using a more coherent grammar selectively produce more offspring than others, so that the language as a whole climbs a hill towards maximal coherence. The differences between this and our way of thinking should be clear from sec. [framework]. In particular, we assume no connection between the language a speaker uses and her biological reproductive fitness. Finally, on the subject of learning-based models, we remark that not all of them assume language transmission from parents to offspring. For example, in , the effects of children also learning from their peers are investigated.

Perhaps closer in spirit to our own work are studies that have languages competing for speakers. The simplest model of this type is due to Abrams and Strogatz, which deems a language "attractive" if it is spoken by many speakers or has some (prescribed) added value. For example, one language might be of greater use in a trading arrangement. In , good agreement with available data for the number of speakers of minority languages was found, revealing that the survival chances of such languages are typically poor. More recently, the model has been extended by Minett and Wang to implement a structured society and the possibility of bilingualism. One might view the utterance selection model as being relevant here if the variant forms of a lingueme represent different languages. However, there are then significant differences in detail. First, the way the utterance selection model is set up would imply that all languages are mutually intelligible to all speakers. Second, in the models of , learning a new language is a strategic decision, whereas in the utterance selection model it would occur simply through exposure to individuals speaking that language.
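For orientation, the Abrams-Strogatz dynamics can be written (in our paraphrase, with $x$ the fraction of speakers of language $X$, $s$ its relative status, and $c$, $a$ constants) as
\[ \frac{dx}{dt} = (1 - x)\, c\, x^{a} s \;-\; x\, c\, (1 - x)^{a} (1 - s) \;, \]
so that the rate at which speakers adopt a language grows with its current frequency and status; for $a > 1$ the monolingual states $x = 0$ and $x = 1$ are stable, which underlies the poor survival prospects of minority languages mentioned above.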
To summarize, the distinctive feature of our modeling approach is that we consider the dynamics of the population of utterances to be separate from that of the speech community (if indeed the latter changes at all). Furthermore, we assume that language propagates purely through exposure, with social status acting as a selection pressure, rather than through some property of the language itself, such as coherence. The purpose of this work is to establish an understanding of the consequences of the assumptions we have made, particularly in those cases where the utterance selection model can be solved exactly.

We begin our analysis of the utterance selection model by constructing a Fokker-Planck equation via an appropriate continuous-time limit. There are several ways one could proceed here. For example, one could scale the interaction probabilities proportional to $\delta t$ (the constant of proportionality then being an interaction rate). Whilst this would yield a perfectly acceptable continuous-time process, the Fokker-Planck equation that results is unwieldy and intractable. Therefore we will not follow this path, but will discuss two other approaches below. The first will be applicable when the number of tokens is large. This will not generally be the case, but it will serve to motivate the second approach, which is closer to the situation which we are modeling.

To clarify the derivation it is convenient to start with a single speaker, a case which, although linguistically trivial, is far from mathematically trivial. It also has an important correspondence to population dynamics, which is explored in more detail in sec. [popgen]. In this case there is no matrix $G$, and in fact we can drop the speaker indices $i$ and $j$ completely. This means that the update rule ([update]) takes the simpler form
\[ x_v(t + \delta t) = \frac{x_v(t) + \lambda n_v / T}{1 + \lambda} \;, \]
and so $\delta x_v \equiv x_v(t + \delta t) - x_v(t)$ is given by
\[ \delta x_v = \frac{\lambda}{1 + \lambda} \left( \frac{n_v}{T} - x_v \right) \;. \]
The derivation of the Fokker-Planck equation involves the calculation of averages of powers of $\delta x_v$. Using eq. ([multinomial]), the average of $n_v$ is $T x'_v$. If we begin by assuming unbiased reproduction, then $x'_v = x_v$ and so the average of $\delta x_v$ is zero. In the language of stochastic dynamics, there is no deterministic component; the only contribution is from the diffusion term. This is characterized by the second moment, which is calculated in the appendix to be
\[ \langle \delta x_v \, \delta x_w \rangle = \left( \frac{\lambda}{1 + \lambda} \right)^{\!2} \frac{1}{T} \left( x_v \delta_{vw} - x_v x_w \right) \;, \]
where the angle brackets represent an average over all possible realizations. To give a contribution to the Fokker-Planck equation, the second moment ([second_mom]) has to be of order $\delta t$. One way to arrange this is as follows. We choose the unit of time such that $T$ utterances are made in unit time. Thus the time interval between utterances, $\delta t = 1/T$, is small if $T$ is large. Furthermore, although the frequency of a particular variant in an utterance, $n_v / T$, varies in steps of $1/T$, the steps are very small. Therefore, when $T$ becomes very large, the time and variant frequency steps become very small and can be approximated as continuous variables.
The second jump moment, which is actually what appears in the Fokker-Planck equation, is found by dividing the expression ([second_mom]) by $\delta t$ and letting $T \to \infty$:
\[ \alpha_{vw}(x) = \left( \frac{\lambda}{1 + \lambda} \right)^{\!2} \left( x_v \delta_{vw} - x_v x_w \right) \;. \]
Since the higher moments of the multinomial distribution involve higher powers of $1/T$, they give no contribution, and the only non-zero jump moment is given by eq. ([second_jump_mom]). As discussed in the appendix, or in standard texts on the theory of stochastic processes, this gives rise to the Fokker-Planck equation
\[ \frac{\partial P(x,t)}{\partial t} = \frac{1}{2} \left( \frac{\lambda}{1 + \lambda} \right)^{\!2} \sum_{v,w} \frac{\partial^2}{\partial x_v \partial x_w} \left[ \left( x_v \delta_{vw} - x_v x_w \right) P(x,t) \right] \;, \]
where we have suppressed the dependence of the probability distribution function on the initial state of the system.

The equation ([fpe_1_nobias]) holds only for unbiased reproduction. It can be generalized to biased reproduction by noting that, as $T \to \infty$, the production process becomes deterministic. Thus eq. ([delta_x_1]) is replaced by the deterministic equation
\[ \delta x_v = \frac{\lambda}{1 + \lambda} \left( x'_v - x_v \right) \;. \]
However, we may write eq. ([lintrans]) using the condition $\sum_v M_{vw} = 1$ as
\[ x'_v - x_v = \sum_{w \ne v} \left( M_{vw} x_w - M_{wv} x_v \right) \;. \]
The diagonal entries of $M$ are omitted in the last line because the condition $\sum_v M_{vw} = 1$ means that in each column one entry is not independent of the others. If we choose this entry to be the diagonal one, then all elements of $M$ appearing in eq. ([bias_deter]) are independent. Thus the diagonal entries of $M$ have no significance; they are simply given by $M_{ww} = 1 - \sum_{v \ne w} M_{vw}$.

From eqs. ([delta_x_1_deter]) and ([bias_deter]) we see that, in order to obtain a finite limit as $T \to \infty$, we need to assume that the off-diagonal entries of $M$ are of order $1/T$. Specifically, we define $m_{vw} = T M_{vw}$ for $v \ne w$. Then, in the limit $T \to \infty$, deterministic effects such as this give rise to drift (first-derivative) contributions in the derivation of the Fokker-Planck equation, unlike the contributions arising from diffusion, which enter through second derivatives. Therefore, the first jump moment in the case of biased reproduction is given by the right-hand side of eq. ([deter_eqn]). The second jump moment is still given by eq. ([second_jump_mom]), since any additional terms involving $m$ are of order $1/T$ and so give terms which vanish in the limit.

This discussion may be straightforwardly extended to the case of many speakers. The only novel feature is the appearance of the matrix $H$. In order to obtain a deterministic equation of the type ([deter_eqn]), a new matrix has to be defined by $h_{ij} = T H_{ij}$. Thus, in summary, what could be called the "large-$T$ approximation" is obtained by choosing $\delta t = 1/T$, and defining new matrices $m$ and $h$ through $m_{vw} = T M_{vw}$ and $h_{ij} = T H_{ij}$ for $v \ne w$ and $i \ne j$. It is the classic way of deriving Fokker-Planck equations as the "diffusion approximation" to a discrete process. However, for our purposes it is not a very useful approximation. This is simply because we do not expect that in realistic situations the number of tokens will be large, so it would be useful to find another way of taking the continuous-time limit. Fortunately, another parameter is present in the model which we have not yet utilized. This is $\lambda$, which characterizes the small effect that utterances have on the speaker's grammar.
If we now return to the case of a single speaker with unbiased reproduction, we see from eq. ([second_mom]) that an alternative to taking $T \to \infty$ is to take $\lambda \to 0$. Thus, in this second approach, we leave $T$ as a parameter in the model, and set the small parameter $\delta t$ equal to $\lambda^2$. The second jump moment ([second_jump_mom]) in this formulation is replaced by
\[ \alpha_{vw}(x) = \frac{1}{T} \left( x_v \delta_{vw} - x_v x_w \right) \;. \]
Bias may be introduced as before, and gives rise to eqs. ([delta_x_1_deter]) and ([bias_deter]). The difference in this case is that $\delta x_v$ has been assumed to be $\operatorname{O}(\lambda)$, and so the off-diagonal entries of $M$ (and the entries of $H$ in the case of more than one speaker) have to be rescaled by $\lambda$, rather than $1/T$. This means that in this second approach we must rescale the various parameters in the model according to
\[ \delta t = \lambda^2 \;, \qquad m_{vw} = \frac{M_{vw}}{\lambda} \;\;(v \ne w) \;, \qquad h_{ij} = \frac{H_{ij}}{\lambda} \]
as $\lambda \to 0$. We have found good agreement between the predictions obtained using this continuous-time limit and the output of Monte Carlo simulations when $\lambda$ was sufficiently small.

In sec. [continuous] we have outlined the considerations involved in deriving a Fokker-Planck equation to describe the process. We concluded that, for our present purposes, the scalings given by eqs. ([rescale1])-([rescale3]) were most appropriate. Much of the discussion was framed in terms of a single speaker, because the essential points are already present in this case, but here we will study the full model. The resulting Fokker-Planck equation describes the time evolution of the probability distribution function $P(x,t)$ for the system to be in state $x$ at time $t$, given it was originally in state $x(0)$, although we will frequently suppress the dependence on the initial conditions. The variables $x_{iv}$ with $v = 1, \dots, V-1$ comprise $N(V-1)$ _independent_ grammar variables, since the grammar variable $x_{iV}$ is determined by the normalization ([normalize]). The derivation of the Fokker-Planck equation is given in the appendix. It contains three operators, each of which corresponds to a distinct dynamical process. Specifically, one has for the evolution of the distribution
\[ \frac{\partial P(x,t)}{\partial t} = \sum_i \phi_i \left[ \hat{\mathcal{L}}^{\rm(bias)}_i + \hat{\mathcal{L}}^{\rm(rep)}_i \right] P(x,t) + \sum_{\langle ij \rangle} G_{ij}\, \hat{\mathcal{L}}^{\rm(int)}_{ij}\, P(x,t) \;, \]
in which $\phi_i = \sum_j G_{ij}$ is the probability that speaker $i$ participates in any interaction. The operator $\hat{\mathcal{L}}^{\rm(bias)}_i$ arises as a consequence of bias in the production probabilities. Note that the variable $x_{iV}$ appearing in this expression must be replaced by $1 - \sum_{v=1}^{V-1} x_{iv}$ in order that the resulting Fokker-Planck equation contains only the independent grammar variables. As discussed above, the finite-size sampling of the (possibly biased) production probabilities yields the stochastic contribution $\hat{\mathcal{L}}^{\rm(rep)}_i$ to the Fokker-Planck equation. In a physical interpretation, this term describes for each speaker an independently diffusing particle, albeit with a spatially-dependent diffusion constant, in the $(V-1)$-dimensional space.
On the boundaries of this space, one finds there is always a zero eigenvalue of the diffusion matrix that corresponds to the direction normal to the boundary. This reflects the fact that, in the absence of bias or interaction with other speakers, it is possible for a variant to fall into disuse, never to be uttered again. These _extinction_ events are of particular interest, and we investigate them in more detail below. The third and final contribution to ([driftfpe]) comes from speakers retaining a record of others' utterances. This leads to different speakers' grammars becoming coupled via the interaction term $\hat{\mathcal{L}}^{\rm(int)}_{ij}$.

We end this section by rewriting the Fokker-Planck equation as a continuity equation in the usual way: $\partial P / \partial t = -\sum_{i,v} \partial J_{iv} / \partial x_{iv}$, where $J_{iv}$ is the probability current. The boundary conditions on the Fokker-Planck equation with and without bias differ. In the former case, the boundaries are reflecting; that is, there is no probability current flowing through them. In the latter case, they are so-called exit conditions: all the probability which diffuses to the boundary is extracted from the solution space. The result ([current]) will be used in subsequent sections when finding the equations describing the time evolution of the moments of the probability distribution.

The Fokker-Planck equation derived in the previous section is well known to population geneticists, being a continuous-time description of simple models formulated in the 1930s by Fisher and Wright. Despite criticism that they are oversimplified (see, e.g., the short article by Crow for a brief history), these models have retained their status as important paradigms of stochasticity in genetics to the present day. Although biologists often discuss these models in terms of individuals that have two parents, it is sufficient for our purposes to describe the much simpler case of an asexually reproducing population. The central idea is that a given (integer) generation $t$ of the population can be described in terms of a gene pool containing $2K$ genes, of which a number $n_v$ carry allele $v$ at a particular locus, with $n_v \ge 0$ and $\sum_v n_v = 2K$. In the literature, an analogy with a bag containing beans is sometimes made, with different colored beans representing different alleles. The next generation is then formed by selecting, _with replacement_, $2K$ genes (beans) randomly from the current population. This process is illustrated in fig. [fig-beanbag]. The replacement is crucial, since this allows for _genetic drift_, i.e., changes in allele frequencies from one generation to the next arising from the random sampling of parents, despite the overall population size remaining fixed. (Figure [fig-beanbag]: Fisher-Wright "beanbag" population genetics. The population in generation $t+1$ is constructed from generation $t$ by (i) selecting a gene from the current generation at random; (ii) copying this gene; (iii) placing the copy in the next generation; (iv) returning the original to the parent population. These steps are repeated until generation $t+1$ has the same sized population as generation $t$.) The probability of having $n_v(t+1)$ copies of allele $v$ in generation $t+1$, given that there were $n_v(t)$ in the previous generation, is easily shown to be multinomial, i.e.,
\[ P\big( \{ n_v(t+1) \} \,\big|\, \{ n_v(t) \} \big) = \frac{(2K)!}{\prod_v n_v(t+1)!} \prod_v \left( \frac{n_v(t)}{2K} \right)^{n_v(t+1)} \;. \]
Using the properties of this distribution (see appendix), it is straightforward to learn that the mean change in the number of copies of allele $v$ in the population from one generation to the next is zero.
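As a concrete illustration of this resampling step (our own sketch; the pool size and allele counts below are arbitrary), the beanbag dynamics amount to a single multinomial draw per generation:

```python
import numpy as np

rng = np.random.default_rng(1)

two_K = 200                      # gene pool size 2K, arbitrary
n = np.array([120, 50, 30])      # initial counts of three alleles

generations = 0
while (n > 0).sum() > 1:         # iterate until one allele is fixed
    # Each of the 2K genes in the next generation is an independent
    # draw (with replacement) from the current pool: genetic drift.
    n = rng.multinomial(two_K, n / two_K)
    generations += 1

print("fixed allele:", int(np.argmax(n)), "after", generations, "generations")
```

Since the expected change in each count is zero, the allele frequencies wander diffusively until one of them is absorbed at fixation, which is exactly the behavior captured by the diffusion approximation derived next.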
If we introduce $x_v = n_v / 2K$ as the fraction of allele $v$ in the gene pool at generation $t$, we find that the second moment of this change is
\[ \big\langle \left[ x_v(t+1) - x_v(t) \right] \left[ x_w(t+1) - x_w(t) \right] \big\rangle = \frac{1}{2K} \left( x_v(t)\, \delta_{vw} - x_v(t)\, x_w(t) \right) \;. \]
By following the procedure given in the appendix, one obtains the Fokker-Planck equation
\[ \frac{\partial P}{\partial t} = \frac{1}{4K} \sum_{v,w} \frac{\partial^2}{\partial x_v \partial x_w} \left[ \left( x_v \delta_{vw} - x_v x_w \right) P \right] \]
to leading order in $1/K$. Since one is usually interested in large populations, terms of higher order in $1/K$, which involve higher derivatives, are neglected. Thus one obtains a continuous diffusion equation for allele frequencies, valid in the limit of a large (but finite) population. We see, by comparing the right-hand side of ([fwfpe]) with ([lrep]), that the Fisher-Wright dynamics of allele frequencies in a large biological population coincide with the stochastic component of the evolution of a speaker's grammar. Because of this mathematical correspondence, it is useful occasionally to identify a speaker's grammar with a biological population. However, as noted at the end of sec. [moddef], this should not be confused with the population of utterances that is central to our approach to the problem of language change.

As we previously remarked, the fact that a speaker retains a record of her own utterances means that the grammar of a single speaker will be subject to drift, even in the absence of other speakers, or where zero weight is given to other speakers' utterances. In this case, a single speaker's grammar exhibits essentially the same dynamics as a biological population in the Fisher-Wright model. We outline existing results from the literature, as well as some extensions recently obtained by us, in sec. [singlespeaker] below. The requirement that the population size be large for the validity of the diffusion approximation ([fwfpe]) of Fisher-Wright population dynamics relates to the large-$T$ approximation of sec. [continuous]. By contrast, the small-$\lambda$ approximation relates to an _ageing_ population, i.e., one where only a fraction of the individuals are replaced in each generation. This is similar to a Moran model in population genetics, in which a single individual is replaced in each generation. Its continuous-time description is also given by ([fwfpe]), but with a modified effective population size. It is worth noting that when production biases are present, i.e., the parameters $m_{vw}$ are nonzero, the resulting single-speaker Fokker-Planck equation corresponds to a Fisher-Wright process in which mutations occur.
In the beanbag picture, one would realize this mutation by having a probability proportional to $m_{vw}$ of placing a bean of color $v$ in the next population, given that the bean selected from the parent population was of color $w$. It is again possible to obtain exact results for this model, albeit for a restricted set of mutation rates. We discuss these below in sec. [singlespeaker]. The remaining set of parameters in the utterance selection model, the $h_{ij}$, correspond to _migration_ rates from population $j$ to population $i$ in the biological interpretation. It is apparently much more difficult to treat populations coupled in this way under the continuous-time diffusion approximation. A prominent exception is where one has two populations: a fixed mainland population and a changing island population. The assumption that the mainland population is fixed is reasonable if it is much larger than the island population. Since a speaker's grammar does not have a well-defined size, this way of thinking is unlikely to be of much utility in the context of language change. Therefore in sec. [multispeaker] we pursue the diffusion approximation with all speakers (islands) placed on the same footing. This work contrasts with investigations based on ancestral lineages ("the coalescent") that one can find in the population genetics literature (see, e.g., for a recent review of applications to geographically divided populations). We shall also make use of these results to gain an insight into the multi-speaker model. Finally in this section we note that a feature ubiquitous in many biological models, namely the selective advantage (or fitness) of alleles, is not relevant in the context of language change. For reasons we have already discussed in sec. [framework], we do not consider lingueme variants to possess any inherent fitness.

We begin our analysis of the utterance selection model by considering the case of a single speaker, which is nontrivial because a speaker's own utterances form part of the input to her own grammar. We outline relevant results that have been established in the population genetics literature, along with an overview of our new findings, which we have presented in detail elsewhere. We begin with the case where production biases (mutations) are absent. When the probability of uttering a particular variant form is equal to the frequency stored in the speaker's grammar (we drop the speaker subscript, as there is only one of them), the Fokker-Planck equation reads
\[ \frac{\partial P}{\partial t} = \frac{1}{2T} \sum_{v,w=1}^{V-1} \frac{\partial^2}{\partial x_v \partial x_w} \left[ \left( x_v \delta_{vw} - x_v x_w \right) P \right] \;, \]
where $V$ is the total number of possible variants. We see that in this case $T$ enters only as a timescale, and so we can put $T = 1$ with no loss of generality in the following. One way to study the evolution of this system is through the time-dependence of the moments of $P$. Multiplying ([fpess]) by $x_v^n$ and integrating by parts, one finds
\[ \frac{d}{dt} \langle x_v^n \rangle = \frac{n(n-1)}{2} \left[ \langle x_v^{n-1} \rangle - \langle x_v^n \rangle \right] \;. \]
We see immediately that the mean of $x_v$ is conserved by the dynamics. The higher moments have a time-dependence that can be calculated iteratively for $n = 2, 3, \dots$. For example, for the variance one finds
\[ \sigma_v^2(t) = x_v(0) \left[ 1 - x_v(0) \right] \left( 1 - e^{-t} \right) \;. \]
Remarkably, and as we showed in , the full time-dependent solution of ([fpess]) can be obtained under a suitable change of variable. The required transformation is
\[ y_v = \frac{x_v}{1 - \sum_{w=1}^{v-1} x_w} \;, \]
which maps the space onto the $(V-1)$-dimensional unit hypercube, $0 \le y_v \le 1$.
In the new coordinate system the Fokker-Planck equation takes the form given as eq. ([fpun1]), and the solution is then obtained by separation of variables. First, we separate the time and space variables, so that given a fixed initial condition one has
\[ P(y, t) = \sum_{k} a_k\, \psi_k(y)\, e^{-\Lambda_k t} \;; \]
here $\Lambda_k$ and $\psi_k$ are the eigenvalues and corresponding eigenfunctions of the operator appearing in ([fpun1]), and the $a_k$ are a set of expansion coefficients that are determined by the initial condition. One can then separate each of the variables $y_v$, since the operator for $V$ variants embeds that for $V-1$ variants through the recursion ([drec]). To see this, let us assume we have found an eigenfunction of the $(V-1)$-variant operator with accompanying eigenvalue $\Lambda'$. Now we make a product ansatz for an eigenfunction of the $V$-variant operator, where the corresponding eigenvalue $\Lambda$ remains to be determined. Inserting this ansatz into ([drec]) yields the ordinary differential equation ([ode]) that has to be solved for the remaining single-variable factor. Note that when $V = 2$ we have only one independent variable, and the eigenfunction of the operator with eigenvalue $\Lambda$ is the solution of ([ode]) with $\Lambda' = 0$. Beginning with this case in ([phirec]) and iterating the requisite number of times, one finds that an eigenfunction of the $V$-variant Fokker-Planck equation is a product of single-variable functions, eq. ([eigey]). That is, the partial differential equation ([fpun1]) is separable in the variables $y_v$ as claimed, and each factor in the product is a solution of the ordinary differential equation ([ode]), which contains two parameters. After an appropriate substitution, ([ode]) can be brought into a standard hypergeometric form whose solutions are Jacobi polynomials.

This analysis yields the eigenvalues of the Fokker-Planck equation. When there are initially $V$ variants, these are labeled by $V-1$ non-negative integers. Note that all the eigenvalues are positive: that is, the function $P(y,t)$ decays over time. This is because, when no production biases are present, once a variant's frequency vanishes it can never be uttered again; i.e., variants successively become extinct until eventually one of them becomes _fixed_. Hence, the stationary probability distribution comprises delta functions at the points where one of the frequencies $x_v$ equals unity. Since the mean of the distribution is conserved (see above), the weight under each delta function, which is the probability that the corresponding variant is the only one in use as $t \to \infty$, is simply the variant's mean frequency in the initial condition. Although we do not give the solution explicitly here, it is plotted for a two-variant unbiased system in figure [2var_sol_0]. The distribution in the interior of the domain decays with time, as the probability of one variant being eliminated (not plotted) grows. (Figure [2var_sol_0]: time development of the exact solution of the Fokker-Planck equation for a single speaker with two variants initially, when bias is absent.)

It is remarkable that the solution of the Fokker-Planck equation for $V$ variants is not much more complicated than the solution of the corresponding equation for 2 variants. This turns out to be a feature of other quantities associated with this problem.
For example, the probability that variant $v$ is the only one remaining at a given time $t$, given an initial condition $x(0)$, can be calculated rather easily, because a reduction to an effective two-variant problem can be found to work in this case as well. To understand this idea, it is helpful to return to the beanbag picture of population genetics of the previous section. We are interested in knowing the probability that all beans in the bag have the same color, say, for concreteness, chartreuse. Let $x$ then be the fraction of such beans in the bag in the current generation. In the next generation, each bean has a probability $x$ of being chartreuse, and a probability $1-x$ of being some other color. Clearly, the number of chartreuse beans in the next generation has the distribution ([fwmarkov]) with two effective alleles, which is the reduction to the two-variant problem. The form of $f(x_0, t)$, the probability of fixation by time $t$, was first found by Kimura in this case; it is expressed as $x_0$ plus a series in the Legendre polynomials $P_\ell(1 - 2x_0)$, with the term of order $\ell$ decaying as $e^{-\ell(\ell+1)t/2}$.

Several other results can be obtained by utilizing the above reduction to an equivalent two-variant problem together with combinatorial arguments. For example, the probability that exactly $k$ variants coexist at time $t$ may be expressed entirely in terms of the function $f$ and various combinatorial factors. Other quantities, such as the mean time to extinction, or the probability that a set of variants becomes extinct in a particular order, can be most easily found from the backward Fokker-Planck equation, which involves the adjoint of the forward operator. In some cases, one can carry out a reduction to an equivalent two-variant problem wherein such quantities as the mean time to fixation of a variant, averaged over those realizations of the dynamics in which it does become fixed, come into play. Note, however, that this reduction is not always possible. For instance, of the two examples given at the start of this paragraph, the former can be calculated from such a reduction, whereas the latter cannot. These subtleties are discussed in .

We turn now to the case where the production probabilities and grammar frequencies are not identical, but are related by ([lintrans]). Here, calculations analogous to those above are possible in those cases where $m_{vw} = m_v$. That is, in the interpretation where the $m_{vw}$ are mutation rates, we can obtain solutions when mutation rates depend only on the end product of the mutation. To calculate moments of $P$, it is most efficient to use the Fokker-Planck equation in its continuity-equation form, together with the explicit formula for the current ([current]) adapted to the single-speaker case, to find the equation satisfied by the moments, using the condition that the current vanishes on the boundary. Using eq. ([current]), the equation for the first moment, for instance, is
\[ \frac{d \langle x_v \rangle}{dt} = m_v - r \langle x_v \rangle \;, \quad \text{in which} \quad r = \sum_w m_w \;. \]
This has the solution
\[ \langle x_v(t) \rangle = \frac{m_v}{r} + \left[ x_v(0) - \frac{m_v}{r} \right] e^{-rt} \;, \]
a result that does not depend on the number of tokens exchanged per interaction, since $T$ affects only the stochastic part of the evolution. Higher moments have more complicated expressions, which can be found in . Once again, we can find the complete time-dependent solution of the Fokker-Planck equation using the same change of variable and separation of variables as before. To achieve this, one makes the appropriate replacement in eq. ([fpun1]), in which rescaled combinations of the mutation parameters are introduced. Note that it is necessary to reinstate the parameter $T$, since two timescales are now in operation: one corresponding to the probabilistic sampling effects, and the other to mutations.
In the ensuing separation of variables, we find that each factor in the eigenfunction analogous to ([eigey]) picks up a dependence on the variant $v$ through the mutation parameters. The eigenvalue spectrum also changes: the eigenvalues are again labeled by non-negative integers, and on this occasion we have a zero eigenvalue when all of these integers vanish. The corresponding eigenfunction is then the (unique) stationary state, which is given by eq. ([ss_stat]); this result first appeared for a special case in ref. . When $V = 2$, this is a beta distribution. It is peaked near the boundaries when the two exponents are both less than unity, as illustrated in figure [1sp_stationary20]. When the bias parameters are larger, the distribution is centrally peaked, and it is asymmetric when the two bias parameters are unequal, as can be seen in figure [1sp_stationary8060]. It is perhaps interesting to note that the probability current is zero everywhere in this steady state, i.e., that a detailed-balance criterion is satisfied. It seems likely that the more general situation, where $m_{vw}$ can depend both on the initial and final variants, will give rise to a steady state in which there is a circulation of probability. We believe a solution for this case has not yet been found. Finally, in this survey of the single-speaker model, we remark on the existence of a hybrid model in which some of the production biases are zero. Then those variants $v$ that have $m_v = 0$ will fall into disuse, and the subsequent dynamics will be the same as for the case of biased production among that subset of variants to which mutation is possible.

Having established the basic properties of the single-speaker model (moments, stationary distribution and fixation times), we now seek their counterparts in the rather more realistic situation where many different speakers are interacting. The large number of potential parameters specifying the interactions between speakers ($G_{ij}$ and $h_{ij}$) means that the complexity of the multiple-speaker model is much greater than that for a single speaker. However, some analytic results can be obtained by considering the simplest set of interactions between speakers, one where all the interaction probabilities and weightings are equal. That is, we set $G_{ij}$ to be the same for every pair and $h_{ij} = h$ for all $i \ne j$. This greatly simplifies the situation, as the interactions between speakers are now identical, with different speakers being distinguished only by their initial conditions.
From a linguistic point of view, it also seems natural to begin with all speakers interacting with the same probability, as might happen in a small village. We are also not considering social forces here, and so we assume that $h$ is constant. It can also be seen from the results for a single speaker that the majority of behaviors can be observed in systems with only two variants. Therefore we will not consider more than two variants for the remainder of this section. The Fokker-Planck equation ([driftfpe]) now takes the relatively simple form given as eq. ([flatfpe]), where we use $x = N^{-1} \sum_i x_i$, without a subscript, to denote the overall proportion of the first variant in the population; the bias parameters for the two variants are $m_1$ and $m_2$, and $r = m_1 + m_2$. Although we have not succeeded in solving this equation exactly, we have been able to perform a number of calculations and analyses, which we present below.

Differential equations for moments of $P$ can be found using the same methods as before. When production biases are present we find, by multiplying ([flatfpe]) by $x_i$, integrating, and using the fact that the probability current vanishes at the boundaries, that the means obey eq. ([flat_mean_eq]) (compare with eq. ([av_eqn])). Note that the sum over the other speakers in this expression can be written in terms of $x$, the mean frequency over the entire community of speakers. Using this substitution, and summing ([flat_mean_eq]) over all speakers, we find that
\[ \frac{d \langle x \rangle}{dt} = \frac{1}{2N} \left( m_1 - r \langle x \rangle \right) \;. \]
Subtracting this expression from ([flat_mean_eq]) gives
\[ \frac{d}{dt} \langle x_i - x \rangle = -\frac{\tilde{r}}{2N} \langle x_i - x \rangle \;, \]
where the rate $\tilde{r}$ combines $r$ with the interaction strength $h$. These equations are now decoupled, and their solution follows readily after implementing the initial condition and using the definitions ([ghflat]). We find that
\[ \langle x_i(t) \rangle = \frac{m_1}{r} + \left[ x_i(0) - x_0 \right] e^{-\tilde{r} t / 2N} + \left( x_0 - \frac{m_1}{r} \right) e^{-rt/2N} \;, \qquad \langle x(t) \rangle = \frac{m_1}{r} + \left( x_0 - \frac{m_1}{r} \right) e^{-rt/2N} \;, \]
where $x_0 = N^{-1} \sum_i x_i(0)$. Each speaker's mean thus converges to the community's mean at a rate controlled by $\tilde{r}$, and the latter relaxes to the fixed point of the bias transformation at a rate determined by $r$. In both cases, the decay time grows linearly with the number of speakers. This behavior is shown in figure [multi_mean_t], in which the time development of the mean of a particular speaker has been plotted for two different bias parameter choices. (Figure [multi_mean_t]: the time development of a single speaker's mean for two different choices of the mutation parameters; the overall population mean is shown as a dashed line for comparison.)

In the unbiased case we can repeat the same procedure to find the time dependence of the means. The result is simply ([mean_xi_t]) and ([mean_x_t]) with $m_1$ and $r$ set to zero, though one must be careful with the boundaries when deriving the equivalent of ([flat_mean_eq]). In particular, we see explicitly that the expected overall fraction of each variant in the population is conserved, just as in the single-speaker case: $\langle x(t) \rangle = x_0$. Although we could write time-dependent equations for higher moments, they are much more complicated. Instead we now turn to the stationary distribution. In the absence of production biases, the stationary distribution is one in which all speakers' grammars contain only one variant. This is similar to the situation for a single speaker, only we should note that (except in the special case $h = 0$, which is equivalent to the single-speaker case) equilibrium is reached only when _all_ the speakers have the same variant. Since $\langle x \rangle$ is conserved by the dynamics, we have once again that the weight under the delta-function peaks equals the initial mean frequency of the corresponding variants within the entire community.
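This last prediction is easy to test numerically. The sketch below (our own illustration; the parameter values and the numerical fixation threshold are arbitrary choices) runs the unbiased flat model to fixation repeatedly and estimates the probability that variant 1 takes over, which should equal the initial community mean $x_0$:

```python
import numpy as np

rng = np.random.default_rng(2)

# Unbiased flat ("village") model with two variants.
N, T, lam, h = 5, 5, 0.1, 0.2
x_init = np.array([0.9, 0.7, 0.5, 0.2, 0.2])    # community mean x0 = 0.5
runs, fixed = 200, 0
eps = 1e-6                                      # numerical fixation threshold

for _ in range(runs):
    x = x_init.copy()
    # Iterate until every grammar is (numerically) at the same boundary.
    while not (np.all(x < eps) or np.all(x > 1 - eps)):
        i, j = rng.choice(N, size=2, replace=False)
        fi = rng.binomial(T, x[i]) / T          # fraction of variant-1 tokens
        fj = rng.binomial(T, x[j]) / T
        x[i] = (x[i] + lam * (fi + h * fj)) / (1 + lam * (1 + h))
        x[j] = (x[j] + lam * (fj + h * fi)) / (1 + lam * (1 + h))
    fixed += int(x.mean() > 0.5)

print("estimated P(variant 1 fixes):", fixed / runs, "(prediction: x0 = 0.5)")
```

Because the community mean is exactly conserved in expectation under unbiased reproduction, this estimate should converge to $x_0$ regardless of the (arbitrary) value of $\lambda$ used here.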
In the next subsection we shall investigate the relaxation to this absorbing state of fixation. When production biases are present, we expect an extended stationary distribution, with a mean given by ([mean_x_t]) in the limit $t \to \infty$. The second moments can be calculated by multiplying eq. ([flatfpe]) by $x_i^2$ and by $x_i x_j$, $i \ne j$, integrating, and using the fact that the probability current vanishes at the boundaries, just as in the derivation of eq. ([flat_mean_eq]), except that in this case there is no time derivative. Using the symmetry between the speakers, one obtains a pair of linear equations for $\langle x_i^2 \rangle^*$ and $\langle x_i x_j \rangle^*$, where the asterisk denotes the steady state. Solving gives the expressions ([xi2_star]) and ([xixj_star]), both of which have the common denominator $(N-1) r \tilde{r} + h \left[ (N-1) r + \tilde{r} \right]$, together with a corresponding expression for the second moment of the overall proportion $x$ of the first variant, in which the sum over pairs of speakers now includes the case $i = j$.

When there are only two variants, the single-speaker stationary distribution ([ss_stat]) is a beta distribution. The marginal distribution for each speaker in the multiple-speaker model is modified by the presence of other speakers, but still the distribution is peaked near the boundaries when the bias is small, and changes to a centrally peaked distribution as the bias becomes stronger. We therefore propose that it is appropriate to approximate the stationary marginal distribution by a beta distribution with the mean and variance just calculated. That is,
\[ P^*(x_i) \propto x_i^{\alpha - 1} \left( 1 - x_i \right)^{\beta - 1} \;, \]
where
\[ \alpha = 2 T m_1 \left[ \frac{(N-1) r + h N}{(N-1) r + h} \right] \;, \qquad \beta = 2 T (r - m_1) \left[ \frac{(N-1) r + h N}{(N-1) r + h} \right] \;. \]
Unlike in ([ss_stat]), the parameters of the distribution now depend on $N$ and $h$ as well as on $T$. The marginal distribution is well fitted by this beta distribution for a broad range of $N$ and $h$. An example is shown in figure [multi_stat], where the distribution calculated from simulations is compared to an approximating beta distribution. When $m_1$ and $m_2$ are small, the transition from concave to convex shape occurs at approximately the same values of the mutation parameters as it does in the single-speaker case. As $N$ or $h$ becomes larger, the transition value becomes smaller. For sufficiently large $N$ or $h$, individual speakers will retain significant proportions of both variants even for very small (but still non-zero) bias parameter values; the distribution will be centrally peaked unless $m_1$ and $m_2$ are extremely small. This can be seen in figure [multi_mtrans], which shows the value of the mutation strength at which the transition from concave to convex takes place, for a range of $h$ and three different population sizes. This critical value, denoted by $m_c$, is the value of $m$ for which the parameters $\alpha$ and $\beta$ in eq. ([quasi]) pass through 1. (Figure [multi_stat]: the single-speaker marginal stationary distribution; bars are the distribution obtained from simulation, while the curve is the approximate beta distribution.) (Figure [multi_mtrans]: the critical mutation strength $m_c$ at which the stationary pdf changes from a concave to a convex distribution, as a function of $h$, for three population sizes; mutation is assumed symmetric, $m_1 = m_2 = m$.)

The stationary distribution of $x$ (the proportion of variant 1 throughout the population of speakers), on the other hand, does not always have a simple shape. Consider first the situation in which the mutation strength is fixed at some small value. When $h$ is small, some speakers can be at opposite ends of the interval.
For small $h$, this leads to a multiply peaked distribution, with each peak representing a certain fraction of the speakers being at one end. As $h$ gets larger, the tendency to be at the same end increases, and the central peaks dwindle, leaving the familiar double-peaked distribution. This only holds so long as the mutation strength remains below the critical value, as shown in figure [multi_mtrans]. For a sufficiently large mutation strength, or for larger $N$, the distribution becomes centrally peaked. When $m_1$ and $m_2$ are above the critical value, or if $N$ is sufficiently large that the central-limit effect becomes significant, the stationary distribution of $x$ is smooth and single-peaked for all values of $h$, becoming more bell-shaped the higher the value of $N$, in accordance with the central limit theorem. Here we find that both beta and Gaussian distributions calculated from the mean and second moment fit well; see figure [multi_stat2]. The value of $h$ only has a small effect, altering the width of the distribution slightly. (Figure [multi_stat2]: the average-speaker stationary distribution; bars are the distribution obtained from simulation, while the curve is the approximate beta distribution.)

In the calculations of sec. [ss_moments] we established that a single speaker's mean converges to the overall community's mean more slowly as the number of speakers is increased. When production biases are absent, we can also anticipate that the time to reach fixation increases with the number of speakers. This fact can be established analytically by re-casting the description of the system in terms of the coalescent, a technique which can be found in . We will not give the details of this calculation here, but merely state the result, which is derived in . The mean time to extinction of the second variant, which corresponds to fixation of the first, is
\[ \tau_1 = \frac{1 - x_0}{x_0} \left[ \frac{N(N-1)}{2h} F[x(0)] - T N^2 \ln(1 - x_0) \right] \;. \]
Note that the second term is of the same form as ([tauv]). The function $F[x(0)]$ depends on the initial distribution of speakers' grammars. For example, when all the speakers start with the same initial proportion $x_0$,
\[ F[x(0)] = \sum_{m=1}^{N-1} \frac{x_0^m}{m} - \frac{x_0}{N} \frac{1 - x_0^{N-1}}{1 - x_0} \;, \]
while for the case in which a fraction $x_0$ of the speakers start with the first variant only and the remainder start with the second variant only (so that the overall proportion is still the same), a similar expression holds. These are perhaps the two extreme possibilities for the distribution, and in fact the values of $F$ differ little. For large $N$ they are virtually the same, and both are well approximated by $F[x(0)] \approx -\ln(1 - x_0)$, which gives the much simpler expression for the mean time to extinction of the second variant
\[ \tau_1 \approx -\frac{1 - x_0}{x_0} \ln(1 - x_0) \left[ \frac{N(N-1)}{2h} + T N^2 \right] \]
that appeared in . Figure [multi_fixtime] shows the mean time to fixation at each boundary ($x = 0$ and $x = 1$) for a system with only 20 speakers. Already the times for inhomogeneous (solid lines) and homogeneous (dashed lines) initial conditions are very similar. Notice also the dramatic increase in the fixation time as $h$ becomes smaller. To calculate the mean time to fixation of _any_ variant, we take a weighted average of the time for each variant:
\[ \tau = -\left[ (1 - x_0) \ln(1 - x_0) + x_0 \ln x_0 \right] \left[ \frac{N(N-1)}{2h} + T N^2 \right] \;. \]
(Figure [multi_fixtime]: the mean time to fixation at each boundary as a function of the initial frequency, for a system with 20 speakers; the solid curves are for an inhomogeneous initial condition, the dashed curves for a homogeneous one; the lower and upper pairs of curves correspond to the two boundaries.)
An interesting feature of the fixation time is that it increases quadratically with the number of speakers, whereas the moments were seen to relax with time constants that grow only linearly with $N$. These results relate to the qualitative behavior observed in simulation. One notices that the initial condition relaxes quickly to a distribution of speakers' grammars that persists for a long time, until a fluctuation causes extinction of a variant. The nature of this distribution depends on the size of $h$. When it is very small, the attraction of speakers to the boundaries is stronger than that to the other speakers. Therefore some speakers dwell near the boundary $x_i = 0$, others near the boundary $x_i = 1$, and only a few are in the central part of the interval at any one time. Here it is evident that, for fixation to occur, one needs all speakers near one of the boundaries, thus explaining why the fixation time is so much longer than the initial relaxation. For larger $h$, the attraction between speakers overcomes the tendency to approach the boundaries, so the speakers tend to dwell in the interior of the interval. (Figure [meta_dist]: the distribution of speaker grammar values over a time series, for (top) an ensemble of realizations, none of which reach fixation during the period shown, and (bottom) the analytic beta-distribution approximation.) (Figure [num_unfixed]: the number of realizations remaining unfixed at time $t$, starting from 1000 realizations; the dashed curve is an exponential decay whose time constant is given by eq. ([tau_total]).)

We shall concentrate on the quasi-stationary distribution at small $h$. We obtain this using a mean-field argument, expected to be valid for large $N$. As usual when applying mean-field theory, we focus on one constituent, in this case speaker $i$, and replace the term involving all the other speakers in the Fokker-Planck equation by an average value. Thus eq. ([flatfpe]), in the unbiased case, becomes an equation in which speaker $i$ feels an effective bias towards the community mean. The solution to this equation is separable, so we write $P$ as a product over single-speaker distributions, and find the Fokker-Planck equation for a single speaker. After a rescaling of time, and dropping the index $i$, this is exactly the Fokker-Planck equation for a single speaker with bias and two variants, with the identification $m_1 = h \langle x \rangle$ and $m_2 = h (1 - \langle x \rangle)$. At large times we have from ([mean_res]) that $\langle x \rangle = x_0$. Therefore we expect that at large times the solution of the Fokker-Planck equation is identical to that of the single-speaker Fokker-Planck equation with bias, as long as the identification $m_1 = h x_0$ and $m_2 = h (1 - x_0)$ is made. In particular, we expect the marginal probability distribution for a single speaker to have a stationary form which is a beta function of the form ([ss_stat]), that is,
\[ \rho^*(x) \propto x^{a x_0 - 1} \left( 1 - x \right)^{a (1 - x_0) - 1} \;, \]
where the constant $a$ is proportional to $Th$. This distribution is shown in the lower half of figure [meta_dist] for the case of small $h$.
In the upper half of this figure is the equivalent distribution calculated from numerical simulations. It can be seen that the shape is maintained over time (the numerical result only includes realizations that do not fix in the time period specified), and that it is very similar to the beta approximation. (Figure [meta_timedev]: the distribution of speaker grammar values over a time series, for (top) an ensemble of realizations, now including those that fix during the time interval, and (bottom) the analytic beta-distribution approximation.)

If we assume that the rate at which any individual realization of the process becomes fixed is constant, the number of unfixed realizations exhibits an exponential decay with a time constant given by ([tau_total]). That this is the case is suggested by fig. [num_unfixed], in which the number of unfixed realizations as a function of time, obtained from Monte Carlo simulation, is compared with this prediction. This then suggests an expression for the full time-dependent distribution: the quasi-stationary distribution, weighted by $e^{-t/\tau}$, plus the fixed-state contributions, weighted by $1 - e^{-t/\tau}$. In figure [meta_timedev] we compare this approximation, shown in the lower half, with numerical results in the upper half (where now the numerical results include realizations that fix during the time interval).

In this paper we have cast a descriptive theory of language change, developed by one of us, into a mathematical form, specifically as a Markovian stochastic process. In the resulting model there is a set of $N$ speakers, each of whom has a grammar consisting of the perceived frequencies of the possible variants of a particular linguistic structure (a lingueme). In the initial phase of formulating the process, two speakers out of the $N$ are picked at every time step and allowed to communicate with each other. The utterances they produce modify the grammar of the other speaker, as well as their own, by a small amount. Another two speakers are then picked at the next time step and allowed to communicate. This process is repeated, with two speakers $i$ and $j$ being chosen at each time step with a probability $G_{ij}$. This matrix therefore prescribes the extent of the social interaction between all speakers. After many time steps the initial grammars of the speakers will have been modified in a way which depends on the choice of the model parameters.

The above formulation, that is, in terms of events which happen at regular time steps, is ideal for computer simulation. Of course, the model is stochastic, and so many independent runs have to be carried out, with the results obtained as averages over these runs. The randomness in the model enters in two ways: in the choice of speakers $i$ and $j$, and in the choice of the variants spoken by a speaker in a particular utterance. We showed that it is possible to take the time interval between steps to zero, and so derive a continuous-time description of the process. When this procedure is carried out, the model takes the form of a Fokker-Planck equation.
The whole approach to language change we have been investigating was conceived as an evolutionary process, with linguemes being analogous to genes in population genetics. So it is perhaps not surprising that the mathematical structures encountered when quantifying these theories are so similar. However, as stressed in sec. [popgen], there are important differences. The most direct correspondence with population genetics is when there is a single speaker and where the number of tokens $T$ is large. Furthermore, at each time step the update rule ([update_1]) applies in the linguistic model, whereas the equivalent rule in the population genetics case would be
\[ x_v(t + \delta t) = \frac{n_v}{T} \;, \]
corresponding to a completely new generation of individuals being created through random mating. Thus the genetic counterpart is formally equivalent to letting $\lambda \to \infty$, giving the previous grammar zero weight compared to the random element $n_v/T$; for the actual linguistic problem, $\lambda$ is small, and it is the previous grammar that has by far the greater weight. Taking $\lambda \to \infty$ and reinstating the factor of $T$ through a rescaling of the time does indeed give the population genetics result ([fwfpe]), with $T$ taking the role of $2K$. Although the limit $\lambda \to \infty$ gives the precise correspondence, the scaling choice ([rescale1])-([rescale3]) which we use also gives a mathematical, if not a precise conceptual, equivalence between the genetic and linguistic models.

Our analysis of the Fokker-Planck equation began by considering the case of only one speaker. This is far from trivial, and as we have seen is formally equivalent to standard models of population genetics. This has the advantage that many results from population genetics may be taken over essentially without change. Remarkably, the Fokker-Planck equation is in this case exactly soluble. This is due to the simple way in which the equation for $V$ variants is embedded in the $(V+1)$-variant equation. A similar simplification holds when calculating quantities such as the probability that a given number of variants coexist at time $t$, or the mean time to the extinction of a variant: they can be related by induction to the solution of the two-variant problem.

While the exact solution of the mathematically non-trivial single-speaker case gives considerable insights into the effects caused by the bias (or mutation) term ([lbias]) and the diffusion term ([lrep]), to understand the evolution of variants across a speech community it is clearly necessary to include the third term ([lint]) in the Fokker-Planck equation. In sec. [multispeaker] we carried out an analysis of the model with this term included, in the simplest situation where all speakers were equally likely to talk to all other speakers ($G_{ij}$ independent of $i$ and $j$) and where all speakers gave the same weight to utterances from other speakers ($h_{ij}$ independent of $i$ and $j$). Just as for the single-speaker case, there are distinctions between the situations where there is bias and where there is no bias. Whilst the presence of a bias (through the term ([lbias])) makes the model more complicated, its behavior is in fact simpler than if there were no bias: the distribution of the probability of a variant in the population tends to a stationary state which can be approximately characterized as a beta distribution.
As we have seen, when no bias is present, interactions between the speakers cause them all to converge relatively quickly to a common marginal distribution, which persists for a long time until a fluctuation causes the same variant to be fixed in all grammars. Under a mean-field-type approximation, valid in the limit of a large number of speakers, we established the form of this quasi-stationary distribution.

In this paper we have been primarily concerned with the mathematical formulation of the theory and with beginning a program of systematic investigation of the model. We believe that we have laid the foundations for this study with the analysis we have presented, but clearly there is much left to do. In order to make connection with observational data, we will need to consider more realistic social networks through which linguistic innovations may be propagated, i.e., non-trivial $G_{ij}$, as in fig. 1. Bearing in mind the proposed importance of the social forces described in sec. [framework], it will also be necessary to include speakers, or groups of speakers, which may have more influence on language change than others, i.e., non-trivial $h_{ij}$. Many of these cases will only be amenable to analysis through computer simulations, but it should be possible to obtain some analytical results with, for example, a simplified network structure.

However, it is clear that even without any further developments, some of our results can be generalized. For instance, by proceeding as in sec. [ss_moments], we can find the equations obeyed by the speakers' means for general $G_{ij}$ and $h_{ij}$, and therefore the rate of change of the overall mean $\langle x \rangle$, eq. ([gen_2]). It follows that $\langle x \rangle$ is conserved not only when $h_{ij}$ is constant, as demonstrated in sec. [ss_moments], but also when $h_{ij}$ is symmetric. In fact, the result can be further generalized. If we define a net "rate of flow" between each pair of speakers, then eq. ([gen_2]) may be written as a sum of these flows; so long as the net flow vanishes for every pair, which may be thought of as a kind of detailed-balance condition, the overall mean is conserved. Now, if the mean is conserved, then the probability of a particular variant becoming fixed is simply its initial value. Therefore, no matter what the network or social structure, if this detailed-balance condition holds, the structure will have no effect on the probability of fixation. It is clear, however, that in general the further development of the model will necessitate the choice of a particular network and social structure. As an example of this, we have recently begun to analyze the model in the context of the formation of the New Zealand English dialect, for which a reasonable amount of data is available.
In particular, these give some information about the frequencies with which different linguistic variables were used by the first generations of native New Zealand English speakers, and about their ultimate fate in the formation of today's conventional dialect. Predictions from our model relating to extinction probabilities and timescales will play an important part in better understanding these data. More widely, we hope that the work presented here will underpin many subsequent applications and form a basis for a quantitative theory of language change.

RAB acknowledges both an EPSRC Fellowship (Grant GR/R44768), under which this work was started, and a Scottish Executive/Royal Society of Edinburgh Fellowship, under which it has been continued. GJB thanks the NZ Tertiary Education Commission for a Top Achiever Doctoral Scholarship.

In this appendix we derive the Fokker-Planck equation ([driftfpe]). The method is standard, and involves the calculation of the so-called jump moments for the process under consideration. Since we have already sketched some of the background in Sec. [continuous] for the single-speaker case, let us begin with this simpler version of the model. Our starting point is the Kramers-Moyal expansion ([km_1]); here the dots represent higher-order terms (which will turn out not to contribute), and the functions appearing in it are the jump moments ([jump_mom_def1])-([jump_mom_def2]), conditioned on the current state. The Kramers-Moyal expansion itself is derived from the assumption that the stochastic process is Markov, together with a continuous-time approximation.

In the single-speaker case we have already established a form for the change in the grammar variable (see Eq. ([delta_x_1])), and since the mean of the multinomial distribution ([multinomial]) is known, a manipulation as in Eq. ([bias_deter]) and a rescaling as in Eqs. ([rescale1]) and ([rescale2]) lead to the mean change, where the dots indicate higher orders in the small parameters. The first jump moment then follows from Eq. ([jump_mom_def1]). To find the second jump moment, we need to consider the average of the product of two increments, but from Eq. ([delta_x_1]) we see that this is already of the required (second) order in the small parameters. Therefore any terms in the bias matrix which vanish in this limit do not contribute at this order. Since all off-diagonal entries, and all diagonal entries apart from 1, are of this form, the bias matrix may be replaced by the unit matrix everywhere in this second-order term, i.e., any bias can be neglected. Using Eq. ([delta_x_1]) and Eq. ([navg_1]) with the corresponding substitution, we obtain the required expression. Now, the variance of the multinomial distribution is given by
\begin{equation*}
\langle \Delta n_v \, \Delta n_w \rangle =
\begin{cases}
T x_v' \left( 1 - x_v' \right) & v = w, \\
-T x_v' x_w' & v \ne w,
\end{cases}
\end{equation*}
and so, once again replacing the primed (biased) frequencies by the unprimed ones and using the definition of the jump moment ([jump_mom_def2]), we obtain Eq. ([second_jump_mom_2]). All higher jump moments vanish, since from Eq. ([delta_x_1]) we see that the third and higher moments of the increment are of higher order still. Therefore the Kramers-Moyal expansion is truncated at second order and we obtain the Fokker-Planck equation.

The derivation in the case of the full model with several speakers follows similar lines. Here the grammar variable is higher-dimensional, with components carrying both a speaker label and a variant label; it is sometimes convenient to replace the two labels by a single one. Then Eqs. ([km_1])-([jump_mom_def2]) in the derivation of the one-speaker case can be taken over, with the single-speaker quantities replaced by their multi-speaker counterparts.
In the full utterance selection model, there is randomness both in the choice of speakers that interact in the interval following a given time and in the tokens they produce. The jump moments are derived from averages of products of the increments of the grammar variables. From ([update]) we find the analog of the one-speaker result ([delta_x_1]) for a speaker i, given that speakers i and j have already been chosen as the interacting pair in the time step. The mean change in the grammar variable can then be determined from the mean of the multinomial distribution ([multinomial]):
\begin{equation*}
\label{xmean}
\langle \Delta x_{iv} \rangle
= \lambda \left[ \sum_{w \ne v} \left( m_{vw} x_{iw} - m_{wv} x_{iv} \right)
+ h_{ij} \left( x_{jv} - x_{iv} \right) \right]
+ \operatorname{O}(\lambda h m, \lambda^2 h, \lambda^2 m),
\end{equation*}
where ([lintrans]) has been used. Similarly, from the variance of the multinomial distribution,
\begin{equation*}
\label{xvar}
\langle \Delta n_{iv} \, \Delta n_{jw} \rangle =
\begin{cases}
T x_{iv}' \left( 1 - x_{iv}' \right) & v = w,\ i = j, \\
-T x_{iv}' x_{iw}' & v \ne w,\ i = j, \\
0 & i \ne j,
\end{cases}
\end{equation*}
one finds the corresponding second moment of the increment, which is nonzero only for the speakers involved in the interaction.

In order to have both a deterministic and a stochastic part to the Fokker-Planck equation, we need both jump moments to be proportional to the same small time increment in the limit. One can verify that the only way this can be arranged is if one rescales the variables as in Eqs. ([rescale1])-([rescale3]), a choice which was motivated in more detail in Section [continuous]. Then only the leading terms in Eqs. ([xmean]) and ([xvar]) remain when one takes the limit in Eqs. ([jump_mom_def1]) and ([jump_mom_def2]). Furthermore, all higher jump moments vanish, as also discussed in Section [continuous], and the sum in Eq. ([km_1]) terminates at the second moment. After substituting the jump moments into ([km_1]) and averaging over all possible pairs of speakers, weighted by the interaction probabilities, one finally arrives at the Fokker-Planck equation given in the main text, Eq. ([driftfpe]).
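The multinomial mean and covariance formulas that drive this derivation are easy to verify empirically. The following short Python check (our own illustration, independent of the model) confirms that the sample mean approaches T p_v and the sample covariance approaches T (p_v delta_vw - p_v p_w).

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

T, p = 30, np.array([0.5, 0.3, 0.2])
samples = rng.multinomial(T, p, size=200_000)

print(samples.mean(axis=0), T * p)          # empirical vs exact means
emp_cov = np.cov(samples, rowvar=False)     # 3x3 sample covariance
theory = T * (np.diag(p) - np.outer(p, p))  # multinomial covariance
print(np.abs(emp_cov - theory).max())       # small, at sampling-error level
\end{verbatim}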
We present a mathematical formulation of a theory of language change. The theory is evolutionary in nature and has close analogies with theories of population genetics. The mathematical structure we construct similarly has correspondences with the Fisher-Wright model of population genetics, but there are significant differences. The continuous-time formulation of the model is expressed in terms of a Fokker-Planck equation. This equation is exactly soluble in the case of a single speaker, and can be investigated analytically in the case of multiple speakers who communicate equally with all other speakers and give their utterances equal weight. Whilst the stationary properties of this system have much in common with the single-speaker case, the time-dependent properties are richer. In the particular case where linguistic forms can become extinct, we find that the presence of many speakers causes a two-stage relaxation: the first stage is a relatively rapid convergence to a common marginal distribution, which then persists for a long time, ultimate extinction being due to rare fluctuations.
The last decade has witnessed tremendous interest in the investigation of collective phenomena in systems of multiple autonomous agents, due to broad applications in various fields of science ranging from biology to physics, engineering, and ecology, to name a few. Concerning multi-agent systems and distributed design, the central idea is the strong interaction of individual dynamics, communication topologies, and distributed controls. The problem is generally very challenging due to the complex dynamics and hierarchical structures of the systems. However, efforts have started with relatively simple problems such as consensus, formation, and rendezvous, and many significant results have been obtained.

Leader-follower coordination is an important multi-agent control problem, where the leader may be a real leader (such as a target, an evader, or a predefined position) or a virtual leader (such as a reference trajectory or a specified path). In most theoretical work, a single leader with exact measurement is considered for each agent to follow. However, in practical situations, multiple leaders and target sets with unmeasurable variables must be considered to achieve the desired collective behaviors. In one study, a simple model was given to simulate fish foraging and to demonstrate leader effectiveness when the leaders (or informed agents) guide a school of fish to a particular food region. Elsewhere, a straight-line formation of a group of agents was discussed, where all the agents converge to the line segment specified by two edge leaders. A containment control scheme was proposed with fixed undirected interaction, which aimed at driving a group of agents to a given target location and making their positions contained in the polytope spanned by multiple stationary or moving leaders during their motion. Region-following formation control was constructed, where all the robots are driven to, and then stay within, a moving target region as a group. Moreover, different dynamic connectivity conditions were obtained to guarantee that the multiple leaders (or informed agents) aggregate the whole multi-agent group within a convex target set. Additionally, control strategies were demonstrated and analyzed to drive a collection of mobile agents to stationary or moving leaders with connectivity maintenance and collision avoidance under fixed and switching directed network topologies. As a matter of fact, multiple leaders are usually assigned to increase control effectiveness, enhance communication/sensing range, improve reliability, and optimize energy cost in multi-agent coordination.

Connectivity plays a key role in the coordination of multi-agent networks, as it is related to the influence of agents and the controllability of the network. Due to the mobility of the agents, inter-agent topologies usually keep changing in practice. Therefore, various connectivity conditions have been proposed to describe frequently switching topologies in order to deal with multi-agent consensus or flocking. In fact, "joint connection" or similar concepts are important in the analysis of stability and convergence to guarantee multi-agent coordination with time-dependent topology. Uniformly jointly-connected conditions have been employed for different problems.
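As a toy illustration of why joint connectivity, rather than instantaneous connectivity, is the natural requirement, the Python sketch below runs a discrete-time consensus iteration in which only one directed arc of a ring is active at any step. The graph at each instant is disconnected, yet the union over every window of n steps is strongly connected, and the states still agree in the limit. The setup is our own minimal example, not a model from the text.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

n, eps, steps = 5, 0.5, 600
x = rng.normal(size=n)          # scalar states of the n agents
for k in range(steps):
    i = k % n                   # the only active arc at step k: i -> j
    j = (i + 1) % n
    x[j] += eps * (x[i] - x[j]) # node j averages toward its neighbor
print(x.max() - x.min())        # spread decays toward zero (consensus)
\end{verbatim}

Each update is a convex combination, so the state interval can only shrink; strong connectivity of the union graph over every window is what forces it to shrink to a point.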
Early work studied distributed asynchronous iterations, and later work proved the consensus of a simplified Vicsek model. Furthermore, subsequent studies investigated jointly-connected coordination for second-order agent dynamics via different approaches, and also treated nonlinear continuous-time agent dynamics with jointly-connected interaction graphs. Flocking of a multi-agent system with state-dependent topology was also studied with non-smooth analysis. What is more, a joint connection condition more general than the uniform joint connection assumption was discussed by Moreau in order to achieve consensus for discrete-time agents. This connectivity concept was then extended in the distributed control analysis for target set convergence.

It is well known that input-to-state stability (ISS) is an important and very useful tool in the study of the stability and stabilization of control systems. Variants such as integral input-to-state stability (iISS) have been discussed, and a few works have studied set input-to-state stability (SISS) with respect to fixed sets. On the other hand, ISS and related ideas can facilitate control analysis and synthesis with interconnection conditions such as small gains (see, for example, the small-gain literature). ISS has recently been applied to the stability study of a group of interconnected nonlinear systems. Moreover, an extended concept called leader-to-formation stability was introduced to investigate the stability of the formation of a group of agents in light of ISS properties. In fact, the application of ISS in multi-agent systems is promising.

The contributions of the paper include:

* We propose generalized set input-to-state stability (SISS) and set integral input-to-state stability (SiISS) to handle moving sets with time-varying shapes for switching multi-agent networks.
* We study multi-leader coordination from the ISS viewpoint. With the help of SISS and SiISS, we give explicit expressions to estimate the convergence rate and tracking error of a group of mobile agents that try to enter the convex hull determined by multiple leaders.
* We show relationships between connectivity and set tracking of the multi-agent system, and find that various jointly-connected conditions usually provide necessary and/or sufficient conditions for distributed coordination.
* We develop a method to study SISS and SiISS for a moving set and switching topology using graph theory and non-smooth analysis. In fact, we cannot take the standard approach to conventional ISS or iISS via equivalent ISS-Lyapunov functions. In addition, the classic algebraic methods based on the Laplacian may fail due to disturbances in the nonlinear agent dynamics, uncertain leader velocities, or the moving multi-leader set.

This paper is organized as follows. Section 2 presents the preliminaries and the problem formulation, while Section 3 proposes results for the convergence estimation. Section 4 mainly reports a necessary and sufficient condition for SISS with respect to the moving multi-leader set with switching inter-agent topologies, and then presents a set-tracking case based on SISS. Correspondingly, Section 5 obtains necessary and sufficient conditions for SiISS and then shows set-tracking results related to SiISS. Finally, Section 6 gives concluding remarks.

In this section, we introduce some preliminary knowledge for the following discussion.
First we introduce some basic concepts from graph theory (see the standard references for details). A directed graph (digraph) consists of a finite set of nodes and an arc set, where an arc is an ordered pair of distinct nodes; the arc leaves the first node and enters the second. A _walk_ in a digraph is an alternating sequence of nodes and arcs. A walk is called a _path_ if the nodes of the walk are distinct; a node is called _reachable_ from another if there is a path from the latter to the former. If the nodes of a walk are distinct except that the first and last coincide, the walk is called a (directed) _cycle_. A digraph without cycles is said to be _acyclic_. The union of two digraphs with the same node set is the digraph on that node set whose arc set is the union of the two arc sets. Furthermore, a time-varying digraph is defined via a piecewise constant function selecting, at each time, one element of the finite set of all possible digraphs on the given node set. Moreover, the joint digraph over a time interval is the union of the digraphs active during that interval.

Next, we recall some notation from convex analysis. A set is said to be convex if it contains the line segment between any two of its points. For any set, the intersection of all convex sets containing it is called its _convex hull_; in particular, the convex hull of a finite set of points is a polytope. Given a closed convex subset, the distance from a point to the set is measured in the Euclidean norm, and one can associate to any point a unique element of the set attaining this distance; the corresponding map is called the projector onto the set. The squared distance function is continuously differentiable, with gradient determined by the projection.

The following lemma, obtained in earlier work, is useful in what follows.

[lem6] Suppose the set under consideration is convex. Then the inner-product inequality characterizing the projection holds.

Now let two real numbers be given, consider a real-valued function on the corresponding interval, and fix a point of that interval. The upper Dini derivative of the function at that point is defined in the usual way as the limit superior of forward difference quotients. It is well known that, when the function is continuous on the interval, it is non-increasing if and only if its upper Dini derivative is non-positive at every point (more details can be found in the references).

The next result is given for the calculation of Dini derivatives.

[lem3] Consider finitely many continuously differentiable functions and their pointwise maximum. If, at a given time, we take the set of indices where the maximum is attained, then the upper Dini derivative of the maximum equals the largest of the time derivatives over this index set.

In this paper, we consider set coordination problems for a multi-agent system consisting of follower agents and leader agents (see Fig. [fig0]). The follower set and the leader set are indexed separately; in what follows, we will identify a follower or a leader with its index (namely, agent or leader) if there is no confusion.
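The analysis to come revolves around the distance from an agent to the polytope spanned by the leaders. The following Python sketch shows one way such a distance (and the associated projection) can be computed, using a Frank-Wolfe iteration over the convex hull of finitely many points; this is our own illustration, not an algorithm from the paper.

\begin{verbatim}
import numpy as np

def dist_to_hull(x, Y, iters=500):
    # Distance from x to conv{rows of Y} and the projection attaining it.
    # Frank-Wolfe: linearize 0.5*||z - x||^2 at the iterate z and move
    # toward the best vertex; over a polytope the linearized subproblem
    # reduces to a search over the vertices.
    z = Y[0].astype(float)
    for t in range(1, iters + 1):
        g = z - x                         # gradient at the current iterate
        s = Y[np.argmin(Y @ g)]           # vertex minimizing <g, s>
        z += (2.0 / (t + 2.0)) * (s - z)  # standard diminishing step size
    return np.linalg.norm(z - x), z

Y = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # "leader" positions
d, proj = dist_to_hull(np.array([1.0, 1.0]), Y)
print(d, proj)   # ~0.707..., projection near (0.5, 0.5)
\end{verbatim}

With this computational picture of the set distance in mind, we then describe the communication in the multi-agent network.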
At a given time, if one agent can "see" another, there is an arc (marking the information flow) from the seen agent to the seeing one, and the seen agent is said to be a _neighbor_ of the seeing agent. Moreover, if an agent "sees" a leader at that time, there is an arc leaving the leader and entering the agent, and that leader is said to be a _leader_ of the agent. For each agent we consider the set of its neighbors and the set of its leaders (that is, the leaders which are connected to it). Note that, since the leaders are not influenced by the followers, there is no arc leaving a follower and entering a leader. Define the whole agent set as the union of leaders and followers. Denote the set of all possible interconnection topologies, and let a piecewise constant switching signal describe the switchings between the topologies. Thus, the interaction topology of the considered multi-agent network is described by a time-varying directed graph. Correspondingly, we also consider the communication graph among the follower agents alone and, for each agent, the sets of its neighbors and connected leaders within the current graph. As usual in the literature, an assumption is made on the switching signal.

*Assumption 1* (dwell time) There is a lower bound between any two consecutive switching instants.

We give definitions for the connectivity of a multi-agent system with multiple leaders.

(i) The interaction graph is said to be _L-connected_ at a given time if, for any follower, there exists a leader such that there is a path from that leader to the follower at that time. Moreover, it is said to be _jointly L-connected_ in a time interval if the union graph over the interval is L-connected;

(ii) The system is said to be _jointly L-connected_ (JLC) if the union graph over every unbounded time interval is L-connected;

(iii) The system is said to be _uniformly jointly L-connected_ (UJLC) if there exists a window length such that the union graph over every window of that length is L-connected.

Note that L-connectedness describes the capacity of the follower agents to get information from the moving multi-leader set through the information flow, and an L-connected graph may not be connected, since the subgraph on the leaders alone may not be connected. In fact, if we consider the group of leaders as one virtual node, then L-connectedness becomes quasi-strong connectedness of the resulting digraph.

The state of each agent and of each leader is a vector, and the follower and leader states are collected into stacked vectors. Continuous weight functions are attached to the arcs between followers and to the arcs from leaders to followers. We then present the multi-agent model for the active leaders and the (follower) agents: the leader dynamics are driven by control inputs which are continuous in the state for fixed time and piecewise continuous in time for fixed state, while each follower moves according to the weighted sum of the relative positions of its neighbors and its connected leaders, together with a continuous function describing the disturbances in the communication links and in the individual dynamics of the follower.

Another assumption is made on the weight functions.

*Assumption 2* (bounded weights) The weight functions are bounded below and above by positive constants.

In ([5]), the weights may not be constant. Instead, because of complex communication and environment uncertainties, they may depend on time, space, or relative measurements (see the nonlinear models given in the literature). Some models studied previously can be written in the form of ([5]), while other nonlinear multi-agent models may be transformed into this class of multi-agent systems in some situations.
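To fix ideas, here is an Euler-discretized Python sketch of dynamics of the form ([5]) in the simplest all-to-all case: followers average the relative positions of the other followers and of all leaders, while the leaders drift with a common velocity. All names and parameter values are our own choices for illustration.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)

n_f, dt, steps = 6, 0.01, 3000
x = rng.normal(scale=3.0, size=(n_f, 2))            # follower positions
y = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])  # leader positions
v_l = np.array([0.3, 0.1])                          # common leader velocity

for _ in range(steps):
    y += dt * v_l                         # leaders move together
    pull_f = x.sum(axis=0) - n_f * x      # sum over j of (x_j - x_i)
    pull_l = y.sum(axis=0) - len(y) * x   # sum over k of (y_k - x_i)
    x += dt * (pull_f + pull_l)           # unit weights, no disturbance

# followers end up near (and lag slightly behind) the moving hull
print(np.linalg.norm(x - y.mean(axis=0), axis=1))
\end{verbatim}

The residual offset visible here is precisely the kind of tracking error that the SISS estimates below quantify: the leaders' unmeasured velocity acts as a bounded input, and the followers converge to a neighborhood of the moving polytope rather than to the polytope itself.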
here and are written in a general form simply for convenience , and global information is not required in our study .for example , and can depend only on the state of , time and , which is certainly a special form of or .in other words , the control laws in specific decentralized forms are still decentralized . without loss of generality, we assume the initial time , and the initial condition and .denote the time - varying polytope formed by the active leaders and let be the maximal distance for the followers away from the moving multi - leader set .the following definition is to describe the convergence to the moving convex set .[ def - set ] the ( global ) _ set tracking _ ( st ) with respect to for system ( [ 5 ] ) is achieved if for any initial condition and . for a stationary convexset , set tracking can be reduced to set stability and attractivity , and methods to analyze were proposed in some existing works .in fact , discussed the convergence to the static convex set determined by stationary leaders with well designed control protocols . moreover , if we assume that the target set is exactly the polytope with the positions of the stationary leaders ( or informed agents ) as its vertices , then the convergence to the polytope , treated as a target set , can be obtained straightforwardly based on the results and limit - set - based methods given in .input - to - state stability has been widely used in the stability analysis and set input - to - state stability ( siss ) for a fixed set has been studied in . to study the multi - leader set tracking in a broad sense, we introduce a generalized siss with respect to , a moving set with a time - varying shape , for multi - agent systems with switching interaction topologies .denote , , , and with ( ) .a function is said to be a -class function if it is continuous , strictly increasing , and .moreover , a function is a -class function if is of class for each fixed and decreases to as for each fixed .[ def - siss ] system ( [ 5 ] ) is said to be globally generalized set input - to - state stable ( siss ) with respect to with input if there exist a -function and a -function such that for and any initial conditions and .integral - input - to - state stability ( iiss ) was introduced as an integral variant of iss , which has been proved to be strictly weaker than iss .we also introduce a definition of ( generalized ) set integral - input - to - state stability ( siiss ) with respect to a time - varying and moving set .[ def - siiss ] system ( [ 5 ] ) is ( globally ) generalized set integral - input - to - state stable ( siiss ) with respect to if there exist a -function and a -function such that for any initial conditions and .the conventional siss was given for a fixed set ( ) , while the generalized siss or siiss is proposed with respect to a time - varying set . in the following , we still use siss or siiss instead of generalized siss or siiss for simplicity .similar to the study of conventional iss , local siss and siiss can be defined . 
in this paper , we focus on the global siss and siiss .in fact , it is rather easy to extend research ideas of global set tracking to study local cases .for the set tracking with respect to a moving multi - leader set of system ( [ 5 ] ) , we have to deal with the estimation of when is a time - varying convex set , where is a trajectory of the moving leaders in system ( [ 5 ] ) with initial condition .define obviously , the following result is given to estimate the changes of the distance between an agent and the convex hull spanned by the leaders .[ lem7 ] for any and , where for with .define , and then also , similar analysis leads to ) and ( [ 102 ] ) lead to the conclusion . for simplicity , define and which is locally lipschitz but may not be continuously differentiable .clearly , and . then , we get the following lemma to estimate the set convergence .[ lem10 ] . proof : it is not hard to see that then , according to ( [ r10 ] ) , we obtain furthermore , according to lemma [ lem7 ] , and then it is easy to find that therefore , moreover , let denote the set containing all the agents that reach the maximal distance away from at time .then , for any , according to ( [ r9 ] ) , one has for any .furthermore , in light of lemma [ lem6 ] , since , for any .therefore , the conclusion follows since \nonumber\\ & \leq & 2(r(t)+\max_{i\in\mathcal { v}_f}|w_i(t)|)\max_{i \in \mathcal{i}(t ) } |x_i(t)|_{\mathcal { l}(y(t))}\nonumber\\ & = & 2q(t)\sqrt{\psi(t)}\nonumber\end{aligned}\ ] ] according to lemma [ lem3 ] .in this section , we study the siss with respect to the convex set spanned by the moving leaders in an important connectivity case , uniformly jointly l - connected ( ujlc ) topology . without loss of generality , we will assume in the sequel .suppose in this section. then we have the main result on siss . [ thm5 ]system ( [ 5 ] ) is siss with respect to and with as the input if and only if is ujlc .the main difficulties to obtain the siss inequalities in the ujlc case are how to estimate the convergence rate in a time interval by pasting " time subintervals together and how to estimate the impact of the input to the agent motion . to prove theorem [ thm5 ] , we first present two lemmas to estimate the distance error in the two standard cases during ] and a constant such that .\ ] ] proof : see appendix a.1 . [ lem9 ] if there is an arc leaving from entering in for all , and for constants and , then there exist a continuous function \mapsto(0,1] ] . * and are strictly increasing during |x(t)|_{\omega}\leq respect to with as the input if is ujlc . sometimes , the velocities of the moving leaders and uncertainties in agent dynamics ( maybe because of the online estimation ) may vanish . to be strict , consider the following condition clearly , ( [ c2 ] ) yields that for any , there is such that , where is the truncated part of defined on .suppose ( [ c2 ] ) holds and is ujlc .based on theorem [ thm5 ] , for any , there is such that hence , the set tracking for system ( [ 5 ] ) with respect to set is achieved easily . 
on the other hand ,similar to the proof of theorem [ thm5 ] , the necessity of the global set tracking for system ( [ 5 ] ) with condition ( [ c2 ] ) can also be simply proved by counterexamples since may be large and the distance error may accumulate to a very large value over a sufficiently long period of time .therefore , we have the following result .[ cor2 ] the global set tracking with respect to is achieved for all satisfying ( [ c2 ] ) if and only if is ujlc .we are now in a position to prove theorem [ thm5 ] : if " part : denote with . then we estimate at subintervals ] .since , .\ ] ] furthermore , in , there must be a follower , such that there exists an arc for some , or an arc in .there are two cases : * if for , one also has .\ ] ] * if for .according to ( [ g2 ] ) and lemma [ lem10 ] , one has thus , ( [ 20 ] ) will lead to .\ ] ] + then , by lemma [ lem9 ] , if we take , then .\ ] ] because , .\ ] ] repeating the above procedure yields and such that , there exists satisfying \|z\|_{\infty},\;\jmath = i_1,\dots , i_j\ ] ] for ] with such that .\ ] ] proof : according to lemma [ lem10 ] , for any . since there is an arc with in for , based on lemma [ lem6 ] , when , therefore , \psi_i(t)+2[\sqrt{2}|z(t)| + ( n-1)a^\ast(\sqrt{\psi(t_0)}+\int_{t_0}^{t}\sqrt{2}|z(s)|ds)]\sqrt{\psi_i(t)},\ ] ] or equivalently , \ ] ] where for . thus , with ] with such that .\ ] ] [ lem13 ] given a constant , if there is with and , then there is a strictly increasing function \mapsto [ \varepsilon_0,1) ] by virtue of the analysis given for lemma [ lem10 ] . then lemma [ lem14 ] can be obtained straightforwardly .now we are ready to prove theorems [ thmr1 ] , [ thm10 ] , and [ thm12 ] .* proof of theorem [ thmr1 ] * : denote with defined in lemma [ lem5 ] . if is l - connected , there has to be an arc for leaving from a leader entering and this arc is kept there for a period of at least . invoking lemmas [ lem11 ] and [ lem13 ] , ,\ ] ] where .furthermore , when , there must be a follower such that there exists an arc for some , or an arc when . according to lemmas [ lem12 ] and [ lem13 ] , ,\ ] ] where with .repeating the above procedure yields .\ ] ] for , where moreover , the nodes of are distinct .denote from ( [ r1 ] ) .then we obtain it follows immediately that based on lemma [ lem10 ] and ( [ g2 ] ) , we have hence , ( [ r2 ] ) holds with since , which completes the proof. * proof of theorem [ thm10 ] * : the only if " part is quite obvious , so we focus on the if " part .since is jlc , there exists a sequence of time instants such that and is l - connected for .moreover , each arc in will be kept for at least the dwell time during the time interval . then we estimate during ] , where .moreover , according to lemma [ lem12 ] , |z(s)|ds\nonumber\\ & \leq & \varphi_{\phi_1}(\tau_d)|x(t_1)|_{\mathcal { l}(y(t_1 ) ) } + 8\sqrt{2 } \int_{t_1}^{t_2+\tau_d}|z(s)|ds\end{aligned}\ ] ] for . because , ( [ 41 ] ) and ( [ 42 ] )lead to where .next , define , and similarly , from lemma [ lem14 ] , by one has repeating the process gives for until for some such that hence according to lemma [ lem14 ] , we obtain it is obvious to see that and .therefore , denote , then for , thus , similar to the proof of theorem [ thmr1 ] , we also have when , and then it is obvious to see that ( [ r4 ] ) leads to theorem [ thm10 ] immediately. 
* proof of theorem [ thm12 ] *: we also focus on the if " part since the only if " part is quite obvious .because is jlc , there is an infinite sequence in the form of ( [ t1 ] ) with ( [ ti ] ) such that is l - connected for .then , for any , there is such that there is an arc leaving from entering in .hence , recalling lemma [ lem11 ] , with a constant .according to lemma [ claim ] , for any , we have \ ] ] again by lemmas [ lem12 ] and [ claim ] , for any , ,\ ] ] where .similarly , with , ,\ ] ] for any , which leads to similar to the proof of theorem [ thm10 ] , siiss can be obtained. paper addressed multi - agent set tracking problems with multiple leaders and switching communication topologies . at first, the equivalence between ujlc and the siss of a group of uncertain agents with respect to a moving multi - leader set was shown .then it was shown that ujlc is a sufficient condition for siiss of the multi - agent system with disturbances in agent dynamics and unmeasurable velocities in the dynamics of the leaders . moreover ,when communication topologies are either bidirectional or acyclic , jlc is a necessary and sufficient condition for siiss .also , set tracking was achieved in special cases with the help of siss and siiss . multiple leaders , in some practical cases , can provide an effective way to overcome the difficulties and constraints in the distributed design . on the other hand ,iss - based tools were proved to be very powerful in the control synthesis .therefore , the study of multiple active leaders and related iss tools deserves more attention .* a.1 proof of lemma [ lem8 ] * then , by lemma [ lem6 ] , if for , then on the other hand , if , from lemma [ lem6 ] and ( [ 10 ] ) , .therefore , with ( [ 13 ] ) , ( [ r12 ] ) and ( [ 11 ] ) , it follows that \sqrt{\psi_i(t)},\ ] ] where , or equivalently , \ ] ] for . as a result , where ] . therefore , based on ( [14 ] ) and ( [ 15 ] ) , ] and .* when : denote . by ( [ 28 ] ) , similarly , we have \nonumber\\ & \leq & e^{-(n-1)a^\ast(t-(t_0+\tau_d))}[\hat{\xi}^\ast \sqrt{\psi(t_0)}+\gamma_0]\nonumber\\ & & + ( 1-e^{-(n-1)a^\ast(t-(t_0+\tau_d))})[\sqrt{\psi(t_0)}+\sqrt{2}\|z\|_{\infty}\frac{1+(n-1)a^\ast t_\ast}{(n-1)a^\ast}]\nonumber\\&\leq & \tilde{\xi } ( t - t_0 ) \sqrt{\psi(t_0 ) } + \gamma_2\|z\|_{\infty}+d_0 , \label{24}\end{aligned}\ ] ] where and $ ] , because f. xiao and l. wang , state consensus for multi - agent systems with swtiching topologies and time - varying delays , _ int .j. control _, 79 , 10 , 1277 - 1284 , 2006 .a. jadbabaie , j. lin , and a. s. morse. coordination of groups of mobile agents using nearest neighbor rules .48 , no . 6 , 988 - 1001 , 2003 . y. cao and w. ren , containment control with multiple stationary or dynamic leaders under a directed interaction graph , _ proc . of joint 48th ieee conf .decision & control/28th chinese control conference _ , shanghai , china , dec .2009 , pp .3014 - 3019 .l. scardovi , m. arcak , and e. sontag , synchronization of interconnected systems with an input - output approach - i : main results , _ proc . of joint 48th ieee conf .decision & control/28th chinese control conference _ , shanghai , china , dec .2009 , pp .609 - 614 .e. sontag and y. lin , stabilization with respect to noncompact sets : lyapunov characterizations and effect of bounded inputs , _ proc .nonlinear control systems design symp ._ , bordeaus , ifac publications , 9 - 14 , june 1992 ( m.fliess , ed . )
in this paper , we investigate distributed multi - agent tracking of a convex set specified by multiple moving leaders with unmeasurable velocities . various jointly - connected interaction topologies of the follower agents with uncertainties are considered in the study of set tracking . based on the connectivity of the time - varying multi - agent system , necessary and sufficient conditions are obtained for set input - to - state stability and set integral input - to - state stability for a nonlinear neighbor - based coordination rule with switching directed topologies . conditions for asymptotic set tracking are also proposed with respect to the polytope spanned by the leaders . * keywords . * multi - agent systems , multiple leaders , set input - to - state stability ( siss ) , set integral input - to - state stability ( siiss ) , set tracking .
This paper deals with a system composed of several populations and of individuals, or agents. The former are described through their _macroscopic densities_, the latter through _discrete points_. In analytic terms, this leads to a system of conservation laws coupled with ordinary differential equations. From a modeling point of view, it is natural to encompass interactions that are _nonlocal_, both within the populations and between each population and each individual agent.

Throughout, time and the space coordinate are the independent variables. There are several populations, with one density each, and the individuals are described through a finite-dimensional vector. In the case of agents, this vector may consist of each individual position, or else it may also contain each individual speed. We are thus led to consider the system
\begin{equation*}
\left\{
\begin{array}{l}
\partial_t \rho^i + \nabla_x \cdot \left( q^i(\rho^i)\, v^i\!\left(t, x, \big(\mathcal{A}^i(\rho(t))\big)(x), p(t)\right) \right) = 0,
\\[10pt]
\dot p = f\left(t, p, \left(\mathcal{B}(\rho(t))\right)(p)\right),
\end{array}
\right.
\end{equation*}
where the nonlocal operators reflect the fact that the behavior of the members of the populations, as well as of the agents, depends on suitable spatial averages. The function v^i gives the speed of the i-th population, and f yields the evolution of the individuals. We defer to Section [sec:ar] for the precise definitions and regularity requirements.

Motivations for the study of such systems are found, for instance, in the literature, which provides several examples of realistic situations that fall within this framework. Besides these, the system also allows us to describe new scenarios; some examples are considered in detail in Section [sec:ex]. There, we limit our scope to two space dimensions, essentially due to visualization problems in higher dimensions. The analytic treatment below, however, is fully established in any space dimension.

As a first example, in Section [sub:tourist] we study two groups of tourists, each following a guide. The two groups are described through a pedestrian model from the literature, and the guides move according to an ODE. Each group follows its guide and interacts with the other group, while both guides need to wait for their respective groups. Section [sub:car] is devoted to pedestrians crossing a street at a crosswalk while cars are driving on the road. The pedestrians' movement is described as in the previous example, the attractive role of the guides being replaced by a repulsive effect of the cars on the pedestrians. The cars, on the other hand, move according to a follow-the-leader model and try to avoid hitting pedestrians. This results in a strong coupling between the ODE and the PDE, since the pedestrians cannot cross the street if a car is coming and, on the other hand, the cars have to stop if there are people on the road. As a third example, see Section [sub:hools], two groups of hooligans confront each other. Police officers try to separate the two groups, heading towards the areas with the strongest mixing of hooligans. Thus, the officers move according to the densities of the hooligans, which themselves try to avoid contact with the police. All examples are illustrated by numerical integrations showing central features of the models.

The current literature offers alternative approaches to the modeling of crowds. Notably, we recall the so-called _multiscale_ framework, based on measure-valued differential equations; see the cited works.
there , the interplay between the atomic part and the absolutely continuous part of the unknown measure reminds of the present interplay between the pde and the ode . nevertheless , differently from the cited references , here we exploit the distinct nature of the two equations to assign different roles to agents and crowds .this paper is organized as follows : in section [ sec : ar ] we give a precise definition of a solution of system and state the main analytic results . in section [ sec : ex ] we describe three examples which fit into the above framework and present accompanying numerical integrations .all the technical details are collected in section [ sec : technical ] .in this section we state some analytical results for solutions of . throughout we denote , is a positive constant and is an interval containing .the function describes the internal dynamics of the population and is required to satisfy ( q ) : : satisfies and . for the `` velocity '' vectors we require the following regularity ( v ) : : for every the velocity is such that + ( v.1 ) ; ; .( v.2 ) ; ; for all and all compact set , there exists a function such that , for ] , ( ) : : the map is lipschitz continuous and satisfies . in particular , there exists a positive constant such that , for every ^n) ] , the continuous map , defined for , is the unique solution to in the sense of definition [ def : solution-1 ] with initial datum assigned at time .3 . for any pair ^n ) \times { { \mathbb{r}}}^m ] , if , and satisfy * ( q ) * , * ( v ) * and * ( f ) * , then there exists a function such that and , calling the corresponding solutions , as soon as the initial datum for has compact support , it is possible to avoid the requirement * ( v.2 ) * in the assumptions of theorem [ thm : main ] .[ cor : zero ] assume that * ( v.1 ) * , * ( f ) * , * ( ) * , * ( ) * and * ( q ) * hold . for any positive and for any initial datum ^n ) \times { { \mathbb{r}}}^m ] .moreover , is compact for all ] with , where describes the position in of the guide of the -th group .the density solves the conservation law = 0\ ] ] as in , where here , and are positive constants and .moreover , describes the interaction between the member at of the -th population and his / her guide at .the addends in the non local operator model the interaction among members of the same population , the term , and between the two populations , the term .the leaders and adapt their speed according to the amount of members of their group nearby .we assume that is constrained to the circumference of radius , centered at the point ^t \in { { \mathbb{r}}}^2 ] .[ prop : ex1 ] assume .then , the functions defined in satisfy * ( v.1 ) * , * ( f ) * , * ( ) * , * ( ) * and * ( q)*. in particular , corollary [ cor : zero ] applies to -- . the proof is deferred to section [ sec : technical ] . as a specific example we consider the situation identified by the following parameters ^t , & c^2 & = & [ 2,3]^t , \\ r^1 & = & 1 , & r^2 & = & 1 , & d^1 & = & 1 , & d^2 & = & -1 , \end{array}\ ] ] and by the functions ^ 2 , \\ 0 , & \mbox{otherwise } ,\end{array } \right .\\ \bar \eta ( x ) & = & \displaystyle \frac{\tilde \eta_2(x)}{\int_{{{\mathbb{r}}}^2 } \tilde\eta_2(x ) { \mathinner{\mathrm{d}{x } } } } , & \tilde \eta_2 ( x ) & = & \left\ { \begin{array}{ll } \left(1 - \left(\frac{5x_1}{2}\right)^2\right)^3 \left(1 - \left(\frac{5x_2}{2}\right)^2\right)^3 , & x \in [ -0.4 , \ , 0.4]^2 , \\ 0 , & \mbox{otherwise . 
}\end{array } \right .\end{array}\ ] ] the computational domain is ^ 2 ] , with width , i.e. . therefore , we have the dynamics of the pedestrians is similar to that introduced in , namely = 0 , \qquad i=1,2\,.\ ] ] here describes the interaction between the member of the -th group located at and the cars . is chosen as in and models the interactions of pedestrians .finally , the vector field stands for the preferred trajectories of the people .the dynamics of cars along the road is described by the follow the leader model where the non increasing function ) ] vanishes on and describes the usual drivers behavior in follow the leader models .the assigned function is the speed of the _ leader _, i.e. , of the first vehicle . for simplicity , we assume that the initial position of the first car is after the crosswalk so that its subsequent dynamics is independent from the crowds .the present model fits into the framework presented in section [ sec : ar ] by setting \right ) \ , { \mathinner{\mathrm{d}{x } } } , \quad k=1 , \ldots , m\,.\!\!\!\ !\end{array}\ ] ] [ prop : ex2 ] assume , , , and \right) ] , with a road occupying the region \times [ 0.45 , 0.55] ] . therefore , pedestrians may walk in , while cars travel along from left to right . the population targets the bottom boundary \times \{0\} ] , see figure [ fig : po_int_car ] .no individual is allowed to cross the road aside the crosswalk .the vector , respectively , is chosen with norm and tangent to the geodesic path at for the population , respectively .in general , these vectors can be computed as solutions to the eikonal equation and their regularity depends on the geometry of the domain .first , for , we introduce the smooth function ) ] denotes the position in of the -th policeman ; so we set .the term avoids concentrations of officers at the same place , while the term takes into consideration the distribution of the hooligans .the present model fits in the framework presented in section [ sec : ar ] by setting in ( [ eq : hooligans-2 ] ) , the operator is composed by two terms describing the attraction , respectively repulsion , between members of the same , respectively different , group . here, we introduce a _ preferred density_ ] , we consider policemen , so that , and the parameters with the functions , & { \varepsilon}_3 & = & 0.1 , \\ w^2(x ,p ) & = & \displaystyle \frac{{\varepsilon}_4}{n } \sum_{j=1}^{n } \hat{\eta}(x - p^j ) \left [ \begin{array}{@{}c@ { } } 0 \\ 1 \end{array } \right ] , & { \varepsilon}_4 & = & 0.1 , \\i_k(p ) & = & \displaystyle \frac{\bar{{\varepsilon}}_2}{n } \sum_{j=1}^{n } \frac{\nabla_x \tilde{\eta}(p^j - p^k)}{\sqrt{1+\vert\nabla_x \tilde{\eta}(p^j - p^k ) \vert^2 } } , \quad & \bar { \varepsilon}_2 & = & 0.2\ , , & k & = & 1,\ldots , n \ , .\end{array}\ ] ] moreover , we let ^ 2 , \\ 0 , & \mbox{otherwise , } \end{array } \right . & \begin{array}{rcl@ { } } \eta ( x ) & = & \frac{\eta_{0.1 } ( x)}{\int_{{{\mathbb{r}}}^2 } \eta_{0.1 } ( x ) { \mathinner{\mathrm{d}{x } } } } \ , , \\ \hat\eta ( x ) & = & \frac{\eta_{0.15 } ( x)}{\int_{{{\mathbb{r}}}^2 } \eta_{0.15 } ( x ) { \mathinner{\mathrm{d}{x } } } } \ , , \end{array } \\ \tilde \eta_r ( x ) & = & \left\ { \begin{array}{ll@ { } } \left [ \left(1 - \frac{x_1}{r})^2\right ) \left(1 - \frac{x_2}{r})^2\right ) \right]^3 , & x \in [ -r , r]^2 , \\ 0 , & \mbox{otherwise , } \end{array } \right . 
& \begin{array}{rcl@ { } } \bar\eta ( x ) & = & \frac{\tilde\eta_{0.1 } ( x)}{\int_{{{\mathbb{r}}}^2 } \tilde\eta_{0.1 } ( x ) { \mathinner{\mathrm{d}{x}}}}\ , , \ ; \\ \tilde\eta ( x ) & = & \frac{\tilde\eta_{0.2 } ( x)}{\int_{{{\mathbb{r}}}^2 } \tilde\eta_{0.2 } ( x ) { \mathinner{\mathrm{d}{x } } } } \ , . \ ; \end{array } \end{array}\ ] ] for the numerical example , the initial conditions are \times[0.2,0.5 ] } , \\\rho_0 ^ 2 & = & 0.7 \ ; \chi_{\strut [ 0.25,0.75]\times[0.5,0.8 ] } , \end{array } \qquad \mbox { and } \qquad \begin{array}{rcl } p_o^1 & = & [ 0.1,0.7]^t , \\p_o^2 & = & [ 0.9,0.3]^t , \\p_o^3 & = & [ 0.1,0.4]^t , \\p_o^4 & = & [ 0.9,0.7]^t \ , .\end{array}\ ] ] in the pictures below the density of the two groups are plotted separately .the police officers are indicated by green circles .+ + at the beginning the two groups of hooligans start fighting in the middle of the domain , while some part of the groups split from the rest and stay calm ( figure [ fig : po_int_hools_1 ] , top left ) . as the conflicting groups mix ,the police approaches and tries to separate them .the first two officers can not completely isolate the groups ( figure [ fig : po_int_hools_1 ] , top right ) as at the boundaries the hooligans still attack .this stops when the other two policemen join the line of officers ( figure [ fig : po_int_hools_1 ] , bottom left ) . at the endthe police can separate the conflicting parties ( figure [ fig : po_int_hools_1 ] , bottom right ) .this latter configuration appears to be relatively stationary .+ + the same equations , but with no police officers so that , is displayed in figure [ fig : hools_no_police ] .note that the two groups superimpose and in the region occupied by both a fight takes place .denote . for later use , we state here without proof the grnwall type lemma used in the sequel .[ lem : gronwall ] let , ;{{\mathbb{r}}}^+\right) ] and ;{\mathaccent23{{{\mathbb{r}}}}}^+) ] then , } \alpha ( \tau)\right ) \ ; e^{\int_0^t \beta(\tau)\ , { \mathinner{\mathrm{d}{\tau } } } } \,,\quad \text{for a.e.~.}\ ] ] [ lem : stab-1 ] assume and ) .\end{aligned}\ ] ] then , there exists a unique krukov solution )\right) ] , then , for every and for every , & & \displaystyle + ( t_2 - t_1 ) { { \left\|q'\right\|}}_{{{\mathbf{l}^\infty } } } { { \left\|v\right\|}}_{{{\mathbf{l}^\infty } } } \sup_{\tau \in [ 0 , t_2 ] } { \mathinner{\rm tv}}\left(\rho(\tau)\right ) , \end{array}\ ] ] where .let , , , and , satisfy , and .call , the solutions to then , for every , ;{{\mathbf{l}^1 } } ) } \right ] \\ & & \quad \times \left [ { { \left\|q'_2\right\|}}_{{{\mathbf{l}^\infty } } } { { \left\|v_1 - v_2\right\|}}_{{{\mathbf{l}^1}}([0,t];{{\mathbf{l}^\infty } } ) } + { { \left\|q'_1 - q'_2\right\|}}_{{{\mathbf{l}^\infty } } } { { \left\|v_1\right\|}}_{{{\mathbf{l}^1}}([0,t];{{\mathbf{l}^\infty } } ) } \right ] \\ & + & \!\!\!\!\ ! \left [ { { \left\|q_1\right\|}}_{{{\mathbf{l}^\infty } } } { { \left\|{\nabla_x\cdot}(v_1 - v_2)\right\|}}_{{{\mathbf{l}^1 } } ( [ 0,t];{{\mathbf{l}^1 } } ) } + { { \left\|q_1-q_2\right\|}}_{{{\mathbf{l}^\infty } } } { { \left\|{\nabla_x\cdot}v_2\right\|}}_{{{\mathbf{l}^1 } } ( [ 0,t];{{\mathbf{l}^1 } } ) } \right ] e^{\kappa t}\ ! , \end{aligned}\ ] ] where ,{{\mathbf{l}^\infty } } ) } , \\ \kappa & = & \displaystyle { { \left\|q_1 ' \ ,{ \nabla_x\cdot}v_1 - q_2 ' \ , { \nabla_x\cdot}v_2\right\|}}_{{{\mathbf{l}^\infty}}}. \end{array}\ ] ] the equality directly follows from * ( q ) * and ( * ? ? 
?* theorem 1 ) .the total variation bound follows from ( * ? ? ? * theorem 2.2 ) .estimate follows from ( * ? ? ?* corollary 2.4 ) .the stability estimate follows from ( * ? ? ?* proposition 2.9 ) .[ lem : uniform - p - estimate ] assume that * ( f ) * and * ( ) * hold .fix and .then , problem admits a unique caratheodory solution and for every if , and , calling the solutions to for every ] .denote with the constant related to in * ( f)*. using * ( f ) * and * ( ) * we get apply lemma [ lem : gronwall ] to complete the proof .fix the initial data ^n ) \times { { \mathbb{r}}}^m ] satisfies and for all , the following inequality holds { \mathinner{\mathrm{d}{x } } } { \mathinner{\mathrm{d}{t } } } { \geqslant}0 \end{array}\ ] ] for all ,t\right[\times{{\mathbb{r}}}^d ; { { \mathbb{r}}}^+) ] .lemma [ lem : stab-1 ] and lemma [ lem : uniform - p - estimate ] ensure that admits a unique solution . to obtain similar estimates for the component , we set for and compute ^ 2 \\\nonumber & & + \nabla_{a } v^i \left ( t , x , \left(\mathcal{a}^i \left(r(t)\right)\right)(x ) , \pi(t ) \right ) \cdot \nabla^2_x \left(\mathcal{a}^i \left(r(t)\right)\right)(x ) , \end{aligned}\ ] ] and by * ( v ) * and * ( ) * , setting , proceeding to the component , using and together with and above } { \mathinner{\rm tv}}\left(\rho(\tau , \cdot)\right ) \\ & { \leqslant } & { { \left\|q\right\|}}_{{{\mathbf{l}^\infty } } } \int_0^{t } \int_{{{\mathbb{r}}}^d } { { \left|{\nabla_x\cdot}v(s , x)\right| } } { \mathinner{\mathrm{d}{x}}}\ , { \mathinner{\mathrm{d}{s } } } \\ & & + t { { \left\|q'\right\|}}_{{{\mathbf{l}^\infty } } } { { \left\|v\right\|}}_{{{\mathbf{l}^\infty } } } \left({\mathinner{\rm tv}}\left(\rho_o\right ) + d \ , w_d { { \left\|q\right\|}}_{{{\mathbf{l}^\infty}}\left([0,r]\right ) } \int_0^t \int_{{{\mathbb{r}}}^d } { { \left\|\nabla_x { \nabla_x\cdot}v(\tau , x)\right\|}}_{{{\mathbb{r}}}^d } { \mathinner{\mathrm{d}{x } } } { \mathinner{\mathrm{d}{\tau}}}\right ) e^{\kappa_o\ , t } \\ & { \leqslant } & t \ , { { \left\|q\right\|}}_{{{\mathbf{l}^\infty } } } \left ( { { \left\|\mathcal{c}_k\right\|}}_{{{\mathbf{l}^1 } } } + l_a { { \left\|\mathcal{c}_k\right\|}}_{{{\mathbf{l}^\infty } } } \left({{\left\|\rho_o\right\|}}_{{{\mathbf{l}^1 } } } + \delta_\rho\right ) \right ) + t { { \left\|\mathcal{c}_k\right\|}}_{{{\mathbf{l}^\infty } } } { { \left\|q'\right\|}}_{{{\mathbf{l}^\infty } } } { \mathinner{\rm tv}}(\rho_o ) e^{\kappa_o t } \\ & & + t^2 { { \left\|\mathcal{c}_k\right\|}}_{{{\mathbf{l}^\infty } } } { { \left\|q'\right\|}}_{{{\mathbf{l}^\infty } } } d w_d { { \left\|q\right\|}}_{{{\mathbf{l}^\infty } } } \\ & & \times \left ( { { \left\|\mathcal{c}_k\right\|}}_{{{\mathbf{l}^1 } } } + 3 l_a { { \left\|\mathcal{c}_k\right\|}}_{{{\mathbf{l}^\infty } } } \left({{\left\|\rho_o\right\|}}_{{{\mathbf{l}^1 } } } + \delta_\rho\right ) + l_a^2 { { \left\|\mathcal{c}_k\right\|}}_{{{\mathbf{l}^\infty } } } \left ( { { \left\|\rho_o\right\|}}_{{{\mathbf{l}^1 } } } + \delta_\rho\right)^2 \right ) e^{\kappa_o t } , \end{aligned}\ ] ] where \\\label{eq : kappat } & { \leqslant } & ( 2d + 1 ) { { \left\|\mathcal{c}_k\right\|}}_{{{\mathbf{l}^\infty } } } { { \left\|q'\right\|}}_{{{\mathbf{l}^\infty } } } \left[1 + l_a \left({{\left\|\rho_o\right\|}}_{{{\mathbf{l}^1 } } } + \delta_\rho\right)\right ] .\end{aligned}\ ] ] hence , for sufficiently small , also completing the proof that is well defined . 
in the sequel , for notational simplicity, we introduce the landau symbol to denote a bounded quantity , possibly dependent on and on the constants in * ( v ) * , * ( f ) * , * ( ) * , * ( ) * and * ( q)*. fix and denote .we now estimate .consider first the component .using lemma [ lem : uniform - p - estimate ] we get ;{{\mathbf{l}^1 } } ) } \ , e^{l_f ( t + l_b { { \left\|r_1\right\|}}_{{{\mathbf{l}^1 } } ( [ 0,t];{{\mathbf{l}^1 } } ) } ) } \\ & { \leqslant } & l_b \ , t \ , { { \left\|r_1 - r_2\right\|}}_{{\mathbf{c}^{0 } } ( [ 0,t];{{\mathbf{l}^1 } } ) } \ , e^{l_f t ( 1 + l_b ( { { \left\|\rho_o\right\|}}_{{{\mathbf{l}^1 } } } + \delta_\rho ) ) } \\ & = & { \mathcal{o}(1)}\,t \ , { { \left\|r_1 - r_2\right\|}}_{{\mathbf{c}^{0 } } ( [ 0,t];{{\mathbf{l}^1}})}. \end{aligned}\ ] ] apply now lemma [ lem : stab-1 ] with , , for and .equality and * ( v ) * allow to bound in as follows which ensures , together with t \right ) \\ & & \times \exp\left ( ( 2d + 1 ) { { \left\|\mathcal{c}_k\right\|}}_{{{\mathbf{l}^\infty } } } { { \left\|q'\right\|}}_{{{\mathbf{l}^\infty } } } \left[1 + l_a \left({{\left\|\rho_o\right\|}}_{{{\mathbf{l}^1 } } } + \delta_\rho\right)\right ] t \right ) \\ & = & { \mathcal{o}(1)}\ , .\end{aligned}\ ] ] using also we obtain ;{{\mathbf{l}^1 } } ) } \right]\!\ !{ { \left\|q'\right\|}}_{{{\mathbf{l}^\infty } } } { { \left\|v_1^i - v_2^i\right\|}}_{{\mathbf{c}^{0}}([0,t];{{\mathbf{l}^\infty } } ) } \\ \!\!\!\!\!\ ! & & \!\!\ ! + { { \left\|q\right\|}}_{{{\mathbf{l}^\infty } } } { { \left\|{\nabla_x\cdot}(v_1^i - v_2^i)\right\|}}_{{{\mathbf{l}^1 } } ( [ 0,t];{{\mathbf{l}^1 } } ) } e^{\kappa t } \\ \!\!\!\!\!\ ! & = & \!\!\ !t { \mathcal{o}(1)}\left [ 1 + { { \left\|\nabla_x { \nabla_x\cdot}v_1^i\right\|}}_{{{\mathbf{l}^1 } } ( [ 0,t];{{\mathbf{l}^1 } } ) } \right ] { { \left\|v_1^i - v_2^i\right\|}}_{{\mathbf{c}^{0}}([0,t];{{\mathbf{l}^\infty } } ) } + { \mathcal{o}(1)}{{\left\|{\nabla_x\cdot}(v_1^i - v_2^i)\right\|}}_{{{\mathbf{l}^1 } } ( [ 0,t];{{\mathbf{l}^1}})}. 
\end{aligned}\ ] ] by , we get ;{{\mathbf{l}^1 } } ) } = { \mathcal{o}(1)} ] we first deal with , which can be estimated by * ( v ) * and * ( ) * therefore , using * ( ) * , ;{{\mathbf{l}^1 } } ) } \\\nonumber & \le & l_a { { \left\|\mathcal{c}_k\right\|}}_{{{\mathbf{l}^\infty } } } \int_0^t { { \left\|r_1(\tau ) - r_2(\tau)\right\|}}_{{{\mathbf{l}^1 } } } { \mathinner{\mathrm{d}{\tau } } } + { { \left\|\mathcal{c}_k\right\|}}_{{{\mathbf{l}^1 } } } \int_0^t { { \left\|\pi_1(\tau ) - \pi_2(\tau)\right\|}}_{{{\mathbb{r}}}^m } { \mathinner{\mathrm{d}{\tau } } } \\ & & + l_a { { \left\|\mathcal{c}_k\right\|}}_{{{\mathbf{l}^\infty } } } \int_0^t { { \left\|r_1(\tau ) - r_2(\tau)\right\|}}_{{{\mathbf{l}^1 } } } { \mathinner{\mathrm{d}{\tau } } } + l_a^2 { { \left\|\mathcal{c}_k\right\|}}_{{{\mathbf{l}^\infty } } } \int_0^t { { \left\|r_2(\tau)\right\|}}_{{{\mathbf{l}^1 } } } { { \left\|r_1(\tau ) - r_2(\tau)\right\|}}_{{{\mathbf{l}^1 } } } { \mathinner{\mathrm{d}{\tau } } } \nonumber \\\nonumber & & + l_a { { \left\|\mathcal{c}_k\right\|}}_{{{\mathbf{l}^\infty } } } \int_0^t { { \left\|r_2(\tau)\right\|}}_{{{\mathbf{l}^1 } } } { { \left\|\pi_1(\tau ) - \pi_2(\tau)\right\|}}_{{{\mathbb{r}}}^m } \ , { \mathinner{\mathrm{d}{\tau } } } \\\nonumber & { \leqslant } & l_a { { \left\|\mathcal{c}_k\right\|}}_{{{\mathbf{l}^\infty } } } \left(2 + l_a\left({{\left\|\rho_o\right\|}}_{{{\mathbf{l}^1 } } } + \delta_\rho\right)\right ) \int_0^t { { \left\|r_1(\tau ) - r_2(\tau)\right\|}}_{{{\mathbf{l}^1 } } } { \mathinner{\mathrm{d}{\tau } } } \\ & & + \left({{\left\|\mathcal{c}_k\right\|}}_{{{\mathbf{l}^1 } } } + l_a { { \left\|\mathcal c_k\right\|}}_{{{\mathbf{l}^\infty}}}\left({{\left\|\rho_o\right\|}}_{{{\mathbf{l}^1 } } } + \delta_\rho\right)\right ) \int_0^t { { \left\|\pi_1(\tau ) - \pi_2(\tau)\right\|}}_{{{\mathbb{r}}}^m } { \mathinner{\mathrm{d}{\tau } } } \label{eq : deltadiv } \\\nonumber & \le & t \ , { \mathcal{o}(1)}\left ( { { \left\|r_1 - r_2\right\|}}_{{\mathbf{c}^{0}}([0,t];{{\mathbf{l}^1 } } ) } + { { \left\|\pi_1 - \pi_2\right\|}}_{{\mathbf{c}^{0 } } } \right ) .\end{aligned}\ ] ] going back to the components , ;{{\mathbf{l}^1 } } ) } \right ] \ ! { { \left\|v_1^i - v_2^i\right\|}}_{{\mathbf{c}^{0}}([0,t];{{\mathbf{l}^\infty } } ) } + { \mathcal{o}(1)}{{\left\|{\nabla_x\cdot}(v_1^i - v_2^i)\right\|}}_{{{\mathbf{l}^1 } } ( [ 0,t];{{\mathbf{l}^1 } } ) } \\ \!\!\!\!\!\ ! & { \leqslant } & t \ , { \mathcal{o}(1)}\left ( { { \left\|r_1 - r_2\right\|}}_{{\mathbf{c}^{0}}([0,t];{{\mathbf{l}^1 } } ) } + { { \left\|\pi_1 - \pi_2\right\|}}_{{\mathbf{c}^{0 } } } \right ) .\end{aligned}\ ] ] the above estimate ensures that for sufficiently small , is a contraction .hence , it admits a unique fixed point , defined on ] .define :\ , \left(\rho_1 , p_1\right)(s ) = \left(\rho_2 , p_2\right)(s)\textrm { for all } s \in [ 0,t]\right\}.\ ] ] clearly , and .assume by contradiction that and define .the previous steps , which can be applied thanks to the bound , ensure the local existence to problem with datum assigned at time .hence , on a full neighborhood of .this contradicts the assumption , proving global uniqueness .[ [ mathbfbv - estimate - on - rho - and - mathbflinfty - estimate - on - p . ] ] estimate on and estimate on .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + let be the solution to . 
*bv estimate on \(\rho\) and \(\mathbf{l}^\infty\) estimate on \(p\).*

let \((\rho, p)\) be the solution constructed above. by the bounds already obtained, call \(\mathcal{b}\) the closed ball in \(\mathbf{l}^1\) of radius \(\left(\|\rho_o\|_{\mathbf{l}^1} + \delta_\rho\right) e^{\int_0^t c_f(s)\,\mathrm{d}s}\), so that \(\rho(t) \in \mathcal{b}\) as long as the solution is defined. fix \(t_1 < t_2\) inside the time interval where the solution is defined. using the equation, the bv bound and *(v)*, we obtain

\[
\|\rho(t_2) - \rho(t_1)\|_{\mathbf{l}^1}
\leqslant
\mathcal{o}(1)\,(t_2 - t_1)
+ (t_2 - t_1)\,\|\mathcal{c}_{k_t}\|_{\mathbf{l}^\infty}\,\|q'\|_{\mathbf{l}^\infty}\,\sup_{\tau \in [0, t_2]} \mathinner{\rm tv}\left(\rho(\tau)\right).
\]

this estimate, together with the bound above, ensures that, on any bounded time interval on which it is defined, \(\rho\) is lipschitz continuous in time. similarly, fix \(t_1 < t_2\) inside the time interval where \(p\) is defined. by 5. in *(f)*, the bounds on \(\rho\) and the previous estimates, \(\|p(t_2) - p(t_1)\|_{{\mathbb{r}}^m}\) is bounded by a quantity vanishing with \(t_2 - t_1\), which shows the uniform continuity of \(p\) on any bounded time interval.

*global existence.* fix the initial datum \((\rho_o, p_o) \in \left(\mathbf{l}^1 \cap \mathbf{bv}\right)({\mathbb{r}}^n) \times {\mathbb{r}}^m\) and call \(t^*\) the supremum of the times up to which the solution is defined. if \(t^*\) were finite, the cauchy problem with initial datum assigned at time \(t^*\) would still satisfy all conditions ensuring a unique solution defined also on a right neighborhood of \(t^*\), which contradicts the choice of \(t^*\).

*stability with respect to the initial data.* fix a positive \(t\). for \(i = 1, 2\), choose initial data \((\rho_o^i, p_o^i)\). then

\[
\|p_1(t) - p_2(t)\|_{{\mathbb{r}}^m}
\leqslant
\left(\|p_o^1 - p_o^2\|_{{\mathbb{r}}^m} + \mathcal{o}(1)\,\|\rho_1 - \rho_2\|_{\mathbf{l}^1([0,t];\mathbf{l}^1)}\right)
e^{\mathcal{o}(1)\,t\,\left(1 + \|\rho_o^1\|_{\mathbf{l}^1}\right)} .
\tag{eq:pp}
\]

for \(i = 1, 2\), define the corresponding velocity fields \(v_i\) and, using *(v)* and the kernel assumptions, compute preliminarily

\[
\begin{aligned}
\|\nabla_x\cdot(v_1^i - v_2^i)\|_{\mathbf{l}^1([0,t];\mathbf{l}^1)}
& \leqslant \mathcal{o}(1)\left(1 + \|\rho_o^2\|_{\mathbf{l}^1}\right)\left(\|\rho_1 - \rho_2\|_{\mathbf{l}^1([0,t];\mathbf{l}^1)} + \|p_1 - p_2\|_{\mathbf{l}^1}\right), \\
\|\nabla_x\,\nabla_x\cdot v_1^i\|_{\mathbf{l}^1([0,t];\mathbf{l}^1)}
& \leqslant t\,\mathcal{o}(1)\left(1 + \|\rho_o^1\|_{\mathbf{l}^1} + \|\rho_o^1\|_{\mathbf{l}^1}^2\right), \\
\|v_1^i - v_2^i\|_{\mathbf{l}^1([0,t];\mathbf{l}^\infty)}
& \leqslant \|\mathcal{c}_k\|_{\mathbf{l}^\infty}\int_0^t\left(l_a\,\|\rho_1(\tau) - \rho_2(\tau)\|_{\mathbf{l}^1} + \|p_1(\tau) - p_2(\tau)\|_{{\mathbb{r}}^m}\right)\mathrm{d}\tau \\
& = \mathcal{o}(1)\left(\|\rho_1 - \rho_2\|_{\mathbf{l}^1([0,t];\mathbf{l}^1)} + \|p_1 - p_2\|_{\mathbf{l}^1}\right).
\end{aligned}
\]

by lemma [lem:stab-1], absorbing the terms \(\mathinner{\rm tv}(\rho_o^{1,i})\) and the factors \(\kappa_o e^{\kappa_o t}\) and \(\kappa e^{\kappa t}\) into the constants, we get

\[
\begin{aligned}
\|\rho_1(t) - \rho_2(t)\|_{\mathbf{l}^1}
& \leqslant \left(1 + t\,\mathcal{o}(1)\right)\|\rho_o^{1} - \rho_o^{2}\|_{\mathbf{l}^1} \\
& \quad + \mathcal{o}(1)\left(1 + \|\rho_o^1\|_{\mathbf{l}^1} + \|\rho_o^1\|_{\mathbf{l}^1}^2\right)\left(\|\rho_1 - \rho_2\|_{\mathbf{l}^1([0,t];\mathbf{l}^1)} + \|p_1 - p_2\|_{\mathbf{l}^1}\right).
\end{aligned}
\]

a further application of lemma [lem:gronwall], using also ([eq:pp]), allows to conclude the proof of 3. in theorem [thm:main].

*stability with respect to \(q\).* fix a positive \(t\). for \(i = 1, 2\), let \((\rho_i, p_i)\) solve the system with \(q\) replaced by \(q_i\) and with the initial datum assigned at time 0. the analogous preliminary computations give, in particular,

\[
\|{\nabla_x\cdot}(v_1^i - v_2^i)\|_{\mathbf{l}^1([0,t];\mathbf{l}^1)}
= \mathcal{o}(1)\left(\|\rho_1 - \rho_2\|_{\mathbf{l}^1([0,t];\mathbf{l}^1)} + t\,\|p_1 - p_2\|_{\mathbf{c}^0}\right),
\qquad
\|\nabla_x\,\nabla_x\cdot v_1^i\|_{\mathbf{l}^1([0,t];\mathbf{l}^1)} = t\,\mathcal{o}(1).
\]

applying lemma [lem:uniform-p-estimate] to bound \(\|p_1 - p_2\|\) and then lemma [lem:stab-1],

\[
\|\rho_1(t) - \rho_2(t)\|_{\mathbf{l}^1}
= \mathcal{o}(1)\left(\|q_1 - q_2\|_{\mathbf{w}^{1,\infty}} + \|\rho_1 - \rho_2\|_{\mathbf{l}^1([0,t];\mathbf{l}^1)} + t\,\|p_1 - p_2\|_{\mathbf{c}^0}\right).
\]

a final application of lemma [lem:gronwall] completes the proof of this part.

*stability with respect to \(v\).* fix a positive \(t\). for \(i = 1, 2\), let \((\rho_i, p_i)\) solve the system with \(v\) replaced by \(v_i\) and with the initial datum assigned at time 0. here the preliminary computations yield

\[
\|{\nabla_x\cdot}(v_1^i - v_2^i)\|_{\mathbf{l}^1([0,t];\mathbf{l}^1)}
= t\,\|{\nabla_x\cdot}(v_1 - v_2)\|_{\mathbf{l}^\infty}
+ \mathcal{o}(1)\left(\|\rho_1 - \rho_2\|_{\mathbf{l}^1([0,t];\mathbf{l}^1)} + t\,\|p_1 - p_2\|_{\mathbf{c}^0}\right),
\]

so that, applying lemma [lem:uniform-p-estimate] and lemma [lem:stab-1] as in the previous step,

\[
\|\rho_1(t) - \rho_2(t)\|_{\mathbf{l}^1}
\leqslant \mathcal{o}(1)\,\|v_1 - v_2\|_{\mathbf{w}^{1,\infty}}
+ \mathcal{o}(1)\,\|\rho_1 - \rho_2\|_{\mathbf{l}^1([0,t];\mathbf{l}^1)},
\]

and the proof of the stability with respect to \(v\) is completed. finally, applying lemma [lem:uniform-p-estimate] and lemma [lem:stab-1] once more,

\[
\begin{aligned}
\|p_1(t) - p_2(t)\|_{{\mathbb{r}}^m}
& \leqslant \mathcal{o}(1)\left(\|p_1 - p_2\|_{\mathbf{l}^1([0,t];{\mathbb{r}}^m)} + \|\rho_1 - \rho_2\|_{\mathbf{l}^1([0,t];\mathbf{l}^1)}\right), \\
\|\rho_1(t) - \rho_2(t)\|_{\mathbf{l}^1}
& \leqslant \mathcal{o}(1)\left(\|\rho_1 - \rho_2\|_{\mathbf{l}^1([0,t];\mathbf{l}^1)} + \|p_1 - p_2\|_{\mathbf{l}^1([0,t];{\mathbb{r}}^m)}\right),
\end{aligned}
\]

and a further application of lemma [lem:gronwall] completes the proof.

*proof of corollary [cor:zero].* note that, for any admissible function, the relevant norms are bounded by a quantity proportional to \(\exp\|c_f\|_{\mathbf{l}^1([0,t];{\mathbb{r}})}\). choose the data so that the corresponding source terms vanish for all times; then the solution of the reduced problem also solves the full one.

*proof of proposition [prop:ex1].* assumption *(v.1)* is immediate. the verification of *(f)* and *(q)* is straightforward. the first kernel assumption directly follows from the fact that the map defined by the convolution is of class \(\mathbf{c}^1\) and bounded; by the standard properties of the convolution product, we deduce the bound implying the second kernel assumption, concluding the proof.

*proof of proposition [prop:ex2].* assumption *(v.1)* is immediate. the verification of *(f)* and *(q)* is straightforward. the first kernel assumption follows in the same way as in the proof of proposition [prop:ex1], and standard properties of the convolution product permit to verify the second.

*proof of proposition [prop:ex3].* the proofs of *(v.1)*, *(f)* and *(q)* are immediate. to prove the first kernel assumption, note that the real-valued function appearing in the definition of the velocity is globally lipschitz continuous, and the composed map is lipschitz continuous on the relevant compact set. the standard properties of the convolution also ensure the lipschitz continuity and the boundedness of the maps involved, in the required norms, proving the first assumption; the proof of the second is entirely analogous.

*acknowledgment:* this work was partially supported by the indam gnampa project _conservation laws: theory and applications_ and the graduiertenkolleg 1932 _"stochastic models for innovations in the engineering sciences"_.
nonlocal conservation laws are used to describe various realistic instances of crowd behavior. first, a basic analytic framework is established through an _ad hoc_ well-posedness theorem for systems of nonlocal conservation laws in several space dimensions coupled nonlocally with a system of odes. numerical integrations then show possible applications to the interaction of different groups of pedestrians, and also to their interaction with other _agents_. _2000 mathematics subject classification:_ 35l65, 90b20. _keywords:_ nonlocal conservation laws, crowd dynamics, car traffic.
flexion has recently been introduced as a means of measuring small-scale variations in weak gravitational lens fields (goldberg & bacon, 2005; bacon, goldberg, rowe & taylor, 2006, hereafter bgrt). rather than simply measuring the ellipticities of arclets, this technique aims to measure the "arciness" and "skewness" (collectively referred to as "flexion") of a lensed image. flexion is a complementary approach to shear analysis in that it uses the odd moments (3rd multipole moments, for example) to compute local gradients in a shear field. bgrt have discussed how flexion may be used to identify substructure in clusters, to normalize the matter power spectrum on sub-arcminute scales via "cosmic flexion" (as an analog to cosmic shear), and to estimate the ellipticity of galaxy-galaxy lenses. as a practical application, flexion has already been used to measure galaxy-galaxy lensing (goldberg & bacon, 2005), and is presently being used in cluster reconstruction (leonard et al., in preparation). however, there have been several difficulties in the estimation of flexion on real objects. first, the flexion inversion is difficult to describe, contains an enormous number of terms, and is thus rather daunting to code. secondly, there has been little discussion of the explicit effects of psf convolution or deconvolution. finally, unlike shear, there has, until recently, been no simple form to even approximate what the "flexion" is. the remainder of this paper will thus be a practical guide to measuring flexion in real images. we begin, below, by reminding the reader of the basic terms involved in flexion analysis. in [sec:shapelets], we review shapelet decomposition and discuss some of the issues involved in using shapelets to measure flexion. in [sec:holics], we discuss a new, conceptually simpler form of flexion analysis developed by okura et al. (2006), which uses moments, rather than basis functions, to measure flexion. they call their technique higher order lensing image's characteristics, or holics. we refine the holics approach somewhat and develop a ksb (kaiser, squires, & broadhurst, 1995)-type approach using a gaussian filter to perform an inversion, as well as describe a technique for psf deconvolution. in [sec:simulate], we discuss comparisons of the two techniques using simulated lenses and simulated psfs. in [sec:measurement], we compare shapelets and holics inversions on hst images. finally, in [sec:discuss], we discuss the implications of this study. in appendix a, we also present the explicit holics inversion matrix, so the reader can write his/her own code. he/she need not do so, however, as all codes discussed herein are available from the flexion webpage.

what is flexion? conceptually, flexion represents local variability in the shear field, which expresses itself as second-order distortions in the coordinate transformation between unlensed and lensed images:

\[
\beta_i \simeq a_{ij}\,\theta_j + \frac{1}{2}\, d_{ijk}\,\theta_j\,\theta_k ,
\label{eq:transform}
\]

with

\[
d_{ijk} = \partial_k a_{ij} ,
\]

where \(\partial_k\) is shorthand for \(\partial/\partial\theta_k\). here, \(a_{ij}\) is the normal deprojection operator:

\[
a = \left( \begin{array}{cc} 1 - \kappa - \gamma_1 & -\gamma_2 \\ -\gamma_2 & 1 - \kappa + \gamma_1 \end{array} \right),
\]

and thus the second term on the right in equation ([eq:transform]) represents the flexion signal. \(d_{ijk}\) may be written as a combination of two spin components (the two "flexions" defined below): these distortions create asymmetries in a lensed image, a skewness and a bending, depending on the values of the individual coefficients. irwin & shmakova (2005; 2006) describe a similar lensing analysis technique in which the elements of \(d_{ijk}\) are referred to as "catenoids" and "displacements".
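to make the coordinate mapping of equation ([eq:transform]) concrete, the following minimal sketch (our own illustration in python; the function name and the toy values of the convergence, shear and flexion-gradient entry are assumptions, not values from the text) applies the second-order lens mapping to an image-plane position:

```python
import numpy as np

def lensed_to_source(theta, A, D):
    """Second-order lens mapping: beta_i = A_ij theta_j + 0.5 D_ijk theta_j theta_k.
    theta : image-plane position, shape (2,)
    A     : 2x2 distortion matrix
    D     : gradient of A, shape (2, 2, 2), with D[i, j, k] = d A_ij / d theta_k
    """
    theta = np.asarray(theta, dtype=float)
    return A @ theta + 0.5 * np.einsum('ijk,j,k->i', D, theta, theta)

# toy example: weak convergence and shear, plus one flexion-like gradient term
kappa, g1, g2 = 0.10, 0.05, 0.00
A = np.array([[1.0 - kappa - g1, -g2],
              [-g2, 1.0 - kappa + g1]])
D = np.zeros((2, 2, 2))
D[0, 0, 0] = 0.02            # an illustrative, not physical, choice
beta = lensed_to_source([1.0, 0.5], A, D)
```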
bgrt describe an inversion whereby one can estimate the individual components of \(d_{ijk}\), and thus measure two "flexions":

\[
\mathcal{f} \equiv \frac{1}{2}\,\partial\,\partial\,\partial^{*}\,\psi , \qquad
\mathcal{g} \equiv \frac{1}{2}\,\partial\,\partial\,\partial\,\psi ,
\]

where \(\partial\) is the complex derivative operator:

\[
\partial = \frac{\partial}{\partial\theta_1} + i\,\frac{\partial}{\partial\theta_2} .
\]

figure [fg:shapes] is reproduced from bgrt and shows the effect of a first or second flexion on a circular source. an object with first flexion, \(\mathcal{f}\), appears skewed, while an object with second flexion, \(\mathcal{g}\), appears arced, especially if the image has an induced shear as well. the first flexion has a \(2\pi\) rotational symmetry, and thus behaves like a vector. in particular, it is a direct tracer of the gradient of the convergence:

\[
\mathcal{f} = \mathcal{f}_1 + i\,\mathcal{f}_2 = \nabla\kappa ,
\]

where \(\mathcal{f}_1\) is the real component and \(\mathcal{f}_2\) is the imaginary part (as with the second flexion \(\mathcal{g}\) and, as is the standard convention, with shear). the second flexion has a \(2\pi/3\) rotational symmetry, though unlike the first flexion, it has no simple physical interpretation. it is, however, roughly proportional to the local derivative of the magnitude of the shear. a more complete discussion of the flexion formalism can be found in bgrt.

measurement of flexion ultimately requires very accurate knowledge of the distribution of light in an image. the shapelets (refregier 2003; refregier & bacon 2003) method of image reconstruction decomposes an image into 2d hermite polynomial bases:

\[
f(\vec{\theta}) = \sum_{nm} f_{nm}\, b_{nm}(\vec{\theta};\beta),
\]

where the basis functions \(b_{nm}\) are products of hermite polynomials and a gaussian (defined explicitly below). this technique has a number of very natural advantages. in the absence of a psf, all shapelet coefficients will have equal noise. moreover, the basis set is quite localized (hermite polynomials have a gaussian smoothing filter), and thus is ideal for modeling galaxies. furthermore, the generating "step-up" and "step-down" operators for the hermite polynomials are simply combinations of the position and derivative operators. refregier (2003) shows that if we decompose a source image, \(f\), into shapelet coefficients, the transformation to a lensed image may be expressed quite simply as:

\[
f' \simeq \left(1 + \kappa\,\hat{k} + \gamma_i\,\hat{s}_i\right) f ,
\]

where the various lensing operators \(\hat{k}\) and \(\hat{s}_i\) are quadratic combinations of \(\hat{a}_i^{\dagger}\) and \(\hat{a}_i\), the normal step-up and step-down operators, and the subscript refers to the directional component of the coefficient (i.e. 1 for the first or x-component, and 2 for the second or y-component). note that in the weak-field limit, these operators indicate that power will be transferred between coefficients whose indices differ by 2, which preserves symmetry as well as keeping the image representation in shapelet space compact. in goldberg & bacon (2005), similar (albeit more complicated) transforms were found relating the derivatives of shear. we will not reproduce the full second-order operators here, as they are written in full in the earlier work, but we will point out some key features. first, some of the elements in the operators have an explicit dependence on the (unlensed) quadrupole moments of the light distribution. this is due to a relatively subtle effect not present in shear analysis. since the flexion signal is asymmetric, the center of brightness in the image plane will no longer necessarily correspond to the center of brightness in the source plane, and since the shapelet decomposition is performed around the center of light, we need to correct for this. most important, though, is the fact that second-order lensing terms yield transfer of power between coefficients whose indices differ by 1 or 3.
to second order, then, a lensed image can be expressed in terms of the unlensed coefficients acted on by the first- and second-order lensing operators, schematically \(\mu_{nm} \simeq f_{nm} - \left(\gamma_i\,\hat{s}_i^{(1)} + \gamma_{i,j}\,s_{ij}^{(2)}\right)\overline{f}_{nm}\). flexion analysis assumes (as does shear analysis) that the intrinsic flexion is random, and thus all "odd" (defined by n+m) moments are expected to be zero. thus, from a set of shapelet coefficients, a best estimate for the flexion signal may be found via \(\chi^2\) minimization, where:

\[
\chi^2 = \left[\mu_{n_1 m_1} - f_{n_1 m_1} + \left(\gamma_i\,\hat{s}_i^{(1)} + \gamma_{i,j}\,s_{ij}^{(2)}\right)\overline{f}_{n'_1 m'_1}\right]
v^{-1}_{n_1 m_1 n_2 m_2}
\left[\mu_{n_2 m_2} - f_{n_2 m_2} + \left(\gamma_i\,\hat{s}_i^{(1)} + \gamma_{i,j}\,s_{ij}^{(2)}\right)\overline{f}_{n'_2 m'_2}\right],
\label{eq:chi2}
\]

\(v\) is the covariance matrix of the shapelet coefficients, and \(f_{nm}\) is the "unlensed" estimate of a shapelet coefficient. for odd modes, this is zero. for even modes, the relative effect of shear is typically much smaller than the intrinsic ellipticity of an image, thus it makes sense to set \(f_{nm} = \mu_{nm}\). though the form looks quite complicated, conceptually, computing the flexion is very straightforward. a simplified pipeline may be written as follows:

1. generate a catalog of objects and, for each, excise an isolated postage stamp.
2. compute the shapelet coefficients of the postage stamp.
3. deconvolve the postage stamp with a known psf kernel.
4. compute the transformation matrices associated with each of the four flexion operators, solve the minimization (equation [eq:chi2]) for \(\gamma_{i,j}\), and estimate the flexion.

we discuss each of these steps in turn below. the data used for this analysis were taken using hst and the advanced camera for surveys, in the particular context of cluster lensing. in this context, the galaxies in which we are interested are potentially blended with much larger and brighter foreground objects. we discuss the specific properties of our data catalog in [sec:measurement], but many of the issues involved are quite generic.

the first step in the process, the generation of a catalog and postage stamps, seems quite straightforward. for some datasets, such as the sdss (york et al. 2000), the data release includes an atlas of pre-cut postage stamps. for other applications, such as relatively shallow galaxy-galaxy or cosmic shear/flexion studies, fields will be relatively uncrowded, and thus simple application of widely used packages such as sextractor (bertin & arnouts, 1996) suffices. when fields are crowded, however, and contain a wide range of brightnesses and sizes, catalog generation becomes more complicated. it has been noted by rix et al. (2004) that, in general, a single set of sextractor parameters is insufficient for detection of all the objects of interest within an image; setting the source detection threshold too low will result in excessive blending near bright objects, whereas a high threshold results in a failure to detect fainter sources. rix et al. describe a two-pass strategy for object detection and deblending, involving an initial ("cold") pass to identify large, bright objects, followed by a lower-threshold ("hot") pass to pick up dimmer objects. their final catalog consists of all the objects detected in the cold run, plus any objects detected in the hot run that do not lie within the isophotal area of any object detected in the first pass. this technique works well to prevent spurious deblending by sextractor in images in which there is significant substructure. however, when dealing with crowded fields (such as clusters of galaxies), the largest problem in catalog generation is excessive blending of sources, particularly in the central region.
to remedy this, we use a modified version of this hot/cold technique. our method consists of a primary sextractor run designed to detect only the brightest objects. in a lensing field, especially in a lensing cluster, these bright objects will tend to be the lenses. making use of the rms maps generated during this sextractor run, we mask out the bright objects by setting their pixel values to background noise, and thus simulate an emptier field. we then run sextractor on the masked image, using a much lower detection threshold, to create a catalog of background objects. since shape estimation including both flexion and shear has a minimum of 10 degrees of freedom, we require at least 10 connected pixels above the detection threshold, though in reality we are unlikely to be able to get a reliable measurement from an image with fewer than 15 included pixels. we then discard all objects for which reliable shape estimates cannot be found. for each remaining object, a postage stamp is generated. ideally, this should identify any neighboring objects and mask them out (by setting their pixel values equal to background noise). our postage stamp code also identifies objects which are blended, by using a friends-of-friends algorithm to find sets of connected pixels that lie a certain threshold (typically 2-3 times the sky rms for the stacked images described below) above the background. if there is any overlap between the object of interest and another object within the field of the postage stamp, we consider the source to be excessively blended and exclude it from further analysis.

shapelets can be an extremely compact representation of an individual image. however, in reality, they are a _family_ of basis functions. there is a characteristic scaling parameter, \(\beta\), which represents the width of the gaussian kernel multiplying the basis-function hermite polynomials:

\[
b_{nm}(\vec{\theta};\beta) \propto h_n\!\left(\frac{\theta_1}{\beta}\right) h_m\!\left(\frac{\theta_2}{\beta}\right) e^{-|\vec{\theta}|^2 / 2\beta^2} .
\]

in principle, while all values of \(\beta\) will yield an orthonormal basis set, some values produce a dramatically faster reconstruction in terms of the number of coefficients required to reach convergence. moreover, in reality we do not _want_ to reconstruct all details in an image; structure on the individual pixel scale may simply represent noise. from a practical perspective, our goal is to optimize the selection of \(\beta\) and the maximum coefficient index, \(n_{max}\). refregier (2003) suggests the following parameters:

\[
\beta \simeq \left(\theta_{min}\,\theta_{max}\right)^{1/2}, \qquad
n_{max} \simeq \frac{\theta_{max}}{\theta_{min}} - 1 ,
\]

where \(\theta_{min}\) and \(\theta_{max}\) represent the minimum and maximum scales of image structure, respectively. r. massey (private communication) has found that, rather than performing overlap integrals to solve for the shapelet coefficients (as was done, for example, in the analysis of goldberg & bacon 2005), the ideal approach is to do a \(\chi^2\) minimization of the reconstructed image against the original postage stamp. this may seem complicated, and it is. fortunately, a shapelets package is available in idl at the shapelets webpage. for our sample of co-added, background-subtracted hst acs images of abell 1689, we find that choices of \(\beta\) and \(n_{max}\) scaled to the semi-major and semi-minor axes of each galaxy, as measured by sextractor, give good shapelet reconstructions. however, it is important to note that these parameters are somewhat dependent on the noise level in the images.
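the following short sketch (our own, assuming the \(\theta_{min}\)/\(\theta_{max}\) relations just quoted) turns measured object scales into decomposition parameters:

```python
import numpy as np

def shapelet_scales(theta_min, theta_max):
    """Suggested shapelet scale and maximum order for structure between
    theta_min and theta_max (both in pixels), following the relations
    quoted above from refregier (2003)."""
    beta = np.sqrt(theta_min * theta_max)
    n_max = int(round(theta_max / theta_min)) - 1
    return beta, n_max

# e.g. smallest resolved scale ~2 pixels, object size ~16 pixels:
beta, n_max = shapelet_scales(2.0, 16.0)   # beta ~ 5.7 pixels, n_max = 7
```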
for a sky-limited sample, we have found that the optimal choice of \(n_{max}\) scales approximately linearly with the ratio of the flux to the rms sky noise. we have looked at this scaling in a sample of galaxies detected with acs (which we describe in greater detail below), each of which was imaged in 4 frames. prior to stacking of these frames, we found that a relatively low maximum order produced low \(\chi^2\) and convergence with small values of \(\beta\); after stacking, a higher maximum order was required. this makes sense, since the noisier our image, the more prone we might otherwise be to fitting complex polynomials to what is, essentially, noise. roughly, the processing time for a decomposition scales steeply with the object scale, as it determines both the postage stamp size and the maximum order of the shapelet decomposition. due to the high resolution of our images, we encountered a number of objects for which the decomposition time became prohibitive. we opted to re-grid such images into larger pixels by taking the mean of the pixel values in square bins, the size of which is determined by the object size (rounded up for the largest objects). a flexion measurement is then carried out using the \(\chi^2\) minimization technique described previously. however, we have found that truncating the shapelet series prior to the flexion measurement yields a more accurate and robust measure of the flexion than using the full series. excluding the higher-order shapelet modes avoids contamination of the flexion signal by small-scale substructure and by noise (particularly in dimmer objects). we exclude the highest-order shapelet modes in our flexion measurement; this effectively increases \(\theta_{min}\) to 2 pixels, without affecting the accuracy of the reconstruction.

one of the complications in measuring properties of lensed images is that, in practice, they are convolved with a psf:

\[
f^{(obs)}(\vec{\theta}) = \int f(\vec{\theta}')\, g(\vec{\theta} - \vec{\theta}')\,\mathrm{d}^2\theta' .
\label{eq:psfdef}
\]

in principle, the psf \(g\) can be estimated through measurement of stars, but in deep, small-field, high-galactic-latitude observations, stars may be scarce, and thus psf estimation may rely partly on numerical analysis of the instrument (e.g. the tiny tim algorithm, krist, j. 1993). in reality, though, this should rarely be an issue. estimations of the psf flexion from tiny tim yield values which represent the maximum induced flexion that can arise from convolution with the psf, and which are still several orders of magnitude lower than the scatter in the intrinsic flexion of galaxies. we are not surprised by this since, for example, psf distortions arising from variations across chips are likely to scale as the variation in psf ellipticity. in acs, chip distortions produce small ellipticities which vary on scales of hundreds of pixels, producing a correspondingly tiny induced flexion. from the ground, the atmospheric distortions are expected, on average, to be even more isotropic. there is another reason to suppose that psf flexion contributions will be unimportant. in shear measurements, the psf ellipticity typically varies smoothly and somewhat symmetrically around the center of a field, mimicking (or partially reversing) the overall behavior of the expected shear field. since flexion probes smaller-scale effects, the flexion induced by the psf will, on average, cancel out. this is not to say that we cannot deal with psf flexion inversion. refregier (2003) describes an explicit deconvolution algorithm (see also refregier and bacon 2003, and references therein).
in shapelet space, equation ([eq:psfdef]) can be re-written as:

\[
f^{(obs)}_{nm} = c_{nm\,n'm'}\, f_{n'm'} ,
\]

where \(c_{nm\,n'm'}\) is the 2-dimensional convolution tensor, which depends on the shapelet coefficients of the psf and on the three characteristic scales of the observed image, the unconvolved image, and the psf, respectively. we may then define a psf convolution matrix from this tensor. if only low-order terms in the convolution matrix are included, it may be inverted to perform a deconvolution via

\[
f_{nm} = \left(p^{-1}\right)_{nm\,n'm'} f^{(obs)}_{n'm'} .
\]

this provides a good estimate of the low-order coefficients, but high-order information is lost. an alternative inversion scheme involves fitting the observed galaxy coefficients using a \(\chi^2\) minimization scheme. refregier and bacon (2003) note that the \(\chi^2\) scheme may be more robust numerically, and can take full account of variations in the noise characteristics across an image (although it is strictly only valid in the case of gaussian noise). it is this scheme that is implemented in the shapelets idl software.

if the shapelet coefficients are statistically independent (as they will be in the absence of an explicit psf deconvolution), formal inversion of the flexion operator is quite straightforward. under these circumstances, we also have the benefit that the measurement error for each moment is identical (see refregier 2003 for discussion). noting that, in most galaxies, the coefficients corresponding to the even moments will be much larger than the odd moments (and, indeed, upon random rotations, the latter will necessarily average to zero), we can dramatically simplify equation ([eq:chi2]). first, we define the susceptibility of each odd moment as its derivative with respect to the flexion parameters,

\[
s_{nm,(i,j)} \equiv \frac{\partial f_{nm}}{\partial \gamma_{i,j}} ,
\]

which is built entirely from the "even" coefficients, while the signal resides in the "odd" coefficients. thus, we wish to solve the relation

\[
f^{(odd)}_{nm} = s_{nm,(i,j)}\;\gamma_{i,j} ,
\]

where the left-hand side is taken directly from measurement. taking the derivatives and rearranging, we find the corresponding normal equations, which can readily be inverted to solve for \(\gamma_{i,j}\). in practice, however, there are a number of issues which must be considered. first, if the psf or pixel scale is relatively large compared to the minimum resolution scale of an image, then many of the high-order moments returned by the shapelets decomposition will, in fact, carry no information. thus, the above inversion will yield a systematic underestimate of the true image flexion. above, we describe a truncation which minimizes this effect. while the flexion inversion is, at its core, linear algebra, it involves an enormous number of terms. we have thus provided an inversion code for shapelets estimates of flexion, along with examples, on the flexion webpage.

okura et al. (2006) recently related flexion directly to the 3rd moments of observed images. this is a significant extension of flexion, and very much along the lines of goldberg & natarajan's (2002) original work, which discussed "arciness" in terms of the measured octopole moments. throughout our discussion, we will use the notation

\[
q_{ij} = \frac{1}{f}\int f(\vec{\theta})\,\theta_i\,\theta_j\;\mathrm{d}^2\theta
\]

to refer, in this case, to the unweighted quadrupole moments, with all higher moments being defined by exact analogy. in this context, \(f\) refers to the unweighted integrated flux. they define the complex terms \(\zeta\) (a spin-1 combination of the third moments) and \(\delta\) (a spin-3 combination), each normalized by a combination of the fourth moments; these terms are collectively referred to as holics. if a galaxy is otherwise perfectly circular (i.e. no ellipticity), and in the absence of noise, then the holics may be directly related to estimators of the flexion (subject to an unknown multiplicative bias). namely, \(\mathcal{f}\) is proportional to \(\zeta\) and \(\mathcal{g}\) to \(\delta\), where an additional term in the denominator of the \(\mathcal{f}\) relation, arising from the centroid shift, does not appear in the okura et al. analysis.
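to make the moment notation concrete, here is a minimal sketch (ours; the dictionary interface is an arbitrary choice) that measures the unweighted, centered moments of a postage stamp up to 4th order; a gaussian weight, as introduced below, would simply multiply `img` before the sums:

```python
import numpy as np

def unweighted_moments(img, order=4):
    """Centered, flux-normalized moments q[(p, q)] = <dx**p dy**q>,
    for 2 <= p+q <= order, measured about the image centroid."""
    y, x = np.indices(img.shape, dtype=float)
    flux = img.sum()
    xc, yc = (img * x).sum() / flux, (img * y).sum() / flux
    dx, dy = x - xc, y - yc
    return {(p, n - p): (img * dx**p * dy**(n - p)).sum() / flux
            for n in range(2, order + 1) for p in range(n + 1)}
```

for example, `unweighted_moments(stamp)[(3, 0)]` corresponds to \(q_{111}\) in the index notation of the text.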
bacon and goldberg (2005) show that a flexion induces a shift in the centroid proportional to the quadrupole moments. in order to correctly invert the holics, this term needs to be incorporated explicitly. the simplicity of the extra term results from an approximation of near circularity. the beauty of this approach is that it gives us a very intuitive feel for what flexion means in an observational way. we thus introduce the term "skewness" for the intrinsic property of a galaxy as measured from equation ([eq:skewness]), whether or not the galaxy is otherwise circular, and whether or not it is lensed. the skewness may be thought of as the intrinsic property, much as the "ellipticity" is the intrinsic property related to the "shear." likewise, the intrinsic property associated with equation ([eq:arciness]) will be referred to as the "arciness." in reality, however, equations ([eq:skewness]) and ([eq:arciness]) are not sufficient to perform a flexion estimate even if a galaxy has an ellipticity of only a few percent. okura et al. provide a general relationship between estimators for flexion and holics, though the relation is best expressed in matrix form:

\[
\left(\begin{array}{c} \zeta \\ \delta \end{array}\right)
= m \left(\begin{array}{c} \mathcal{f} \\ \mathcal{g} \end{array}\right),
\]

where \(m\) is a matrix consisting of elements proportional to sums of higher-order moments and centroid-shift terms; the former can be found by explicitly expanding the expressions in okura et al., and the latter are again derived from the shift in the centroid. for the convenience of the reader, we write out the explicit form of \(m\) in appendix a. it may be seen by examining the elements of \(m\) why this inversion must be done explicitly for even mildly elliptical sources. for fully circular sources, it may be seen by inspection that \(m\) is diagonal. however, when a source has an ellipticity even as small as a few percent, it can be shown that the off-diagonal elements become non-negligible, and thus equations ([eq:skewness]) and ([eq:arciness]) are no longer even approximately correct. the application of the holics technique would be trivial if there were no measurement noise. in the presence of noise, and especially when the sky dominates, measurement of unweighted moments is inherently quite noisy.
in a case where we are measuring the 3rd and 4th moments, it is even more so. kaiser, squires & broadhurst (1995; see also a nice review by bartelmann & schneider, 2001) developed perhaps the most comprehensive approach to dealing with the second moments (the ellipticity) under noisy observing conditions and with a (potentially anisotropic) psf. our approach is similar. we have only worked with a gaussian window thus far, but the approach is generalizable to any circularly symmetric weighting. we thus define a window function:

\[
w(\vec{\theta}) = \frac{1}{2\pi\sigma_w^2}\, e^{-|\vec{\theta}|^2 / 2\sigma_w^2} ,
\]

where the origin is taken to be the center of light, and the integral is normalized to unity. further, we define the weighted moments as, for example:

\[
q^{(w)}_{ij} = \frac{\int f(\vec{\theta})\, w(\vec{\theta})\,\theta_i\,\theta_j\;\mathrm{d}^2\theta}{\int f(\vec{\theta})\, w(\vec{\theta})\;\mathrm{d}^2\theta} ,
\]

and we can thus redefine all holics and moments similarly. we have found through experimentation (see below) that, for a sky-noise-limited source, a reasonable value of \(\sigma_w\) is 1.5 times the half-light radius. if we were to simply replace all elements in equation ([eq:fsolve]) with their weighted counterparts, we would not get an unbiased estimate of the flexion. there are two corrections. one has to do with the fact that the centroid shift will differ from the unweighted case to the weighted case. consider an extreme scenario in which the window width is arbitrarily small and in which the unlensed image was circularly symmetric with a peak at the center. in that case, the centroid will essentially remain at the center (peak brightness) even if the unweighted moments shift. thus, compared to the unweighted moments, the centroid shift acquires terms involving gradients of the window function, where we have used the explicit fact that, for a gaussian,

\[
\nabla w(\vec{\theta}) = -\frac{\vec{\theta}}{\sigma_w^2}\, w(\vec{\theta}) .
\]

the other correction has to do with the fact that, though lensing preserves surface brightness, it does not preserve total flux. this is normally accounted for by the jacobian of the coordinate transformation. however, when considering a window function, we need to consider that transformation explicitly, as used by okura et al. (2006), where we simply multiply both sides by the window factor; in this context, the integration variable refers to the image coordinate in the source plane. ignoring the terms proportional to shear (which cannot be directly addressed by this method at any rate), we obtain an approximate relation in which the weighting produces an extra term. note that this latter term contains an odd number of position elements, and thus, coupling to the generating equations for the holics, it produces contributions from even moments, which, in turn, must be corrected for. we may thus write corrected estimators, whose explicit expressions can also be found in appendix a. as with our discussion of shapelets, above, we must also consider psf deconvolution in our holics pipeline. we define the psf function \(g\) in equation ([eq:psfdef]), and all unweighted moments of the psf are denoted by \(q^{(psf)}_{ij}\), etc.
in principle, because of the higher signal-to-noise of the psf, its unweighted moments are easier to estimate than the moments of the detected image. while we argued, above, that the flexion induced by a psf is likely to be small, it is still the case, as with shear, that the psf will reduce the measured flexion. let us first consider the case in which we were able to measure the unweighted moments of both the psf and the observed image. it is straightforward to show that the second- and third-order centered moments simply add under convolution, so that the unconvolved moments may be computed by subtraction:

\[
q_{ij} = q^{(obs)}_{ij} - q^{(psf)}_{ij} , \qquad
q_{ijk} = q^{(obs)}_{ijk} - q^{(psf)}_{ijk} ,
\]

which yields equation ([eq:quadrel]). similarly, a relation may be derived for the fourth moments, with additional cross terms between the image and psf quadrupoles, provided we assume the psf is nearly circular. if we further look only at nearly circular sources, then we may estimate the flexion using the forms in equations ([eq:skewness]) and ([eq:arciness]). again, assuming unweighted moments, and zero psf and intrinsic flexion, we find a correction of the form

\[
\mathcal{f} = c_{\mathcal{f}}\;\tilde{\mathcal{f}} ,
\label{eq:fcorrect}
\]

where \(\mathcal{f}\) is an unbiased estimate of the flexion, and \(\tilde{\mathcal{f}}\) is the estimated flexion if one does not include the correction for the psf. the normalization constant \(c_{\mathcal{f}}\) may be estimated directly from combinations of the psf 2nd and 4th moments and the unweighted moments of the image. since this term represents something like the overall radial profile of the source, the unweighted moments can be estimated even under noisy conditions. similarly, the second flexion may be estimated with an analogous correction factor. though we have derived these relations for a nearly circular source, we have found they provide a good correction even when the psf and intrinsic image size are comparable, and when the ellipticities of the source image are substantial.

which approach is better, shapelets or holics? from a signal perspective, the shapelets technique is better. it is designed to provide optimal weighting and return optimal signal-to-noise. moreover, as described above, inversion of the psf is a straightforward and well-designed process. in the absence of noise, the two techniques produce very similar results. on the other hand, the holics technique has several practical advantages, especially for large surveys. for one, the holics code is typically much faster than shapelets: for an n-pixel image, the holics technique requires only a fixed number of passes over the pixels, whereas the cost of the shapelets decomposition grows much more steeply with object size. additionally, some values of \(\beta\) produce very bad reconstructions, and hence minimization of \(\chi^2\) can be time-consuming and may not converge to a minimum. as a simple test, we created images with brightness profiles of the form

\[
f(\theta) = f_0\,\exp\left[-\left(\frac{\theta}{\theta_0}\right)^{n}\right],
\]

and, though we found similar results for a reasonable range of exponents, the results presented below are for a single representative exponent. we have used a constant source ellipticity, typical of those observed in the field, and had measurement errors which were dominated by sky brightness.
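as an illustration of this kind of test image (entirely our own sketch: the exponent, image size, profile scale and noise level are illustrative choices, not the values used in the paper), one can exploit the fact that lensing conserves surface brightness, sampling the radial profile at the flexed source-plane position of every pixel via `lensed_to_source` from the earlier sketch, and then adding sky noise:

```python
import numpy as np

def make_test_image(A, D, n_pix=64, theta0=6.0, expn=1.0, sky_rms=0.01,
                    seed=1):
    """Toy lensed test image: the observed value at pixel position theta
    is the source profile evaluated at beta = lensed_to_source(theta, A, D),
    since surface brightness is conserved by lensing."""
    rng = np.random.default_rng(seed)
    img = np.empty((n_pix, n_pix))
    for iy in range(n_pix):
        for ix in range(n_pix):
            theta = np.array([ix - n_pix / 2.0, iy - n_pix / 2.0])
            bx, by = lensed_to_source(theta, A, D)
            img[iy, ix] = np.exp(-(np.hypot(bx, by) / theta0) ** expn)
    return img + rng.normal(0.0, sky_rms, img.shape)
```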
in each case, we had no intrinsic arciness or skewness (that is, the flexion of the unlensed objects was zero), since our aim was to measure the response of each of the estimators to lensing. we then artificially lensed each of our simulated images, added sky noise, and measured the flexion using both the holics and shapelets techniques. the noise is fixed throughout this discussion, as is the strength of the flexion signal. it is clear, however, that all relevant signal-to-noise values will scale linearly with the strength of the lensing signal and inversely with the sky noise. our first question is: what is the optimal value of the window scale, such that the weighted estimator comes closest to the true flexion? ideally, we would like an unbiased estimator of the flexion which also has very little scatter. it is clear that the larger the window, the larger the scatter will be (in general), since we will be measuring more and more of the noisy sky. however, the smaller the window, the less accurate will be our measure of the real shape of the galaxy. figure [fg:findc] bears this out. there is an optimal value of around 1.5 (in units of the half-light radius), which reflects a balance between minimizing measurement errors and any measurement bias inherent in the technique. with shapelets, we find a systematic underestimate of 11% in the first flexion, and an overestimate of 12% in the second flexion. we find a scatter of about 12% in both. this is very similar in magnitude to the results found by an "optimal" holics analysis. since both holics and shapelets give similar measurement errors at fixed sky noise, it is worth considering whether we expect measurement errors between the two techniques to be correlated. even in these idealized circumstances, uncorrelated errors would mean that there is significant information in the images which is not being used. in fig. [fg:corrsim], we show the correlation in uncertainty between our holics estimates and our shapelets estimates. for the first flexion, in particular, the correlation is quite high; the correlation in measurements of the second flexion is much lower. why don't they have perfect correlation? the two techniques weight various components of the signal (and thus, the noise) differently, and therefore have a slightly different response to the noise. this general trend is borne out with observed objects as well, in which we see much higher correlation between measurements of the first flexion than of the second flexion between the two techniques. finally, we can simulate psf deconvolution. using a gaussian psf with a characteristic size somewhat larger than the intrinsic image (the correction factor described in equation ([eq:fcorrect]) is 2.7), we distorted and then recovered the flexion estimates from images of increasing intrinsic ellipticity. this analysis is done in the absence of sky noise, and thus any errors in shape recovery represent a systematic effect. we show the fractional errors in measurement of the first and second flexion in figure [fg:psf].
since it is possible to estimate the systematic error for a combination of measured shear and psf shape, it is advisable for those wishing to make high-precision flexion measurements to take this empirical correction into account. we find that, despite the fact that the psf correction is based on an assumption of circularity, it continues to produce a good result even if the image has an intrinsic ellipticity as high as 0.3. we also compare the two approaches to flexion inversion on real objects. our data consist of 4 hst acs cosmic-ray-rejected (crj) images of abell 1689 using the f625w wfc filter (hereafter "r-band"). each image was taken by h. ford during hst cycle 11, and has an exposure time between 2300-2400 seconds. the observations are described in detail in broadhurst et al. (2005). using the swarp software package, these four images were co-added to create a single "full" r-band image. we also generated 2 independent "split" images for comparison purposes by combining only two of the original images each. the images are background-subtracted, aligned and re-sampled, then projected into subsections of the output frame using a gnomonic (or tangential) projection, and combined using median pixel values. each image undergoes a primary sextractor run designed to detect only the foreground objects (cluster members and known stars). this detection is carried out using the cross-correlation utility in sextractor, which allows us to specify the locations of the foreground objects. our foreground object catalog was generated using a combination of spectroscopically confirmed cluster members (duc et al. 2002) and identification by eye of foreground objects that were later confirmed as such by use of the nasa/ipac extragalactic database (ned), as well as clearly identifiable stars in the field. these objects are then masked out as described previously, and a second sextractor run carried out. a catalog of objects is then generated, using only those objects that were detected in both of the split r-band images. we measure the flexion in our catalog objects using both the truncated shapelets method (described above) and the holics approach, and then compare the measurements by computing pearson correlation coefficients between the different estimates in the full image. we also compute correlation coefficients between measurements taken using the same technique in the two split images. this gives us an estimate of the robustness of each measurement technique. when computing the correlation coefficients, we include only objects of sufficient size in pixels, and consider only the brightest half of our catalog objects. in order to exclude extreme or erroneous measurements, we also impose upper limits of 0.2 and 0.5 on the measured flexion amplitudes. figure [fullcorr] shows a comparison of the holics and shapelets estimates of flexion in the full image. both the first and second flexion show a positive correlation, with a pearson correlation coefficient of 0.17 for the first flexion and 0.12 for the second. additionally, both methods yield similar standard deviations for both first and second flexion. this is what we expected from our simulated results above.
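for reference, the correlation statistic quoted throughout is the standard pearson coefficient; a minimal sketch of such a catalog comparison (the cut value shown is our guess at how the threshold above was applied):

```python
import numpy as np

def flexion_correlation(f_a, f_b, fmax=0.2):
    """Pearson correlation between two flexion catalogues of the same
    objects, after discarding extreme measurements in either one."""
    f_a, f_b = np.asarray(f_a), np.asarray(f_b)
    keep = (np.abs(f_a) < fmax) & (np.abs(f_b) < fmax)
    return np.corrcoef(f_a[keep], f_b[keep])[0, 1]
```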
clearly, if flexion represents any real signal, the two techniques should be correlated, and, as we showed in our simulated results, the correlation in first flexion is higher than in second flexion. but the correlation in our measured results is lower than in the simulated ones. in part, this is due to a relatively noisy field; we have found that selecting on brighter magnitudes and larger objects improves the correlation somewhat. in part, however, this is due to what we mean by "flexion." recall that the shapelets and holics analyses of flexion involve weighting different modes in different ways. real, unlensed galaxies will have odd modes which are not necessarily correlated in a simple or obvious way. lensing, of course, produces a significant correlation, and thus a population of significantly lensed objects (for which the majority of the flexion is due to lensing) would be expected to have a more correlated flexion. this is similar to the case with weak shear analysis, in that the s/n from a typical object is usually less than 1. we can test this hypothesis directly by comparing the measurements in the split images and estimating the flexion in both using the same technique. any discrepancies between the two ought to be the result of photon noise rather than intrinsic complexity in the structure of the 3rd moments. figure [split_moments] shows a comparison of the holics measurements made on each of the split images. these measurements are well correlated: the pearson correlation coefficient here is 0.37 for the first flexion and 0.23 for the second. in figure [split_shapelets], we see a comparison of the shapelets measurements in these images, which appear to be more strongly correlated, particularly for the first flexion; the pearson correlation coefficients here are 0.58 for the first flexion and 0.18 for the second. as motivated above, most of the "noise" in our measurements comes from the intrinsic distribution of flexion within our sample. indeed, using the holics approach, we find a broad distribution of measured first flexion; the distribution function may be seen in fig. [fg:fhist]. note that this result includes noise. however, we may estimate the relative effect of photon noise on this scatter by using the correlation between frames: the intrinsic variance may be estimated as the measured variance scaled by the frame-to-frame correlation. thus, our best estimate of the intrinsic scatter in first flexion is consistent with that found in goldberg & bacon (2005), and the combination of flexion and object size represents a dimensionless term, and thus is independent of distance. it should also be noted that, since these measurements are taken within a cluster, the lensing signal is included as well, and one might question whether it is reasonable to estimate the intrinsic variability of flexion from lensed images. the intrinsic scatter in flexion was originally measured in goldberg & bacon (2005), and we merely confirm the result here. however, this is a reasonable thing to do, as flexion drops off much more quickly than shear, and thus, even within a rich cluster, the flexion signal is dominated by individual galaxies. even at small separations, the flexion from even a very massive galaxy acting on a background source is about 0.05, approximately the level of the intrinsic flexion. such separations are relatively rare, however.

we have endeavored to present a detailed guide to measuring flexion in real observations, with a focus on space-based imaging. in the process, we have taken a look at two different approaches to measuring flexion, shapelets and holics, with an eye toward which approach is "better."
from an idealized perspective of maximal signal-to-noise, the answer is simple: shapelets produces a mode-by-mode comparison which optimally averages to produce a unique estimate of flexion. however, this result is complicated somewhat in two limits: blending, which affects larger objects, and psf convolution, which affects smaller ones. when images are blended, it is clear that we benefit by giving extra weight to those pixels near the center of the object. in that sense, holics can be said to produce more robust results. likewise, despite an explicit psf deconvolution algorithm, applying the flexion inversion using shapelets results in the inclusion of small-scale power which has been blended away by the atmosphere or the instrument. we have discussed, above, how this might be alleviated by using only relatively low-order modes from the reconstruction in the estimate of flexion. however, doing so comes at the expense of some (but by no means all) of the signal-to-noise advantage of shapelets. indeed, even a relatively truncated form of the shapelets analysis still produced greater correlation between independent images of the same objects, and thus cleaner estimates of the flexion. however, one complication in the shapelets analysis is producing a good shapelets decomposition in the first place. while r. massey's shapelet code comes with an optimization routine to find the best-fit scaling parameter \(\beta\), the shapelet decomposition runs several orders of magnitude slower than holics. for very large lensing fields, this may prove a significant limitation, and thus holics provides a fast, physically motivated, reasonably reliable alternative.

this work was supported by nasa atp nng05gf61g and hst archival grant 10658.01-a. the authors would like to gratefully acknowledge useful conversations with jason haaga, david bacon & sanghamitra deb, and to thank richard massey for thoughtful comments and the use of his shapelets code. we would also like to thank the anonymous referee, whose comments greatly improved the final draft.

references:

bacon, d. j., goldberg, d. m., rowe, b. t. p. & taylor, a. n., 2006, mnras 365, 414
bartelmann, m. & schneider, p., 2001, physics reports 340, 291
bertin, e. & arnouts, s., 1996, a&as 117, 393
broadhurst, t., et al., 2005, apj 621, 53
duc, p., et al., 2002, a&a 382, 60
goldberg, d. m. & bacon, d. j., 2005, apj 619, 741
goldberg, d. m. & natarajan, p., 2002, apj 564, 65
irwin, j. & shmakova, m., 2005, newar 49, 83
irwin, j. & shmakova, m., 2006, apj 645, 17
kaiser, n., squires, g. & broadhurst, t., 1995, apj 449, 460
krist, j., 1993, aspc 52, 536
okura, y., umetsu, k. & futamase, t., 2006, submitted to apj, http://xxx.lanl.gov/abs/astro-ph/0607288
refregier, a., 2003, mnras 338, 35
refregier, a. & bacon, d. j., 2003, mnras 338, 48
rix, h., et al., 2004, apjs 152, 163
york, d. g., et al., 2000, aj 120, 1579

appendix a. in equation ([eq:fsolve]), we state that the flexion may be solved for via inversion of the relation between the vector of holics and the vector of flexion estimators, where the matrix \(m\) consists of combinations of the 2nd through 4th moments. if we apply a gaussian weighting with width \(\sigma_w\) to our moment measurements, then \(m\) should be computed using the weighted moments. in addition, the correction terms arising from the weighted centroid shift and from the flux-conservation factor, as derived in the text, must be added.
we describe practical approaches to measuring flexion in observed galaxies. in particular, we look at the issues involved in using the shapelets and holics techniques as means of extracting 2nd-order lensing information. we also develop an extension of holics to estimate flexion in the presence of noise and with a nearly isotropic psf. we test both approaches on simple simulated lenses, as well as on a sample of possible background sources from acs observations of a1689. we find that, because noise is weighted differently in the shapelets and holics approaches, the correlation between measurements of the same object is somewhat diminished, but the two methods produce similar scatter due to measurement noise.
a relativistic fluid approach has been applied to various high-energy phenomena in astrophysics, nuclear physics, and hadron physics, yielding many interesting and outstanding results. in particular, recent relativistic hydrodynamic analyses revealed a new and interesting feature of the quark-gluon plasma (qgp) in high-energy heavy-ion collisions. since the relativistic heavy-ion collider (rhic) at brookhaven national laboratory (bnl) started operation in 2000, a number of discoveries have been made, providing insight into the quantum chromodynamics (qcd) phase transition and the qgp. one of the most interesting and surprising outcomes at rhic was the production of the strongly interacting qgp (sqgp), which was confirmed by both theory and experiment. the highlights are: (i) strong elliptic flow, which suggests that collectivity and thermalization are achieved; (ii) strong jet quenching, which confirms that hot and dense matter is created after collisions; (iii) the quark-number scaling of elliptic flow, which indicates that a hot quark soup is produced. relativistic hydrodynamic models have made a significant contribution to these achievements. for example, at the time, only hydrodynamic models could explain the strong elliptic flow at rhic, which was considered to be direct evidence for the production of the sqgp at rhic. because of the success of the relativistic hydrodynamic model at rhic, hydrodynamic analysis has become a useful and powerful tool for understanding the dynamics of hot and dense matter in high-energy heavy-ion collisions. in the early stage of the hydrodynamic studies at rhic, viscosity effects were not taken into account. however, detailed analyses of experimental data in relativistic heavy-ion collisions gradually revealed the limitations of ideal hydrodynamic models. in ref., for the first time, quantitative analyses of elliptic flow were performed with a relativistic viscous hydrodynamic model. the authors showed that ideal hydrodynamics overestimates elliptic flow as a function of transverse momentum, and that a hydrodynamic calculation with finite viscosity explains the experimental data better. since then, the main purpose of phenomenological studies of relativistic heavy-ion collisions at rhic and the lhc has been to obtain detailed information on the bulk properties of the qgp, such as its transport coefficients. besides, recent high-statistics experimental data at rhic and the lhc demand more rigorous numerical treatment in hydrodynamical models. recently, both at rhic and the lhc, higher harmonic anisotropic flow, i.e., the fourier coefficients of the particle yield as a function of azimuthal angle, has been reported. one of the origins of the higher harmonics is event-by-event fluctuations. to obtain precise values of the transport coefficients with relativistic viscous hydrodynamics, we need to choose an algorithm with small numerical dissipation and treat the inviscid part with care. usually each algorithm has advantages or disadvantages in terms of coding, computational time, precision and stability. thus far, unfortunately, only limited attention has been paid to numerical aspects in hydrodynamic models for high-energy heavy-ion collisions.
in this article, we present a state-of-the-art algorithm for solving the relativistic viscous hydrodynamics equation with the qcd equation of state (eos). our applications require a numerical scheme that can treat a shock wave appropriately and has less numerical dissipation, in order to gain a comprehensive understanding of recent high-energy heavy-ion collision physics. these advantages can be achieved by implementing a riemann solver for relativistic ideal hydrodynamics. in particular, we propose a new riemann solver for the qcd eos at low baryon density, a regime which has not been considered in astrophysical applications, where the baryon density is usually much higher. we derive our riemann solver by analytically solving the relativistic riemann problem at low baryon density, within the approximation scheme proposed in ref. . as we will see in section [sec:num_test], where we perform several numerical tests, our new algorithm with the riemann solver has an advantage over other algorithms such as kurganov-tadmor (kt), nessyahu-tadmor (nt) and shasta from the point of view of analyses for current relativistic heavy-ion collisions. by implementing our new riemann solver for relativistic ideal hydrodynamics in a numerical scheme for causal viscous hydrodynamics recently proposed in ref. , we can also construct a new algorithm for causal viscous hydrodynamics for the qgp.

this article is organized as follows. in section [sec:hydro], we review current hydrodynamic models for relativistic heavy-ion collisions and introduce the basics of relativistic hydrodynamics. in section [sec:qcd_eos], we explain the qcd eos at high temperature and low baryon density based on the latest lattice qcd calculations. in section [sec:riemann], we propose a new riemann solver for the ideal fluid with the qcd eos at high temperature and low baryon density. in section [sec:num_test], using the numerical scheme, we show results of several numerical tests, such as sound wave propagation, as well as shock tube and blast wave problems. section [sec:sum] is devoted to summary and discussions. in this article, we adopt natural units in which the speed of light in vacuum, the boltzmann constant, and planck's constant are set to unity, $c = k_B = \hbar = 1$.

first we list current hydrodynamic models applied to relativistic heavy-ion collisions in tables [table:ideal-hydro] and [table:viscous-hydro]. here we mention the key aspects of numerical simulations in relativistic hydrodynamic models, which are classified into ideal versions and viscous ones. one of the important ingredients of hydrodynamic models is an eos, needed for solving the relativistic hydrodynamics equation. different types of physics related to the qcd phase transition can be input into the eos. from comparison between hydrodynamic calculations and experimental data of high-energy heavy-ion collisions, information on the qcd phase diagram is obtained through the eos used in the hydrodynamic calculation. the bag-model-type eos with a first-order phase transition has been widely used in relativistic hydrodynamic models because of its simplicity and the earlier lack of conclusive results on the eos of qcd. in recent hydrodynamical calculations, lattice-inspired eos have begun to be employed, thanks to the progress of thermodynamical analyses based on first-principles lattice qcd simulations.

table [table:ideal-hydro] (ideal hydrodynamic models):
model & dimensions & eos & numerical scheme
hama et al. & 3+1 & bag model & sph
hirano et al. & 3+1 & bag model & ppm
nonaka and bass & 3+1 & bag model & lagrange
hirano et al. & 3+1 & lqcd & ppm
petersen et al. & 3+1 & lqcd & shasta
karpenko and sinyukov & 3+1 & lqcd & hlle
holopainen et al. & 2+1 & lqcd & shasta
pang et al. & 3+1 & lqcd & shasta

table [table:viscous-hydro] (viscous hydrodynamic models):
model & dimensions & eos & numerical scheme
romatschke and romatschke & 2+1 & lqcd & cd
luzum and romatschke & 2+1 & lqcd & cd
schenke et al. & 3+1 & lqcd & kt
song et al. & 2+1 & lqcd & shasta
chaudhuri & 2+1 & bag model & shasta
bozek & 3+1 & lqcd & cd

another important ingredient in hydrodynamical models is the numerical scheme for solving the relativistic ideal and viscous hydrodynamical equations. historically, in analyses of high-energy heavy-ion collisions, only the physical conditions, such as initial conditions, the eos and the termination conditions of the hydrodynamic expansion, have been discussed. however, because of the nonlinearity of the relativistic hydrodynamics equations, even if we use the same physical conditions, different numerical schemes can give us different numerical solutions. furthermore, when we start to investigate viscosity effects and event-by-event fluctuations in recent high-statistics experimental data, we need to choose suitable numerical schemes carefully. for the numerical stability of a hydrodynamic calculation, numerical dissipation is needed; therefore, in order to evaluate the physical viscosity in high-energy heavy-ion collisions, we need to avoid or control the effect of numerical dissipation in the numerical relativistic viscous hydrodynamic calculation. accurate numerical schemes can be found among those with riemann solvers for relativistic ideal hydrodynamics (and references therein). the riemann solver is a method to calculate the numerical flux by using the exact solution of the riemann problems at the interfaces separating numerical grid cells, and it can be used to describe flows with strong shocks and sharp discontinuities stably and highly accurately.

here, we mention the basics of hydrodynamics briefly. the relativistic hydrodynamics equations are given by the conservation laws of energy, momentum and baryon number:

\partial_\mu T^{\mu\nu}(x) = 0 , \qquad \partial_\mu j_B^\mu(x) = 0 ,

where $T^{\mu\nu}$ is the energy-momentum tensor and $j_B^\mu$ is the baryon current. throughout this paper, we use cartesian coordinates where the metric tensor is given by $g^{\mu\nu} = \mathrm{diag}(1,-1,-1,-1)$. in the case of a relativistic ideal fluid, the energy-momentum tensor and baryon current are given by

T^{\mu\nu}(x) = \left[ e(x) + p(x) \right] u^\mu(x)\, u^\nu(x) - p(x)\, g^{\mu\nu} , \qquad j_B^\mu(x) = n_B(x)\, u^\mu(x) ,

where $e(x)$, $p(x)$ and $n_B(x)$ are the proper energy density, pressure and baryon density, which are evaluated in the rest frame of the fluid, and $u^\mu(x)$ (normalized as $u^\mu u_\mu = 1$) is the four-velocity. when the effects of dissipation are included in relativistic hydrodynamics, a rather complicated situation arises. one of the difficulties is that the naive introduction of viscosities as in the first-order theory, in which the entropy current contains no terms higher than first order in the thermodynamic fluxes, suffers from acausality.
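to see the acausality concretely, consider a single diffusive mode; the comparison below is a standard textbook illustration added here for clarity, not an equation from this article. a first-order theory yields a parabolic diffusion equation, whose response to a localized perturbation propagates instantaneously, whereas promoting the dissipative flux to a dynamical variable with relaxation time $\tau$ turns it into a hyperbolic telegrapher equation with a finite signal speed:

\partial_t n = \kappa\,\partial_x^2 n \quad (\text{parabolic, acausal}), \qquad\qquad \tau\,\partial_t^2 n + \partial_t n = \kappa\,\partial_x^2 n \quad \left(\text{hyperbolic, signal speed } \sqrt{\kappa/\tau}\,\right).

the same telegrapher structure reappears later in the text, in the cfl condition for the viscous part of the update.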
in order to avoid this problem, the second - order terms in heat flow and viscosities have to be included in the expression for the entropy , but a systematic treatment of these second - order terms has not yet been established .although there has been remarkable progress toward the construction of a fully - consistent , relativistic viscous hydrodynamical theory for the description of high - energy heavy - ion collisions , there are still ongoing discussions about the formulation of the equations of motion and about the numerical procedures . at first orderthe new structures are proportional to gradients of the velocity field and the baryon number density , and only three proportionality constants appear : the shear viscosity , the bulk viscosity , and the baryon number conductivity . at second order , many more new parameters related to relaxation phenomena , such as relaxation times for each diffusive modes , , and appear .currently , most viscous hydrodynamical calculations use the relativistic dissipative equations of motion that were derived phenomenologically by israel and stewart which are utilized in this work ( see appendix [ sec : is ] ) , and their variants .recently , a second - order viscous hydrodynamics from ads / cft correspondence was derived , as well as a set of generalized israel - stewart equations from kinetic theory via grad s 14-momentum expansion , which have several new terms .however , a qualitatively different first - order relativistic dissipative hydrodynamical scheme was also proposed on the basis of renormalization - group consideration .there are two choices for the local rest frame in a relativistic viscous hydrodynamics equation .one is the eckart frame , where the direction of the four - velocity is the same as that of the particle flux vector .the other is the landau - lifshitz frame , where the direction of the four - velocity is the same as that of energy flux vector .because in high - energy collisions at rhic and lhc the baryon number density is very small ( section [ sec : qcd_eos ] ) , the landau - lifshitz frame is more suitable for qcd at high temperature and low baryon number density .the phase diagram of qcd matter has been investigated for decades . in fig . [fig : phase_diagram ] , a schematic qcd phase diagram is depicted with the axes of temperature and baryon chemical potential . among the six flavors of quarks in the standard model, we only consider the three light flavors of quarks ( up , down , and strange ) with physical quark masses .the phase diagram is characterized by three typical phases : a hadronic phase , a quark - gluon plasma ( qgp ) phase , and a color super - conducting ( csc ) phase . in the hadronic phase , which is realized in the ground state of the qcd hamiltonian ( vacuum state ) , the chiral symmetry of qcd is broken , and quarks and gluons are confined in the hadrons . in the qgp phase , which was realized in the early universe ,the chiral symmetry is restored and quarks and gluons are liberated from the hadrons . in the csc phase , which may be realized inside neutron stars , the quarks on the fermi surface form cooper pairs and are condensed to create a super - conducting state . for further details of the qcd phase diagram , see the review . in ultra - relativistic heavy - ion collisions at the lhc and rhic ,the relevant region in the qcd phase diagram is high - temperature ( - mev ) , low baryon density ( - ) one . 
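the low-density approximation developed in the next paragraphs can be summarized as a taylor expansion of the pressure in the baryon chemical potential; the rendering below is our own, but it follows directly from the charge-conjugation symmetry argument given in the text:

p(T,\mu_B) = p(T,0) + \frac{1}{2}\,\chi_B(T)\,\mu_B^2 + O(\mu_B^4), \qquad n_B(T,\mu_B) = \frac{\partial p}{\partial \mu_B} = \chi_B(T)\,\mu_B + O(\mu_B^3), \qquad \chi_B(T) \equiv \left.\frac{\partial^2 p}{\partial \mu_B^2}\right|_{\mu_B = 0},

where only even powers of $\mu_B$ appear in the pressure because qcd is invariant under charge conjugation.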
in this high-temperature, low baryon density region, there is a transition from the qgp phase to the hadron phase. the transition is a crossover, as confirmed by state-of-the-art lattice qcd simulations, in contrast to the bag eos, which is a phenomenological equation of state with a first-order phase transition and has been widely utilized in previous hydrodynamic models. in this region, we expect that the qcd eos can be approximated by taking into account the leading-order contribution of the finite baryon chemical potential. in other words, due to the charge conjugation (c) symmetry of qcd, the c-even quantities, e.g. pressure, energy density, temperature, and sound velocity, are approximated by those at vanishing baryon chemical potential, while the c-odd quantities, e.g. baryon density and baryon chemical potential, are approximated by the first-order contribution of the chemical potential. note that in this approximation the c-even quantities are independent of $\mu_B$, while the c-odd quantities depend on both $T$ and $\mu_B$ in principle. for example, $n_B(T,\mu_B) \simeq \chi_B(T)\,\mu_B$, where $\chi_B(T)$ stands for the baryon number susceptibility. (the sound velocity squared, $c_{\rm s}^2 = \left.\partial p/\partial e\right|_{s/n_B}$, is defined in terms of differentials of the pressure along the isentropes; when it is positive, the convexity condition is satisfied, which is easily confirmed within our approximation.) although the first-principles lattice qcd simulation is limited to vanishing baryon chemical potential, we can access the thermodynamic properties at low baryon chemical potential by using $\chi_B(T)$ in the above approximation. indeed, combining the results of the state-of-the-art lattice simulations for $\chi_B$ with the typical values of the baryon chemical potential in heavy-ion collisions (a few tens of mev at rhic and about 1 mev at lhc), we can estimate the importance of the next-to-leading-order term in the pressure by taking its ratio with the leading-order term, which yields only a 0.4% correction at rhic and a 0.00075% one at lhc. therefore, we regard this approximation as quantitatively reliable in all the regions of the qgp fireball at both rhic and lhc. in the numerical tests in section [sec:num_test], we will consider an eos for a free gas of gluons (free gas eos) and one for realistic interacting quarks and gluons calculated by lattice qcd simulation (lattice qcd eos), both of which we plot in fig. [fig:eos]. in the free gas eos, we adopt the conformal parameterization $e = 3p$ to make comparison with other numerical schemes possible, with a degeneracy factor of 16 so as to describe effectively gluonic matter without quarks. in the lattice qcd eos, we adopt the parameterization of the trace anomaly $I(T) \equiv e - 3p$ for (2+1) flavors given in eq. (3.1) and table 2 of ref. :

\frac{I(T)}{T^4} = \exp\!\left( -\frac{h_1}{t} - \frac{h_2}{t^2} \right) \left( h_0 + \frac{f_0 \left[ \tanh( f_1 t + f_2 ) + 1 \right]}{1 + g_1 t + g_2 t^2} \right), \qquad t \equiv T / (0.2\,{\rm gev}),

with the fit coefficients $h_0, h_1, h_2, f_0, f_1, f_2, g_1, g_2$ as listed there. we parameterize the baryon number susceptibility by a smooth interpolation fitted to fig. 7 and table 1 of ref. , with fit parameters $a = 0.15$, $t_0 = 167\,{\rm mev}$, and $\delta t = 60\,{\rm mev}$.
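given a parameterization of the trace anomaly such as the one above, the remaining thermodynamic quantities follow from standard thermodynamic integration, $p(T)/T^4 = p(T_0)/T_0^4 + \int_{T_0}^{T} dT'\, I(T')/T'^5$, which uses the identity $I = e - 3p = T^5\, d(p/T^4)/dT$ at $\mu_B = 0$. the sketch below is our own minimal implementation of this identity; the function trace_anomaly and the boundary value p_low_over_T4 are caller-supplied assumptions (e.g. the fit above and a hadron-gas matching value):

import numpy as np

def eos_from_trace_anomaly(trace_anomaly, T_grid, p_low_over_T4=0.0):
    """Build p/T^4 and e/T^4 on an increasing temperature grid from the
    trace anomaly I(T) = e - 3p, using I = T^5 d(p/T^4)/dT at mu_B = 0."""
    I_over_T4 = trace_anomaly(T_grid) / T_grid ** 4
    integrand = I_over_T4 / T_grid                 # = I / T^5
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(T_grid)
    p_over_T4 = p_low_over_T4 + np.concatenate(([0.0], np.cumsum(steps)))
    e_over_T4 = I_over_T4 + 3.0 * p_over_T4        # from I = e - 3p
    return p_over_T4, e_over_T4

the entropy density then follows from $s = (e+p)/T$ at vanishing chemical potential.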
the riemann problem is a classic one-dimensional initial value problem for a fluid with infinitesimal dissipation and plays an essential role in numerical hydrodynamics. since we are interested in qcd matter at extremely high temperatures, we restrict our discussion to relativistic hydrodynamics. the basic equations of relativistic ideal hydrodynamics are the conservation equations for baryon number, momentum, and energy:

\partial_t D + \nabla\cdot\left( D\,\vec{v} \right) = 0 , \qquad \partial_t \vec{m} + \nabla\cdot\left( \vec{m}\,\vec{v} + p\,I \right) = 0 , \qquad \partial_t E + \nabla\cdot\left[ \left( E + p \right)\vec{v} \right] = 0 ,

where $D$, $\vec{m}$ and $E$ are the laboratory-frame densities of baryon number, momentum and energy; $p$ and $\vec{v}$ are the pressure and flow vector; and $I$ is the identity matrix. the relations between the conservative variables and the primitive variables are

D = n_B\,\gamma , \qquad \vec{m} = (e+p)\,\gamma^2\,\vec{v} , \qquad E = (e+p)\,\gamma^2 - p ,

where $\gamma = 1/\sqrt{1-\vec{v}^{\,2}}$ and the pressure $p$ is given by the qcd eos. the initial condition of the riemann problem is given by two uniform states separated by a discontinuity surface at $x = 0$. the exact solution to this problem is constructed from three types of flows: a shock wave, a rarefaction wave, and a contact discontinuity. in the solution they evolve self-similarly, and the wave structure depends only on $x/t$ (self-similar flow). a shock wave is a discontinuous surface moving at a constant velocity $v_{\rm s}$, across which the physical states are related by the rankine-hugoniot jump conditions, $v_{\rm s}\,[\,U\,] = [\,F(U)\,]$ for the conserved variables and their fluxes, i.e.

v_{\rm s}\,[D] = [D\,v_x] , \qquad v_{\rm s}\,[m_x] = [m_x v_x + p] , \qquad v_{\rm s}\,[m_{y,z}] = [m_{y,z}\,v_x] , \qquad v_{\rm s}\,[E] = [(E+p)\,v_x] ,

where $[q] \equiv q - q_s$ denotes the jump of a quantity across the front. in the limit of vanishing jumps, $[q] \to 0$, the linearized sound wave solution can practically be regarded as the exact solution, which is the case for the setups used in fig. [fig:l1norm]. we analyze the precision of our numerical scheme and its dependence on the cell size by calculating the l1 norm for the pressure after one cycle,

L_1 = \sum_i \left| p_{\rm num}(x_i) - p_{\rm exact}(x_i) \right| \delta x ;

we expect the scaling $L_1 \propto (\delta x)^2$ after one cycle, since our numerical scheme is of second-order accuracy with respect to the space and time discretization. (the scaling is independent of the wavelength and the sound velocity: as far as the linear approximation to the original full hydrodynamics equation works, any sound wave problem is mapped onto a single problem by rescaling space and time, and since the courant number is fixed in this analysis, so is the number of time steps after one cycle; the precision is therefore independent of the wavelength and the sound velocity.) the results of the l1 norm for the free gas eos and the lattice qcd eos are shown in fig. [fig:l1norm]. we indeed find the $(\delta x)^2$ scaling for both equations of state, which is consistent with the theoretical expectation. we repeat the same analysis of the sound wave propagation using the shasta algorithm for relativistic ideal hydrodynamics; in this calculation, we only adopt the free gas eos. the result for the l1 norm is shown in fig. [fig:l1norm_shasta]. we find that the numerical accuracy is quite sensitive to the choice of the anti-diffusion parameter in the code. with the default anti-diffusion parameter 1, the shasta scheme not only exhibits second-order accuracy but also has accuracy quantitatively similar to that of the algorithm based on our riemann solver. on the other hand, with the values 0.99 and 0.8 the shasta scheme exhibits only first-order accuracy, and the l1 norm is quite large compared to that with the default value at the same grid size.
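the l1-norm accuracy and convergence analyses used throughout these tests can be condensed into two helper functions; this generic sketch (uniform grid, pointwise analytic reference) is our own and not the article's code:

import numpy as np

def l1_norm(numerical, exact, dx):
    """Discrete L1 error between a numerical profile and a reference:
    sum_i |q_i - q_ref(x_i)| * dx."""
    return np.sum(np.abs(numerical - exact)) * dx

def observed_order(err_coarse, err_fine, refinement=2.0):
    """Observed order of accuracy from errors at two resolutions that
    differ by `refinement`; a value near 2 signals second-order convergence."""
    return np.log(err_coarse / err_fine) / np.log(refinement)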
the anti-diffusion parameter is introduced to reduce the numerical dissipation, and its default value 1 minimizes the numerical dissipation due to the finite cell size when the system is smooth. however, the numerical accuracy of a scheme must be discussed together with the stability required by the problem to be solved; this will be discussed in the next numerical test, the shock tube problem. the simulation of sound wave propagation can also be utilized to estimate the numerical dissipation of the scheme. since any numerical scheme introduces tiny numerical dissipation, the sound wave in the simulation is attenuated even without physical viscosity. the amount of numerical dissipation is evaluated as the value of the physical shear viscosity which would give the same amount of sound wave attenuation in the linearized regime. by linear analysis, the dispersion relation of the sound mode in viscous hydrodynamics with shear viscosity $\eta$ only is

\omega = c_{\rm s}\,k - \frac{i}{2}\,\frac{4\eta/3}{e+p}\,k^2 ;

note that the dispersion relation is independent of the relaxation time for the shear mode in the long wavelength limit. the amplitude of a sound wave with wavelength $\lambda$ is therefore decreased by the factor $\exp\!\left[-\frac{2}{3}\frac{\eta}{e+p}k^2 t\right]$, with $k = 2\pi/\lambda$. using the measured attenuation together with the simulation parameters, we extract the same estimate of the numerical dissipation for both eoss. in fig. [fig:attenuation], we show the numerical results of sound wave propagation for causal viscous hydrodynamics with the free gas eos. in this calculation, we choose finite values of the shear viscosity and of the relaxation time, and set the initial condition to a single sound mode. in the left panel, we show our numerical result for the sound wave attenuation due to both physical and numerical viscosities; it shows that the measured attenuation converges to the physical one as the grid is refined at fixed viscosity, and that the convergence is faster at larger viscosity. this tendency is due to the numerical dissipation: when the physical viscosity is large, the discretization effect is expected to be overwhelmed by the physical viscosity. in order to disentangle the physical and numerical viscosities, we also calculate an l1 norm from which the contribution of the sound wave attenuation due to the physical viscosity is eliminated. the result is shown in the right panel. we find that this l1 norm does not depend much on the physical viscosity, which indicates that the numerical dissipation scales as $(\delta x)^2$; due to this difference, the l1 norm eventually saturates once the physical attenuation dominates. in our numerical scheme, the numerical dissipation behaves as $\eta_{\rm num} \sim \left( c_{\rm s}\, s\, T / \lambda \right) (\delta x)^2$, where $s$ is the entropy density; evaluating this expression at the typical temperature and system length scale of relativistic heavy-ion collisions gives a small but nonvanishing $\eta_{\rm num}/s$ for commonly used grid sizes. this condition becomes more severe at higher temperature or when finer structure is of interest. we emphasize that an appropriately fine grid size is indispensable for any physical observable in heavy-ion collisions, if one is to discuss the value of the physical viscosity from comparison with experimental data. the shock tube problem is analytically solvable for a perfect fluid with the free gas eos. it provides an important test for measuring the performance and accuracy of different numerical schemes. to compare our numerical algorithm with other numerical schemes (shasta, nt, kt) and with the analytical solution, we start the test calculation with the same initial conditions as those of ref. , in which the initial temperature on the left is higher than that on the right. in the calculation we employ the free gas eos. the spatial cell size and the courant number, respectively, are set to common values for all the schemes.
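to make such a setup concrete, the two uniform states can be prepared as follows. this sketch is ours, using the free gas eos with a gluonic degeneracy factor of 16 ($e = 3p = 16\,\pi^2 T^4/30$); the temperatures and grid size below are placeholders rather than the article's values (the strong-shock pair (450 mev, 170 mev) discussed later is one relevant case):

import numpy as np

HBARC = 0.19733  # GeV fm

def free_gas_energy_density(T_gev, dof=16.0):
    """Stefan-Boltzmann energy density of a free gluon gas,
    e = 3p = dof * (pi^2/30) * T^4, converted to GeV/fm^3."""
    return dof * (np.pi ** 2 / 30.0) * T_gev ** 4 / HBARC ** 3

def shock_tube_initial_state(nx=400, x_max=10.0, T_left=0.400, T_right=0.200):
    """Two uniform states at rest separated by a discontinuity at x = 0 (fm)."""
    x = np.linspace(-x_max, x_max, nx)
    T = np.where(x < 0.0, T_left, T_right)
    e = free_gas_energy_density(T)   # GeV/fm^3
    p = e / 3.0                      # conformal free gas eos
    v = np.zeros_like(x)             # both states initially at rest
    return x, e, p, v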
because a numerical calculation with a fine-enough grid and time step should converge on the analytical solution, using the same discretization for the spatial grid size and the time step is important for the accuracy testing of numerical methods. [fig. [fig:shock-tube-test]: shock tube profiles of (a) the energy density distribution, (b) the velocity, and (c) the invariant expansion rate, computed with our algorithm (solid line), kt (dotted line), nt (dash-dotted line), and shasta (dashed line).] fig. [fig:shock-tube-test] shows the energy density distribution, the velocity and the invariant expansion rate with our algorithm, kt, nt, and shasta, together with the analytical solution for an ideal fluid. for these values, the kt, nt, and shasta algorithms reproduce the analytical solution with almost the same accuracy and numerical artifacts. the difference between the analytical solution and the numerical calculations indicates the existence of numerical dissipation in the numerical schemes. it is worth noting that our numerical results are closer to the analytical solution, especially near the discontinuities, compared to the kt, nt, and shasta algorithms, which suggests that our algorithm contains less numerical dissipation. this tendency appears clearly in the invariant expansion rate in fig. [fig:shock-tube-test]. moreover, only our numerical scheme follows the shape of the analytical solution throughout the region around the shock front. numerical dissipation is indispensable for the stability of numerical calculations of the relativistic hydrodynamical equation; however, too much numerical dissipation smears the numerical results and leads to a solution far off from the analytical one. we evaluate the l1 norm for the shock tube problems using our algorithm and the shasta scheme, which is often used in hydrodynamic models applied to high-energy heavy-ion collisions. the cfl number is set to be 0.4 in the following l1 norm calculation. in fig. [fig:l1norm-shock-tube] (a) the l1 norm errors of the shasta scheme with anti-diffusion parameter 1, 0.99 and 0.8 and of our algorithm are shown. here the initial temperatures on the left and the right are the same as the ones in fig. [fig:shock-tube-test]. we find that the l1 norm of our algorithm is smaller than that of the shasta scheme for each value of the anti-diffusion parameter, which suggests that our algorithm has smaller numerical dissipation compared to the shasta. the difference of the l1 norm between our algorithm and the shasta scheme becomes large as the value of the anti-diffusion parameter decreases. we find that the shasta scheme with the default parameter becomes unstable if the temperature difference between the left and the right becomes large: for a sufficiently large initial temperature difference, the calculation with the shasta at the default parameter does not work. to stabilize the numerical calculation with the shasta, we change the anti-diffusion parameter from 1 to 0.99, which means the introduction of additional numerical dissipation into the shasta.

on the other hand, our algorithm is stable for the same initial temperatures without any additional numerical dissipation. this difference appears in the value of the l1 norm: in fig. [fig:l1norm-shock-tube] (b) we can see that the difference between the l1 norm of our algorithm and that of the shasta scheme becomes larger, compared to the difference between them in fig. [fig:l1norm-shock-tube] (a). furthermore, in the case of (450 mev, 170 mev), the anti-diffusion parameter is set to be 0.8 for the stability of the numerical calculation in the shasta. fig. [fig:l1norm-shock-tube] (c) indicates that the shasta algorithm has large numerical dissipation compared to our algorithm. in analyses of high-energy heavy-ion collisions with hydrodynamic models, such a temperature difference between cells can be realized. for instance, the maximum value of the initial temperature for au+au collisions at rhic is estimated to be 300-600 mev, and in heavy-ion collisions at lhc a higher temperature is achieved. on the other hand, the hydrodynamic picture is applicable as long as the temperature of the system stays above the crossover region of roughly 150 mev. therefore, temperature differences of the size considered in the previous shock tube problems can exist in an initial temperature distribution for high-energy heavy-ion collisions. this fact suggests that a numerical scheme that is stable for strong shock waves while keeping small numerical dissipation is more suitable for the investigation of the physics of high-energy heavy-ion collisions; our algorithm has an advantage over the shasta scheme on this point. [figs. [fig:shock-tube-test-eta] and [fig:shock-tube-test-qcd]: (a) the energy density or pressure distribution, (b) the velocity, and (c) the invariant expansion rate.] if a fine-enough cell size is utilized in the numerical calculation, the distinction among different algorithms becomes small, because the numerical solutions should converge to the analytical one. however, the speed of convergence to the analytical solution varies among different numerical schemes. for example, to analyze the higher harmonics induced by event-by-event fluctuations in experiments, we need to carry out numerical calculations with fluctuating initial conditions, which means that in practice we must accept calculations on coarser grids under current computational resources. hence, depending on the physics application of hydrodynamics, we need to choose an appropriate numerical method for solving the relativistic hydrodynamics equation. besides, in relativistic heavy-ion collisions, one of the interesting and important topics is the investigation of bulk properties of the qgp, such as its transport coefficients. to evaluate the physical viscosities of the qgp from analyses of experimental data based on hydrodynamic models, we need to control the numerical dissipation. the difficulty of distinguishing between the physical viscosity and the numerical dissipation was discussed in ref. .
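the procedure of trading a measured attenuation for an effective viscosity, used above to separate physical and numerical dissipation, can be written down compactly. the helper below simply inverts the damping factor quoted earlier (shear channel only, linearized regime); it is our own sketch, not the article's analysis code:

import numpy as np

def effective_shear_viscosity(A0, A_t, t, wavelength, e_plus_p):
    """Invert A(t) = A0 * exp(-(2/3) * eta/(e+p) * k^2 * t) for eta,
    giving the combined physical-plus-numerical shear viscosity felt
    by a damped sound wave of the given wavelength."""
    k = 2.0 * np.pi / wavelength
    return -1.5 * e_plus_p * np.log(A_t / A0) / (k ** 2 * t)

subtracting the known physical input viscosity from this effective value then leaves an estimate of the numerical dissipation alone.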
for investigation of physical viscosity of qgp ,the algorithm in which the numerical dissipation is well controlled is indispensable .[ fig : shock - tube - test - eta ] shows the shear viscosity dependence of the energy density distribution , velocity and invariant expansion rate . at finite shear viscosity , deviation from the result of the ideal fluid becomes large and the shape of distribution is smeared .we observe the same tendency in finite bulk viscosity and baryon number conductivity calculation .[ fig : shock - tube - test - qcd ] shows the eos dependence of the pressure distribution , velocity and invariant expansion rate . for comparison ,the same initial pressure distribution is employed for both cases .the fact that the sound velocity of lattice qcd eos is smaller than that of the free gas eos ( fig .[ fig : eos ] ) affects expansion rate . in fig .[ fig : shock - tube - test - qcd ] ( c ) the expansion rate of lattice qcd eos is smaller that that of the free gas eos in almost everywhere . as a consequence ,the velocity of lattice qcd eos is smaller than that of the free gas eos ( fig .[ fig : shock - tube - test - qcd ] ( b ) ) and expansion of the shock wave in pressure distribution is smaller than that of the free gas eos ( fig .[ fig : shock - tube - test - qcd ] ( a ) ) .we solve a ( 2 + 1)-dimensional blast wave problem .the initial pressure and density are uniform , and the initial flow vector is normalized to and points to the center of the system : with and . the system area is a square with square , we discretize it with 384 points in each direction .we perform the blast wave simulation in the ( i ) ideal and viscous hydrodynamics with free gas eos and ( ii ) ideal and viscous hydrodynamics with lattice qcd eos . in viscous hydrodynamic simulations , we choose viscous coefficients , baryon number conductivity , and relaxation time for the shear mode . in fig .[ fig : blast_free ] , we show the results of simulation ( i ) at fm ( 1500 steps ) . in the upper panels , we plot the pressure and velocity profiles for the ideal hydrodynamic simulation .note that the flow velocity field is dimensionless . at the center, we find a region with high pressure and vanishing flow velocity , which grows in time . in the lower panels , we show one - dimensional profiles of pressure and -component of flow velocity at fm for both ideal and viscous hydrodynamic simulations .it is clear that there is a symmetry between and directions , which must be realized because of the initial conditions .we find that the discontinuous change of pressure and flow velocity at fm in the ideal hydrodynamic simulation becomes continuous due to the finite shear viscosity . in fig .[ fig : blast_qcd ] , we show the results of simulation ( ii ) at fm ( 1500 steps ) . in the upper panels we plot the pressure and velocity profiles for the ideal hydrodynamic simulation , and in the lower panels we show one - dimensional profiles of pressure and -component of flow velocity at fm for both ideal and viscous hydrodynamic simulations . herewe find qualitatively same features as in the simulation ( i ) , but there are quantitative differences .the pressure in the central region is about two times higher than that in the simulation ( i ) .the radius of the central region is about 10% smaller than that in simulation ( i ) .the smaller radius is explained by the fact that the lattice qcd eos is softer than the free gas eos , as shown in fig . 
[fig : eos ] .the pressure difference is explained by the ratio of the lattice qcd eos at low and high temperatures as follows . at low ( high ) temperature , this ratio is , while it is for the free gas eos .therefore , the energy density of the central region becomes about times larger than in simulation ( i ) , and the pressure of this hot region is also about 2 times larger .we have also successfully performed ( 3 + 1)-dimensional blast wave simulations for both ideal and viscous hydrodynamics with the same initial conditions , as in the ( 2 + 1)-dimensional simulations . in the viscous hydrodynamic simulation, we choose the same parameterization for viscosity and relaxation time as before .since these results were quite similar to those of the ( 2 + 1)-dimensional case , we do not show them here .in this article , we have presented a state - of - the - art numerical algorithm for solving the relativistic viscous hydrodynamics equation with the qcd eos .the numerical scheme is suitable for analyses of shock wave phenomena and has less numerical viscosity .both features are important for understanding feature of qgp features in high - energy heavy - ion collisions .we apply the algorithm to several numerical test problems , such as sound wave propagation , shock tube and blast wave problems .we investigated the precision of our numerical scheme in sound wave propagation using the free gas eos and the lattice qcd eos . in both cases , the l1 norm scales as with the number of cells , which shows the second - order accuracy of our algorithm .moreover , we have estimated the numerical dissipation of our scheme \cdot(\delta x)^2 ] .we use to evaluate in all the other parts in the israel - stewart equation .we use the intermediate state in the next step . in multi - dimensional case ,the advection part is solved by the dimensional splitting method while the relaxation time can not be dimensionally split because of the navier - stokes terms .viscous part of the conservation laws : . here we evolve the sum by the currents of the viscous component using and to obtain . from and , we get and in general . for details of primitive recovery for viscous hydrodynamics ,see ref .note that this step also satisfies the conservation law . in multi - dimensional case , we evolve by the dimensional splitting method .the is defined so that it satisfies the cfl condition of the telegrapher equation as in ref .this numerical algorithm is applicable to both landau - lifshitz and eckart frames of causal viscous hydrodynamics . in our algorithm , we approximate the spatial derivatives of the navier - stokes terms with the centered finite differences because the physical meanings of the dissipation variables are the diffusion . for the other part of the spatial derivatives , we utilize the muscl scheme by van leer for the second - order accuracy in space . by this numerical algorithm, we achieve the second - order accuracy in both space and time as we checked in section [ sec : num_test ] .here we compare simulations of viscous hydrodynamics in landau - lifshitz and eckart frames . 
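before turning to that comparison, one ingredient mentioned above, the recovery of the primitive variables from the conserved ones, deserves a concrete illustration. the sketch below treats only the one-dimensional ideal-fluid case with a caller-supplied eos; it is our own illustration, not the code of the article (the viscous case is more involved, as the text notes):

def recover_primitives(m, E, eos_pressure, tol=1e-12, max_iter=200):
    """Recover (e, p, v) from the conserved variables of 1D ideal
    relativistic hydrodynamics, m = (e+p)*g2*v and E = (e+p)*g2 - p
    with g2 = 1/(1 - v*v), by fixed-point iteration on the pressure."""
    p = max(E - abs(m), 1e-16)        # safe initial guess: keeps |v| < 1
    for _ in range(max_iter):
        v = m / (E + p)               # exact for the current pressure guess
        e = (E + p) * (1.0 - v * v) - p
        p_new = eos_pressure(e)
        if abs(p_new - p) <= tol * (1.0 + abs(p_new)):
            p = p_new
            break
        p = p_new
    v = m / (E + p)
    e = (E + p) * (1.0 - v * v) - p
    return e, p, v

for the free gas eos one may pass eos_pressure = lambda e: e / 3.0, for which the iteration typically converges in a handful of steps.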
the purpose of this comparison is to show that the original hydrodynamic code in eckart frame is extended correctly to a code in landau - lifshitz frame .therefore we perform the simulation with the equation of state with high baryon density as in and we do _ not _ use the riemann solver that we propose in the text .we simulate a shock tube problem with an initial condition : with , , and .note that denotes the mass density in this simulation .the system length is 2 fm and is discretized with 200 cells .the particle diffusion in landau - lifshitz frame , or equivalently the heat conductivity in eckart frame , is and shear and bulk viscous coefficients are . shown in fig .[ fig : el_test ] is the temperature at fm ( 1400 steps ) in landau - lifshitz and eckart frames . according to ,the difference in thermodynamic quantities in these frames is small .since we can not find frame dependence in the temperature profiles , we conclude that we have successfully extended the original code to the one in landau - lifshitz frame .p. romatschke and u. romatschke , phys .lett . * 99 * , 172301 ( 2007 ) [ arxiv:0706.1522 [ nucl - th ] ] .a. mignone , t. plewa and g. bodo , astrophys .j. s*160 * , 199 ( 2005 ) .[ astro - ph/0505200 ] .a. kurganov , e. tadmor , j. comput .phys . * 160 * , 214 ( 2000 ) .m. takamoto and s. inutsuka , j. comput .phys . * 230 * , 7002 ( 2011 ) [ arxiv:1106.1732 [ astro-ph.he ] ] . c. nonaka and m. asakawa , ptep * 2012* , 01a208 ( 2012 ) [ arxiv:1204.4795 [ nucl - th ] ] .y. hama , t. kodama and o. socolowski , jr .j. phys .* 35 * , 24 ( 2005 ) [ hep - ph/0407264 ] .t. hirano , u. w. heinz , d. kharzeev , r. lacey and y. nara , phys .b * 636 * , 299 ( 2006 ) [ nucl - th/0511046 ] . c. nonaka and s. a. bass , phys .c * 75 * , 014902 ( 2007 ) [ nucl - th/0607018 ] .t. hirano , p. huovinen and y. nara , phys .c * 83 * , 021902 ( 2011 ) [ arxiv:1010.6222 [ nucl - th ] ] .t. hirano , p. huovinen and y. nara , phys .c * 84 * , 011901 ( 2011 ) [ arxiv:1012.3955 [ nucl - th ] ] .h. petersen , g. -y .qin , s. a. bass and b. muller , phys .c * 82 * , 041901 ( 2010 ) [ arxiv:1008.0625 [ nucl - th ] ] .i. .a .karpenko and y. .m .sinyukov , phys .c * 81 * , 054903 ( 2010 ) [ arxiv:1004.1565 [ nucl - th ] ]. h. holopainen , h. niemi and k. j. eskola , phys .c * 83 * , 034901 ( 2011 ) [ arxiv:1007.0368 [ hep - ph ] ] .l. pang , q. wang and x. -n .wang , phys .c * 86 * , 024911 ( 2012 ) [ arxiv:1205.5019 [ nucl - th ] ] .m. luzum and p. romatschke , phys .c * 78 * , 034915 ( 2008 ) [ erratum - ibid .c * 79 * , 039903 ( 2009 ) ] [ arxiv:0804.4015 [ nucl - th ] ] .b. schenke , s. jeon and c. gale , phys .lett . * 106 * , 042301 ( 2011 ) [ arxiv:1009.3244 [ hep - ph ] ] .h. song , s. a. bass , u. heinz , t. hirano and c. shen , phys .lett . * 106 * , 192301 ( 2010 ) [ arxiv:1011.2783 [ nucl - th ] ] .a. k. chaudhuri , arxiv:0801.3180v2 [ nucl - th ] .v. roy and a. k. chaudhuri , phys .c * 85 * , 024909 ( 2012 ) [ arxiv:1109.1630 [ nucl - th ] ] .p. bozek , phys .c * 85 * , 034901 ( 2012 ) [ arxiv:1110.6742 [ nucl - th ] ] .w. israel , annals phys .* 100 * , 310 ( 1976 ) ; w. israel and j. m. stewart , phys .lett.*a58 * , 213 ( 1976 ) ; annals phys . * 118 * , 341 ( 1979 ) .i. mller , z. phys .* 198 * ( 1967 ) , 329 .r. baier , p. romatschke , d. t. son , a. o. starinets and m. a. stephanov , jhep*0804 * ( 2008 ) , 100 [ arxiv:0712.2451 [ hep - th ] ] .b. betz , d. henkel and d. h. rischke , j. phys .g*36 * ( 2009 ) , 064029. t. tsumura , t. kunihiro and k. ohnishi , phys . lett . 
*b646 * , 132 ( 2007 ) .k. tsumura and t. kunihiro , phys . lett .* b668 * , 425 ( 2008 ) [ arxiv:0709.3645 [ nucl - th ] ] . k. fukushima and t. hatsuda , rept .phys . * 74 * , 014001 ( 2011 ) [ arxiv:1005.4814 [ hep - ph ] ] .j. m. ibanez , i. cordero - carrion , j. m. marti and j. a. miralles , class .* 30 * , 057002 ( 2013 ) [ arxiv:1302.3758 [ gr - qc ] ] .s. borsanyi , g. endrodi , z. fodor , a. jakovac , s. d. katz , s. krieg , c. ratti and k. k. szabo , jhep * 1011 * , 077 ( 2010 ) [ arxiv:1007.2580 [ hep - lat ] ] .s. borsanyi , z. fodor , s. d. katz , s. krieg , c. ratti and k. szabo , jhep * 1201 * , 138 ( 2012 ) [ arxiv:1112.4416 [ hep - lat ] ] .a. andronic , p. braun - munzinger , k. redlich and j. stachel , j. phys .g * 38 * , 124081 ( 2011 ) [ arxiv:1106.6321 [ nucl - th ] ] .h. niemi , private communication .e. molnar , h. niemi and d. h. rischke , eur .j. c * 65 * , 615 ( 2010 ) [ arxiv:0907.2583 [ nucl - th ] ] .j. a. pons , j. m. marti and e. mueller , j. fluid mech .* 422 * , 125 ( 2000 ) [ astro - ph/0005038 ] ; j. m. marti and e. mueller , j. fluid mech .* 258 * , 317 ( 1994 ) .
in this article, we present a state-of-the-art algorithm for solving the relativistic viscous hydrodynamics equation with the qcd equation of state. the numerical method is based on the second-order godunov method and has less numerical dissipation, which is crucial in describing the quark-gluon plasma in high-energy heavy-ion collisions. we apply the algorithm to several numerical test problems such as sound wave propagation, shock tube and blast wave problems. in sound wave propagation, the intrinsic _ numerical _ viscosity is measured and its explicit expression is shown, which is second order in the spatial resolution both in the presence and absence of _ physical _ viscosity. the expression of the numerical viscosity can be used to determine the maximum cell size needed to accurately measure the effect of the physical viscosity in numerical simulations.
algorithmic information theory ( ait , for short ) is a framework for applying information - theoretic and probabilistic ideas to computability theory .one of the primary concepts of ait is the _ program - size complexity _ ( or _ kolmogorov complexity _ ) of a finite binary string , which is defined as the length of the shortest binary program for a universal decoding algorithm , called an _ optimal prefix - free machine _ , to output . by the definition , is thought to represent the amount of randomness contained in a finite binary string . in particular , the notion of program - size complexity plays a crucial role in characterizing the _ randomness _ of an infinite binary sequence , or equivalently , a real . in chaitinintroduced the number as a concrete example of random real .the first bits of the base - two expansion of solve the halting problem of for inputs of length at most . by this property , is shown to be a random real , and plays a central role in the development of ait . in this paper , we study the _ statistical mechanical interpretation _ of ait . in a series of works , we introduced and developed this particular subject of ait .first , in we introduced the _ thermodynamic quantities _ at temperature , such as partition function , free energy , energy , statistical mechanical entropy , and specific heat , into ait .these quantities are real functions of a real argument , and are introduced in the following manner : let be a complete set of energy eigenstates of a quantum system and the energy of an energy eigenstate of the quantum system .in we introduced thermodynamic quantities into ait by performing replacements [ cs06 ] below for the corresponding thermodynamic quantities in statistical mechanics .[ cs06 ] 1 .replace the complete set of energy eigenstates by the set of all programs for .2 . replace the energy of an energy eigenstate by the length of a program .3 . set the boltzmann constant to .for example , in statistical mechanics , the partition function at temperature is given by + thus , based on replacements [ cs06 ] , the partition function in ait is defined as + in general , the thermodynamic quantities in ait are variants of chaitin number . in fact , in the case of , is precisely chaitin number . in then proved that if the temperature is a computable real with then , for each of the thermodynamic quantities , , , , and in ait , the partial randomness of its value equals to , where the notion of _ partial randomness _ is a stronger representation of the compression rate by means of program - size complexity .thus , the temperature plays a role as the partial randomness ( and therefore the compression rate ) of all the thermodynamic quantities in the statistical mechanical interpretation of ait . in further showed that the temperature plays a role as the partial randomness of the temperature itself , which is a thermodynamic quantity of itself in thermodynamics or statistical mechanics .namely , we proved the _fixed point theorem on partial randomness _ , which states that , for every , if the value of partition function at temperature is a computable real , then the partial randomness of equals to , and therefore the compression rate of equals to , i.e. , , where is the first bits of the base - two expansion of the real . in our second work on the statistical mechanical interpretation of ait , we showed that a fixed point theorem of the same form as for holds also for each of , , and . 
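for concreteness, the partition function discussed throughout this paper can be written out explicitly; the rendering below simply applies the replacement rules quoted above, with $V$ used here as our notation for the chosen optimal prefix-free machine:

Z(T) = \sum_{p \,\in\, \mathrm{dom}\, V} 2^{-|p|/T} .

since $2^{-|p|/T} \le 2^{-|p|}$ for $T \le 1$, the kraft inequality for the prefix-free domain guarantees that $Z(T)$ converges for every $0 < T \le 1$, and at $T = 1$ it equals chaitin's $\Omega = \sum_{p \in \mathrm{dom}\, V} 2^{-|p|}$; for $T > 1$, as recalled below, $Z(T)$ diverges, and this is the phase transition at issue in this paper.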
in the third work , we further unlocked the properties of the fixed points on partial randomness by introducing the notion of composition of prefix - free machines into ait , which corresponds to the notion of composition of systems in normal statistical mechanics . in the work we developed a total statistical mechanical interpretation of ait which attains a perfect correspondence to normal statistical mechanics , by making an argument on the same level of mathematical strictness as normal statistical mechanics in physics .we did this by identifying a _ microcanonical ensemble _ in ait .this identification clarifies the meaning of the thermodynamic quantities of ait .our first work showed that the values of all the thermodynamic quantities in ait diverge when the temperature exceeds .this phenomenon might be regarded as some sort of _ phase transition _ in statistical mechanics . in the work we revealed a computational aspect of the phase transition in ait .the notion of _ weak truth - table reducibility _ plays an important role in recursion theory . in the work we introduced an elaboration of this notion , called _ reducibility in query size .this elaboration enables us to deal with the notion of asymptotic behavior of computation in a manner like in computational complexity theory , while staying in computability theory .we applied the elaboration to the relation between and , where the latter is the set of all halting inputs for the optimal prefix - free machine , i.e. , the _ halting problem_. we then revealed the critical difference of the behavior of between and in relation to .namely , we revealed the phase transition between the _ unidirectionality _ at and the _ bidirectionality _ at in the reduction between and .this critical phenomenon can not be captured by the original notion of weak truth - table reducibility . in this paper, we reveal another computational aspect of the phase transition in ait between and .we introduce the notion of _ strong predictability _ for an infinite binary sequence .let be an infinite binary sequence with each .the strong predictability of is the existence of the computational procedure which , given any prefix of , can predict the next bit in with unfailing accuracy , where the suspension of an individual prediction for the next bit is allowed to make sure that the whole predictions are error - free . we introduce three types of strong predictability , _ finite - state strong predictability _ , _ total strong predictability _ , and _ strong predictability _ , which differ with respect to computational ability .we apply them to the base - two expansion of . on the one hand, we show that the base - two expansion of is not strongly predictable at in the sense of any of these three types of strong predictability . on the other hand , we show that it is strongly predictable in the sense of all of the three types in the case where is computable real with . in this manner, we reveal a new aspect of the phase transition in ait between and .we start with some notation and definitions which will be used in this paper . for any set denote by the cardinality of . is the set of natural numbers , and is the set of positive integers . is the set of rationals , and is the set of reals . is the set of finite binary strings , where denotes the _ empty string _ , and ordered as indicated .we identify any string in with a natural number in this order . 
for any , is the _ length _ of .a subset of is called _ prefix - free _ if no string in is a prefix of another string in .we denote by the set of infinite binary sequences , where an infinite binary sequence is infinite to the right but finite to the left .let .for any , we denote the bit of by . for any , we denote the first bits of by .namely , , and for every .for any real , we denote by the greatest integer less than or equal to .when we mention a real as an infinite binary sequence , we are considering the base - two expansion of the fractional part of the real with infinitely many zeros .thus , for any real , and denote and , respectively , where is the unique infinite binary sequence such that and contains infinitely many zeros . a function or called _ computable _ if there exists a deterministic turing machine which on every input halts and outputs .a real is called _ computable _ if there exists a computable function such that for all .we say that is _ computable _ if the mapping is a computable function , which is equivalent to that the real in base - two notation is computable .let and be any sets .we say that is a _ partial function _ if is a function whose domain is a subset of and whose range is .the domain of a partial function is denoted by .a _ partial computable function _ is a partial function for which there exists a deterministic turing machine such that ( i ) on every input , halts if and only of , and ( ii ) on every input , outputs .we write `` c.e . '' instead of `` computably enumerable . '' in the following we concisely review some definitions and results of ait .prefix - free machine _ is a partial computable function such that is prefix - free . for each prefix - free machine and each , is defined by ( may be ) . a prefix - free machine is called _ optimal _if for each prefix - free machine there exists with the following property ; if , then there is for which and .it is then easy to see that there exists an optimal prefix - free machine .we choose a particular optimal prefix - free machine as the standard one for use , and define as , which is referred to as the _ program - size complexity _ of or the _ kolmogorov complexity _ of .chaitin introduced number by . since is prefix - free , converges and . for any , we say that is _ weakly chaitin random _ if there exists such that for all .chaitin showed that is weakly chaitin random. therefore . in the work ,we generalized the notion of the randomness of a real so that the _ partial randomness _ of a real can be characterized by a real with as follows .let ] and let .we say that is _strictly -compressible _ if there exists such that , for all , .we say that is _ strictly chaitin -random _ if is both weakly chaitin -random and strictly -compressible . in the work , we generalized chaitin number to as follows . for each real , the _ partition function _ at temperature is defined by the equation .thus , . if , then converges and , since .the following theorem holds for .[ zvtwctr - stricttcb ] let . 1 .if and is computable , then is strictly chaitin -random .if , then diverges to .this theorem shows some aspect of the phase transition of the behavior of when the temperature exceeds . in this subsectionwe review the notion of _ martingale_. compared with the notion of strong predictability which is introduced in this paper , the predictability based on martingale is weak one .we refer the reader to nies ( * ? ? 
?* chapter 7 ) for the notions and results of this subsection .a martingale is a betting strategy .imagine a gambler in a casino is presented with prefixes of an infinite binary sequence in ascending order .so far she has been seen a prefix of , and her current capital is .she bets an amount with on her prediction that the next bit will be , say . then the bit is revealed .if she was right , she wins , else she loses .thus , and , and hence .the same considerations apply if she bets that the next bit will be .these considerations result in the following definition .a _ martingale _ is a function such that for every . for any , we say that the martingale _ succeeds _ on if the capital it reaches along is unbounded , i.e. , . for any subset of , we say that is _ computably enumerable __ , for short ) if there exists a deterministic turing machine such that , on every input , halts if and only if . a martingale is called _ computably enumerable _ if the set is c.e . for every , no c.e .martingale succeeds on if and only if is weakly chaitin random . for any subset of , we say that is _ computable _ if there exists a deterministic turing machine such that , on every input , ( i ) halts and ( ii ) outputs if and otherwise .a martingale is called _ computable _ if the set is computable . for any , we say that is _ computably random _ if no computable martingale succeeds on .a _ partial computable martingale _ is a partial computable function such that is closed under prefixes , and for each , is defined iff is defined , in which case holds .let be a partial computable martingale and .we say that _ succeeds _ on if is defined for all and .we say that is _ partial computably random _ if no partial computable martingale succeeds on .[ wcpcr_pcrcr ] let . 1 .if is weakly chaitin random then is partial computably random .if is partial computably random then is computably random .the converse direction of each of the implications ( i ) and ( ii ) of theorem [ wcpcr_pcrcr ] fails .the main result in this section is theorem [ main3 ] , which shows that partial computable randomness implies non strong predictability . for intelligibility we first show an easier result , theorem [ main2 ] , which says that computable randomness implies non total strong predictability .[ def_tsp ] for any , we say that is _ total strongly predictable _ if there exists a computable function for which the following two conditions hold : 1 . for every , if then .the set is infinite . in the above definition, the letter outputted by on the input means that the prediction of the next bit is suspended .[ main2 ] for every , if is computably random then is not total strongly predictable .we show the contraposition of theorem [ main2 ] . for that purpose ,suppose that is total strongly predictable .then there exists a computable function which satisfies the conditions ( i ) and ( ii ) of definition [ def_tsp ] .we define a function recursively as follows : first is defined as .then , for any , is defined by + and then is defined by .it follows that is a computable function and for every .thus is a computable martingale . on the other hand , it is easy to see that + for every . since the set is infinite , it follows that . therefore , is not computably random , as desired .[ def_sp ] for any , we say that is _ strongly predictable _ if there exists a partial computable function for which the following three conditions hold : 1 . for every , is defined .2 . 
for every , if then .the set is infinite .obviously , the following holds .[ fact1 ] for every , if is total strongly predictable then is strongly predictable .[ main3 ] for every , if is partial computably random then is not strongly predictable .we show the contraposition of theorem [ main3 ] . for that purpose ,suppose that is strongly predictable .then there exists a partial computable function which satisfies the conditions ( i ) , ( ii ) , and ( iii ) of definition [ def_sp ] .we define a partial function recursively as follows : first is defined as .then , for any , is defined by and then is defined by it follows that is a partial computable function such that 1 . is closed under prefixes , 2 . for every , if and only if , and 3 . for every , if then .thus is a partial computable martingale . on the other hand , it is easy to see that , for every , is defined and since the set is infinite , it follows that .therefore , is not partial computably random , as desired .[ wcrnonsp ] for every , if is weakly chaitin random then is not strongly predictable .the result follows immediately from ( i ) of theorem [ wcpcr_pcrcr ] and theorem [ main3 ] .thus , since , i.e. , , is weakly chaitin random , we have the following . [ zt=1nonsp ] is not strongly predictable .in this section , we introduce the notion of _ finite - state strong predictability_. for that purpose , we first introduce the notion of _ finite automaton with outputs_. this is just a deterministic finite automaton whose output is determined , depending only on its final state .the formal definitions are as follows .[ def_fao ] a finite automaton with outputs is a -tuple , where 1 . is a finite set called the _ states _ , 2 . is a finite set called the _ input alphabet _ , 3 . is the _ transition function _ , 4 . is the _ initial state _ , 5 . is a finite set called the _output alphabet _ , and 6 . is the _ output function from final states_. a finite automaton with outputs computes as follows .[ comp_fao ] let be a finite automaton with outputs .for every with each , the output of on the input , denoted , is for which there exist such that 1 . for every , and 2 . . in definitions [ def_fao ] and [ comp_fao ] ,if we set , the definitions result in those of a normal deterministic finite automaton and its computation , where means that accepts and means that rejects .[ def_fssp ] for any , we say that is _ finite - state strongly predictable _if there exists a finite automaton with outputs for which the following two conditions hold : 1 . for every , if then .the set is infinite . since the computation of every finite automaton can be simulated by some deterministic turing machine which always halts , the following holds , obviously . [ fact_fssp_tsp ] for every , if is finite - state strongly predictable then is total strongly predictable .[ main_fssp ] let be a real with .for every , if is strictly chaitin -random , then is finite - state strongly predictable . in order to prove theorem [ main_fssp ] we need the following theorem . for completeness, we include its proof .[ bounded - run ] let be a real with . 
for every ,if is strictly chaitin -random , then there exists such that does not have a run of consecutive zeros .based on the optimality of used in the definition of , it is easy to show that there exists such that , for every and every , + since , it follows also from the optimality of that there exists such that .hence , by we see that , for every , now , suppose that is strictly chaitin -random .then there exists such that , for every , we choose a particular with .assume that has a run of consecutive zeros .then for some .it follows from that .thus , using we have , which contradicts the fact that .hence , does not have a run of consecutive zeros , as desired .suppose that is strictly chaitin -random .then , by theorem [ bounded - run ] , there exists such that does not have a run of consecutive zeros . for each , let be the length of the block of consecutive zeros in from the left .namely , assume that has the form for some natural number and some infinite sequence of positive integers .let .since for all , we have .moreover , since is a sequence of positive integers , there exists such that + for every , and + for infinitely many . let be the length of the prefix of which lies immediately to the left of the block of consecutive zeros in .namely , .now , we define a finite automaton with outputs as follows : first , is defined as . the transition function then defined by + where is arbitrary .finally , the output function is defined by if and otherwise .then , it is easy to see that , for every , 1 . if and only if there exists such that and , and 2 . . now , for an arbitrary , assume that .then , by the condition ( ii ) above , we have . therefore , by the condition ( i ) above , there exists such that and .it follows from that and therefore .thus the condition ( i ) of definition [ def_fssp ] holds for and . on the other hand , using and the condition ( i ) above, it is easy to see that the set is infinite . thus the condition ( ii ) of definition [ def_fssp ] holds for and .hence , is finite - state strongly predictable .[ main1 ] let be a computable real with .then is finite - state strongly predictable .the result follows immediately from ( i ) of theorem [ zvtwctr - stricttcb ] and theorem [ main_fssp ] . in the case where is a computable real with , is not computable despite theorem [ main1 ] .this is because , in such a case , is weakly chaitin -random by ( i ) of theorem [ zvtwctr - stricttcb ] , and therefore can not be computable .it is worthwhile to investigate the behavior of in the case where is not computable but .on the one hand , note that is of class as a function of and for every . on the other hand , recall that a real is weakly chaitin random almost everywhere .thus , by theorem [ wcrnonsp ] , we have , where is the set of all such that is not computable and is not strongly predictable , and is lebesgue measure on .the author is grateful to professor cristian s. calude for his encouragement .this work was supported by jsps kakenhi grant number 23340020 .k. tadaki , a statistical mechanical interpretation of algorithmic information theory .local proceedings of computability in europe 2008 ( cie 2008 ) , pp.425434 , june 15 - 20 , 2008 , university of athens , greece . an extended version available from : arxiv:0801.4194v1 .k. tadaki , `` fixed point theorems on partial randomness , '' _ annals of pure and applied logic _ , vol .163 , pp.763774 , 2012 .k. 
tadaki , `` a statistical mechanical interpretation of algorithmic information theory iii : composite systems and fixed points , '' _ mathematical structures in computer science _ ,22 , pp.752770 , 2012 .k. tadaki , `` a statistical mechanical interpretation of algorithmic information theory : total statistical mechanical interpretation based on physical argument , '' _ journal of physics : conference series ( jpcs ) _ , vol .201 , 012006 ( 10pp ) , 2010 .k. tadaki , robustness of statistical mechanical interpretation of algorithmic information theory .proceedings of the 2011 ieee information theory workshop ( itw 2011 ) , pp.237241 , october 16 - 20 , 2011 , paraty , brazil .k. tadaki , phase transition between unidirectionality and bidirectionality .proceedings of the international workshop on theoretical computer science , dedicated to prof .cristian s. calude s 60th birthday ( wtcs2012 ) , lecture notes in computer science festschrifts series , springer - verlag , vol.7160 , pp.203223 , 2012 .
the statistical mechanical interpretation of algorithmic information theory ( ait , for short ) was introduced and developed in our former work [ k. tadaki , local proceedings of cie 2008 , pp.425-434 , 2008 ] , where we introduced the notion of thermodynamic quantities into ait . these quantities are real functions of temperature $T$ . the values of all the thermodynamic quantities diverge when $T$ exceeds $1$ . this phenomenon corresponds to a phase transition in statistical mechanics . in this paper we introduce the notion of strong predictability for an infinite binary sequence and then apply it to the partition function $Z(T)$ , which is one of the thermodynamic quantities in ait . we then reveal a new computational aspect of the phase transition in ait by showing the critical difference of the behavior of $Z(T)$ between $T<1$ and $T=1$ in terms of the strong predictability for the base-two expansion of $Z(T)$ .
the first name that comes to our minds when we hear the word ` evolution ' is darwin . no doubt that charles robert darwin's _ on the origin of species _ , together with the sequels that he also published ( _ the descent of man _ , _ the expression of the emotions in man and animals _ ) , forms the cornerstone of our current understanding of the most fundamental process of life . nevertheless , darwin neither discovered evolution himself , nor was he the only one to propose the mechanism of natural selection to explain the evolution of species . at darwin's time , the fact that species evolved was common knowledge . on the other hand , alfred russel wallace published , simultaneously with darwin , a theory of evolution based on what we currently know as natural selection , the same key idea put forward in `` the origin '' . then , why is darwin's work so fundamental for the current theory of evolution ? to understand the depth of his contribution , one must read `` the origin '' ( just an abstract , in his words , of the work he intended to publish two or three years later ) . he deserves the credit for this theory because of both the overwhelming accumulation of empirical data he presented and the clear explanations that his theory offered to many different , and at the time independent , observations : geographical diversity , artificial selection , coevolution of plants and insects , the appearance of complex organs , instincts in man and animals . he gave a unified view of the complexity of life by means of a unique universal mechanism . evolution by natural selection was endowed with a creative power far beyond what darwin's predecessors , or even wallace , had ever proposed . it is for this reason that the publication of this fundamental book was honored with a centennial celebration ( see fig . [ fig : conference ] ) and was the motive of last year's sesquicentennial celebration , in the internationally proclaimed darwin year . however , darwin's theory was incomplete . all throughout `` the origin '' , darwin bumps time and again into the same problem : the mechanism of inheritance .
at darwin's time the standard theory of inheritance in sexual organisms assumed that individuals roughly inherited an average of their parents' traits . sir francis galton , one of darwin's cousins , discovered the statistical phenomenon of regression towards the mean , according to which traits that deviate from the mean of a population revert to this value , in a few generations , as their carriers breed . this problem permeates his work and forces darwin to resort to the isolation of populations in order to explain the appearance and maintenance of new species . it was unfortunate that darwin was not aware of mendel's discovery of the laws of inheritance , published almost simultaneously with `` the origin '' in an obscure austrian journal . mendel's laws would have solved many of darwin's problems with the sustainment of diversity . in fact , the rediscovery of these laws by de vries , correns and von tschermak in 1900 triggered a great deal of research , both theoretical and experimental , which led , by the middle of the twentieth century , to the so - called `` modern synthesis '' . this revision of darwinism can be considered a true scientific theory in the sense that it is based on population genetics , a _ quantitative _ formulation of the theory of evolution by natural selection under the mechanisms of genetic inheritance . population genetics is the creation of a group of statisticians among whom we find some of the big names of evolutionary theory : fisher , haldane , wright , and later kimura . the focus of this theory is to determine the fate of a population whose individuals reproduce with variability and struggle for survival in an environment which discriminates their traits , favoring some over others . more precisely , population genetics assumes that populations live in a more or less exhausted environment which keeps the number of individuals almost constant across generations . individuals breed and their offspring inherit their traits according to genetic laws . different traits have different survival probabilities , and the action of chance upon this biased set decides who dies with no descendants and who survives and reproduces , and , among the latter , the number of offspring of each individual . new traits appear randomly , at a very low rate , through mutations of existing genes . from this point of view , evolution is , to a large extent , a result of the laws of probability , hence the intrinsic statistical nature of population genetics . population genetics stands as the first coherent and quantitative account of the theory of evolution , and still today provides the paradigm that scientists have in mind when thinking about evolution . the picture it draws is that of a population of entities which _ replicate _ at a rate that depends on _ selection _ pressures , i.e. on a measure of how well adapted their traits are to the environment . new traits appear at a very low rate through _ mutations _ . the process is random and therefore subject to historical contingency , which translates into another feature exhibited by the evolution of populations : _ genetic drift _ , or sampling noise . by this we mean the fact that even for a population with two traits replicating at the same rate , i.e.
having the same _ fitness _ , and represented fifty - fifty , the ratio of the two traits will deviate from this equal ratio in the next generation . this process is especially important in small populations ( for instance , in evolutionary bottlenecks ) , but it has always been considered a secondary effect in large populations . the paradigm yielded by population genetics has been very successful not only in biology , but also in other disciplines which have borrowed it to explain related phenomena . economics , sociology , linguistics , and computer science are a few examples of areas where evolution , as a result of the combined effect of replication , selection , and mutation , has provided a new framework to understand collective dynamics or to devise applications to solve existing problems . but population genetics also makes several implicit assumptions which have basically remained unquestioned and have thus become part of the standard thinking in this discipline . explicit models in population genetics make use of a metaphor introduced by wright : the fitness landscape . in brief , it is assumed that fitness is uniquely determined once the genotype and the environment are given , so if the environment remains unchanged , the fitness landscape becomes a mapping from genotype to the mean replication rate ( interpreted as fitness ) of the individuals carrying that genotype . evolution is then the movement through that fitness landscape . but what is it that moves ? this is the first implicit assumption of population genetics : evolution moves the population as a whole . the mutation rate is considered so low that a mutation causing a new allele gets fixed in the population before the next mutation occurs and introduces a new allele into play . thus evolution is the movement of a homogeneous population throughout the fitness landscape . this implicit assumption is made explicit in several works aimed at describing the evolution of populations with the language of statistical mechanics . a second implicit assumption shows up when examining the basic models of population genetics . fisher's fujiyama landscape assumes , for instance , that there is an optimum genotype for which fitness is maximal , and any deviation from that genotype by point mutations only degrades that fitness , the more so the larger the distance in configurational space ( genotype distance is usually measured in terms of hamming distance , i.e.
the number of positions in which two sequences differ ) . wright's rugged landscapes are thought of as hilly landscapes , with many mountains and valleys , tops being fitness maxima , again located at specific genotypes . many theoretical models like muller's ratchet or eigen's quasispecies , which have been very influential in our current evolutionary thinking , strongly rely on this optimum - genotype assumption of population genetics . _ gradualism _ is implicit in this evolutionary paradigm : evolutionary changes occur only through the gradual , slow accumulation of small changes caused by the very infrequent appearance of beneficial mutations ( most mutations are just deleterious ) . gradualism , an idea that darwin took from geology , is one of the strong arguments of `` the origin '' in justifying why we are not able to see evolution at work . we cannot see it , just as we cannot see the erosion of mountains , and yet we know it exists . but gradualism is also one of the most controversial points of evolutionary theory because it conflicts with the fossil record , where species are observed to remain nearly unchanged for long stasis periods , only to be quickly ( in geological terms ) replaced by new species ( something that has been termed _ punctuated equilibrium _ ) . gradualism is only the tip of the iceberg . perhaps it is so because a case can be made against it from the empirical evidence accumulated by more than a century of paleontological research and from the accumulated knowledge on non - parsimonious evolutionary mechanisms . still , it is not the only difficulty that the paradigm of population genetics faces , nor is it the first one to show up . we will see immediately that the strongest body of evidence against many of the assumptions underlying population genetics comes from molecular biology . and it urgently calls for a change of paradigm . this does not mean that population genetics is wrong : on the contrary , the tools it provides are still valid . it is only the picture it draws , based more on somewhat prejudicial assumptions and on misleading metaphors , that is essentially incorrect . in 1968 kimura surprised the scientific community with the argument that most mutations in the genome of mammals have no effect on their phenotype : in other words , most mutations are _ neutral _ , neither beneficial nor deleterious . the argument goes as follows . comparative studies of some proteins indicate that in chains nearly 100 amino acids long a substitution takes place , on average , every million years . the typical length of a dna chain in one of the two sets of mammal chromosomes is about billion base pairs . every three base pairs ( a _ codon _ ) code for an amino acid and , because of redundancy , only 80% of base pair substitutions give rise to an amino acid substitution in the corresponding protein . therefore there are million substitutions in the whole genome every million years ; in other words , approximately a substitution every years ! kimura concluded that such an enormous mutational load can only be tolerated if the great majority of mutations are neutral . subsequent studies with different systems ( we will see later the case of rna molecules ) support this conclusion .
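as an aside , the sampling noise invoked in the discussion of genetic drift above is easy to visualize in simulation . the following wright - fisher - style sketch is our own illustration ( population size , starting frequency and random seed are arbitrary choices ) : a neutral allele , conferring no advantage whatsoever , nevertheless drifts until it is fixed or lost .

```python
import random

def wright_fisher_neutral(n=100, p0=0.5, seed=1):
    """Track the frequency of a neutral allele under pure drift.
    Each generation, n offspring are produced, each inheriting the
    allele with probability equal to its current frequency, so all
    frequency changes are due to sampling noise alone."""
    random.seed(seed)
    p, generations = p0, 0
    while 0.0 < p < 1.0:
        p = sum(random.random() < p for _ in range(n)) / n
        generations += 1
    return p, generations  # p ends at 0.0 (loss) or 1.0 (fixation)

p, t = wright_fisher_neutral()
print(f"neutral allele {'fixed' if p == 1.0 else 'lost'} after {t} generations")
```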
at least at the molecular level , neutrality seems to be the rule rather than the exception , thus contradicting the homogeneity assumption of population genetics . one could argue that neutral mutations can simply be disregarded , so that we can just focus on those that do produce a phenotypic change in the individual . this might be an appropriate description of what is going on if the effect of mutations on phenotype , and therefore on fitness , could be added up , as if genes were simple switches of different traits that can be turned on and off by mutations ( unfortunately a widespread misconception of how genes work ) . but things are far less simple . it turns out that genes are involved in a complex regulatory network in which the proteins coded by some genes activate or inhibit the coding of other proteins ( even themselves ) , so that the action of a single protein , hence of a gene , cannot be disentangled from the action of very many others . in fact , there is nearly no single trait in multicellular animals or plants which is not the consequence of the combined effect of many genes acting together in this complex way . the phenotype is thus the effect of the genome as a whole , rather than a ` linear combination ' of traits . now , the accumulation of neutral mutations means that apparently similar individuals of the same species bear genomes that may be very far apart from each other . in this situation a new mutation may induce a big phenotypic change in one of these individuals but not in others , because the net effect is as if the genome as a whole had been modified in just one step ( all previous mutations were silent ) . this effect challenges the standard picture of gradualism and makes a case for punctuated equilibrium . not only that : the idea that there is an optimum genotype makes no sense under such a wide neutral wandering in the space of sequences , and this , as we will see , questions many commonly accepted models in population genetics . biology is extremely redundant , and it is so at all its levels of complexity . we have just mentioned the redundancy of the genetic code . every codon codes for an amino acid using an almost universal code ( see fig . [ fgc ] ) . setting aside the three ` stop ' codons ( which mark the end of the gene ) , this implies that 61 codons code for only 20 amino acids . thus most amino acids are coded by two , four , or even six codons , so many base pair substitutions in the dna do not alter the coded protein . proteins , in turn , fold into an almost rigid three - dimensional structure ( the so - called _ tertiary _ structure ) . this folding is induced by the interactions among the amino acids forming their _ primary _ structure . but not all amino acids play the same role in folding the protein : some of them are critical , in the sense that if they are replaced by others the conformation of the protein changes , but most are nearly irrelevant , in the sense that their replacement leaves the protein unchanged or nearly so .
as the tertiary structure determines the protein function , it turns out that many amino acid substitutions do not modify the structure , and thus have no biological effect . proteins then enter a complex regulatory or metabolic network in which they interact with other proteins , regulating their coding or participating in metabolic pathways . but then again , some of these proteins may be replaced by other similar proteins with no major change in the network function . this extraordinary redundancy of biological systems makes them very robust to change . this is the origin of neutrality . in order to understand how much room for neutrality there is in biological systems , and to grasp some of the effects induced by variability , we will closely examine a relatively simple example to which a great deal of research has been devoted in the last decades : rna folding . an rna molecule is a chain formed by a sequence of the four nucleotides g , c , a , and u. although it can form double chains , as dna does , rna molecules are usually single stranded . nucleotides in an rna sequence tend to form pairs to minimize the free energy of the molecule . this so - called secondary structure of rna molecules determines to a large extent their chemical functions , and as such has often been used as a crude representation of the phenotype . an upper bound for the number of sequences of length compatible with a fixed secondary structure is , where is a constant that depends on geometric constraints imposed on the secondary structure ( e.g. the minimum number of contiguous pairs in a stack ) . the calculation of is done in a recursive manner , summing over all possible modifications of a structure when its length increases by one nucleotide . the resulting equations may be considered a generalization of catalan and motzkin numbers . the values of for moderate are certainly huge : there are about sequences compatible with the structure of a transfer rna ( which has length ) , while the currently known smallest functional rnas , of length , could in principle be obtained from more than different sequences . figure [ f2 ] portrays a computational example of sequences folding into the same secondary structure , of length in that case . note that the similarity between sequences may be very low , even if they share their folded configuration : a random subsample of a population reveals that sequences differ on average in 10 to 15 nucleotides , while differences up to 100% are possible . all this enormous variability that redundancy supports may have a measurable effect : the equilibrium configuration of either large populations , or of populations evolving at a high enough mutation rate , is very heterogeneous . for the sake of illustration let us consider a population of size undergoing a mutation rate per generation and per individual .
to simplify , let us also assume that all mutations are neutral . in this case , the time in number of generations required for a mutation to spread to all individuals ( or to disappear ) is proportional to the population size , . now , the number of mutants that appear in this characteristic time is . the conclusion is straightforward : if the population will be homogeneous most of the time , but if mutants appear at a rate faster than that at which mutations are fixed in the whole population , so the statistical equilibrium will correspond to a heterogeneous population . heterogeneity is dynamically maintained not only in neutral characters , but also in features that affect fitness . there are abundant observations of suboptimal phenotypes that coexist with better adapted phenotypes . this is also a result of a high mutation rate that translates into non - zero transition probabilities between phenotypic classes . in other words , the existence of just one of the phenotypes generates all the others , which are mutually maintained at equilibrium . this type of organization is called a quasispecies . it was first introduced in a theoretical setting to describe the organization of macromolecules at prebiotic times , and the concept was subsequently applied to viruses . actually , rna viruses yield abundant examples of heterogeneity , both in sequences and in function . the common situation is that each genotype is unique in the population , differing in at least one nucleotide from any other . but the isolation of those genotypes , and the subsequent generation of clonal populations that descend from each of them , reveals a high variability in phenotypic properties ( replication time or virulence , for instance ) , such that the population is a heterogeneous ensemble in genotype and phenotype . genotype spaces are rather complex objects amenable to a deceptively simple description . so complex and so simple that thinking of them may easily lead to misleading images that misguide our intuition . much of our difficulty in understanding the dynamics of evolving systems lies in these objects . consider a biological sequence of length . position of this sequence can adopt one out of variants , that we can think of as letters of an alphabet . depending on the type of sequence , this alphabet may be formed by the bases of which dna or rna are made , by the amino acids that build up proteins , or even by the different alleles of a gene on a given chromosome . the description is similar in any of these instances , but we shall focus e.g. on dna to fix ideas . every realization of a dna sequence is a genotype , and represents a `` point '' in genotype space . there are different genotypes of length ; if , for instance ( a rather short sequence , by the way ) , the size of the genotype space is , a huge set . movement across this space proceeds through mutations . mutations can be very complicated transformations of a genotype that can even modify its length , but again , to keep it simple , we shall constrain ourselves to consider only point - like mutations , i.e. substitutions of the letter at a given position by another one of the alphabet . if we now make a graph whose nodes are all possible sequences and whose links join sequences separated by a point - like mutation , we have a topological description of a genotype space ( fig . [ fgs ] represents one of these spaces for an alphabet with only two letters ) . mutations move the sequence from a node of this graph to one of its neighbors , which differs from it in just one position .
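the graph just described is easy to materialize for tiny alphabets and lengths . the following sketch is our own illustration ( the alphabet and the length are arbitrary choices ) ; it enumerates the one - point - mutation neighbors of a genotype and the size of the full genotype space , making the combinatorial explosion explicit .

```python
from itertools import product

ALPHABET = "GCAU"  # the four rna nucleotides; any alphabet works

def neighbors(genotype):
    """All genotypes exactly one point mutation away from `genotype`."""
    out = []
    for i, letter in enumerate(genotype):
        for other in ALPHABET:
            if other != letter:
                out.append(genotype[:i] + other + genotype[i + 1:])
    return out

L = 3
space = ["".join(s) for s in product(ALPHABET, repeat=L)]
print(len(space))             # 4**3 = 64 genotypes in total
print(len(neighbors("GCA")))  # (4-1)*3 = 9 one-mutation neighbors
```

each node of the graph thus has a number of neighbors that grows only linearly with sequence length , while the number of nodes grows exponentially .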
in general , the genotype space is a regular lattice in a euclidean space of dimension . the huge size and high dimensionality of sequence spaces have non - trivial implications for the distribution of phenotypes in genotype space . sequences with the same phenotype have therefore the same fitness , so a sequence can move across any connected component of the graph corresponding to one phenotype at no cost in fitness . figure [ fgs](b ) yields a very simple example of sequences that can be accessed without changing the fitness of an individual . note that a single mutation causes no changes if the mutated genome belongs to the same neutral network as its parental genome . however , in regions where two different networks are close , a point mutation may generate a genome that belongs to a different network , such that major novelties in phenotype arise . in order to better understand what these connected components look like , let us consider a simple model in high dimensions , i.e. for genomes which are longer than those of fig . let us assume that sequences are randomly and independently assigned to phenotypes , and let be the fraction of sequences corresponding to a given phenotype . due to the complexity of the genotype space , we can locally regard it as a tree ( see fig . [ frr ] ) . given a node , each of its neighbors has new neighbors ; each of these second neighbors of the first node will have , in its turn , new neighbors ; and so on . now , because nodes belong to randomly and independently of each other , assuming that the first node belongs to , each second , third , etc . , neighbor will also belong to with probability . if , on average , every -node will have another -node among its neighbors , so the set of -nodes contains a connected cluster with a finite fraction of all the nodes of the graph . on the contrary , if , eventually the number of -nodes will drop to zero , and so the set of -nodes will be made of `` small '' disconnected clusters . notice that the critical fraction of nodes is , a very small number in high - dimensional spaces , so what we have just described is the typical situation . the picture this provides is very different from that of the standard fitness landscapes employed in population genetics .
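for reference , the branching criterion sketched above can be written compactly . the symbols below are our own reconstruction , since the originals are lost in this version of the text : write $F$ for the fraction of genotypes assigned to the phenotype , $K$ for the alphabet size and $L$ for the sequence length , so that each node has $(K-1)L$ one - mutation neighbors . the tree approximation then gives

$$ F\,(K-1)\,L > 1 \quad\Longleftrightarrow\quad F > F_c \approx \frac{1}{(K-1)L} \, , $$

so that above this very small critical fraction the nodes sharing a common phenotype contain a connected cluster spanning a finite fraction of the graph .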
here genotype spaces should be thought of as a patchwork of different phenotypes , each patch containing a finite fraction of the total set of nodes , all of which have the same fitness . patches are intertwined in very irregular ways . again , rna folding can give us a quantitative picture of what a neutral network of genotypes should look like , and how different networks are interrelated . suppose that one can construct the complete mapping of rna sequences of a given length into the secondary structures they fold into . the genome space would be partitioned into a large number of neutral networks , as sketched in fig . the size of neutral networks varies broadly around an average of sequences per network . for example , in the case of sequences of length , there are around structures ( called _ common structures _ ) which are a thousand - fold more frequently obtained from the folding of a randomly chosen sequence than a background of millions of other structures that are yielded by few selected sequences . interestingly , the functional structures found in nature , though arising from a long and demanding selection process through geological time , all belong to the set of common structures . the network of genotypes corresponding to common structures traverses the whole space of genomes . in practice , thus , a population can contain a huge number of different genotypes with identical selective value . populations can spread in the space of genomes without seeing their fitness affected . one important implication of the above is accessibility : almost any other possible secondary structure can be accessed with one or a few changes in the sequence , since networks belonging to different folds are necessarily close to one or another of the common structures . systematic measures with rna structures indicate that any common structure lies at most nucleotides apart , with , from any other randomly chosen common structure . evidence for the spread of neutral networks throughout the sequence space , and for the existence of sequences performing different chemical functions ( thus having different phenotypes ) that lie just a few nucleotides apart , comes not only from rna , but also from empirical results with aptamers and ribozymes . in a revealing experiment , schultes and bartel discovered close contacts between the neutral networks representing a class - iii self - ligating ribozyme and that of the hepatitis delta virus self - cleaving ribozyme . the experiment began with the two original rna sequences of the corresponding functional molecules , which had no more than the 25% similarity expected by chance . after about 40 moves in genome space , they located an intersection between the two neutral networks where two sequences just two nucleotides apart could perform the original functions without a major loss in fitness . this observation has been repeated in several other systems ( see ref . for a review ) . an illustration of the relationship between genomes , neutral network spreading and phenotypes is represented in fig . [ f4 ] .
even in this two - dimensional representation it is clear how moving on a neutral network ( thus conserving fitness ) makes it possible to access different phenotypes in a single mutational move . this property might underlie punctuated equilibrium , explaining the sudden changes in phenotypes observed after long periods of stasis . the movement of the population on the neutral network , though having effects at the genomic level , does not cause any visible change . however , if a better phenotype is encountered through this silent evolution behind the curtain , it will be fixed in the population rapidly ( due to its advantage compared to the previously dominating one ) in what will be interpreted as a punctuation of the dynamics . note , however , that the population will then be genomically trapped in a position of the neutral network close to the old phenotype . it will take a while until it diffuses again on the new network and is able to access different , maybe improved , phenotypes . punctuated equilibrium was first defined in relation to the fossil record , and yet we have used a simple computational model for rna folding to describe it . the question arises : has this process been observed also at the molecular level in natural systems ? and the answer is yes . the process of spreading on a neutral network followed by a selective sweep when the population discovers a new , fitter phenotype , plus the subsequent exploration ( again spreading ) without phenotypic change to repeat the discovery of innovation , and so on , has been observed in the yearly dynamics of influenza a . this dynamics describes the replacement every to years of circulating populations ( where all individuals share a genetically similar hemagglutinin ) by new populations , different from the previous one ( but whose individuals again share similar sequences ) . hemagglutinin is a protein that determines the antigenic properties of the virus : continuous changes in this protein permit influenza to escape immunity . this case constitutes a wonderful example of how relevant it is to use an appropriate genotype - phenotype map to understand the co - evolution of pathogens and hosts or the adaptation properties of quasispecies . in order to understand the complex interplay between the fitness of genomes ( which is determined by the adaptation that they provide to a specific environment ) and the topology of the genome space , different paradigmatic fitness landscapes have been devised . their introduction has been very much conditioned by the interest in obtaining analytical results describing the dynamics of quasispecies and other complex populations , as well as the characteristics of the process of adaptation and of the mutation - selection equilibrium . one of the most popular fitness landscapes is the single - peak landscape . usually , it is assumed that a privileged genotype has the largest fitness and all the rest have lower fitness , well below that of the fittest sequence , or even zero . the fujiyama landscape is smoother ( also more complex ) since it assumes that the fitness of genotypes decreases with the number of mutations with respect to the fittest type .
at the other extreme , we find rugged landscapes , among which two prototypical examples are the random landscape , where each genotype is assigned a randomly and independently chosen fitness value , and kauffman's nk - landscapes , in which each of the genes of a sequence contributes additively to the fitness of the genome , but its fitness value results from its epistatic interactions ( typically random ) with other genes . there is not much in between , where one would guess that realistic landscapes should lie . but , according to the picture we have just drawn , fitness landscapes should incorporate the high redundancy observed in biological sequences . now we know that genotypes organize themselves into regions of common phenotypes , which therefore have constant fitness and which spread all over the genome space , forming so - called _ neutral networks _ . we can then try to figure out what the prototypical fitness landscapes should look like when these neutral networks of common phenotypes are taken into account . this is what fig . [ f5 ] summarizes . the top row of that figure sketches a representation of the single - peak , the fujiyama , and the random landscapes , as defined on single genotypes . the single peak exhibits a single point of high fitness in a sea of points of lower or zero fitness . in the fujiyama landscape , points decrease in fitness as they get away from the optimum sequence . in the random landscape points have random fitness , independently of each other . the lower row of fig . [ f5 ] shows the phenotype counterparts of these three archetypes . points are arranged into networks of constant fitness ( equal phenotype ) , so the single peak now shows one of these networks with high fitness surrounded by other networks of low fitness and by non - viable genotypes ( zero fitness ) . the fujiyama landscape is now defined in terms of distance between phenotypes , producing a landscape not quite distinguishable from what a random landscape now looks like . in order to describe evolution in these new fitness landscapes we need new mathematical tools to deal with neutral networks . neutral networks can be described through a connectivity matrix , whose elements are if genotypes and are mutually accessible and otherwise . evolution and adaptation , understood as a process of search and fixation of fitter phenotypes , is conditioned by the topology of these connectivity matrices and by the relationships between them , understood as objects defined in the space of genomes . there are a number of results that relate the topology of those graphs with the equilibrium states of populations and the dynamics of adaptation on the neutral network . it has been shown that the distribution of a population evolving ( i.e. replicating and mutating ) on a neutral network is solely determined by the topological properties of , and given by its principal eigenvector .
in that configuration the population has evolved mutational robustness , since it is located in a region of the neutral network where the connectivity is as large as possible ( thus where mutations affect the current phenotype as little as possible ) . this maximal connectivity equals the spectral radius of . equilibrium properties are thus well described once is known . the dynamics of adaptation on neutral networks are more difficult to fully quantify because , in principle , all eigenvalues of the matrix intervene in the transient towards equilibrium . in addition , the time required to reach the equilibrium configuration depends on the initial condition : it might differ by orders of magnitude ( in units of generations ) if the population enters the network through a particular node , as in the case of influenza a , or if all genomes are equally represented , as in _ in vitro _ experiments that begin with a large population of random sequences . it has been shown that the time to equilibrium is inversely proportional to the mutation rate , such that homogeneous populations ( low mutation rates ) will find it difficult to develop high mutational robustness . in very general conditions , the dominant term in the time to equilibrium is proportional to the ratio between the second largest and the largest eigenvalue of . these are highly sparse , symmetric matrices , for which it seems feasible to develop approximations that could yield their two largest eigenvalues as a function of the average connectivity , for instance . to this end , the analysis of neutral networks could be performed in the limit of infinite size , given their exponentially fast growth in size with the sequence length . finally , an essential ingredient in the evolutionary process is randomness , and not only in relation to genetic drift . random fluctuations play a main role in the searching process . too low a variability in a population might even completely block adaptation . for example , the quantity that determines whether a population will be able to attain the region of maximal neutrality in finite time is the product of the population size times the mutation rate . higher adaptability can be reached by means of a large population or through a large mutation rate . overly small or homogeneous populations might get trapped in suboptimal configurations analogous to the metastable states observed in disordered systems . a deeper knowledge of the topological properties of neutral networks and their mutual relationship in sequence space should lead to more realistic dynamical models for the evolution of populations . provided one could characterize the fitness landscape , the probability of changing from one phenotype to another would be described through a matrix of transitions between states , with . this is actually a common formal framework to study population dynamics . these matrices are stochastic , i.e. their rows sum to one , and thus define a homogeneous markov chain .
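the equilibrium statement above , that the asymptotic population distribution on a neutral network is the principal eigenvector of its connectivity matrix and the attained robustness its spectral radius , can be probed with a few lines of code . the sketch below is our own illustration ( the toy adjacency matrix is an arbitrary construction ) ; power iteration is the natural tool here because repeated rounds of neutral mutation on the network amount precisely to multiplying the population vector by the connectivity matrix .

```python
import numpy as np

def principal_eigenvector(A, iters=1000):
    """Power iteration on a connectivity matrix A: the limit of
    A^t v (renormalized) is the eigenvector of the largest
    eigenvalue, read as the equilibrium occupation of the neutral
    network; the Rayleigh quotient approximates the spectral
    radius, i.e. the evolved mutational robustness."""
    v = np.ones(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v / v.sum(), float(v @ A @ v)

# toy neutral network: a chain of four genotypes plus a well-connected hub
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 1],
              [0, 1, 0, 1, 1],
              [0, 0, 1, 0, 1],
              [1, 1, 1, 1, 0]], dtype=float)
occupation, robustness = principal_eigenvector(A)
print(occupation)  # mass concentrates on the best-connected genotypes
print(robustness)  # spectral radius of A
```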
a full knowledge of the dynamics of the system amounts to knowing the eigenvalue spectrum of . the process of adaptation does not rely strongly on happy coincidences . the existence of huge and extensive neutral networks permits systematic explorations of the space of possible functions without paying high fitness costs , a practical way to find viable pieces that are later assembled to form complex individuals . our current understanding of the relationship between genotype and phenotype clearly hints at the fact that even an evolutionary process restricted in the amount of change it can produce at the genomic level is not necessarily restricted in the amount of change it can cause at the phenotypic level . further , it seems plausible that all possible phenotypes are sufficiently close to each other , such that it is not necessary to explore all the space of genotypes to find the optimal phenotype . while a genotype might be the needle in a haystack , you cannot help but stumble upon the phenotype . this picture of a space of genomes where neutral networks corresponding to common functions are vastly extended and deeply interwoven has important implications for the way we understand and model the evolutionary process . fast mutating populations , such as rna viruses , are able to spread rapidly and find new adaptive solutions thanks to the sustained generation of new viral types and the costless drift through large regions of genome space . due to their relatively short genomes and the continuous accumulation of new mutations , it is very difficult ( impossible in many cases ) to trace the ancestry of extant viruses . thus , viral phylogeny is localized in evolutionary time , and the signal that speaks for its origins becomes increasingly weaker as we move backwards , until it is eventually lost . as a result , there is an on - going controversy on the origin of viruses , on their being a product of the post - cellular era or the remnants of an ancient , pre - cellular rna world . high mutation rates have been a successful strategy in their case , allowing the perpetual exploration of new genomic regions and thus escaping the attack of their hosts' defenses . but when we come to talk about life on earth , with all the amazing complexity and diversity of organisms formed by at least one cell , it turns out that their common origins can be unequivocally identified . the phylogeny reconstructed through ribosomal units , single genes or whole genomes of living organisms clearly reveals the existence of luca , our last universal common ancestor , some billion years ago . is life on earth thus resting on a frozen accident , that is , on the precise genomic pieces that formed luca ? in the light of the above , we should answer `` no '' . the first genomes could have occupied far - away places in the space of genomes and , still , it is highly improbable that functional life would nowadays look very different from the solutions ( the phenotypes ) we see all around us . the authors acknowledge support from the spanish ministerio de educación y ciencia under projects fis2008 - 05273 and mosaico , and from dgui of the comunidad de madrid under the r & d program of activities modelico - cm / s2009esp-1691 . grüner , w. , giegerich , r. , strothmann , d. , reidys , c. , weber , j. , hofacker , i. l. , stadler , p. f. , & schuster , p. ( 1996 ) . analysis of rna sequence structure maps by exhaustive enumeration . structures of neutral networks and shape space covering . , 127:375 - 389 .
our understanding of the evolutionary process has come a long way since the publication , 150 years ago , of `` on the origin of species '' by charles r. darwin . the twentieth century witnessed great efforts to embrace replication , mutation , and selection within the framework of a formal theory , able eventually to predict the dynamics and fate of evolving populations . however , a large body of empirical evidence collected over the last decades strongly suggests that some of the assumptions of those classical models necessitate a deep revision . the viability of organisms is not dependent on a unique and optimal genotype . the discovery of huge sets of genotypes ( or neutral networks ) yielding the same phenotype ( ultimately , the same organism ) reveals that , most likely , very different functional solutions can be found , accessed and fixed in a population through a low - cost exploration of the space of genomes . the ` evolution behind the curtain ' may be the answer to some of the current puzzles that evolutionary theory faces , like the fast speciation process that is observed in the fossil record after very long stasis periods .
the human visual system can easily identify perceptually salient edges in an image . endowing machine vision systems with similar capabilities is of interest , as edges are useful for diverse tasks such as optical flow , object detection , and object proposals . however , edge detection has proven challenging . early approaches relied on low - level cues such as brightness and color gradients . reasoning about texture markedly improved results ; nevertheless , accuracy still substantially lagged human performance . the introduction of the bsds dataset , composed of human annotated region boundaries , laid the foundations for a fundamental shift in edge detection . rather than rely on complex hand - designed features , dollár et al . proposed a data - driven , supervised approach for learning to detect edges . modern edge detectors have built on this idea and substantially pushed the state - of - the - art forward using more sophisticated learning paradigms . however , existing data - driven methods require strong supervision for training . specifically , in datasets such as bsds , human annotators use their knowledge of scene structure and object presence to mark semantically meaningful edges . moreover , recent edge detectors use imagenet pre - training . in this paper , we explore whether this is necessary : _ is object - level supervision indispensable for edge detection ? and moreover , can edge detectors be trained entirely without human supervision ? _ we propose training edge detectors using motion in place of human supervision . motion edges are a subset of image edges , see figure [ fig : teaser ] . therefore motion edges can be used to harvest positive training samples . on the other hand , locations away from motion edges may still contain image edges . fortunately , as edges are sparse , simply sampling such locations at random can provide good negative training data with few false negatives . thus , assuming accurate motion estimates , we can potentially harvest unlimited training data for edge detection . while it would be tempting to assume access to accurate motion estimates , this is arguably an unreasonably strong requirement . indeed , optical flow and edge detection are tightly coupled . recently , revaud et al . proposed epicflow : given an accurate edge map and semi - dense matches between frames , epicflow generates a dense edge - respecting interpolation of the matches . the result is a state - of - the - art optical flow estimate . this motivates our approach . we begin with only semi - dense matches between frames and a rudimentary knowledge of edges ( simple image gradients ) . we then repeatedly alternate between computing flow based on the matches and the most recent edge maps , and retraining an edge detector based on the signal obtained from the flow fields . specifically , at each iteration , we first estimate dense flow fields by interpolating the matching results using the edge maps obtained from the previous iteration .
given a large corpus of videos , we next harvest highly confident motion edges as positives and randomly sample negatives , and use this data to train an improved edge detector . the process is iterated , leading to increasingly accurate flow and edges . an overview of our method is shown in figure [ fig : overview ] . we experiment with the structured edge ( se ) and holistic edge ( he ) detectors . se is based on structured forests , he on deep networks ; se is faster , he more accurate . both detectors achieve state - of - the - art results . the main result of our paper is that both methods , trained using our unsupervised scheme , approach the same performance as when trained in a fully supervised manner . finally , we demonstrate that our approach can serve as a novel unsupervised pre - training scheme for deep networks . specifically , we show that when fine - tuning a network for object detection , starting with the weights learned for edge detection improves performance over starting with a network with randomly initialized weights . while the gains are modest , we believe this is a promising direction for future exploration . * edge detection : * early edge detectors were manually designed to utilize image gradients and later texture gradients . of more relevance to this work are edge detectors trained in a data - driven manner . since the work of , which formulated edge detection as a binary classification problem , progressively more powerful learning paradigms have been employed , including multi - class classification , feature learning , regression , structured prediction , and deep learning . recently , weinzaepfel et al . extended to motion edge estimation . these methods all require strong supervision for training . in this work we explore whether unsupervised learning can be used instead ( and , as discussed , select for our experiments ) . * optical flow : * the estimation of optical flow is a classic problem in computer vision . a full overview is outside of our scope ; instead , our work is most closely related to methods that leverage sparse matches or image edges for flow estimation . in particular , like , we utilize edge - respecting sparse - to - dense interpolation of matches to obtain dense motion estimates . our focus , however , is not on optical flow estimation ; instead , we exploit the tight coupling between edge and flow estimation to train edge detectors without human supervision . * perceptual grouping using motion : * motion plays a key role in grouping and object recognition in the human visual system . in particular , ostrovsky et al . studied the visual skills of individuals recovering from congenital blindness and showed that motion cues were essential to help facilitate the development of object grouping and representation . our work is closely inspired by these findings : we aim to learn an edge detector using motion cues . * learning from video : * there is an emerging interest in learning visual representations using video as a supervisory signal , for example by enforcing that neighboring frames have a similar representation , learning latent representations for successive frames , or learning to predict missing or future frames .
instead of simply enforcing various constraints on successive video frames , wang and gupta utilize object tracking and enforce that tracked patches in a video should have a similar visual representation . the resulting network generalizes well to surface normal estimation and object detection . as we will demonstrate , our approach can also serve as a novel unsupervised pre - training scheme . however , while in previous approaches the training objective was used as a surrogate to encourage the network to learn a useful representation , our primary goal is to train an edge detector and the learned representation is simply a useful byproduct . we start with a set of low - level cues obtained using standard tools in computer vision , including point correspondences and image gradients . we use deepmatching to obtain semi - dense matches between two consecutive frames . deepmatching computes correlations at different locations and scales to generate the matches . note that , contrary to its name , the method involves no deep learning . for the rest of the paper , we fix the matching results . our proposed iterative process is described in figure [ fig : overview ] and algorithm [ fig : algorithm ] . we denote the edge detector at iteration by . for each image , we use and to denote its image edges and motion edges at iteration . we initialize to the raw image gradient magnitude of , defined as the maximum gradient magnitude over color channels . the gradient magnitude is a simple approximation of image edges , and thus serves as a reasonable starting point . at each iteration , we use epicflow to generate edge - preserving flow maps given matches and previous edges . we next apply on a colored version of to get an estimate of motion edges . is further refined by aligning to superpixel edges . next , to train our new edge detector , we harvest positive instances using a high threshold on and sample random negatives away from any motion edges . the above process is iterated until convergence ( typically 3 to 4 iterations suffice ) . at each iteration the flow and edge maps and improve . in the following sections we describe the process in additional detail . algorithm [ fig : algorithm ] : input : pairs of frames and their matches ; initialize the edge maps with the gradient magnitude operator ; then iterate : ( 1 ) estimate flow using the previous edge maps ; ( 2 ) detect motion edges by applying the current detector to the flow ; ( 3 ) train a new edge detector using the motion edges ; ( 4 ) apply the new edge detector to all frames ; output : the final detector and edge maps . * epicflow : * epicflow takes as input an image pair , semi - dense matches between the images , and an edge map for the first frame . it efficiently computes an approximate geodesic distance , defined by , between all pixels and matched points in . for every pixel , the geodesic distance is used to find its nearest matches , and the weighted combination of their motion vectors determines the source pixel's motion . a final optimization is performed by a variational energy minimization to produce an edge - preserving flow map with high accuracy . we refer readers to for additional details . [ figure [ fig : motion ] : flow computed on the input images ; ( c ) motion edges computed by applying an edge detector to the colorized flow ; ( d ) motion edges after alignment , non - maximum suppression , and aggressive thresholding . the aligned motion edge maps serve as a supervisory signal for training an edge detector . ] * motion edge detection : * detecting motion edges given optical flow estimates can be challenging , see figure [ fig : motion ] . weinzaepfel et al .
showed that simply computing gradients over a flow map produces unsatisfactory results and instead proposed a data - driven approach for motion edge detection ( for a full review of earlier approaches see ) . in this work we employ a simpler yet surprisingly effective approach . we use an edge detector trained on image edges for motion edge estimation by applying the ( image ) edge detector to a color - coded flow map . the standard color - coding scheme for optical flow maps 2d flow vectors into a 3d color space by encoding flow orientation via hue and magnitude via saturation . motion edges become clearly visible in this encoding ( [ fig : motion]b ) . running an edge detector on the colored flow map gives us a simple mechanism for motion edge detection ( [ fig : motion]c ) . moreover , in our iterative scheme , as both our edge detector and flow estimate improve with each iteration , so do our resulting estimates of motion edges . * motion edge alignment : * motion edges computed from flow exhibit slight misalignment with their corresponding image edges . we found that this can adversely affect training , especially for he , which produces thick edges . to align the motion edges we apply a simple heuristic : after applying non - maximum suppression and thresholding , we align the motion edges to superpixels detected in the color image . specifically , we utilize slic superpixels , which cover over 90% of all image edges , and greedily match motion and superpixel edge pixels ( using a tolerance of 3 pixels ) . matched motion edge pixels are shifted to the superpixel edge locations and unmatched motion edges are discarded . this refinement , illustrated in figure [ fig : motion]d , helps filter out edges with weak image gradients and improves localization . we emphasize that our goal is not to detect all motion edges . a subset with high precision is sufficient for training . given a large video corpus , high - precision motion edges should provide a dense coverage of image edges . however , due to our alignment procedure our sampling is slightly biased . in particular , motion edges with weak corresponding image edges are often missing . this limitation and its impact on performance is discussed in section [ sec : exps ] . * training : * the aligned motion edge maps serve as a supervisory signal for training an edge detector . positives are sampled at locations with high scoring motion edges in . negatives are uniformly sampled from locations with motion edge scores below a small threshold . note that locations with ambiguous motion edge presence ( with intermediate scores ) are not considered in training . as we will demonstrate , samples harvested in this manner provide a strong supervisory signal for training . * video dataset : * for training , we combine videos from two different datasets : the video segmentation benchmark ( vsb ) and the youtube objects dataset . we use all hd videos ( ) in both datasets . we drop all the annotations of the youtube objects dataset .
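as an aside , the colorized - flow trick described in the motion edge detection paragraph above is easy to prototype . the sketch below is our own illustration , not the authors' pipeline : farneback flow stands in for epicflow and canny for the learned detector , so it only demonstrates the hue / saturation encoding on which the method rests .

```python
import cv2
import numpy as np

def motion_edges(prev_gray, next_gray):
    """Color-code dense flow (hue = orientation, saturation = magnitude)
    and run a simple edge detector on the colorized result."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros((flow.shape[0], flow.shape[1], 3), dtype=np.uint8)
    hsv[..., 0] = ang * 180 / np.pi / 2  # hue encodes flow orientation
    hsv[..., 1] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)  # saturation: flow magnitude
    hsv[..., 2] = 255
    colored = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
    gray = cv2.cvtColor(colored, cv2.COLOR_BGR2GRAY)
    return cv2.Canny(gray, 50, 150)  # edges of the colorized flow map
```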
our collection of videos ( ) contains more than frames and has sufficient diversity for training an edge detector . * frame filtering : * given the vast amount of available data , we apply a simple heuristic to select the most promising frames for motion estimation . we first fit a homography between consecutive frames using orb descriptor matches ( which are fast to compute ) . we then remove frames with insufficient matches , very slow motion ( max displacement pixels ) , very large motion ( average displacement pixels ) , or a global translational motion . these heuristics remove frames where optical flow may be either unreliable or contain few motion edges . for all experiments we used this pruned set of frames . we experiment with the structured edge ( se ) and holistic edge ( he ) detectors , based on structured forests and deep networks , respectively . se has been used extensively due to its accuracy and speed , e.g. for flow estimation and object proposals . he is more recent but achieves the best reported results to date . when trained using our unsupervised scheme , both methods approach the performance obtained when trained with full supervision . * structured edges ( se ) : * se extracts low - level image features , such as color and gradient channels , to predict edges . the method learns decision trees by using structured labels ( patch edge maps ) to determine the split function at each node . during testing , each decision tree maps an input patch to a local edge map . the final image edge map is the average of multiple overlapping masks predicted by each tree at each location , leading to a robust and smooth result . we use the same parameters as in for training . the forest has 8 trees with a maximum depth of 64 . each tree is trained using a random subset ( 25% ) of patches , with an equal number of positives and negatives . during training , we convert a local edge map to a segmentation mask , as required by se , by computing connected components in the edge patch . we discard patches that contain edge fragments that do not span the whole patch ( which result in a single connected component ) . during each iteration of training , the forest is learned from scratch . during testing , we run se over multiple scales and with sharpening for best results . * holistic edges ( he ) : * he uses a modified vgg-16 network with skip - layer connections and deep supervision . our implementation generally follows . we remove all fully connected layers and the last pooling layer , resulting in an architecture with 13 conv and 4 max pooling layers . skip - layers are implemented by attaching linear classifiers ( 1x1 convolutions ) to the last conv layer of each stage ; their outputs are averaged to generate the final edge map . in our implementation we remove the deep supervision ( multiple loss functions attached to different layers ) , as we found that a single loss function has little performance penalty ( vs in ods score ) but is easier to train . we experimented with both fine - tuning a network pre - trained on imagenet and training a network from scratch ( random initialization ) . for fine - tuning , we use the same hyper - parameters as in , with learning rate , weight decay , momentum , and batch size .
when training from scratch , we add batch normalization layers to the end of every conv block . this accelerates training and also improves convergence . we also increase the learning rate and weight decay when training from scratch . we train the network for epochs in each iteration , then reduce the learning rate by half . unlike for se , we can reuse the network from previous iterations as the starting point for each subsequent iteration . the somewhat noisy labels , in particular missing positive labels , prove to be challenging for training he . the issue is partially alleviated by discarding ambiguous samples during back propagation . furthermore , unlike in , we randomly select negative samples ( as many negatives as positives ) and discard the negatives with the highest loss ( following the same motivation as in ) . without these steps for dealing with noisy labels , training does not converge reliably . our method produces motion edges , image edges , and optical flow at each iteration . we provide an extensive benchmark for each task , tested with two different edge detectors ( se and he ) . our main result is that the image edge detectors , trained using videos only , achieve results comparable to those obtained when trained with full supervision . as a byproduct of our approach we also generate competitive optical flow and motion edge results . finally , we show that pre - training networks using video improves their performance on object detection over training from scratch . [ table : motion edge results on the vsb benchmark ; see text . ] in this work , we proposed to harvest motion edges to learn an edge detector from video without explicit supervision . we developed an iterative process that alternated between updating optical flow using edge results , and learning an edge detector based on the flow fields , leading to increasingly accurate edges and flows . the main result of our paper is that edge detectors , trained using our unsupervised scheme , approach the same performance as when trained in a fully supervised manner . we additionally demonstrated that our approach can serve as a novel unsupervised pre - training scheme for deep networks . while the gains from pre - training were modest , we believe this is a promising direction for future exploration . in the long run we believe that unsupervised learning of edge detectors has the potential to outperform supervised training , as the unsupervised approach has access to unlimited data . our work is the first serious step in this direction . we would like to thank saining xie for help with the he detector and ahmad humayun , yan zhu , yuandong tian and many others for valuable discussions and feedback .
data - driven approaches for edge detection have proven effective and achieve top results on modern benchmarks . however , all current data - driven edge detectors require manual supervision for training , in the form of hand - labeled region segments or object boundaries . specifically , human annotators mark semantically meaningful edges which are subsequently used for training . is this form of strong , high - level supervision actually necessary to learn to accurately detect edges ? in this work we present a simple yet effective approach for training edge detectors without human supervision . to this end we utilize motion : the only input to our method is noisy semi - dense matches between frames . we begin with only a rudimentary knowledge of edges ( in the form of image gradients ) , and alternate between improving motion estimation and improving edge detection . using a large corpus of video data , we show that edge detectors trained using our unsupervised scheme approach the performance of the same methods trained with full supervision ( within 3 - 5% ) . finally , we show that when a deep network is used as the edge detector , our approach provides a novel pre - training scheme for object detection .
determining an unknown signal from a set of measurements is a fundamental problem in science and engineering .however , as the number of free parameters defining the signal increases , its tomographic determination may become a daunting task .fortunately , in many contexts there is prior information about the signal that may be useful for tomography .compressed sensing is a signal recovery technique developed for this aim .it utilizes specific types of prior information about the structure of the signal to substantially compress the amount of information needed to reconstruct it with high accuracy .in particular , it harnesses the prior information that the signal has a concise representation , e.g. , that it is a sparse vector with a few nonzero elements or a low - rank matrix with a few nonzero singular values .the compressed sensing protocol then defines special classes of measurements , henceforth referred to as `` compressed sensing measurements , '' that enable the unique identification of the signal from within the restricted set of sparse vectors or low - rank matrices using substantially fewer measurement settings .moreover , it provides algorithms for efficient reconstruction by defining a specific class of convex optimization heuristics whose solution determine the unknown signal from the measurement outcomes with very high accuracy ( see methods ) .importantly , solving any other optimization programs outside this class will not necessarily result in a compressed sensing protocol .in the context of quantum information science , the signals " we seek to reconstruct are , for example , quantum states and processes , and the protocol for reconstruction is quantum tomography . because the number of free parameters in quantum states and processes scale poorly ( growing as some power of the total hilbert space dimension , which in turn grows exponentially with the number of subsystems ) , there has been a concerted effort to develop techniques that minimize the resources necessary for tomography . to this end , the methodology of compressed sensing has been applied to the problem of quantum tomography . in the pioneering work of it was proved that quantum measurements can be easily designed to be within the special class of measurements required for compressed sensing .then , using the specifically chosen convex optimization , low - rank density matrices ( close to pure quantum states ) or low - rank process matrices ( close to unitary evolutions ) can be accurately reconstructed with a substantially reduced number of measurement settings .the work we report here identifies a critical link between quantum tomography and compressed sensing .we discuss in particular the case of quantum state tomography , where the aim is to recover the density matrix , a _ positive semidefinite _matrix , typically normalized with unit trace .we show that the positivity property alone imposes a powerful constraint that places strong restrictions on the physical states that are consistent with the data . 
as illustrated in fig .[ fig : sets ] , this restriction is stronger than the one present in generic compressed sensing of signals which are not necessarily positive semidefinite matrices .this , in turn , has far reaching consequences .first and foremost , it implies that as long as quantum measurements are within the special class associated with compressed sensing , then any optimization heuristic that contains the positivity constraint is effectively a compressed sensing protocol .second , tools provided by the compressed sensing methodology now enable the construction of special types of informationally complete measurements that are robust to noise and to small model imperfections , with rigorous bounds .finally , our results fundamentally unify many different quantum tomography protocols which were previously thought to be distinct , such as maximum - likelihood solvers , under the compressed sensing umbrella .we emphasize that constraining the normalization ( trace ) to a fixed value , as one does for density matrices , plays no role in the theorems we discuss below .thus our results extend beyond the context of quantum state tomography , applying , e.g. , to process tomography when the latter is described by a completely positive map , and more generally to the reconstruction of low - rank positive semidefinite matrices .* informational completeness * in quantum theory , a measurement is represented by a positive operator - valued measure , povm , a set of positive semidefinite matrices that form a resolution of the identity , . the elements of a povm represent the possible outcomes ( events ) of the measurement , and probability of measuring an outcome is given by the usual born rule , , where is the state of the system , a positive semidefinite matrix , , normalized such that .in the context of quantum - state tomography , _ informationally complete _ measurements play a central role .let be the set of all quantum states ( density matrices ) .a measurement is said to be informationally complete if in other words , no two distinct states and yield the same measurement outcome probabilities .thus , a ( noise - free ) record of an informationally complete measurement uniquely determines the state of the system . in general , for a -dimensional hilbert space , an informationally complete measurement consists of at least outcomes ( povm elements ) .while eq . gives a general definition of an informationally complete measurement , if one has prior information about the state of the system , we can make this definition more specific .in particular , suppose the state is known _ a priori _ to be of a special class , , e.g. , the class of density matrices of at most rank .one defines a measurement to be restricted informationally complete ( restricted - ic ) if it can only uniquely identify a quantum state from within the subset , but can not necessarily uniquely identify it from within the set of all quantum states .such restricted - ic measurements can be composed of fewer outcomes than the outcomes required for a general informationally complete measurement .for example , heinosaari _ et al . 
_ showed that when is the set of density matrices of at most rank , then rank- restricted - ic measurements can be constructed with outcomes , rather than outcomes required for a general informationally complete measurement .one can formalize this definition in the context of quantum - state tomography .a measurement is said to be restricted - ic , if in some situations , a measurement can satisfy a stricter definition of informational completeness than the restricted - ic of eq . .a measurement is said to be strictly - ic , if there is a subtle yet important difference in the definitions of restricted - ic and strictly - ic .while the measurement record of the former identifies a unique state within the set , the measurement record of a the latter identifies a unique state within the set of _ all _ quantum states .these notions of informationally completeness are key to understanding compressed sensing and its application in quantum tomography , as we discuss below . *the relation between informational completeness and compressed sensing * at its heart , the compressed sensing methodology employs prior information to reduce the number of measurements required to reconstruct an unknown signal . herewe consider the compressed sensing recovery of a hermitian matrix , .let the measurement record be specified as a vector - valued linear map , = { { \rm tr}}(a_i m) ] be the measurement record obtained by a sensing map , , that corresponds to compressing sensing measurements for rank .then is the _unique _ hermitian matrix within the set of _ low - rank _ hermitian matrices ( up to rank ) that is consistent with the measurement record .+ importantly , in compressed sensing , when , there are generally an infinite number of hermitian matrices with rank larger than that are consistent with the measurement record. thus , the measurement record associated with compressed sensing can not uniquely specify among all hermitian matrices , and therefore it is not informationally complete in the sense of eq . .if , however , the sensing map corresponds to compressed sensing measurements ( e.g. , it satisfies the restricted isometry property , see methods ) , then according to the above theorem , the measurement record uniquely specifies within the restricted set of low - rank hermitian matrices ( rank ) .therefore compressed sensing measurements correspond to rank- restricted - ic , in the sense of eq . .this relation between compressed sensing measurements and rank- restricted - ic implies that any successful search must be restricted to the low rank set of hermitian matrices . to achieve this ,one solves the convex optimization problem , ,\ ] ] where , is the nuclear ( or trace ) norm , which serves as the convex proxy for rank minimization . under the conditions above, the optimal solution is , i.e. , exact recovery .the use of the nuclear norm is essential here .if one uses only the compressed number of samples , solving any other optimization that is not related to the above rank - minimization heuristic by some regularization will not result in a successful recovery .for example , the solution of the convex programs ] with samples will generally yield a solution that is very different from .such estimators generally require samples to recover .the analogous result holds for compressed sensing of sparse vectors .there ones require minimization of the norm of the vector , a convex heuristic for vector - sparsity . 
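the nuclear - norm heuristic above is easy to exercise numerically . the sketch below recovers a rank - 1 symmetric matrix from random gaussian measurements with cvxpy ; gaussian sensing matrices are a standard stand - in for maps satisfying the restricted isometry property ( the paper's setting would use , e.g. , random paulis ) , and the sizes are our illustrative choices :

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
d, m = 16, 100       # ambient symmetric dimension is d(d+1)/2 = 136 > m

v = rng.normal(size=(d, 1))
M0 = v @ v.T         # rank-1 symmetric ground truth

# random symmetric Gaussian sensing matrices, a standard stand-in for
# measurements with the restricted isometry property
A = [(B + B.T) / 2 for B in rng.normal(size=(m, d, d))]
p = np.array([np.trace(Ai @ M0) for Ai in A])

M = cp.Variable((d, d), symmetric=True)
constraints = [cp.trace(A[i] @ M) == p[i] for i in range(m)]
cp.Problem(cp.Minimize(cp.normNuc(M)), constraints).solve()
print(np.linalg.norm(M.value - M0))   # small, up to solver tolerance
```

with the number of samples below the ambient dimension , plain unconstrained least squares would generally return one of the infinitely many consistent matrices of higher rank , in line with the discussion above .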
in what follows , we specialize the compressed sensing paradigm to the case of positive matrix recovery , and particular to quantum - state tomography .there , the aim is to recover the state of the system , , which has the key property of positivity , . *the role of positivity in compressed sensing quantum tomography * our central result is summarized in the following theorem : + * theorem 1 .* let be a positive semidefinite matrix with , and let ] .then , according to theorem 1 , is the only density matrix within the set of positive hermitian matrices of any rank that yields the measurement probabilities .geometrically , as observed in , theorem 1 states that the rank - deficient subset of the positive matrices cone is `` pointed . '' therefore , under the promise that and corresponds to compressed sensing measurements , the space of matrices that satisfy =\bm{p} ] , such that , and where corresponds to compressed sensing measurements , then the solution to =\bm{p}\ , \ , { \rm and } \ , \ , \rho \geq 0,\ ] ] or to -\bm{p}\vert\;\ ; { \rm s.t.}\ ; \rho \geq 0,\ ] ] where is a any convex function of , and is any norm function , is unique : . by confining the feasible set of matrices to positive matrices ,we ensure that the measurement record uniquely identifies from the set of all density matrices , and thus any convex function of or the measurement error may serve as a cost function .for example , this result applies to maximum-(log)likelihood estimation where .we thus conclude that when the feasible set of density matrices is constrained to be physical ( i.e. , have positive eigenvalues ) , any quantum tomography protocol whose sensing map corresponds to compressed sensing measurements will exhibit the compressed sensing effect .we do not include a trace constraint in the convex programs above . in the noiseless case considered hereit is redundant .because the data came from a trace - preserving quantum measurements , the unique solution must be a normalized quantum state . as discussed in the supplementary information, the constraints and , taken together , immediately imply that is the only density matrix consistent with the noiseless data .when we consider the important case of noisy measurements , the consequence trace constraint is nontrivial , as we discuss in the next section .+ * robustness to measurement noise and model imperfection * so far , we have discussed the ideal case of a noiseless measurement record , where in the context of quantum tomography , denoted a probability vector .the compressed sensing methodology , however , assures a robust reconstruction of the signal in the presence of measurement noise .our analysis inherits this crucial feature . in a realistic scenario ,we allow for a noisy measurement record , +\bm{e} ] ( least - squares ) , and ] , and ( iii ) trace minimization : ] .this yields compressed sensing measurements for rank- " if it guarantees a robust recovery of matrices with rank by solving a nuclear - norm minimization program , e.g. , the compressed sensing heuristic , -\bm{f}\vert_2\leq\epsilon,\ ] ] where is the noisy measurement record , . when the matrix is promised to have rank ,the number of sufficient samples is of order , with possible logarithmic corrections , and the distance between the reconstruction and is , where . in this sense ,the reconstruction is robust , " and compressed sensing when .an analogous definition holds in the case of sensing maps for sparse vector reconstruction . 
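to see the central claim in action : under the same measurement model , dropping the nuclear norm entirely and keeping only positivity still recovers the state . the sketch below ( real symmetric matrices as a stand - in for hermitian ones ; the sizes and the gaussian sensing map are again our illustrative assumptions ) solves a plain least - squares problem with positivity as its only structural constraint :

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
d, m = 16, 100
v = rng.normal(size=(d, 1))
rho0 = v @ v.T
rho0 /= np.trace(rho0)                # rank-1, unit-trace, PSD "state"
A = [(B + B.T) / 2 for B in rng.normal(size=(m, d, d))]
p = np.array([np.trace(Ai @ rho0) for Ai in A])

rho = cp.Variable((d, d), symmetric=True)
residual = cp.hstack([cp.trace(A[i] @ rho) for i in range(m)]) - p
# plain least squares; positivity is the only structural constraint
# (no nuclear-norm term and no trace constraint)
cp.Problem(cp.Minimize(cp.sum_squares(residual)), [rho >> 0]).solve()
print(np.linalg.norm(rho.value - rho0))   # small for typical draws
```

no trace constraint is imposed either : as discussed above , in the noiseless case the positivity constraint alone pins down the unique consistent state for measurement maps of the compressed sensing type .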
a sufficient condition that a sensing map yields compressed sensing measurements for matrix reconstruction is if it satisfies the restricted isometry property ." the map satisfies the restricted isometry property for rank- if there is some constant such that , \vert_2 ^ 2\leq(1+\delta_r)\vert { m}\vert_{\rm f}^2,\ ] ] holds for all hermitian matrices with rank , where .the smallest constant for which this property holds is called the restricted isometry constant . with small isometry constant , the sensing map acts almost like an isometry when applied to rank matrices , and thus allows us to effectively invert the measurement data to determine the matrix .depending on the context , there are various results in the compressed sensing literature that apply for different values of the isometry constant . for example ,cands and collaborators , show that the compressed sensing theory is applied when ( see supplementary information section b ) .our results are general and apply whenever the sensing map corresponds to compressed sensing measurements that assures robust recovery through the solution of eq . while the restricted isometry property is sufficient , our results are applicable in other cases , such as those described in where a robust recovery is guaranteed by generic rank - one projections , or by projectors onto random elements of an approximate 4-design . * numerical experiments . * in our numerical experiments ,we simulate independent measurements of random pauli bases on a haar - random pure state of dimension , .the measurement record , given by the frequency of outcomes , , is generated by sampling times from the probability distribution . here is the vector of povm elements , each corresponding to a tensor product of projectors onto the eigenbasis of pauli observables , , where indexes the series of , , and .the measurement record is then used in various estimators .we measure the performance by the average infidelity over 10 random pure states , . 50 donoho ,d. l. compressed sensing ._ ieee trans . inf .th . _ * 52 * , 1289 - 1306 ( 2006 ) .cands , e. j. , romberg , j. k. , and tao , t. robust uncertainty principles : exact signal reconstruction from highly incomplete frequency information ._ ieee trans . inf .th . _ * 52 * , 489 - 509 ( 2006 ) .cands , e. j. , romberg , j. k. , and tao , t. stable signal recovery from incomplete and inaccurate measurements .pure appl .math . _ * 59 * , 1207 - 1223 ( 2006 ) .cands , e. j. the restricted isometry property and its implications for compressed sensing ._ comptes rendus mathematique _ * 346 * , 589 - 592 ( 2008 ) .cands , e. j. , and recht , b. exact matrix completion via convex optimization .math . _ * 9 * , 717 - 772 ( 2009 ) .cands , e. j. , and plan , y. matrix completion with noise .ieee _ 98 , 925 - 936 ( 2010 ) .cands , e. j. , and tao , t. the power of convex relaxation : near - optimal matrix completion ._ ieee trans . inf .th . _ * 56 * , 2053 - 2080 ( 2010 ) .recht , b. , fazel , m. , and parrilo , p. a. guaranteed minimum - rank solutions of linear matrix equations via nuclear norm minimization ._ siam review _ * 52 * 471 - 501 ( 2010 ) .cands , e. j. , and plan , y. tight oracle inequalities for low - rank matrix recovery from a minimal number of noisy random measurements ._ ieee trans . inf .th . _ * 57 * , 2342 - 2359 ( 2011 ) .kosut , r. l. quantum process tomography via l1-norm minimization .preprint at http://arxiv.org/abs/0812.4323 ( 2008 ) .gross , d. , liu , y .- k . ,flammia , s. t. , becker , s. 
, and eisert , j. quantum state tomography via compressed sensing .lett . _ * 105 * , 150401 ( 2010 ) .liu , y .- k .universal low - rank matrix recovery from pauli measurements ._ advances in neural information processing systems ( nips ) _ * 24 * 1638 - 1646 ( 2011 ) .shabani , a. , kosut , r. l. , mohseni , m. , rabitz , h. , broome , m. a. , almeida , m. p. , fedrizzi , a. , and white , a. g. efficient measurement of quantum dynamics via compressive sensing .lett . _ * 106 * , 100401 ( 2011 ) .flammia , s. t. , gross , d. , liu , y .- k . , and eisert , j. quantum tomography via compressed sensing : error bounds , sample complexity and efficient estimators ._ new j. phys . _* 14 * , 095022 ( 2012 ) .liu , w .- t . , zhang , t. , liu , j .- y . , chen , p .- x . , and yuan , j .- m .experimental quantum state tomography via compressed sampling .lett . _ * 108 * , 170403 ( 2012 ) .sanders , j. n. , saikin , s. k. , mostame , s. , andrade , x. , widom , j. r. , marcus , a. h. , and aspuru - guzik , a. compressed sensing for multidimensional spectroscopy experiments ._ j. phys .chem . lett . _ * 3 * , 2697 - 2702 ( 2012 ) .smith , a. , anderson , b. e. , sosa - martinez , h. , riofro , c. a. , deutsch , i. h. , and jessen , p. s. quantum control in the cs ground manifold using radio - frequency and microwave magnetic fields . _lett . _ * 111 * , 170502 ( 2013 ) .schwemmer , c. , tth , g. , niggebaum , a. , moroder , t. , gross , d. , ghne , o. , and weinfurter , h. experimental comparison of efficient tomography schemes for a six - qubit state . _ phys .lett . _ * 113 * , 040503 ( 2014 ) .rodionov , a. v. , veitia , a. , barends , r. , kelly , j. , sank , d. , wenner , j. , martinis , j. m. , kosut , r. l. , and korotkov , a. n. compressed sensing quantum process tomography for superconducting quantum gates . _ phys .b _ * 90 * , 144504 ( 2014 ) .tonolini , f. , chan , s. , agnew , m. , lindsay , a. , and leach , j. reconstructing high - dimensional two - photon entangled states via compressive sensing ._ * 4 * , 6542 ( 2014 ) .kueng , r. , rauhut h. , and terstiege u. low rank matrix recovery from rank one measurements ._ ( e - pub ahead of print 30 july 2015 ; doi : 10.1016/j.acha.2015.07.007 ) .scott , a. j. tight informationally complete quantum measurements ._ j. phys . a : math ._ * 39 * , 13507 ( 2006 ) .heinosaari , t. , mazzarella , l. , and wolf , m. m. quantum tomography under prior information ._ * 318 * , 355 - 374 ( 2013 ) .carmeli , c. , heinosaari , t. , schultz , j. , and toigo , a. tasks and premises in quantum state determination ._ j. phys . a : math .* 47 * , 075302 ( 2014 ) .dariano , g. m. , perinotti , p. , and sacchi , m. f. informationally complete measurements and group representation .b : quantum semiclass . opt . _ * 6 * , s487 ( 2004 ) .bruckstein , a. m. , elad , m. , and zibulevsky , m. on the uniqueness of nonnegative sparse solutions to underdetermined systems of equations ._ ieee trans . inf .th . _ * 54 * , 4813 ( 2008 ) .cands , e. j. , strohmer , t. , and voroninski , v. phaselift : exact and stable signal recovery from magnitude measurements via convex programming .pure appl .* 66 * , 1241 - 1274 ( 2013 ) .demanet , l. , and hand p. stable optimizationless recovery from phaseless linear measurements .j. fourier anal . appl .* 20 * , 199 - 221 ( 2014 ) .hradil , z. quantum - state estimation .a _ * 55 * , r1561 ( 1997 ) .riofro , c. private communication . software for disciplined convex programming can found at http://cvxr.com/. teo , y. s. , zhu , h. 
, englert , b .-g . , ehek , j. , and hradil , z. quantum - state reconstruction by maximizing likelihood and entropy ._ * 107 * , 020404 ( 2011 ) .a description of the admm algorithms and their application can be found at http://stanford.edu/ boyd / admm.html .eisert , j. private communication . * acknowledgments * + we thank jens eisert and carlos a. riofro for stimulating discussions . in particular we thank j.e . for his insights regarding the proof of theorem 1 and c.a.r . for initial work that led to supplementary information section c and the development of numerical methods used here .a.k . and i.h.d .acknowledge the support of nsf grants phy-1307520 and phy-1212445 .r.l.k partially supported by the aro muri grant w911nf-11 - 1 - 0268 .+ * author contributions * + all authors contributed ideas .performed the calculations .all authors wrote the manuscript . +* additional information * + correspondence and requests for materials should be addressed to a.k . +* competing financial interests * + the authors declare no competing financial interests .* section a : proof of theorem 1 * in a direct extension of bruckstein _ et al . _ , we first show that under the appropriate conditions , positivity implies that the set =\bm{p},\ ; m\geq0\} ] , where the elements of the vector are some matrices , , .suppose that the span of is strictly positive , namely , with a ( strictly ) positive matrix .this allows us perform a change of representation to an auxiliary problem . defining , ={{\rm tr}}(b^{-1}\bm{{e}}b^{\intercal-1 } \odot) ] for some with .if satisfies the restricted isometry property with constant , then the set =\bm{p},\;z\geq0,\;{{\rm tr}}z = c\} ] must have a trace larger than , thus it is necessarily not in the set =\bm{p},\;z\geq0,\;{{\rm tr}}z = c\} ] contains only one element , so does the set =\bm{p},m\geq0\}$ ] given that satisfies the restricted isometry property with constant .in general , it is required to find a transformation of the sensing map that yields with .this general result can be applied to the specific case of quantum tomography , where now , a positive - semidefinite density matrix , and the elements of the vector form a ( trace preserving ) povm . in this case , we can choose , a vector whose elements are all 1 , then , and thus .therefore , in this particular case , .note that in order to show the generality of our result in cs , we have chosen to present arguments in the course of the proof that apply to general positive matrices and sensing maps and only then to apply it specifically to the quantum tomography case . in the quantum case , however , this theorem follows directly , without the need for the construction of bruckstein _et al._. for a trace - preserving povm , it follows immediately that and .therefore , for quantum tomography all of the arguments above that are made with relation to and can be made on and directly , and theorem 1 follows as extension of , applied to positive matrices . consider the following heuristic suppose that is an arbitrary rank matrix .let be the singular value decomposition of where is the list of ordered singular values .we let be the part of corresponding to its largest singular values . by definition to the smallest singular values of , i.e. , the ` tail ' of .to bound we use the following lemma . +* lemma 2 . 
*suppose and let be a matrix such that .then the solution to obeys where , and are constants depending only on the isometry constant .+ lemma 2 , is somewhat different than lemma 3.2 proved in .however the proof of lemma 3.2 applies directly to lemma 2 to bound we use the result of lemma 2 which give an upper bound on .the only assumption regarding that entered the proof of lemma 2 is that it is a feasible matrix , .however , is a feasible matrix for the problem of since by its definition it minimizes .therefore , necessarily , .consider the two minimization programs and where as before , is a linear map , and is the record , where is the density matrix and denotes the noise .similarly to ref . , we take the map to be of the form , where , and , , are matrices represent the measurement operators . inspired by the formulation of measurement we further assume that and . + * lemma 3 * for a given map and a record , if , then the two convex programs and are the mathematically equivalent .since the objective functions and the constrains of the two convex programs are linear or quadratic , both programs have zero duality gap , thus a strong duality holds for them both . to prove the lemma we construct and solve the dual problem of each ( primal ) program and than show that for the solutions of the two corresponding dual problems coincide .since there is no duality gap for these problems , this implies that the solutions to the two primal problems , first , equal to the solutions of the dual problems and , second , coincide with each other , as claimed .the ( conic ) lagrangian of is given by , with the dual variable ( lagrange multipliers ) , and .the dual function is obtained by , which is given by the condition . using we get and therefore , with , and .the dual problem of thus reads in fact we can solve this program exactly .equation , , together with implies a solution for . therefore the condition now reads .moreover .plugging all that in equation , we obtain the solution of this problem is given by taking the minimum value of , , that is , .since we have a strong duality in this program we get that let be the argument that solves , then , next , we consider the problem of which is equivalent to , thus , the ( conic ) lagrangian function of this problem is given by , with .the dual function is obtained by , which is given by the condition .using , we get and therefore , with .the dual problem thus reads similarly to the previous case , we can solve this program exactly .equation , , together with implies a solution for .therefore , the condition now reads , that is , .moreover .plugging all that in equation , we obtain the solution to this problem is given by taking the maximum value of , , i.e. , .since we have a strong duality in this program we get that let be the argument that solves , then and . the problem of finds a matrix which has the minimal trace and satisfies . using the value of in , means that the program finds the matrix which has the minimal and satisfies .we showed that the solution is such that the minimal value is .this implies that every element in the set satisfies .therefore , we conclude that , the solution of necessarily satisfies . 
this in turn implies that both programs and return the same solution with and . lastly , we remark that while the proof of equivalence was given here using the two - norm , it holds for any norm . therefore , the mathematical equivalence between the two programs also holds if we replace the two - norm that appears in them by any other norm .
characterizing complex quantum systems is a vital task in quantum information science . quantum tomography , the standard tool used for this purpose , uses a well - designed measurement record to reconstruct quantum states and processes . it is , however , notoriously inefficient . recently , the classical signal reconstruction technique known as `` compressed sensing '' has been ported to quantum information science to overcome this challenge : accurate tomography can be achieved with substantially fewer measurement settings , thereby greatly enhancing the efficiency of quantum tomography . here we show that compressed sensing tomography of quantum systems is essentially guaranteed by a special property of quantum mechanics itself : the mathematical objects that describe the system in quantum mechanics are matrices with nonnegative eigenvalues . this result has an impact on the way quantum tomography is understood and implemented . in particular , it implies that the information obtained about a quantum system through compressed sensing methods exhibits a new sense of `` informational completeness . '' this has important consequences for the efficiency of data taking in quantum tomography , and enables us to construct informationally complete measurements that are robust to noise and modeling errors . moreover , our result shows that one can expand the numerical tool - box used in quantum tomography and employ highly efficient algorithms developed to handle large matrices on a large - dimensional hilbert space . while we mainly present our results in the context of quantum tomography , they apply to the general case of positive semidefinite matrix recovery .
constraint programming ( cp ) is a widely used and efficient technique to solve combinatorial optimization problems .however in practice many problems are over - constrained ( intrinsically or from being badly stated ) .several frameworks have been proposed to handle over - constrained problems , mostly by introducing _ soft constraints _ that are allowed to be ( partially ) violated .the most well - known framework is the partial constraint satisfaction problem framework ( pcsp ) , which includes the max - csp framework that tries to maximize the number of satisfied constraints . since in this frameworkall constraints are either violated or satisfied , this objective is equivalent to minimizing the number of violations .it has been extended to the _ weighted - csp _ , associating a degree of violation ( not just a boolean value ) to each constraint and minimizing the sum of all weighted violations .the _ possibilistic - csp _ associates a preference to each constraint ( a real value between 0 and 1 ) representing its importance .the objective of the framework is the hierarchical satisfaction of the most important constraints , that is , the minimization of the highest preference level for a violated constraint .the _ fuzzy - csp _ is somewhat similar to the possibilistic - csp but here a preference is associated to each tuple of each constraint .a preference value of 0 means the constraint is highly violated and 1 stands for satisfaction .the objective is the maximization of the smallest preference value induced by a variable assignment .the last two frameworks are different from the previous ones since the aggregation operator is a function instead of addition .max - csps are typically encoded and solved with one of two generic paradigms : valued - csps and semi - rings .another approach to model and solve over - constrained problems involves _ meta - constraints _ .the idea behind this technique is to introduce a set of domain variables that capture the violation cost of each soft constraint . by correctly constraining these variables it is possible to replicate the previous frameworks and even to extend the modeling capability to capture other types of violation measures .namely the authors argue that although the max - csp family of frameworks is quite efficient to capture local violation measures it is not as adequate to model violation costs involving several soft constraints simultaneously . by defining ( possibly global ) constraints on sucha behaviour can be easily achieved .the authors propose to replace each soft constraint present in a model by a disjunctive constraint specifying that either and the constraint is hard or and is violated .this technique allows the resolution of over - constrained problem within traditional cp solvers .comparatively few efforts have been invested in developing soft versions of common global constraints .global constraints are often key elements in successfully modeling real applications and being able to easily and effectively soften such constraints would yield a significant improvement in flexibility . in this paperwe study two global constraints : the widely known global cardinality constraint ( ) and the new constraint . 
for each of thesewe propose new violation measures and provide the corresponding filtering algorithms to achieve domain consistency .all the constraint softening is achieved by enriching the underlying graph representation with additional arcs that represent possible relaxations of the constraint .violation costs are then associated to these new arcs and known graph algorithms are used to achieve domain consistency .the two constraints studied in this paper are useful to model and solve personnel rostering problems ( prp ) .the prp objective is typically to distribute a set of working shifts ( or days off ) to a set of employees every day over a planning horizon ( a set of days ) .the is a perfect tool to restrict the number of work shifts of each type ( day , evening , and night for instance ) performed by each employee .other types of constraints involve sequences of shifts over time , typically forbidding non ergonomic schedules .the constraint has the expressive power necessary to cope with the complex regulations found in many organizations .since most real rostering applications are over - constrained ( due to lack of personnel or over - optimistic scheduling objectives ) , soft versions of the and constraints promise to significantly improve our modelling flexibility .this paper is organized as follows .section [ background ] presents background information on constraint programming and the softening of ( global ) constraints . in section [ gcc ] and [ reg ]we describe the softening of the and the constraint respectively .both constraints are softened with respect to two violation measures .we also provide corresponding filtering algorithms achieving domain consistency .section [ agg ] discusses the aggregation of several soft ( global ) constraints by meta - constraints .finally , a conclusion is given in section [ conclusion ] .we assume familiarity with the basic concepts of constraint programming . for a thorough explanation of constraint programming , see .a constraint satisfaction problem ( csp ) consists of a finite set of variables with finite domains such that for all , together with a finite set of constraints , each on a subset of .a constraint is defined as a subset of the cartesian product of the domains of the variables that are in .a tuple is a solution to a csp if for every constraint on the variables we have .a constraint optimization problem ( cop ) is a csp together with an objective function to be optimized .a solution to a cop is a solution to the corresponding csp that has an optimal objective function value .[ def : hac ] a constraint on the variables is called domain consistent if for each variable and value , there exist values in , such that .our definition of domain consistency corresponds to hyper - arc consistency or generalized arc consistency , which are also often used in the literature .a csp is domain consistent if all its constraints are domain consistent .a csp is inconsistent if it has no solution .similarly for a cop .when a csp is inconsistent it is also said to be over - constrained .it is then natural to identify soft constraints , that are allowed to be violated , and minimize the total violation according to some criteria . for each soft constraint , we introduce a function that measures the violation , and has the following form : this approach has been introduced in and was developed further in . 
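before turning to the violation measures themselves , the cost - variable idea is simple to express in a modern solver . the following minimal sketch ( using google's or - tools cp - sat solver , a tooling choice of ours , not of the paper ) attaches a 0/1 violation variable to each soft binary constraint and minimizes the weighted sum of violations , in the spirit of the weighted - csp framework :

```python
from ortools.sat.python import cp_model

# three soft difference constraints on a 3-cycle over a two-value
# domain: the problem is over-constrained, so at least one must fail
model = cp_model.CpModel()
x = [model.NewIntVar(0, 1, f"x{i}") for i in range(3)]
weights = [3, 1, 1]                      # illustrative importance weights
viol = []
for i in range(3):
    j = (i + 1) % 3
    v = model.NewBoolVar(f"viol{i}")
    model.Add(x[i] + x[j] == 1).OnlyEnforceIf(v.Not())  # x[i] != x[j]
    model.Add(x[i] == x[j]).OnlyEnforceIf(v)            # violated
    viol.append(v)
model.Minimize(sum(w * v for w, v in zip(weights, viol)))

solver = cp_model.CpSolver()
solver.Solve(model)
print([solver.Value(xi) for xi in x],
      [solver.Value(v) for v in viol], solver.ObjectiveValue())
```

on this over - constrained cycle the solver sacrifices one of the weight - 1 constraints and reports objective 1 , leaving the important weight - 3 constraint satisfied .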
there may be several natural ways to evaluate the degree to which a global constraint is violated and these are not equivalent usually .a standard measure is the variable - based cost : [ def : varcost ] given a constraint on the variables and an instantiation with , the variable - based cost of violation of is the minimum number of variables that need to change their value in order to satisfy the constraint .alternative measures exist for specific constraints .for example , if a constraint is expressible as a conjunction of binary constraints , the cost may be defined as the number of these binary constraints that are violated . for the soft and the soft constraint, we will introduce new violation measures , that are likely to be more effective in practical applications .a global cardinality constraint ( ) on a set of variables specifies the minimum and maximum number of times each value in the union of their domains should be assigned to these variables .rgin developed a domain consistency algorithm for the , making use of network flows .a variant of the is the cost- , which can be seen as a weighted version of the . for the cost-a weight is assigned to each variable - value assignment and the goal is to satisfy the with minimum total cost . throughout this section, we will use the following notation ( unless specified otherwise ) .let denote a set of variables with respective finite domains .we define and we assume a fixed but arbitrary ordering on . for ,let , with .finally , let be a variable with finite domain , representing the cost of violation of the .[ def : gcc ] we first give a generic definition for a soft version of the .[ def : softgcc ] (x , l , u , z ) = \{(d_1 , \dots , d_n , \tilde{d } ) \mid &d_i \in d_i , \tilde{d } \in d_z , \\ & { \rm violation}_{{\textup{\texttt{soft\_gcc}}}[\star]}(d_1 , \dots , d_n ) \leq \tilde{d } \ } , \end{array}\ ] ] where defines a violation measure for the .in order to define measures of violation for the , it is convenient to introduce the following functions . given , define for all let } ] in terms of the above functions .[ lem : varcost ] given , }(x ) = \max \left ( \sum_{d \in d_x } { \rm overflow}(x , d ) , \sum_{d \in d_x } { \rm underflow}(x , d ) \right)\ ] ] provided that the variable - based cost of violation corresponds to the minimal number of re - assignments of variables until both and .assume .variables assigned to values with can be assigned to values with , until . in order to achieve , we still need to re - assign the other variables assigned to values with .hence , in total we need to re - assign exactly variables . similarly when we assume .if ( [ eq : assumption ] ) does not hold , there is no variable assignment that satisfies the .+ without assumption ( [ eq : assumption ] ) , the variable - based violation measure for the can not be applied .therefore , we introduce the following value - based violation measure , which can also be applied when assumption ( [ eq : assumption ] ) does not hold .[ def : valcost ] for the value - based cost of violation is we denote the value - based violation measure for the by } ] , minimizing the variable - based violation . an assignment corresponds to the arc with . 
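before completing the graph construction , the two measures just defined can be sketched directly ; the bounds and assignments below are illustrative :

```python
from collections import Counter

def gcc_violations(assignment, lower, upper):
    """Variable-based and value-based violation of gcc(X, l, u) for a
    complete assignment, following the overflow/underflow definitions.
    The variable-based formula assumes sum(l) <= n <= sum(u)."""
    counts = Counter(assignment)
    over = {d: max(counts.get(d, 0) - upper[d], 0) for d in upper}
    under = {d: max(lower[d] - counts.get(d, 0), 0) for d in lower}
    var_cost = max(sum(over.values()), sum(under.values()))
    val_cost = sum(over.values()) + sum(under.values())
    return var_cost, val_cost

l = {'a': 1, 'b': 1, 'c': 1}
u = {'a': 2, 'b': 2, 'c': 2}
print(gcc_violations(['a', 'a', 'a', 'b'], l, u))   # -> (1, 2)
```

the value - based cost can also be obtained as a minimum - cost flow , anticipating the graph construction described in the following paragraphs . the sketch below is an equivalent reformulation rather than the paper's literal graph : the demand ( lower - bound ) arcs are emulated by cost -1 arcs , and the constant sum of the lower bounds is added back at the end :

```python
import networkx as nx

def soft_gcc_value_cost(domains, lower, upper):
    """Minimum value-based violation over all assignments consistent
    with the domains, computed as a min-cost flow."""
    n = len(domains)
    G = nx.DiGraph()
    G.add_node("s", demand=-n)
    G.add_node("t", demand=n)
    values = set(lower) | set(upper)
    for dom in domains:
        values |= set(dom)
    for i, dom in enumerate(domains):
        G.add_edge("s", ("x", i), capacity=1, weight=0)
        for d in dom:
            G.add_edge(("x", i), ("v", d), capacity=1, weight=0)
    for d in values:
        l, u = lower.get(d, 0), upper.get(d, n)
        G.add_edge(("v", d), ("lo", d), capacity=l, weight=0)
        G.add_edge(("lo", d), "t", capacity=l, weight=-1)   # reward meeting l
        G.add_edge(("v", d), ("ok", d), capacity=u - l, weight=0)
        G.add_edge(("ok", d), "t", capacity=u - l, weight=0)
        G.add_edge(("v", d), ("hi", d), capacity=n, weight=1)  # overflow
        G.add_edge(("hi", d), "t", capacity=n, weight=0)
    cost, _ = nx.network_simplex(G)
    return cost + sum(lower.values())

l = {'a': 1, 'b': 1, 'c': 1}
u = {'a': 2, 'b': 2, 'c': 2}
doms = [{'a'}, {'a'}, {'a'}, {'b'}]
print(soft_gcc_value_cost(doms, l, u))   # -> 2, matching the value above
```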
by construction ,all variables need to be assigned to a value and the cost function exactly measures the variable - based cost of violation .the graph corresponds to a particular instance of the cost- .hence , we can apply the filtering procedures developed for that constraint directly to the [ var ] .the [ var ] also inherits from the cost- the time complexity of achieving domain consistency , being where and .note that also consider the variable - based cost measure for a different version of the soft .their version considers the parameters and to be variables too .hence , the variable - based cost evaluation becomes a rather poor measure , as we trivially can change and to satisfy the .they fix this by restricting the set of variables to consider to be the set , which corresponds to our situation .however , they do not provide a filtering algorithm for that case . for the value - based violation measure, we adapt the graph in the following way .we add arc sets and , with demand for all and capacity further , we again apply a cost function , where let the resulting graph be denoted by .consider the csp (x , l , u , z)\\ \texttt{minimize } z \end{array}\ ] ] where , , , and . in figure[ fig : gcc].c the graph for the with respect to value - based cost is presented .a minimum - cost flow in the graph corresponds to a solution to the ] is domain consistent if and only if and where denotes the cost of a shortest path from to in the residual graph . from flow theory we know that , given a minimum - cost flow in , if we enforce arc to be in a minimum - cost flow in , where is the shortest path in .in order for a value to be consistent , the cost of a minimum - cost flow that uses should be less than or equal to . by the above fact, we only need to compute a shortest path from to instead of a new minimum - cost flow . +a minimum - cost flow in can be computed in time ( see ) , where again and .compared to the complexity of the [ var ] , we have a factor instead of .this is because computing the flow for [ val ] is dependent on the number of arcs rather than on the number variables .a shortest path in can be computed in time .hence the with respect to the value - based violation measure can be made domain consistent in time as we need to check arcs for consistency .when in (x , l , u , z) ] , for any sequence of values taken by the variables of we have .our first instantiation of the distance function yields the variable - based cost : the number of positions in which two strings of same length differ is called their _hamming distance_. intuitively , such a distance represents the number of symbols we need to change to go from one string to the other , or equivalently the number of variables whose value must change . using the hamming distance for in the previous definition , becomes the variable - based cost .another distance function that is often used with strings is the following : the smallest number of insertions , deletions , and substitutions required to change one string into another is called the _ edit distance_. 
it captures the fact that two strings that are identical except for one extra or missing symbol should be considered close to one another .for example , the edit distance between strings `` bcdea '' and `` abcde '' is two : insert an a at the front of the first string and delete the a from its end .the hamming distance between the same strings is five : every symbol must be changed .edit distance is probably a better way to measure violations of a constraint .we provide a more natural example in the area of rostering . given a string , we call _ stretch _ a maximal substring of identical values . we often need to impose restrictions on the length of stretches of work shifts , and these can be expressed with a constraint .suppose stretches of s and s must each be of length and consider the string `` abbaabbaab '' : its hamming distance to a string belonging to the corresponding regular language is since changing either the first to a or to an has a domino effect on the following stretches ; its edit distance is just since we can insert an at the beginning to make a legal stretch of s and remove the at the end . in this case , the edit distance reflects the number of illegal stretches whereas the hamming distance is proportional to the length of the string .for both cost measures , we proceed by modifying the layered directed graph built for the `` hard '' version of into graph .before , we added an arc from to if for some ; now we relax it slightly to any .this only makes a difference if the domains of the variables are not initially full .arcs are never removed in but their labels are updated instead .the label of an arc is generalized to the invariant ; as values are removed from the domain of variable , they are also removed from the corresponding s .the cost of using an arc for variable - value pair will be zero if belongs to and some positive integer cost otherwise .this cost represents the penalty for an individual violation . in the remainder of the section we will consider unit costs but the framework also makes it possible to use varying costs , e.g. to distinguish between insertions and substitutions when using the edit distance. the graph on the left at figure [ soft - digraph ] is a shorthand version of for the automaton of figure [ digraph ] .since all values in are considered , the same arcs appear between consecutive layers .what changes from one layer to the other are the labels .taking into account _ substitutions _, common to both hamming and edit distances , is immediate from the previous modification .it is not difficult to see that the introduction of costs transforms a supporting path in the domain consistency algorithm for into a zero - cost path in the modified graph .the cost of a shortest path from in the first layer to a member of in the last layer corresponds to the smallest number of variables forced to take a value outside of their domain .[ thm : cost - eval ] a minimum - cost path from to in corresponds to a solution to ] is domain consistent on and bound consistent on if and only if and where and denotes the cost of a shortest path from to in . computing shortest paths from the initial state in the first layer to every other node and from every node to a final state in the last layer can be done in time refers to the number of transitions in the automaton . ] through topological sorts because of the special structure of the graph . 
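for the hamming - style cost , the layered - graph computation reduces to a simple dynamic program over dfa states , one layer per variable . the sketch below is ours , with unit substitution costs :

```python
def soft_regular_cost(domains, delta, q0, finals, sub_cost=1):
    """Cheapest accepted word of length n for the DFA (delta, q0,
    finals), where position i pays sub_cost if its symbol lies outside
    D(x_i): the Hamming-style violation of regular(X, A)."""
    INF = float("inf")
    dist = {q0: 0}                        # layer 0: initial state
    for dom in domains:                   # one layer per variable
        nxt = {}
        for q, c in dist.items():
            for (state, sym), q2 in delta.items():
                if state != q:
                    continue
                step = c + (0 if sym in dom else sub_cost)
                if step < nxt.get(q2, INF):
                    nxt[q2] = step
        dist = nxt
    return min((dist.get(f, INF) for f in finals), default=INF)

# DFA for (ab)*: alternating stretches of length one
delta = {('0', 'a'): '1', ('1', 'b'): '0'}
domains = [{'a'}, {'a'}, {'b'}, {'b'}]    # "aabb" is not in (ab)*
print(soft_regular_cost(domains, delta, '0', {'0'}))   # -> 2
```

with edit distance , insertion and deletion arcs are added and the layer - by - layer sweep must be replaced by dijkstra's algorithm , as discussed next .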
that computation can also be made incremental in the same way as in ; the same result was independently obtained in . we however go further by considering edit distance , for which insertions and deletions are allowed as well . for _ deletions _ we need to allow `` wasting '' a value without changing the current state . to this effect , we add to an arc , with , if it is not already present in the graph . to allow _ insertions _ , inspired by ε - transitions in dfas , we introduce some special arcs between nodes in the same layer : if then we further add an arc with fixed positive cost . figure [ soft - digraph ] provides an example of the resulting graph ( on the right ) . unfortunately , those special arcs modify the structure of the graph , since cycles ( of strictly positive cost ) are introduced . consequently , shortest paths can no longer be computed through topological sorts . an efficient implementation of dijkstra's algorithm increases the time complexity to . regardless of this increase in computational cost , theorems [ thm : cost - eval ] and [ thm : cost - filt ] can be generalized to hold for [ edit ] as well . the preceding sections have introduced filtering algorithms based on different violation measures for two soft global constraints . if these filtering techniques are to be effective , especially in the presence of soft constraints of a different nature , they must be able to cooperate and communicate . even though there are many avenues for combining soft constraints , the objective almost always remains to minimize constraint violations . we propose here a small extension to the approach of , where meta - constraints on the cost variables of soft constraints are introduced . we illustrate this approach with the newly introduced . let be a set of soft constraints and the variable indicating the violation cost of . the _ soft global cardinality aggregator ( sgca ) _ is defined as (z , l , u , z_{\rm agg}) , where is the interval defining the allowed number of occurrences of each value in the domain of and is the cost variable based on the violation measure . when all constraints are either satisfied or violated ( ) the max - csp approach can easily be obtained by setting , , and reading the number of violations in . the sgca could also be used as in to enforce homogeneity ( in a soft manner ) or to define other violation measures , like restricting the number of highly violated constraints . for instance , we could wish to impose that no more than a certain number of constraints are highly violated ; since we cannot guarantee that this is possible , the sgca allows us to state this wish without the risk of creating an inconsistent problem . more generally , by defining the values of and accordingly , it is possible to limit ( or at least attempt to limit ) the number of violated constraints at each violation cost . another approach could be to set all to 0 and adjust the violation function so that higher violation costs are penalized more . the use of soft meta - constraints , when possible , is also an alternative to the introduction of disjunctive constraints , since they need not be satisfied for the problem to be consistent . in the original meta - constraint framework , similar behaviour can be established by applying a cost- to . for instance , we can define for each pair ( ) a cost which penalizes higher violations more . with the , this cost function can be stated as .
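to make the aggregation concrete , here is a small cp - sat sketch ( again our tooling choice , not the paper's ) in which the violation variables of three toy soft constraints are themselves constrained : at most one constraint may be highly violated ( cost of 2 or more ) , and the total violation is minimized subject to that cardinality requirement :

```python
from ortools.sat.python import cp_model

model = cp_model.CpModel()
x = [model.NewIntVar(0, 4, f"x{i}") for i in range(3)]
model.Add(x[0] == x[1])                  # hard: all variables equal
model.Add(x[1] == x[2])

targets = [0, 4, 4]                      # toy soft constraints x[i] == t
z = []
for i, t in enumerate(targets):
    diff = model.NewIntVar(-4, 4, f"d{i}")
    model.Add(diff == x[i] - t)
    zi = model.NewIntVar(0, 4, f"z{i}")  # violation cost |x[i] - t|
    model.AddAbsEquality(zi, diff)
    z.append(zi)

# meta-constraint over the cost variables: at most one soft constraint
# may be highly violated (cost >= 2)
high = []
for i, zi in enumerate(z):
    h = model.NewBoolVar(f"high{i}")
    model.Add(zi >= 2).OnlyEnforceIf(h)
    model.Add(zi <= 1).OnlyEnforceIf(h.Not())
    high.append(h)
model.Add(sum(high) <= 1)

model.Minimize(sum(z))
solver = cp_model.CpSolver()
solver.Solve(model)
print([solver.Value(v) for v in x], [solver.Value(v) for v in z])
# -> [4, 4, 4] and [4, 0, 0]: one constraint is sacrificed entirely
```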
note , however , that for this variant of the we have , so the will be much more efficient than the cost- , as was discussed at the end of section [ gcc ] . in fact , the sgca can be checked for consistency in time and made domain consistent in time ( where and ) , whenever and for any cost function . we have presented soft versions of two global constraints : the global cardinality constraint and the regular constraint . different violation measures have been presented , and the corresponding filtering algorithms achieving domain consistency have been introduced . these new techniques are based on the addition of `` relaxation arcs '' to the underlying graph and the use of known graph algorithms . we have also proposed to extend the meta - constraint framework for combining constraint violations by using the soft version of . since these two constraints are very useful for solving personnel rostering problems , the next step is the implementation of these algorithms , in order to model such problems and benchmark the new constraints . n. beldiceanu , m. carlsson , and t. petit . . in _ proceedings of the tenth international conference on principles and practice of constraint programming ( cp 2004 ) _ , volume 3258 of _ lncs _ . springer , 2004 . h. fargier , j. lang , and t. schiex . selecting preferred solutions in fuzzy constraint satisfaction problems . in _ proceedings of the first european congress on fuzzy and intelligent technologies _ , 1993 . t. petit , j .- c . régin , and c. bessière . meta constraints on violations for over - constrained problems . in _ proceedings of the 12th ieee international conference on tools with artificial intelligence ( ictai ) _ , pages 358 - 365 , 2000 . t. petit , j .- c . régin , and c. bessière . . in _ proceedings of the seventh international conference on principles and practice of constraint programming ( cp 2001 ) _ , volume 2239 of _ lncs _ , pages 451 - 463 . springer , 2001 .
we describe soft versions of the global cardinality constraint and the regular constraint , with efficient filtering algorithms maintaining domain consistency . for both constraints , the softening is achieved by augmenting the underlying graph . the softened constraints can be used to extend the meta - constraint framework for over - constrained problems proposed by petit , régin and bessière .
suppose that we have written , in a sort of table , the statistical data collected from a group of experiments whose nature can be classical , quantum , or something else . suppose that we also want to store this table's data in a compact way . how could we proceed ? in this paper it is shown that , given the situation described above , when we try to store or organise the table's data in a more compact way we find that real vectors can be associated to preparations and results , in a way which , for quantum mechanical phenomena , is essentially the same as hardy's representation of ` states ' and ` measurement outcomes ' . this curious fact may offer new points of view for looking at some of the current ` foundational ' issues in quantum mechanics . the ideas here presented are a summary of those developed in ref . , to which the reader is referred for further details . the emphasis in this paper is on the main idea of a ` table decomposition ' , and on the implications of the latter for various topics discussed in this conference . imagine that we are in a laboratory , performing experiments of various kinds to study some interesting phenomena ; the purpose of the experiments is to statistically study the correlations among different kinds of these phenomena . in general , we try to reproduce a given phenomenon either by controllably preparing it at will , or simply by waiting for its occurrence , in order to observe which concomitant phenomena , or _ results _ , occur . some experiments present common features : for example , part of the preparation can be the same for some of them . we ideally separate each experiment into a _ preparation _ and an _ intervention _ ; the latter also delimits the kind of results which can be obtained , which implies that if we are told a result , we know which intervention was made . we then consider a set of preparations and a set of interventions , with the clause that sensible experiments may be made by combining each of the preparations with each of the interventions ( preparations or interventions which do not satisfy this condition are set aside for the moment ) . thus , suppose that we have different preparations , and a given number of possible interventions , each with a different number of results ( mutually exclusive and exhaustive ) , where the number depends on the particular intervention . the total number of results , counted from all interventions , is . through repetitions of the experiments , or through theoretical assumptions , or just by analogy with other experiments which we have already seen and which we judge similar to those now under study , we can write down a table with the probabilities that we assign to every result , for every intervention and preparation . the table may look like the following : [ table of assigned probabilities , with one column per preparation and one row per result of each intervention , omitted . ] such a table has , notwithstanding the limiting infinite size , rank , and so we can associate to every preparation and to every result a - dimensional vector . the preparation vectors , however , lie on a - dimensional ( affine ) hyper - plane , as in the numerical example previously discussed . it can be shown indeed that the resulting set of preparation vectors is equivalent to the standard bloch - sphere for two - level quantum systems . this was just an example , but all quantum - mechanical concepts like density matrix , positive - operator - valued measure , and completely positive map can be expressed in an equivalent table - vector formalism ( e.g.
, the relation with the trace rule is quickly shown in the appendix ) . the following discussion concentrates on the possible relations between the ideas presented hitherto and ideas presented by other authors in this conference . the reader is referred to ref . for a more general and detailed discussion . in sect . [ sec : decom ] we quickly introduced the preparations , interventions , and results , , , and the probabilities which make up the table . let us define them more clearly . symbols like and represent propositions which together describe an actual , well - defined procedure to set up an experiment , e.g. _ _ etc . , and _ _ etc . the separation of the experiment's description into the two propositions is not unique , and indeed more kinds of separations can be considered . a symbol like represents a proposition which describes the results of an experiment , e.g. _ _ ; it is then clear that it depends on the particular experiment being performed . the probability is consequently defined as where is a proposition representing the rest of the experimental details and our prior knowledge ( it may also be thought of as representing the ` agent ' ) . we are thus considering probability theory as extended logic , an approach which will prove to be , in the following , powerful , flexible , and intuitive at once .
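a tiny numerical illustration of the table decomposition described above may help ( the numbers are invented ; rows are results grouped by intervention , columns are preparations ) . factoring the table through its rank yields preparation and result vectors whose scalar products return the probabilities :

```python
import numpy as np

# rows: results grouped by intervention; columns: preparations.
# within each intervention the entries of a column sum to one.
P = np.array([[0.9, 0.5, 0.1],     # intervention 1, result 1
              [0.1, 0.5, 0.9],     # intervention 1, result 2
              [0.5, 0.9, 0.5],     # intervention 2, result 1
              [0.5, 0.1, 0.5]])    # intervention 2, result 2

# factor P = R @ S with inner dimension k = rank(P): each column of S
# is a preparation vector, each row of R a result vector, and every
# probability is recovered as a scalar product
U, sing, Vt = np.linalg.svd(P)
k = int(np.sum(sing > 1e-10))      # numerical rank of the table
R = U[:, :k] * sing[:k]            # result vectors as rows
S = Vt[:k, :]                      # preparation vectors as columns
assert np.allclose(R @ S, P)
print(k)                           # here the rank is 3
```

the singular value decomposition is used here only as one convenient choice among the many factorizations with inner dimension equal to the table's rank .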
the difference between these probabilities and the table probabilities $p(R \mid M \land S_j \land Q)$ is somehow analogous to the difference between initial conditions and equations of motion in classical mechanics: the theory concerns only the latter, while the former have to be specified on a case-by-case basis.

the probabilities of the results of a given intervention and a given preparation form a probability distribution, because the results were arranged so as to be mutually exclusive and exhaustive. this implies that, for two results $R'$ and $R''$ of a given intervention $M$ and for a given preparation $S_j$, we also have the trivial identity
$$p[(R' \lor R'') \mid M \land S_j \land Q] = p(R' \mid M \land S_j \land Q) + p(R'' \mid M \land S_j \land Q).$$
but with probability theory as logic we can also evaluate, for a given preparation, the disjoint probability for the results $R'$ and $R''$ of two _different_ interventions $M'$ and $M''$, just using the product and sum rules:
$$\begin{aligned}
p[(R' \land M') \lor (R'' \land M'') \mid S_j \land Q]
&= p(R' \mid M' \land S_j \land Q)\, p(M' \mid S_j \land Q)
 + p(R'' \mid M'' \land S_j \land Q)\, p(M'' \mid S_j \land Q)\\
&= \left[\bm{r}'\, p(M' \mid S_j \land Q) + \bm{r}''\, p(M'' \mid S_j \land Q)\right] \cdot \bm{s}_j,
\end{aligned}$$
where it is assumed that $p(M' \lor M'' \mid S_j \land Q) = 1$, i.e., we are sure that one or the other intervention was performed. the content of the formula above is intuitive: the occurrence of the result $R'$ implies that the intervention $M'$ was performed, and analogously for the result $R''$ and the intervention $M''$. then the probability of getting the one or the other result depends in turn on the probability that the one or the other intervention was made, hence these probabilities appear in the last line of the above equation. however, as already said, the probabilities of these interventions are _not_ contained in the table, but must be given on a case-by-case basis.

as a result, the two disjoint probabilities in the equations above ``behave'' differently, and the reason for this is intuitively clear. however, we can partially trace in this fact the source of much research and discussion on partially ordered lattices and quantum logics for the set of intervention results. roughly speaking, the point is that, for a classical system, there is the theoretical possibility of joining all possible interventions (measurements) in a single ``total intervention'': the table associated to a classical system can then be considered as having only one intervention; thus one needs never consider the case of the second equation. however, such a ``total intervention'' is excluded in quantum mechanics, and one is thus forced to consider that case.
from the point of view of probability theory as logic, instead, there is no need for non-boolean structures, thanks to the possibility of changing and adapting, by means of bayes' theorem, the _context_ (also called ``prior knowledge'' or ``data'') of a probability, i.e., the proposition to the right of the conditional symbol ``$\mid$''.

on the other hand, we may have the following scenario: we have repeated instances of a given preparation, but we do not know which. by performing interventions on these instances and observing the results, we can estimate which preparation is being made. introducing the proposition $D$ representing the results thus obtained, we have
$$p(S_j \mid D \land Q) \propto p(D \mid S_j \land Q)\, p(S_j \mid Q),$$
a standard ``inverse-inference'' result of bayesian analysis (cf. ref., ch. 4). the likelihood $p(D \mid S_j \land Q)$ can be written in terms of scalar products of result and preparation vectors, but the prior distribution $p(S_j \mid Q)$ depends on the knowledge that one has in each specific case. if we now want to _predict_ which result will occur in a new intervention, we have the following probability:
$$p(R \mid M \land D \land Q) = \bm{r} \cdot \sum_j \bm{s}_j\, p(S_j \mid D \land Q),$$
i.e., we can effectively associate the vector $\sum_j \bm{s}_j\, p(S_j \mid D \land Q)$ to the unknown preparation.

a further kind of scenario is this: we have a brand-new kind of preparation, i.e., a new phenomenon, still untested. it has _no place_ in our probability table; yet _we think that we could reserve a new column for it without making substantial changes to our table's decomposition (by which we mean that the table's rank would not change)_. given this, in order to associate a vector to this new preparation we proceed as in the preceding scenario, performing interventions and observing results. the result is that the preceding equations also apply in this case.
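the two estimation scenarios above amount to elementary bayesian updating; the following sketch (with invented numbers, and a single two-outcome intervention for brevity) illustrates both the inverse inference about an unknown preparation and the resulting predictive distribution:

```python
import numpy as np

# Two candidate preparations with known result probabilities for a
# single two-outcome intervention (numbers invented for illustration).
p_result_given_prep = np.array([
    [0.8, 0.2],   # preparation 1: p(r1), p(r2)
    [0.3, 0.7],   # preparation 2: p(r1), p(r2)
])
prior = np.array([0.5, 0.5])          # p(S_j | Q), assigned case by case

# Observed data D: counts of each result over repeated instances.
counts = np.array([7, 3])             # seven r1's, three r2's

# Bayes' theorem: p(S_j | D, Q) proportional to p(D | S_j, Q) p(S_j | Q).
likelihood = np.prod(p_result_given_prep ** counts, axis=1)
posterior = likelihood * prior
posterior /= posterior.sum()
print("posterior over preparations:", posterior)

# Predictive probability for the next result:
# p(r | M, D, Q) = sum_j p(r | M, S_j, Q) p(S_j | D, Q)
predictive = posterior @ p_result_given_prep
print("predictive distribution for next result:", predictive)
```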
note that there is no conflict between our talking about ``unknown preparations'' and fuchs' criticism of the term ``unknown quantum state''. a preparation, as we have seen, is a well-defined procedure (that can be shown or described to others, etc.) to set up a given physical situation; on the other hand, fuchs' meaning of ``quantum state'' is ``density matrix'', which corresponds (more or less) to the preparation _vector_ instead. thus, consider the two sentences ``it is unknown to me which kind of laser and of beam splitter are used in this experiment'' and ``i do not know with which probability the detector behind the vertical filter will click'': the latter sentence is nonsensical from a bayesian point of view, because there are no ``unknown'' degrees of belief; but the former sentence is unquestionably meaningful. thus, even if the _preparation_ is unknown to us, we can _always_ associate a preparation _vector_ to it.

some remarks have already been made in sect. [sec:decom], after the derivation of the scalar-product formula, with regard to the fact that the vectors $\bm{s}_j$ and $\bm{r}$ do not have any physical meaning _separately_: a given preparation vector tells us nothing if we do not have an intervention-result vector; even less if we know nothing about the set of intervention results. it follows that the preparation vectors also lack any _probabilistic_ meaning _per se_: they are _not_ probabilities _nor_ collections of probabilities; they are just mathematical objects which yield probabilities when combined in a given way with objects of similar kind. this, in particular, is true for density matrices, as we have seen that they are just a particular case of preparation vectors. it is slightly incorrect, as well, to say that probabilities are ``encoded'' or ``contained'' in the (quantum) preparation vectors: rather, they are _parts_ of an encoding. from these considerations, the quantum state (the density matrix) appears to be _part_ of a state of belief, and not the ``whole'' state of belief. perhaps the point is that caves, fuchs, and schack's notion of a ``quantum'' state of belief implicitly assumes the _existence_ and the _particular structure_ of the whole set of quantum positive-operator-valued measures (i.e., the interventions). this is an important difference from a ``usual'' degree of belief, which does not need to be combined with other mathematical objects to reveal its content.

it has already been remarked that the kind of vector representation arising from the table decomposition is essentially the same as hardy's. the derivation presented here can be seen as a sort of shortcut for his derivation, but it implies something more. hardy supposes that it is possible to represent a preparation by means of a $k$-dimensional vector, with $k$ smaller than the total number of results, because most physical theories have some structure which relates different measured quantities; but the reasoning behind the decomposition of sect. [sec:decom] shows that this possibility exists even without a theory that describes the data (indeed, the question arises: has this possibility any physical meaning at all?). in any case, the idea of a ``probability table'' and its decomposition has probably very little usefulness in experimental applications, but it provides a very simple approach to study the mathematical and geometrical structures of classical and quantum theories, and offers a different way to look at their ``foundational'' and ``interpretative'' issues.
this approach is even more general than other standard ones based, e.g., on $c^*$-algebras, or even jordan–banach algebras, or on convex state-spaces. (a related remark: the set of preparations is only part of the set of normalised positive linear functionals on (the convex hull of) the set of results; the latter would be a square circumscribed on the given one.) thus, with the idea of a ``probability table'' we can very easily implement ``toy theories'' or models like those of spekkens and kirkpatrick (cf. also the issue raised by terno), which can then be compared to classical or quantum mechanics using a unique, common formalism.

the author would like to thank gunnar björk, ingemar bengtsson, christopher fuchs, lucien hardy, åsa ericsson, anders månsson, and anna for advice, encouragement, and many useful discussions.

it is shown that the ``scalar product formula'' includes also the ``trace rule'' of quantum mechanics (see also ref., sect. 5). a preparation is usually associated in quantum mechanics to a _density matrix_ $\rho$, and an intervention result to a _positive-operator-valued-measure element_ $E$; both are hermitian operators in a hilbert space of dimension $N$. the probability of obtaining the result for a given intervention on the preparation is given by the trace formula
$$p = \operatorname{tr}(E\rho).$$
the hermitian operators form a linear space of _real_ dimension $N^2$; one can choose $N^2$ linearly independent hermitian operators $\{F_k\}$ as a basis for this linear space. these can also be chosen (basically by gram–schmidt orthonormalisation) to satisfy $\operatorname{tr}(F_j F_k) = \delta_{jk}$. both $\rho$ and $E$ can be written as linear combinations of the basis operators:
$$\rho = \sum_k s_k F_k, \qquad E = \sum_k r_k F_k,$$
where the coefficients $s_k$ and $r_k$ are real. using these expansions, the trace formula becomes
$$p = \operatorname{tr}(E\rho) = \sum_{jk} r_j s_k \operatorname{tr}(F_j F_k) = \bm{r} \cdot \bm{s},$$
where $\bm{r}$ and $\bm{s}$ are vectors in $\mathbb{R}^{N^2}$, q.e.d.
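the appendix derivation can be checked numerically; the sketch below uses the normalised pauli matrices as the hermitian basis for a single qubit ($N = 2$), with an illustrative density matrix and povm element chosen for the example:

```python
import numpy as np

# Normalised Hermitian basis for N = 2 (N^2 = 4 elements),
# satisfying tr(F_j F_k) = delta_jk.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [m / np.sqrt(2) for m in (I2, sx, sy, sz)]

rho = np.array([[0.7, 0.2 - 0.1j],
                [0.2 + 0.1j, 0.3]])          # a density matrix
E = np.array([[0.9, 0.1],
              [0.1, 0.1]])                   # a POVM element (illustrative)

# Coefficients of rho and E in the Hermitian basis are real.
s = np.array([np.trace(F @ rho).real for F in basis])
r = np.array([np.trace(F @ E).real for F in basis])

print("trace rule:     ", np.trace(E @ rho).real)
print("scalar product: ", r @ s)             # identical, q.e.d.
```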
the idea of writing a table of probabilistic data for a quantum or classical system, and of decomposing this table in a compact way, leads to a shortcut for hardy's formalism, and gives new perspectives on foundational issues.
in this work a collective of interacting automata in a one-dimensional geometric environment is considered as an integral automata-like computational object. the problem of defining unambiguously what the state of such a dispersed object, moving on the environment, is, and of how to measure the amount of state transitions in this case, is quite non-trivial. as opposed to finite state automata, where the measure of state transition is one state per unit of discrete time, for a computational dynamic object distributed on the environment different approaches to the definition of the measure of state transition are possible. in this paper we propose a way of defining what a state is in the context of a collective of stateless automata. it allows us to define it on the basis of the relative positioning of the automata, i.e. on the basis of its geometry. the proposed approach distinguishes two types of states: internal and external states of the automata collective. the measure of state transition of a collective of automata, introduced in this paper, we name the proper time of this collective.

the proposed research is inspired by three major research directions, which are: 1) collectives of automata in finite automata theory; 2) discrete models of physical processes and the projection of the physical world into an informational space of symbols and languages for computer modelling of the physical world; 3) the study of the notion of time. each of these research directions has an extensive bibliography confirming their importance. the basis for this research is the notion of relativity as given by poincaré in his popular works. the concluding comparison of the obtained results with some formulas of special relativity theory shows that the formulated principles are invariant in relation to the linguistic means of expression: the semantic affinity of the principles (e.g., coordinate, velocity, reference frame) that form the language of our discrete model to the principles of the language of special relativity theory resulted in their syntactic affinity (e.g., velocity-addition formula, ``length contraction/extension'' formula). the way of forming the language (velocity, time, reference frame) of interaction, and of interpreting this interaction between automata collectives in our model, reflects poincaré's conventional point of view toward the laws of physics. in order to suggest a physical analogy for this model we use the word ``body'' as an alias for ``collective of automata''.

the paper is organized as follows. in section [body] we define the model of a collective of automata. then in section [state] we derive the notions of external and internal states of a collective of automata and study their properties, connecting such notions as coordinate, spatial velocity, proper time velocity and proper time of a collective.

in what follows we use the denotations $\mathbb{Z}$ and $\mathbb{R}$ for the sets of integers and real numbers, respectively. also we denote the domains for the time and space coordinates by $T$ and $X$. initially, in the model definition, we assume that $T$ and $X$ coincide with $\mathbb{Z}$, but then we will extend $X$ to $\mathbb{R}$.

the general framework of the model that we use for this study consists of two main components: an environment, represented by a graph, and a set of stateless automata, which interact with the environment and between themselves. the environment is defined as an infinite directed graph $G = (V, E)$ with the set of nodes $V$ and the set of edges $E$.
an edge $e \in E$ has an absolute coordinate, denoted $x(e) \in X$, and a direction, denoted $d(e) \in \{-1, +1\}$; the edge with coordinate $x$ and direction $d$ is denoted by $(x, d)$. by the neighbourhood of an edge we understand the pair of edges adjacent to it. the edges $(x, d)$ and $(x, -d)$ will be called opposite edges, and adjacent edges of mutually reversed direction will be called contrary edges.

stateless automata on the environment we name elementary bodies. in the general framework we assume that elementary bodies are coloured in such a way that isomorphic automata have the same colour and non-isomorphic automata have different colours; a finite number of different colours, numbered from 1 onwards, are used. at every moment of time any elementary body is located on an edge of the graph. the input for an elementary body located on an edge is the sequence of the numbers of elementary bodies of each colour located on the edges of its neighbourhood at the same moment of time; this sequence is called the neighbourhood state of the edge. the output of an elementary body is one of two motions: either the straight-line motion or the turn. if the output of an elementary body at a time moment on an edge $(x, d)$ is the straight-line motion, then at the next time moment it is located on the edge $(x + d, d)$, and we say that it does not change its external state. if the output is the turn, then at the next time moment it is located on the edge $(x, -d)$, and we say that the elementary body changes its external state. denoting by $s(t)$ the number of external state changes of an elementary body until the moment of time $t$, we have that $0 \le s(t) \le t$ and also $s(t) = t - l(t)$, where $l(t)$ is the path covered by the body during the period of time $t$. in other words, any elementary body uses the absolute time unit either for one spatial coordinate change in the environment or for one external state transition. we call $s(t)$ the proper time of the elementary body, and its rate of change the proper time velocity.

let us denote by $x(t)$ the absolute coordinate of an elementary body at the moment of time $t$. we denote by $v(t) = x(t+1) - x(t)$ its absolute spatial velocity at the moment of time $t$. we call it a uniform spatial velocity if $v(t)$ is a constant. for example, it follows from the above definitions that any elementary body can have only one of the following uniform spatial velocities: $-1$, $0$, or $+1$. the only state of a stateless automaton (i.e. of an elementary body) we call the internal state. an elementary body is unambiguously defined by the set of input symbols that change its external state. in addition we assume that an elementary body cannot change its external state in any way if its opposite edge is empty.

we call the pair of a space coordinate and a time coordinate the coordinate in the absolute reference frame, and denote it by the column vector $(x, t)^{\mathsf T}$. we also call $X \times T$ the event space. we define the discrete world line of an elementary body in the event space from a time $t_1$ to a time $t_2$ as the sequence of its coordinates $(x(t), t)^{\mathsf T}$, $t = t_1, \dots, t_2$.

a body is an arbitrary finite set of elementary bodies. according to the definition, different bodies may have common parts, and one body can contain another body as a subset. if an elementary body belongs to a body, then we will look at it as an elementary part of this body. an elementary body can be an elementary part of different bodies simultaneously. the following two examples illustrate some of the introduced definitions. any elementary body in both examples changes its external state if and only if its opposite edge is not empty. from this it follows that all elementary bodies are isomorphic. we assume that all elementary bodies in each example are enumerated by integer numbers.

[example 1] at time 0, for each $n$, the elementary body with the number $n$ is located on an edge with direction $+1$ if $n$ is an even number and with direction $-1$ otherwise. we define the body $A$ as a finite set of these elementary bodies.
[example 2] at time 0, for each $n$, the elementary body with the number $n$ has the coordinate $n$ and is located on the edge with the direction $+1$ if $n$ satisfies a fixed parity condition and on the edge with the direction $-1$ otherwise. in this example we define the body $B$ analogously, as a finite set of these elementary bodies.

let a body consist of $n$ elementary bodies enumerated by the numbers $1, \dots, n$. then the absolute (average) coordinate of the body at time $t$ is the value $x_B(t) = \frac{1}{n}\sum_{i=1}^{n} x_i(t)$, and the absolute spatial velocity of the body at time $t$ is the value $v_B(t) = x_B(t+1) - x_B(t)$. the bodies $A$ and $B$ from the above examples have uniform spatial velocities. from the definitions it follows that the maximal possible positive or negative spatial velocities of any body are $+1$ and $-1$.

since the coordinate values of a body can be non-integers, let us extend the absolute reference frame from $\mathbb{Z}$ to $\mathbb{R}$. for a non-integer time $t + \delta$ with $0 \le \delta < 1$, we say that an elementary body has the coordinate $x(t) + \delta\,(x(t+1) - x(t))$ and is located on the edge it occupies at time $t$. now we can define the (continuous) world line of an elementary body in the time interval from $t_1$ to $t_2$ as the extension of its discrete positions in the event space: the polygonal curve through the points $(x(t), t)^{\mathsf T}$, $t = t_1, \dots, t_2$. if a segment of the world line corresponds to straight-line motion of the body, then its direction can only be that of the vector $(+1, 1)^{\mathsf T}$ or of the vector $(-1, 1)^{\mathsf T}$, and we say that the segment corresponds to _an elementary move_ of the body.

in this section we define what it means that two bodies are in the same external or internal state, rather than what the external or internal state of a body in fact is. if needed, the notion of state can be generally defined as follows. since the relation ``to be in the same external state'' is an equivalence relation, the external states can be defined as the equivalence classes of this relation. the same holds for the definition of internal state.

a body interacting with other bodies exerts influence on them and at the same time is also under their influence. it is quite natural to describe such influences on the basis of the notion of a state of a body. our definition of a state of a body takes into consideration the relative positioning of its elementary parts in the environment. the changes of relative positioning of elementary parts in a body can affect the body entirely or a particular part of it. this motivates the question of how to measure the amount of state transition. before the definition of the notion of a state, we introduce the denotation $\tau_B(t)$ for the measure of state transition of a body $B$ with the flow of time. a casual meaning of $\tau_B(t)$ is the ``age'' of the body at the moment $t$. we call $\tau_B(t)$ the proper time of $B$. independently of the definition of $\tau_B$, we introduce the velocity of state transition of the body as $u_B(t) = d\tau_B/dt$. we call this value the proper time velocity of $B$ at the moment $t$ of the absolute time. it follows that a body does not change its external state if all its elementary bodies do not change their external states. it means that two bodies are in the same external state if one can be transformed into another by isometric straight-line shifts in the environment applied to all its elementary parts.
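to make the definitions concrete, the following sketch simulates a few elementary bodies under one specific rule set consistent with the examples above (a body turns iff its opposite edge is occupied); the initial configuration is an invented illustration, and the bookkeeping shows the relation $s(t) = t - l(t)$ between proper time and path covered:

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class Elementary:
    x: int        # absolute coordinate of the occupied edge
    d: int        # direction: +1 or -1
    s: int = 0    # number of external state changes so far (proper time)

def step(bodies):
    # A body turns iff its opposite edge (same coordinate, reversed
    # direction) is occupied; otherwise it moves one edge straight on.
    occupied = Counter((b.x, b.d) for b in bodies)
    for b in bodies:
        if occupied[(b.x, -b.d)] > 0:
            b.d = -b.d          # turn: one external state change
            b.s += 1
        else:
            b.x += b.d          # straight-line motion: one unit of path

bodies = [Elementary(0, +1), Elementary(0, -1), Elementary(5, -1)]
T = 20
for _ in range(T):
    step(bodies)

for b in bodies:
    path = T - b.s              # each time unit is either a move or a turn
    print(f"x={b.x:3d}  proper time s={b.s:2d}  path covered={path:2d}")
```

note how the two mutually opposite bodies oscillate in place (spatial velocity 0, proper time velocity 1), while an unobstructed body moves at maximal spatial velocity and accumulates no proper time.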
for any body $B$, if $|v_B(t)| = 1$ then $u_B(t) = 0$. the statement follows from the fact that any change of the external state of a body is not possible in the case of maximal spatial velocity of all its elementary parts.

the notion of external state of a body allows us to consider bodies as an automata-like model of algorithms. but since two bodies with different absolute spatial velocities are definitely in different external states, we cannot speak of them as realizations of the same algorithm. for example, there is no sense in ``asking'' a body to determine its absolute spatial velocity. however, we would like to identify two bodies as the same algorithm even if they move with different spatial velocities. this will be achieved by the introduction of an affine isomorphism of bodies through the definition of an inertial reference frame associated with a body, so that the external state of a body will be presented as a pair of components: the spatial velocity of the body and its internal state, which is spatial-velocity invariant.

the point of introducing the notion of an inertial reference frame associated with a body lies in the ability to consider other bodies in relation to the given one. an example of an inertial reference frame is the absolute reference frame, associated with an immovable body $O$ such that for all $t$ we have $x_O(t) = 0$, $v_O(t) = 0$ and $u_O(t) = 1$, hence $\tau_O(t) = t$. thus, the introduced notions of absolute time, absolute coordinate and absolute spatial velocity implicitly refer to an absolutely motionless body in relation to which objects are considered. the reference frames associated with bodies allow us to make these notions relative.

let us denote (for a pair of bodies $A$ and $B$) by $x_{AB}(t)$, $v_{AB}(t)$, $u_{AB}(t)$ and $\tau_{AB}(t)$ the coordinate, the spatial velocity, the proper time velocity and the proper time of the body $A$ at the moment of time $t$ in the reference frame associated with the body $B$, respectively. by definition we assume that $x_{AA} = 0$, $v_{AA} = 0$, $u_{AA} = 1$ and $\tau_{AA}(t) = t$.

a body is called an inertial body if its spatial velocity and its proper time velocity are both constants. [rem1] it follows that $\tau_{AB}(t) = u_{AB}\, t$ for inertial bodies $A$ and $B$. further we consider only inertial bodies. in addition we assume that the coordinates of the same events in different inertial reference frames are connected by an affine mapping. for any bodies $A$ and $B$ let us denote by $L_{BA}$ the affine mapping that connects the two frames, such that an event with given coordinates in the frame of $A$ has the image coordinates under $L_{BA}$ in the frame of $B$. these assumptions are sufficient to find the form of $L_{BA}$. without loss of generality we assume that the origins of both reference frames are the same; then the mapping $L_{BA}$ is linear. let us work out the form of the transformation matrix.

[lemma_lba] the mapping $L_{BA}$ either holds the directions of the vectors $(+1, 1)^{\mathsf T}$ and $(-1, 1)^{\mathsf T}$ (i.e. these vectors are eigenvectors of the mapping) or permutes their directions. the directions of the reference frame axes are imaginary directions in the event space, but the set of directions of the vectors $(+1, 1)^{\mathsf T}$ and $(-1, 1)^{\mathsf T}$ in the absolute reference frame corresponds to the directions of the ``real'', ``material'' world lines of elementary bodies performing elementary moves, and therefore this set of directions does not depend on reference frames. from this it follows that this set of directions _is invariant_ under any affine transformation. hence for the matrix $L_{BA}$ either both vectors are eigenvectors, or each is mapped onto a multiple of the other.

based on lemma [lemma_lba], the corollary statement follows as a result of straightforward calculations. if $L_{BA}$ holds the two directions fixed, then the reference frames of $A$ and $B$ are said to be in standard configuration. if $L_{BA}$ permutes the two directions, then the reference frames are said to be in symmetric configuration.

[theorem1] in the standard configuration the transformation matrix $L_{BA}$ is diagonal in the basis of the two invariant directions, and its entries are determined by the relative spatial velocity $v_{AB}$ and the proper time velocity $u_{AB}$; the explicit expressions follow by straightforward calculation from the invariance of the two directions.
in the symmetric configuration an analogous result holds; the proof is analogous to that of theorem [theorem1]. in the symmetric configuration the space axes of the reference frames of $A$ and $B$ point in opposite directions; in the standard configuration they point in the same direction. further, for the sake of convenience, we consider reference frames only in the standard configuration. the following corollaries hold for any inertial bodies $A$, $B$, $C$.

[cor3] symmetric relations hold between $v_{AB}, u_{AB}$ and $v_{BA}, u_{BA}$; the equalities can be derived from $L_{AB} = L_{BA}^{-1}$.

[cor_addition] (velocity-addition formula) the spatial velocity $v_{AC}$ is expressed through $v_{AB}$ and $v_{BC}$; this velocity-addition formula is derived from the equation $L_{CA} = L_{CB} L_{BA}$.

[corollary1] (``length contraction/extension'' formula) given inertial bodies $A$, $B$ and $C$ moving so that $A$ and $C$ are mutually at rest, let $l_B$ be the distance between $A$ and $C$ in the reference frame of $B$, and let $l_A$ be the distance between $A$ and $C$ in the reference frame of $A$; then $l_B$ and $l_A$ are related by a constant factor determined by the transformation matrix.

[figure: the bodies $A$, $B$ and $C$ of corollary [corollary1]]

notice that the values of $l_A$ and $l_B$ are constants. without loss of generality we assume that the origins of the frames are chosen conveniently, and let $t'$ be such a moment of time that the relevant events coincide. then the formula of ``length contraction'' follows from the coordinate transformation and theorem [theorem1]. as will be seen from the example at the end of this section, the length ratio may take a value which is less than 1 as well as more than 1. so it means that in our discrete model we have contracting length as well as extending length with respect to different inertial reference frames.

now we give a definition of the internal state of a body. let for bodies $A$ and $B$ there be a bijection between their elementary parts such that all corresponding elementary bodies are isomorphic. we say that $A$ at the moment $\tau_1$ of its proper time and $B$ at the moment $\tau_2$ of its proper time are affine isomorphic iff the affine mapping $L_{BA}$ carries the configuration of the elementary parts of $A$ at $\tau_1$ onto the configuration of the corresponding parts of $B$ at $\tau_2$. two inertial bodies are in the same internal state at some moments of their proper time iff they are affine isomorphic at their respective proper times. the internal state of an inertial body does not depend on its spatial velocity in the absolute reference frame. thus, the external state of an inertial body can be seen as a combination of two components: the spatial velocity of the body and its internal state.

in order to illustrate the concept of affine isomorphism, let us consider the bodies $A$ and $B$ from examples [example1] and [example2]. these bodies are affine isomorphic. the dynamics of the bodies and an illustration of the transformation connecting their reference frames are shown in figure [f_ex2]. from the value of the transformation matrix and corollary [cor3], the relative velocities of the two bodies follow.

[figure: dynamics of the bodies of examples [example1] and [example2] and the transformation between their reference frames]

let us compare the obtained results with formulas of special relativity theory. it is interesting to have a look, from our model's viewpoint, at the two equations of time dilation and of length contraction of the special relativity theory.
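for reference, in standard notation these two relations read as follows (added here only for comparison; $\gamma$ is the lorentz factor):

```latex
% time dilation and length contraction of special relativity:
% \Delta t' is the coordinate time interval corresponding to the proper
% interval \Delta\tau of a moving clock, and \ell' is the contracted
% length of an object of rest length \ell.
\begin{align}
  \Delta t' &= \gamma\,\Delta\tau, &
  \ell' &= \ell/\gamma, &
  \gamma &= \frac{1}{\sqrt{1 - v^2/c^2}}.
\end{align}
```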
drawing a proper analogy between them and remark [rem1] and corollary [corollary1], respectively, we can see, due to the general asymmetry of our discrete virtual ``world'', that the coefficient reciprocal to the lorentz factor has different ``physical'' meanings in these formulas: in the first pair of equations it plays the role of the proper-time-velocity coefficient, and in the second pair that of the length-ratio coefficient, if we consider a ``moving'' body with respect to a ``rest'' body.

we would like to position this paper as an introductory research work on the fundamental notion of a state for a distributed automata object, and to draw attention to a number of problems related to such a notion. it is shown that a measure of state transition of such an object can be described by the language of internal and external state changes. we hope that the proposed analogies between automata theory and relativity theory can generate further interest in the topic, towards a better understanding of such an analogy. apart from the study of the notion of state, one can ask a number of more technical questions which were not the intention of this work, but which are nevertheless important research issues. in particular, it was not considered what kind of values the transformation matrix can in principle take. also, the algorithmic universality of the model is not proved, though a proof of this fact is simple enough by simulation of cellular automata. it will be interesting to consider the model in higher dimensions and the case of non-inertial bodies and an inhomogeneous environment. at the same time, various problems are unsolvable in the given model because of the peculiarity of the model, e.g. the question whether a body can define its absolute velocity. this seemingly natural question is meaningless in the considered model and is therefore an algorithmically unsolvable problem in it. these and a number of other questions will be considered in future publications of the authors.

+ *acknowledgements*: the author acknowledges the useful discussions on this work with dr. valeriy kozlovskyy and would like to thank him for his valuable comments that helped us to improve the presentation. the work of the authors was supported in part by nato collaborative linkage grant 983162.

i. s. grunskyy, a. n. kurgansky, dynamics of collectives of automata in a discrete environment // prikl. mat. mekh., 15, 50-56 (2007) (in russian).
g. kilibarda, v. b. kudryavtsev, collectives of automata in labyrinths, discrete mathematics and applications, 13(5):429-466 (2003).
e. yu. kondrashina, e. v. litvintseva, and d. a. pospelov, representation of the knowledge of time and space in intelligent systems, moscow: nauka, 1989 (in russian).
o. kurganskyy, dynamics of a ``body'' in an information environment, the 10th international conference ``stability, control and rigid bodies dynamics'' (icscd08), donetsk, ukraine, iamm nasu, 2008, p. 59.
h. poincaré, la science et l'hypothèse (1902).
h. poincaré, la valeur de la science (1905).
h. poincaré, science et méthode (1908).
h. poincaré, dernières pensées (1913).
m. l. tsetlin, automaton theory and modeling of biological systems, new york: academic press, 1973.
z. d. usmanov, modelirovanie vremeni (modelling of time), moscow: znanie, 1991 (in russian).
v. i. varshavsky, collective behaviour of automata, moscow: nauka, 1973 (in russian).
this is an introductory paper in which we raise and study two fundamental problems related to the analysis of a computational dynamic object distributed on an environment: * how to define unambiguously what the state of such an object is? * how to measure the amount of state transitions in this case? the main idea of the paper is to show that the state of such a computational dynamic object distributed on the environment can be described by the language of internal and external states. the results based on the proposed approach have something in common with special relativity theory and suggest the existence of further connections between automata theory and relativity theory. collective of automata, cellular automata, finite automata theory, special relativity theory
a growing number of seeming universalities have been identified in numerical simulations of dark matter structures. most of these are integrated quantities, such as the density profile, the pseudo phase-space density, and the velocity anisotropy. the cause of these universalities remains, however, essentially unknown. the origin of the universalities may lie in some fundamental property of dark matter, be it some statistical mechanics or the optimization of some generalized entropy. it may also be associated with dynamical effects, like radial orbit instability, or phase mixing or violent relaxation. alternatively, it could just be a ``coincidence'', since all structures have been built up through similar processes of mergers and accretion. a first step towards answering the question of the origin of the integrated universalities is to look at the actual distribution of velocities. also the actual shape of the velocity distribution function (vdf) has been suggested to be universal, which naturally could explain all the integrated universalities.

asking the question about what dark matter structures fundamentally want is different from asking what dark matter structures in an expanding universe actually end up doing. we will therefore not be considering structures from cosmological simulations, since their profiles often depend on merger history and environment. we will instead consider a range of numerical simulations where we have better control of their evolution. in this way we can repeatedly perturb the structures in controlled manners, as well as give the structures sufficient time that phase mixing between individual perturbations may be more complete than is the case in cosmological simulations. a non-trivial dark matter vdf also has direct implications for direct dark matter detection experiments (see e.g. refs. for discussions and references).

we here present numerical evidence that the origin of the shape of the tangential vdf is simple dynamics, hence supporting the idea that dark matter wants to follow very simple dynamical rules. this explains the origin of the velocity anisotropy profile in the inner region of dark matter structures, with no seeming need for advanced statistical mechanical or generalized entropic principles. however, as we will point out, we are still left with an unknown origin of the radial vdf, and hence the density profile and the pseudo phase-space density profile are not explained yet.

below we will explain the surprisingly simple dynamical reason for the full shape of the tangential part of the vdf, and we will perform numerical simulations supporting this conclusion. some of these physical arguments have been presented previously; however, the simulations presented here are significantly improved. in particular, we create a set of controlled perturbations using energy exchange reminiscent of violent relaxation and dynamical friction, which allows the particle distributions to change significantly, without having the structure depart from spherical symmetry. at the same time, the structures are analysed only after convergence to a fully stable configuration has been achieved.

let us consider a particle moving in the smooth and spherical potential of many collisionless particles. the velocity of this particular particle can be decomposed into three components, namely the radial and the two tangential components.
with such a decomposition we can consider all particles in a given radial bin and get the velocity distribution function (vdf) in both the radial and tangential directions. if the structure is non-rotating, then the two tangential vdfs will be identical. it has long been known that the radial and the tangential vdfs are different, and physically this difference may seem very reasonable for equilibrated systems, as we will now explain.

we will first discuss the radial vdf. consider a thin spherical bin at radius $r$. if we consider the velocity components moving outwards in the radial direction, then those must be compensated by particles further out moving inwards. this compensation must depend on the particular density profile of the structure. this is most clearly seen in the eddington inversion method, from which one can easily derive the full radial vdf from the full density profile.

now let us instead consider the tangential velocity components. instantaneously the components are moving in the tangential plane. for particles with the circular speed this means that the component is moving in constant density and constant potential. particles moving slowly in the tangential plane will still be near the same density and potential after a short time interval. that implies that the equilibrium can be achieved simply by having other components in the same radial bin moving in the opposite direction. therefore, whereas the radial vdfs depend on the full radial density profile, the tangential vdfs apparently do not have to concern themselves with other radial bins. the tangential vdf could therefore, in principle, be the same at all radii. this argument only holds instantaneously. particles whose tangential velocity components are high will, after a longer time interval, be moving outwards into lower density regions, and will therefore later have converted their tangential component into both radial and tangential velocity components. effectively this implies that the argument that the tangential velocity component moves in constant potential and constant density only holds for the low velocity particles. below we will use numerical simulations to show that, effectively, the breakdown of this argument happens at a fixed fraction of the escape velocity, $v_{\rm esc}$.

we have argued above that the low velocity component of the tangential vdf should have a simple shape, which should be the shape of collisionless particles moving in a constant potential and constant density. to simulate a uniform medium is rather difficult using n-body simulations, because any power or noise will induce gravitational collapse, which leads to a departure from homogeneity. instead one can make a very simple analytical argument, which allows one to derive the vdf of a homogeneous medium. let us consider a spherical structure which has a known density profile, $\rho(r)$. if one has the density as a function of the relative potential, $\rho(\psi)$, then we can use eddington's method to derive the vdf,
$$f(\epsilon) = \frac{1}{\sqrt{8}\,\pi^2}\, \frac{d}{d\epsilon} \int_0^{\epsilon} \frac{d\rho}{d\psi}\, \frac{d\psi}{\sqrt{\epsilon - \psi}},$$
where $\psi(r)$ is the relative potential as a function of radius, and $\epsilon = \psi - v^2/2$ is the relative energy. eddington's method provides the unique ergodic distribution function. we use this method to find the vdf at all radii.
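eddington's method lends itself to a direct numerical sketch; the following assumes an isotropic hernquist sphere in units $G = M = a = 1$ (so that $\psi = 1/(1+r)$ and $\rho(\psi) = \psi^4/[2\pi(1-\psi)]$), and the simple finite-difference and quadrature choices are illustrative rather than those used in the simulations:

```python
import numpy as np

# Hernquist sphere with G = M = a = 1:
# relative potential psi(r) = 1/(1 + r); density rho(psi) = psi^4 / (2 pi (1 - psi)).
def rho_of_psi(psi):
    return psi**4 / (2.0 * np.pi * (1.0 - psi))

def d2rho_dpsi2(psi, h=1e-5):
    # Simple central finite difference; adequate for an illustration.
    return (rho_of_psi(psi + h) - 2.0 * rho_of_psi(psi) + rho_of_psi(psi - h)) / h**2

def f_eddington(eps, n=4000):
    """f(eps) = 1/(sqrt(8) pi^2) int_0^eps (d2rho/dpsi2) dpsi / sqrt(eps - psi);
    the boundary term vanishes here because drho/dpsi -> 0 as psi -> 0."""
    # Substitute psi = eps sin^2(theta): dpsi / sqrt(eps - psi) = 2 sqrt(eps) sin(theta) dtheta,
    # which removes the integrable endpoint singularity.
    theta = np.linspace(0.0, np.pi / 2.0, n)[1:-1]
    integrand = d2rho_dpsi2(eps * np.sin(theta)**2) * 2.0 * np.sqrt(eps) * np.sin(theta)
    dtheta = theta[1] - theta[0]
    val = np.sum(0.5 * (integrand[1:] + integrand[:-1])) * dtheta   # trapezoid rule
    return val / (np.sqrt(8.0) * np.pi**2)

# The ergodic distribution function at a few relative energies:
for eps in (0.1, 0.3, 0.5):
    print(f"eps = {eps:.1f}   f(eps) = {f_eddington(eps):.4e}")
```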
imagine that the structure is particularly simple, namely with a constant density slope, $\alpha = d\ln\rho/d\ln r$, over a very large radial range. to be concrete, say that $\alpha = -2$ over 20 orders of magnitude in radius, with the profile truncated abruptly inside and outside this range. eddington's method shows us that this (isothermal sphere) has a vdf which is a gaussian at all radii (except for the details arising from the truncation). now consider a more shallow slope. for any value of $\alpha$ we can use eddington's method to calculate the vdf, and at any radius it will have exactly the same shape as a function of the normalized velocity. for each density slope we use this method to find the unique distribution function. finally, we can extrapolate this approach to $\alpha = 0$, which is identical to the case of constant density and constant potential. the shape of the vdf obtained in this limit is precisely the condition for a particle moving in constant density and potential, and this is therefore the shape of the tangential vdf. in the limiting case there is no difference between the radial and tangential directions, so the assumption of isotropy in the derivation is correct, and the technique is thus self-consistent.

we now have an analytical expression for the shape of the tangential vdf at small velocities, and only the normalizations are unknown. these are the overall normalization (which must be related to the density at that radius) and the normalization of the velocity (which is related to the tangential velocity dispersion).

the high velocity components are possibly even simpler to describe. if a velocity component is purely tangential at a given time, then shortly later it will be a combination of tangential and radial (unless it happens to have exactly the circular speed). we should therefore expect that the shapes of the tangential and radial components are similar at high velocities. there is only one complication, namely the normalization. the overall normalization must be identical between the radial and tangential components (since this is just the local density); however, as opposed to the low velocity component discussed above, the normalization of the high velocity components must be absolute. the transition from the _low_ to the _high_ velocity components in a real system must be smooth, and is probably rather non-trivial; however, for simplicity we will here make the approximation that the transition is abrupt, and we make no attempt to make it smooth. practically, we simply assume that the transition always happens near a fixed fraction of $v_{\rm esc}$. we will address this issue further in the discussion section.

the first simulation is a cold collapse, where the inclusion of substructure breaks the spherical symmetry. we distributed particles according to a hernquist density profile with a given scale radius and an outer cutoff. in addition, particles with the same mass as the main halo particles were distributed in identical subhaloes, also having hernquist density profiles, but with a much smaller scale radius and cutoff radius. the centers of the subhaloes were sampled in the same way as the particles in the main halo. the velocities of all the particles were initially zero. we ran the simulation for 200 time units, which corresponds to 200 dynamical times at the scale radius of the initial structure. such a cold collapse is similar to the simulations by van albada (1982). for all the non-cosmological simulations discussed here we used the parallel n-body simulation code gadget2. for further details on the cold collapse, see ref.
[figure: radial (stars) and tangential (diamonds) vdfs in three radial bins; the bins are shifted vertically to improve readability. the red (solid) lines are all of the same theoretical shape in eq. ([eq:ftan]), which is seen to provide an acceptable fit in the low velocity region.]

to sample the vdfs we distribute the particles in radial bins with the same number of particles in each bin. in figure [fig:infall.lin.lin] we show both the radial (blue stars) and tangential (green diamonds) vdf for three radial bins, chosen near three different density slopes (from top to bottom). we also show the predicted shape of the tangential vdf (solid line), which is clearly seen to provide an acceptable fit in the low velocity region. it is also clear that the radial and the tangential vdfs are very different in the low velocity region.

[figure: the same vdfs in lin-log space; the bins are shifted vertically to improve readability. the red (solid) lines are of the theoretical shape for the low velocity region. for the high velocities it is clearly seen that the radial and the tangential vdfs approach each other rapidly.]

to see the details of how the radial and tangential vdfs start agreeing in the high velocity region, we plot the vdfs from the same three radial bins in lin-log space in figure [fig:infall.lin.log]. it is clear that for high velocity particles the radial vdfs (blue stars) are in good agreement with the tangential vdfs (green diamonds). interestingly, the tangential vdfs can be approximated with the theoretical solid curve for low velocities, and with the radial vdfs for high velocities. for the radially innermost bins this transition happens essentially abruptly; only for the radial bins further out does it seem necessary to use a more smooth transition.

in order to test the theoretical claims for the tangential vdf further, we wish to construct a perturbation/equilibration scheme which allows the vdfs to change significantly, without having the structure depart from spherical symmetry. we set up structures in perfect equilibrium. these structures may have any density profile, and have zero anisotropy or follow an osipkov-merritt beta profile. now we increase the value of the gravitational constant by a small fraction. this increases the potential and makes the structure contract, and after a few dynamical times a new equilibrium is reached. next we repeatedly increase or decrease the gravitational constant, and between each change we allow the structure to phase-mix and find a new equilibrium. after 20 such perturbations we use the standard value of $G$, and let the structure relax completely. for further details, see ref.
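schematically, the g-perturbation protocol can be written as the following loop; `evolve` stands in for any collisionless n-body integrator (a hypothetical helper, not part of gadget2 or of the original work), and the amplitude and counts are illustrative choices:

```python
import numpy as np

G_STANDARD = 1.0
AMPLITUDE = 0.1          # fractional change of G per perturbation (illustrative)
N_PERTURB = 20
T_RELAX = 10.0           # dynamical times allowed for phase mixing

def perturbation_cycle(particles, evolve, rng):
    """Apply repeated G-perturbations, each followed by relaxation.
    `evolve(particles, G, t)` is assumed to integrate the collisionless
    dynamics for time t with gravitational constant G."""
    for _ in range(N_PERTURB):
        # Strengthen or weaken gravity by a small fraction ...
        g = G_STANDARD * (1.0 + AMPLITUDE * rng.choice([-1.0, +1.0]))
        # ... and let the structure contract/expand into a new equilibrium.
        particles = evolve(particles, g, T_RELAX)
    # Finally restore the standard value of G and relax completely.
    return evolve(particles, G_STANDARD, 10 * T_RELAX)
```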
in figure [fig:omg19.lin.log] we show three radial bins from a simulation which initially was constructed as an isotropic hernquist profile. this in particular means that the initial conditions (before the g-perturbations were executed) had identical radial and tangential vdfs. now, after the perturbations and subsequent relaxation, there is a large difference between the radial and tangential vdf for small velocities. instead, at high velocities the tangential and radial vdf quickly approach one another. as is also visible from the figure, the tangential vdf is well fitted by the theoretical prediction for small velocities.

[figure: vdfs in three radial bins after the g-perturbations; the bins are shifted vertically to improve readability. all the red (solid) lines are of the same theoretical shape in eq. ([eq:ftan]) for the low velocity region. for the high velocities it is clearly seen that the radial and the tangential vdfs are very similar.]

collisionless particles experience different kinds of energy exchange between each other, in particular through violent relaxation (where the changing potential implies that the particle energies change) and through dynamical friction (which transfers energy from the fast to the slower particles). we therefore consider a perturbation where the spherical symmetry is again conserved; however, we allow the particles to exchange energy amongst each other. this is done in such a way that each radial bin conserves energy, whereby both density and dispersion profiles are unaffected by the perturbation itself. this energy exchange is instantaneous, and the subsequent evolution is with normal collisionless dynamics. after each perturbation we again allow for sufficient phase mixing (see ref. for details). after sufficiently many perturbations (typically 20 or 30) the structures have converged to a stable state, which will not change when exposed to further similar perturbations.
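the precise exchange rule is given in the reference; the sketch below implements one simple rule with the stated property, namely that swapping the speeds of particle pairs within the same radial bin (equal particle masses assumed) leaves the kinetic, and hence total, energy of each bin unchanged while reshaping the local vdf:

```python
import numpy as np

def exchange_energy_in_bins(pos, vel, n_bins=50, rng=None):
    """Swap the speeds of random particle pairs within each radial bin:
    positions are untouched, and for equal masses the summed kinetic
    energy of every pair (hence of every bin) is conserved."""
    rng = rng or np.random.default_rng()
    r = np.linalg.norm(pos, axis=1)
    edges = np.quantile(r, np.linspace(0.0, 1.0, n_bins + 1))
    which = np.clip(np.digitize(r, edges) - 1, 0, n_bins - 1)
    for b in range(n_bins):
        idx = np.flatnonzero(which == b)
        rng.shuffle(idx)
        half = len(idx) // 2
        i, j = idx[:half], idx[half:2 * half]
        vi = np.maximum(np.linalg.norm(vel[i], axis=1), 1e-12)
        vj = np.maximum(np.linalg.norm(vel[j], axis=1), 1e-12)
        # Rescale each velocity vector to its partner's speed,
        # keeping the direction of motion unchanged.
        vel[i] = vel[i] * (vj / vi)[:, None]
        vel[j] = vel[j] * (vi / vj)[:, None]
    return vel
```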
in figure [fig:hjs035.lin.log] we present the vdf from three radial bins in the final structure, taken at three different density slopes. again we see that the final vdfs agree well between the radial and tangential for the high velocity components, and that the low velocity components are well fitted by the analytical expression.

[figure: vdfs in three radial bins after the energy-exchange perturbations; the bins are shifted vertically to improve readability. the red (solid) lines are of the theoretical shape for the low velocity region. for the high velocities it is clearly seen that the radial and the tangential vdfs rapidly approach each other.]

we have demonstrated that three extremely different artificial and controlled perturbations all lead to a tangential vdf which is in good agreement with the theoretical prediction. this result is in good agreement with earlier studies including head-on collisions and galaxy formation, and it provides very strong evidence that the origin of the shape of the dark matter tangential vdf is indeed as simple as explained in sect. [sec:explain]. an important difference from earlier studies is that we have here been investigating structures which have been exposed to controlled perturbations, and analysed only after convergence to a stable configuration has been achieved.

let us recall the idea behind this paper. if we know the radial and the tangential vdf, then we know everything else, such as the phase-space density, the velocity anisotropy and the density profiles. for some idealized structures it is possible to derive the radial vdf directly from the density profile, and our results here imply that in that case we can derive the tangential vdf. that is, _if_ we have the radial vdf, _then_ we can derive the tangential vdf.

in the derivation of the tangential vdf we made no assumptions about the anisotropy of the system, and the tangential vdf should therefore be the same for all systems and at all radii, irrespective of their anisotropy profiles. this statement naturally only holds for realistic systems which have been perturbed and allowed to relax. one can always create systems in quasi-equilibrium states which may even have highly different distribution functions. systems created away from an equilibrium state most often also have very different tangential distribution functions. however, as we are demonstrating in this paper, all systems which are exposed to sufficient perturbations and subsequently allowed to relax to a quasi-equilibrium state will indeed have a tangential vdf of exactly this shape. in figure [fig:beta] we present the anisotropy profiles for the 3 systems considered. in particular, one sees that the anisotropy has a strong radial variation, going from essentially isotropic in the central region to radially dominated orbits in the outer regions. and yet the shape of the tangential vdf is the same at all radii.

[figure: the velocity anisotropy as a function of radius for the structures considered in figures 1-4 (solid line: hjs perturbation; dashed line: g-perturbation; dot-dashed line: infall simulation). the structures all have a strong variation, going from essentially isotropic towards the central region to radial orbits in the outer regions.]

it has previously been suggested that possibly the radial vdf is sufficiently close to a rescaled vdf resulting from the eddington method. we have tested this suggestion by fitting the density profile and then using the eddington method to extract the radial vdf at all radii. however, the resulting vdf is not an accurate representation of the actual radial vdf. this means that we are still not at the point of understanding the radial vdf.

one interesting aspect of the arguments presented for the shape of the tangential vdf is that, for a density slope approaching zero, it should be identical to the radial one. that trend is already clear from the figures, namely that the tangential vdf is suggestively close to the radial one for the innermost bins (upper curves in all figures). to test this further, we selected a radial bin in the inner region (outside of 5 times the softening length for each structure, and also outside a further 30,000 particles). the result is seen in figure [fig:central], where the tangential (dashed) and radial (solid) vdfs are seen to be very similar. the local density slopes at these bins were all shallow (bottom to top in figure [fig:central]).

[figure: central radial (solid) and tangential (dashed) vdfs, with the theoretical shape of eq. ([eq:ftan]) overplotted in red (dot-dashed) lines, just to demonstrate the agreement. the structures included here are the cold collapse, two different g-perturbations and three different hjs2010 perturbations, covering a range of initial density and anisotropy profiles; in particular we here present results for structures which were initially set up with a shallow central density profile. velocities are scaled by the escape velocity and shifted vertically to improve readability. for this figure we selected structures which were created with zero inner slope before the perturbations were applied.]
we have discussed that the transition between high and low velocity may be approximated as a rather sharp transition. this is certainly a good approximation for the inner region, where the density slope is shallow. however, at larger radii it is clear that a more smooth transition would provide a more accurate representation of the tangential vdf. for all the radial bins and for all the structures considered in this paper, we have estimated the best velocity for the transition, and it appears to lie in a narrow range of fractions of the escape velocity. thus, in order to avoid unnecessary fitting parameters, it is a rather good approximation to fix the transition at a constant fraction of $v_{\rm esc}$.

we have demonstrated that _half_ of the distribution function (specifically, the tangential velocity distribution function) for dark matter structures can be understood from simple dynamical arguments. this implies that when we will eventually be able to derive the _other half_ (namely the radial velocity distribution function), then we will understand all the properties of dark matter structures, including the seeming universalities of the density, phase-space density and velocity anisotropy profiles. we saw that the derivation of the tangential vdf did not require any reference to statistical mechanics or generalized entropy, but instead appears as a result of very simple dynamics. it now remains to derive the radial vdf, and it will be interesting to see if this will also be possible based on similar basic dynamical arguments.

*acknowledgements* + the dark cosmology centre is funded by the danish national research foundation. the simulations were performed on the facilities provided by the danish center for scientific computing.
all dark matter structures appear to follow a set of universalities, such as phase-space density or velocity anisotropy profiles; however, the origin of these universalities remains a mystery. any equilibrated dark matter structure can be fully described by two functions, namely the radial and the tangential velocity distribution functions (vdf), and once we understand these two we will understand all the observed universalities. here we demonstrate that if we know the radial vdf, then we can derive and understand the tangential vdf. this is based on simple dynamical arguments about properties of collisionless systems. we use a range of controlled numerical simulations to demonstrate the accuracy of this result. we therefore boil the question of the dark matter structural properties down to understanding the radial vdf.
radio pulsars exhibit dramatic fluctuations in total and polarized flux densities on a diverse range of longitudinal, temporal and spectral scales. as a function of pulse phase, both average profiles and sub-pulses make sudden transitions between orthogonally polarized modes (e.g., refs.). the radiation at a single pulse phase often appears as an incoherent superposition of modes, both orthogonal and non-orthogonal (e.g., refs.). evidence has also been presented for variations that may indicate stochastic generalized faraday rotation in the pulsar magnetosphere. an extensive study of single-pulse polarization fluctuations at 1400 mhz observed that histograms of the linear polarization position angle were broader than could be explained by instrumental noise alone. this modal broadening of the position angle distribution was interpreted as evidence of a superposed randomized emission component. the issue was revisited with a statistical model that described correlated intensity fluctuations of completely polarized orthogonal modes. although the predicted position angle distributions were qualitatively similar to the observations, the measured histograms were wider than those produced by a numerical simulation of modal broadening that included instrumental noise. source-intrinsic noise was later used to explain the excess polarization scatter of bright pulses in an analysis of mode-separated profiles; however, it was given no further consideration in the subsequent statistical treatments. these begin with the reasonable assumptions that the instantaneous signal-to-noise ratio is low and that a sufficiently large number of samples have been averaged, such that the stochastic noise in all four stokes parameters can be treated as uncorrelated and normally distributed. these assumptions form part of the three-dimensional eigenvalue analyses of the stokes polarization vector presented in later works. as in the earlier study, these analyses concluded that modal broadening is due to the incoherent addition of randomly polarized radiation that is intrinsic to the pulsar.

the basic premises of these experiments are valid for the vast majority of pulsar observations; however, they become untenable for the bright sources on which these studies focus. that is, when the instantaneous signal-to-noise ratio is not small, the stokes polarization vector can no longer be treated in isolation, and the correlated self noise intrinsic to all four stochastic stokes parameters must be accounted for. these considerations are particularly relevant to the study of giant pulses. those from the crab pulsar reach extreme brightness temperatures and remain unresolved in observations with nanosecond resolution. consequently, previous analyses of giant pulses have typically presented polarization data at the sampling resolution of the instrument; that is, where the time-bandwidth product is of the order of unity. for example, one study of giant pulses from the crab pulsar used an instrument of narrow bandwidth and presented plots of the stokes parameters at the corresponding sampling resolution. another plotted the total intensity and circular polarization of the strongest giant pulse and interpulse observed from psr b1937+21 at similarly coarse spectral resolution.
using a wide-band baseband recorder, the intensities of left and right circularly polarized radiation from crab giant pulses have been plotted at a time resolution of 2 ns. each of these studies concluded that giant pulses are highly polarized. however, in each experiment the time-bandwidth product was of the order of unity; at this resolution, every discrete sample of the electric field is completely polarized, regardless of the intrinsic degree of polarization of the source. that is, the instantaneous degree of polarization is fundamentally undefined. to study the polarization of intense sources of impulsive radiation at high time resolution, it is necessary to consider averages over small numbers of samples and source-intrinsic noise statistics. these limitations are given careful attention in the statistical theory of polarized shot noise. this seminal work enables characterization of the timescales of microstructure polarization fluctuations via the auto-correlation functions of the total intensity and the degrees of linear and circular polarization (e.g., refs.). it has also been extended to study the degree of polarization via the cross-correlation, as a function of time lag, between the instantaneous total intensity spectra of giant pulses. this paper presents a complementary approach based on the four-dimensional joint distribution and covariance matrix of the stokes parameters, with emphasis placed on the statistical degrees of freedom of the underlying stochastic process. relevant theoretical results are drawn from various sources, ranging from studies of the scattering of monochromatic light in optical fibres (e.g., refs.) to the classification of synthetic aperture radar images (e.g., refs.).

following a brief review of polarization algebra in [sec:review], the joint probability density functions of the stokes parameters at different resolutions are derived in [sec:joint], where the results are compared and contrasted with previous works. the formalism is related to the study of radio pulsar polarization in [sec:application], where the effects of amplitude modulation and wave coherence on the degrees of freedom of the pulsar signal are discussed. finally, the results are utilized to reexamine past statistical analyses of orthogonally polarized modes, randomly polarized radiation, and giant pulse polarimetry. it is concluded that randomly polarized radiation is unnecessary and that the degree of polarization of giant pulses must be more rigorously defined. potential applications of the four-dimensional statistics of polarized noise are proposed in [sec:conclusion].

this section reviews the relevant algebra of both jones and mueller representations of polarimetric transformations. unless otherwise noted, the notation and terminology synthesizes that of the similar approaches to polarization algebra presented in the literature. the polarization of electromagnetic radiation is described by the second-order statistics of the transverse electric field vector, $\mathbf{e}$, as represented by the complex coherency matrix, $\rho = \langle \mathbf{e}\,\mathbf{e}^\dagger \rangle$.
here, the angular brackets denote an ensemble average, $\mathbf{e}^\dagger$ is the hermitian transpose of $\mathbf{e}$, and an outer product is implied by the treatment of $\mathbf{e}$ as a column vector. a useful geometric relationship between the complex two-dimensional space of the coherency matrix and the real four-dimensional space of the stokes parameters is expressed by the following pair of equations: $\rho = \tfrac{1}{2} s_\mu \sigma_\mu$ and $s_\mu = \mathrm{tr}(\sigma_\mu \rho)$. here, $s_\mu$ are the four stokes parameters, einstein notation is used to imply a sum over repeated indices, $\sigma_0$ is the identity matrix, $\sigma_{1,2,3}$ are the pauli matrices, and $\mathrm{tr}$ is the matrix trace operator. the stokes four-vector is composed of the total intensity $s_0$ and the polarization vector $\mathbf{s} = (s_1, s_2, s_3)$. equation ([eqn:combination]) expresses the coherency matrix as a linear combination of hermitian basis matrices; equation ([eqn:projection]) represents the stokes parameters as the projections of the coherency matrix onto the basis matrices. these properties are used to interpret the well-studied statistics of random matrices through the familiar stokes parameters. linear transformations of the electric field vector are represented using complex jones matrices. substitution of $\mathbf{e}' = \mathbf{j}\mathbf{e}$ into the definition of the coherency matrix yields the congruence transformation $\rho' = \mathbf{j}\rho\mathbf{j}^\dagger$, which forms the basis of the various coordinate transformations that are exploited throughout this work. if $\mathbf{j}$ is non-singular, it can be decomposed into the product of a hermitian matrix and a unitary matrix, known as its polar decomposition $\mathbf{j} = \mathbf{h}\mathbf{u}$, where $\mathbf{h}$ is positive-definite hermitian and $\mathbf{u}$ is unitary; both may be taken unimodular. under the congruence transformation of the coherency matrix, the hermitian matrix effects a lorentz boost of the stokes four-vector along a boost axis by a hyperbolic angle. as the lorentz transformation of a spacetime event mixes temporal and spatial dimensions, the polarimetric boost mixes total and polarized intensities, thereby altering the degree of polarization. in contrast, the unitary matrix rotates the stokes polarization vector about a rotation axis by an angle. as the orthogonal transformation of a vector in euclidean space preserves its length, the polarimetric rotation leaves the degree of polarization unchanged. these geometric interpretations promote a more intuitive treatment of the matrix equations that typically arise in polarimetry. boost transformations can be utilized to convert unpolarized radiation into partially polarized radiation, and rotation transformations can be used to choose the orthonormal basis that maximizes symmetry. these properties are exploited in [sec:joint] to simplify the relevant mathematical expressions that describe the four-dimensional joint distribution of the stokes parameters. it proves useful in [sec:joint] to express the coherency matrix as a similarity transformation known as its eigen decomposition. here, the transformation matrix has columns equal to the eigenvectors of $\rho$, and the corresponding eigenvalues are $\lambda_{0,1} = \tfrac{1}{2} s_0 (1 \pm p)$, where $p = |\mathbf{s}|/s_0$ is the degree of polarization. if the signal is completely polarized, then $\lambda_1 = 0$. if the signal is unpolarized, then there is a single 2-fold degenerate eigenvalue, and the eigenbasis is undefined. if the eigenvectors are normalized to form an orthonormal set, then equation ([eqn:eigen]) is equivalent to a congruence transformation by a unitary matrix.
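as a concreteness check, the projection and combination relations above, and the boost/rotation dichotomy, are easy to verify numerically. the following sketch is my own illustration; the pauli-matrix ordering and axis conventions are assumptions, not taken from the paper. it builds a partially polarized coherency matrix, applies a hermitian and a unitary jones matrix, and reports the degree of polarization:

import numpy as np

sigma = [np.eye(2, dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex)]

def stokes(rho):
    # s_mu = tr(sigma_mu rho): projections onto the basis matrices
    return np.array([np.trace(s @ rho).real for s in sigma])

def coherency(s):
    # rho = (1/2) s_mu sigma_mu: linear combination of basis matrices
    return 0.5 * sum(c * m for c, m in zip(s, sigma))

def degree(s):
    return np.linalg.norm(s[1:]) / s[0]

rho = coherency([1.0, 0.3, 0.1, 0.0])                          # partially polarized
boost = np.diag([np.exp(0.2), np.exp(-0.2)]).astype(complex)   # hermitian, unimodular
rot = np.diag([np.exp(0.7j), np.exp(-0.7j)])                   # unitary, unimodular

print("initial p:", round(degree(stokes(rho)), 4))
for jones, label in [(boost, "after boost   "), (rot, "after rotation")]:
    s = stokes(jones @ rho @ jones.conj().T)                   # congruence transformation
    print(label, "p:", round(degree(s), 4))
# the boost alters the degree of polarization; the rotation preserves it.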
in the natural basis defined by the eigenvectors, the eigenvalues are equal to the variances of two uncorrelated signals received by orthogonally polarized receptors described by the eigenvectors. the total intensity is $s_0 = \lambda_0 + \lambda_1$ and the polarized intensity is $|\mathbf{s}| = \lambda_0 - \lambda_1$. that is, the eigen decomposition rotates the basis such that the mean polarization vector points along the major axis, providing cylindrical symmetry about this axis. the congruence transformation of the coherency matrix by any jones matrix $\mathbf{j}$ may be represented by an equivalent linear transformation of the stokes parameters by a real-valued mueller matrix. mueller matrices that have an equivalent jones matrix are called pure, and such transformations are related to the lorentz group. this motivates the definition of an inner product $\langle a, b\rangle = a^{\mathrm{t}}\eta\,b$, where $a$ and $b$ are stokes four-vectors and $\eta = \mathrm{diag}(1, -1, -1, -1)$ is the minkowski metric tensor. the lorentz invariant of a stokes four-vector is equal to four times the determinant of the coherency matrix; that is, $s_0^2 - |\mathbf{s}|^2 = 4\det\rho$. similarly, the squared euclidean norm is twice the squared frobenius norm of the coherency matrix; i.e. $s_0^2 + |\mathbf{s}|^2 = 2\,\mathrm{tr}(\rho\rho^\dagger)$. the coherency matrix is a positive semi-definite hermitian matrix; therefore, the lorentz invariant of any physically realizable source of radiation is greater than or equal to zero. it is equal to zero only for completely polarized radiation, when the degree of polarization is unity ($p = 1$). as with the spacetime null interval, no linear transformation of the electric field can alter the degree of polarization of a completely polarized source.
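the connection to the lorentz group can likewise be checked directly. in the sketch below (again my own illustration, with an assumed mueller-matrix convention $m_{\mu\nu} = \tfrac{1}{2}\mathrm{tr}(\sigma_\mu \mathbf{j}\sigma_\nu \mathbf{j}^\dagger)$), the mueller matrix of a random unimodular jones matrix satisfies the lorentz condition $\mathbf{m}^{\mathrm{t}}\eta\,\mathbf{m} = \eta$:

import numpy as np

sigma = [np.eye(2, dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex)]

def mueller(j):
    # m_{mu nu} = (1/2) tr(sigma_mu j sigma_nu j^dagger); entries are real
    return np.array([[0.5 * np.trace(sigma[m] @ j @ sigma[n] @ j.conj().T).real
                      for n in range(4)] for m in range(4)])

rng = np.random.default_rng(6)
j = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
j /= np.sqrt(abs(np.linalg.det(j)))        # make |det j| = 1 (unimodular up to phase)
m = mueller(j)
eta = np.diag([1.0, -1.0, -1.0, -1.0])     # minkowski metric, signature (+,-,-,-)
print("pure mueller matrix satisfies m^t eta m = eta:",
      np.allclose(m.T @ eta @ m, eta))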
to compute the probability density function of the instantaneous stokes parameters, note that the density is independent of the absolute phase of the field. conversion to polar coordinates and marginalization over this variable yields an intermediate result in which the remaining angular variable is the instantaneous phase difference between the two field components; this may be compared with earlier derivations, except that here the evaluation of the inner product has been postponed. the three remaining degrees of freedom are described by the instantaneous polarization vector, for which the jacobian determinant may be computed directly. application of the above yields the four-dimensional joint distribution of the instantaneous stokes parameters, in which a dirac delta function enforces the constraint that the instantaneous total intensity equals the magnitude of the instantaneous polarization vector. this result is consistent with equation (11) of previous work. converting to spherical polar coordinates, the inner product involves the angle between the mean and instantaneous polarization vectors. subsequent integration over this angle yields the marginal distribution of the instantaneous total intensity, which has mean $s_0$ and variance $s_0^2(1 + p^2)/2$. this distribution is consistent with equation (13) and equation (4) of earlier works. as noted by those authors, the total intensity becomes $\chi^2$-distributed in the two limiting cases of unpolarized and completely polarized radiation. this is easily seen in the natural basis defined by the eigen decomposition of the coherency matrix. as the squared norm of a complex number, the instantaneous intensity in each of the two orthogonally polarized modes is $\chi^2$-distributed with two degrees of freedom. if the signal is unpolarized, then each mode contributes identically and the total intensity is $\chi^2$-distributed with four degrees of freedom. if the signal is 100% polarized, then only one mode contributes and the total intensity is $\chi^2$-distributed with two degrees of freedom. the marginal distributions of the components of the instantaneous stokes polarization vector are most easily derived in the natural basis. converting to cylindrical coordinates with the axis of symmetry along the mean polarization vector, then integrating equation ([eqn:single]) over the radial and azimuthal dimensions (as well as the total intensity), yields the marginal distribution of the instantaneous major polarization: an asymmetric laplace density with mean $p\,s_0$ and variance $s_0^2(1 + p^2)/2$, consistent with equation (8) of earlier work. the marginal distribution of the instantaneous minor polarization is derived in appendix [app:marginal_minor]; by symmetry, equation ([eqn:marginal_minor]) yields a symmetric laplace density with mean zero and variance equal to half the lorentz interval. equations ([eqn:single_intensity]) through ([eqn:single_minor]) are plotted in figure [fig:single_marginal]. the bottom two panels illustrate the asymmetric, three-dimensional laplace distribution of the instantaneous polarization vector; the top panel shows the distribution of the magnitude of this vector. qualitatively, instances of the instantaneous polarization vector are distributed as a tapered needle that points along the major polarization axis, with the greatest density of instances in the head of the needle at the origin. for unpolarized radiation, the distribution is spherically symmetric and the three marginal distributions coincide. for completely polarized radiation, the minor components vanish and the polarized intensity equals the total intensity; that is, the distribution of the instantaneous polarization vector becomes one-dimensional and exponentially distributed along the positive major axis.
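the limiting behaviour described above is simple to reproduce by monte carlo. the sketch below (illustrative normalizations assumed: unit mean intensity, modes drawn directly in the natural basis) confirms the mean and variance of the instantaneous total intensity and major polarization:

import numpy as np
rng = np.random.default_rng(1)

def instantaneous_stokes(p, n):
    # modes drawn in the natural basis with variances (1 +/- p)/2, so <s0> = 1
    a = np.sqrt((1 + p) / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    b = np.sqrt((1 - p) / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    s0 = np.abs(a)**2 + np.abs(b)**2
    s1 = np.abs(a)**2 - np.abs(b)**2       # major polarization
    return s0, s1

for p in (0.0, 0.5, 1.0):
    s0, s1 = instantaneous_stokes(p, 200_000)
    print(f"p={p}: <s0>={s0.mean():.3f} var(s0)={s0.var():.3f} "
          f"<s1>={s1.mean():.3f} var(s1)={s1.var():.3f}")
# expected: <s0> = 1, <s1> = p, and var(s0) = var(s1) = (1 + p^2)/2.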
the distribution of the local mean of $n$ independent and identically distributed instances of the stokes parameters is derived from the distribution of the local mean coherency matrix, which has a complex wishart distribution with $n$ degrees of freedom; up to normalization, $p(\hat\rho) \propto |\hat\rho|^{\,n-2}\exp[-n\,\mathrm{tr}(\rho^{-1}\hat\rho)]$. the wishart distribution is a multivariate generalization of the $\chi^2$ distribution, and plays an important role in communications engineering and information theory. as in [sec:single], the jacobian determinant and equation ([eqn:inverse]) are used to arrive at the joint distribution function of the local mean stokes parameters. in appendix [app:sample_p], this joint distribution is used to derive the probability density and the first two moments of the local mean degree of polarization, $\hat p$. in figure [fig:sample_p], the distribution of $\hat p$ is plotted as a function of the intrinsic degree of polarization $p$ and the number of degrees of freedom $n$. note that when $p = 1$, $\hat p = 1$; therefore, this case is not shown. the expected value of $\hat p$ as a function of $p$ and $n$ is plotted in figure [fig:expected_p]; the standard deviation is similarly plotted in figure [fig:stddev_p]. figure [fig:expected_p] demonstrates that the self noise intrinsic to a stationary stochastic source of polarized radiation induces a bias, $\langle\hat p\rangle - p$, in the estimated degree of polarization. the bias and standard deviation define the minimum sample size required to estimate the degree of polarization to a certain level of accuracy and precision. given the sample size and a measurement of $\hat p$, the probability density of $\hat p$ (eqs. [[eqn:sample_p]] and [[eqn:sample_p0]]) or the expectation value of $\hat p$ (eq. [[eqn:sample_p_mean]]) could be used to numerically estimate the intrinsic degree of polarization using methods similar to those reviewed in the literature.
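the bias in the local mean degree of polarization can also be exhibited directly. the following monte carlo sketch (my own, with assumed sign conventions for $s_2$ and $s_3$) averages $n$ instantaneous stokes vectors and tabulates the estimate for a weakly polarized source:

import numpy as np
rng = np.random.default_rng(2)

def local_mean_p(p, n, trials=5000):
    # average n instantaneous stokes vectors; return the sample degree of polarization
    shape = (trials, n)
    a = np.sqrt((1 + p) / 2) * (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(2)
    b = np.sqrt((1 - p) / 2) * (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(2)
    s0 = (np.abs(a)**2 + np.abs(b)**2).mean(axis=1)
    s1 = (np.abs(a)**2 - np.abs(b)**2).mean(axis=1)
    s2 = (2 * (a * b.conj()).real).mean(axis=1)
    s3 = (2 * (a * b.conj()).imag).mean(axis=1)
    return np.sqrt(s1**2 + s2**2 + s3**2) / s0

for n in (4, 16, 64, 256):
    phat = local_mean_p(p=0.2, n=n)
    print(f"n={n:4d}: <p_hat>={phat.mean():.3f} std={phat.std():.3f} (intrinsic p = 0.2)")
# the estimate is strongly biased high at small n; the bias decays as n grows.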
by the central limit theorem, at large $n$ the mean stokes parameters tend toward a multivariate normal distribution with mean equal to the intrinsic stokes parameters and covariance matrix $\mathbf{c}$. to derive the components of $\mathbf{c}$, consider a completely unpolarized, dimensionless signal with unit intensity (that is, with mean stokes parameters $[1, 0, 0, 0]$) and covariance matrix $\sigma^2\mathbf{1}$, where $\sigma^2$ is the dimensionless variance of each stokes parameter and $\mathbf{1}$ is the identity matrix. this unpolarized signal may be transformed into any partially polarized signal with mean coherency matrix $\rho$ via a congruence transformation, where $\mathbf{b}$ is the hermitian square root of the target coherency matrix and the elements of the transformed field have physical dimensions such as flux density. the covariance matrix of the stokes parameters of the resulting partially polarized signal is $\mathbf{c} = \sigma^2\,\mathbf{b}\mathbf{b}^{\mathrm{t}}$, where $\mathbf{b}$ is the mueller matrix of $\mathbf{b}$, defined by equation ([eqn:mueller]). noting that this mueller matrix is symmetric, $\mathbf{c}$ is simply a scalar multiple of the mueller matrix of $\rho$. this result is a generalization of a previously published expression that derives from the complex gaussian moment theorem; in contrast, equation ([eqn:covariance]) requires no assumptions about the distribution of the electric field. the covariance matrix of the stokes parameters has its simplest form in the natural basis defined by the eigen decomposition of the coherency matrix, where it is diagonal. comparison between the diagonal of this matrix and the variances of the distributions derived in [sec:single] yields the relationship between the dimensionless variance and the number of degrees of freedom; in particular, $\sigma^2$ scales inversely with $n$. in the natural basis, it is also readily observed that the polarization vector is normally distributed as a prolate spheroid whose axial ratio is set by the degree of polarization. the dimension of the major axis of the spheroid is equal to the standard deviation of the total intensity, and the dimensions of the minor axes go to zero as the degree of polarization approaches unity. furthermore, the multiple correlation between the total intensity and the polarization vector ranges from 0 for an unpolarized source to 1 for a completely polarized source. it characterizes the correlation between total and polarized intensities and expresses the fact that the stochastic stokes parameters can not be treated in isolation. the previous section develops the four-dimensional statistics of the stokes parameters intrinsic to a single, stationary source of stochastic electromagnetic radiation. to apply these results to radio pulsar polarimetry, it is necessary to consider various observed properties of pulsar signals, including amplitude modulation, wave coherence, and the superposition of signals from multiple sources, such as instrumental noise and orthogonally polarized modes. emphasis is placed on the statistical degrees of freedom of the radiation; in particular, the effects of amplitude modulation and wave coherence on the _identically distributed_ and _independent_ conditions of the central limit theorem are considered. attention is then focused on two areas of concern: randomly polarized radiation and giant pulse polarimetry. radio pulsar signals are well described as amplitude-modulated noise. the modulation index is generally defined as the ratio of the standard deviation to the mean of the total intensity. in single-pulse observations of radio pulsars, the variance is typically corrected for the instrumental noise estimated from the off-pulse baseline, and the self noise is assumed to be negligible. a value of the modulation index greater than zero is then interpreted as evidence of amplitude modulation.
however, for intense sources of radiation, the self noise of the ensemble average total intensity (cf. eq. [29] of cordes 1976) must be taken into account. that is, the self noise of the source induces a positive bias in the modulation index. amplitude modulation modifies the statistics of all four stokes parameters. for example, under scalar amplitude modulation, the covariance matrix of the stokes parameters is uniformly scaled by a factor involving the second moment of the instantaneous intensity of the modulating function. that is, amplitude modulation uniformly increases the covariances of the stokes parameters. this can be interpreted as a reduction in the statistical degrees of freedom of the signal, which becomes dominated by the fraction of realizations that occur when the modulation is large. that is, especially when the modulations are deep, the samples are not identically distributed and the central limit theorem does not trivially apply. the degrees of freedom of a stochastic process are also reduced when the samples are not independent. for plane-propagating electromagnetic radiation, statistical dependence is manifest in various forms of wave coherence, including that between the orthogonal components of the wave vector (i.e. polarization) and that between instances of the field at different coordinates (e.g. spectral, spatial, temporal). in radio pulsar observations, wave coherence properties may be modified by propagation in the pulsar magnetosphere and the interstellar medium. given the coherence time of a band-limited source of gaussian noise, the effective number of degrees of freedom is reduced below the number of discrete time samples in the integration interval. that is, owing to wave coherence, the signal could be encoded by a smaller number of independent samples without any loss of information. substituting the effective number of degrees of freedom into equation ([eqn:local]) or into equation ([eqn:ensemble]), it is seen that wave coherence inflates the four-dimensional volume occupied by the joint distribution of the mean stokes parameters. with respect to the statistics of independent samples, this inflation increases both the modulation index of the mean total intensity and the eigenvalues of the covariance matrix of the mean stokes polarization vector. referring to figure [fig:expected_p], it is readily observed that wave coherence also increases the local mean degree of polarization. if only the covariance matrix of the stokes parameters is measured, the effects of wave coherence are indistinguishable from those of amplitude modulation. in fact, the non-stationary statistics that arise from amplitude modulation can be described by their spectral coherence properties. the various types of wave coherence may be differentiated via auto-correlation and fluctuation spectral estimates that are outside the scope of the current treatment.
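to illustrate the effect of sample-to-sample correlation, the sketch below generates a complex gaussian field with an ar(1) coherence model (an assumption made purely for illustration, not the paper's model) and compares the nominal and effective numbers of independent samples:

import numpy as np
rng = np.random.default_rng(3)

n, r, trials = 256, 0.8, 4000              # samples per average; ar(1) coefficient
z = rng.normal(size=(trials, n)) + 1j * rng.normal(size=(trials, n))
e = np.empty_like(z)
e[:, 0] = z[:, 0]
for k in range(1, n):                      # temporally correlated complex field
    e[:, k] = r * e[:, k - 1] + np.sqrt(1 - r**2) * z[:, k]

intensity = np.abs(e)**2
n_eff = intensity.var() / intensity.mean(axis=1).var()
print(f"nominal samples n = {n}, effective independent samples ~ {n_eff:.0f}")
# with r = 0.8 the coherence spans several samples, so n_eff is well below n:
# the variance of the mean intensity is inflated relative to independent sampling.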
when one or more incoherent sources of radiation are added together, the resulting covariance matrix of the total ensemble mean stokes parameters is the sum of the covariance matrices of the individual sources. for example, unpolarized noise adds a term to the diagonal of the covariance matrix, thereby reducing both the ellipticity of the distribution of the polarization vector and the multiple correlation between total and polarized intensities. the incoherent addition of signals with different polarizations, especially the superposition of orthogonally polarized modes, also decreases the degree of polarization of the mean stokes parameters. if the modes are covariant, then the covariance matrix of the sum includes cross-covariance terms. for example, consider the incoherent sum of two sources described by their respective stokes parameters. if the intensities of the two modes are correlated, then the resulting covariance matrix is the sum of the individual covariance matrices, each defined as in equation ([eqn:covariance]), plus a cross-covariance matrix proportional to the intensity correlation coefficient. as shown in appendix [app:covariant_opm], the incoherent superposition of covariant orthogonally polarized modes causes the variance of the total intensity to increase while that of the major polarization decreases. (the major polarization is defined by the eigen decomposition and is parallel to the major axis of the spheroidal distribution of the polarization vector.) furthermore, it is shown that the four-dimensional covariance matrix of the stokes parameters can be used to derive the correlation coefficient as well as the intensities and degrees of polarization of superposed covariant orthogonal modes. two previous studies independently developed and applied novel eigenvalue analyses of the three-dimensional covariance matrix of the stokes polarization vector. each noted an apparent excess dispersion of the polarization vector and concluded that it is due to the incoherent addition of randomly polarized radiation intrinsic to the pulsar signal. this hypothesis is based on the assumption that, apart from the proposed randomly polarized component, the noise in each of the stokes parameters is purely instrumental, a premise that breaks down for sources as bright as the pulsars studied in these experiments. for example, one of these studies analyzed observations of psrs b2020+28 and b1929+10 that were recorded with the arecibo 300 m antenna when the forward gain was 8 k/jy and the system temperature was 40 k. referring to the average pulse profiles presented in figure 2 of that work, the total intensity of psr b2020+28 peaks at 8 jy, where the modulation index is substantial. that is, the noise intrinsic to the pulsar signal exceeds the system equivalent flux density (sefd; 5 jy, given the quoted gain and system temperature) by as much as 100%. similarly, the self noise of psr b1929+10 is as much as 50% of the sefd. in both cases, source-intrinsic noise statistics can not be neglected.
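the qualitative effect of covariant modes is easy to demonstrate. in the sketch below, the distributions and the shared-modulation mechanism are illustrative choices of my own, not the paper's model; positively correlated mode intensities inflate the variance of the total intensity relative to that of the major polarization:

import numpy as np
rng = np.random.default_rng(4)

n = 500_000
xa = rng.exponential(1.0, n)               # self noise of the dominant mode
xb = rng.exponential(0.5, n)               # self noise of the weaker mode
g = rng.gamma(4.0, 0.25, n)                # shared modulation with unit mean

for label, ia, ib in [("independent", xa, xb), ("correlated ", g * xa, g * xb)]:
    s0, s1 = ia + ib, ia - ib              # total intensity; major polarization
    c = np.corrcoef(ia, ib)[0, 1]
    print(f"{label}: corr={c:+.2f} var(s0)={s0.var():.2f} var(s1)={s1.var():.2f}")
# positive intensity correlation makes var(s0) exceed var(s1), as in table 1.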
to quantify the impact of self noise on single pulse polarimetry, previously published results are revisited. note that one earlier study focused on the non-orthogonal modes of psr b0329+54 and reported neither the instrumental sensitivity nor the statistics of the total intensity; therefore, that experiment is not reviewed here. figures 3 and 4 of the remaining study present two-dimensional projections of the ellipsoidal distributions of the polarization vector at different pulse phases, along with the dimensions of the major and minor axes at each phase. panel a) in each of these plots indicates the off-pulse, instrumental noise in each component of the stokes polarization vector: about 0.1 jy for psr b2020+28 and 0.04 jy for psr b1929+10. figures 2-4 are summarized in table 1, which lists the pulse phase bin number; the modulation index; the total intensity; and the standard deviations of the total intensity, the major polarization, and the minor polarization.

table 1 (psr b2020+28 above, psr b1929+10 below):

bin    m      i (jy)   sigma_i   sigma_major   sigma_minor
61     0.45   5.5      2.48      1.14          0.59
74     0.19   2.8      0.54      0.33          0.30
91     0.56   3.1      1.74      0.91          0.71

90     0.55   0.68     0.38      0.32          0.09
119    0.68   1.35     0.92      0.63          0.13
125    0.65   1.00     0.65      0.46          0.10

[tab:mck04]

as shown in [sec:ensemble], the standard deviation of the total intensity and that of the major axis of the spheroidal distribution of the polarization vector should be equal; however, for every phase bin listed in table 1, the former exceeds the latter. that is, there is no excess dispersion of the polarization vector, and therefore no need for additional randomly polarized radiation; rather, it is the excess dispersion of the total intensity that requires explanation. noting that scalar amplitude modulation and wave coherence uniformly inflate the covariance matrix of the stokes parameters, one possible explanation is the incoherent superposition of covariant orthogonally polarized modes (see appendix [app:covariant_opm]). the coefficient of correlation between the mode intensities in equations ([eqn:cov_opm_c00]) and ([eqn:cov_opm_c11]) is equivalent to the corresponding coefficient in the analogous equations (5) and (6) of earlier work. the last term in all four of these equations indicates that positive correlation between the mode intensities increases the variance of the total intensity and decreases that of the major polarization. (anti-correlated mode intensities have the opposite effect.) the increased variance of the total intensity also explains why orthogonally polarized modes typically coincide with increased amplitude modulation. similarly, equation ([eqn:cov_opm_c22]) shows that the self noise of partially polarized modes increases the dimensions of the minor axes. the source-intrinsic contribution diminishes to zero only for 100% polarized modes.
for all of the phase bins listed in table 1, the dimensions of the minor axes exceed the instrumental noise, indicating that the modes are not completely polarized, as has been previously assumed. equations ([eqn:cov_opm_c00]) to ([eqn:cov_opm_c01]) and the discussion in appendix [app:covariant_opm] motivate the development of a new technique for producing mode-separated profiles. from the four-dimensional covariance matrix of the stokes parameters, it is possible to derive the mode intensity correlation coefficient as well as the intensities and degrees of polarization of the modes. these values and the mean stokes parameters, computed as a function of pulse phase, can be used to decompose the pulse profile into the separate contributions of the orthogonal modes. the estimation of the degree of polarization of giant pulses is beset by fundamental limitations. on one hand, giant pulses remain unresolved at the highest time resolutions achieved to date. on the other hand, a sufficiently large number of independent samples must be averaged before an accurate estimate of polarization is possible. even if many time samples are averaged, those samples may be correlated due to scattering on inhomogeneities in the interstellar medium, and/or the mean may be dominated by a single unresolved giant nanopulse that greatly reduces the effective number of degrees of freedom. for example, one study presented a histogram of the degree of circular polarization of giant pulses from the crab pulsar observed at 600 mhz. the histogram was interpreted as evidence that each giant pulse is the sum of nanopulses, each with 100% circular polarization. however, referring to figure [fig:stddev_p], the width of the distribution is also consistent with that of completely unpolarized radiation with a small number of degrees of freedom. although the data were averaged over 256 samples, the characteristic timescale of scattering in these observations was estimated to be comparable to the averaging interval; therefore, the effective number of degrees of freedom is of the order of unity. small-number statistics also limit the conclusions that can be drawn from the correlations between giant pulse spectra presented in figure 12 of related work. the asymptotic correlation coefficient at small lag was interpreted as evidence that nanopulses are highly polarized. however, with only one estimate of the correlation coefficient at a lag less than 0.1 seconds, the extrapolation to nanosecond resolution is questionable; therefore, the constraint on the degree of polarization at this timescale is of negligible significance. a four-dimensional statistical description of polarization is presented that exploits the homomorphism between the lorentz group and the transformation properties of the stokes parameters.
within this framework, a generalized expression for the covariance matrix of the stokes parameters is developed and applied to the analysis of single-pulse polarization fluctuations. the consideration of source-intrinsic noise renders randomly polarized radiation unnecessary, explains the coincidence between increased amplitude modulation and orthogonal mode fluctuations, and indicates that orthogonally polarized modes are only partially polarized. furthermore, the four-dimensional covariance matrix of the stokes parameters enables estimation of the mode intensities, degrees of polarization, intensity correlation coefficient, and effective degrees of freedom of covariant, orthogonal, partially polarized modes. measured as a function of pulse phase, these parameters may be used to produce mode-separated profiles without any assumptions about the intrinsic degree of mode polarization. the formalism is also used to derive the first and second moments of the degree of polarization as a function of the intrinsic degree of polarization and the number of degrees of freedom of the stochastic process. these are used to demonstrate that giant pulse polarimetry is fundamentally limited by systematic bias due to insufficient statistical degrees of freedom. the discussions of amplitude modulation in [sec:modulation] and wave coherence in [sec:coherence] serve to illustrate the difficulties in defining an appropriate average when the signal is unresolved. in this regard, the approach employed in this paper is complementary to previous analyses that make use of auto-correlation functions and fluctuation spectra to separately measure the statistics of the total and polarized intensities. useful new results may be derived by extending the techniques employed in these works to include the cross-correlation terms that describe the statistical dependences between the stokes parameters. i am grateful to j.p. macquart for fruitful discussions and k. stovall for technical assistance with the computer algebra system. the insightful comments of the referee led to significant improvements to the manuscript. m. bailes, r. bhat, j. verbiest and m. walker also provided helpful feedback on the text. equations ([eqn:single_intensity]) and ([eqn:single_major]) could also have been derived by first expressing equation ([eqn:single_modes]) in the natural basis, where it takes a simplified form in which the mode amplitudes follow rayleigh distributions. note that in the natural basis, the mode intensities and the phase difference between the modes are statistically independent, and the phase difference is uniformly distributed. the distributions of the total intensity and the major polarization may then be obtained from the convolution and cross-correlation, respectively, of the two mode intensity densities. similarly, equation ([eqn:single_modes_natural]) is used to compute the marginal distribution of the minor polarization. in the natural cylindrical coordinates defined in [sec:single], the distribution of the radial dimension is found by integrating equation ([eqn:single_modes_natural]) over the total intensity, then transforming the integrand to yield an intermediate result involving a modified bessel function of the second kind. as the azimuthal dimension is uniformly distributed, the probability density of the minor polarization follows by combining these results and performing another integral transformation, yielding, for one minor component, a density that depends on the lorentz interval.
by symmetry, a similar result is found for the other minor component. conversion of equation ([eqn:local]) to spherical coordinates and integration over all orientations of the polarization vector yields the joint distribution of the local mean total and polarized intensities. this joint density is defined on the region where the polarized intensity does not exceed the total intensity, and is used to derive the distribution of the local mean degree of polarization, $\hat p$, as a function of the intrinsic degree of polarization $p$ and the number of samples averaged, $n$; up to normalization, the density involves the kernel $$\frac{\left[1 - p\hat p\right]^{1-2n} - \left[1 + p\hat p\right]^{1-2n}}{2^{2n-1}\,p}.$$ as $p \rightarrow 0$, the distribution reduces to its unpolarized limiting form. the distribution of the local mean degree of polarization is plotted in figure [fig:sample_p]; its first and second moments involve a regularized generalized hypergeometric function, and these moments are used to calculate the variance of the local mean degree of polarization. the theoretical values of the mean and standard deviation of $\hat p$ are plotted as functions of $p$ and $n$ in figures [fig:expected_p] and [fig:stddev_p]. following the discussion in [sec:incoherent], the covariance matrix of orthogonally polarized modes with correlated intensities is derived by starting with equation ([eqn:covariant_sum]), neglecting instrumental noise and considering only the self noise of the modes. if the modes are orthogonally polarized, then their polarization vectors are antiparallel; furthermore, if the first mode is dominant, its polarization vector defines the major axis of the natural basis, in which there are six non-zero elements of the covariance matrix. including the mode parameters, there are a total of seven unknowns and six unique constraints. however, if it is assumed that the modes have similar degrees of freedom, then the system can be solved numerically, e.g. using the newton-raphson method. in the previously published study, not all of the required covariances were measured; therefore, no derived parameter estimates are currently presented.
a four-dimensional statistical description of electromagnetic radiation is developed and applied to the analysis of radio pulsar polarization. the new formalism provides an elementary statistical explanation of the modal broadening phenomenon in single pulse observations. it is also used to argue that the degree of polarization of giant pulses has been poorly defined in past studies. single and giant pulse polarimetry typically involves sources with large flux densities and observations with high time resolution, factors that necessitate consideration of source-intrinsic noise and small-number statistics. self noise is shown to fully explain the excess polarization dispersion previously noted in single pulse observations of bright pulsars, obviating the need for additional randomly polarized radiation. rather, these observations are more simply interpreted as an incoherent sum of covariant, orthogonal, partially polarized modes. based on this premise, the four-dimensional covariance matrix of the stokes parameters may be used to derive mode-separated pulse profiles without any assumptions about the intrinsic degrees of mode polarization. finally, utilizing the small-number statistics of the stokes parameters, it is established that the degree of polarization of an unresolved pulse is fundamentally undefined; therefore, previous claims of highly polarized giant pulses are unsubstantiated.
in this paper we have constructed polynomial chaos expansions to act as surrogate models for a cmg commercial solver used to estimate peak and cumulative gas extraction from a coal seam gas well. each pce propagates uncertainty in four input variables and builds a surrogate five-dimensional response surface. the polynomial expansion delivers fast evaluations across the entire parameter space. for instance, a pce completes 3000 evaluations in under a second (on an intel(r) core(tm) i7-4770 cpu at 3.40 ghz), whereas the commercial solver requires around five minutes per single evaluation. being able to cheaply evaluate many points across the parameter space provides uncertainty quantification through the generation of summary statistics and empirical probability and cumulative density functions. in addition, analytic global sensitivity analyses can be formulated from a pce at negligible additional computational cost. the success of this process is demonstrated by the low discrepancy between the surrogates and the original models: the constructed pces of order six approximate the commercial solver's peak gas extraction rate and cumulative gas extraction with low relative root mean square errors.
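for readers unfamiliar with the mechanics, the essential workflow can be sketched in a few lines. the example below is a minimal stand-in of my own (a toy model function, legendre chaos for uniform inputs, and a least-squares fit) rather than the cmg workflow used in this paper; it shows how the pce coefficients directly yield the mean, variance, and first-order sobol indices:

import numpy as np
from numpy.polynomial import legendre
from itertools import product

def model(x1, x2):                          # stand-in for the expensive solver
    return np.exp(0.3 * x1) + 0.5 * x1 * x2**2

order = 6
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(500, 2))       # training designs on [-1, 1]^2
y = model(x[:, 0], x[:, 1])

idx = [(i, j) for i, j in product(range(order + 1), repeat=2) if i + j <= order]

def phi(i, v):                              # legendre polynomial, normalized so e[phi^2] = 1
    c = np.zeros(i + 1)
    c[i] = 1.0
    return legendre.legval(v, c) * np.sqrt(2 * i + 1)

a = np.column_stack([phi(i, x[:, 0]) * phi(j, x[:, 1]) for i, j in idx])
coef, *_ = np.linalg.lstsq(a, y, rcond=None)

var = sum(c**2 for c, ij in zip(coef, idx) if ij != (0, 0))
s1 = sum(c**2 for c, (i, j) in zip(coef, idx) if i > 0 and j == 0) / var
s2 = sum(c**2 for c, (i, j) in zip(coef, idx) if j > 0 and i == 0) / var
print(f"mean={coef[0]:.3f} variance={var:.3f} sobol s1={s1:.3f} s2={s2:.3f}")

once fitted, the surrogate is just a polynomial evaluation, which is why thousands of points can be queried in well under a second.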
a surrogate model approximates a computationally expensive solver. polynomial chaos is a method to construct surrogate models by summing combinations of carefully chosen polynomials. the polynomials are chosen to respect the probability distributions of the uncertain input variables (parameters); this allows for both uncertainty quantification and global sensitivity analysis. in this paper we apply these techniques to a commercial solver for the estimation of peak gas rate and cumulative gas extraction from a coal seam gas well. the polynomial expansion is shown to honour the underlying geophysics with low error when compared to a much more complex and computationally slower commercial solver. we make use of advanced numerical integration techniques to achieve this accuracy using relatively small amounts of training data. note that for cumulative gas extraction, the variable with the most significant impact on the variance of the model is the langmuir volume. this is illustrated in figure [fig:slices] by taking two slices of the five-dimensional response surface generated by the pce. in the case of peak gas, the dominant variable in terms of contribution to the overall variance is the fracture porosity.
the theme of this workshop, clock synchronization, has occupied a prominent position at the frontier of technology for a long time. from a recent article by peter galison, i learned that the international conference on chronometry in 1900 included a session devoted to the problem of clock synchronization. this was an urgent problem at the time, especially in europe, where a single track would often carry railroad traffic in both directions, so that precise scheduling was necessary to avoid disastrous collisions. according to galison, the technical community's keen interest in clock coordination 100 years ago could not have escaped the attention of a certain patent clerk in bern named einstein:

meanwhile, all around him, literally, was the burgeoning fascination with electrocoordinated time. every day einstein took the short stroll from his house, left down the kramgasse to the patent office; every day he must have seen the great clock towers that presided over bern with their coordinated clocks, and the myriad of street clocks branched proudly to the central telegraph office.

to galison, whose work often highlights the role of technology in the development of scientific ideas, it is irresistible to speculate that the preoccupation with clock coordination circa 1900 helped to steer einstein to the insight that simultaneity is the key concept for understanding the electrodynamics of moving bodies, and so may have inspired the most famous scientific paper of the past century. to me as a theoretical physicist, a pleasing outcome of a workshop like this one would be that our musings about clock synchronization lead to conceptual insights into the properties of quantum information.
some might hope for a flow of ideas in the other direction, but both directions can be beneficial. most of this talk will concern a scenario in which two parties, alice and bob, both have good local clocks that are stable and accurate, and wish to synchronize these clocks in their common rest frame. one method that works, and does not require alice and bob to have accurate knowledge of the distance between them, is slow clock transport (sct), illustrated in fig. [fig:sct]. alice has a traveling clock that she synchronizes with her local clock and then sends to bob, who receives it and reads it. when bob reads the clock, it has advanced by the proper time along the clock's world line that has elapsed during transit. if the clock moved slowly, then this proper time is close to the elapsed time as measured by alice's clock during the transit, so that bob can synchronize his clock with alice's. sct works, but it is not very sexy. it would be more fun to use a method that exploits the resource of quantum entanglement, what the jpl group called quantum (atomic) clock synchronization (quacs or qcs), illustrated in fig. [fig:qcs]. so now suppose that alice and bob have a co-conspirator charlie, who prepares maximally entangled pairs; let's suppose that the state of each pair is the singlet state $|\psi^-\rangle = (|01\rangle - |10\rangle)/\sqrt{2}$. charlie sends half of each pair to alice, and half to bob. alice measures the observable $\sigma_x$ of her qubits, and bob measures $\sigma_x$ of his qubits. comparing their results, alice and bob can infer the value of the proper time along the world line of the qubits that moves from alice's measurement event backward in time to charlie's preparation event and then forward in time to bob's measurement event. if the qubits were transported slowly, this difference of proper times is close to the time difference in the reference frame in which alice and bob are at rest. both protocols work, but qcs is more technically demanding than sct, so why would we prefer qcs? perhaps if we need to synchronize periodically, rather than just once, we'll find it convenient to ship many pairs of qubits ahead of time and save them until they are needed, rather than sending another clock on demand every time we need to synchronize. but when i first heard about the idea that quantum information could be used for clock synchronization (from hideo mabuchi), what seemed intriguing to me is that we might be able to correct phase errors that afflict the traveling qubits, by _purifying_ the shared entanglement. in iii-vii, i'll explain why i haven't been able to get this idea to work. to assess whether entanglement purification might improve the accuracy of clock synchronization, let us begin by looking at the qcs protocol in more detail. our qubits are two-level atoms, each governed by a hamiltonian with level splitting $\omega$ (i'm not bothering to write factors of $\hbar$ here). the pairs prepared by charlie are in the singlet state, which is stationary. when alice measures $\sigma_x$, obtaining the outcome $\pm 1$, she prepares for bob on the same time slice the corresponding $\sigma_x$ eigenstate, which then precesses freely. then bob measures $\sigma_x$, obtaining an outcome whose correlation with alice's oscillates with the offset between their measurement times. alice and bob confer so that bob knows for which qubits the result of her measurement was $+1$. then with $n$ qubits, the time can be determined to an accuracy that improves like $1/(\omega\sqrt{n})$.
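the statistics of the protocol are easy to simulate. the toy sketch below assumes the conventions just described (singlet pairs, both parties measuring $\sigma_x$, level splitting $\omega$), for which the outcomes disagree with probability $\cos^2(\omega t/2)$; the estimate of the offset then sharpens like $1/(\omega\sqrt{n})$:

import numpy as np
rng = np.random.default_rng(5)

omega, t_true = 1.0, 0.35                    # offset to estimate (omega * t < pi)
for n in (100, 1_000, 10_000):
    # singlet pairs: outcomes anticorrelated with probability cos^2(omega t / 2)
    anti = rng.random(n) < np.cos(omega * t_true / 2)**2
    t_est = 2 * np.arccos(np.sqrt(anti.mean())) / omega
    print(f"n={n:6d}: t_est={t_est:.3f} (true value {t_true})")
# the statistical error shrinks roughly as 1/(omega * sqrt(n)).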
it is instructive to consider what would happen if the pairs prepared by charlie were not really singlets, but were instead in an offset state: the state that would be obtained if the time evolution operator, for bob's qubit only, were to act on the singlet for time $\delta$. with the pairs in this state, when alice measures $\sigma_x$ she prepares for bob a state that is already rotated by the offset, so the apparent time offset detected in the qcs protocol will be shifted by $\delta$ from the true offset. if, without telling alice and bob, charlie replaces the singlets by these offset pairs, then alice and bob will think that their clocks are synchronized even though bob's really lags behind alice's by $\delta$. if alice and bob perform sct or qcs, dephasing of the qubits will weaken the signal. if bob measures his qubit a time after alice measures hers, the probability distribution of his outcome acquires a phase damping factor, _e.g._ $\lambda = e^{-t'/\tau}$, where $t'$ is the time that the qubits have been exposed to phase noise and $\tau$ is the damping time. (we have assumed that there are no systematic phase errors; the noise has zero mean.) the damage to the qubits caused by phase damping might be reversed by an entanglement purification protocol, where an initial supply of noisy entangled pairs is "distilled" to a smaller number that approximate the singlet with better fidelity. such a protocol is illustrated in fig. [fig:purify]. alice and bob select two pairs, and each performs an operation on her/his half of the two pairs, culminating in a bilateral measurement that destroys one of the two pairs. if alice and bob get the same measurement result when they measure pair number 2, then they retain pair number 1; otherwise, they throw pair number 1 away. how does it work? alice and bob want to have pairs with high fidelity to the singlet. their bilateral procedure allows them to measure a joint parity whose value is $-1$ if exactly one of the pairs has a phase error, and $+1$ if either both pairs are good or both pairs are bad. hence, as long as the original ensemble approximates the singlet with good enough fidelity (_e.g._, fidelity above 1/2), the pairs that are retained have higher fidelity than the original pairs. we need to notice, though, that in order for the purification protocol to achieve its intended purpose, alice's and bob's operations must be perfectly synchronized. if bob's operations were to systematically lag behind alice's by time $\delta$, then the protocol would actually distill the offset singlet rather than the singlet itself. therefore, if alice and bob first distill their pairs and then perform qcs, all they will be able to detect is a _relative_ offset: the difference between the offset used in the purification protocol and the offset used in qcs. but this is information they could discern by reading their accurate local clocks; they would not gain anything from consuming their shared entanglement.
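for phase errors only, the noisy pairs form a two-component bell-diagonal mixture, and the recurrence just described reduces to a simple map on the fidelity and the pair budget. the sketch below iterates that map; the map itself is a standard result for this protocol family, stated here as an illustration:

f, pairs = 0.85, 1_000_000.0
for r in range(1, 5):
    keep = f**2 + (1 - f)**2          # probability the bilateral checks agree
    f = f**2 / keep                   # fidelity of a surviving pair
    pairs *= keep / 2                 # one pair is consumed per attempt
    print(f"round {r}: fidelity={f:.6f} pairs remaining={pairs:,.0f}")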
we might try this alternative procedure: to combat dephasing, charlie does not prepare raw entangled pairs of qubits, but instead he _encodes_ the pairs using a quantum error-correcting code that resists phase errors. the encoded qubits can be actively stabilized as they travel to alice and bob, so that they are sure to arrive safely. then alice and bob can execute qcs. but if phase errors are the enemy, then there is a problem, because the phase rotation due to the natural evolution of the raw qubits is one kind of error that the code is designed to resist. the stabilization of the encoded state freezes this natural evolution. alice can decode her qubit and measure $\sigma_x$, but if bob is still preserving his encoded qubit, alice's action will not "start the clock" of bob's qubit on the same time slice; rather, its evolution will remain frozen. in fact, then, all alice and bob can learn if they first decode and then execute qcs is a _relative_ offset: the difference of the offset used in decoding and the offset of the final measurements. again, this is information that can be inferred by referring to alice's and bob's local clocks. the difficulty we have encountered here is closely related to the obstacle that has so far prevented us from finding a powerful way to use quantum error-correcting codes to improve frequency standards. we would like to use quantum error correction to stabilize a _precessing_ qubit that can serve as an accurate standard; therefore the natural evolution of the qubits should preserve the code subspace (i.e., should commute with the code stabilizer). this requirement strongly restricts the error-correcting power of the code. one code with the desired property is the repetition code, which can correct bit flip errors (coherent or stochastic $\sigma_x$ errors); the encoded state evolves as the rapidly precessing cat state $(|00\cdots 0\rangle + e^{-in\omega t}|11\cdots 1\rangle)/\sqrt{2}$ (up to phase conventions), whose advantages have been extolled by the nist group. if the limiting factor in the accuracy of the standard were a "bit-flip" channel, then this state could be actively stabilized by quantum error correction.
as a mathematical statement, this is not completely vacuous, since it is possible in principle for the precision in the measurement of the precession rate to be limited by bit-flip errors. [fig:master] shows the decay of the polarization of a qubit, found by integrating the master equation for a bit-flip channel and for a dephasing channel with comparable rates. qualitatively, they are similar; the visible difference is that the bit-flips ($\sigma_x$ errors) do not damage the $x$-component of the polarization. the bit-flip damage can be controlled by the quantum repetition code; the phase damping can not be. while correct as a mathematical statement, this is not a very useful observation, since i don't know of any realistic physical setting in which a bit-flip channel really limits the accuracy of an interesting measurement.
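the figure's content can be reproduced with a few lines of numerical integration. the sketch below (rates and initial polarization are illustrative choices of mine) damps the bloch vector under each channel and shows which component survives:

import numpy as np

gamma, dt, steps = 0.2, 0.01, 1500
p_flip = np.array([0.8, 0.4, 0.4])           # initial polarization (x, y, z)
p_deph = p_flip.copy()
for _ in range(steps):
    # bit-flip (sigma_x) noise damps only the y and z components
    p_flip += dt * np.array([0.0, -2 * gamma * p_flip[1], -2 * gamma * p_flip[2]])
    # dephasing (sigma_z) noise damps only the x and y components
    p_deph += dt * np.array([-2 * gamma * p_deph[0], -2 * gamma * p_deph[1], 0.0])
print("bit-flip channel :", np.round(p_flip, 3))     # the x component survives
print("dephasing channel:", np.round(p_deph, 3))     # the z component survives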
since in qcswe are really exploiting a correlation that is essentially classical , the purification does nt buy us anything , at least if there are no systematic phase errors ( if the phase noise has zero mean ) .for example , suppose that our supply of pairs is a mixture of s and s with fidelity : then bob s measurement of his precessing qubit is governed by the probability distribution so that with pairs we can determine to accuracy if we perform one round of purification ( perfectly synchronized ) , the number of surviving pairs is reduced ( on average ) to while the fidelity of the remaining pairs improves to therefore , if we perform qcs with the remaining pairs we can determine to the accuracy if the fidelity is close to one , purification hurts , because we waste half the pairs needlessly . even for fidelity with , purificationdoes nt help : we can boost by a factor of about two , but we re no better off because the number of pairs is reduced by a factor of about 4 . in principle , purification might reduce _ systematic _ phase errors . for example , suppose that all of the pairs are identical , but with the unknown phase error though the states are maximally entangled , alice and bob ca nt fully exploit the entanglement if they do nt know the value of .now if they execute the ( perfectly synchronized ) purification protocol , the number of pairs is reduced ( on average ) to ~,\ ] ] and the phase of the remaining pairs becomes where after a few rounds , is small alice and bob have extracted from the initial supply of unknown entangled states a smaller number of known entangled states .now this sounds like it could be useful .if alice and bob perform qcs using the pairs with a systematic phase error , that systematic error will show up in their measurement of their time offset .if they can trade in the original pairs with unknown phase for a reduced supply of pairs with known phase , it s a win .but as already discussed , the `` purification '' protocol actually replaces the original supply with a reduced supply where the phase of the new pairs is determined by their time offset and therefore _ still _ unknown .alice and bob are no better off .the `` recurrence '' protocol that we have described is relatively simple to execute ( though still not easy ) , but if we are willing to do more sophisticated coherent processing , there are much more efficient protocols that waste far fewer pairs . in particular , there is a `` hashing '' protocol , requiring only one - way communication from alice to bob , that ( according to standard shannon arguments ) yields , from initial pairs with fidelity , a supply of distilled pairs , where the fidelity of the distilled pairs is as close as desired to ; the number of distilled pairs is asymptotically ( for large ) close to ~,\ ] ] where is the binary entropy function .( i m assuming that the only errors we have to worry about are phase errors . 
this hashing protocol really seems to be a quantum protocol: it involves collective measurement of many qubits at once, and i don't think there is an analogous operation that could be performed on a supply of classical analog clocks. furthermore, since it has a much better yield of highly distilled pairs than the recurrence protocol, it really does seem to be capable, in principle, of significantly improving our sensitivity to the time offset between alice and bob. but there's a problem, really the same problem as that encountered when we imagined that charlie creates encoded pairs that are actively stabilized through quantum error correction. to approach the optimal yield of distilled pairs given in eq. ([shannon_yield]), alice and bob use a phase-error correcting quantum code that they have agreed on in advance. alice measures the stabilizer generators of the code and sends the measurement outcomes to bob, who also measures the same generators and then corrects errors to prepare a supply of high fidelity encoded pairs. but the pairs are encoded, and the natural evolution of the qubits does not preserve the code space. therefore, alice and bob need to decode the pairs before performing qcs, and once again they will only be able to detect the offset in qcs _relative_ to the offset used in the decoding (and the measurement of the stabilizer operators). in short, if quantum information really offers an advantage for clock synchronization, we don't seem to be realizing that advantage in the qcs protocol, as far as i can see. perhaps it is because the protocol is really "too classical." while there seems to be an obstacle to using entanglement purification to improve the reliability of clock synchronization, alice and bob can nevertheless use purification to enhance the efficacy of other protocols, such as epr key distribution or teleportation, even if they do not have synchronized clocks. we have seen that if bob's clock lags behind alice's by the unknown offset $\delta$, then by executing the usual distillation procedure, alice and bob can prepare high fidelity pairs in the offset singlet state. if, say, alice wants to teleport an unknown state $|\chi\rangle$ to bob, then with good singlets, alice's joint measurement on the unknown state and her member of the entangled pair would prepare in bob's laboratory, on the same time slice, one of the states $\sigma_a|\chi\rangle$, where the pauli operator $\sigma_a$ is known from alice's measurement. if the pairs are in the offset state instead, then alice's measurement prepares a correspondingly rotated state, which subsequently evolves under bob's free hamiltonian. therefore, if bob's operation lags behind alice's by the same amount in both the purification and the teleportation, the teleportation works normally. this is good news, because it means that alice and bob don't need to know the offset in order to use purification to improve such protocols. but the bad news is that (consistent with our earlier observations), after purifying, alice and bob can't use the fidelity of teleportation as a criterion for judging how well their operations are synchronized.
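the offset argument can be summarized schematically; the notation below is my own shorthand for the statements in the text, with an assumed sign convention for the free precession:

$$|\psi^-(\delta)\rangle = \bigl(\mathbf{1}\otimes u(\delta)\bigr)\,|\psi^-\rangle, \qquad u(t) = e^{-i\omega t\,\sigma_z/2},$$

so that teleportation through $|\psi^-(\delta)\rangle$ leaves bob holding $u(\delta)\,\sigma_a|\chi\rangle$ on alice's time slice. the same lag $\delta$ enters the distillation and the teleportation, so operations that share a common lag compose consistently, while nothing in the outcome statistics reveals $\delta$ itself.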
The standard answer in quantum field theory is that an observable is a self-adjoint operator that can be defined on a spacelike slice through spacetime. But this is not the right answer in general, not if we mean by an observable something that could really be measured in principle. For many self-adjoint operators, if they could really be "measured," the measurement would allow spacelike-separated parties to communicate. When I speak of a "measurement" of an observable occurring on a time slice, I don't mean that the outcome of the measurement is instantly known by anyone. Rather, I mean that the density operator decoheres on that time slice as $\rho \to \sum_a E_a\, \rho\, E_a$, where $\{E_a\}$ is the set of orthogonal projectors onto the eigenspaces of the observable. Later on, if information from various locations on the slice arrives at a central location, the outcome can be inferred and recorded. Now suppose that Alice and Bob share a quantum state. At time $t_1$, Alice performs a unitary transformation $U_A$; at time $t_2$, the superoperator of eq. ([meas_so]) acts on the state; and at time $t_3$, Bob performs a measurement on his density operator, $\rho_B = {\rm tr}_A\,[\rho]$. If Alice and Bob are spacelike separated, then the superoperator can be physically realizable only if it is _causal_: Bob's density operator must not depend on the unitary transformation that Alice applies. How can the causal observables be characterized? Are all causal observables physically implementable in principle?

To clarify the concept, let's consider an example, noted by Sorkin, of a measurement that is not causal. It is a two-outcome incomplete Bell measurement performed on a pair of qubits. The orthogonal projectors corresponding to the two outcomes are $E_1 = |\phi^+\rangle\langle\phi^+|$ and $E_2 = {\bf 1} - E_1$, where $|\phi^+\rangle = (|00\rangle + |11\rangle)/\sqrt{2}$. Suppose that the initial pure state shared by Alice and Bob is $|01\rangle$. This state is orthogonal to $|\phi^+\rangle$, so that outcome 2 occurs with probability one, and the state is unmodified by the superoperator. Afterwards, Bob still has the density operator $|1\rangle\langle 1|$. But what if, before the superoperator acts, Alice performs a unitary that rotates the state to $|11\rangle$? Since this is an equally weighted superposition of $|\phi^+\rangle$ and $|\phi^-\rangle$, the two outcomes occur equiprobably, and in either case the final state is maximally entangled, so that Bob's density operator afterwards is $\frac{1}{2}\,{\bf 1}$. Bob can make a measurement that has a good chance of distinguishing the density operators $|1\rangle\langle 1|$ and $\frac{1}{2}\,{\bf 1}$, so that Bob can decipher a message sent by Alice. The measurement superoperator is acausal.

An obvious example of an operation that _is_ causal is measurement of a tensor product observable: Alice and Bob can induce decoherence in the basis of eigenstates of a tensor product through only local actions. But there are other examples of causal operations that are a bit less obvious. One is complete Bell measurement, _i.e._, decoherence in the Bell basis. No matter what Alice does, the shared state after Bell measurement is maximally entangled, so that Bob always has $\rho_B = \frac{1}{2}\,{\bf 1}$, and he can't extract any information about Alice's activities.
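The acausality of Sorkin's incomplete Bell measurement is easy to check numerically. The following sketch (Python with NumPy; the specific initial state $|01\rangle$ and Alice's bit flip are the reconstruction adopted above) shows Bob's density operator responding to Alice's purely local action:

    import numpy as np

    # the two-outcome incomplete Bell measurement
    phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)    # (|00> + |11>)/sqrt(2)
    E1 = np.outer(phi_plus, phi_plus.conj())
    E2 = np.eye(4) - E1

    def measure(rho):
        # decoherence induced by the measurement superoperator
        return E1 @ rho @ E1 + E2 @ rho @ E2

    def bob_density(rho):
        # partial trace over Alice's qubit (the first tensor factor)
        return np.trace(rho.reshape(2, 2, 2, 2), axis1=0, axis2=2)

    ket01 = np.zeros(4); ket01[1] = 1.0               # |01>, orthogonal to phi+
    rho = np.outer(ket01, ket01)

    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    XA = np.kron(X, np.eye(2))                        # Alice's local bit flip
    rho_flipped = XA @ rho @ XA

    print(bob_density(measure(rho)))          # |1><1| : unchanged
    print(bob_density(measure(rho_flipped)))  # I/2    : Bob can see Alice acted

The first output is $|1\rangle\langle 1|$ and the second is the maximally mixed state, which is precisely the signaling channel described in the text.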
Though Bell measurement is a causal operation, it is not something that Alice and Bob can achieve locally without additional resources. Now one wonders: why should there exist causal operations that cannot be implemented locally? A possible answer is that there are previously prepared resources that Alice and Bob might share that, while too weak to allow faster-than-light communication, are strong enough to enable decoherence in the Bell basis. One such resource might be shared entanglement; in the case of complete Bell measurement, an even weaker resource will do: shared randomness. Suppose that Alice and Bob both have the same string of random bits. This is a useful resource. In particular, the whole idea of quantum key distribution is to establish secure shared randomness: Alice uses a random key for encoding, Bob for decoding, and if Eve doesn't know the key she can't decode the message. The shared random string also allows Alice and Bob to induce decoherence in the Bell basis. They share a pair of qubits, and on the same time slice they both consult two bits of the string; depending on whether they read 00, 01, 10, or 11, they both apply the unitary operator ${\bf 1}$, $\sigma_1$, $\sigma_2$, or $\sigma_3$. The superoperator $\rho \to \frac{1}{4}\sum_{\mu=0}^{3}\,(\sigma_\mu \otimes \sigma_\mu)\, \rho\, (\sigma_\mu \otimes \sigma_\mu)$ annihilates all the terms in $\rho$ that are off the diagonal in the Bell basis. By a similar method, randomness shared by $n$ parties enables decoherence in the basis of simultaneous eigenstates of any set of commuting operators, where each operator is a tensor product of Pauli operators.

With Dave Beckman, Daniel Gottesman, and Michael Nielsen, I have been mulling over the problem of characterizing causal operations for several months. At first we guessed that any causal superoperator (one that does not allow Alice to signal Bob or Bob to signal Alice) can be implemented if Alice and Bob share entanglement and perform local operations, but Beckman discovered counterexamples. All of the cases studied so far are consistent with a modified conjecture, suggested by David DiVincenzo:

_Conjecture_: A superoperator that does not allow Alice to send a signal to Bob can be implemented with

* local operations by Alice and Bob,
* one-way quantum communication from Bob to Alice.

Some special cases of this conjecture have been proved, but we have no general proof, and in fact I am far from confident that the conjecture holds in general. Even if it does, the conjecture does not provide a very satisfying way to characterize the causal operations. On the one hand, it _is_ clear that the two resources listed are too weak to allow Alice to signal Bob. But on the other hand, one-way communication from Bob to Alice is not achievable when Alice and Bob are spacelike separated, so an operation that can be implemented with these resources cannot necessarily be applied on a time slice.
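The shared-randomness construction described above can be verified directly. The following sketch (Python with NumPy) applies the correlated-Pauli superoperator to a random density operator and checks that the result is diagonal in the Bell basis:

    import numpy as np

    I2 = np.eye(2, dtype=complex)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    paulis = [I2, sx, sy, sz]

    def twirl(rho):
        # both parties apply the same Pauli, chosen by two shared random bits;
        # averaging over the shared string gives the superoperator in the text
        return sum(np.kron(p, p) @ rho @ np.kron(p, p).conj().T
                   for p in paulis) / 4.0

    # the Bell basis, as columns of a unitary matrix
    bell = np.array([[1, 0, 0, 1],
                     [0, 1, 1, 0],
                     [0, 1, -1, 0],
                     [1, 0, 0, -1]], dtype=complex).T / np.sqrt(2)

    rng = np.random.default_rng(1)
    A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    rho = A @ A.conj().T
    rho /= np.trace(rho)                     # a random two-qubit density operator

    rho_out = bell.conj().T @ twirl(rho) @ bell
    print(np.round(np.abs(rho_out), 12))     # off-diagonal entries all vanish

Only the four Bell-diagonal entries survive, confirming that shared randomness suffices to induce decoherence in the Bell basis.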
In quantum physics, there is a perplexing gap between what is _causal_ and what is _local_.

It is a captivating challenge to find ways in which quantum error correction and/or entanglement purification can be invoked to improve the accuracy of clock synchronization or frequency standards. But my efforts so far have not met with much success.

I'd like to comment here on an issue that I didn't mention in my talk at the meeting. A problem related to the issues I discussed in my talk, but in a sense logically independent, arises if we consider in more detail how a $\sigma_1$ "measurement" is actually performed with real atomic clocks: a pulse is applied that rotates $\sigma_1$ eigenstates to $\sigma_3$ eigenstates, and then $\sigma_3$ is measured. Thus, if both Alice and Bob are to measure $\sigma_1$, they need to establish a "phase lock" to ensure that they are really using the same convention to define $\sigma_1$. An approach to this problem has been suggested in the literature, but it has also been criticized. At the meeting, both Dave Wineland and Paul Kwiat proposed an alternative approach. They suggested applying the concept of a "decoherence-free subspace" to encode Alice's phase convention in a robust two-qubit stationary state that can be sent to Bob. This proposal relies on the assumption that two qubits transported together will be subjected to identical phase errors in the preferred basis; thus an eigenstate of the operator $\sigma_3 \otimes \sigma_3$ will be invulnerable to phase errors. If we encode the logical qubit as $|\bar{0}\rangle = |01\rangle$, $|\bar{1}\rangle = |10\rangle$, then the encoded state resists dephasing. Alice and Bob both have local interrogating oscillators that are used to apply pulses. At the time that she calls 0, Alice can apply a pulse that prepares $(|0\rangle + |1\rangle)/\sqrt{2}$; she can encode that state as $(|01\rangle + |10\rangle)/\sqrt{2}$, and she can send that encoded state to Bob. Bob can decode it and measure it at a time of his choosing, and thereby (if Alice sends many such encoded states) he can lock his phase convention to Alice's convention at her time 0. This shared phase convention might be useful, but in itself it does not solve the problem of synchronizing Alice's clock with Bob's. We can also ask how well the assumptions of a preferred dephasing basis, and of identical phase errors on qubits traveling together, apply in a realistic physical setting. These may be reasonable assumptions if $|0\rangle$ and $|1\rangle$ are energy eigenstates of a two-level atom and the dephasing is dominated by fluctuating electromagnetic fields. The same idea works if $|0\rangle$ and $|1\rangle$ are linear polarization states of a photon, aligned with the preferred axes of the optical medium.

The clock synchronization problem might be viewed as an entry into a fascinating subject: relativistic quantum information theory. I've reported here on some modest progress toward a better understanding of one issue in this theory: the structure of causal observables.

I thank Jon Dowling for organizing this meeting and for encouraging me to write up this report. I'm also grateful to the participants, especially Xinlan Zhou, for stimulating discussions, and to Dave Wineland and Paul Kwiat for helpful correspondence. My interest in the clock synchronization problem was originally stimulated by discussions with Hideo Mabuchi and Jeff Kimble; flaws in some of my ideas were pointed out by Steven van Enk. The comments about causal operations are based on work with Dave Beckman, Daniel Gottesman, and Michael Nielsen, aided by useful advice from David DiVincenzo. This work has been supported in part by the Department of Energy under Grant No. DE-FG03-92-ER40701, by DARPA through the Quantum Information and Computation (QuIC) project administered by the Army Research Office under Grant No. DAAH04-96-1-0386, and by an IBM Faculty Partnership Award.
P. Galison, "Einstein's clocks: the place of time," Critical Inquiry 26:2 (2000).

R. Jozsa, D. S. Abrams, J. P. Dowling, and C. P. Williams, "Quantum clock synchronization based on shared prior entanglement," Phys. Rev. Lett. 85, 2010 (2000), quant-ph/0004105.

H. Mabuchi, private communication (1998).

C. H. Bennett, D. P. DiVincenzo, J. A. Smolin, and W. K. Wootters, "Mixed state entanglement and quantum error correction," Phys. Rev. A 54, 3824 (1996), quant-ph/9604024.

R. D. Sorkin, "Impossible measurements on quantum fields," in Directions in General Relativity, Vol. 2, edited by B. L. Hu and T. A. Jacobson (Cambridge University Press, Cambridge, 1993), gr-qc/9302018.
I consider quantum protocols for clock synchronization, and investigate in particular whether entanglement distillation or quantum error-correcting codes can improve the robustness of these protocols. I also draw attention to some unanswered questions about the relativistic theory of quantum measurement. This paper is based on a talk given at the NASA-DoD Workshop on Quantum Information and Clock Synchronization for Space Applications (QuICSSA), September 25-26, 2000.
Over half of all stars in the sky are actually multiple star systems and, of the binaries, about half again are close enough to one another for mass to be exchanged between the components at some point in their evolution. There is a subset of these close binary systems in which periodic or aperiodic variations in luminosity and spectral features can be explained by ongoing mass-transfer events and instabilities in the accretion flow. For example, long-term stable mass transfer in which the accretor is either a white dwarf, a neutron star, or a black hole is widely recognized as the mechanism powering cataclysmic variables and X-ray sources. In Algol-type systems an evolved star transfers mass via Roche lobe overflow to a near-main-sequence accretor. Each of these systems evolves on a (secular) time scale that is long compared to the orbital period of the system, with the mass-transfer rate determined by angular momentum losses from the binary, and by thermal relaxation and nuclear evolution of the donor star. In these systems, the fraction of the donor's mass that is transferred during one orbit is tiny, many orders of magnitude less than what current numerical 3-D hydrocodes can resolve.

All of the above systems must have descended from binaries in which the accretor of today was initially the more massive component, which evolved first off the main sequence. The mass transfer in these progenitor systems was in many instances dynamically unstable, yielding mass-transfer rates many orders of magnitude above the currently observed rates and leading in some cases to a common envelope phase (see, e.g., Warner 1995; Verbunt & van den Heuvel 1995; Nelson & Eggleton 2001). In addition, there is a wide class of binary star systems not presently undergoing mass transfer for which the astrophysical scenarios that have been proposed to explain their origin or ultimate fate involve dynamical or thermal mass transfer in a close binary, sometimes leading to a common envelope phase of evolution. Examples of such systems are millisecond pulsars, some central stars of planetary nebulae, double degenerate white dwarf binaries (perhaps leading to supernovae of Type Ia through a merger), subdwarf sdO and sdB stars, and double neutron star binaries (perhaps yielding gamma-ray bursts in a fireball when the neutron stars coalesce). The evolutionary scenarios that are drawn upon to explain the existence of some of these systems call for events in which an appreciable fraction or more of the donor's mass is transferred during a single orbit. If we are to fully understand these rich classes of astrophysically interesting systems (their origin, present evolutionary state, and ultimate fate), it seems clear that we will have to develop numerical algorithms that can accurately simulate mass-transfer events in binary systems under a wide range of physical conditions (for example, systems having a wide range of total masses, mass ratios, and ages) over both short and long evolutionary time scales.
The astrophysics community as a whole is far from achieving this ultimate goal, but progress is being made as various groups methodically tackle small pieces of this very large and imposing problem. Examples of recent progress in the numerical simulation of interacting binaries include two-dimensional simulation of mass transfer in Algol, three-dimensional evolutions of Roche lobe overflow in LMC X-4 and of the accretion stream in beta Lyrae, simulations of the common envelope phase and of the merger of a point-mass white dwarf with a red giant star, neutron star binary and black hole-neutron star binary mergers in the context of gamma-ray bursts, and the dispersal of the secondary star's material in Type Ia supernovae.

Building on our experience simulating the nonlinear development of dynamical instabilities in self-gravitating systems such as protostellar gas clouds, stellar cores, and young neutron stars, and on our experience in studying mass transfer in close binaries through analytical and semi-analytical techniques, we are developing a hydrodynamical tool to study relatively _rapid_ phases of mass transfer in binary systems. Our immediate aim is to be able to follow the dynamical redistribution of material through many orbits after the onset of a mass-transfer instability, in binary systems having a wide range of initial mass ratios, with either star initially selected to be in contact with its Roche lobe and thereby become the "donor." Our simulation tool treats both stars as self-gravitating fluids; they are embedded in the computational grid in such a way that their internal structures are both fully resolved; and the system as a whole is evolved forward in time through an explicit integration of the standard fluid equations, coupled with the Poisson equation, so that the Newtonian gravitational field changes along with the mass distribution in a fully self-consistent way. Initially we will examine structures that can be well represented by relatively simple barotropic (and adiabatic) equations of state, but this constraint can easily be lifted in the future. We will be restricted to studies of relatively rapid phases of mass transfer because we are integrating the equations of motion forward in time via an explicit integration scheme. While this simulation tool will not permit us to model stable flows with low mass-transfer rates, such as the observed flows in CVs and X-ray binaries, it should be capable of a wide range of astrophysical applications, including: a determination of the conditions required for all kinds of close binaries, with normal and degenerate components, to become unstable toward dynamical mass transfer; the ability to follow dynamical phases of mass transfer through to completion, which may mean a return to stability at a new system mass ratio, the formation of a massive disk or a common envelope with or without rapid mass loss from the system, or a merger of the two objects into one; and an investigation of the steady-state structure of secularly stable binaries. Through such investigations we will be able to place on much firmer footing a variety of theoretical scenarios (as alluded to above) that have been proposed to explain the evolution and fate of close binary systems.

With the commissioning of gravitational wave interferometers such as TAMA, LIGO, and VIRGO, there has been a growing interest in understanding the detailed behavior of, especially, neutron star inspirals and mergers.
As has been reviewed by Swesty, Wang & Calder (2000; hereafter SWC), a number of different groups have developed hydrodynamical codes to simulate the late stages of inspiral and merger of such compact objects. Indeed, as described and reviewed by SWC, an earlier version of our own simulation tool has been used to study the dynamical merger of equal-mass systems in which the stellar components were modeled with polytropic, white dwarf, and neutron star equations of state. Generally speaking, however, the last phase of a neutron star inspiral can be modeled with a hydrodynamical code that is less sensitive to initial conditions, and more tolerant of errors in the algorithm that integrates the fluid equations forward in time, than a hydrodynamical code designed to study more generic mass-transfer events in close binary systems. This is because general relativistic effects will necessarily drive a binary neutron star system to smaller separation, guaranteeing that the system will merge; and, even in the absence of relativistic effects, it appears as though a tidal instability that disrupts one or both stars will be encountered before either fills its Roche lobe and encounters a classic mass-transfer instability.

Building on this earlier work, we now have a simulation tool that can hydrodynamically follow the orbital evolution of binary stars with high precision. In developing this tool we have made a number of improvements to the hydrodynamics algorithm that was previously used only to study the tidal merger problem. We also have implemented a self-consistent-field algorithm that can construct very accurate initial equilibrium models of unequal-mass binaries in circular orbits, have paid special attention to the manner in which initial models are introduced into the hydrodynamics code, and have taken full advantage of continuing improvements in high-performance computers. With this tool in hand, we should be able to accurately model the evolution of a much broader class of close binary systems; specifically, systems in which the components initially have unequal masses and/or radii, and in which a mass-transfer instability, rather than a tidal instability, sets in. With the inclusion of appropriate relativistic corrections, this simulation tool should in principle also be able to simulate the merger of equal- or unequal-mass neutron star binaries, but our intention is not to focus so narrowly on this particular class of systems.

In § 2 of this paper, we collect results from theoretical investigations of the linear stability of mass transfer in close binaries and discuss the approximations that have been required to arrive at these results. In § 3 we present the self-consistent field method we use for the construction of initial, equilibrium models. We then describe our implementation of a parallel hydrodynamics code for the solution of the ideal fluid equations and of Poisson's equation for an isolated mass distribution in § 4.
In § 5 we compare results from the hydrodynamics code with known solutions for a set of test problems, and in § 6 we present the results from the evolution of two benchmark detached binaries. These simulations demonstrate our ability to faithfully represent the forces acting on the fluid, and they allow us to estimate the mass-transfer rate we will be able to resolve and the computational expense required to evolve a given system through an interesting number of orbits. We conclude in § 7 by summarizing the limits we have been able to attain at practical simulation resolutions, and we discuss the future application of the tool set to systems of interest.

In this section we will argue that Roche lobe overflow in a binary system approximated by two polytropic components can result in mass transfer on a dynamical time scale for a certain range of polytropic indices. A spherical polytrope of index $n$ with uniform entropy in mechanical equilibrium obeys the mass-radius relation $R \propto M^{(1-n)/(3-n)}$, which implies $\xi \equiv d\ln R/d\ln M = (1-n)/(3-n)$; hence, the body will expand upon mass loss for polytropic indices satisfying $1 < n < 3$. Consider Paczyński's (1971) approximation for the effective Roche lobe radius of a donor star, taken to be the secondary, with mass $M_2$ in a point-mass binary of total mass $M = M_1 + M_2$ and separation $a$: $R_L \simeq 0.462\, a\, (M_2/M)^{1/3}$. From this, one obtains the following relation for the logarithmic change in the donor's Roche lobe radius: $d\ln R_L = d\ln a + \frac{1}{3}\, d\ln(M_2/M)$. Upon eliminating the separation in favor of the system's orbital angular momentum, $J = M_1 M_2\,(Ga/M)^{1/2}$, one arrives at $d\ln R_L = 2\, d\ln J + \frac{4}{3}\, d\ln M - 2\, d\ln M_1 - \frac{5}{3}\, d\ln M_2$, where $M_1$ is the mass of the accreting star, taken to be the primary. If we further assume that the mass transfer is conservative with respect to the total mass and orbital angular momentum ($dM = dJ = 0$, $dM_1 = -dM_2$), we deduce that $\zeta_L \equiv d\ln R_L/d\ln M_2 = 2\,M_2/M_1 - 5/3$. Comparing equation ([eq:xi_poly]) with equation ([eq:log_roche_r]), the condition for stable mass transfer, $\xi \geq \zeta_L$, can be expressed as $q \equiv M_2/M_1 \leq \frac{5}{6} + \frac{1-n}{2(3-n)}$, which, for a given polytropic index, implies a maximum stable mass ratio; for a polytropic binary with $n = 3/2$, this critical ratio is $q = 2/3$, and for larger mass ratios mass transfer must occur on a dynamical time scale, as the donor will readjust its structure to its new mass within a few sound-crossing times. Note that if the donor is initially the less massive star (i.e., $q < 1$), the binary separation is expected to steadily increase during the mass-transfer event. But if the donor is initially the more massive component (i.e., $q > 1$), conservation of orbital angular momentum implies that the separation must decrease, and that the donor's Roche lobe radius will therefore contract, increasing the degree of overflow. The resulting mass-transfer rate is expected in this case to be quite substantial.

The dependence of the mass-transfer rate on the degree of over-contact can be estimated from the product of the volume swept out by the flow near the inner Lagrange point, $L_1$, in unit time and the local value of the density. The cross section of the flow near $L_1$ will scale as the square of the local sound speed, and the flow velocity is approximately equal to the sound speed. The volume of material transferred in unit time then scales as the cube of the sound speed. The density near the edge of a spherical polytrope of index $n$, radius $R$, mass $M$, and polytropic constant $K$ can be found by integrating the equation of hydrostatic equilibrium to obtain $\rho = \left[\frac{GM\,(R-r)}{(n+1)\,K\,R^2}\right]^{n}$. If we change variables to $\Delta R = R - r$, the width of a spherical shell near the edge of the star, we obtain $\rho \propto (\Delta R/R)^n$. The sound speed, in turn, obeys $c_s^2 = (1 + 1/n)\,K\,\rho^{1/n} \propto (GM/R)\,(\Delta R/R)$, so that $c_s^3 \propto (GM/R)^{3/2}\,(\Delta R/R)^{3/2}$. Taking the radius $R$ to be the effective Roche lobe radius of the donor, $\delta \equiv \Delta R/R_L$ is the degree of over-contact.
The mass-transfer rate is then expected to scale with the degree of over-contact as $\dot{M} \sim (M_2/P)\,\delta^{\,n+3/2}$, where $P$ is the orbital period. This agrees with Jedrzejec's calculation as presented in the literature. For a polytropic index of $n = 3/2$, eq. ([eq:mdot]) indicates that the mass-transfer rate will scale as the cube of the degree of over-contact. While the actual mass-transfer rate observed in a fully self-consistent, three-dimensional evolution may differ substantially from the estimate given in ([eq:mdot]), it nevertheless indicates that, for unstable binaries, mass-transfer events will evolve on a dynamical time scale once the donor reaches contact with its Roche lobe.

All the results presented in this section have relied on a great many simplifying assumptions, including the disregard of internal angular momentum (spin) in each star, the use of the Roche model, the neglect of the intrinsically nonspherical geometry of the components, and the assumption that the mass-transfer event is, in fact, conservative. To proceed beyond this point one must deal with extended distributions for the density and velocity in some approximation. We would argue further that it is advantageous to use a potential derived from the matter distribution in a self-consistent manner. With these additional complications, the task is well beyond the regime of analytical mechanics, but it is tractable if we employ three-dimensional computational fluid dynamical techniques. To investigate short-time-scale mass-transfer events numerically, we have developed a set of tools both for constructing equilibrium polytropic binaries and for evolving systems of interest in time with a hydrodynamics code. These tools are described below.
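Before turning to the tools themselves, the stability estimates above are compact enough to encode directly. The following sketch (Python; the function names are ours, and the formulas are the reconstructions quoted in the text) evaluates the critical mass ratio and the over-contact scaling of the mass-transfer rate:

    import numpy as np

    def zeta_donor(n):
        # logarithmic mass-radius exponent of an n-polytrope: d ln R / d ln M
        return (1.0 - n) / (3.0 - n)

    def zeta_roche(q):
        # d ln R_L / d ln M2 for conservative transfer with Paczynski's R_L
        return 2.0 * q - 5.0 / 3.0

    def q_critical(n):
        # largest mass ratio q = M2/M1 for which transfer remains stable
        return 0.5 * zeta_donor(n) + 5.0 / 6.0

    def mdot_estimate(m2, period, delta, n):
        # Mdot ~ (M2/P) * delta^(n + 3/2), delta = (R2 - R_L)/R_L
        return m2 / period * delta ** (n + 1.5)

    print(q_critical(1.5))                       # 2/3 for an n = 3/2 polytrope
    print(mdot_estimate(1.0, 1.0, 0.01, 1.5))    # cubic in the over-contact

For $n = 3/2$ the exponent $n + 3/2 = 3$, reproducing the cube scaling noted above.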
The iterative method that we have used to generate equilibrium, polytropic binaries is very closely related to the self-consistent field (SCF) technique developed by Hachisu (1986; see also Hachisu, Eriguchi & Nomoto 1986). This technique has previously been used to construct initial models of _equal_-mass binary systems for dynamical studies of the binary merger problem (see New & Tohline 1997 and SWC). Here we employ a more generalized version of the technique to construct _unequal_-mass binaries. In the following discussion we use $\vec{x}$ to refer to an arbitrary point in space. The vector $\varpi$ is the cylindrical radius vector, which can be expressed as $\varpi = (x, y, 0)$. The axis of rotation is always taken to be parallel to, but not necessarily coincident with, the $z$-axis. The numerical results presented here are in a system of units where the gravitational constant, the radial extent of our numerical grid, and the maximum density of one binary component are all taken to be unity. As we are treating polytropic models exclusively in the present work, these models can be scaled to represent different physical systems by choosing, for example, a total system mass or orbital separation. We would like to emphasize that the polytropic models could represent binaries consisting of neutron stars, white dwarfs, or normal stellar components. Assuming synchronous rotation, so that the bodies are stationary in a corotating reference frame, the equations of hydrostatic equilibrium reduce to the single vector equation $\nabla\!\left[H + \Phi - \frac{1}{2}\Omega^2\,|\varpi - \varpi_{\rm com}|^2\right] = 0$. Here $\Phi$ is the gravitational potential, $\Omega$ is the angular velocity of the reference frame in which the fluid is stationary, $H$ is the enthalpy, and $\varpi_{\rm com}$ is the cylindrical radius vector of the system's center of mass, so that $|\varpi - \varpi_{\rm com}|$ is each fluid element's distance from the axis of rotation. For a polytrope of index $n$, the enthalpy is given, to within an arbitrary constant, by $H = (n+1)\,P/\rho$, where $P$ and $\rho$ are the pressure and density of the fluid, respectively. Equation ([eq:euler_equation_static]) in turn implies that $H + \Phi - \frac{1}{2}\Omega^2\,|\varpi - \varpi_{\rm com}|^2 = C$ for some constant $C$, which in general will be different for each binary component. Hereafter we denote these two integration constants as $C_1$ and $C_2$.

Using equation ([eq:iteration]), one can construct an iterative scheme as follows. An initial guess at the density field is constructed. Poisson's equation is solved to obtain the gravitational potential arising from the chosen mass distribution. This is, by far, the most computationally intensive part of the algorithm. For our work, we have chosen a cylindrical coordinate grid and utilized subroutines from the FISHPACK Fortran subroutine set for the solution of elliptic partial differential equations, with the boundary potential being calculated via a spherical harmonic expansion of the density field utilizing harmonic moments through a finite maximum order. With the gravitational potential in hand, we can use algebraic relations at three boundary points, where the density field is forced to vanish, in order to set the integration constants $C_1$ and $C_2$ and the value of the angular velocity $\Omega$. The boundary points all lie along the line of centers. They correspond to the inner and outer boundary points of one star and the inner boundary point of the companion, as illustrated in Fig. [fig:boundary_points]. The values of the gravitational potential at the three boundary points, $\Phi_A$, $\Phi_B$, and $\Phi_C$, are used to solve for the set of constants $C_1$, $C_2$, and $\Omega^2$ as follows: since the enthalpy vanishes at each boundary point, $\Omega^2 = 2\,(\Phi_A - \Phi_B)/(d_A^2 - d_B^2)$, $C_1 = \Phi_A - \frac{1}{2}\Omega^2 d_A^2$, and $C_2 = \Phi_C - \frac{1}{2}\Omega^2 d_C^2$, where $d$ denotes the distance of each boundary point from the rotation axis. Equation ([eq:iteration]) can then be used to construct the enthalpy throughout the computational domain and, from it, an improved density distribution can be constructed using the relation $\rho_i = \rho_{{\rm max},i}\,(H/H_{{\rm max},i})^n$, where $i$ labels the two stellar components. As Hachisu has explained, it is best to hold the positions of the boundary points and the maximum densities fixed throughout the iteration cycles. The iteration cycle is then repeated, using the improved density distribution, until the relative changes from iteration to iteration in $C_1$, $C_2$, $\Omega^2$, and the density field are all smaller than some prescribed convergence criterion, $\delta$. For a grid resolution of 128 radial points by 128 vertical points by 256 points in azimuth, we typically use a correspondingly strict tolerance.
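The structure of the iteration is easiest to see in a stripped-down setting. The following is a one-dimensional, nonrotating analogue of the scheme (Python; a real binary model adds the rotating-frame terms and the three-point boundary solve described above, so this is a sketch rather than the production algorithm):

    import numpy as np

    # Minimal 1-D (spherical, nonrotating) SCF iteration for a single
    # n-polytrope: solve for the potential from the current density, fix the
    # integration constant from the boundary condition H(R) = 0, rebuild the
    # density from the enthalpy, and repeat until converged.
    G, n = 1.0, 1.5
    N = 400
    r = np.linspace(1e-6, 1.0, N)        # star constrained to unit radius
    dr = r[1] - r[0]
    rho = np.ones(N)                     # initial guess (a uniform sphere)

    for it in range(200):
        # potential of the current spherical mass distribution
        m = 4.0 * np.pi * np.cumsum(rho * r**2) * dr      # enclosed mass
        phi_surf = -G * m[-1] / r[-1]
        phi = phi_surf - G * np.cumsum((m / r**2)[::-1])[::-1] * dr
        C = phi[-1]                      # H must vanish at the surface
        H = np.clip(C - phi, 0.0, None)
        rho_new = (H / H.max())**n       # maximum density held at unity
        if np.max(np.abs(rho_new - rho)) < 1e-10:
            break
        rho = rho_new

    print(it, m[-1])   # iterations used; total mass of the converged model

As in the full scheme, the polytropic constant is set implicitly by the fixed maximum density and stellar radius, and convergence is judged by the change in the model between successive iterations.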
Unfortunately, the self-consistent field method does not allow one to specify physically meaningful parameters, such as the binary mass ratio or separation, _a priori_. Instead, as already described, it is best to specify the three boundary points and the maximum density for each body. Nevertheless, the method described above remains, to our knowledge, the most effective means of generating fully self-consistent models of synchronously rotating, equilibrium binary systems with unequal masses and/or radii. We gauge the quality of a converged solution by the degree to which it satisfies the scalar virial equation. Specifically, we define the dimensionless virial error $VE = |2K + W + 3\Pi|\,/\,|W|$, where the terms appearing in equation ([eq:virial-error]) are defined by the integral quantities $K = \frac{1}{2}\int \rho\,|v|^2\,dV$, $W = \frac{1}{2}\int \rho\,\Phi\,dV$, and $\Pi = \int P\,dV$, and where $v$ is the velocity field as measured in the inertial frame of reference.

As applied in the SCF technique, the velocity is entirely due to the rotation of the frame, that is, $v = \Omega\,|\varpi - \varpi_{\rm com}|\,\hat{e}_\phi$. In Fig. [fig:scf_side] we plot density contours in the meridional plane for one contact binary system, three semi-detached systems, and two detached systems that we have constructed using the SCF technique. The more massive component always appears on the left of the plots. Figure [fig:scf_top] shows contours in the equatorial plane for the same six systems. The solid lines mark four levels of the mass density, normalized to the maximum density of each model, and the dashed line follows the self-consistently determined critical Roche surface for the system. The binaries all share the same polytropic index; other key parameters for these models are listed in Table [tab:scf_models]. Throughout this work we take the secondary component (denoted by a '2') to be the component closest to contact (closest to being the donor). The values of $q$ shown in Table [tab:scf_models] (as well as in Figs. [fig:scf_side] and [fig:scf_top]) give the ratio of the mass of the secondary to that of the primary; the stellar radii ($R_1$ and $R_2$) and Roche lobe radii ($R_{L1}$ and $R_{L2}$) have all been normalized to the orbital separation. The stellar radii and Roche lobe radii are the radii of spheres having a volume equal to that of the star or critical Roche surface, respectively. For model 4, the Roche lobe of the primary extends beyond the computational grid, so the effective radius of its Roche lobe is only a lower limit. All six of these models were constructed on a cylindrical grid of 128 radial and vertical zones by 256 azimuthal zones.

Table [tab:scf_models]:

    model     q        rho_max,1  R_1/a    R_L1/a   rho_max,2  R_2/a    R_L2/a
    1         1.0000   1.00       0.3720   0.3723   1.00       0.3720   0.3723
    2         1.2111   1.00       0.3056   0.3580   0.60       0.3893   0.3915
    3         0.4801   1.20       0.3727   0.4401   1.00       0.3126   0.3129
    4         0.1999   1.00       0.3817   0.5194   0.77       0.2476   0.2478
    5 (EB)    1.0000   1.00       0.2984   0.3778   1.00       0.2984   0.3778
    6 (UB)    0.8436   1.20       0.3180   0.3919   1.00       0.3200   0.3620

Table [tab:scf_convg] lists the resulting virial error for the contact system (model 1 from Table [tab:scf_models]) constructed on grids of differing resolutions.

[Table tab:scf_convg: virial error versus convergence criterion at grid resolutions of 64 x 64 x 128, 128 x 128 x 256, and 256 x 256 x 512 zones in (R, z, phi).]

As the convergence criterion $\delta$ is decreased, the number of required iterations increases. For fixed resolution, the overall quality of the solution does not significantly improve beyond some limiting value of $\delta$, regardless of the number of iterations taken. As the resolution is increased, the virial error decreases roughly in proportion to the square root of the number of grid points. Due to the symmetry of these initial models about the equatorial plane, we only calculate the models in the half space $z \geq 0$. Assuming the line of centers coincides with the $x$-axis, the tidal distortion of each star also is symmetric about the $x$-$z$ plane. Hence, further computational efficiency could be obtained with this technique by limiting the computational grid to only half the domain in azimuth. To date, we have not enforced this additional symmetry constraint, although in practice the converged models display this symmetry. The SCF method is insensitive to the functional form of the initial guess for the density distribution.
For uniform spheres and for spherically symmetric Gaussian density distributions as starting guesses, one obtains the same converged model to machine accuracy. We also note that more rapid convergence for models with soft equations of state (e.g., large $n$) can be achieved by using an even mixture of the current and previous potentials during the iteration. For more rigid equations of state, where there is more mass at the boundary points and hence a greater coupling between the solution near the boundary points and the global solution, such mixing is not necessary and the solution converges rapidly.

We have developed an explicit, conservative, finite-volume, Eulerian hydrodynamics code that is second-order accurate in both time and space to evolve the equilibrium binaries. The program is similar to the ZEUS code developed by Stone & Norman. The integration scheme is designed to evolve five primary variables that are densities of conserved quantities: the mass density $\rho$; the angular momentum density $A = \rho R v_\phi$; the radial momentum density $S = \rho v_R$; the vertical momentum density $T = \rho v_z$; and an entropy tracer $\tau = (\rho\varepsilon)^{1/\gamma}$, where $\varepsilon$ is the internal energy per unit mass and $\gamma$ is the selected ratio of specific heats of the gas. The tracer is related to the entropy of the fluid through a logarithmic relation involving the specific heat at constant pressure. Using the entropy tracer in lieu of the internal energy per unit mass, or of the total energy density, allows us to avoid the finite-difference representation of the divergence of the velocity field that must otherwise be used to express the work done by pressure on the fluid. For the evolutions presented in this paper we have set $\gamma = 1 + 1/n$. We note, however, that by allowing the compressible fluid system to evolve with an adiabatic exponent that differs from this value, the stars will not be homentropic. For example, by selecting an appropriate value of $\gamma$ we can effectively model stars that are convectively stable and that obey a mass-radius relation quite different from the normal polytropic one specified by eq. ([eq:xi_poly]), that is, different from $\xi = (1-n)/(3-n)$. By doing this, we expect to be able to closely approximate the mass-radius relationship of main-sequence stars. A more detailed discussion of this idea is beyond the scope of this paper.

The set of differential equations that we solve is based on the conservation laws for these five conserved densities. Mass conservation is governed by the continuity equation, $\partial\rho/\partial t + \nabla\cdot(\rho v) = 0$, where $v$ is the velocity field, expressed in terms of its components in a cylindrical coordinate system as $v = (v_R, v_z, v_\phi)$. The three components of Euler's equation govern changes in the momentum densities. We express these equations in a frame of reference rotating with a constant angular velocity $\Omega$ as follows:

$\partial S/\partial t + \nabla\cdot(S v) = -\left[\partial P/\partial R + \rho\,\partial(\Phi - \frac{1}{2}\Omega^2 R^2)/\partial R\right] + A^2/(\rho R^3) + 2\Omega A/R$,
$\partial T/\partial t + \nabla\cdot(T v) = -\left[\partial P/\partial z + \rho\,\partial\Phi/\partial z\right]$,
$\partial A/\partial t + \nabla\cdot(A v) = -\left[\partial P/\partial\phi + \rho\,\partial\Phi/\partial\phi\right] - 2\Omega S R$,

where $v_\phi$ is measured in the rotating frame. The second and third terms appearing on the right-hand side of eq. ([eq:radial-euler-eq]) represent the curvature of cylindrical coordinates and the radial component of the Coriolis force, respectively. Likewise, the last term appearing on the right-hand side of eq. ([eq:angular-euler-eq]) represents the azimuthal component of the Coriolis force.
From the first law of thermodynamics we know that, in the most general case, the entropy tracer obeys an evolution equation whose right-hand side is proportional to the rate of heat addition. Here we will be considering only adiabatic flows, in which case that rate vanishes, so the entropy tracer obeys an advection equation of precisely the same form as the continuity equation, namely $\partial\tau/\partial t + \nabla\cdot(\tau v) = 0$. Even though we are performing adiabatic evolutions, we cannot simply use an adiabatic equation of state ($P = K\rho^\gamma$) and disregard the first law of thermodynamics, because the polytropic constant $K$ is, in general, different for each binary component. Finally, we solve Poisson's equation, $\nabla^2\Phi = 4\pi G\rho$, once every integration timestep in order to calculate the force of gravity arising from the instantaneous mass distribution, and we use the ideal gas law as the equation of state to close the system of equations, namely $P = (\gamma - 1)\,\rho\varepsilon = (\gamma - 1)\,\tau^\gamma$.

It may be argued that our treatment of the thermodynamics of the system, as the purely adiabatic flow of an ideal fluid, is overly simplified. However, we believe that the self-consistent treatment of both binary components in the presence of the full nonlinear tidal forces is sufficiently complex and novel to warrant the use of a simple equation of state at the present time. This will allow us to establish the qualitative behavior of systems in this limiting case before additional complications, leading to nonadiabatic heat transport, are introduced into the simulations.

Before proceeding with the discussion of the hydrodynamics algorithm that we have implemented to solve the equations presented in § [sec-analytic], we first describe the discretization that has been used to represent the exact partial differential equations when they are expressed as approximate algebraic relations between discrete points in the computational grid. As in the ZEUS code, all scalar variables and the diagonal components of tensors are defined at cell centers. The components of vectors are defined at the corresponding faces of the cell. A volume element, and the relative positions of the variables within each cell, are illustrated in Fig. [fig:cell]. The cell extends from $R_i$ to $R_{i+1}$ in radius, from $z_j$ to $z_{j+1}$ in the vertical coordinate, and from $\phi_k$ to $\phi_{k+1}$ in the azimuthal coordinate. We represent the staggered variables in the computational mesh with a half-index notation; the coordinates of the center of a grid cell are given by $R_{i+1/2}$, $z_{j+1/2}$, $\phi_{k+1/2}$, for example. A complete listing of the variables and their centering is given in Table [tab:variable_centering]:

    (a) cell centers: the coordinates R, z, phi; the mass density rho; the
        entropy tracer tau; the pressure P; the enthalpy H; the gravitational
        potential Phi; and the diagonal components of the artificial viscosity.
    (b) radial cell faces: the radial momentum density S; the radial velocity v_R.
    (c) vertical cell faces: the vertical momentum density T; the vertical velocity v_z.
    (d) azimuthal cell faces: the angular momentum density A; the azimuthal velocity v_phi.

Through the method of operator splitting, one can construct a numerical scheme that groups terms of the same physical character together. Again following along the lines of the ZEUS code, we implement a splitting scheme that separates updates of the fluid state due to Eulerian transport (advection) from updates due to the source terms.
In this section we describe our treatment of the advection terms. Given the density $q$ of any conserved quantity that satisfies a generic conservation law of the form $\partial q/\partial t + \nabla\cdot(q v) = 0$, we can replace the differential equation ([eq:advection-equation]) with the equivalent integral equation $\frac{d}{dt}\int_V q\,dV = -\oint_{\partial V} q\,v\cdot d\vec{S}$. Equation ([eq:integral-advection-equation]) must hold for any volume. In particular, it must hold for every volume element within the computational grid. The exact integral relation is then expressible in the following finite-volume form for each grid cell: $(\bar{q}^{\,n+1} - \bar{q}^{\,n})\,V = -\Delta t \sum_{\rm faces} \tilde{q}\,(v\cdot\Delta\vec{S})$, where the summation is over all six faces on the surface of the three-dimensional cell. The surface elements $\Delta\vec{S}$ are naturally face-centered with respect to the control volume in question, so averages must be taken to obtain the advection velocity components necessary to perform the dot product for the momentum densities, as shown in eq. ([eq:finite-volume-advection-equation]). We use second-order accurate, linear averages to construct the advection velocities in this case. The amount of $q$ advected through each face is given by an upwind-biased, linear interpolation of the distribution of $q$. By construction, the amount of $q$ that is transported out of one cell immediately flows into the neighboring cell, thus ensuring the conservative nature of the advection scheme. Unlike the ZEUS code, we do not use operator splitting along the three separate dimensions during the advection step. Instead, we perform the updates due to advection in all three dimensions simultaneously. We thus avoid concerns about bias that may be introduced by using an unsymmetrized ordering of the advection sweeps. A discussion of how we obtain second-order accuracy in time for the advection step, through time centering of the terms appearing in eq. ([eq:finite-volume-advection-equation]), is presented in § [sec:time-centering]. Our advection scheme automatically reverts to a first-order accurate (upwind) scheme at local extrema in the primary fluid variables. In addition, it is necessary to introduce an artificial viscosity to stabilize the scheme in the presence of shocks; the artificial viscosity prescription we have implemented is detailed in § [sec:artificial-viscosity], and a simple one-dimensional analogue of the transport step is sketched below.
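Here is that one-dimensional analogue (Python; first-order upwind is used for brevity, whereas the production scheme reaches second order through an upwind-biased linear reconstruction):

    import numpy as np

    def advect_1d(q, v_face, dx, dt):
        # Conservative, upwind finite-volume update, the 1-D analogue of the
        # finite-volume form above: fluxes are evaluated at the faces, and
        # whatever leaves one cell enters its neighbor.
        q_face = np.where(v_face[1:-1] > 0.0, q[:-1], q[1:])
        flux = np.zeros(v_face.size)
        flux[1:-1] = q_face * v_face[1:-1]      # closed outer boundaries
        return q - dt / dx * (flux[1:] - flux[:-1])

    # advect a square pulse at constant speed; the total "mass" is conserved
    N, v = 200, 0.5
    dx = 1.0 / N
    cells = np.arange(N)
    q = np.where((cells > 40) & (cells < 80), 1.0, 0.0)
    v_face = np.full(N + 1, v)
    for _ in range(100):
        q = advect_1d(q, v_face, dx, dt=0.5 * dx / v)   # CFL number 1/2
    print(q.sum() * dx)                                  # unchanged to round-off

The printed total is conserved to round-off, which is the defining property of the flux-based transport step.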
The Lagrangian source terms for the momenta that appear on the right-hand sides of eqs. ([eq:radial-euler-eq])-([eq:angular-euler-eq]) arise from the forces of pressure and gravity, as well as from the differentiation of the curvilinear basis vectors and the rotation of the reference frame. We have found it advantageous to combine the pressure gradient with the gradient of the gravitational potential, which results in a gradient of the sum of $H$ and $\Phi$. Since the centrifugal force can also be expressed as the gradient of a potential, it is included as well to form the effective potential $\Phi_{\rm eff} = \Phi - \frac{1}{2}\Omega^2 R^2$, as defined in eq. ([eq:effective-potential]). As explained in § [sec:scf], our initial models have the property that $H + \Phi_{\rm eff}$ is constant everywhere, hence the source terms cancel to reasonably high precision throughout both stars initially. As with the advection step, we do not use an operator-splitting technique to evaluate the source terms along the three separate coordinate dimensions; instead, at each cell location, all updates due to Lagrangian source terms are performed simultaneously. In simplified form (suppressing the detailed interpolation factors), the updates of expressions ([eq:s_update])-([eq:a_update]) read

$S^{n+1}_{i,j+1/2,k+1/2} = S^{n}_{i,j+1/2,k+1/2} - \Delta t\left[\hat{\rho}\,\frac{\Delta(H+\Phi_{\rm eff})}{\Delta R} - \frac{\hat{A}^2}{\hat{\rho}\,R_i^3} - \frac{2\Omega\hat{A}}{R_i}\right]$,

$T^{n+1}_{i+1/2,j,k+1/2} = T^{n}_{i+1/2,j,k+1/2} - \Delta t\,\hat{\rho}\,\frac{\Delta(H+\Phi)}{\Delta z}$,

$A^{n+1}_{i+1/2,j+1/2,k} = A^{n}_{i+1/2,j+1/2,k} - \Delta t\left[\hat{\rho}\,\frac{\Delta(H+\Phi)}{\Delta\phi} + 2\Omega\,\hat{S}\,R_{i+1/2}\right]$.

A caret identifies a variable whose value has been interpolated to a spatial location different from the variable's primary definition point, as shown in Fig. [fig:cell]; the hatted quantities $\hat{A}$, $\hat{S}$, and $\hat{\rho}$ are constructed as volume-weighted, second-order averages of the neighboring values at the appropriate staggered locations.

To stabilize the scheme in the presence of shocks, we employ a planar, von Neumann artificial viscosity that is active only for zones that are undergoing compression. (See Stone & Norman 1992, or Bowers & Wilson 1991, p. 311, for more detailed discussions of artificial viscosity in Eulerian hydrodynamics.) The momentum densities are updated from finite-difference equations in which the divergence of the diagonal viscous stress acts as a source term, where a diagonal component of the artificial viscosity has the form $Q_{ii} = c_Q\,\rho\,(\Delta v_i)^2$ if the velocity difference $\Delta v_i$ across the zone is negative; otherwise the components of $Q$ are zero. Note that we neglect the shear components of viscosity. The factor $c_Q$ is a parameter that roughly dictates the number of zones across which shock structures will be spread; a modest value is typically sufficient. In keeping with our overall adiabatic treatment of the flow (see § [sec-analytic]), we neglect the generation of entropy by shock compression.
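A one-dimensional sketch of this viscosity prescription (Python; the value $c_Q = 3$ below is an illustrative choice, not necessarily the coefficient used in the simulations):

    import numpy as np

    def von_neumann_q(rho, v_face, c_q=3.0):
        # Diagonal artificial-viscosity component for one direction:
        # Q = c_q * rho * (dv)^2 in zones undergoing compression (dv < 0),
        # and zero otherwise, matching the prescription in the text.
        dv = v_face[1:] - v_face[:-1]        # cell-centered velocity jump
        return np.where(dv < 0.0, c_q * rho * dv**2, 0.0)

    def apply_viscosity(mom, rho, v_face, dx, dt, c_q=3.0):
        # momentum update: d(mom)/dt = -dQ/dx, differenced onto the faces
        Q = von_neumann_q(rho, v_face, c_q)
        mom[1:-1] -= dt / dx * (Q[1:] - Q[:-1])   # interior faces only
        return mom

Because Q is quadratic in the velocity jump, it is negligible in smooth flow and switches on strongly only inside the few zones spanning a shock.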
In a binary system that is undergoing mass transfer, the accretion stream will necessarily undergo a shock transition as it is decelerated upon impact with the accreting star, or when it intersects itself if the stream has sufficient angular momentum to orbit the accretor. In addition, even in a detached binary simulation there will be weak standing shock fronts (as viewed in the corotating frame of reference) at or near the surfaces of the stars, arising from the rapid deceleration of material falling onto them. We have found that even these weak shocks can have a noticeable impact on the quality of the solution in long time evolutions of detached systems unless artificial viscosity is used to damp the resulting oscillations.

The timestep cycle is split between the application of source, advection, and viscosity operators. First, the source terms are applied for one half of a timestep. Next, all updates due to advection are performed for a full timestep, and the viscosity updates are applied to the momentum densities. Finally, the second half of the source operators is applied. The source and advection steps are thereby staggered in time when viewed over several iteration cycles at a constant value of the timestep. The advection is time-centered by first performing half a timestep of fictitious advection in order to obtain "time-centered" velocities for constructing the face-centered advection velocity components that appear in eq. ([eq:finite-volume-advection-equation]). The full timestep of advection is then performed. The components of the viscosity tensor are constructed from the velocity and density estimates at the midpoint of the timestep as well. Since the momentum densities themselves also appear in the source terms of eqs. ([eq:radial-euler-eq]) and ([eq:angular-euler-eq]), similar care must be taken with their centering in time. The source operators are applied in a fictitious source step to obtain the angular and radial momentum densities at a point half a timestep in the future. These values are then used to update the momentum densities through a full timestep. As the timestep value changes from iteration to iteration, this algorithm for time-centering the source terms is not formally accurate to second order. However, in real computations the character of the flow and, hence, the maximal signal velocity do not change rapidly over the course of a timestep cycle, so one may expect the resulting inaccuracies in the time centering of the source terms to be small. The other terms that appear in the source operators, including the gravitational potential, are all calculated at the approximate midpoint in time between the source steps.

Since we explicitly integrate the fluid equations in time, the timestep is limited in size by the familiar Courant-Friedrichs-Lewy (CFL) stability criterion, which ensures that the time increment is small enough that no characteristic can cross a cell in a single timestep. Specifically, $\delta t \leq \min\left[\frac{\Delta R}{|v_R| + c_s},\;\frac{\Delta z}{|v_z| + c_s},\;\frac{R\,\Delta\phi}{|v_\phi| + c_s}\right]$, where $c_s$ is the speed of sound and the minimum is taken over all grid cells. In practice we limit the timestep to half the CFL value.
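Expressed in code, the timestep selection is a global reduction over the grid (a Python sketch; the safety factor of 1/2 follows the practice stated above, and the argument names are ours):

    import numpy as np

    def cfl_timestep(vR, vz, vphi, cs, dR, dz, dphi, R, safety=0.5):
        # Largest explicit timestep allowed by the CFL criterion on a
        # cylindrical grid; all arrays are zone-centered estimates and R
        # broadcasts over the (R, z, phi) index ordering.
        dt_R   = dR / (np.abs(vR) + cs)
        dt_z   = dz / (np.abs(vz) + cs)
        dt_phi = R * dphi / (np.abs(vphi) + cs)
        return safety * min(dt_R.min(), dt_z.min(), dt_phi.min())

In a distributed-memory run, each task would compute this minimum over its own block and a single global reduction would produce the shared timestep.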
Since we have introduced the diffusion terms associated with artificial viscosity, the timestep must also satisfy a condition of the form (see p. 270 of Bowers & Wilson 1991) $\delta t \leq \min\left[\rho\,(\Delta x_i)^2/(4\,Q_{ii})\right]^{1/2}$, with the minimum again taken over all zones and coordinate directions. The boundary conditions for the fluid variables at the external boundaries allow the fluid to flow freely off the grid, but do not allow material to flow back from the outermost layer of boundary cells. The central annulus of cells, whose inner radius lies at the coordinate axis, is treated as a single, azimuthally averaged cell for each layer in the vertical direction.

As it is our intention to perform high-resolution simulations, it is imperative that the work load within the simulation be distributed amongst many processors, so that the simulations may be conducted in a reasonable amount of time and do not exceed the available memory of a single node. The fluid dynamics equations, being hyperbolic partial differential equations, are ideally suited to a simple domain decomposition, or single program multiple data (SPMD), parallelization model. Each computational task performs the same operations on its own block of the global data arrays, with communication being necessary only between nearest-neighbor tasks that share a boundary of ghost zones one cell thick (this ghost-zone thickness is dictated by the order of our advection and finite-difference operators). We have written the program in Fortran90, with explicit message passing performed through MPI (Message Passing Interface) subroutine calls. The resulting parallel code performance scales linearly with the number of processors for 4 to 128 processors on the Cray T3E. Similar behavior is also seen on the IBM SP platform.

We are seeking to solve Poisson's equation for an isolated distribution of mass. The correct boundary condition in this instance is that the potential goes to zero at infinity. As we only construct the solution on a finite domain, we must specify the gravitational potential (or its gradient) on some boundary that encloses all the mass in the simulation. We construct the boundary potential using a novel technique based on a compact representation of the cylindrical Green's function in terms of half-integer-degree Legendre functions of the second kind, as described by Cohl & Tohline. The boundary potential is then simply given by the convolution of the appropriate Green's function with the density distribution. This method is capable of generating the exact solution for a discretized mass distribution, and it has the attractive feature that it can be applied to very flattened bodies without suffering penalties in either performance or accuracy. In order to obtain the interior solution for the gravitational potential, Poisson's equation is first Fourier transformed in the azimuthal direction; the resulting set of two-dimensional partial differential equations (Helmholtz equations) for the decoupled Fourier amplitudes is then solved using an alternating direction implicit (ADI) scheme, and the solution is transformed back to real space.
The solution of Poisson's equation requires special care in the context of parallel computing, because it necessarily involves global communication: the character of the underlying physical law is action at a distance. The algorithms we have implemented for computing the gravitational potential are well suited to a cylindrical geometry and are very efficient in a distributed computing environment. Parallel communications are used to transpose the data so that all the data in a given dimension reside in local memory at one time. When operations are to be performed along a different dimension, the data are transposed again. This allows us to send relatively few, large messages. Further details regarding our solution of Poisson's equation in a parallel computing environment can be found in the reference cited above.

Here we present results from three different types of tests that we have used to evaluate and quantify the accuracy of our computational tools. In all tests we compare a known, although not necessarily analytical, solution with the calculated numerical solution.

As a check of the stability of our code in the presence of, ideally, discontinuous jumps in the fluid variables, we have solved Sod's shock tube problem with the initial discontinuity lying along a plane of constant $z$. Sod's shock tube problem presents a useful hydrodynamic test because the solution is known analytically and contains the three simple waves that can occur in ideal fluid flow. Of these simple waves, it is the shock wave that concerns us most. Our goal is not to resolve the details of the shock structure, but rather to ensure that our algorithm is well behaved (numerically stable and yielding an acceptable solution) in the presence of shocks. The initial conditions for Sod's shock tube problem are that the velocity $v$ is zero everywhere; on one side of the interface, the pressure, density, and internal energy per unit mass take on the canonical values $p = 1.0$, $\rho = 1.0$, $\varepsilon = 2.5$; on the other side, $p = 0.1$, $\rho = 0.125$, $\varepsilon = 2.0$. The fluid flow is characterized by an adiabatic exponent $\gamma = 1.4$. The computed solution for the vertical velocity $v_z$, the pressure $p$, the mass density $\rho$, and the quantity $p/\rho^\gamma$ (which is proportional to the polytropic constant and, hence, tracks the entropy; see eq. [eq:entropy]) is plotted along with the analytical solution at a fixed time in Fig. [fig:sod_shock_tube]. The computed points are not average values, but are instead the values for a random column of cells at constant radius and azimuth within the three-dimensional grid. The calculation was performed with the artificial viscosity active and with 130 vertical zones; the initial discontinuity was placed at the midplane of the vertical domain. The results from this simulation compare favorably to the results produced by other second-order accurate, Eulerian hydrodynamics programs with artificial viscosity.
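A self-contained illustration of the test is easy to build. The following sketch (Python; it uses the total-energy formulation and first-order Rusanov fluxes, neither of which is the staggered-grid, entropy-tracer scheme described in this paper, and it assumes the canonical Sod values quoted above):

    import numpy as np

    gamma = 1.4
    N = 400
    x = np.linspace(-0.5, 0.5, N)
    rho = np.where(x < 0.0, 1.0, 0.125)
    p   = np.where(x < 0.0, 1.0, 0.1)
    u   = np.zeros(N)
    U = np.array([rho, rho*u, p/(gamma-1) + 0.5*rho*u**2])   # conserved vars

    def flux_and_speed(U):
        rho, mom, E = U
        u = mom / rho
        p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
        F = np.array([mom, mom*u + p, (E + p)*u])
        return F, np.abs(u) + np.sqrt(gamma * p / rho)

    t, dx = 0.0, x[1] - x[0]
    while t < 0.20:
        F, s = flux_and_speed(U)
        dt = 0.4 * dx / s.max()
        smax = np.maximum(s[:-1], s[1:])       # Rusanov interface speed
        Fi = 0.5*(F[:, :-1] + F[:, 1:]) - 0.5*smax*(U[:, 1:] - U[:, :-1])
        U[:, 1:-1] -= dt / dx * (Fi[:, 1:] - Fi[:, :-1])
        t += dt

    print(U[0, N//2 - 5 : N//2 + 5])   # density through the contact region

Even this crude solver reproduces the qualitative three-wave structure (rarefaction, contact, shock) against which the production code is compared.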
The shock front is spread out over approximately three zones, and there is no indication of numerical instability in the solution for the shocked gas. The contact discontinuity is likewise spread out over about three zones, due to the numerical diffusion inherent in a second-order accurate Eulerian scheme. There is some disagreement between the computed and analytical solutions at the tail of the rarefaction wave. This phenomenon has been investigated previously and results from an inconsistent representation of the analytic viscous equations in finite-difference form. Finally, immediately behind the shock, the analytical solution for $p/\rho^\gamma$ disagrees slightly with the computed solution for the shocked gas. This is because the analytical solution includes a small increase in the entropy of the fluid that passes through the shock whereas, as discussed in § [sec:artificial-viscosity], we have elected to treat all of the flow as though it were adiabatic, that is, by setting the heat-generation term to zero. (Agreement with the analytical result could readily have been achieved by constructing an appropriate expression for the entropy generation in terms of the artificial viscosity, in order to account for dissipation in the shock, as shown, for example, in eqs. (11) and (37) of Stone & Norman 1992.) In effect, we have assumed that each fluid element passing through the shock is immediately able to cool back down to a temperature that places it on its original pre-shock adiabat. We note that we have used the gradient of the pressure, as opposed to the density times the gradient of the enthalpy, for the solution of Sod's problem. Due to the pathological nature of the discontinuous initial conditions, a correct solution cannot be obtained if the enthalpy term is used with our chosen centering of the fluid variables.

Cohl & Tohline have published exhaustive tests showing the accuracy with which we are able to evaluate the gravitational potential on the boundary of our cylindrical coordinate grid. In order to ascertain the accuracy with which we are able to determine the force of gravity arising from the fluid everywhere inside the grid, we have calculated the potential and its derivatives for a uniform-density sphere of radius $R_*$ and density $\rho_*$, centered at an arbitrary position $(x_0, y_0, z_0)$ on the grid. The analytical potential is

$\Phi = \frac{2\pi G\rho_*}{3}\,(d^2 - 3R_*^2)$ for $d \leq R_*$, and $\Phi = -\frac{4\pi G\rho_*\,R_*^3}{3\,d}$ for $d > R_*$,

where $d$ is the distance from the field point to the center of the sphere, $d^2 = (R\cos\phi - x_0)^2 + (R\sin\phi - y_0)^2 + (z - z_0)^2$. The corresponding derivatives appearing in the gravitational force are:

$\frac{\partial\Phi}{\partial R} = \frac{4\pi G\rho_*}{3}\,(R - x_0\cos\phi - y_0\sin\phi)$,
$\frac{\partial\Phi}{\partial z} = \frac{4\pi G\rho_*}{3}\,(z - z_0)$,
$\frac{\partial\Phi}{\partial\phi} = \frac{4\pi G\rho_*}{3}\,\left[-R\,(R\cos\phi - x_0)\sin\phi + R\,(R\sin\phi - y_0)\cos\phi\right]$,

for $d \leq R_*$, and

$\frac{\partial\Phi}{\partial R} = \frac{4\pi G\rho_*}{3}\,\frac{R_*^3}{d^3}\,(R - x_0\cos\phi - y_0\sin\phi)$,
$\frac{\partial\Phi}{\partial z} = \frac{4\pi G\rho_*}{3}\,\frac{R_*^3}{d^3}\,(z - z_0)$,
$\frac{\partial\Phi}{\partial\phi} = \frac{4\pi G\rho_*}{3}\,\frac{R_*^3}{d^3}\,\left[-R\,(R\cos\phi - x_0)\sin\phi + R\,(R\sin\phi - y_0)\cos\phi\right]$,

for $d > R_*$.
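For reference, the analytic potential against which the numerical solution is compared can be coded in a few lines (Python; the values of $R_*$, $\rho_*$, and the center below are illustrative defaults, not the values used in the paper's tables):

    import numpy as np

    G = 1.0

    def sphere_potential(R, phi, z, R_star=0.5, rho_star=1.0,
                         x0=0.0, y0=0.0, z0=0.0):
        # analytic potential of a uniform-density sphere, evaluated at the
        # cylindrical point (R, phi, z)
        x, y = R * np.cos(phi), R * np.sin(phi)
        d = np.sqrt((x - x0)**2 + (y - y0)**2 + (z - z0)**2)
        phi_in = 2.0*np.pi*G*rho_star/3.0 * (d**2 - 3.0*R_star**2)
        phi_out = -4.0*np.pi*G*rho_star*R_star**3 / (3.0*np.maximum(d, 1e-300))
        return np.where(d <= R_star, phi_in, phi_out)

The force components follow by differentiating $d^2$ with respect to $R$, $z$, and $\phi$, exactly as in the expressions above, so the same routine can be extended to check all three derivatives cell by cell.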
in table [ tab : poisson_rel_err ] we present the average relative error in the potential and the average absolute error in the three derivatives for a uniform density sphere ( ) of radius placed at the origin and at for a representative set of grid resolutions . the grid extends from to in radius and from to in the vertical direction . similarly , in table [ tab : poisson_max_err ] we present the maximum errors for the same quantities . both tables share the same layout : the first column gives the sphere 's displacement from the rotation axis ( zero for the first four rows , the off - axis displacement for the last four ) ; the next three columns give the number of radial , vertical , and azimuthal zones , namely ( 66 , 66 , 64 ) , ( 66 , 66 , 128 ) , ( 130 , 130 , 128 ) , and ( 130 , 130 , 256 ) ; and the remaining four columns list the errors in the potential and in each of its three derivatives . the region near the surface of the sphere contains the largest errors in the potential solution . at the surface , the density falls discontinuously to zero and the slope of the solution changes abruptly . when placed at the origin , this high error region is resolved by a larger number of smaller - volume cells than when the sphere is placed off - axis in the grid . this results in a worse average error for the potential and its radial and vertical derivatives for the axisymmetric solution despite the fact that the maximal errors are generally smaller in this instance . for the case where the sphere is centered on the origin of the computational grid , the resulting potential is axisymmetric to machine accuracy . the average relative error in the potential and the average absolute error in the radial and vertical derivatives all decrease by a factor of about three as the radial and vertical resolutions are doubled . as expected for an axisymmetric mass distribution the quality of the solution is independent of the number of azimuthal zones . the maximum values of the relative error in the potential decrease by a factor of about three as well , and the maximum value in the absolute error of the radial and vertical derivatives is cut in half as the number of radial and vertical zones doubles . when the sphere is placed off axis , the convergence pattern is much more difficult to recognize . for the off - axis test at the highest resolution ( the same radial and azimuthal resolution that we currently use for binary evolutions ) , we are able to obtain a solution that is accurate to one part in , on average , for the potential . similarly , the finite - difference and analytical components of the derivatives of the potential agree to better than 4 decimal places on average . a stringent test of our coupled solution of poisson 's equation and the fluid dynamical equations , and one that may seem trivial at first mention , is how well we are able to maintain hydrostatic equilibrium for a simple system such as a spherical polytrope that is placed off axis in the grid . while our hydrodynamics implementation is conservative with respect to the advection of the fluid , there is no guarantee that the total momentum is conserved once the action of the lagrangian source terms is included . throughout a mass - transfer simulation , the bulk of the fluid should remain near hydrostatic equilibrium , and the correct response of both components to their changing mass can be limited by the accuracy to which force balance is maintained .
to perform this test , we have placed a spherical , polytrope of radius in a cylindrical grid of total radius 1.0 , but with a variety of different resolutions . the polytrope is centered at . in each case , the initial density distribution was generated with our scf code ( with only one star present and no frame rotation ) , and the initial velocities were zero everywhere . using our full gravitational hydrodynamics code , we then permitted the fluid system to evolve in time . over the course of the evolutions , each isolated star drifts outwards as if acted on by a constant force . this drift is shown in fig . [ fig : lonestar_com ] where we have plotted the location of the center of mass of the star as a function of time for grids of varying resolution . we have normalized the evolution time to the dynamical time as given by , for example , . specifically , , and for an polytrope with central density of unity the average density is . the size and rate of the drift decrease as the azimuthal resolution increases . as another measure of the quality of the steady state equilibrium from these spherical polytropes we show a modified virial error , in fig . [ fig : lonestar_virial ] . this differs from the definition given in eq . ( [ eq : virial - error ] ) in that we have neglected the kinetic energy term , . as can be seen in fig . [ fig : lonestar_virial_components ] where we show the log of for the highest resolution simulation ( computed with 130 radial and vertical zones by 256 azimuthal zones ) , some peaks in kinetic energy , which are noise , are of approximately the same size as the sum of and in spite of the fact that the kinetic energy is insignificant compared to either the thermal or gravitational energies . overall , the virial error decreases by a factor of approximately 6 from the lowest to highest resolution simulation . at the highest resolution presented , the virial error is and the polytrope oscillates with amplitude of approximately for 30 dynamical times . this shows that the isolated star remains in hydrostatic equilibrium to a very high degree of accuracy , even when placed off - axis in our computational grid . there is no significant improvement in the drift of the system center of mass when only the radial and vertical resolutions are increased . ( in fig . [ fig : lonestar_com ] , compare the curves for the simulations at resolutions of 66 radial , 66 vertical by 128 azimuthal zones and 130 radial , 130 vertical by 128 azimuthal zones . ) but there is an improvement in the virial error between these two simulations . this suggests that there are two limiting numerical effects in play . one dictates the resolution of the equilibrium state itself ; the other causes a displacement of that equilibrium state . the former effect converges with the finite - difference size isotropically , while the latter depends only on the azimuthal resolution .
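the drift diagnostic itself amounts to a density - weighted center - of - mass integral over the cylindrical grid , with time expressed in dynamical times . the sketch below assumes the common definition t_dyn = 1 / sqrt(g * rho_bar) in place of the paper's elided expression ; the array arguments are cell - centered values and volumes .

```python
import numpy as np

def center_of_mass(rho, r, phi, z, dvol):
    # density - weighted center of mass on a cylindrical grid
    m = np.sum(rho * dvol)
    x = np.sum(rho * r * np.cos(phi) * dvol) / m
    y = np.sum(rho * r * np.sin(phi) * dvol) / m
    zc = np.sum(rho * z * dvol) / m
    return np.array([x, y, zc])

def dynamical_time(rho_bar, G=1.0):
    # t_dyn = 1 / sqrt(g * rho_bar) is an assumed stand - in definition
    return 1.0 / np.sqrt(G * rho_bar)
```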
when trying to resolve a highly nonaxisymmetric object , such as an off - axis sphere , within a uniform cylindrical coordinate grid , different parts of the star are resolved to varying degrees and it is not surprising that the convergence of the numerical solution is not describable in simple terms . in this section we present results from two simulations of detached binaries that we have performed to ascertain the precision with which we can expect to carry out future simulations of semi - detached binary systems ( systems undergoing mass transfer ) . one binary is an equal mass system with identical components ( see model 5 in table [ tab : scf_models ] and figs . [ fig : scf_side ] [ fig : scf_top ] ; hereafter referred to as the eb system ) and the other system has a mass ratio ( see model 6 in table [ tab : scf_models ] and figs . [ fig : scf_side ] [ fig : scf_top ] ; hereafter referred to as the unequal binary or ub system ) . the eb system was constructed to resemble the single star used for the test of hydrostatic equilibrium in [ sec : equilibrium_test ] . this enables us to compare the systematic errors in the case of a binary system given the errors observed when only gravity , pressure and the curvature force came into play . each component of the eb system differs from the isolated , spherical star in that each is flattened by the synchronous rotation of the system and tidally distorted by its companion , but the components have a comparable size , in terms of grid cells , and the same central density and polytropic index as the isolated sphere . previous simulations of equal - mass barotropic stars have shown that it is important to conduct the evolutions in a frame of reference that renders the binary as close to static as possible in order to minimize the effects of numerical diffusion arising from the finite accuracy of eulerian advection schemes ( new & tohline 1997 ; swc ) . with this in mind , our eb and ub simulations have been conducted in a frame of reference rotating with the orbital angular velocity of the system , as obtained by our scf technique . in dealing with unequal - mass systems we have discovered another subtle , but important issue that should be addressed with care when `` transporting '' an initial hydrostatic model from the grid of the scf code into the grid of the hydrodynamics code . during each scf iteration , the system 's center of mass is not fixed to any location beyond the fact that , by symmetry , it must lie along the line of centers . in general , then , we must translate the density field as we introduce it into the hydrocode so that the system center of mass coincides with the -axis , which is taken to be the rotation axis for the hydrodynamic evolution . if we could perform this translation perfectly , all initial fluid velocities would be identically zero relative to the hydrodynamic reference frame . because of the inherent symmetry of an equal - mass binary system , this was in fact the case for our eb system by construction . for the ub system , however , the center of mass of our converged scf model was displaced by a small distance from the rotation axis . specifically , , which corresponded to only , where is the radial extent of each grid cell .
as we introduced the scf model into the hydrodynamical grid , we therefore also ascribed nonzero velocities as initial conditions according to the relation ( [ eq : com_vel ] ) . because the displacement was quite small for our ub system , the initial velocities prescribed through eq . ( [ eq : com_vel ] ) were also very small . nevertheless , it was necessary to include them in order to achieve the best possible steady - state configurations corresponding to the stars following circular orbits . this implies a uniform initial velocity for the system ( see further discussion below ) . after the binary models were introduced into the hydrodynamics code , both were evolved through more than five orbits . ( see the first row of table [ tab : systems ] , where the total evolution time for both simulations is tabulated in units of each system 's orbital period p. ) as is recorded in the last three rows of table [ tab : systems ] , the eb system was run on 64 nodes of a cray t3e 600 for a total of 173 wall - clock hours ( that is , the simulation required on average 2409 processor - hours per orbit ) and the ub system was run on 8 dual processor nodes of an ibm sp3 for a total of 265 wall - clock hours ( that is , the simulation required on average 819 processor - hours per orbit ) . many different diagnostic parameters were followed throughout both evolutions in order to assess the quality of the initial scf models and to determine with what accuracy the hydrodynamical equations were being integrated forward in time . in the following paragraphs , we present the time - evolutionary behavior of a number of these key physical parameters . table [ tab : systems ] ( columns : quantity , eb , ub ) reads , in part : evolution time in units of p & 5.314 & 5.178 ; machine & cray t3e 600 & ibm sp3 ; processors & 64 & 16 ; wall - clock time & 173 hours & 265 hours ; the intervening rows record the per - orbit rates quoted in the text below . throughout both evolutions , the individual stellar components were largely static and remained well within their respective roche lobes . in an effort to illustrate this , fig . [ fig : eb_vols ] shows as a function of time the computed roche lobe volume ( dashed curve ) and the volumes ( solid curves ) occupied by material more dense than , , , , and for one component of the eb system . ( for reference , the initial scf density fields have values of a few times at the edge of the stars ; see the isodensity contours drawn in figs . [ fig : scf_side ] and [ fig : scf_top ] . ) the same information is plotted in figs . [ fig : ub_1_vols ] and [ fig : ub_2_vols ] for the secondary and primary components , respectively , of the ub system . these figures illustrate that the rotationally flattened and tidally distorted models generated by our scf code exhibit excellent detailed force balance throughout their three - dimensional structures , and that there is an excellent match between the algorithmic expressions that determine an equilibrium state in the scf code and force balance in the hydrodynamics code .
in an effort to determine how well mass is conserved throughout an evolution for each star , individually , as well as for the system as a whole , we tracked three separate volume integrals over the mass density : , defined as the mass bound to the primary ; , defined as the mass bound to the secondary ; and , defined as the mass that lies outside of both stars but inside the boundaries of the computational grid . as is illustrated by frames 5 and 6 of figs . [ fig : scf_side ] and [ fig : scf_top ] , in the initial state it is easy to evaluate these three integrals because the edges of the two stars are well - defined . specifically , when normalized to each system 's total mass , , , and , where is the system mass ratio given in table 1 . but because the stars are being modeled on a discrete computational mesh that does not conform precisely to their shape , and because the acceleration of each fluid element in the computational mesh is being determined by finite - difference ( rather than continuous differential ) representations of gradients in the pressure and gravitational fields , as each system evolves hydrodynamically the surfaces of the stars become less sharply defined and some spreading of material inevitably occurs . ( in these benchmark evolutions , this is evidenced for example by the very small , but finite oscillations in the `` volumes '' occupied by the stars that are displayed in figs . [ fig : eb_vols ] - [ fig : ub_2_vols ] . ) in practice , then , during each evolution we determine whether material in each grid cell belongs to either star or the `` envelope '' by comparing the binding energy of the fluid in each cell to the average binding energy of the layer of cells at the surface of each star . in this context , we define the surface of each star to be the layer of cells where the mass density falls below in our normalized units , which corresponds to the lowest density level attained in the initial scf models . the mass of the envelope is dominated by material from the surface of the stars even though there is a minimum `` vacuum '' density level of enforced by the code to maintain numerical stability . the total mass of the vacuum material is over a million times smaller than the mass of either stellar component and does not impact the physics of these simulations . four curves are drawn ( each at two quite different scales ) in figs . [ fig : eb_mass ] and [ fig : ub_mass ] to document how well mass is conserved in the eb and ub simulations , respectively . the masses have all been normalized to the total system mass , so in fig . [ fig : eb_mass ] the mass of both the primary ( ) and the secondary ( ) stars is initially exactly ; the total binary mass is initially exactly ; and the `` envelope '' mass is initially exactly . in fig . [ fig : ub_mass ] , the total mass and the `` envelope '' mass also are initially and , respectively , but the mass of the primary initially is and the mass of the secondary initially is . plotted on a normal , linear scale , all four of these curves are perfectly flat in both figures . this demonstrates that the mass of both stars , as well as the aggregate system mass , is conserved to very high precision throughout the eb and ub simulations . again , this is evidence that the initial models were in excellent detailed force balance and the hydrodynamics code is evolving the systems forward in time in a physically realistic manner . although mass is conserved to very high precision , it is not absolutely constant throughout the evolutions .
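one plausible implementation of the bookkeeping described above is sketched below : each cell is assigned to the primary , the secondary , or the envelope by comparing its specific binding energy to the mean binding energy of the surface layer of the corresponding star . the sign convention ( more bound means lower , i.e. more negative , energy ) and the use of the lowest scf density as the surface threshold are assumptions of this sketch , not details taken from the paper .

```python
import numpy as np

def classify_cells(rho, e_bind_1, e_bind_2, surf_1, surf_2, rho_surf):
    # surf_1 , surf_2 : boolean masks marking the surface layer of each star ;
    # e_bind_* : specific binding energy of each cell relative to each star .
    thresh_1 = e_bind_1[surf_1].mean()
    thresh_2 = e_bind_2[surf_2].mean()
    bound_1 = (e_bind_1 < thresh_1) & (rho > rho_surf)
    bound_2 = (e_bind_2 < thresh_2) & (rho > rho_surf) & ~bound_1
    envelope = ~(bound_1 | bound_2)   # everything not bound to either star
    return bound_1, bound_2, envelope
```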
in the four insets of figs . [ fig : eb_mass ] and [ fig : ub_mass ] , we have magnified the vertical mass scale by roughly four orders of magnitude in order to show that there is a very tiny , but measurable , secular decrease in the total system mass and in the mass of both stellar components over the course of the simulations . in each inset , we plot the relevant mass minus its value in the initial state ( time ) , normalized to the total system mass . these inset plots show that the system mass decreases by approximately one part in over five orbits , that is , about per orbit , with the mass loss from each star accounting for roughly half this total . in rows of table [ tab : systems ] we have recorded for both evolutions more precise values of the fractional mass that is lost , on average , each orbit from the primary ( ) , the secondary ( ) , and the system as a whole ( ) . we have determined that this mass is very slowly lost as a result of the development of a small , but nonzero flow of low - density material off of the stars , through the envelope and , ultimately , off of the grid . ( after an initial drop , the envelope mass remains approximately constant , suggesting that this outward flow has settled into a nearly steady state . ) as is discussed more fully in 7 , below , this small but detectable rate of mass loss from detached equilibrium binaries imposes a straightforward limit on the mass - transfer rates that we will be able to reliably model in future simulations that involve dynamical mass transfer . during both simulations we also tracked as a function of time the position of the center of mass of each binary component and the position of the center of mass of the system as a whole . the equatorial - plane trajectories of these three centers of mass for the eb and ub evolutions are shown , respectively , in figs . [ fig : eb_com ] and [ fig : ub_com ] , as viewed from our computational reference frame , that is , from a frame rotating with the orbital angular velocity of the system , as determined for the initial state by the scf code . in the uppermost plot of each figure , which has been drawn at a scale ( ; ) to include the entire mass of the system , the three separate center of mass trajectories appear to be small dots with little or no discernible structure . ( when plotted in the inertial reference frame on this scale the trajectories of the two stars are indistinguishable from circles . ) this illustrates that , even after five orbits , the centers of mass of the two stars and of the system as a whole essentially have not moved from their initial positions . this provides additional strong confirmation that our scf code produces excellent initial states and that the hydrodynamical equations are being integrated forward in time in a physically realistic manner . in the bottom three plots of figs . [ fig : eb_com ] and [ fig : ub_com ] , we have magnified a small region around each of the center of mass trajectories , expanding the linear scale of the uppermost plot in each figure by approximately a factor of and , respectively . these magnified views reveal that , although it is very small , there is measurable motion of the centers of mass in both evolutions . in the bottom , left - hand plot we also have shown the size of our radial grid spacing , . this indicates the characteristic size of our discretization and emphasizes how small the motion of each center of mass is .
in the ub evolution , for example , the motion of all three centers has been confined within a single grid cell through five full orbits . furthermore , the smooth spiral trajectory of the ub system center of mass ( bottom , middle plot in fig . [ fig : ub_com ] ) has an understandable , physical origin . as viewed in the inertial frame , this particular trajectory appears as a straight line whose direction and magnitude are consistent with the overall system velocity prescribed as initial conditions from eq . ( [ eq : com_vel ] ) . in the eb evolution , due to the symmetry of the initial model , the drift of the system center of mass is extremely small , remaining unnoticeable even on the magnified plot . in the magnified plots , the trajectory of the center of mass of each individual star shows both a gradual drift in the -direction , and a small oscillatory motion in the -direction . the vertical drift is mostly an indication that the binary 's actual orbital frequency is slightly different from the value ( given by the scf code ) that we used for the rotation frequency of the computational grid . the oscillations in represent epicyclic motion and indicate that the binary orbit is not precisely circular . since both the drift and the epicyclic motion can be understood in physical terms , their small amplitudes tell us more about the quality of the initial model than about the limiting accuracy of our finite - difference scheme . unlike in the single star case presented in [ sec : equilibrium_test ] , there is no evidence of a systematic outwards force on either star in the ub or eb systems , despite the fact that the systems have evolved for the equivalent of approximately 90 dynamical times . it appears as though the introduction of a rotating frame of reference and the associated centrifugal potential and coriolis force has provided a feedback mechanism that acts to limit the systematic imbalance discussed previously . a plot of the binary separation as a function of time , as shown in figs . [ fig : eb_a ] and [ fig : ub_a ] for the eb and ub evolutions , respectively , provides another way to assess the global behavior of these systems . here , the separation is defined as the distance between the centers of mass of the two stars . notice that , on a linear scale that extends from to ( in units normalized to each system 's initial separation ) , the plot of is indistinguishable from a perfectly horizontal line . this illustrates that , to a very high degree of accuracy , these benchmark simulations of detached binary systems produce stable , circular orbits . again , though , if we examine these plots in finer detail , we see that both evolutions exhibit a very small but quantifiable departure from perfect circular orbital motion . for example , in the insets to figs . [ fig : eb_a ] and [ fig : ub_a ] , we have replotted with the vertical scale magnified by roughly a factor of . these insets show that in both evolutions there is a very slow , secular decrease in the orbital separation and , in addition , displays low - amplitude oscillations having a period approximately equal to one orbital period . the oscillations in arise from the same epicyclic motion that was seen in the plots ( figs . [ fig : eb_com ] and [ fig : ub_com ] ) of the center of mass motion of the individual stars , but the amplitude of this motion is easier to measure here .
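the separation diagnostic reduces to the distance between the two stellar centers of mass , and the epicyclic amplitude can be estimated once the slow secular trend is removed . the sketch below is one simple way to do both ; the linear detrending is an assumption of the sketch , chosen because the measured decay is slow and nearly steady .

```python
import numpy as np

def separation(com1, com2):
    # binary separation a(t) from time series of the two centers of mass
    return np.linalg.norm(com1 - com2, axis=-1)

def epicyclic_amplitude(t, a):
    # remove the slow secular decay with a linear fit , then measure the
    # peak - to - peak amplitude of the residual oscillation
    trend = np.polyval(np.polyfit(t, a, 1), t)
    resid = a - trend
    return 0.5 * (resid.max() - resid.min())
```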
in units of the initial orbital separation , the eb system has an epicyclic amplitude ; the ub system exhibits an epicyclic amplitude about half this size . the slow , secular decay of the orbits occurs at a rate per orbit in the eb system , and at a rate per orbit in the ub system . these orbital decay rates and epicyclic amplitudes have been recorded in the fifth and sixth rows of table [ tab : systems ] . finally , in figs . [ fig : eb_jz ] and [ fig : ub_jz ] we show the behavior as a function of time of the -component of each system 's total angular momentum . as was true with our plots of the orbital separation , on a linear scale that extends from to ( in units normalized to each system 's initial total angular momentum ) , the plot of is indistinguishable from a perfectly horizontal line . this illustrates that these benchmark simulations globally conserve angular momentum to a very high degree of accuracy . when we magnify the vertical scale by approximately a factor of , as has been done to produce the insets to figs . [ fig : eb_jz ] and [ fig : ub_jz ] , we see that angular momentum is not , in fact , perfectly conserved . evidently both systems gain angular momentum at a very slow rate ; in the eb simulation , per orbit , and in the ub simulation per orbit . these rates have been recorded in the seventh row of table [ tab : systems ] and will be referred to again in 7 , below , when we summarize the limiting accuracy with which we expect to be able to model physical mass - transfer events using our simulation tools . we should emphasize that the hydrodynamics code as described in 4 and utilized in these benchmark simulations has evolved through many stages from the version of the code that was used several years ago by to investigate the equal - mass , binary merger problem . a number of improvements were made in the code in order to bring it to its present level of performance . figure 20 is presented here in an effort to illustrate how certain key modifications in the code affected its performance . each row of frames in this figure shows results from an evolution of the same unequal - mass ( ub ) binary system that was used in our benchmark simulation , but as produced by four separate versions of the code . the curves drawn in the four frames on the left - hand side of fig . 20 show the same type of information as has been presented in fig . 10 for the benchmark ub evolution : four separate volumes for the secondary star ( solid curves ) and its roche volume ( dashed curve ) are plotted as a function of time , in units of the orbital period . the four frames on the right - hand side of fig . 20 show the same type of information as has been presented in the top - most frame of fig . 15 : center - of - mass trajectories of the two stars and of the system as a whole , as viewed from the rotating frame of reference . ( the bottom - most frames are taken from the benchmark ub simulation and therefore are drawn directly from figs . 10 and 15 . )
the results shown in the top - most frames of fig . 20 come from an early version of the code in which we replaced the gradient of the pressure with the gradient of the enthalpy . this ensured that the initial structure of each star , as determined by the scf code , was in good force balance after being introduced into the hydrocode . however , as the two figures from this evolution illustrate , we still observed a slow expansion of the secondary star ; the orbit itself developed a significant epicyclic amplitude ; and after about three orbits , the roche lobe was encroaching on the surface of the secondary . the second row of frames comes from a simulation in which the number of azimuthal zones was doubled from 128 to 256 zones over the full radians . this modification improved somewhat the mean motion of the centers of mass ( although it did not significantly reduce the amplitude of the epicyclic motion ) . most significantly , however , doubling the angular grid size improved the resolution and , hence , the determination of force balance in each star . as a result , expansion of the secondary star was noticeably reduced . the third row of frames shows that motion of the centers of mass was drastically reduced when we modified our algorithm to make the integration scheme more properly time - centered . this change did not noticeably reduce the rate of expansion of the secondary , but it did significantly reduce the amplitude of oscillations in the roche lobe volume . finally , by introducing artificial viscosity into the equations of motion in order to mediate the weak shocks at the surface of the stars ( which also involved a re - centering of all momentum densities to the cell locations specified in table 3 ) , the entire structure of both stars became much more robust . in particular , as the left - hand frame of the last row shows ( see also fig . 10 ) , this code modification completely eliminated the short timescale wiggles in the volumes of the secondary ; overall expansion of the secondary also was reduced to an imperceptible level . simultaneously , for the first time , we ascribed a small nonzero velocity to the initial state as given by eq . ( [ eq : com_vel ] ) . this change further reduced the motion of the centers of mass to the level illustrated by fig . 15 . it is reasonable to ask whether the three principal spurious effects that remain in our benchmark simulations ( the slow decay of the orbits , the slow gain of angular momentum , and the slow loss of mass from the stars ) are at least mutually consistent on physical grounds . for centrally condensed binaries , a point mass approximation ( the roche approximation ) is usually sufficient to discuss the orbital evolution . this approximation predicts a simple relationship between the changes of mass , angular momentum and binary separation , namely \frac{\delta j}{j } = \frac{\delta m_{1}}{m_{1 } } + \frac{\delta m_{2}}{m_{2 } } - \frac{1}{2 } \frac{\delta m}{m } + \frac{1}{2 } \frac{\delta a}{a } , where j = m_{1 } m_{2 } \left ( g a / m \right ) ^{1/2 } is the total center of mass angular momentum in the point mass approximation . numerical values for each of the terms in this expression can be obtained from the data shown in table 6 . in particular , we see that in both benchmark simulations the three mass terms approximately cancel each other out . but while the magnitude of is roughly the same as the magnitude of , their signs are different . that is , the angular momentum of the system is slowly increasing while the binary separation is slowly decreasing . this is clearly inconsistent with the expectations of the simple roche model .
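the consistency check just described is a one - line calculation once the per - orbit rates are in hand . the sketch below evaluates the residual of the point - mass relation ; the numerical rates in the example call are placeholders , not values from the paper .

```python
def roche_residual(dm1_m1, dm2_m2, dm_m, da_a, dj_j):
    # point - mass ( roche ) relation :
    # dj / j = dm1 / m1 + dm2 / m2 - dm / (2 m) + da / (2 a)
    predicted = dm1_m1 + dm2_m2 - 0.5 * dm_m + 0.5 * da_a
    return dj_j - predicted

# hypothetical per - orbit fractional changes , for illustration only
print(roche_residual(-1e-5, -1e-5, -2e-5, -3e-5, +2e-5))
```

a residual consistent with zero would indicate that the measured drifts are mutually consistent with a physical ( point - mass ) orbital evolution ; as the text notes , here the sign mismatch between the angular momentum and separation terms rules that out .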
a more accurate expression for the total angular momentum of the binary would be j = m_{1 } m_{2 } \left ( g a / m \right ) ^{1/2 } + i_{1 } \omega_{1 } + i_{2 } \omega_{2 } , where i_{1,2 } and \omega_{1,2 } are the moments of inertia and inertial frame angular velocities of the binary components , assuming they all rotate around the same -axis . even if one takes into account the contributions of spin angular momenta , the changes observed remain inconsistent and must therefore be attributed to spurious numerical effects at a level of per orbit arising from the inevitable error terms present in our finite - difference representation of the fluid equations . what we have attempted to do here is quantitatively document the magnitude of these numerical effects at the highest practical resolution possible at the present day for simulations of detached binaries where the character of the ideal solution is well understood beforehand . furthermore , we can not accurately predict the evolution of mass transferring binaries where the mass transfer rate per orbit is below this level using simulations at the resolution presented here . there are however a wide variety of systems ( the initial mass transfer event in an algol progenitor , or the onset of common envelope evolution in the progenitors of many types of binaries , or the formation of type ia supernovae for example ) that are expected to exceed our threshold resolution limit for mass transfer . at a sufficiently high mass - transfer rate , the mass transfer itself will drive the evolution of the orbital parameters and roche geometry at a rate higher than the numerical limits demonstrated here . in this paper we have presented a practical scf algorithm for constructing self - consistent polytropic binaries with unequal masses that satisfy the condition of hydrostatic equilibrium to a high degree of accuracy . this three - dimensional scf algorithm is based largely on the technique first described by , but to our knowledge this is the first time the technique has been used to generate unequal mass binary systems as input for a hydrodynamical simulation . our two benchmark simulations ( described in 6 ) clearly indicate that this scf algorithm can provide superb initial states for investigations into the dynamical stability of close binary systems . we emphasize that , in addition to generating models of close binary systems that are detached ( like the eb and ub systems constructed for our two benchmark simulations as shown above in figs . [ fig : scf_side ] and [ fig : scf_top ] ) , this technique also can be used to generate close binary systems that are semi - detached or , in the limit of identical components , in contact . we have also detailed our gravitational hydrodynamics code and presented results from key tests of the stability and accuracy of the hydrodynamics algorithm , the solution of poisson 's equation , and the coupled solution required to evolve an isolated , spherical polytrope that is placed off - axis in a cylindrical grid . from these test cases it is apparent that a number of subtle numerical issues arise when a highly nonaxisymmetric body is evolved via an explicit integration of finite - difference equations on a cylindrical computational grid . it also appears , however , that these effects can be made manageably small by increasing the resolution used to treat the system of interest . we have evolved two detached binary systems ( one with a mass ratio , the other with a mass ratio ) through more than five orbits in order to benchmark in detail the capabilities of our simulation tools .
even though the individual stellar components generated by our scf code are significantly rotationally flattened ( due to the synchronous rotation of the initial states ) and tidally distorted ( by their close binary companion ) ,these benchmark simulations show that the stars are in almost perfect hydrostatic equilibrium . throughout each binary evolution ,our hydrodynamics code conserves mass and angular momentum to a very high degree of precision ; as viewed from a frame of reference that is rotating with the initial orbital frequency of the binary , the centers of mass of the two stellar components and of the system as a whole remain virtually stationary ; and a plot of the binary separation as a function of time shows that the stellar orbits are almost indistinguishable from circles .this gives us considerable confidence that these numerical tools can be used to examine the stability of close binary systems against both tidal and mass - transfer instabilities , and to begin to accurately model mass transfer in semi - detached systems .as has been summarized in table [ tab : systems ] , from our benchmark evolutions we have been able to determine in quantitative terms the level of accuracy with which our hydrodynamical code conserves mass , conserves angular momentum , and is able to represent and maintain a circular binary orbit .mass is conserved to roughly per orbit ; angular momentum is conserved to a level of - per orbit ; the binary separation remains constant to a few parts in per orbit ; and each orbit exhibits an epicyclic amplitude ( measured relative to the orbital separation ) of - .we are unaware of any other group that is attempting to study the onset of mass - transfer instabilities in unequal - mass binaries with a gravitational hydrodynamics code , like ours , that fully resolves both stellar components .hence , there are no published numbers against which to compare ours for the ub evolution .however , we can fairly compare the results of our eb evolution against the recent study published by swc of equal - mass close binary systems .their fig .10 illustrates that , after following one stable binary system through approximately orbits ( we assume , based on the swc discussion , that ) , they have been able to conserve angular momentum to a level of about per orbit . and their fig . 14 shows four stable orbits with epicyclic amplitudes ( measured relative to the binary separation ) .we conclude that , at least in these two respects , our simulations improve on the swc models by roughly one order of magnitude .swc do not comment on their level of mass conservation ; and , because of the visible epicyclic motions in their fig .14 , it is difficult to ascertain to what degree the binary separation either decreases or increases with time over the course of their evolutions . 
in both of our evolutions , however , the center - of - mass motion of our stars appears to be significantly less than the center - of - mass motion depicted for swc s preferred integration scheme in the top - left panel of their fig .the small , but measurable changes in mass , angular momentum , and binary separation documented here in table [ tab : systems ] set limits on the types of mass - transfer events that we will be able to model with confidence using our present simulation tools .for example , if we were to try to model an instability that leads to a flow through the binary s l1 lagrange point with a mass - transfer rate lower than one part in per orbit , the physical exchange of material between the binary components would be swamped by the unphysical process that is causing our stars to lose mass to the `` envelope '' at a rate of one to two parts in per orbit .if the depth of contact between the roche lobe and the surface of the donor star is not sufficient , epicyclic motion in the orbit will tend to shut off the mass - transfer during part of each orbit .also , a drift in the center of mass of the system can impose a limit on the length of time that the binary can be evolved before the motion of the binary through the grid becomes problematic .we will have to contend with all of these issues as we move to the next level of our investigation and introduce a semi - detached system from our scf code into our hydrodynamical code .we expect nevertheless to find a wide range of interesting binary systems whose dynamical evolution can be simulated in a fully self - consistent fashion through a reasonably large number of orbits using the tools that have been described in this paper .as we have documented in table [ tab : systems ] , the calculation of one orbit takes about 33 wall - clock hours when utilizing 64 processors of the cray t3e 600 , and using 16 processors of the newer ibm sp-3 , the calculation of one orbit takes about 51 hours .the computational workload of a mass - transfer simulation is therefore within the reach of current , state of the art , parallel computers given the linear scaling of our gravitational hydrodynamics code with the number of processors even at a resolution greater than presented here .we note that the amount of work performed can be reduced significantly if need be by , for example , freezing the gravitational potential until the mass distribution has changed significantly as done by .the solution of poisson s equation represents about a quarter of the total execution time .we have been able to estimate the mass transfer rate required to bring the simulation above the level of the noise observed in our benchmark simulations .we find that this value is of the donor s initial mass over an orbital period . 
as discussed in [ sec : stability ] , the mass transfer rate should scale as a high power of the degree of over - contact ( as the cube for an polytrope ) ; furthermore , for a case where , that is , the donor is initially the more massive star , the degree of over - contact will naturally increase with time as the star expands and its roche lobe shrinks .provided that such a binary system can begin mass - transfer , the amplitude of the mass transfer rate should inevitably reach values higher than indicated above .since motion of the center of mass of the binary system has been confined to a region well within a single computational grid cell even after orbits , we are confident that future evolutions can be followed with confidence through at least orbits , given sufficient computing resources . as discussed in the introduction of this paper, we understand that the mass transfer rates considered here are much larger than those found in what are considered typical examples of interacting binaries .the methods presented here are not applicable to the stable mass transfer observed in cataclysmic variables or other long lived systems , but should serve very well to investigate stages of evolution of their progenitors and transient events such as the onset of dynamical mass transfer and its stability .this work has been performed with support from the national science foundation through grants ast-9720771 , ast-9528424 , ast-9987344 , and dge-9355007 and from the national aeronautics and space administration through the astrophysics theory program grant nag5 8497 .this research has been supported , in part , by grants of high - performance computing time at the national partnership for advanced computing infrastructure ( npaci ) facilities at the san diego supercomputer center and by louisiana state university s high performance computing facilities .we would also like to acknowledge the many useful comments made by the referee that led to a significant improvement in the contents of this paper .cohl , h. s. , sun , x .- h . , & tohline , j. e. 1997 , in proceedings of the 8th siam conference on parallel processing for scientific computing , ed .m. heath , v. torczon , g. asffalk , p. e. bjrstad , a. h. karp , c. h. koebel , v. kumar , r. f. lucas , l. t. watson , & d. e. womble ( philadelphia : siam )
we describe computational tools that have been developed to simulate dynamical mass transfer in semi - detached , polytropic binaries that are initially executing synchronous rotation upon circular orbits . initial equilibrium models are generated with a self - consistent field algorithm ; models are then evolved in time with a parallel , explicit , eulerian hydrodynamics code with no assumptions made about the symmetry of the system . poisson s equation is solved along with the equations of ideal fluid mechanics to allow us to treat the nonlinear tidal distortion of the components in a fully self - consistent manner . we present results from several standard numerical experiments that have been conducted to assess the general viability and validity of our tools , and from benchmark simulations that follow the evolution of two detached systems through five full orbits ( up to approximately 90 stellar dynamical times ) . these benchmark runs allow us to gauge the level of quantitative accuracy with which simulations of semi - detached systems can be performed using presently available computing resources . we find that we should be able to resolve mass transfer at levels per orbit through approximately 20 orbits with each orbit taking about 30 hours of computing time on parallel computing platforms .
edge sparsity in an undirected graphical model ( markov random field ) encodes conditional independence via graph separation . essentially , graphical models detangle the global interconnections between the random variables of a joint distribution into localized neighborhoods . any distribution consistent with the graphical model must abide by these simplifying constraints . thus , the graph learning problem is equivalent to a model class selection problem . let be an undirected graph on vertices and edges . let denote a random vector with distribution , where variable is associated to vertex . graphical model selection attempts to find the simplest graph , often dubbed the _ concentration graph _ , consistent with the underlying distribution . recent work in graphical model selection exploits the local structure of the underlying distribution to derive consistent neighborhoods for each random variable . in terms of graphs , the neighborhood set of a vertex is . more importantly , for undirected graphical models , is the markov blanket of , where is rendered conditionally independent of all other variables given : . to estimate the neighborhood conditional probabilities , these methods employ pseudo - likelihood measures , specifically regularized regression : the lasso [ [ tibsh1 ] ] . compared to other penalty based regularization schemes , the penalty enjoys the dual properties of convexity and sparseness by straddling the boundary between the two domains . by treating as the response variable and as the predictors in a generalized linear model , the regularization penalty can recover an appropriately sparse representation of . reconstructing the full edge set of the graph from the estimated neighborhoods allows for two alternate definitions : include edge ( i , j ) when j belongs to the estimated neighborhood of i and i belongs to the estimated neighborhood of j , which we call and ; or include it when j belongs to the estimated neighborhood of i or i belongs to the estimated neighborhood of j , which we call or . ravikumar et al . [ [ ravi1 ] ] , [ [ ravi2 ] ] consider the problem of estimating the graph structure associated with a gaussian markov random field and the ising model . their main result shows that under certain assumptions , the problem of neighborhood selection can be accurately estimated with a sample size of for high dimensional regimes where is the max degree , and ( ) . note that the number of samples needed is further improved in [ [ ravi1 ] ] to for gmrf model selection . meinshausen et al . [ [ graphlasso ] ] also examine the gmrf case , and provide an asymptotic analysis of consistency under relatively mild conditions along with an alternate penalty . we build upon this previous work by extending the penalized neighborhood estimation framework to use the elastic net [ [ elasticnet ] ] penalty and expanding the scope of graphical model recovery to include the multinomial discrete case . while the lasso performs beautifully in many settings , it has its drawbacks . in particular , when there are more variables than samples , the lasso can only select at most as many variables as there are samples . moreover , for highly correlated covariates , the lasso tends to select a single variable to represent the entire group .
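a minimal sketch of lasso - based neighborhood selection and the and / or edge rules follows ; the regularization strength lam and the support tolerance are free parameters of the sketch , not values from the paper .

```python
import numpy as np
from sklearn.linear_model import Lasso

def neighborhoods(X, lam=0.1):
    # regress each variable on all others with an l1 penalty ;
    # the support of the coefficient vector is the estimated neighborhood
    n, p = X.shape
    nbrs = []
    for i in range(p):
        others = np.delete(np.arange(p), i)
        coef = Lasso(alpha=lam).fit(X[:, others], X[:, i]).coef_
        nbrs.append(set(others[np.abs(coef) > 1e-8]))
    return nbrs

def edges(nbrs, rule="or"):
    # combine per - node neighborhoods with the and / or rules
    p = len(nbrs)
    E = set()
    for i in range(p):
        for j in range(i + 1, p):
            in_i, in_j = (j in nbrs[i]), (i in nbrs[j])
            if (rule == "and" and in_i and in_j) or \
               (rule == "or" and (in_i or in_j)):
                E.add((i, j))
    return E
```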
by incorporating the penalty term , the elastic net is able to retain the lasso 's sparsity while selecting highly correlated variables together . additionally , we introduce a novel scheme for augmenting neighborhood recovery by pooling pair - wise neighborhood union estimates . the idea is to infer the joint neighborhood of a pair of nodes ( not necessarily adjacent ) , and obtain the neighborhood of node by combining all the information given by the pairs of nodes containing node . the frequency with which nodes appear in a specially designed neighbor list for node gives us a weighted ranking of nodes in terms of their neighbor likelihood . this method can be combined with the usual neighborhood recovery to extract more information from a possibly insufficient set of samples . undirected graphical models encode the factorization of potential functions over cliques , which , in their most basic form , are comprised of 1st and 2nd order interactions : functions that map node and edge values to the real line , which for max - entropy exponential family distributions can be written as . note that represents the normalization constant or partition function . for continuous random variables , the most common exponential family mrf representation is the multivariate gaussian with sufficient statistics . the symmetric pairwise parameter matrix , known as the inverse covariance matrix of x , denotes the partial correlations between a pair of nodes , given the remaining nodes . every edge will have a non - zero entry in and each row of specifies the graph neighborhood . conversely , the sparsity of reveals the conditional independencies of the graph where . conditional neighborhood expectations can be represented by a linear model : . in the binary case , the mrf distribution can be described using an ising model where , and . the full probability distribution takes the following form , which omits first order terms : the conditional neighborhood probability is defined as : . taking the hessian of the local conditional probability gives the fisher information matrix for . much like partial correlations in the gaussian concentration matrix , zero entries in the fisher information matrix indicate conditional independence . extending the discrete parameterization to variables with states requires an expansion in terms where the edge potential functions now describe a set of parameterized indicator variables representing the possible value pairs between and . as described in [ [ expfamily ] ] , this particular representation is overcomplete since the indicator functions satisfy a variety of linear relationships . however , despite the lack of a guaranteed unique solution , the factorization can still satisfy the desired neighborhood recovery criterion . a simplified variant of the general discrete parameterization is the potts model where each is defined by two indicator functions denoting node agreement and disagreement for arbitrary . we observe that in the ising model , the form of may be recast as . note that the potts model only requires a single parameter and generalizes the ising model to states . to extend neighborhood estimation from the binary ising model case to a discrete parameterization , we note that the neighborhood conditional probability takes the form , which is equivalent , after a variable transformation from a discrete feature space to indicators , to the classical multinomial logistic regression equation , with an additional singleton indicator variable always set to 1 .
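for the binary ising case , the neighborhood estimation just described reduces to an l1 - penalized logistic regression of each spin on the remaining spins . the sketch below uses scikit - learn rather than glmnet ; the inverse regularization strength c and the support tolerance are free parameters of the sketch .

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ising_neighborhood(X, i, C=0.5):
    # X has entries in { -1 , +1 } ; assumes both spin values of x_i occur
    # in the sample , otherwise the logistic fit is degenerate
    others = np.delete(np.arange(X.shape[1]), i)
    y = (X[:, i] > 0).astype(int)
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    clf.fit(X[:, others], y)
    return set(others[np.abs(clf.coef_[0]) > 1e-8])
```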
with the conditional probability equations in hand , we can approach the problem of neighborhood estimation as a generalized linear regression . building on previous model selection work using the penalty , we extend the approach to use the combined penalty approach of the elastic net [ [ elasticnet ] ] , which for the basic linear model takes the form : \hat{\beta } = \arg \min_{\beta } \| y - x \beta \|_{2}^{2 } + \lambda_{1 } \| \beta \|_{1 } + \lambda_{2 } \| \beta \|_{2}^{2 } . the elastic net performance surpasses the penalty under noisy conditions and where groups of highly correlated variables exist in the graph . however , as noted by bunea [ [ bunea ] ] , the additional smoothing penalty should be small relative to the term to preserve sparsity . many authors have extended the elastic net penalty to additional regression models , covering a broad swath of the generalized linear realm . for the linear gaussian case , we use the original elastic net package of zou and hastie [ [ elasticnet ] ] . for binary and multinomial regression we rely on the glmnet library of friedman et al . [ [ fried1 ] ] . to evaluate the elastic net for gaussian mrf model selection , we generate the distribution inverse covariance matrix in the following way . we set whenever , and then perturb the diagonal of the matrix , with large enough to force all eigenvalues of to be positive . we experimentally choose , starting from and increasing it in increments of until we get a value that makes positive definite . in the case of the binary and discrete models , we require a more complicated procedure based on mcmc sampling . however , given the size of our graphs , the direct gibbs sampling approach proved to be computationally expensive because of its long mixing times and slow mode exploration when the temperature ( the s in our case ) is low . to overcome this difficulty , we turn to the swendsen - wang algorithm . this method generates an augmented graph , where and contains iff and are incident . given this formulation , is bipartite between the nodes and nodes . thus in the joint distribution of , the markov blanket of will only consist of elements in and vice versa . the random variables assigned to can only take the values 0 and 1 . we define the conditional probabilities of and as : * is given by considering the nodes s.t . is incident with and . if and 0 otherwise . * is such that all nodes in the same component ( in the graph when we consider only the edges s.t . ) have the same value , and each component takes each of the possible values with equal probability . essentially , the algorithm generates mcmc samples by alternately updating the values of and using gibbs sampling . although the augmentation substantially increases the number of vertices , the algorithm creates a markov chain that explores the space of outcomes much more rapidly . for the details of the swendsen - wang algorithm , we refer the reader to [ [ swendsenwang ] ] and [ [ mackay ] ] . by introducing an penalty term to the regression model , the maximization problem in the elastic net setup becomes with . we experimented with several values of for different numbers of samples , and observed the type i and type ii probability errors .
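a compact sketch of one swendsen - wang sweep for a ferromagnetic ising model is given below . it assumes a uniform positive coupling theta , spins in { -1 , +1 } stored in an array indexed by the integer node labels of a networkx graph , and the standard bond activation probability 1 - exp(-2 theta) between equal spins ; these conventions are assumptions of the sketch rather than details from the paper .

```python
import numpy as np
import networkx as nx

def sw_sweep(G, spins, theta, rng):
    # bond step : activate an edge only between equal spins ,
    # with probability 1 - exp(-2 * theta)
    p_bond = 1.0 - np.exp(-2.0 * theta)
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    for u, v in G.edges():
        if spins[u] == spins[v] and rng.random() < p_bond:
            H.add_edge(u, v)
    # cluster step : flip each connected component to a uniform random spin
    for comp in nx.connected_components(H):
        s = rng.choice([-1, 1])
        for u in comp:
            spins[u] = s
    return spins
```

samples are then obtained by iterating sw_sweep and recording the spin configuration every few sweeps , which mixes far faster than single - site gibbs updates at low temperature .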
in figure [ surfaceandor ] we show 3d plots of the total error rates as a function of the number of samples and the parameter , for the and and or neighborhood estimation . several observations can be made from these plots . first note that larger values of perform worse than smaller values , no matter what the sample size is . the best recovery rates are achieved when is very small . also note that the and neighborhood selection performs much worse than its or counterpart when is large . figure [ p40errvsn ] plots the error rates versus the sample size , for a fixed . note that the chosen graph has vertices and is of maximum degree , and it can not be recovered without errors even when the number of samples scales as . the first family of graphs we tested were the star graphs and the more general clique of star graphs . we denote by the clique - star graph obtained from copies of a star graph by connecting all star centers among them , or in other words we add different neighbors to each vertex in a clique of size . is just the standard star graph . note that the maximum degree in is and the total number of vertices is . for a graph let denote the edge density of the graph , i.e. . the reason we introduce this parameter in our simulations is to observe the impact of edge density on the recovery rates when the maximum degree and the number of samples are fixed . as our simulations show , recovering the graph structure is significantly harder when the graph has a higher density but the same fixed maximum degree . to test this , we generate a star graph with maximum degree and edge density , and then start adding edges among the lower degree neighbors to obtain a new graph with new edge density . note that this edge density dependence can be equivalently formulated in terms of the average degree of a graph by the formula . the error rates are averaged over runs for a fixed graph on different samples of size . additionally , was chosen by discretizing the interval into equally sized subintervals . figure [ star2rhos ] plots the error recovery rates for with ( top ) and for the graph obtained by adding edges to until ( bottom ) , with both graphs having the same maximum degree . note that for these graphs and as seen in the top plot of figure [ star2rhos ] , a bit over samples are enough to bring the error rates to zero . however , the bottom plot shows that even with times more samples , we can only recover with a error rate . we repeat the above experiment for the clique - star graph with and edge density , and the graph obtained by adding edges to while keeping the maximum degree unchanged .
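the clique - star construction can be generated in a few lines ; the function name and the parameters k ( number of star centers ) and s ( leaves per center ) below are our own naming for the construction described above , not notation from the paper .

```python
import networkx as nx

def clique_star(k, s):
    # k star centers joined into a clique , each with s private leaves ;
    # the maximum degree is (k - 1) + s
    G = nx.complete_graph(k)
    nxt = k
    for c in range(k):
        for _ in range(s):
            G.add_edge(c, nxt)   # attach a fresh leaf to center c
            nxt += 1
    return G
```

the densified variants used in the experiments can then be produced by adding random edges among the leaves of one star , which raises the edge density while leaving the maximum degree unchanged .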
in this case , , and we successfully recover only when the sample size exceeds , due to higher edge density ( plot omitted ) . figure [ wstar2rhos ] shows the error rates when we increase the edge density to , which emphasizes the increase in sample size required for graph recovery . note that in both simulations the penalty was rarely of any help , and in most cases achieved the best error rates . ( caption : error rates ( -axis ) versus the parameter ( -axis ) for the graphs with ( top ) and with ( bottom ) . ) a second type of graph we considered was the community graph , denoted by , which consists of groups of highly connected nodes , where each group has size , so . two vertices within the same group ( or community ) are connected with probability , while nodes that belong to two different communities share an edge with a smaller probability . these community structures are a common feature of complex networks , and have the property that nodes within a group are much more connected to each other than to the rest of the network ( for ) . in application , these communities may represent groups of related individuals in social networks , topically related web pages or biochemical pathways , and thus their identification is of central importance . to completely understand the modular structure of such graphs , one should be able to both detect overlapping communities and make meaningful statements about their hierarchies [ [ ohcom ] ] . figure [ com32 ] plots the error rates in the recovery of a graph with , , and . however , again , even with 9000 samples , the error rate is still over 10 percent . when few samples are available , achieves the best error rates , but as we increase the number of samples we notice that the elastic net method with performs slightly better than when . while this improvement is not significant , it hints that the additional regularization may produce better results in some cases . figures [ ising64 ] and [ potts64 ] depict the performance of the elastic net neighborhood estimator over a range of discrete mrf graphs . the graphs evaluated for the ising and potts model are random graphs with bounded maximum degree . all experiments were run over a sample range covering the edge recovery threshold and values ranging from 0.5 to 1 , where . results from multiple trials were averaged for the ising model . due to the computational load of the _ glmnet _ multinomial regression for large data sizes , only single run results are shown for the potts model . smaller values are omitted from the plot in order to limit the scale and improve clarity . unless otherwise noted , the plots represent and neighborhood unions , with or neighborhood unions showing similar performance . as seen in the plots , the elastic net neighborhood estimator recovers the underlying graph with high probability under corresponding sample sizes , validating our formulation of the discrete model neighborhood estimation as a multinomial logistic regression . similar to the gaussian case , the effect of the penalty ( while is set to ) tends to benefit neighborhood recovery mostly at small sample values and when is close to 1 . oversized penalties introduce an inordinate number of noise edges , but small penalties reduce the chance of missing edges with weak correlation which the penalty rejects .
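the community graphs above are planted partition models , and the precision / recall numbers discussed next follow the usual definitions ; the sketch below shows both , with placeholder parameter values .

```python
import networkx as nx

# planted partition : l groups of size k , intra - group edge probability
# p_in , inter - group probability p_out < p_in ( values are placeholders )
G = nx.planted_partition_graph(4, 8, 0.5, 0.05, seed=0)
E_true = {tuple(sorted(e)) for e in G.edges()}

def precision_recall(E_hat, E_true):
    # precision : fraction of recovered edges that are real ;
    # recall : fraction of real edges that are recovered
    tp = len(E_hat & E_true)
    precision = tp / len(E_hat) if E_hat else 1.0
    recall = tp / len(E_true) if E_true else 1.0
    return precision, recall
```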
when the ridge part of the penalty is non-zero, the minimization objective is strictly convex and allows the estimator to select additional nodes that exhibit highly correlated behavior by effectively averaging their contribution. the mixing parameter provides, in essence, a trade-off between precision and recall, as can be seen in figure [ isingpotts64rp ], especially when the number of samples is small, which is the case in many high-dimensional applications. while the pure-lasso curve consistently provides the highest graph-recovery precision over all sample sizes, the actual number of recovered edges may be extremely limited due to the sparsity constraint. for the ising model graph depicted in the figure, the smallest sample size of 1200 with the pure lasso gives a precision of 0.88 with a recall of 0.7; by introducing a ridge term, the precision drops to 0.8 but the recall improves to 0.79, which is better than a 1-to-1 trade-off. as expected, with large sample sizes the benefit of the ridge term diminishes, as shown by the nearly vertical slope of the large-sample curves. the potts model recall-precision plot displays the same trend, albeit in a compressed fashion, since the neighborhood estimator is able to recover the graph at a smaller sample size, rendering the larger-sample curves uninformative. from these results we can say that the additional presence of a ridge penalty may yield substantial benefits when the goal is to extract relevant correlation information from small sample sizes. the technique introduced in this section is useful in neighborhood reconstruction especially for regular graphs, for graphs with a small gap between the known maximum and minimum degree, or when we would like to obtain a likelihood ranking of the possible neighbors of a fixed node. the idea is to infer neighborhoods not only for one vertex at a time, but for pairs of vertices, which may or may not be adjacent. let the estimated neighborhood of a single node be given by the optimization in equation ( [ enetgmrfeq ] ), and define the pair neighborhood of two nodes as the union of their neighborhoods, minus the edge between them if it exists. we now define an optimization problem similar to the one in equation ( [ enetgmrfeq ] ): after grouping the terms of the two chosen variables and approximating their individual regressions by a joint one, we approximate the objective ( [ tij ] ) by a regression of the sum of the two variables against the remaining variables; in the last step we make the corresponding change of variable, and the resulting coefficients are the regression coefficients of the sum of the two variables against the rest. we then define the estimated neighborhood of a pair of vertices, not necessarily adjacent, to be the support of this regression, which is an estimate of the pair neighborhood. note that a vertex in this set may be adjacent to either of the two chosen nodes, or to both. for a fixed node, we obtain its neighborhood in the following way. we form the list obtained by concatenating, with repetitions, the estimated pair neighborhoods of that node with every other node. this list includes all true neighbors of the node, each appearing once in every pair neighborhood and hence with maximal multiplicity. in the absence of errors, and with the exception described in the next paragraph, we can correctly recover the neighborhood of the node by picking the most frequent elements from the list, i.e. all nodes which appear the maximal number of times.
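a minimal sketch of the pair-neighborhood idea, assuming the same elastic-net regression as above; the function names and regularization values are illustrative, not the authors' code.

```python
import numpy as np
from collections import Counter
from sklearn.linear_model import ElasticNet

def pair_neighborhood(X, i, j, alpha=0.05, l1_ratio=0.9):
    """estimate the pair neighborhood of nodes i and j: regress the sum
    x_i + x_j on all remaining variables and keep the support."""
    p = X.shape[1]
    rest = [k for k in range(p) if k not in (i, j)]
    y = X[:, i] + X[:, j]
    coef = ElasticNet(alpha=alpha, l1_ratio=l1_ratio).fit(X[:, rest], y).coef_
    return {rest[k] for k in range(len(rest)) if abs(coef[k]) > 1e-8}

def neighbor_frequencies(X, i):
    """concatenate the pair neighborhoods of node i with every other node
    and count how often each candidate appears; true neighbors of i should
    appear close to the maximal number of times."""
    p = X.shape[1]
    counts = Counter()
    for j in range(p):
        if j != i:
            counts.update(pair_neighborhood(X, i, j))
    return counts
```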
in the case of errors, we obtain an estimate for the neighborhood of a node by selecting the most frequent elements in its concatenated list. also, if there are no errors, the list contains every other node of the graph at least once: this is obvious for a neighbor, as explained above, and for a non-neighbor we can pick one of its neighbors (one exists since we assume the graph is connected) and note that the non-neighbor must then appear in the corresponding pair neighborhood. note, however, that a non-neighbor can appear with maximal multiplicity if it is connected to all the other vertices, in which case we (incorrectly) add it to the estimated neighborhood of the fixed node; symmetrically, the fixed node can then appear with maximal multiplicity in that node's list, and we (incorrectly) mark the two as neighbors of each other. in other words, the two nodes appear in each other's neighborhood lists, rendering our approach incorrect, whenever both are connected to all other vertices in the graph; for random graphs this scenario occurs with very low probability. as shown in figure [ neighborhistogram ], true neighbors of a node occur most frequently in its list: when ordering vertices by their frequency, most of the true neighbors appear at the top. errors do occur, however, and false neighbors sometimes precede true neighbors. note also that there are cases where the single-neighborhood estimation performs much worse and omits many true neighbors. we have seen how histograms based on neighborhoods of pairs are useful in determining a likelihood ranking of the possible neighbors of a given node: the most frequent elements in the list are the most likely neighbors. the problem now becomes how to select a threshold value for each list. if the threshold is too small then true neighbors might be left out, and if it is too big then we will introduce false neighbors. we make an additional observation that improves the accuracy of the ordering obtained from the lists. form the matrix whose rows are indexed by nodes and whose entries record the frequency of each candidate node in the corresponding list. to incorporate the symmetry between neighboring nodes (if one is a neighbor of the other, then the converse also holds), we build the symmetric matrix obtained by summing the frequency matrix with its transpose. the intuition here is to average out the votes received by two nodes in each other's rows: one node may not rank highly in the other's list, but the converse ranking may be high, and this helps in identifying pairs that are indeed neighbors. one can think of this method as averaging out the bad information (noisy edges) and boosting up the good information (correct edges). finally, another alternative is to first row-normalize the frequency matrix and then construct the symmetric matrix described above: dividing each row by its largest entry yields a row-stochastic matrix, whose rows may be thought of as ranked probabilities of the possible neighbors of each node, and we then symmetrize as before. as mentioned earlier, the main problem is finding a per-row threshold that separates the neighbors from the non-neighbors. if we know a priori the degree of each node, then one way to pick the neighbors is to select the correspondingly many most frequent entries in each row. alternatively, if we know that the graph is almost regular, in other words that the degree distribution has very little variance around the average degree, then we can again select the top entries of each row accordingly.
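the vote-matrix construction just described could look as follows; this is a sketch, and the matrix symbol, the normalization choice and the numerical guard are our own.

```python
import numpy as np

def symmetric_vote_matrix(freq_counts, p, row_normalize=False):
    """assemble the frequency matrix F (F[i, k] = how often node k appears
    in node i's concatenated pair-neighborhood list), optionally divide
    each row by its largest entry, then symmetrize so that i's vote for k
    and k's vote for i are combined."""
    F = np.zeros((p, p))
    for i, counts in enumerate(freq_counts):
        for k, c in counts.items():
            F[i, k] = c
    if row_normalize:
        F = F / np.maximum(F.max(axis=1, keepdims=True), 1e-12)
    return F + F.T
```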
another way one can choose a threshold is to plot the frequency values in decreasing order and look for a big jump. this idea is illustrated in the right plot of figure [ neighborhistogram ]: note how the frequency values decrease suddenly within two steps, from 26 (for node 6) to 22 (for node 15) and then to 18 (for node 35). such large sudden drops in the ordered list of frequency values hint at a good threshold point. table [ oraclecomparison ] shows the results of an experiment illustrating the above ideas when we have oracle information about the degree of each node. we observe that the symmetric matrix works better than the plain frequency matrix, and that row normalization improves the results further. note also that the type i and type ii errors are now more balanced in both the "and" and the "or" case: when doing the single-neighborhood estimation, in the "and" case almost all the errors came from missing edges, while in the "or" scenario almost all errors were false edges. picking the threshold allows a trade-off between these two types of error.

[table [ oraclecomparison ]: type i, type ii and total error rates under oracle degree information, for the "and" (first three columns) and "or" (last three columns) combination rules.]

    method                       and: i   and: ii   and: total   or: i    or: ii   or: total
    frequency matrix             0.04     0.58      0.63         0.12     0.36     0.48
    symmetric matrix             0.04     0.27      0.31         0.32     0.09     0.41
    row-normalized matrix        0.07     0.17      0.24         0.15     0.05     0.20
    row-normalized symmetric     0.05     0.14      0.19         0.135    0.045    0.18

it would be interesting to compare the results of our new pair-neighborhood recovery to the original single-neighborhood method when both techniques take the knowledge of the node degrees into account. one way to incorporate this information into the single-neighborhood method is to avoid 10-fold cross validation and replace it with the following procedure. we have seen that when regressing a variable against all other variables, the elastic net method does not return a single coefficient vector corresponding to one optimal penalty value, but rather computes an entire matrix (or sequence) in which each row corresponds to a penalty value at which an additional variable turns on. instead of picking the empirically optimal row by 10-fold cross validation, we can simply pick the first row which has as many nonzero entries as the degree of the node. one can also interpret the order in which the remaining variables turn on as a way of ranking the potential neighbors: a variable which turns on sooner along the elastic net path is more likely to be a neighbor than a node which activates later. this ranking can also be obtained by our pair-wise neighborhood union estimate, but we produce more than just a simple ranking: the frequency of each node in the concatenated list combines additional information, and one can interpret this ordering as a weighted ranking of the possible neighbors.
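the "big jump" heuristic for picking a per-row threshold can be sketched in a few lines; the tie-breaking and the use of the single largest gap are our own simplifying choices.

```python
import numpy as np

def gap_threshold(row):
    """pick one node's neighborhood by the largest drop in its sorted
    vote values, keeping everything above the biggest gap."""
    vals = np.sort(row)[::-1]
    gaps = vals[:-1] - vals[1:]
    k = int(np.argmax(gaps)) + 1
    return np.argsort(row)[::-1][:k]       # indices of the selected neighbors
```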
this additional information may capture longer-range correlations that elude single-neighborhood estimation, as suggested in a recent paper of bento and montanari [difficultgraph]. in this paper we considered the problem of estimating the graph structure associated with a gmrf, the ising model and the potts model. building on previous work using lasso-penalized neighborhood estimation, we experimented with an additional ridge penalty term. simulations across the three models show that a small but non-negligible ridge term improves the edge recovery rates when the sample size is small, by trading precision for recall; we also note that in the gmrf model the addition of the ridge term does not have much influence on the recovery rates. numerical simulations confirm our hypothesis that lower bounds on the number of samples needed for recovery should be a function not only of the maximum degree and the number of nodes, but also of the edge density (or, equivalently, the average degree of the graph). we also introduced a new method for improving neighborhood recovery by considering pair-wise neighborhood unions, which produce a ranking of candidate nodes with respect to their likelihood of being adjacent to a given node. this can be thought of as a way to incorporate local information (rankings) at each node into a globally consistent estimate of the edge structure of the graph.
structure learning in random fields has attracted considerable attention due to its difficulty and its importance in areas such as remote sensing, computational biology, natural language processing, protein networks, and social network analysis. we consider the problem of estimating the probabilistic graph structure associated with a gaussian markov random field (gmrf), the ising model and the potts model, by extending previous work on lasso-regularized neighborhood estimation to include the elastic net penalty. additionally, we show numerical evidence that the edge density plays a role in the graph recovery process. finally, we introduce a novel method for augmenting neighborhood estimation by leveraging pair-wise neighborhood union estimates.
the phenomenon of beats arises from the superposition of two periodic excitations of slightly different frequencies. this superposition results in a cyclic behaviour arising from the constructive and destructive interference between the two excitations. the resultant waveform is characterized by a frequency that equals the average of the frequencies of the two waves, whereas its amplitude is modulated by an envelope whose frequency, known as the _beat frequency_, is the difference between the frequencies of the two waves. this phenomenon has been widely studied in linear systems. recently, however, attention has been drawn to chaotic and hyper-chaotic beats in nonlinear systems. chaotic or hyper-chaotic beats arise due to irregular collapses and revivals in the amplitudes of the variables of nonlinear systems, as a result of the interaction between two different excitations (either self-oscillations or external driving forces). though they can be identified by visual inspection, the presence of positive lyapunov exponent(s) is considered the true characterizing feature of chaotic or hyper-chaotic beats. chaotic and hyper-chaotic beats were identified for the first time in a system of coupled kerr oscillators and coupled duffing oscillators with small nonlinearities and strong external pumping [grygiel szlachetka, 2002]. chaotic beats were also reported in coupled non-autonomous chua's circuits [cafagna grassi, 2004]. weakly chaotic and hyper-chaotic beats were reported in individual and coupled nonlinear optical subsystems, respectively, describing second harmonic generation (shg) of light [sliwa et al., 2008]. in all these coupled systems [grygiel szlachetka, 2002; cafagna grassi, 2004; sliwa et al., 2008], the occurrence of beats is attributed to the interaction between the self-oscillations or driven oscillations of each of the coupled subsystems. using extensive pspice simulations, the occurrence of beats in an individual driven chua's circuit with two external excitations has also been reported [cafagna grassi, 2006a; 2006b], and chaotic beats have been found by the same authors in the individual murali-lakshmanan-chua (mlc) circuit [cafagna grassi, 2005]. in these cases of single systems [cafagna grassi, 2005; 2006a; 2006b], two different external driving forces with slightly differing frequencies were the cause of the beats. on the other hand, from a different perspective, leon chua in the year 1971 introduced a fourth circuit element, namely the memristor [chua, 1971; chua kang, 1976], purely from theoretical arguments, and pointed out the interesting possibilities of including it in electronic circuits. almost four decades later, strukov et al. of hewlett-packard laboratories designed the first approximate physical example of a memristor [strukov et al., 2008] in a solid-state nano-scale system in which electronic and ionic transports are coupled under an external bias voltage. as the characteristic of memristors is inherently nonlinear and is unique in the sense that no combination of nonlinear resistive, capacitive and inductive components can duplicate their circuit properties, memristors have generated considerable excitement among circuit theorists. further, they provide new circuit functions such as electronic resistance switching at extremely high two-terminal device densities [strukov et al., 2008].
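the classical beat structure described in the opening paragraph can be verified numerically in a few lines; the frequency values below are chosen purely for illustration.

```python
import numpy as np

# two unit-amplitude tones at nearby frequencies f1 and f2
t = np.linspace(0.0, 10.0, 20000)
f1, f2 = 5.0, 5.5
s = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# by the identity sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2), the sum
# is a carrier at the average frequency (f1 + f2)/2 modulated by an
# envelope |2 cos(pi (f1 - f2) t)| whose beat frequency is |f1 - f2|
carrier = 2 * np.sin(np.pi * (f1 + f2) * t) * np.cos(np.pi * (f1 - f2) * t)
assert np.allclose(s, carrier)
```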
very recently, itoh and chua [2008] have investigated the dynamics of nonlinear electronic circuits of different orders, including the autonomous chua's oscillator and the canonical chua's oscillator, obtained by replacing the chua's diode (a well studied and familiar nonlinear device) with an _active_ or a _passive_ memristor. hence it is of high current interest to study the effect of replacing the chua's diode by a memristor in a driven chua's circuit [murali lakshmanan, 1990; 1991], and to investigate the new dynamical phenomena which arise in the corresponding circuit. in the present paper, we report the occurrence of chaotic beats in the driven chua's circuit using a flux-controlled active memristor as its nonlinearity. the importance of this circuit is that it employs just a single driving force, and makes use of the chaotically time-varying resistive property and the switching characteristic of the memristor to generate beats. the plan of the paper is as follows. in sec. 2 memristors, their characteristic and their behaviours are briefly described. in sec. 3.1 the standard driven chua's circuit is introduced, while in sec. 3.2 the driven chua's circuit using a memristor as the nonlinear element is described. in sec. 4 the chaotic beats phenomenon observed numerically in this circuit is shown with the aid of time series plots, phase portraits, poincaré maps, lyapunov exponents and power spectral calculations. in sec. 5 an experimentally realisable circuit for the memristor is proposed; using multisim modelling, a driven chua's circuit is constructed and the chaotic beats observed, and with the help of the grapher facility of the software package, time series plots, phase portraits and power spectra which are qualitatively equivalent to the numerical observations are presented. in sec. 6 the mechanism for the occurrence of chaotic beats is discussed, while sec. 7 presents a brief conclusion. from a circuit-theoretic point of view, leon chua [1971] noted that there could possibly be six mathematical relations connecting pairs of the four fundamental variables, namely charge, current, flux and voltage. one of these relations (the charge is the time integral of the current) follows from the definitions of the two variables. another relation (the flux is the time integral of the electromotive force, or voltage) is given by faraday's law of induction. three other relations are given by the axiomatic definitions of the three classical two-terminal circuit elements: the resistor (defined by the relationship between voltage and current), the inductor (defined by the relationship between flux and current) and the capacitor (defined by the relationship between charge and voltage). one relationship, that between charge and flux, remains undefined. from logical and axiomatic points of view, as well as for the sake of completeness, the necessity for the existence of a fourth basic two-terminal circuit element was postulated by chua [1971]. this _missing element_, purported to have a functional relationship connecting charge and flux, is named the memristor. the hypothetical element is endowed with both a resistive property and a memory-like nature, and hence the name.
in principle, memristors with almost any characteristic can be synthesised [chua, 1971]. although realized only recently, and that too in the nano-scale range, memristors have been successfully used as a conceptual tool for analysis in signal processing and nonlinear circuits, see for example [tour he, 2008]. in particular, their nonlinear characteristic provides designers with new circuit functions like electronic resistance switching at extremely high two-terminal densities [strukov et al., 2008]. we will now study the dynamics of the driven memristive chua's circuit to bring out the concept of chaotic beats in it; for this purpose we first introduce the standard driven chua's circuit. the driven chua's circuit is a fourth-order non-autonomous circuit first introduced by murali and lakshmanan in the year 1990 [murali lakshmanan, 1990]. it is found to exhibit a large variety of bifurcations such as period adding, quasi-periodicity, intermittency, equal periodic bifurcations, re-emergence of double hook and double scroll attractors, hysteresis and coexistence of multiple attractors, besides the standard bifurcations. its dynamics under a sinusoidal excitation has been extensively studied in a series of works [murali lakshmanan, 1990; 1991; 1992a; 1992b; 1993]. due to its simplicity and its rich content of nonlinear dynamical phenomena, the driven chua's circuit continues to evoke renewed interest among researchers in nonlinear electronics [anishchenko et al., 1993; zhu liu, 1997; elwakil, 2002; srinivasan et al., 2009]. [figure 2: (a) the memductance as a function of the flux across the memristor and (b) the corresponding characteristic curve of the memristor in the flux-charge plane.] a large number of autonomous nonlinear electronic circuits using memristors as active nonlinear elements have been studied very recently by itoh and chua [itoh chua, 2008], and the conditions for the occurrence of chaos in them were mathematically analysed and numerically simulated. muthuswamy has proposed a memristor having a cubic nonlinearity [muthuswamy, 2009a; 2009b]; using it, a simple three-element autonomous chua's circuit was designed and its chaotic behaviour studied using matlab simulations as well as real-time experiments. in this paper, we modify the driven chua's circuit [murali lakshmanan, 1990] mentioned above by removing an inductance and replacing the chua's diode with a three-segment _piecewise-linear_ flux-controlled active memristor [itoh chua, 2008] as its nonlinearity. the circuit is given in fig.
1. an active flux-controlled memristor has the charge through it as some function of the flux across it. the functional relationship between the flux and the charge is assumed to be the same expression as given in [itoh chua, 2008], namely

$$ q(\varphi) = b\varphi + \tfrac{1}{2}(a - b)\left(|\varphi + 1| - |\varphi - 1|\right), $$

where a and b are the slopes of the inner and outer segments of the characteristic curve, and a negative conductance is connected in parallel to the flux-controlled memristor to make it satisfy the condition for activity. the memristor employed here is an active device in the sense that it does not draw power from the circuit of which it is a part; in fact, it supplies power to it. it satisfies the criterion for activity [chua, 1971], namely that the instantaneous power $p(t) = w(\varphi)\,v^2(t)$ dissipated across the memristor can become negative, where $w(\varphi)$ is the memductance of the memristor, so called because it has the units of conductance. it is given by

$$ w(\varphi) = \frac{dq(\varphi)}{d\varphi} = \begin{cases} a, & |\varphi| < 1, \\ b, & |\varphi| > 1, \end{cases} $$

with a and b as defined earlier. the memductance of the memristor thus takes on two particular values, as shown in fig. 2(a), depending on the value of the flux across it. the higher memductance value can be referred to as the on state and the lower memductance value as the off state. obviously, as the flux across the memristor changes, the memristor switches or toggles between these two states. this switching action can be exploited profitably in utilising the memristor as a desirable element in the modelling of nonlinear circuits. the characteristic curve in the flux-charge plane which causes this memristive switching is shown in fig. 2(b). following itoh chua [2008], the circuit equations for the memristive driven chua's circuit of fig. 1 are written down as equations (4); normalizing equations (4) using the same rescaling parameters as those used in [itoh chua, 2008] yields the dimensionless system (6), where the dot stands for differentiation with respect to t, and the piecewise-linear function describing the memristor retains the form given above, now in the normalized variables (equation (7)). the above circuit, represented by the normalized equations (6), though simple in nature, generates beats due to the slight tuning/detuning of the frequencies of the external driving excitation and of the switching of the memristor between the on and off memristive states. keeping the circuit parameters fixed and changing the amplitude and frequency of the external signal suitably in equations (6), chaotic beats are found to occur in this circuit. this can be seen clearly in figs. 3: while fig. 3(a) shows a sampled stretch of the time series of a voltage variable, depicting chaotic modulation in amplitude, fig. 3(b) shows an enlarged portion of the same, illustrating the presence of a central fundamental frequency component. in fig. 3(c), the envelope of the amplitudes of the same variable over the same stretch as in fig. 3(b), isolated by the poincaré mapping technique, is captured; it shows that the variation of the amplitudes occurs at a smaller frequency, the beat frequency.
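for concreteness, a sketch of the two-state memductance in the itoh-chua piecewise-linear form just quoted; the breakpoints at plus/minus one flux unit follow the equation above, while the slope values are illustrative assumptions, not the circuit parameters of the paper.

```python
import numpy as np

def q(phi, a=0.3, b=0.8):
    """charge-flux curve of the flux-controlled memristor in the
    piecewise-linear form quoted above (a, b illustrative)."""
    return b * phi + 0.5 * (a - b) * (np.abs(phi + 1) - np.abs(phi - 1))

def W(phi, a=0.3, b=0.8):
    """memductance W = dq/dphi: slope a on the inner segment |phi| < 1
    and slope b outside, i.e. the two-level switching described above."""
    return np.where(np.abs(phi) < 1, a, b)
```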
to find the frequencies of the circuit variables, as well as to determine the frequency with which the memristor changes from its on memductive state to its off state and vice versa, power spectra are obtained, as shown in figs. 4, for four different cases: (i) the external driving sinusoidal signal, (ii) the memristor switching when driven by the self-oscillations of the circuit, (iii) the normalized voltage of the driven memristive chua's oscillator and (iv) the envelope of the chaotically modulated signal. the various signals whose power spectra are obtained are shown as insets in figs. 4. [figure 5: phase portraits in the normalized voltage versus normalized current plane representing (a) the growth of the variables up to 90 time units, (b) the shrinkage of the variables from 90 to 100 time units, (c) the growth from 100 to 115 time units, (d) the shrinkage from 115 to 125 time units and (e) the growth and shrinkage from 125 to 150 time units; in each panel the corresponding poincaré map is depicted as inset (i), while the time series plot representing the growth or shrinkage of the variable is shown as inset (ii).] the power spectrum in fig. 4(a) is that of the sinusoidal forcing; evidently it shows a single peak corresponding to the driving frequency. the chua's circuit, by virtue of being a nonlinear oscillatory circuit, generates chaotic self-oscillations at a frequency determined by the dynamics of the circuit, even in the absence of a driving force. hence the power spectrum of the memristor switching, seen in fig. 4(b), shows a broad-band spectrum with a fundamental frequency component; it is this fundamental frequency that determines the switching of the memristor between its memductive states, and it is identified as the characteristic memristor frequency. when an external driving force is added, the dynamics of the circuit changes as a result of the interaction between the self-oscillations and the driving signal. as expected, this interaction results in some sort of control or suppression of chaos, which causes the memristor to lower its switching frequency to a value less than its characteristic frequency. this altered frequency component, which we call the new memristor frequency, is identified from the power spectrum in fig. 4(c), together with the driving frequency, which appears where it must. the central frequency of the chaotically modulated signal is obviously the average of the new memristor and external driving frequencies, and is likewise identified from fig. 4(c). the power spectrum of the envelope of the chaotically modulated voltage variable is shown in fig. 4(d). the frequency of this envelope is often called the _beat frequency_; for a damped forced oscillator system it is half the difference between the two frequencies involved, namely the new memristor and external frequencies. [figure 6: phase portraits in (a) the normalized current versus normalized voltage plane, (b) the normalized voltage versus normalized flux plane and (c) the normalized current versus normalized flux plane;
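the frequency identifications described above can be reproduced numerically along the following lines; note that a hilbert-transform envelope is used here as a stand-in for the authors' poincaré-map extraction, and the modulated signal y and sampling rate fs are assumed inputs.

```python
import numpy as np
from scipy.signal import periodogram, hilbert

def dominant_frequency(x, fs):
    """locate the strongest spectral peak of a signal, skipping the
    dc bin; a stand-in for reading the fundamental off figs. 4."""
    f, P = periodogram(x, fs=fs)
    return f[np.argmax(P[1:]) + 1]

def envelope(x):
    """amplitude envelope via the analytic signal (hilbert transform),
    an alternative to the poincaré extraction used in the text."""
    return np.abs(hilbert(x))

# with y the chaotically modulated signal sampled at rate fs, the central
# frequency is dominant_frequency(y, fs), and the beat frequency can be
# read from dominant_frequency(envelope(y) - envelope(y).mean(), fs)
```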
as in fig. 5, the corresponding poincaré maps are depicted as insets (i), while the time series plots of the respective variables are shown as insets (ii).] to confirm the chaotic amplitude modulation, the lyapunov exponents of the system are computed using the well-known wolf algorithm [wolf et al., 1985]. for the chosen forcing amplitude and frequency, chaotic beats are observed with a positive largest lyapunov exponent. it is to be noted that the lyapunov values obtained in this case are much smaller than those obtained for an autonomous chua's circuit using a memristor [itoh chua, 2008], or for a strongly frequency-detuned driven memristive chua's circuit, in which the beats phenomenon does not arise at all due to the large frequency mismatch. this lowering of the lyapunov exponents denotes a drop in the chaoticity of the circuit, caused by the control of chaos effected by the introduction of the external driving force on the autonomous chua's circuit. these values, though small, agree well with the lyapunov exponents obtained in earlier works on chua's circuits employing the ordinary chua's diode in which chaotic modulation was observed, namely the two coupled autonomous chua's circuits [cafagna grassi, 2004] and the driven chua's circuit [cafagna grassi, 2006a; 2006b]. while two periodic driving forces were employed to generate beats in the individual circuits, the natural autonomous oscillations of the subsystems of the two coupled autonomous chua's circuits were used to obtain the same in the latter case. it is pertinent to note here that the present circuit is a single four-dimensional non-autonomous circuit (two dimensions fewer than the six-dimensional non-autonomous coupled chua's circuit) and generates chaotic beats using a single periodic driving force, unlike the two driving forces employed in earlier works; further, the number of dimensionless parameters that tune the behaviour is also smaller. the phase portraits taken over different time intervals of the evolution clearly show the growth and shrinkage in the amplitudes, or the revivals and collapses of the system variables, as the dynamics evolves, resulting in the occurrence of beats. in figs. 5, one finds the successive expansions and contractions of the attractor in the phase plane as time elapses; in figs. 6, the corresponding variations of the attractor amplitudes in the three phase planes are shown, with the poincaré maps and the time series plots shown as insets (i) and (ii) in the respective phase portraits. in all of these cases, the state variables expand up to 90 time units and then shrink up to 100 time units; they undergo further growth and decay in amplitude over successive intervals of time. this process of growth and decay in amplitudes continues ad infinitum and is governed by the dynamics of the system. [figure 8: (a) the memductance as a function of the flux across the memristor in the multisim model, obtained by plotting the output of the integrator versus that of a divider circuit module whose inputs are the current through and the voltage across the memristor; (b) the characteristic of the memristor in the voltage-current plane, showing a pinched hysteretic curve, a characteristic feature of memristors. both graphs are obtained from the multisim grapher facility. the memristor parameters are as given earlier.]
[figure 10: (a) the time series of the voltage across the capacitor, showing the presence of amplitude modulation and chaotic beats; (b) an enlarged portion of the same.] next, in this section, we present our study of the experimental confirmation of the above numerical results through multisim modelling. a memristor having a smooth third-order polynomial relationship between flux and charge has been designed by muthuswamy [2009b], based on the cubic nonlinear resistive circuit designed by zhong [1994]. in this paper, however, we propose a circuit model for a three-segment piecewise-linear flux-controlled active memristor. a memristor is a device whose resistive or conductive state depends on the magnitude of the flux across it, and by faraday's law the flux through a memristor is the time integral of the electromotive force across it. using this simple logic, an analog electronic implementation of the memristor can be designed; the block diagram illustrating this principle is given in fig. 7. this model is based on the time-varying resistor (tvr) circuit proposed by nishio et al. [1993]. here a linear resistance and a negative impedance converter (nic) are switched on and off alternately, based on the output pulse of a comparator. the comparator compares the flux through the memristor (that is, the integral of the voltage across it) against two reference levels, called the breakdown points. for flux values lying within the upper and lower breakdown points, the negative conductance alone is included in the circuit; for flux values exceeding the breakdown points, the parallel combination of the linear resistance and the negative conductance is included. the flux is obtained by an integrator circuit. by this action, the functional relationship between flux and charge given in equation (1) is realized. [figure 11: the phase portrait of the multisim circuit.] [figure 12: power spectra of (a) the voltage across the capacitor and (b) the voltage across the memristor; from these the external frequency, the memristor frequency and the central frequency are identified.] this proposed circuit is implemented using multisim modelling and is shown in fig. 9, with the values of the linear resistances and of the other parameters chosen to realize the two segment slopes; for the multisim switch module, suitable on-state and off-state resistances and a threshold voltage are set. when an external sinusoidal source is applied, the flux across the memristor changes, causing the memductance state to switch between a low off state and a high on state, as shown in fig. 8(a). the flux is obtained as the output of the integrator circuit together with the divider module facility of multisim: if the inputs to the divider module are the voltage across and the current through the memristor, an inverse measure of the memductance is obtained. the characteristic of the memristor in the voltage-current plane is shown in fig. 8(b); it is clearly a pinched hysteretic curve, wherein the current falls to zero whenever the voltage across the memristor becomes zero, which is a characteristic feature of memristors. [figure 13: (a) the change in flux across the memristor under a linear sinusoidal driving signal (shown as inset) and the consequent switching between the higher memductance on state and the lower memductance off state, with threshold levels at the breakdown flux values; (b) the characteristic of the memristor showing switching between the bi-stable memductance states in the normalized voltage versus current plane,
with the memristor switching as a function of time shown as inset; (c) the change in flux across the memristor arising from the self-oscillations of the autonomous chua's circuit and the consequent memristor switching, with the driving normalized voltage shown as inset; (d) the change in flux across the memristor due to the forced oscillations of the driven chua's circuit and the consequent memristor switching at an altered frequency, again with the driving normalized voltage shown as inset.] the driven chua's circuit is investigated using the proposed multisim model of the memristor; the circuit implementation is shown in fig. 9, with the parameter values of the circuit fixed as stated there and the memristive parameters as given earlier. here, for the sake of experimental convenience, the parameters chosen are slightly different from the numerical values, yet the observed behaviours are found to be qualitatively the same as those obtained through numerical simulations. for this choice of parameters, chaotic beats are observed when the forcing amplitude is varied keeping the frequency of the external forcing constant, or vice versa. the time series of the voltage across the capacitor, showing chaotic variations in amplitude, is given in fig. 10(a), and an extended region of the same in fig. 10(b). the phase portrait is shown in fig. 11. to obtain these, the frequency and amplitude of the external sinusoidal forcing are held fixed. the power spectra of the capacitor voltage and of the voltage across the memristor are shown in figs. 12(a) and 12(b) respectively; from these the external frequency, the new memristor frequency and the central frequency are identified, and the central frequency is, as expected, the average of the other two. [figure 14: (a) the multisim memristor switching between the bi-stable memductance states and (b) the corresponding characteristic of the memristor; (c) the change in flux across the memristor for parameter values far away from those causing beats, and the consequent memristor switching; (d) the change in flux across the memristor at the incidence of beats, and the consequent switching.] the mechanism for the generation of beats is obviously the switching of the memristor. essentially, a memristor is a device that works under alternating current (a.c.) conditions, in which the applied voltage varies sinusoidally with time. as the polarity of this voltage changes with time, the flux across the memristor also changes, causing it to switch reversibly between a less conductive off state (low memductance) and a more conductive on state (high memductance) [tour he, 2008]. this is shown in fig. 13(a): the lower memductance level is the off state and the higher level is the on state. the characteristic of the memristor (in the normalized variables) shown in fig. 13(b) makes one perceive vividly the transitions between the bi-stable memductive states. under a linear sinusoidal drive the memristor just acts as a linear time-varying resistor (ltvr), see for example [chua et al., 1987; nishio mori, 1993].
this switching of the memristor takes place at a characteristic time period, or frequency. however, when the memristor is driven by chaotic signals, either through a self-oscillatory mechanism or through an external generator of chaotic signals, it switches chaotically between its memductive states; the memristor then acts as a chaotically time-varying resistor (ctvr). this chaotic switching is shown in figs. 13(c) and 13(d), where the memristor is driven by the voltage across the capacitance of the autonomous chua's circuit or of the driven chua's circuit, of both of which it is an integral part as their nonlinear element. the switchings of the memristor observed from the multisim model are qualitatively equivalent to figs. 13(a-d) and are shown in figs. 14(a-d); the unwanted glitches seen in figs. 14(a-d) arise from mismatch in the delay times of the circuit components. [figure 15: probability distributions of the switching times of the driven memristive chua's circuit; the corresponding frequencies are calculated as reciprocals of these switching times, and the insets are blown-up portions.] from the normalized eqs. (6) and (7), for zero amplitude of the driving force the equilibrium set of the system corresponds to the flux axis. evaluating the jacobian matrix at this equilibrium and forming the characteristic equation, the latter factorizes, and the natural frequency of the autonomous circuit is evaluated from the imaginary parts of the resulting eigenvalues. numerically, the frequency of the autonomous circuit is found to vary over a range depending on the values of the circuit parameters. the breakpoints in the characteristic of the memristor shown in fig. 2(b) can be considered as the memristor switching thresholds: when the flux across the memristor exceeds these threshold values, switching of the memristor between its memductive on and off states occurs. let t_on and t_off refer to the durations of time the memristor resides in the on state and the off state, respectively, before switching. then the total time taken by the memristor to complete one full switching process is t_on + t_off, and the switching frequency of the memristor is two times the reciprocal of this total switching time. if the memristor switching frequency is equal to the external driving frequency, the memristor switching operation is said to be synchronous; if it is double, or a multiple, of the external frequency, the switching operation is said to be subharmonic. [figure: probability distribution of the switching times of the forced oscillations of the driven chua's circuit.] let us now consider a linear sinusoidal excitation applied to the memristor. if the flux across the memristor arising from this sinusoidal excitation were to exceed both switching thresholds, then for each external driving cycle the memristor would complete two full switching cycles, and the memristor operation would be subharmonic with a frequency equal to twice the external driving frequency. however, in the present case the flux across the memristor is the time integral of the voltage across it, so that it varies cosinusoidally; this cosinusoidal variation causes it to exceed only one of the two memristor switching thresholds, namely +1 flux unit. consequently, only one memristor switching occurs for each cycle of the sinusoidal forcing, and hence the memristor switching operation is _synchronous_.
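a sketch of the residence-time bookkeeping just described, assuming a simulated flux series phi sampled at times t and a single threshold at one flux unit; labelling the inner segment as the on state is an illustrative choice.

```python
import numpy as np

def residence_times(phi, t, threshold=1.0):
    """durations spent in the on and off states, taken as the intervals
    between crossings of the flux threshold; per the text, the switching
    frequency is then 2 / (t_on + t_off)."""
    state = (np.abs(phi) < threshold).astype(int)   # 1 = inner segment ("on")
    idx = np.flatnonzero(np.diff(state)) + 1        # indices of state changes
    durations = np.diff(t[idx])                     # time between crossings
    if state[idx[0]] == 1:
        on, off = durations[0::2], durations[1::2]
    else:
        on, off = durations[1::2], durations[0::2]
    return on, off
```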
if we plot the probability distribution of the switching times of the memristor driven by this linear sinusoidal forcing, we find a single probability peak which corresponds closely to the period of the linear excitation; this is shown in fig. 15(a). if the memristor is connected in parallel to the capacitance of the autonomous chua's circuit, it is driven by the self-oscillatory part of the circuit, whose frequency varies over a range. the switching of the memristor, because of the inherent chaotic nature of the self-oscillations of the autonomous circuit, becomes haphazard, proving that the memristor now acts as a chaotically time-varying resistor (ctvr); we get a number of closely spaced switching-time probability peaks, as shown in fig. 15(b), and the weighted average of these peaks gives a switching time which translates into the characteristic switching frequency. when an external driving force is added to the circuit, the memristor is driven by the forced oscillations of the circuit; as the forced oscillations are also chaotic, the memristor switching is necessarily chaotic, and the memristor still acts as a ctvr. the probability distribution of the memristor switching times shown in fig. 15(c) now exhibits a prominent peak at a switching time which corresponds to a lower memristor switching frequency: this is the new switching frequency that the memristor takes on as a result of the apparent control of chaos effected in the circuit by the introduction of the external force. at these particular frequencies, the external excitation and the memristive-switching-aided forced oscillations of the circuit undergo constructive and destructive interference, resulting in the modulation of amplitudes and frequencies which we call beats. varying the external frequency away from this chosen value disrupts the interference, causing the modulation phenomenon to wane and eventually disappear. the average of the external driving frequency and the new memristor frequency gives the central frequency of the modulated signal: if we plot the switching-time probabilities for the positive and negative cycle variations of the modulated variable, we get a prominent probability peak at a switching time corresponding to this central frequency, as shown in fig. 15(d). the beat frequency is calculated as half the difference of the said two frequencies. the central and beat frequencies obtained from the power spectral calculations as well as from these probabilistic considerations are shown in table 1; both agree well qualitatively with the theoretical results, and the small differences in their numerical values may be explained by the chaotic nature of the system as well as the large number of frequencies involved. the two-dimensional residence-time plots of the memristor for the on and off memductive states are shown in fig. 16. when a linear force drives the memristor, the switching (between positive and negative half cycles of the sinusoidal force) is regular, as seen in fig. 16(a); in this case there are only two distinct values for the on and off residence times. when the oscillatory part of the autonomous chua's circuit drives it, because of the inherent chaotic nature of the circuit the memristor also switches irregularly, and the scatter plot of the residence times is erratic and random; this is evident in fig. 16(b). when a driving force is added, both the external oscillations and the driven oscillations of the circuit interact with each other.
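continuing the previous sketch, the switching-time probability peaks of figs. 15 can be approximated by a simple histogram; the synthetic flux series, the bin count and the peak-picking rule below are illustrative choices.

```python
import numpy as np

# synthetic stand-in for a flux waveform that crosses the threshold
t = np.linspace(0.0, 200.0, 40000)
phi = 1.2 * np.sin(0.5 * t) + 0.3 * np.sin(1.7 * t)

on, off = residence_times(phi, t)              # from the sketch above
cycles = on[: len(off)] + off[: len(on)]       # full on + off switching times
probs, edges = np.histogram(cycles, bins=50, density=True)
k = np.argmax(probs)
peak_time = 0.5 * (edges[k] + edges[k + 1])    # dominant switching time
f_switch = 2.0 / peak_time                     # frequency as defined in the text
```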
in this interaction, a control of chaos in the circuit is effected by the external driving force. as the chaoticity is reduced, the circuit tends to approach regular behaviour, resulting in the occurrence of beats. however, as the dynamics of the circuit is still chaotic, the plot of the memristor switching times also remains chaotic, but with a lesser degree of chaoticity or irregularity; this is also the reason for the low values of the lyapunov exponents obtained. this weakly chaotic switching is illustrated in fig. 16(c), and the scatter plot of the switching times (between positive and negative cyclic variations) of the modulated variable is shown in fig. 16(d).

in this paper the presence of the chaotic beats phenomenon in a driven memristive chua's circuit has been reported. the phenomenon has been observed and verified using numerical simulations in the form of time series plots, phase portraits and poincaré maps; the chaotic nature of the modulation in amplitudes has been characterized with power spectral density calculations and lyapunov exponents. using multisim modelling, an analog electronic circuit realising the action of a three-segment flux-controlled memristor has been proposed, and the presence of chaotic beats in the driven memristive chua's circuit using this memristor has been verified. further, based on probabilistic considerations, the mechanism for the occurrence of beats has been found to be the interaction of the external forcing with the dynamics-based chaotically time-varying resistive property and the chaotic switching of the memristor between its memristive states. the use of the memristor in the driven chua's circuit has enhanced its dynamics considerably: many features like the reverse period-doubling route, intermittent route and higher-dimensional torus breakdown route to chaos, hyperchaos, period adding and farey sequences, transient chaos, double hook attractors, etc., can also be identified in this circuit. much progress has been made along these lines and the results will be published separately.

this work forms a part of a department of science and technology (dst), government of india, irhpa project of m.l. and a dst ramanna fellowship awarded to him. a.i. gratefully acknowledges the university grants commission (ugc) of the government of india for supporting his work under an fdp fellowship (f.etftnbd011 fdp/ugc-sero).

references

: : anishchenko, v. s., safonova, m. a. & chua, l. o. [1993] "stochastic resonance in the nonautonomous chua's circuit," j. cir. syst. comput. 3, 553-578.
: : cafagna, d. & grassi, g. [2004] "a new phenomenon in nonautonomous chua's circuits: generation of chaotic beats," int. j. bifurcation and chaos 14, 1773-1788.
: : cafagna, d. & grassi, g. [2005] "on the generation of chaotic beats in simple nonautonomous circuits," int. j. bifurcation and chaos 15, 2247-2256.
: : cafagna, d. & grassi, g. [2006a] "generation of chaotic beats in a modified chua's circuit, part i: dynamic behaviour," nonlinear dynamics 44, 91-99.
: : cafagna, d. & grassi, g. [2006b] "generation of chaotic beats in a modified chua's circuit, part ii: dynamic behaviour," nonlinear dynamics 44, 101-108.
: : chua, l. o. [1971] "memristor - the missing circuit element," ieee trans. circuit theory ct-18, 507-519.
: : chua, l. o. & kang, s. m. [1976] "memristive devices and systems," proc. ieee 64, 209-223.
: : chua, l. o., desoer, c. a. & kuh, e. s.
[1987] linear and nonlinear circuits (mcgraw-hill book company, singapore).
: : elwakil, a. s. [2002] "nonautonomous pulse-driven chaotic oscillator based on chua's circuit," microelectron. j. 33, 479-486.
: : grygiel, k. & szlachetka, p. [2002] "generation of chaotic beats," int. j. bifurcation and chaos 12, 635-644.
: : itoh, m. & chua, l. o. [2008] "memristor oscillators," int. j. bifurcation and chaos 18, 3183-3206.
: : liu, z. [2001] "strange nonchaotic attractors from periodically excited chua's circuit," int. j. bifurcation and chaos 11, 225-230.
: : murali, k. & lakshmanan, m. [1990] "observation of many bifurcation sequences in a driven piecewise-linear circuit," phys. lett. a 151, 412-419.
: : murali, k. & lakshmanan, m. [1991] "bifurcation and chaos of the sinusoidally-driven chua's circuit," int. j. bifurcation and chaos 1, 369-384.
: : murali, k. & lakshmanan, m. [1992a] "transition from quasiperiodicity to chaos and devil's staircase structures of the driven chua's circuit," int. j. bifurcation and chaos 2, 621-632.
: : murali, k. & lakshmanan, m. [1992b] "effect of sinusoidal excitation on the chua's circuit," ieee trans. circuits & syst.-i 39, 264-270.
: : murali, k. & lakshmanan, m. [1993] "chaotic dynamics of the driven chua's circuit," ieee trans. circuits & syst.-i 40, 836-840.
: : muthuswamy, b. [2009a] "memristor based chaotic circuits," technical report no. ucb/eecs-2009-6, http://www.eecs.berkely.edu/pubs/techrpts/2009/eecs-2009-6.html
: : muthuswamy, b. [2009b] "implementing memristor based chaotic circuits," to be published in int. j. bifurcation and chaos.
: : nishio, y. & mori, s. [1993] "chaotic phenomena in nonlinear circuits with time-varying resistors," ieice trans. fundamentals e76-a, 467-475.
: : sliwa, i., grygiel, k. & szlachetka, p. [2008] "hyperchaotic beats and their collapse to the quasiperiodic oscillations," nonlinear dynamics 53, 13-18.
: : strukov, d. b., snider, g. s., stewart, d. r. & williams, r. s. [2008] "the missing memristor found," nature 453, 80-83.
: : srinivasan, k., thamilmaran, k. & venkatesan, a. [2009] "classification of bifurcation and chaos in chua's circuit with effect of different periodic forces," int. j. bifurcation and chaos 19, 1951-1973.
: : tour, j. m. & he, t. [2008] "the fourth element," nature 453, 42.
: : wolf, a., swift, j. b., swinney, h. l. & vastano, j. a. [1985] "determination of lyapunov exponents from a time series," physica d 16, 285-317.
: : zhong, g. [1994] "implementation of chua's circuit with a cubic nonlinearity," ieee trans. circuits & syst.-i 41, 934-941.
: : zhu, z. & liu, z. [1997] "strange nonchaotic attractors of chua's circuit with quasiperiodic excitation," int. j. bifurcation and chaos 7, 227-238.

table 1. frequencies identified from the power spectra and from the switching-time probability distributions.

    quantity                                       power spectral   probabilistic
    external driving frequency                     3.499394         3.503184
    memristor switching frequency (autonomous)     3.240534         3.242463
    memristor switching frequency (driven)         2.972088         2.945407
    central frequency of the modulated signal      3.240534         3.302220
    beat frequency                                 0.270179         0.280360
in this paper, a time-varying resistive circuit realising the action of an active three-segment piecewise-linear flux-controlled memristor is proposed. using this as the nonlinearity, a driven chua's circuit is implemented. the phenomenon of chaotic beats in this circuit is observed for a suitable choice of parameters. the memristor acts as a chaotically time-varying resistor (ctvr), switching between a less conductive off state and a more conductive on state. this chaotic switching is governed by the dynamics of the driven chua's circuit, of which the memristor is an integral part. the occurrence of beats is essentially due to the interaction of the memristor-aided self-oscillations of the circuit and the external driving sinusoidal forcing. upon slight tuning/detuning of the frequencies of the memristor switching and of the external force, constructive and destructive interferences occur, leading to revivals and collapses in the amplitudes of the circuit variables, which we refer to as chaotic beats. numerical simulations and multisim modelling, as well as statistical analyses, have been carried out to observe, understand and verify the mechanism leading to chaotic beats.

_keywords_: driven chua's circuit; active memristors; piecewise-linear nonlinearities; chaotically time-varying resistors (ctvr); residence times; switching frequencies.

email: lakshman.bdu.ac.in
running title: beats in memristive driven chua's circuit
corresponding author: m. lakshmanan
the center of mass of the solar system moves by up to about one solar diameter due to the influence of, in particular, the gas giants jupiter and saturn; a 2-d view of the orbit is shown in fig. [fig:solarorbit]. can the signature of this orbit be found in the global temperature? in order to argue this case, one needs two kinds of evidence. the first and most important is an explanation in terms of a physical model of why this orbit should influence processes in the sun which in turn would influence the temperature on earth. no such explanation is generally accepted today, despite the existence of alternative theories and recent discussions. the second way is to compare data and look for correlation and coherence. this is the approach followed by scafetta, who compared the hadcrut3 global temperature anomaly with the speed of the center of mass of the solar system (scmss). the raw data are plotted in fig. [fig:input] and run from 1850 to may 2014. the most obvious feature of the temperature data (upper curve) is a gradual rise, which is captured imperfectly by the linear and parabolic fits; the secondary feature is a pattern of oscillations, which is what this paper is concerned with. scafetta is to be commended for having drawn attention to these oscillations and for raising various hypotheses about their origin. the most obvious feature of the scmss data in the lower part of fig. [fig:input] is a periodicity of about 20 years, which is the synodic period of jupiter and saturn, i.e. the period after which their positions and that of the sun are realigned. the main argument of scafetta was based on a comparison of power spectra which allegedly demonstrated coherence, in particular at periods around 20 and 60 years. i have argued previously that this is only a qualitative comparison: there has long been available a much better tool, the magnitude squared coherence (msc) function, a frequency-dependent normalized correlation value between 0 and 1 which is also sensitive to phase relationships. the msc was therefore introduced to this problem, and it was shown that in order to get a statistically reliable estimate of coherence from the 160 or so years of data available, one could not average over windows much longer than 40 years; this was due to the particular significance-interval estimate used in that paper. it was therefore not possible to reliably resolve coherence at a period around 60 years. as a response to this, scafetta argued along three lines. first, the similarity of the spectral estimates of the climate and planetary series was again used as an argument, with an earlier spectral-comparison figure reproduced, and the results of an undocumented coherency test were also reproduced; contrary to him, i actually see the opposite of coherence in that figure, as the spectral lines do not overlap very convincingly at frequencies around the 20 and 60 year periods. in the rest of scafetta's paper the msc has now been adopted rather than a comparison of spectral estimates. his second argument was that the msc estimator based on averaged windowed periodograms and cross-spectra needs to be replaced with the minimum variance distortionless response (mvdr) msc estimator. third, he argued that with this high-resolution msc estimator one could increase the window length to 110 years.
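a periodogram-based msc estimate of the kind discussed above can be computed along these lines; the placeholder random series stand in for the real hadcrut3 and scmss records, which are not reproduced here.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
# placeholder monthly series (1850 to may 2014 is roughly 1970 months);
# real analyses would substitute the detrended hadcrut3 and scmss data
temp = rng.standard_normal(1970)
scmss = rng.standard_normal(1970)

fs = 12.0                       # samples per year
nper = int(40 * fs)             # 40-year windows: the resolution/variance trade-off
f, msc = coherence(temp, scmss, fs=fs, nperseg=nper, noverlap=nper // 2)
periods = 1.0 / f[1:]           # periods in years; inspect msc near 20 and 60
```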
with windows of this length, a line around 60 years should be resolvable, and such a line indeed came out of his analysis. i do not see this as a discussion of right or wrong methodology, at least not as long as one is talking about estimating the msc. it is rather about significance, and about questions such as whether a high coherence value automatically implies high significance, and how important the reliability of the coherence estimator's amplitude is. since there is a trade-off between spectral resolution and confidence in the amplitude estimate, given the limited time span, this is indirectly also a question of resolution. my main aim in this paper is to show that the primary difference between scafetta's analysis and mine is the weight placed on these properties. a secondary difference is the emphasis placed on _a priori_ expectations. such expectations are fine for framing a hypothesis, but not necessarily for defending it when the data go against the hypothesis. but this is what scafetta does when he invokes kepler's first and second laws, as well as kepler's _mysterium cosmographicum_ (1596, 1621) and _harmonices mundi_ (1619), to argue against my analysis because it did not find 60-year periodicities of the strength that he anticipated. further, it is not at all obvious that kepler's contributions, beyond his three laws, are so useful. here is what a critical historian of science has written: _"the three major gems in his works on astronomy lay in a vast field of errors, of irrelevant data and details, of mystical fantasies, of useless speculations, of morbid detours of self analysis, and, last but not least, of an organismic and animistic conception of the world and its processes"_. anyway, to me this is a simple question of analysis of the data and not of any _a priori_ expectations. the scmss data used by scafetta were clearly not available to kepler and contain whatever they contain, regardless of what expectations one may get from reading kepler. could it also be that the seemingly arbitrarily chosen scalar scmss is a poor choice if the objective is to illustrate the 60-year periodicity of the planetary system, and that the periodicity might be easier to find if, for instance, the 3-d vector information were used as well? in this paper i take it as a premise that comparisons of spectral peaks cannot be anything but a qualitative indicator of coherence, and i therefore argue only on the basis of the magnitude squared coherence. i then discuss the effect of detrending on the global temperature data. a standard wavelet coherence estimation routine, widely used in climate and geophysical data analysis, is then applied to the data. i also demonstrate that it is not necessary to introduce a new mvdr msc estimator to see peaks around 60 years: the periodogram-based estimator may also show them if the window length is increased. in order to validate these peaks, their statistical significance must be found. i discuss four ways of doing that and end up with a random-phase method designed to be applicable to serially correlated data, as one has here; this is a new method compared to that used in my earlier analysis. i will demonstrate that, with common values of the significance level, it is hard to argue for statistically significant coherence between the data sets at either the 20-year or the 60-year periodicity. two independent estimation methods, wavelet coherence and periodogram-based coherence, are used here.
An alternative which is not discussed is multitaper MSC estimation. It has been used, for instance, to demonstrate coherence between the global temperature and CO2. Another possibility is the MVDR estimator for MSC, used by Scafetta. Those results are straightforward to reproduce, and the method was used by Scafetta because it was believed that the periodogram based MSC estimator could not resolve the interesting line at a 60 year period. As will be shown here, this is not the case, so the relatively uncharacterized MVDR estimator, which also depends on the setting of a regularization parameter, is not used here. In general, statistical properties are much harder to find for MVDR-based estimators. This is the case also in other fields where we have experience with MVDR or its equivalents, the Capon method and adaptive spectral estimation.

The MSC results are sensitive to the kind of detrending applied to the global temperature data. Alternatives are to subtract the mean, to subtract a linear trend, or to subtract a parabola. These curves fitted to the raw data are shown in Fig. [fig:input]. Subtraction of the mean is obvious to do in any spectral method in order to prevent strong components at 0 Hz from leaking into the low frequency parts. The most critical period in question is around 60 years. This is a frequency of only 1/60 year$^{-1}$, which is very close to 0 and in the frequency range which is affected most by detrending. The safest way to justify linear and parabolic detrending is to argue in terms of a physical model for the data. Lacking that, another criterion could be to test for sensitivity. If a result is critically dependent on one particular detrending, then the result would be dubious. The temperature data detrended by a linear fit and a parabolic fit were therefore plotted in Fig. [fig:detrended]. Compared to the raw data in the upper panel of Fig. [fig:input], it appears that in particular the lower series, detrended by a parabolic function, shows a tendency to amplify a low frequency oscillation, in particular in the range 1850-1890 but also after 2000. These differences can be seen in the MSC estimates also. Parabolic detrending enhances low frequency periodicities more than the other methods, whether the periodogram based method or the MVDR method is used for MSC estimation. This will be commented on later as well. In my view, parabolic detrending adds uncertainty to the result. Are we seeing phenomena that are in the data, or are we seeing the result of an interaction between the detrending and the data? A middle way, using subtraction of the mean for the SCMSS data and linear detrending for the temperature data, has therefore been chosen here. The latter is also justified by the argument that linear detrending removes the influence of potentially unresolved low frequencies. Careful analyses of linear detrending of temperature data in the literature also justify it.
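For reference, the three detrending alternatives can be sketched in a few lines of numpy. This is a sketch only; the series below is a synthetic placeholder, not the actual HadCRUT3 processing.

```python
import numpy as np

def detrend(t, y, kind="linear"):
    """Remove a fitted trend from series y sampled at times t.

    kind: "mean" subtracts the average, "linear" a first-order fit,
    "parabolic" a second-order fit.
    """
    order = {"mean": 0, "linear": 1, "parabolic": 2}[kind]
    coeffs = np.polyfit(t, y, order)      # least-squares polynomial fit
    return y - np.polyval(coeffs, t)      # residual around the trend

# Example with a synthetic monthly anomaly series from 1850 onward.
t = np.arange(1850, 2014.5, 1.0 / 12.0)
y = 0.005 * (t - 1850) + 0.1 * np.sin(2 * np.pi * (t - 1850) / 60.0)
anom = detrend(t, y, kind="linear")
```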
An analytical expression for the statistics of the MSC estimator exists for the case of an integer number of non-overlapping segments and Gaussian uncorrelated (white) data. To test the null hypothesis of no coherence, the true value can be set to zero. In that case an independence threshold can be found. This method was used in my earlier paper. Since in particular the SCMSS data is highly correlated due to its periodic nature, this will tend to indicate coherence where in reality there is none. No analytical expression for the statistics exists in the case of correlated data, so one has to resort to Monte Carlo simulation to find a test for the null hypothesis. In order to generate a data set to test against, there are several possibilities. One way is to fit an AR(1) model to each data set and generate simulated data sets from this red noise model. This method will also indicate coherence when there is none when serially correlated, and in particular periodic, signals are present. The non-parametric random phase method is better, as it was developed for serially correlated data. Several studies have used this method. It consists of Fourier transforming a data set, randomizing the Fourier phases while maintaining the complex conjugate symmetry of the frequency domain data so that the inverse transform is still real, and doing an inverse transform to get back to the time domain. The recommended procedure is that the pair-wise coherence between a large set (1000 in our case) of random phase versions of time series 1 and the original time series 2 is found, and then that the results are sorted and the 95% percentile is plotted along with the coherence of the two data sets. It has also been shown that as an AR(2) process becomes more and more periodic, this method tends to assign coherence where there is none. Therefore we have adopted the variant where, for each iteration, the coherence between a random phase version of series 2 and the original series 1 is also found, and the maximum of the two coherence functions is taken at each step. The use of an additional phase criterion, as proposed there, is not used here. The Monte Carlo simulation in Scafetta's paper is yet another way to attack this problem. However, since it is based on comparing the SCMSS with simulated sinusoids in additive independent noise at the expected frequencies, it is based on something different from a null hypothesis. It starts with the assumption of coherence and tests its significance, rather than starting with the hypothesis that there is no coherence. This goes against the principle of prudence, in my opinion. It also depends on the value of a signal to noise ratio parameter. The setting of this parameter is not discussed at all there, and an arbitrary value is used without justification. Finally, this additive noise approach, which some might say is rather naive, is not even considered in papers that discuss how to find confidence intervals for serially correlated data.
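A sketch of the random phase significance test just described is given below. It is a sketch with assumed names, not the actual software used here; scipy's standard coherence estimator stands in for the MSC estimator detailed in the appendix.

```python
import numpy as np
from scipy.signal import coherence

def random_phase_surrogate(x, rng):
    """Randomize the Fourier phases of x while keeping its amplitude
    spectrum, so the surrogate mimics the serial correlation of x."""
    n = len(x)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
    surrogate = np.abs(spec) * np.exp(1j * phases)
    surrogate[0] = spec[0]            # keep the mean (zero-frequency) term
    if n % 2 == 0:
        surrogate[-1] = spec[-1]      # the Nyquist bin must stay real
    return np.fft.irfft(surrogate, n)

def msc_null_level(x, y, fs=1.0, n_iter=1000, seed=0, **kw):
    """95% null level per frequency bin, taking for each iteration the
    maximum of the two surrogate-versus-original coherences."""
    rng = np.random.default_rng(seed)
    draws = []
    for _ in range(n_iter):
        _, c1 = coherence(random_phase_surrogate(x, rng), y, fs=fs, **kw)
        _, c2 = coherence(x, random_phase_surrogate(y, rng), fs=fs, **kw)
        draws.append(np.maximum(c1, c2))
    return np.percentile(np.array(draws), 95, axis=0)
```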
The wavelet estimation software of Grinsted et al. has been used for many different climate related and astrophysical applications. It estimates spectra and magnitude squared coherence using the Morlet wavelet. In order to show where the time-frequency space is free of edge artifacts, the data is only plotted inside the cone of influence, highlighting the lack of data for estimating components at high periods. This method is applied to the detrended data of Fig. [fig:input]. It can be seen that the wavelet spectra of Fig. [fig:wavelet] are in general agreement with earlier spectral analyses of these data sets. A line around 60 years is very visible in the HadCRUT3 spectrum, but barely visible in the SCMSS spectrum. In addition, the SCMSS spectrum has a strong component around 20 years which is also visible in the HadCRUT3 spectrum. The wavelet squared coherence function is shown in Fig. [fig:wtc]. It is based on averaging two independent values in the time dimension and in the scale dimension, so the number of independent averages is correspondingly small (see the appendix of the wavelet coherence paper). The confidence interval based on the random phase method described above is used. There are only a few areas with significant coherence, such as a band of periods around 1950. There is quite large coherence in the 17-22 and the 50-60 year ranges, but it is not large enough to reach above the 95% significance level. This is different from the result of my earlier paper, where coherence in the range 15-17 years was found to be just above the significance threshold. The lack of significant coherence in Fig. [fig:wtc] is due to the use of a significance interval estimate which is better suited to the serially correlated data.

In the periodogram based method for estimating magnitude squared coherence, the detrended time series is divided into short equal-length windowed segments. Power and cross spectra are then found by averaging over all the segments. The MSC estimate is found as the ratio of the squared cross spectrum estimate and the power spectra. The number of averages is given by the number of windowed segments. In order to utilize the data well, considerable overlap between segments is used. In our case we use a Kaiser window with $\beta = 6$ and the overlap is 75%. That means that the number of independent averages, $N_d$, which determines confidence intervals, is considerably lower than the number of averages. See the appendix for details of the algorithm and its properties. In my earlier paper the MSC was plotted for window lengths of 20 and 30 years, and results were also given in table form for a length of 40 years. Here I do not preselect the window length but rather find it by increasing it until the required number of segments fills all the available data as much as possible, giving window lengths like 41, 73, and 109 years. Despite this, some of the data will have to be discarded, but this is done from the beginning rather than from the end of the dataset. The reasoning is that the data from 1850 is both less reliable and less interesting than recent data. See Fig. [fig:shifts] in the appendix for an illustration of this. The magnitude squared coherence using a window length of 41 years is shown in Fig. [fig:msc41]. The coherence in the 15-17 year range almost touches the 95% confidence level, but is below it. One would have expected the solid MSC curve to reach well above the dash-dotted 95% significance level if there was significant coherence. There is of course no disagreement between Scafetta and me that windows of length 30-40 years are too short for finding coherence between time series on the 60 year scale. But as segment lengths are increased from 41 to 109 years, the number of independent averages, $N_d$, goes from 6.4 to 1.9. Due to the correlated nature of the data, this is the upper limit of $N_d$, and the actual number of independent averages is even lower. This number is so low that significance levels based on the analytical model were not considered reliable. Therefore the window length was not increased beyond that in the first study. Now that significance levels are found by Monte Carlo simulation, larger segment lengths like 73 and 109 years can give meaningful results. This is shown in Figs. [fig:msc73] and [fig:msc109]. It is seen that a peak in the 50-60 year range starts to appear in addition to the one around 19 years. These results are similar to fig. 10 of Scafetta's paper, with 110 year segment length, with respect to the peak around 60 years, but not identical to it.
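The periodogram based estimation used for these figures can be sketched with scipy as follows. This is a minimal sketch, not the exact code behind the figures; the monthly sampling rate and the segment lengths are illustrative assumptions.

```python
import numpy as np
from scipy.signal import coherence

def periodogram_msc(x, y, fs=12.0, seg_years=41):
    """Welch-type MSC with a Kaiser(6) window and 75% overlap.

    fs: samples per year (monthly data -> 12); seg_years sets the
    segment length and hence the longest resolvable period.
    """
    nperseg = int(seg_years * fs)
    f, cxy = coherence(
        x, y, fs=fs,
        window=("kaiser", 6.0),        # heavy taper, as described above
        nperseg=nperseg,
        noverlap=int(0.75 * nperseg),  # 75% overlap between segments
    )
    return f[1:], cxy[1:]              # drop the zero-frequency bin

# Periods in years along the frequency axis are simply 1/f.
```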
The coherence estimate here is below the 95% significance level, in particular in the 50-60 year range. The MSC peak at about 60 years in Fig. [fig:msc109] is above 0.9, while Scafetta's was below 0.8. The parabolic detrending of his fig. 10 has actually enhanced the peak, and a change to linear detrending, like we use, will diminish Scafetta's MVDR peak even further, to around 0.7. Despite the higher value of our peak, it turns out not to reach above the dash-dotted 95% significance level. The period of the main peak in Figs. [fig:msc41]-[fig:msc109] is 17.5, 19, and 18.4 years respectively. This is slightly less than 20 years, the peak value of the SCMSS spectrum. However, due to the ratio in the definition, eq. [eq:msc], the actual peak also depends on the cross spectrum as well as the resolution of the spectral estimates. It is also evident from the upper panel of Fig. [fig:wavelet] that the peak position in the 20 year range for the temperature data varies with time, and this variation is smeared out in the MSC estimate.

The magnitude squared coherence (MSC) is defined as a normalized cross spectrum:
$$\gamma^2(f) = \frac{|S_{xy}(f)|^2}{S_{xx}(f)\,S_{yy}(f)}, \label{eq:msc}$$
where $S_{xx}(f)$ and $S_{yy}(f)$ are the power spectra of the two time series and $S_{xy}(f)$ is the complex cross spectrum. $\gamma^2$ will always be between 0 and 1. For the estimate, both time series are divided into $N$ segments which are Fourier transformed and averaged:
$$\hat\gamma^2(f) = \frac{\bigl|\sum_{k=1}^{N} X_k(f)\,Y_k^{*}(f)\bigr|^2}{\sum_{k=1}^{N}|X_k(f)|^2\,\sum_{k=1}^{N}|Y_k(f)|^2}. \label{eq:msc2}$$
A case that demonstrates the need for averaging is to let $N = 1$. Then the coherence estimator of eq. [eq:msc2] will always output unity no matter what the input signals are, and will therefore be severely biased upwards. Such upward bias is typical when the number of averages is low. Since the window tapers the ends heavily down, 75% overlap is used between the segments. The number of segments is
$$N = \left\lfloor \frac{M - L}{L/4} \right\rfloor + 1,$$
where $M$ is the total number of samples and $L$ is the segment length. The actual value for the number of independent averages, $N_d$, will be less than $N$ when there is overlap, but statistics for the MSC estimator are only known for the case of an integer number of non-overlapping segments. However, one can assume that the reduction in the number of independent averages is similar to that which happens when estimating a spectrum. In the case of an overlap of 75%, the number of independent averages, $N_d$, is estimated from the correlation of the data window at the 25, 50, and 75% points, $c(0.25)$, $c(0.5)$, $c(0.75)$, assuming that the input data is white:
$$\frac{1}{N_d} = \frac{1}{N}\left[1 + 2c^2(0.75) + 2c^2(0.5) + 2c^2(0.25)\right] - \frac{2}{N^2}\left[c^2(0.75) + 2c^2(0.5) + 3c^2(0.25)\right]. \label{eq:nd}$$
The ratio $N_d/N$ in the case of a Kaiser(6) window and 75% overlap is asymptotically 0.69, starting at 0.53 for very low values of $N$. The difference between these values is due to the last three terms, which can be neglected for large $N$. However, due to the assumption on the data, this estimate will overestimate the number of independent averages in the case of non-white correlated data, and in particular when it is periodic. A typical piece of advice regarding the required number of independent averages is found in Carter, Knapp & Nuttall, where it is said that the statistical properties "... dramatically portray(s) the requirement that $N_d$ be large." What this means in practice is that $N_d$ should preferably be larger than 4-6, as was the case in my earlier paper. However, with a Monte Carlo based method for finding the significance level, rather than an analytic one, lower values of $N_d$ can give meaningful results, as demonstrated in this paper.
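Eq. [eq:nd] is straightforward to evaluate numerically. A sketch is given below, assuming white input data as stated above; the function name is ours.

```python
import numpy as np
from scipy.signal.windows import kaiser

def effective_averages(n_segments, seg_len, beta=6.0):
    """Effective number of independent averages N_d for 75% overlap,
    following eq. (nd): the window autocorrelation c(x) is evaluated
    at the 25, 50, and 75% overlap points."""
    w = kaiser(seg_len, beta)

    def c(frac):
        lag = int(round((1.0 - frac) * seg_len))   # overlap frac -> lag
        return np.sum(w[:seg_len - lag] * w[lag:]) / np.sum(w * w)

    s1 = 1.0 + 2.0 * (c(0.75) ** 2 + c(0.5) ** 2 + c(0.25) ** 2)
    s2 = c(0.75) ** 2 + 2.0 * c(0.5) ** 2 + 3.0 * c(0.25) ** 2
    inv_nd = s1 / n_segments - 2.0 * s2 / n_segments ** 2
    return 1.0 / inv_nd
```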
J. Abreu, C. Albert, J. Beer, A. Ferriz-Mas, K. McCracken, F. Steinhilber, R. Hollerbach, F. Stefani, and A. Tilgner. Response to: "Critical analysis of a hypothesis of the planetary tidal influence on solar activity" by S. Poluianov and I. Usoskin. _Solar Phys._, 289(6):2343-2344, 2014.

P. Brohan, J. Kennedy, I. Harris, S. Tett, and P. Jones. Uncertainty estimates in regional and global observed temperature changes: a new dataset from 1850. _J. Geophys. Res._, 111(D12):D12106, 2006.

G. C. Carter, C. Knapp, and A. H. Nuttall. Estimation of the magnitude-squared coherence function via overlapped fast Fourier transform processing. _IEEE Trans. Audio Electroacoust._, 21(4):337-344, 1973.

N. Scafetta. Does the Sun work as a nuclear fusion amplifier of planetary tidal forcing? A proposal for a physical mechanism based on the mass-luminosity relation. _J. Atmos. Solar-Terrestr. Phys._, 81:27-40, 2012.

Wang, X. Liu, J. Yianni, R. Christopher Miall, T. Z. Aziz, and J. F. Stein. Optimising coherence estimation to assess the functional correlation of tremor-related activity between the subthalamic nucleus and the forearm muscles. _J. Neurosci. Methods_, 136(2):197-205, 2004.

Yao, G. Huang, R. Wu, and X. Qu. The global warming hiatus - a natural product of interactions of a secular warming trend and a multi-decadal oscillation. _Theor. Appl. Climatol._, pages 1-12, 2015.

It is not hard to find high peaks in the magnitude squared coherence estimate in the 15-20 year range, as well as in the 50-60 year range, when the speed of the center of mass of the solar system is compared to the global temperature anomaly. This is the case for the two independent methods, the wavelet coherence estimator of Fig. [fig:wtc] and the periodogram based estimator of Figs. [fig:msc41]-[fig:msc109]. At first glance this seems to be in agreement with Scafetta. However, a coherence estimate of high value does not mean that there is coherence of high significance. Therefore a central point of this paper has been a discussion of significance levels. The challenge is that the data here is periodic. The four methods for assessing significance were:

1. Null hypothesis testing via Monte Carlo simulation based on the non-parametric random phase method for serially correlated data.
2. Null hypothesis testing based on an analytical expression for the independence level, derived for white Gaussian data.
3. Null hypothesis testing via Monte Carlo simulation based on a red noise model.
4. Testing via Monte Carlo simulation based on a simple model with sinusoids in additive noise, with an unknown signal to noise ratio that needs to be set.

These methods give widely different results with respect to significance. Prudent use of methods implies that conclusions can only be drawn from the first model, as it is the only one that fits the data, as has been argued here. That means that coherence around the 20 year and 60 year periods cannot be considered trustworthy. This strengthens the conclusion of my earlier paper. The principle of prudence, in addition to the lack of an accepted physical mechanism, then dictates the conclusion that one cannot say that there is any coupling between the movement of the center of the solar system and the global temperature data. It is more credible to look for down to earth explanations in, e.g.,
oscillations in the atmospheric general circulation, the Pacific Decadal Oscillation, or the Atlantic Multi-decadal Oscillation due to the meridional overturning circulation.

Wavelet coherence software was provided by Aslak Grinsted and Stepan Poluianov. The software for randomization of phase was written by Vincent Moron. Thanks to Fritz Albregtsen, Knut Liestøl, and Bjørn Samset for valuable comments.
There are claims that there is a correlation between the speed of the center of mass of the solar system and the global temperature anomaly. This is partly grounded in data analysis and partly in a priori expectations. The magnitude squared coherence function is the proper measure for testing such claims. It is not hard to produce high coherence estimates at periods around 15-22 and 50-60 years between these data sets. This is done in two independent ways, by wavelets and by a periodogram method. But does a coherence of high value mean that there is coherence of high significance? In order to investigate that, four different measures for significance are studied. Due to the periodic nature of the data, only Monte Carlo simulation based on a non-parametric random phase method is appropriate. None of the high values of coherence then turn out to be significant. Coupled with the lack of a physical mechanism that can connect these phenomena, the planetary hypothesis is therefore dismissed.
Recent work on scientific workflow management systems (SWfMSs) has shown that the scientific community is increasingly concerned with adding support for non-functional attributes to these systems. However, such attributes usually cannot be specified in the workflow models themselves, because existing workflow specification languages generally have limited expressiveness for this purpose. This characteristic makes workflow modeling simpler, but offers less flexibility in configuring the mechanisms associated with these attributes. When a system does support the configuration of these attributes, it is concentrated in the specification of the tasks (computational components), or associated with the workflow as a whole. This makes it difficult or impossible to configure these mechanisms for the communications and coordination employed between tasks. Seeking to improve this expressiveness, this paper presents the OSC language, an evolution of preliminary work presented as an extended abstract. OSC is defined on top of the Acme architecture description language. In contrast with other approaches, OSC employs *connectors* as first-class constructs for modeling both types and instances of interactions between tasks, as well as the rules that govern these interactions. With this approach, OSC provides greater reusability, compositionality, and configurability in workflow modeling, particularly benefiting the handling of non-functional attributes. The remainder of this paper is structured as follows. Section [sec:atr_nf] presents the non-functional attributes addressed in this work. Section [sec:osc] presents the modeling elements of OSC. Section [sec:ex] presents an example of the use of OSC, which is compared with related work in Section [sec:trab]. Finally, Section [sec:conc] presents the conclusions.

The survey of non-functional attributes addressed in this work was carried out by analyzing existing scientific workflows (such as OrthoMCL and ProFrager, the latter presented in Section [sec:ex]) and some of the most popular SWfMSs in the literature (see Section [sec:trab]) among those that allow the composition and configuration of these attributes in a modeling language (whether graphical or textual). The non-functional attributes treated in this work are: the quality attributes related to reliability and traceability, task parallelism, and data parallelism. In this survey, task scheduling also proved to be an important non-functional attribute. However, since it must be handled end-to-end in any workflow, and since its configuration depends on information contained in the descriptions of tasks and connectors, we chose to handle it directly in the SWfMS that executes OSC workflows. An implementation of an SWfMS for OSC exists and is available (see Section [sec:ex]), but its details are beyond the scope of this work.
Failures can occur in many parts of a workflow, both in the tasks and in their interactions, and for many reasons, such as failures in data transfers or missing libraries required for task execution. This underlines the importance of adopting fault tolerance mechanisms in SWfMSs so as to add reliability to executions. Data provenance tracking mechanisms, in turn, are used by SWfMSs for better management of the metadata that can be generated in each execution of a workflow. Workflows can generate a significant amount of metadata, which has stimulated the scientific community to look for solutions that ease this management in SWfMSs. Environments for parallel software execution have been increasingly associated with SWfMSs. Two main types of task parallelism are generally considered: shared memory and distributed memory. Although accelerators (such as GPUs) are a trend, we chose not to address them initially in this work, since the example workflows studied did not include any task depending on this type of parallelism. Workflows can be used for processing large volumes of data. Parameter sweep and MapReduce schemes are attractive for this kind of processing when the data can be split for the (generally parallel) processing of smaller data sets. A parameter sweep consists of repeated invocations of a task using different input data for each invocation, and can therefore also be used in computational simulations based on methods such as Monte Carlo. In MapReduce, a map function processes a {key, value} pair and generates an intermediate set of {key, value} pairs. A reduce function processes all pairs generated by the map function with the same key. To generate the input pairs of the reduce function, after the map function there is an intermediate phase that sorts the keys and merges the values governed by the same key.

OSC is defined on top of the Acme language. In Acme it is possible to describe architectural styles that allow the reuse of modeling elements across different software architectures, as well as the definition of composition rules for these elements. A style was defined in Acme for the description of the elements *tasks*, *connectors*, *ports*, and *roles*, and of the modeling rules of OSC workflows. In OSC, tasks communicate only through connectors. Tasks and connectors have interfaces called, respectively, ports and roles. The model of a workflow in OSC involves binding input/output ports of tasks to source/target roles of connectors. Bindings between ports and roles can represent control or data dependencies between tasks. OSC considers two kinds of users in the process of specifying scientific workflows: _scientists_ and _designers_. The scientist describes workflows in terms of relations between instances of predefined types of tasks and connectors (development _with_ reuse). The designer can extend OSC by defining new types of tasks and connectors based on the types predefined by the language (development _for_ reuse). OSC predefines a set of basic types for tasks, ports, connectors, and roles.
These basic types associate the abstract modeling element in the workflow with its concrete implementation. For example, a task can be an executable or a "flow" (a workflow encapsulated as a task), while a connector can be a character pipe or the transport of a file. Associated with these basic types, specific types are also predefined to represent the configuration of non-functional attributes. Figure [fig:tarefas] presents UML diagrams representing the basic task types and some of their specific types. We chose to use generalizations, constraints, and powertypes of UML 2.0 to depict these types in this paper, instead of the corresponding Acme specifications, due to the available space. However, to illustrate the use of Acme, some excerpts of these specifications are presented in the following subsections. Figure [subfig:bas_tar] defines the powertype TipoEstrutura to encompass the basic task types, which cannot be combined since they are disjoint types. Figure [subfig:exec_est] presents the OSC types for the task parallelism non-functional attribute. Figure [subfig:fluxo_est] presents the OSC types for the data parallelism non-functional attribute. With the exception of the VarreduraDeParametros type, all other non-functional attributes represented in Figure [fig:tarefas] are exclusive to the task type. Ports also have types for parameter sweeps, so as to allow the configuration of forks and joins. Figures [fig:atr_tarefas] and [fig:atr_conectores] show UML diagrams representing the types of quality attributes for tasks and connectors. In OSC, quality attributes are classified by the powertype TipoAtributoDeQualidade. All OSC elements are associated with this powertype, but not all quality attributes are handled in all elements, and the handling is distinct in each element. The following paragraphs give more details on how non-functional attributes are handled in OSC.

Task parallelism. The types MemoriaCompartilhada and MemoriaDistribuida (see Figures [subfig:exec_est] and [osc_tipomemoria]) allow parallel tasks to be added to the flow. These tasks can execute on computing systems that use different local resource managers. Here we opted for a minimalist approach, in which the MemoriaCompartilhada type allows only the configuration of the number of threads to be spawned, and the MemoriaDistribuida type allows only the configuration of the number of nodes of the computing system to be used for the execution and the number of processes to be spawned on each node. By combining these allocation types, OSC supports the configuration of executables that implement hybrid parallelism (e.g., combining pthreads and MPI), as sketched below.
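As a hypothetical illustration (not part of OSC or its SWfMS), the two allocation types could be mapped to a launch line as follows, assuming an Open MPI style launcher and OpenMP style threading; the function name and flags are assumptions for the sketch.

```python
def launch_command(executable, nodes=1, procs_per_node=1, threads=1):
    """Build a hybrid (MPI + threads) launch line from the two allocation
    types: distributed memory (nodes, processes per node) and shared
    memory (threads per process)."""
    return (
        f"OMP_NUM_THREADS={threads} "
        f"mpirun -np {nodes * procs_per_node} --map-by node {executable}"
    )

# e.g. launch_command("./profrager_step", nodes=4, procs_per_node=2, threads=8)
```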
Data parallelism. The types VarreduraDeParametros and MapReduce inherit from the basic task type Fluxo. The VarreduraDeParametros type, described in Figure [osc_tipovarredura], can be configured to execute its mechanism sequentially or by creating parallel instances of its internal tasks. For a parameter sweep to take place, at least one input port of this type of flow must be of type Bifurcacao. This type of port must be associated either with a data set (a directory of files or a list of values) or with a number of instances of the flow to be executed. The data set or the number of instances of each fork port determines whether the SWfMS must perform data combination and/or repetition of the experiments, allowing the creation of the number of task instances of the flow that should be executed. All data output ports of a flow of type VarreduraDeParametros must be of type Juncao, which is responsible for joining the output data generated after all instances of the parameter sweep finish. There are three possible join formats: _include_, which adds files to a directory; _merge_, which adds the contents of one directory to another directory; and _concat_, which concatenates files into another file. As for MapReduce, several systems (e.g., Hadoop and BashReduce) provide distinct implementations of this model, which makes it hard to generalize this type. For this reason, OSC adopts a representation in which a flow of this type has a set of binaries responsible for executing the MapReduce. Although this type is not tied to a specific mechanism, it improves the readability of the workflow model.

Fault tolerance. OSC allows failures to be handled both in tasks and in connectors, as can be seen in Listing [osc_tipotoleranciafalhas]. In this work, the masking and detection/correction techniques were used to provide fault tolerance. Masking represents hardware redundancy and is used only by tasks: several copies of the task are executed simultaneously and, at the end of the executions, the set of results is analyzed by voting algorithms to produce the final result. Detection/correction consists of two stages, where the first detects the failure and raises a signal so that the second can try to correct the problem. A task or connector instance using this technique must necessarily combine a detection type with a correction type. The techniques currently offered by OSC for failure detection are: log analysis, which raises an error signal when a failure is detected in the task logs; and time monitoring, where the execution (for tasks) or the data transfer (for connectors) is monitored and an error signal is raised when the time limit for the operation is exceeded. Task failures that are signaled through output ports can be handled in connectors by the propagation correction technique, which is used when the task failure is not harmful to the execution of the workflow as a whole. In that case, the connector receives on its source role the failure signal from the task's output port and guarantees the continuity of the flow by discarding the output data of the task that failed; a minimal sketch of this behavior follows.
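The sketch below illustrates the propagation-correction semantics in plain Python (OSC itself is Acme-based; the class and attribute names here are hypothetical, chosen only for the illustration).

```python
class PropagationConnector:
    """Forwards a task's output; on a failure signal, discards the
    failed output and lets the flow continue with a fallback source."""

    def __init__(self, fallback=None):
        # fallback: data arriving through the role bound to another task
        self.fallback = fallback

    def transfer(self, output, failed):
        if failed:
            # the failure is not fatal to the workflow as a whole:
            # drop the failed task's data and propagate the fallback
            return self.fallback
        return output
```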
Another correction technique offered by the OSC language is temporal redundancy, in which the execution of the task, or the data transfer by the connector that failed, is aborted, and at most a given number of new execution/transfer attempts are performed (this number being configurable in the workflow description).

Data provenance. The description of the provenance configuration types in OSC (presented in Figure [osc_tipoproveniencia]) is based on the definitions of the Open Provenance Model (OPM) format. All modeling elements in OSC can be combined with the OPM type. The versao property is present in all these elements and allows the generation of several descriptions of the same execution in a single OPM graph. The AltaGranularidade and BaixaGranularidade types apply only to flows and allow the provenance granularity to be defined. A flow of type AltaGranularidade has the provenance of its internal representation stored so as to consider all the OPM-typed elements it encapsulates. A flow of type BaixaGranularidade treats the flow as a single task. These types can be combined with each other in the creation of several versions of the OPM graph.

The ProFrager workflow generates protein fragment libraries. This workflow was chosen to illustrate the expressiveness of OSC, and for the tests of the prototype SWfMS with OSC support, because it demands the most non-functional attributes among the workflows studied in this work. The graphical representation of this model is illustrated in Figure [fig:profrager]. For space reasons, only the psipred task and the connector highlighted in Figure [fig:profrager] have their Acme specification presented in this work (see Figure [pfragger]). The psipred task exemplifies both quality attributes existing in OSC through a composition of the basic type Executavel with the types Log, RedundanciaTemporal, and OPM. The properties num_tentativas and ignorar belong to the RedundanciaTemporal type, and the configuration created allowed this task to be skipped, during the tests with the OSC SWfMS prototype, in the cases where it failed after three execution attempts. When the psipred task is skipped, the connector if3, which has two data source roles, receives the task's failure signal on the role bound to its output port and uses the data coming through the role associated with the cppsipredfile task (see Figure [fig:profrager]). The OPM type, present in the description of the task and of its ports, represents the OPM model for provenance tracking. The versao property, configured as orange and black, states that the provenance data related to this task and its ports will be stored in both versions with these names in the OPM graph. Table [tab:tr_atr_nfunc] indicates the support for non-functional attributes in OSC and in each of the SWfMSs studied, with respect to the workflow description language each adopts. Since these systems are extensible, some attributes are not supported by default but have extensions that allow their configuration. Those that were not supported but are well specified in the literature were mentioned.
Otherwise, they were noted in the table as having no support by default. In most cases, these works offer some support for combining functional workflow elements with mechanisms for handling quality attributes; however, the configuration of these mechanisms typically either is not done in the workflow description or is restricted to few options. For the traceability attribute, for example, SWfMSs commonly allow the creation of annotations either globally for the whole workflow or per task. OSC is the only language among those presented that allows the inclusion of provenance tracking types per model element, which is an advantage in terms of tracking configurability; on the other hand, for cases where the user wants such tracking for the whole flow, OSC is somewhat more laborious. Moreover, from what could be gathered from these works, support for configuring the fault tolerance mechanism on interactions is offered only by OSC.

Table [tab:tr_atr_nfunc]. Support for non-functional attributes, and their configuration in the model, per SWfMS:

Swift
- provenance: not configurable when creating the model.
- fault tolerance: users can configure the maximum execution time of each task.
- task parallelism: users can configure some options, e.g. the number of processes, through dynamic profile configuration.
- parameter sweep: through the _foreach_ operator.
- MapReduce: no support found by default.

VisTrails
- provenance: users can add notes to tasks.
- fault tolerance: no support found.
- task parallelism: no support found by default.
- parameter sweep: a specific mode in its graphical interface supports this kind of execution, allowing the configuration not only of the input data but also of the results.
- MapReduce: part of this mechanism is supported by the VisTrails Map module.

Taverna
- provenance: users can add notes to tasks and to interactions between tasks.
- fault tolerance: users can define the number of re-executions of tasks that fail.
- task parallelism: configuration is possible through plug-ins for remote task execution, and is plug-in specific.
- parameter sweep: if the user passes multiple values to ports that accept single values, a parameter sweep is performed by default.
- MapReduce: support is under development by the SCAPE project (http://www.taverna.org.uk/introduction/related-projects/scape/).

Kepler
- provenance: users can configure provenance only for flows.
- fault tolerance: users can configure the mechanisms of this attribute only on flows as a whole.
- task parallelism: no support found by default.
- parameter sweep: configurable through the set of actors and directors developed for using the Nimrod grid computing toolkit.
- MapReduce: through the MapReduce actor, developed to support Hadoop, the user can compose flows using this model.

OSC
- provenance: users can configure the provenance mechanisms on all elements, defining granularity levels.
- fault tolerance: users can configure the fault tolerance mechanisms for tasks (including flows) and connectors.
- task parallelism: users can configure their tasks for execution in shared and/or distributed memory environments.
- parameter sweep: users can create flows that support this attribute.
- MapReduce: the implementation of the internal tasks of flows of this type defines their execution through this mechanism.

[tab:tr_atr_nfunc]

The OSC language exploits the great flexibility of Acme's type system, which makes it both easily extensible and well suited to development for reuse, and allows the scientist to employ these types to compose the configuration of different mechanisms for managing the non-functional attributes of interest in a workflow. The use in this paper of UML 2.0 powertypes to expose the composition forms of the types offered by the OSC language hides some of the difficulties found during the development of this work regarding the definition of the architectural rules of the Acme style that defines OSC. In Acme, these rules are defined by means of first-order logic predicates, which do not have enough expressiveness to represent some restrictions on the combination of OSC types. The creation of validation functions using the AcmeLib library, which allows the programmatic manipulation of Acme specifications in the Java language, is being conducted as part of this work to overcome this limitation. Since Acme was conceived for the interchange of architectural descriptions among different architectural specification tools, it is particularly suitable as a basis for applying model transformation techniques. In the context of this paper, we intend as future work to exploit this characteristic of Acme to use OSC also as a language for the interchange of workflow specifications.
In this paper we present OSC, a scientific workflow specification language based on software architecture principles. In contrast with other approaches, OSC employs connectors as first-class constructs. In this way, we leverage reusability and compositionality in the workflow modeling process, especially in the configuration of mechanisms that manage non-functional attributes.
The large-scale spatial distribution of galaxies is an important topic for modern cosmology. The cosmic structure revealed by the observed galaxy spatial distribution is believed to originate from primordial density fluctuations. Gravitation amplifies these fluctuations and is the main driver of the formation and evolution of cosmic structures. In the current popular scenario, galaxies form inside previously collapsed "dark" gravitational wells, in a process joined and modified by gas dynamics, radiative cooling, and photoionization. The coalescence of these dark halos brings galaxies together and makes them merge in a hierarchical manner. The large-scale distribution of galaxies can be characterized by various statistical and topological methods. In particular, the 2-point correlation function has been used extensively. It measures the second moment of the probability distribution and statistically completely describes a Gaussian density field, which is believed to represent the primordial density fluctuations. However, the density field smoothed over the observed galaxy spatial distribution is highly non-Gaussian. The evolved cosmic structure probed by the galaxy distribution contains highly dense regions crowded with galaxies, delineating spatial voids where few galaxies are located. We generally need the probability distribution function, or its moments, to completely characterize such a galaxy distribution in space. A counts-in-cells method has been used to establish the galaxy probability distribution function. Theory and models have been developed to interpret the probability distribution. A multifractal description of the galaxy spatial distribution has been studied both theoretically and numerically, and applied to several galaxy samples. In particular, Borgani (1993) studied the multifractal behavior of various hierarchical probability distribution functions and derived the behavior of multifractal dimensions for extreme underdense and overdense regions. Indeed, the geometrical concepts of fractal and multifractal are appealing, given the ubiquitous presence of such structures in various natural and social phenomena. Less well perceived has been the statistical origin of multifractals as characterizing the moments of a probability distribution. For a review of multifractal applications in large-scale structure, see Coleman & Pietronero (1992) and Borgani (1995). The purpose of this paper is to introduce Rényi information as a valid characterization of any spatial structure, including the galaxy distribution. We show that Rényi information, being closely related to the probability distribution and to multifractal measures, probes the statistical moments sensitive to all levels of under- and overdense spatial structures. At scales where the information contents are well preserved and can be accurately quantified, statistical moments are jointly described by Rényi information and dimensions, for which the underlying generator has a physical origin. We also illustrate the procedure by applying the Rényi information, along with the probability distribution and multifractal measures, to observed galaxy samples in the infrared wavelengths as well as to a simulation. In the next section, we introduce Rényi information and its properties, and its relations to the moments of the probability distribution function and to multifractal measurements.
In Section [sec:res] we present the results of the probability distribution and Rényi information for the infrared samples observed by the Infrared Array Camera (IRAC) onboard the Spitzer Space Telescope. We discuss a multiplicative cascade simulation in Section [sec:sim], which provides a means of validating our methods of measurement and of showing the effects of spatial selection in our galaxy samples. We further derive the functions scanning the structure of the moments for the samples based on the simulation results. We discuss our results and potential applications of the information measure in Section [sec:dis].

Shannon & Weaver (1948) derived an information measure to describe the amount of information needed in order to know the occurrence of an event with a given probability. In an important development, Rényi (1970) expanded Shannon's information measure to arbitrary orders. Suppose we have $m$ cells placed to cover a distribution of galaxies. This can be either a 2-dimensional angular or a 3-dimensional spatial distribution. The probability of finding a galaxy in a given cell $i$ containing $N_i$ galaxies is $p_i = N_i/N_g$, where $N_g$ is the total number of galaxies. The Rényi information is defined as
$$I_\beta = \frac{1}{\beta - 1}\,\ln\sum_{i=1}^{m} p_i^{\beta},$$
where $\beta$ is the information order, which in principle can be any real number (although in our application we consider integers only). At positive orders the overdense structures dominate the information estimate, whereas the underdense structures contribute the most to the information measure at negative orders. At $\beta \to 1$, the Rényi information reduces to the Shannon information. The summation term for the probabilities to order $\beta$ can also be written as
$$\sum_{i} p_i^{\beta} = m \sum_{N} f_V(N)\left(\frac{N}{N_g}\right)^{\beta},$$
where $f_V(N)$ is the galaxy probability distribution function. Therefore the Rényi information of order $\beta$ is related to the $\beta$-moment of the probability distribution as
$$I_\beta = \frac{1}{\beta - 1}\,\ln\left[m \sum_{N} f_V(N)\left(\frac{N}{N_g}\right)^{\beta}\right], \hspace{5mm} \beta \neq 1,$$
which is in turn related to the volume-averaged $\beta$-point correlation function. The relation is intuitively easy to understand, as $f_V(N)$ is simply the probability of finding $N$ galaxies in a cell. At positive integral orders $\beta$, the Rényi information characterizes the amount of information corresponding to the event of finding galaxies in the cells covering the discrete galaxy distribution.

Some properties of the Rényi information indicate the behavior of the moments of the galaxy spatial distribution. It can be proved that $\sum_i p_i^\beta \le 1$ for $\beta > 1$ and $\sum_i p_i^\beta \ge 1$ for $\beta < 1$. Taking the logarithm we get $I_\beta \le 0$, since $\ln \sum_i p_i^\beta \le 0$ for $\beta > 1$ and $\ln \sum_i p_i^\beta \ge 0$ for $\beta < 1$. Therefore there is an upper limit $I_\beta \le 0$ for all $\beta$. We need zero information, or have perfect knowledge, for an event that occurs with probability one. The Rényi information depends on the cell size $l$, and diverges as $l \to 0$. One property that remains finite at this limit is the so-called Rényi dimensions:
$$D_\beta = \lim_{l \to 0} \frac{I_\beta(l)}{\ln l}. \label{eqn:rd}$$
Any galaxy distribution becomes discontinuous at the scale of the typical galaxy separation. The above limit is not achieved in a discrete distribution or in practical measurements. A more practical definition for a galaxy distribution is the "effective" Rényi dimensions, for which we calculate the slope of $I_\beta$ versus $\ln l$. There is no reason _a priori_ to expect the slope for a given order to be constant over all scales for a given structure. In fact, this is not implied in equation [eqn:rd] for a continuous multifractal distribution. We call a distribution a simple multifractal if the effective Rényi dimension for any given order has a single slope across all scales. Examining Rényi information and dimensions over information orders is identical to inspecting the structure of the statistical moments of a distribution.
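A minimal numpy sketch of this estimate from counts in cells is given below. The function is ours, not the paper's code; excluding empty cells is a practical choice, needed to avoid divergence at negative orders.

```python
import numpy as np

def renyi_information(counts, beta):
    """Rényi information of order beta from per-cell galaxy counts.

    counts: integer array of counts N_i over the m cells; p_i = N_i / N_g.
    Empty cells are excluded from the sum (they contribute nothing for
    beta > 0 and would diverge for beta < 0).
    """
    p = counts[counts > 0] / counts.sum()
    if np.isclose(beta, 1.0):
        return float(np.sum(p * np.log(p)))      # Shannon limit
    return float(np.log(np.sum(p ** beta)) / (beta - 1.0))
```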
Studying such scan functions has the advantage of summarizing an infinite number of parameters (moments) in just a few relations for a statistical distribution. Here we relate the Rényi dimensions to a scan function defined in a continuous multifractal field. Suppose $\varepsilon_l$ is the field density measured and ensemble-averaged at scale $l$, with moments scaling as $\langle \varepsilon_l^{q} \rangle \propto l^{-K(q)}$; the function $K(q)$ is the scaling exponent for the moments of the field (also called the structure function). Now that the Rényi dimension is practically $D_\beta = I_\beta(l)/\ln l$, and since $p_i \propto \varepsilon_l\, l^{d}$, where $d$ is the dimension of the space in which the distribution is embedded (e.g., $d = 2$ in our applications below), we obtain
$$d - D_\beta = \frac{K(\beta)}{\beta - 1}.$$
The function $d - D_\beta$ is therefore also called the codimension. Here we use a general name, the structure scan function, for the Rényi information and dimensions as functions of $\beta$, as well as for related functions such as the codimension.

The multifractal dimensions are usually defined by using the generalized correlation integral. Many measurements of the multifractal properties of the galaxy distribution have been based on measuring the generalized correlation integral, which uses cells of varying sizes centered at selected galaxies. Such a procedure is not valid for estimating Rényi information, since neighboring cells are bound to cross each other above a certain scale. Below this scale there is a non-zero probability that some of the galaxies are not covered by the ensemble of cells. Either case changes the normalization of the probabilities, and the Rényi information is not accurately quantified for the original structure. This is further explained in Section [sec:sim]. We want to emphasize here that _not only the slope of the Rényi information versus scale (the Rényi dimensions), but also the Rényi information itself is a physical measurement, both being open to physical interpretations. We will further discuss this point in Section [sec:dis]._ We note that the differential form of the second-order correlation integral is called the conditional density, which had been used to characterize the galaxy distribution in early surveys.

The Rényi dimensions basically show the scaling properties of the Rényi information. Loosely speaking, a multifractal galaxy distribution has a position-dependent scaling exponent $\alpha$. It can be shown strictly that the spectrum $f(\alpha)$ of these scaling exponents and the Rényi dimensions (multiplied by a factor of $\beta - 1$) are related by a Legendre transformation
$$f(\alpha) = \beta\alpha - \tau(\beta),$$
where $\tau(\beta) = (\beta - 1) D_\beta$, $\alpha = d\tau(\beta)/d\beta$, and $f(\alpha)$ is the fractal dimension of the subset with scaling exponent $\alpha$. A number of interesting properties of $D_\beta$ and $f(\alpha)$ are discussed in Beck (1990). A few interesting ones include $D_\beta$ being a decreasing function of $\beta$ and bounded, approaching finite limits as $\beta \to \pm\infty$. These limits, and the ways $D_\beta$ and $f(\alpha)$ approach them, show the properties of the moments of the spatial distribution from a scan function perspective.

The IRAC instrument onboard the Spitzer Space Telescope provides a fresh view into the cosmos at the mid-infrared wavelengths of 3.6 μm, 4.5 μm, 5.8 μm, and 8.0 μm. The Spitzer First Look Survey (FLS) using IRAC provides uniform coverage of a 4 square-degree field, with a total 60-second exposure time for each pixel in the arrays.
For our present purpose, we use the full galaxy samples established for an earlier 2-point correlation analysis across the IRAC wavelengths. We divide the two-dimensional area covered by an IRAC sample into square cells of varying sizes. The cells are non-overlapping and contiguous for the purpose of accurately estimating the Rényi information. The boundary of the sample area and the usage of a cell are determined by the mask files used to establish the galaxy sample. We always have "good" cells at the largest scales of measurement to ensure good statistics. At smaller scales the cell numbers are much greater. For each sample and each cell size we count the number of galaxies in the cells and establish the histograms in Figure [fig:cic]. The histograms represent estimates of the probability distribution function, from which the moments of the distribution can also be measured. For each histogram, we plot the fit of the theoretical gravitational quasi-equilibrium distribution function (GQED). The single fitting parameter, the average ratio of the gravitational correlation potential energy to twice the kinetic energy, is also shown in the plots. For comparison, we also draw the Poisson distributions with the same mean galaxy counts in cells of the given sizes. Apparently the galaxy distribution deviates more from the Poisson distribution at larger cell sizes, indicating the effect of galaxy clustering at the IRAC wavelengths. The GQED, on the other hand, describes the distributions of IRAC galaxies remarkably well over all scales. We follow the same procedures as in the counts-in-cells experiment to divide the IRAC sample areas into square cells to calculate the Rényi information. Figure [fig:infos] shows the relation between the cell sizes and the measured Rényi information over the measured range of orders. The Rényi information scales with cell size, but the relation is not linear for our galaxy samples. We will discuss below the effects that can potentially change the scaling relation. The apparent crowding of the curves at high information orders reflects the upper limit for the information measures derived above. The limit constrains the behavior of the moments of the galaxy distribution in the information space. Intuitively, exclusion of structures, such as galaxies not covered by the cells, or regions that cells avoid due to masking, changes the information content of the structure. Although we intend to cover the sample using contiguous cells, changing the cell size causes some galaxies in the sample not to be covered by cells of the new size, due to the sample boundary and masked areas. Until these effects can be fully accounted for, we are in fact measuring the information of slightly different structures at each scale, even though the probability is normalized by the total number of galaxies covered by cells. This introduces noise into the information measurements.
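The counting procedure can be sketched as follows. It is a sketch only: the mask handling is simplified to a per-cell-center test rather than the actual FITS mask overlap criterion, and the names are ours.

```python
import numpy as np

def counts_in_cells(x, y, cell_size, extent, mask=None):
    """Count galaxies in contiguous, non-overlapping square cells.

    x, y: galaxy coordinates; extent: (xmin, xmax, ymin, ymax);
    mask: optional boolean function mask(cx, cy) -> True if the cell
    centered at (cx, cy) is usable ("good").
    """
    xmin, xmax, ymin, ymax = extent
    nx = int((xmax - xmin) // cell_size)
    ny = int((ymax - ymin) // cell_size)
    ix = ((x - xmin) / cell_size).astype(int)
    iy = ((y - ymin) / cell_size).astype(int)
    ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    grid = np.zeros((nx, ny), dtype=int)
    np.add.at(grid, (ix[ok], iy[ok]), 1)   # accumulate counts per cell
    if mask is not None:
        cx = xmin + (np.arange(nx) + 0.5) * cell_size
        cy = ymin + (np.arange(ny) + 0.5) * cell_size
        good = mask(cx[:, None], cy[None, :])
        return grid[good]                   # keep only the "good" cells
    return grid.ravel()
```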
In the next section, we study these effects using a simulation of a known structure.

To verify our results, we generate a multiplicative cascade simulation based on a binomial model. The binomial model was found to describe well the multifractal scaling in the dissipation field of fully developed turbulence. We use the binomial model for its simplicity and its analytically derivable relations for the Rényi information and multifractal properties. The multiplicative cascade method was formulated to study energy transfers at different scales in turbulence. It is by far the most effective method to simulate a multifractal field. We use a discrete multiplicative cascade simulation, consistent with our purpose of studying counts at multiple length scales. The simulation aims to create distributions of counts at ten different scales within a given area, with a conserved overall number density. At the first level, the area is divided into four quadrants of the same size, two of which hold a fraction $w_1$ of the sources each, and the other two the fraction $w_2$ (so the overall number density is conserved, with $2w_1 + 2w_2 = 1$; this is also called the canonical process). There are many ways to distribute the two number densities equally among four cells. We choose to use a fixed pattern of distribution here. We tested different patterns and the results are the same. At the second level, each quadrant is further divided into four identical (smaller) cells, with the distribution of the same probabilities in the same pattern. The number count in a cell is the product of the probability assigned at this level multiplied by the probability (of the quadrant covering the cell) at the previous level (and by an arbitrary total source number). We continue the process to generate smaller and smaller cells and their number counts. We stop at the tenth level, where we have data over 10 scales (of ratio 2) for statistics. At level $n$, a cell has a number count proportional to $w_1^{k} w_2^{n-k}$, where $k$ is an integer between 0 and $n$. Therefore it is called the binomial model. The resulting structure, although modulated sharply by cell edges, is a simple multifractal.

Based on Halsey et al. (1986) and Meneveau & Sreenivasan (1987), we derive the Rényi information, the Rényi dimensions, and the spectra of the multifractal scaling exponent for the 2-dimensional binomial field as
$$I_\beta(n) = \frac{n}{\beta - 1}\,\ln\!\left(2w_1^{\beta} + 2w_2^{\beta}\right), \label{eqn:binfo}$$
$$D_\beta = -\frac{\ln\!\left(2w_1^{\beta} + 2w_2^{\beta}\right)}{(\beta - 1)\ln 2}, \label{eqn:mdbi}$$
$$\alpha(\beta) = -\frac{w_1^{\beta}\ln w_1 + w_2^{\beta}\ln w_2}{\left(w_1^{\beta} + w_2^{\beta}\right)\ln 2}, \qquad f(\alpha) = \beta\alpha - (\beta - 1)D_\beta, \label{eqn:idbi}$$
where $n$ is the level number in the cascade (e.g., the smallest scale being $n = 10$), and $\beta$ is the information order. Figure [fig:infosim] shows the scaling of the Rényi information in the binomial field. The measurements, using the same methods and algorithm used for the infrared samples, are indicated by points in the figure. The lines are calculated based on equation [eqn:binfo]. The agreement is nearly perfect for all orders. Next we compare measurements of the Rényi dimensions and multifractal spectra with those predicted by equations [eqn:mdbi] and [eqn:idbi]. Since the scaling of the measured Rényi information in Figure [fig:infosim] is well represented by lines, we use a linear least-squares fit for each information order in that figure to obtain the Rényi dimensions. We confirm from the fit that the information values at the smallest scale (where $n = 10$) and the slope are both consistent with those predicted by equations [eqn:binfo] and [eqn:mdbi]. The slope values from the fit for each order are plotted as dots in the left panel of Figure [fig:spectsim]. The line is the scan function for the Rényi dimensions based on equation [eqn:mdbi].
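The cascade just described can be generated in a few lines. This is a sketch under stated assumptions: the weight $w_1$ below is an arbitrary choice, and the fixed 2x2 pattern is one of the equivalent patterns mentioned above.

```python
import numpy as np

def binomial_cascade_2d(levels=10, w1=0.35):
    """Discrete 2-d multiplicative cascade with a fixed binomial pattern.

    At each level every cell splits into four; two children take the
    fraction w1 and two take w2, with 2*w1 + 2*w2 = 1 (canonical process).
    """
    w2 = 0.5 - w1                           # conservation: 2*w1 + 2*w2 = 1
    # Multiply by 4 so the *density* (count per cell area) is modulated
    # while the overall mean density stays equal to 1.
    pattern = 4.0 * np.array([[w1, w2], [w2, w1]])
    field = np.ones((1, 1))
    for _ in range(levels):
        field = np.kron(field, pattern)     # refine each cell into 2x2
    return field                            # shape (2**levels, 2**levels)

# field.mean() == 1 at every level; cell counts follow w1**k * w2**(n-k).
```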
to obtain the multifractal spectra, we use a cubic spline fit to model the measured rényi dimensions and derive the multifractal exponent and spectrum values within the range of information orders. these values are again represented by the dots in the right panel of figure [fig:spectsim]. the line in the figure is based on equation [eqn:idbi]. in both panels the values based on the measured rényi information agree well with the predicted ones. this shows the reliability of the methods and algorithms we use. in figure [fig:corsim] we show measurements of the generalized correlation integral superimposed on the plane of rényi information versus scale for our simulation. we generate cell positions in the field, and vary the size of the cells centered around these positions. cells may overlap, but are ignored if they cross the field boundary. the generalized correlation integral is calculated using the remaining cells, following the standard algorithm. in the figure the dashed lines are generalized correlation integral measurements, and the solid lines are the predicted rényi information. it is clear that although the rényi dimensions may be approximately maintained, the generalized correlation integral does not measure the rényi information. we also find that the values of the generalized correlation integral depend on the number of cells in the experiment, further strengthening this point. to investigate the effects of spatial selection of structures, we include the mask files on which the irac samples are based. each of the mask files is a fits representation of the fls field. since our simulation has a different dimension, we first project these masks onto the simulation field; this procedure maintains the scale ratio of the masked areas and the field size. we then follow the same criteria to exclude cells in the simulation field overlapping with projected masked areas, and repeat the procedures of measuring the rényi information in the simulated field. the results, using the mask files for the four irac samples, are shown in figure [fig:masksim], with solid-line predictions superimposed on measured points connected by dotted lines. there is an obvious effect of spatial selection on measuring both the rényi information and dimensions. notably, involving the irac masks introduces an apparent scale dependency of the rényi dimensions, particularly at greater scales, where both the rényi information and dimensions are higher than predicted. at smaller scales, there is a systematic offset to higher (negative) information values, although the slopes for the rényi dimensions are approximately maintained. masking reduces the amount of structure in the original binomial field, and a smaller amount of information (identical to the absolute value of the measured rényi information) is needed to know an event occurring (such as sources in a cell) with a given probability. cells of increasing scales may cover fluctuating, but generally increasingly smaller, samples from the original structure accompanied by a mask. this confirms our intuition that the rényi information is an intrinsic property of a spatial structure. any modification of the structure modifies its information content. other derived properties such as the rényi dimensions can also be affected if not measured properly.
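the chain from measured information to rényi dimensions to the multifractal spectrum can be sketched numerically; the block below builds on the `binomial_cascade` function from the previous sketch and uses the standard legendre-transform relations alpha(q) = d tau / d q and f(alpha) = q alpha - tau(q), with tau(q) = (q - 1) d_q. the order grid and level range are illustrative assumptions.

```r
field <- binomial_cascade(levels = 8)   # from the cascade sketch above
lev <- 1:8
orders <- seq(-4, 4, by = 0.5)
orders <- orders[orders != 1]           # q = 1 needs the shannon limit

# block-aggregate the finest field once per scale, then evaluate all orders
p_by_scale <- lapply(lev, function(l) {
  size <- 2^l; step <- nrow(field) / size
  agg <- matrix(0, size, size)
  for (i in 1:size) for (j in 1:size)
    agg[i, j] <- sum(field[((i - 1) * step + 1):(i * step),
                           ((j - 1) * step + 1):(j * step)])
  as.vector(agg / sum(agg))
})
info <- sapply(orders, function(q)
  sapply(p_by_scale, function(p) log(sum(p^q)) / (q - 1)))

# renyi dimensions from linear fits, then the legendre transform
scales <- 2^-lev
d_q <- apply(info, 2, function(col) coef(lm(col ~ log(scales)))[2])
tau_fun <- splinefun(orders, (orders - 1) * d_q)   # cubic spline fit
alpha <- tau_fun(orders, deriv = 1)                # alpha(q) = d tau / d q
f_alpha <- orders * alpha - tau_fun(orders)        # f = q alpha - tau
plot(alpha, f_alpha, type = "l", xlab = "alpha", ylab = "f(alpha)")
```

the cubic spline step mirrors the fitting strategy described in the text; for the noiseless binomial field the recovered spectrum is smooth and single-humped, the signature of a simple multifractal.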
while the geometry and pattern of the four mask files vary, the effects on the rényi information are remarkably similar. although masking changes the information content of the original structure, it appears that the irac masks preserve the scaling of the information at smaller scales. the graininess of the galaxy distribution, however, introduces a poisson limit at these smaller scales (recall that our simulation is not grainy), below which a cell contains either one or no galaxy in most of the regions. any multifractal behavior breaks down at this limit. at scales smaller still, the number of cells contributing to the rényi information is roughly identical to the total number of galaxies, and the rényi information reaches a (lower) limit and flattens out (also see figure [fig:structinfo] below). both these effects at large and small scales can make the rényi information curves of a multifractal become concave. this curvature is observed in figure [fig:infos]. based on the irac sample sizes and the unmasked areas for the samples, we estimate the mean separations of any two galaxies in the irac-1 and 2 samples and in the irac-3 and 4 samples, assuming uniform distributions. we use these as the lower scale limit for reliable multifractal estimates. for the upper limit, figure [fig:masksim] implies a linear scale of a fraction of the field size, assuming that the scale ratio applies to the fls field. this is only slightly higher than the lower limit of the irac channel-3 and 4 samples. basically, the smaller number of galaxies in these samples, combined with the amount of masking, prevented us from reliably estimating the multifractal behavior for these two samples. for illustration purposes, we perform a cubic spline fit to each of the rényi information relations in figure [fig:infos], and derive the rényi dimensions and the scan function for the irac-1 and 2 samples at a fixed scale. we perform another cubic spline fit to the scan functions (as we did for the binomial field) and obtain the multifractal exponent and spectrum throughout the range of rényi dimensions. in figure [fig:spect] we show these relations for the two fls samples. the figure illustrates how the rényi dimensions decrease with increasing information order and converge to a limit; the spectrum also appears to be a convex function, and the limiting values represent the rényi dimension limit at large information orders. all are typical behaviors of multifractals. in figure [fig:structinfo], we plot the rényi information as a function of order, a different type of scan function, measured at five scales for all irac samples. at most of these scales the masking effect is small, and the information can be measured accurately for the galaxy distribution. for samples at irac channels 3 and 4, however, poisson effects dominate the three smaller scales. this is indicated by the scan curves converging to the poisson limit below or near information order zero. for all irac samples, the limit is shown by the scan curve behavior at negative information orders, where the information measure is sensitive to and dominated by underdense regions in the samples.
at positive orders, and at scales where the information can be measured accurately, the scan curves reveal the structure of the high moments of the galaxy distribution. we have shown that the rényi information, the effective rényi dimensions, their structure scan functions, and the multifractal spectra contain the properties of the high moments of a spatial distribution. these measurements can be used to scan the properties of these high moments. such properties detect the amount of deviation from gaussian densities, and are highly constrained in the parameter space of these measurements. our experiments also show that spatial selection effects are important and can bias these measurements. any selection modifies the original structure and the amount of rényi information the structure contains. depending on the amount of selection, the rényi dimensions may be maintained over a limited range of scales above the poisson limit for discrete distributions. one needs to conduct controlled experiments, such as simulations, to verify this at the relevant scales. for the irac-1 and 2 samples, there is an indication in figure [fig:infos] that the information-scale relation is still not linear within the reliable range. it is yet uncertain how much of this is caused by masking and by approaching the poisson limit, both effects leading systematically to a concave curve, or whether there is a scale dependency in the rényi dimensions of our irac samples, which would imply a more complex structure than a simple multifractal distribution at these scales. whether the galaxy spatial distribution is a multifractal, or whether homogeneity can be reached at large scales, as the cosmological principle states, has been observationally a controversial issue. our analyses show that caution needs to be exercised in extrapolating a multifractal structure to small and large scales, particularly if spatial selection exists for a galaxy sample, even if multifractality is observed at scales more reliable for multifractal measurements. it may be possible to recover the lost information in a galaxy sample by "filling in" the masks based on known properties of the galaxy distribution. such known properties may come from minimally masked samples of galaxies of the same type, or from n-body simulations, for example. just as with the function generating the probabilities in our multiplicative cascade simulation, there is a variety of statistical functions that can serve as generating functions for simulating full-scale multifractal fields. among these generating functions the log-lévy distribution is of particular interest, due to the unique position of the lévy distribution in replacing a gaussian in the generalized central-limit theorem, where variances of the component distributions can be infinite, and also due to its applications to a "universal class" of geophysical structures. the structure scan functions are uniquely determined by probability generating functions, which are of physical origin. the generating function would be a significant property to know if the galaxy distribution is a multifractal to large scales. another way to seek a physical interpretation of the rényi information is to use the moments of the probability distribution via equations [eqn:infomo] and [eqn:infomo1], which are not restricted to a multifractal structure.
since the rényi information scales with the cell size through the rényi dimensions, which are bounded by the dimension of the space the distribution is embedded in, we can also derive from equation [eqn:infomo] a relation involving the q-th moment of the probability distribution function. it is clear from the relation that we have a simple multifractal distribution across all scales only if the moment ratio is not a function of scale. this is not the case for the gqed, for example. on the other hand, any physically derived probability distribution can interpret the rényi information and dimensions via these relations. independent of the multifractality of a structure, the rényi information and dimensions are general characterizations of the statistical properties of the structure. a simple multifractal is a special and very restrictive type of structure in its practical definition. the rényi information and dimensions and their corresponding scan functions can describe any type of structure, multifractal or not. the rényi information is extensive, whereas its scaling, or "information rate" with changing scales, is an intensive parameter. both are important for a given structure. as we collect galaxy samples from surveys with greater area coverage and increasing depth, as well as in more wavelength channels, we are collecting increasingly more information about the large-scale structure, and the absolute values of the measured rényi information increase at a given scale. any variations of the rényi dimensions, on the other hand, are of a different origin. for spatially confined structures, such as a giant molecular cloud, the extensivity of the rényi information also depends on resolution. a more resolved observation reveals more detailed structure, and therefore more information content. while the rényi information and dimensions can be identically applied to continuous and discrete spatial fields, it is important to recognize what properties are used for measurement. it is clear that we want to characterize the moments of a spatial structure, and that we can use spatial densities for measuring. an astronomical observation is usually a radiation measurement, however, and the proportionality between the two is only an assumption. for non-astronomical structures, the meaning of the measurements can be more clear-cut. the rényi information and dimensions can also be applied to one-dimensional time series. in the temporal domain, the amount of time delay serves as the scaling, and the information content and rate describe the temporal structure built by distributions of the change of the observed properties over different time spans. an information measure is a measure of the knowledge of a structure or system, and therefore of its predictability. it would be desirable to quantify the predictability of a statistical distribution or a time series using the rényi information and dimensions; so far research on this topic remains limited. the relation between the rényi information and dimensions measured in 2 dimensions and those in 3-dimensional space for the same structure can be straightforward. the 2-dimensional cells used to cover a structure can also be 3-dimensional cells with the third dimension extended to cover the same structure. when properties such as spatial density can be accounted for by measurements when projecting the structure onto a 2-dimensional area, the information is not lost.
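the moment route mentioned above is short to verify numerically: with p_i = n_i / n over m cells, sum p_i^q = m <n^q> / n^q, so the information follows from the q-th raw moment of the counts alone. a small sketch, under the same i_q = log(sum p^q)/(q - 1) convention assumed earlier:

```r
info_from_moments <- function(counts, q) {
  # note: zero-count cells must be excluded for orders q <= 0
  n_tot <- sum(counts)
  m_q <- mean(counts^q)        # q-th raw moment of the cell counts
  m_cells <- length(counts)
  (log(m_cells * m_q) - q * log(n_tot)) / (q - 1)
}

# agrees with the direct estimate from the probabilities:
counts <- c(5, 0, 3, 12, 1, 9)
p <- counts / sum(counts)
info_from_moments(counts, q = 2)
log(sum(p^2)) / (2 - 1)
```

because only moments enter, this expression holds for any counts-in-cells distribution, whether or not the underlying structure is a multifractal.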
the only uncertainty is the correspondence between the 2-dimensional and 3-dimensional scales. it is, however, a generally interesting question what scales are measured by cells of non-identical dimensions. for the galaxy spatial distribution, the evolutionary effects of galaxies in the third dimension need to be disentangled from projection before the structure can be analyzed in three dimensions. i thank the anonymous referee for providing constructive comments. i thank the fls team at the spitzer science center for assembly of the fls data products. the spitzer space telescope is operated by the jet propulsion laboratory (jpl), california institute of technology, under nasa contract 1407. support for this work was provided by nasa through jpl.
we introduce an information-theoretic measure, the rényi information, to describe the galaxy distribution in space. we discuss properties of the information measure, and demonstrate its relationship with the probability distribution function and multifractal descriptions. using the first look survey galaxy samples observed by the infrared array camera onboard the spitzer space telescope, we present measurements of the rényi information, as well as the counts-in-cells distribution and multifractal properties of galaxies at mid-infrared wavelengths. guided by a multiplicative cascade simulation based on a binomial model, we verify our measurements, and discuss the spatial selection effects on measuring the information of the spatial structures. we derive structure scan functions at scales where selection effects are small for the spitzer samples. we discuss the results, and the potential of applying the rényi information to measuring other spatial structures.
consider the problem of estimating a regression function $f:[0,1]\rightarrow\mathbb{r}$ from noisy observations using the b-spline basis $\big\{\,b^{[p]}_k : k = 1, \ldots, k_n + p\,\big\}$ of degree $p$ built on $k_n$ equally spaced knots. we also assume that the ratio of the sample size to the number of knots is an integer, so that every knot is a design point; a more general case is discussed briefly in section [sec:diss]. the penalized spline estimator is given by $\hat{f}^{[p]}(x) = \sum_{k=1}^{k_n+p} \hat{b}_k\, b^{[p]}_k(x)$, where the coefficients minimize a penalized least-squares criterion with an $m$-th order difference penalty. the asymptotic behavior of $\hat{f}^{[p]}$ depends only on the order of the difference penalty but not on the spline degree, as long as the number of knots tends to infinity fast enough; see corollary [coro:asy]. the contributions of the present paper are twofold: (i) the paper develops a general approach for asymptotic analysis of a penalized spline estimator with an arbitrary spline degree and arbitrary order difference penalty via green's functions. to handle a general penalized spline estimator, various techniques for linear odes are exploited to obtain a corresponding green's function. (ii) the closed-form expressions of equivalent kernels for both inner and boundary points are established, and convergence rates are developed for general penalized spline estimators. compared with the existing results based on matrix techniques, e.g. li and ruppert (2008) and claeskens, krivobokova, and opsomer (2009), the use of green's functions considerably simplifies the development and yields an instrumental alternative to establish the equivalent kernels for general penalized splines. moreover, this also leads to the convergence rates and the observation that the rates are independent of the spline degrees and the number of knots for an arbitrary penalized spline estimator. while this observation is pointed out by li and ruppert (2008) for piecewise constant and piecewise linear splines and is conjectured for general penalized splines, no rigorous justification has been given for general penalized splines in the literature; the current paper offers a satisfactory answer to this issue in a general setting. the paper is organized as follows. section [sec:charact_estimator] characterizes the general penalized spline estimator as an approximate solution of a linear differential equation subject to suitable boundary conditions. section [sec:gr] investigates the solution of this differential equation and obtains the related green's functions as equivalent kernels for a penalized spline estimator of an arbitrary b-spline degree with any order difference penalty. using these green's functions, the asymptotic properties of penalized splines are established in section [sec:asy]. section [sec:boundary] addresses kernel approximation near the boundary of the design set. by formulating boundary conditions in an appropriate integral form, an explicit equivalent kernel is obtained. finally, extensions to unequally spaced data and multivariate penalized splines are discussed in section [sec:diss]. let $b \in \mathbb{r}^{n\times(k_n+p)}$ denote the b-spline design matrix. the optimality condition sets the gradient of the penalized criterion to zero. to obtain the analogous representation for the estimator, we introduce a few variables and piecewise constant functions related to the design and to the true regression function, defined from the uniform distributions on the design points and on the knots. with these definitions, each interior row of ([equ:p3]) can be written in terms of the corresponding rows of the design and penalty matrices; furthermore, since the elements of the last rows of the penalty matrix are all zeros, an analogous boundary relation also holds. next, we proceed by replacing that difference equation ([equ:diff]) with an analogous differential equation.
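before turning to the differential-equation analysis, the estimator itself is straightforward to compute. the following is a minimal sketch, not the authors' code, of a penalized spline fit with b-splines of degree p and an m-th order difference penalty, using base r's splines package; the smoothing parameter `lambda`, the knot count and the toy signal are illustrative assumptions.

```r
library(splines)

pspline_fit <- function(x, y, k_n = 40, p = 3, m = 2, lambda = 1) {
  # equally spaced knots on [0, 1], extended so the basis has k_n + p members
  knots <- seq(-p, k_n + p, by = 1) / k_n
  b_mat <- splineDesign(knots, x, ord = p + 1)        # n x (k_n + p) design
  d_mat <- diff(diag(ncol(b_mat)), differences = m)   # m-th order differences
  # penalized least squares: (b'b + lambda d'd) coef = b'y
  coef <- solve(t(b_mat) %*% b_mat + lambda * t(d_mat) %*% d_mat,
                t(b_mat) %*% y)
  list(coef = coef, knots = knots, p = p)
}

# toy usage on a smooth signal with noise
set.seed(2)
x <- sort(runif(400)); y <- sin(2 * pi * x) + rnorm(400, sd = 0.3)
fit <- pspline_fit(x, y)
yhat <- splineDesign(fit$knots, x, ord = fit$p + 1) %*% fit$coef
plot(x, y, col = "grey"); lines(x, yhat, lwd = 2)
```

the `diff(diag(.), differences = m)` trick builds exactly the banded differencing matrix whose limit, after rescaling, produces the ode studied in the next section.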
we shall focus on the leading case first; the remaining case will be discussed in section [sec:asy]. for any fixed point of the interval, the remainder term is stochastically bounded, and therefore the perturbations are small. the solution to ([equ:ode]) can be represented by a corresponding green's function explicitly, and it shall be shown that the penalized spline estimator can be approximated by a kernel estimator using this green's function. to this end, consider the differential equation subject to boundary conditions at the two endpoints of the interval. we consider two cases: (1) $m$ is even; and (2) $m$ is odd.

for even $m$, the characteristic equation yields $2m$ complex eigenvalues indexed by $k = 0, 1, \cdots, 2m-1$. the homogeneous ode then has exponentially damped oscillatory solutions indexed by $k = 0, \cdots, \frac{m}{2}-1$, with damping rates $\mu_k$ and frequencies $\omega_k$. to find the corresponding green's function for the ode on $[0,1]$, it is easy to verify that a suitable linear combination of these solutions solves the equation. to find the coefficients, define $$p_k(t) \,\equiv\, e^{-\beta \mu_k t} \big[\, c_k \cos(\omega_k \beta t) + d_k \sin(\omega_k \beta t) \,\big], \qquad q_k(t) \,\equiv\, e^{-\beta \mu_k t} \big[\, -c_k \sin(\omega_k \beta t) + d_k \cos(\omega_k \beta t) \,\big].$$ differentiation maps the pair $(p_k, q_k)$ into linear combinations of itself, so all higher derivatives of $p_k$ and $q_k$ can be tracked through powers of a fixed matrix. collecting the elements of this matrix as in ([eqn:p_q]), we obtain a linear equation for the coefficients $(c_k, d_k)$ from ([eqn:l]); it shall be shown in lemma [lem:coefficient] that this equation has a unique solution.

for odd $m$, the characteristic equation again yields the eigenvalues, and the homogeneous ode has one purely exponential solution together with damped oscillatory solutions indexed by $k = 1, \cdots, \frac{m-1}{2}$. similar to the even case, we define the candidate green's function as a linear combination of these solutions with coefficients to be determined; it can be verified that this combination solves the ode, and similarly it can be shown that the resulting function is also a higher-order kernel. to find the coefficients, we may use the functions $p_k$ and $q_k$ introduced in the last subsection; indeed, we obtain a linear equation for the coefficients from ([eqn:p]).

[lem:coefficient] each of the equations ([eqn:coefficient_even]) and ([eqn:coefficient_odd]) has a unique solution.

we introduce some trigonometric identities to be used in the proof. by observing $$-\sin\theta \sum^p_{k=1} \cos\big((2k-1)\theta\big) \,=\, \frac{1}{2} \sum^p_{k=1} \big[\, \sin(2(k-1)\theta) + \sin(-2k\theta) \,\big]$$ and telescoping the sum on the right, it is easy to see that (i) $\sum^p_{k=1} \cos((2k-1)\theta) = \frac{\sin(2p\theta)}{2\sin\theta} = 0$ whenever $\sin(2p\theta) = 0$ and $\sin\theta \neq 0$, and (ii) the analogous sine sums vanish for the same angles.

we consider an even $m$ first. the relevant diagonal entries are nonzero for all $k$, so the matrix in ([eqn:p_q]) takes a structured trigonometric form. let $u_i$ denote its $i$-th row. then $u_i u_i^t > 0$, and if $i \neq j$, then $$u_i u_j^t \,=\, \sum^{\frac{m}{2}}_{\ell=1} \cos\big( 2(2\ell-1)(i-j)\theta \big) \,=\, \sum^{\frac{m}{2}}_{\ell=1} \cos\big( (2\ell-1)(i-j)\frac{\pi}{m} \big) \,=\, 0,$$ where the last step is attained from (i). this shows that the rows are orthogonal; thus the matrix is invertible, so that equation ([eqn:coefficient_even]) has a unique solution.

we then consider an odd $m$. in this case the rows of the corresponding matrix are built from sines and cosines of angles $\gamma_s$. let $v_s$ denote the $s$-th column; clearly $v_s^t v_s > 0$. for $s \neq t$, since $$\sum^m_{k=1} \cos\big((2k-1)\gamma_s\big) \sin\big((2k-1)\gamma_t\big) \,=\, \frac{1}{2} \sum^m_{k=1} \big[\, \sin\big((2k-1)(\gamma_s+\gamma_t)\big) + \sin\big((2k-1)(\gamma_s-\gamma_t)\big) \,\big],$$ we conclude by using (i) and (ii), established at the beginning of the proof, that the off-diagonal entries vanish. this shows that the gram matrix is diagonal with positive diagonal entries; therefore it is invertible, and equation ([eqn:coefficient_odd]) has a unique solution. the following proposition shows that the functions derived above yield the equivalent kernels.
when suitably normalized, the functions in ([eqn:l_even]) and in ([eqn:p_odd]) are kernels of order $2m$. we consider only the even case, since the other case follows from a similar argument. we shall show that the kernel integrates to one and that its lower-order moments vanish. this holds true trivially for odd-order moments, by symmetry. for an even-order moment, repeatedly using integration by parts and, in light of ([eqn:l]), we obtain the desired result.

[example:kernel] as an illustration, the closed-form expressions of the first four equivalent kernels can be written down from the construction above, and their plots are shown in figure [fig:kernels].

recall that the boundary conditions for the ode ([eqn:govern]) require the first $m$ derivatives to vanish at both endpoints. in the following, we consider an even $m$ first. in this case, the homogeneous ode has linearly independent solutions that decay exponentially away from either endpoint, and the solution to ode ([eqn:govern]) subject to the boundary conditions can be written as the interior kernel solution plus a boundary correction of the form $$\sum_k \Big\{ e^{-\beta\mu_k t} \big[ a^-_k \cos(\beta\omega_k t) + b^-_k \sin(\beta\omega_k t) \big] + e^{-\beta\mu_k(1-t)} \big[ a^+_k \cos(\beta\omega_k t) + b^+_k \sin(\beta\omega_k t) \big] \Big\},$$ where the coefficients $a^{\pm}_k, b^{\pm}_k$ are to be determined from the boundary conditions and the kernel is given in ([eqn:l_even]). define the sup norm $\sup_{t\in[0,1]} |g(t)|$ for functions on the interval; the boundary conditions then lead to a banded linear system whose matrix blocks are obtained via the technique of section [sect:kernel_even], and each entry of the off-diagonal blocks is exponentially small in $\beta$.

[lem:banded_equation_even] given an even $m$, there exist positive real numbers, dependent on $m$ only, such that for all sufficiently large $\beta$ the coefficient vector is unique and exponentially small. note that for sufficiently large $\beta$, each element of the off-diagonal blocks is sufficiently small; hence it suffices to show that the diagonal blocks are invertible. to this end, let $\kappa = \max_k(|c_k|, |d_k|)$ over the kernel coefficients; a direct estimate of the columns of the diagonal blocks then shows that the equation has a unique solution that satisfies the desired bound.

[lem:banded_equation_odd] given an odd $m$, there exist positive real numbers, dependent on $m$ only, such that for all sufficiently large $\beta$ the coefficient vector is unique and satisfies the analogous bound.
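the closed forms referred to in the example above are not reproduced in this extraction; for illustration, the two lowest-order equivalent kernels known from the kernel-smoothing literature (the double-exponential kernel for m = 1 and silverman's 1984 kernel for m = 2) can be evaluated and plotted as below. whether these match the paper's exact normalization is an assumption.

```r
k1 <- function(t) 0.5 * exp(-abs(t))                               # m = 1
k2 <- function(t) 0.5 * exp(-abs(t) / sqrt(2)) *
                  sin(abs(t) / sqrt(2) + pi / 4)                   # m = 2

t <- seq(-8, 8, length.out = 400)
plot(t, k1(t), type = "l", ylab = "kernel", ylim = c(-0.1, 0.55))
lines(t, k2(t), lty = 2)
legend("topright", c("m = 1", "m = 2"), lty = 1:2)

# both integrate to one over the real line
integrate(k1, -Inf, Inf)$value
integrate(k2, -Inf, Inf)$value
```

note the damped oscillation and the negative side lobes of the m = 2 kernel, the qualitative features of the higher-order kernels in figure [fig:kernels].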
however , if for , and let with , then }(t ) - f(t)-\hat \gamma(t ) ] \rightarrow^d n\big((-1)^{m-1}c^{2 m } f^{(2m)}(t ) , ~~{\sigma_k^2(t)\over c}~\big).\ ] ] when is not equal to , the asymptotic bias has an additional term , which is of order .when grows sufficiently fast with respect to , this term is asymptotically negligible .the approximation of the equivalent kernel deteriorates when is near the boundary points of the design set . in this section, we derive an explicit formula for the equivalent kernel when is close to the boundary .we discuss the case when is close to only ; the case when is close to follows from the similar argument and thus is omitted .consider an even first .it follows from the closed - form expressions ( [ eqn : f_t ] ) and ( [ eqn : j_t1 ] ) for that for ] is of order .hence , we only consider .\ ] ] in the subsequent , we shall express the coefficients in terms of and its derivatives .this will eventually lead to an explicit expression for the kernel . in view of ( [ eqn : l_even ] ), we have moreover , it follows from section [ sect : kernel_even ] that , where and is given by in light of ( [ eqn : p_q ] ) , we have as a result , we obtain where denotes the first row of the -th power of given in ( [ eqn : p_q ] ) .this , along with ( [ eqn : f_0_d ] ) , yields for notational simplicity , let denote the inverse of the matrix defined in ( [ eqn : b_11 ] ) and let and ^t\in \, \mathbb r^m ] with for .to find the kernel in this case , particularly the kernel for the second term , recall therefore , the second term in ( [ eqn : f_md_kernel_even ] ) becomes where and , .denote by and the -th order integrals of and respectively , namely , in light of ( [ eqn : p_q ] ) , it is easy to verify that using this and , we have therefore , where ^t \in \mathbb r^m ] , the boundary kernel becomes ] and ^t \in \mathbb r^m ] .define as the differencing matrix satisfying the optimality condition is given by note that and , where `` '' represents the kronecker product .we may go though the same procedure as described in this paper .the multivariate -spline smoothing is asymptotically equivalent to kernel smoothing and the equivalent kernel is the green s function corresponding to the partial differential equation ( pde ) : subject to the boundary conditions : further study of this issue is beyond the scope of this paper and shall be reported in a future publication .
this paper addresses asymptotic properties of general penalized spline estimators with an arbitrary b-spline degree and an arbitrary order difference penalty. the estimator is approximated by a solution of a linear differential equation subject to suitable boundary conditions. it is shown that, in a certain sense, the penalized smoothing corresponds approximately to smoothing by the kernel method. the equivalent kernels for both inner points and boundary points are obtained with the help of green's functions of the differential equation. further, the asymptotic normality is established for the estimator at interior points. it is shown that the convergence rate is independent of the degree of the splines, and the number of knots does not affect the asymptotic distribution, provided that it tends to infinity fast enough.
_key words_: difference penalty, equivalent kernel, green's function, penalized spline.
with the growing amount of data in healthcare, the ability to analyze large datasets and report results adequately has become a key factor of research and innovation, supporting the creation of new technologies and improved clinical decision making. the increased complexity of these datasets brings difficulties and new challenges in terms of data management, modeling and communication. therefore, investigators are now focusing on developing reproducible research protocols, including entirely reproducible data analysis. this implies that the results reported in a publication can be immediately reproduced by granting access to both the datasets and the statistical and data mining scripts of the study. in order to make the information widely usable, the value of data collection, analysis and communication, as well as the use of common standards for sharing information, has been recognized. in addition to increasing dissemination and better understanding of research findings, data sharing can also support confirmation or refutation of research by allowing replication and increased transparency of results. however, data sharing does bring some implementation challenges and possible risks. potential invasion of participants' privacy and breaking of patients' confidentiality are primary concerns when making datasets public. secondly, adequate data management, academic and commercial primacy, and intellectual property rights, as well as journal copyrights, are factors to be careful with while publishing data. in this context, the use of an adequate framework becomes essential to allowing reproducible research without compromising such aspects, especially when analyzing and reporting results from large datasets. thus, the aim of this article is to introduce a simple reporting framework for _reproducible, interactive research_ applied to health and social science. the framework is constituted by the following three axes: (i) data (section [data]), (ii) analytical codes (section [scripts]) and (iii) dissemination (section [dissemination]). in this paper, different documentation formats and online repositories are introduced. to integrate and manage the reproducible contents, we propose the r language as the tool of choice. all the information is then published and gathered in a website for different projects. this framework is free and user friendly and is proposed to enhance reproducibility of health-science reports. the framework proposed in this paper is based on the concept that an appropriate reproducible research report should allow one to totally reproduce the methods applied. thus, we understand that besides making the analytical data, code and figures available, an adequate reproducible research framework should integrate tools and features in a way that others could reach the same results and understand the process behind them. therefore, in order to achieve an adequate integration between data, codes and outcomes (figures, tables, numerical results and others) in our framework, we utilize the r language as the central tool. r has the ability to integrate and manage different data formats, codes and outputs. in addition, it allows communication with several other analytical software packages such as sas, stata and spss. the first issue in making a research protocol reproducible is the data management process. there are several ways of storing data and many different data formats.
in our perspective, some of them are better because they allow integration with data analysis software and online repositories, and because of their ease of use. in the following sections, we demonstrate some of the formats we have been using and their integration with our reproducible framework. when making datasets publicly available, one must be concerned with the information that is going to be made public. in this context, the health insurance portability and accountability act (hipaa) developed a section on protected health information (phi), which means that individually identifiable health information must be kept confidential when sharing data in healthcare. the complete list of phi can be found at the us department of health and human services. secondly, it is important to make sure that the data is coded with appropriate names that allow other people to read and understand the content easily. to make it easier, we strongly encourage the publication of a complete and organized _data dictionary_ together with the dataset, containing variable labels, respective codes, data characteristics (continuous, discrete, ordinal, dichotomous, etc.) and any other source of relevant information (e.g. length of likert scale, categorization factors). comma separated values (csv) is a format readily available for consumption by any data analysis language or software. however, it does not provide a way to update the data once it is downloaded, other than downloading the dataset again. in addition, the csv format does not offer any security features. on the other hand, csv files have one of the best usability experiences among all the formats, and they can be easily integrated with r using online repositories (i.e. google drive or dryad) through different r packages. one such package is the rcurl package, which can integrate r with different html domains, among them a .csv spreadsheet from google docs. semantic web technologies have recently become popular given the success provided by linked open data (lod). the data is represented with the help of the resource description framework (rdf) format, while the datasets themselves are queried through sparql (a recursive acronym for sparql protocol and rdf query language). main advantages include data availability 24/7 with automated updates and also the ability to dynamically merge across datasets sharing identical elements (classes or instances). rdf data can be easily integrated with r analytical codes through the rrdf package. this package allows users to perform sparql queries inside r's workspace. in addition to this package, there is a whole set of tutorials and packages that can be used within r. javascript object notation (json) is considered one of the best data-interchange formats. it is a text format with conventions familiar to several programming languages such as c++, java, javascript and python. more information and specifications about how to integrate json data with specific applications can be found at json.org. its connection with r analytical code is executed through the rjson package, which converts json objects into r objects. after deciding on a data format, it is also mandatory to use an online repository to store the data and integrate it with the analytical codes (discussed later in section [scripts]).
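a short, hedged sketch of pulling published data into r with the packages just mentioned; the urls are placeholders, not real repository addresses.

```r
library(RCurl)
library(rjson)

csv_url <- "https://example.org/study/dataset.csv"    # hypothetical url
dat <- read.csv(text = getURL(csv_url), stringsAsFactors = FALSE)

json_url <- "https://example.org/study/dataset.json"  # hypothetical url
dat_list <- fromJSON(getURL(json_url))                # json -> r objects

str(dat)   # check variable names against the published data dictionary
```

reading directly from the repository url, rather than from a local copy, is what keeps the analysis tied to the exact published dataset.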
in the following sections, we present some options of free repositories that are used by our group. dryad is an international repository specific to data related to scientific publications. it allows data to be deposited easily and readily provides the citation related to the respective publication. dryad can be integrated with r, thus improving interoperability. figshare is an online repository, similar to dryad, that allows researchers to attach data to a publication with the ability to be cited within the paper. additionally, figshare supports not only data but also other types of research outputs such as figures, datasets, media files, papers, posters or even file sets with different types of documents. a major advantage of figshare is its ability to easily share and discover information about different research projects. we have used figshare to publish datasets (in .csv format) as well as figures; examples can be found in the references. in addition, figshare can also be integrated with r through some packages. google drive is another online repository which facilitates collaboration and sharing of files. this application from google integrates texts, spreadsheets, presentations and other editors from google (i.e. google docs, google sheets, google forms and others) and also allows the user to store forms, drawings, and different types of files in the cloud. google drive is extensively used to share data, codes and other outputs among researchers in our group. one example of connecting data stored in google drive with r is the rcurl package, as sketched in the previous section. this package allows users to compose general http requests and call urls and other web formats, such as datasets in .csv format. another way is to simply open the files stored in google drive (spreadsheets or r-scripts, for example) inside r, through rstudio. in addition, we also use google drive as a way to integrate and facilitate collaborative writing and coding in r, since this approach has been found more user friendly by our researchers than other more sophisticated repositories. publishing analytical codes is an important step in a reproducible framework, besides the connection between the codes and the data. therefore, we demonstrate here the different software that can be used to generate, publish and manage the analytical codes. as mentioned before, r is the central tool of our reproducible research framework.
by definition, r is an open source software environment for statistical analysis and graphics creation. it has been developed by a vast community of collaborators from several countries and institutions. although r is not superior to other statistical software in every aspect (such as an intuitive gui interface, or pre-defined operations), it gathers qualities which make it a better option for our framework than other statistical environments. one major advantage of r is its collaborative model for the development of packages. r has a huge library (the comprehensive r archive network - cran) of packages for statistical analysis, graphics creation, data mining and management, and integration with other software and programming languages. this collaborative ability, besides making r a powerful analytical environment, makes it assume a position in our framework as a glue for other languages and technologies such as python, java, relational databases, rdf, c, c++, weka, among many others. this way we can gather data and data storage tools, analytical coding and repositories for outputs, making a research project fully reproducible. in addition, r is used by a large community, and has a lot of references to rely on. in our group we opt to run r through rstudio. this platform is also open source and is an integrated environment that helps to visualize the different r interfaces (workspace, graph, scripts and log). other than that, it facilitates the management of multiple working directories through the definition of projects. as suggested by hadley in his github repository, the idea is to create code that can be recreated just by copying the codes we publish online. therefore, each code must be connected to the dataset and contain all the information needed for it to be performed. the elements of a reproducible script in r include the required packages, the connection to the data, the codes and their descriptions; a template following this structure is sketched below. each function in r is called from a package where it is nested, so, for anyone else to be able to reproduce our codes, she must have all the packages installed. regarding the data, we have already discussed earlier the possible formats and ways to publish it. it is noteworthy that the data must be aligned with the codes; this means that all the variables must be named exactly with the names used in the codes. also, every data management step must be described in the codes, so that whoever is trying to reproduce them might reach the same results. finally, each line must have a description of its purpose and use. github is an online repository built to facilitate the collaborative writing of computing codes. it not only allows the sharing of codes but facilitates collaboration through the copying (hereby called _forking_) of project pages in a safer way with regard to the original code. among all the qualities of using github as a reproducible strategy in the analytical coding process, we highlight its strong connectivity with r.
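the script elements listed above can be summarized in a minimal template; the package list, url, variable names and model are placeholders for illustration, not part of any actual project.

```r
# 1. required packages, installed on first run so the script is portable
pkgs <- c("RCurl", "ggplot2")                        # assumed package list
new <- pkgs[!pkgs %in% installed.packages()[, "Package"]]
if (length(new) > 0) install.packages(new)
invisible(lapply(pkgs, library, character.only = TRUE))

# 2. connection to the published dataset (placeholder url)
dat <- read.csv(text = getURL("https://example.org/project/data.csv"))

# 3. data management: every step annotated, variable names exactly as in
#    the data dictionary ('age' and 'outcome' are assumed columns)
dat$age_group <- cut(dat$age, breaks = c(0, 40, 65, Inf))

# 4. analysis, with one described step per line
fit <- lm(outcome ~ age_group, data = dat)           # assumed model
summary(fit)
```

anyone copying this file reproduces the analysis end to end, because the data connection, the management steps and the model are all declared in the script itself.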
it allows not only the sharing and management of codes in websites, but also the simulation of r outputs with knitr. there are several possibilities for using r integrated with github. we have been using github mainly to:
* publish analytical r-scripts
* promote collaboration among our data analysts when creating or debugging data analyses
* generate automatic data reports for open design projects
* create templates for data analysis (hereby called a data analysis toolbox) with explanations of the methods (using wiki pages) and descriptions of codes and outcomes.
in order to have a complete reproducible script and also to facilitate data dissemination and visualization, it is important to obtain automated and dynamic representations of tables, figures and reports. r allows the creation of analytical codes that generate automated reports, for instance with the knitr package, which translates the analysis into an html report (or other formats such as pdf). in summary, this package translates the code into a report mixing latex and markdown languages. an example of its application can be found in our github repository for the glocal open design collection project. in this specific project we used knitr associated with an r code to generate an automated report about data quality and associations. another way of using r to generate dynamic research is by developing interactive graphs. these are graphs that might be customized or modified by the user (research subject, patient or any other stakeholder) to get different slices of the dataset. r has several ways of generating interactive graphs; here, we would like to introduce rggobi and shiny (a minimal shiny example is sketched at the end of this section). other options can be found at the cran task view for dynamic graphs. since all the documentation we are using is going to be made public, we need to assure that its use is covered by a license. this will assure that any use other than that allowed by the license is not performed by the users. this is fairly important due to the relevance of the information being made public. in our framework, we have used creative commons, which is a free copyright license framework. inserting a line regarding the licensing characteristics in each of the documents in a project is sufficient to specify the type of license. the licensing assures the need for approval from the copyright owner. basically, we allow the user to share and adapt the specific parts of the project. the only restriction is that the user must attribute the documents to the original authors and must use them only for noncommercial purposes. an example license reads: this code is licensed under a creative commons attribution-noncommercial 3.0 unported license. you are free: to share - to copy, distribute and transmit the work; to remix - to adapt the work; under the following conditions: attribution - you must attribute the work in the manner specified by the author or licensor (but not in any way that suggests that they endorse you or your use of the work). noncommercial - you may not use this work for commercial purposes.
with the understanding that: waiver - any of the above conditions can be waived if you get permission from the copyright holder. public domain - where the work or any of its elements is in the public domain under applicable law, that status is in no way affected by the license. other rights - in no way are any of the following rights affected by the license: your fair dealing or fair use rights, or other applicable copyright exceptions and limitations; the author's moral rights; rights other persons may have either in the work itself or in how the work is used, such as publicity or privacy rights. notice - for any reuse or distribution, you must make clear to others the license terms of this work. the best way to do this is with a link to this web page. for more details see http://creativecommons.org/licenses/by-nc/3.0/. other than discussing methods and tools to make research reproducible, we also believe that it is important to facilitate data communication and dissemination. this will not only allow users to access the research project but will also catalyze the reach and dissemination of the respective projects. in order to disclose and gather all the material from our group's research projects that was made public, we created websites (using google sites) for each of the projects, where we included links to data repositories and code repositories, and inserted reports and graphs. any web design tool can be used, but our choice of google sites is based on its free access and user friendly interface. an example is the observer agreement website, which integrates all the reproducible documentation of our researchers' projects on observer agreement about orthopedic scales. summarizing the information discussed, we created a simple graphical demonstration of the framework's conception (figure 1). as mentioned before, the r language software holds a highlighted position in the framework's model: r is used to manage and coordinate the documentation. data is stored in open access online repositories, in r-supported formats that allow the connection between data and analytical code. the analytical codes are developed within the r interface and stored in an open access online repository. outputs generated by the codes are also stored in open access online repositories. all this information is licensed and integrated in a website for the research project. in this study we aimed to introduce a reporting framework for reproducible and interactive research, based on technologies and methods applied to some of the recent projects in our research group (ror). several tools were described to publish datasets and analytical codes, all centered on and managed by the r language software. the concept of our framework was initially based on some guidelines already published. not many reports can be found in the literature on the use of reproducible research frameworks in healthcare. some researchers do publish their datasets or codes, but generally they are published separately.
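as an illustration of the interactive graphs introduced in the previous section, the following is a minimal shiny sketch; the built-in `faithful` dataset stands in for a published clinical dataset, which is an assumption for demonstration only.

```r
library(shiny)

ui <- fluidPage(
  sliderInput("bins", "number of bins:", min = 5, max = 50, value = 20),
  plotOutput("hist")
)

server <- function(input, output) {
  output$hist <- renderPlot({
    # the reader re-slices the same published data interactively
    hist(faithful$eruptions, breaks = input$bins,
         main = "interactive histogram", xlab = "eruption time (min)")
  })
}

shinyApp(ui = ui, server = server)   # launches a local web application
```

an app like this can be linked from the project website, so that stakeholders explore the published data without writing any code themselves.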
in our framework we tried to approach different aspects of reproducibility rather than just the data, namely connectivity, dissemination and licensing, with a framework that is constituted of free and friendly technologies, facilitating _replication_ and improving _transparency_ of results. some of the tools we showcased here have been extensively discussed and used for the development of projects by many investigators. github, for instance, has been extensively used by data analysts and programmers, as have dryad and figshare, given the increased amount of data being stored in clouds. however, these advances have not been observed as often in healthcare research, specifically when it comes to big clinical data and the replication of health research protocols. although our proposed framework is still in progress and needs to be improved, we emphasize its ability not only to share data and codes in a safe way, but also to connect and disseminate information through free and user friendly technologies. we believe that only by sharing and comparing methods can a consensus framework be created. therefore, the model proposed here can help towards the standardization of reproducible research protocols in healthcare, aggregating value not only for research, but also for innovation and clinical practice.
manyika j, chui m, brown b, bughin j, dobbs r, roxburgh c, byers ah (2011). big data: the next frontier for innovation, competition, and productivity. mckinsey and company. available: http://www.mckinsey.com/insights/business_technology/big_data_the_next_frontier_for_innovation. accessed 18 april 2013.
peng rd, dominici f, zeger sl (2006). reproducible epidemiologic research. am j epidemiol. may 1;163(9):783-9.
groves t, godlee f (2012). open science and reproducible research. bmj. jun 26;344:e4383. doi: 10.1136/bmj.e4383. accessed 18 april 2013.
r-project contributors. the r project for statistical computing. available: www.r-project.org, 2013. accessed 18 april 2013.
spss software. available: http://www-01.ibm.com/software/analytics/spss. accessed 18 april 2013.
sas institute inc. statistical analysis system - sas. available: http://www.sas.com. accessed 18 april 2013.
statacorp lp. stata: data analysis and statistical software. available: www.stata.com. accessed 18 april 2013.
health information privacy and security (hipaa). available: http://www.hhs.gov/ocr/privacy/hipaa. accessed 18 april 2013.
lang dt (2013). package rcurl. available: http://cran.r-project.org/web/packages/rcurl/rcurl.pdf. accessed 18 april 2013.
linked data. available: http://linkeddata.org. accessed 18 april 2013.
willighagen e (2013). package rrdf. available: http://cran.r-project.org/web/packages/rrdf/rrdf.pdf. accessed 18 april 2013.
javascript object notation (json). available: www.json.org. accessed 18 april 2013.
couture-beil a (2013). package rjson. available: http://cran.r-project.org/web/packages/rjson/rjson.pdf. accessed 18 april 2013.
dryad digital repository. available: datadryad.org. accessed 18 april 2013.
chamberlain s, boettiger c, ram k (2013). package rdryad. available: http://cran.r-project.org/web/packages/rdryad/rdryad.pdf. accessed 18 april 2013.
figshare. available: http://figshare.com/.
accessed 18 april 2013.
moreira t, yen t, vissoci jrn, barros t, ejnisman l, massa b, pietrobon r, vail tp (2013). total hip arthroplasty complications prevalence meta-analysis at 5, 15 and 20 years follow-up. available: http://dx.doi.org/10.6084/m9. accessed 18 april 2013.
dal ponte t, pessin dv, gambeta ce, ferreira apb, braga l, vissoci jrn, braga-baiak a, gandhi m, pietrobon r (2013). the reliability of ao classification on femur fractures among orthopedic residents. available: http://dx.doi.org/10.6084/m9. accessed 18 april 2013.
boettiger c, chamberlain s, ram k, hart e (2012). package rfigshare. available: http://cran.r-project.org/web/packages/rfigshare/rfigshare.pdf. accessed 18 april 2013.
google drive. available: https://drive.google.com/. accessed 18 april 2013.
research on research and innovation (ror). available: https://sites.google.com/site/researchonresearchtech/home. accessed 18 april 2013.
rstudio inc (2013). available: www.rstudio.com. accessed 18 april 2013.
wickham h (2013). devtools. available: https://github.com/hadley/devtools/wiki/reproducibility. accessed 18 april 2013.
github. available: https://github.com/. accessed 18 april 2013.
xie y (2013). knitr: a general-purpose package for dynamic reports in r. available: https://github.com/hadley/devtools/wiki/reproducibility. accessed 18 april 2013.
glocal registry project in github. available: https://github.com/rpietro/glocalregistry. accessed 18 april 2013.
lang dt, swayne d, wickham h, lawrence m (2012). rggobi: interface between r and ggobi. available: http://cran.r-project.org/web/packages/rggobi/index.html. accessed 18 april 2013.
adler d, murdoch d (2013). rgl: 3d visualization package (opengl). available: http://cran.r-project.org/web/packages/rgl/index.html. accessed 18 april 2013.
rstudio inc (2013). shiny: web application framework for r. available: http://cran.r-project.org/web/packages/shiny/index.html. accessed 18 april 2013.
lewin-koh n (2013). cran task view: graphic displays and dynamic graphics and graphic devices and visualization. available: http://cran.r-project.org/web/views/graphics.html. accessed 18 april 2013.
creative commons. available: http://creativecommons.org/. accessed 18 april 2013.
google inc. google sites. available: https://sites.google.com/?pli=1. accessed 18 april 2013.
observer agreement. available: https://sites.google.com/site/observeragreement/home. accessed 18 april 2013.
laine c, goodman sn, griswold me, sox hc (2007). reproducible research: moving toward research the public can really trust. ann intern med. mar 20;146(6):450-3.
peng rd (2009). reproducible research and biostatistics. biostatistics, oxford journals, volume 10, issue 3, pp 405-408.
peng rd (2011). reproducible research in computational science. science, 2 december, 334(6060):1226-1227. doi: 10.1126/science.1213847
the aim of this article is to introduce a reporting framework for reproducible, interactive research applied to big clinical data, based on open source technologies. the framework is constituted by the following three axes: (i) data, (ii) analytical codes and (iii) dissemination. in this paper, different documentation formats and online repositories are introduced. to integrate and manage the reproducible contents, we propose the r language as the tool of choice. all the information is then published and gathered in a website for different projects. this framework is free and user friendly and is proposed to enhance reproducibility of health-science reports.
in the 60s ulam proposed a method to construct a matrix approximant for the perron-frobenius operator of dynamical systems, which is now known as the ulam method. the ulam conjecture was that, in the limit of small cell discretization of the phase space, this method converges and gives the correct description of the perron-frobenius operator of a system with continuous phase space. this conjecture was shown to be true for hyperbolic maps of the interval. various types of more generic maps of an interval were studied in subsequent works. further mathematical results have been obtained, with extensions and proof of convergence for hyperbolic maps in higher dimensions. the mathematical analysis of non-uniformly expanding maps is now in progress. at the same time it is known that the ulam method applied to hamiltonian systems with integrable islands of motion destroys the invariant curves, thus producing a strong modification of the properties of the perron-frobenius operator of the system with continuous phase space. recently it was shown that the ulam method naturally generates a class of directed networks, named ulam networks, whose properties have certain similarities with the world wide web (www) networks. thus the google matrix constructed for the ulam networks built for the chirikov typical map has a number of interesting properties, showing a power law decay of the pagerank vector. the classification of network nodes by the pagerank algorithm (pra) was proposed by brin and page in 1998 and became the core of the google search engine used every day by the majority of internet users. the pra is based on the construction of the google matrix, which can be written as $$g_{ij} \,=\, \alpha\, s_{ij} + (1-\alpha)/n \;.$$ here the matrix $s$ is constructed from the adjacency matrix of directed network links between the $n$ nodes, with each column normalized to unit sum and the elements of columns with only zero elements replaced by $1/n$. the second term in the r.h.s. of ([eq1]) describes a finite probability $1-\alpha$ for a www surfer to jump at random to any node. this term stabilizes the convergence of the pra, introducing a gap between the maximal eigenvalue and the other eigenvalues. usually the google search uses the value $\alpha = 0.85$; the factor $\alpha$ is also called the google damping factor. by construction $\sum_i g_{ij} = 1$, so that the asymmetric matrix $g$ belongs to the class of perron-frobenius operators. such operators naturally appear in ergodic theory and dynamical systems with hamiltonian or dissipative dynamics. the right eigenvector at $\lambda = 1$ is the pagerank vector, with positive elements $p_j$; the components of this vector are used for ordering and classification of nodes. the pagerank can be efficiently obtained by multiplication of a random vector by $g$, which is of low cost since on average there are only about ten nonzero elements in a typical line of $g$ for the www. this procedure converges rapidly to the pagerank. all www nodes can be ordered by decreasing $p_j$, so that the pagerank plays a significant role in the ordering of websites and information retrieval. the classification of nodes in decreasing order of $p_j$ is used to classify the importance of network nodes, as described in more detail in the literature.
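the power-iteration computation of the pagerank described above is compact to sketch; the toy 5-node network below and the tolerance are illustrative assumptions, while alpha = 0.85 follows the text.

```r
pagerank <- function(s, alpha = 0.85, tol = 1e-12) {
  n <- ncol(s)
  p <- rep(1 / n, n)
  repeat {
    # g = alpha * s + (1 - alpha) / n, applied without forming g explicitly
    p_new <- as.vector(alpha * (s %*% p) + (1 - alpha) / n)
    if (max(abs(p_new - p)) < tol) break
    p <- p_new
  }
  p / sum(p)
}

# toy 5-node directed network; dangling columns get uniform transitions
set.seed(1)
a <- matrix(rbinom(25, 1, 0.4), 5, 5)
a[, colSums(a) == 0] <- 1
s <- sweep(a, 2, colSums(a), "/")      # column-stochastic matrix s
order(pagerank(s), decreasing = TRUE)  # the pagerank ordering of nodes
```

avoiding the explicit dense matrix g is what makes the iteration cheap for sparse networks such as the www or the ulam networks studied here.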
Due to the spectacular success of the Google search engine, the study of PageRank properties has become a very active research field in the computer science community, and a number of interesting results and overviews are available in the literature. It is established that for large WWW subsets the PageRank is satisfactorily described by a scale-free algebraic decay P_j ∝ 1/j^β, where j is the PageRank ordering index and β ≈ 0.9. In this work we analyze the properties of the Google matrix constructed from Ulam networks generated by one-dimensional (1D) intermittency maps. Such maps were introduced by Pomeau and Manneville and have been studied extensively in the context of dynamical systems with intermittency properties; a number of mathematical results on the measure distribution and slow mixing in such maps are available, and the convergence properties of the Ulam method for such intermittency maps are discussed in a recent work. The analysis of such 1D maps is simpler compared to the 2D map considered previously: for example, the PageRank at α = 1 is described by the invariant measure of the map, which can be found analytically as a function of the map parameters. Following the approach discussed in earlier work, we study not only the PageRank but also the spectrum and the eigenstates of the Google matrix generated by the intermittency maps. Indeed, the right eigenvectors and eigenvalues of the Google matrix are generally complex, and their properties should be studied in detail to understand the behavior of the PageRank. We show that under certain conditions the properties of the PageRank can be drastically changed by parameter variation. The results are presented in the following way: in Section II we describe the class of intermittency maps and the distribution of links in the corresponding Ulam network; the spectral properties of the Google matrix and the PageRank are considered in Sections III and IV; the discussion of the results is presented in Section V. The intermittency maps of the interval considered in this paper are described by two map functions f(x) depending on positive parameters, defined piecewise for the first model in Eq. (2) and for the second model in Eq. (3), with the second branch of Eq. (3) defined for 1/2 ≤ x ≤ 1. The dynamics is given by the iteration x_{t+1} = f(x_t). The map functions are shown in Fig. 1. According to the usual theory of intermittency maps and ergodic theory, in the case of chaotic dynamics the steady-state invariant distribution of the map is proportional to the time spent by a trajectory at point x, so that one has a power-law distribution at small values of x [Eq. (4)]. For the first-model map the dynamics is fully chaotic, while for the second-model map a fixed-point attractor appears in a certain parameter regime. The Ulam networks generated by the intermittency maps (2), (3) are constructed in a way similar to the one described previously: the whole interval is divided into N equal cells, and N_c trajectories (randomly distributed inside cell j) are propagated by one map iteration to obtain the matrix elements for transitions from cell j to cell i: S_ij = N_ij / N_c, where N_ij is the number of trajectories arriving in cell i from cell j.
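A minimal sketch of this Ulam construction for a generic 1D map of the interval follows; the stand-in map (a Pomeau-Manneville-like iteration x + x^z mod 1), the cell count, and the trajectory count are illustrative assumptions, since the paper's exact piecewise models are not reproduced above.

```python
import numpy as np

def ulam_matrix(f, N=100, n_traj=1000, rng=None):
    """Ulam approximant of the Perron-Frobenius operator of a 1D map f on [0, 1].

    The interval is split into N equal cells; n_traj points, uniformly
    distributed in cell j, are iterated once, and S[i, j] records the
    fraction that lands in cell i (so every column sums to 1)."""
    rng = np.random.default_rng(rng)
    S = np.zeros((N, N))
    for j in range(N):
        x = (j + rng.random(n_traj)) / N          # random points inside cell j
        i = np.minimum((f(x) * N).astype(int), N - 1)
        np.add.at(S[:, j], i, 1.0 / n_traj)
    return S

# Stand-in intermittency map (assumed form, not the paper's exact model):
z = 2.0
f = lambda x: (x + x**z) % 1.0

S = ulam_matrix(f, N=200, n_traj=2000, rng=0)
G = 0.85 * S + 0.15 / S.shape[0]                  # Google matrix of the Ulam network
```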
The density of the Google matrix elements is shown in Fig. 2 for the first model; the structure of the matrix repeats the form of the map function. We varied the number of trajectories per cell over a broad interval, and the obtained results are not sensitive to this variation. The differential distributions of the number of nodes with a given number of ingoing or outgoing links are shown in Fig. 3. The first model shows a sharp drop of the ingoing-link distribution and a power-law decay of the outgoing-link distribution; for the second model the situation is inverted. These properties can be understood from the following arguments. For the first model, the number of outgoing links of a cell is proportional to the local derivative of the map function, which diverges near the point where the slope of f becomes singular; counting the number of nodes with a given number of links then yields an algebraic differential distribution, and for the data of Fig. 3 (top panel) this estimate is in good agreement with the numerical data. For the second model the derivative is always finite and we have a sharp drop of the outgoing-link distribution. The number of ingoing links is instead determined by the behavior of f near x = 0 (in our case with a specific intermittency exponent, but we consider here the general case); hence the number of nodes with a given number of ingoing links also decays algebraically, and for our parameter values the resulting exponent is in good agreement with the data of Fig. 3. For the first model the corresponding quantity is always finite and we have a sharp drop of the ingoing-link distribution. This analysis allows us to understand the origin of the power-law distributions of links in the Ulam networks generated by 1D maps. The distribution of the eigenvalues λ of the Google matrix at α = 1, constructed from the Ulam network described above, is shown in Fig. 4 for the two models (2) and (3). As in earlier work, we characterize an eigenstate ψ_i by its participation ratio (PAR), defined as ξ = (Σ_j |ψ_i(j)|²)² / Σ_j |ψ_i(j)|⁴. In fact, the PAR gives an effective number of nodes populated by a given eigenstate; it is broadly used in systems with disorder and Anderson localization. The states are normalized by the condition Σ_j |ψ_i(j)|² = 1; for the PageRank, ordered in decreasing order of probability, we also use the probability normalization Σ_j P_j = 1. There are a few main features of the spectrum of G visible in Fig. 4 for the two models: there are states with |λ| close to 1 which have relatively small values of the PAR; there is a circle-like structure of eigenvalues; and the maximal PAR values are found in the middle ring around the center. The large circle is present for both maps, which means that it appears due to the left branch of the map, corresponding to intermittent motion near x = 0. The density distributions in the decay rate Γ = −2 ln|λ| are shown in Fig. 5 (the density counts the number of states in a given interval of Γ). It is clear that in the limit of large matrix size we have convergence to a limiting distribution with a characteristic peak at a finite Γ. Examples of a few eigenstates with values of Γ equal and close to zero are shown in Fig. 6 (the index gives the cell position; the states are ordered by increasing Γ). The first state, with Γ = 0, is the steady-state distribution generated by the map (the states for the other map model have a similar structure and we do not show them here).
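Assuming the Google matrix G from the previous sketches, the spectral diagnostics used in this section can be computed directly; the decay-rate convention in the last line is the one assumed above.

```python
import numpy as np

def spectrum_and_par(G):
    """Eigenvalues of G and the participation ratio (PAR) of each right
    eigenvector, xi = (sum_j |psi(j)|^2)^2 / sum_j |psi(j)|^4, i.e. an
    effective number of cells populated by the state."""
    lam, vecs = np.linalg.eig(G)
    prob = np.abs(vecs) ** 2
    par = prob.sum(axis=0) ** 2 / (prob ** 2).sum(axis=0)
    return lam, par

lam, par = spectrum_and_par(G)
gamma = -2.0 * np.log(np.abs(lam))       # decay rates (assumed convention)
k = np.argmin(gamma)
print(lam[k], par[k])                    # state closest to lambda = 1 and its PAR
```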
We find that the Γ = 0 state decays algebraically with the cell index, in agreement with the theoretical expression (4) (the numerical fit gives an exponent close to the theoretical value). This state is monotonic in the cell index, so that it coincides with the PageRank up to a constant factor. Eigenstates with the next values of Γ are characterized by the same decay at large cell index, with additional minima at certain positions, similar to the few nodes of eigenstates in quantum mechanics. The structure of the eigenstates changes when the value of Γ is increased. Typical states are shown in Fig. 7: the states on the first circle of the spectrum have a peaked structure at certain cells with a plateau at large cell index, while for values of Γ at the maximum of the density distribution (see Fig. 5) the eigenstates are delocalized over the whole interval. The effective number of sites contributing to an eigenstate can be characterized by the PAR. For the PageRank the value of ξ is independent of the matrix size, as clearly shown in Fig. 8; this is due to the power-law decay of the PageRank, which corresponds to an algebraic localization. The dependence of ξ on Γ is shown in Fig. 9. For small Γ it can be fitted by a power-law growth; the origin of the exponent of this growth requires further analysis. Finally, we note that we also determined the dependence of the number of states with small values of Γ on the matrix size N. Our data (not shown) are well described by a growth proportional to N, so that, in contrast to the results presented previously for a dissipative map, there are no signs of the fractal Weyl law. We attribute this to the fact that, in contrast to the dissipative map with a global contraction studied there, in the intermittency maps all dynamics takes place on the whole one-dimensional interval, with an inhomogeneous distribution of measure but without fractality. The spectral gap between the equilibrium state at λ = 1 and the next state with maximal |λ| is very small and goes to zero algebraically with increasing N (see Fig. 10). This happens due to the dynamical properties of the maps (2), (3), where the time spent at small x grows anomalously with the proximity to the marginal fixed point, and this fixes the exponent of the gap scaling with N. Due to this decrease of the gap with N, the PRA converges badly at α = 1 for large matrix sizes. Up to moderate sizes we use direct diagonalization of the matrix, which gives an algebraic decay of the PageRank (see Fig. 6). For larger values of N we used the continuous map, obtaining the PageRank from an equilibrium distribution over the cells of size 1/N after a large number of map iterations with a large number of trajectories; this distribution converges to a limiting one at large iteration numbers (see Fig. 11). Both methods give the same result for the decay exponent. The numerical data for the exponent are in good agreement with the theoretical dependence (4), as shown in Fig. 12 (we attribute the small deviations from the theoretical values to finite-size effects). For α < 1 the PRA, described in the introduction, is stable and converges rapidly to the PageRank; it gives the same results as the exact diagonalization. The dependence of the PageRank on α is shown in Fig. 11 (top panel). A small decrease of α below unity modifies the PageRank at large ordering index, making it very flat in this region; for smaller α the PageRank becomes completely delocalized over the whole system size. For the second model the PageRank depends strongly on the map parameters. When the dynamics is chaotic and the steady-state distribution is given by Eq. (4), the properties of the PageRank are similar to those of the first model described above; e.g.,
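The decay exponent of the PageRank can be estimated with a simple log-log fit over an intermediate range of the ordering index; the fitting window below is an arbitrary choice, and the helper assumes the pagerank function from the earlier sketch.

```python
import numpy as np

def pagerank_exponent(p, j_min=10, j_max=None):
    """Fit P_j ~ 1/j**beta on a log-log scale over an intermediate range of
    the ordering index j (boundaries excluded to reduce finite-size effects)."""
    p_sorted = np.sort(p)[::-1]
    j_max = j_max or len(p_sorted) // 2
    j = np.arange(j_min, j_max)
    slope, _ = np.polyfit(np.log(j), np.log(p_sorted[j]), 1)
    return -slope

beta = pagerank_exponent(pagerank(G))
print(f"PageRank decay exponent beta ~ {beta:.2f}")
```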
the decay exponent of the PageRank is independent of the matrix size (see Fig. 13, bottom panel). However, when the map has a fixed-point attractor, the PageRank becomes localized practically on one site at α = 1. In this regime the PageRank is very sensitive to variations of α: as α is reduced below unity, the fitted decay exponent changes significantly. The delocalization of the PageRank away from the fixed-point-attractor state is also clearly seen in the variation of the PAR shown in Fig. 14. This shows that even if at α = 1 the PageRank is dominated by only one node, a decrease of α allows one to obtain a weighted contribution of other nodes. We also note that in the phase with a fixed-point attractor the spectrum of eigenvalues globally has a structure rather similar to the one in the chaotic phase (see Fig. 4, right panel). However, the PAR values of all eigenstates become rather close to unity, showing that almost all eigenstates are strongly localized in this phase: for example, almost all PAR values lie in the range from 1 to 4, and it is interesting that about 53% of the states then lie on the circle structure (in the chaotic phase this circle contains 23% of the states, see Fig. 4, right panel). The present studies have allowed us to establish a number of interesting properties of the Google matrix constructed for the Ulam network generated by intermittency maps. A general property of such networks is the existence of states with eigenvalues very close to unity. The PageRank of such networks at α = 1 is characterized by a power-law decay with an exponent determined by the parameters of the map. It is interesting to note that for the WWW it is usually observed that the decay of the PageRank follows the decay law of the ingoing-link distribution; in our case the decay of the PageRank is independent of the link-distribution decay, as clearly shown by Eqs. (5), (6) and the data of Figs. 3, 11 and 13. In fact, a map with singularities in both branches will have the asymptotic decay of the link distributions given by Eqs. (5), (6), but the decay of the PageRank will be fixed by the invariant measure, hence being independent of the decay of the link distributions. Our results also show that while at α close to unity the decay of the PageRank keeps its algebraic exponent, at smaller values of α the PageRank becomes completely delocalized (see Fig. 11); in this delocalized phase the PAR grows with the system size approximately as a power law. The delocalization of the PageRank can also take place at α = 1 due to variation of the parameters of the map. It is rather clear that the delocalization of the PageRank makes the Google search inefficient. We hope that the properties of Ulam networks generated by simple maps will be useful for future studies of real directed networks, including the WWW. Indeed, the whole world would go blind if one day the Google search became inefficient; investigations of Ulam networks can help us to understand the properties of directed networks better, and thus help to prevent such a dangerous situation. We thank A. S. Pikovsky for a useful discussion of his results.
S. M. Ulam, A Collection of Mathematical Problems, Vol. 8 of Interscience Tracts in Pure and Applied Mathematics, Interscience, New York, p. 73.
T.-Y. Li, J. Approx. Theory 17, 177 (1976).
Z. Kovács and T. Tél, Phys. Rev. A 40, 4641 (1989).
Z. Kaufmann, H. Lustfeld, and J. Bene, Phys. Rev. E 53, 1416 (1996).
G. Froyland, R. Murray, and D. Terhesiu, Phys. Rev. E 76, 036702 (2007).
J. Ding and A. Zhou, Physica D 92, 61 (1996).
M. Blank, G. Keller, and C. Liverani, Nonlinearity 15, 1905 (2002).
D. Terhesiu and G. Froyland, Nonlinearity 21, 1953 (2008).
G. Froyland, S. Lloyd, and A. Quas, Ergod. Th. Dynam. Sys. 1, 1 (2008).
G. Froyland, Extracting dynamical behaviour via Markov models, in A. Mees (ed.), Nonlinear Dynamics and Statistics: Proceedings, Newton Institute, Cambridge (1998), p. 283, Birkhäuser Verlag AG, Berlin (2001).
R. Murray, "Ulam's method for some non-uniformly expanding maps", preprint (2009).
D. L. Shepelyansky and O. V. Zhirov, arXiv:0905.4162v2 [cs.IR] (2009).
S. Brin and L. Page, Computer Networks and ISDN Systems 33, 107 (1998).
A. M. Langville and C. D. Meyer, Google's PageRank and Beyond: The Science of Search Engine Rankings, Princeton University Press, Princeton (2006); D. Austin, AMS Feature Columns (2008), available at www.ams.org/featurecolumn/archive/pagerank.html
I. P. Cornfeld, S. V. Fomin, and Y. G. Sinai, Ergodic Theory, Springer.
M. Brin and G. Stuck, Introduction to Dynamical Systems, Cambridge Univ. Press, Cambridge, UK (2002).
G. Osipenko, Dynamical Systems, Graphs, and Algorithms, Springer, Berlin (2007).
P. Boldi, M. Santini, and S. Vigna, in Proceedings of the 14th International Conference on World Wide Web, A. Ellis and T. Hagino (eds.), ACM Press, New York, p. 557 (2005); S. Vigna, ibid.
K. Avrachenkov and D. Lebedev, Internet Mathematics 3, 207 (2006).
K. Avrachenkov, N. Litvak, and K. S. Pham, in Algorithms and Models for the Web-Graph: 5th International Workshop, WAW 2007, San Diego, CA, Proceedings, A. Bonato and F. R. K. Chung (eds.), Springer-Verlag, Berlin, Lecture Notes in Computer Science 4863, 16 (2007).
N. Litvak, W. R. W. Scheinhardt, and Y. Volkovich, Internet Math. 4, 175 (2007).
K. Avrachenkov, D. Donato, and N. Litvak (eds.), Algorithms and Models for the Web-Graph: 6th International Workshop, WAW 2009, Barcelona, Proceedings, Lecture Notes in Computer Science 5427, Springer-Verlag, Berlin (2009).
D. Donato, L. Laura, S. Leonardi, and S. Millozzi, Eur. Phys. J. B 38, 239 (2004); G. Pandurangan, P. Raghavan, and E. Upfal, Internet Math. 3, 1 (2005).
Y. Pomeau and P. Manneville, Comm. Math. Phys. 74, 189 (1980).
T. Geisel and S. Thomae, Phys. Rev. Lett. 52, 1936 (1984).
T. Geisel, J. Nierwetberg, and A. Zacherl, Phys. Rev. Lett. 54, 616 (1985).
A. S. Pikovsky, Phys. Rev. A 43, 3146 (1991).
R. Artuso and C. Manchein, Phys. Rev. E 80, 036210 (2009).
M. Thaler, J. Stat. Phys. 79, 739 (1995).
M. Holland, Ergod. Th. & Dynam. Sys. 25, 133 (2005).
O. Giraud, B. Georgeot, and D. L. Shepelyansky, Phys. Rev. E 80, 026107 (2009).
E. Ott, Chaos in Dynamical Systems, Cambridge Univ. Press, Cambridge (1993).
A. Lichtenberg and M. Lieberman, Regular and Chaotic Dynamics, Springer.
We study the properties of the Google matrix of an Ulam network generated by intermittency maps. This network is created by the Ulam method, which gives a matrix approximant for the Perron-Frobenius operator of a dynamical map. The spectral properties of the eigenvalues and eigenvectors of this matrix are analyzed. We show that the PageRank of the system is characterized by a power-law decay with an exponent dependent on the map parameters and on the Google damping factor. Under certain conditions the PageRank is completely delocalized, so that the Google search in such a situation becomes inefficient.
One of the fundamental problems in computational biology is that of inferring evolutionary relationships between a set of observed amino acid sequences or taxa. These evolutionary relationships are commonly represented by a tree (phylogeny) describing the descent of all observed taxa from a common ancestor, a reasonable model provided we are working with sequences over small enough regions or distant enough relationships that we can neglect recombination or other sources of reticulation. Several criteria have been implemented in the literature for inferring phylogenies, of which one of the most popular is maximum parsimony (MP). Maximum parsimony defines the tree(s) with the fewest mutations as the optimum, generally a reasonable assumption for short time-scales or conserved sequences. It is a simple, non-parametric criterion, as opposed to common maximum-likelihood models or various popular distance-based methods. Nonetheless, MP is known to be NP-hard, and practical implementations of MP are therefore generally based on heuristics which do not guarantee optimal solutions. For sequences where each site or character is expressed over a set of discrete states, MP is equivalent to finding a minimum Steiner tree displaying the input taxa. For example, general DNA sequences can be expressed as strings of four nucleotide states and proteins as strings of 20 amino acid states. Recently, Sridhar et al. used integer linear programming to efficiently find global optima for the special case of sequences with binary characters, which are important when analyzing single nucleotide polymorphism (SNP) data. The solution was made tractable in practice in large part by a pruning scheme proposed by Buneman and extended by others. The so-called Buneman graph for a given set of observed strings is an induced subgraph of the complete graph (whose nodes represent all possible strings of mutations) such that it still contains all distinct minimum Steiner trees for the observed data. By finding the Buneman graph, one can often greatly restrict the space of possible solutions to the Steiner tree problem. While there have been prior generalizations of the Buneman graph to non-binary characters, they do not provide any comparable guarantees usable for accelerating Steiner tree inference.
In this paper, we provide a new generalization of the definition of the Buneman graph for any finite number of states, which guarantees that the resulting graph will contain all distinct minimum Steiner trees of the multi-state input set. Further, we allow transitions between different states to have independent weights. We then utilize the integer linear programming techniques developed previously to find provably optimal solutions to the multi-state MP phylogeny problem. We validate our method on four specific data sets chosen to exhibit different levels of difficulty: a set of nucleotide sequences from _Oryza rufipogon_, a set of human mt-DNA sequences representing prehistoric settlements in Australia, a set of HIV-1 reverse transcriptase amino acid sequences and, finally, a 500-taxa human mitochondrial DNA data set. We further compare the performance of our method, in terms of both accuracy and efficiency, with leading heuristics, PAUP* and the pars program of PHYLIP, showing our method to yield comparable and often far superior run times on non-trivial data sets. Let I be an input matrix that specifies a set of n taxa over a set C of m characters, such that the entries of I give, for each taxon, the state of each of its characters. The taxa of I represent the terminal nodes of the Steiner tree inference. Further, for each character p we are given the number of admissible states; the set of all possible states is the product space S of the per-character state sets, and we represent the p-th character of any element u of S by (u)_p. The state space can be represented as a graph whose vertex set is S and whose edges connect pairs of elements differing in exactly one character. Furthermore, each character p comes with a set of non-negative weights α_p[i, j], giving the cost of a transition between states i and j of that character; the weights can be rescaled and still ensure that the shortest path connecting any set of states remains the same. We can now define a distance over S such that, for any two elements u and v,

d_α[u, v] ≡ Σ_{p ∈ C} α_p[(u)_p, (v)_p].

Given any subgraph of the state graph, we can define its length to be the sum of the lengths of all its edges. To construct the Buneman graph, a rule is defined for discarding or retaining the subset of vertices contained in each pair of overlapping blocks, where a block collects the vertices sharing fixed states for a given pair of characters; only vertices in retained blocks survive. Buneman previously established, for the binary case, that the retained vertex set contains all terminal and Steiner nodes of all distinct minimum-length Steiner trees. We extend this prior result to the weighted multi-state case by presenting an algorithm, analogous to the binary case, to construct a graph with these properties. Briefly, the algorithm looks at the input matrix projected onto each distinct pair of characters (p, q) and constructs a matrix B^{pq}, where the element B^{pq}_{ij} is 1 only if there is at least one taxon whose p-th character is i and whose q-th character is j, and zero otherwise. The algorithm then implements a rule for each such pair of characters that allows us to enumerate the possible states of those characters in any optimal Steiner tree. For clarity, we will assume that each state of each character is expressed in at least one input taxon, since states that are not present in any taxon cannot be present in a minimum-length tree, because of the triangle inequality. The rule is defined by a matrix R^{pq} determined by the following algorithm:

1. Set R^{pq}_{ij} = 0 for all i and j.
2. If all non-zero entries in B^{pq} are contained in the "cross" of entries determined by a unique pair of states i_{pq} and i_{qp}, then set R^{pq}_{ij} = 1 for all (i, j) such that either i = i_{pq} or j = i_{qp} (see Fig. [pruning], where this pair of states is marked).
3. If the condition in step 2 is not satisfied, then set R^{pq}_{ij} = 1 for all (i, j).

(Fig. [pruning]: when all taxa are present in the shaded cross of blocks, vertices in all other blocks can be discarded.)
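To make the pairwise rule concrete, here is a minimal sketch of the rule-matrix construction for one character pair, under the notational assumptions introduced above (per-taxon state columns col_p, col_q, and the "cross" of a candidate pair i_pq, i_qp). It is an illustration, not the authors' implementation; in particular, resolving ties between multiple covering crosses by falling back to step 3 is an assumption.

```python
import numpy as np

def rule_matrix(col_p, col_q, k_p, k_q):
    """Rule matrix R^{pq} for one character pair (p, q).

    col_p, col_q: integer state arrays over the taxa; k_p, k_q: state counts.
    B[i, j] = 1 iff some taxon shows state i for p and state j for q.
    If all of B's support lies in a unique cross (row i_pq union column
    i_qp), retain only that cross; otherwise retain every block."""
    B = np.zeros((k_p, k_q), dtype=bool)
    B[col_p, col_q] = True
    covers = []
    for i_pq in range(k_p):
        for i_qp in range(k_q):
            cross = np.zeros_like(B)
            cross[i_pq, :] = True
            cross[:, i_qp] = True
            if not np.any(B & ~cross):        # all support inside the cross
                covers.append(cross)
    if len(covers) == 1:                      # step 2: unique covering pair
        return covers[0]
    return np.ones_like(B)                    # step 3 (and degenerate ties, assumed)
```

The generalized Buneman graph then retains a vertex u exactly when R^{pq}[(u)_p, (u)_q] = 1 for every character pair (p, q), which is the intersection construction described next.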
This set of rules then defines a subgraph G^{pq} of the state graph for each pair of characters, such that any vertex u belongs to G^{pq} if and only if R^{pq}_{(u)_p (u)_q} = 1. The intersection of these subgraphs then gives the generalized Buneman graph for the input, given any set of distance metrics. Note that the Buneman graph of any subset of the taxa is a subset of the Buneman graph of the full set. It is easily verified that for binary characters our algorithm yields the standard Buneman graph. The remainder of this paper makes two contributions. First, it shows that the generalized Buneman graph defined above contains all minimum Steiner trees for the input taxa; this in turn establishes that restricting the search space for minimum Steiner trees to this graph does not affect the correctness of the search. The paper then empirically demonstrates the value of these methods for efficiently finding minimum Steiner trees in practice. Before we prove that all Steiner minimum trees connecting the taxa are displayed in the generalized Buneman graph, we need to introduce the notion of a neighborhood decomposition. Suppose we are given any tree displaying the set of taxa. We contract each degree-two Steiner node (i.e., any degree-two node that is not an input taxon) and replace its two incident edges by a single weighted edge; such trees are called X-trees. Each X-tree can be uniquely decomposed into its phylogenetic X-tree components, which are maximal subtrees whose leaves are taxa. Formally, each phylogenetic X-tree consists of a set X of taxa and a tree T displaying them, such that there is a bijection, or labeling, between the elements of X and the set of leaves of T (Fig. [skltn]). All vertices of T with degree 3 or higher will be called branch points. From now on we will assume that, given any input tree, such a decomposition has already been performed (Fig. [skltn]). Two phylogenetic X-trees are considered equivalent if they have identical length and the same tree topology; by identical tree topology we mean that there is a bijection between the edge sets of the two trees such that removing any edge and its image partitions the leaves into identical bi-partitions. We define two trees to be neighborhood distinct if, after neighborhood decomposition, they differ in at least one phylogenetic X-tree component. We define a labeling γ of a phylogenetic X-tree as an injective map between the vertices of T and those of the state graph, such that γ_v represents the character string for the image of vertex v. Since leaf labels are fixed to be the character strings representing the corresponding taxa, γ_v is fixed for any leaf v; identical phylogenetic X-trees can, however, differ in the labels of internal branch points. We will use the generalization of the Fitch-Hartigan algorithm to weighted parsimony proposed by Erdos and Szekely. The algorithm uses a similar forward-pass/backward-pass technique to compute an optimal labeling for any phylogenetic X-tree: arbitrarily root the tree at some taxon and, starting with the leaves, compute the minimum length l_{γ_v}(T_v) of any labeling of the subtree T_v consisting of the vertex v and its descendants, where v is labeled γ_v, as follows.
1. If v labels a leaf, l_{γ_v}(T_v) = 0 when γ_v coincides with the fixed leaf label, and l_{γ_v}(T_v) is infinite otherwise.
2. If v has children v_1, ..., v_k, and T_{v_i} is the subtree consisting of v_i and its descendants,

l_{γ_v}(T_v) = Σ_i min_{γ_{v_i}} { l_{γ_{v_i}}(T_{v_i}) + d_α[γ_v, γ_{v_i}] },

where the minimum is to be taken over all possible labels, for each character and for each child.

The optimal labeling of T is one which minimizes the length at the root. Labels for each descendant are inferred in a backward pass from the root to the leaves, using Eq. [score]. Note that the minimum length of a tree is just the sum of the minimum lengths for each character, i.e., l(T) = Σ_p l^p(T), where l^p(T) is the minimum cost of the tree for character p. Briefly, our proof is structured as follows: given any phylogenetic X-tree labeling (typically denoted γ below), we will show that the generalized Buneman pruning algorithm for each pair of characters (p, q) defines a subgraph G^{pq} which contains at least one possible labeling of no higher cost (typically denoted φ below). We will then show that the intersection of these subgraphs thus contains an optimal labeling for the tree. If the pruning condition in step 2 of the algorithm that defines the Buneman graph is not implemented for the pair of characters (p, q), then G^{pq} is the whole state space and all labels are necessarily inside it. We prove the following lemma for the case when the pruning condition is satisfied, i.e., there exist unique states i_{pq} of p and i_{qp} of q such that each element of the set of leaves either has its p-th character equal to i_{pq}, or its q-th character equal to i_{qp}, or both. Each time we relabel vertices, we will keep all characters except p and q fixed. To economize our notation, we will represent the sum of the costs in p and q of the tree labeled by γ, which has some branch point b as the root, simply by writing L(γ, T); we write the (p, q)-labels of a vertex v as (x_v, y_v), and we call Ψ the cross of labels with x_v = i_{pq} or y_v = i_{qp}. The claim is that, for any labeling γ, there is a labeling φ that agrees with γ outside p and q, places the labels of all branch points inside Ψ, and satisfies L(γ, T) ≥ L(φ, T) + d_α[γ_b, φ_b] for the root branch point b; a tree admitting such a relabeling is called a Ψ-tree. For a tree with a single branch point b, direct comparison of the costs of the candidate labels φ_b of the form (i_{pq}, y_b), (x_b, i_{qp}) or (i_{pq}, i_{qp}) against the leaves of b shows that at least one such choice, with d_α[γ_b, φ_b] = α_p[x_b, i_{pq}], α_q[y_b, i_{qp}] or their sum, achieves a length no more than L(γ, T) − d_α[γ_b, φ_b]; therefore this is a Ψ-tree. This proves the base case for our proposition. We will now assume that the claim is true for all trees with n branch points or fewer. Suppose we have a labeled tree with n + 1 branch points whose (p, q)-labels are all outside Ψ. Let v_1, ..., v_k be the children of a branch point b in T and T_{v_i} be the subtrees of each v_i and their descendants; note that some of these descendants may be leaves. Since T has at least two branch points, one of the descendants of b (say v_1) must be a branch point (Fig. [bpqinduction](b)). Let T_b be the subtree consisting of b and all its other descendants. For clarity we will write (x_b, y_b) and (x_1, y_1) for the (p, q)-labels of b and v_1. This implies

L(γ, T) = L(γ, T_b) + L(γ, T_1) + α_p[x_b, x_1] + α_q[y_b, y_1].    [eq:gamma]

There are four possibilities. 1. Both T_b and T_1 are Ψ-trees with n or fewer branch points: in this case, by induction, both can be relabeled with root labels inside Ψ satisfying the claimed bound. 2. Only T_1 is a Ψ-tree (it will become clear that the argument works for the other possible choices): since T_1 is a Ψ-tree, by induction we can choose a labeling φ of T_1 with its root label inside Ψ. 3. Neither T_b nor T_1 is a Ψ-tree: it then follows from induction that there is a labeling φ with L(γ, T_b) ≥ L(φ, T_b) + d_α[γ_b, φ_b]. There are two possibilities in this case, x_b = i_{pq} or y_b = i_{qp} for the relabeled root; as before, we will prove the claim for the former possibility, while the latter case can be proved by an identical argument.
In this case

L(φ, T) = L(φ, T_b) + L(φ, T_1) + α_q[y_b, y_1],

and combining this with Eq. [eq:gamma] we get

L(γ, T) ≥ L(φ, T_b) + L(φ, T_1) + d_α[γ_b, φ_b] + d_α[γ_{v_1}, φ_{v_1}] + α_p[x_b, x_1] + α_q[y_b, y_1]
        ≥ L(φ, T_b) + L(φ, T_1) + d_α[γ_b, φ_b] + α_q[y_b, y_1]
        = L(φ, T_b) + L(φ, T_1) + d_α[γ_b, φ_b] + d_α[φ_b, φ_{v_1}]
        = L(φ, T) + d_α[γ_b, φ_b].

This also satisfies the claim; the proof for the alternative choice is identical. 2. The choices x_1 = i_{pq} or y_1 = i_{qp}: as before, we show the calculation for the former possibility. In this case

L(φ, T) = L(φ, T_b) + L(φ, T_1) + α_p[x_b, i_{pq}] + α_q[i_{qp}, y_1].

Combining this with Eq. [eq:gamma] we get

L(γ, T) ≥ L(φ, T_b) + L(φ, T_1) + d_α[γ_b, φ_b] + d_α[γ_{v_1}, φ_{v_1}] + α_p[x_b, x_1] + α_q[y_b, y_1]
        = L(φ, T_b) + L(φ, T_1) + α_p[x_b, i_{pq}] + α_q[i_{qp}, y_1] + α_p[x_b, x_1] + α_q[y_b, y_1]
        ≥ L(φ, T_b) + L(φ, T_1) + α_p[x_b, i_{pq}] + α_q[i_{qp}, y_1]
        = L(φ, T_b) + L(φ, T_1) + d_α[φ_b, φ_{v_1}] = L(φ, T).

But if we now relabel b and v_1 with labels inside Ψ, while keeping the labels of all other vertices of φ fixed, we get a labeling φ̃ with L(φ, T_1) ≥ L(φ̃, T_1). This immediately gives L(γ, T) ≥ L(φ̃, T), which is the required bound. Identical arguments work for the remaining choices.
This proves that if either of the two claimed possibilities always holds for X-trees with n branch points or fewer, then it also holds for a tree with n + 1 branch points; the proof for arbitrary n follows by induction. Thus, given a minimum-length phylogenetic X-tree, there is an optimal labeling for each branch point within the cross Ψ. The lemma [bpqlem] establishes that for any minimum Steiner tree labeled by γ and any branch point b whose label lies outside G^{pq}, an alternative optimal labeling exists such that the label of b is inside the union of blocks

[c_p(i_{pq}) c_q(i_{qp})] ∪ [c_p(i_{pq}) c_q((γ_b)_q)] ∪ [c_p((γ_b)_p) c_q(i_{qp})].

If we root the tree at b, the new optimal labeling for all its descendants is inferred in a backward pass of the Erdos-Szekely algorithm. This ensures that each branch point in a minimum-length phylogenetic X-tree is labeled inside G^{pq}. Now take the intersection of the G^{pq} over all pairs of characters for which the pruning condition is satisfied. It follows from Lemma [bpqlem] that this intersection also contains an alternate optimal labeling of the tree. Note that the intersection is a non-empty subset of the state space: this must be true because, given a character pair (p, q), each retained union of blocks contains at least one taxon, so the rule matrix that defines the Buneman graph must have ones for each of these blocks; therefore each element of the intersection represents a distinct vertex of the Buneman graph. As argued before, any minimum Steiner tree can be decomposed uniquely into phylogenetic X-tree components, and the previous corollary ensures that each phylogenetic X-tree can be labeled optimally inside the generalized Buneman graph. It follows that all distinct minimum Steiner trees are contained inside the generalized Buneman graph. We briefly summarize the ILP flow construction used to find the optimal phylogeny. We convert the generalized Buneman graph into a directed graph by replacing each edge between vertices u and v with two directed edges, each with the weight determined by the distance metric. Each directed edge has a corresponding binary variable in our ILP. We arbitrarily choose one of the taxa as the root, which acts as a source for the flow model; the remaining taxa correspond to sinks. Next, we set up real-valued flow variables representing the flow along each edge that is intended for each terminal; the root outputs one unit of flow for each terminal. The Steiner tree is the minimum-cost tree satisfying the flow constraints. This ILP was described in earlier work, and we refer the reader to that paper for further details. We implemented our generalized Buneman pruning and the ILP in C++; the ILP was solved using the Concert callable library of CPLEX 10.0. We compared the performance of our method with two popular heuristic methods for maximum-parsimony phylogeny inference: pars, which is part of the freely available PHYLIP package, and PAUP*, the leading commercial phylogenetics package. We also attempted to use PHYLIP's exact branch-and-bound method dnapenny for nucleotide sequences, but discontinued the tests when it failed to solve any of the data sets in under 24 hours. In each case, pars and PAUP* were run with default parameters. A compact sketch of the flow ILP follows.
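The displayed ILP does not survive in the text above; the following sketch (using the PuLP modeling library) shows the kind of single-source multicommodity-flow formulation the paragraph describes: binary arc-selection variables, per-terminal flow variables, one unit of flow from the root to each terminal, and flow permitted only on selected arcs. The library choice and all names are illustrative assumptions, not the authors' C++/CPLEX implementation.

```python
import pulp

def steiner_ilp(nodes, arcs, weight, root, terminals):
    """Multicommodity-flow ILP for the directed Steiner tree problem.

    arcs: list of directed (u, v) pairs; weight[(u, v)]: arc cost.
    The root sends one unit of flow to each terminal; flow may only use
    arcs whose binary selection variable x is switched on."""
    prob = pulp.LpProblem("steiner", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", arcs, cat="Binary")
    f = pulp.LpVariable.dicts("f", [(a, t) for a in arcs for t in terminals],
                              lowBound=0)
    prob += pulp.lpSum(weight[a] * x[a] for a in arcs)      # total tree length
    for t in terminals:
        for v in nodes:
            inflow = pulp.lpSum(f[(a, t)] for a in arcs if a[1] == v)
            outflow = pulp.lpSum(f[(a, t)] for a in arcs if a[0] == v)
            demand = 1 if v == t else (-1 if v == root else 0)
            prob += inflow - outflow == demand              # flow conservation
        for a in arcs:
            prob += f[(a, t)] <= x[a]                       # flow only on chosen arcs
    prob.solve()
    return [a for a in arcs if pulp.value(x[a]) > 0.5]      # arcs of the Steiner tree
```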
We first report results from three moderate-sized data sets selected to provide varying degrees of difficulty: a set of 1,043 sites from 41 sequences of _O. rufipogon_ (red rice), 245 positions from a set of 80 human mt-DNA sequences reported in a study of the prehistoric settlement of Australia, and 176 positions from 50 HIV-1 reverse transcriptase amino acid sequences. The HIV sequences were retrieved by an NCBI BLASTP search for the 50 best-aligned taxa for the query sequence gi 19571541, with default parameters. We then added additional tests on larger data sets, all derived from human mitochondrial DNA. The mtDNA data were retrieved from NCBI BLASTN, searching for the 500 best-aligned taxa for the query sequence gi 61287976 with default parameters. The complete set of 16,546 characters (after removing indels) was then broken into four windows of varying sizes and characteristics: the first 3,000 characters (mt3000), the first 5,000 characters (mt5000a), the next 5,000 characters (mt5000b), and the first 10,000 characters (mt10000). Table [datatable1] summarizes the results. For the set of 41 sequences of the lhs-1 gene from _O. rufipogon_ (red rice), our method pruned the full state graph (after screening out redundant characters) down to 58 nodes; Fig. [phylogenies](a) shows the resulting phylogeny. Both PAUP* and pars yielded an optimal tree, although more slowly than the ILP (2.09 seconds and 2.57 seconds respectively, as opposed to 0.29 seconds). For the 245-base human mt-DNA sequences, the generalized Buneman pruning was again highly efficient, reducing the state set (after removing redundant sequences) to 64 nodes; Fig. [phylogenies](b) shows the phylogeny returned. While PAUP* was able to find the optimal phylogeny (although it was again slower, at 5.69 seconds versus 0.48 seconds), pars yielded a slightly sub-optimal phylogeny (length 45 instead of 44) in a comparable run time (0.56 seconds). For the HIV-1 sequences, our method pruned the full graph of possible nodes to a generalized Buneman graph of 297 nodes, allowing solution of the ILP in about two minutes; Fig. [phylogenies](c) shows an optimal phylogeny for the data. PAUP* was again able to find the optimal phylogeny, and in this case was faster than the ILP (3.84 seconds as opposed to 127.5 seconds); pars required a shorter run time of 0.30 seconds, but yielded a sub-optimal tree of length 42, as opposed to the true minimum of 40. For the four larger mitochondrial data sets, Buneman pruning was again highly effective in reducing graph size relative to the complete graph, although the ILP approach eventually proves impractical when the Buneman graph grows sufficiently large. Two of the data sets yielded Buneman graphs of size below 400, resulting in ILP solutions orders of magnitude faster than the heuristics. mt5000a, however, yielded a Buneman graph of over 1,000 nodes, resulting in an ILP that ran more slowly than the heuristics, and mt10000 resulted in a Buneman graph of over 6,000 nodes, leading to an ILP too large to solve. pars was faster than PAUP* in all cases, but PAUP* found optimal solutions for all three instances we can verify, while pars found a sub-optimal solution in one instance.
We can thus conclude that the generalized Buneman pruning approach developed here is very effective at reducing problem size, but solving provably to optimality does eventually become impractical for large data sets. Heuristic approaches remain a practical necessity for such cases, even though they cannot guarantee, and do not always deliver, optimality. Comparison of PAUP* to pars and the ILP suggests that more aggressive sampling over possible solutions by the heuristics can yield optimality even on very difficult instances, but at the cost of generally much greater run times on the easy-to-moderate instances. We have presented a new method for finding provably optimal maximum parsimony phylogenies on multi-state characters with weighted state transitions, using integer linear programming. The method builds on a novel generalization of the Buneman graph for characters with arbitrarily large but finite state sets and for arbitrary weight functions on character transitions. Although the method has an exponential worst-case performance, empirical results show that it is fast in practice and is a feasible alternative for data sets as large as a few hundred taxa. While there are many efficient heuristics for reconstructing maximum parsimony phylogenies, our results cater to the need for provably exact methods that are fast enough to solve the problem for biologically relevant multi-state data sets. Our work could potentially be extended to include more sophisticated integer programming techniques that have been successful in solving large instances of other hard optimization problems, for instance the recent solution of the 85,900-city traveling salesman problem pla85900. The theoretical contributions of this paper may also prove useful to work on open problems in multi-state MP phylogenetics, to accelerating methods for related objectives, and to sampling among optimal or near-optimal solutions. NM would like to thank Ming-Chi Tsai for several useful discussions. This work was supported in part by NSF grant #0612099.
1. Posada, D., and Crandall, K. Intraspecific gene genealogies: trees grafting into networks. Trends in Ecology and Evolution 16, 37-45 (2001).
2. Felsenstein, J. Inferring Phylogenies. Sinauer Publications (2004).
3. Foulds, L. R. and Graham, R. L. The Steiner problem in phylogeny is NP-complete. Advances in Applied Mathematics 3, 43-49 (1982).
4. Sridhar, S., Lam, F., Blelloch, G., Ravi, R., and Schwartz, R. Efficiently finding the most parsimonious phylogenetic tree. Lecture Notes in Computer Science, Springer Berlin/Heidelberg, 4463, 37-48 (2007).
5. Buneman, P. The recovery of trees from measures of dissimilarity. In Mathematics in the Archeological and Historical Sciences, F. Hodson et al., eds., 387-395 (1971).
6. Barthelemy, J. From copair hypergraphs to median graphs with latent vertices. Discrete Math. 76, 9-28 (1989).
7. Bandelt, H. J., Forster, P., Sykes, B. C., and Richards, M. B. Mitochondrial portraits of human populations using median networks. Genetics 141, 743-753 (1989).
8. Bandelt, H. J., Forster, P., and Rohl, A. Median-joining networks for inferring intraspecific phylogenies. Molecular Biology and Evolution 16, 37-48 (1999).
9. Huber, K. T., and Moulton, V. The relation graph. Discrete Mathematics 244 (1-3), 153-166 (2002).
10. Zhou, H. F., Zheng, X. M., Wei, R. X., Second, G., Vaughan, D. A. and Ge, S. Contrasting population genetic structure and gene flow between Oryza rufipogon and Oryza nivara. 117 (7), 1181-1189 (2008).
11. Hudjashov, G., Kivisild, T., Underhill, P. A., Endicott, P., Sanchez, J. J., Lin, A. A., Shen, P., Oefner, P., Renfrew, C., Villems, R., Forster, P. Revealing the prehistoric settlement of Australia by Y chromosome and mtDNA analysis. 104 (21), 8726-8730 (2007).
12. Swofford, D. PAUP* 4.0. Sinauer Assoc. Inc.: Sunderland, MA (2009).
13. Felsenstein, J. PHYLIP (Phylogeny Inference Package) version 3.6. Distributed by the author, Department of Genome Sciences, University of Washington, Seattle (2008).
14. Semple, C., and Steel, M. Phylogenetics. Oxford University Press (2003).
15. Erdos, P. L. and Szekely, L. A. On weighted multiway cuts in trees. Mathematical Programming 65, 93-105 (1994).
16. Wang, L., Jiang, T., and Lawler, L. Approximation algorithms for tree alignment with a given phylogeny. Algorithmica 16, 302-315 (1996).
17. Altschul, S. F., Madden, T. L., Schaffer, A. A., Zhang, J., Zhang, Z., Miller, W., and Lipman, D. J. Gapped BLAST and PSI-BLAST: a new generation of protein database search programs. Nucleic Acids Res. 25, 3389-3402 (1997).
18. Applegate, D. L., Bixby, R. E., Chvatal, V., Cook, W., Espinoza, D. G., Goycoolea, M. and Helsgaun, K. Certification of an optimal TSP tour through 85,900 cities. Operations Research Letters 37 (1), 11-15 (2009).
Accurate reconstruction of phylogenies remains a key challenge in evolutionary biology. Most biologically plausible formulations of the problem are formally NP-hard, with no known efficient solution. The standard in practice are fast heuristic methods that are empirically known to work very well in general, but can yield results arbitrarily far from optimal. Practical exact methods, which yield exponential worst-case running times but generally much better times in practice, provide an important alternative. We report progress in this direction by introducing a provably optimal method for the weighted multi-state maximum parsimony phylogeny problem. The method is based on generalizing the notion of the Buneman graph, a construction key to efficient exact methods for binary sequences, so as to apply to sequences with arbitrary finite numbers of states and arbitrary state-transition weights. We implement an integer linear programming (ILP) method for the multi-state problem using this generalized Buneman graph and demonstrate that the resulting method is able to solve data sets that are intractable by prior exact methods, in run times comparable with popular heuristics. Our work provides the first method for provably optimal maximum parsimony phylogeny inference that is practical for multi-state data sets of more than a few characters.
The Laser Interferometer Space Antenna (LISA) is a joint NASA-ESA deep-space mission to be launched after 2012, aimed at detecting and studying gravitational waves (GWs) with frequencies in the millihertz band. LISA will provide access to GW sources that are outside the reach of ground-based interferometric GW detectors, such as the binaries of compact stellar objects in our galaxy, the mergers of massive and supermassive black holes, and the gravitational captures of compact objects by the supermassive black holes at the centers of galaxies. LISA consists of three widely separated spacecraft, flying around the Sun in a quasi-equilateral triangular configuration and exchanging phase-coherent laser signals. LISA relies on picometer interferometry to measure GWs as modulations in the distance between the spacecraft. The greatest challenge to achieving this measurement is the phase noise of the LISA lasers, which is larger than the GW-induced response by many orders of magnitude, and which cannot be removed by conventional phase-matching interferometry because the LISA armlengths are grossly unequal and changing continuously. _Time delay interferometry_ (TDI), developed by J. W. Armstrong, F. B. Estabrook, M. Tinto, and others, is the LISA-specific technique that will be used to combine the laser-noise-laden one-way phase measurements performed between the three spacecraft into virtual interferometric observables in which laser noise is reduced by several orders of magnitude. TDI was initially developed using ad hoc algebraic reasoning for the case of a stationary LISA configuration with unequal but constant armlengths (_first-generation_ TDI); it was later modified to work also in the case of a rotating LISA constellation (_modified_ TDI) and of linearly changing armlengths (_second-generation_ TDI). First-generation and modified TDI were given a rigorous mathematical foundation in the theory of algebraic syzygies on moduli, providing tools to generate all possible TDI observables and to determine which observables are optimally sensitive to GWs. Unfortunately, this algebraic treatment cannot be extended easily to second-generation TDI, which is the version that must be used in practice. In this paper, I give a new derivation of first-generation, modified, and second-generation TDI, using a _geometric_ approach that emphasizes the physical interpretation of TDI observables as synthesized interferometric measurements, extending it to all known observables. What is more, this geometric approach to TDI (in short, _geometric TDI_) allows the exhaustive enumeration of all TDI observables of any length, and it leads to alternative, improved forms of the standard TDI observables, characterized by better GW sensitivity at high frequencies in realistic noise conditions, by lesser demands on the measurement system, and by reduced susceptibility to gaps and glitches. More specifically, all TDI observables display nulls in their noise and GW responses at frequency multiples of the inverse arm-crossing light times.
Because these zeros occur at the same frequencies and with the same orders for noise and GWs, the _ideal_ GW sensitivity after successful laser-noise suppression is finite, and comparable to the sensitivity at nearby frequencies. The _actual_ sensitivity, however, is likely to be degraded, either because noise _leaks_ into the nulls from the sides, or because the measurement system has insufficient dynamic range to resolve the tiny signals within the nulls. This problem is mitigated with the alternative observables, which have half as many response-function nulls as the standard forms. In addition, because the alternative observables are, as it were, folded versions of their standard forms, they have a smaller temporal footprint: that is, they are written as sums of one-way phase measurements that span a shorter time period. This property can be advantageous in the presence of instrumental gaps or glitches, which would then contaminate a smaller portion of the data set; a reduced temporal footprint means also that a shorter continuous set of phase data needs to be collected before TDI observables can begin to be assembled. This paper is organized as follows. Section [sec:geometricview] describes geometric TDI: in Sec. [sec:basictimetransport], I introduce the basic GW-sensitive phase measurement; in Sec. [sec:tdiprinciple], I discuss its integration into laser-noise-canceling observables according to the _geometric TDI principle_; in Secs. [sec:firstgen] and [sec:secondgen], I give a new derivation of the observables of first-generation, modified, and second-generation TDI, and I interpret them geometrically; in Sec. [sec:combenum], I show how to enumerate exhaustively all possible observables by representing them as _link strings_; last, in Sec. [sec:sixlasers] I extend our formalism, developed for simplicity by considering only three independent LISA lasers, to the realistic case of six lasers. Section [sec:survey] reports on the exhaustive survey of all second-generation TDI observables consisting of up to 24 separate phase measurements: in Secs. [sec:alternative] and [sec:advantage], I discuss the alternative forms of the standard second-generation TDI observables, and present their practical advantages for the implementation of TDI; in Sec. [sec:longer], I describe the previously unknown second-generation TDI observables of length 18 and more. Last, Sec. [sec:conclusion] presents my conclusions. The appendices contain rules and proofs omitted from the main text, and explicit algebraic expressions for the second-generation TDI observables of length 16.
As customary, I set c = 1 except where specified otherwise. How is LISA an interferometer, other than by name? The loosest dictionary definition of "interferometer" (something like "a device that combines the signals radiating from a common source, and received at different locations, or at the same location after traveling different paths") does not seem to apply to LISA, whose TDI GW observables are combinations of the phase-difference measurements between as many as six laser sources. In fact, interferometry is not needed, strictly speaking, to measure GWs, but only to remove the otherwise deafening phase noise produced by the LISA lasers. The basic principle of GW measurement employed by LISA is noninterferometric, as we can see from the idealized experimental setup of a _time-transport link_ between two ideal clocks (see Fig. [fig:gedank]). Consider a plane GW propagating across the Minkowski background geometry, and written in the transverse-traceless (TT) gauge. The wave is traveling along the z direction, and has "+" polarization along the x and y directions. We can then write the spacetime metric as

ds² = −dt² + [1 + h(t − z)] dx² + [1 − h(t − z)] dy² + dz².

Consider also two ideal clocks 1 and 2, marking their proper times τ₁ and τ₂, and sitting at constant spatial coordinates in the TT frame. In this gauge, constant-coordinate worldlines are geodesics, so the effect of the GWs is not to exert forces (as it were) on test particles, but to modulate the distance between them. By way of light signals, clock 1 is continuously sending its time τ₁ to clock 2, where it is compared with the local time τ₂, yielding the difference Δτ(t) = τ₁(t − L) − τ₂(t); here t is the TT coordinate time and L is the time of flight between the two clocks, as experienced by the laser pulse that arrives at clock 2 at time t. We are assuming that the two clocks have been synchronized so that in the absence of GWs they both mark the coordinate time. To first order in h, the time of flight acquires a GW-induced correction; the coordinate dependence of the GW does not appear explicitly because the two clocks sit on the same constant-z wavefronts. If the rates of the two clocks remain synchronized, then the time derivative of Δτ is directly proportional to the difference of the GW strains at the events of pulse reception and emission,

d(Δτ)/dt = ½ [h(t − L) − h(t)].

_This is our GW observable._ In the Fourier domain, the (power) response function of d(Δτ)/dt to GWs is proportional to sin²(πfL); thus d(Δτ)/dt is insensitive to GWs of frequencies f = 0 or f = n/L (with n an integer). This expression is the basic building block used to derive the LISA response to GWs, as well as the Doppler response used in spacecraft-tracking GW searches, and the timing-residual response used in pulsar-timing searches. To relate this idealized experimental setup to LISA, we replace the ideal clocks with the LISA lasers, and obtain proper time by dividing the lasers' phase by their frequency. Each of the three LISA spacecraft contains two optical benches oriented facing the other two spacecraft; on each bench, the appropriately named _phasemeters_ compare the phase of the incoming lasers against the local reference laser. As written, the equation above involves a comparison of laser _frequencies_: we choose to develop our arguments in terms of these, since it is more convenient to deal with instrumental responses that are directly proportional to the physical observable of interest (the GW strain) rather than to its time integral.
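A tiny numeric illustration of the two-pulse character of this one-arm response follows; the Gaussian strain pulse, the armlength, and the sampling are arbitrary stand-ins, not mission parameters.

```python
import numpy as np

# One-way fractional-frequency response of a single time-transport link:
# y(t) = [h(t - L) - h(t)] / 2, here for a toy Gaussian "+" strain pulse.
L = 16.7                                        # arm light time, seconds (illustrative)
t = np.linspace(-50.0, 100.0, 4000)
h = lambda t: np.exp(-0.5 * (t / 2.0) ** 2)     # strain pulse centered at t = 0

y = 0.5 * (h(t - L) - h(t))

# The pulse is registered twice: once at emission (t ~ 0) and once, a time
# L later, at reception -- the "two-pulse" response described above.
print(sorted((t[np.argmin(y)], t[np.argmax(y)])))   # approximately [0, L]
```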
Generalizing this expression to arbitrary plane-GW and spacecraft geometries, and adopting a LISA-specific language, we come to the Estabrook-Wahlquist two-pulse response,

y₁₂(t) = n̂ᵐ₁₂(t) n̂ⁿ₁₂(t) [h_{mn}(t − L₁₂ − k·x₁) − h_{mn}(t − k·x₂)] / {2 [1 − n̂ᵐ₁₂(t) k_m]}.

With this indexing, the equation describes the frequency-difference measurement performed on spacecraft 2 to compare the local laser with the laser incoming from spacecraft 1. In this equation: k is the spatial propagation vector of the plane GW; x₁ and x₂ are the spatial TT coordinates of the two spacecraft; t is the time of pulse reception, and therefore of measurement; t − L₁₂ is the time of pulse emission, as determined implicitly by the light-travel time along the arm; and n̂₁₂(t) is the unit vector along the trajectory of the light pulse (labeled by the time of reception), given by the normalized coordinate separation of the spacecraft. This equation is known as the two-pulse response because an impulsive GW is registered twice in each observable: once when it impinges on the emitting spacecraft, and once, a time L₁₂ later, when it impinges on the receiving spacecraft. In the literature on TDI, it is customary to label the LISA arms by the index of the _opposite_ spacecraft. We shall do so in this paper, using primed and unprimed indices to denote the _oriented_ LISA arms, with orientation following the direction of laser transmission. We shall then denote by L_l the propagation time experienced by a laser pulse traveling along the oriented arm l. We shall also find it useful, at times, to augment the phase-measurement notation with a middle index, corresponding to the oriented arm traversed by the laser pulse being measured. (In fact, the primed or unprimed middle index would be sufficient to identify the phase measurement completely, and we shall exploit this property in Sec. [sec:combenum] when we represent TDI observables as _link strings_.) See Fig. [fig:tdipaths] for an example of this convention at work. Unfortunately, GWs cannot be read off directly from the y measurements, because the fluctuations of the laser frequencies (i.e., the laser phase noises) come into the y's with a (single-sided, square-root) spectral density several orders of magnitude stronger than the weakest GWs detectable by LISA, which lie at the level of the other two fundamental LISA noises (known together as _secondary noises_): the shot noise at the phasemeter, as determined by the power of the lasers and by the distance between the spacecraft, and the acceleration noise of the proof masses enclosed within each optical bench, which are used to reference the frequency measurements to freely falling worldlines. [The equation above assumes that a single laser is used on each spacecraft; it is pedagogical to consider this simplified case first, but we shall generalize our discussion to the realistic case of six lasers in Sec. [sec:sixlasers].] (Fig. [fig:tdipaths]: measurement combinations that cancel laser phase noise at time t; in all of them, two laser pulses arrive at, or depart from, the same spacecraft at time t.) Canceling laser phase noise is where interferometry comes to the rescue. Look at Fig. [fig:tdipaths] for combinations of y measurements in which two laser pulses arrive simultaneously at spacecraft 1 at time t, depart simultaneously from spacecraft 1 at time t, or arrive and depart simultaneously to and from spacecraft 1 at time t. We subtract the measurements, represented graphically by arrows, when they share the same event of emission or reception (i.e.
, when their arrowtails or arrowheads meet), and we add them when the receiving spacecraft of one measurement is the emitting spacecraft of the other (i.e., when arrowtail follows arrowhead). _In all of these combinations, the laser-frequency noise generated at time t on spacecraft 1 is canceled out_ by entering twice with opposite signs; however, GWs are not canceled (not even at time t), because they come into the two-pulse response with different t-dependent projection factors. The combinations of Fig. [fig:tdipaths] do still contain frequency noise from lasers 2 and 3, and from times other than t; it is however a simple leap to cancel even those by arranging together more measurements. We then formulate a *geometric TDI principle*: to obtain a laser-noise-canceling GW observable, line up arrows (i.e., measurements) head to head, tail to tail, or head to tail, creating a closed loop that cancels laser noise at all pulse emission and reception events. If no arrowhead or arrowtail is left unpaired, the closed loop represents a linear combination of delayed y measurements that completely cancels the three laser noises. Remarkably, it is usually possible to interpret each closed-loop combination as the interferometric measurement performed by comparing the phases of laser beams that follow the paths marked by the arrows. Let us see an example. The arrows of Fig. [fig:michelson] (left panel) reproduce the paths followed by light in an equal-arm Michelson interferometer; operating in analogy with Fig. [fig:tdipaths], and attributing the time t to the final common event of reception at spacecraft 1, we write the corresponding algebraic expression

[y₁₂(t − L₃) + y₂₁(t)] − [y₁₃(t − L₂') + y₃₁(t)].

Here the sum of two time-consecutive observables, such as y₁₂(t − L₃) and y₂₁(t) [where L₃ is the light-travel time between spacecraft 2 and 1], simulates the reflection of the laser off a mirror: in terms of laser phases, we see that the integral of this sum reproduces the total phase shift accumulated along the path. By contrast, the head-to-head difference of two such double arrows simulates a photodetector: it reproduces the difference of the phase shifts accumulated along the two paths. All in all, this expression shows that the combination of four (one-way) y measurements can _synthesize_ the phase-difference output of a Michelson interferometer, as emphasized by Tinto and Armstrong, and shown graphically by Shaddock and Summers. Inserting the laser noises in this expression, we get

  [c₁(t − L₃ − L₃') − c₂(t − L₃)]
+ [c₂(t − L₃) − c₁(t)]
− [c₃(t − L₂') − c₁(t)]
− [c₁(t − L₂' − L₂) − c₃(t − L₂')],

which sums up to zero in interferometer geometries where L₂ = L₃ and the light-travel times are equal in the two directions of each arm: our equal-arm Michelson combination is then truly laser-noise canceling. It is however sensitive to GWs, as can be seen by inserting the two-pulse response into the combination. (Fig. [fig:michelson]. Left: the four-measurement combination reproduces the phase-difference output of the interferometer. Right: for unequal armlengths, laser-phase-noise cancellation can be recovered by having both interfering beams travel along each arm once, building up the same light-travel time; compare with the analogous figure in the reference cited in the text.)
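The cancellation just described is easy to verify numerically. The sketch below builds the four-link combination and its eight-link unequal-arm generalization (discussed next) from synthetic laser-noise streams, with integer-sample delays, static arms, direction-independent light times, and a single laser per spacecraft; all names and numbers are illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
c1, c2, c3 = rng.standard_normal((3, n))   # laser phase-noise streams

def D(x, k):
    """Integer-sample delay operator: D(x, k)[t] = x[t - k] (cyclic)."""
    return np.roll(x, k)

La, Lb = 40, 55                            # unequal one-way delays, in samples

# One-way measurements (laser noise only): arm a links craft 1-2, arm b links 1-3.
y_a1 = D(c2, La) - c1                      # received at craft 1 from craft 2
y_1a = D(c1, La) - c2
y_b1 = D(c3, Lb) - c1
y_1b = D(c1, Lb) - c3

# 4-link "equal-arm" Michelson: fails to cancel when La != Lb.
M = (D(y_1a, La) + y_a1) - (D(y_1b, Lb) + y_b1)

# 8-link unequal-arm Michelson: both beams cross both arms once.
X = (D(y_1a, La + 2 * Lb) + D(y_a1, 2 * Lb) + D(y_1b, Lb) + y_b1) \
  - (D(y_1b, Lb + 2 * La) + D(y_b1, 2 * La) + D(y_1a, La) + y_a1)

print(np.std(M), np.std(X))                # X residual is zero to machine precision
```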
of ref .[ fig : michelson],width=326 ] more generally , we can set three simple rules to turn a closed arrow loop into a combination of measurements that cancels laser noise : 1 .start at any spacecraft , and write down the appropriate for each arrow , following the loop ( going along or against the direction of each arrow ) until all arrows are used up ( if more than two heads or tails meet at any spacecraft , different visiting orders will yield different observables ) ; 2 . use a plus ( minus ) sign for arrows followed along ( against ) their direction ; 3 . give time arguments to the , remembering that measurements are always made at the receiving spacecraft ( at the arrowhead ) ; use the nominal time for the first , and then add ( subtract ) the appropriate for each arrow followed along ( against ) its direction .the first laser - noise canceling combinations for lisa were discovered using an algebraic ( rather than geometric ) approach , matching up delayed measurements in such a way that all laser - noise terms would cancel . using this procedure , tinto , armstrong , andestabrook obtained expressions for _ first - generation tdi observables _ , which cancel laser noise in static unequal - arm geometries .these observables are sums of either six or eight delayed measurements ( for short , links ) .see fig.[fig : firstgen ] .the 6-link observables , , ( mapped into each other by relabeling the spacecraft cyclically ) use all six lisa oriented arms , and measure the phase difference accumulated by two laser beams traveling around the lisa array in clockwise and counterclockwise directions : thus , they behave much like a sagnac interferometer , and are known as _sagnac observables_. a related 6-link combination , the symmetrized sagnac observable , has the useful property of being relatively insensitive to gws in the low - frequency limit .the 8-link observables , , ( also mapped into each other by cyclic spacecraft relabelings ) use two of the lisa arms in the two directions .they are unequal - arm generalizations of the michelson observable of eq . : for unequal arms , the latter would fail to cancel the laser - noise terms from the tails of the two paths , because .the solution is to have both paths go through each arm once ( hence the eight terms ) , building up the same light - travel time ( see the right panel of fig.[fig : michelson ] ) .related 8-link combinations , known as observables of the , , and type , use different sets of four oriented arms out of six , and have gw sensitivity comparable to the michelson combinations .prior to my work , it was unclear whether the -type and -type observables could be interpreted as synthesized interferometric observables .as a synthesized observable was already clear to f. b. estabrook ( unpublished note ) . ] in fig .[ fig : firstgen ] , we show that this is possible if we identify _ four distinct laser beams _ , paired in alternative ways to cancel laser noise at the path tails ( dots ) and path heads ( ending arrows ). 
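before returning to the beam-pairing picture, it is worth checking the cancellation bookkeeping of the equal-arm michelson combination symbolically. the sketch below is ours (python with sympy) and hedged: the first bracket of the laser-noise sum is garbled in our source, so we reconstruct it as [c1(t - L3' - L3) - c2(t - L3)], the choice under which every beam segment contributes exactly one emission/reception pair.

```python
import sympy as sp

# hedged sketch: symbolic check that the equal-arm michelson combination
# cancels the laser noises c1, c2, c3 (first bracket is our reconstruction).
t, L2, L2p, L3, L3p = sp.symbols("t L2 L2p L3 L3p")
c1, c2, c3 = sp.Function("c1"), sp.Function("c2"), sp.Function("c3")

expr = ((c1(t - L3p - L3) - c2(t - L3))      # beam 1 -> 2, measured at s/c 2
        + (c2(t - L3) - c1(t))               # beam 2 -> 1, measured at s/c 1
        - (c3(t - L2p) - c1(t))              # beam 3 -> 1, measured at s/c 1
        - (c1(t - L2p - L2) - c3(t - L2p)))  # beam 1 -> 3, measured at s/c 3

residual = sp.expand(expr)                   # only the two path tails survive
print(residual)                              # c1(t - L3p - L3) - c1(t - L2p - L2)
print(residual.subs({L2: L3, L2p: L3, L3p: L3}))   # -> 0 for equal arms
```

for unequal arms the two tail terms differ, which is precisely the failure that the eight-link unequal-arm michelson repairs by routing both interfering beams through each arm once.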
the two path origins are not simultaneous , and neither are the two path endings .the symmetrized sagnac observable , which also defies explanation as a two - beam synthesized interferometer , can be interpreted as a _six_-beam interferometer , whereby two different pairings explain the cancellation of laser noise at emission ( dots ) and reception ( arrows ) .yet another pairing , shown by the thin diagonal lines in fig .[ fig : firstgen ] , explains why is relatively insensitive to gws at low frequencies : in the limit of equal arms , each pair of parallel arrows represents the difference of two symmetric measurements and that share the same times of pulse emission and reception . taylor - expanding the terms of eq . around and around either or ,we find that . by contrast , the differences of head - to - tail double arrows that appear in sum up to . considering monochromatic gws of frequency , we see that the gw response is smaller for than for by a factor , so the ratio between the and responses is .] ( for hz , for hz ) . since the response to the lisa secondary noises is approximately the same for and [ as can be seen using eqs . , discussed below ] , turns out to be relatively insensitive to gw .this interpretation of tdi observables as -beam synthesized interferometers is intriguing , but also troubling , since it casts a suspicion of arbitrariness on the selection of a standard set of observables , and it complicates exploring the space of all possible combinations .fortunately , the application of the tools of modern algebra to tdi showed that all first - generation observables can be obtained as algebraic combinations of four generators .this approach was extended to _ modified _ tdi observables , which cancel laser noise in rotating lisa geometries , where the sagnac effect introduces a distinction between light - travel times in the two directions .( the michelson- , u- , p- , and e - type observables of first - generation tdi are _ bona fide _ modified tdi observables , if written with the correct primed and unprimed delay indices ; by contrast , the sagnac observables of modified tdi are different , and twice as long as those of first - generation tdi . ) however , the algebraic approach can not be extended easily to the observables of _ second generation _ tdi , which cancel laser noise in lisa geometries with time - dependent armlengths ., which is more than sufficient for realistic lisa spacecraft orbits .] as pointed out by cornish and hellings , in this situation it is necessary to keep track of the order of retardations : for instance , the unequal - arm michelson combination of fig . [ fig : firstgen ] would translate to where , using the semicolon notation of ref . , and so on : the nominal time is delayed incrementally starting from the rightmost delay index .( a similar notation with commas instead of semicolons is used when the armlengths are constant and the order of the retardations is not important . ) inserting the laser noises in eq ., we see that they cancel in pairs , except for the terms from the tails of the two paths , ;\ ] ] taylor - expanding the retardations to first order and keeping only linear terms in the , we get ,\ ] ] where all the and are implicitly evaluated at time .( more generally , each retardation index generates a residual term proportional to for each index to its left . ) in short , much like what happened with the simple michelson combination [ eq . ] for unequal - arm geometries , a laser - noise residual appears in eq . 
because the light - travel times built up along the two interfering paths are different ; graphically , the tails of the two paths do not match preciselythis is because , although both paths contain the same set of links , they do so in different orders , and the retardations do not commute when the armlengths are time dependent .( this is also the reason why the algebraic approach becomes arduous for second - generation tdi , where it involves the solution of polynomial equations for noncommuting variables . ) as in the _ upgrade _ from equal - arm to unequal - arm ( first - generation ) michelson observables , one solution is to compose the two paths so that each goes through each arm twice , in different orders .the residual of the resulting 16-link combination vanishes up to the first taylor order and to the first degree in ( henceforth , _ to first order / degree _ ) .this second - generation unequal - arm michelson observables ( known as ) may be written in our notation which is related to the defined in refs . by a change of sign and by the use of the opposite convention for primed and unprimed indices .second - generation generalizations of all first - generation tdi observables were described by tinto and colleagues . for the analogs of the , , , and observables (which are formally identical to their modified tdi counterparts , except for the interpretation of the delay indices as noncommuting ) , laser - noise cancellation is not complete , even to first order / degree : however , the residuals consist of symmetric sums of terms that turn out to be small for realistic lisa orbits .our geometric approach to tdi makes it possible to enumerate all the second - generation tdi observables of given length .the key to this is the * feynman i did not take the idea that all the electrons were the same one from him as seriously as i took the observation that positrons could simply be represented as electrons going from the future to the past in a back section of their world lines . ''[ r. p. feynman , `` the development of the space - time view of quantum electrodynamics , '' nobel lecture , dec 11 , 1965 . ] ] geometric tdi principle * : any -beam geometric - tdi closed loop can be seen as a _ single _beam that travels forward and backward in time to meet itself back at its origin .for instance , the two - beam equal - arm michelson combination of fig.[fig : michelson ] ( left panel ) , can be interpreted as a single beam that departs at the initial time , travels forward in time to be measured at time , and again travels forward in time to be measured ( and interfere against itself ! 
) at time ; the beam then moves backward in time to be emitted at time , and again moves backward in time to be emitted at the original time ( equal to , since the armlengths are equal ) .this closes the loop , and cancels laser noise at all junctions ( when we translate graphs to formulas , we must remember to give minus signs to all the backward - time arrows , drawn dashed in fig.[fig : michelson ] ) .once we have established that all -link loops can be represented as a single loop , we can _ enumerate _ them combinatorially by choosing a starting spacecraft and , for times over , choosing the future or past time direction , and the leftward ( clockwise ) or rightward ( counterclockwise ) movement direction , in all possible combinations .each loop can be denoted by the index of the initial spacecraft , followed by a string of `` l '' or `` r '' crested by `` '' for forward - time arrows and by `` '' for backward - time arrows ; this notation is translated easily into strings of link indices crested by their time directions ( henceforth , _ link strings _ ) .for instance , we would write and for the loops in the left and right panels of fig .[ fig : michelson ] , respectively .not all strings with links correspond to laser - noise canceling combinations , because the total light - travel time accumulated across the loop must be zero ( for second - generation tdi , zero to first order / degree ) .however , it is quite straightforward to set simple _ closure criteria _ that identify the true tdi combinations : * pre - tdi interferometry .* for equal - arm geometries , the loop must end at the initial spacecraft ( - \#[{\overrightarrow{\vphantom{3'}\mathrm{r}}},{\overleftarrow{\vphantom{3'}\mathrm{l } } } ] \mod 3 = 0 ] ) .we denote the combinations that satisfy this property as _* first - generation tdi . * for unequal - arm geometries with generic , the loop must end at the initial spacecraft , and satisfy = \#[{\overleftarrow{\vphantom{3'}l}},{\overleftarrow{\vphantom{3'}l'}}] ] ( for ) , which yields a null total light - travel time .we denote the combinations that satisfy this property as -_closed_. * second - generation tdi . * for unequal - arm geometries with generic , time - dependent , first - order / degree laser - noise cancellation is obtained for loops that are -_closed _ , and in addition satisfy = \#[{\overrightarrow{\vphantom{3'}l}}{\overleftarrow{\vphantom{3'}\dot{m}}},{\overleftarrow{\vphantom{3'}l } } { \overrightarrow{\vphantom{3'}\dot{m}}}] ] [ for unprimed and primed , respectively ] , the disappear from eq . , and eq . is restored for the .this justifies all the developments reported in this paper also for six - laser lisa configurations .it should be mentioned in this context that the phase noise from the random motion of the optical benches enters the with the same time signature as the laser phase noises , and is therefore also canceled by tdi .the lisa sensitivity to gws is then set by the remaining secondary noises . adopting the schematization of the measurement process used in most of the tdi literature , and the notation used to describe the _synthetic lisa _simulator , the response of the to the secondary noises is given by where is the optical - path noise in the phase measurement , and and are the velocity noises of the two proof masses aboard spacecraft . 
because and have the same time signature as laser phase noises , they are canceled in tdi observables ; thus , all retardations can be removed from the unprimed- expression of , casting it to the same form as its primed- counterpart .i have written a computer program to list all the second - generation tdi observables consisting of 24 or fewer measurements .for each even length , this was achieved by enumerating all possible strings , and checking each of them for -closure , according to the counting rule given in sec .[ sec : combenum ] . already for 24-link strings ,the combinatorial space is huge , and an exhaustive search required more than 10,000 cpu hours . the resulting list of observableswas then reduced to a minimal set by removing all the quasi - duplicates that differ only by a sign or by a cyclic string shift .i have kept as distinct the observables that differ by a cyclic index shift ( in first - generation tdi , this would correspond to counting , , and as separate observables ) .the reduced list of tdi observables is available at the webpage www.vallis.org/tdi , annotated with their temporal footprint ( see sec . [sec : advantage ] ) , number of beams , type , and splicing composition ( see secs . [sec : alternative ] and [ sec : longer ] ) . [ cols=">,>,>,>,>,>,>,>,>,>,>,>",options="header " , ] my results are tallied in table [ tab : second ] . herethe , , , and types represent generalizations of the 8-link observables of the same name : -type ( michelson ) observables use two arms in both directions , -type observables use four oriented arms in a _ relay _ configuration , -type and -type use four oriented arms in _ beacon _ and _ monitor _ configurations ; the observables tallied under _ other _ use either five or six oriented arms . hereare the highlights of the survey , which are discussed in more detail in the following sections .* i find that the _ shortest _ second - generation tdi observable has length 16 .by contrast , modified - tdi observables begin at length 8 . *i recover all - type combinations do not include the observable given in ref . , which achieves laser - noise cancellation at the approximate time through the sum of _ four _ distinct measurements ; by contrast , in geometric tdi laser noise is always canceled _ by construction _ between _ pairs _ of phase measurements . however , the of ref . has almost the same temporal structure as our ( keep in mind that the primedness of our indices is the opposite of ref .).[notetea ] ] the _ known _16-link second - generation tdi observables , previously obtained by tinto and colleagues by applying commutator - like delay operators to the 8-link observables of modified tdi . from a geometric viewpoint , all the 16-link observables can be understood as _ self - splicings _ of the 8-link observables of the same type .this shows that the former reduce to finite time differences of the latter , up to time shifts of the first order / degree .it follows that the second - generation tdi observables have the same sensitivity of the modified tdi observables of the same type , not only in the equal - arm limit , but unconditionally .+ see sec .[ sec : alternative ] for details . * in addition , i obtain _ alternative forms _ of the known 16-link observables .the alternative forms use a larger number of beams ( e.g. , four beams for , as opposed to the standard two ) , or a different allocation of links in the beams ( e.g. 
, or for , as opposed to the standard structure ) .the alternative forms , too , can be understood as _ self - splicings _ of the 8-link modified tdi observables of the same kind .+ the alternative forms have the same sensitivity to gw signals as the original forms in idealized measurement conditions , but they can improve on them when realistic aspects ( such as quantization of the phasemeter output and technical noises ) are taken into account .in addition , the alternative forms have a reduced temporal footprint ( the difference between the times of the earliest and latest phase measurements involved in their construction ) ; this feature can be advantageous in the presence of gaps or glitches in the data , because it reduces the extent of defect propagation to the tdi time series .+ see secs .[ sec : alternative ] and [ sec : advantage ] for details .appendix [ app : secondsixteen ] gives explicit algebraic expressions for all the 16-link observables in terms of the measurements .* second - generation tdi observables are found in _ increasing numbers _ at lengths 18 , 20 , 22 , and 24 .a minority are of the , , , or types , while most use either five or six oriented arms .+ all 18-to-24long observables can be understood as splicings of modified - tdi observables of length 8 to 18 , sometimes with the inclusion of null bigrams ; most , but not all , are self - splicings .i conjecture that all second - generation tdi observables of any length can be generated as splicings of two modified - tdi observables .+ see sec .[ sec : longer ] for details .* up to length 24 , i do not find any -closed observables of the type ( defined as having suppressed , but nonzero , gw response at low frequencies ) .i conjecture that the type is incompatible with -closure .this does not exclude the existence of non--closed -type observables ( such as the , , and described by tinto and colleagues ) that do not cancel laser noise to first order / degree , but bring it sufficiently below the lisa secondary noise to be useful in practice .the standard second - generation tdi observable is which is related to the defined in eq .by .the alternative forms found in our geometric survey are which differ between themselves only by handedness , and which turns out to have vanishing response in the equal - arm limit to both noise and gws , at all frequencies . to see that and the have all the same gw sensitivity as the 8-link modified - tdi ( neglecting of course the fact that would not cancel laser noise in a flexing lisa ) , we reason as follows .as we have learned in eq . , can be interpreted as a self - splicing of with its reversal . if we take to be defined by eq . , we see that since the time at the splicing point in is , from eq .we see that here the symbol `` '' denotes equality up to selective delays or advancements of order in the , not specified by the formal delay strings .[ in this case , these spurious delays appear because is only -closed , so the time at the beginning and at the end of the inserted string is different by terms of order ; consequently , the last four observables of are really evaluated at the time , not just . ] rewriting eq . 
in terms of and reabsorbing the time advancements by evaluating the equation at time , we find thus , up to delays of first order / degree , self - splicings produce finite differences of observables .( indeed , it is a well - known fact in the literature on second - generation tdi observables that the standard 16-link observables are approximately equal to finite differences of the standard 8-link observables of the same type . )now , because the individual measurements respond linearly to gws and to all instrumental noise sources , the strain sensitivity of to monochromatic sources of frequency at a given sky position is proportional to , where is the ( square - root ) spectral density of noise in , and is the fourier transform of the gw response function .the constant of proportionality is , where snr is the fiducial signal - to - noise ratio at which the sensitivity is defined , and is the time duration of the observation . combining the fourier - transform time - shifting property with eq . , andconsidering that first order / degree terms can be neglected for secondary noises and gws ( which are much weaker than the laser phase noises ) , we see that must have the same sensitivity as : with .this is true generically for any spacecraft geometry , and not just in the equal - arm limit . to see that and , too , have the same sensitivity to gws as ( in the limit of perfect laser phase noise cancellation ) , it is then sufficient to show that they are self - splicings of and of its ( cyclically shifted ) reversal : { \overleftarrow{\vphantom{3'}3'32'2 } } , \\ x^{16,4,-1}_1 : & \ ; { \overrightarrow{\vphantom{3'}3'3 } } [ { \overleftarrow{\vphantom{3'}2'233'}}| { \overrightarrow{\vphantom{3'}22'3'3]22 ' } } { \overleftarrow{\vphantom{3'}33'2'2}}. \end{aligned}\ ] ] moving on to the type , we see ( for instance ) that the three second - generation -type observables that use the oriented arms , , , and , ' } } , \\ & { \overrightarrow{\vphantom{3'}3211'}}{\overleftarrow{\vphantom{3'}23 } } [ { \overrightarrow{\vphantom{3'}1'132}}{\overleftarrow{\vphantom{3'}1'123]11 ' } } , \\ & { \overrightarrow{\vphantom{3'}3211'[132}}|{\overleftarrow{\vphantom{3'}1'123 } } { \overrightarrow{\vphantom{3'}1'}}]{\overleftarrow{\vphantom{3'}2311 ' } } , \end{aligned}\ ] ] are generated by the self - splicings of the modified tdi -type observable that uses the same oriented arms , .thus , the modified tdi and second - generation tdi -type observables have the same gw sensitivity ( in the limit of perfect laser phase noise cancellation ) .similar arguments hold for the - and -type observables .altogether , we find empirically that all the 16-link second - generation tdi observables can be generated as self - splicings of the 8-link modified tdi observables of the same kind .conversely , it can be proved that all self - splicings of -closed observables are -closed ( see app.[app : closedproof ] ) .-type variables to the fundamental secondary noises , according to eq . , assuming equal arms and proof - mass optical - path noise spectral densities given by ^{-2} ] , for a time span , and each single measurement appears in at times displaced by as much as .thus , will be unavailable during the first and the last s ( i.e. 
, ) within each lisa data - taking period .moreover , a data gap in a single measurements will appear in the time series at four distinct times spanning .by contrast , the alternative forms involve 16 measurements taken within a time interval of span , and the single appear at times displaced by at most .the gain is significant , if not dramatic . the alternative forms for the - , - and -type variables also yield footprints gains with respect to their standard forms ( from to for , and from to for and ) .these gains are possible because the alternative observables are obtained , loosely speaking , by _ folding _ the standard versions in time , using both time advancements and retardations , as opposed to retardations only , to arrange the measurements so that laser phase noise is canceled at all emission and reception events .another advantage of the alternative forms is that they can yield an improvement in gw sensitivity in realistic measurement conditions . in the top panel of fig .[ fig : sens ] , the dashed curve shows the power spectral density ( psd ) of secondary noise for the standard observable , drawn in the limit of equal armlengths .following ref. , we assume that secondary noise consists entirely of proof - mass noise ( idealized as stationary and gaussian , with psd ^{-2} ] the polynomial obtained by regarding the 36 possible pairs as monomials , and summing them with coefficients -\#[{\overrightarrow{\vphantom{3'}l}}{\overleftarrow{\vphantom{3'}\dot{m}}},{\overleftarrow{\vphantom{3'}l } } { \overrightarrow{\vphantom{3'}\dot{m}}}] ] . _lemma 1._the prod of an -closed string is unchanged after a cyclic string shift .consider the shift of a single index from one end of the string to the other . because the string is -closed , and must therefore have = \#[\overleftarrow{l}] ] from the index sums down to zero if carries a , or to if it carries a ( and therefore does not multiply itself ) . after the shift ,the contribution to ] ( i.e. , the result of counting one term , with the appropriate sign , for each with each in the string , including itself ) . however , for a closed string , \times [ \mathrm{string}] ] of a shifted string and its shifted reversal ( i.e. , the result of counting one term , with the appropriate sign , for each in the string and each in the reversal ) is zero .in fact , the cross prod is separately zero for each index in the string with all the indices in its reversal , again because the reversal ( as the original string ) is -closed and has = \#[\overleftarrow{m}] ] is given by = \\ & \quad \quad \phantom{\,+\ , } \mathrm{prod}[\mathrm{string } ] + \mathrm{prod}[\mathrm{reversal } ] \\ &\quad \quad + [ \mathrm{shifted\,string } ] \times [ \mathrm{shifted\,reversal } ] = 0 , \end{aligned}\ ] ] because the first two terms are opposite numbers and the third term vanishes .hence the proof . 
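the counting rules just proved are easy to mechanize, and together with the loop enumeration of sec. [sec : combenum] they reproduce the skeleton of the survey program. the python sketch below is ours and hedged in two places: the spacecraft adjacency and the primedness convention in the enumerator are assumptions (the convention statement is garbled in our source), and the prod coefficients follow our reconstruction of the first-order residual rule, whereby each ordered index pair i < j contributes s_i s_j to the monomial L_{arm_i} Ldot_{arm_j}.

```python
from collections import defaultdict
from itertools import product

def moves_to_string(start, moves):
    """loop = starting spacecraft + (horizontal, time) moves; arms are labeled
    by the opposite spacecraft, primed on 'r' moves (assumed convention)."""
    sc, string = start, []
    for d, s in moves:                        # d in {'l','r'}, s in {+1,-1}
        nxt = sc % 3 + 1 if d == 'l' else (sc + 1) % 3 + 1
        string.append((str(6 - sc - nxt) + ('' if d == 'l' else 'p'), s))
        sc = nxt
    return sc, string

def n_closed(string):
    """zero net (forward - backward) count on each oriented arm: zero total
    light-travel time for generic constant armlengths (modified tdi)."""
    net = defaultdict(int)
    for arm, s in string:
        net[arm] += s
    return all(v == 0 for v in net.values())

def delta_closed(string):
    """prod test: pair i < j contributes s_i*s_j to the monomial
    L_{arm_i}*Ldot_{arm_j}; delta-closure requires all 36 coefficients to
    vanish (our reconstruction of the counting rule)."""
    if not n_closed(string):
        return False
    prod = defaultdict(int)
    for j, (m, sj) in enumerate(string):
        for i in range(j):
            l, si = string[i]
            prod[(l, m)] += si * sj
    return all(v == 0 for v in prod.values())

# brute-force census of 8-link loops starting at spacecraft 1
alphabet = [(d, s) for d in 'lr' for s in (+1, -1)]
n8 = 0
for m in product(alphabet, repeat=8):
    end, string = moves_to_string(1, m)
    if end == 1 and n_closed(string):
        n8 += 1
print(n8, "n-closed 8-link loops from spacecraft 1 (before symmetry reduction)")

# assumed link string for the 8-link unequal-arm michelson (primedness ours):
x8 = [('3p', +1), ('3', +1), ('2p', +1), ('2', +1),
      ('3', -1), ('3p', -1), ('2', -1), ('2p', -1)]
print(n_closed(x8), delta_closed(x8))   # True False: fails when the arms flex
```

consistent with the survey results quoted earlier, the assumed 8-link michelson string is n-closed but not delta-closed; the survey in the text finds the shortest delta-closed loops at 16 links.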
in this sectioni give explicit algebraic expressions for all the second - generation tdi observables of length 16 , as found in my exhaustive survey , modulus the symmetries discussed in sec.[sec : combenum ] .there is considerable arbitrariness in writing these expressions , corresponding to the selection of representative link strings in each equivalence class , to the convention used in translating strings to sums of measurements , and to the choice of the initial time of evaluation for each observable .here i list link strings in a _ normal form _whereby each string begins with the largest continuous substring of forward - time indices ; i adopt the translation rules given in sec .[ sec : combenum ] [ just below eq . ]; and i adjust the time of evaluation to minimize the length of the longest delay sequence in the expression .* x type . *the standard 16-link observable is obtained by applying the translation rules to the link string , and evaluating the resulting expression at the initial time : the alternative 16-link observables can be written from the link strings and , evaluated at times and , respectively : last , the null 16-link observable can be written from the link string , evaluated at time : expressions for the -type observables that use the arms , and , are obtained by cyclic index shifts .* the three 16-link -type observables that use the oriented arms , , , and correspond to the link strings applying the translation rules and evaluating at the times , , and , respectively , yields the third expression is closest to the given in ref. ( but see note [ notetea ] ) .expressions for the -type observables that other sets of oriented arms ( i.e. , , , , , and ) are obtained by cyclic and noncylic index shifts . * e type . *the three 16-link -type observables that use the oriented arms , , , and correspond to the link strings applying the translation rules and evaluating at the times , , and , respectively , yields : the third expression is closest to the given in ref. .expressions for the other possible sets of oriented arms ( i.e. , and ) are obtained by cyclic index shifts .* the three 16-link -type observables that use the oriented arms , , , and correspond to the link strings applying the translation rules and evaluating at the times , , and , respectively , yields the third expression is closest to the given in ref. .expressions for the other possible sets of oriented arms ( i.e. , and ) are obtained by cyclic index shifts .lisa study team , _ lisa : laser interferometer space antenna for the detection and observation of gravitational waves , pre - phase a report _ , 2nd ed .( max planck institut fr quantenoptik , garching , germany , 1998 ) .ligo : a. abramovici _et al . _ ,science * 256 * , 325 ( 1992 ) ; virgo : b. caron _et al . _ ,quantum grav .* 14 * , 1461 ( 1997 ) ; geo : h. lck __ , _ ibid ._ , 1471 ( 1997 ) ; tama : m. ando _et al . _ ,lett . * 86 * , 3950 ( 2001 ) .j. w. armstrong , f. b. estabrook , and m. tinto , astrophys .j. * 527 * , 814 ( 1999 ) ; j. w. armstrong , f. b. estabrook , and m. tinto , class .quant . grav .* 18 * , 4059 ( 2001 ) ; see also g. giampieri _ et al .. comm . * 123 * , 669 ( 1996 ) for a frequency - domain laser - noise subtraction scheme related to tdi .d. summers , `` algorithm tradeoffs , '' oral presentation , 3rd progress meeting of the esa - funded lisa pms project , estec , the netherlands , february 2003 ; d. summers and d. hoyland , class .quantum grav .* 22 * , s249 ( 2005 ) .j. w. armstrong , f. b. estabrook , and h. d. 
wahlquist, astrophys. j. *318*, 536 (1987); b. bertotti et al., astron. astrophys. *296*, 13 (1995); m. tinto, class. quantum grav. *19*, 1767 (2002); j. w. armstrong, l. iess, p. tortora, and b. bertotti, astrophys. j. *599*, 806 (2003).
the space-based gravitational-wave observatory lisa, a nasa-esa mission to be launched after 2012, will achieve its optimal sensitivity using time delay interferometry (tdi), a lisa-specific technique needed to cancel the otherwise overwhelming laser noise in the inter-spacecraft phase measurements. the tdi observables of the _michelson_ and _sagnac_ types have been interpreted physically as the virtual measurements of a synthesized interferometer. in this paper, i present _geometric tdi_, a new and intuitive approach to extend this interpretation to _all_ tdi observables. unlike the standard algebraic formalism, geometric tdi provides a combinatorial algorithm to explore exhaustively the space of _second-generation_ tdi observables (i.e., those that cancel laser noise in lisa-like interferometers with time-dependent armlengths). using this algorithm, i survey the space of second-generation tdi observables of length (i.e., number of component phase measurements) up to 24, and i identify alternative, improved forms of the standard second-generation tdi observables. the alternative forms have improved high-frequency gravitational-wave sensitivity in realistic noise conditions (because they have fewer nulls in the gravitational-wave and noise response functions), and are less susceptible to instrumental gaps and glitches (because their component phase measurements span shorter time periods).
we begin by recalling the assistive teleoperation model of section [ sec : background ] : and we list the meaning of each quantity : * is the trajectory of the robot through some state space . for ground robots ,a common choice for the state space is $ ] ; for air vehicles the state could be .\ ] ] this trajectory is _ a - priori _ modeled as a random function distributed according to a gaussian process ( ) , which can be trained offline using input - output examples of the robot s kinematics .+ online measurements of the state of the robot update the gp to ( assuming that the data has already arrived ) where is the new mean and is the new covariance function of the gp ( by `` new '' , we mean after incorporation of the new data ) .+ importantly , this model allows nonparametric probabilistic prediction of the trajectory into the future ; that is , \to \mathbb ( x , y , \theta)\end{aligned}\ ] ] where .indeed , can be as large as one likes ( corresponding to how far into the future one predicts ) ; because , a continuous measure of the uncertainty exists , although it grows quite large the further one predicts into the future .+ we remark on the following : the structure of is such that the individual robot model changes with each new data point ; in particular , since gps are nonparametric models , they have the ability to capture some amount of online nonlinearities ( such as motor failures , terrain changes , etc ) . the extent to which this is true needs to be explored , however . * is the trajectory of the human operator through some state space . while the state space of the robot can typically be well characterized with physical models , the state space of the human for a more nuanced discussion of learning the human state space . ]is not immediately clear .+ as a first step , however , we choose the set of operator commands to the robot to correspond to the state of the human ; accordingly , we treat the operator input as measurements _ of the human trajectory through this input space_. in essence , we are regarding the human state as manifesting via the commands to the robot ; extrapolating , we assume that one can predict where the human will go in the `` command space '' ( appropriately hedged using probability densities ) . + as a simple example , if the human is operating a joystick that sends velocity commands to the robot s actuators , then furthermore , just as for the robot , the human trajectory is _ a - priori _ modeled as a random function distributed according to a gaussian process ( ) , new measurements of the state of the human update the gp to ( assuming that the data has already arrived ) where is the new mean and is the new covariance function of the gp ( by `` new '' , we mean after incorporation of the new data ) .+ in this way , we can predict what we expect the operator to do using probabilistic inference ( just as was done with the robot via ) , thereby enabling joint models of the human - robot team that anticipate future situations ( at times ) by making decisions now , at time .* is the interaction function between the human and the robot . in section [ sec : background ], a particular choice of this function of this choice is discussed . 
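as a concrete illustration of the trajectory models above, the python sketch below conditions a one-dimensional gp on past measurements and queries it over a prediction horizon; the squared-exponential kernel and all numerical values are illustrative assumptions, not choices made in the text. the growth of the posterior standard deviation with the horizon is the behavior the model relies on when predicting far into the future.

```python
import numpy as np

def sqexp(a, b, ell=1.0, sf=1.0):
    """squared-exponential covariance (illustrative kernel choice)."""
    return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def gp_posterior(t_obs, x_obs, t_query, sn=0.05):
    """posterior mean and std of the trajectory gp at times t_query,
    conditioned on noisy past measurements (t_obs, x_obs)."""
    K = sqexp(t_obs, t_obs) + sn**2 * np.eye(len(t_obs))
    Ks = sqexp(t_query, t_obs)
    mean = Ks @ np.linalg.solve(K, x_obs)
    var = sqexp(t_query, t_query).diagonal() \
        - np.einsum('ij,ij->i', Ks, np.linalg.solve(K, Ks.T).T)
    return mean, np.sqrt(np.maximum(var, 0.0))

t_obs = np.array([0.0, 0.5, 1.0, 1.5])       # past states (one coordinate)
x_obs = np.sin(t_obs)                        # toy trajectory data
t_fut = np.linspace(1.5, 4.0, 6)             # prediction horizon
mu, sd = gp_posterior(t_obs, x_obs, t_fut)
print(np.round(sd, 2))                       # uncertainty grows with the horizon
```

the same helper is reused in the next sketch to illustrate the treatment of delayed commands.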
with this model , control of the remote vehicleis accomplished in a receding horizon fashion : upon receipt of a measurement , the model is updated , and the new navigation protocol is taken to be .\end{aligned}\ ] ] we then take as the next action in the path ( where means the next step of the optimal robot trajectory through the joint human - robot space ) . at , we receive observations , update the distribution to , find the map , and choose as the next step .this process repeats until the human - robot team arrives at the destination .we focus here on a few particular aspects of the control of a remote vehicle over unreliable networks : because operator commands can often arrive late , be dropped , or be otherwise corrupted across an arbitrary network , velocity commands such as can not be literally interpreted .imagine that the operator is viewing an onboard video feed that is 1 second old , due to communication constraints .additionally , imagine that the command takes 1 second to return to the remote vehicle .thus , the command received onboard the vehicle is 2 seconds old .clearly , this information is stale , and if interpreted by the vehicle literally , could destabilize control .however , these commands , while stale , are not devoid of information ; we suggest instead that the inputs be treated as measurements ( if is in seconds ) of the human - machine system .this suggests that a likelihood be placed on the commands if the current time on the remote vehicle is , then the distribution over the human trajectory gets updated to ( assuming all measurements prior to have been received in a timely fashion ) we now update the navigation distribution to however , because we are modeling _trajectories _ of the human , the model can naturally incorporate delayed receipt : the data at informs the distribution , but when inference is done at time , additional uncertainty has accumulated .thus , when we extract the navigation protocol using \end{aligned}\ ] ] the distribution around the human trajectory at time is less peaked , and so is treated as less informative when evaluating ( which is the actual movement the robot executes at time ) .lossy networks are treated in an identical manner : indeed , if measurement is missing ( where ) , then our navigation protocol is still .\end{aligned}\ ] ] again , the effect on the performance of the system is gradual : as more measurements go missing ( or are delayed ) , the less informative is , and the more the onboard autonomy is trusted ( we thus have a natural formulation of sliding autonomy : see section [ sec : sliding ] ) .we emphasize that how well this approach performs is tied strongly to the fidelity of the likelihood function ( as is the case with any bayesian approach ) ; an overconfident measurement model can lead to overly confident human trajectory models , which can place too much weight on incorrect human input . 
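a minimal sketch of this bookkeeping, reusing the gp_posterior helper from the previous sketch (delays, timestamps, and the toy command values are illustrative assumptions): arriving commands are keyed by their emission times, so a two-second-old command still informs the posterior, while a dropped command simply leaves the posterior wider.

```python
import numpy as np

t_obs = np.array([0.0])                      # emission times of received commands
u_obs = np.array([0.0])                      # the commands themselves
for t_now in np.arange(1.0, 5.0):
    inbox = [(t_now - 2.0, 0.3)] if t_now != 3.0 else []   # one drop at t = 3
    for t_emit, u in inbox:                  # stale data keyed by emission time
        t_obs = np.append(t_obs, t_emit)
        u_obs = np.append(u_obs, u)
    mu, sd = gp_posterior(t_obs, u_obs, np.array([t_now]))
    # a wide sd means the operator model is uninformative at t_now, so the
    # map action leans on the onboard autonomy rather than the stale input
    print(f"t={t_now:.0f}  u_hat={mu[0]:+.2f}  sigma={sd[0]:.2f}")
```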
under confident models will tend to overtrust the onboard autonomy , thus potentially leading to a robot that does not follow the orders of the operator .nevertheless , the presence of an uncertain network forces us to treat operator inputs as probabilistic quantities , rather than deterministic ones .while methods of sliding autonomy have been explored for a wide variety of tasks ( see ) , the amount of autonomy allocated to the robot ( or robots ) or the operator ( or operators ) is typically implemented in a manner _ independent _ of the human - robot team ; that is , some independent estimation algorithm determines how much control each entity receives , and then that number is fed into an algorithm that mixes a weighted combination of each intelligence .we argue here that our approach integrates the allocation of autonomy and the actual mixing of the multiple intelligences in a single step .in particular , we revisit our model of assistive teleoperation this model contains an online model of the human operator and the robot . in section[ sec : networks ] , we discussed how both the human operator model can respond online to varying network conditions ; in section [ sec : individual ] we discussed how both the human and robot models can learn , in an online fashion , peculiarities of the individual operator or individual robot ( peculiarities of the individual with respect to general psychological theories or cad based kinematic models , respectively ) . more generally , these individual models maintain a measure of uncertainty about the current state of operator or robot this uncertainty can naturally be interpreted as proportional to the inverse of how much autonomy should be allocated to each entity .the model thus contains an implicit measure of sliding autonomy , which is a natural artifact of the igpmodel .perhaps more importantly , however , is that this measure of sliding autonomy is incorporated into the final action in a probabilistic fashion : should the uncertainty become large around either intelligence ( due to an unreliable network , uncharacteristic behavior , or any other number of anomalies ) , then the amount of confidence placed in that intelligence becomes reduced upon blending in the function .mathematically , as ( or ) becomes more diffuse , its effect on becomes less pronounced .one can only extract so much information from any system , and when both distributions become diffuse , the overall information content is very low , and so navigation should start to degrade . the best we can hope for is a graceful degradation .succinctly , the final action taken by the remote vehicle is given by .\end{aligned}\ ] ] effectively , sliding autonomy ( or blended autonomy ) is a natural by - product of our formulation : an implicit measure of blending exists in the individual models , while the uncertainty of those individual models feeds into the interaction function .current theories of shared autonomy are dominated by anecdotal evidence and heuristic guidelines . in three recognized levels of autonomy are listed : adaptive ( the agent adjudicates ) , adjustable ( the supervisor adjudicates ) , and mixed - initiative ( the agent and supervisor `` collaborate to maintain the best perceived level of autonomy '' ) . 
in ,human robot collaboration schemas are organized around social , organizational and cultural factors , and in the role of ethological and emotional models in human - robot interaction are examined .furthermore , actual implementations are typically designed around need , rather than principle ( ) : either the remote human operator retains complete control of the robot , or the human operator makes online decisions about the amount of autonomy the robot is given .importantly , the work of introduces principled user goal inference and prediction methods , combined with an arbitration step to balance user input and robot intelligence .however , our approach to shared autonomy as an extension of multiple goal interacting gaussian processes ( mgigp ) ( see ) unifies the three steps of , thus providing a more straightforward framework in which to understand the fusion of human and machine intelligence .we also propose that extending mgigpcould provide a novel mathematical formulation of shared autonomy ( which we call _ blended autonomy _ ) .first , recall the mgigpmodel of . next , suppose a human operator is controlling the robot from a remote location , so the robot is no longer fully autonomous ( we continue the narrative of a robot navigating through a crowd of individuals ) . however , rather than treating the human commands as system interrupts , we wish to understand the continuum of blended autonomy in a mathematical way . using the navigation protocol derived using as motivation , we could model the joint human operator - robot _ system _ as where is the is the human operator s _ predicted _ interests , modeled with a gaussian process mixture .the measurement data is now where are the human operator commands sent from time .additionally , is the interaction function between the human operator , robot , and human crowd .one concrete instantiation of this interaction function is where is the cooperation function from the model and is an `` attraction '' model between the operator commands and the robot path .one possible attraction model is thus , the operator s intentionality and the robot s planned path are _merged_this formulation of gives high weight to paths and that are similar , while the probability of dissimilar paths decreases exponentially .bear in mind , however , that still gives high weight to paths and that cooperate .all of this is balanced against the ( predicted ) individual intentionality encoded in the gaussian process mixtures .as with mgigp , the model suggests a natural way to interpret blended autonomy ( or blended decision making ) : at time , find the map assignment for the posterior and then take as the next robot action . as new measurements arrive , compute a new plan by recalculating the map of the blended autonomy density . by choosing to interpret navigation under the model _ blended autonomy _ in complex environments is modeled in a transparent way :human commands are statistically weighted against machine intelligence in a receding horizon framework .the key insight is that by modeling the _ joint _ human - robot system , we can blend human and robot capabilities in a single step to produce a superior system level decision . when the human system and the robot system are modeled independently , it becomes unclear how to fuse the complementary proficiencies of the human and robot agents .
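to make the last two sections concrete, the python sketch below (ours) pairs a precision-weighted blend, the simplest gaussian caricature of uncertainty-proportional autonomy allocation, with the gaussian attraction model suggested above; the cooperation factor is stubbed out, and gamma and all numerical values are assumptions.

```python
import numpy as np

def blend(mu_h, var_h, mu_r, var_r):
    """precision-weighted fusion of the operator's predicted command and the
    autonomy's plan: the more diffuse model loses influence (sliding autonomy)."""
    w_h, w_r = 1.0 / var_h, 1.0 / var_r
    return (w_h * mu_h + w_r * mu_r) / (w_h + w_r)

def attraction(f_r, f_h, gamma=1.0):
    """gaussian similarity between a candidate robot path f_r and the
    predicted operator path f_h (arrays of waypoints)."""
    return float(np.exp(-gamma * np.sum((f_r - f_h) ** 2)))

def cooperation(f_r, crowd_paths):
    return 1.0            # placeholder for the igp crowd-interaction factor

def interaction(f_r, f_h, crowd_paths):
    """blended-autonomy weight: cooperation with the crowd times attraction
    to the operator's intent."""
    return cooperation(f_r, crowd_paths) * attraction(f_r, f_h)

print(blend(mu_h=0.8, var_h=0.01, mu_r=0.2, var_r=1.0))    # ~0.79: trust operator
print(blend(mu_h=0.8, var_h=4.0,  mu_r=0.2, var_r=0.01))   # ~0.20: trust autonomy

plan   = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
intent = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 0.5]])
print(interaction(plan, intent, crowd_paths=[]))   # < 1: dissimilarity penalized
```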
we discuss some of the challenges facing shared autonomy. in particular, we explore (via the methods of _interacting gaussian process_ models (igp)): 1. shared autonomy over unreliable networks, 2. how we can model _individual_ human operators (in contrast to the ``average'' human operator), and 3. how igp naturally models and integrates sliding autonomy into the joint human-machine system. we include a background section (section [sec:background]) for completeness.
a practical problem an experimental physicist would face is the following a process ( eg. a particle moving in space ) has an observed variable ( say position of the particle ) which potentially takes distinct values , but the measuring device is capable of recording only values and . in such a scenario ( figure [ figure : measurement ] ) , how can we make use of these states of the measuring device to capture the essential information of the source ? it may be the case that takes values from an infinite set , but the measuring device is capable of recording only a finite number of states .however , it shall be assumed that is finite but allowed for the possibility that ( for e.g. , it is possible that and ) . our aim is to capture the essential information of the source ( the process is treated as a source and the observations as messages from the source ) in a lossless fashion .this problem actually goes all the way back to shannon who gave a mathematical definition for the information content of a source .he defined it as ` entropy ' , a term borrowed from statistical thermodynamics .furthermore , his now famous noiseless source coding theorem states that it is possible to encode the information of a memoryless source ( assuming that the observables are independent and identically distributed ( i.i.d ) ) using ( at least ) bits per symbol , where stands for the shannon s entropy of the source . stated in other words , the average codeword length where is the length of the -th codeword and the corresponding probability of occurrence of the -th alphabet of the source . . if , we are seeking binary codes . ]shannon s entropy defines the ultimate limit for lossless data compression .data compression is a very important and exciting research topic in information theory since it not only provides a practical way to store bulky data , but it can also be used effectively to measure entropy , estimate complexity of sequences and provide a way to generate pseudo - random numbers ( which are necessary for monte - carlo simulations and cryptographic protocols ) .several researchers have investigated the relationship between chaotic dynamical systems and data compression ( more generally between chaos and information theory ) .jimnez - montao , ebeling , and others have proposed coding schemes by a symbolic substitution method .this was shown to be an optimal data compression algorithm by grassberger and also to accurately estimate shannon s entropy and lyapunov exponents of dynamical systems .arithmetic coding , a popular data compression algorithm used in jpeg2000 was recently shown to be a specific mode of a piecewise linear chaotic dynamical system . in another work , we have used symbolic dynamics on chaotic dynamical systems to prove the famous kraft - mcmillan inequality and its converse for prefix - free codes , a fundamental inequality in source coding , which also has a quantum analogue . in this paper , we take a nonlinear dynamical systems approach to the aforementioned measurement problem .we are interested in modeling the source by a nonlinear dynamical system . by a suitable model, we hope to capture the information content of the source .this paper is organized as follows . in sectionii , stochastic sources are modeled using piecewise linear chaotic dynamical systems which exhibits some important and interesting properties . 
in section iii, we propose a new algorithm for source coding and prove that it achieves the least average codeword length and turns out to be a re - discovery of huffman coding the popular lossless compression algorithm used in the jpeg international standard for still image compression .we make some observations about our approach in section iv and conclude in section v.we shall consider stationary sources .these are defined as sources whose statistics remain constant with respect to time .these include independent and identically distributed ( i.i.d ) sources and ergodic ( markov ) sources .these sources are very important in modeling various physical / chemical / biological phenomena and in engineering applications .on the other hand , non - stationary sources are those whose statistics change with time. we shall not deal with them here. however , most coding methods are applicable to these sources with some suitable modifications .consider an i.i.d source ( treated as a random variable ) which takes values from a set of values with probabilities respectively with the condition .an i.i.d source can be simply modeled as a ( memoryless ) markov source ( or markov process ) with the transition probability from state to as being independent of state ( and all previous states ) . ] .we can then embed the markov source into a dynamical system as follows : to each markov state ( i.e. to each symbol in the alphabet ) , associate an interval on the real line segment such that its length is equal to the probability .any two such intervals have pairwise disjoint interiors and the union of all the intervals cover .such a collection of intervals is known as a partition .we define a deterministic map on the partitions such that they form a markov partition ( they satisfy the property that the image of each interval under covers an integer number of partitions ) . the simplest way to define the map such that the intervals form a markov partition is to make it linear and surjective .this is depicted in figure [ figure : glsandmodes](a ) .such a map is known as generalized lurth series ( gls ) .there are other ways to define the map ( for eg ., see ) but for our purposes gls will suffice .lurth s paper in 1883 ( see reference in dajani et . ) deals with number theoretical properties of lurth series ( a specific case of gls ) .however , georg cantor had discovered gls earlier in 1869 .+ ( a ) generalized lurth series ( gls ) + ( b ) modes of gls .a list of important properties of gls is given below : 1 .gls preserves the lebesgue ( probability ) measure . 2 .every ( infinite ) sequence of symbols from the alphabet corresponds to an unique initial condition .3 . the symbolic sequence of every initial condition is i.i.d .4 . gls is chaotic ( positive lyapunov exponent , positive topological entropy ) . 5 .gls has maximum topological entropy ( ) for a specified number of alphabets ( ) .thus , all possible arrangements of the alphabets can occur as symbolic sequences .gls is isomorphic to the shift map and hence ergodic ( bernoulli ) .modes of gls : as it can be seen from figure [ figure : glsandmodes](b ) , the slope of the line that maps each interval to can be _ chosen _ to be either positive or negative .these choices result in a total of _ modes _ of gls ( up to a permutation of the intervals along with their associated alphabets for each mode , these are in number ) .it is property 2 and 3 that allow a faithful `` embedding '' of a stochastic i.i.d source . 
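before turning to the proofs, these properties are easy to check numerically. the python sketch below (ours; the positive-slope mode and the probabilities {0.7, 0.2, 0.1} are illustrative) iterates a three-letter gls, confirms that the symbol frequencies match the prescribed probabilities (property 3), and, anticipating the next subsection, compares the orbit-averaged log-slope with shannon's entropy.

```python
import numpy as np

def gls_step(x, edges):
    """one gls iteration: find the partition containing x and stretch that
    interval linearly and surjectively onto [0, 1)."""
    i = min(int(np.searchsorted(edges, x, side='right')) - 1, len(edges) - 2)
    return i, (x - edges[i]) / (edges[i + 1] - edges[i])

p = np.array([0.7, 0.2, 0.1])                # alphabet probabilities
edges = np.concatenate([[0.0], np.cumsum(p)])
x, symbols = 0.3141592653589793, []
for _ in range(100000):
    s, x = gls_step(x, edges)
    symbols.append(s)

freq = np.bincount(symbols, minlength=3) / len(symbols)
lyap = np.mean(-np.log2(p[np.array(symbols)]))   # average log2 |slope| per step
print(freq)                                  # ~ [0.7, 0.2, 0.1]  (property 3)
print(lyap, -(p * np.log2(p)).sum())         # lyapunov ~ shannon entropy
```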
for a proof of these properties, please refer dajani et .al . .some well known gls are the standard binary map and the standard tent map shown in figure [ figure : wellknowngls ] ., ; , ) .( b ) standard tent map ( , ; , ).,title="fig : " ] , ; , ) .( b ) standard tent map ( , ; , ).,title="fig : " ] + ( a ) ( b ) it is easy to verify that gls preserves the lebesgue measure .a probability density on [ 0,1 ) is invariant under the given transformation , if for each interval \subset [ 0,1) ] .for the gls , the above condition has constant probability density on as the only solution .it then follows from birkhoff s ergodic theorem that the asymptotic probability distribution of the points of almost every trajectory is uniform .we can hence calculate lyapunov exponent as follows : here , we measure in bits / iteration . is uniform with value 1 on [ 0,1 ) and since is linear in each of the intervals , the above expression simplifies to : this turns out to be equal to shannon s entropy of the i.i.d source .thus lyapunov exponent of the gls that faithfully embeds the stochastic i.i.d source is equal to the shannon s entropy of the source .lyapunov exponent can be understood as the amount of information in bits revealed by the symbolic sequence ( measurement ) of the dynamical system in every iteration .it can be seen that the lyapunov exponent for all the modes of the gls are the same .the lyapunov exponent for binary i.i.d sources is plotted in figure [ fig : fighp ] as a function of ( the probability of symbol ` 0 ' ) . in units of bits/ iteration plotted against for binary i.i.d sources .the maximum occurs at .note that . ]in this section , we address the measurement problem proposed in section [ section : measurement problem ] . throughout our analysis , ( finite ) and is assumed .we are seeking _ minimum - redundancy binary symbol _ codes .`` minimum - redundancy '' is defined as follows : + a binary symbol code with lengths for the i.i.d source with alphabet with respective probabilities is said to have minimum - redundancy if is minimum . for ,the minimum - redundancy binary symbol code for the alphabet is ( , ) .the goal of source coding is to minimize , the average code - word length of , since this is important in any communication system . as we mentioned before , it is always true that .our approach is to approximate the original i.i.d source ( gls with partitions ) with the _ best _ gls with a reduced number of partitions ( reduced by 1 ) . for the sake of notational convenience , we shall term the original gls as order ( for original source ) and the reduced gls would be of order ( for approximating source ) .this new source is now approximated further with the _ best _ possible source of order ( ) .this procedure of successive approximation of sources is repeated until we end up with a gls of order ( ) .it has only two partitions for which we know the minimum - redundancy symbol code is . at any given stage of approximation ,the easiest way to construct a source of order is to merge two of the existing partitions. what should be the rationale for determining which is the _ order approximating source for the source ? 
among all possible order approximating sources, the best approximation is the one which minimizes the following quantity : where is the lyapunov exponent of the argument .the reason behind this choice is intuitive .we have already established that the lyapunov exponent is equal to the shannon s entropy for the gls and that it represents the amount of information ( in bits ) revealed by the symbolic sequence of the source at every iteration .thus , the best approximating source should be as close as possible to the original source in terms of lyapunov exponent .there are three steps to our algorithm for finding minimum redundancy binary symbol code as given below here : 1 .embed the i.i.d source in to a gls with partitions as described in [ subsection : embedding ] .initialize .the source is denoted by to indicate order .approximate source with a gls with partitions by merging the _ smallest _ two partitions to obtain the source of order . .repeat step 2 until order of the gls is 2 ( ) , then , stop . we shall prove that the approximating source which merges the two _ smallest _ partitions is the _best _ approximating source .it shall be subsequently proved that this algorithm leads to minimum - redundancy , i.e. , it minimizes . assigning codewords to the alphabetswill also be shown . +* theorem 1 : ( best successive source approximation ) * _ for a source which takes values from with probabilities respectively and with ( ) , the source which is the * best * - 1 order approximation to has probabilities . _ + * proof : + * by induction on . for and , there is nothing to prove .we will first show that the statement is true for .* * *. takes values from with probabilities respectively and with .+ we need to show that which takes values from with probabilities is the best -order approximation to . here is a symbol that represents the merged partition .+ this means , that we should show that this is better than any other -order approximation .there are two other -order approximations , namely , which takes values from with probabilities and which takes values from with probabilities .+ this implies that we need to show and .* we shall prove .+ this means that we need to prove .this means we need to show .we need to show the following : there are two cases . if , then since , . if , then since , we have .this again implies .thus , we have proved that is better than .* we can follow the same argument to prove that .thus , we have shown that the theorem is true for .an illustrated example is given in figure [ figure : huffman1 ] .* induction hypothesis : assume that the theorem is true for , we need to prove that this implies that the theorem is true for .+ let have the probability distribution .let us assume that ( if this is the case , there is nothing to prove ) .this means that .divide all the probabilities by ( ) to get .consider the set .this represents a probability distribution of a source with possible values and we know that the theorem is true for . +this means that the best source approximation for this new distribution is a source with probability distribution .+ in other words , this means : + where and are both different from and .multiply on both sides by and simplify to get : add the term on both sides and we have proved that the * best * -order approximation to is the source , where symbols with the two least probabilities are merged together .we have thus proved the theorem. 
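theorem 1 and the three-step algorithm translate directly into code. the sketch below (ours) performs the successive merging with a heap and then, anticipating the codeword assignment described in the next subsection, labels the two partitions of every split '0'/'1'; on the {0.7, 0.2, 0.1} source of the worked example it yields an average codeword length of 1.3 bits/symbol.

```python
import heapq
from itertools import count

def gls_huffman(probs):
    """successive gls approximation: merge the two smallest partitions until
    one remains, then label each split '0'/'1' (the codewords are symbolic
    sequences on the standard binary map)."""
    tick = count()                     # tie-breaker: never compare merge trees
    heap = [(p, next(tick), sym) for sym, p in enumerate(probs)]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, a = heapq.heappop(heap)
        p2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, next(tick), (a, b)))
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):    # an internal node = a merged partition
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"
    walk(heap[0][2], "")
    return codes

probs = [0.7, 0.2, 0.1]
codes = gls_huffman(probs)
print(codes)                                     # e.g. {2: '00', 1: '01', 0: '1'}
print(sum(p * len(codes[i]) for i, p in enumerate(probs)))  # -> 1.3 bits/symbol
```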
[Figure [figure:huffman1]: (a) the source with probabilities {0.7, 0.2, 0.1}; (b)-(d) the three possible order-2 approximations obtained by merging pairs of partitions. The approximation that merges the two smallest probabilities (0.2 and 0.1) is the closest to the original source in Lyapunov exponent (units of bits/iteration).]

At the end of Algorithm [alg:succsourcegls], we have the order-2 approximation $S_2$. We allocate the code ('0', '1') to its two partitions. When we go from $S_2$ to $S_3$, the two sibling partitions that were merged to form the parent partition get the codes 'w0' and 'w1', where 'w' is the codeword of the parent partition. This process is repeated until we have allocated codewords to $S_N$. It is interesting to realize that the codewords are actually symbolic sequences on the standard binary map. By allocating the code ('0', '1') to $S_2$ we are essentially treating the two partitions as having equal probabilities, although they may be highly skewed; in fact, we are approximating the source by a GLS with equal partitions (0.5 each), which is the standard binary map. The code is thus the symbolic sequence on the standard binary map. Moving up from $S_2$ to $S_3$ we make the same approximation: we treat the two sibling partitions as having equal probabilities and give them the codes 'w0' and 'w1', which are the symbolic sequences for those two partitions on the standard binary map. Continuing in this fashion, we see that all the codes are symbolic sequences on the standard binary map. Every alphabet symbol of the source is _approximated_ by a partition on the binary map, and the codeword allocated to it is the corresponding symbolic sequence. It will be proved that the approximation is minimum-redundancy, and as a consequence, if the probabilities are all powers of 2, then the approximation is not only minimum-redundancy but its average codeword length also equals the entropy of the source.

*Theorem 2 (successive source approximation):* _The successive source approximation algorithm using GLS yields minimum redundancy (i.e., it minimizes $L_{avg}$)._

*Proof:* We make the important observation that the successive source approximation algorithm is in fact a re-discovery of the binary Huffman coding algorithm, which is known to minimize $L_{avg}$ and hence yields minimum redundancy. Since our algorithm is essentially a re-discovery of binary Huffman coding, the theorem is proved (the codewords allocated in the previous section are the same as Huffman codes).

We have described how, by successively approximating the original stochastic i.i.d. source using GLS, we arrive at a set of codewords for the alphabet which achieves minimum redundancy. The assignment of symbolic sequences as codewords to the alphabet of the source is the process of encoding. Thus, given a series of observations of the source, the measuring device represents and stores these as codewords. For decoding, the reverse process is applied: the codewords are replaced by the observations. This can be performed by another device which has a look-up table consisting of the alphabet set and the corresponding codewords assigned originally by the measuring device.
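The whole construction, successive merging of the two least probable partitions followed by the codeword unwinding just described, can be sketched compactly. This is our own illustrative implementation (names and data are ours) and, consistent with Theorem 2, it reproduces a binary Huffman code:

```python
import heapq
import itertools

def successive_approximation_code(probs):
    # Heap entries: (probability, tiebreak, node); a node is either a source
    # symbol index or a pair of child nodes (a merged partition).
    counter = itertools.count()
    heap = [(p, next(counter), i) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    while len(heap) > 1:                 # merge the two smallest partitions
        p1, _, a = heapq.heappop(heap)
        p2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, next(counter), (a, b)))
    codes = {}
    def unwind(node, word):              # children get parent's word + '0'/'1'
        if isinstance(node, tuple):
            unwind(node[0], word + '0')
            unwind(node[1], word + '1')
        else:
            codes[node] = word or '0'    # degenerate one-symbol source
    unwind(heap[0][2], '')
    return codes

probs = [0.7, 0.2, 0.1]
codes = successive_approximation_code(probs)
print(codes)                                               # e.g. {2:'00', 1:'01', 0:'1'}
print(sum(p * len(codes[i]) for i, p in enumerate(probs)))  # L_avg = 1.3 bits
# Decoding uses the inverse look-up table; the code is prefix-free.
decode_table = {w: s for s, w in codes.items()}
```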
We make some important observations and remarks here:

1. The faithful modeling of a stochastic i.i.d. source as a GLS is a very important step. It ensures that the Lyapunov exponent captures the information content (Shannon entropy) of the source.
2. Codewords are symbolic sequences on the GLS. We could have chosen a different scheme for assigning codewords than the one described here; for example, we could have chosen symbolic sequences on the tent map as codewords. This would correspond to a different set of Huffman codes, but with the same average codeword length. Huffman codes are not unique but depend on the way codewords are assigned at every level.
3. Huffman codes are _symbol codes_, i.e., each symbol in the alphabet is given a distinct codeword. We have investigated binary codes in this paper; an extension of the proposed algorithm to ternary and higher bases is possible.
4. In another related work, we have used GLS to design _stream codes_. Unlike symbol codes, stream codes encode multiple symbols at a time, so individual symbols in the alphabet no longer correspond to distinct codewords. By treating the entire message as a symbolic sequence on the GLS, we encode the initial condition, which contains the same information. This achieves optimal lossless compression, as demonstrated in earlier work.
5. We have extended GLS to piecewise non-linear, yet Lebesgue-measure-preserving, discrete chaotic dynamical systems. These have very interesting properties (such as robust chaos in two parameters) and are useful for joint compression and encryption applications.

The source coding problem was motivated as a measurement problem. A stochastic i.i.d. source can be faithfully "embedded" into a piecewise linear chaotic dynamical system (GLS) which exhibits interesting properties. The Lyapunov exponent of the GLS is equal to the Shannon entropy of the i.i.d. source. The measurement problem is addressed by successive source approximation, using the GLS with the nearest Lyapunov exponent (obtained by merging the two least probable states). By assigning symbolic sequences as codewords, we re-discovered the popular Huffman coding algorithm, a minimum-redundancy symbol code for i.i.d. sources.

Nithin Nagaraj is grateful to Prabhakar G. Vaidya and Kishor G. Bhat for discussions on GLS. He is also thankful to the Department of Science and Technology for funding this work as part of the Ph.D. program at the National Institute of Advanced Studies, Indian Institute of Science Campus, Bangalore. The author is indebted to Nikita Sidorov, Mathematics Department, University of Manchester, for providing references to Cantor's work.

W. Ebeling and M. A. Jiménez-Montaño, Math. Biosci. 52, 53 (1980); M. A. Jiménez-Montaño, Bull. Math. Biol. 46, 641 (1984); P. E. Rapp, I. D. Zimmermann, E. P. Vining, N. Cohen, A. M. Albano, and M. A. Jiménez-Montaño, Phys. Lett. A 192, 27 (1994); M. A. Jiménez-Montaño, W. Ebeling, and T. Pöschel, preprint arXiv:cond-mat/0204134 [cond-mat.dis-nn] (2002).

N. Nagaraj and P. G. Vaidya, in Proceedings of Intl. Conf. on Recent Developments in Nonlinear Dynamics 2009, Narosa Publishing House, edited by M. Daniel and S. Rajasekar (School of Physics, Bharathidasan University, 2009), p. 393; N. Nagaraj, Ph.D. thesis, National Institute of Advanced Studies, 2009.
In this paper, source coding or data compression is viewed as a measurement problem. Given a measurement device with fewer states than the observable of a stochastic source, how can one capture the essential information? We propose modeling stochastic sources as piecewise linear discrete chaotic dynamical systems known as Generalized Lüroth Series (GLS), which date back to Georg Cantor's work in 1869. The Lyapunov exponent of a GLS is equal to the Shannon entropy of the source (up to a constant of proportionality). By successively approximating the source with a GLS having fewer states (with the closest Lyapunov exponent), we derive a binary coding algorithm which exhibits minimum redundancy (the least average codeword length with integer codeword lengths). This turns out to be a re-discovery of Huffman coding, the popular lossless compression algorithm used in the JPEG international standard for still image compression.
As early as 1935, Schrödinger wrote: _"The rejection of realism has logical consequences. In general, a variable has no definite value before I measure it; then measuring it does not mean ascertaining the value that it has. But then what does it mean?"_ As the advent of quantum mechanics solved the long-standing problem of providing an adequate description for several important and unexplained experiments, the problem of realism in quantum mechanics was initially perceived mainly as a challenge to the construction of a new philosophy of natural science. In support of this perception is the fact that almost all later theoretical advances with experimental consequences came about without any serious progress on this very basic problem. Yet at the same time, a growing number of people recognized that progress on this problem would likely have deep consequences for the quantum-classical transition, for the attempt to produce a successful unification of quantum mechanics and relativity theory, and for the related problem of quantum cosmology. Halfway through the sixties, two important advances were made. In 1964, John Bell showed that any local hidden variable theory will yield predictions that are at odds with quantum mechanics. A few years later, Kochen and Specker presented an explicit set of measurements for which the simultaneous attribution of values to each of these measurements leads to a logical contradiction. The two results can be regarded as opposite faces of the same coin. Whereas Bell's result can be verified (or refuted) by experiment, Kochen and Specker's argument shows the problem to also be a deeply rooted theoretical one. These two results have been of such importance that the notion of realism in quantum physics is usually automatically taken to mean either 'locally realistic' (Bell) or 'the impossibility of attributing predetermined outcome values to the set of observables' (Kochen and Specker). The apparent lack of realism in quantum mechanics has been illustrated again and again by clever theoretical constructions, ranging from Bell-type arguments to impossible coloring games, and by the countless attempts to produce an as-loophole-free-as-possible experimental verification of these arguments. However, the commonly accepted notion that measuring a variable does not mean ascertaining the value that it has does not imply that the occurrence of a particular outcome has _no_ meaning. Every proper quantum experiment is a testimony to the contrary, for if a single outcome had no informational content about the system at all, then how are we to derive anything at all from the sum of a great number of informationally empty statements? Whether we perform a tomographic state reconstruction or experimentally estimate the value of a physical quantity of a system, we accept that in a well-constructed experiment every outcome presents a piece of information, a piece of evidence, that brings us closer to the true state of affairs, whatever that may be. To give a more detailed answer to the question, we need a model that shows _how_ a single outcome is obtained. We will provide such a model in an attempt to understand the meaning of the occurrence of a single outcome in a quantum mechanical experiment.
More specifically, we will show that an observer actively seeking to minimize his own influence on the produced outcome will, with the aid of Bayesian decision theory, give outcomes whose relative frequency converges to the Born rule in a natural way. This in turn gives us a possible interpretation for the occurrence of a particular outcome.

Let us assume we have a system for which we write $\Sigma$ to denote its set of states, and an observable that can take any single outcome out of $m$ distinct values in the outcome set $O_m = \{o_1, \ldots, o_m\}$. At the most trivial level, there is a counting measure on the set of outcomes. If $\mathcal{P}(O_m)$ denotes the set of all subsets of $O_m$, then the probability that a measurement of the observable on the system in a state $\psi$ yields an outcome in a given subset $\sigma \subseteq O_m$ is a mapping
$$P(\sigma|\psi): \mathcal{P}(O_m) \times \Sigma \to [0,1]$$
such that for disjoint $\sigma_i, \sigma_j$ we have
$$P(\sigma_i \cup \sigma_j\,|\,\psi) = P(\sigma_i|\psi) + P(\sigma_j|\psi).$$
The additive property is generally accepted both in quantum and classical probability, and provides the rationale for the use of normalized states, that is, states that satisfy
$$P(O_m|\psi) = 1.$$
In this way, normalization reduces the number of free parameters in state space by one. We have written $P(\sigma|\psi)$ to emphasize that it represents the probability that the outcome obtains when (we know that) the system is prepared in the state $\psi$. The classical interpretation for the arisal of probabilities is a lack of knowledge about the precise state being measured. From a naive epistemic perspective, the outcome is then an objective attribute of each measured state, and the probability related to each outcome is simply the fraction of states _having_ the corresponding attribute in the ensemble of systems that we measure. As indicated in the introduction, such an interpretation of the probabilities in quantum mechanics is problematic: even for a single spin-1/2 particle, one can show that three measurements suffice to exclude such an interpretation, without taking recourse to locality issues.

In orthodox quantum mechanics, the state space is the complex Hilbert space. The set of states of the observed system that we will consider is the set of unit vectors in an $n$-dimensional Hilbert space; as usual, the norm is defined through the (sesquilinear) inner product, which we denote $\langle \cdot | \cdot \rangle$. Alternatively, one can take rays or even density operators as the states; since both lead to essentially the same results, we will stick to unit-norm vectors. Let $\mathcal{L}$ be the set of linear operators that act on the elements of the Hilbert space; an observable is then represented by a self-adjoint element of $\mathcal{L}$. In this presentation, we assume the observable has a discrete, finite, non-degenerate spectrum, which implies that eigenvectors belonging to different eigenvalues are orthogonal (we treat eigenvectors with the same eigenvalue as the same eigenvector). Because the spectrum is assumed non-degenerate, the set of eigenvectors $\{e_1, \ldots, e_n\}$ is a basis, or complete orthonormal frame. From linear algebra we know that an arbitrary element $\psi$ of the Hilbert space can be written in this frame as
$$\psi = \sum_{k=1}^{n} c_k e_k.$$
If $\psi$ satisfies the normalization condition, it lies on the unit sphere and the coefficients obey $\sum_k |c_k|^2 = 1$. One can easily verify that the observable can be written as a sum of projectors onto the eigenvectors, weighted by the eigenvalues; hence the observable is in one-to-one correspondence with an orthonormal frame of eigenvectors, and we will represent the observable by its associated frame. Throughout this paper, we reserve superscripts of states as a mnemotechnical aid for system recognition (i.e., $\psi^S$ is a system state and $\phi^A$ the state of the measurement apparatus) and subscripts of states to denote eigenstates.
If a system is in an eigenstate corresponding to outcome $o_k$, we denote the corresponding eigenstate $e_k$. For an eigenstate, and also for a statistical mixture of eigenstates, the classical interpretation of probability as the proportion of systems having the corresponding attribute is tenable. The more interesting case, however, is the probability for the occurrence of an outcome when the system is in a general state $\psi = \sum_k c_k e_k$, which is given by the Born rule:
$$P(o_k|\psi) = |\langle e_k | \psi \rangle|^2 = |c_k|^2.$$
The analog of the classical situation would be that $\psi$ represents a mixture of states that have the attribute in the right proportion such that the Born rule holds. However, the Born rule holds even when the system is in a pure state, i.e., a state which cannot be obtained as a statistical mixture of other states. We will show that it is possible to regard the probabilities as arising from a lack of knowledge about the detailed state of the observer, if the observer actively attempts to choose the outcome that maximizes a specific likelihood ratio that we will present shortly.

Let us first define what we mean by an observer. An observer is a physical system that takes a question as input and yields in reply an outcome which is a member of a discrete set. This outcome can be freely copied, and hence communicated to many other observers. In general, this definition of observer includes the experimental setup, apparata, sensors, and the human operator. It is, however, quite irrelevant for our purposes whether we consider an apparatus or a detector, an animal or a human being as the observer, as long as we agree that it is this system that has produced the outcome. We furthermore assume the observer comes to this outcome through _a physical, deterministic interaction_. That is, if we have perfect knowledge of the initial state of the system and of the potentials that act on the system, we can in principle predict the future state of the system perfectly. Besides the fact that all fundamental theories of physics (even classical chaotic systems and quantum dynamics) postulate deterministic evolution laws, the requirement of determinism allows probability to be derived as a secondary concept. So let us assume that the outcome of an observation is the result of a deterministic interaction
$$f: \Sigma \times \Sigma^A \to O_m,$$
where $f$ is the interaction rule, $\Sigma$ is the set of states of the observed system, $\Sigma^A$ the set of states of the observing system, and $O_m$ the set of outcomes that the observable can have. We will deal only with a single observable, so no further notational reference is made to the particular observable. The mapping $f$ encodes how an observer in a state $\phi^A$, observing a system in the state $\psi^S$, comes to the outcome $f(\psi^S, \phi^A)$. Because our observer is deterministic, we assume $f$ is single-valued. Probability will only arise as a lack of knowledge about deterministic events. The observer faces the task of selecting an outcome from the set $O_m$ that tells something about the system under observation. But the outcome is always formulated by the observer; it has to be encoded somehow in the state of the observer after the observation. Hence the outcome _itself_ is also an observable quantity of the post-measurement state of the observer. The outcome will then have to share its story between the two participating systems that gave rise to its existence: it will always have something to say about both the observer _and_ the system under study.
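As a concrete illustration of the Born rule quoted earlier in this section (a minimal numpy sketch of standard textbook material; the basis choice and numbers are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random normalized state in an n-dimensional Hilbert space, expanded in
# the eigenframe of the observable (taken here to be the standard basis).
n = 4
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)

# Born rule: p_k = |<e_k|psi>|^2; in the standard basis these are just the
# squared moduli of the expansion coefficients c_k.
p = np.abs(psi) ** 2
print(p, p.sum())   # non-negative weights summing to 1
```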
In earlier work it was shown, by a diagonal argument, that even in the most simple case of a perfect observer observing only classical properties of a system, there exist classical properties pertaining to the observer himself that he cannot perfectly observe. (A property of a system in a given state is actual iff testing that property in that state would yield an affirmation with certainty. A property is called classical when the outcome of the observation testing it was predetermined by the state of the system, whatever that state was, prior to the test. For a classical property we can define a negation in the lattice of properties that is simply the Boolean NOT; a property is then classical iff for each state either the property or its negation is actual. For details, see the cited work.) More specifically, even if the observer can observe a given (classical) property perfectly, he cannot perfectly observe _that_ he observes this classical property perfectly. There is no logical certainty with respect to the faithfulness of a single-shot, deterministic observation. On the other hand, observation is an absolutely indispensable part of doing science; hence it is only natural that every scientist believes that faithful observation can and does indeed occur. Living in the real world, somewhere between the extremes of the ideal and the impossible, we wonder whether there is a strategy for the observer that guarantees each outcome he picks uses his observational powers to the best of his ability. Rather than attempting to measure observables in a single trial of an experiment, our observer turns to a new strategy. First he prepares an ensemble of a large number of identical system states. Next he interacts with each of the members of this ensemble in turn. For each and every single interaction, he picks the outcome that somehow 'has the largest likelihood' of pertaining to the system. By randomizing his probe state and picking the outcomes in this way, the observer hopes to restore objectivity, so that he will eventually obtain information that pertains solely to the system under observation.

To calculate the probabilities within the deterministic setting of the previous section, eq. ([interaction]), is in principle straightforward. The experiment our observer performs is a repeated one, in which the set of states of the system under study is reduced to the singleton $\{\psi^S\}$, and the set of states for the observer is the whole of $\Sigma^A$. The set of states of the observer that leads to a given outcome $o_k$ when the observer observes a system in the state $\psi^S$ will be denoted
$$A_k(\psi^S) = \{\phi^A \in \Sigma^A : f(\psi^S, \phi^A) = o_k\}.$$
By the single-valuedness of $f$ in ([interaction]), we have for $j \neq k$ that $A_j(\psi^S) \cap A_k(\psi^S) = \emptyset$. We assume that the act of observation of an observable leads to an outcome for every state of the system investigated, so that $\bigcup_k A_k(\psi^S) = \Sigma^A$. In this way $f$ defines, in a trivial way, a _partition_ of the state space of the observer, with each member of the partition belonging to exactly one outcome. We are now ready to introduce probability.
With the $\sigma$-algebra of Borel subsets of $\Sigma^A$ (which we tacitly assume includes $A_k(\psi^S)$ for every $k$), we define a probability measure $\mu$ that acts on the measure space: for any two disjoint Borel sets $\sigma_i, \sigma_j$ we have
$$\mu(\sigma_i) \in [0,1], \qquad \mu(\sigma_i \cup \sigma_j) = \mu(\sigma_i) + \mu(\sigma_j), \qquad \mu(\Sigma^A) = 1.$$
In order to calculate the probability of an outcome, we need to evaluate the probability measure over the set of states of the observer giving rise to the outcome $o_k$ when they interact with a state $\psi^S$:
$$P(o_k|\psi^S) = \mu(A_k(\psi^S)).$$
This last formula is fundamental to this paper. It says that, for a repeated experiment on a set of identical pure system states, the probability is given as the proportion of observer states that, given $\psi^S$, tell the outcome is $o_k$, relative to the totality of observer states. Note that the sets $A_k(\psi^S)$ are _not_ sets of eigenvectors in the algebraic sense of the word; they are therefore called eigensets. However, if it happens to be the case that, for a given $k$ and for almost every $\phi^A$, we have $f(\psi^S, \phi^A) = o_k$, in the sense that $\mu(A_k(\psi^S)) = 1$ for that particular $\psi^S$, then the state thus defined will coincide with a regular eigenvector if the state space is a Hilbert space. The relation between ([prob 2 prescript]) and ([prob prescript]) is through the mapping $f$ and the measure $\mu$; it is obvious that ([prob 2 prescript]) is additive in the outcomes too, because of ([eigpart 1]). Hence, if the probabilities of ([prob 2 prescript]) and ([prob prescript]) coincide for every single outcome (the singletons in $\mathcal{P}(O_m)$), they will coincide for all of $\mathcal{P}(O_m)$. In what follows we will therefore restrict our discussion to the probability related to the occurrence of a _single_ outcome. In conclusion, the success of the program to model the probabilities in quantum mechanics as coming from a lack of knowledge about the precise state of the observer stands or falls with the question of defining a natural mapping $f$ (which determines the outcome and hence the eigensets) such that the measure of the eigenset pertaining to outcome $o_k$ is identical with the probability obtained by the Born rule ([born]).

We can see from ([probability]) that the system state can be associated with a probability in a fairly trivial way: the probability of an outcome when the system is in a pure state is the proportion of observer states that attribute that outcome to the state. Even for a repeated measurement on a set of identical pure states, fluctuations in the outcomes can arise if there is a lack of knowledge concerning the precise state of the observer. Suppose now the observer, considered as a system in its own right, is in a state $\phi^A$.
Then in exactly the same way we can associate a probability with that state too. The operational meaning of this association is given either by a secondary observer observing an ensemble of observers in the state $\phi^A$, or by the observer consistently (mis)identifying his own state as a state of the system. We have argued that every outcome will say something about the observer (that is, about $\phi^A$) and something about the system (that is, about $\psi^S$). The problem is that this information is mixed up in a single outcome. Some outcomes will contain more information about the state of the system, and some more about the state of the apparatus. Eventually we, as operators of our detection apparatus, will have to decide whether to retain a given outcome or reject it. Such decisions are a vital part of experimental science. For example, an outcome that is deemed too far off the limit (a so-called _outlier_) is rejected and hence excluded from the subsequent analysis. The rationale for this exclusion is that an outlier does not contain information about the system we seek to investigate, but rather represents a peculiarity of the measurement. In practice, rejection or acceptance of an outcome does not depend on a rational analysis, but on the common sense and expectations of the experimenter. Suppose, however, that the observer does have absolute knowledge about the state of the system and his own state, and recognizes the fact that the outcome he delivers may eventually be rejected. The observer considers this rejection to be based on binary hypotheses: the outcome pertains to the system, or the outcome pertains to the observer. In full, the hypotheses should actually read: the outcome is obtained as a consequence of the observer attributing the state $\psi^S$ (respectively $\phi^A$) to the system. To combat rejection, the observer chooses the outcome that maximizes the likelihood that the first hypothesis prevails, _as if the outcome he delivers will eventually be judged for acceptance or rejection by someone with absolute knowledge about_ $\psi^S$ _and_ $\phi^A$. If, in an experiment, it is possible (with non-vanishing probability) to get an outcome under either hypothesis, then a factual occurrence of this outcome in an experiment supports _both_ hypotheses simultaneously. What really matters in deciding between the two hypotheses on the basis of a single outcome is not the probability of the correctness of each hypothesis itself, but rather whether one hypothesis has become _more likely_ than the other as a result of getting the outcome. From Bayesian decision theory, we have that all the information in the data that is relevant for deciding between the two hypotheses is contained in the so-called likelihood ratios or, in the binary case, the _odds_:
$$\Lambda_k = \frac{P(o_k|\psi^S)}{P(o_k|\phi^A)}.$$
In this last formula, the numerator and denominator are given by ([probability]). We are now in a position to state our proposed strategy for the Bayes-optimal observer. We call a system in a state $\phi^A$ a _Bayes-optimal observer_ iff, after an interaction with a system in a state $\psi^S$, the state of the observer transforms to a state that expresses the outcome corresponding to the maximal likelihood ratio ([odds]). Picking the outcome from $O_m$ that maximizes the corresponding likelihood ratio is simply optimizing the odds, given the observer's information. This concludes our description of the observer. To see what probability arises for a repeated experiment when an observer is Bayes-optimal, we need a state space. We are especially interested in complex Hilbert space, but we will first have a look at statistical mixtures.
If the conditional probabilities are well-defined (which we will just accept for now), we can summarize them in a single vector $t = (t_1, \ldots, t_n)$ with $t_k = P(o_k|\psi^S)$. We define the convex closure of a number of elements $a_1, \ldots, a_m$ of $\mathbb{R}^n$ as
$$[a_1, \ldots, a_m] = \Big\{a \in \mathbb{R}^n : a = \sum_i \lambda_i a_i,\ 0 \le \lambda_i \in \mathbb{R},\ \sum_i \lambda_i = 1\Big\}.$$
If we write $e_k$ for the $k$-th basis vector, let $[\Delta]$ be the convex closure of $\{e_1, \ldots, e_n\}$ and let the eigensets $[C_k^S]$ be given by ([real eigensets]); then
$$[\Delta] = \bigcup_k [C_k^S].$$
The proof of this lemma can be found in Appendix A. To obtain the probability ([pbo]), we calculate the measure of $[C_k^S]$. If $\mu$ is a (probability) measure such that $\mu([\Delta]) = 1$, and $[C_k^S]$ is defined by the convex closure ([eigset]), then we have
$$\mu([C_k^S]) = t_k.$$
Because the eigensets cover $[\Delta]$, by the second lemma we have $P(o_k|\psi^S) = \mu([C_k^S]) = t_k$: the intersections $[C_j^S] \cap [C_k^S]$ for $j \neq k$ have measure zero, the relevant mapping is continuous, and the elements of $[\Delta]$ are exhausted by the eigensets.

Suppose furthermore that an observing system is said to satisfy the _linear mixture property_ iff
$$P\big(o_k \mid \lambda\,\psi_1 + (1-\lambda)\,\psi_2\big) = \lambda\,P(o_k|\psi_1) + (1-\lambda)\,P(o_k|\psi_2);$$
in words: the probability of a mixture equals the mixture of the probabilities. Does the Bayes-optimal observer satisfy the linear mixture property? Well, such a state is a statistical mixture, as defined in the section on Bayes-optimal observation of statistical mixtures, and each of the constituents in the mixture is a pure state, as defined in the section on Bayes-optimal observation in Hilbert space. So clearly, our Bayes-optimal observer satisfies the linear mixture property. In essence, this stems from his initial states being uniformly random (almost everywhere). Indeed, suppose the distribution of the initial states of the observer is _not_ uniform a.e. Then one can always find a convex region in state space, of positive surface measure, on which the density of observer states differs from the uniform density. Without giving a formal proof, one can see that it is then always possible to find two states and a real number $\lambda \in\ ]0,1[$ for which the linear mixture property is violated.

In the proof, $\delta_{n-2}^S(j,k)$ denotes the face shared by the two simplices $[C_j^S]$ and $[C_k^S]$, and the relevant quantity is the smallest Euclidean distance between a point and each point of the face, which is proportional to the norm of the orthogonal projection of that point onto a unit vector perpendicular to the face. In $\mathbb{R}^n$ no unique vector is perpendicular to the face (which only has affine dimension $n-2$), but as long as we stick to the same vector for both simplices, the same constant of proportionality will apply, and the ratio of the two distances eliminates that constant. Pick a convenient unit-norm base vector perpendicular to the face; the orthogonal projection of the top of each simplex onto it can then be computed directly.
For the top of each simplex, the relevant vector is the top itself, and its projection follows directly; hence we obtain the desired ratio of the two heights.

We start with the first inclusion. Suppose a point is in one of the open eigensets. Then, by definition, there exist coefficients such that ([a]) holds; on the other hand, there exist coefficients such that ([stat state]) holds. Substitution of ([a]) into ([s]) yields the likelihood ratios ([odds]), and one easily sees that the maximality condition holds iff the defining inequality of the eigenset is satisfied, which is the case by assumption. Hence, by ([real eigensets]), every such observer state gives the corresponding outcome, establishing the result. For the second inclusion, suppose there exists some point in the overlap
$$[C_j^S] \cap [C_k^S] = \delta_{n-2}^S(j,k).$$
Assume first that the point is not in the boundary of the eigenset; the boundary case follows because the shared faces have measure zero. Let $U$ be an arbitrary open convex set in the target space. Evidently, the result holds for $U$; letting $V$ be the pull-back of $U$ under the mapping, the theorem holds for convex sets. This conclusion can readily be extended to an arbitrary $n$-dimensional rectangle set, whose measure factorizes coordinate-wise. Considering $n$-tuples of complex numbers, the measure of a rectangle set can be factorized in the same way, so the theorem holds for an arbitrary rectangle set. But every open set can be written as a pairwise disjoint countable union of rectangular sets, and it follows that the statement holds for all open sets. Both measures involved are finite Borel measures, because their supports are compact subsets of a vector space of countable dimension; therefore they must be regular measures, which are completely determined by their behavior on open sets. Hence the mapping is measure preserving for Borel sets.
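The geometric mechanism described above lends itself to a numerical check. The sketch below is a toy reconstruction under our own simplifying assumptions, not the paper's formal construction: we take observer states uniformly distributed over the probability simplex, and take the eigenset of outcome k to be the sub-simplex spanned by the Born vector t and the basis vectors e_j with j ≠ k. Membership in that sub-simplex reduces to k = argmin_j x_j/t_j, and the volume of the sub-simplex equals t_k, so the relative frequencies should converge to the Born weights:

```python
import numpy as np

rng = np.random.default_rng(1)

# Born weights of the system state (t_k = |c_k|^2).
t = np.array([0.5, 0.3, 0.15, 0.05])

# Observer states: uniform on the probability simplex (Dirichlet(1,...,1)).
N = 200_000
x = rng.dirichlet(np.ones(len(t)), size=N)

# Outcome rule: x lies in the sub-simplex conv{t, e_j : j != k} exactly
# when x_k / t_k is the smallest of the barycentric ratios x_j / t_j.
outcomes = np.argmin(x / t, axis=1)

freq = np.bincount(outcomes, minlength=len(t)) / N
print(freq)   # ~ [0.5, 0.3, 0.15, 0.05]
```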
We propose a simple abstract formalisation of the act of observation, in which the system and the observer are assumed to be in pure states and their interaction deterministically changes the states such that the outcome can be read from the state of the observer after the interaction. If the observer consistently realizes the outcome which maximizes the likelihood ratio that the outcome pertains to the system under study (and not to his own state), he will be called Bayes-optimal. We calculate the probability when, for each trial of the experiment, the observer is in a new state picked randomly from his set of states, and the system under investigation is taken from an ensemble of identical pure states. For classical statistical mixtures, the relative frequency resulting from the maximum likelihood principle is an unbiased estimator of the components of the mixture. For repeated Bayes-optimal observation in the case where the state space is complex Hilbert space, the relative frequency converges to the Born rule. Hence, the principle of Bayes-optimal observation can be regarded as an underlying mechanism for the Born rule. We show that the outcome assignment of the Bayes-optimal observer is invariant under unitary transformations and contextual, but that the probability resulting from repeated application is non-contextual. The proposal gives a concise interpretation of the meaning of the occurrence of a single outcome in a quantum experiment: it is the unique outcome that, relative to the state of the system, is least dependent on the state of the observer at the instant of measurement.
Imaging of active regions on the far side of the Sun, the side facing away from Earth, is a valuable tool for space weather forecasting, as well as for studying the evolution of active regions. It allows monitoring of active regions before they rotate onto the near side, and after they rotate back onto the far side. Far-side images are produced daily by the acoustic holography technique, using observations from both the Global Oscillation Network Group (GONG) and the Michelson Doppler Imager (MDI) on board the Solar and Heliospheric Observatory (SOHO). The pioneering solar far-side imaging work mapped the central region of the far-side Sun by analyzing acoustic signals along double-skip ray paths on both sides of the far-side region, using helioseismic holography and time-distance helioseismology techniques, respectively. The technique was further developed to map the near-limb and polar areas of the solar far side by combining single- and triple-skip acoustic signals. More recently, a five-skip time-distance imaging scheme was developed, measuring travel times of a combination of double- and triple-skip acoustic wave signals. Combined with the traditionally used four-skip far-side imaging schemes, the new technique greatly reduces the noise level in far-side images and helps to remove some of the spurious features visible in the four-skip images. In general terms, far-side imaging by time-distance helioseismology detects changes in the travel time of acoustic waves traveling through an active region compared to those traveling only through the quiet Sun, while helioseismic holography detects phase shifts in acoustic wave signals. The exact mechanisms by which the presence of an active region causes the observed variations are not fully understood, although it is generally believed that a change of the magnetoacoustic wave speed inside active regions plays an important role. It has also been argued that strong surface magnetic fields associated with active regions may affect inferences obtained by the acoustic holography technique; however, it has been shown that these effects are not a major factor in the determination of the interior structure of sunspots by time-distance helioseismology. Far-side imaging has been successful in predicting the appearance of large active regions and complexes of activity. However, it is unclear how robust and accurate the far-side imaging techniques are, and how much we should believe the far-side images that are being produced daily. Past efforts have tried to evaluate the accuracy of far-side images by comparing them with directly observed Earth-side images just after active regions have rotated into view from the far side, or just before they rotate out of view onto the far side. However, active regions may develop quite fast, emerging or disappearing on a time scale of days or even less; therefore, such analyses are not sufficient. Numerical modeling of solar oscillations, on the other hand, can provide artificial data for evaluating and improving these methods.
In a global solar model, we can place near-surface perturbations mimicking active regions on the far side of the modeled Sun, and apply helioseismic imaging techniques to the simulated wavefield. The resulting far-side images can be compared directly with the precisely known properties of the perturbations, allowing a more accurate evaluation of the capabilities and limitations of the far-side imaging techniques. In this paper, we present results of testing the recently improved time-distance helioseismology far-side imaging technique using 3D numerical simulations of oscillations of the global Sun. We assess the sensitivity of the imaging technique by varying the size and location of a sound-speed perturbation mimicking a single active region. In other simulations, we place two active regions at the solar surface in order to examine whether acoustic waves traveling through one active region interfere with the imaging of the other. Finally, we identify one scenario in which artifacts ("ghost images") caused by an active region on the near side appear in the far-side maps. A brief description of the simulation technique is given in § 2, followed by a description of the far-side imaging procedure in § 3. The main results are presented in § 4, and a discussion and concluding remarks are given in § 5.

In the following, we briefly describe the numerical simulation code used in this study. For more details, the reader is referred to a detailed description of the code, which will be published soon (Hartlep & Mansour, in preparation). Simulating the 3D wavefield in the full solar interior is not an easy task, and many simplifications have to be made to make such simulations feasible on currently available supercomputer systems. For the present case, we model solar acoustic oscillations in a spherical domain using linearized Euler equations, and consider a static background in which only localized variations of the sound speed are taken into account. The oscillations are assumed to be adiabatic, and are driven by randomly forcing density perturbations near the surface. For the unperturbed background model of the Sun, we use the standard solar Model S, matched to a model of the chromosphere. Localized sound-speed perturbations of various sizes are added in the surface and subsurface layers to mimic the perturbations of the wave speed associated with sunspots and active regions. Non-reflecting boundary conditions are applied at the upper boundary by means of an absorbing buffer layer, with a damping coefficient that is zero in the interior and increases smoothly into the buffer layer. The linearized Euler equations describing wave propagation in the Sun are written in terms of the density perturbation and the divergence of the momentum perturbation associated with the waves; the equations further involve a random source function mimicking acoustic sources, the background sound speed, the acceleration due to gravity, and the damping coefficient of the absorbing buffer layer. Perturbations of the gravitational potential have been neglected, and the adiabatic approximation has been used.
In order to make the linearized equations convectively stable, we have neglected the entropy gradient of the background model. Calculations show that this assumption does not significantly change the propagation properties of acoustic waves, including their frequencies, except for the acoustic cut-off frequency, which is slightly reduced. This is quite acceptable for our purpose, because the part of the spectrum actually used in the far-side imaging technique lies well below this cut-off frequency. For comparison, other authors have modified the solar model, including its sound-speed profile; in those cases, the oscillation mode frequencies may differ significantly from the real Sun's frequencies. Starting from eqs. ([eq:e1]) and ([eq:e2]), we absorb the damping terms into the other terms by use of an integrating factor, and apply a Galerkin scheme for the numerical discretization. Spherical harmonic functions are used for the angular dependencies, and 4th-order B-splines for the radial direction. 2/3-dealiasing is used in the angular-space computation of the term involving the spatially varying sound speed, while all other operations are performed in spherical-harmonic coefficient space. The radial resolution of the B-spline method is varied in proportion to the local speed of sound, i.e., the generating knot points are closely spaced near the surface (where the sound speed is small) and coarsely spaced in the deep interior (where the sound speed is large). The simulations presented in this paper employ spherical harmonics of angular degree from 0 to 170, and 300 B-splines in the radial direction. A staggered Yee scheme is used for time integration, with a time step of 2 seconds. The oscillation power spectrum as a function of spherical harmonic degree, computed for one of the simulations, is shown in Figure [fg1]. The frequencies of the ridges correspond well to the frequencies observed on the Sun. As noted before, the model has a lower cut-off frequency, but this does not pose a problem for our purposes. Figure [fg1] also shows a time-distance diagram (i.e., the mean cross-covariance function) calculated from the same simulation data. Even though no filtering has been applied in computing the time-distance correlations, both the four-skip and five-skip acoustic signals needed for the far-side imaging technique are clearly visible. In fact, these correlations are stronger than in observational data, where it is essential to filter out other unwanted wave components. The acoustic travel times are fairly close to those found in solar observations: even for the long travel times of four- and five-skip signals, the discrepancy between the simulations and the observations is only about 1.2 minutes, or 0.2 percent. Solar active regions are complex structures and are believed to differ from the quiet Sun in their temperature, density, and sound-speed distributions, and to include complicated flow and magnetic field configurations. The acoustic wave-speed variation inside active regions, due to temperature changes and magnetic fields, has, for obvious reasons, a very direct effect on the travel times. For this investigation, we model active regions by local sound-speed perturbations, which represent the combined temperature and magnetic effects, and leave the inclusion of plasma flows to a later investigation.
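The elementary operation behind the time-distance diagram in Figure [fg1], cross-correlating the oscillation signals at two locations and reading the travel time from the lag of the correlation peak, can be illustrated on synthetic data. This is a deliberately simplified stand-in, not the simulation or analysis code; the signal model and all parameter values are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

dt = 60.0        # cadence in seconds (1 minute, as for MDI)
n = 1024         # number of samples (a 1024-minute series)
lag_true = 25    # assumed travel time between two points, in samples

# Toy wave signal: band-limited noise around 3 mHz, observed at point A
# and, delayed, at point B, each with independent additive noise.
freqs = np.fft.rfftfreq(n, dt)
band = np.exp(-((freqs - 3e-3) / 1e-3) ** 2)
source = np.fft.irfft(band * (rng.normal(size=freqs.size)
                              + 1j * rng.normal(size=freqs.size)), n)
a = source + 0.5 * rng.normal(size=n)
b = np.roll(source, lag_true) + 0.5 * rng.normal(size=n)

# Cross-covariance as a function of time lag; its peak estimates the
# travel time (a Gabor-wavelet fit would refine this to a phase time).
cc = np.correlate(b - b.mean(), a - a.mean(), mode='full')
lags = np.arange(-n + 1, n)
print(lags[np.argmax(cc)] * dt / 60.0, 'minutes')   # ~ lag_true
```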
Since the main goal of current far-side imaging efforts is to detect the locations of active regions and estimate their sizes, this is quite sufficient. We model a solar active region by a circular region in which the sound speed differs from the quiet-Sun sound speed by a perturbation that depends on the angular distance from the center of the active region and on the radial distance from the photosphere; the radial profile of the prescribed sound-speed perturbation is shown in Figure [fg2]. The profile was derived from inversions of time-distance measurements of an actual sunspot, and has been confirmed by a number of other local helioseismology inversions. Some of these studies have shown that the significant sound-speed perturbation associated with the sunspot structure probably extends deeper than originally inferred. Also, investigations of large active regions have indicated that the perturbations extend significantly deeper than those of a relatively small and isolated sunspot. Therefore, we extended this profile into the deeper layers, as shown in Figure [fg2]. The simulations have been performed for three different active-region horizontal sizes, corresponding to radii at the solar surface of 45, 90 and 180 Mm, respectively. Effects of variations of the structure with depth, or of the strength of the perturbations, have not been studied. The solar far side has previously been imaged using medium-degree data acquired by SOHO/MDI. MDI medium-degree data consist of line-of-sight photospheric velocity images with a cadence of 1 minute and a fixed spatial sampling per pixel (hereafter, degree means heliographic degree). The data are mapped into heliographic coordinates using Postel's projection, and only the central region of the solar disk is used for the far-side imaging analysis. The observational time series were 2048 minutes long. Very similar datasets were generated from the simulations: radial velocity maps were computed at a height of 300 km above the photosphere, approximately the formation height of MDI Dopplergrams, and stored with a 1-minute cadence and a spatial resolution slightly lower than that of the MDI data. The region selected for the analysis was of the same size as in the analysis of the MDI observations. The first 500 minutes of each simulation were discarded as representing transient behavior, and the following 1024 minutes were used in the analysis. This is only half the duration used in the observational analysis, but, as Figure [fg1] shows, the four- and five-skip acoustic signals are sufficiently strong to perform the far-side analysis even with such a relatively short period. The rest of the procedure for the simulation data is the same as for the observations. After the remapping, the data are filtered in the Fourier domain, and only waves that travel long enough to return to the near side from the far side after four or five bounces are kept. The time-distance cross-covariance function is computed for points inside annuli as indicated in Figure [fg3]; the locations and sizes of these annuli depend on the measurement scheme. For the four-skip scheme with the double-double skip combination, the annulus covers a fixed range of distances from the targeted point on the far side.
For the single-triple combination, separate distance ranges are used for the single-skip and the triple-skip signals. For the five-skip scheme, the annulus covers one range of distances from the targeted point for the double skip, and another for the triple skip. The four-skip scheme can recover images extending in longitude past the limb onto the solar near side at either limb, while the five-skip scheme recovers a smaller total range of longitude, somewhat less than the whole far side. As usual, the cross-covariance functions for different distances are combined after appropriate shifts in time based on ray-theory predictions. The final cross-covariance functions are fitted with a Gabor wavelet to derive the acoustic phase travel times for the four- and five-skip schemes separately. After a mean background travel time is subtracted from each map, the residual travel-time maps show variations corresponding to active regions on the far side.

In order to examine the sensitivity of the time-distance far-side imaging technique to the size of active regions, we have simulated the global acoustic wavefield for solar models with sound-speed perturbations of three different radii: 180 Mm (large), 90 Mm (medium), and 45 Mm (small); the radial structure of the sound-speed perturbation was given in § 2. Figure [fg4] presents the case where a medium-sized active region is located at the center of the far side (directly opposite the observer). Both the four- and five-skip measurement schemes recover this far-side region, but with some level of spurious features. The image combining both schemes gives a better active-region image, though it is not completely free of spurious features. The images are displayed with thresholds chosen to isolate the strong negative signals associated with active regions, where the scale is set by the standard deviation of the travel-time perturbations. The original image without thresholding, and the corresponding probability distribution function of the travel-time residuals, are shown in Figure [fg5]. In this particular case the standard deviation is of the order of 12 seconds for the combined image; for comparison, a lower value of 3.3 seconds was found in observations. The noise level depends on the stochastic properties of solar waves and on the length of the data time series, which probably explains the difference in noise levels. This difference is not significant for this study, however, since we measure the signal relative to the noise level. Figure [fg6] shows the same medium-sized active region, now located closer to the far-side limb. Once again, the combined far-side image gives the best result. It is clear from Figures [fg4] and [fg6] that the time-distance technique determines the size and location of far-side active regions well, but fails to image their shape accurately. Figure [fg7] presents the travel-time images combined from the four- and five-skip measurements for the simulations of the large active region. It is evident that the time-distance technique gives the correct size, location, and even shape of the far-side active region for both far-side locations: at the center, and near the limb.
For the case of the small active region (45 Mm radius), time-distance helioseismology imaging fails to provide any credible signature of the region's existence on the far side. The travel-time maps are not shown for this case, since they do not show any significant features. Of course, it should come as no surprise that the imaging technique has a lower limit on the size of active region that can be detected. The time-distance far-side imaging method used in this study employs only oscillation modes with spherical harmonic degrees between 3 and 50. It is conceivable that structures comparable in size to, or smaller than, the horizontal wavelength of the acoustic waves used in the analysis have little effect on such waves; such small structures would be hard or impossible to detect. A simple estimate of the node-to-node distance for a spherical harmonic of degree 50 (the highest used in the analysis) gives about 90 Mm at the surface, or twice the radius of the small active region.

It is quite common for multiple active regions to be present on the Sun. Some of them may produce perturbations of the wave field that interfere with the perturbation from a targeted active region. In order to examine whether different regions interfere with each other in the far-side images, we performed a simulation with two medium-sized active regions located at the solar equator, a fixed angular distance apart. We examined various far-side locations of the active regions; two examples are presented in Figure [fg8]. In all cases we found that the two active regions do not interfere with each other. Both active regions were imaged correctly, as if each were the sole region on the Sun, except that some "ghost images" appeared under certain circumstances. However, such artifacts also appear for a single active region under the same circumstances, as described in the next section. For convenience, the active regions in these numerical experiments were placed on the far-side equator. On the real Sun, though, active regions are often far from the equator, and one may be confronted with additional effects such as foreshortening and line-of-sight projection. These effects are expected to be small, however, because only oscillations of relatively low angular degree are used in the analysis, and because of the rather small observing window around the disk center. To test this expectation, we performed an additional numerical experiment with a medium-sized active region placed at a moderate latitude above the equator, used line-of-sight velocities instead of pure radial velocities, and included the effect of foreshortening. The results were not significantly different from those in Figure [fg4].
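A quick back-of-the-envelope check of the node-to-node estimate quoted above (our own arithmetic; we take the ~90 Mm figure to correspond to the horizontal wavelength at the surface for degree 50):

```python
import math

R_sun = 696.0   # solar radius in Mm
l = 50          # highest spherical harmonic degree used in the analysis

# Horizontal wavelength at the surface for degree l.
wavelength = 2 * math.pi * R_sun / math.sqrt(l * (l + 1))
print(round(wavelength, 1), 'Mm')   # ~86.6 Mm -- about twice the 45 Mm radius
```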
It is found that when an active region is placed at certain locations, a "ghost image" of the active region may appear in the far-side image. Figure [fg9] presents two such examples, in which an active region on the near side is close to the limb. A "ghost image" appears approximately at the antipode of this active region, with a weaker acoustic travel-time signal and a smaller size. Note that because only waves of very low spherical harmonic degree are used in computing far-side images, the spatial resolution of the images is limited; therefore, the "ghost image" may appear several degrees away from the antipode of the region. Given the measurement scheme, it is quite reasonable to expect such an artifact when the active region is located close to the near-side limb. Consider, for example, the single-triple skip combination in the four-skip measurement scheme. If we select an annulus at the single-skip distance from a targeted far-side quiet region, this annulus also lies at a corresponding distance from that quiet region's antipode. If an active region is located there, acoustic waves with travel-time deficits caused by that active region are not filtered out, because their distance range also falls within the triple-skip range of our analysis (compare the annulus radii in § 3).

We have successfully simulated the global acoustic wavefield of the Sun and have used the simulated data to validate the time-distance far-side imaging technique for two measurement schemes, with four and five skips of the acoustic ray paths. We have found that this technique reliably detects our model active regions with radii of 90 Mm and 180 Mm. The locations and sizes of the far-side active regions are determined correctly, although their shapes often differ slightly from the original. Expectedly, larger active regions are easier to detect, and their images are clearer. For the small active region of 45 Mm radius, the far-side imaging method fails, since the region is below the resolution limit. In the case of more than one active region present on the solar surface, we have found that they do not affect each other's detection: the time-distance analysis detects the individual active regions as if they were completely independent. We have also shown that when an active region is located close to the limb on the near side, a "ghost image" may appear in the far-side image, approximately at its antipode, though relatively weak and smaller in size. Even though this effect is not completely unexpected, it has not been noticed in previous analyses of observational data by either helioseismic holography or time-distance helioseismology. This is an important finding, and it gives us hints as to when and where features in observational far-side images may merely be artifacts (i.e., "ghost images") rather than actual far-side active regions.
Basu, S., Antia, H. M., & Bogart, R. S. 2004, ApJ, 610, 1157
Braun, D. C., & Lindsey, C. 2001, ApJ, 560, L189
Christensen-Dalsgaard, J., et al. 1996, Science, 272, 1286
Christensen-Dalsgaard, J. 2002, Rev. Modern Phys., 72, 1073
Couvidat, S., Birch, A. C., & Kosovichev, A. G. 2006, ApJ, 640, 516
Duvall, T. L., Jr., Jefferies, S. M., Harvey, J. W., & Pomerantz, M. A. 1993, Nature, 362, 430
Duvall, T. L., Jr., & Kosovichev, A. G. 2001, Proc. IAU Symp. 203, 159
Fan, Y., Braun, D. C., & Chou, D.-Y. 1995, ApJ, 451, 877
González Hernández, I., Hill, F., & Lindsey, C. 2007, ApJ, 669, 1382
Hanasoge, S. M., Larsen, R. M., Duvall, T. L., DeRosa, M. L., Hurlburt, N. E., Schou, J., Christensen-Dalsgaard, J., & Lele, K. 2006, ApJ, 648, 1268
Hartlep, T., & Mansour, N. N. 2005, Annual Research Briefs 2005, 357, Center for Turbulence Research, Stanford, California
Jensen, J. M., Duvall, T. L., Jr., Jacobsen, B. H., & Christensen-Dalsgaard, J. 2001, ApJ, 553, L193
Kosovichev, A. G., Duvall, T. L., Jr., & Scherrer, P. H. 2000, Sol. Phys., 192, 159
Kosovichev, A. G., & Duvall, T. L., Jr. 1997, SCORe'96: Solar Convection and Oscillations and their Relationship, 225, 241
Kosovichev, A. G., & Duvall, T. L. 2006, Space Science Reviews, 124, 1
Kravchenko, A. G., Moin, P., & Shariff, K. 1999, J. Comp. Phys., 151, 757
Lindsey, C., & Braun, D. C. 2000a, Science, 287, 1799
Lindsey, C., & Braun, D. C. 2000b, Sol. Phys., 192, 261
Lindsey, C., & Braun, D. C. 2005, ApJ, 620, 1107
Loulou, P., Moser, R. D., Mansour, N. N., & Cantwell, B. J. 1997, Technical Memorandum 110436, NASA Ames Research Center, Moffett Field, California
Norton, A. A., Graham, J. P., Ulrich, R. K., Schou, J., Tomczyk, S., Liu, Y., Lites, B. W., Ariste, A. L., Bush, R. I., Socas-Navarro, H., & Scherrer, P. H. 2006, Sol. Phys., 239, 69
Parchevsky, K. V., & Kosovichev, A. G. 2007, ApJ, 666, 547
Rhodes, E. J., Kosovichev, A. G., Scherrer, P. H., Schou, J., & Reiter, J. 1997, Sol. Phys., 175(2), 287
Scherrer, P. H., et al. 1995, Sol. Phys., 162, 129
Sun, M.-T., Chou, D.-Y., & the TON Team 2002, Sol. Phys., 209, 5
Vernazza, J. E., Avrett, E. H., & Loeser, R. 1981, ApJS, 45, 635
Yee, K. S. 1966, IEEE Trans. Antennas and Propagation, 14, 302
Zhao, J. 2007, ApJ, 664, L139
Zhao, J., & Kosovichev, A. G. 2006, ApJ, 643, 1317
Zharkov, S., Nicholas, C. J., & Thompson, M. J. 2007, Astronomische Nachrichten, 328, 240
Far-side images of solar active regions have become one of the routine products of helioseismic observations, and are of importance for space weather forecasting by allowing the detection of sunspot regions before they become visible on the Earth side of the Sun. An accurate assessment of the quality of the far-side maps is difficult, because there are no direct observations of the solar far side to verify the detections. In this paper we assess far-side imaging based on the time-distance helioseismology method, by using numerical simulations of solar oscillations in a spherical solar model. Localized variations in the speed of sound in the surface and subsurface layers are used to model the perturbations associated with sunspots and active regions. We examine how the accuracy of the resulting far-side maps of acoustic travel times depends on the size and location of active regions. We investigate potential artifacts in the far-side imaging procedure, such as those caused by the presence of active regions on the solar near side, and suggest how these artifacts can be identified in real-Sun far-side images obtained from SOHO/MDI and GONG data.
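As a toy illustration of the "ghost image" geometry discussed in this paper, the following sketch checks the spherical identity d(P, T) + d(P, antipode(T)) = 180°: an annulus of angular radius r around a far-side target is simultaneously an annulus of radius 180° − r around the target's antipode, which is the geometric root of the near-limb artifact described above. The coordinates below are hypothetical and chosen purely for illustration.

```python
# Toy check: for any point P on the sphere, the angular distances to a target
# T and to T's antipode always sum to 180 degrees.
import numpy as np

def gc_dist(lat1, lon1, lat2, lon2):
    """Great-circle angular distance in degrees between two surface points."""
    p1, l1, p2, l2 = map(np.radians, (lat1, lon1, lat2, lon2))
    c = np.sin(p1) * np.sin(p2) + np.cos(p1) * np.cos(p2) * np.cos(l1 - l2)
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def antipode(lat, lon):
    return -lat, (lon + 180.0) % 360.0 - 180.0

target = (0.0, 150.0)   # hypothetical far-side quiet target (lat, lon in deg)
ar = (10.0, 85.0)       # hypothetical active region near the near-side limb

d_t = gc_dist(ar[0], ar[1], target[0], target[1])
d_a = gc_dist(ar[0], ar[1], *antipode(target[0], target[1]))
print(f"AR -> target  : {d_t:6.2f} deg")
print(f"AR -> antipode: {d_a:6.2f} deg (sum = {d_t + d_a:.2f} deg)")
```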
the original motivation of this project is to study the transports of positive and negative charges in solar cells .we model a solar cell by a domain in that is divided into two disjoint sub - domains and by an interface , a -dimensional hypersurface , which can be possibly disconnected . and represent the hybrid medium that confine the positive and the negative charges , respectively . at microscopic level ,positive and negative charges are initially modeled by independent reflected brownian motion ( rbm ) with drift on and on respectively .( in this paper , they are actually modeled by independent random walks on lattices inside and that serve as discrete approximation of rbm with drifts . )these random motions model the transport of positive ( respectively negative ) charges under an electric potential ( see figure [ fig : interface ] ) .is the interface of and ] these two types of particles annihilate each other at a certain rate when they come close to each other near the interface .this interaction models the annihilation , trapping , recombination and separation phenomena of the charges .the interaction distance is of microscopic order where is comparable to 1 , and the intensity of annihilation per pair is of order where is a given parameter .this means that , intuitively and roughly speaking , according to a random time clock which runs with a speed proportional to the number of pairs ( one particle of each type ) of distance , we annihilate a pair ( picked uniformly among those pairs of distance less than ) with an exponential rate of parameter .the above scaling guarantees that in the limit , a nontrivial proportion of particles are annihilated in any open time interval .we investigate the scaling limit of the empirical distribution of positive and negative charges ; that is , the hydrodynamic limit of this interacting diffusion system . we show that in the macroscopic level , the empirical distribution converges to a deterministic measure whose density satisfies a system of partial differential equations that has non - linear interaction at the interface .the study of _ hydrodynamic limits _ of particle systems with stochastic dynamics is of fundamental importance in many areas .this study dates back to the sixth hilbert problem in year 1900 , which concerns the mathematical treatment of the axioms of physics , and to boltzmann s work on principles of mechanics .proving hydrodynamic limits corresponds to establishing _ the law of large number _ for the empirical measure of some attributes ( such as position , genetic type , spin type , etc . ) of the individuals in the systems .it contributes to our better understanding of the asymptotic behavior of many phenomena , such as chemical reactions , population dynamics , super - conductivity , quantum dynamics , fluid dynamics , etc .it reveals fascinating connections between the microscopic stochastic systems and deterministic partial differential equations that describe the macroscopic pictures .it also provides approximations via stochastic models to some partial differential equations that are hard or impossible to solve directly . since the work of boltzmann and hilbert , there have been many different lines of research on stochastic particle systems .various models were constructed and different techniques were developed to establish hydrodynamic limits . 
among those techniques ,the entropy method and the relative entropy method are considered to be general methods .unfortunately these methods do not seem to work for our model due to the singular interaction near the interface .many models studied in literature are conservative , for example exclusion processes and fleming - viot type systems . reaction - diffusion systems ( r - d system ) constitute a class of models that are typically non - conservative .these are systems which have hydrodynamic limits of the form ( a reaction - diffusion equation , or r - d equation in short ) , where is a function in which is thought of as the reaction term .r - d systems arise from many different contexts and have been studied by many authors . for instance , for the case is a polynomial in , these systems contain the schlgl s model and were studied in on a cube with neumann boundary conditions , and in on a periodic lattice .recently , perturbations of the voter models which contain the lotka - volterra systems are considered in .in addition to results on hydrodynamic limit , also established general conditions for the existence of non - trivial stationary measures and for extinction of the particles .our model is a non - conservative stochastic particle system which consists of two types of particles . in ,burdzy and quastel studied an annihilating - branching system of two types of particles , for which the total number of particles of each type remains constant over the time .its hydrodynamic limit is described by a linear heat equation with zero average temperature .in contrast , besides being non - conservative , our model gives rise to a system of nonlinear differential equations that seems to be new .moreover , the interaction between two types of particles is singular near the interface of the two media , which gives rise to a boundary integral term in the hydrodynamic limit .the approach of this paper provides some new tools that are potentially useful for the study of other non - equilibrium systems .we now give some more details on the discrete approximation of the spatial motions in our modeling .we approximate by square lattices of side length , and then approximate reflected diffusions on by continuous time random walks ( ctrws ) on .the rigorous formulation of the particle system is captured by the operator in ( [ e : generator0 ] ) .let be the position of the particle with index in at time .we prescribe each particle a mass and consider the normalized empirical measures here stands for the dirac measure concentrated at the point , while if and only if the particle is alive at time , and if and only if the particle is alive at time . 
for fixed positive integer and , is a random measure on .we want to study the asymptotic behavior , when ( or equivalently ) , of the evolution in time of the pair .our first main result ( theorem [ t : conjecture ] ) implies the following .suppose each particle in is approximating a rbm with gradient drift , where is strictly positive .then under appropriate assumptions on the initial configuration , the normalized empirical measure converges in distribution to a deterministic measure for all , where is the solution of the following coupled heat equations : and where is the inward unit normal vector field on of and is the indicator function on .note that corresponds to the particular case when there is no drift .the above result tells us that for any fixed time , the probability distribution of a randomly picked particle in at time is close to when is large , where is a normalizing constant . in fact , the above convergence holds at the level of the path space .that is , the full trajectory ( and hence the joint law at different times ) of the particle profile converges to the deterministic scaling limit described by ( [ e : coupledpde:+ ] ) and ( [ e : coupledpde:- ] ) , not only its distribution at a given time .* question * : how about the limiting joint distribution of more than one particles ?our second main result ( theorem [ t : correlation ] ) answers this question .it asserts that * propagation of chaos * holds true for our system ; that is , when the number of particles tends to infinity , their positions appear to be independent of each other .more precisely , suppose and unlabeled particles in and , respectively , are chosen uniformly among the living particles at time .then , as , the probability joint density function for their positions converges to uniformly for and for in any compact time interval , where is a normalizing constant. a key step in our proof of propagation of chaos ( theorem [ t : correlation ] ) is theorem [ t : uniqueness_hierarchy ] .the latter establishes uniqueness of solution for the infinite system of equations satisfied by the correlation functions of the particles in the limit .such infinite system of equations is sometimes called _ bbgky - hierarchy _ in statistical physics .our bbgky hierarchy involves boundary terms on the interface , which is new to the literature .our proof of uniqueness involves a representation and manipulations of the hierarchy in terms of trees .this technique is related to but different from that in which used feynman diagrams .it is potentially useful in the study of other stochastic models involving coupled differential equations .to establish hydrodynamic limit result ( theorem [ t : conjecture ] ) , we employ the classical tightness plus finite dimensional distribution approach .tightness of in the skorokhod space is proved in theorem [ t : tight ] .this together with the propagation of chaos result ( theorem [ t : correlation ] ) establishes the hydrodynamic limit of the interacting random walks .two new tools for discrete approximation of random walks in domains are developed in this article .namely , the local central limit theorem ( local clt ) for reflected random walk on bounded lipschitz domains ( theorem [ t : lclt_ctrw ] ) and the ` discrete surface measure ' ( lemma [ l : discreteapprox_surfacemea ] ) .we believe these tools are potentially useful in many discrete schemes which involve reflected brownian motions . 
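As a minimal sketch of such a discrete scheme (domain, lattice spacing and run length are all illustrative), the following simulates the symmetric nearest-neighbour walk on the ε-lattice inside the unit square, suppressing jumps that would leave the domain. For this walk the uniform measure on the lattice points is stationary, the discrete counterpart of the stationarity of reflected Brownian motion.

```python
# Reflected simple random walk on (eps*Z)^2 intersected with the unit square:
# a jump that would exit the domain is replaced by staying put, which keeps
# the transition kernel symmetric, so the uniform counting measure on the
# lattice points is invariant.
import numpy as np

rng = np.random.default_rng(1)
eps = 0.05
n = int(round(1.0 / eps))                 # lattice sites per side: 0..n
moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def step(ix, iy):
    dx, dy = moves[rng.integers(4)]
    jx, jy = ix + dx, iy + dy
    # "reflection": stay put if the proposed jump exits the domain
    if 0 <= jx <= n and 0 <= jy <= n:
        return jx, jy
    return ix, iy

ix, iy = n // 2, n // 2
occ = np.zeros((n + 1, n + 1))
T = 200_000
for _ in range(T):
    ix, iy = step(ix, iy)
    occ[ix, iy] += 1

emp = occ / T
unif = 1.0 / ((n + 1) ** 2)
print(f"uniform mass per site:                {unif:.2e}")
print(f"max |empirical - uniform| occupation: {np.abs(emp - unif).max():.2e}")
```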
weak convergence of simple random walk on to rbm has been established for general bounded domains in and . however , we need more for our model ; namely a local convergence result which guarantees that the convergence rate is uniform up to the boundary .for this , we establish the local clt .we further generalize the weak convergence result and the local limit theorem to deal with rbms with gradient drift .there are two reasons for us to consider gradient drift .first , it is physically natural to assume the particles are subject to an electric potential .second , the maximal extension theorem , ( * ? ? ?* theorem 6.6.9 ) , which is a crucial technical tool used in and , has established only in symmetric setting .the proof of the local clt is based on a ` discrete relative isoperimetric inequality ' ( theorem [ t : isoperimetric_discrete ] ) which leads to the poincar inequality and the nash inequality .the crucial point is that these two inequalities are uniform in ( scaling of lattice size ) and is invariant under the dilation of the domain .the paper is organized as follows . in section 2, we introduce the stochastic model and some preliminary facts that will be used later .we then prove the existence and uniqueness of solution for the coupled pde .the main results , theorem [ t : conjecture ] and theorem [ t : correlation ] , will be rigorously formulated .we also mention various extensions of our main results in remark [ rk : generalizationresults ] .section 3 and section 4 contains the proof of theorem [ t : correlation ] and theorem [ t : conjecture ] respectively .section 5 is devoted to the proofs of the discrete relative isoperimetric inequality and the local clt .for the reader s convenience , we list our notations here : [ cols= " < , < " , ] in the last row , if is even , then and ; if is odd , then and .we can now group the terms in each level as a sum of terms to obtain in the last term , we have used the observation that when , the smallest element in is at least 2 and so the sum stops before reaching . from this and the simple estimates like have derived the following [ l : i_n_j_n ] where our goal in this section is show that for some .our proof relies on the following recursion formula pointed out to us by david speyer : we assume ( [ e : recursion_j_n(t ) ] ) for now and use it to establish the following lemma .the proof of ( [ e : recursion_j_n(t ) ] ) will be given immediately after it .[ l : rate_j_n ] is homogeneous in the sense that moreover , is obvious from after a change of variable .let be the collection of functions satisfying .we can rewrite as this is a sum of terms .when we put , the smallest term is hence we have the lower bound .unfortunately , the largest term is exactly which grows faster than for any .hence for the upper bound , we will employ the recursion formula ( [ e : recursion_j_n(t ) ] ) .we apply the homogeneity to the right hand side of ( [ e : recursion_j_n(t ) ] ) to obtain the integrals are now simple one dimensional and can be evaluated : since , we have , where is defined by the recursion with . the generating function of clearly satisfies .we thus see that , where is the lambert function . by lagrange inversion theorem ( see theorem 5.4.2 of ) , ( for ) . hence by comparing coefficients in the series expansion of , we have as desired . 
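The Lagrange-inversion step above can be checked symbolically. The generating function itself is garbled in this copy of the text, so the sketch below assumes, consistent with the Lambert W function named in the proof, that it is the tree function T(x) = −W(−x) solving T = x e^T, whose n-th coefficient is n^(n−1)/n!; the fixed-point iteration determines one further coefficient per pass.

```python
# Verify the Lagrange-inversion coefficients of the (assumed) tree function
# T = x*exp(T): the coefficient of x^n is n^(n-1)/n!.
import sympy as sp

x = sp.symbols('x')
order = 8
f = sp.Integer(0)                # formal power-series solution of f = x*exp(f)
for _ in range(order):           # each pass fixes one more coefficient
    f = sp.expand(sp.series(x * sp.exp(f), x, 0, order).removeO())

for n in range(1, order):
    coeff = f.coeff(x, n)
    assert coeff == sp.Rational(n ** (n - 1), sp.factorial(n))
    print(f"[x^{n}] T(x) = {coeff} = n^(n-1)/n!")
```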
by stirling s formula , ( where means ) .hence for some .monte carlo simulations suggests that .the recursion ( [ e : recursion_j_n ] ) also makes it clear that s are all in ] and .this implies that there is a constant so that for and for all .note that also satisfies the hierarchy ( [ e : hierarchy_2 ] ) , and that . using the hypothesis , we can extend to obtain for ] .note that where can be computed explicitly using ( [ e : generator0 ] ) .hence from ( [ e : mtg_nf ] ) , we can check that =\e[\<m\>_t]=\e\left[\int_0^t{\mathfrak{l}(f^2)(\eta_s)-2f(\eta_s)\mathfrak{l}f(\eta_s)}\,ds\right]=\int_0^t{\e[g(\eta_r)]}dr,\ ] ] where after taking expectation for , the first term is the last display is of order at most since , while the second term inside the bracket is of order at most , uniformly in ] for some .doob s maximal inequality then gives .from the second term of ( [ e : formula_for_generator ] ) , we see that if the parameter of the killing time is of order , then we need to be comparable to 1 .the following simple observation is useful for proving tightness when the transition kernel of the process has a singularity at .it says that we can break down the analysis of the fluctuation of functionals of a process on ] a.s ., where .suppose the following holds : * there exists such that <\infty ] is tight in ,\r) ] and any subsequential limit of the laws of carries on ,\mathfrak{e}) ] .we write in place of for convenience . by stone - weierstrass theorem, is dense in in uniform topology .it suffices to check that is relatively compact in ,\r^2) ] if ( 1 ) and ( 2 ) below holds : * for all ] . to verify ( 2 ) , since , we only need to focus on . by lemma [ l : oderofm ] , so we only need to verify ( 2 ) with replaced by each of the 3 terms on rhs of the above equation ( [ e : tight ] ) .for the first term of ( [ e : tight ] ) , we apply lemma [ l : tightness_criteria ] for the case and . since , we have for some constant which only depends only on . using lemma [ l : q_near_i ], we have \leq \frac{1}{n}\sum_{i=1}^n\,\sum_{d^{\eps}}|\a^+_{\eps}\phi(\cdot)|p^{\eps,+}(r , x_i,\cdot)m_{\eps}(\cdot)\leq c_1(d , d_+,\phi)+\frac{c_2(d , d_+,\phi)}{\eps\vee r^{\frac{1}{2}}},\ ] ] which is in ] .hence ( 2 ) holds for this term by lemma [ l : tightness_criteria ] . for the third term of ( [ e : tight ] ) , by chebyshev s inequality , doob smaximal inequality and lemma [ l : oderofm ] , we have \\ & \leq & \frac{1}{\eps_0 ^ 2}\,\e\left[\left(2\sup_{t\in[0,t]}|m_{\phi}(t)|\right)^2\right]\\ & \leq & 16\e[m_{\phi}(t)^2]\leq \frac{c}{n } . \end{aligned}\ ] ] we have proved that ( 2 ) is satisfied . hence is relatively compact . using ( 2 ) and the metric of , we can check that any subsequential limit of the laws of concentrates on . in general , to prove tightness for in ,a\times b) ] and , b) ] trivially ) .see exercise 22(a ) in chapter 3 of .for example , converges in but not in .the reason is that the two processes can jump at different times ( and ) that become identified in the limit ( only one jump at ) ; this can be avoided if one of the two processes is -tight ( i.e. has only continuous limiting values ) , which is satisfied in our case since and turns out to be both -tight . even without condition ( ii ) of theorem [ t :conjecture ] for , we can still verify hypothesis ( i ) of lemma [ l : tightness_criteria ] . actually , applying ( [ e : discrete ] ) to suitable test functions , we have = 0 .\ ] ] suppose is a subsequential limit of , say the convergence is along the subsequence . 
by the skorokhod representation theorem , the continuity of the limit in and ( * ?* theorem 3.10.2 ) , there exists a probability space such that } \big\| ( \x^{+,n'}_t,\,\x^{-,n'}_t)-(\x^{\infty,+}_t,\,\x^{\infty,-}_t ) \big\|_{\mathfrak{e } } = 0 \quad \p \hbox{-a.s},\ ] ] hence we have for any and , = \e[\<\mathfrak{x}^{+,\infty}_t,\phi\ > ] \quad \text { and } \quad \lim_{n'\to\infty}\,\e[(\<\mathfrak{x}^{+}_t,\phi\>)^2]= \e[(\<\mathfrak{x}^{+,\infty}_t,\phi\>)^2].\ ] ] combining with corollary [ cor : correlation ] , we have here we have used the simple fact that if = ( \e[x^2])^{1/2}=a ] .then where is the collection of open subsets such that and is a manifold of class ( i.e. each point has a neighborhood which can be represented by the graph of a lipschitz function ) .moreover , for all . recall the -symmetric ctrw on defined in the subsection [ underlyingmotion ] .the dirichlet form of in is given by where are the conductance on the graph defined in the subsection [ underlyingmotion ] .the stationary measure of is given by , where .we now consider the scaled graph , which is an approximation to the bounded lipschitz domain by square lattice .clearly the degrees of vertices are given by .define the function on by . then define the ctrw using as we have done for using .the mean holding time of is .clearly , the symmetrizing measure and the stationary probability measure have the scaling property and .let be the transition density of with respect to the symmetrizing measure .then for every , , and . following the notation of , we let be a finite set , be a markov kernel on and the stationary measure of .note that a markov chain on a finite set induces a natural graph structure as follows .let for any .define the set of directed edges .[ def : isoperimetricconstant ] for any , define we call an * isoperimetric constant * of the chain . it provides rich information about the geometric properties of and the behavior of the chain ( cf . ) . in our case , , and in , where is the one - step transition probabilities of defined in subsection [ underlyingmotion ] .for and , we have \subset d \ } , \\ \partial a & = & \{x\in a:\ , \exists y\in d^{\eps}\setminus a \text { such that } |x - y|=\eps \text { and the line segment } [ x , y]\subset d\ , \ } , \\\tilde{\partial } a & : = & \{x\in a:\,\exists y\in \eps\z^d \text { such that } |x - y|=\eps \text { and } ( x , y]\cap \partial d \neq \emptyset\ } , \\ \deltaa & : = & \tilde{\partial}a \setminus \partial a.\end{aligned}\ ] ] in this notation , we have , , and . see figure [ fig : extenddomain2_1 ] for an illustration .the following is a key lemma which allows us to derive the relative isoperimetric inequality for the discrete setting from that in the continuous setting , and hence leads us to theorem [ t : isoperimetric_discrete ] .( extension of sub - domains)[l : extenddomain ] let be the stationary measure of the simple random walk ( srw ) on . for any , there exist positive constants , and such that if , then for all grid - connected with , we can find a connected open subset which contains and satisfies : for , let be the cube in the dual lattice which contains .since is grid - connected , we have is connected in , where is the interior of .( see figure [ fig : extenddomain2_1 ] for an illustration . )note that we can not simply take because ( d ) may fail , for example when contributes too much to , i.e. , when is large .however , is close to and so we can fill in the gaps between and to eliminate those contributions . 
in this process , we may create some extra pieces for , but we will show that those pieces are small enough . following this observation , we will eventually take where for some small enough . since is a bounded lipschitz domains , we can choose small enough so that .moreover , implies .so we can choose small enough so that .hence satisfies ( b ) . by lipschitz property again , there exists such that for any .hence ( c ) is satisfied .it remains to construct in such a way that for some small enough ( more precisely , for small enough so that ) and that ( a ) and ( d ) are satisfied .we will construct in 3 steps : step 1 : ( construct to seal the opening between and the subset of which are close to . see figure [ fig : extenddomain2_2 ] . )write where and points in are marked in solid black in figure [ fig : extenddomain2_2 ] .for , consider the following closed cube centered at : let be the union of all connected components of whose closure intersects and define step 2 : ( fill in the gaps between and near . see figure [ fig : extenddomain2_3 ] ) note that does not contribute to .let be the union of all connected components of whose closure intersects for some .it is clear that is connected and is piecewise linear , so ( a ) is satisfied . for any , we have , where therefore , whenever the corresponding surface measures are defined .it is clear that by construction we have now .moreover , each is adjacent to at most points in , and for each , there are at most cubes in .so we have hence ( d ) is satisfied .since , we have . to complete the proof, it suffices to show that .this is equivalent to show that any curve in starting from any point in must lie in .let ] .define . since and , the time when first exits must be less than by continuity of .that is , it suffices to show that \cap \theta_{x}\neq\emptyset ] , we can choose small enough ( depending only on ) and split ] except possibly for finitely many points .let be the collection of discontinuities for ) ] .define \longrightarrow \partial ( \omega_{d^{\eps}})\cap d ] such that for every , , and . moreover , the following weaker bound holds for all : in particular , this implies the upper bound in theorem [ t : upperhke ] which is the case when .the gaussian lower bound for in theorem [ t : lowerhke ] then follows from the lipschitz property of and a well - known chaining argument ( see , for example , page 329 of ) .therefore , we have the two - sided gaussian bound for as stated in theorem [ t : upperhke ] and theorem [ t : lowerhke ] .it then follows from a standard ` oscillation ' argument ( cf .theorem 1.31 in or theorem ii.1.8 in ) that is hlder continuous in , _ uniformly in . more precisely , [ t : weakconvergence_rbmdrift ] let be a bounded domain whose boundary has zero lebesque measure .suppose also satisfies : suppose is strictly positive . then for every , as , 1 . converges weakly to the stationary process in the skorokhod space ,{\overline}{d}) ] whenever converges to . for ( i ), the proof follows from a direct modification of the proof of ( * ? ? ?* theorem 3.3 ) . recall the definition of the one - step transition probabilities , defined in the paragraph that contains ( [ e : conductance_baisedrw+ ] ) and ( [ e : conductance_baisedrw- ] ) . observe that , since , approximations using taylor s expansions in the proofs of ( * ? ? ?* lemma 3.1 and lemma 2.2 ) continue to work with the current definition of .thus we have the process has a lvy system , where for , following the same calculations as in the proof of ( * ? ? 
?* theorem 3.3 ) , while noting that ( * ? ? ?* theorem 6.6.9 ) ( in place of ( * ? ? ? * theorem 1.1 ) ) can be applied to handle general symmetric reflected diffusions as in our present case , we get part ( i ) . part ( ii )follows from part ( i ) by a localization argument ( cf .* remark 3.7 ) ) ._ proof of theorem [ t : lclt_ctrw]_. for each and , we extend to in such a way that is nonnegative and continuous on , and that both the maximum and the minimum values are preserved on each cell in the grid .this can be done in many ways , say by the interpolation described in , or a sequence of harmonic extensions along the simplexes ( described in ) .consider the family of continuous functions on .theorem [ t : upperhke ] and theorem [ t : holdercts ] give us uniform pointwise bound and equi - continuity respectively . by arzela - ascoli theorem, it is relatively compact .i.e. for any sequence $ ] which decreases to , there is a subsequence and a continuous such that converges to locally uniformly .on other hand , by part ( ii ) of theorem [ t : weakconvergence_rbmdrift ] , if the original sequence is a subsequence of , then .more precisely , the weak convergence implies that for all , then by the continuity of both and in the second coordinate , we have on . since and are continuous on ( cf . ) , we obtain on . in conclusion , we have converges to locally uniformly through the sequence .* acknowledgements * : we thank krzysztof burdzy , rekhe thomas and tatiana toro for helpful discussions . in particular , we are grateful to david speyer for pointing out the recurrence relation to us .we also thank guozhong cao , samson a. jenekhe , christine luscombe , oleg prezhdo and rudy schlaf for discussions on solar cells .financial support from nsf solar energy initiative grant dmr-1035196 as well as nsf grant dms-1206276 is gratefully acknowledged .r.f . bass and p. hsu . some potential theory for reflecting brownian motion in hlder and lipschitz domains .* 19 * ( 1991 ) , 486 - 508 .r.f . bass and t. kumagai .symmetric markov chains on with unbounded range .* 360 * ( 2008 ) , 2041 - 2075 .comparison of stochastic and deterministic models of a linear chemical reaction with diffusion .* 19 * ( 1991 ) , 1440 - 1462 . c. boldrighini , a. de masi and a. pellegrinotti .nonequilibrium fluctuations in particle systems modelling reaction - diffusion equations .* 42 * ( 1992 ) , 1 - 30 . c. boldrighini , a. de masi , a. pellegrinotti and e. presutti .collective phenomena in interacting particle systems .appl . _ * 25 * ( 1987 ) , 137 - 152 .k. burdzy and z .- q .discrete approximations to reflected brownian motion .probab . _ * 36 * ( 2008 ) , 698 - 727 .k. burdzy and z .- q .chen . reflected random walk in fractal domains .* 41 * ( 2011 ), 2791 - 2819 . k. burdzy , r. holyst and p. march . a fleming - viot particle representation of dirichlet laplacian .* 214 * ( 2000 ) , 679 - 703 .k. burdzy and j. quastel . an annihilating - branching particle model for the heat equation with average temperature zero ._ * 34 * ( 2006 ) , 2382 - 2405 .carlen , s. kusuoka and d.w .upper bounds for symmetric markov transition functions .henri poincar - probab .* 23 * ( 1986 ) , 245 - 287 .mr 898496 ._ from markov chains to non - equilibrium particle systems ._ world scientific , 2003 .on reflecting diffusion processes and skorokhod decompositions .theory relat .fields _ * 94 * ( 1993 ) , 281 - 316 . g. david . morceaux de graphes lipschitziens et intgrales singulires sur une surface ._ rev . mat .iberoamericana . 
_ * 4 * ( 1988 ) , 73114 ( french ) .mr 1009120 .g. david and s. semmes .singular integrals and rectifiable sets in : beyond lipschitz graphs ._ astrisque . _* 193 * ( 1991 ) , 152 .mr 1113517 ( 92j:42016)9120 .p. dittrich . a stochastic model of a chemical reaction with diffusion .theory relat . fields ._ * 79 * ( 1988 ) , 115 - 128 .p. dittrich .a stochastic partical system : fluctuations around a nonlinear reaction - diffusion equation ._ stochastic processes .appl . _ * 30 * ( 1988 ) , 149 - 164 .l. erds , b. schlein and h. t. yau .derivation of the cubic non - linear schrdinger equation from quantum dynamics of many - body systems ._ inventiones mathematicae _ * 167*(3 ) ( 2007 ) , 515 - 614 .ethier and t.g ._ markov processes .characterization and convergence ._ wiley , new york , 1986 .systems of reflected diffusions with interactions through membranes .phd thesis . in preparation , 2014 .hydrodynamic limits .soc . _ * 1 * ( 2005 ) , 699 - 717 .guo , g.c . papanicolaou and s.r.s .nonlinear diffusion limit for a system with nearest neighbor interactions .phys._. * 118 * ( 1988 ) , 31 - 59 .p. gyrya and l. saloff - coste ._ neumann and dirichlet heat kernels in inner uniform domains ._ paris : socit mathmatique de france , 2011 .c. kipnis and c. landim ._ scaling limits of interacting particle systems ._ springer , 1998 .c. kipnis , s. olla and s.r.s .hydrodynamics and large deviations for simple exclusion process . _ comm .pure appl .* 42 * ( 1989 ) , 115 - 137 .p. kotelenez .law of large numbers and central limit theorem for linear chemical reactions with diffusion .* 14 * ( 1986 ) , 173 - 193 p. kotelenez .high density limit theorems for nonlinear chemical reactions with diffusion .theory relat .fields _ * 78 * ( 1988 ) , 11 - 37 .limit theorems for sequences of jump markov processes approximating ordinary differential processes ._ j. appl .* 8 * ( 1971 ) , 344 - 356 ._ approximation of population processes ._ siam , philadelphia , 1981 .r. lang and n.x .smoluchowski s theory of coagulation in colloids holds rigorously in the boltzmann - grad - limit ._ zeitschrift fr wahrscheinlichkeitstheorie und verwandte gebiete . _* 54 * ( 1980 ) , 227 - 280 .r. m. may and m. a. nowak . evolutionary games and spatial chaos .* 359.6398 * ( 1992 ) , 826 - 829 . v.g ._ sobolev spaces . _ springer verlag , 1985 l. saloff - coste . _lectures on finite markov chains . _ lecture notes in math . , springer , 1997 .stanley . _ enumerative combinatorics ii . _ cambridge university press , cambridge , 1999 .stroock . diffusion semigroups corresponding to uniformly elliptic divergence form operators ._ sminaire de probabilits xxii . _ lecture notes in math . ,vol . 1321 , pp .316 - 347 , springer , berlin , 1988 .stroock and w. zheng .markov chain approximations to symmetric diffusions .poincar - probab .statist . _* 33 * ( 1997 ) , 619 - 649 .mr 1473568 .
A new non-conservative stochastic reaction-diffusion system, in which two families of random walks in two adjacent domains interact near the interface, is introduced and studied in this paper. Such a system can be used to model the transport of positive and negative charges in a solar cell, or the population dynamics of two segregated species under competition. We show that in the macroscopic limit the particle densities converge to the solution of a coupled system of nonlinear heat equations. For this, we first prove that propagation of chaos holds by establishing the uniqueness of a new BBGKY hierarchy. A local central limit theorem for reflected diffusions in bounded Lipschitz domains is also established as a crucial tool.
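To make the macroscopic limit concrete, here is a finite-difference sketch of one-dimensional heat equations on (0, 1) and (−1, 0) that interact only through an annihilation flux at the interface x = 0. The exact constants of the interface condition are garbled in this copy, so the form of the coupling (flux = λ u+ u−) and all parameter values are illustrative assumptions, not the paper's precise statement.

```python
# Explicit scheme for du/dt = D u_xx on each side, Neumann outer walls, and a
# nonlinear annihilation sink lam*u+(0)*u-(0) shared by the two interface
# cells; by construction both species lose mass at exactly the same rate.
import numpy as np

M, D, lam, dt, T = 100, 0.5, 2.0, 1e-5, 0.05
h = 1.0 / M
up = np.ones(M)   # u+ on (0, 1); cell 0 touches the interface at x = 0
um = np.ones(M)   # u- on (-1, 0); cell 0 touches the interface at x = 0

for _ in range(int(T / dt)):
    pu = np.concatenate(([up[0]], up, [up[-1]]))   # Neumann (reflecting) pads
    pm = np.concatenate(([um[0]], um, [um[-1]]))
    up = up + dt * D * (pu[2:] - 2 * pu[1:-1] + pu[:-2]) / h**2
    um = um + dt * D * (pm[2:] - 2 * pm[1:-1] + pm[:-2]) / h**2
    sink = dt * lam * up[0] * um[0] / h            # nonlinear interface flux
    up[0] -= sink
    um[0] -= sink

print(f"mass of u+ at t={T}: {up.sum() * h:.4f} (initially 1)")
print(f"mass of u- at t={T}: {um.sum() * h:.4f} (initially 1)")
```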
ly absorption , shortward of ly emission in qso spectra , indicates the presence of intervening absorbers with neutral hydrogen column densities ranging from about to . the absorbers with low column densities , _e.g. _ from to , are usually called ly forest .it is generally thought that the low column density absorbers are some kind of weakly clustered clouds consisting of photoionized intergalactic gas ( _ e.g. _ wolfe 1991 ; bajtlik 1992 ) .this suggests that ly forests are caused by diffusely distributed igm in pre - collapsed areas of the cosmic mass field ( bi , 1993 ; fang et al .1993 ; bi , ge & fang 1995 ; bi & davidsen 1997 . ) observations of the size and velocity dispersion of the ly clouds at high redshift also show that the absorption probably is not caused by confined objects at high redshifts ( bechtold et al .1994 ; dinshaw et al .1994 ; fang et al 1996 ; crotts & fang 1998 . ) with this picture , the baryonic matter distribution is almost point - by - point proportional to the dark matter distribution on all scales larger than the igm s jeans length , i.e. the ly forests would be good tracers of the underlying dark matter distribution .thus , the power spectrum of qso ly transmitted flux can be used to estimate the power spectrum of the underlying mass field , and then be used to constrain cosmological parameters ( croft et al 1999 ; mcdonald et al 1999 ; hui 1999 ; feng & fang 2000 . ) a key step in this approach is to compare the power spectrum of observed transmitted flux fluctuations with model - predicted power spectrum .one uncertainty in the power spectrum determination of the real data is from the normalization of the power spectrum .therefore , in order to have an effective confrontation between the observed and theoretical power spectrum of ly forests , it is necessary to develop a proper algorithm for the normalization of the power spectrum .this is the goal of this paper .the observed flux of a qso absorption spectrum is given by , where is the continuum , the transmission , and the optical depth .the normalized power spectrum of transmission is the power spectrum of the transmission flux fluctuations , defined as that is , the transmission power spectrum is normalized by the mean flux . in other words ,the normalization of the transmission power spectrum is determined by two factors : the continuum and the mean transmission .traditionally , the continuum is needed to be determined before the power spectrum calculation .usually the continuum is obtained by a fitting of polynomial or its variants .assuming that the continuum fluctuates slowly , the polynomial or its variants are truncated at relatively low orders ( e.g. 
croft et al 2000 ; hui et al 2000 ) .the pre - assumed polynomial or other function , and the subsequent truncation may lead to uncertainty of the power spectrum .another source of uncertainty of the transmission power spectrum is the mean transmission normalization .the mean transmission is calculated by averaging the flux over the entire wavelength range considered , and the power spectrum is normalized by this mean transmission for all scales .this implicitly assumes that there is no correlation between the transmitted flux fluctuations and the mean flux .this assumption is true for a gaussian field , but may not be so for a non - linearly evolved field .in fact , the fluctuations at position are correlated with the background at the same position .recent findings that the transmitted flux of ly forests exhibits intermittent behavior ( jamkhedkar , zhan & fang 2000 ) clarifies this point .that is , the transmitted flux shows prominent spiky feature fluctuations on small scales .the transmitted flux consists of rare but strong density fluctuations randomly scattered in space with very low fluctuations in between . in this case ,the power of the transmission fluctuations is mainly dominated by the spikes . on the other hand ,the transmission is low at the spikes .that is , the transmission fluctuations are anti - correlated with transmission . as a consequence , the power would be underestimated if the power spectrum is normalized by the mean transmission over the entire wavelength range . since the spiky features are stronger on smaller scales , the normalization by an over - all mean transmission or by a filling factor with a scale - independent flux threshold ( croft et al 2000 ) will cause an underestimation of power on small scales .recently , we have developed a power spectrum estimator with a multiresolution analysis based on the discrete wavelet transform ( dwt ) .the dwt power spectrum estimator is found to be very useful for the recovery of the initially linear power spectrum ( feng & fang 2000 ; pando , feng & fang , 2001 ) . in this paper , we show that the dwt algorithm is also very useful to detect the power spectrum of non - linear field , like an intermittent field .the dwt algorithm can effectively reduce the above - mentioned uncertainties due to free parameters used for normalization .we will show that the normalization of a dwt power spectrum does not rely on a continuum fitting and the mean transmission .moreover , the power spectrum given by this estimator is sensitive to the correlation between the flux fluctuations and the background flux .that is , the power spectrum can be employed to distinguish among the fields with and without intermittency .therefore , it would be useful for discrimination among models of the ly forests .the paper will be organized as follows . 2 introduces briefly the discrete wavelet transform ( dwt ) analysis of the flux of qso absorption spectrum . 3presents the self - normalization algorithm of the transmission power spectrum . it does nt need either a continuum fitting , or a calculation of the mean transmission . in 4 , we test the self normalization algorithm .we show that the self - normalization algorithm can effectively perform the normalization due to either continuum or mean transmission . 
5 will demonstrate that the dwt self - normalized power spectrum is useful to detect the intermittent behavior of the field .finally , the conclusions and discussions will be presented in 6 .we rewrite eq.(1 ) as our purpose is to estimate the power spectrum of the flux fluctuations from the observed flux . therefore , eq.(1 ) requires one to decompose the observed flux into two terms : the first one is the background , which does not contain information of the fluctuations considered , while the second term contains all this information .if the background is dependent and correlated with the fluctuations , the decomposition eq.(1 ) apparently can not be done by truncating at _ a priori _ relative low orders " .we will solve this problem by a scale - by - scale analysis , without introducing new parameters . in terms of a scale - by - scale analysis , eq.(1 )means that to detect the power of the flux fluctuations on the scale , all components of on scales larger than play the role of a background .therefore , to determine power on the scale , one can decompose the observed into two terms : the first does nt contain any information on scales larger than , while the second contains all information on scales equal to or less than .this decomposition refers to both the position and the scale , and therefore , we need a scale - space decomposition of the transmitted flux . on the other hand , the calculation of power spectrum essentially is a decomposition of the flux into scale domains .therefore , it is possible to do the decomposition of eq.(2 ) by the same scale - space decomposition as that used for measuring the power spectrum . in other words ,the estimation of the normalization background and the calculation of power spectrum can be accomplished simultaneously .once the orthonormal bases for power spectrum estimation are given , the term is uniquely determined without free ( or fitting ) parameters .the fourier power spectrum is not convenient for this purpose , as the bases of fourier transform are not localized in physical space , and they do nt yield a scale - space decomposition .we will use the dwt , whose bases are localized both in scale and position space ( e.g. mallat , 1989a , b , c ; meyer , 1992 ; daubechies , 1992 ; and references therein , and for physical applications , refer to fang & thews , 1998 . ) a sample of qso ly absorption spectrum gives a list of the flux observed at discrete wavelength , . since corresponds to spatial position , or redshift , one can define a spatial distribution of the flux by [ \theta(x -\lambda_i ) - \theta(x-\lambda_{i+1})],\ ] ] where the step function is equal to 1 , for , and 0 for .the spatial range corresponds to the wavelengths from to . in the dwt analysis ,the space is chopped into segments labelled by .each of these segments has a size of .the index is an integer .therefore , the index stands for scale , while is for position , or the spatial range .we first introduce the scaling functions for the haar wavelets .they are the top - hat window functions defined by where the superscript stands for haar , and the factor ensures normalization , i.e. where is kronecker delta function . the scaling function, is actually a window on a resolution scale at the position .but it is not normalized like a window function which satisfies nevertheless the mean flux in the spatial range is proportional to the number is called the scaling function coefficient ( sfc ) . 
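A direct numerical implementation of these Haar SFCs, and of the wavelet function coefficients (WFCs) obtained from differences of SFCs on the next finer scale, may look as follows; the synthetic flux below is purely illustrative.

```python
# Haar SFCs on scale j: the flux on [0, L] is chopped into 2^j segments and
# the SFC is sqrt(2^j / L) times the integral of the flux over each segment.
# Haar WFCs on scale j follow from paired differences of scale j+1 SFCs.
import numpy as np

def haar_sfc(flux, L, j):
    dx = L / len(flux)
    seg = np.array_split(flux, 2 ** j)
    return np.array([np.sqrt(2 ** j / L) * s.sum() * dx for s in seg])

def haar_wfc(flux, L, j):
    s = haar_sfc(flux, L, j + 1)
    return (s[0::2] - s[1::2]) / np.sqrt(2.0)

rng = np.random.default_rng(2)
L, npix = 1.0, 1024
flux = 1.0 + 0.1 * rng.standard_normal(npix)   # toy transmitted flux

for j in (2, 4, 6):
    sfc, wfc = haar_sfc(flux, L, j), haar_wfc(flux, L, j)
    print(f"j={j}: {sfc.size:3d} SFCs, band power mean(WFC^2) = "
          f"{np.mean(wfc ** 2):.3e}")
```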
using sfcs , one can construct the flux as is the flux smoothed on scale ( or simply the scale ) .a higher value for corresponds to smaller scales and vice versa . for a given sample with resolution ,the original flux can be expressed as , where is given by the integer of number ] , and , .the amplitudes is taken to be 0.1 , 0.2 , 0.3 , 0.4 and 0.5 .the observed photons are a poisson sampling of a random field with mean . foreach given , hundred realizations of the poison sampling are produced .finally , gaussian noise with zero mean counts and standard derivation 50 ( photon number ) , which is independent from the poisson sampling , is added to each pixel .some results are shown in fig . 1 . using the estimator in eq.(41 ) , we calculate the dwt power spectra from the simulated flux with various continua . the results are plotted in fig .it shows that the dwt power spectra for different continua are exactly the same on scales ( or length scale h kpc ) .the dispersion of the power spectra over 100 realizations is very small .that is , the power spectrum given by estimator ( 41 ) is continuum - independent . that it , the self - normalization algorithm can produce the power spectrum correctly normalized by the continuum , but without a continuum fitting . to test the mean transmission normalization of estimator eq.(41 ) , we calculate the so - called unnormalized dwt power spectrum of the transmission , i.e. the power spectrum of continuum normalized flux similar to eq.(20 ) , the unnormalized dwt power spectrum is given by where the wfcs are calculated by the result of for the lognormal sample is plotted in fig .2 . by definition eq.(1 ) , we have the second term on the r.h.s . of eq.(48 ) does not contribute to the dwt power spectrum , as is admissible .therefore , the power spectrum should be transformed to by dividing a normalization factor ^ 2 ] without changing their amplitudes .the phase randomized sample gets rid of the intermittent behavior possessed by the field , but the unnormalized power spectrum and the traditional normalized power spectrum of the phase randomized sample are exactly the same as the original one ( jamkhedkar , zhan & fang 2000 ; zhan , jamkhedkar & fang 2001 ) . in fig .7 , the long dashed line is the unnormalized power spectrum of the original data and its phase randomized counterpart , and the dot dashed line is the traditionally normalized power spectrum of the original data and its phase randomized counterpart . 
therefore , neither unnormalized nor traditionally normalized power spectrum can distinguish between the highly intermittent field and its phase - randomized counterpart .on the other hand , the estimator eq.(41 ) can detect the difference between the two fields .we calculate the power spectrum of the phase randomized sample by the self - normalized estimator eq.(41 ) .the result is shown in fig .it shows that the self - normalized power spectrum of the original data is very different from its phase - randomized counterpart .therefore , one can conclude that the self - normalized power spectrum estimator is sensitive to both the clustering behaviors of the field on large scales ( mean transmission ) and on small scales ( intermittency , or fluctuation - background anti - correlation ) .but the traditional normalized power spectrum is insensitive to the phase correlation of the fourier modes .the power spectrum of ly forests is a direct indicator of the matter distribution at high redshift .this paper addresses the issue of how continuum fitting and the mean transmission affect the estimated power spectrum of qso s ly forests .we propose a straightforward method for calculating the power spectrum of observed ly forests .this method is based on the dwt decomposition of the transmission flux .it gives a consistent calculation for the decomposition of flux and the normalization of power spectrum . with numerical simulation samples, we showed that the power spectrum obtained by this estimator is independent of the continuum .the non - linear power spectrum of the transmission can be reliably recovered from the observed flux regardless of the continuum , i.e. the algorithm can automatically take care of the normalization by the continuum without a continuum fitting . with numerical simulation samples ,we also show that the power spectrum estimator can automatically consider the normalization of the mean transmission , i.e. the algorithm does nt need a pre - calculated mean transmission to do the normalization . for a gaussian field ,the power spectrum given by the proposed estimator principally is the same as the power spectrum given by traditional normalization . in this case ,an advantage of the proposed estimator is that it is free from fitting parameters . on scales with significant non - linear clustering , like intermittency or phase correlation, the self - normalized power spectrum is essentially different from the power spectrum normalized by traditional method .the latter is not sensitive to the phase correlation , while the former is .therefore , as an estimator of power spectrum of non - linear field traced by ly forests , the self - normalization algorithm is useful of the discrimination among models .we thank drs .l.l . feng and w.l.lee for their help .pj would also like to thank jennifer scott for useful discussions .the observed photon counts , , at a pixel corresponding to wavelength is given by where is the optical depth , and is the continuum . describes the -dependence of ccd s efficiency . 
is noise , whose mean and covariance are where is kronecker delta function .eq.(a2 ) means that the random variable is independent for each wavelength .the observed photon counts are reduced as we have then .\ ] ] if we define a new variable for error as ,\ ] ] we have where the fluctuation of the transmission is defined by thus , eq.(a6 ) yields eq.(21 ) .consider the reduced photon count as a sampling of random field ,\ ] ] where .for the poisson sampling , the characteristic function of is = \exp\left \ { \int dx\tilde{n}(x)[e^{iu(x)}-1 ] \right \}.\ ] ] thus , the correlation functions of are given by {u=0},\ ] ] where is the average for the poisson sampling .we have then and this equation yields this gives eq.(24 ) . for a weighted poisson sampling ,the data at are given as a poisson sampling of , but with a weight .in this case , the characteristic function eq.(b2 ) becomes = \exp\left \ { \int dx \tilde{n}(x)[e^{ig(x)u(x)}-1 ] \right \}.\ ] ] we have then and eq.(b2 ) yields
The calculation of the transmission power spectrum of QSO Lyα absorption requires two parameters for the normalization: the continuum and the mean transmission. Traditionally, the continuum is obtained by a polynomial fitting truncated at low order, and the mean transmission is calculated over the entire wavelength range considered; the flux is then normalized by these two quantities. However, the fluctuations in the transmitted flux are significantly correlated with the local background flux on scales for which the field is intermittent. As a consequence, the normalization of the entire power spectrum by an overall mean transmission will overlook the effect of the fluctuation-background correlation upon the powers. In this paper, we develop a self-normalization algorithm for the transmission power spectrum based on a multiresolution analysis. This self-normalized power spectrum estimator needs neither a continuum fitting nor a pre-determined mean transmission. With simulated samples, we show that the self-normalization algorithm can perfectly recover the transmission power spectrum from the flux regardless of how the continuum varies with wavelength. We also show that the self-normalized power spectrum is properly normalized by the mean transmission. Moreover, this power spectrum estimator is sensitive to the nonlinear behavior of the field; that is, the self-normalized power spectrum estimator can distinguish between fields with or without the fluctuation-background correlation. This cannot be accomplished by a power spectrum normalized by an overall mean transmission. Applying this analysis to a real data set of the Q1700+642 Lyα forest, we demonstrate that the proposed power spectrum estimator performs the correct normalization and effectively reveals the correlation between the fluctuations and the background of the transmitted flux on small scales. Therefore, the self-normalized power spectrum would be useful for discriminating among models without the uncertainties caused by free (or fitting) parameters.
Recursive Bayesian estimation (or filtering) is a technique for recursively estimating the state of a random process observed via noisy measurements. If the underlying dynamical model is linear and Gaussian, we have the celebrated Kalman filter, which is an exact solution to the Bayesian filtering problem. Unfortunately, in many practical scenarios of interest, the Bayes filter is not exactly computable; we therefore seek techniques to approximate this ideal filter. The Kalman filter can be applied in more general settings as an approximation. Particle filtering is a more general approximation method that is easily applied to nonlinear and non-Gaussian state-space models. The particle filter approximates the Bayesian filter via Monte Carlo simulation/sampling. The samples (or particles) are propagated through a sequential importance sampling mechanism that attempts to capture the dynamics of the unobservable process and the likelihood of the available observations. Other approximations exist as well, such as Gaussian mixture filters.

The particle filter has been widely studied in theory and in countless practical applications. It has been proven that the approximation error can be controlled uniformly in time, providing solid mathematical support for application of the filter in numerous fields. Unfortunately, the particle filter computation is strongly dependent on the dimension of the underlying estimation problem. Specifically, the error bound grows exponentially with the system's dimension, making the filter infeasible in most high-dimensional applications. This problem is known as the _curse of dimensionality_. A heuristic explanation of this phenomenon for a particular case has been given, and a precise relation is known between the dimension of the system and the number of particles required to avoid weight degeneracy. The fact that the approximation error is exponential in the dimension and only inversely controlled by the sample size implies that an incredibly large number of particles is required when dealing with a high-dimensional system if we want to control the error at a reasonable level. Obviously, a large number of particles means a heavy computational burden, one that is often simply prohibitive.

Recent studies, however, suggest that high-dimensional particle filtering may be feasible in particular applications and/or if one is willing to accept a degree of systematic bias. In one line of work, the particle filter is applied in a static setting where the objective is to sample from some high-dimensional target distribution. In this case, through a sequence of intermediate and simpler distributions, it is shown that the particle filter will converge to a sampled representation of the target distribution with a typical Monte Carlo error (inverse in the number of particles), given a complexity on the order of the dimension squared. Although this line of work deals, in essence, only with a static problem of sampling from a fixed target distribution, the analysis introduces a novel way of thinking about high-dimensional particle filtering that may carry over to dynamic filtering problems. Related work appears elsewhere in the literature.
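For concreteness, here is a minimal bootstrap particle filter of the kind just described, run on a one-dimensional linear-Gaussian model for which the exact Bayes filter is the Kalman filter, so the Monte Carlo approximation error is directly visible. All model parameters are illustrative.

```python
# Bootstrap particle filter (predict -> weight by likelihood -> resample) on
# X_t = a X_{t-1} + N(0, q), Y_t = X_t + N(0, r), compared against the exact
# Kalman filter posterior mean.
import numpy as np

rng = np.random.default_rng(3)
a, q, r, T, N = 0.9, 0.5, 0.5, 50, 1000

# simulate the hidden chain X and the observations Y
x = np.zeros(T)
y = np.zeros(T)
x[0] = rng.normal()
y[0] = x[0] + np.sqrt(r) * rng.normal()
for t in range(1, T):
    x[t] = a * x[t - 1] + np.sqrt(q) * rng.normal()
    y[t] = x[t] + np.sqrt(r) * rng.normal()

# bootstrap particle filter
p = rng.normal(size=N)                               # samples from the prior
pf = np.zeros(T)
for t in range(T):
    if t:
        p = a * p + np.sqrt(q) * rng.normal(size=N)  # prediction step
    w = np.exp(-0.5 * (y[t] - p) ** 2 / r)           # update (likelihood)
    w /= w.sum()
    pf[t] = np.dot(w, p)
    p = p[rng.choice(N, N, p=w)]                     # multinomial resampling

# exact Bayes filter (Kalman) for comparison
m, v = 0.0, 1.0
kf = np.zeros(T)
for t in range(T):
    if t:
        m, v = a * m, a * a * v + q                  # predict
    k = v / (v + r)                                  # Kalman gain
    m, v = m + k * (y[t] - m), (1 - k) * v           # update
    kf[t] = m

print(f"rms difference PF vs exact filter: {np.sqrt(np.mean((pf - kf)**2)):.4f}")
```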
in authors consider particle filtering in large - scale dynamic random fields .they assume the dynamics of the underlying process are localised to a neighbourhood of the field and the observations are local to each site .they exploit this idea by localising the algorithm during the update phase .they argue that the difficulty in high dimensional particle filtering is due largely to the dimension of the observation and the nonlinearity of the update operation .therefore , they partition the field into independent blocks and correct every marginalised block separately .the posterior is simply the product of the blocked marginals .the real contribution of is a descriptive and technical analysis that shows the error introduced due to the localisation procedure can be readily controlled if the dynamics of the random field at each site are only locally dependent on those sites within close proximity .the standard sampling approximation error is shown to be exponential in only the size of the individual blocks .the number of samples / particles controls the sampling approximation error at the typical rate while the error due to the localisation process is a systematic bias that can only be controlled through an increase in the block size .since each block is updated independently , parallel implementation is readily applicable and the computational burden may be alleviated , albeit this remains to be seen in practice . while the results of are at the proof - of - concept stage , the idea is incredibly powerful .the authors in show that although the total approximation error can be controlled uniformly in time , it suffers from a spatial inhomogeneity .specifically , the nodes close to the block boundaries display a larger error than those far removed from the boundaries ( as one might expect ). a simple approach to average this spatial inhomogeneity is given in where adaptive partitioning of the field is employed . in this paperwe consider again the idea proposed in and propose a modified particle filtering algorithm that displays an additional degree of freedom .the idea proposed herein is to enlarge the blocks during the update phase , allowing for more observations to be employed during the correction at each block .the main contribution is the addition of a new parameter that captures how much we enlarge each block prior to the update .obviously , by enlarging each block prior to updating we reduce the bias error but we increase the complexity involved in updating each ( enlarged ) block . by designing an appropriate tradeoff between the various tuning parameters it is possible to reduce the total error bound via allowing a temporary enlargement of the update operator without increasing the overall computational burden .we borrow the problem setup and notation directly from . consider a markov chain defined on a polish state space with transition density with respect to a reference measure . 
moreover consider a process , defined on a polish space , conditionally independent given , with a transition density with respect to a measure .the process is observed via the process .our aim is to estimate the probability of the state given the measurements up to that time and the initial condition .therefore we introduce the filter \ ] ] it can be easily seen , using bayes rule , that the filter can be written in a recursive way where the operator is defined as follows moreover , the above operator is typically split into two sub - steps where is a prediction step , and is a correction ( or update ) step . in the prediction step ,the measure is transformed according to the density , while in the update step we use the new information to correct the predicted measure .we then write the recursion as follows the classic bootstrap particle filter uses particles ( or samples ) to approximate the measure . given a sampled approximation of ,the particles are first moved according to the transition in order to approximate a sampled representation of the prediction .the update then computes a weighted posterior empirical measure via .eventually , a resample step is added in order to avoid weight degeneracy .more formally , denoting the bootstrap filter by , we have where and represents the sampling operator here defined it is possible to prove that \leq { a_0}/{\sqrt{n}}\ ] ] with independent of time .unfortunately , the constant typically depends ( exponentially ) on the dimension of the underlying problem .intuition for this exponential dependence is given in .we now consider the pair as a random field indexed on a finite undirected graph .the vertex set will represents the collection of sites and the edge set the spatial relationships between them .the cardinality of captures , in some sense , the dimension of interest .more formally , the spaces and are defined as products , . the reference measures are products , where and are reference measures on and respectively .the transition densities are defined as where and are densities with respect to the reference measures and . from the definitionwe can see that the observations are assumed to be completely local , in the sense that depends uniquely on the value assumed by .the process is local in the sense that the state at a site depends only on the state at nearby sites .we state this formally .consider the graph equipped with the distance defined by the number of hops along the shortest path connecting and .we can define the neighbourhood of a site as where represents the range of interaction .then we assume where we write for , .in other words , the random field is local in the sense that given the present state depends only on .in the authors propose an application of the blocked filter algorithm to the field model just explained , exploiting the local dynamic dependencies .we briefly illustrate this algorithm .consider a partition of into non - overlapping blocks with a union equal to .the idea is to create independence across blocks on by marginalising after the prediction step .we then update each block separately and finally we form via the product of the independent ( updated ) blocked marginals .more formally , consider the block operator on the space of measures on , defined by where is the marginal of the measure on the subset .then the proposed block filter can be written as a recursion , where the operator consists of four steps we make the following definition . 
given and a subset define a distance of the marginals on as follows ^{\frac{1}{2}}\ ] ] where the expectation is taken with respect to the random sampling and is the class of measurable function on that depends only on the values on , that is when .if we omit the subscript and write .with no expectation it follows that is equivalent to the total variation which we write as .the two norms are interchangeable when no sampling occurs .now , given a set we define the boundary and the interior and given a partition , we define the following quantities where the first quantity is independent of the partition .the result proven in is the following .[ rebeschini main ] there exists a constant , depending only on the quantities such that if there exists and such that then for every , , and we have \ ] ] where the constants are positive , finite and dependent only on and .the intuition is that the algorithm approximation error is exponential in rather then in but that the error at some individual locations increases with the proximity of those locations to the border of the blocks .this leads to a spatial inhomogeneity as seen in the first term of the bound .a first attempt to achieve a spatially homogeneous error bound can be found in .the idea is to consider a finite number of partitions and to apply them cyclically. clearly we have to choose the partitions is such a way there is no node that is consistently close to a border .this condition is expressed by a bound on the average , or exponential average , of the border distance .write clearly and represent how well balanced the collection of partitions are .define and .[ bertoli main ] there exists a constant , depending only on the quantities , such that if there exists and such that then for every , and we have where depend only on , , , and in this case . if where for all , then the bound is completely spatially invariant .see for further discussion on this method .suppose now we are given a partition over but it turns out we are interested only in estimating the marginal of on a particular block .we could first redefine the partition with a larger block encompassing and a bunch of single site blocks ( to speed up the overall computation ) .it is of course not possible to define a partition in this manner for multiple blocks of interest .however , the idea proposed here is based on extending the state space by creating multiple independent copies of the measurements ( and states ) that are then used in different ( and independent ) enlarged blocks .we introduce some new notation .consider a parameter , that we will consider fixed throughout the rest of the paper . then define , for any ,an enlarged block now define the enlarged spaces consider the collection .this is no longer a partition of .however , is a partition on , and here we can apply the blocking and updating operators associated with .we use the superscript to note enlarged objects .the measures and are defined straightforwardly .the block operator becomes to update , we need the same operator redefined on the new space , we also define now we can write the enlarged blocked filter algorithm as a recursion where .now we have five steps . 
skipping the prediction / sample steps, graphically we have to write out the explicit expression of the filter we note that where .therefore , splitting a variable in with and ( where now we put as subscript just for notational simplicity ) and an enlarged block where , we can write } { \int\prod_{k'\in\mathcal{k}}\left[\prod_{w\in \overline{k'}}~p^w(x_0,x^w)~g^w(x^w , y_s^w)~\nu(dx_0)\psi^{\overline{k'}}(dz^{\overline{k'}}\right]}\ ] ] } { \int\prod_{k'\in\mathcal{k } } \left[\prod_{w\in k'}p^w(x_0,x^w)~g^w(x^w , y_s^w)\prod_{w\in k'^e}p^w(x_0,z_e^w)~g^w(z_e^w , y_s^w)~\nu(dx_0)\psi^{k'}(dx^{k'})\psi^{k'^e}(dz^{k'^e}_e)\right]}\ ] ] where for we write .define an ideal enlarged blocked filter where . fix .we then use the triangle inequality to decompose the error according to where we refer to the first and second decomposed terms as the bias and variance respectively .the bias represents the error introduced solely as a result of the blocking operation . in the standard bootstrap filter, this bias term vanishes and the typical analysis considers only the variance term .going forward , we consider bounding both the bias and the variance .we stress however , that the bias is fundamentally more interesting as it pertains directly to the localisation idea considered herein .indeed , the sampling operation that leads to the variance term could be replaced with other approximation techniques with no loss of generality ( albeit a different approximation error than detailed subsequently ) . for sake of completeness / clarity we firstly state a result that includes both a bias and a variance bound .[ main result ] suppose there exists a constant , depending only on and and assume then for every time , , and we have \ ] ] where the constants depend only on , , , , , .this single ( total error ) bound is derived in practice as two separate bounds which we now explicitly state .[ bias theorem ] assume there exists such that and such that let .then for every we have for every , and . the only difference between this bias bound and the bias bound in is the presence of in place of . for a given partition any enlargement of the blocks in yielding results in a tighter bias bound as expected .[ variance theorem ] assume there exists such that and such that let where .then for every we have for every , and .again , the only significant difference between this variance bound and the variance bound in is the presence of in place of .the variance depends inversely on the number of samples and exponentially in the size of the enlarged blocks .roughly , we now explain how one may implement the enlarged blocked filter to reduce the bias as compared with the algorithm proposed in while maintaining a comparable variance and computational complexity .suppose firstly that one has a random field over sites and the computational power available ( defining a bound on ) ensures that blocks of size can be readily handled for some .then the complexity of the blocked particle filter proposed in can , in a sense , be regarded as being of order .really , one can imagine particle filters running in parallel over each block and each with complexity on the order of . to exploit the enlarged blocked particle filter, one should start with a larger number of smaller blocks which when enlarged are mostly of the size . 
then , the complexity of the enlarged blocked particle filter proposed herein is on the order .one immediately sees that the variance of the enlarged blocked particle filter is mostly on the same order as that of the algorithm proposed in and the computational complexity has only increased linearly .however , in almost all cases ( and certainly with well - designed partitions ) one will achieve a reduction in the bias at any given site in the random field .we consider a special but interesting case in which a spatial homogeneous total error bound is obtained , the bias bound is better ( tighter ) than in , and the computational requirements largely unchanged when compared with the algorithm in .assume the same hypothesis of theorem [ bias theorem ] .consider the partition and suppose .then for every , , and we have this bound is spatially homogeneous and with it is strictly less than the bias bound introduced in .note that while the bias bound here is spatially homogeneous , the actual bias may still be inhomogeneous since this result is potentially based on over bounding . on the other hand ,it is possible to apply the adaptive scheme proposed in with the enlarged blocked filter and potentially achieve true spatial homogeneity .the idea of the enlarged blocked particle filter is essentially based on the principle that larger blocks lead to a reduction in the bias introduced due to blocking .so , why not just start with larger blocks ? * well , irrespective of the size of the blocks , if one applies the standard blocked particle filter of then there will always exist sites on the border of a block . *if we extend ( or enlarge ) the blocks as proposed herein , we ( typically ) reduce the bias at each site ( and particularly those sites that were on the border of a block in the original partition ) . *if we increase the number of samples with a fixed number of larger blocks ( in the original partition ) then while we can reduce the variance we have no effect on the bias for those sites on the border .* if we start with small blocks in the original partition and then simultaneously enlarge the blocks along with the number of samples then it may be possible maintain a given variance ( or even reduce the variance ) as compared to a partition with larger original block sizes but with a guaranteed smaller bias at each site .the high - level point is that it is computationally more desirable to run a few extra parallel implementations of the particle filter ( corresponding to more ( enlarged ) blocks ) and obtain a tighter bias bound than it is to run a few less parallel implementations of the particle filter for the same variance bound but a larger bias bound .this is only possible through enlargement of the blocks as described herein .finally , we comment on the matter of consistency ( as defined in say ) and observational double counting . consider the partition and suppose for each .practically , following the standard prediction step , the enlarged blocked filter is of the form which is mathematically equivalent to .the point of this illustration is to highlight that even in this case , involving the most extreme enlargement possible , we are not double counting information or effectively applying measurements twice , and the enlarged blocked particle filter is consistent as per . in this section we provide a summary of the proof strategy . 
clearly the main result in theorem [ main result ] is immediately implied by theorems [ bias theorem ] and [ variance theorem ]. much of the technical analysis required in the proof of theorem [ main result ] is similar to that originally detailed in . in the case of the bias, one first derives a local stability property for the filter, which implies that the marginal over a local set of the initial state is forgotten exponentially fast. such a property also implies that any approximation errors in, say, the initial state are also forgotten. it then follows that if one can bound the one-step approximation error at any time, then in conjunction with the local stability property one obtains a time-uniform bound on the bias over a local region of the field. in the case of the variance, a similar idea is used, except that one first establishes stability for the ideal enlarged blocked filter. then, one must bound the one-step approximation error at any time. putting the stability property and the bound on the one-step approximation together, one achieves the desired time-uniform bound on the variance of a block in the enlarged blocked filter. we have glossed over many of the intricacies involved in the proof in this summary. for example, in the case of the bias, the property introduced in and referred to as the decay of correlations must be established to hold uniformly in time for the ideal block filter. this property captures a notion of spatial stability whereby the state at some site in the random field is forgotten as one moves away from that site. rebeschini et al. provide a novel measure of this decay that allows them to establish local stability of the filter and to bound the one-step approximation error. conceptually, a property like the decay of correlations is necessary to establish such results. summarising, the steps needed to prove the bias bound are: 1. proving a (local) stability result for the ideal bayesian filter; 2. proving that a desired decay of correlations property holds uniformly in time for the relevant measure; 3. bounding the one time-step error introduced by the new enlarged blocked filter; 4. putting all these results together and finalising theorem [ bias theorem ]. the variance analysis follows much the same path, the prime difficulty being the establishment of local stability for the ideal enlarged blocked filter. summarising the steps involved in proving the variance bound: 1. proving a local stability result for the ideal enlarged blocked filter; 2.
controlling the one time-step error due to the sampling in the enlarged blocked particle filter; 3. putting these results together and finalising theorem [ variance theorem ]. the proof details are omitted in this version of the work due to their similarity with those presented in , but are available upon request. we have presented a modified version of the blocked particle filter originally proposed in . the main feature of our algorithm is that we add a new parameter that can be tuned to decrease the bias as compared to . the high-level argument for this approach is that it is computationally more desirable to run a few extra parallel implementations of the particle filter (corresponding to more (enlarged) blocks) and obtain a tighter bias bound than it is to run a few fewer parallel implementations of the particle filter for the same variance bound but a larger bias bound. this gain in bias reduction, with the same variance and only a linear increase in the computational complexity, is possible only through enlargement of the blocks as described herein. finally, we also point out that the same adaptive approach to changing partitions proposed in could be applied to the enlarged blocked filter; this provides an additional method for spatial smoothing and may be of interest in cases in which the underlying model is time-varying.
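as a closing illustration, a minimal sketch of the enlarged blocked update is given below. it is our own toy version, not the algorithm as analysed above: it assumes a gaussian random field on a periodic one-dimensional lattice, local gaussian likelihoods and multinomial resampling per block, and all names and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def enlarged_blocked_update(X, y, blocks, r, sigma_y=0.5):
    """blocked update with enlargement radius r (illustrative).
    X: (n, d) predicted particles over a 1-d lattice of d sites.
    each core block is corrected with the observations of its enlarged
    block (clipped at the lattice edge), and only the core coordinates
    are kept, so the posterior is a product of blocked marginals."""
    n, d = X.shape
    out = np.empty_like(X)
    for lo, hi in blocks:
        e_lo, e_hi = max(0, lo - r), min(d, hi + r)           # enlarged block
        logw = -0.5 * (((y[e_lo:e_hi] - X[:, e_lo:e_hi]) / sigma_y) ** 2).sum(axis=1)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(n, size=n, p=w)                      # per-block resample
        out[:, lo:hi] = X[idx, lo:hi]                         # keep core only
    return out

# toy local dynamics on a periodic lattice: each site depends on its neighbours
d, n, r = 12, 500, 1
blocks = [(i, i + 3) for i in range(0, d, 3)]                 # blocks of size 3
X = rng.standard_normal((n, d))
truth = rng.standard_normal(d)
for _ in range(20):
    nbr_t = 0.5 * (np.roll(truth, 1) + np.roll(truth, -1))
    truth = 0.5 * truth + 0.4 * nbr_t + 0.3 * rng.standard_normal(d)
    y = truth + 0.5 * rng.standard_normal(d)                  # local observations
    nbr = 0.5 * (np.roll(X, 1, axis=1) + np.roll(X, -1, axis=1))
    X = 0.5 * X + 0.4 * nbr + 0.3 * rng.standard_normal((n, d))   # prediction
    X = enlarged_blocked_update(X, y, blocks, r)
print("mean absolute error per site:", np.round(np.abs(X.mean(axis=0) - truth), 2))
```

note how each core block is corrected with the observations of its enlarged block (radius r), while only the core coordinates are retained, so the updated measure is again a product over blocks.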
particle filtering is a powerful approximation method for state estimation in nonlinear and non-gaussian dynamical state-space models. unfortunately, the approximation error depends exponentially on the system dimension. this means that an extremely large number of particles may be needed to appropriately control the error in very large scale filtering problems, and the computational burden required is often prohibitive in practice. rebeschini and van handel (2013) analyse a new approach for particle filtering in large-scale dynamic random fields. through a suitable localisation operation they make the error depend on the size of local sets, each of which may be considerably smaller than the dimension of the original system. the drawback is that this localisation operation introduces a bias. in this work, we propose a modified version of rebeschini and van handel's blocked particle filter. we introduce a new degree of freedom that allows us to reduce the bias. we do this by enlarging the space during the update phase, thus reducing the amount of dependent information thrown away due to localisation. by designing an appropriate tradeoff between the various tuning parameters, it is possible to reduce the total error bound by allowing a temporary enlargement of the update operator without materially increasing the overall computational burden.
the last decade has witnessed radical changes in the structure of electricity markets world - wide . prior to the 1980sit was argued convincingly that the electricity industry was a natural monopoly and that strong vertical integration was an obvious and efficient model for the power sector . in the 1990s , technological advances suggested that it was possible to operate power generation and retail supply as competitive market segments .the changes that are taking place and the growing complexity of today s energy markets introduce the need for sophisticated tools for the analysis of market structures and modeling of electricity load and price dynamics . however , we have to bear in mind that electricity markets are not anywhere near as straightforward as financial or even other commodity markets .demand and supply are balanced on a knife - edge because electric power can not be economically stored , end user demand is largely weather dependent , and the reliability of the grid is paramount .recently it has been observed that , contrary to most financial assets , electricity price processes are mean - reverting . in the next sections we investigate whether electricity prices and loads in the california power market can be modeled by generalized ornstein - uhlenbeck processes , a special class of mean - reverting diffusion processes .the analyzed database was provided by the university of california energy institute ( ucei ) . among othersit contains market clearing prices from the california power exchange ( calpx ) and system - wide loads supplied by california s independent ( transmission ) system operator ( iso ) .at first we looked at calpx clearing prices a time series containing system prices of electricity for every hour since april 1st , 1998 , 0:00 until december 31st , 2000 , 24:00 . because the series includeda very strong daily cycle we created a 1006 days long sequence of average daily prices ( as in ) , see fig . 1 .the price trajectory suggests that the process does not exhibit a regular annual cycle .indeed , since june 2000 , california s electricity market has produced extremely high prices and threats of supply shortages .the difficulties that have appeared are intrinsic to the design of the market , in which demand exhibits virtually no price responsiveness and supply faces strict production constraints .it is evident that without taking into consideration regulatory issues , modeling electricity prices in the `` unstable '' california power market is an almost impossible task .thus , instead of forecasting electricity prices , we tried to tackle the `` simpler '' problem of modeling system loads .like for electricity prices , the ucei database contains information about the system - wide load for every hour of the period april 1st , 1998 december 31st , 2000 . due to a very strong daily cyclewe have created a 1006 days long sequence of daily loads , which is plotted in fig .apart from the daily cycle , the time series exhibits weekly and annual seasonality . due to the fact that common trend and seasonality removal techniques do not work well when the time series is only a few ( and not complete , in our case ca .2.8 annual cycles ) cycles long , we restricted the analysis only to two full years of data , i.e. to the period january 1st , 1999 december 31st , 2000 , and applied a new seasonality reduction technique . the seasonality can be easily observed in the frequency domain by plotting the periodogram , which is a sample analogue of the spectral density . 
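in practice the periodogram is computed with the fast fourier transform. a minimal sketch on synthetic data (standing in for the load series, which we do not reproduce here) is given below; the formal definition follows. the series length is trimmed to a multiple of the weekly period purely to avoid spectral leakage in this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic two-year daily series with weekly and annual cycles,
# standing in for the load data (illustrative amplitudes)
n = 728                                  # even, and a multiple of 7
t = np.arange(n)
x = (10 * np.sin(2 * np.pi * t / 7)      # weekly cycle
     + 5 * np.sin(2 * np.pi * t / 364)   # annual cycle
     + rng.standard_normal(n))

# periodogram at the fourier frequencies, computed via the fft:
# I(w_k) = |sum_t x_t exp(-i w_k t)|^2 / n, with w_k = 2 pi k / n
I = np.abs(np.fft.rfft(x)) ** 2 / n
freq = np.arange(I.size) / n             # in cycles per day

# the two dominant peaks should correspond to the 7- and 364-day periods
top = 1 + np.argsort(I[1:])[-2:]         # skip the zero frequency
print("dominant periods (days):", np.sort(1.0 / freq[top]))
```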
for a vector of observations the periodogramis defined as , where , ] denotes the largest integer less then or equal to .observe that is the squared absolute value of the fourier transform . in order to use fast algorithms for the fourier transform we restricted ourselves to vectors of even length , i.e. . in figure 3we plotted the periodogram for the system - wide load before and after removal of the weekly and annual cycles .the periodogram shows well - defined peaks at frequencies corresponding to cycles with period 7 and 365 days .the smaller peaks close to and 0.4 indicate periods of 3.5 and 2.33 days , respectively .both peaks are the so called harmonics ( multiples of the 7-day period frequency ) and indicate that the data exhibits a 7-day period but is not sinusoidal .the weekly period was also observed in lagged autocorrelation plots .these cycles have to be removed before further analysis is carried out , since they may influence predictions to a great extent . to remove the weekly cycle we used the moving average technique . for the vector of daily loads the trendwas first estimated by applying a moving average filter specially chosen to eliminate the weekly component and to dampen the noise : , where .next , we estimated the seasonal component . for each , the average of the deviations was computed .since these average deviations do not necessarily sum to zero , we estimated the seasonal component as , where and for .the deseasonalized ( with respect to the 7-day cycle ) data was then defined as for .finally we removed the trend from the deseasonalized data by taking logarithmic returns , see the middle panel of fig .4 . after removing weekly seasonality we were left with the annual cycle . unfortunately , because of the short length of the time series ( only two years ) , the method applied to the 7-day cycle could not be used to remove the annual cycle . to overcome this we introduced a new method which consists of the following : ( i )calculate a 25-day rolling volatility for the whole vector ; ( ii ) calculate the average volatility for one year ( i.e. in our case ) ; ( iii ) smooth the volatility by taking a 25-day moving average of ; ( iv ) finally , rescale the returns by dividing them by the smoothed annual volatility .the obtained time series ( see the bottom panel of fig .4 ) showed no apparent trend and seasonality ( see the bottom panel of fig .therefore we treated it as a stationary process . in the next sectionwe fit the deseasonalized load returns by a generalized ornstein - uhlenbeck type model .the deseasonalized data sets were modeled by mean - reverting continuous - type processes of the form ( generalized ornstein - uhlenbeck processes ) : unfortunately , since we were unable to remove the annual cycle from the system loads themselves we had to restrict our analysis to models with ( we estimated , but for fractional the process has to be strictly positive and evidently returns do not comply with this restriction ) .thus we were left with the so - called vasicek model .we can calibrate the vasicek model via ordinary linear regression : where and is the standard deviation from the regression .observe that the above implies that the vasicek model is a continuous version of an ar(1 ) process .this is the main reason why it performs poorly for our data sets .the deseasonalized system loads may be an ar ( auto regressive ) process , however , of an order greater then 1 ( see the pacf plots in fig . 
5 , which can be used as an estimate of the ar order ) .it is worth noting that other diffusions of the form ( [ gou ] ) also have a very short ar dependence structure , which would probably result in poor prediction of electricity prices or loads . for comparison , in the bottom panel of fig .5 we plotted actual deseasonalized load returns in december 2000 and the vasicek prediction .the prediction is a one day forecast with model parameters estimated from the last 365 daily returns .unfortunately the fit is far from being perfect .the largest differences occur during christmas ( december 24th26th ) , but this can be improved by incorporating a holiday structure into the model. however , the prediction for the first 23 days in december is still much worse than the prediction obtained from a simple arma(3,3 ) model , i.e. the mean absolute deviation from the true values is 0.565 compared to 0.355 for the arma forecast . even though continuous - time models have certain advantages ( like analytic tractability , a developed theory of pricing derivatives , etc . ) over discrete models , further research will be in the direction of discrete time series models which offer a much better fit to market data . 99 international chamber of commerce , liberalization and privatization of the energy sector , paris ,july 1998 .masson , competitive electricity markets around the world : approaches to price risk management , in , 1999 . v. kaminski , ed ., managing energy price risk , 2nd . ed . , risk books , london , 1999 .r. bjorgan , c .- c .liu , j. lawarree , ieee trans .power systems 14 ( 1999 ) 1285 .bouchaud , m. potters , theory of financial risk , ( in french ) , alea - saclay , eyrolles , paris , 1997 ._ english edition _ : cambridge university press , 2000 .a. weron , r. weron , financial engineering : derivatives pricing , computer simulations , market statistics , ( in polish ) , wnt , warsaw , 1998 .mantegna , h.e .stanley , an introduction to econophysics : correlations and complexity in finance , cambridge university press , cambridge , 1999 .d. pilipovic , energy risk : valuing and managing energy derivatives , mcgraw - hill , new york , 1998 .r. weron , physica a 285 ( 2000 ) 127 .r. weron , b. przybyowicz , physica a 283 ( 2000 ) 462 .see http://www.ucei.org .s. borenstein , the trouble with electricity markets ( and some solutions ) , power working paper pwp-081 , ucei , see .brockwell , r.a .davis , introduction to time series and forecasting , springer - verlag , new york , 1996 .v. kaminski , the challenge of pricing and risk managing electricity derivatives , in `` the us power market '' , risk books , london , 1997 .o. vasicek , j. financial economics 5 ( 1977 ) 177 .p. wilmott , derivatives , wiley , chichester , 2000 .j. nowicka - zagrajek , r. weron , in preparation .
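to make the calibration step of the previous section concrete, the following sketch simulates a vasicek/ar(1) path with a daily euler step and recovers the parameters by ordinary linear regression. it is illustrative only: the data are synthetic and the variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic deseasonalised returns from dX = beta*(L - X) dt + sigma dW,
# simulated with a daily euler step (dt = 1); parameter values illustrative
beta, L, sigma, n = 0.3, 0.0, 0.2, 730
x = np.empty(n)
x[0] = 0.0
for t in range(n - 1):
    x[t + 1] = x[t] + beta * (L - x[t]) + sigma * rng.standard_normal()

# calibration by ordinary linear regression of x_{t+1} on x_t:
# x_{t+1} = a + b x_t + e  <=>  beta = 1 - b,  L = a / (1 - b)
b, a = np.polyfit(x[:-1], x[1:], 1)
resid = x[1:] - (a + b * x[:-1])
print("beta  ~", 1.0 - b)
print("L     ~", a / (1.0 - b))
print("sigma ~", resid.std(ddof=2))      # std of the regression residuals
```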
in this paper we address the issue of modeling electricity loads and prices with diffusion processes. more specifically, we study models which belong to the class of generalized ornstein-uhlenbeck processes. after comparing properties of simulated paths with those of deseasonalized data from the california power market and performing out-of-sample forecasts, we conclude that, despite certain advantages, the analyzed continuous-time processes are not adequate models of electricity load and price dynamics. to be published in physica a (2001): proceedings of the nato arw. * modeling electricity loads in california: a continuous-time approach * rafał weron, hugo steinhaus center for stochastic methods, wrocław university of technology, 50-370 wrocław, poland; b. kozłowska, j. nowicka-zagrajek, institute of mathematics, wrocław university of technology, 50-370 wrocław, poland. * keywords: * econophysics, electricity load, ornstein-uhlenbeck process, mean-reversion, seasonality. * pacs: * 05.45.tp, 89.30.+f, 89.90.+n
two - hop ad hoc wireless networks , where each packet travels at most two hops ( source - relay - destination ) to reach its destination , have been a class of basic and important networking scenarios .actually , the analysis of basic two - hop relay networks serves as the foundation for performance study of general multi - hop networks . due to the promising applications of ad hoc wireless networks in many important scenarios ( like battlefield networks , vehicle networks , disaster recovery networks ) ,the consideration of secrecy ( and also reliability ) in such networks is of great importance for ensuring the high confidentiality requirements of these applications .traditionally , the information security is provided by adopting the cryptography approach , where a plain message is encrypted through a cryptographic algorithm that is hard to break ( decrypt ) in practice by any adversary without the key .while the cryptography is acceptable for general applications with standard security requirement , it may not be sufficient for applications with a requirement of strong form of security ( like military networks and emergency networks ) .this is because that the cryptographic approach can hardly achieve everlasting secrecy , since the adversary can record the transmitted messages and try any way to break them .that is why there is an increasing interest in applying signaling scheme in physical layer to provide a strong form of security , where a degraded signal at an eavesdropper is always ensured such that the original data can be hardly recovered regardless of how the signal is processed at the eavesdropper .we consider applying physical layer method to achieve secure and reliable information transmission in the two - hop wireless networks . by now , a lot of research works have been dedicated to the study of physical layer security based on cooperative relays and artificial noise , and these works can be roughly classified into two categories depending on whether the information of eavesdroppers channels and locations is known or not ( see section v for related works ) . for the casethat the information of eavesdroppers channels and locations is available , a lot of transmission schemes have been proposed to achieve the maximum secrecy rates while optimizing the artificial noise generation and power control to reduce the total transmission power consumption [ 3 - 19 ] . in practice , however , it is difficult to gain the information of eavesdropper channels and locations , since the eavesdroppers always try to hide their identity information as much as possible . to alleviate such a requirement on eavesdroppers information , some recent works explored the implementation of secure and reliable information transmission in wireless networks without the information of both eavesdropper channels and locations [ 20 - 28 ] .it is notable , however , that these works mainly focus on exploring the scaling law results in terms of the number of eavesdroppers one network can tolerate as the number of system nodes there tends to infinity .although the scaling law results are helpful for us to understand the general asymptotic network behavior , they tell us a little about the actual and exact number of eavesdroppers one network can tolerate . 
in practice , however , such exact results are of great interest for network designers .this paper focuses on applying the relay cooperation to achieve secure and reliable information transmission in a more practical finite two - hop wireless network without the knowledge of both eavesdropper channels and locations .the main contributions of this paper as follows . * for achieving secure and reliable information transmission in a more practical two - hop wireless network with finite number of system nodes and equal path - loss between all pairs of nodes, we consider the application of the cooperative protocol proposed in with an optimal and complex relay selection process but less load balance capacity , and also propose to use a new cooperative protocol with a simple and random relay selection process but good load balance capacity .* rather than exploring the asymptotic behavior and scaling law results , we provide theoretic analysis for above two cooperative protocols to determine the corresponding exact results on the number of eavesdroppers one network can tolerate to meet a specified requirement in terms of the maximum secrecy outage probability and the maximum transmission outage probability allowed .* we further extend our study to the more general and practical scenario where the path - loss between each pair of nodes also depends on their relative locations , for which we propose a new transmission protocol with both preferable relay selection and good load balance and also present the corresponding theoretical analysis under this new protocol .the remainder of the paper is organized as follows .section ii presents system models and also introduces transmission outage and secrecy outage for the analysis of transmission protocols .section iii considers two transmission protocols for the scenario of equal path - loss between all pairs of nodes and provides the corresponding theoretical analysis .section iv further presents a new transmission protocol and its theoretical analysis to address distance - dependent path - loss issue .section v introduces the related works and section vi concludes this paper .as illustrated in fig.1 that we consider a network scenario where a source node wishes to communicate securely with its destination node with the help of multiple relay nodes , , , .in addition to these normal system nodes , there are also eavesdroppers , , , that are independent and also uniformly distributed in the network .our goal here is to ensure the secure and reliable information transmission from source to destination under the condition that no real time information is available about both eavesdropper channels and locations . wishes to communicate securely with destination with the assistance of finite relays , , , ( =4 in the figure ) in the presence of passive eavesdroppers , , , ( =4 in the figure ) .cooperative relay scheme is used in the two - hop transmission .a assistant node is selected randomly as relay ( in the figure).,title="fig:",width=192 ] .consider the transmission from a transmitter to a receiver , and denote by the symbol transmitted by and denote by the signal received by .we assume that all nodes transmit with the same power , path - loss between all pairs of nodes is independent , and the frequency - nonselective multi - path fading from to is a complex zero - mean gaussian random variable . 
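the channel model just stated is straightforward to simulate. the sketch below estimates the probability of outage at a receiver when a given number of nodes generate artificial noise, assuming rayleigh fading (so each squared channel gain is exponential with unit mean), unit transmit powers and equal path-loss; the threshold and noise level are illustrative values, not those used in the analysis below.

```python
import numpy as np

rng = np.random.default_rng(0)

def outage_prob(n_jammers, gamma=0.5, n0=1e-3, trials=200_000):
    """monte carlo estimate of P(sinr < gamma) at a receiver when
    n_jammers nodes generate artificial noise. with rayleigh fading,
    each |h|^2 is exponential with unit mean; powers and path-losses
    are all normalised to one, so the interference is a sum of i.i.d.
    exponentials. gamma and n0 are illustrative choices."""
    s = rng.exponential(1.0, trials)                       # desired signal power
    i = rng.exponential(1.0, (trials, n_jammers)).sum(1)   # aggregate jamming
    return np.mean(s / (n0 + i) < gamma)

for m in (1, 5, 10):
    print(m, "jammers -> outage probability", outage_prob(m))
```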
under the condition that all nodes in a group of nodes , , are generating noises , the signal received at node from node is determined as : where is the path - loss exponent .the noise at receiver is assumed to be i.i.d complex gaussian random variables with } = n_0 ] . without loss of generality , we assume that }=1 ] between source and destination .a relay node , indexed by , is then selected randomly from relays falling within the relay selection region . * 2 ) _ channel measurement _ : * each of the other relays measures the channel from the selected relay and destination by accepting the pilot signal from and for determining the noise generation nodes . *3 ) _ two - hop transmission _ : * the source and the selected relay transmit the messages in two - hop transmission .concurrently , the relay nodes with indexes in in the first hop and the relay nodes with indexes in in the second hop transmit noise respectively to help transmission . _ remark 4 _ : in the protocol 3 , a trade off between the preferable relay selection and better load balance can be controlled through the parameters and , which define the relay selection region . as to be shown in theorem 3 that by adopting a small value for both and ( i.e. , a larger relay selection region ) , a better load balance capacity can be achieved at the cost of a smaller number of eavesdroppers one network can tolerant . to address the near eavesdropper problem and also to simply the analysis for the protocol 3, we assume that there exits a constant such that any eavesdropper falling within a circle area with radius and center or can eavesdrop the transmitted messages successfully with probability 1 , while any eavesdropper beyond such area can only successfully eavesdropper the transmitted messages with a probability less than 1 .based on such a simplification , we can establish the following two lemmas regarding some basic properties of , and under this protocol ._ lemma 5 _ : consider the network scenario of fig 2 , under the protocol 3 the transmission outage probability and secrecy outage probability there satisfy the following conditions .\left(1 - \vartheta\right)+ 1 \cdot \vartheta \end{aligned}\ ] ] \\ & \ \ -\left[m \left(\pi { r_0}^2 + \left(\frac{1}{1 + \gamma_e \psi { r_0}^{\alpha}}\right)^{\left(n-1\right)\left(1-e^{-\tau}\right)}\left(1- \pi { r_0}^2\right)\right)\right]^2 \end{aligned}\ ] ] here , ^n\end{aligned}\ ] ] ^{\frac{\alpha}{2}}}dxdy\end{aligned}\ ] ] ^{\frac{\alpha}{2}}}dxdy\end{aligned}\ ] ] the proof of the lemma 5 can be found in the appendix c. _ lemma 6 _ : consider the network scenario of fig 2 , to ensure and by applying the protocol 3 , the parameter must satisfy the following condition . and \ ] ] here , , , , and are defined in the same way as that in lemma 5 . 
* reliability guarantee * to ensure the reliability requirement , we know from lemma 5 that we just need \left(1 - \vartheta\right)+ 1 \cdot \vartheta \leq \varepsilon_t \end{aligned}\ ] ] that is , by using taylor formula , we have * secrecy guarantee * to ensure the secrecy requirement , we know from lemma 5 that we just need -\\ & \left[m \left(\pi { r_0}^2 + \left(\frac{1}{1 + \gamma_e \psi { r_0}^{\alpha}}\right)^{\left(n-1\right)\left(1-e^{-\tau}\right)}\left(1- \pi { r_0}^2\right)\right)\right]^2 \\ & \leq \varepsilon_s \end{aligned}\ ] ] thus , \\ & \leq 1- \sqrt{1-\varepsilon_s } \end{aligned}\ ] ] that is , \ ] ] based on the results of lemma 6 , we now can establish the following theorem about the performance of protocol 3 .* theorem 3 . *consider the network scenario of fig 2 . to guarantee and based on the protocol 3 ,the number of eavesdroppers the network can tolerate must satisfy the following condition . here , , , , and are defined in the same way as that in lemma 5 . from lemma 6 , we know that to ensure the reliability requirement , we have and to ensure the secrecy requirement , we need \\ & \leq 1- \sqrt{1-\varepsilon_s } \end{aligned}\ ] ] thus , by letting to take its maximum value for maximum interference at eavesdroppers , we get the following bound here , lot of research works have been dedicated to the implementation of physical layer security by adopting artificial noise generation for cooperative jamming .these works can be roughly classified into two categories depending on weather the information of eavesdroppers channels and locations is known or not . for the case that the information of eavesdroppers channels and locations is available , many methods can be employed to improve physical layers security by optimizing the artificial noise generation and power control . in case that the global channel state information is available , to achieve the goal of maximizing the secrecy rates while minimizing the total transmit power , a few cooperative transmission schemes have been proposed in , and for two - hop wireless networks the optimal transmission strategies were presented in . with respect to small networks , cooperative jamming with multiple relays and multiple eavesdroppers and knowledge of channels and locations was considered in .even if only local channel information rather than global channel state information is known , it was proved that the near - optimal secrecy rate can achieved by cooperative jamming schemes . except channel information ,the relative locations were also considered for optimizing cooperative jamming and power allocation to disrupt an eavesdropper with known location .in addition , l. lai et al . established the utility of user cooperation in facilitating secure wireless communications and proposed cooperation strategies in the additive white gaussian noise ( awgn ) channel , r. 
negi et al .showed how artificially generated noise can be added to the information bearing signal to achieve secrecy in the multiple and single antenna scenario under the constraint on total power transmitted by all nodes .the physical layer security issues in a two - way untrusted relay system was also investigated with friendly jammers in .the cooperative communications in mobile ad hoc networks was discussed in .effective criteria for relay and jamming node selection were developed to ensure nonzero secrecy rate in case of given sufficient relays in .for the case that the information of eavesdropper channels and locations is unknown , the works in considered the secrecy for two - hop wireless networks , the works in considered the secrecy for large wireless networks , and the further work in considered the energy efficiency cooperative jamming strategies .these works considered how cooperative jamming by friendly nodes can impact the security of the network and compared it with a straightforward approach based on multi - user diversity .they also proposed some protocols to embed cooperative jamming techniques for protecting single links into a large multi - hop network and explored network scaling results on the number of eavesdroppers one network can tolerate .et al . explored the interference from multiple cooperative sessions to confuse the eavesdroppers in a large wireless network .the cooperative relay scheme for the broadcast channel was further investigated in .to achieve reliable and secure information transmission in a two - hop relay wireless network in presence of eavesdroppers with unknown channels and locations , several transmission protocols based on relay cooperation have been considered . in particular ,theoretical analysis has been conducted to understand that under each of these protocols how many eavesdroppers one network can tolerant to meet a specified requirement on the maximum allowed secrecy outage probability and transmission outage probability .our results in this paper indicate that these protocols actually have different performance in terms of eavesdropper - tolerance capacity and load balance capacity among relays , and in general it is possible for us to select a proper transmission protocol according to network scenario such that a desired trade off between the overall eavesdropper - tolerance capacity and load balance among relay nodes can be achieved .notice that is determined as based on the definition of transmission outage probability , we have compared to the noise generated by multiple system nodes , the environment noise is negligible and thus is omitted here to simply the analysis .notice that where is the largest random variable among the exponentially distributed random variables . 
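the distribution function of this maximum, used in the next step, is easy to check numerically: for n i.i.d. unit-rate exponentials the cdf of the maximum at x is (1 - e^{-x})^n. a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

n, x0, trials = 8, 2.0, 200_000
m = rng.exponential(1.0, (trials, n)).max(axis=1)
print("empirical :", (m <= x0).mean())
print("analytic  :", (1 - np.exp(-x0)) ** n)   # cdf of the maximum
```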
from reference , we can get the distribution function of the for each relay as following , from reference , we can also get the distribution function of random variable as following , ^{n } \ \ \ & \text{}\\ 0 \ \ \ & \text{}\\ \end{cases}\ ] ] therefore , we have ^n\end{aligned}\ ] ] since there are other relays except , the expected number of noise - generation nodes is given by .then we have ^n\end{aligned}\ ] ] employing the same method , we can get ^n\end{aligned}\ ] ] thus , we have ^n\\ & \ \ \ \ \ \ \ \ \ \ -\left[1-e^{-2\gamma_r\left(n-1\right ) \left(1-e^{-\tau}\right)\tau}\right]^{2n}\end{aligned}\ ] ] similarly , notice that is given by according to the definition of secrecy outage probability , we know that thus , we have based on the markov inequality , \\ & \ \ \ \ \\leq e_{\mathcal { r}_1}\left[\prod_{r_j \in \mathcal { r}_1 } e_{h_{r_j , e_i}}\left[e^{-\gamma_e|h_{r_j , e_i}|^2}\right]\right]\\ & \ \ \ \ \= e_{\mathcal { r}_1}\left[\left(\frac{1}{1+\gamma_e}\right)^{|\mathcal { r}_1|}\right]\end{aligned}\ ] ] therefore , employing the same method , we can get since the expected number of noise - generation nodes is given by , thus , we can get ^ 2\end{aligned}\ ] ]similar to the proof of lemma 1 , we notice that is determined as based on the definition of transmission outage probability , we have here . then we have employing the same method , we can get thus , we have \\ & \ \ \ \ \ \ \ \ \ \ -\left[1 - e^{-\gamma_r\left(n -1\right)\left(1-e^{-\tau}\right)\tau}\right]^2\end{aligned}\ ] ] notice that the eavesdropper model of protocol 1 is the same as that of protocol 2 , the method for ensuring secrecy is identical to that of in lemma 1 .thus , we can see that the secrecy outage probability of protocol 1 and protocol 2 is the same , that is , ^ 2\end{aligned}\ ] ]notice that two ways leading to transmission outage are : 1 ) there are no candidate relays in the relay selection region ; 2 ) the sinr at the selected relay or the destination is less than .let be the event that there is at least one relay in the relay selection region , and be the event that there are no relays in the relay selection region .we have as shown in fig 2 that by assuming the coordinate of as , we can see that the number of noise generating nodes in square \times \left[y , y+dy\right]$ ] will be .then , we have notice that within the network area , where relays are uniformly distributed , the worst case location for the selected relay is the point , at which the interference from the noise generating nodes is the largest ; whereas , the best case location for the selected relay is the four corner points and of the relay selection , where the interference from the noise generating nodes is the smallest . by considering the worst case location for the selected relay , we have based on the definition of ,we denote by the event that the distance between and the source is less than , and denote by the event that distance between and the source is lager than or equal to .we have from fig 2 we know that the largest interference at eavesdropper happens when is located at the point , while the smallest interference at happens it is located at the four corners of the network region . by considering the smallest interference at eavesdroppers , we then have \\ & \ \ -\left[m \left(\pi { r_0}^2 + \left(\frac{1}{1 + \gamma_e \psi { r_0}^{\alpha}}\right)^{\left(n-1\right)\left(1-e^{-\tau}\right)}\left(1- \pi { r_0}^2\right)\right)\right]^2 \end{aligned}\ ] ] l. dong , h. yousefizadeh and h. 
jafarkhani ,_ `` cooperative jamming and power allocation for wireless relay networks in presence of eavesdropper '' _ ieee international conference on communications ( icc 2011 ) , pp.1 - 5 , 2011 .l. dong , z. han , a.p .petropulu , and h.v .poor , _ `` secure wireless communications via cooperation , '' _ in proc .46th annual allerton conference on communication , control , and computing , pp .1132 - 1138 , 2008 .s. luo , j. li , and a. petropulu , _ `` physical layer security with uncoordinated helpers implementing cooperative jamming , '' _ in proceeding of the seventh ieee sensor array and multichannel signal processing workshop , pp.97 - 100 , 2012 .k. morrison , and d. goeckel , _ `` power allocation to noise - generating nodes for cooperative secrecy in the wireless environment , '' _ in the forty fifth asilomar conference on signals , systems and computers ( asilomar ) , pp.275 - 279 , 2011 .r. zhang , l. song , z. han and b. jiao ._ `` physical layer security for two - way untrusted relaying with friendly jammers , '' _ ieee transactions on vehicular technology , vol .61 , no . 8 , pp . 3693 - 3704 , 2012 .q. guan , f. r. yu , s. jiang , and c. m. leung ._ `` joint topology control and authentication design in mobile ad hoc networks with cooperative communications , '' _ ieee transactions on vehicular technology , vol.61 , no.6 , pp.2674 - 2685 , 2012 d. goeckel , s. vasudevan , d. towsley , s. adams , z. ding and k. leung , _ `` everlasting secrecy in two - hop wireless networks using artificial noise generation from relays , '' _ in proceeding of international technology alliance collaboration system ( acita 2011 ) , 2011 . d. goeckel , s. vasudevan , d. towsley , s. adams , z. ding and k. leung , _`` artificial noise generation from cooperative relays for everlasting secrecy in two - hop wireless networks , '' _ ieee journal on selected areas in communications , vol.29 , no.10 pp.2067 - 2076 , 2011 .s. vasudevan , s. adams , d. goeckel , z. ding , d. towsley and k. leung , _`` multi - user diversity for secrecy in wireless networks , '' _ in proceeding of information theory and applications workshop ( ita 2010 ) , pp.1 - 9 , 2010 . z. ding , k. leung , d. goeckel and d. towsley , _`` opportunistic relaying for secrecy communications : cooperative jamming vs relay chatting , '' _ ieee transactions on wireless communications , vol.10 , no.6 , pp.1725 - 1729 , 2011 .m. dehghan , d. goeckel , m. ghaderi and z. ding , _ `` energy efficiency of cooperative jamming strategies in secure wireless networks , '' _ ieee transactions on wireless communications , vol.11 , no.9 , pp.3025 - 3029 , 2012 .c. leow , c. capar , d. goeckel , and k. leung , _ `` a two - way secrecy scheme for the scalar broadcast channel with internal eavesdroppers , '' _ in the forty fifth asilomar conference on signals , systems and computers ( asilomar 2011 ) , pp.1840 - 1844 , 2011 .a. sheikholeslami , d. goeckel , h. pishro - nik and d. towsley , _ `` physical layer security from inter - session interference in large wireless networks , '' _ in proceeding of ieee infocom 2012 , pp.1179 - 1187 , 2012 .
this work considers the problem of secure and reliable information transmission via relay cooperation in two-hop relay wireless networks without the information of both eavesdropper channels and locations. while previous work on this problem mainly studied infinite networks and their asymptotic behavior and scaling law results, this paper focuses on a more practical network with a finite number of system nodes and explores the corresponding exact result on the number of eavesdroppers one network can tolerate to ensure desired secrecy and reliability. we first study the scenario where path-loss is equal between all pairs of nodes and consider two transmission protocols there: one adopts an optimal but complex relay selection process with less load balance capacity, while the other adopts a random but simple relay selection process with good load balance capacity. theoretical analysis is then provided to determine the maximum number of eavesdroppers one network can tolerate to ensure a desired performance in terms of the secrecy outage probability and transmission outage probability. we further extend our study to the more general scenario where path-loss between each pair of nodes also depends on the distance between them, for which a new transmission protocol with both preferable relay selection and good load balance, as well as the corresponding theoretical analysis, are presented. two-hop wireless networks, cooperative relay, physical layer security, transmission outage, secrecy outage.
in many problems , such as homogenization and long time behavior of first and second order hamilton - jacobi equations , weak kam theory , ergodic mean field games and dislocation dynamics , an essential step for the qualitative analysis of the problem is the computation of the effective hamiltonian .this function plays the role of an eigenvalue and in general it is unknown except in some very special cases .hence the importance of designing efficient algorithms for its computation , taking also into account that the evaluation at each single point of this function requires the solution of a nonlinear partial differential equation .moreover , the problem characterizing the effective hamiltonian is in many cases ill - posed .consider for example the cell problem for a first order hamilton - jacobi equation where , and is the unit -dimensional torus .it involves , for any given , two unknowns in a single equation .moreover , despite the effective hamiltonian is uniquely identified by , the corresponding viscosity solution is in general not unique , not even for addition of constants . in the recent years , several numerical schemes for the approximation of the effective hamiltonian have been proposed ( see ,,,,,, ) , and they are mainly based on two different approaches .the first approach consists in the regularization of the cell problem via well - posed problems , such as the stationary problem for , or the evolutive one indeed , it can be proved that both and converge to , respectively for and ( see ) . in , these regularized problems are discretized by finite - difference schemes , obtaining respectively the so called small- and large- methods . in the limit for or , andsimultaneously for the discretization step , one gets an approximation of ( the convergence of this method is proved in ) . hence the computation of an approximation of , for any fixed ,requires the solution of a sequence of nonlinear finite - difference systems , which become more and more ill - conditioned when is small or is large .it is worth noting that the idea of approaching ergodic problems via small- or large- methods has been applied to several other contexts , for example in mean field games theory , -periodic homogenization problems and dislocation dynamics .the second approach for computing the effective hamiltonian is based on the following - formula : in , this formula is discretized on a simplicial grid , by taking the infimum on the subset of piecewise affine functions and the supremum on the barycenters of the grid elements .the resulting discrete problem is then solved via standard minimax methods .an alternative method is suggested in , where it is shown that the solution of the euler - lagrange equation approximates , for , the infimum in the - formula .a finite difference implementation of this method is presented in , for the special class of eikonal hamiltonians .+ in this paper we propose a new approach which allows to compute solutions of ergodic problems _ directly _ , i.e. , avoiding small- , large- or - approximations .all these problems involve a couple of unknowns , possibly depending on some parameter , where is either a scalar or vector function and is a constant . after performing a discretization of the ergodic problem ( e.g. , using finite - difference schemes ), we collect all the unknowns of the discrete problem in a single vector of length and we recast the equations of the discrete system as functions of , for some . 
we get a nonlinear map and the discrete problem is equivalent to find such that where the system can be inconsistent , e.g. , _ underdetermined _ ( ) as for the cell problem , or_ overdetermined _ ( ) as for stationary mean field games ( see section [ meanfieldgames ] ) .note that this terminology is properly employed for linear systems , but it is commonly adopted also for nonlinear systems . in each case, can be solved by a generalized newton s method involving the moore - penrose pseudoinverse of the jacobian of ( see ) , and efficiently implemented via suitable factorizations .this approach has been experimented by the authors in the context of stationary mean field games on networks ( see ) and , to our knowledge , this is the first time a cell problem in homogenization of hamilton - jacobi equations is solved directly , by interpreting the effective hamiltonian as an unknown ( as it is ! ) .we realized that , once a consistent discretization of the hamiltonian is employed to correctly approximate viscosity solutions , all the job is reduced to computing zeros of nonlinear maps .moreover , despite the cell problem does not admit in general a unique viscosity solution , the ergodic constant defining the effective hamiltonian is often unique .this `` weak '' well - posedness of the problems , and also the fact that the effective hamiltonian is usually the main object of interest more than the viscosity solution itself , encouraged the development of the proposed method .the paper is organized as follows . in section [ newtonlike ]we introduce our approach for solving ergodic problems , and we present a newton - like method for inconsistent nonlinear systems , discussing some basic features and implementation issues . in the remaining sections ,we apply the new method to more and more complex ergodic problems , arising in very different contexts .more precisely , section [ eikonal ] is devoted to the eikonal hamiltonian , which is the benchmark for our algorithm , due to the availability of an explicit formula for the effective hamiltonian .section [ qnorms ] concerns more general convex hamiltonians , while section [ nonconvex ] is devoted to a nonconvex case and section [ 2ndorder ] to second order hamiltonians . in section [ wcs ]we solve some vector problems for weakly coupled systems , whereas section [ dislocations ] is devoted to a nonlocal problem arising in dislocation dynamics .finally , in section [ meanfieldgames ] we solve some stationary mean field games and in section [ multipopmfg ] an extension to the vector case with more competing populations .in this section we introduce a new numerical approach for solving ergodic problems arising in the homogenization of hamilton - jacobi equations. 
then we present a newton - like method for inconsistent nonlinear systems and we discuss some of its features , also from an implementation point of view .we assume that a generic continuous ergodic problem is defined on the torus and we denote by a numerical grid on .we also assume that the discretization scheme for the continuous problem results in a system of nonlinear equations of the form where * is the discretization parameter ( is meant to tend to ) ; * is the point where the continuous problem is approximated ; * is a real valued mesh function on and is a real number , meant to approximate respectively the continuous solution of the ( possibly vector ) ergodic problem and the corresponding ergodic constant ; * represents a generic numerical scheme ; * and are respectively the length of the vector and the number of equations in .we remark that our aim is to efficiently solve the nonlinear system .therefore , we do not specify the type of grids or schemes employed in the discretization . in particular , we do not consider properties of the scheme itself , such as consistency , stability , monotonicity and , more important , the ability to correctly select approximations of viscosity solutions , assuming that they are included in the form of the operator .we just point out that , in our tests , we always perform finite difference discretizations on uniform grids . moreover , if not differently specified , we mainly employ the well known engquist - osher numerical approximation for the first order terms in the equations , due to its simple implementation . for instance , in dimension one , we use the following upwind approximation of the gradient : here , the only assumption on the discrete ergodic problem is the following : to compute a solution of , we collect the unknowns in a single vector of length and we recast the equations as functions of .hence we get the nonlinear map defined by , and is equivalent to the nonlinear system the system is said _ underdetermined _ if and _ overdetermined _ if . as already remarked , this terminology applies to linear systems , nevertheless it is commonly adopted , with a slight abuse , also in the nonlinear case . assuming that is frchet differentiable, we consider the following generalized newton - like method : given , iterate up to convergence where is the jacobian of and denotes the moore - penrose _ pseudoinverse _ of . as in the case of square systems, we can rewrite in a form suitable for computations , i.e. , for where the solution of the system is meant , for arbitrary and , in the following generalized sense .+ ( ) the vector is the unique vector of smallest euclidean norm which minimizes the euclidean norm of the residual . + it is easy to see that the generalized solution is given + * for square systems ( ) by provided that the jacobian is invertible .+ * for overdetermined systems ( ) by the _ least - squares _solution provided that the jacobian has full column rank .+ * for underdetermined systems ( ) by the _ min euclidean norm least - squares _solution provided that the jacobian has full row rank .+ in each case , the generalized solution can be efficiently obtained avoiding the computation of the moore - penrose pseudoinverse .indeed , it suffices to perform a factorization of the jacobian in the overdetermined case ( or its transpose in the underdetermined case ) , i.e. 
indeed, it suffices to perform a qr factorization of the jacobian in the overdetermined case (or of its transpose in the underdetermined case), i.e., a factorization of the form \(A=QR\), in which \(Q=[\,Q_1\ \ Q_2\,]\) is an orthogonal matrix (with blocks \(Q_1\), \(Q_2\) of suitable sizes) and \(R=\bigl[\begin{smallmatrix}R_1\\ 0\end{smallmatrix}\bigr]\), with \(R_1\) upper triangular and \(0\) a null block of matching size. more precisely: + * in the square case, factoring , we get , and therefore , where the last step is readily computed by back substitution; + * in the overdetermined case, factoring \(J=QR\), we get \[ \min_{d}\Bigl\|\Bigl[\begin{smallmatrix}R_1\\ 0\end{smallmatrix}\Bigr]d+\Bigl[\begin{smallmatrix}Q_1^T\\ Q_2^T\end{smallmatrix}\Bigr]F(x^k)\Bigr\|_2^2 = \|Q_2^T F(x^k)\|_2^2+\min_{d}\|R_1 d+Q_1^T F(x^k)\|_2^2, \] so that, minimizing the second term, we get \(R_1 d=-Q_1^T F(x^k)\), which is computed again by back substitution; + * in the underdetermined case, factoring (note that now and are exchanged), we have . moreover, setting , we get . since, by the orthogonality of , we obtain the constraint , it follows that we can minimize (see ) just by taking , and we conclude that , where is computed again via back substitution. + from now on we will refer to the generalized solution for arbitrary and as the _ least-squares _ solution. + summarizing, we consider the following algorithm for the solution of : + in the actual code implementation of the algorithm above, we employ several well known variants and modifications of the classical newton method, as discussed in the following remarks. + the convergence of newton-like methods is in general local. nevertheless, in some cases, as in , if is convex and a proper discretization preserving this property is performed, the map in is also convex and therefore the convergence is global. moreover, in every example we consider (except for multi-population mean field games, see section [ multipopmfg ]) the ergodic constant is unique. + sometimes newton-like methods do not converge, due to oscillations around a minimum of the residual function. this situation can be successfully overcome by introducing a _ damping parameter _ in the update step, i.e., by replacing in the step of the algorithm with for some . usually a fixed value of works fine, possibly affecting the number of iterations needed to reach convergence. a more efficient (but costly) selection of the damping parameter can be implemented using _ line search _ methods, such as the _ inexact backtracking _ line search with the _ armijo-goldstein condition _. especially when dealing with residual functions, the newton step can be trapped in a local minimum. in this case a re-initialization of the damping parameter to a bigger value can resolve the situation, acting as a thermalization in _ simulated annealing _ methods. + it may happen (usually if the initial guess is ) that is nearly singular or rank deficient, so that the least-squares solution cannot be computed. in this case, in the spirit of the _ levenberg-marquardt _ method or, more generally, of quasi-newton methods, we can add a regularization term on the principal diagonal of the jacobian, by replacing with , where is a tunable parameter and denotes the identity (not necessarily square) matrix. this correction does not affect the solution, but it may slow down the convergence if is not chosen properly. this depends on the fact that the method reduces to a simple gradient descent if the term dominates. in our implementation, we switch on the correction (with a fixed ) only at the points where is nearly singular or rank deficient.
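the following sketch implements the qr-based least-squares step for the three cases, together with the fixed damping and the optional diagonal regularization discussed in the remarks above; thresholds and parameter values are illustrative, not those of the paper's c code.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

def lsq_step(J, f, mu=0.0):
    """least-squares / minimal-norm newton step d for J d = -f via qr,
    with optional diagonal regularization mu (levenberg-marquardt style)."""
    m, n = J.shape
    if mu > 0.0:                      # switched on only near rank deficiency
        J = J + mu * np.eye(m, n)     # identity, not necessarily square
    if m >= n:                        # square or overdetermined: J = QR
        Q, R = qr(J, mode='economic')
        return solve_triangular(R, -Q.T @ f)          # back substitution
    else:                             # underdetermined: factor the transpose, J^T = QR
        Q, R = qr(J.T, mode='economic')
        y = solve_triangular(R.T, -f, lower=True)     # forward substitution
        return Q @ y                                  # minimal-norm solution

def damped_newton(F, jac, x0, tol=1e-8, maxit=200, theta=1.0):
    """newton-like iteration with a fixed damping parameter theta in (0,1]."""
    x = np.asarray(x0, dtype=float)
    for k in range(maxit):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        x = x + theta * lsq_step(jac(x), f)
    return x, k
```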
in this way we can easily handle, for instance, second order problems with very small diffusion coefficients. + newton-like methods classically require the residual function to be fréchet differentiable. nevertheless, this assumption can be weakened to include important cases, such as cell problems in which the hamiltonian is of the form with . note that the derivative in is given by , so that the jacobian in the corresponding newton step is not differentiable at the origin for . in this situation, in the spirit of nonsmooth newton methods, we can replace the usual gradient with any element of the sub-gradient. typically we choose for . + it is interesting to observe that, in the overdetermined case, the iterative method - coincides with the _ gauss-newton _ method for the optimization problem . indeed, defining , the classical newton method for the critical points of is given by . computing the gradient and the hessian of , we have , where the second order term is given by . since the minimum of is zero, we expect to be small for close enough to a solution. hence we approximate , obtaining the gauss-newton method, involving only first order terms: . applying again the decomposition to , we finally get the iterative method - , with given by the least-squares solution. this is the approach we followed for solving stationary mean field games on networks in . + throughout the next sections we present several ergodic problems that can be set in our framework and solved by the proposed newton-like method for inconsistent systems. each section contains numerical tests in dimension one and/or two, including some experimental convergence analysis and also showing the performance of the proposed method, both in terms of accuracy and computational time. all tests were performed on a lenovo ultrabook x1 carbon, using 1 cpu intel quad-core i5-4300u 1.90ghz with 8 gb ram, running under the linux slackware 14.1 operating system. the algorithm is implemented in c and employs the library _ suitesparseqr _, which is designed to efficiently compute in parallel the qr factorization and the least-squares solution of very large and sparse linear systems. we start by considering the simple case of the eikonal equation in dimension one, namely the cell problem , where and is a potential. this is a good benchmark for the proposed method, since a formula for the effective hamiltonian is available (see ): , where . note that the effective hamiltonian has a plateau in the whole interval . with a slight abuse of notation, in what follows we will refer to this interval as _ the _ plateau. following , in our first test we choose , for which and . as initial guess we always choose , and we set the tolerance for the stopping criterion of the algorithm to . figure [ lambda-vs-iterations ] shows the computed as a function of the number of iterations needed to reach convergence.
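for the experimental convergence analysis mentioned above, the observed order can be extracted from errors computed on a sequence of refined grids; the helper below is purely illustrative.

```python
# observed convergence orders: log(e_i / e_{i+1}) / log(h_i / h_{i+1}).
import numpy as np

def observed_orders(hs, errs):
    hs, errs = np.asarray(hs, float), np.asarray(errs, float)
    return np.log(errs[:-1] / errs[1:]) / np.log(hs[:-1] / hs[1:])

# example with made-up errors, halving the grid size each time:
# print(observed_orders([1/50, 1/100, 1/200], [2.1e-2, 1.05e-2, 5.3e-3]))
```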
we presented a new approach for the numerical solution of ergodic problems involving hamilton-jacobi equations. the proposed newton-like method for inconsistent nonlinear systems is able to solve first and second order nonlinear cell problems arising in very different contexts, e.g., for scalar convex and nonconvex hamiltonians, weakly coupled systems, dislocation dynamics and mean field games, also in the case of several competing populations. a very large collection of numerical simulations shows the performance of the algorithm, including some experimental convergence analysis. we reported both numerical results and computational times, in order to allow future comparisons. the authors would like to thank f. j. silva who, talking about mean field games, pronounced the magic words `` gauss-newton method '', and also m. cirant for fruitful discussions on the multi-population extension. _ homogenization of first order equations with -periodic hamiltonian: rate of convergence as and numerical approximation of the effective hamiltonian _, math. models methods appl. sci., 21 (2011), pp. 1317-1353. _ convergence of numerical methods and parameter dependence of min-plus eigenvalue problems, frenkel-kontorova models and homogenization of hamilton-jacobi equations _, math. model. numer. anal., 35 (2001), pp.
we propose a new approach to the numerical solution of ergodic problems arising in the homogenization of hamilton-jacobi (hj) equations. it is based on a newton-like method for solving inconsistent systems of nonlinear equations, coming from the discretization of the corresponding ergodic hj equations. we show that our method is able to solve cell problems efficiently in very general contexts, e.g., for first and second order scalar convex and nonconvex hamiltonians, weakly coupled systems, dislocation dynamics and mean field games, also in the case of several competing populations. a large collection of numerical tests in dimension one and two shows the performance of the proposed method, both in terms of accuracy and computational time. hamilton-jacobi equations, homogenization, effective hamiltonian, newton-like methods, inconsistent nonlinear systems, dislocation dynamics, mean field games. 35b27, 35f21, 49m15.
mathematical models which describe the miscible displacement of fluids are of particular economic relevance in the recovery of oil in underground reservoirs by fluids which mix with oil. they also play a significant role in co2 sequestration. this publication extends the analysis of , which studies the discretisation of miscible displacement under low regularity. unlike , which is based on a first-order implicit euler time step (leading to a nonlinear system of equations in each time step), here we examine the discretisation in time by a linearised second-order crank-nicolson scheme. crucially, the new, more efficient method inherits stability under low regularity. as in , the concentration equation is approximated with a discontinuous galerkin method, while darcy's law and the incompressibility condition are formulated as a mixed method. high-order time-stepping for miscible displacement under low regularity has recently also been addressed in , however with a continuous galerkin discretisation in space and discontinuous galerkin in time. for an outline of the general literature we refer to . a triple in is called a weak solution of the incompressible miscible flow problem if 1. for , and 2. for all 3. in . for the data qualification we refer to conditions (a1)-(a8) in , and for the physical interpretation of the system to . we point out that grows proportionally with : thus is in general unbounded on lipschitz domains and in the presence of discontinuous coefficients, which are permitted in this paper. we compactly recall the definition of the finite element spaces from . let be a partition of the time interval and . the concentration is discretised at time on the mesh , or simply by . the approximation space for the variable at time step is denoted by . often we abbreviate , , . we denote the raviart-thomas space of order by . the approximation spaces of and are and . we frequently use the global mesh size and time step , , as well as . in addition we impose conditions (m1)-(m5) of , which concern shape-regularity, boundedness of the polynomial degree, control and the structure of hanging nodes. to deal with discontinuous coefficients and the time derivative, we substitute by , where the are projections such that . given quantities , and at times , , , we denote and . the diffusion term of the concentration equation is discretised by the symmetric interior penalty discontinuous galerkin method: given , , we set \[ \bigl(\mathbb{d}_h^j(u_h)\,\nabla_h c_h,\ \nabla_h w_h\bigr) - \bigl([c_h],\ \{\mathbb{d}_h^j(u_h)\,\nabla_h w_h\}\bigr)_{\mathcal{e}^j_\omega} - \bigl([w_h],\ \{\mathbb{d}_h^j(u_h)\,\nabla_h c_h\}\bigr)_{\mathcal{e}^j_\omega} + \bigl(\sigma^2\,[c_h],\,[w_h]\bigr)_{\mathcal{e}^j_\omega}, \] where is chosen sufficiently large to ensure coercivity of , cf. . the convection, injection and production terms are represented by an upwind form whose surviving part in this extraction reads \[ \cdots\bigr)_{\partial k\setminus\partial\omega} - \bigl((u_h\cdot n_k)_-\,[c_h]_k,\ w_h^+\bigr)_{\partial k\setminus\partial\omega}\Bigr), \] where and . we set . algorithm. _ choose for . given , find such that ; for , find such that , for all , and solve ([ scheme_1 ]) to obtain . _ the algorithm only requires the solution of a _ linear _ system in each time step. the iterate can be computed with an implicit euler method and fine time steps. the use of extrapolated values such as is classical; see, e.g., .
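to illustrate the time discretisation in isolation, the sketch below performs one linearised crank-nicolson step for a model 1-d nonlinear diffusion equation c_t = (d(c) c_x)_x, freezing the coefficient at the second-order extrapolation (3 c^n - c^{n-1})/2 to the half time level. the extrapolation formula, the finite difference spatial discretisation and all names are our own assumptions for illustration; the paper couples the time stepping with a dG discretisation in space and a darcy velocity from the mixed method.

```python
# one linearised crank-nicolson step for c_t = (d(c) c_x)_x in 1-d (illustrative).
import numpy as np

def linearised_cn_step(c_old, c_prev, d, dt, dx):
    """boundary nodes are kept fixed at their current values for simplicity."""
    n = c_old.size
    c_half = 1.5 * c_old - 0.5 * c_prev            # extrapolated state at t^{n+1/2}
    d_face = d(0.5 * (c_half[:-1] + c_half[1:]))   # frozen coefficient on interfaces
    # assemble A c_new = b with A = I - dt/2 L, b = (I + dt/2 L) c_old
    L = np.zeros((n, n))
    for i in range(1, n - 1):
        L[i, i - 1] = d_face[i - 1] / dx**2
        L[i, i + 1] = d_face[i] / dx**2
        L[i, i] = -(d_face[i - 1] + d_face[i]) / dx**2
    A = np.eye(n) - 0.5 * dt * L
    b = (np.eye(n) + 0.5 * dt * L) @ c_old
    return np.linalg.solve(A, b)
```

since the coefficient is evaluated at the extrapolated state, each step only requires the solution of a linear system, matching the structure of the algorithm above.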
given and , there exists a solution of ([ scheme_2 ]) because the bilinear form is positive definite. for , here denotes the complex method of interpolation. [ contu ] let be numerical solutions with and in as . there exists and such that, after passing to a subsequence, in and in as . furthermore, satisfies (w1). use strang's lemma; for details see . we interpret as a piecewise constant function in time, attaining in . using the strong convergence of in and the weak convergence of the lifted gradient of in , we find \[ \cdots\ \{\mathbb{d}_h(\breve{u}_i)\,\nabla_h v_i\}\bigr)_{\mathcal{e}_\omega}\,{\rm d}t. \] as in , it follows that coincides in the limit with . one can also conclude, by adapting , that one arrives at ; hence (w2) is satisfied for . the extension to follows from boundedness and density of smooth functions. [ figure fig:lshape-domain : the darcy velocity at , before any interaction between the concentration front and the corner singularity ] [ figure fig:lshape : the concentration at and , computed with the crank-nicolson scheme ] the numerical experiments are carried out in two space dimensions with the lowest-order method, on a mesh which consists of shape-regular triangles without hanging nodes and which is not changed over time. the diffusion dispersion tensor takes the form . [ figure fig:ref_sol : snapshots of the reference solution at and ] to examine the effect of a singular velocity field caused by a discontinuous permeability distribution and a re-entrant corner, we employ the l-shaped domain and with and as depicted in figure [ fig:lshape-domain ]. the injection and production wells are located at and , respectively. the porous medium is almost impenetrable in the upper left quarter, forcing a high fluid velocity at the re-entrant corner where the nearly impenetrable barrier is thinnest. this leads to a singularity , where is the distance to the re-entrant corner and , cf. . figure [ fig:lshape ] shows the concentration when the front passes the corner and at a later time. the solution contains steep fronts but shows only the localised oscillations that are characteristic for dg methods. convergence rates are determined by comparing the numerical solution to a reference solution that is computed with high accuracy on a one-dimensional grid. more precisely, we set , , and , and choose to be the ball . using polar coordinates, we choose and . then the darcy velocity only changes in the radial direction and is determined by an ode, which has the nonnegative exact solution . consequently, the concentration equation reduces to a linear parabolic equation in one space dimension. figure [ fig:ref_sol ] shows snapshots of the solution with , and figure [ fig:rates ] shows that the error of the implicit euler method is of order , whereas the crank-nicolson scheme reaches the order . _ discontinuous galerkin finite element convergence for incompressible miscible displacement problems of low regularity _, to appear in siam journal on numerical analysis, submitted december 2007 (also preprint hu-berlin 2008 no. 2).
_ reservoir simulation (mathematical techniques in oil recovery) _, siam, 2007. _ recent developments on modeling and analysis of flow of miscible fluids in porous media _, fluid flow and transport in porous media, contemp. math. 295:22924, 2002. _ convergence of a discontinuous galerkin method for the miscible displacement equations under minimal regularity _, preprint, may 2009. _ galerkin finite element methods for parabolic problems _, springer series in computational mathematics 25, 1997.
in this article we study the numerical approximation of incompressible miscible displacement problems with a linearised crank-nicolson time discretisation, combined with a mixed finite element and discontinuous galerkin method. at the heart of the analysis is the proof of convergence under low regularity requirements. numerical experiments demonstrate that the proposed method exhibits second-order convergence for smooth problems and robustness for rough problems.
with the proliferation of broadband access technologies such as ethernet, dsl, wimax and ieee 802.11a/b/g, portable devices tend to possess multiple modes of connecting to the internet. most pdas provide both cellular and wlan connectivity; laptops are typically equipped with a built-in ethernet port, an 802.11a/b/g card and a phone jack for dial-up connections. since a multitude of access technologies will continue to co-exist, increasing efforts are devoted to the standardization of architectures for network convergence. integration of heterogeneous access networks has been a major consideration in the design of 4g networks, ieee 802.21, and the ip multimedia subsystem (ims) platform. in addition, multi-homed internet access presents an attractive option from an end-host's perspective. by pooling the resources of multiple simultaneously available access networks, it is possible to support applications with higher aggregate throughput, lower latency, and better error resiliency. + in many applications, each end-host or device needs to simultaneously support multiple application flows with heterogeneous bit rate and latency requirements. one can easily imagine a corporate user participating in a video conference call, while uploading some relevant files to a remote server and browsing web pages for reference. in the presence of many such users, each access network can easily become congested with multiple competing application flows from multiple devices. the problem of resource allocation arises naturally, for determining the source rate of each application flow, and for distributing the traffic among multiple simultaneously available access networks. in this work, we focus on video streaming applications as they impose the most demanding rate and latency requirements. flows from other applications, such as web browsing and file transfer, are treated as background traffic. + challenges in the design of a rate allocation policy for such a system are manifold. firstly, access networks differ in their attributes such as available bit rates (abrs) and round trip times (rtts), which are time-varying in nature. secondly, video streaming applications differ in their latency requirements and distortion-rate (dr) characteristics. for instance, a high-definition (hd) video sequence containing dynamic scenes from an action movie requires a much higher data rate to achieve the same quality as a static head-and-shoulder news clip for a mobile device. thirdly, unlike file transfer or web browsing, video streaming applications require timely delivery of each packet to ensure continuous media playout. late packets are typically discarded at the receiver, causing drastic quality degradation of the received video due to error propagation at the decoder. in addition, the rate allocation policy should operate in a distributed manner, to avoid the traffic overhead and additional delay in collecting global media and network information for centralized computation. + this paper addresses the above considerations, and investigates a suite of distributed rate allocation policies for multi-homed video streaming over heterogeneous access networks: * _ media-aware allocation _: when devices have information on both video dr characteristics and network abr/rtt attributes, we formulate the rate allocation problem in a convex optimization framework and minimize the sum of expected distortions of all participating streams.
a distributed approximation to the optimization is presented, to enable autonomous rate allocation at each device in a media- and network-aware fashion. * _ h-optimal control _: in the case where media-specific information is not available to the devices, we propose a scheme based on h-optimal control. the scheme achieves optimal bandwidth utilization on all access networks by guaranteeing a worst-case performance bound characterizing the deviation from full network utilization and excessive fluctuations in allocated video rates. * _ aimd-based heuristics _: for comparison, we present two heuristic rate allocation schemes that react to congestion in the network by adjusting the total rate of each stream following the tcp-style additive-increase-multiplicative-decrease (aimd) principle. they differ in how rates are split among multiple access networks in accordance with observed abrs. the performance of all four rate allocation policies is evaluated in ` ns-2 `, using abr and rtt traces collected from ethernet, ieee 802.11b and ieee 802.11g networks in a corporate environment. simulation results are presented for the scenario of simultaneous streaming of multiple high-definition (hd) video sequences over multiple access networks. we verify that the proposed distributed media-aware allocation scheme closely approximates the results from centralized computation. the allocation results react quickly to abrupt changes in the network, such as the arrival or departure of other video streams. both the media-aware allocation and h-optimal control schemes achieve significantly lower packet delivery delays and loss ratios (less than 0.1% for media-aware allocation and below 2.0% for h-optimal control), whereas the aimd-based schemes incur up to 45% losses, far exceeding the tolerance level of video streaming applications. as a result, media-aware allocation improves the average received video quality by 1.5-10.7 db in psnr over the heuristic schemes in various simulation settings. it further ensures equal utilization across all access networks and more balanced video quality among all streams. + the rest of the paper is organized as follows. section [ sec:relatedwork ] briefly reviews related work in multi-flow, multi-network resource allocation. we present our system model of the access networks and of the expected video distortion in section [ sec:systemmodel ], followed by descriptions of the rate allocation schemes in section [ sec:rateallocation ]. the performance of the four schemes is evaluated in section [ sec:performanceevaluation ] via simulations of three hd video streaming sessions sharing three access networks under various traffic conditions and latency requirements. rate allocation among multiple flows that share a network is an important and well-studied problem. internet applications typically use the tcp congestion control mechanism for regulating their outgoing rate. for media streaming applications over udp, tcp-friendly rate control (tfrc) is a popular choice. several modifications have been proposed to improve its media-friendliness. in , the problem of rate allocation among flows with different utilities is studied within a mathematical framework, where two classes of pricing-based distributed rate allocation algorithms are analyzed.
in this work, the notion of utility of each flow corresponds to its expected received video quality, measured in terms of mean-squared-error (mse) distortion relative to the original uncompressed video signals. we also extend the mathematical framework in to consider rate allocation over multiple networks. + the problem of efficient utilization of multiple networks via suitable allocation of traffic has been explored from different perspectives. a game-theoretic framework to allocate bandwidth for elastic services in networks with fixed capacities is described in . our work, in contrast, acknowledges the time-varying nature of the network attributes and dynamically updates the allocation results according to observed available bit rates and round-trip delays. a solution addressing handoff, network selection, and autonomic computation for the integration of heterogeneous wireless networks is presented in . the work, however, does not address simultaneous use of heterogeneous networks and does not consider wireline settings. a cost-price mechanism is proposed for splitting traffic among multiple ieee 802.11 access points to achieve end-host multi-homing. the work does not take into account the existence of other types of access networks or the characteristics of the traffic, nor does it specify an operational method to split the traffic. in , a flow scheduling framework is presented for collaborative internet access, based on modeling and analysis of individual end-hosts' traffic behavior. the framework mainly accounts for tcp flows and uses metrics useful for web traffic, including rtt and throughput, for making scheduling decisions. + rate adaptation of multimedia streams has been studied in the context of heterogeneous networks in , where the authors propose an architecture to allow online measurement of network characteristics and video rate adaptation via transcoding. their rate control algorithm is based on tfrc and is oblivious to the media content. in , media-aware rate allocation is achieved by taking into account the impact of both packet loss ratios and available bandwidth over each link on the end-to-end video quality of a single stream, whereas in , the rate allocation problem has been formulated for multiple streams sharing one wireless network. unlike our recent work , where the multi-stream multi-network rate allocation problem is addressed from the perspective of stochastic control of markov decision processes and robust h-optimal control of linear dynamic systems, in this paper we stay within the convex optimization framework for media-aware optimal rate allocation, and compare the performance of the scheme with prior approaches. preliminary results from this work have been reported in and . in this section, we introduce the mathematical notation used for modeling the access networks and for estimating the expected received video distortion of each stream. we envision a middleware functionality as depicted in fig. [ fig:systemdiagram ], which collects characteristic parameters of both the access networks and the video streams, and performs the optimal rate allocation according to one of the schemes described in section [ sec:rateallocation ]. a more detailed discussion of the middleware functionality can be found in . consider a set of access networks simultaneously available to multiple devices. each access network is characterized by its available bit rate and round trip time, which are measured and updated periodically.
for each device, the set of video streams is denoted as . traffic allocation can be expressed in matrix form: = , where each element corresponds to the allocated rate of stream over network . consequently, the total allocated rate over network is , and the total allocated rate for stream is . we denote the _ residual bandwidth _ over network as: . from the perspective of stream , the observed available bandwidth is: . note that . + as the allocated rate on each network approaches the maximum achievable rate, the average packet delay typically increases due to network congestion. we use a simple rational function to approximate the non-linear increase of packet delay with traffic rate over each network: . the value of is estimated from past observations of and , assuming equal delay in both directions: . we note that despite the oversimplification in this delay model, it is still effective in driving a rate allocation scheme with proactive congestion avoidance, as verified later by the simulation results in section [ sec:performanceevaluation ]. expected video distortion at the decoder comprises two terms: , where denotes the distortion introduced by the lossy compression performed by the encoder, and represents the additional distortion caused by packet loss. + the distortion-rate (dr) characteristic of the encoded video stream can be fit with a parametric model: , where the parameters , and depend on the coding scheme and the content of the video. they can be estimated from three or more trial encodings using non-linear regression techniques. to allow fast adaptation of the rate allocation to abrupt changes in the video content, these parameters are updated for each group of pictures (gop) in the encoded video sequence, typically once every 0.5 seconds. + the distortion introduced by packet loss due to transmission errors and network congestion, on the other hand, can be derived from as: , where the sensitivity factor reflects the impact of packet losses, and depends on both the video content and its encoding structure. in general, packet losses are caused both by random transmission errors and by overdue delivery due to network congestion. since losses of the former type cannot be remedied by means of mindful rate allocation, we choose to omit their contribution in modeling decoded video distortion. for simplicity, consists solely of late losses due to network congestion in the rest of this paper.
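the dr parameters can be estimated with standard non-linear regression. the sketch below assumes the widely used three-parameter form d(r) = d0 + theta/(r - r0) for the encoder distortion (the paper's exact parametrisation is not preserved in this extraction) and fits it to a few trial encodings; all numbers are illustrative.

```python
# fitting a three-parameter distortion-rate model to trial encodings (illustrative).
import numpy as np
from scipy.optimize import curve_fit

def dr_model(R, D0, theta, R0):
    return D0 + theta / (R - R0)

# rates in kbps and mse distortions from trial encodings (made-up numbers)
R = np.array([2000.0, 4000.0, 8000.0, 16000.0])
D = np.array([95.0, 52.0, 30.0, 19.0])
p, _ = curve_fit(dr_model, R, D, p0=[10.0, 1e5, 0.0], maxfev=10000)

# psnr corresponding to a candidate rate: 10 * log10(255^2 / mse)
psnr = 10.0 * np.log10(255.0**2 / dr_model(12000.0, *p))
```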
in this section, we address the problem of rate allocation among multiple streams over multiple access networks from several alternative perspectives. we first present a convex optimization formulation of the problem in section [ subsec:distortionminimized ], and explain how to approximate the media- and network-aware optimal solution with decentralized calculations. in the case that video dr characteristics are unavailable, we resort to a formulation of h-optimal control in section [ subsec:hinfinity ], which dynamically adjusts the allocated rate of each stream according to fluctuations in observed network available bandwidth. for comparison, we include in section [ subsec:aimd ] two heuristic allocation schemes following the tcp-style additive-increase-multiplicative-decrease (aimd) principle. all four schemes are distributed in nature, in that the rate allocation procedures performed by each stream do not need coordination or synchronization with other streams. rather, interactions between the streams are _ implicit _, as the abrs and rtts observed by one stream are affected by the allocated rates of other competing streams sharing the same access networks. we seek to minimize the total expected distortion of all video streams sharing multiple access networks: . in , the expected distortion is a function of the allocated rate and average packet loss according to . the constraint is introduced to impose uniqueness of the optimal solution. we choose to ensure balanced utilization over each interface: it can also be shown that . each stream can therefore calculate the value of independently, based on its own abr observation for network . + the average packet loss for each stream is the weighted sum of packet losses over all networks: . following the derivations in , the percentage of late packets is estimated as , assuming exponential delay distributions with average for network and playout deadline for stream . given , is expressed as: . combining - , it can easily be confirmed that the optimization objective is a convex function of the variable matrix . if all the observations and parameters were available in one place, the solution could be found by a suitable convex optimization method. + we desire to minimize the objective in a distributed manner, with as little exchange of information among the devices as possible. one approach is to consider the impact of network congestion on one stream at a time, and to alternate between the streams until convergence. from the perspective of stream , its contribution to can be rewritten as: . in , optimization of the rate allocation for stream requires knowledge of not only its own distortion-rate function and packet loss sensitivity, but also of its impact on the late loss of other streams via the parameters and . while each stream can obtain information regarding its own packet loss sensitivity and playout deadline, exchange of such information among different streams is undesirable for a distributed scheme. + we therefore further simplify the optimization to: , where is empirically tuned to control the scheme's aggressiveness. even though does not necessarily lead to an optimal solution of , it nevertheless incorporates considerations of both network congestion and encoder video distortion in choosing the optimal rates. the impact on other streams is captured implicitly by the second term in , reflecting the congestion experienced by all streams traversing that network. the effectiveness of this distributed approximation will be verified in section [ subsec:cent ]. + in essence, optimization of involves a one-dimensional search over , and thus can be solved efficiently using numerical methods. the computational complexity of the scheme increases linearly with the number of competing streams and the number of available access networks, on the order of . in practice, each stream needs to track its observations of s and s over all available access networks, and to observe its video dr parameters and .
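a sketch of the resulting per-stream computation: the stream picks its total rate by a bounded one-dimensional minimization of its fitted dr curve plus a congestion penalty, and then splits the rate across the networks. the penalty form below (an m/m/1-style delay term that blows up as the allocated rate approaches the observed available bandwidth) and the proportional split are our illustrative stand-ins for the paper's elided formulas.

```python
# per-stream one-dimensional rate search (illustrative objective and split rule).
import numpy as np
from scipy.optimize import minimize_scalar

def allocate_stream_rate(dr, c, mu, R_min, R_max):
    """dr: fitted distortion-rate function of this stream; c[l]: bandwidth the
    stream observes as available on network l; mu: aggressiveness parameter."""
    c = np.asarray(c, float)
    w = c / c.sum()                      # split in proportion to observed abr
    def objective(R):
        r = w * R                        # per-network rates
        # m/m/1-style congestion penalty, guarded against division by zero
        return dr(R) + mu * np.sum(1.0 / np.maximum(c - r, 1e-6))
    res = minimize_scalar(objective, bounds=(R_min, R_max), method='bounded')
    return res.x, w * res.x              # total rate and per-network allocation
```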
at each time instant, the scheme updates its estimate of according to . it then determines the allocated rate by minimizing , and divides the rate in proportion to over the respective networks. figure [ fig:mediaawarealgorithm ] summarizes these procedures. + [ figure fig:mediaawarealgorithm : update ; update ; minimize ] in the case when media-specific knowledge is unavailable to the wireless devices, the rate allocation problem can be addressed using h-optimal control. in this approach, we track current and past observations of the available bit rate (abr) of each network, and model variations in abr as unknown disturbances to a continuous-time linear system. the design goal is to achieve full network utilization while preventing excessive fluctuations in allocated video rates. an optimal rate controller is derived based on h-optimal analysis to bound the _ worst-case _ system performance. the scheme is distributed by nature, in that it treats the dynamics of each stream as unknown disturbance for the others, thereby decoupling interactions between different streams. + each stream estimates via various online measurement tools the _ measured residual bandwidth _ as: , in which is defined by ; and denote the initial and final time instances when is negative, and is a negative scaling constant. + we next define a continuous-time linear system from the perspective of a single stream keeping track of a single network. for notational simplicity we subsequently drop the subscript and omit the time index. the extension to multiple access networks is discussed in the appendix. since each stream is independent of the others in the h-optimal control formulation, the scheme also generalizes immediately to the case with multiple streams. + from the perspective of stream , its rate update system can be expressed as: , where the system state variable reflects roughly the residual network bandwidth for stream and represents the rate control action. in , the parameters and adjust the memory horizon and the expected effectiveness of control actions, respectively, on the system state. a smaller value of corresponds to a longer horizon, i.e., smoother values of over time. a higher value of means a more responsive system, where the rate control action of an individual stream has greater impact on total network utilization. in , the rate update is approximately proportional to the control action, with sufficiently small to guarantee stability. recall that is a function of the residual bandwidth, which, in turn, is a function of the aggregate rates from all video streams. therefore the evolutions of and are connected via a feedback loop. + ideally, if the network is fully utilized at equilibrium, is zero while and approach zero for sufficiently small . to prevent excessive fluctuations in the allocated rate of each video stream, however, fluctuations in the measured available bandwidth cannot be tracked perfectly. the design of the rate controller therefore needs to balance the incentive for full network utilization against the risk of excessive fluctuation in allocated video rates. such a design objective can be expressed in mathematical terms, in the form of a cost function where . in other words, for any given value of , one can find an optimal rate controller according to , to ensure that in the worst case the cost function will not exceed . + although the analysis and controller design are conducted around the equilibrium point, the streams do not have to compute the actual equilibrium values.
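for intuition, a scalar version of an h-optimal state-feedback design can be computed in closed form. the sketch below assumes a generic scalar plant dx/dt = -a x + b mu + d w with state weight q, control weight r and attenuation level gamma; the game riccati equation then reduces to a scalar quadratic, and the controller is a linear state feedback. the plant coefficients are illustrative, not the paper's calibrated values.

```python
# scalar h-infinity state feedback: solve the scalar game riccati equation
# z^2 (b^2/r - d^2/gamma^2) + 2 a z - q = 0 and take the positive root.
import math

def hinf_gain(a, b, d, q, r, gamma):
    s = b * b / r - (d * d) / (gamma * gamma)
    if s <= 0.0:
        raise ValueError("gamma is below the achievable attenuation level")
    z = (-2.0 * a + math.sqrt(4.0 * a * a + 4.0 * s * q)) / (2.0 * s)
    return -(b / r) * z            # feedback law mu = k * x

k = hinf_gain(a=0.5, b=1.0, d=1.0, q=1.0, r=0.1, gamma=2.0)
# the rate update would then follow the state feedback, e.g. rate += eta * k * x,
# with x the measured residual-bandwidth state (our illustrative discretisation).
```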
in practice, the h-optimal rate control scheme is implemented through the procedures summarized in fig. [ fig:hinfalgorithm ]. as for media-aware allocation, the computational complexity of the h-optimal control scheme scales linearly with the number of competing streams and the number of available access networks, on the order of . + for comparison, we introduce in this section two heuristic rate allocation schemes based on the additive-increase-multiplicative-decrease (aimd) principle used by tcp congestion control. instead of performing proactive rate allocation by optimizing a chosen objective according to observed network attributes and video characteristics, the aimd-based schemes are reactive in nature, in that they probe the network for available bandwidth and reduce the allocated rates only _ after _ congestion is detected. + as illustrated in fig. [ fig:aimdheuristics ], each stream starts at a specified rate corresponding to the minimum acceptable video quality, and increases its allocation by every seconds unless network congestion is perceived, in which case the allocated rate is dropped by over the congested network. + we consider two variations of the aimd-based schemes. they differ in how the total allocated stream rate is distributed across multiple access networks during the additive-increase phase: * _ greedy aimd _: the increase in rate allocation is assigned to the network interface offering the maximum instantaneous available bit rate: , if . * _ rate proportional aimd _: the increase in rate allocation is assigned to all available networks in proportion to their instantaneous available bit rates. a code sketch of both variants is given at the end of this section. in both schemes, congestion over network is indicated upon detection of a lost packet or when the observed rtt exceeds a prescribed threshold. the value of , in turn, is adjusted according to the video playout deadline. [ figure fig:aimdheuristics : the rate keeps increasing at a rate of until congestion is detected from packet losses or excessive round trip delay; in that case, the rate is cut down by over the congested network ] [ figure fig:traces : measured network traces; panels (a) ethernet, (b) 802.11b, (c) 802.11g ] the performance of all four rate allocation policies is evaluated in ` ns-2 `, for the example network topology shown in fig. [ fig:networktopology ]. each sender streams one hd video sequence via all three access networks to its receiver. rate allocation over each network is determined by the middleware functionality depicted in fig. [ fig:systemdiagram ]. we collect available bit rate (abr) and round-trip-time (rtt) measurements from three real-world access networks (ethernet, 802.11b and 802.11g) in a corporate environment using ` abing `. the abr and rtt values are measured once every 2 seconds. the traces are then used to drive the capacity and delay over each simulated access network in ` ns-2 `. statistics of the network measurements, together with a sample segment of the measured traces, are presented in fig. [ fig:traces ]. figure [ fig:delay ] shows how the average packet delay varies with the utilization percentage over each access network, as well as sample packet delay distributions at a given utilization level. in all three interface networks, the average packet delay increases drastically as the utilization level approaches 100%, as described in .
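here is the code sketch of the two aimd variants promised above; the parameter names (the increase step, the multiplicative factor) are illustrative, as the paper's symbols are not preserved in this extraction.

```python
# additive increase / multiplicative decrease updates for a multi-homed stream.
import numpy as np

def aimd_increase(r, abr, delta_r, greedy=True):
    """every increase interval, grow the total rate by delta_r: either entirely on
    the network with the largest instantaneous abr (greedy), or spread over all
    networks in proportion to their abrs (rate proportional)."""
    r = np.asarray(r, float).copy()
    if greedy:
        r[np.argmax(abr)] += delta_r
    else:
        r += delta_r * np.asarray(abr, float) / np.sum(abr)
    return r

def aimd_decrease(r, congested, beta=0.5):
    """on congestion over network `congested`, cut its rate multiplicatively."""
    r = np.asarray(r, float).copy()
    r[congested] *= (1.0 - beta)
    return r
```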
in accordance with our assumptions, the example packet delay distributions also exhibit exponential shapes. we refer to for further details of the trace collection procedures and of the bandwidth and delay measurements using ` abing `. + three high-definition (hd) video sequences, _ bigships _, _ cyclists _ and _ harbor _, are streamed by the three senders, respectively. the sequences have a spatial resolution of pixels and a temporal resolution of 60 frames per second (fps). each stream is encoded using a fast implementation of the h.264/avc codec at various quantization step sizes, with a gop length of 30 and an ibbp ... structure similar to that often used in mpeg-2 bitstreams. figure [ fig:stream ] shows the tradeoff of encoded video quality, measured in mse distortion and psnr, versus average bit rate over the entire sequence durations. the measured data points are plotted against fitted model curves according to . encoded video frames are segmented into packets with a maximum size of 1500 bytes. the transmission intervals of the packets in each gop are spread out evenly, to avoid unnecessary queuing delay due to the large sizes of intra-coded frames. + in addition to the video streaming sessions, background traffic is introduced over each network interface by the exponential traffic generator in ` ns-2 `. the background traffic rate varies between 10% and 50% of the total abr of each access network. we also employ an implementation of the ` abing ` agent in ` ns-2 ` to perform online abr and rtt measurement over each access network for each stream. this allows the simulation system to capture the interaction among the three competing hd streams as they share the three access networks simultaneously. for consistency, the measurement frequency of the ` abing ` agents in ` ns-2 ` is also once every 2 seconds. updates of the video rate allocation are in sync with the time instances at which new network measurements are obtained for each stream. note that no coordination or synchronization is required across rate updates in different streams, due to the distributed nature of the rate allocation schemes. + in the following, we first focus on the media-aware allocation scheme. its allocation results are compared against optimal solutions of in section [ subsec:cent ], and its convergence behavior is compared against h-optimal control in section [ subsec:convergence ]. the performance of all four allocation schemes is evaluated with 20% background traffic load over each network and a playout deadline of 300 ms in section [ subsec:allocationcomparison ]. section [ subsec:randompacketloss ] compares allocation results from networks with and without random packet losses. the impact of the background traffic load on the allocation results obtained from the different schemes is studied in section [ subsec:backgroundtraffic ]. the effect of different video streaming playout deadlines is investigated in section [ subsec:playoutdeadline ].
we first verify how well the distributed solution from can approximate the optimal solution of . figure [ fig:comparecent ] compares the traces of the rate allocated to each video stream as calculated from both solutions. the value of used in the distributed approximation corresponds to the sum of for all three streams: . it can be observed that the allocation from the distributed approximation tracks the optimal solution closely. since the congestion term in ignores the impact of a stream on the expected distortion of other streams, the distributed approximation achieves slightly higher rates. + figure [ fig:varyingflow ] shows traces of the allocated rate when the number of competing streams over the three access networks increases from 1 to 3. in this experiment all three streams carry the _ harbor _ hd video sequence, hence the allocated rates are expected to be the same after convergence. the second and third streams start at 50 and 100 seconds, and complete at 200 and 250 seconds, respectively. correspondingly, abrupt drops and rises in the allocated rate can be observed in fig. [ fig:varyingflow ] (a) for media-aware allocation. it is also interesting to note the fluctuations in the allocated rates after convergence, reflecting slight variations in the video contents and network attributes. the h-optimal control scheme, on the other hand, requires a longer time for the allocation to converge, as shown in fig. [ fig:varyingflow ] (b). + next, we measure the allocation convergence times when 1, 2 or 3 competing streams join the network simultaneously. the convergence time is defined as the duration between the start of the streams and the time at which the allocated video rates settle between adjacent quality levels. figure [ fig:varyingflowconvergence ] compares the results of media-aware allocation against h-optimal control. while both schemes yield similar allocated rates and video qualities, the convergence time of media-aware allocation is shorter than that of h-optimal control. + [ figure fig:varyingflow : traces of allocated rate; panels (a) media-aware allocation, (b) h-optimal control ] figure [ fig:utiltrace ] plots the traces of the aggregate rate allocated over the ethernet interface for all four allocation schemes, together with the available bit rate over that network. it can be observed in fig. [ fig:utiltrace ] (a) that media-aware allocation avoids much of the fluctuation of the two aimd-based heuristics. figure [ fig:utiltrace ] (b) shows that it achieves higher network utilization than h-optimal control, as the latter is designed to optimize for the worst-case scenario. similar observations also hold for the traces of the aggregate allocated rate over the other two interfaces. + in fig. [ fig:flowtrace ], we compare the traces of the total allocated rate for each video stream resulting from the various allocation schemes. in greedy aimd allocation, the total rate of each stream increases until a multiplicative decrease is triggered by either packet losses or an increase in the observed rtts on one of the interfaces. the traces of the allocated rates therefore bear a saw-tooth pattern. the behavior of the rate proportional aimd scheme is similar, except that rate drops tend to occur at around the same time. the control scheme yields fewer fluctuations in the allocated rates.
in both the rate proportional aimd allocation and the control schemes, the allocated rates are almost identical for each video stream, since all flows are treated with equal importance. the media-aware convex optimization scheme, in contrast, consistently allocates a higher rate to the more demanding _ harbor _ stream, with a reduced allocation for _ cyclists _, whose content is less complex. [ figure fig:utiltrace : aggregate allocated rate over the ethernet interface, (a) versus the aimd heuristics and (b) versus h-optimal control; in this experiment the background traffic load is 20% and the playout deadline is 300 ms; the network available bit rate is also plotted as a reference ] [ figure fig:flowtrace : total allocated rate per stream; panels (a) media-aware, (b) h-optimal, (c) greedy aimd, (d) rate proportional aimd ] figure [ fig:nopacketloss ] compares the average utilization over each interface, the allocated rate to each stream, and the corresponding received video quality achieved by the four allocation schemes, for a background traffic load of 30%. the media-aware scheme allocates a lower rate for _ cyclists _ and a higher rate for _ harbor _, compared to the other schemes. this improves the video quality of _ harbor _, the stream with the lowest psnr amongst the three, at the expense of reducing the quality of the less demanding _ cyclists _. consequently, the video quality is more balanced among all three streams. + a similar graph is shown in fig. [ fig:randompacketloss ] for the same simulation with 1% random packet loss over each network interface. while the presence of random packet losses tends to reduce the received video quality, its impact cannot be mitigated by means of careful rate allocation. consequently, the relative performance of the four rate allocation schemes remains the same in both scenarios. this justifies the absence of a term representing random packet losses in the formulation of the media-aware rate allocation problem. for the rest of the simulations, we therefore focus on comparisons without random packet losses. next, we vary the percentage of background traffic over each network from 10% to 50%, with a playout deadline of 300 ms. the impact of the background traffic load on the allocation results is shown in fig. [ fig:allocationperload ]. it can be observed that the total utilization over each interface increases with the background traffic load. for the media-aware, h-optimal and rate proportional aimd schemes, utilization varies between 60% and 90%, whereas with the greedy aimd scheme the 802.11b interface is underutilized. note that media-aware allocation ensures balanced utilization over all three access networks, as dictated by . + it can be observed from fig. [ fig:allocationperload ] that increasing the background traffic load leads to a decreasing allocated rate for each stream. while the other three schemes treat the three flows with equal importance, media-aware allocation consistently favors the more demanding _ harbor _, thereby reducing the quality gap between the three sequences. the two aimd-based heuristics achieve lower received video quality than the media-aware and h-optimal allocations, especially in the presence of heavier background traffic load. + figure [ fig:delaylossperload ] compares the average packet delivery delays and the packet loss ratios due to late arrivals.
in the two aimd-based schemes, allocated rates are reduced only _ after _ congestion has been detected. the media-aware allocation and h-optimal control schemes, on the other hand, attempt to avoid network congestion proactively in their problem formulations. they therefore yield significantly lower packet loss ratios and delays. this leads to improved received video quality, as shown in fig. [ fig:decodedpsnrperload ]. the performance gain ranges between 1.5 and 8.8 db in psnr of the decoded video, depending on the sequence content and the background traffic load. note also that the packet delivery delays and packet loss ratios indicate the impact of each scheme on the background traffic sharing the same access networks. the lower delays and losses achieved by the media-aware and h-optimal schemes mean that they introduce less disruption to ongoing flows, as a result of proactive congestion avoidance. + [ figure : panels (a) media-aware, (b) h-optimal, (c) greedy aimd, (d) rate proportional aimd ] [ figure : panels (a) and (b) ] in the next set of experiments, we vary the playout deadline for each video stream from 200 ms to 5.0 seconds, while fixing the background traffic load at 20%. figure [ fig:allocationperdeadline ] compares the allocation results of the four schemes. as the playout deadline increases, a higher network congestion level can be tolerated by each video stream. the media-aware allocation scheme therefore yields a higher allocated rate and improved video quality, saturating as the playout deadline exceeds 1.0 second. the allocations from the other three media-unaware schemes, in comparison, are not as responsive to changes in the playout deadlines of the video streams. + figures [ fig:delayperdeadline ] and [ fig:lossperdeadline ] compare the average packet delivery delays and the packet loss ratios due to late arrivals. similarly to the results in the previous section, the media-aware and h-optimal allocations achieve much lower packet delivery delays and loss ratios than the two aimd-based heuristics. the performance gap increases as the playout deadline becomes more relaxed. the packet loss ratios are almost negligible (less than 0.1%) for media-aware allocation, and very small (less than 2.0%) for h-optimal control. in comparison, the packet loss ratios range between 16-45% for greedy aimd and between 12-37% for rate proportional aimd allocation, far exceeding the tolerance level of video streaming applications.
consequently, while the average received video quality of _ bigships _ at a playout deadline of 300 ms is 34.0 db and 32.8 db for the greedy and rate proportional aimd schemes, respectively, it is improved to 37.3 db with media-aware allocation and to 36.0 db with h-optimal control. similar results are observed for the other sequences and other playout deadlines, as shown in fig. [ fig:decodedpsnrperdeadline ]. the improvement varies between 3.3-10.7 db in psnr of the decoded video. the lower packet delivery delays and packet loss ratios achieved by the two proposed schemes also indicate that they are more friendly to ongoing background traffic than the two aimd heuristics, by virtue of more mindful congestion avoidance. + [ figure : panels (a) media-aware, (b) h-optimal, (c) greedy aimd, (d) rate proportional aimd ] this paper addresses the problem of rate allocation among multiple video streams sharing multiple heterogeneous access networks. we present an analytical framework for optimal rate allocation based on observed network attributes (available bit rates and round-trip times) and video distortion-rate (dr) characteristics, and investigate a suite of distributed rate allocation policies. extensive simulation results demonstrate that both the media-aware allocation and h-optimal control schemes outperform the aimd-based heuristics in achieving smaller rate fluctuations, lower packet delivery delays, significantly reduced packet loss ratios and improved received video quality. the former benefit from proactive avoidance of network congestion, whereas the latter adjust the allocated rates only reactively, _ after _ the detection of packet drops or excessive delays. the media-aware approach further takes advantage of explicit knowledge of the video dr characteristics, thereby achieving more balanced video quality and responding to more relaxed video playout deadlines by increasing network utilization. + we believe that this work has some interesting implications for the design of next generation networks in a heterogeneous, multi-homed environment. media-aware proactive rate allocation provides a novel framework for quality-of-service (qos) support. instead of rigidly reserving network resources for each application flow in advance, the allocation can be dynamically adapted to changes in network conditions and media characteristics. as the proposed rate allocation schemes are distributed in nature, they can be easily integrated into wireless devices. future extensions of the current work include the investigation of measures to best allocate network resources among different traffic types (e.g., web browsing vs. video streaming) and to reconcile their different performance metrics (e.g., web page refresh time vs. video quality) as functions of their allocated rates. in addition, our system model can be further extended to incorporate other types of access networks employing resource provisioning or admission control. we now provide the h-optimal control formulation for the general case of multiple access networks, from the perspective of a single stream. for ease of notation, we drop the superscript and define , and . here, the matrices , , and are obtained simply by multiplying the identity matrix by , , and , respectively. + correspondingly, the system output is: , where the matrix represents the weight on the cost of deviation from the zero state, i.e., full network utilization.
we can assume that is positive definite, in that any non-zero deviation from full utilization leads to a positive cost. likewise, the matrix represents the weight on the cost of deviation from zero control, i.e., constant allocated rates. we assume that is positive definite, and that no cost is placed on the product of control actions and states: . + the cost function is defined as: , where and . again, one can define the worst possible value of the cost as . + similarly to the solution for the scalar system, we obtain the h-optimal linear feedback controller for the multiple network case: . in , the matrix can be computed by solving the game algebraic riccati equation (gare): . it can be verified that a unique minimal nonnegative definite solution exists for , if is stabilizable and is detectable. in our case, since the matrix is square and negative definite and the matrix is positive definite, the system is both controllable and observable, hence both conditions are satisfied. x. zhu, p. agrawal, j. p. singh, t. alpcan, and b. girod, `` rate allocation for multi-user video streaming over heterogeneous access networks, '' in _ proc. acm 15th international conference on multimedia _, 2007, pp. . p. vidales, j. baliosion, j. serrat, g. mapp, f. stejano, and a. hopper, `` autonomic system for mobility support in 4g networks, '' in _ ieee journal on selected areas in communications _, vol. 23, no. 12, dec. 2005, pp. 2288-2304. a. cuevas, j. i. moreno, p. vidales, and h. einsiedler, `` the ims platform: a solution for next generation network operators to be more than bit pipes, '' in _ ieee communications magazine, issue on advances of service platform technologies _, vol. 44, no. 8, 2006, pp. 75-81. n. thompson, g. he, and h. luo, `` flow scheduling for end-host multihoming, '' in _ proc. 25th ieee international conference on computer communications (infocom06) _, barcelona, spain, apr. 2006, pp. 1-12. z. wang, s. banerjee, and s. jamin, `` media-friendliness of a slowly-responsive congestion control protocol, '' in _ proc. 14th international workshop on network and operating systems support for digital audio and video _, cork, ireland, 2004, pp. . f. kelly, a. maulloo, and d. tan, `` rate control for communication networks: shadow prices, proportional fairness and stability, '' _ journal of the operational research society _, vol. 49, no. 3, pp. 237-252, 1998. h. yaiche, r. mazumdar, and c. rosenberg, `` a game theoretic framework for bandwidth allocation and pricing in broadband networks, '' _ ieee/acm trans. on networking _, vol. 8, no. 5, pp. 667-678, oct. 2000. , `` global stability analysis of an end-to-end congestion control scheme for general topology networks with delay, '' in _ proc. 42nd ieee conference on decision and control (cdc03) _, maui, hi, u.s.a., dec. 2003, pp. 1092-1097. a. szwabe, a. schorr, f. j. hauck, and a. j. kassler, `` dynamic multimedia stream adaptation and rate control for heterogeneous networks, '' in _ proc. 15th international packet video workshop (pv06) _, vol. 7, no. 5, hangzhou, china, may 2006, pp. 63-69. d. jurca and p. frossard, `` media-specific rate allocation in heterogeneous wireless networks, '' in _ proc. 15th international packet video workshop (pv06) _, vol. 7, no. 5, hangzhou, china, may 2006, pp. 713-726. x. zhu, j. p. singh, and b.
x. zhu, p. agrawal, j. p. singh, t. alpcan, and b. girod, "rate allocation for multi-user video streaming over heterogenous access networks," in proc. acm 15th international conference on multimedia, 2007.

p. vidales, j. baliosian, j. serrat, g. mapp, f. stajano, and a. hopper, "autonomic system for mobility support in 4g networks," ieee journal on selected areas in communications, vol. 23, no. 12, dec. 2005, pp. 2288-2304.

a. cuevas, j. i. moreno, p. vidales, and h. einsiedler, "the ims platform: a solution for next generation network operators to be more than bit pipes," ieee communications magazine, issue on advances of service platform technologies, vol. 44, no. 8, 2006, pp. 75-81.

n. thompson, g. he, and h. luo, "flow scheduling for end-host multihoming," in proc. 25th ieee international conference on computer communications (infocom'06), barcelona, spain, apr. 2006, pp. 1-12.

z. wang, s. banerjee, and s. jamin, "media-friendliness of a slowly-responsive congestion control protocol," in proc. 14th international workshop on network and operating systems support for digital audio and video, cork, ireland, 2004.

f. kelly, a. maulloo, and d. tan, "rate control for communication networks: shadow prices, proportional fairness and stability," journal of the operational research society, vol. 49, no. 3, pp. 237-252, 1998.

h. yaiche, r. mazumdar, and c. rosenberg, "a game theoretic framework for bandwidth allocation and pricing in broadband networks," ieee/acm trans. on networking, vol. 8, no. 5, pp. 667-678, oct. 2000.

——, "global stability analysis of an end-to-end congestion control scheme for general topology networks with delay," in proc. 42nd ieee conference on decision and control (cdc'03), maui, hi, u.s.a., dec. 2003, pp. 1092-1097.

a. szwabe, a. schorr, f. j. hauck, and a. j. kassler, "dynamic multimedia stream adaptation and rate control for heterogeneous networks," in proc. 15th international packet video workshop (pv'06), vol. 7, no. 5, hangzhou, china, may 2006, pp. 63-69.

d. jurca and p. frossard, "media-specific rate allocation in heterogeneous wireless networks," in proc. 15th international packet video workshop (pv'06), vol. 7, no. 5, hangzhou, china, may 2006, pp. 713-726.

x. zhu, j. p. singh, and b. girod, "joint routing and rate allocation for multiple video streams in ad hoc wireless networks," in proc. 15th international packet video workshop (pv'06), vol. 7, no. 5, hangzhou, china, may 2006, pp. 727-736.

j. p. singh, t. alpcan, p. agrawal, and v. sharma, "an optimal flow assignment framework for heterogeneous network access," in proc. ieee international symposium on a world of wireless, mobile and multimedia networks (wowmom'07), helsinki, finland, apr. 2007, pp. 1-12.

t. alpcan, j. p. singh, and t. basar, "a robust flow control framework for heterogenous network access," in proc. 5th ieee international symposium on modeling and optimization in mobile, ad hoc, and wireless networks (wiopt'07), limassol, cyprus, june 2007.

j. p. singh, t. alpcan, x. zhu, and p. agrawal, "towards heterogeneous network convergence: policies and middleware architecture for efficient flow assignment, rate allocation and rate control for multimedia applications," in proc. workshop on middleware for next-generation converged networks and applications (mncna'07), newport beach, ca, u.s.a., nov. 2007.

x. zhu, e. setton, and b. girod, "congestion-distortion optimized video transmission over ad hoc networks," eurasip journal of signal processing: image communications, vol. 20, no. 8, pp. 773-783, sept. 2005.
we consider the problem of rate allocation among multiple simultaneous video streams sharing multiple heterogeneous access networks. we develop and evaluate an analytical framework for optimal rate allocation based on the observed available bit rate (abr) and round-trip time (rtt) over each access network and on video distortion-rate (dr) characteristics. the rate allocation is formulated as a convex optimization problem that minimizes the total expected distortion of all video streams. we present a distributed approximation of its solution and compare its performance against h∞-optimal control and two heuristic schemes based on tcp-style additive-increase-multiplicative-decrease (aimd) principles. the various rate allocation schemes are evaluated in simulations of multiple high-definition (hd) video streams sharing multiple access networks. our results demonstrate that, in comparison with heuristic aimd-based schemes, both media-aware allocation and h∞-optimal control benefit from proactive congestion avoidance and reduce the average packet loss rate from 45% to below 2%. improvement in average received video quality ranges between 1.5 and 10.7 db in psnr for various background traffic loads and video playout deadlines. media-aware allocation further exploits its knowledge of the video dr characteristics to achieve a more balanced video quality among all streams.

keywords: distributed rate allocation, multi-homed video streaming, heterogeneous access networks
obtaining sufficiently accurate photometric redshift estimates is of the utmost importance for the current and coming era of large multi-band extragalactic surveys (see e.g. for a recent review). unlike spectroscopic redshift determination, photometric redshift estimation (photo-z) is highly subject to systematic errors and confusion, because the spectral information of a galaxy is limited to the magnitude or flux in a number of wavelength bands. photo-z estimation techniques have traditionally been divided into two main classifications. so-called "template fitting" methods, for example the popular _lephare_ package as described in and , and _bayesian photometric redshift_ (_bpz_) as described in , involve correlating the observed band photometry with model galaxy spectra and redshift, and possibly other model properties. in contrast, so-called "empirical" or "training set" methods, such as artificial neural networks (e.g., _annz_) and boosted decision trees (e.g., _bdt_), develop a mapping from input parameters to redshift with a training set of data in which the actual redshifts are known, then apply the mappings to data for which the redshifts are to be estimated. there are advantages and disadvantages to each class of methods: template fitting methods require assumptions about intrinsic galaxy spectra or their redshift evolution, while empirical methods require the training set to be 'complete' in the sense that it is representative of the target evaluation population in bulk in all characteristics.

in regard to photo-zs, science goals such as using weak lensing for cosmology are most affected by the number of outliers - those objects whose estimated photo-zs are far from the actual redshifts. in general, data sets with bands extending into the infrared (e.g., j, h, and k bands) have more accurate photo-z estimation and fewer outliers. however, most upcoming large surveys, such as the large synoptic survey telescope (lsst), will have optical and near-infrared data only. it is a reasonable hypothesis that galaxy morphology and redshift are correlated in such a way that the addition of morphological information could improve photo-z estimation. reasons include the larger frequency of mergers at higher redshifts and, perhaps more importantly, the general evolutionary trend from spiral to elliptical shapes. the inclusion of morphological parameters in photo-z estimation has also been studied with an artificial neural network determination and with other methods, all using sloan digital sky survey (sdss) data. one of these works finds possible modest improvement with the inclusion of shape information, although the analysis is restricted to quite low redshift (z ≲ 0.7) galaxies; another considers several empirical methods and shows marginal improvement for some methods with the addition of morphological information; a third claims an improvement of between 1 and 3 percent in the rms error in photo-z determination, although it is not noted whether this result is significant, and the method of photo-z estimation is not discussed. we note that sdss galaxy photometric data is a bit unusual in the context of data that will be used to constrain cosmological parameters from surveys such as lsst, in that sdss photometric data has a greater representation of nearby galaxies and thus fewer potential outliers.
in this work, we explore the efficacy of adding parameters describing the morphological information of galaxies, in the context of a neural network estimation technique for photo-zs. for this analysis we desire data with magnitudes in a number of optical bands, spectroscopic redshifts, and enough imaging resolution to determine morphological parameters. we use observations of the extended groth strip from the all-wavelength extended groth strip international survey (aegis) data set, which contains photometric band magnitudes in u, g, r, i, and z bands from the canada-france-hawaii telescope legacy survey (cfhtls), imaging from the advanced camera for surveys on the hubble space telescope (hst/acs), and spectroscopic redshifts from the deep2 survey using the deimos spectrograph on the keck telescope. the limiting i band ab magnitude of the cfhtls survey is 26.5, while that of hst/acs is 28.75 in v (f606w) band, and that of deep2 is 24.1 in r band.

from the hst/acs imaging data in two bands, v (f606w) and i (f814w), we form a set of parameters characterizing the morphological properties of the galaxies as follows (the formulas below follow the standard definitions implied by the accompanying descriptions):

1. the concentration c = 5 \log_{10}(r_{80}/r_{20}): this parameter characterizes the central density of the light distribution, with radii r_{80} and r_{20} containing correspondingly 80% and 20% of the total light.

2. the asymmetry a = \sum |i(x,y) - i_{180}(x,y)| / \sum |i(x,y)| - a_{bkg}: this parameter characterizes the rotational symmetry of the galaxy's light, with i(x,y) being the intensity at point (x,y) and i_{180}(x,y) being the intensity at the point rotated 180 degrees about the center from (x,y), and with a_{bkg} being the average asymmetry of the background calculated in the same way. it is the difference between object images rotated by 180 degrees.

3. the smoothness s = \sum |i(x,y) - i_s(x,y)| / \sum |i(x,y)| - s_{bkg}: the smoothness is used to quantify the presence of small-scale structure in the galaxy. it is calculated by smoothing the image with a boxcar of a given width and then subtracting that from the original image. here i_s(x,y) is the smoothed intensity at (x,y), while s_{bkg} is the average smoothness of the background, calculated in the same way. the residual is a measure of the clumpiness due to features such as compact star clusters. in practice, the smoothing scale length is chosen to be a fraction of the petrosian radius.

4. the gini coefficient g = \frac{1}{\bar f\, n(n-1)} \sum_i (2i - n - 1) f_i describes the uniformity of the light distribution, with g = 0 corresponding to the uniform distribution and g = 1 to the case when all flux is concentrated into one pixel. it is calculated by ordering all pixels by increasing flux f_i; \bar f is the mean flux and n is the total number of pixels.

5. m_{20}: the ratio of the second-order moment of the brightest 20% of the galaxy's flux to the total second moment. this parameter is sensitive to the presence of bright off-center clumps.

6. the ellipticity e = 1 - b/a, where a and b are the semi-major axis and semi-minor axis of the galaxy.

a number of these parameters are discussed in e.g. . the remaining two parameters are two of the fitting parameters of the sérsic profile form \sigma(r) = \sigma_e \exp\{-\kappa[(r/r_e)^{1/n} - 1]\}, where \sigma(r) is the surface brightness at radius r and r_e is defined such that half of the total flux is contained within r_e:

7. the sérsic power-law index n, and

8. r_e, the effective radius of the sérsic profile.

morphological parameters c, a, s, g, and m_{20} are determined for these galaxies in , while e, n, and r_e are determined by using the galfit package.
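as a concrete illustration of two of these statistics, the following toy sketch computes the gini coefficient and the concentration from a list of pixel fluxes. it is a hypothetical stand-in for the measurement pipeline (which in the text is handled by the cited work and by galfit), assuming an idealized, background-subtracted image:

```python
# toy computation of two morphological statistics defined above.
import numpy as np

def gini(fluxes):
    """gini coefficient of the light distribution (lotz-style estimator)."""
    x = np.sort(np.abs(np.asarray(fluxes, dtype=float)))  # order by increasing flux
    n = x.size
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (np.mean(x) * n * (n - 1))

def concentration(radii, fluxes):
    """c = 5 log10(r80 / r20) from per-pixel radii and fluxes."""
    order = np.argsort(radii)
    r = np.asarray(radii, dtype=float)[order]
    cum = np.cumsum(np.asarray(fluxes, dtype=float)[order])
    cum /= cum[-1]                       # normalized curve of growth
    r20 = r[np.searchsorted(cum, 0.2)]   # radius enclosing 20% of the light
    r80 = r[np.searchsorted(cum, 0.8)]   # radius enclosing 80% of the light
    return 5.0 * np.log10(r80 / r20)

# usage on a synthetic circular exponential-profile 'galaxy'
rng = np.random.default_rng(0)
yy, xx = np.mgrid[-30:31, -30:31]
rad = np.hypot(xx, yy).ravel()
flx = np.exp(-rad / 5.0) + 1e-3 * rng.random(rad.size)
print("g =", round(gini(flx), 3), " c =", round(concentration(rad, flx), 3))
```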
for this analysis we require a magnitude in each band, a spectroscopic redshift, and sufficient hst/acs image resolution to construct all eight shape parameters. a total of 2612 galaxies spanning redshifts from 0.01 to 1.57, with a mean redshift of 0.702 and a median of 0.725, and i band magnitudes ranging from 24.43 to 17.62, are in the data set used here. the redshift distribution of this particular set of galaxies arises because of the intentional construction of the portion of the deep2 spectroscopic catalog within the aegis survey to have roughly equal numbers of galaxies below and above z = 0.7; therefore it is not an optimized training set for a generic photometric data evaluation set, although a more optimized training set for any given photometric data evaluation set could be constructed from it. we emphasize that because in this analysis random subsets from the same 2612-galaxy catalog are used for training and evaluation, the representativeness of the training set is not an issue here. template-based photometric redshift estimations for all of the galaxies used in this analysis have been reported in . this estimation also provides a most likely galaxy type among template spectra corresponding to elliptical, sbc, scd, irregular, or starburst.

in order to most efficiently determine the effect of the extra information provided by morphological parameters on the photo-z estimation, we form principal components of the morphological parameters. the morphological principal components are given as linear combinations of the eight morphological parameters discussed in [dataset] by table [tab]. principal components are the result of a coordinate rotation in a multi-dimensional space of possibly correlated data parameters into vectors with maximum orthogonal significance. the first principal component is along the direction of maximum variation in the data space, the second is along the direction of remaining maximum variation orthogonal to the first, the third is along the direction of remaining maximum variation orthogonal to both of the first two, and so on.

table [tab] (coefficients of each morphological parameter in each principal component; the labels of the fifth, sixth, and eighth rows follow the parameter ordering of the preceding section):

        pc1     pc2     pc3     pc4     pc5     pc6     pc7     pc8
 c     -.52    +.13    +.01    -.17    -.03    +.19    +.09    -.80
 a     +.10    -.41    +.62    -.18    -.61    +.15    +.07    -.007
 s     +.06    +.29    +.72    -.18    +.60    -.02    +.04    +.03
 g     -.47    -.07    +.09    -.19    -.09    -.67    -.51    +.12
 m20   +.49    +.05    +.004   +.03    -.09    -.67    +.32    -.44
 e     +.07    +.62    -.14    -.63    -.34    +.03    +.13    -.23
 n     -.50    -.006   +.07    +.21    -.04    -.20    +.74    +.32
 r_e   -.02    +.58    +.24    +.65    -.36    +.01    -.22    -.03

given the available galaxy type estimations discussed at the end of [dataset], we can check for correlations between principal components of the morphological parameters and galaxy type, as in figure [pcfig]. it is seen that the first principal component is well correlated with galaxy type, and correlations persist through several of the other principal components. these correlations indicate that the morphology may provide an additional handle on the photo-z estimation, since outliers often occur because a spectral feature (such as a break) of one galaxy type at a given redshift may be seen by the observer to be at the same wavelengths as a spectral feature of another galaxy type at a different redshift. thus morphological information indicative of galaxy type may help break this degeneracy.
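the principal-component rotation itself is straightforward to reproduce; the following sketch shows the operation with scikit-learn, where the 2612-by-8 data matrix is a random placeholder for the measured catalog (c, a, s, g, m20, e, n, r_e columns):

```python
# hedged sketch of forming principal components of the eight
# morphological parameters; 'morph' is a placeholder catalog.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
morph = rng.normal(size=(2612, 8))        # stand-in for the real measurements

# standardize first: the parameters have very different units and ranges,
# so the rotation should act on dimensionless, variance-one columns.
scaled = StandardScaler().fit_transform(morph)

pca = PCA(n_components=8)
pcs = pca.fit_transform(scaled)           # per-galaxy principal components

# rows of pca.components_ hold the linear-combination coefficients,
# i.e. the analogue of the columns pc1..pc8 of table [tab].
print("explained variance fractions:", np.round(pca.explained_variance_ratio_, 3))
print("pc1 coefficients:", np.round(pca.components_[0], 2))
```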
[fig. [photovsspec]. top: estimated photo-z versus spectroscopic redshift for the evaluation-set galaxies from the neural network determination, with the outlier boundaries shown as the two diagonal lines. bottom: the photo-zs for the same galaxies as in the top plot, as estimated with the lephare template fitting code as reported in . the template fitting method has a lower scatter for non-outliers but a larger number of catastrophic outliers than the custom neural network for these galaxies.]

artificial neural network techniques have been popular empirical methods for photo-z estimation, including with such software packages as annz. in the case of neural network photo-z determination, the network functions as a 'black box' which finds patterns contained in the relation between band magnitudes (and, in principle, other information) and redshift in an unbiased way. thus, a neural network photo-z estimation can be a useful tool to explore whether additional parameters beyond the band magnitudes, such as morphology in this case, provide additional useful information. an artificial neural network, in analogy with a biological one, contains layers of nodes called "neurons" and relationships of varying weights between neurons in different layers, which can be altered. each neuron in the network beyond the input layer assumes a value determined by passing through an activation function the sum over the feeding neurons of each neuron's value times the weight of the connection between the two neurons. neurons at the input accept values of data, and the output of the network is the value of one or more output-layer neurons. the weights between neurons are adjusted by 'training' the network to best give desired outputs for a set of inputs. training is dependent on a _training set_ containing a number of cases with inputs and output(s). in the case of photo-z estimation, the training set contains band magnitudes and possibly other information as inputs, and the actual known (spectroscopic) redshifts as the output. with the weights set in this way, the network can be used to estimate the redshift of other galaxies in an _evaluation set_, and the results can be compared to the known redshifts of the evaluation set to determine the quality of photo-z estimation. a comprehensive discussion of artificial neural networks is presented in , and a specialized discussion for the context of photo-z estimation is presented in e.g. .

the artificial neural network package used in this analysis is a 'multi-layer perceptron' developed for the idl environment by one of the authors (js). perceptrons are standard artificial neural network architectures for pattern recognition, consisting of input, hidden, and output neurons as described above. the primary motivation for the development of this code was to treat additional available galaxy information beyond photometric data (for example shape parameters) on an equal footing with the photometric data. the idl code can be relatively easily modified, and could in principle be configured for a wide variety of input data situations.
as training convergence is relatively slow in this network, it is most useful in situations where a robust training set is available from the outset. as implemented here, the network has an input layer of neurons, five of which accept the observed magnitudes in each optical band, with an additional variable number of input neurons which accept values of as many morphological parameters as desired. the input layer treats all input information on an equal footing, normalizing each input parameter across all objects in the training set so that the inputs for each neuron on the input layer are distributed between 0 and 1. there are two hidden layers of 30 neurons each, and an output layer with a single neuron obtaining a value between 0 and 1 which is a proxy for the estimated redshift, with the linear conversion defined during the training when the known redshifts of the training set are supplied. the network uses a hyperbolic tangent activation function for the neurons beyond the input layer, and the weights are adjusted during training via the back-propagation technique, where in each training iteration the weights are altered in a way to move 'downhill' in the high-dimensional surface of summed training-set redshift errors in the space of weights. each iteration during training consists of the network evaluating the entire training set and adjusting the weights. in addition to standard back propagation, this network features an algorithm to 'kick' the weights away from possible local minima in the summed error.

the top panel of figure [photovsspec] shows the estimated photo-z versus spectroscopic redshift for the galaxies in the evaluation set of a particular determination with no morphological information included. this determination features 350,000 training iterations, 700 galaxies in the training set, and 1912 galaxies in the evaluation set, which is the standard used in all determinations here. the bottom panel of figure [photovsspec] also shows photo-z estimations for the same galaxies with the lephare template fitting method as reported in . the custom neural network determination apparently leads to fewer catastrophic outliers than the lephare template fitting method, although it has a larger scatter for those galaxies in which the photo-z estimate is close to the actual redshift.

to determine the effect of including a given principal component, or multiple principal components, in addition to the band magnitudes, we complete six realizations of the training and evaluation process for every case, with a training set of 700 galaxies, 350,000 training iterations, and an evaluation set of the remaining 1912 galaxies, and record the number of outliers and the rms error in the evaluation set. in this work we follow convention and define outliers in a given realization as those galaxies whose estimated photo-z differs from the actual (spectroscopically determined) redshift by more than the conventional threshold. the rms photo-z error in a realization is given by a standard definition, with the sum running over the galaxies in the evaluation set. note that we do not exclude outliers from the calculation of the rms photo-z error.
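a schematic stand-in for this setup, using scikit-learn in place of the custom idl perceptron, is sketched below: two hidden layers of 30 tanh neurons, inputs scaled to [0, 1], a random 700/1912 split, and the two figures of merit. the magnitude array is a hypothetical placeholder, and the outlier threshold of 0.1(1 + z_spec) is an assumption, since the exact convention used in the text is not recoverable here:

```python
# schematic mlp photo-z training/evaluation pass with assumed metrics.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(2)
n = 2612
features = rng.normal(size=(n, 5))            # stand-in for u, g, r, i, z magnitudes
z_spec = rng.uniform(0.01, 1.57, size=n)      # stand-in spectroscopic redshifts

idx = rng.permutation(n)
train, evaluate = idx[:700], idx[700:]        # 700 training / 1912 evaluation split

scaler = MinMaxScaler().fit(features[train])  # normalize each input to [0, 1]
net = MLPRegressor(hidden_layer_sizes=(30, 30), activation="tanh",
                   max_iter=2000, random_state=0)
net.fit(scaler.transform(features[train]), z_spec[train])
z_phot = net.predict(scaler.transform(features[evaluate]))

dz = z_phot - z_spec[evaluate]
outliers = np.abs(dz) > 0.1 * (1.0 + z_spec[evaluate])   # assumed convention
rms = np.sqrt(np.mean(dz ** 2))                          # outliers not excluded
print("outliers:", outliers.sum(), " rms:", round(rms, 3))
```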
because in each realization the membership of the training set varies, and because the training process contains 'kicks' to knock the weights away from local minima in the summed error (see [method]), each realization for a given input parameter set produces a slightly different number of galaxies in the evaluation set with outlier-level errors, and a slightly different average error. for comparison, the template fitting results reported in give 5% outliers and a larger rms error for this sample. this error is dominated by the catastrophic outliers (figure [photovsspec]), and drops substantially, to below that of the custom neural network method, if outliers are excluded.

[fig. [indpcs]: number of outliers (top) and rms error (bottom) in the photo-z estimation with the inclusion of different individual principal components of the morphological parameters. the uncertainties represent the standard deviation of the values obtained from different realizations, as discussed in [simlumf].]

[fig. [multpcs]: number of outliers (top) and rms error (bottom) in the photo-z estimation with the inclusion of multiple principal components of the morphological parameters, starting with none, adding the first morphological principal component, then the first and the second principal components, and so on. the uncertainties represent the standard deviation of the values obtained from different realizations, as discussed in [simlumf].]

figure [indpcs] shows the number of outliers and rms error for the inclusion of the seven different principal components individually. figure [multpcs] shows the number of outliers and rms error for the inclusion of multiple principal components, starting with none, then adding in the first, then adding in the first and second, then adding in the first through third, and so on. in each figure, the error bars correspond to the standard deviation of the number of outliers or the rms scatter in the different realizations. we note that with six realizations per case, the standard deviations in the number of outliers and rms photo-z error are not particularly robust; however, we include them to provide a sense of the scatter of results from different realizations. we note that the last principal component (pc8) should by definition contain minimal significant variation in the morphological parameters, so we do not include it in the analysis. it is apparent that adding in any one of the principal components of the morphological parameters may provide a small decrease in the average number of outliers or the rms error or both, but the differences are not statistically significant compared to the inclusion of no morphological information. as seen in figure [multpcs], adding multiple principal components increases the number of outliers and the rms error.
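the realization procedure reduces to a short aggregation loop; in the sketch below, `run_once` is a hypothetical wrapper around the training and evaluation step sketched earlier, and the returned numbers are placeholders:

```python
# sketch of the six-realization procedure: repeat the random
# train/evaluate split and report the mean and scatter of the metrics.
import numpy as np

def run_once(seed):
    # placeholder: would train on a random 700-galaxy subset and
    # return (n_outliers, rms) measured on the remaining 1912 galaxies.
    rng = np.random.default_rng(seed)
    return rng.poisson(90), 0.13 + 0.01 * rng.standard_normal()

results = np.array([run_once(s) for s in range(6)])   # six realizations
mean, std = results.mean(axis=0), results.std(axis=0)
print(f"outliers: {mean[0]:.1f} +/- {std[0]:.1f}")
print(f"rms error: {mean[1]:.3f} +/- {std[1]:.3f}")
```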
in order to more effectively include the morphological information, we form principal components of the morphological parameters. an analysis of the principal components and galaxy types shows that the values of the first few principal components and galaxy type are correlated. however, we find that the inclusion of morphological information does not significantly decrease the number of outliers or the rms error in photo-z estimation for this data set with the neural network technique used. when only one principal component of the morphological parameters is included, there can be a slight but not significant decrease. when multiple principal components of the morphological parameters are included, the number of outliers and the rms error increase. we conclude that any gain that may arise in a neural network photo-z determination from correlations between morphology and redshift in this data set is overwhelmed by the additional noise introduced. it may be that any correlations between principal components of the morphological parameters and the galaxy type are degenerate to some extent with the correlations between galaxy type and galaxy colors.

this analysis is applicable to artificial neural network photo-z estimations, and possibly other training set methods, with similar data. it is possible, however, that morphological parameters could yield improvements for other algorithms, especially template-fitting methods with relatively large outlier fractions. this is because such outliers usually occur when a particular spectral break is confused for another break in a different galaxy type at a different redshift (e.g., an elliptical galaxy at low redshift mistaken for a spiral at high redshift). having additional information to guide the template selection might therefore be helpful in reducing the outlier fraction. a preliminary analysis using figure [pcfig] to build prior probability distributions on the template selection in the lephare package produced a few percent reduction in the number of outliers. a more thorough analysis of the effects of shape parameters in photometric redshift estimation with a template fitting method will be presented in a forthcoming work.

js thanks t. brookings for his counsel, and s. kahn and r. schindler for their encouragement and support. ms is thankful to r. blandford and p. marshall for very useful discussions and support. this work was supported in part by the u.s. department of energy under contract number de-ac02-76sf00515.
arnouts, s., cristiani, s., moscardini, l., matarrese, s., lucchin, f., fontana, a., & giallongo, e. 1999, mnras, 310, 540
benítez, n. 2000, apj, 536, 571
collister, a. & lahav, o. 2004, pasp, 116, 345
davis, m., et al. 2007, apj, 660, l1
gerdes, d., et al. 2010, apj, 715, 823
graham, a. & driver, s. 2005, pasa, 22, 118
griffith, r., et al. 2011, in prep.
gwyn, s. 2008, pasp, 120, 212
häusler, b., et al. 2007, apjs, 172, 615
haykin, s., _neural networks: a comprehensive foundation_, upper saddle river, nj: prentice hall, 1999
hearin, a., zentner, a., ma, z., & huterer, d. 2010, submitted (arxiv:1102.3383)
huterer, d., takada, m., bernstein, g., & jain, b. 2006, mnras, 366, 101
ilbert, o., et al. 2006, a&a, 457, 841
ivezic, z., et al. 2008, arxiv:0805.2366
jolliffe, i., _principal component analysis_, springer series in statistics, 2nd ed., new york, ny: springer, 2002
koekemoer, a., et al. 2007, apjs, 172, 196
lotz, j., et al. 2008, apj, 672, 177
peng, c., ho, l., impey, c., & rix, h. 2002, aj, 124, 266
scarlata, c., et al. 2007, apjs, 172, 406
tagliaferri, r., et al. 2003, lect. notes comp. sci., 2859, 226
vanzella, e., et al. 2004, a&a, 423, 761
vince, o. & csabai, i. 2006, 'toward more precise photometric redshift estimation', proceedings of the international astronomical union, 2, pp. 573-574
way, m. & srivastava, a. 2006, apj, 647, 102
we present a determination of the effects of including galaxy morphological parameters in photometric redshift estimation with an artificial neural network method. neural networks, which recognize patterns in the information content of data in an unbiased way, can be a useful estimator of the additional information contained in extra parameters, such as those describing morphology, if the input data are treated on an equal footing. we use imaging and five-band photometric magnitudes from the all-wavelength extended groth strip international survey. it is shown that certain principal components of the morphological information are correlated with galaxy type. however, we find that for the data used the inclusion of morphological information does not have a statistically significant benefit for photometric redshift estimation with the techniques employed here. the inclusion of these parameters may result in a trade-off between extra information and additional noise, with the additional noise becoming more dominant as more parameters are added.
continuously time-evolving dynamical systems are one of the basic theoretical tools for modeling the evolution of natural phenomena in every branch of physics, chemistry, or biology. their usefulness in scientific and engineering applications is determined by their predictive power, which, in turn, strongly depends on the stability of their solutions. since some uncertainty inevitably exists in the measured initial conditions of a physical system, a physically meaningful mathematical model must offer an understanding of the possible evolution of the deviations of the trajectories of the studied dynamical system from a given reference trajectory. note that a local understanding of the stability is as important as the global evolution and control of late-time deviations. from a mathematical point of view, the global stability of the solutions of dynamical systems is described by the well-studied theory of lyapunov stability. in this approach the fundamental quantities are the lyapunov exponents, measuring exponential deviations from the given trajectory. it is usually very difficult to determine the lyapunov exponents analytically, and therefore various numerical methods for their calculation have been proposed, and are used in various situations. on the other hand, the local stability of solutions of dynamical systems is much less understood. even though the methods of lyapunov stability analysis are well established, it would be interesting to study the stability of dynamical systems from different points of view, and to compare the results with the corresponding lyapunov exponent analysis. such an alternative approach to the study of dynamical systems is represented by the so-called geometrodynamical approach, which was initiated in the pioneering work of kosambi, cartan and chern. the kosambi-cartan-chern (kcc) approach is inspired by the geometry of finsler spaces. its basic idea is to consider that there is a one-to-one correspondence between a second-order dynamical system and the geodesic equations in an associated finsler space (for a recent review of the kcc theory see ). the kcc theory is a differential geometric theory of the variational equations for the deviations of the whole trajectory to nearby ones. in this geometrical description of dynamical systems one associates a non-linear connection and a berwald-type connection to the differential system, and five geometrical invariants are obtained. the second invariant, also called the curvature deviation tensor, gives the jacobi stability of the system. the kcc theory has been applied to the study of different physical, biochemical, and technical systems (see ). an alternative geometrization method for dynamical systems was proposed in and , and further investigated in - . specific applications to the hénon-heiles system and to bianchi type ix cosmological models were also considered. in particular, in a theoretical framework devoted to a geometrical description of the behavior of dynamical systems and their chaotic properties was developed. in the riemannian geometric approach to dynamical systems one starts with the well-known result that the flow associated with a time-dependent hamiltonian can be reformulated as a geodesic flow in a curved, but conformally flat, manifold.
by introducing a conformally flat metric whose conformal factor is determined by the conserved energy associated with the time-independent hamiltonian, it follows that the geodesic equation for motion in this metric is completely equivalent to the hamilton equations of the original system. the confluence or divergence of nearby trajectories can then be analyzed through the geodesic deviation (jacobi) equation of the associated metric. on the other hand, in the jacobi stability analysis, the eigenvalues of the deviation curvature tensor at the origin can be obtained in closed form, and one of these eigenvalues recovers some of the information on the stability of the system at the origin; that is, the origin is a center or a spiral when the corresponding discriminant condition holds.

in the present paper we have considered the stability analysis of the lorenz system from the point of view of the kcc theory, in which the dynamical stability properties of dynamical systems are inferred from the study of the geometric properties of the finsler-space geodesic equations equivalent to the given system. by transforming the lorenz system to an equivalent system of two second-order differential equations, the dynamical system can be interpreted as representing the geodesic motion of a 'particle' in an associated finsler space. this geometrization of the lorenz system opens the possibility of applying the standard methods of differential geometry to the study of its properties. we have obtained, and analyzed in detail, the main geometrical objects that can be associated to the lorenz system, namely, the non-linear connection, the berwald connection, and the first, second and third kcc invariants. the main result of the present paper is the jacobi stability condition of the equilibrium points of the lorenz system, showing that the origin is always jacobi unstable, while the jacobi stability of the other two equilibrium points depends on the values of the parameters of the system. by considering the standard values of the parameters in the lorenz system, \sigma = 10, b = 8/3, and r = 28, it turns out that for this choice all the equilibrium points are jacobi unstable. from the point of view of linear stability analysis, for r < 1 the zero fixed point is globally stable; from a physical point of view this refers to the non-convective state. for r > 1, with r not too large, the state with roll convection, corresponding to the two non-zero equilibrium points, is stable. from a physical point of view this means that the phase space consists of two regions, which are separated by the stable manifold of the zero fixed point; the trajectories which start in one of the two regions are attracted by the corresponding nonzero fixed point.

we have also considered in detail the behavior of the deviation vector near the equilibrium points. in order to describe the behavior of the trajectories near the equilibrium points we have introduced the instability exponent, as well as the curvature of the deviation vector trajectories. the curvature of the curve can be related directly to the chaotic behavior of the trajectories via the moment of its transition from positive to negative values: an early transition indicates the presence of chaotic states. therefore we suggest the use of the curvature of the deviation vector as an indicator of the onset of chaos in non-linear dynamical systems. in it was suggested that the torsion tensor geometrically expresses the chaotic behavior of dynamical systems, i.e., a trajectory of a dynamical system with a non-vanishing torsion tensor is not closed.
by using the same definition as in , it turns out that there is indeed a non-zero torsion tensor component for the lorenz system. therefore the existence of chaos in the lorenz system is intimately related to this non-vanishing torsion component, which is needed for the chaotic behavior of the system.
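the linear-stability statements above are easy to verify numerically. the following sketch (an illustration, not the paper's kcc computation) locates the equilibria for the standard parameter values and integrates a small deviation vector along the flow via the variational equations:

```python
# equilibria, jacobian eigenvalues, and deviation-vector growth
# for the lorenz system at the standard parameter values.
import numpy as np
from scipy.integrate import solve_ivp

sigma, b, r = 10.0, 8.0 / 3.0, 28.0

def jac(x, y, z):
    """jacobian of the lorenz vector field at (x, y, z)."""
    return np.array([[-sigma, sigma, 0.0],
                     [r - z, -1.0, -x],
                     [y, x, -b]])

# equilibria: the origin and, for r > 1, the two convective states
eq = [np.zeros(3)]
if r > 1:
    q = np.sqrt(b * (r - 1.0))
    eq += [np.array([q, q, r - 1.0]), np.array([-q, -q, r - 1.0])]
for p in eq:
    print(p, "eigenvalues:", np.round(np.linalg.eigvals(jac(*p)), 3))

def rhs(t, s):
    x, y, z = s[:3]
    d = s[3:]
    f = [sigma * (y - x), r * x - y - x * z, x * y - b * z]
    return np.concatenate([f, jac(x, y, z) @ d])   # deviation obeys d' = J d

s0 = np.concatenate([[1.0, 1.0, 1.0], 1e-8 * np.ones(3)])
sol = solve_ivp(rhs, (0.0, 20.0), s0, rtol=1e-9, atol=1e-12)
growth = np.log(np.linalg.norm(sol.y[3:, -1]) / np.linalg.norm(s0[3:])) / 20.0
print("mean exponential growth rate of the deviation:", round(growth, 2))
```

for r = 28 all three equilibria have at least one eigenvalue with positive real part, and the deviation vector grows at a rate close to the largest lyapunov exponent of the attractor, consistent with the instability discussed above.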
we perform the study of the stability of the lorenz system by using the jacobi stability analysis, or the kosambi-cartan-chern (kcc) theory. the lorenz model plays an important role for understanding hydrodynamic instabilities and the nature of turbulence, also representing a non-trivial testing object for studying non-linear effects. the kcc theory represents a powerful mathematical method for the analysis of dynamical systems. in this approach we describe the evolution of the lorenz system in geometric terms, by considering it as a geodesic in a finsler space. by associating a non-linear connection and a berwald-type connection, five geometrical invariants are obtained, with the second invariant giving the jacobi stability of the system. the jacobi (in)stability is a natural generalization of the (in)stability of the geodesic flow on a differentiable manifold endowed with a metric (riemannian or finslerian) to the non-metric setting. in order to apply the kcc theory we reformulate the lorenz system as a set of two second-order non-linear differential equations. the geometric invariants associated to this system (the nonlinear and berwald connections) and the deviation curvature tensor, as well as its eigenvalues, are explicitly obtained. the jacobi stability of the equilibrium points of the lorenz system is studied, and the condition for the stability of the equilibrium points is obtained. finally, we consider the time evolution of the components of the deviation vector near the equilibrium points.
in recent years, a great number of cryptosystems based on chaos have been proposed, most of them fundamentally flawed by a lack of robustness and security. in , a secure communication scheme based on the phase synchronization of chaotic systems is proposed. in this scheme the plaintext binary message is hidden in the instantaneous phase of the drive subsystem, which is used as the transmitting signal to drive the response subsystem. at the response subsystem, the phase difference is detected, and its strong fluctuation above or below zero recovers the plaintext at a certain coupling strength. the secure communication process is illustrated by means of an example based on coupled rössler chaotic oscillators. in the example, the drive subsystem is formed by two weakly coupled rössler oscillators, and the plaintext is used to modulate the same parameter in both oscillators 1 and 2; the response subsystem is a third rössler oscillator driven by the drive subsystem.

[fig. [fig:legal]: (a) plaintext; (b) ciphertext; (c) reconstructed phase signal of the response subsystem; (e) difference between the ciphertext and the reconstructed signal; (f) reconstructed plaintext.]

in the example, the natural frequencies of the drive oscillators 1 and 2, the natural frequency of the driven oscillator 3, the weak coupling factor between oscillators 1 and 2, and the strong coupling factor in the driven oscillator 3 are held as constants. the parameter mismatch is modulated by the plaintext, taking one fixed value if the bit to be transmitted is "1" and another if the bit to be transmitted is "0".

the ciphertext consists of the phase of the mean field of the drive oscillators. as the phase is a signal with unbounded amplitude, it cannot be transmitted through physical channels. this problem is overcome by coding the signal into a bounded interval, which corresponds to the poincaré surface of the attractor. as a consequence, the transmitted ciphertext is a sawtooth-like signal with a period equal to the revolution period of the oscillator. at the receiving end the phase of the response subsystem is computed and coded in the same way. the plaintext is retrieved by calculating the difference between the ciphertext and the reconstructed signal; the difference signal consists of positive and negative peaks that correspond to the ones and zeros of the plaintext. the example of is illustrated in fig. [fig:legal]. we have simulated it with a fourth-order runge-kutta integration algorithm in matlab 6, with a small fixed step size. in order to recover the plaintext with the exact waveform, allowing for a small time delay, we have included a schmitt trigger as a reconstruction filter, with switch-on point at 4 and switch-off point at -4.

[fig. [fig:alfa]: time history of the ciphertext as a function of the modulated parameter: (a) the phase increases almost linearly; (b) the phase increases monotonically with chaotic behavior; (c) the phase increases and decreases irregularly.]
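as a rough illustration of this kind of simulation (not the scheme's exact equations, which do not appear above), the following sketch integrates two weakly coupled rössler oscillators with a hand-rolled fourth-order runge-kutta scheme and checks that their phases lock; the coupled form and every numerical constant are standard assumptions from the phase-synchronization literature:

```python
# phase synchronization of two diffusively coupled roessler oscillators,
# integrated with rk4; all parameter values are assumed standard choices.
import numpy as np

a, f, c = 0.15, 0.2, 10.0          # standard roessler parameters (assumed)
w = np.array([0.99, 1.01])         # slightly detuned natural frequencies
eps = 0.05                         # weak mutual coupling (assumed)

def deriv(s):
    x, y, z = s.reshape(3, 2)      # state layout: [x1, x2, y1, y2, z1, z2]
    dx = -w * y - z + eps * (x[::-1] - x)   # diffusive coupling in x
    dy = w * x + a * y
    dz = f + z * (x - c)
    return np.concatenate([dx, dy, dz])

def rk4(s, h):
    k1 = deriv(s)
    k2 = deriv(s + 0.5 * h * k1)
    k3 = deriv(s + 0.5 * h * k2)
    k4 = deriv(s + h * k3)
    return s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

h, steps = 0.01, 200_000
s = np.array([1.0, 1.1, 0.0, 0.1, 0.0, 0.0])
phase = np.empty((steps, 2))
for i in range(steps):
    s = rk4(s, h)
    x, y, _ = s.reshape(3, 2)
    phase[i] = np.arctan2(y, x)    # instantaneous phase of each oscillator

# once the pair phase-locks, the unwrapped phase difference stays bounded
dphi = np.unwrap(phase[:, 0]) - np.unwrap(phase[:, 1])
print("late-time phase-difference span:", round(np.ptp(dphi[-50_000:]), 3))
```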
as the example of gives no indication about the parameter initial values, our simulation is implemented with arbitrarily chosen initial values. the authors seem to base the security of their communication system on the properties of phase synchronization. they claim that it cannot be broken by some traditional attacks used against secure chaotic systems with complete synchronization, but no general analysis of security is included. although the authors point out that the system parameters play the role of secret key in transmission, it is not clearly specified which parameters are considered as candidates to form part of the key, what the allowable value range of those parameters is, what the key space is (how many different keys exist in the system), and how they would be managed. the weaknesses of this system and the method to break it are discussed in the next section.

the main problem with this cryptosystem lies in the fact that the ciphertext is an analog signal whose waveform depends on the system parameter values. likewise, the difference between the ciphertext and the phase signal of a non-synchronized receiver depends on these same parameters. the study of these signals provides the necessary information to recover a good estimation of the system parameter values and the correct plaintext, as will be seen next.

let us assume that the key consists of the oscillator parameters, as they are the only unknowns in the example of . moreover, the parameters that were constants in the example cannot be part of the key because, according to our experiments, the synchronization of the rössler oscillator is indifferent to a mismatch of the value of these parameters in a range greater than 1 to 1000. the search space of the modulated parameter may be restricted to the unique value range suitable for operation, characterized by the mild chaotic region of the rössler oscillator, in which its phase increases monotonically with time while showing a chaotic increase rate that allows hiding the binary information. the operation of the system with lower values of this parameter should be avoided because the waveform of the oscillator is quite uniform and its phase increases almost linearly with time; therefore, the instantaneous phase fluctuations due to the binary information modulation cannot be effectively hidden, and the information could easily be retrieved from the signal. higher values should also be avoided because the rössler oscillator then operates in the wild chaotic region, in which the phase does not increase monotonically with time, showing erratic increases and decreases, rendering the synchronization of the authorized receiver impossible and thus preventing correct data retrieval.

[fig. [fig:omega]: (a) ciphertext signal; (b) phase signal of the free-running intruder receiver, first frequency trial; (c) output of the phase comparator, first trial; (d) phase signal of the free-running intruder receiver, second trial; (e) output of the phase comparator, second trial; (f) phase signal of the free-running intruder receiver, third trial; (g) output of the phase comparator, third trial.]
the behavior of the attractor with respect to the modulated parameter is illustrated in fig. [fig:alfa], in which the time history of the ciphertext signal for three values of the parameter is shown. the first sample shows that the phase increases almost linearly; the second shows that the phase increases monotonically with chaotic behavior; the last shows that the phase increases and decreases irregularly. the sensitivity to the parameter values is so low that the original plaintext can be recovered from the ciphertext using an intruder receiver system with parameter values considerably different from the ones used by the transmitter. we have found that the plaintext can be recovered even when the parameter estimate has an appreciable absolute error. as a consequence, it is sufficient to try four suitably spaced values of the parameter to cover its full usable range.

in fig. [fig:espectro] we show the power spectral analysis of the ciphertext signal. as can be observed, the frequency of the rössler oscillator is totally evident: the spectrum's highest peak appears close to the frequency parameter value of the drive subsystem. thus, by simply examining the ciphertext, the second key element is guessed with reasonable accuracy. once this approximate value of the frequency is measured, we can use it to recover the plaintext in the following way.

[fig. [fig:recupera]: (a) original plaintext; (b) output of the phase comparator, which is the same in three cases; (c) output of the phase comparator; (d) recovered plaintext.]

first, we introduce the estimated value of the frequency into an intruder receiver without coupling, so that the intruder receiver oscillator runs freely. to check whether the estimation is good, we look at the output of the phase comparator as well as at the ciphertext signal and at the phase signal of the receiver. when the frequencies of transmitter and intruder receiver are slightly different, the comparator output looks like a train of pulses of increasing width summed with a direct current of increasing level, the final width and the rate of increase of the direct-current level being proportional to the difference of frequencies. also, the mismatch of the periods of the two phase signals is perceptible. with this information we can adjust the value of the frequency in a few steps, until the width of the pulses tends to zero; then the period mismatch of the phase signals is unnoticeable and the direct-current level equals zero. the procedure is illustrated in fig. [fig:omega]: we begin with the value estimated from the spectrum and see that the correct value must be slightly lower; we then try a lower value and see that we are near the exact value but still a little high; a third trial gives a quite good frequency match, so we retain this last value as the definitive one and go to the next step.

finally, we set the retained frequency at the intruder receiver and look at the retrieved data for each of the four possible values of the modulated parameter. in fig. [fig:recupera] the retrieved binary data obtained with two of these trial values are presented. it can be seen that for one of them only zero-valued data are obtained, while for the other some output data are present; thus we may assume that the latter value is the appropriate one to retrieve the plaintext, and that the data obtained with it consist of the correct recovered plaintext, as can be verified from the figure.

[fig. [fig:enganche]: region of parameter-value pairs that achieve correct plaintext recovery of a ciphertext generated with the transmitter's true parameter values.]
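the spectral step of the attack is simple enough to sketch directly. in the snippet below, the "ciphertext" is a synthetic stand-in (any phase-like signal wrapped to a bounded interval, with a dominant revolution period, behaves similarly), and the drive frequency is a hypothetical value:

```python
# estimate the oscillator's natural frequency from the sawtooth-like
# transmitted phase signal via its power spectrum.
import numpy as np

h = 0.01
t = np.arange(0.0, 2000.0, h)
w_true = 1.003                      # hypothetical drive frequency
# stand-in ciphertext: phase of a slowly modulated oscillator wrapped
# to (-pi, pi], which yields a sawtooth with the revolution period
phase = np.angle(np.exp(1j * (w_true * t + 0.3 * np.sin(0.05 * t))))

spec = np.abs(np.fft.rfft(phase - phase.mean())) ** 2
freqs = 2.0 * np.pi * np.fft.rfftfreq(t.size, d=h)   # angular frequency axis
w_est = freqs[np.argmax(spec)]
print("estimated frequency:", round(w_est, 4), " true:", w_true)
```

because the sawtooth's power is concentrated at its fundamental, the highest spectral peak lands essentially on the oscillator's revolution frequency, which is exactly what makes the frequency parameter a weak key element.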
although the estimated pair of parameter values may be far from the right ones, the plaintext is correctly recovered as a consequence of the system's low sensitivity to parameters. moreover, we have observed that many other combinations of parameter values allow for the recovery of the correct plaintext as well. in fig. [fig:enganche] we show, after many simulations, the region of parameter values in which correct plaintext recovery of a ciphertext generated with the drive subsystem is achieved.

the proposed cryptosystem is rather weak, since it can be broken by measuring the power spectrum of the ciphertext signal and trying a small set of parameter values. there is no detailed description of what the key is, nor of what the key space is, a fundamental aspect of every secure communication system. the lack of security discourages the use of this algorithm for secure applications.

g. álvarez, f. montoya, m. romera, and g. pastor. chaotic cryptosystems. in larry d. sanson, editor, _33rd annual 1999 international carnahan conference on security technology_, pages 332-338. ieee, 1999.

t. beth, d. e. lazic, and a. mathias. cryptanalysis of cryptosystems based on remote chaos replication. in yvo g. desmedt, editor, _advances in cryptology - crypto '94_, volume 839 of _lecture notes in computer science_, pages 318-331. springer-verlag, 1994.
a security analysis of a recently proposed secure communication scheme based on the phase synchronization of chaotic systems is presented. it is shown that the system parameters directly determine the ciphertext waveform; hence the scheme can be readily broken by parameter estimation from the ciphertext signal. most secure chaotic communication systems are based on complete synchronization (cs), whereas the new cryptosystem considered here is based on phase synchronization (ps). this scheme hides binary messages in the instantaneous phase of the drive subsystem used as the transmitting signal to drive the response subsystem. although it is claimed to be secure against some traditional attacks in the chaotic cryptosystems literature, including the parameter estimation attack, we show that it is breakable by this attack. as a conclusion, the system is not secure and should not be used for communications where security is a strict requirement.
recently , spatially homogeneous imperfect fluid models have been investigated using techniques from dynamical systems theory . in these papers , dimensionless variables and a set of dimensionless equations of statewere employed to analyze various spatially homogeneous imperfect fluid cosmological models .it was also assumed that the fluid is moving orthogonal to the homogeneous spatial hypersurfaces ; that is , the fluid 4-velocity , , is equal to the unit normal of the spatial hypersurfaces .the energy - momentum tensor can be decomposed with respect to according to : where and is the energy density , is the thermodynamic pressure , is the bulk viscous pressure , is the anisotropic stress , and is the heat conduction vector as measured by an observer moving with the fluid . in papers it was also assumed that there is a linear relationship between the bulk viscous pressure and the expansion ( of the model ) , and a linear relationship between the anisotropic stress and the shear ; that is , [ linear rel ] where denotes the bulk viscosity coefficient and denotes the shear viscosity coefficient. equations ( [ eckart ] ) and ( [ viscosity approx ] ) describe eckart s theory of irreversible thermodynamics .eckart s theory is a first order approximation of the viscous pressure and the anisotropic stress and is assumed to be valid near equilibrium .coley and van den hoogen and coley and dunn have studied the bianchi type v cosmological models using equations ( 1.2 ) as an approximation for the bulk viscous pressure and the anisotropic stress .they found that if the models satisfied the weak energy conditions , then the models necessarily isotropize to the future .belinskii and khalatnikov , with different assumptions on the equations of state , found similar behaviour present in the bianchi type i models .the addition of viscosity allowed for a variety of different qualitative behaviours ( different from that of the corresponding perfect fluid models ) .however , since the models studied in satisfy eckart s theory of irreversible thermodynamics , they suffer from the property that signals in the fluid can propagate faster than the speed of light ( i.e. , non - causality ) , and also that the equilibrium states in this theory are unstable ( see hiscock and salmonson and references therein ) .therefore , a more complete theory of irreversible thermodynamics is necessary for fully analyzing cosmological models with viscosity . among the first to study irreversible thermodynamicswere israel and israel and stewart .they included additional linear terms in the relational equations ( 1.2 ) . assuming that the universe can be modelled as a simple fluid and omitting certain divergence terms , the linear relational equations for the bulk viscous pressure , the heat conduction vector , and the shear viscous stress are : [israel ] where is the projection tensor and .the variable represents the temperature , represents the thermal conductivity , , and are proportional to the relaxation times , is a coupling parameter between the heat conduction and the bulk viscous pressure and is a coupling parameter between the shear viscous stress and the heat conduction .we shall refer to equations ( 1.3 ) as the truncated israel - stewart equations .these equations , ( 1.3 ) , reduce to the eckart equations ( 1.2 ) used in when .belinskii et al . were the first to study cosmological models satisfying the truncated israel - stewart theory of irreversible thermodynamics . 
using qualitative analysis, bianchi i models were investigated with relational equations for the bulk viscous pressure and the shear viscous stress of the form (1.3). they also assumed equations of state in which the viscosity coefficients and relaxation times are power-law functions of the energy density, with constant coefficients and exponents. the isotropizing effect found in the eckart models no longer necessarily occurred in the truncated israel-stewart models. it was also found that the cosmological singularity still exists but is of a new type, namely one with an accumulated 'visco-elastic' energy. similar to the work done by belinskii et al., pavón et al. and chimento and jakubi studied the flat friedmann-robertson-walker (frw) models. they assumed the same equations of state as belinskii et al., but studied the models using slightly different techniques. chimento and jakubi also found exact solutions in an exceptional case (which will be of interest later). they found that the future qualitative behaviour of the model was independent of the value of this parameter; however, to the past, 'bouncing solutions' and deflationary evolutions are possible. in addition, hiscock and salmonson investigated further generalizations of the truncated israel-stewart theory of irreversible thermodynamics, namely, they included the non-linear divergence terms in the relational equations (1.3). we shall refer to such a theory as the 'israel-stewart-hiscock' theory. hiscock and salmonson used equations of state arising from the assumption that the fluid can be modelled as a boltzmann gas. they concluded that when the eckart equations (1.2) or the truncated israel-stewart equations (1.3) were used, inflation could occur, but if the non-linear terms of the israel-stewart-hiscock theory were included, inflation was no longer present. this result led them to conclude that 'inflation is a spurious effect produced from using a truncated theory'. however, zakari and jou also employed equations arising from the full israel-stewart-hiscock theory of irreversible thermodynamics, but assumed different equations of state, and found that inflation was present in all three theories (eckart, truncated israel-stewart, israel-stewart-hiscock). therefore, it appears that the equations of state chosen determine whether the model will experience bulk-viscous inflation. romano and pavón also analyzed bianchi iii models using both the truncated israel-stewart theory and the israel-stewart-hiscock theory. they only analyzed the isotropic singular points, but concluded that the qualitative behaviour of the models in the two different theories is similar, in that the anisotropy of the models dies away and the de sitter model is a stable attractor. in this work we use equations of state that are dimensionless, and hence we are generalizing the work in , in which viscous fluid cosmological models satisfying equations (1.2) were studied. one reason for using dimensionless equations of state is that the equilibrium points of the system of differential equations describing spatially homogeneous models will represent self-similar cosmological models. in addition, it could be argued that the use of dimensionless equations of state is natural in the sense that the corresponding physics is scale invariant (see also arguments in coley). the intent of this work is to build upon the foundation laid by belinskii et al., pavón et al.
and chimento and jakubi, and to investigate viscous fluid cosmological models satisfying the linear relational equations (1.3). we will use dimensionless variables and dimensionless equations of state to study the qualitative properties of isotropic and spatially homogeneous cosmological models. in particular, we shall study this new 'visco-elastic' singularity, and we shall determine whether bulk-viscous inflation is possible. we will also determine whether there is a qualitative difference between these models and the models studied by burd and coley, where the eckart equation was assumed. in section [ii] we define the models and establish the resulting dynamical system. in section [iii] we investigate the qualitative behaviour of the system for different values of the physical parameters. in section [iv] we discuss and interpret our results, and in section [v] we end with our conclusions. for simplicity we have chosen geometrized units.

in this paper we assume that the spacetime is spatially homogeneous and isotropic and that the fluid is moving orthogonal to the spatial hypersurfaces. the energy-momentum tensor considered in this work is that of an imperfect fluid with a non-zero bulk viscosity (that is, there is no heat conduction, q_a = 0, and no anisotropic stress, \pi_{ab} = 0). the einstein field equations and the energy conservation equation can then be written in terms of the energy density, the expansion, and the curvature of the spatial hypersurfaces (see burd and coley). if the curvature is negative, then the frw model is open; if the curvature vanishes, then the model is flat; and if the curvature is positive, then the frw model is closed. assuming that the energy density is non-negative, it is easily seen that in the open and flat frw models the expansion is always non-negative, but in the closed frw models the expansion may become negative. (great care must be taken in this case because the dimensionless quantities that we will be using become ill-defined when the expansion vanishes.) we can obtain an evolution equation for the expansion by solving the friedmann constraint for the curvature term; the resulting equations constitute a dynamical system of the standard autonomous form. this system of equations is invariant under a scaling map (see coley and van den hoogen), and this invariance implies that there exists a symmetry in the dynamical system. therefore, we introduce new dimensionless, expansion-normalized variables and a new time variable, and consequently the raychaudhuri equation effectively decouples from the system. in order to complete the system of equations we need to specify equations of state for the pressure, the bulk viscosity coefficient, and the relaxation time. in principle, equations of state can be derived from kinetic theory, but in practice one must specify phenomenological equations of state which may or may not have any physical foundations. following coley, we introduce dimensionless equations of state in which the viscous quantities are given by power laws in the dimensionless density parameter, with positive constant coefficients and constant exponents. in the models under consideration the density parameter is positive in the open and flat frw models; thus equations (2.6) are well defined. in the closed frw model the expansion can become zero, in which case these equations of state break down. however, we can utilize these equations to model the asymptotic behaviour at early times, i.e., when the expansion diverges.
, when . the most commonly used equation of state for the pressure is the barotropic equation of state , whence and ( where is necessary for local mechanical stability and for the speed of sound in the fluid to be no greater than the speed of light ) . we define a new constant . using equations ( 2.5 ) and ( 2.6 ) , we find that equations and reduce to [ sys of equations ]

$$\frac{dy}{d\omega} = -y\left[\,2+y+(3\gamma-2)x\,\right] + b\,x^{r_1-m}\,y + 9a\,x^{r_1}$$

( the companion equation for $dx/d\omega$ did not survive transcription ) . also , from the friedmann equation , , we obtain . thus , the line divides the phase space into three invariant sets , , , and . if , then the model is necessarily a flat frw model ; if , then the model is necessarily an open frw model ; and if , the model is necessarily a closed frw model . the equilibrium points of the above system all represent self - similar cosmological models , except in the case . if , the behaviour of the equations of state , equation ( 2.6 ) , at the equilibrium points is independent of the parameters and ; namely , the behaviour is . therefore , natural choices for and are , respectively , . we note that in the exceptional case there is a singular point which represents a de sitter solution and is not self - similar . ( this is also the case in the eckart theory , as was analyzed by coley and van den hoogen . ) to further motivate the choice of the parameter , we consider the velocity of a viscous pulse in the fluid , where corresponds to the speed of light . using and equations ( 2.6 ) , we obtain . now , if , then not only do we obtain the correct asymptotic behaviour of the equation of state for the quantity but we are also allowed to choose , since then the velocity of a viscous pulse is less than the velocity of light for any value of the density parameter . thus in the remainder of this analysis we shall choose . in order for the system of differential equations ( 2.7 ) to remain continuous everywhere , we also assume . we now study the specific case when . in this case there are three singular points , where the point has eigenvalues . this point is either a saddle or a source depending on the value of the parameter ; if , then the point is a saddle point , and if , then the point is a source . if ( the bifurcation value ) , then the point is degenerate ( discussed later ) . the point has eigenvalues . if , then the point is a saddle point , and if , then the point is a source . if ( the bifurcation value ) , then the point is degenerate ( discussed later ) . the singular point has eigenvalues . this singular point is a sink for ( see also for details ) . in addition to the invariant set , there exist two other invariant sets . these are straight lines , , where the invariant line passes through the singular points and , while the other line passes through the singular points and . these invariant sets represent the eigendirections at each of the singular points [ see also [ appendix 1 ] ] . in order to sketch a complete phase portrait , we also need to calculate the vertical isoclines , which occur whenever . from ( 2.7 ) we can see that this occurs either when or when . this straight line passes through the origin , and through the singular point if . if , then the vertical isocline has a negative slope which is greater than the slope of the invariant line [ i.e. , ] , and when , the vertical isocline has a negative slope which is less than the slope of the invariant line [ i.e. , ] .
to complete the analysis of this model we need to analyze the points at infinity . we do this by first converting to polar coordinates and then compactifying the radial coordinate . we change to polar coordinates via and we derive evolution equations for and . we essentially compactify the phase space by changing our radial coordinate and our time as follows : . that is , the plane is mapped to the interior of the unit circle , with the boundary of this circle representing the points at infinity of . we have ( for and general )

$$\frac{d\bar r}{d\tau} = \biggl\{\,\cdots\, - {\bar r}^2\left[(3\gamma-2)\cos\theta+\sin\theta\right] + {\bar r}^{2-m}(1-\bar r)^{m}\left[\,b\sin^2\theta\cos^{1-m}\theta\,\right]\biggr\},$$

$$\frac{d\theta}{d\tau} = (1-\bar r)\biggl\{9a\cos^2\theta - \sin^2\theta - 3\gamma\cos\theta\sin\theta + b\,{\bar r}^{1-m}(1-\bar r)^{m-1}\sin\theta\cos^{2-m}\theta\biggr\}$$

( the leading terms in the braces of the first equation did not survive transcription ) . we easily conclude that if ( or any ) , then the entire circle , , is singular . therefore , we have a non - isolated set of singular points at infinity . to determine their stability we look at the sign of as . in this case we see , which implies that points above the line are repellors , while those points which lie below the line are attractors . for completeness , we would also like to determine the qualitative behaviour of the system at the bifurcation value , where the singular points are and the line of singular points . ( note that since , . ) fortunately , we are able to integrate the equations completely in this case to find , where is an integration constant . we see that all trajectories are straight lines that pass through the point . it is straightforward to see that the line of singular points consists of repellors , while the point is an attractor . we are now able to sketch complete phase portraits ( see figures [ figure 1 ] , [ figure 2 ] and [ figure 3 ] ) . this is a case of particular interest since it represents the asymptotic behaviour of the frw models for any and ( since at the singular points the viscosity coefficient behaves like and the relaxation time like ) . note that the physical phase space is defined for , but the system is not differentiable at . in this case there are four singular points , where and and are given by equation . the dynamical system ( 2.7 ) is not differentiable at the singular point . we can circumvent this problem by changing variables to and a new time variable defined by . the system then becomes

$$\frac{dy}{d\tau} = u\left[\,9au^2 - 2y - y^2 - (3\gamma-2)u^2 + byu\,\right]$$

( the companion equation did not survive transcription ) . in terms of the new variables the system is differentiable at the point , but one of the eigenvalues is zero and hence the point is not hyperbolic . therefore , in order to determine the stability of the point , we change to polar coordinates and find that the point has some saddle - like properties ; however , the true determination of the stability is difficult . [ we investigate the nature of this singular point numerically ; see [ appendix 2 ] . ]
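the radial compactification used above is easy to reproduce symbolically . the sketch below applies the substitution $\bar r = r/(1+r)$ , which maps the plane to the unit disk , to a hypothetical quadratic vector field ; the full right - hand sides of ( 2.7 ) did not survive transcription , so only the mechanics of the transformation are shown .

```python
# illustrative sketch: radial compactification of a planar vector field.
# (fx, fy) below is a hypothetical quadratic stand-in, not the system
# (2.7) of the paper, whose full right-hand sides were lost; only the
# mechanics of the coordinate change are demonstrated.
import sympy as sp

x, y, th = sp.symbols('x y theta')
r, rbar = sp.symbols('r rbar', positive=True)

fx = x*(1 - x) + y**2          # stand-in for dx/d(omega)
fy = -y*(2 + y) + 3*x*y        # stand-in for dy/d(omega)

# polar coordinates: x = r cos(theta), y = r sin(theta)
sub = {x: r*sp.cos(th), y: r*sp.sin(th)}
dr = sp.simplify((sp.cos(th)*fx + sp.sin(th)*fy).subs(sub))        # dr/dt
dth = sp.simplify(((sp.cos(th)*fy - sp.sin(th)*fx)/r).subs(sub))   # dtheta/dt

# rbar = r/(1+r) maps 0 <= r < infinity onto 0 <= rbar < 1, so the
# circle rbar = 1 represents the points at infinity; d(rbar)/dr = (1-rbar)^2
dr_bar = sp.simplify((dr*(1 - rbar)**2).subs(r, rbar/(1 - rbar)))

# rescaling time with a factor (1 - rbar) (one common choice) tames the
# divergence of the highest-degree terms as rbar -> 1
print('d(rbar)/dtau =', sp.factor(dr_bar*(1 - rbar)))
print('d(theta)/dt  =', sp.simplify(dth.subs(r, rbar/(1 - rbar))))
```

for the actual analysis one would substitute the right - hand sides of ( 2.7 ) for fx and fy ; the singular nature of the circle $\bar r = 1$ then shows up as an overall factor of $(1-\bar r)$ in the rescaled equations , as described in the text .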
the singular point has eigenvalues . this singular point varies both its position in phase space and its stability depending upon whether is less than , equal to , or greater than one . if , then and the point is a saddle point . if , then and the point is a source . finally , if ( the bifurcation value ) , then and the point is degenerate ( discussed later ) . the stability of the points and is the same as in the previous case ; see equations and for their eigenvalues and the corresponding text ( see also for details ) . the vertical isoclines occur at and . this straight line is easily seen to pass through the origin and the point . if , then the vertical isocline lies below the point , and if , the vertical isocline lies above the point . finally , if ( the bifurcation value ) , the vertical isocline passes through the point . from an analysis similar to that in the previous subsection , we conclude that there is a non - isolated set of singular points at infinity . their qualitative behaviour is the same in this case as in the previous case ; namely , points which lie above the line are repellors , while those points which lie below the line are attractors . at the bifurcation value , the points and come together ; consequently these points undergo a saddle - node bifurcation as passes through the value . the singular point is no longer hyperbolic , but the qualitative behaviour near the singular point can be determined from the fact that we know the nature of the bifurcation . hence the singular point is a repelling node in one sector and a saddle in the others . a complete phase portrait is sketched in figures [ figure 4 ] , [ figure 5 ] and [ figure 6 ] .

table i . stability of the singular points as the parameters vary ( the parameter inequalities labelling the rows and the point names heading the columns did not survive transcription ) : the six parameter ranges give , in order , ( source , saddle , sink , - ) , ( source , source , sink , - ) , ( saddle , source , sink , - ) , ( saddle , saddle , sink , source ) , ( saddle , saddle - node , sink , saddle - node ) and ( saddle , source , sink , saddle ) . in the row containing the saddle - nodes , these points are part of the non - isolated line singularity ; this is the situation when the points coalesce . [ table i ]

the exact solution of the einstein field equations at each of the singular points represents the asymptotic solutions ( both past and future ) of frw models with a causal viscous fluid source . the solution at each of the singular points represents a self - similar cosmological model except in one isolated case [ see the singular point ] . at the singular point we have ( after a re - coordinatization ) the standard vacuum milne model . the singular point represents a flat frw model with a solution ( after a re - coordinatization ) , where . the singular point represents a flat frw model . if , then the solution is ( after a re - coordinatization ) , where . ( note that in this case we can not simply change coordinates to remove the constants of integration . ) the sign of depends on the sign of . if , then , and if , then . thus if , then is positive only in the interval , and hence we can see that after a finite time , , and all approach infinity . ( we will see later that the wec is violated in this case . ) if , then we can re - coordinatize the time so as to remove the constant of integration and the absolute value signs in the solution for . if , then and the solution is the de sitter model ( after a re - coordinatization ) . this exceptional solution is the only one that is not self - similar . it can be noted here that this is precisely the same situation that occurred in the eckart models studied in and . the singular point represents either an open , flat , or closed model depending on the value of the parameter . the solution in all cases is ( after a re - coordinatization ) . the weak energy condition ( wec ) states that for any timelike vector .
in the model under investigation this inequality reduces to and . assuming from here on that , the wec in dimensionless variables becomes . the dominant energy condition ( dec ) states that for every timelike , and is non - spacelike . here this inequality reduces to and , which when transformed to dimensionless variables becomes . the strong energy condition ( sec ) states that . here this inequality reduces to and , which when transformed to dimensionless variables becomes . if we assume that the wec is satisfied throughout the evolution of these models , then we find that there are five distinct situations . if , then and the line intersects the line at a point . if , then and the line intersects the line at the point . if , then can be of any sign or zero , but the line intersects the line at a point . if the wec is satisfied throughout the evolution of these models , then the possible asymptotic behaviour of the models is greatly restricted . the qualitative behaviour depends on the values of and . if the parameter is different from unity , then there is an additional singular point . this property is also present in the eckart models studied by burd and coley and coley and van den hoogen . the value corresponds to the case when the dynamical system ( 2.7 ) is polynomial ( the only other value of that exhibits this property is ) . the value is of particular interest as it represents the asymptotic behaviour of all the viscous fluid frw models , and also , this is the case when the equation of state for is independent of ( i.e. , ) . the parameter plays a role similar to the parameter found in both burd and coley and coley and van den hoogen . the value of the parameter determines the stability and global behaviour of the system . one of the goals of this paper is to determine the generic behaviour of the system of equations ( 2.7 ) . using the above energy conditions , in particular the wec , and the phase portraits ( figures [ figure 1][figure 6 ] ) , we can determine the generic and exceptional behaviour of all the viscous fluid models satisfying the wec . we are primarily interested in the generic asymptotic behaviour of the frw model with viscosity : if we consider the dynamical system ( 2.7 ) as , where are the variables and are the free parameters , then generic behaviour occurs in sets of non - zero measure with respect to the set ( except for the flat models , in which case the state space is a subset of ) . for example , the case is a set of measure zero with respect to the set . all behaviour is summarized in table ii .

[ table ii : generic and exceptional behaviour of the open , flat and closed models for each parameter range and for m = 1 and m = 1/2 ; the singular - point entries did not survive transcription . the exceptional trajectories do not represent typical or generic behaviour . ]

typically , if , then the open models evolve from the big - bang visco - elastic singularity at and evolve to the milne model at [ if ] or to the non - vacuum open model at [ if ] .
if , then the closed models evolve from the big - bang visco - elastic singularity at to points at infinity . these particular points at infinity correspond to the points where ( the point of maximum expansion ) and the various dimensionless variables break down . the typical behaviour of models with depends upon the sign of . if , then all trajectories for the open models will violate the wec , and if , then the open models evolve from the big - bang visco - elastic singularity at , become open models , and then evolve towards an inflationary flat frw model at the point . a visco - elastic singularity is a singularity in which a significant portion of the initial total energy density is visco - elastic energy , that is . concerning the closed models when : if , then the models evolve from the big - bang visco - elastic singularity at to points at infinity . however , if , then the closed models again evolve from the big - bang visco - elastic singularity at but now have two different typical behaviours : there is a class of models which approach points at infinity and do not inflate , and there is a class of models which evolve towards the inflationary flat frw model at the point . the flat frw models constitute a subset of measure zero of the total state space . however , the flat models are of special interest in that they represent the past asymptotic behaviour of both the open and closed models . if , then the flat models evolve from the visco - elastic singularity at to points at infinity or to the flat model located at . if and , then the flat models evolve from the big - bang visco - elastic singularity at to points at infinity . and if and , then the models evolve from the visco - elastic singularity at to points at infinity ( non - inflationary ) or to the inflationary model at the point . note that if the wec is dropped ( i.e. , ) , then a class of very interesting models occurs . there will exist models that evolve from the visco - elastic singularity at with , start inflating at some point , and a finite time later will start expanding at increasing rates , that is , , and will eventually evolve towards the point . ( this is the special case mentioned in the previous subsection . ) what this means in terms of the open and flat models is that they will expand with decreasing rates of expansion , start to inflate , and then continue to expand with increasing rates of expansion . the closed models will likewise expand with decreasing rates of expansion , start to inflate , and then continue to expand with increasing rates of expansion ; these models will not recollapse . the only models that can possibly satisfy the wec and inflate are those models with and . therefore , we can conclude that bulk - viscous inflation is possible in the truncated israel - stewart models . however , in the models studied by hiscock and salmonson inflation did not occur ( note that the equations of state assumed in are derived from assuming that the universe could be modelled as a boltzmann gas ) , while inflation does occur in the models studied by zakari and jou , who utilized different equations of state . furthermore , maartens has also analyzed models arising from the israel - stewart - hiscock theory of irreversible thermodynamics . maartens assumes an equation of state for the temperature of the form and finds that the inflationary attractor is unstable in the case .
in our truncated model we choose dimensionless equations of state and find that inflation is sometimes possible . the question of which equations of state are most appropriate remains unanswered , and clearly the possibility of inflation depends critically upon the equations of state utilized . this work improves upon previous work on viscous cosmology using the non - causal and unstable first - order thermodynamics of eckart , and differs from the work of belinskii et al . in that dimensionless equations of state are utilized . from the previous discussion we can conclude that the visco - elastic singularity at the point is a dominant feature in our truncated models . this singular point remains the typical past asymptotic attractor for various values of the parameters and . this agrees with the results of belinskii et al . the future asymptotic behaviour depends upon both the values of and . if , then the open models tend to the milne model at [ ] or to the open model at [ ] , and if , then the open models tend to the inflationary model at the point or are unphysical . if , then the closed models tend to points at infinity , and if , then the closed models tend to the inflationary model at the point . the future asymptotic behaviour of the flat models is that they either tend to points at infinity or tend to the point , in agreement with the exact solution given in . belinskii et al . utilized the physical variables , , and in their analysis and assumed non - dimensionless equations of state , and they found a singularity in which the expansion was zero but the metric coefficients were neither infinite nor zero ; the authors passed over this observation , stating that in a more realistic theory this undesirable asymptotic behaviour would not occur . we note that this behaviour did not occur in our analysis , in which dimensionless variables and a set of dimensionless equations of state were utilized . the behaviour of the eckart models in burd and coley with is very similar to the behaviour of the truncated israel - stewart models studied here in the case . this result also agrees with the conclusions of zakari and jou . however , when , various new possibilities can occur ; for instance , there exist open and closed models that asymptotically approach a flat frw model both to the past and to the future . interestingly enough , this is also the case in which the future asymptotic endpoint is an inflationary attractor . this type of behaviour does not occur in the eckart theory . while this work ( and the work of belinskii et al .
) employs a causal and stable second - order relativistic theory of thermodynamics , only the ` truncated ' version of the theory has been utilized , rather than the full theory of israel - stewart - hiscock . however , the ` truncated ' equations can give rise to very different behaviour than the full equations ; indeed , maartens argues that generally the ` truncated ' theory will lead to pathological behaviour , particularly in the behaviour of the temperature . it is possible to reconcile the truncated and full theories , at least near to equilibrium , by using a `` generalized '' non - equilibrium temperature and pressure defined via a modified gibbs equation in which dissipation enters explicitly . moreover , it is expected that the truncated theory studied here is applicable at least in the early universe . we have neglected the divergence terms here as a first approximation . if one includes the divergence terms , then the system of equations describing the model requires an additional equation of state for the temperature . a reasonable assumption in this case might be to assume a dimensionless equation of state for . notwithstanding these comments , it is clear that the next step in the study of dissipative processes in the universe is to utilize the _ full _ ( non - truncated ) causal theory , and this will be done in subsequent work . in particular , we shall find that the work here will be useful in understanding the full theory and is thus a necessary first step in the analysis . in addition , anisotropic bianchi type v models which include shear viscosity and heat conduction will also be investigated . this research was funded by the natural sciences and engineering research council of canada and a killam scholarship awarded to rvdh . the authors would like to thank roy maartens for helpful discussions and for making available to us recent work prior to publication . the authors would also like to thank des mcmanus for reading the manuscript . we will use darboux s theorem to find an algebraic first integral of the system in the case by first finding a number of algebraic invariant manifolds . an algebraic invariant manifold , , is a manifold such that , where is a polynomial . the following are invariant manifolds of the system : where . calculating , we find

$$r_2 = -\left[\,y+(3\gamma-2)x+(2-b+m_-)\,\right],\qquad r_3 = -\left[\,y+(3\gamma-2)x+(2-b+m_+)\,\right]$$

( the expression for $r_1$ did not survive transcription ) . using darboux s theorem , an algebraic first integral can be found by setting and then determining what values of satisfy the equation . solving the resulting algebraic system , we find the following algebraic first integral of the dynamical system ( 2.7 ) in the case : where is a free parameter and and must satisfy . this first integral determines the integral curves of the phase portraits in figures [ figure 1 ] , [ figure 2 ] and [ figure 3 ] , where the value determines which integral curve(s) is being described . for example , if , the integral curves are and . also , we can see that if , and , then the integral curve describes an ellipse ; however , these closed curves necessarily pass through the points and , thereby nullifying the possible existence of closed orbits .
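the darboux recipe used in this appendix can be illustrated on a toy system with known invariant lines ( the paper s own manifolds were lost in transcription ) : find polynomials $f_i$ whose derivative along the flow is $r_i f_i$ with polynomial cofactor $r_i$ , then choose exponents $\lambda_i$ with $\sum_i \lambda_i r_i = 0$ , which gives the first integral $f = \prod_i f_i^{\lambda_i}$ . a minimal sketch , with a hypothetical stand - in vector field :

```python
# darboux recipe on a toy system with known invariant lines; the
# vector field is a hypothetical stand-in chosen to make the
# construction explicit, not the paper's system (2.7).
import sympy as sp

x, y = sp.symbols('x y')
fx = x*(1 - x)      # toy flow: dx/dt
fy = y*(1 - y)      # toy flow: dy/dt

def cofactor(f):
    """r such that df/dt = r*f along the flow (polynomial exactly
    when f = 0 is an invariant manifold)."""
    return sp.simplify((sp.diff(f, x)*fx + sp.diff(f, y)*fy)/f)

mans = [x, 1 - x, y, 1 - y]            # candidate invariant manifolds
cofs = [cofactor(f) for f in mans]     # cofactors: 1-x, -x, 1-y, -y

lams = [1, -1, -1, 1]                  # solves sum(lam_i * r_i) = 0
assert sp.simplify(sum(l*c for l, c in zip(lams, cofs))) == 0

F = sp.Mul(*[f**l for f, l in zip(mans, lams)])     # darboux integral
dF = sp.simplify(sp.diff(F, x)*fx + sp.diff(F, y)*fy)
print('first integral f =', F, ';  df/dt =', dF)    # df/dt = 0
```

for the system of the paper one would use the three manifolds and cofactors quoted above and solve the same small linear system for the exponents .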
if , the dynamical system ( 2.7 ) is only defined for , but more importantly it is not differentiable at . in this section we will use numerical techniques to analyze the integral curves in the neighborhood of the singular point in the case . the integration and plotting was done using maple v release 3 . from the qualitative analysis we find that the behaviour depends on the parameter . in the first of these two plots we choose , , and , so that ( see ) . in the second plot we choose , , and , so that ( see ) . from the numerical plots we can conclude that the point has a saddle - point - like nature , in agreement with the preliminary remarks made in the text in section [ iii.2 ] .
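the maple experiment just described ( trajectories launched around a non - hyperbolic point ) is straightforward to reproduce with modern tools . the sketch below uses scipy and a hypothetical stand - in vector field with a degenerate point at the origin , since the actual system could not be recovered from the transcription .

```python
# numerical probe of a non-hyperbolic fixed point: launch trajectories
# from a small ring of initial conditions and count which directions
# leave and which approach. the vector field is a hypothetical
# stand-in whose linearization vanishes at (0, 0).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s):
    x, y = s
    return [x*x - y*y, 2.0*x*y]

r0, leaving, approaching = 0.05, 0, 0
for phi in np.linspace(0.0, 2.0*np.pi, 24, endpoint=False):
    s0 = [r0*np.cos(phi), r0*np.sin(phi)]
    sol = solve_ivp(rhs, (0.0, 5.0), s0, rtol=1e-9, atol=1e-12)
    if np.hypot(*sol.y[:, -1]) > r0:
        leaving += 1
    else:
        approaching += 1

print('directions leaving the point    :', leaving)
print('directions approaching the point:', approaching)
# a mixture of both is the numerical signature of saddle-like sectors,
# which is the behaviour reported in appendix 2
```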
isotropic and spatially homogeneous viscous fluid cosmological models are investigated using the truncated israel - stewart [ w. israel , _ ann . phys . _ * 100 * ( 1976 ) ; w. israel and j.m . stewart , _ proc . r. soc . lond . a _ * 365 * ( 1979 ) ; _ ann . phys . _ * 118 * ( 1979 ) ] theory of irreversible thermodynamics to model the bulk viscous pressure . the governing system of differential equations is written in terms of dimensionless variables , and a set of dimensionless equations of state is then utilized to complete the system . the resulting dynamical system is analyzed using geometric techniques from dynamical systems theory to find the qualitative behaviour of the friedmann - robertson - walker models with bulk viscosity . in these models there exists a free parameter such that the qualitative behaviour of the models can be quite different ( for certain ranges of values of this parameter ) from that found in models satisfying the eckart theory studied previously . in addition , the conditions under which the models inflate are investigated . [ causal viscous fluid frw models ]
there is by now a large body of literature on turbulent rayleigh - bénard convection . most of this work strives to stay in a limit described by an approximation developed by oberbeck and boussinesq , commonly known as the boussinesq approximation . within this approximation , the material constants are uniform across the layer and the temperature difference between top and bottom boundaries is small compared with the absolute temperature at any point in the layer . this idealized system has served as a paradigm for more complicated convection problems , such as atmospheric convection . a major difference between convection in the boussinesq limit and convection in the atmosphere is that the latter occurs in a layer in which the gas density varies significantly , which implies concomitant variations of viscosity and thermal diffusivity . experiments on convection beyond the boussinesq approximation have mostly focused on effects caused by the temperature dependence of the material properties , as opposed to the effects of compressibility . experiments in low temperature gases near their critical point are to some extent an exception , because the parameters in these experiments can be adjusted such that the adiabatic temperature gradient in the gas delays the onset of convection and shifts the critical rayleigh number by a detectable amount . however , these experiments were restricted to convection near the onset . experimental studies aimed at turbulent boussinesq convection in low temperature gases also have to correct for the effect of the adiabatic temperature gradient . the present paper investigates , through numerical simulation , a convecting ideal gas in a layer with significant density variation from top to bottom . no slip top and bottom boundaries are employed , so that the results are in principle amenable to verification by laboratory experiments if one finds a way of realizing similar density gradients in turbulent convection experimentally . the parameters controlling the density variation and the adiabatic temperature gradient are chosen to scatter around the values of the terrestrial troposphere . the troposphere is the bottom layer of the atmosphere , approximately 10 km thick , which is well mixed by convection and bounded from above by the stably stratified stratosphere . a long - standing question in turbulent rayleigh - bénard convection has been how the heat transport across the layer depends on the control parameters . one goal of the present paper will be to find out whether known results on convection in an incompressible medium can be extended to an ideal gas . from a fundamental point of view , the scale height of the density profile introduces a new length scale into the problem in addition to the height of the layer . previous studies suggest that convective motion extends through multiple scale heights , so that we can not expect that the layer height will simply drop out of the list of the relevant parameters in order to be replaced by the scale height . another issue peculiar to non - boussinesq convection is the asymmetry between the boundary layers next to the top and bottom boundaries . for example , the heat conductivities near the warm bottom and cold top boundaries are different , but the heat fluxes through both boundaries are identical in a statistically stationary state , so that the temperature gradients within the top and bottom boundary layers must be different .
because of this asymmetry , the temperature in the center of a convection cell need not be equal to the arithmetic mean of the top and bottom boundary temperatures . the deviation of the true center temperature from this arithmetic mean has been used in experiments as an indicator of non - boussinesq effects . a typical situation in astro - and geophysics is that only part of a convective layer is accessible to observation , at least to accurate observation . for example , the top of planetary or stellar atmospheres and the bottom of earth s troposphere are better known than the rest of the convective layers . it is in these cases important to know what can be inferred about the convective layer from the observation of some part of it . translated to the idealized system simulated here , the question arises as to what can be deduced about the whole convective layer or a boundary layer from knowledge of the opposite boundary layer . the next section will present the mathematical model and the numerical method used to solve it . the numerical results are analyzed in the third section , in which one subsection deals with the scaling of heat transfer and kinetic energy with the control parameters , whereas another subsection is concerned with the relationship between the boundary layers . consider a plane layer of height bounded by two planes perpendicular to the z axis . gravity is constant and pointing along the negative z direction , , where the hat denotes a unit vector . the ideal gas is characterized by constant heat capacities at fixed volume and pressure , and , and constant dynamic viscosity and heat conductivity . this implies that the density dependences of kinematic viscosity and thermal diffusivity are given by and , in which is the density . let us assume that the top and bottom boundaries are no slip and have prescribed temperatures . parameters evaluated at the top boundary in the initial state will be denoted by an index o for `` outer '' . the gas at the top of the layer thus has temperature . in the state specified by the initial conditions ( see eq . ( [ eq : density_init ] ) below ) , it also has kinematic viscosity , thermal diffusivity and density . the temperature difference across the layer is . the system of equations governing density , temperature , pressure and velocity reads , with the usual summation convention over repeated indices ,

$$\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\bm v) = 0, \label{eq:conti_dim}$$

$$\rho\left[\frac{\partial\bm v}{\partial t} + (\bm v\cdot\nabla)\bm v\right] = -\nabla p + \rho\bm g + \mu\left[\nabla^2\bm v + \frac13\nabla(\nabla\cdot\bm v)\right], \label{eq:ns_dim}$$

$$\rho c_v\left[\frac{\partial T}{\partial t} + \bm v\cdot\nabla T\right] = -p\,\nabla\cdot\bm v + k\nabla^2 T + 2\mu\left[e_{ij}e_{ij} - \frac13(\nabla\cdot\bm v)^2\right]. \label{eq:t_dim}$$

the gas constant in the equation of state is given by , with the molar mass and the universal gas constant . it follows from thermodynamics that the gas constant equals $c_p - c_v$ . the strain rate tensor is given by $e_{ij} = \frac12(\partial v_i/\partial x_j + \partial v_j/\partial x_i)$ . from here on , we will use nondimensional variables . all lengths are expressed in multiples of , and the scales of time and density are chosen as and , respectively . the difference between the temperature of the gas and the top temperature , , is scaled with .
using the same symbols for the non - dimensional variables space , time , density , velocity and temperature difference with the top boundary as in the dimensional equations ( [ eq : conti_dim]-[eq : state ] ) , one obtains the system ( [ eq : conti]-[eq : t ] ) together with the boundary conditions [ only fragments of the explicit non - dimensional momentum and temperature equations survive : a buoyancy term carrying the prefactor $\frac{1}{\gamma}\frac{h_o}{d}\,\mathrm{pr\,ra}$ , a term $-\hat{\bm z}\,\mathrm{pr\,ra}\,\frac{t_o}{\delta t}$ , a viscous term $\frac{\mathrm{pr}}{\rho}\left[\nabla^2\bm v + \frac13\nabla(\nabla\cdot\bm v)\right]$ , and a viscous heating term carrying $2\gamma(\gamma-1)\frac{d}{h_o}\frac{1}{\mathrm{ra}}\frac{1}{\rho}$ ] . the equation of state has been used to eliminate the pressure . seven parameters control the system . the rayleigh number is the usual rayleigh number evaluated at the top boundary ( remember that the thermal expansion coefficient of an ideal gas is its inverse temperature ) : . the prandtl number is independent of space in the present model and is set to 0.7 in all calculations : . the adiabatic exponent is set in this paper to its value for a monoatomic gas : . the density stratification is specified by , where is the adiabatic scale height at the top boundary , . the meaning of the fifth parameter , , is obvious from the definitions above . an alternative parameter , redundant after the choices made so far , is the ratio of the adiabatic temperature difference between top and bottom , , and the actual temperature difference , : . this ratio needs to be less than 1 for any convection to occur . the sixth `` parameter '' is the initial temperature and density distribution . the initial conditions appear as a control parameter because they specify , for instance , the total mass in the layer . they also determine , and hence and , a quantity used to make the governing equations nondimensional . all simulations are started from zero velocity and the conductive profile . the density is then determined from ( [ eq : ns ] ) : . the geometry , quantified through the aspect ratio of the computational volume , is the seventh parameter . periodic boundary conditions are imposed in and with periodicity lengths and . all computations have been made for . no aspect ratio dependence has been investigated , since the main interest of the present work was to determine the effects of density variations . even though it was chosen to keep several parameters fixed , there still remains a vast parameter space to explore , since , and have to be varied . the computations below are roughly guided by the terrestrial troposphere , for which and . the density of air varies by a factor between 3 and 4 within the troposphere . note that is the appropriate adiabatic exponent for air .
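the list of control parameters can be made concrete with a back - of - envelope evaluation . the sketch below uses the standard expressions consistent with the text ( thermal expansion coefficient of an ideal gas equal to its inverse temperature ) ; the scale height is taken as the pressure scale height $r\,t_o/g$ , one plausible reading of `` adiabatic scale height '' , and all input numbers are made - up troposphere - like values , not those of the paper .

```python
# back-of-envelope evaluation of the control parameters of an ideal-gas
# layer. formulas are the standard ones; all numbers are illustrative.
g       = 9.81       # m/s^2, gravity
d       = 1.0e4      # m, layer height (troposphere-like)
T_o     = 220.0      # K, top temperature
dT      = 70.0       # K, temperature difference across the layer
nu_o    = 1.0e-5     # m^2/s, kinematic viscosity at the top (assumed)
kappa_o = 1.4e-5     # m^2/s, thermal diffusivity at the top (assumed)
R_gas   = 287.0      # J/(kg K), specific gas constant of air
gamma   = 1.4        # adiabatic exponent of air
c_p     = gamma*R_gas/(gamma - 1.0)

Ra = g*dT*d**3/(T_o*nu_o*kappa_o)   # rayleigh number at the top boundary
Pr = nu_o/kappa_o                   # prandtl number
H_o = R_gas*T_o/g                   # pressure scale height at the top
dT_ad = g*d/c_p                     # adiabatic temperature difference
ratio = dT_ad/dT                    # must be < 1 for convection to occur

print(f'Ra = {Ra:.3e}, Pr = {Pr:.2f}, d/H_o = {d/H_o:.2f}, '
      f'dT_ad/dT = {ratio:.2f}')
```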
[ figure [ fig : profile ] : vertical profiles of horizontal averages of temperature ( top panel ) , density ( second panel from the top ) , vertical velocity ( third panel ) and horizontal velocity ( bottom panel ) . the overbars signal averages over time and horizontal planes . the different traces are for four rayleigh numbers ( solid red , long dashed green , dot dashed blue and short dashed black lines ) . ]

the well known boussinesq approximation is recovered from equations ( [ eq : conti]-[eq : t ] ) in the limit and . in this limit , the sound speed goes to infinity . a purely explicit time step will be used in the numerical method below , so that simulations close to the boussinesq limit become impractical . for this reason , and since the boussinesq limit is an important reference case , a second system of equations was also implemented numerically : where is the deviation from the conductive profile , . the sound speed appears as an independent variable . for small mach numbers , the density fluctuations are small and the equation of continuity can be linearized to yield ( [ eq : conti_bq ] ) . the boussinesq equations are recovered in the limit .
in the simulations mentioned below using ( [ eq : conti_bq]-[eq : t_bq ] ) , the sound speed was adjusted such that the mach number always stayed below 0.1 . note that simulations of weakly compressible convection as an approximation to boussinesq convection have been undertaken before ; for instance , simulations using the lattice boltzmann method implicitly do so . systems ( [ eq : conti]-[eq : t ] ) and ( [ eq : conti_bq]-[eq : t_bq ] ) have been simulated with a finite difference method implemented on graphics processing units using c for cuda . the numerical method used centered finite differences of second order on a collocated grid , except for the advection terms , which used an upwind biased third order scheme . time stepping used a third order runge - kutta method . the standard resolution was . lower resolution was sufficient at the smallest simulated . the validation of the code is described in the appendix . a summary of the simulations is given in table [ table1 ] . apart from the control parameters , it lists the nusselt number , defined as , where the overbar denotes an average over time and either the top or bottom boundary . the kinetic energy density is given by , whereas the peclet number is computed from , which is aptly called a peclet number because velocities are computed in units of . the average temperature deviation from the conductive profile at the center of the layer , , is also listed in table [ table1 ] , together with the average density in the midplane , . fig . [ fig : profile ] shows vertical profiles of temperature , density , vertical velocity , and horizontal velocity for different . contrary to the boussinesq case , these profiles are not symmetric about the midplane . the relation between the top and bottom regions of the layer will be discussed in section [ bl ] . as increases , an increasingly large interval develops in which the temperature gradient is approximately equal to the adiabatic gradient . the maximum of the vertical velocity is found below the midplane . two local maxima show up in the profiles of horizontal velocity . the larger velocities are found near the bottom boundary . when both and are large , there is no local maximum of horizontal velocity near the top boundary and the horizontal velocity decreases monotonically with height . these cases result in blanks in the last column of table [ table1 ] . boundary layers also exist in the density profiles . the average density takes its maximum and minimum values , and , near the bottom and top boundary , respectively . both and are given in table [ table1 ] , too . the ratio is generally less than is suggested by the values of and the initial conditions ( [ eq : density_init ] ) because the statistically stationary , turbulent and well mixed state is nearly adiabatic , not conductive . a rough estimate of can be obtained from assuming that the adiabatic state extends throughout the layer , implying that the density takes its extremal values exactly on the boundaries . this leads to with , which is compatible with the numerical results .
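the diagnostics of table [ table1 ] can be computed from gridded fields along the following lines . the exact defining formulas above were garbled in transcription , so the definitions in the sketch are plausible standard choices : a nusselt number from the mean boundary temperature gradient , and a peclet number built from the rms velocity ( consistent with velocities being measured in units of the thermal diffusion velocity ) ; the toy fields only exercise the functions .

```python
# plausible standard definitions of the diagnostics; the paper's exact
# formulas did not survive transcription, so these are assumptions.
import numpy as np

def nusselt(T, dz, dT_total, depth):
    """mean boundary heat flux over the conductive flux dT/d."""
    dTdz_bottom = (T[..., 1] - T[..., 0])/dz   # one-sided z-derivative
    return -dTdz_bottom.mean()/(dT_total/depth)

def kinetic_energy_density(rho, vx, vy, vz):
    return 0.5*np.mean(rho*(vx**2 + vy**2 + vz**2))

def peclet(vx, vy, vz):
    # velocities assumed already nondimensionalized with kappa_o/d
    return np.sqrt(np.mean(vx**2 + vy**2 + vz**2))

# toy fields on a 32^3 grid (z is the last axis)
n = 32
rng = np.random.default_rng(1)
z = np.linspace(0.0, 1.0, n)
T = 1.0 - z + 0.01*rng.standard_normal((n, n, n))   # conductive + noise
v = [0.1*rng.standard_normal((n, n, n)) for _ in range(3)]
rho = np.ones((n, n, n))

print('nu  ~', nusselt(T, z[1] - z[0], 1.0, 1.0))
print('ekin ~', kinetic_energy_density(rho, *v), ' pe ~', peclet(*v))
```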
[ figure [ fig : nu_ra ] : nusselt number as a function of rayleigh number within the boussinesq approximation ( black stars ) , for ( blue symbols ) with ( plus ) and 1 ( x ) , for ( green symbols ) with ( empty squares ) , 0.3 ( full squares ) , 1 ( empty triangle up ) , 3 ( full triangle up ) , 10 ( empty triangle down ) , 30 ( full triangle down ) and 100 ( empty diamonds ) , and for ( red symbols ) with ( empty circles ) , 1 ( full circles ) and 10 ( half filled circles ) . data for the boussinesq case do not appear in the subsequent figures . ]

the most immediate task is of course to find predictions for the nusselt number . a straightforward plot of vs. ( fig . [ fig : nu_ra ] ) shows that one does not obtain simple power laws for large and . a finite adiabatic temperature gradient modifies the onset of convection . if one aims for a data reduction which collapses the different curves in fig . [ fig : nu_ra ] , one can account for the adiabatic temperature difference by defining a corrected rayleigh number by . a similar correction seems in order for . since the adiabatic temperature gradient needs to be established before any convection can start , it is natural to subtract the heat conducted down the adiabat from both the actual heat transport and the conductive heat transport used for the normalization of the heat transport : . it is seen from fig . [ fig : nu_nb_ra_nb ] that for larger than roughly , one finds approximately , but the prefactors depend on the other control parameters in a non - trivial way . this is compatible with an argument exposed in ref . , which states that with an a priori unknown function . the argument has to assume negligible viscous heating and small density variations , so that it is not expected to hold throughout the parameter range investigated here . nonetheless , a best fit to the data yields , which can be rearranged into the fitting function in fig . [ fig : nu_nb_ra_nb ] , .

[ figure [ fig : nu_nb_ra_nb ] : corrected nusselt number as a function of the corrected rayleigh number , with the same symbols as in fig . [ fig : nu_ra ] ; the solid line indicates the fitted power law . ]

[ figure : corrected nusselt number as a function of ( top panel ) and ( bottom panel ) with the same symbols as in fig . [ fig : nu_ra ] ; the solid lines show the fitted power laws for each panel . ]
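the data reduction just described is easy to prototype . the corrected definitions and the fitted exponent were garbled above , so the sketch assumes the natural forms $ra_c = ra\,(1 - \delta t_{ad}/\delta t)$ and $nu_c = (nu - f)/(1 - f)$ with $f = \delta t_{ad}/\delta t$ , and uses an illustrative exponent of 0.3 to build the synthetic data .

```python
# prototype of the data reduction: assumed corrected quantities and a
# log-log least-squares power-law fit. the exponent 0.3 is illustrative.
import numpy as np

def corrected(ra, nu, f_ad):
    ra_c = ra*(1.0 - f_ad)             # onset-shift correction (assumed)
    nu_c = (nu - f_ad)/(1.0 - f_ad)    # subtract adiabatic conduction
    return ra_c, nu_c

def powerlaw_fit(x, y):
    """least-squares fit of y = c*x**p in log-log space."""
    p, logc = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(logc), p

rng = np.random.default_rng(0)
ra = np.logspace(7, 10, 20)
f_ad = 0.2                             # adiabatic/actual temperature ratio
nu = f_ad + (1 - f_ad)*0.1*(ra*(1 - f_ad))**0.3 \
         *(1 + 0.05*rng.standard_normal(ra.size))

ra_c, nu_c = corrected(ra, nu, f_ad)
c, p = powerlaw_fit(ra_c, nu_c)
print(f'fit: nu_c = {c:.3g} * ra_c^{p:.3f}')   # recovers p close to 0.3
```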
it can also be useful to relate to the kinetic energy or the peclet number . it was noted in ref . that in boussinesq convection in computational volumes of large aspect ratio , which is equivalent to in that case . in the present simulations , the relation between and is already non - trivial ( see fig . [ fig : ekin_pe ] ) , because there is a factor representing an effective density between the two quantities . it turns out that the geometric mean of and is a suitable effective density , to the extent that in fig . [ fig : ekin_pe ] all points for deviate by less than 30 % in from .

[ figure [ fig : ekin_pe ] : kinetic energy density as a function of the peclet number , compensated for the power law in eq . ( [ equ : nu_nb_ekin ] ) , with the same symbols as in fig . [ fig : nu_ra ] . ]

[ figure : corrected nusselt number as a function of the corrected rayleigh number , on a double logarithmic scale in the top panel and compensated for the power law in eq . ( [ equ : nu_nb_ra_nb2 ] ) in the bottom panel , with the same symbols as in fig . [ fig : nu_ra ] . ]

this geometric mean becomes again important when looking for a relation between and . a good fit to the data is obtained from ( see fig . [ fig : nu_nb_ekin ] ) , which reduces of course to the previously known scaling for and . having established the relevance of the product , it is tempting to introduce it into fits of vs. . a reasonable fit is shown in fig . [ fig : nu_nb_ra_nb2 ] to be , which is an improvement compared with fig . [ fig : nu_nb_ra_nb ] , especially for . previous studies have quantified the asymmetry between top and bottom in non - boussinesq convection with the help of the midplane temperature . in most of the present simulations , the midplane temperature deviates by less than from its value in the conductive state ( see table [ table1 ] ) . such a small deviation is difficult to determine accurately and requires long time integrations , so that this section will not consider it any further , apart from noting that it is negative for small ( in agreement with ref . ) but becomes positive for large enough . a relation between temperature boundary layers deduced from experimental data by wu and libchaber is based on a temperature scale computed from quantities local to each boundary layer . in many of the simulations presented here , the boundary layers are still quite thick and there is significant variation of , for example , thermal diffusivity across them , so that the results of wu and libchaber can not be tested in a meaningful way .

[ figure : as a function of , with the same symbols as in fig . [ fig : nu_ra ] ; the solid line indicates the power law . ]

[ figure : as a function of , with the same symbols as in fig . [ fig : nu_ra ] . ]

in the following , overbars denote averages over time and horizontal planes , and the indices b and t indicate bottom and top boundaries . for example , is the average density at the top boundary . it is in general different from , which is the density at the top boundary in the initial conductive state given by eq . ( [ eq : density_init ] ) . this subsection will present two relations between the top and bottom regions of the convection layer involving the free fall velocity . in order to compute a free fall velocity , we define and as the ( dimensional ) temperatures the gas would have at the top and bottom boundaries if the adiabatic temperature profile extended throughout the entire layer : with .
under the same assumption , the densities at the two boundaries , and , are given by . consider now a parcel of gas near the top boundary . it has on average the density . the density difference with the adiabatic profile , , accelerates the parcel through the volume . the free fall velocity is estimated from a balance between the advection and buoyancy terms , which reads in the non - dimensional variables used here , where is the density difference of the moving parcel with the adiabatically stratified background . the pressure variation experienced by the falling parcel compresses the parcel by the same factor as the surrounding gas ( assuming the parcel does not exchange heat with its surroundings ) , so that remains constant during the entire journey through the adiabatically stratified layer . it follows that keeps its initial value of , and that the square of the non - dimensional free fall velocity of the parcel arriving at the bottom ( which is expressed in units of ) is . a similar expression is derived if we start the argument from the bottom boundary , so that we obtain two velocities , and , according to the formula . fig . [ fig : ekin_vff ] verifies that one obtains with eq . ( [ eq : vff ] ) a velocity representative of the convective velocity . the figure shows as a function of the energy density computed from the bottom free fall velocity , . when this velocity is small , the reynolds number of the flow is too small for friction to be negligible and the free fall velocity is a poor estimate of the true velocity . for large velocities , there is a unique relation between and , independent of any other parameters . it is now expected that one obtains the same graph if one uses the kinetic energy density computed at the top boundary , or equivalently , that . fig . [ fig : vff_t_b ] demonstrates that this is the case to within for sufficiently large velocities and over three decades in . the equality of the two energy densities is the first important connection between the top and bottom boundaries .

[ figure : as a function of , with the same symbols as in fig . [ fig : nu_ra ] . ]

[ figure : as a function of , with the same symbols as in fig . [ fig : nu_ra ] . ]
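the free fall estimate can be sketched in a few lines . eq . ( [ eq : vff ] ) did not survive transcription , so the standard buoyancy balance $v_{ff}^2 \sim g\,d\,\delta\rho/\rho$ is assumed , with the density contrast taken between a boundary parcel and the adiabatic reference state ( constant during the traversal , as argued above ) ; all numbers , and the pairing of densities with velocities in the energy densities , are assumptions .

```python
# sketch of the free fall estimate; the functional form is the assumed
# standard balance v_ff^2 ~ g*d*(delta rho)/rho, and all numbers are
# made-up illustrative values.
import numpy as np

def v_ff(g, d, rho_parcel, rho_ad):
    """free fall speed from the (constant) density contrast between a
    boundary parcel and the adiabatically stratified background."""
    return np.sqrt(g*d*abs(rho_parcel - rho_ad)/rho_ad)

g, d = 9.81, 1.0e4
rho_t, rho_t_ad = 0.42, 0.40    # top: actual vs adiabatic reference
rho_b, rho_b_ad = 1.18, 1.22    # bottom

v_t = v_ff(g, d, rho_t, rho_t_ad)   # parcel starting at the top
v_b = v_ff(g, d, rho_b, rho_b_ad)   # parcel starting at the bottom
print(f'v_ff from top = {v_t:.1f} m/s, from bottom = {v_b:.1f} m/s')

# first top/bottom relation of the text: the kinetic energy densities
# built from the two estimates agree at high rayleigh number (the
# density/velocity pairing below is an assumption)
e_t = 0.5*rho_t*v_t**2
e_b = 0.5*rho_b*v_b**2
print(f'energy densities: {e_t:.1f} vs {e_b:.1f}')
```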
the second relation , shown in fig . [ fig : vff_h ] , involves the local maxima of horizontal velocity near the top and bottom boundaries visible in the bottom panel of fig . [ fig : profile ] . these maximum velocities , and , are listed in table [ table1 ] . according to fig . [ fig : vff_h ] , they obey , for sufficiently large velocities : is a free fall velocity computed from quantities evaluated at the bottom boundary , but it estimates the velocity of plumes arriving at the top boundary . the typical velocity of the flow out of the arriving plumes is the maximum average horizontal velocity , so that we expect , and similarly . the prefactors depend on the size of the incoming plumes relative to the thickness of the layer in which the outflow occurs . fig . [ fig : vff_vh ] shows that approximately holds and that the prefactor indeed depends on and . the two proportionalities combine to . it is remarkable that this combined relation is obeyed more accurately than the two separate proportionalities , and that the proportionality factor in the combined relation approaches 1 at high rayleigh numbers , as shown in fig . [ fig : vff_h ] . there are many ways to depart from the boussinesq approximation , and it is not known at present whether there is anything universal about convection in general . the prandtl number is a constant in a real gas , and the nusselt number depends only on the rayleigh number in the boussinesq limit . away from that limit , several more control parameters determine the nusselt number , and it is a challenge to find a suitable data reduction such that the dependences found for various density stratifications collapse on a single master curve , which must then contain the boussinesq limit . it has been shown in the present paper that a good , if imperfect , data collapse is obtained if one introduces into the scaling laws an effective density given by the geometric mean of the maximum and minimum densities in the convecting layer . there is no theory underpinning the relevance of the geometric mean . it seems likely that the geometric mean will be of no help in some other cases of non - boussinesq convection , for example in liquids . on the other hand , the importance of free fall velocities has long been recognized , and it has been shown here that two different estimates of free fall velocities , based on quantities pertaining either to the top or the bottom boundary , are related at high rayleigh numbers by the requirement that the kinetic energy densities computed from the two velocities must be equal . the free fall velocities are also connected to the total kinetic energy of the flow and the maxima of the horizontal velocity profile . it will be worthwhile to check these relations in other forms of convection . this appendix describes a few tests that have been performed in order to validate the numerical code . direct validation of the code is problematic because published simulations of compressible convection either focus on flow structures and are not useful as a benchmark , or are too close to the boussinesq limit to offer a stringent test . a different route had therefore to be taken . first , the code simulating ( [ eq : conti_bq]-[eq : t_bq ] ) could be validated against a completely independent spectral method . this already verified most terms occurring in ( [ eq : conti]-[eq : t ] ) . for example , the spectral code calculates , for a plane layer with no slip boundaries in , periodic boundary conditions in and , and with , and , that and . if started from the initial conditions at , the system ( [ eq : conti_bq]-[eq : t_bq ] ) needs to be integrated beyond to find the final state . the new code with appropriately adapted boundary conditions yields and for 32 points along and grid points in the . at twice that resolution in each coordinate , the result is and . the parameter in eq . ( [ eq : ns_bq ] ) was set to so that the mach number is below . in a next step , the code implementing ( [ eq : conti]-[eq : t ] ) was used to simulate the propagation of sound waves .
for this purpose , the term was removed from eq . ( [ eq : ns ] ) . the remaining equations , if linearized and with dissipation neglected , can be manipulated into a single wave equation . this equation allows some simple analytical solutions . for example , there are the eigenmodes depending only on . the first of those eigenmodes for boundary conditions imposing zero heat flux at the top and bottom boundaries is of the form . this standing wave has a period of . for , , and , this predicts . with a resolution of 64 grid points along , the numerical result is . one can similarly simulate sound waves propagating in different directions . this is a test for all terms involving . the fact that simulations of convection at high yield a temperature gradient in the bulk close to the adiabatic gradient may also be regarded as a test of the terms involving . the dissipation rate of the standing wave of the previous paragraph can be used to test the dissipative term . a more complete test is provided by the energy budget , which also tests the viscous heating term in the temperature equation . if one takes the scalar product of eq . ( [ eq : ns ] ) with and integrates eqs . ( [ eq : ns ] ) and ( [ eq : t ] ) , multiplied by , over all space , one deduces from eqs . ( [ eq : conti]-[eq : t ] ) that the time derivative of the kinetic plus internal energy is given by

$$\frac{d}{dt}\int\left[\frac12\rho v^2 + \rho c_v T\right] dV = g + v_1 + v_2$$

where $g$ is the work done by gravity , $v_1$ the dissipated kinetic energy and $v_2$ the heat generated through viscous dissipation . if we denote time averages by angular brackets , we must find in a statistically stationary state that and . since $v_1$ and $v_2$ have different forms and must be programmed differently , the energy budget provides a good test for their correctness . for the case and included in table [ table1 ] , the simulations yield and at a resolution of grid points , and and at a resolution of grid points . the formulae for $v_1$ and $v_2$ taken together contain integrals of 18 derivatives , 6 of which are squared , so that the typical error on each derivative is about . the volume integrals have been computed by adding the integrands at each grid point , multiplied by the volume of the cell surrounding each grid point . this method of integration is of first order for general integrands , which explains why the error is only halved when doubling the resolution .

table [ table1 ] . summary of results . listed are the control parameters , , and together with , , , multiplied by 100 , , , , , and ( see text for definitions ) . the table consists of three sections ( corresponding to the color code of the figures in the online version ) with different , which is 4/5 , 2/3 or 1/15 in going from the top to the end of the table . an entry is missing for if the profile of horizontal velocity has no maximum near the top boundary . [ the tabulated values did not survive transcription . ]
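the energy budget test above lends itself to a compact implementation . the explicit formulas for the three terms were garbled in transcription , so the sketch uses the standard forms : work done by gravity , and viscous heating $\phi = 2\mu\,[\,e_{ij}e_{ij} - (\nabla\cdot\bm v)^2/3\,]$ ; volume integrals are evaluated , as in the text , by summing the integrand over grid cells ( first order ) . the random fields merely exercise the routines .

```python
# sketch of the energy-budget consistency test; standard forms assumed
# since the paper's explicit formulas were garbled:
#   g  = work done by gravity = -grav * integral(rho * v_z) dV
#   v2 = viscous heating      =  integral(phi) dV
import numpy as np

def strain_heating(vx, vy, vz, dx, mu):
    grads = [np.gradient(v, dx) for v in (vx, vy, vz)]  # grads[i][j] = dv_i/dx_j
    div = grads[0][0] + grads[1][1] + grads[2][2]
    ee = 0.0
    for i in range(3):
        for j in range(3):
            e_ij = 0.5*(grads[i][j] + grads[j][i])
            ee = ee + e_ij**2
    return 2.0*mu*(ee - div**2/3.0)

def volume_integral(f, dx):
    return f.sum()*dx**3        # first-order cell summation, as in the text

n, dx, mu, grav = 32, 1.0/32, 1.0e-3, 1.0
rng = np.random.default_rng(2)
vx, vy, vz = (0.1*rng.standard_normal((n, n, n)) for _ in range(3))
rho = 1.0 + 0.05*rng.standard_normal((n, n, n))

G  = -grav*volume_integral(rho*vz, dx)
V2 =  volume_integral(strain_heating(vx, vy, vz, dx, mu), dx)
print(f'g = {G:.3e}, v2 = {V2:.3e}')
# in a statistically stationary run the time averages of the budget
# terms must cancel; monitoring them is the consistency test described
```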
numerical simulations of convection in a layer filled with ideal gas are presented . the control parameters are chosen such that there is a significant variation of density of the gas in going from the bottom to the top of the layer . the relations between the rayleigh , peclet and nusselt numbers depend on the density stratification . it is proposed to use a data reduction which accounts for the variable density by introducing into the scaling laws an effective density . the relevant density is the geometric mean of the maximum and minimum densities in the layer . a good fit to the data is then obtained with power laws with the same exponent as for fluids in the boussinesq limit . two relations connect the top and bottom boundary layers : the kinetic energy densities computed from free fall velocities are equal at the top and bottom , and the products of free fall velocities and maximum horizontal velocities are equal for both boundaries .
in computed tomography ( ct ) , three dimensional reconstruction techniques from projections have been used for many years in radiology . the two dimensional fourier transform is the most commonly used algorithm in radiology . in this technique a large number of projections at uniformly distributed angles around the subject are required for reconstruction of the image . in the field of accelerator physics , one expects that relatively simple charged particle beam distributions can be reconstructed from a small number of projections . in conventional practice only two projections , usually horizontal and vertical , are measured . this puts a severe limit on the level of detail that can be achieved . the algebraic reconstruction technique ( art ) introduced by gordon , bender and herman uses three or more projections to reconstruct the 2 - dimensional beam density distribution . they have shown that the improvement in the quality of the reconstruction is pronounced when a third projection is added , but additional projections add much less to the reconstruction quality . the art algorithms have a simple intuitive basis . each projected density is thrown back across the reconstruction space , in which the densities are iteratively modified to bring each reconstructed projection into agreement with the measured projection . assume that the pattern being reconstructed is enclosed in a square space of an n x n array of small pixels ; is the grayness or density number , which is uniform within a pixel but may differ from pixel to pixel . a `` ray '' is a region of the square space which lies between two parallel lines . the weighted ray sum is the total grayness of the reconstruction figure within the ray . the projection at a given angle is then the sum of non - overlapping , equally wide rays covering the figure . the art algorithm consists of altering the grayness of each pixel intersected by a ray in such a way as to make the ray sum agree with the corresponding element of the measured projection . assume p is an m x n matrix and r an m - component column vector . let denote the ( i , j)th element of p , and denote the ith ray of the reconstructed projection vector r. for , n is the number of pixels under projection ray r , defined as . art is an iterative method . the density number denotes the value of after q iterations . after q iterations the intensity of the ith reconstructed projection ray is , and the density in each pixel is , where r is the measured projection ray and , and . this algorithm is known as fully constrained art . it is necessary to determine when an iterative algorithm has converged to a solution which is optimal according to some criterion . various criteria for convergence have been devised . the discrepancy between the measured and calculated projection elements is , the nonuniformity or variance of the constructed figure is , and the entropy of the constructed figure is . the discrepancy tends to zero , the variance to a minimum and the entropy to a maximum with increasing q. for a known test pattern ( ) , the euclidean distance is defined as . it is instructive to test the reconstruction capabilities of art with two to four views by using projections from a known test figure .
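the update rule and constraints just described are compact enough to state in code , before turning to the test - figure example below . the sketch is a minimal implementation of fully constrained additive art : ray membership is obtained by binning pixel centers along each projection direction , the measured - minus - reconstructed ray difference is spread evenly over the pixels of the ray , and negative densities are clipped to zero ( one common constraint choice ) . the rms discrepancy and euclidean distance are plausible forms , since the exact expressions above were lost ; all names and numbers are illustrative .

```python
# minimal fully constrained additive art, with ray membership obtained
# by binning pixel centers along the projection direction.
import numpy as np

def ray_index(n, angle_deg, n_rays):
    """assign each pixel of an n x n grid to a ray of the projection."""
    c = np.arange(n) - (n - 1)/2.0
    xx, yy = np.meshgrid(c, c, indexing='ij')
    a = np.deg2rad(angle_deg)
    t = xx*np.cos(a) + yy*np.sin(a)          # coordinate along the axis
    edges = np.linspace(t.min(), t.max() + 1e-9, n_rays + 1)
    return np.clip(np.digitize(t, edges) - 1, 0, n_rays - 1)

def project(img, rays, n_rays):
    return np.bincount(rays.ravel(), weights=img.ravel(), minlength=n_rays)

def art(measured, ray_maps, n, n_rays, iterations=50):
    img = np.zeros((n, n))
    for _ in range(iterations):
        for r_meas, rays in zip(measured, ray_maps):
            sums = project(img, rays, n_rays)
            counts = np.bincount(rays.ravel(), minlength=n_rays)
            corr = (r_meas - sums)/np.maximum(counts, 1)
            img += corr[rays]                # spread evenly over the ray
            img = np.maximum(img, 0.0)       # fully constrained (>= 0)
    return img

# coupled-gaussian test figure, in the spirit of the example in the text
n, n_rays = 100, 100
c = np.arange(n) - (n - 1)/2.0
xx, yy = np.meshgrid(c, c, indexing='ij')
th = np.deg2rad(18.0)                        # illustrative coupling angle
u, v = xx*np.cos(th) + yy*np.sin(th), -xx*np.sin(th) + yy*np.cos(th)
truth = np.exp(-(u/12.0)**2 - (v/25.0)**2)

angles = [0.0, 90.0, 45.0]
ray_maps = [ray_index(n, a, n_rays) for a in angles]
measured = [project(truth, rm, n_rays) for rm in ray_maps]
recon = art(measured, ray_maps, n, n_rays)

d = np.sqrt(np.mean([(m - project(recon, rm, n_rays))**2
                     for m, rm in zip(measured, ray_maps)]))
s = np.sqrt(np.mean((recon - truth)**2))     # euclidean distance to truth
print(f'discrepancy d = {d:.4f}, distance to truth s = {s:.4f}')
```

note that with a fixed number of rays per projection , the binning automatically makes the ray width of the diagonal projections larger by a geometric factor , as in the example described next .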
In the following example, we have used an x-y coupled (by about 18°) two-dimensional Gaussian enclosed in a square space of a 100 x 100 array. We have used a ray width in the 45° and 135° projections equal to sqrt(2) times the ray width in the x or y projections, making the number of rays in each projection the same, namely 100. Fig. 1 shows the test figure and the test figure reconstructed from two projections. Fig. 2 shows the test figure reconstructed from three and four projections. Fig. 3 shows the contours of figures 1 and 2.

Table 1: the convergence criteria, discrepancy (D), variance (V), entropy (E) and Euclidean distance (S), for two, three and four projections.

It is clear from Fig. 3 that two projections are not enough for catching the coupling. The accuracy of the figure reconstructed from four projections is slightly better than from three projections. Fig. 4 shows the discrepancy (D), variance (V), entropy (E) and Euclidean distance (S) as a function of iteration number for the case of three projections. The convergence criterion was that the discrepancy be less than 10. Table 1 shows the numerical values of the discrepancy (D), variance (V), entropy (E) and Euclidean distance (S) for two, three and four projections.

There are stepping wire profile scanners at 13 locations throughout the 200 MeV linac and transport lines. These scanners are mounted at a 45° angle with respect to the horizontal, and single horizontal and vertical wires are stepped through the beam. We have added a third wire at 45° to the horizontal in two of the scanners, one in the 750 keV line and one in the 200 MeV BLIP transport line. Fig. 5 shows a schematic of the scanner with three wires. Fig. 7 shows beam density contour plots in the BLIP line. The x-y coupling is clearly seen. This coupling could come from one or more rotated quadrupoles or from a vertical beam offset in a dipole. In the presence of x-y coupling, the usual technique of emittance measurement from profiles at three or more locations will not work.

A. Kponou et al., "A new optical design for the BNL isotope production transport line", Proceedings of the XVII International Linear Accelerator Conference, Geneva, Switzerland, 26-30 August 1996, p. 770.
Projections of charged particle beam current density (profiles) are frequently used as a measure of beam position and size. In conventional practice only two projections, usually horizontal and vertical, are measured. This puts a severe limit on the detail of information that can be achieved. A third projection provides a significant improvement. The algebraic reconstruction technique (ART) uses three or more projections to reconstruct two-dimensional density distributions. At the 200 MeV H- linac, we have used this technique to measure beam density, and it has proved very helpful, especially in helping determine whether any coupling is present in x-y phase space. We will present examples of measurements of current densities using this technique.
The leaky integrate and fire (LIF) neuron is a universally accepted "standard" neuronal model in theoretical neuroscience. Usage of the LIF as a model has allowed obtaining numerous theoretical results, see e.g. . When input impulses are absent, the membrane potential of the LIF model decays exponentially. In computer simulations, this makes inevitable the usage of machine floating point arithmetic. However, floating point calculations can be considerably inaccurate in some cases, see e.g. . These cases do not occur frequently. Therefore, if one studies neuronal firing statistics, where a simulation is repeated many times with slightly different parameters, the errors due to floating point pitfalls can be negligibly small. The problem arises if one needs to check whether two neuronal states obtained in the course of a simulation are identical. This kind of check is necessary, e.g., to find periodic dynamical regimes of a reverberating network, see for an example. In order to check whether a neural net is in a periodic regime, one needs to check whether two states the net passes through at two distinct moments of time are exactly the same. If the states of the individual neurons the net is composed of are described by floating point variables, then this requires checking that two floating point numbers are exactly equal, an operation which is not permitted in programming practice due to the small but inevitable inaccuracy inherent in machine floating point arithmetic. Instead, the accepted option is to check whether the distance between two numbers is less than a "machine epsilon". This does not help to find periodic regimes reliably. Indeed, in this comparison paradigm, two different states may happen to be treated as identical, and this may bring about a fake periodic regime. Taking into account that the rounding mode depends on the operating system and on the machine architecture, it will be difficult to treat in a consistent manner the sustained behavior of a dynamical system prone to instability (as neuronal nets are) if the simulation is made with floating point numbers. This problem is avoided in , where the binding neuron model was used as the neuronal model. The binding neuron states are naturally described by integers, and machine arithmetic of integers is perfectly accurate. This admits a routine check of whether two numbers are exactly equal and, finally, whether two states of a neuronal net are identical, provided those states are expressed in terms of whole numbers exclusively. It would be nice to have the same possibility when a LIF model is used for simulations. In traditional computer simulations of a LIF model driven with an input stream of stimulating impulses, the simulated dynamical system has states described by machine floating point numbers. We denote this dynamical system as fpLIF. The purpose of this paper is to offer an approximation which replaces the fpLIF with intLIF, a dynamical system whose states are expressed in terms of integers.
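As a minimal illustration of the pitfall just described (the values and the epsilon threshold are arbitrary):

....
#include <math.h>
#include <stdio.h>

/* The usual, unreliable comparison paradigm. */
int states_equal(double v1, double v2, double eps)
{
    return fabs(v1 - v2) < eps;
}

int main(void)
{
    double v1 = 0.1 + 0.2;   /* stored as 0.30000000000000004... */
    double v2 = 0.3;
    printf("exactly equal: %d, eps-equal: %d\n",
           v1 == v2, states_equal(v1, v2, 1e-9));
    /* prints 0 and 1: two distinct machine states pass as "identical" */
    return 0;
}
....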
In what follows, we describe the approximation itself and check its quality by running both fpLIF and intLIF with the same stream of input impulses randomly distributed in time. It appears that it is possible to choose the approximation parameters in such a way that the dynamics of fpLIF and intLIF are exactly the same, if considered in terms of spiking moments.

The simplest LIF model is considered. Namely, the neuronal state at moment t is described by the membrane voltage V(t). The resting state is defined as V = 0. Any input impulse advances the membrane potential by h. Between two consecutive input impulses, V decays exponentially: V(t + s) = V(t) e^{-s/τ}, where τ is the membrane relaxation time. Suppose that the threshold value for the LIF neuron is V_0. The neuron fires a spike every time V reaches V_0, and V is reset to zero after that. The set of possible values of V is the interval [0, V_0). The above mentioned properties of the LIF model can be routinely coded with V, h and V_0 declared as floating point quantities. In this case, the possible values of V will be those machine floating point numbers which fall into the interval ([interval]). And this gives the fpLIF dynamical system.

In a numerical simulation of a dynamical system, time is advanced in discrete steps of small fixed duration Δt (the time step). This gives an approximation of the continuous time with the discrete moments t_k = k Δt. Due to this fact, the membrane voltage V also changes in a discrete manner from step to step. As a result, in a single run of the pure decay dynamics (without input stimulation), V will pass through only some discrete values, missing the intermediate ones. Those discrete values can be chosen as an approximation of the continuous interval ([interval]). It is clear that the set of discrete values mentioned depends on the initial value of V. To be concrete, let us choose the approximating set generated by decay steps starting from the threshold, with α = e^{-Δt/τ}:

\[ A = \{\,V_0\,\alpha^{\,n+1}\ |\ n = 0, 1, 2, \dots\,\}\,. \]

This induces a representation of the interval ([interval]) as a union of bins: any value of V falls for some n into the bin [V_0 α^{n+1}, V_0 α^n), and we choose its left end as an approximation for that V. The error of this approximation is less than the bin width V_0 α^n (1 − α). Now, if a neuron has membrane potential V and one intends to describe its consequent dynamics due to pure decay by values from A, the following should be done. The value V should be replaced with the left end of its bin, and the subsequent decay dynamics simply propels V through the values of A. The state of the neuron at each time step can be labeled with an integer in the following way: the state V_0 α^{n+1} obtains the label n, where n = 0, 1, 2, .... Now the decay dynamics can be expressed in terms of the labels. Namely, if at some discrete moment of time a state is labeled as n, then the state at the next moment, t + Δt, is labeled as n + 1. Thus, having in mind the correspondence n ↔ V_0 α^{n+1}, the pure decay dynamics becomes as simple as adding unity to the state label at each time step. For computer simulation purposes it should be noticed that the set A is finite; a minimal sketch of the two label conversions is given below.
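A sketch in C of the two conversions just described, with v0 the threshold and al = α = exp(-Δt/τ), names chosen to match the code fragments used later in the text:

....
#include <math.h>

/* voltage -> first order label n: bin n is [v0*al^(n+1), v0*al^n) */
int v_to_label(double v, double v0, double al)
{
    return (int)(-floor(log(v0 / v) / log(al))) - 1;
}

/* first order label n -> approximating voltage (left end of bin n);
   one time step of pure decay is simply n -> n + 1 */
double label_to_v(int n, double v0, double al)
{
    return v0 * pow(al, n + 1);
}
....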
If the state label is declared in a program as `int`, then A has `INT_MAX`+1 elements, where `INT_MAX` is the largest value an `int`-type variable can represent in the operating system. Thus, with state labels of type `int`, instead of ([bins1]) one has to choose as A the following set of `INT_MAX`+1 elements:

\[ \{0\} \cup \{\,V_0\,\alpha^{\,n+1}\ |\ n = 0, \dots, \mathtt{INT\_MAX}-1\,\}\,, \]

where the value 0 is added to describe the resting state, which is attained just after firing. Consequently, ([intervals1]) should be replaced with the corresponding finite union of bins. In a 64-bit OS, `INT_MAX` = 2147483647. This imposes a limit on the possible duration of the pure decay evolution represented with `int`-type labels. For a time step of Δt = 0.01 ms, `INT_MAX` steps correspond to about 350 minutes. Thus the description of the neuronal state by an `int`-type label fails if a LIF neuron starts, e.g., with V close to V_0 and does not receive excitatory stimulation for longer than about 350 minutes of real time. In real networks, the stimulus-free period of a neuron involved in a useful task cannot be so long. Indirectly, a very crude upper bound for the possible duration of a stimulus-free period can be estimated from the duration of suppression of activity observed in some brain networks; such a suppression can last up to 1000 ms. Based on this value, let us expect that our neuron receives excitatory input impulses of amplitude h with a mean rate higher than 1 Hz. Before the first input impulse, the neuron is in the state "empty" with V = 0. After the first input impulse the state label takes a value which, for realistic h, τ and Δt, is small compared with `INT_MAX`. The label gets an increment of 1 after each time step until the next input impulse comes. The mean waiting time for the next stimulation, expressed in units of Δt, is short on the same scale. This means that at the moment of the next stimulation the state label is still far below `INT_MAX`, and so is the state label after the next input impulse. This suggests that states with large labels will be observed quite rarely, with no chance to attain values close to `INT_MAX`, provided the neuron receives at least moderate stimulation.

Suppose that an input impulse at moment t advances the membrane voltage V by h. If V + h < V_0, then V + h does not belong to A in most cases. One needs to approximate V + h by a suitable value from A, as described in n. [pdd], and then to proceed with the pure decay dynamics expressed in integers, as described in n. [pdd], until the next input impulse. Preliminary tests of this scheme were performed with a LIF neuron stimulated with a Poisson stream of input impulses. The standard floating point LIF (fpLIF) simulation and the integer LIF (intLIF) simulation, as described here, were performed with the same input streams. The threshold manner of triggering gives chances that the firing moments of both models will be the same, if expressed as whole numbers of Δt. The computer simulations performed show that the firing moments of both models are indeed the same for some initial period of time, after which they become different. This is because the approximation error made when approximating the LIF state (voltage) by a value from A builds up too fast with each input impulse. In order to decrease the error, one needs a more precise approximation of the continuum of voltages than that given by the discrete set A. The required approximation can be achieved by introducing second order bins into the set A. Let us choose an integer N and divide each first order bin from ([intervals1int_max]) into N equal second order bins (the bin adjacent to zero need not be subdivided, because we do not expect the state to ever fall into it, except just after firing, when V = 0 exactly).
This gives the representation of the n-th first order bin as a union of N second order bins, where Δ_n denotes the size of a second order bin within the n-th first order bin:

\[ \Delta_n = \frac{V_0\,\alpha^n\,(1-\alpha)}{N}\,. \]

Now, if V falls into the i-th second order bin, we choose the left end of that bin as its approximation. This results in a new set A_N of possible values for V. Any point in A_N, except 0, is labelled with two integers (n, i), where n = 0, 1, ... and i = 0, ..., N − 1. The label (n, i) corresponds to the membrane voltage

\[ V(n,i) = V_0\,\alpha^n\Big(\alpha + \frac{i}{N}(1-\alpha)\Big)\,. \tag{ni2v} \]

From the last expression it is clear that the pure decay evolution for one time step Δt transforms the voltage V(n, i) into V(n+1, i), which means for the integer labels:

\[ (n, i) \to (n+1, i)\,. \tag{ni2ni} \]

Now, if the initial state of the neuron is given as a floating point number V, then one needs to find its approximation in A_N, as described above. The precision of this approximation is bounded by the second order bin size:

\[ |V - V(n,i)| < \frac{V_0\,\alpha^n\,(1-\alpha)}{N}\,. \tag{precn} \]

In the approximation of V with V(n, i), the value of n should satisfy V_0 α^{n+1} ≤ V < V_0 α^n, which gives

\[ n = -\big[\log(V_0/V)/\log\alpha\big] - 1\,, \tag{v2n} \]

where [x] denotes the integer part of x. Now, with n found, we determine the value of i from V_0 α^{n+1} + i Δ_n ≤ V < V_0 α^{n+1} + (i+1) Δ_n, which gives

\[ i = \big[(V - V_0\,\alpha^{n+1})/\Delta_n\big]\,. \tag{vn2i} \]

Now the evolution of the LIF state can be expressed in integer numbers as follows. For an initial value of the voltage V, we find its integer representation/approximation in accordance with eqs. ([v2n]), ([vn2i]). The pure decay evolution then goes as displayed in ([ni2ni]). In order to describe the influence of an input impulse of magnitude h on the state (n, i), we calculate the voltage V in accordance with ([ni2v]). The voltage after receiving the input impulse becomes V + h. If V + h ≥ V_0, then the neuron produces an output impulse and ends in the state "empty" with V = 0. Otherwise, the new integer state (n, i) can be found with ([v2n]), ([vn2i]) applied to V + h instead of V. The procedures given above define the dynamical system intLIF, in which a neuronal state is described by two integers (n, i), with an additional unique state "empty" attained just after firing. The neuronal state is thus described by three variables:

....
int n, i; char empty;
....

If `empty == 1` then the neuron is in its resting state with V = 0. If `empty == 0` then V can be calculated from `n, i` in accordance with ([ni2v]). Equation ([ni2v]) gives the following C code for calculating V from a known integer state `{n, i, empty}`:

....
/* N is the number of second order bins (note: N, not the label n) */
v = empty ? 0 : pow(al, n) * v0 * (al + (double)i / N * (1 - al));
....

where `v0` stands for V_0 and `al` stands for α. After firing, `empty == 1`. Equation ([v2n]) gives the following C code for calculating n:

....
n = -floor(log(v0 / v) / log(al)) - 1;
....

Similarly for equations ([cn]), ([vn2i]); a sketch assembling these pieces into one full time step is given below.
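Putting the pieces together, here is a compact, self-contained sketch (our own assembly of the equations and C fragments quoted above) of one full intLIF time step; N is the number of second order bins, v0 the threshold and al = exp(-Δt/τ).

....
#include <math.h>

typedef struct { int n, i; char empty; } intlif_state;

/* label -> voltage, eq. (ni2v) */
static double label2v(const intlif_state *s, double v0, double al, int N)
{
    return s->empty ? 0.0
                    : pow(al, s->n) * v0 * (al + (double)s->i / N * (1.0 - al));
}

/* voltage -> label, eqs. (v2n) and (vn2i) */
static void v2label(intlif_state *s, double v, double v0, double al, int N)
{
    s->empty = 0;
    s->n = (int)(-floor(log(v0 / v) / log(al))) - 1;
    double lo = pow(al, s->n + 1) * v0;               /* left bin end  */
    double dv = pow(al, s->n) * v0 * (1.0 - al) / N;  /* 2nd-order bin */
    s->i = (int)floor((v - lo) / dv);
    if (s->i >= N) s->i = N - 1;    /* guard against rounding overshoot */
}

/* One time step; 'input' flags an arriving impulse of height h.
   Returns 1 if the neuron fires. */
int intlif_step(intlif_state *s, int input, double h,
                double v0, double al, int N)
{
    if (input) {
        double v = label2v(s, v0, al, N) + h;
        if (v >= v0) { s->empty = 1; return 1; }   /* fire and reset */
        v2label(s, v, v0, al, N);
        return 0;
    }
    if (!s->empty) s->n += 1;       /* pure decay: eq. (ni2ni) */
    return 0;
}
....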
For testing the intLIF simulation paradigm, both the intLIF and fpLIF models were stimulated with the same random stream of impulses. The stream, with exponential distribution of inter-spike intervals (ISI), was generated with a random number generator from the GNU Scientific Library. Generators of three types were used, each with a number of different seeds (Table 1: parameters of the simulating algorithm). As a result of the testing, it was found that for any combination of parameters it is possible to ensure that all firing moments of intLIF and fpLIF are identical by choosing proper Δt and N values. The decisive factor, which determines whether all spiking moments of intLIF and fpLIF coincide, is the precision of approximating the interval of possible voltages, [0, V_0), with discrete values from A_N, as compared to V_0. This relative error, δ, can be estimated based on ([error1]) and ([precn]) as δ ≈ (1 − α)/(α N), which for small Δt gives δ ≈ Δt/(N τ). In the testing performed, it appeared that keeping δ small enough guarantees that the sequences of spiking moments of intLIF and fpLIF are identical. For larger values of δ, differences between the two sequences may appear, characterized by a few misplaced spikes, up to several hundred and more.

In numerical simulations of dynamical systems, adaptive algorithms are normally used, in which the time step value is increased or decreased during the calculation in order to make it faster and more precise. This works perfectly if it is necessary to calculate the system's state at some future moment of time starting from some initial state. To determine periodic regimes in a reverberating network, it is instead necessary to calculate the whole trajectory during some interval of time. In this case, the straightforward way is to approximate that interval with equidistant discrete points as in ([moments]) and calculate the states of the system at those points. Therefore, the paradigm of a fixed time step is used here for a single neuron.

The description of the neuronal state (voltage) with a pair of integers does not exempt the intLIF model from using machine floating point numbers. Indeed, in eqs. ([ni2v]), ([v2n]), ([vn2i]), operations with floating point numbers are explicitly involved. Nevertheless, the pure decay evolution, as described in ([ni2ni]), goes without rounding errors: the values from A_N underlying ([ni2ni]) are always the same for the same (n, i). When an input impulse is received, the calculation of V + h involves a rounding error. The error is immediately cleared while calculating the new (n, i) by means of ([v2n]), ([vn2i]). This allows describing states of the LIF neuron in terms of integers in a consistent manner. Namely, different state labels correspond to different voltages from A_N and vice versa.

We used here the simplest possible LIF model. It seems that the technique offered in nn. [pdd], [ii] above can be extended to more elaborate LIF models, like those described in .

The intLIF paradigm for numerical simulation of the leaky integrate and fire neuron is proposed. In this paradigm, the neuronal state is described by two integers (n, i). The membrane voltage of the LIF neuron can be calculated from (n, i) if required. The LIF state change due to both leakage and stimulating impulses is expressed exclusively in terms of changing integers. The intLIF paradigm is compared with the standard fpLIF simulation paradigm, where the membrane voltage is expressed as a machine floating point number, by stimulating both models with the same random stream of input impulses and registering the spiking moments of both models. It is concluded that the approximation parameters of intLIF can be chosen in such a way that the spiking moments of both models are exactly the same, if expressed as whole numbers of the simulation time step Δt. The description of LIF states by integers gives a consistent numerical model, suitable for situations where exact comparison of states is necessary.

This work was supported by the programs of the NAS of Ukraine "Microscopic and phenomenological models of fundamental physical processes in a micro and macro-world", PK No 0112U000056, and "Formation of structures in quantum and classical equilibrium and nonequilibrium systems of interacting particles", PK No 0107U006886.
The leaky integrate and fire (LIF) neuron is a standard neuronal model used for numerical simulations. The leakage is implemented in the model as an exponential decay of the trans-membrane voltage towards its resting value. This makes inevitable the usage of machine floating point numbers in the course of a simulation. It is known that machine floating point arithmetic is subject to small inaccuracies, which prevent exact comparison of floating point quantities. In particular, it is incorrect to decide whether two states, separate in time, of a simulated system composed of LIF neurons are exactly identical. However, a decision of this type is necessary, e.g., to find periodic dynamical regimes in a reverberating network. Here we offer a simulation paradigm for a LIF neuron in which neuronal states are described by whole numbers. Within this paradigm, the LIF neuron behaves exactly the same way as does the standard floating point simulated LIF, while exact comparison of states becomes correctly defined.

*Keywords.* Leaky integrate and fire neuron; floating point calculations; simulations.
It is well known that standard conservative discretizations of the gas dynamics equations in Eulerian coordinates generally develop non-physical pressure oscillations near contact discontinuities, and more generally near material fronts in multi-component flows. Several cures based on a local non-conservative modification have been proposed. Let us quote for instance the hybrid algorithm derived by Karni in and the two-flux method proposed by Abgrall and Karni in for multi-fluid flows. See also , and the references therein.

We investigate here a Lagrange-projection type method to get rid of pressure oscillations. The basic motivation lies in the fact that these oscillations do not exist in Lagrangian computations. It is then possible to clearly determine which operation in the projection step sparks off the pressure oscillations. As in , , a non-conservative correction is proposed. It is based on a local pressure averaging and a random sampling strategy on the mass fraction, in order to strictly preserve isolated material fronts and obtain a statistical conservation property. Numerical results are proposed and compared with the two-flux method.

We consider a nonlinear partial differential equations model governing the flow of two species, labelled i = 1, 2, separated by a material interface. For instance, we focus on two perfect gases and we set

\[ p_i = (\gamma_i - 1)\,\rho_i e_i\,,\qquad e_i = C_{v,i}\,T_i\,,\qquad i = 1, 2\,, \]

where p_i, rho_i, e_i, gamma_i, T_i and C_{v,i} respectively denote the pressure, the density, the internal energy, the adiabatic coefficient, the temperature and the specific heat of species i. The mixture density is given by rho = rho_1 + rho_2 and we adopt Dalton's law for the mixture pressure, p = p_1 + p_2. We assume in addition that the two species evolve with the same velocity u and are at thermal equilibrium, that is T_1 = T_2 = T. The mixture internal and total energies e and E are defined by rho e = rho_1 e_1 + rho_2 e_2 and E = e + u^2/2. Then, introducing the mass fraction Y = rho_1 / rho, straightforward manipulations yield

\[ p = (\gamma(Y) - 1)\,\rho e\,,\qquad \gamma(Y) = \frac{Y C_{v,1}\gamma_1 + (1-Y) C_{v,2}\gamma_2}{Y C_{v,1} + (1-Y) C_{v,2}}\,. \]

In one space dimension, the model under consideration writes

\[ \partial_t \rho + \partial_x(\rho u) = 0\,,\quad \partial_t(\rho u) + \partial_x(\rho u^2 + p) = 0\,,\quad \partial_t(\rho E) + \partial_x\big((\rho E + p)u\big) = 0\,,\quad \partial_t(\rho Y) + \partial_x(\rho Y u) = 0\,, \tag{systemec} \]

and for the sake of conciseness we set v = (rho, rho u, rho E, rho Y). The flux function finds a natural definition with respect to these conservative unknowns. Let us mention that ([systemec]) is hyperbolic with eigenvalues u - c, u (double) and u + c, where c denotes the sound speed, provided that rho > 0, p > 0 and gamma(Y) > 1. The characteristic field associated with u is linearly degenerate, leading to _contact discontinuities_ or _material fronts_. The two extreme fields are genuinely nonlinear.
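Since this closure enters every scheme below through the pressure law, here is a one-function C sketch of it; the function name and argument order are our own, and the formula is the Dalton-law mixture rule written above.

....
/* Mixture adiabatic coefficient gamma(Y) for two perfect gases sharing
   velocity and temperature: a Cv-weighted average of gamma_1, gamma_2. */
double gamma_mix(double Y, double g1, double g2, double cv1, double cv2)
{
    return (Y * cv1 * g1 + (1.0 - Y) * cv2 * g2)
         / (Y * cv1 + (1.0 - Y) * cv2);
}
/* The mixture pressure then follows as p = (gamma_mix(Y,...) - 1)*rho*e. */
....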
This section is devoted to the discretization of ([systemec]). As already stated, specific attention must be paid to the contact discontinuities to avoid pressure oscillations. With this in mind, we first revisit the "two-flux method" proposed by Abgrall and Karni, and then present a new numerical procedure based on a Lagrangian approach and a random sampling strategy. Comparisons will be proposed in section [sec:numexp].

Let us introduce a time step Δt and a space step Δx, assumed constant for simplicity. We define the mesh interfaces x_{j+1/2} = (j + 1/2)Δx for j in Z, and the intermediate times t^n = nΔt for n in N. In the sequel, v_j^n denotes the approximate value of v at time t^n on the cell C_j = [x_{j-1/2}, x_{j+1/2}). For n = 0 and j in Z, we set v_j^0 equal to the average of the initial condition v_0 over the cell C_j.

The aim of this subsection is to review the two-flux method proposed by Abgrall and Karni. Let us first recall that pressure oscillations do not systematically appear in _single-fluid_ computations. Abgrall and Karni therefore propose to replace any conservative _multi-fluid_ strategy by a non-conservative approach based on the definition of two _single-fluid_ numerical fluxes at each interface. We first recall the algorithm in detail and then suggest a slight modification in order to lessen the conservation errors. This strategy will be used as a reference to assess the validity of the Lagrangian strategies proposed in the next subsection.

*The original algorithm.* Let us consider a two-point numerical flux function g(v_l, v_r) consistent with the exact flux. The two-flux method updates the sequence {v_j^n} in two steps, under a usual CFL restriction.

_First step: evolution of rho, rho u and rho E._ Two interfacial numerical fluxes g^-_{j+1/2} and g^+_{j+1/2} are defined at each interface: both are evaluated on the neighbouring states v_j^n and v_{j+1}^n, but for g^-_{j+1/2} the mass fraction of both states is frozen to Y_j^n, while for g^+_{j+1/2} it is frozen to Y_{j+1}^n. In some sense, the mass fraction is thus assumed to be the same on both sides of each interface, since Y_j^n is used for the computation of g^-_{j+1/2} and Y_{j+1}^n for g^+_{j+1/2}. At last, the fluxes g^+_{j-1/2} and g^-_{j+1/2} are used to update the conservative unknowns rho, rho u and rho E on the cell C_j ([mgfs1a]). Let us note that rho Y is not concerned with ([mgfs1a]): Y is naturally kept constant in this step, and rho Y is simply recomputed from the updated density and the old mass fraction.

_Second step: evolution of rho Y._ In this step, rho Y is evolved in a conservative way using the numerical flux function, while the values of rho, rho u and rho E are kept unchanged.

It is worth noticing that the two-flux method is not conservative on rho and rho u, since the fluxes g^- and g^+ defined at each interface are different as soon as Y_j^n and Y_{j+1}^n differ. It is not conservative on rho E either, but by construction the two-flux method is conservative on rho Y.

*The associated quasi-conservative algorithm.* It is actually clear from , and the references therein, that in standard conservative discretizations of ([systemec]) only the update formula of the total energy is responsible for the pressure oscillations. We are then tempted to propose a quasi-conservative variant of the two-flux method such that only the total energy is treated in a non-conservative form. For all j, we simply replace ([mgfs1a]) by its conservative counterpart for rho and rho u, keeping the two-flux treatment for rho E only. The second step is unchanged. A code sketch of the two-flux interface treatment is given below.
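The first step above can be summarized in code. The sketch below is ours, with a hypothetical data layout and a user-supplied single-fluid flux; the essential point is that the interface flux is evaluated twice, each time with the mass fraction of both states frozen to that of the cell being updated, so that the pressure law sees a single fluid.

....
#include <string.h>

enum { RHO, MOM, ENE, RHOY, NV };            /* conserved fields */
typedef void (*flux_fn)(const double *vl, const double *vr, double *f);

/* Two-flux treatment at interface j+1/2. 'single_fluid_flux' is any
   consistent two-point flux computing the pressure from the conserved
   variables (including RHOY) through gamma(Y). */
void two_flux_interface(const double *vj, const double *vj1,
                        flux_fn single_fluid_flux,
                        double *f_for_cell_j, double *f_for_cell_j1)
{
    double vl[NV], vr[NV];
    double Yj  = vj[RHOY]  / vj[RHO];
    double Yj1 = vj1[RHOY] / vj1[RHO];

    /* flux seen by cell j: both states carry Y_j */
    memcpy(vl, vj, sizeof vl); memcpy(vr, vj1, sizeof vr);
    vr[RHOY] = vr[RHO] * Yj;
    single_fluid_flux(vl, vr, f_for_cell_j);

    /* flux seen by cell j+1: both states carry Y_{j+1} */
    memcpy(vl, vj, sizeof vl); memcpy(vr, vj1, sizeof vr);
    vl[RHOY] = vl[RHO] * Yj1;
    single_fluid_flux(vl, vr, f_for_cell_j1);
}
....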
In this section, we propose a Lagrangian approach for approximating the solutions of ([systemec]). The general idea is to first solve this system in Lagrangian coordinates, and then to come back to an Eulerian description of the flow with a projection step. Under its classical conservative form, the Lagrange-projection method generates spurious oscillations near the material fronts. In order to remove these oscillations, we propose to adapt the projection step (only). We begin with a description of the Lagrangian step and then recall, for the sake of clarity, the usual conservative projection step (see for instance ). Again, a usual CFL restriction is used.

_The Lagrangian step._ In this step, ([systemec]) is written in Lagrangian coordinates and solved by an acoustic scheme (see for instance ), in which an interface velocity u*_{j+1/2} and an interface pressure p*_{j+1/2} are computed from the neighbouring states together with a local approximation of the Lagrangian sound speed ([defupintlp]); other definitions may be found for instance in . (A sketch of the standard expressions is given at the end of this subsection.) In this step the grid points move at velocity u*, so that the updated values define an approximation of v on a Lagrangian grid with mesh interfaces x_{j+1/2} + Δt u*_{j+1/2}.

_The usual projection step._ The aim of this step is to project the solution obtained at the end of the first step onto the Eulerian grid defined by the mesh interfaces x_{j+1/2}. _Usually_, the choice is made to project the conservative vector v in order to obtain a conservative Lagrange-projection scheme (see again ); this amounts to averaging the Lagrangian solution over each Eulerian cell ([moyl2phi1]).

*What is wrong with this scheme?* The Lagrange-projection approach allows one to precisely reveal the operation that makes the material fronts necessarily damaged by pressure oscillations. Let us indeed consider an isolated material front with uniform velocity and pressure profiles, u_j^0 = u > 0 and p_j^0 = p for all j, while the mass fraction jumps from one constant value to another across the front. The density is also set to be uniform for simplicity. We first observe that this profile is clearly preserved in the Lagrangian step, since by ([defupintlp]) we have u*_{j+1/2} = u and p*_{j+1/2} = p. Then the projection procedure ([moyl2phi1]) leaves the density and velocity profiles uniform, so that after the first time iteration the velocity profile is still free of spurious oscillations. At last, using the property that this velocity is constant and positive, and focusing for instance on the cell crossed by the front, ([moyl2phi1]) gives for the total energy a convex combination of the left and right total energies; the pressure deduced from it, after easy calculations, mixes the two equations of state. At this stage, there is no reason for this pressure to equal p: from the very first time iteration, a pressure oscillation is created. As an immediate consequence, the velocity profile will not remain uniform in the next time iteration, and the numerical solution is damaged for good.

It is then clear that the way the pressure is updated in the projection step (only) is responsible for the spurious oscillations in a usual conservative Lagrange-projection scheme. We propose to modify this step. As for the two-flux method, the idea is to give up the conservation property in order to keep the pressure uniform across material fronts. Let us emphasize that the Lagrangian step is unchanged.

_The quasi-conservative p-projection step._ First of all, rho, rho u and rho Y still evolve according to ([moyl2phi1]), so that the algorithm remains conservative on these variables. We keep on using ([moyl2phi1]) for rho E only for the cells j not in a subset D of Z defined below. On the contrary, for j in D the pressure (instead of the total energy) is averaged ([newp]), and the total energy is then recomputed from the averaged pressure, density, velocity and mass fraction.
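A minimal sketch of the interface values used in the Lagrangian step above, assuming the standard acoustic-solver expressions with a single local Lagrangian sound speed a (for instance a = max(rho_j c_j, rho_{j+1} c_{j+1})); the names are ours.

....
/* Standard acoustic (Suliciu-type) interface values for the Lagrangian
   step: ustar drives the interface motion, pstar the momentum and
   energy exchange. */
void acoustic_interface(double uj, double pj, double uj1, double pj1,
                        double a, double *ustar, double *pstar)
{
    *ustar = 0.5 * (uj + uj1) - 0.5 * (pj1 - pj) / a;
    *pstar = 0.5 * (pj + pj1) - 0.5 * a * (uj1 - uj);
}
....

Note that for the isolated contact profile discussed above (uniform u and p), these expressions return ustar = u and pstar = p, which is precisely why the Lagrangian step preserves the front.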
*Definition of D.* To our knowledge, the idea of averaging the pressure in a Lagrange-projection strategy first appeared in . This way of proceeding is clearly sufficient to remove the pressure oscillations near the material fronts if it is applied in every cell. However, averaging the pressure instead of the total energy for all j gives a non-conservative scheme that is expected to provide discontinuous solutions violating the Rankine-Hugoniot conditions; see for instance Hou and LeFloch (note however that here the Lagrangian system associated with ([systeme]) is actually treated in a _conservative_ form, while the pressure averaging takes place in the projection step only). This was confirmed in practice when considering solutions involving shocks of large amplitude. Then, in order to lessen the conservation errors, we propose to localize the non-conservative treatment around the contact discontinuities, taking for D the set of cells where the mass fraction varies by more than a given threshold. Following Karni, a small fixed threshold is used in practice.

_The quasi-conservative p-projection step with sampling._ The quasi-conservative p-projection step will be seen in the next section to properly compute large amplitude shock propagations. Localizing the averaging process of the pressure nevertheless prevents the method from keeping strictly uniform the velocity and pressure profiles of an isolated material front; see *Test A* below. Indeed, since D is generally a _strict_ subset of the cells affected by the front, due to the numerical diffusion on Y, a usual conservative treatment is still used on some of these cells as soon as they do not belong to D. This is sufficient to create pressure oscillations. In order to cure this problem, we propose to get rid of the numerical diffusion on Y, so as to enforce the non-conservative treatment ([newp]) across an isolated material front. This objective is achieved by replacing the conservative updating formula ([moyl2phi1]) for rho Y with a random sampling strategy applied to Y (see also and for similar ideas). More precisely, we consider an equidistributed random sequence (a_n) in (0, 1) (following Colella, we take in practice the celebrated van der Corput sequence), and at each time step the new value of Y in a cell is sampled from the Lagrangian solution at the position selected by a_n, instead of being averaged, so that the conservation of rho Y now holds only statistically.
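For completeness, here is a compact C implementation of the base-2 van der Corput sequence mentioned above; this is the textbook bit-reversal construction, and the comment indicates how the sampled value is used, following the Glimm-type strategy just described.

....
/* Base-2 van der Corput sequence (bit reversal): n = 1, 2, 3, ...
   gives 0.5, 0.25, 0.75, 0.125, ..., equidistributed in (0, 1). */
double van_der_corput(unsigned int n)
{
    double x = 0.0, base = 0.5;
    while (n) {
        x += base * (double)(n & 1u);
        n >>= 1;
        base *= 0.5;
    }
    return x;
}
/* In the sampling projection step, a_n = van_der_corput(n) is compared
   with the fractions of the Eulerian cell covered by the neighbouring
   Lagrangian states, to decide which mass fraction Y to copy
   (Glimm-type sampling) instead of averaging Y. */
....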
We propose two numerical experiments associated with Riemann initial data. The left and right states are denoted v_l and v_r, and the initial discontinuity is located at x = 0. In the first simulation (*Test A*), we consider the propagation of an isolated material interface. The second simulation (*Test B*) develops a strong shock due to a large initial pressure ratio. In both cases the solutions are plotted at a fixed final time.

We observe that the two-flux method and the Lagrangian methods are in agreement with the exact solutions and give similar results. As expected, note that the Lagrangian approach without sampling does not strictly maintain a uniform pressure profile for *Test A*. Note also that the mass fraction profile is sharp when random sampling is used. At last, the relative conservation error (see for instance for more details) for the Lagrangian approach with random sampling is actually less important and swings around 0.2% only.

We have investigated a Lagrange-projection approach for computing material fronts in multi-fluid models. We obtain results similar to those of the two-flux method, with less important conservation errors. Let us mention that other strategies, like for instance the one consisting in a local random sampling of the full state (instead of Y only) in the Lagrangian step, have been investigated; the results are not reported here.

*Acknowledgements.* The authors are grateful for helpful discussions and exchanges with P. Helluy and F. Lagoutière.

Abgrall R., How to prevent pressure oscillations in multicomponent flow calculations: a quasi-conservative approach, J. Comput. Phys., vol. 125(1), pp. 150-160 (1996).
Abgrall R. and Karni S., Computations of compressible multifluids, J. Comput. Phys., vol. 169(2), pp. 594-623 (2001).
Chalons C. and Coquel F., Capturing infinitely sharp discrete shock profiles with the Godunov scheme, Proceedings of the Eleventh International Conference on Hyperbolic Problems, S. Benzoni-Gavage and D. Serre (eds), Springer, pp. 363-370 (2008).
This paper reports investigations on the computation of material fronts in multi-fluid models using a Lagrange-projection approach. Various forms of the projection step are considered. Particular attention is paid to the minimization of conservation errors.
The proximity force approximation (PFA) has been widely used in many areas of physics as a tool to compute the total force between smooth surfaces at short distances, the scale of that smoothness being set by the distance between them. Although that total force may result from very different underlying microscopic mechanisms, the PFA is essentially the same in all of them, since its basis is geometrical. The main idea behind this approximation was introduced by Derjaguin in 1934, when studying the effect of contact deformations on the adhesion of particles. In that context, the so-called Derjaguin approximation (DA) has, as its main ingredient, the (assumed) knowledge of E_pp(z), the interaction energy per unit area for two (infinite) plane parallel surfaces separated by a distance z. The DA tells us that the interaction energy between two _curved_ surfaces is then

\[ U_{\rm DA}(a) \;=\; 2\pi\,\frac{R_1 R_2}{R_1+R_2}\int_a^\infty dz\,E_{pp}(z)\,, \tag{da} \]

where a is the distance between the surfaces, R_1 and R_2 their respective curvature radii evaluated at the point of closest approach, and R_1, R_2 much larger than a. The same approximation can alternatively be written in terms of the force, F_DA(a) = -dU_DA/da, as follows:

\[ F_{\rm DA}(a) \;=\; 2\pi\,\frac{R_1 R_2}{R_1+R_2}\,E_{pp}(a)\,. \tag{force} \]

The usual derivation of this approximation relies upon the rather reasonable assumption that the interaction energy between the surfaces can be approximated by the PFA expression:

\[ U_{\rm PFA} \;=\; \int_\Sigma d\sigma\,E_{pp}\big(d(\sigma)\big)\,, \tag{pfa} \]

where the integration is performed over just one of the surfaces, or even over an intermediate mathematical surface lying between the two physical surfaces, and d(sigma) is the local distance between them. Assuming that the surfaces are gently curved, and approximating them by portions of the osculating spheres of radii R_1 and R_2 at the point of closest approach, one arrives at the DA. The DA has been used to compute forces in many different physical situations: colloidal and macromolecular phenomena, nuclear physics, electrostatic forces, van der Waals interactions, Casimir forces, etc.
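As a concrete illustration of the force version of the DA just written (our own worked example): for the non-retarded van der Waals interaction between half-spaces one has E_pp(z) = -A/(12 π z²), with A the Hamaker constant, so that

\[ F_{\rm DA}(a) \;=\; 2\pi\,\frac{R_1 R_2}{R_1+R_2}\,E_{pp}(a) \;=\; -\,\frac{A}{6\,a^{2}}\,\frac{R_1 R_2}{R_1+R_2}\,, \]

which is the familiar Hamaker result for two spheres at close separation.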
The DA has been generally assumed to be an uncontrolled approximation, presumably working well for close and gently curved surfaces. Its major drawback was, perhaps, the absence of a procedure to assess the importance of the next-to-leading-order (NTLO) corrections since, by construction, the DA is not obtained as the leading term of any expansion. In spite of that drawback, surprisingly few works have been devoted to implementing a systematic improvement of the DA, which could give the PFA a more solid foundation and a way to improve it. Recently, we presented a new approximation scheme, the so-called derivative expansion (DE), originally introduced within the context of the Casimir effect, for the calculation of the interaction energy between two surfaces, one of them flat and the other slightly curved. This approximation has been shown to be a natural extension of the PFA, and it has proven to be useful in rather different situations, not just for Casimir effect calculations. The DE approach provides a systematic way of introducing the DA and the PFA and, under some circumstances, also of evaluating the NTLO corrections. In this paper, our principal aim is to formulate and derive the DE in a quite general form, so that previous (and hopefully new) applications of it may be regarded as particular cases.

To that end, we shall present a derivation of the PFA and its NTLO correction from first principles, for the particular case of a curved surface described by a function psi in front of a plane at x_3 = 0. We shall also interpret the result of this derivation in physical/geometrical terms. The surfaces involved shall have quite different physical realizations, depending on the system considered. Indeed, in some examples they may correspond to two physical objects with very small widths, interacting as a result of the presence of electric charge or dipole layers on them. In other cases, they may instead correspond to _interfaces_ between different material media. Besides, the nature of the 'microscopic' interaction may also have quite different origins: it may be electrostatic, mediated by a short-range force, or even be the result of a more involved phenomenon, like the Casimir effect. As stems from the different nature of the examples mentioned above, no assumption will be made about whether the interaction between the surfaces proceeds from a microscopic interaction that satisfies a superposition principle or not. After presenting the general derivation, we shall make contact with other efforts made in the literature towards generalizing and improving the DA, like the surface element integration (SEI), or surface integration approach (SIA), introduced in the context of colloidal physics, as well as the different PFAs used in nuclear physics.

This paper is organized as follows: in section [sec:derivation] we provide a first-principles derivation of the DE, and then a construction of it using physical arguments. We also consider some properties of the general formulae so obtained, and comment on the form of the higher order terms. Then in section [sec:other] we discuss, from the point of view of the DE, some generalizations of the DA that have been proposed in the literature, mostly in the context of colloidal and nuclear physics. Based on a formal analogy with the Casimir interaction between almost transparent media, we make contact with the SEI approach used in colloidal physics. In section [sec:appl] we briefly review results obtained during the last years using the DE, along with discussions of particular examples. Finally, section [sec:concl] contains our conclusions.

Let us first set up the problem. Regardless of the interaction considered, the geometry of the system shall be assumed to be as follows: one of the surfaces, L, will be a plane, which (by a proper choice of coordinates) shall be described by the equation x_3 = 0. Denoting by x_1, x_2 and x_3 the set of three orthogonal Cartesian coordinates, the other surface, R, is assumed to be describable by a single Monge patch; namely, it can be parametrized in terms of just a single function psi(x_parallel), with x_parallel = (x_1, x_2), such that x_3 = psi(x_parallel). To begin with, we note that the interaction between the two surfaces shall be a functional F[psi]. Writing psi = a + eta, we expand F[psi] in powers of eta:

\[ F[a+\eta] \;=\; {\mathcal S}\,{\mathcal F}_0(a) + \sum_{n\geq 1}\int \frac{d^2k_\parallel^{(1)}}{(2\pi)^2}\cdots\frac{d^2k_\parallel^{(n)}}{(2\pi)^2}\,(2\pi)^2\,\delta^{(2)}\!\big(k_\parallel^{(1)}+\cdots+k_\parallel^{(n)}\big)\, h^{(n)}\big(k_\parallel^{(1)},\dots,k_\parallel^{(n)}\big)\, \widetilde{\eta}(k_\parallel^{(1)})\cdots\widetilde{\eta}(k_\parallel^{(n)})\,, \tag{genexp} \]

where S denotes the area of the plane and F_0(a) the interaction energy per unit area between parallel planes at distance a,
and where the form factors h^(n) could be computed by using standard perturbative techniques. They may depend on a, although we do not write this dependence explicitly in order to simplify the notation. Now we see that, for a smooth function eta, the Fourier transform will be peaked at zero momentum; therefore, inside eq. ([genexp]) we can approximate h^(n)(k^(1), ..., k^(n)) by h^(n)(0, ..., 0). As a consequence, ([genexp]) reduces to a low-momentum form ([low]). In principle, one could evaluate the form factors at zero momentum explicitly. However, there is a shortcut that allows one to obtain all of them at once: for a constant eta, the interaction energy is simply given by eq. ([low]) with the replacement of the Fourier transform of eta by (2π)² eta δ^(2)(k). But for this particular case, F[a + eta] is just the functional for parallel plates separated by a distance a + eta. Therefore, in this low-momentum approximation, the perturbative series can be summed up, the result being:

\[ F[\psi] \;\simeq\; \int d^2{\mathbf x_\parallel}\,{\mathcal F}_0\big(a+\eta({\mathbf x_\parallel})\big) \;=\; \int d^2{\mathbf x_\parallel}\,{\mathcal F}_0(\psi)\,, \]

which is just the PFA. The straightforward calculation above has shown that, for the class of geometries considered in this paper, the PFA can be derived from first principles by performing a resummation of the perturbative calculation for the case of almost flat surfaces. The PFA will be well defined as long as the form factors have a finite limit at zero momentum. This procedure also suggests that the PFA can be improved by considering the NTLO in the expansion of the form factors. We will assume that the form factors can be expanded in powers of the momenta up to order two. This is a nontrivial assumption: depending on the details of the interaction considered, the low momentum behavior of the form factors could include non-analyticities. When that is not the case, we can expand each form factor around zero momentum, keeping terms quadratic in the momenta. Symmetry considerations allow us to simplify that expansion. Indeed, rotational invariance implies that the form factors depend only on the scalar products of the momenta; moreover, they must be symmetric under the interchange of any two momenta. As a consequence,

\[ h^{(n)}\big(k^{(1)},\dots,k^{(n)}\big) \;\simeq\; h^{(n)}(0,\dots,0) \,+\, c^{(n)}_1 \sum_i |k^{(i)}|^2 \,+\, c^{(n)}_2 \sum_{i\neq j} k^{(i)}\!\cdot k^{(j)} \]

for some coefficients c_1^(n) and c_2^(n). Inserting this expansion into eq. ([genexp]), and performing integrations by parts, we find the following correction to the PFA:

\[ F_2[\psi] \;=\; \int d^2{\mathbf x_\parallel}\,\Big[\sum_{n\geq 2} d^{(n)}\,\eta^{\,n-2}\Big]\,|\nabla\eta|^2\,, \tag{orden2} \]

where the coefficients d^(n) are linear combinations of c_1^(n) and c_2^(n). The subindex 2 indicates the number of derivatives. To complete the calculation, the next step is to compute the sum in eq. ([orden2]). This can be done by evaluating the correction for the particular case eta = eta_0 + epsilon(x_parallel), with constant eta_0, and expanding up to second order in epsilon.
For this particular case,

\[ F_2[\psi] \;=\; \int d^2{\mathbf x_\parallel}\,\Big[\sum_{n\geq 2} d^{(n)}\,\eta_0^{\,n-2}\Big]\,|\nabla\epsilon|^2\,. \tag{orden2eps} \]

Once more, the resummation can be performed, in this case by considering the usual perturbative evaluation of the interaction energy up to second order in epsilon. This calculation will depend on the particular interaction considered, and from the result one can obtain the series above, which we will denote by Z(eta_0):

\[ Z(\eta_0) \;\equiv\; \sum_{n\geq 2} d^{(n)}\,\eta_0^{\,n-2}\,. \tag{zeta2} \]

More explicitly, making the replacement of eta_0 by psi in the above equation, we arrive at

\[ F_2[\psi] \;=\; \int d^2{\mathbf x_\parallel}\,Z(\psi)\,|\nabla\psi|^2\,, \tag{fin} \]

which is the NTLO correction to the PFA. This concludes the derivation of the PFA and its first correction, which reads

\[ F[\psi] \;=\; \int d^2{\mathbf x_\parallel}\,\big[\,{\mathcal F}_0(\psi) \,+\, Z(\psi)\,|\nabla\psi|^2\,\big]\,, \tag{eq:de} \]

where F_0 is determined from the known result for the parallel-surfaces geometry, while Z can be computed perturbatively. Note that Z(psi) can be evaluated in practice just by setting eta_0 to psi in eq. ([zeta2]). Higher orders may be derived by a natural extension of the procedure. It also becomes apparent that, for the expansion to be well defined, the analytic structure of the form factors appearing in the perturbative expansion around flat surfaces is relevant. In particular, the existence of zero-momentum singularities can certainly render the DE non-applicable; on the other hand, this should be expected on physical grounds, since those singularities imply that the functional cannot be approximated, in coordinate space, by a single integral of a local density. Physically, it is a signal that the interaction becomes essentially nonlocal (see section [ssec:neumann]); indeed, if written in coordinate space such terms would require more than one integral over the spatial coordinates.

We will now present a physical construction of the DE, based on a procedure whereby one attempts to improve the PFA. The expression for the PFA does not involve derivatives of psi, since one is using parallel planes to obtain the energy density, and the corresponding functional is characterized by a single number, their distance. The DE may then be introduced as a rather natural way to improve the PFA, simply by constructing an _improved surface density_. The improvement can be implemented by using the density that results from using, at each point of the surface, a second order approximation to it. Namely, the curved surface shall be approximated (locally) by a surface that makes second order contact with it, i.e., that has the same first and second derivatives as psi. In other words, rather than evaluating F_0 at a constant, we shall consider evaluating it at an approximant q, where q is a quadratic function of its argument:
for the revolution paraboloid , we may evaluate both terms exactly , what yields : & = & \frac{\pi}{2 } a_0 \ , \frac{\sigma^2}{a^3 } \nonumber\\ u_2[\psi ] & = & 2\pi a_2 \frac{1}{a } \;.\end{aligned}\ ] ] since is proportional to the radius of curvature , the ratio , and therefore the second order term is much smaller than the zeroth order one .this analysis can be extended to the higher order terms .indeed , a term with ( ) derivatives has in the integrand a factor times a polynomial with derivatives and powers of in the numerator .thus , under scaling : \;=\ ; \lambda^{2n -2 } u_{2n}[\psi ] \;.\ ] ] thus , for the quadratic that we are considering , of course , we have assumed the expansion to be well defined to those orders ; that is something which , as we shall see , depends on the properties of the system .let us , for the sake of illustration , briefly comment here on the construction of the fourth order term ( there is no third - order one ) , in a similar scheme to the one considered in [ ssec : imp ] .to derive the improved -density , one now has to consider : where : to find the fourth order term in derivatives , one should collect in , terms which come from its first order contribution in , the second order in , the fourth order in , and also first order in and , or second in and first in . thus receives five different contributions , ( ) .all those contributions may be evaluated in fourier space , where they can be expressed in terms of derivatives at zero momentum of the corresponding functional derivative ; besides , those functional derivatives , being evaluated at , are translation invariant .it is possible to check that term produces a constant , and it can be ignored at this order by a redefinition of .the remaining terms have the structure : where the coefficients are constant tensors , which moreover may be further simplified by using rotational invariance .all the terms carry a factor of , which appear because of momentum conservation .we conclude by analyzing the tensor corresponding to one of those contributions : .it can be obtained as follows : {p \to 0 } \;,\ ] ] and rotational invariance means that it has the form : where is an -dependent constant , determined by the term of order four in an expansion of the kernel at low momenta : {p \to 0 } .\ ] ] when used to construct the de to fourth order , this term shall produce : \;.\ ] ] a similar approach allows one to derive all the other contributions .in this section we will discuss some generalizations of the da proposed in the context of nuclear and colloidal physics , from the point of view of the de .the application of the da in nuclear physics started with a celebrated paper by blocki et al .in that paper , the authors rediscovered the da in a rather different context , and applied it to compute the interaction between nuclei .the starting point of ref . is a derjaguin - like formula for the interaction energy between surfaces .that formula incorporates , as an essential ingredient , what the authors called ` universal function ' , the interaction energy between planar surfaces , which the authors calculated using a thomas - fermi approximation .to proceed , let us describe the kind of system being considered , and at the same time introduce some notation : let us consider two surfaces and , plus an intermediate mathematical surface used to parametrize the physical ones . 
for smooth and slightly curved surfaces, we expect the interaction energy to be well - described by the pfa , as in eq.([pfa ] ) .one can now rewrite the surface integral above , by introducing the set of level curves for in : they are closed curves that correspond to a fixed distance between and . denoting by the area between two curves on corresponding to distances and , the pfa expression for the interaction energy can be written as a one dimensional integral we then assume the surfaces to be gently curved , so that just one patch is sufficient to describe them , and besides we use cartesian coordinates on .the distance between and then becomes a function , and is constant . performing a second order taylor expansion of around the point of closed approach , which corresponds to a distance : where and are the radii of curvature of the surface defined by , one obtains the da of eq.([da ] ) . in a subsequent paper , a generalization of the pfa was introduced .the starting point was again eq.([pfanucj ] ) , but now the surfaces could have large curvatures , as long as they remained almost parallel locally . the main difference introduced by the weaker assumptions about the surfacesis that now the jacobian may become a non - trivial function of , rather than being just a constant . using the linear expansion can be shown that the force between surfaces becomes : where the second term is a correction to the usual da .note , however , that from a conceptual point of view , this is not a generalization of the da , since the starting point is the same as before : . what the previous formula does is to provide an explicit formula for the surface integral appearing in the pfa , which now involves a new geometrical object , the jacobian .in other words , eq.([eq : correction ] ) is still determined by the energy density for parallel plates , not including corrections to that object , like the ones appearing in the de .that kind of correction depends on the geometry and on the nature of the interaction .note that in nuclear physics there is an additional complication to deal with : even for two infinite half - spaces separated by a gap , the interaction energy is not exactly known .different approximations have been used to compute that , and they give rise to different pfa s .for a recent review see ref . .the methods sei and sia , have been introduced within the context of colloidal physics , and constitute another generalization of the da . while based on different physical assumptions , the final result in both cases is the same .it may be introduced as follows : consider a compact object in front of a plane , denoting the normal coordinate to the plane that points towards the compact object .the sei approximation to the interaction energy is then given by where is the unit outward normal to a surface element in the compact object .when the compact object can be thought of as delimited by two surfaces , one facing the plane and the other away from it , the sei approximation consists in computing the difference between the usual pfa for each surface .it may appear surprising , at first glance , that the two surfaces contribute with different signs to the interaction energy .however , as we will see , this is related to the fact that the sei becomes exact for almost transparent bodies , where the interaction comes from volumetric pairwise contributions . 
In the colloidal physics literature, the SEI method is justified by assuming that there is a pressure on the compact object, which should be integrated over the closed surface surrounding it in order to find the total force. Alternatively, it has been shown that eq. ([sei]) becomes exact when the interaction between the bodies can be obtained as the result of pair potentials between their constituents.

In order to understand, and reinterpret, this formula within the context of Casimir physics, and at the same time to provide a systematic way of evaluating the NTLO, we shall use a rather simple example. Let us consider a quantum scalar field in the presence of two media: one of them, denoted by L, corresponding to the half-space x_3 ≤ 0, while the other, R, is defined in terms of two functions psi_1 and psi_2, occupying the region psi_1(x_parallel) ≤ x_3 ≤ psi_2(x_parallel) (see Fig. 1). Besides, we shall assume that the field propagation inside each medium can be represented by the presence of a non-vanishing interaction term. Assuming also that outside the L and R regions there is vacuum, the Euclidean action adopts the following form:

\[ {\mathcal S}[\varphi] \;=\; {\mathcal S}_0[\varphi] + {\mathcal S}_I[\varphi]\,, \]

where

\[ {\mathcal S}_0[\varphi] \;=\; \frac{1}{2}\int d^4x\,\big(\partial\varphi\big)^2 \]

is the free action for the fluctuating vacuum field, while S_I contains two terms, corresponding to the L and R regions, respectively:

\[ {\mathcal S}_I[\varphi] \;=\; {\mathcal S}_L[\varphi] + {\mathcal S}_R[\varphi]\,, \]

with

\[ {\mathcal S}_L[\varphi] \;=\; \frac{\lambda_L}{2}\,\int d^3x_\parallel \int_{-\infty}^0 dx_3\,{\mathcal L}_I(\varphi)\,, \]

and

\[ {\mathcal S}_R[\varphi] \;=\; \frac{\lambda_R}{2}\,\int d^3x_\parallel \int_{\psi_1({\mathbf x_\parallel})}^{\psi_2({\mathbf x_\parallel})} dx_3\,{\mathcal L}_I(\varphi)\,, \]

where x_parallel collects the imaginary time and the two spatial coordinates parallel to the plane, and L_I is a local Lagrangian. In what follows, we shall assume that the R medium is semi-transparent, so that the corresponding term can be treated perturbatively, to the first non-trivial order in lambda_R. No assumption will be made about the L term. The vacuum energy shall be a functional of the two functions psi_1, psi_2, and may be written in terms of the vacuum amplitude as follows:

\[ E[\psi_1,\psi_2] \;=\; -\lim_{T\to\infty}\Big(\frac{1}{T}\,\log{\mathcal Z}\Big)\,, \]

where T is the extent of the imaginary time coordinate and Z is the functional integral of the exponential of minus the Euclidean action. Expanding the functional integral in powers of lambda_R, the lowest nontrivial contribution reads

\[ E[\psi_1,\psi_2] \;=\; E[\psi_1] - E[\psi_2]\,, \]

with

\[ E[\psi_{1,2}] \;=\; \lim_{T\to\infty}\Big[\frac{1}{T}\,\big\langle S_{1,2}[\varphi]\big\rangle_L\Big]\,, \]

where we have introduced

\[ S_{1,2}[\varphi] \;=\; \frac{\lambda_R}{2}\,\int d^3x_\parallel \int_{\psi_{1,2}({\mathbf x_\parallel})}^{\infty} dx_3\,{\mathcal L}_I\,, \]

and the angular brackets with subindex L denote the functional average with the weight defined by S_0 + S_L. The crucial point is to observe that all the dependence on the shape of the surfaces is in the lower integration limit of S_{1,2}. As a consequence, we can write

\[ E[\psi_1,\psi_2] \;=\; \int d^2{\mathbf x_\parallel}\,\big(E_\parallel(\psi_1) - E_\parallel(\psi_2)\big)\,, \tag{lin} \]

where E_parallel(psi) is the interaction energy per unit area between half-spaces separated by a gap of width psi. The physical interpretation of this result is that, E_parallel being the interaction between half-spaces, in order to obtain the interaction energy for the configuration described by psi_1 and psi_2, one must subtract from E[psi_1] the contribution coming from points with x_3 > psi_2. This 'linearity' is valid only to first order in lambda_R. Eq. ([lin]) coincides with the result obtained using the SEI.
in order to illustrate these points, let us assume that \mathcal{l}_i(\varphi) = \varphi^2. in the strong coupling limit for \lambda_l, the field satisfies dirichlet boundary conditions at x_3 = 0. an explicit calculation yields

e[\psi_1, \psi_2] = \frac{\lambda_r}{32\pi^2} \int d^2\mathbf{x}_\parallel \Big[ \frac{1}{\psi_1(\mathbf{x}_\parallel)} - \frac{1}{\psi_2(\mathbf{x}_\parallel)} \Big],

which is the difference between the pfa energies associated to the surfaces \psi_1 and \psi_2. it can be shown explicitly, however, that the property of the energy being the difference between the ones corresponding to each surface is lost at the next order in \lambda_r. to summarize this section, we have shown that the sei gives an exact result for almost transparent media, where one can use the superposition principle and consider the interaction as the sum of pairwise potentials. although this was already mentioned in ref. , the model presented here suggests how to go beyond the leading order, and provides a systematic way of evaluating the interaction energy for dilute media. finally, we would like to mention that the fact that the pfa becomes exact in this limit has also been noted by authors working in casimir physics.

in this section we summarize and briefly review some of the results obtained through the use of the de for the calculation of the interaction energy between surfaces, in different contexts. it is not our intention to be exhaustive, but to enumerate some of the results that, we believe, are noteworthy. we also take the opportunity to pinpoint and clarify some aspects that deserve more comment than in their original presentation, now in the light of this general derivation. for the casimir effect, we consider a scalar field with dirichlet and neumann conditions, at zero and finite temperature, and also an electromagnetic field with non-perfectly conducting surfaces. we show how the pfa emerges naturally from this approach, and also calculate the ntlo correction to the pfa. we also review results obtained for the electrostatic interaction, both for surfaces at fixed potentials and surfaces endowed with patch potentials. at the end of this section we will briefly discuss some findings related to non-analytic terms appearing in the expansion at finite temperature in the neumann case.

we have shown in ref. that the pfa can be thought of as the leading order term in a derivative expansion of the casimir energy with respect to the shape of the surfaces. moreover, when the first non-trivial corrections, containing two derivatives of \psi, are included, the general formula gives the ntlo correction to the pfa for a general surface. the general expression for the second-order approximation to the interaction energy (or free energy, depending on the case) is the one shown in eq. ([eq:de]). in ref. we have applied the de to the evaluation of the casimir interaction energy for a scalar field with dirichlet boundary conditions. the calculation consisted, in terms of the general derivations we presented in section [ssec:demo], of the application of the expansion to an effective action (proportional to the energy for static boundary conditions). the de was obtained by performing the same calculation suggested in section [ssec:demo], namely, a second-order expansion of the functional around the parallel planes case.
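before moving on, note that the dirichlet-wall, first-order-in-\lambda_r result displayed above is trivial to evaluate numerically, since the two faces simply enter as a difference of single-surface pfa-like terms. the sketch below is ours, with an illustrative geometry and coupling:

```python
import numpy as np

lam_r = 0.1                                    # dilute-medium coupling (illustrative)
x = np.linspace(-5.0, 5.0, 401)
X, Y = np.meshgrid(x, x)
dA = (x[1] - x[0])**2

# near and far faces of a gently curved, semitransparent body above a dirichlet plane
psi1 = 1.0 - 0.3*np.exp(-(X**2 + Y**2))        # face closer to the plane (with a bump)
psi2 = psi1 + 0.5                              # far face (constant thickness here)

# E_face(psi) = lam_r/(32 pi^2) * integral of 1/psi over the plane
E_face = lambda psi: lam_r/(32.0*np.pi**2) * np.sum(1.0/psi) * dA
print(E_face(psi1) - E_face(psi2))             # the two faces contribute with opposite signs
```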
the general result for the de approximation to the casimir interaction energy for perfect mirrors can be written as

e_{de}[\psi] = -\frac{\pi^2}{1440} \int d^2\mathbf{x}_\parallel\, \frac{1}{\psi^3} \big[ \alpha + \beta (\nabla\psi)^2 \big], \qquad (de casimir perf)

where \alpha and \beta are numerical coefficients that depend on the field considered (scalar or electromagnetic) and on the boundary conditions imposed on the surfaces. this form can of course be anticipated by simple dimensional analysis. the zeroth order term equals the pfa approximation to the vacuum energy, while the second order one contains the first non-trivial correction to the pfa. the values of \alpha and \beta for a scalar field satisfying dirichlet (d) boundary conditions were obtained in ; a scalar field with neumann (n) boundary conditions was considered in . in the same reference, the authors presented the results corresponding to an electromagnetic field and perfectly conducting surfaces, which turn out to be the sum of the dirichlet and neumann results. it is worth stressing this last result: within the de approach, the electromagnetic casimir interaction energy between perfectly conducting surfaces is the sum of the scalar casimir energies for dirichlet and neumann boundary conditions. this was already known for the leading pfa approximation, and it is also valid for the first non-trivial correction. of course, it will not be valid at higher orders. also in ref. , the de was extended to two curved surfaces, for dirichlet, neumann, mixed (dirichlet and neumann on different surfaces) and electromagnetic (perfect metal) boundary conditions. ref. presents the leading correction to the pfa for gold at room temperature. although derived for surfaces describable by a single function, the de has been applied to more general geometries that include compact objects in front of a plane. in these cases, the integration is restricted to the portion of the compact object that is closer to the plane. it has been shown that, for perfect mirrors, the pfa and its ntlo correction are insensitive to the choice of the integration area in the limit where the surfaces are very close. this is not the case for semi-transparent mirrors, as we have shown in the previous section. in all particular examples where the ntlo correction to the pfa has been computed analytically, the results coincide with the prediction of the derivative expansion. this is the case for a cylinder in front of a plane and also for a sphere in front of a plane. moreover, the de has been useful to detect an error in previous calculations of the sphere-plane interaction energy beyond the pfa, which was subsequently corrected in ref. . let us denote by e the exact casimir interaction energy for a given geometry and by e_{pfa} its pfa. for both cylinder-plane and sphere-plane geometries the analytic ntlo correction is of the form e \simeq e_{pfa}(1 + \theta\, a/r), where a is the minimum distance, r is the radius (of the sphere or the cylinder), and \theta is a numerical coefficient that depends on the geometry and the boundary condition.
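the formula (de casimir perf) is easy to evaluate on a grid once a profile \psi is specified. in the sketch below (ours, not the paper's), \alpha and \beta are deliberately left as inputs, since their values depend on the field and boundary conditions and are quoted in the references rather than reproduced in this text:

```python
import numpy as np

def de_energy(psi, dx, alpha, beta):
    """second-order derivative expansion for perfect mirrors:
    E = -(pi^2/1440) * integral d^2x [alpha + beta*(grad psi)^2] / psi^3."""
    gy, gx = np.gradient(psi, dx)          # finite-difference gradient of the profile
    grad2 = gx**2 + gy**2
    return -(np.pi**2/1440.0) * np.sum((alpha + beta*grad2)/psi**3) * dx*dx

x = np.linspace(-4.0, 4.0, 401)
X, Y = np.meshgrid(x, x)
psi = 1.0 + 0.1*np.cos(2.0*X)              # a gently corrugated surface profile
# alpha = 1 reproduces the pure pfa term; beta is a placeholder value here
print(de_energy(psi, x[1] - x[0], alpha=1.0, beta=1.0))
```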
for the cylinder-plane geometry the coefficient \theta has been obtained analytically, while for the sphere-plane geometry it has also been proved that the electromagnetic result is the sum of the dirichlet and neumann cases. all these results can be reproduced using the de by plugging the functions \psi corresponding to a cylinder and a sphere into eq. ([de casimir perf]), and expanding the result of the integrals in powers of a/r. the numerical calculations are also consistent with the ntlo correction for the cylinder-plane geometry and for the sphere-plane geometry, although for neumann boundary conditions there is a discrepancy between the analytic predictions and the numerical fit. a similar discrepancy occurs with the fit presented in ref. for the electromagnetic case. we believe that these discrepancies may be due to the fact that the numerical calculations have not been performed for sufficiently small values of a/r, and therefore the fits are sensitive to the particular functions and intervals used to obtain them. this sensitivity has been noticed in for the cylinder-plane geometry and in for the sphere-plane geometry.

in ref. , we have applied the de to the evaluation of the electrostatic interaction energy (the functional to expand in derivatives) between two perfectly conducting surfaces, one flat and the other slightly curved, held at a potential difference. in this situation, the interaction energy has the same general de structure as before, with coefficients proportional to the square of the potential difference. we have shown explicitly that, in particular cases where analytic exact results are available, the de reproduces the exact ones up to the ntlo (this is the case, for instance, for a sphere or a cylinder in front of a plane). in ref. , we have extended these results to the case in which the surfaces have patch potentials. these potentials were not introduced as boundary conditions, but were modeled by means of electric dipole layers adjacent to the surfaces. the result was expressed in terms of the two-point autocorrelation functions of those patch potentials, and of the single function \psi which defines the curved surface. the reason for studying this is that surface imperfections can lead to a local departure from ideal metallic behavior, yielding space-dependent patch potentials on the surface of the mirrors. they produce a force that may be, in principle, relevant to the interpretation of precision experiments involving two surfaces. in order to present a more compact expression for the results, it is convenient to assume that the potentials' autocorrelation function depends on the variance of the potential and on a single characteristic length. then, on dimensional grounds, the fourier transform of the autocorrelation function takes a scaling form, involving a dimensionless function of a dimensionless argument. in terms of the objects above, we have found an expression for the interaction energy whose coefficients involve weighted integrals of that function. one can show that, when the correlation length is much larger than the gap, those coefficients tend to the result for constant potentials, and therefore the energy reduces to twice the result for the electrostatic energy between surfaces held at a constant potential difference (see eq. ([resannphys])). the factor of two comes from the fact that we are considering the same correlation function on both surfaces. on the other hand, in the opposite limit, one can make an approximation inside the integrals to obtain an energy which has the same dependence on distance as the casimir energy, something which is in this case due to the lack of a dimensionful quantity associated to the correlation length.
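for the electrostatic case, the sphere-plane geometry is a convenient benchmark because the exact energy is classical: the capacitance is given by the image-charge series c = 4 pi eps0 r sinh(alpha) sum_{n>=1} 1/sinh(n alpha), with cosh(alpha) = 1 + d/r. the sketch below is a check we add here (gaussian-style units), comparing the exact fixed-potential energy with the leading pfa integral; the ratio approaches 1 as d/r -> 0, although only logarithmically slowly, as expected when the leading term is itself a logarithm.

```python
import numpy as np
from scipy.integrate import quad

eps0, V = 1.0, 1.0

def C_exact(d, R):
    """classic image-charge series for the sphere-plane capacitance."""
    alpha = np.arccosh(1.0 + d/R)
    n = np.arange(1, int(40.0/alpha) + 2)     # terms beyond n*alpha ~ 40 are negligible
    return 4*np.pi*eps0*R*np.sinh(alpha)*np.sum(1.0/np.sinh(n*alpha))

def E_pfa(d, R):
    """pfa: parallel-plate energy density eps0*V^2/(2h) integrated over the
    hemisphere facing the plane, local gap h(r) = d + R - sqrt(R^2 - r^2)."""
    f = lambda r: 2*np.pi*r * eps0*V**2/(2.0*(d + R - np.sqrt(R**2 - r**2)))
    val, _ = quad(f, 0.0, R)
    return val

for d in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(d, E_pfa(d, 1.0)/(0.5*C_exact(d, 1.0)*V**2))  # slow logarithmic approach to 1
```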
in ref. , we have obtained expressions for the coefficients that determine the de at finite temperature for the free energy in a casimir system (here it is the free energy that plays the role of the functional used in the general derivation of the de). we presented closed analytic expressions for those coefficients, in different numbers of spatial dimensions, both in the zero and high temperature limits. we have considered surfaces satisfying either dirichlet or neumann boundary conditions, finding some qualitative differences between those two cases: for two dirichlet surfaces, the ntlo term in the de is well defined (local) for any temperature. besides, it interpolates smoothly between the proper limits: namely, at low temperature it tends to the one we had calculated for the casimir energy, while at high temperature it corresponds to the one for a dimensionally reduced theory, realizing the expected dimensional reduction at high temperatures. the de approach (up to second order) may be applied to this case, with the free energy as a functional of the surface. we present the dirichlet and neumann cases separately. in the dirichlet case, we write the casimir free energy as follows:

f[\psi] = \int d^{d-1}\mathbf{x}_\parallel\, \Big\{ b_0\big(\tfrac{\psi}{\beta}, d\big)\, \frac{1}{[\psi(\mathbf{x}_\parallel)]^d} + b_2\big(\tfrac{\psi}{\beta}, d\big)\, \frac{(\nabla\psi)^2}{[\psi(\mathbf{x}_\parallel)]^d} \Big\}, \qquad (de dir)

where the two dimensionless functions b_0 and b_2 can be obtained from the knowledge of the casimir free energy for small departures around the parallel-plates case. in the very high (infinite) temperature limit, writing \xi \equiv \psi/\beta, we have

[b_0(\xi, d)]_{\xi \gg 1} \simeq \xi\, [b_0(\xi, d-1)]_{\xi \to 0} \equiv \xi\, b_0(d-1), \qquad [b_2(\xi, d)]_{\xi \gg 1} \simeq \xi\, [b_2(\xi, d-1)]_{\xi \to 0} \equiv \xi\, b_2(d-1).

the coefficients b_0(d-1) and b_2(d-1) are those corresponding to perfect mirrors at zero temperature in d-1 dimensions, a reflection of the well-known 'dimensional reduction' phenomenon at high temperatures for bosonic degrees of freedom. in particular, the de up to second order in the high temperature limit, in d = 3 dimensions, is

f[\psi]\vert_{\psi/\beta \gg 1,\, d=3} \sim -\frac{\zeta(3)}{16\pi\beta}\int d^2\mathbf{x}_\parallel\, \frac{1}{[\psi(\mathbf{x}_\parallel)]^{2}} \Big\{ 1 + \frac{1 + 6\zeta(3)}{12\zeta(3)}\,(\nabla\psi)^2 \Big\}. \qquad (fbetainf3)

let us apply this result to the evaluation of the dirichlet casimir interaction for a sphere in front of a plane. as before, we denote by a the minimum distance between the surfaces, and by r the radius of the sphere. as already mentioned, although the surface of the sphere cannot be covered by a single function, we will consider just the region of the sphere which is closer to the plane. the sphere is then described by the function \psi(\rho) = a + r - \sqrt{r^2 - \rho^2}, where we used polar coordinates for the plane. this function describes the hemisphere when \rho \leq r. the de will be well defined if we restrict the integrations to a region \rho \leq \rho_0 < r. we will assume that a \ll r. inserting this expression for \psi into the free energy, eq. ([fbetainf3]), and performing the integrations explicitly, we obtain

f\vert_{\psi/\beta \gg 1,\, d=3} \sim -\frac{\zeta(3) r}{8\beta a} \Big( 1 - \frac{1}{6\zeta(3)}\, \frac{a}{r}\, \log\Big(\frac{a}{r}\Big) \Big). \qquad (fsp)

note that, as long as a \ll \rho_0, the force will not depend on \rho_0.
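equation (fsp) can be checked directly against a numerical evaluation of the de integral (fbetainf3) on the spherical cap. the following sketch is our check (beta is the inverse temperature, set to one); the ratio of the two expressions tends to 1 as a/r -> 0, the residual difference being an analytic o(a/r) term that depends on the cutoff rho_0:

```python
import numpy as np
from scipy.integrate import quad

zeta3 = 1.2020569031595943
beta = 1.0

def F_de(a, R, rho0_frac=0.5):
    """high-temperature (d=3) dirichlet de, eq. (fbetainf3), evaluated for the
    spherical cap psi(r) = a + R - sqrt(R^2 - r^2), restricted to r <= rho0."""
    c = (1.0 + 6.0*zeta3)/(12.0*zeta3)
    psi  = lambda r: a + R - np.sqrt(R**2 - r**2)
    dpsi = lambda r: r/np.sqrt(R**2 - r**2)
    f = lambda r: 2*np.pi*r * (1.0 + c*dpsi(r)**2) / psi(r)**2
    val, _ = quad(f, 0.0, rho0_frac*R, limit=200)
    return -zeta3/(16*np.pi*beta) * val

def F_closed(a, R):
    # eq. (fsp): leading pfa term plus the non-analytic (a/R) log(a/R) correction
    return -zeta3*R/(8*beta*a) * (1.0 - (1.0/(6*zeta3))*(a/R)*np.log(a/R))

for a in [1e-2, 1e-3, 1e-4]:
    print(a, F_de(a, 1.0)/F_closed(a, 1.0))   # -> 1 as a/R -> 0
```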
as expected on dimensional grounds, the behavior of the leading contribution at high temperatures changes with respect to the zero temperature case. this problem has been solved exactly in ref. . one can readily show that eq. ([fsp]) coincides with the small distance expansion of the exact result. it is interesting to remark that the ntlo correction from the de becomes non-analytic, because of the integration, in the parameters defining the function \psi. this behavior had already been noted in numerical estimations of the casimir interaction between a sphere and a plane in the infinite temperature limit, for the electromagnetic case, in refs. . note that this non-analyticity _has nothing to do with the non-analyticity of the form factors described in section 2_. there, the de was not applicable; here we deal with terms that appear in a system where the de is perfectly well defined. one integrates over the surface, and when expanding for small a/r, one gets both analytic and non-analytic contributions. the latter are not a drawback but a normal feature of the de. very recently, the free interaction energy between a sphere and a plate at high temperatures has been computed exactly, in an arbitrary number of dimensions, for dirichlet boundary conditions. we have checked that the de reproduces the leading term and the ntlo of the exact result. we sketch here the calculations. in the high temperature limit the free energy reads

f[\psi] = \frac{1}{\beta}\int d^{d-1}\mathbf{x}_\parallel\, \Big\{ b_0(d-1)\, \frac{1}{[\psi(\mathbf{x}_\parallel)]^{d-1}} + b_2(d-1)\, \frac{(\nabla\psi)^2}{[\psi(\mathbf{x}_\parallel)]^{d-1}} \Big\}. \qquad (de dir high t)

inserting eq. ([psi sphere]) into eq. ([de dir high t]) and expanding in powers of a/r, we obtain expressions that are consistent with the analytic results for the force presented in ref. .
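the origin of the non-analyticity is elementary to see in the integrals themselves: after the substitution u = r^2/(2r), the gradient term of the de applied to the sphere reduces to integrals like i(a) = \int_0^{u_0} u\, du/(a+u)^2, whose small-a expansion contains -log(a); analytically, i(a) = log((a+u_0)/a) + a/(a+u_0) - 1. a two-line numerical check (ours):

```python
import numpy as np
from scipy.integrate import quad

u0 = 1.0
for a in [1e-2, 1e-3, 1e-4, 1e-5]:
    I, _ = quad(lambda u: u/(a + u)**2, 0.0, u0)
    print(a, I + np.log(a))   # tends to the constant log(u0) - 1: the log(a) term is real
```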
in the neumann case, the free energy can be written as before (see eq. ([de dir])), but with coefficients c_0 and c_2 instead of b_0 and b_2. the zeroth order term coincides with the one for the dirichlet case, namely c_0 = b_0. the second-order coefficient is given by a limit of the corresponding momentum-space kernel, whose expression has been calculated in ; we do not present its explicit form here, since it is not relevant for the present discussion. in d = 3, the coefficient coincides with its dirichlet counterpart. in higher dimensions, the structure of the ntlo correction is different: the ntlo term contains, besides a standard-looking local term, also a nonlocal contribution, linear in the temperature, and therefore present at any non-vanishing temperature. this leaves room, when the temperature is sufficiently low, to use just the local term (of second order in derivatives) as the main correction to the pfa. of course, this approximation will always break down for a sufficiently high temperature, whose value will depend on the actual shape of the surface involved. we stress once more that this non-analytic behavior is a consequence of the neumann boundary conditions, and may not be present for imperfect boundary conditions, such as those considered in ref. . this point deserves further analysis. it is important to note that a _local_ de breaks down for neumann boundary conditions at zero temperature. however, one can still perform an expansion for smooth surfaces, including nonlocal contributions in the casimir energy. for instance, in this case, the ntlo correction to the pfa will be nonlocal, quadratic in the small departure of \psi from a constant. the breakdown of the local expansion is related to the existence of massless modes in the theory. these modes are generally allowed by neumann but not by dirichlet boundary conditions, which impose a mass gap of the order of the inverse distance between the surfaces. the logarithmic behavior of the form factor induces a similar non-analyticity at finite temperature. therefore, in the corresponding small-momentum expansion, in addition to an analytic term there is a logarithmic contribution at any non-vanishing temperature, which is not cancelled by the rest of the sum over matsubara frequencies.

we have presented both a construction of the de, based on a physical argument, and a formal derivation of it, for a general family of problems, which can be defined in terms of a functional depending on a function that characterizes a surface as its argument. this can be applied, as it has been done, to casimir and electrostatic problems. we have argued that the same procedure could also be used in nuclear and colloidal physics, since the derivation is sufficiently general to encompass those and other physical situations, such as casimir-like forces in critical systems. we have made contact with the latter by comparing, and putting in similar terms, the various existing pfa-like approximations, showing that they correspond to the zeroth order in the de. the existing results on the application of the de to different contexts have been briefly reviewed, mentioning some of the features that, we believe, may shed new light on the respective systems. we have shown, in an explicit example, how the de may induce non-analytic contributions in the ratio a/r, where a is the minimum distance and r the radius of a sphere, for the casimir free energy between a plane and a sphere at high temperatures. that non-analyticity depends on the geometry of the system considered, and appears in situations where the de is well defined. in other words, it is not due to the existence of non-analyticities in the momentum kernel of the second functional derivative of the functional. we have shown that the de does reproduce correctly the ntlo corrections in various casimir calculations. this is the case, in particular, for the sphere-plane geometry with dirichlet boundary conditions at very high temperatures (the classical limit), where the result is known exactly in an arbitrary number of dimensions. we end this paper with a few remarks about the generality of the de approach and possible future lines of research. regarding the interactions, the de can be applied, in principle, both to additive and non-additive forces, superficial or volumetric, as long as the interaction energy can be written as a functional of the geometry of the surfaces. this is the case when the surfaces describe homogeneous physical objects with very small widths, or when they correspond to interfaces between different homogeneous material media. there are of course situations where the above condition is not met. for instance, the gravitational interaction between two non-homogeneous bodies cannot be described as a functional of their shapes.
regarding the geometry of the bodies, up to now all applications of the de have been restricted to a particular class of geometries, i.e., surfaces describable by functions \psi(\mathbf{x}_\parallel), where \mathbf{x}_\parallel are cartesian coordinates. it would be interesting to generalize the results to other coordinates or, even better, to provide a covariant formulation in terms of geometric invariants of the surfaces. work in this direction is in progress.

this work was supported by anpcyt, conicet, uba and uncuyo. fdm would like to thank d. dalvit, t. emig, f. intravaia, a. lambrecht, p. maia neto and s. reynaud for discussions on this and related matters during the workshop 'casimir physics 2014'. we would also like to thank d. dantchev for useful comments regarding the sei and sia.

j. n. israelachvili, _intermolecular and surface forces_, academic press, london, 1992; p. w. milonni, _the quantum vacuum_, academic press, san diego, 1994; m. bordag, g. l. klimchitskaya, u. mohideen, and v. m. mostepanenko, _advances in the casimir effect_, oxford university press, oxford, 2009.
the derivative expansion approach to the calculation of the interaction between two surfaces is a generalization of the proximity force approximation, a technique of widespread use in different areas of physics. the derivative expansion has so far been applied to seemingly unrelated problems in different areas; it is our principal aim here to present the approach in its full generality. to that end, we introduce a unified setting, which is independent of any particular application, provide a formal derivation of the derivative expansion in that general setting, and study some of its properties. with a view on the possible application of the derivative expansion to other areas, like nuclear and colloidal physics, we also discuss the relation between the derivative expansion and some time-honoured uncontrolled approximations used in those contexts. by putting them in terms similar to those of the derivative expansion, we believe that the path is open to the calculation of next-to-leading order corrections in those contexts as well. we also review some results obtained within the derivative expansion, applying it to different concrete examples and highlighting some important points.
complex networks have been used to model real complex systems with a large number of interacting individuals, such as the internet, the world trade web, metabolic networks, coauthor networks, and so on. the individuals in complex systems are denoted by nodes in complex networks, and the interactions between individuals are denoted by edges between nodes. for example, in a collaboration network, the scientists can be regarded as nodes and their joint papers can be treated as edges. from this collaboration network, we can find the collaborative relationship between any two scientists. however, many papers have more than two coauthors. therefore, the collaborative relationship among any three or more scientists cannot be described in the above simple collaboration network. what kind of networks can solve this problem is then an interesting issue, and complex hyper-networks, corresponding to hyper-graphs, emerge in this context. in complex networks, an edge connects only two nodes; however, a hyper-edge of a complex hyper-network can contain more than two nodes, which can describe the multifaceted collaborative relationship appropriately. though a hyper-edge can connect an arbitrary number of nodes, it is often useful to study hyper-networks where each hyper-edge connects the same number of nodes: a uniform hyper-network is one in which each hyper-edge connects exactly the same number of nodes. there has been a growing interest in the research of hyper-networks [10-14] recently. for example, in virtual enterprises, to respond to market changes rapidly and exploit unexpected business opportunities efficiently, the virtual breeding environment (vbe) actors collaborate and share competencies, skills as well as resources. in order to describe the relationship among vbe actors, a hyper-graph (a hyper-network) has been proposed as a meaningful logical structure, in which a hyper-path (i.e., a hyper-edge) represents the structure underlying a minimal cluster of enterprises. by the hyper-graph model, the formation process of a virtual enterprise can be represented and a pre-identified business opportunity may be seized. recently, an evolving hyper-network model was introduced to describe real-life systems with hyper-network characteristics. two evolving mechanisms with respect to the hyper-degree, namely hyper-edge growth and preferential attachment, were proposed to construct the hyper-network model. the hyper-degree is defined as the number of hyper-edges attached to a node. in this paper, a different evolving uniform hyper-network model is introduced, whose evolving mechanism involves the joint degree of nodes, defined as the number of hyper-edges attached to a given set of nodes. on the other hand, synchronization, as a typical collective dynamical behavior of complex networks, has drawn considerable attention recently [15-27]. however, less attention has been paid to synchronization of complex hyper-networks. thus, in this paper, synchronization of the 3-uniform hyper-network will be investigated for the first time. the rest of this paper is organized as follows. in section 2, an evolving hyper-network model is introduced and several topological characteristics are studied. in section 3, synchronization of a 3-uniform hyper-network is investigated and several synchronization criteria are obtained.
in section 4, several numerical examples are provided to verify the effectiveness of the derived results. conclusions are drawn in section 5.

an evolving mechanism with respect to the joint degree is proposed in this section to construct the uniform hyper-network model. in the scale-free network model put forward by barabási and albert, two simple evolving mechanisms, i.e., growth and preferential attachment, are proposed to construct the network model. inspired by , the following generation algorithm is proposed to construct the uniform hyper-network model:

(1) growth: the hyper-network starts with a small set of nodes; at every time step, we add one new node and a fixed number of new hyper-edges to the existing hyper-network.

(2) preferential attachment: the probability that a new hyper-edge contains the new node and a set of selected nodes in the existing hyper-network depends on the joint degree of those nodes at that time; the joint degree of a set of nodes is defined as the number of hyper-edges containing all of them, and the attachment probability is proportional to it.

then, after the prescribed number of time steps, this constructs a uniform hyper-network with the corresponding numbers of nodes and hyper-edges. define the hyper-degree of a node as the number of hyper-edges containing that node, and the node-node distance as the number of hyper-edges in the shortest paths between two nodes. according to the generation algorithm and the relation between hyper-degree and joint degree, when a new node enters the hyper-network, the probability that a node with a given hyper-degree acquires a hyper-edge follows from the attachment rule. let the probability that the node added at a given time has a given hyper-degree be considered; supposing the hyper-degree is a continuous variable, this probability can be viewed as a continuous rate of change of the hyper-degree. since each new node brings in new hyper-edges, the change of the hyper-degree at each step follows accordingly. solving equation ([eq5]) with the initial condition, and assuming that the time variables have a uniform distribution, i.e., that new nodes are added at equal time intervals, the hyper-degree distribution can be derived; it takes the same form as that of the ba scale-free network for the appropriate parameter values. fig. [eps2] shows the hyper-degree distribution of the 3-uniform hyper-network models for several parameter choices, from which one can find that the 3-uniform hyper-network has the power-law distribution property, i.e., the scale-free property. table i shows the average path length of the 3-uniform hyper-network models for different pairs of parameters, which exhibits the small-world property. [figure: hyper-degree distribution versus node hyper-degree on a logarithmic scale for different parameters; the node number is fixed.]
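the generation algorithm is simple to simulate. the sketch below (our illustration) grows a 3-uniform hyper-network using the standard 'urn' trick, drawing the existing nodes of each new hyper-edge one at a time with probability proportional to their current hyper-degree, a common simplification of the joint-degree rule stated above, and then prints the resulting hyper-degree histogram, which is heavy-tailed:

```python
import random
from collections import Counter

def grow_hypernetwork(k=3, m=3, T=20000, seed=1):
    """grow a k-uniform hyper-network: one seed hyper-edge among nodes 0..k-1,
    then at each step add one new node and m hyper-edges; the k-1 existing nodes
    of each hyper-edge are sampled proportionally to hyper-degree via an urn."""
    rng = random.Random(seed)
    urn = list(range(k))                  # node v appears once per unit of hyper-degree
    hdeg = Counter(urn)
    for t in range(T):
        new = k + t
        for _ in range(m):
            chosen = set()
            while len(chosen) < k - 1:    # k-1 distinct existing nodes
                chosen.add(rng.choice(urn))
            for v in chosen | {new}:
                hdeg[v] += 1
                urn.append(v)
    return hdeg

hdeg = grow_hypernetwork()
hist = Counter(hdeg.values())
for d in sorted(hist)[:10]:
    print(d, hist[d])                     # counts fall off roughly as a power law
```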
a complex network consisting of identical, linearly and diffusively coupled nodes can be described by the coupled system ([eq1]), where each node carries a state variable, a given function represents the local dynamics of an isolated node, a constant is the coupling strength, and an inner coupling matrix specifies through which components the nodes interact. the outer coupling matrix denotes the network topology and is defined as follows: if there is a connection from one node to another, the corresponding off-diagonal entry is nonzero; otherwise it is zero. according to this definition of the coupling matrix, the network ([eq1]) covers a variety of models, ranging from simple unweighted undirected networks to more complicated weighted directed networks. similar to the network ([eq1]), a uniform hyper-network of coupled systems can be described by ([eq2]), where the node indices are pairwise distinct and the coupling tensor is defined as follows: if there exists a hyper-edge containing the given nodes, the corresponding entry is nonzero; otherwise it is zero. for the 3-uniform case, the hyper-network ([eq2]) can be simplified as ([eq3]). in the following, consider synchronization of the 3-uniform hyper-network ([eq3]), which can be rewritten as ([eq51]). let the joint degree of two nodes be defined as the number of hyper-edges containing both of them; then ([eq51]) can be rewritten in terms of the joint degrees. thus, defining the diagonal elements of the joint degree matrix appropriately, equation ([eq7]) gives ([eq8]). in the subsequent studies, the hyper-network considered is always assumed to be connected, i.e., any pair of nodes is reachable along hyper-edges. it is easy to verify that the joint degree matrix is irreducible and that its eigenvalues can be ordered with a single zero eigenvalue at the top. our objective here is to synchronize the network ([eq8]) with a given orbit, i.e., with a solution of an isolated node. here, the orbit can be an equilibrium point, a periodic orbit, or even a chaotic attractor. linearizing ([eq8]) about the orbit leads to ([eq10]), where the jacobian of the local dynamics is evaluated on the orbit. then, referring to the proofs of lemmas 1 and 2 in , one has the following theorems.

theorem 1. let the eigenvalues of the joint degree matrix be given. if the associated low-dimensional linear time-varying systems are exponentially stable, then the synchronized states ([eq9]) are exponentially stable. *proof.* the proof will be given in the appendix.

theorem 2. suppose that there exists a diagonal matrix and two constants such that the bound ([eq101]) involving an identity matrix holds for all time. if, in addition, inequality ([eq11]) holds, then the synchronized states ([eq9]) are exponentially stable. *proof.* the proof will be given in the appendix.

it is clear that the inequality ([eq11]) is equivalent to ([eq12]); therefore, the synchronizability of the 3-uniform hyper-network ([eq51]) with respect to a given coupling matrix can be characterized by the second-largest eigenvalue of the joint degree matrix.

*remark 1.* similar to the above discussions, synchronization of the general uniform hyper-network can be investigated as well; in fact, referring to the definition of the joint degree, the hyper-network ([eq2]) can be simplified in the same way.

consider a hyper-network consisting of 100 coupled chua's oscillators, described by the usual chua equations, where the nonlinearity is a piecewise-linear function with standard parameter values. fig. [eps3] shows the second-largest eigenvalue of the joint degree matrix generated from the above evolving algorithm, obtained by averaging the results of 10 runs. in the numerical simulation, the parameters are chosen such that the inequality ([eq101]) holds, referring to the discussion in ; then the coupling strength can be chosen such that the inequality ([eq12]) holds. that is to say, the hyper-network generated from the evolving algorithm can achieve synchronization from any initial values. fig. [eps4] shows the synchronization errors. [figure: synchronization errors; the legend distinguishes the error components of the different state variables.] secondly, consider the synchronizability of the 3-uniform hyper-network for different pairs of parameters.
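the synchronization criterion reduces to an eigenvalue computation. below is a small sketch (ours) that builds the joint degree matrix of a 3-uniform hyper-network from its hyper-edge list, with off-diagonal entries counting shared hyper-edges and the diagonal chosen so that rows sum to zero (the usual diffusive-coupling convention, which is our reading of the stripped definition above), and extracts the second-largest eigenvalue:

```python
import numpy as np

def joint_degree_matrix(n, hyperedges):
    """b[i, j] (i != j): number of hyper-edges containing both i and j;
    diagonal set to minus the row sum, so that every row of b sums to zero."""
    B = np.zeros((n, n))
    for e in hyperedges:
        for i in e:
            for j in e:
                if i != j:
                    B[i, j] += 1.0
    np.fill_diagonal(B, -B.sum(axis=1))
    return B

edges = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (0, 3, 4)]   # a small connected example
B = joint_degree_matrix(5, edges)
lam = np.sort(np.linalg.eigvalsh(B))[::-1]             # b is symmetric
print(lam)   # lam[0] = 0; the more negative lam[1], the stronger the synchronizability
```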
fig. [eps5] shows the second-largest eigenvalue of the joint degree matrix for different pairs of parameters, obtained by averaging the results of 10 runs. it is clear from the figure that the hyper-network with the larger parameter value has a more negative second-largest eigenvalue, which means stronger synchronizability. fig. [eps6] shows the synchronization errors for different coupling strengths, from which one can easily find that the hyper-network with the larger coupling strength has stronger synchronizability. [figure: synchronization errors for cases (a), (b) and (c).]

an evolving mechanism with respect to the joint degree has been proposed to construct a uniform hyper-network model. based on a rate equation method, the hyper-degree distribution of the hyper-network is derived, which obeys a power-law distribution. moreover, a complex hyper-network coupled with dynamical systems is introduced. by defining the joint degree of two nodes, the hyper-network can be simplified and its synchronization is investigated for the first time. further, the synchronizability of the 3-uniform hyper-network for different pairs of parameters is considered by calculating the second-largest eigenvalue of the joint degree matrix.

this research was jointly supported by the nsfc grant 11072136, the shanghai univ. leading academic discipline project (a.13-0101-12-004) and the natural science foundation of jiangxi province of china (20122bab211006). the authors would also like to thank the anonymous referees for their helpful comments and suggestions.

_proof of theorem 1:_ after a change of variables, equation ([eq10]) can be rewritten in block form. choose a unitary matrix that diagonalizes the joint degree matrix and transform the error variables accordingly; then, referring to the proof of lemma 1 in ref. , the transformed variable corresponds to the synchronization of the system states. hence, if the low-dimensional linear time-varying systems ([eq102]) are exponentially stable, the synchronized states ([eq9]) are exponentially stable.

_proof of theorem 2:_ consider a quadratic lyapunov function. calculating its derivative along ([eq102]) and using the inequalities ([eq101]) and ([eq11]), one obtains a bound involving the largest eigenvalue, which implies that the time-varying systems ([eq102]) are exponentially stable, i.e., the synchronized states ([eq9]) are exponentially stable.
in this paper, the synchronization in a hyper-network of coupled dynamical systems is investigated for the first time. an evolving hyper-network model is proposed to better describe some complex systems. a concept of joint degree is introduced, and the evolving mechanism of the hyper-network is given with respect to the joint degree. the hyper-degree distribution of the proposed evolving hyper-network is derived based on a rate equation method and obeys a power-law distribution. furthermore, by calculating the joint degree matrix, several simple yet useful synchronization criteria are obtained and illustrated by several numerical examples. _keywords:_ hyper-network; synchronization; joint degree.
quantum key distribution (qkd) can provide unconditionally secure communication with ideal devices. in reality, due to the technical difficulty of building ideal single-photon sources, most current qkd experiments use weak coherent-state pulses from attenuated lasers. such a replacement opens up security loopholes that leave qkd systems vulnerable to quantum hacking, such as photon-number-splitting attacks. the decoy-state method has been proposed to close these photon-source loopholes. it has been implemented in both optical fiber and free-space channels. the security of decoy-state qkd relies on the assumption of the photon-number channel model, where the photon source can be regarded as a mixture of fock (number) states. in practice, this assumption can be guaranteed when the signal and decoy states are indistinguishable to the adversary party, eve, other than through the photon-number information. otherwise, if eve is able to distinguish between signal and decoy states via other degrees of freedom, such as the frequency and timing of the pulses, the security of the decoy-state protocol would fail. in the original proposals, on the transmitter's side, alice actively modulates the intensities of pulses to prepare decoy states through an optical intensity modulator, as shown in fig. [fig:modle] (a). this active decoy-state method, however, might leak the signal/decoy information to eve due to the intensity modulation, and it increases the complexity of the system. [figure [fig:modle]: (a) active and (b) passive decoy-state preparation; alice's detection outcomes of the idler mode are labelled non-triggered and triggered, respectively; the inset shows the photon number distributions conditioned on the detection results of the idler mode.] another type of protocol, the passive decoy-state method, has been proposed, where the decoy states are prepared through measurements. the passive method can rely on the use of a parametric down-conversion (pdc) source, where the photon numbers of the two output modes are strongly correlated. as shown in fig. [fig:modle] (b), alice first generates photon pairs through a pdc process and then detects the idler photons as triggers. conditioned on alice's detection outcome of the idler mode, trigger (t) or non-trigger (nt), alice can infer the corresponding photon number statistics of the signal mode, and hence obtains two conditional states for the decoy-state method. the photon numbers of these two states follow different distributions, as shown in the appendix. from this point of view, the pdc source can be treated as a built-in decoy-state source. note that passive decoy-state sources with non-poissonian light other than pdc sources are studied in .
also, the pdc source can be used as a heralded single-photon source in the active decoy-state method. the key advantage of the passive decoy-state method is that it can substantially reduce the possibility of signal/decoy information leakage. in addition, the phases of the signal photons are totally random due to the spontaneous nature of the pdc process. this intrinsic phase randomization improves the security of the qkd system, making it immune to source attacks. the critical experimental challenge in implementing passive decoy-state qkd is that the error rate for the non-trigger case is very high, due to the high vacuum ratio and background counts. besides, since the detection is local, the idler photons do not suffer from the modulation loss and channel loss, so the counting rate of alice's detector is very high. due to their high dark count rate and low maximum counting rate, commercial ingaas/inp avalanche photodiodes (apds) are not suitable for these passive decoy-state qkd experiments. by developing up-conversion single-photon detectors with high efficiency and low noise, we are able to suppress the error rate in the non-trigger events. meanwhile, the up-conversion single-photon detectors can reach a maximum counting rate of about 20 mhz. with such detectors, we demonstrate the passive decoy-state method over a 50-km-long optical fiber.

for the decoy-state method, the photon number distribution of the source is crucial for data postprocessing. thus, we first investigate the photon number distribution of the pdc source used in the experiment, as shown in fig. [fig:g2] (a). an electronically driven distributed feedback laser triggered by an arbitrary function generator is used to provide a 100 mhz pump pulse train. after being amplified by an erbium-doped fiber amplifier (edfa), the laser pulses, with a 1.4 ns fwhm duration and 1556.16 nm central wavelength, pass through a 3 nm tunable bandpass filter to suppress the amplified spontaneous emission noise from the edfa. the light is then frequency doubled in a periodically poled lithium niobate (ppln) waveguide. since our waveguide only accepts tm-polarized light, an in-line fiber polarization controller is used to adjust the polarization of the input light. the generated second-harmonic pulses are separated from the pump light by a short-pass filter with an extinction ratio of about 180 db, and are then used to pump the second ppln waveguide to generate correlated photon pairs. both ppln waveguides are fiber-pigtailed reverse-proton-exchange devices, and each has a total loss of 5 db. the generated photon pairs are separated from the pump light of the second ppln waveguide by a long-pass filter with an extinction ratio of about 180 db. the down-converted signal and idler photons are separated by a 100 ghz dense wavelength-division multiplexing (dwdm) fiber filter. the central wavelengths of the two output channels of the dwdm filter are 1553.36 nm and 1558.96 nm. [figure [fig:g2]: the measured correlation histogram; the horizontal axis represents the time delay between the photons of the two bs output arms.] for a spontaneous pdc process, the number of emitted photon pairs within a wave packet follows a thermal distribution. in the case when the system pulse length is longer than the wave-packet length, the distribution can be calculated by taking an integral over thermal distributions. in the limit when the pulse length is much longer than the wave-packet length, the integrated distribution can be well estimated by a poisson distribution. in our experiment, the pump pulse length is 1.4 ns, while the length of the down-conversion photon-pair wave packet is around 4 ps. therefore, the photon-pair number statistics can be approximated by a poisson distribution. to verify this, we build a hanbury brown-twiss (hbt) setup by inserting a 50:50 beam splitter (bs) in the signal mode, followed by two single-photon detectors, as shown in fig. [fig:g2] (a). both detection signals are fed to a time-correlated single photon counting (tcspc) module for the time correlation measurement. a time window of 2 ns is used to select the counts within the pulse duration. the interval between the peaks of counts is 10 ns, which is consistent with the 100 mhz repetition rate of our source. after accumulating about 5000 counts per time bin, we calculate the value of the normalized second-order correlation function of the signal photons, which is shown in fig. [fig:g2] (b). the measured value is close to one, which confirms the poissonian distribution of the photon-pair number.
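the statement that many temporal modes wash out the thermal statistics can be checked with a quick monte carlo. in the sketch below (ours), the photon-pair number is a sum of independent thermal modes, i.e., a negative binomial variable, and the estimated g2(0) = <n(n-1)>/<n>^2 falls from 2 (single thermal mode) towards 1 (poisson) as 1 + 1/m; with a 1.4 ns pulse and ~4 ps wave packets, the number of modes m is of order a few hundred, consistent with a measured value close to one:

```python
import numpy as np

rng = np.random.default_rng(0)

def g2_multimode(n_modes, mean_total=2.0, samples=200000):
    """photon-pair number as a sum of n_modes i.i.d. thermal (geometric) modes,
    i.e., a negative binomial variable; returns the estimate of g2(0)."""
    mu = mean_total/n_modes               # mean occupation per mode
    p = 1.0/(1.0 + mu)                    # parameter of the geometric law
    N = rng.negative_binomial(n_modes, p, size=samples)
    return (N*(N - 1.0)).mean()/N.mean()**2

for M in [1, 10, 100, 350]:
    print(M, g2_multimode(M))             # ~2.0, ~1.1, ~1.01, ~1.003
```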
our passive decoy-state qkd experimental setup is shown in fig. [fig:setup]. the pdc source is placed on alice's side. the idler photons are detected by an up-conversion single-photon detector, whose outcomes are recorded by a field programmable gate array (fpga) based data acquisition card and then transmitted to a computer. the up-conversion single-photon detector used in our experiment consists of a frequency up-conversion stage in a nonlinear crystal, followed by detection using a silicon apd (sapd). as described in , a 1950 nm thulium-doped fiber laser is employed as the pump light for a ppln waveguide, which is used to up-convert the wavelength of the idler photons to 866 nm. after filtering the pump and other noise produced in the up-conversion process, we detect the output photons with a sapd. by using the long-wavelength pump technology, we can suppress the noise to a very low level and achieve a detection efficiency of 15% and a dark count rate of 800 hz. for the signal photons, we employ a phase-encoding scheme using an unbalanced faraday-michelson interferometer and two phase modulators (pms), as shown in fig. [fig:setup]. the time difference between the two bins is about 3.7 ns. the two pms are driven by a 3.3 ghz pulse pattern generator (ppg). the first pm is utilized to choose the basis by modulating the relative phase of the two time bins. the second pm is utilized to choose the bit value by modulating the relative phase. the encoded photons are transmitted to the receiver (bob) through optical fiber. bob chooses the basis with a pm driven by another ppg and measures the relative phase of the two time bins via an unbalanced interferometer with the same time difference of 3.7 ns. the random numbers used in the experiment are generated by a quantum random number generator (idq quantis-oem) beforehand and stored in the memory of the ppgs. the detection efficiency and dark count rate of the up-conversion detectors on bob's side are 14% and 800 hz, respectively. note that although the pm for encoding may also induce side-channel leakage, the intent of this letter is to close the loophole due to the decoy-state preparation, not to close all the loopholes in one experiment.
furthermore, we remark that bb84 qubit encoding can also be done via passive means; such a step can be taken in future work. one challenge in the experimental setup is to stabilize the relative phase of the two unbalanced arms of the two separated unbalanced interferometers, which is very sensitive to temperature and mechanical vibration. we place a piezo-electric phase shifter in one arm of the interferometer on bob's side for active phase feedback. after every second of qkd, alice sends time-bin qubits without encoding and bob records the detection results without choosing a basis. the detection results are used as feedback to control the piezo-electric phase shifter. after the quantum transmission, alice tells bob the basis and trigger (t or nt) information. bob groups his detection events accordingly and evaluates the gain and qber of each group. they can distill secret key from both triggered and non-triggered events. thus, the total key generation rate is given by r = r_t + r_n, where r_t and r_n are the key rates distilled from triggered and non-triggered events, respectively. following the security analysis of the passive decoy-state scheme, the secret key rate is given by

r_j = q \{ -f\, q_j h_2(e_j) + q_{j,1}[1 - h_2(e_{j,1})] + q_{j,0} \},

where j labels the triggered and non-triggered events; q is the raw-data sift factor (1/2 in the standard bb84 protocol); f is the error correction inefficiency (instead of implementing error correction, we estimate the key rate by taking a fixed value of f, which can be realized by a low-density parity-check code); q_j and e_j are the gain and qber; q_{j,1} and e_{j,1} are the gain and error rate of the single-photon component; q_{j,0} is the vacuum-state gain, set by the background count rate; and h_2 is the binary shannon entropy function. alice and bob can get the gains and qbers directly from the experimental results. the variables for the privacy amplification part, q_{j,1}, e_{j,1} and q_{j,0}, need to be estimated by the decoy-state method. details of the decoy-state estimation, as well as the method of postprocessing and simulation used later, can be found in the appendix.

we perform passive decoy-state qkd over optical fibers of 0 km, 25 km and 50 km. for each distance, we run the system for 20 minutes, half of which is used for the phase feedback control. thus the effective qkd time is 10 minutes and, with a system repetition rate of 100 mhz, the number of pulses sent by alice for each distance is n = 60 gbit. we analyze the time correlation of the detection results and calibrate the average photon number generated in the pdc source using the measured value of the coincidence-to-accidental-coincidence ratio (car). the average photon number alice sends to the channel can then be calculated from the internal loss of 19.2 db, which includes the transmission loss of the pdc source and the modulation loss of alice. the experimental results are listed in table [tab_value]. after the postprocessing, we obtain a final key of 2.53 mbit, 805 kbit, and 89.8 kbit for 0 km, 25 km, and 50 km, respectively.

[table [tab_value]: experimental results for 0 km, 25 km and 50 km. the entries that survive extraction are the calibrated average photon number of the source (0.035, 0.036, 0.028) and the total loss (21.8 db, 25.2 db, 30.4 db); the remaining rows (gains, qbers and key rates) were lost.]
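the key-rate formula above is straightforward to implement once the decoy-state estimates are available. here is a minimal sketch (ours; the numbers are placeholders rather than the experimental values, and f = 1.16 is a typical error-correction inefficiency rather than the one used in the experiment):

```python
import numpy as np

def h2(x):
    """binary shannon entropy."""
    x = np.clip(x, 1e-12, 1.0 - 1e-12)
    return -x*np.log2(x) - (1.0 - x)*np.log2(1.0 - x)

def key_rate(Q, E, Q1, e1, Q0, q=0.5, f=1.16):
    """r_j = q * ( -f*Q_j*h2(E_j) + Q_{j,1}*(1 - h2(e_{j,1})) + Q_{j,0} ),
    clipped at zero; apply once for triggered and once for non-triggered events."""
    return max(0.0, q*(-f*Q*h2(E) + Q1*(1.0 - h2(e1)) + Q0))

# illustrative magnitudes only
print(key_rate(Q=4e-4, E=0.02, Q1=2.5e-4, e1=0.03, Q0=1e-6))
```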
to compare the experimental key rates with the qkd simulation, we set the values of the simulation parameters to those used in the 50 km qkd experiment. we also calibrate our system to obtain a few parameters for the simulation: the intrinsic error rate of bob's detector and the background count rate of bob's detection. the comparison is shown in fig. [fig_kgr]. as one can see, the experimental results are consistent with the simulation results. note that there is an inflection point at about 31.7 db, where one of the two rates drops to 0 while the other is still positive.

we have investigated a parametric down-conversion photon source pumped by a pulsed laser for use in passive decoy-state qkd. the experimental result suggests that the photon-pair number of the pdc source can be well approximated by a poisson distribution. with this source, we have experimentally demonstrated a passive decoy-state qkd scheme. in our experiment, the transmission loss of the pdc source is about 7 db, and the total modulation loss caused by the two ufmis and the three pms is about 21 db. these losses result in a significantly reduced key rate. however, there is room for improvement: if new-type mzis are used, the modulation loss of our system can be reduced by 9 db; we can obtain a reduction of about 3 db in loss if a state-of-the-art ppln waveguide is used. aiming for long-distance qkd, we can also improve the up-conversion single-photon detector by using a volume bragg grating as a filter, and achieve a detection efficiency of about 30% with a dark count rate of less than 100 hz. in addition, the repetition rate of our system can be raised to 10 ghz. these feasible improvements suggest that it is possible to perform passive decoy-state qkd over 150 km of optical fiber. besides the pdc-based scheme used in our experiment, there are other practical scenarios for passive decoy-state qkd, for example, those based on thermal states or phase-randomized coherent states. however, the physics and applications of these protocols demand further theoretical and experimental studies.

we acknowledge insightful discussions with z. cao, x. yuan, and z. zhang. this work has been supported by the national basic research program of china grants no. 2011cb921300, no. 2013cb336800, no. 2011cba00300, and no. 2011cba00301, and the chinese academy of sciences. s. and w.-l. w. contributed equally to this work.

the model of our passive decoy-state qkd experimental setup is shown in fig. [fig:suppmodel]. \mu_0 denotes the average photon pair number of the pdc source. \eta_s denotes alice's internal transmittance, including the transmission loss of the pdc source and alice's modulation loss. \mu = \mu_0 \eta_s denotes the average photon number of the signals sent to bob. \eta_a denotes the transmittance of the idler mode, taking into account the transmission loss of the source and the detection efficiency of alice's detector. \eta denotes the transmittance taking the channel loss, the modulation loss and the detection efficiency on bob's side into account. all these parameters can be characterized by alice before the experiment except for \eta, which could be controlled by eve. since alice uses threshold detectors, the probabilities that alice's detector does not click (nt) or clicks (t) when idler photons arrive are the standard threshold-detector expressions; the dark count rate of alice's detection is small enough that we simply ignore it. the joint probabilities that alice gets a trigger and i photons are sent to bob are given by

p(t, i) = \sum_{j \geq i} \frac{\mu_0^j}{j!} e^{-\mu_0} \big[1 - (1-\eta_a)^j\big] \binom{j}{i} \eta_s^i (1-\eta_s)^{j-i} = \frac{\mu^i}{i!} e^{-\mu} \big[ 1 - (1-\eta_a)^i e^{-(\mu_0-\mu)\eta_a} \big],

and correspondingly p(nt, i) = \frac{\mu^i}{i!} e^{-\mu} (1-\eta_a)^i e^{-(\mu_0-\mu)\eta_a}. define the yield y_i as the conditional probability that bob gets a detection given that alice sends i photons into the channel, and e_i as the corresponding error rate.
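given the joint distributions p(t, i) and p(nt, i) above, the decoy-state estimation can also be phrased as a small linear program: minimize y_1 subject to the two gain constraints and 0 <= y_i <= 1. the sketch below is ours, a generic stand-in for the closed-form bounds used in the paper, with illustrative parameter values; it recovers a lower bound on the single-photon yield:

```python
import numpy as np
from scipy.stats import poisson
from scipy.optimize import linprog

mu0, eta_s, eta_a, nmax = 0.5, 0.1, 0.15, 12   # illustrative source parameters
mu = mu0*eta_s
i = np.arange(nmax)

# joint photon-number distributions for non-triggered / triggered events
P_nt = poisson.pmf(i, mu)*(1.0 - eta_a)**i*np.exp(-(mu0 - mu)*eta_a)
P_t = poisson.pmf(i, mu) - P_nt

# a 'true' channel, used here only to synthesize the observed gains
Y0, eta = 1e-5, 0.01
Y_true = 1.0 - (1.0 - Y0)*(1.0 - eta)**i
Q_t, Q_nt = P_t @ Y_true, P_nt @ Y_true

c = np.zeros(nmax); c[1] = 1.0                 # objective: minimize Y_1
res = linprog(c, A_eq=np.vstack([P_t, P_nt]), b_eq=[Q_t, Q_nt],
              bounds=[(0.0, 1.0)]*nmax, method='highs')
print(res.fun, Y_true[1])                      # lower bound vs. true single-photon yield
```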
then the gains for which alice has a given detection outcome j and the i-photon component reaches bob are q_{j,i} = p(j, i)\, y_i. thus, the overall gains when alice gets a given detection outcome are

q_j = \sum_i p(j, i)\, y_i,

and the corresponding quantum bit error rates (qbers) are given by

e_j q_j = \sum_i e_i\, p(j, i)\, y_i.

for simulation purposes, we consider the case in which eve does not change the yields and error rates; they are then given by the usual channel model, where y_0 is the dark count rate of bob's detection, e_0 is the error rate of the dark counts, and e_d is the intrinsic error rate of bob's detection. the gains of the single-photon and vacuum states are given by

q_{j,1} = p(j, 1)\, y_1, \qquad q_{n,0} = e^{-[\mu + (\mu_0-\mu)\eta_a]}\, y_0, \qquad q_{t,0} = e^{-\mu}\big[1 - e^{-(\mu_0-\mu)\eta_a}\big]\, y_0,

where the subscripts t and n refer to the triggered and non-triggered events. note that, for the postprocessing, the values of q_t, e_t, q_n and e_n should be obtained directly from the experiment. the overall gains when alice gets a given detection outcome are

q_n = e^{-\mu_0\eta_a}\big[1 - (1-y_0)\, e^{\mu\eta(\eta_a-1)}\big], \qquad q_t = 1 - (1-y_0)\, e^{-\mu\eta} - e^{-\mu_0\eta_a}\big[1 - (1-y_0)\, e^{\mu\eta(\eta_a-1)}\big],

with

e_n q_n = e_d q_n + (e_0 - e_d)\, y_0\, e^{-\mu_0\eta_a}, \qquad e_t q_t = e_d q_t + (e_0 - e_d)\, y_0\, (1 - e^{-\mu_0\eta_a}).

denoting by q_j and e_j the gain and qber when bob gets a detection, the final key can be extracted from both non-triggered and triggered detection events, and the key rate r is given by r = r_t + r_n, where r_t and r_n are the key rates distilled from triggered and non-triggered events, respectively. note that both should be non-negative, and if either of them is negative we set it to zero. following the security analysis of the passive decoy-state scheme, r_t and r_n are obtained by

r_j = q\{ -f\, q_j h_2(e_j) + q_{j,1}[1 - h_2(e_{j,1})] + q_{j,0} \},

where q is the raw-data sift factor (1/2 in the standard bb84 protocol), f is the error correction inefficiency, and h_2 is the binary shannon entropy function. to get a lower bound on the key generation rate, we can lower-bound q_{j,1} and upper-bound e_{j,1}. one obtains a lower bound y_1^l on the single-photon yield from the measured gains, so that q_{j,1} is simply estimated by q_{j,1}^l = p(j, 1)\, y_1^l; the single-photon error rate is in turn upper-bounded from the measured e_j q_j, the vacuum contribution and the lower bound on the single-photon gain. here, we also take statistical fluctuations into account. assume that there are n pulses sent by alice to bob; the measured gains and error rates are then endowed with confidence intervals, where the superscripts 'l' and 'u' denote the lower and upper bounds, respectively, of the measurement outcomes that can be obtained directly from the experiment. note that for triggered events we need not consider the fluctuation when using eq. [e1] to estimate the upper bound of the single-photon error rate, but for non-triggered events we must take the statistical fluctuation into account; in the standard error analysis assumption, a number of standard deviations is chosen for the statistical fluctuation analysis. in the postprocessing and simulation, we set this value to 5, corresponding to a fixed failure probability.

yi zhao, bing qi, xiongfeng ma, hoi-kwong lo, and li qian. simulation and implementation of decoy state quantum key distribution over 60 km telecom fiber. in _proc. of ieee isit_, page 2094. ieee, 2006.

danna rosenberg, jim w. harrington, patrick r. rice, philip a. hiskett, charles g. peterson, richard j. hughes, adriana e. lita, sae woo nam, and jane e. nordholt. long-distance decoy-state quantum key distribution in optical fiber. phys. rev. lett., 98:010503, jan 2007.
cheng-zhi peng, jun zhang, dong yang, wei-bo gao, huai-xin ma, hao yin, he-ping zeng, tao yang, xiang-bin wang, and jian-wei pan. experimental long-distance decoy-state quantum key distribution based on polarization encoding. phys. rev. lett., 98:010505, jan 2007.

yang liu, teng-yun chen, jian wang, wen-qi cai, xu wan, luo-kan chen, jin-hong wang, shu-bin liu, hao liang, lin yang, cheng-zhi peng, kai chen, zeng-bing chen, and jian-wei pan. decoy-state quantum key distribution with polarized photons over 200 km. opt. express, 18(8):8587-8594, apr 2010.

tobias schmitt-manderbach, henning weier, martin fürst, rupert ursin, felix tiefenbacher, thomas scheidl, josep perdigues, zoran sodnik, christian kurtsiefer, john g. rarity, anton zeilinger, and harald weinfurter. experimental demonstration of free-space decoy-state quantum key distribution over 144 km. phys. rev. lett., 98:010504, jan 2007.

j. y. wang, b. yang, s. k. liao, l. zhang, q. shen, x. f. hu, j. c. wu, s. j. yang, h. jiang, y. l. tang, b. zhong, h. liang, w. y. liu, y. h. hu, y. m. huang, b. qi, j. g. ren, g. s. pan, j. yin, j. j. jia, y. a. chen, k. chen, c. z. peng, and j. w. pan. direct and full-scale experimental verifications towards ground-satellite quantum key distribution. nat. photonics, 7(5):387-393, 2013.

mu-sheng jiang, shi-hai sun, chun-yan li, and lin-mei liang. wavelength-selected photon-number-splitting attack against plug-and-play quantum key distribution systems with decoy states. phys. rev. a, 86:032310, sep 2012.

marcos curty, tobias moroder, xiongfeng ma, and norbert lütkenhaus. non-poissonian statistics from poissonian light sources with application to passive decoy state quantum key distribution. opt. lett., 34(20):3238-3240, oct 2009.

yang zhang, wei chen, shuang wang, zhen-qiang yin, fang-xing xu, xiao-wei wu, chun-hua dong, hong-wei li, guang-can guo, and zheng-fu han. practical non-poissonian light source for passive decoy state quantum key distribution. opt. lett., 35(20):3393-3395, oct 2010.

qin wang, wei chen, guilherme xavier, marcin swillo, tao zhang, sebastien sauge, maria tengner, zheng-fu han, guang-can guo, and anders karlsson. experimental decoy-state quantum key distribution with a sub-poissonian heralded single-photon source. phys. rev. lett., 100:090501, mar 2008.

yan-lin tang, hua-lei yin, xiongfeng ma, chi-hang fred fung, yang liu, hai-lin yong, teng-yun chen, cheng-zhi peng, zeng-bing chen, and jian-wei pan. source attack of decoy-state quantum key distribution using phase information. phys. rev. a, 88:022308, aug 2013.

hugues de riedmatten, valerio scarani, ivan marcikic, antonio acín, wolfgang tittel, hugo zbinden, and nicolas gisin. two independent photon pairs versus four-photon entangled states in parametric down conversion. j. mod. opt., 51(11):1637-1649, 2004.

guo-liang shentu, jason s. pelc, xiao-dong wang, qi-chao sun, ming-yang zheng, m. m. fejer, qiang zhang, and jian-wei pan. ultralow noise up-conversion detector and spectrometer for the telecom band. opt. express, 21(12):13986-13991, jun 2013.

d. elkouss, a. leverrier, r. alleaume, and j. j. boutros. in _2009 ieee international symposium on information theory (isit 2009)_, jun 28-jul 03, 2009, seoul, south korea. ieee, 2009.
the decoy-state method is widely used in practical quantum key distribution systems to replace ideal single-photon sources with realistic light sources of varying intensities. instead of active modulation, the passive decoy-state method employs built-in decoy states in a parametric down-conversion photon source, which can decrease the side-channel information leaked during decoy-state preparation and hence increase security. by employing low-dark-count up-conversion single-photon detectors, we have experimentally demonstrated the passive decoy-state method over a 50-km-long optical fiber and have obtained a key rate of about 100 bit/s. our result suggests that the passive decoy-state source is a practical candidate for future quantum communication implementations.
let $X = (X(\mathbf{t}),\ \mathbf{t}\in\mathbb{R}^d)$ be a real-valued sample continuous gaussian random field. given a level $u$, the excursion set of $X$ above the level is the random set $A_u = \{\mathbf{t}\in\mathbb{R}^d:\ X(\mathbf{t}) > u\}$. understanding the structure of the excursion sets of random fields is a mathematical problem with many applications, and it has generated significant interest, with several recent books on the subject and with considerable emphasis on the topology of these sets. one very natural question in this setting which has until now eluded solution but which we study in this paper is the following: given that two points in $\mathbb{R}^d$ belong to the excursion set, what is the probability that they belong to the same path-connected component of the excursion set? specifically, let $\mathbf{a}, \mathbf{b}\in\mathbb{R}^d$. recall that a path in $\mathbb{R}^d$ connecting $\mathbf{a}$ and $\mathbf{b}$ is a continuous map $\xi:[0,1]\to\mathbb{R}^d$ with $\xi(0)=\mathbf{a}$ and $\xi(1)=\mathbf{b}$.

the space $M_1^+([0,1])$ of probability measures on $[0,1]$ is weakly compact, and the covariance function is continuous. therefore, for a fixed path $\xi$, the energy functional $\mu\mapsto\int_0^1\int_0^1 r_{\mathbf{x}}(\xi(u),\xi(v))\,\mu(du)\,\mu(dv)$ is weakly continuous on compacts. hence, it achieves its infimum, and it is legitimate to write "min" in ([ecapgenlimit]) and in ([ecaplimit]).

proof of theorem [tlargeenergypath]: the proofs of the two parts are only notationally different, so we will suffice with a proof for part (i) only. we use the lagrange duality approach of section 8.6 of the convex optimization literature. writing the exponent as the value of a constrained optimization problem over the feasible set of ([eprimalfeasible]), we see that it is enough to prove that, for every path $\xi$,
$$\inf\bigl\{Eh^2:\ h\in\mathcal{L},\ E\bigl[X\bigl(\xi(v)\bigr)h\bigr]\ge1,\ 0\le v\le1\bigr\} = \Bigl[\min_{\mu\in M_1^+([0,1])}\int_0^1\int_0^1 r_{\mathbf{x}}\bigl(\xi(u),\xi(v)\bigr)\,\mu(du)\,\mu(dv)\Bigr]^{-1}.$$
to this end, let $\mathcal{L}$ be the $L^2$-closure of the linear span of the values of the field. fix a path $\xi$, and define $g$ by $g(h)(v) = 1 - E[X(\xi(v))h]$, $0\le v\le1$; then $g$ is, clearly, a convex mapping of $\mathcal{L}$ into $C[0,1]$. we can also write the left-hand side above as the square of $\inf\{(Eh^2)^{1/2}:\ g(h)\le0\}$, and so our task now is to show that ([eprimal]) implies ([epathdual]).

suppose first that the feasible set in the optimization problem ([eprimalfeasible]) is not empty. then there is $h$ such that $-g(h)$ belongs to the interior of the cone of nonnegative functions, so by the lagrange duality theorem (theorem 1, page 224), we conclude that
$$\inf\bigl\{(Eh^2)^{1/2}:\ g(h)\le0\bigr\} = \max_{\mu\in M^+([0,1])}\ \inf_{h\in\mathcal{L}}\Bigl[(Eh^2)^{1/2} + \int_0^1 g(h)(v)\,\mu(dv)\Bigr],$$
and we may use "max" instead of "sup" because an optimal dual measure exists. for $\mu\in M^+([0,1])$ with total mass $\|\mu\|>0$, we let $\hat\mu = \mu/\|\mu\|$. writing $w_h(\mathbf{t}) = E[X(\mathbf{t})h]$, a direct computation of the inner infimum gives
$$\inf_{h\in\mathcal{L}}\Bigl[(Eh^2)^{1/2} + \int_0^1 g(h)(v)\,\mu(dv)\Bigr] =
\begin{cases}
-\infty, & \mbox{if } \displaystyle\|\mu\| > \Bigl[\sup_{h\in\mathcal{L}:\,Eh^2=1}\int_0^1 w_h\bigl(\xi(v)\bigr)\,\hat\mu(dv)\Bigr]^{-1},\\[6pt]
\|\mu\|, & \mbox{if } \displaystyle\|\mu\| \le \Bigl[\sup_{h\in\mathcal{L}:\,Eh^2=1}\int_0^1 w_h\bigl(\xi(v)\bigr)\,\hat\mu(dv)\Bigr]^{-1}.
\end{cases}$$
maximizing over $\mu$ therefore amounts to maximizing the total mass subject to the constraint in the second case, and
$$\sup_{h\in\mathcal{L}:\,Eh^2=1}\int_0^1 w_h\bigl(\xi(v)\bigr)\,\hat\mu(dv) = \Bigl(\int_0^1\int_0^1 r_{\mathbf{x}}\bigl(\xi(u),\xi(v)\bigr)\,\hat\mu(du)\,\hat\mu(dv)\Bigr)^{1/2};$$
in the last step we have used the fact that $\int_0^1 w_h(\xi(v))\,\hat\mu(dv) = E[hZ]$ with $Z = \int_0^1 X(\xi(v))\,\hat\mu(dv)$, so the supremum of the inner product is achieved at $h = Z/(EZ^2)^{1/2}$, and this establishes ([epathdual]) for the case that the feasible set in ([eprimalfeasible]) is not empty.

we now turn to the case in which this set is, indeed, empty. this will complete the proof of the theorem. in this case ([epathdual]) reduces to the statement
$$\min_{\mu\in M_1^+([0,1])}\int_0^1\int_0^1 r_{\mathbf{x}}\bigl(\xi(u),\xi(v)\bigr)\,\mu(du)\,\mu(dv) = 0.$$
suppose that, to the contrary, the minimum is positive, and let $\mu\in M_1^+([0,1])$ achieve it. then a suitable positive multiple of $\int_0^1 X(\xi(v))\,\mu(dv)$ is a feasible point of ([eprimalfeasible]), contradicting the assumed emptiness of the feasible set.
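the dual representation just proved is easy to explore numerically. the sketch below (ours, not from the paper: the helper name `path_capacity`, the gaussian covariance and the planar paths are illustrative assumptions) discretizes $[0,1]$ and minimizes the energy with the frank-wolfe algorithm, whose iterates remain probability vectors:

```python
import numpy as np

def path_capacity(r, xi, n=300, iters=6000):
    """Approximate [min_mu energy(mu)]^(-1) for a fixed path xi, with [0,1]
    discretized into n points and the minimum over probability vectors
    computed by Frank-Wolfe (each step mixes in a vertex of the simplex)."""
    u = np.linspace(0.0, 1.0, n)
    pts = xi(u)                               # (n, d) points along the path
    K = r(pts[:, None, :], pts[None, :, :])   # kernel matrix r(xi(u_i), xi(u_j))
    mu = np.full(n, 1.0 / n)
    for k in range(iters):
        i = np.argmin(K @ mu)                 # direction of steepest descent
        g = 2.0 / (k + 2.0)
        mu = (1.0 - g) * mu
        mu[i] += g
    return 1.0 / (mu @ K @ mu)

# illustrative stationary covariance r(s,t) = exp(-||s-t||^2) and two paths
r = lambda s, t: np.exp(-np.sum((s - t) ** 2, axis=-1))
line = lambda u: np.stack([2.0 * u, np.zeros_like(u)], axis=-1)
arc = lambda u: np.stack([2.0 * u, 0.5 * np.sin(np.pi * u)], axis=-1)
print("straight line capacity:", path_capacity(r, line))
print("detour capacity       :", path_capacity(r, arc))
```

for a nonincreasing isotropic covariance the detour should never beat the straight line, in line with the isotropy result discussed at the end of the paper.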
for a probability measure $\mu$ in $M_1^+([0,1])$ and a fixed path $\xi$, the double integral $\int_0^1\int_0^1 r_{\mathbf{x}}(\xi(u),\xi(v))\,\mu(du)\,\mu(dv)$ is the energy of $\mu$ under the kernel $r_{\mathbf{x}}$, and the quantity
$$\Bigl[\min_{\mu\in M_1^+([0,1])}\int_0^1\int_0^1 r_{\mathbf{x}}\bigl(\xi(u),\xi(v)\bigr)\,\mu(du)\,\mu(dv)\Bigr]^{-1}$$
is known as _the capacity of the path with respect to the kernel_. therefore, we can treat the problem of solving ([eenergyalt]) as one of finding a path between the points $\mathbf{a}$ and $\mathbf{b}$ of minimal capacity.

the dual formulation ([eldlimit]) of the optimization problem required to find the asymptotics of the path existence probability involves solving fixed path optimization problems ([eprimalfeasible]) or ([epathdual]). for a fixed path we have the following version of theorems [tldppath] and [tlargeenergypath].

[tfixedpath] for a path $\xi$ connecting $\mathbf{a}$ and $\mathbf{b}$, let $w_h(\mathbf{t}) = E[X(\mathbf{t})h]$ for $h\in\mathcal{L}$. then: (i) the primal problem ([eprimalfeasible]) can be rewritten in the form
$$\mathcal{C}_{\mathbf{x}}(\mathbf{a},\mathbf{b};\xi) = \inf\bigl\{Eh^2:\ h\in\mathcal{L},\ w_h\bigl(\xi(v)\bigr)\ge1,\ 0\le v\le1\bigr\}. \tag{eprimalpath}$$
(ii) if the feasible set in ([eprimalpath]) is nonempty, then the infimum in ([eprimalpath]) is achieved at a unique $h_\xi$; the set $\mathcal{M}_\xi$ of minimal-energy measures in $M_1^+([0,1])$ is nonempty, convex and weakly compact. (iii) furthermore, if the feasible set in ([eprimalpath]) is nonempty, then, for every $\mu\in\mathcal{M}_\xi$,
$$\mu\bigl(\bigl\{v\in[0,1]:\ E\bigl[X\bigl(\xi(v)\bigr)h_\xi\bigr] > 1\bigr\}\bigr) = 0. \tag{ecomplslack}$$
(iv) suppose that the feasible set in ([eprimalpath]) is nonempty. then for every $\varepsilon>0$, the conditional law of the normalized sample path along $\xi$ concentrates, as the level grows, in an $\varepsilon$-neighborhood of the nonrandom function ([elikelypath]); see ([emostlikely]). here,
$$w(v) = E\bigl[X\bigl(\xi(v)\bigr)h_\xi\bigr],\qquad 0\le v\le1. \tag{elikelypath}$$
the probability measures in $\mathcal{M}_\xi$ are called _capacitary measures_, or _measures of minimal energy_.

proof of theorem [tfixedpath]: part (i) of the theorem can be proved in the same way as theorem [tldppath]; the fact that the primal formulations ([eprimalfeasible]) and ([eprimalpath]) are equivalent is an immediate consequence of the definition of $w_h$. suppose now that the feasible set in ([eprimalpath]) is nonempty, and let $(h_n)$ be a sequence of feasible solutions whose second moments decrease to the infimum. the weak compactness of the unit ball in $L^2$ shows that this sequence has a subsequential weak limit, whose second moment does not exceed the infimum. since the set of feasible solutions is weakly closed, the limit is feasible. the uniqueness of the optimal solution to ([eprimalpath]) follows from convexity of the norm. convexity and weak compactness of the set $\mathcal{M}_\xi$ follow from the nonnegative definiteness and continuity of $r_{\mathbf{x}}$ (see, for example, remark 2, page 160). the statement ([ecomplslack]) is a part of the relation between the dual and primal optimal solutions (theorem 1, page 224). for part (iv) of the theorem, note that by the gaussian large deviation principle of theorem 3.4.5, the relevant conditional probability is governed by
$$\inf\Bigl\{Eh^2:\ h\in\mathcal{L},\ E\bigl[X\bigl(\xi(v)\bigr)h\bigr]\ge1,\ 0\le v\le1,\ \sup_{0\le v\le1}\bigl|E\bigl[X\bigl(\xi(v)\bigr)h\bigr] - E\bigl[X\bigl(\xi(v)\bigr)h_\xi\bigr]\bigr|\ge\varepsilon\Bigr\}. \tag{emoreconst}$$
therefore, the statement ([emostlikely]) will follow from parts (i) and (ii) of the theorem once we prove that the infimum in ([emoreconst]) is strictly larger than $\mathcal{C}_{\mathbf{x}}(\mathbf{a},\mathbf{b};\xi)$. suppose that, to the contrary, the two infima are equal. by the weak compactness of the unit ball in $L^2$ and the fact that the feasible set in ([emoreconst]) is weakly closed, this would imply existence of $h$ feasible for ([emoreconst]) such that $Eh^2 = \mathcal{C}_{\mathbf{x}}(\mathbf{a},\mathbf{b};\xi)$. since $h_\xi$ is not feasible for ([emoreconst]), we know that $h\ne h_\xi$. since $h$ is feasible for ([eprimalpath]), we have obtained a contradiction to the uniqueness of $h_\xi$ proved above. this completes the proof of the theorem.

[rkinterpret] theorem [tfixedpath] has the following important interpretation.
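the objects of theorem [tfixedpath] can be visualized with a small self-contained computation (ours; a gaussian covariance and a one-dimensional straight-line path are assumed for illustration): it approximates a capacitary measure, forms the function $w$ of ([elikelypath]), and checks feasibility and complementary slackness numerically:

```python
import numpy as np

# discretized capacitary measure and most likely shape for a straight-line
# path of length a_len under the (assumed) covariance r(t) = exp(-t^2)
n, a_len, iters = 301, 2.0, 8000
v = np.linspace(0.0, 1.0, n)
K = np.exp(-(a_len * (v[:, None] - v[None, :])) ** 2)

mu = np.full(n, 1.0 / n)
for k in range(iters):                 # Frank-Wolfe on the probability simplex
    i = np.argmin(K @ mu)              # vertex minimizing the gradient 2*K*mu
    g = 2.0 / (k + 2.0)
    mu = (1.0 - g) * mu
    mu[i] += g

energy = mu @ K @ mu
w = (K @ mu) / energy                  # w(v) = E[X(xi(v)) h_xi], cf. (elikelypath)
print("capacity ~", 1.0 / energy)
print("min_v w(v) ~", w.min())         # feasibility: w >= 1 up to discretization
print("mass where w > 1.001:", mu[w > 1.001].sum())   # (ecomplslack), numerically
```

the last line is the numerical counterpart of part (iii): (almost) no mass of the capacitary measure sits where the shape strictly exceeds the level.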
assuming that the feasible set in ([eprimalpath]) is nonempty, part (iv) of the theorem implies that the nonrandom function in ([elikelypath]) is the most likely choice for the normalized sample path along $\xi$, given that the entire path lies in the excursion set. part (iii) of the theorem implies that the values of the random field along the path have to (nearly) touch the level at the points of the support of any measure of minimal energy. in other words, the sample path needs to be "supported," or "held," at the level at the points of the support in order to achieve the highest probability of exceeding the high level along the entire path. we will see explicit examples of how this works in the following section, when we more closely investigate the one-dimensional case.

the duality relation of the optimization problems ([eprimalpath]) and ([epathdual]) immediately provides upper and lower bounds on the capacity, of the form
$$\Bigl[\int_0^1\int_0^1 r_{\mathbf{x}}\bigl(\xi(u),\xi(v)\bigr)\,\mu(du)\,\mu(dv)\Bigr]^{-1} \le \mathcal{C}_{\mathbf{x}}(\mathbf{a},\mathbf{b};\xi) \le Eh^2$$
for any $\mu\in M_1^+([0,1])$ and any $h$ feasible for ([eprimalpath]). a measure $\mu\in M_1^+([0,1])$ is a measure of minimal energy (i.e., belongs to $\mathcal{M}_\xi$) if and only if
$$\min_{0\le v\le1}\int_0^1 r_{\mathbf{x}}\bigl(\xi(v),\xi(u)\bigr)\,\mu(du) = \int_0^1\int_0^1 r_{\mathbf{x}}\bigl(\xi(u_1),\xi(u_2)\bigr)\,\mu(du_1)\,\mu(du_2) > 0. \tag{ehighestenergy}$$
note that part (ii) of the theorem also says that the integral in the left-hand side of ([ehighestenergy]) is equal to the double integral in its right-hand side for $\mu$-almost every $v$.

proof of theorem [toptimalmchararct]: for part (i), let $\mu\in\mathcal{M}_\xi$. the calculations following the maximization problem ([edualconst]) show that a suitable normalization of $\mu$ is an optimal measure for that problem. it follows from the duality relation (theorem 1, page 224) that it solves the minimization problem in ([elagrange]) when any measure in $M^+([0,1])$ is allowed.

[t1dimldp] let $X$ be a continuous gaussian process, let $a<b$ be real, and consider the straight-line path $\xi(v)=a+(b-a)v$, $0\le v\le1$. then the limit exists, and
$$\mathcal{C}_{\mathbf{x}}(a,b) = \inf\bigl\{Eh^2:\ h\in\mathcal{L},\ E\bigl[X\bigl(a+(b-a)v\bigr)h\bigr]\ge1,\ 0\le v\le1\bigr\} \tag{e1dimrepr}$$
$$\phantom{\mathcal{C}_{\mathbf{x}}(a,b)} = \Bigl[\min_{\mu\in M_1^+([0,1])}\int_0^1\int_0^1 r_{\mathbf{x}}\bigl(a+(b-a)u,\ a+(b-a)v\bigr)\,\mu(du)\,\mu(dv)\Bigr]^{-1}. \tag{e1dimreprdual}$$
if the process is stationary, an alternative expression is given by
$$\mathcal{C}_{\mathbf{x}}(a,b) = \Bigl[\min_{\mu\in M_1^+([0,1])}\int_0^1\int_0^1 r_{\mathbf{x}}\bigl((b-a)(u-v)\bigr)\,\mu(du)\,\mu(dv)\Bigr]^{-1}. \tag{e1dimreprst}$$
the set $\mathcal{M}$ of measures of minimal energy in $M_1^+([0,1])$ is nonempty, convex and weakly compact. the measures in $\mathcal{M}$ are characterized by the relation
$$\min_{0\le v\le1}\int_0^1 r_{\mathbf{x}}\bigl(a+(b-a)v,\ a+(b-a)u_1\bigr)\,\mu(du_1) = \int_0^1\int_0^1 r_{\mathbf{x}}\bigl(a+(b-a)u_1,\ a+(b-a)u_2\bigr)\,\mu(du_1)\,\mu(du_2). \tag{echeck1dim}$$
suppose, further, that the problem ([e1dimrepr]) has a feasible solution. in this case the double integral in ([echeck1dim]) is positive for any $\mu\in\mathcal{M}$, and the problem ([e1dimrepr]) has a unique optimal solution $h_*$; for each $\mu\in\mathcal{M}$ it coincides, with probability 1, with $\int_0^1 X(a+(b-a)v)\,\mu(dv)$ divided by the minimal energy. in the stationary case, the problem ([e1dimreprst]) has a unique optimal solution as well. for each $\varepsilon>0$ the conditional law of the normalized process on $[a,b]$, given the excursion event, converges as the level grows to the dirac measure at the limiting shape determined by $h_*$. and finally, if the process is stationary, and the support of the spectral measure is the entire real line, then the set $\mathcal{M}$ consists of a single probability measure.

[rksymmetry] suppose that the process is stationary. for $\mu\in M_1^+([0,1])$, let $\tilde\mu = \mu\circ\iota^{-1}$, with $\iota:[0,1]\to[0,1]$ being the reflection map, $\iota(v)=1-v$. if $\mu\in\mathcal{M}$, then $\tilde\mu$ satisfies conditions ([echeck1dim]) because $\mu$ does, hence $\tilde\mu\in\mathcal{M}$ as well. by convexity of $\mathcal{M}$, so does the symmetric (around $1/2$) probability measure $(\mu+\tilde\mu)/2$. therefore, $\mathcal{M}$ always contains a symmetric measure. in particular, if $\mathcal{M}$ is a singleton, then the unique measure of minimal energy is symmetric.
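the two-sided bound above gives a cheap numerical bracket: any probability measure furnishes the lower bound, and the normalized potential of the same measure furnishes a feasible element and hence the upper bound. a minimal sketch (ours, with an assumed covariance and path):

```python
import numpy as np

def capacity_bracket(r, xi, n=400):
    """Duality sandwich for the capacity: a probability measure mu gives the
    lower bound 1/energy(mu); the element h = Z / min_v U(v), with Z the
    mu-average of X along the path and U the potential of mu, is feasible
    and gives the upper bound E h^2 = energy(mu) / (min_v U(v))^2."""
    u = np.linspace(0.0, 1.0, n)
    pts = xi(u)
    K = r(pts[:, None, :], pts[None, :, :])
    mu = np.full(n, 1.0 / n)            # here: the uniform measure
    U = K @ mu                          # potential U(v)
    energy = mu @ U
    return 1.0 / energy, energy / U.min() ** 2

r = lambda s, t: np.exp(-np.sum((s - t) ** 2, axis=-1))
xi = lambda u: np.stack([1.5 * u, np.zeros_like(u)], axis=-1)
lo, hi = capacity_bracket(r, xi)
print(lo, "<= capacity <=", hi)
```

the better the trial measure, the tighter the sandwich; with a capacitary measure the two bounds coincide.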
in the remainder of this section we concentrate on the stationary case. we will investigate how the probability measure of minimal energy, the capacity and the limiting shape change as functions of the length of the interval. this will help us understand the order of magnitude of the probability for varying lengths of the interval and, according to part (iv) of theorem [tfixedpath], it will tell us the most likely shape the process takes when it exceeds a high level along the entire interval. indeed, for a concave covariance function the derivative exists apart from a countable set of points and is monotone. therefore, in particular, if the process has a finite second spectral moment, then the second derivative of the covariance function exists, is continuous and negative at zero (unless the covariance function is constant). therefore, the derivative stays negative on an interval around the origin; hence, the covariance function is concave on an interval to the right of the origin. when the interval becomes too long for the two-endpoint measure of proposition [prcase1] to remain optimal, the constraint in ([eprimalpath]) first fails at the midpoint of the interval; equivalently, the normalized process attempts to drop below level 1 at that point and so, speaking heuristically, it has to be "supported" at the midpoint. the interpretation of theorem [tfixedpath] in remark [rkinterpret] calls for adding, at the critical interval length, a mass at the midpoint of the interval to the measure. the next result shows that, in certain cases, this is indeed the optimal thing to do.

[prcase2] let $X$ be a stationary continuous gaussian process, and suppose that the two-endpoint measure of proposition [prcase1] fails its defining condition at the midpoint, in the sense of ([emiddletoomuch]). then a measure in $\mathcal{M}$ is given by atoms at the two endpoints and at the midpoint of the interval, with the weights chosen so that the potential takes a common value at the three atoms. furthermore, the potential equals the minimal energy for $\mu$-almost all points, and the corresponding limiting shape stays at or above level 1 throughout the interval. the proof is identical to that of proposition [prcase1] once we observe that, under ([emiddletoomuch]), the resulting weight vector is a legitimate probability measure.

the plots of figure [figcase2] show the limiting shape for the stationary gaussian process with the covariance function ([egausscov]), for the range of interval lengths for which proposition [prcase2] applies; this range is an interval between two critical values, identified in example [exgausscov]. (figure [figcase2]: the limiting shape when proposition [prcase2] applies (top row); the left plot in the bottom row is a blowup of the right plot in the top row; the right plot in the bottom row shows how the constraints are violated soon after the upper critical value.)

in the previous section we saw some general results for one-dimensional processes, with some illustrative figures for what happens in the case of a gaussian covariance function. in this section we look more carefully at this case, and also look at what can be said for an exponential covariance.

[exgausscov] consider the centered stationary gaussian process with the gaussian covariance function
$$r_{\mathbf{x}}(t) = e^{-t^2}. \tag{egausscov}$$
for this process the spectral measure has a gaussian spectral density, which is of full support in $\mathbb{R}$. in particular, for every interval length $a$ there is a unique (symmetric) measure of minimal energy. furthermore, the second spectral moment is finite, so that, according to remark [rkcase1], for sufficiently small $a$ this process satisfies the conditions of proposition [prcase1]. to find the range of $a$ for which this happens, note that conditions ([econd1]) become, in this case,
$$e^{-(av)^2} + e^{-(a(1-v))^2} \ge 1 + e^{-a^2},\qquad 0\le v\le1. \tag{econdgauss1}$$
since the function on the left-hand side is concave in $v$ when $a$ is small, and has a unique interior local minimum, at $v=1/2$, when $a^2>2$, it is only necessary to check ([econdgauss1]) at the midpoint.
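as a numerical cross-check before specializing to the midpoint, the sketch below (ours; it assumes the covariance ([egausscov]) written above, and our reading of proposition [prcase2] as a three-atom measure with potentials equated at the atoms) scans ([econdgauss1]) on a grid and evaluates the three-atom construction:

```python
import numpy as np

r = lambda t: np.exp(-np.asarray(t, dtype=float) ** 2)   # covariance (egausscov)
v = np.linspace(0.0, 1.0, 2001)

def two_point_ok(a):
    """Condition (econdgauss1): potential of (delta_0 + delta_1)/2 stays
    above its energy on the whole interval."""
    return float((r(a * v) + r(a * (1 - v)) - (1.0 + r(a))).min()) >= -1e-12

def three_atom(a):
    """Assumed three-atom ansatz: masses p, 1-2p, p at 0, 1/2, 1, with p
    fixed by equating the potentials U(0) = U(1/2)."""
    r0, rh, ra = 1.0, float(r(a / 2)), float(r(a))
    p = (r0 - rh) / (3 * r0 - 4 * rh + ra)
    U = p * r(a * v) + p * r(a * (1 - v)) + (1 - 2 * p) * r(a * np.abs(v - 0.5))
    energy = 2 * p * U[0] + (1 - 2 * p) * U[1000]        # v[1000] = 1/2
    return p, float((U / energy).min())

last = max(a for a in np.arange(0.1, 3.0, 0.005) if two_point_ok(a))
print("two-endpoint measure valid up to a ~", round(last, 2))
for a in (1.7, 2.0, 2.3):
    p, wmin = three_atom(a)
    print(f"a={a}: endpoint mass p={p:.3f}, min limiting shape={wmin:.4f}")
```

the printed minimum of the shape dropping below 1 signals the end of the three-atom regime, matching the discussion that follows.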
at that point the condition becomes
$$2e^{-a^2/4} \ge 1 + e^{-a^2}.$$
the difference of the two sides crosses 0 at $a\approx1.56$ (for the normalization ([egausscov])), which is the limit of the validity of the situation of proposition [prcase1] in this case. the plots of figure [figcase1] show the limiting shape for this process in the situation of proposition [prcase1]. somewhat longer (and numerical) calculations show that the conditions of proposition [prcase2] hold for the process with the covariance function ([egausscov]) for an interval of values of $a$ after the conditions of proposition [prcase1] break down. the conditions of proposition [prcase2] continue to hold until the second derivative at the midpoint of the limiting function in ([esecondx]) becomes negative (so that the function takes values smaller than 1 in a neighborhood of the midpoint). to find when this happens, we solve the equation obtained by setting this second derivative at the midpoint equal to zero; the resulting equation has a numerical solution, which is the limit of the validity of the situation of proposition [prcase2] in this case.

the plots of figure [figcase2] shed some light on the above discussion. this discussion indicates, and calculations confirm, that, in the next regime, the mass in the middle of the optimal measure splits into two parts that start to move away from the center. heuristically, this is needed "to support" the trajectory that, otherwise, would "dip" below 1 outside of the midpoint. these calculations rapidly become complicated. they seem to indicate that the next regime continues to hold until a further critical value of $a$. in this regime the optimal measure takes the form of two atoms at the endpoints of the interval together with two symmetric internal atoms, parametrized by the distance of the two internal masses from the midpoint; at the end of the regime the internal atoms have moved a positive distance away from the midpoint, and the rest of the mass is concentrated at the endpoints of the interval. figure [figcase3] shows the limiting shape in this regime. it would be nice to understand all regimes, but we do not yet know how to find a general structure. on the other hand, section [longsec] gives asymptotic results for long intervals. finally, figure [figcacase1] shows the growth of the exponent with $a$ for as long as either proposition [prcase1] or proposition [prcase2] applies. (figure [figcacase1]: the exponent as a function of $a$.)

[exo-u] consider an ornstein uhlenbeck process, that is, a centered stationary gaussian process with the covariance function
$$r_{\mathbf{x}}(t) = e^{-\lambda|t|},\qquad \lambda>0.$$
for this process the spectral measure has a cauchy spectral density, so it is also of full support in $\mathbb{R}$. therefore, for every interval length $a$ there is a unique (symmetric) measure of minimal energy. in this case, however, even the first spectral moment is infinite. the covariance function is actually _convex_ on the positive half-line so, in particular, the conditions of proposition [prcase1] fail for all $a>0$. in fact, it is elementary to check that for the probability measure
$$\mu = \frac{1}{2+\lambda a}\bigl(\delta_0 + \delta_1 + \lambda a\,\mathrm{leb}\bigr), \tag{econtlebesgue}$$
where $\mathrm{leb}$ is the lebesgue measure on $[0,1]$, the potential integrals $\int_0^1 e^{-\lambda a|u-v|}\,\mu(du)$, $0\le v\le1$, have a constant value, equal to $2/(2+\lambda a)$. therefore, the measure in ([econtlebesgue]) is the measure of minimal energy, and the capacity equals $(2+\lambda a)/2$ for all $a>0$.
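the constancy of the potential claimed in example [exo-u] can be verified in closed form; the following sketch (ours) evaluates the potential of the measure ([econtlebesgue]) on a grid using the exact integral of the exponential kernel:

```python
import numpy as np

def ou_potential(lam, a, n=4001):
    """Potential of mu = (delta_0 + delta_1 + lam*a*Leb) / (2 + lam*a)
    under the kernel exp(-lam*a*|u - v|), evaluated on a grid."""
    v = np.linspace(0.0, 1.0, n)
    c = 1.0 / (2.0 + lam * a)
    # atoms at the endpoints
    U = c * (np.exp(-lam * a * v) + np.exp(-lam * a * (1.0 - v)))
    # absolutely continuous part, in closed form:
    # int_0^1 exp(-lam*a*|u-v|) du = (2 - e^{-lam a v} - e^{-lam a (1-v)})/(lam a)
    U += c * (2.0 - np.exp(-lam * a * v) - np.exp(-lam * a * (1.0 - v)))
    return v, U

v, U = ou_potential(lam=1.0, a=3.0)
print("potential range:", U.min(), U.max())   # both ~ 2/(2 + lam*a) = 0.4
```

the two exponential terms cancel exactly between the atomic and the lebesgue parts, which is what makes the potential constant.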
by theorem [t1dimldp] we conclude that the limiting function is equal to 1 almost everywhere in $[0,1]$. examples [exgausscov] and [exo-u] demonstrate a number of the ways a stationary gaussian process "prefers," in the large deviations sense, to stay above a high level over an interval. the process of example [exgausscov] with covariance function ([egausscov]) is smooth; the most likely way for it to stay above a level is to force it to be "slightly" above that level at a properly chosen finite set of time points; after that it is "held" above the level at the rest of the interval, and it appears to undergo phase transitions at certain critical interval lengths. the complete picture of this "dynamical system" of finite sets remains unclear. on the other hand, the ornstein uhlenbeck process of example [exo-u] is continuous, but not smooth. in fact, it behaves locally like a brownian motion. therefore, "holding" it "slightly" above a level at a discrete point does not help, since it "wants" immediately to go below that level. this explains the nature of the optimal measure in ([econtlebesgue]), and this nature stays the same no matter how short or long the interval is. the next theorem shows that, for short memory stationary processes, the energy of the uniform probability measure on $[0,1]$ becomes, asymptotically, minimal.

[tlargeashortmemory] let $X$ be a stationary continuous gaussian process. assume that $r_{\mathbf{x}}$ is positive, and satisfies the following condition:
$$\int_0^\infty r_{\mathbf{x}}(t)\,dt < \infty.$$
then, with $\hat\lambda$ denoting the uniform probability measure on $[0,1]$,
$$\lim_{a\to\infty} a\int_0^1\int_0^1 r_{\mathbf{x}}\bigl(a(u-v)\bigr)\,\hat\lambda(du)\,\hat\lambda(dv) = \lim_{a\to\infty} a\min_{\mu\in M_1^+([0,1])}\int_0^1\int_0^1 r_{\mathbf{x}}\bigl(a(u-v)\bigr)\,\mu(du)\,\mu(dv) = 2\int_0^\infty r_{\mathbf{x}}(t)\,dt. \tag{euniformlimit}$$

proof: suppose that, to the contrary, the minimal energies are asymptotically strictly smaller than the uniform energies along some sequence $a_n\to\infty$ ([ebada]); by remark [rksymmetry] we may and do take the corresponding minimizing measures symmetric, and denote them by $\hat\mu_n$. note that
$$\lim_{n\to\infty}\hat\mu_n\bigl(\bigl[0,\gamma a_n^{-1}\bigr]\bigr) = 0\qquad\mbox{for each } \gamma>0, \tag{ezeromass}$$
a consequence of the lower tail estimate ([elowertail]). by the nonnegative definiteness of $r_{\mathbf{x}}$, the energy of $\hat\mu_n$ is bounded from below through its cross-energy with the uniform measure; we will show that
$$\liminf_{n\to\infty} a_n\int_0^1\biggl[\int_0^1 r_{\mathbf{x}}\bigl(a_n(u-v)\bigr)\,du\biggr]\hat\mu_n(dv) \ge 2\int_0^\infty r_{\mathbf{x}}(t)\,dt; \tag{enonneglimit}$$
together with ([euniformlimit]) this will provide the necessary contradiction to ([ebada]). write the integral in ([enonneglimit]) as
$$\int_{\gamma a_n^{-1}}^{1-\gamma a_n^{-1}}\biggl[\int_0^1 r_{\mathbf{x}}\bigl(a_n(u-v)\bigr)\,du\biggr]\hat\mu_n(dv) + 2\int_0^{\gamma a_n^{-1}}\biggl[\int_0^1 r_{\mathbf{x}}\bigl(a_n(u-v)\bigr)\,du\biggr]\hat\mu_n(dv) := J_n^{(1)} + 2J_n^{(2)},$$
using the symmetry of $\hat\mu_n$. observe that
$$\bigl|J_n^{(2)}\bigr| \le \frac{2\int_0^\infty r_{\mathbf{x}}(t)\,dt}{a_n}\,\hat\mu_n\bigl(\bigl[0,\gamma a_n^{-1}\bigr]\bigr), \tag{esmallj2}$$
so that by ([elowertail]) we obtain $\lim_n a_n J_n^{(2)} = 0$ for every $\gamma>0$. next, we write
$$J_n^{(1)} = \frac{2\int_0^\infty r_{\mathbf{x}}(t)\,dt}{a_n}\,\hat\mu_n\bigl(\bigl[\gamma a_n^{-1},\,1-\gamma a_n^{-1}\bigr]\bigr) - \frac{1}{a_n}\int_{\gamma a_n^{-1}}^{1-\gamma a_n^{-1}}\biggl[\int_{a_n v}^\infty r_{\mathbf{x}}(t)\,dt + \int_{a_n(1-v)}^\infty r_{\mathbf{x}}(t)\,dt\biggr]\hat\mu_n(dv) := J_n^{(11)} - J_n^{(12)}.$$
it follows from ([ezeromass]) that
$$\Bigl|a_n J_n^{(11)} - 2\int_0^\infty r_{\mathbf{x}}(t)\,dt\Bigr| \le 2\int_0^\infty r_{\mathbf{x}}(t)\,dt\,\Bigl[\hat\mu_n\bigl(\bigl[0,\gamma a_n^{-1}\bigr]\bigr) + \hat\mu_n\bigl(\bigl[1-\gamma a_n^{-1},1\bigr]\bigr)\Bigr] \to 0 \tag{esmallj11}$$
as $n\to\infty$, by ([elowertail]). finally,
$$a_n J_n^{(12)} \le 2\int_\gamma^\infty r_{\mathbf{x}}(t)\,dt, \tag{esmallj12}$$
and we obtain by ([esmallj2]), ([esmallj11]) and ([esmallj12]) that
$$\liminf_{n\to\infty} a_n\bigl(J_n^{(1)} + 2J_n^{(2)}\bigr) \ge 2\int_0^\infty r_{\mathbf{x}}(t)\,dt - 2\int_\gamma^\infty r_{\mathbf{x}}(t)\,dt.$$
letting $\gamma\to\infty$ proves ([enonneglimit]) and, hence, completes the proof of the theorem.

the next theorem is the counterpart of theorem [tlargeashortmemory] for certain long memory stationary gaussian processes. in this case, the uniform distribution on $[0,1]$ is no longer asymptotically optimal; the limiting role is played instead by the riesz kernel $|u-v|^{-\beta}$, $0<\beta<1$, and the associated minimal energy
$$\min_{\mu\in M_1^+([0,1])}\int_0^1\int_0^1\frac{\mu(du)\,\mu(dv)}{|u-v|^\beta}. \tag{eriesz}$$
the general theory of energy of measures applies to the riesz kernel. in particular, the minimum in ([eriesz]) is well defined, is finite and positive. let $\mathcal{M}_\beta$ be the set of measures in $M_1^+([0,1])$ at which the minimum is attained.
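the short-memory limit ([euniformlimit]) is easy to see numerically: by the standard reduction $\int_0^1\int_0^1 r(a(u-v))\,du\,dv = (2/a)\int_0^a r(t)(1-t/a)\,dt$, the rescaled uniform energy reduces to a one-dimensional quadrature. a small sketch (ours, with an assumed integrable covariance):

```python
import numpy as np

def uniform_energy(r, a, n=200001):
    """Energy of the uniform measure under the rescaled kernel r(a(u-v)),
    via the exact reduction (2/a) * int_0^a r(t) (1 - t/a) dt (trapezoid rule)."""
    t = np.linspace(0.0, a, n)
    f = r(t) * (1.0 - t / a)
    dt = t[1] - t[0]
    return (2.0 / a) * dt * (0.5 * f[0] + 0.5 * f[-1] + f[1:-1].sum())

r = lambda t: np.exp(-np.asarray(t, dtype=float) ** 2)   # integrable covariance
print("limit 2*int_0^inf r =", np.sqrt(np.pi))           # 2 * (sqrt(pi)/2)
for a in (5.0, 20.0, 80.0):
    print(a, "->", a * uniform_energy(r, a))
```

the rescaled energies approach $2\int_0^\infty r_{\mathbf{x}}(t)\,dt$ already for moderate interval lengths.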
by fatou's lemma and the regular variation of $r_{\mathbf{x}}$, the riesz energy of any subsequential weak limit of the minimizing measures is bounded by the lower limit of the normalized energies; since $\mu_\beta$ has the smallest energy with respect to the riesz kernel, this contradicts ([elowlim]), thus proving that
$$\liminf_{a\to\infty}\,\frac{1}{r_{\mathbf{x}}(a)}\min_{\mu\in M_1^+([0,1])}\int_0^1\int_0^1 r_{\mathbf{x}}\bigl(a(u-v)\bigr)\,\mu(du)\,\mu(dv) \ge \int_0^1\int_0^1\frac{\mu_\beta(du)\,\mu_\beta(dv)}{|u-v|^\beta}.$$
in order to finish the proof, we need to establish a matching upper limit bound. to this end, let $\varepsilon>0$ be a small number. we define a probability measure on $[0,1]$ by convolving $\mu_\beta$ with the uniform law on a short interval and rescaling the resulting convolution back to the unit interval. more explicitly, if $V$ and $W$ are independent random variables, whose laws are $\mu_\beta$ and the uniform law on $[0,\varepsilon]$, respectively, then the new measure is the law of $(V+W)/(1+\varepsilon)$, and it has a bounded density on $[0,1]$. the required convergence of its normalized energies follows by the dominated convergence theorem and the following fact, that can be checked by elementary calculations: there is a finite constant $c$ such that, for all large $a$ and all $u\ne v$ in $[0,1]$, the normalized kernel $r_{\mathbf{x}}(a(u-v))/r_{\mathbf{x}}(a)$ is bounded by $c\,|u-v|^{-\beta'}$ for some fixed $\beta'<1$.

[rklrdconstant] it follows from proposition a.3 that the energy of the measure $\mu_\beta$ with respect to the riesz kernel cannot be smaller than one half of the energy of the uniform measure.

understanding of the one-dimensional case described in the previous three sections, while incomplete, is nevertheless quite significant. in contrast, there is much less we can say about the multivariate problem of section [secld]. the problem lies, in part, in the nonconvexity of the feasible set in ([eldgenlimit]), which leads, in turn, to the "max-min" problem in theorem [tlargeenergypath].

[prcase1gen] let $X$ be a continuous gaussian random field on a compact set $T$, and suppose that $\mathbf{a}$, $\mathbf{b}$ are in $T$. suppose that there is a path $\xi_0$ in $T$ connecting $\mathbf{a}$ and $\mathbf{b}$ such that
$$r_{\mathbf{x}}\bigl(\xi_0(v),\mathbf{a}\bigr) + r_{\mathbf{x}}\bigl(\xi_0(v),\mathbf{b}\bigr) \ge \frac{r_{\mathbf{x}}(\mathbf{a},\mathbf{a}) + 2r_{\mathbf{x}}(\mathbf{a},\mathbf{b}) + r_{\mathbf{x}}(\mathbf{b},\mathbf{b})}{2}\qquad\mbox{for all } 0\le v\le1. \tag{econd1gen}$$
then the supremum in ([ecapgenlimit]) is achieved on the path $\xi_0$ and
$$\mathcal{C}_{\mathbf{x}}(\mathbf{a},\mathbf{b}) = \frac{4}{r_{\mathbf{x}}(\mathbf{a},\mathbf{a}) + 2r_{\mathbf{x}}(\mathbf{a},\mathbf{b}) + r_{\mathbf{x}}(\mathbf{b},\mathbf{b})}. \tag{ec1gen}$$

[rksameends] using $v=0$ and $v=1$ in ([econd1gen]) shows that the conditions of proposition [prcase1gen] cannot be satisfied unless $r_{\mathbf{x}}(\mathbf{a},\mathbf{a}) = r_{\mathbf{x}}(\mathbf{b},\mathbf{b})$. correspondingly, we can restate ([ec1gen]) as
$$\mathcal{C}_{\mathbf{x}}(\mathbf{a},\mathbf{b}) = \frac{2}{r_{\mathbf{x}}(\mathbf{a},\mathbf{a}) + r_{\mathbf{x}}(\mathbf{a},\mathbf{b})}.$$
recall ([simplecalcequn]), which shows that this implies the logarithmic equivalence of the probabilities of being above the level along a curve or at its endpoints.

proof of proposition [prcase1gen]: consider the fixed path $\xi_0$. the assumption ([econd1gen]) shows that the measure $\mu_0 = (\delta_0+\delta_1)/2$ satisfies conditions ([ehighestenergy]) and, hence, is in $\mathcal{M}_{\xi_0}$ by theorem [toptimalmchararct]. therefore,
$$\min_{\mu\in M_1^+([0,1])}\int_0^1\int_0^1 r_{\mathbf{x}}\bigl(\xi_0(u),\xi_0(v)\bigr)\,\mu(du)\,\mu(dv) = \int_0^1\int_0^1 r_{\mathbf{x}}\bigl(\xi_0(u),\xi_0(v)\bigr)\,\mu_0(du)\,\mu_0(dv) = \frac{r_{\mathbf{x}}(\mathbf{a},\mathbf{a}) + 2r_{\mathbf{x}}(\mathbf{a},\mathbf{b}) + r_{\mathbf{x}}(\mathbf{b},\mathbf{b})}{4}.$$
on the other hand, for any other path $\xi$ in $T$ connecting $\mathbf{a}$ and $\mathbf{b}$,
$$\min_{\mu\in M_1^+([0,1])}\int_0^1\int_0^1 r_{\mathbf{x}}\bigl(\xi(u),\xi(v)\bigr)\,\mu(du)\,\mu(dv) \le \int_0^1\int_0^1 r_{\mathbf{x}}\bigl(\xi(u),\xi(v)\bigr)\,\mu_0(du)\,\mu_0(dv) = \frac{r_{\mathbf{x}}(\mathbf{a},\mathbf{a}) + 2r_{\mathbf{x}}(\mathbf{a},\mathbf{b}) + r_{\mathbf{x}}(\mathbf{b},\mathbf{b})}{4}.$$
therefore, the supremum in ([ecapgenlimit]) is achieved on the path $\xi_0$, and ([ec1gen]) follows by theorem [tlargeenergypath]. even for the most common gaussian random fields, the assumptions of proposition [prcase1gen] may be satisfied on some path but not on the straight line connecting the two points.
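in the long-memory regime the limiting optimization is the riesz-kernel problem ([eriesz]). the sketch below (ours; replacing the singular diagonal by its exact same-cell average is a discretization device, not something from the text) minimizes the discretized riesz energy and compares it with the uniform measure, illustrating remark [rklrdconstant]:

```python
import numpy as np

def riesz_energies(beta, n=200, iters=5000):
    """Minimal vs uniform energy for the kernel |u-v|^(-beta) on [0,1].
    Cells of width h; the diagonal uses the exact same-cell average
    2 * h^(-beta) / ((1-beta)*(2-beta))."""
    h = 1.0 / n
    u = (np.arange(n) + 0.5) * h
    D = np.abs(u[:, None] - u[None, :]) + np.eye(n)      # avoid 0 on diagonal
    K = D ** (-beta)
    np.fill_diagonal(K, 2.0 * h ** (-beta) / ((1.0 - beta) * (2.0 - beta)))
    unif = np.full(n, 1.0 / n)
    mu = unif.copy()
    for k in range(iters):                               # Frank-Wolfe
        i = np.argmin(K @ mu)
        g = 2.0 / (k + 2.0)
        mu = (1.0 - g) * mu
        mu[i] += g
    return mu @ K @ mu, unif @ K @ unif

for beta in (0.2, 0.5, 0.8):
    emin, eunif = riesz_energies(beta)
    # remark [rklrdconstant]: the ratio should lie between 1/2 and 1
    print(f"beta={beta}: minimal={emin:.4f}, uniform={eunif:.4f}, ratio={emin/eunif:.3f}")
```

the computed ratios staying above one half match the factor-two comparison between the equilibrium and uniform measures.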
in that case, the straight line, clearly, fails to be optimal.

[exbrowniansheet] consider a brownian sheet in $d\ge2$ dimensions. this is the continuous centered gaussian random field on $[0,\infty)^d$ with covariance function
$$r_{\mathbf{x}}(\mathbf{s},\mathbf{t}) = \prod_{i=1}^d \min(s_i,t_i).$$
we restrict the random field to the hypercube $[0,1]^d$ and connect $\mathbf{a}$ and $\mathbf{b}$ by a continuous function $\xi$ satisfying $\xi(0)=\mathbf{a}$, $\xi(1)=\mathbf{b}$. defining the coordinatewise minimum and maximum of the two endpoints, we see that the supremum over paths is, actually, achieved over paths whose image lies in the order interval determined by the endpoints, since moving a path into that interval can only increase the covariances involved.

suppose now that $X$ is an isotropic gaussian random field whose covariance is a nonincreasing function $r_{\mathbf{x}}(\|\mathbf{s}-\mathbf{t}\|)$ of the distance, and write $a = \|\mathbf{a}-\mathbf{b}\|$. then the straight line connecting $\mathbf{a}$ and $\mathbf{b}$ achieves the supremum in ([ecapgenlimit]). indeed, let $\xi$ be any path connecting the two points, and let $p(v)$ be the normalized projection of $\xi(v)-\mathbf{a}$ on the direction of $\mathbf{b}-\mathbf{a}$; this is a continuous map of $[0,1]$ onto a set containing $[0,1]$, with $p(0)=0$ and $p(1)=1$, so for any $s\in[0,1]$ we may choose, measurably, $\psi(s)$ with $p(\psi(s))=s$, and then $\|\xi(\psi(u))-\xi(\psi(v))\| \ge a|u-v|$ for all $u,v$. then, since $r_{\mathbf{x}}$ is nonincreasing,
$$\min_{\mu\in M_1^+([0,1])}\int_0^1\int_0^1 r_{\mathbf{x}}\bigl(\bigl\|\xi(u)-\xi(v)\bigr\|\bigr)\,\mu(du)\,\mu(dv) \le \min_{\mu\in M_1^+([0,1])}\int_0^1\int_0^1 r_{\mathbf{x}}\bigl(a|u-v|\bigr)\,\mu(du)\,\mu(dv),$$
and the statement of the proposition follows.
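for the brownian sheet one can compare candidate paths directly. the following sketch (ours; the two endpoints and the corner path are an assumed illustration) computes the minimal energy along the straight line and along a monotone path through the corner $(1,1)$; the path with the larger minimal energy has the smaller capacity and is therefore the more likely connection:

```python
import numpy as np

def min_energy(points, iters=4000):
    """Minimal energy of a discretized path under the Brownian-sheet kernel
    r(s,t) = min(s1,t1) * min(s2,t2), computed by Frank-Wolfe."""
    K = (np.minimum.outer(points[:, 0], points[:, 0])
         * np.minimum.outer(points[:, 1], points[:, 1]))
    mu = np.full(len(points), 1.0 / len(points))
    for k in range(iters):
        i = np.argmin(K @ mu)
        g = 2.0 / (k + 2.0)
        mu = (1.0 - g) * mu
        mu[i] += g
    return mu @ K @ mu

v = np.linspace(0.0, 1.0, 151)
a, b = np.array([0.5, 1.0]), np.array([1.0, 0.5])
straight = a[None, :] + v[:, None] * (b - a)[None, :]
corner = np.vstack([np.column_stack([0.5 + 0.5 * v, np.ones_like(v)]),
                    np.column_stack([np.ones_like(v), 1.0 - 0.5 * v])])
print("straight line, min energy:", min_energy(straight))
print("corner path,  min energy:", min_energy(corner))
```

the corner path keeps both coordinates large, hence keeps the covariances along the path large, which is exactly the mechanism by which the straight line loses optimality here.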
the structure of gaussian random fields over high levels is a well researched and well understood area, particularly if the field is smooth. however, the question as to whether or not two or more points which lie in an excursion set belong to the same connected component has constantly eluded analysis. we study this problem from the point of view of large deviations, finding the asymptotic probabilities that two such points are connected by a path lying within the excursion set, and so belong to the same component. in addition, we obtain a characterization and descriptions of the most likely paths, given that one exists.